CN110853743A - Medical image display method, information processing method, and storage medium - Google Patents


Info

Publication number
CN110853743A
Authority
CN
China
Prior art keywords
image
nodule
attention object
display
lung
Prior art date
Legal status
Pending
Application number
CN201911122046.XA
Other languages
Chinese (zh)
Inventor
石磊 (Shi Lei)
程根 (Cheng Gen)
Current Assignee
Yitu Healthcare Technology (Hangzhou) Co., Ltd.
Original Assignee
Yitu Healthcare Technology (Hangzhou) Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Yitu Healthcare Technology (Hangzhou) Co., Ltd.
Priority to CN201911122046.XA
Publication of CN110853743A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00: ICT specially adapted for the handling or processing of medical images
    • G16H30/40: ICT specially adapted for processing medical images, e.g. editing
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems

Abstract

The present disclosure relates to a method for displaying medical images which mainly comprises: in response to selection of an object of interest contained in a medical image, displaying characterization content related to that object in a display interface, the characterization content representing the correspondence between the image parameters of the object of interest and the distribution of its feature points. Alternatively, the display method mainly comprises: in response to selection of the same object of interest contained in any group of medical images, displaying the characterization content related to that object in the same display interface, the characterization content again representing the correspondence between the image parameters of the object of interest and the distribution of its feature points. The present disclosure also relates to an information processing method and a computer-readable storage medium. With the disclosed technical scheme, the change trend of a medical image can be judged visually during analysis and diagnosis, which aids the analysis and diagnosis of medical images, improves efficiency and accuracy, and provides great convenience for clinical use.

Description

Medical image display method, information processing method, and storage medium
Technical Field
The present disclosure relates to the field of medical image processing, identification, and display technologies, and in particular, to a medical image display method, an information processing method, and a computer-readable storage medium.
Background
In the prior art, although the distribution range of CT values and other image information are provided for an object of interest contained in a medical image, these two kinds of information alone do not allow the change trend of the object of interest to be judged during the analysis and diagnosis of the medical image, so an accurate judgment of clinical significance cannot be made.
Disclosure of Invention
The present disclosure aims to provide a display method, an information processing method, and a computer-readable storage medium for medical images that further process and display an object of interest contained in a medical image, so that the change trend of the medical image can be judged visually during analysis and diagnosis. This aids the analysis and diagnosis of medical images, improves efficiency and accuracy, and provides great convenience for clinical use.
According to one aspect of the present disclosure, there is provided a method for displaying a medical image, including:
responding to selection of an attention object contained in the medical image, and displaying representation content related to the attention object in a display interface; wherein:
the characterization content is used for characterizing the corresponding relation between the image parameters of the attention object and the distribution condition of the characteristic points of the attention object.
In some embodiments, the selecting comprises:
selecting the attention object through the operation of the operable interaction object in the display area; wherein:
the operable interactive object is linked to medical image information of the object of interest, the medical image information comprising at least: the image parameters of the object of interest and the distribution of its feature points.
In some embodiments, the display area comprises a first image display area and/or a second image display area;
the first image display area is at least used for displaying the information of the identified attention object;
the second image display area is used for displaying at least one of the following medical images:
3D medical images, cross-sectional medical images, sagittal medical images, coronal medical images.
In some embodiments, the selecting is performed based on a current display interface, and the display interface in which the characterization content is shown either:
is contained in the current display interface; or
is independent of the current display interface.
In some embodiments, the correspondence between the image parameter of the object of interest and the distribution of the feature points of the object of interest includes: a correspondence between a distribution range of the CT value of the object of interest and particles included in the object of interest;
the displaying of the characterization content about the object of interest in the display interface comprises: displaying the characterization content in a first display mode and/or a second display mode;
the first display mode at least comprises: displaying a histogram of the CT values of the object of interest;
the second display mode at least comprises: displaying the ratio of the number of particles at a given CT value of the object of interest to the total number of particles contained in the object of interest, or the ratio of the number of pixels at a given CT value to the total number of pixels contained in the object of interest.
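As a minimal sketch of how the numbers behind these two display modes could be computed, assuming the HU values of the segmented object of interest are available as an array (function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def ct_histogram(nodule_hu, bin_width=50, hu_range=(-1000, 400)):
    """Histogram of the CT (HU) values of the voxels of a selected object of
    interest. Returns, per HU bin, the voxel count and the fraction of the
    total, i.e. the 'ratio of the number of pixels at a given CT value to the
    total number of pixels' of the second display mode."""
    edges = np.arange(hu_range[0], hu_range[1] + bin_width, bin_width)
    counts, edges = np.histogram(np.asarray(nodule_hu), bins=edges)
    ratios = counts / max(counts.sum(), 1)  # fraction of voxels per HU bin
    return edges, counts, ratios
```

The `ratios` array already sums to 1 over the nodule, so a display layer only has to map bins to bars or curve points.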
According to one aspect of the present disclosure, there is provided a method for displaying a medical image, including:
in response to the selection of the same attention object contained in any group of medical images, displaying the characterization content related to the attention object in the same display interface; wherein:
the characterization content is used for characterizing the corresponding relation between the image parameters of the attention object and the distribution condition of the characteristic points of the attention object.
In some embodiments, further comprising:
in response to selection of the same object of interest contained in any one group of medical images, determining image parameters of the object of interest;
and displaying the physical parameters of the attention object based on the image parameters of the attention object.
In some embodiments, further comprising:
and presenting or hiding the representation content in response to the operation in the display interface.
According to one aspect of the present disclosure, there is provided an information processing method including:
extracting medical image information of an object of interest in a medical image, the medical image information at least comprising: the image parameters of the attention object and the distribution condition of the characteristic points of the attention object;
and obtaining, based on the medical image information of the object of interest, the correspondence between the image parameters of the object of interest and the distribution of its feature points.
According to one aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement:
the display method according to the above; or
According to the information processing method described above.
In the medical image display method, the information processing method, and the computer-readable storage medium according to the various embodiments of the present disclosure, the object of interest contained in a medical image is further processed and displayed. On the one hand, the correspondence between the image parameters of the object of interest and the distribution of its feature points can be obtained; on the other hand, the change trend of the object of interest can be judged visually, which provides great convenience for clinical analysis and diagnosis. For a lung nodule, for example, the correspondence between the distribution range of CT values and the particles contained in the nodule can be read directly from the nodule's CT value range: one can see how many particles fall within a given CT value interval and thereby judge accurately whether the nodule shows signs of deterioration. This helps in diagnosing whether the nodule is benign or malignant and improves diagnostic efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure, as claimed.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may designate like components in different views. Like reference numerals with letter suffixes or like reference numerals with different letter suffixes may represent different instances of like components. The drawings illustrate various embodiments generally, by way of example and not by way of limitation, and together with the description and claims, serve to explain the disclosed embodiments.
FIG. 1 illustrates an interface of chest CT aided diagnosis software according to the present disclosure;
fig. 2 is a schematic view of a display interface of a medical image display method according to an embodiment of the present disclosure;
fig. 3 is a schematic view illustrating an interaction mode of a display interface of a display method of a medical image according to an embodiment of the present disclosure;
fig. 4 is a schematic view illustrating another interaction mode of a display interface of a display method of medical images according to an embodiment of the present disclosure;
fig. 5 is a schematic view illustrating still another interaction mode of a display interface of a display method of a medical image according to an embodiment of the present disclosure;
fig. 6 illustrates another interface of the chest CT aided diagnosis software related to the medical image display method according to an embodiment of the present disclosure, in which a comparison (alignment) mode is shown;
fig. 7 is a schematic view illustrating still another display interface of a display method of medical images according to an embodiment of the present disclosure;
fig. 8 is a schematic view illustrating each display interface of the medical image display method according to the embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described below clearly and completely with reference to the accompanying drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items.
To keep the following description of the embodiments of the present disclosure clear and concise, detailed descriptions of known functions and known components have been omitted from the present disclosure.
The medical images involved in the present disclosure are three-dimensional medical images of the human body and of its parts or organs acquired by various medical imaging devices. Such an image may be, for example, a three-dimensional image obtained directly by computed tomography (CT), or a three-dimensional image reconstructed from two-dimensional CT slice images; the disclosure is not limited in this respect. A two-dimensional slice image is a two-dimensional sequential digital tomographic image of the human body and of its parts or organs acquired by a medical imaging apparatus such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), or ultrasound; the disclosure is not limited thereto. A two-dimensional slice image may also refer to a two-dimensional image obtained by feature extraction and reconstruction from a three-dimensional medical image.
The present disclosure describes the medical image display method, the information processing method, and the computer-readable storage medium of its embodiments taking a CT image as the main illustrative example. It should be understood that DICOM images can present a three-dimensional image of an organ fully and in detail, the three-dimensional image being the primary basis on which the views below are built. The sagittal plane divides the human body into left and right parts; any left-right section is a sagittal plane, the section dividing the body into equal left and right halves is called the median sagittal plane, and the corresponding image can be defined as a sagittal view. The coronal plane is the plane through the vertical and horizontal axes that divides the human body into front and rear parts; all planes parallel to it are likewise coronal planes, and the corresponding image can be defined as a coronal view. The transverse plane is perpendicular to both the sagittal and coronal planes and completes the three standard anatomical views. In the analysis and diagnosis of CT medical images, the parts, lesions, foreign bodies, space-occupying structures, and so on to be analyzed and diagnosed, that is, all objects of clinical analysis and diagnosis significance, may be referred to as objects of interest. The embodiments of the present disclosure take a nodule, such as a lung nodule, as the example object of interest contained in a chest CT image. In chest CT imaging, a lung nodule refers to a focal, roughly ellipsoidal, solid or sub-solid lung shadow of increased density no larger than about 3 cm in diameter. Nodules smaller than 5 mm in diameter are called micro nodules, and those of 5-10 mm are called small nodules.
A lung nodule can be a benign lesion, a malignant lesion, or a borderline lesion. Currently, a chest image can be acquired by CT, and AI and similar techniques can assist in diagnosing lung nodules possibly present in it. Some diagnostic information display interfaces give information about each detected lung nodule, such as: benignity/malignancy, volume, long and short diameters, density, doubling time, CT value range, and so on. The feature information of a lung nodule may include volume, long/short-diameter, and density information obtained from the chest image, and may also be characterized simply and intuitively based on a classification rule for nodules and the resulting class of the nodule, so that a clinical judgment can be made from a quick display of the corresponding feature information. For example, the feature information may further include the type of the nodule, such as malignant or benign, and its risk level, such as low, medium, or high risk; the risk level may also be presented as a percentage, such as a risk of "X%", none of which is specifically limited here. Optionally, the classification rule for lung nodules may be a rule set by the user or an existing rule, such as the Mayo prediction model for lung nodules or the Lung Image Reporting and Data System (Lung-RADS) classification, which is likewise not specifically limited here.
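The size-based vocabulary above can be sketched as a tiny classifier; the 5 mm and 10 mm cut-offs come from the text, while the 30 mm nodule/mass boundary is a common radiological convention added here as an assumption:

```python
def classify_nodule_by_diameter(diameter_mm):
    """Illustrative size labels: micro nodule below 5 mm, small nodule
    5-10 mm, nodule up to 30 mm; larger findings are conventionally called
    a mass (the 30 mm boundary is an assumption, not from the patent)."""
    if diameter_mm < 5:
        return "micro nodule"
    if diameter_mm < 10:
        return "small nodule"
    if diameter_mm <= 30:
        return "nodule"
    return "mass"
```

Real rules such as Lung-RADS combine size with density type and growth, so this is only the simplest possible stand-in.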
As one aspect, an embodiment of the present disclosure provides a method for displaying a medical image, including:
responding to selection of an attention object contained in the medical image, and displaying representation content related to the attention object in a display interface; wherein:
the characterization content is used for characterizing the corresponding relation between the image parameters of the attention object and the distribution condition of the characteristic points of the attention object.
As shown in fig. 1, fig. 1 is an interface of the chest CT aided diagnosis software according to the present disclosure, wherein:
the toolbar area is located at the top of the view; through operations in this area, part or all of the interaction with the interface of the CT aided diagnosis software can be performed;
the medical image list area is located at the left side of the view; through operations in this area, the medical images requiring analysis and diagnosis can be called up, and the area may also include a historical image list;
the middle of the view may be arranged as the second image display area of an embodiment of the present disclosure, for displaying at least one of the following medical images: 3D medical images, cross-sectional medical images, sagittal medical images, coronal medical images, and images derived from them, such as corresponding organ expansion maps, organ segmentation maps, and the like. Most relevant to the present disclosure, this region may serve as the image display area in which the medical images requiring analysis and diagnosis are displayed. It should be understood that the display in this area may include the static display of single-frame or multi-frame medical images, two-dimensional as well as three-dimensional, and also the dynamic display of medical images;
the right side of the view may be a diagnostic information display area in which clinical information of the medical image requiring analysis and diagnosis is shown. The presentation may take the form of a list of information boxes, charts, text, digitized curves, and so on. The displayed information content is preferably operable and interactive, namely: by operating on the displayed information content, more information can be linked to the medical image and to information related to it. Taking lung nodules as an example, the diagnostic information in this region may include a list of all lung nodules diagnosed, for example, by AI. As is clinically known, lung nodule information includes the benignity/malignancy, volume, density, doubling time, CT value, and so on of the nodule. As those skilled in the art will understand from clinical practice, although the diagnostic information gives the distribution range of the CT values of a lung nodule and the nodule's density, these two kinds of information alone do not allow an intuitive judgment during analysis and diagnosis of whether the nodule may be deteriorating, because the given CT value range does not reveal the correspondence between that range and the particles contained in the nodule, that is, how many particles fall within a certain CT value interval; hence the degree of deterioration of the nodule cannot be determined.
To enable intuitive analysis and diagnosis of a lung nodule, the present disclosure further provides, in response to the selection of a lung nodule and of medical image parameters of the selected nodule, information about that nodule, presented in a display interface. As illustrated in fig. 1, the selection of a lung nodule and of its medical image parameters may be performed in one or more of the display regions of fig. 1. One way is to select a lung nodule in the image display area and select an image parameter of the nodule via the tool options of the toolbar area, where the image parameter may include a CT value, a CT value distribution range, and the like. Another way is to determine a series of medical images through operations in the medical image list area and, on that basis, select in the image display area the lung nodules requiring attention and the image parameters of those nodules, which again may include CT values, CT value distribution ranges, and the like. Yet another way is to interact with one or more items of diagnostic information in the diagnostic information display area so as to select the lung nodule and its image parameter. Specifically, in conjunction with fig. 1, in response to a user clicking on a CT value in the diagnostic information display area, the corresponding diagnostic system may determine a lung nodule shown in the image display area and select for that nodule a corresponding CT value range of -a HU to b HU (e.g., -600 HU to -149 HU).
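A clicked CT value range can be turned into a voxel selection with a short numpy sketch; the function and variable names are illustrative assumptions, not from the patent:

```python
import numpy as np

def voxels_in_hu_range(volume, nodule_mask, hu_min, hu_max):
    """Boolean mask of the nodule voxels whose HU value lies in the selected
    range [hu_min, hu_max], e.g. -600 HU to -149 HU. 'volume' is the 3-D HU
    array and 'nodule_mask' marks the voxels of the selected nodule."""
    return nodule_mask & (volume >= hu_min) & (volume <= hu_max)
```

The resulting mask can drive a highlight overlay in the image display area or feed the histogram of the characterization content.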
In response to the selection of the lung nodule and of its medical image parameters, the characterization content of the embodiments of the present disclosure presents to the user the correspondence between the image parameters of the object of interest, i.e., the lung nodule, and the distribution of the nodule's feature points. The technical purposes that can thereby be achieved include at least the following. During medical image analysis and diagnosis, both the image parameters of an object of interest and the distribution of its feature points are made visible: for the lung nodules and their contained particles of the various embodiments of this disclosure, the image parameters may include, without limitation, benignity/malignancy, volume, density, doubling time, and CT values; the feature points are a class of points able to characterize the image significance of the object of interest in clinical work, image analysis, and image diagnosis; and their distribution may include the distribution number, range, degree, magnitude, and so on. To present the characterization content visually, the embodiments of the present disclosure display the characterization content about the object of interest in the display interface, in any of various forms allowing interaction with the user interface, including but not limited to characterization by graphics, by text, by curves, or by dynamic pictures. For the characterization of a lung nodule, the CT values of the nodule may be displayed as a histogram, or as a curve when the abscissa and ordinate are not fixed.
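As a toy illustration of the histogram display form, a plain-text rendering of per-bin ratios might look like this; the patent's interface is graphical, so this is purely a sketch with invented names:

```python
def render_hu_histogram(edges, ratios, width=40):
    """Text sketch of the 'first display mode': one bar per HU bin, with bar
    length proportional to the fraction of nodule voxels in that bin."""
    lines = []
    for lo, hi, r in zip(edges[:-1], edges[1:], ratios):
        bar = "#" * int(round(r * width))
        lines.append(f"[{lo:6.0f}, {hi:6.0f}) HU |{bar} {r:.0%}")
    return "\n".join(lines)
```

A GUI would replace the `#` bars with drawn rectangles or a curve, but the mapping from ratios to marks is the same.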
As the skilled person will understand, for different display scenarios the correspondence between the image parameters of the object of interest and the distribution of its feature points according to the embodiments of the present disclosure may be displayed in the display interface and display areas shown in fig. 1, or independently of them; those interfaces and areas can be regarded as the current display interface configured for the current operation. In other words, a display interface different from the current one may serve as the display carrier that represents the correspondence between the image parameters of the object of interest and the distribution of its feature points. Such separate display interfaces may be realized as a pop-up interface, a floating interface, a highlighted interface, or even a remote interface on a device other than the current display device.
In some embodiments, the lung nodule is taken as a target of interest in the present disclosure, and the image parameters of the lung nodule and the related diagnostic information may be determined in some ways as follows.
1. Pulmonary nodule detection
A method for detecting lung nodules may serve as one of the schemes for determining lung nodule information in the various embodiments of the present disclosure.
The method comprises the following steps:
step 1: acquiring three-dimensional coordinates of a nodule image and candidate nodules in the nodule image;
specifically, the nodule image is a three-dimensional image, and the three-dimensional coordinates of the nodule candidate may be three-dimensional coordinates of a point within the nodule candidate (for example, three-dimensional coordinates of a nodule center point) or three-dimensional coordinates of a point on the surface of the nodule candidate.
Step 2: determine the ROI (region of interest) of each candidate nodule from the nodule image according to the candidate's three-dimensional coordinates.
Specifically, a pixel cube containing the candidate nodule may be determined by extending a preset distance outward from the candidate's three-dimensional coordinates, the preset distance being a preset multiple of the candidate's radius, e.g., 1.25 times the radius. This pixel cube is then cropped and interpolated to scale it to a fixed size. A spatial-information channel, holding the distance from each pixel of the cube to the candidate's three-dimensional coordinates, is added to every pixel, and the result is output as the ROI. For example, a (2L × 2L × 2L) pixel cube may be selected by extending L pixels in both directions along each of the three coordinate axes from the candidate's three-dimensional coordinates.
Step 3: determine the confidence of each candidate nodule according to the ROI and the nodule detection model.
In the embodiments of the present disclosure, the nodule detection model is obtained by training a convolutional neural network on many nodule images with labeled nodule regions. Specifically: first obtain a candidate nodule set from which false positives are to be filtered, together with the judgment results for all candidates in the set; after collecting a large number of chest CT images, the candidate set may be obtained with other schemes, and whether each candidate is a nodule is established by multiple rounds of judgment. Then perform data enhancement on the candidates in the set to obtain an enhanced candidate set: the data amount can, for example, be increased K-fold by random horizontal mirroring, random rotation by an arbitrary angle, random up/down/left/right translation by 0-5 pixels, random scaling by a factor of 0.85-1.15, and the like. Determine the ROI of each candidate in the enhanced set from the nodule image according to the candidates' three-dimensional coordinates, and train a preset 3D convolutional neural network model on these ROIs to obtain the nodule detection model. During training, the cross entropy between the nodule confidence output by the 3D convolutional neural network and the label of the training sample can serve as the loss function, training proceeds by back-propagation, and the optimization algorithm is SGD. The nodule detection model obtained through these steps comprises M 3D convolution feature extraction models and a fully connected module.
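Two of the enhancement operations named above can be sketched dependency-free in numpy; arbitrary-angle rotation and 0.85-1.15x scaling would need e.g. `scipy.ndimage.rotate`/`zoom` and are left out of this minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(roi):
    """One randomized enhancement pass over a 3-D ROI cube: random horizontal
    mirroring, then a random translation of up to 5 voxels per axis (np.roll
    wraps at the borders, a simplification of true translation)."""
    out = np.asarray(roi)
    if rng.random() < 0.5:               # random horizontal mirror
        out = out[:, :, ::-1]
    shift = rng.integers(-5, 6, size=3)  # random 0-5 voxel shift per axis
    return np.roll(out, tuple(shift), axis=(0, 1, 2))
```

Applying `augment` K times per candidate yields the K-fold enlarged training set described in the text.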
Each 3D convolution feature extraction model further includes a 3D convolution layer (J × J × J) and a max-pool layer (H × H × H). The fully connected module may include two fully connected layers.
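As an illustrative sketch in plain numpy rather than a deep-learning framework, the pooling stage of such a block, and the spatial size left after M stacked blocks, can be written as below. The 'same'-padded convolution (size-preserving) and the default H = 2 are assumptions for illustration.

```python
import numpy as np

def max_pool3d(x, h=2):
    # Non-overlapping H x H x H max pooling (stride = H); trailing voxels
    # that do not fill a full window are dropped.
    d0, d1, d2 = (s // h for s in x.shape)
    x = x[:d0 * h, :d1 * h, :d2 * h]
    return x.reshape(d0, h, d1, h, d2, h).max(axis=(1, 3, 5))

def output_side(side, m_blocks, h=2):
    # Spatial side length after M blocks of ['same'-padded J x J x J conv
    # -> H x H x H max pool]: the conv keeps the size, the pool divides it.
    for _ in range(m_blocks):
        side //= h
    return side
```

For example, a 32-voxel cube passed through three such blocks leaves a 4-voxel cube before the fully connected module flattens it.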
Step 4: filter out false-positive candidate nodules from among the candidates whose confidence is greater than the threshold, according to those candidates, the segmentation result of the body part where each candidate is located, and the three-dimensional coordinates of each candidate.
In the specific filtering process, bone-type false positives among the candidates whose confidence is greater than the threshold can be filtered out according to the three-dimensional coordinates of each candidate and the pixels in a preset region around it; the threshold may be set empirically. For example, a region of I mm may be expanded around the three-dimensional coordinates of the candidate nodule, yielding an (I × I × I) mm³ cube. The pixels with a CT value greater than 400 in this region are then counted; if their proportion is greater than the first threshold, the candidate can be regarded as a bone-type false positive and filtered out. Diaphragm-type false positives among the candidates whose confidence is greater than the threshold are filtered out according to the three-dimensional coordinates of each candidate and the segmentation result of the body part where it is located. For example, starting from the three-dimensional coordinates of the candidate, region blocks of the nodule's diameter are expanded to the four sides and the numbers of pixels inside and outside the lung within them are counted; if the numbers of intra-pulmonary and extra-pulmonary pixels are similar and the candidate sits roughly in the middle of the image, it can be regarded as a diaphragm-type false positive and filtered out. Mediastinum-type false positives among the candidates whose confidence is greater than the threshold are likewise filtered out according to the three-dimensional coordinates of each candidate and the segmentation result of the body part where it is located.
For example, if the center of the candidate is outside the lung, does not exceed the lung range in the vertical direction, and its relative position along the X axis is between 0.45 and 0.55 (i.e., near the center), the candidate is regarded as a mediastinum-type false positive.
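The bone and mediastinum rules above can be sketched as simple predicates. This is a hedged illustration: the axis convention (first array axis as X, third as the vertical/axial direction), the 0.3 value for the first threshold, and the function names are all assumptions not fixed by the text.

```python
import numpy as np

def is_bone_false_positive(region_hu, bone_hu=400, first_threshold=0.3):
    # Bone rule: fraction of voxels above ~400 HU inside the (I x I x I) mm
    # cube around the candidate; first_threshold = 0.3 is an assumed value.
    return float(np.mean(region_hu > bone_hu)) > first_threshold

def is_mediastinum_false_positive(center, lung_mask):
    # Mediastinum rule: center outside the lung, within the lung's vertical
    # extent, and relative X position in [0.45, 0.55].
    x, y, z = center
    if lung_mask[x, y, z]:          # inside the lung: not mediastinum-type
        return False
    zs = np.any(lung_mask, axis=(0, 1)).nonzero()[0]  # slices containing lung
    in_vertical_range = zs.size > 0 and zs.min() <= z <= zs.max()
    rel_x = x / lung_mask.shape[0]
    return in_vertical_range and 0.45 <= rel_x <= 0.55
```

Candidates for which either predicate returns True would be dropped from the detection result.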
Second, the pulmonary nodule density
The present disclosure may be implemented by a method for determining a lung nodule density as one of schemes for determining lung nodule information in various embodiments of the present disclosure.
The method comprises the following steps:
Step 1: acquire a lung CT image.
Specifically, the lung CT image is an image obtained by scanning cross sections around the human lung one by one with a precisely collimated X-ray beam, gamma rays, or ultrasonic waves together with a highly sensitive detector. It will be appreciated by those skilled in the art that the lung CT image is a three-dimensional image; that is, the method described in the embodiments of the present disclosure can determine the lung nodule density on a three-dimensional image.
Step 2: and determining the position of the lung nodule in the CT image, and extracting the lung nodule image from the lung CT image.
Taking the lung nodule image obtained from a center coordinate and a radius as an example, the position of the lung nodule in the lung CT image is the center coordinate (x0, y0, z0). In actual image analysis and diagnosis the lung nodule is irregular. Suppose the distance from the center coordinate (x0, y0, z0) to the closest edge point (e.g., one of the most inwardly concave edge points of the irregular nodule) is D1, and the distance from (x0, y0, z0) to the farthest edge point (e.g., one of the most outwardly extending edge points) is D2. A region with (x0, y0, z0) as its center point and 2 times D2 as its side length can then be taken; the resulting region is the lung nodule image corresponding to the lung nodule. That is, in this embodiment the maximum distance from the center coordinate (the D2 described above) is the criterion for obtaining the lung nodule image. To ensure that the obtained lung nodule image covers the region of the whole nodule, a side length greater than 2 times D2 can be used, thereby avoiding omission of part of the nodule. After the lung nodule image is obtained, it can be further processed so as to enlarge the training sample size of the preset feature extraction neural network model and thus further improve the accuracy of that model. In the embodiments of the present disclosure, there are various processing manners for the lung nodule image, for example horizontal translation, up-down translation, horizontal mirroring, vertical mirroring, rotation, scaling, and the like.
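The cube extraction described above can be sketched as follows. This is a minimal sketch under assumptions not stated in the text: a `margin` factor generalizes the "more than 2 times D2" option, and regions falling outside the CT volume are zero-padded so the output side length stays fixed.

```python
import numpy as np

def crop_nodule_image(ct, center, d2, margin=1.0):
    # Cube of side margin * 2 * D2 centered on (x0, y0, z0);
    # margin > 1 grows the cube to avoid clipping the nodule,
    # out-of-volume parts are zero padded.
    half = int(round(margin * d2))
    side = 2 * half
    out = np.zeros((side, side, side), dtype=ct.dtype)
    src, dst = [], []
    for c, n in zip(center, ct.shape):
        lo = c - half
        src.append(slice(max(lo, 0), min(lo + side, n)))
        dst.append(slice(max(lo, 0) - lo, min(lo + side, n) - lo))
    out[tuple(dst)] = ct[tuple(src)]
    return out
```

Calling it with `margin=1.25`, say, would implement the "side length greater than 2 times D2" safeguard.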
Step 3: extract features from the lung nodule image with a preset feature extraction neural network model to obtain the corresponding feature vector.
In the actual processing, the parameters of the preset feature extraction neural network model may be obtained by training on a plurality of lung nodule images. The model may be a shallow neural network; that is, it may include N convolution modules, where N is less than or equal to a first threshold whose specific value is not limited here. In a specific embodiment, the model may include three convolution modules. Each convolution module may further include a convolution layer, a Batch Normalization (BN) layer connected to the convolution layer, an activation function layer connected to the BN layer, and a max pooling layer connected to the activation function layer. The convolution kernel sizes of the convolution layers, the kernel of the max pooling layer, and the number of feature channels extracted by each convolution module can be set and adjusted by a person skilled in the art according to experience and the actual situation, and are not specifically limited. Since the input image is a three-dimensional image, the preset feature extraction neural network model according to the present disclosure may be a 3D (3 Dimensions) convolutional neural network, and accordingly its convolution kernels may be (m × m × m), where m is an integer greater than or equal to 1.
Step 4: input the corresponding feature vector into a preset density classification neural network model, and obtain the lung nodule density output by that model for the lung nodule.
In particular, the lung nodule density may include several types: for example, it may be solid; as another example, it may be ground glass; it may also be semi-solid. A person skilled in the art can classify lung nodule density according to experience and the actual situation, and the classification is not specifically limited. The parameters of the preset density classification neural network model are obtained by training on a plurality of corresponding feature vectors and the lung nodule density corresponding to each lung nodule. Further, the preset density classification neural network may be any of several kinds of neural network; for example, the model according to the present disclosure includes a first fully connected layer, a second fully connected layer, and a softmax layer. The feature vectors corresponding to confirmed diagnoses are calculated sequentially through the first and second fully connected layers, classified by the softmax layer, and the classification result is output, thereby obtaining the lung nodule density corresponding to the lung nodule. In a specific training process, for instance: the feature vector numbered 1 is X1 and the corresponding lung nodule density is semi-solid; the feature vector numbered 2 is X2 and the corresponding density is solid; the feature vector numbered 3 is X3 and the corresponding density is solid; the feature vector numbered 4 is X4 and the corresponding density is ground glass; the feature vector numbered 5 is X5 and the corresponding density is semi-solid.
The feature vectors corresponding to these numbers, together with the lung nodule density corresponding to each lung nodule, are input into the preset density classification neural network model to determine its parameters. Concretely, the feature vectors can be input into an initial density classification neural network model to obtain the predicted lung nodule density of each nodule, and back-propagation training is then performed against the actual density of each nodule to generate the preset density classification neural network model. After the trained model is obtained, the corresponding feature vector can be input into it to obtain confidence values for a plurality of preset lung nodule densities, and the preset density with the highest confidence is taken as the lung nodule density corresponding to the nodule.
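The inference pass of the FC-FC-softmax classifier described above can be sketched in numpy. The ReLU between the two fully connected layers and the label set and its ordering are assumptions for illustration; trained weights would come from the back-propagation step described in the text.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_density(x, w1, b1, w2, b2,
                     labels=("solid", "semi-solid", "ground-glass")):
    # First fully connected layer (ReLU assumed), second fully connected
    # layer, softmax; the label with the highest confidence is returned.
    h = np.maximum(x @ w1 + b1, 0.0)
    conf = softmax(h @ w2 + b2)
    return labels[int(conf.argmax())], conf
```

The returned `conf` vector corresponds to the per-density confidence values from which the maximum is selected.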
Third, regarding pulmonary nodule doubling time
The present disclosure may be implemented by a method for determining pulmonary nodule doubling time as one of schemes for determining pulmonary nodule information in various embodiments of the present disclosure.
The method comprises the following steps:
step 1: a first nodule image and a second nodule image are acquired.
Specifically, the first and second nodule images are nodule images captured from the same body at different times, the second nodule image being captured later than the first.
Step 2: and matching the target nodules in the first nodule image and the second nodule image, determining the length of the long and short diameters of the matched target nodules in the first nodule image and the second nodule image, determining the volume of the target nodules in the first nodule image according to the length of the long and short diameters of the target nodules in the first nodule image, and determining the volume of the target nodules in the second nodule image according to the length of the long and short diameters of the target nodules in the second nodule image. The doubling time of the lung nodule, i.e. the time required for a doubling of the lung nodule, can then be determined from the volume of the target nodule in the first nodule image and the volume of the target nodule in the second nodule image.
When matching target nodules, the first coordinates of the positioning anchor points in the first and second nodule images may be determined first. The first coordinate of a target nodule in a nodule image can be calibrated manually; alternatively, a convolutional neural network can be trained on nodule images with calibrated first coordinates to obtain a nodule detection model, which then detects the first coordinate of the target nodule in any nodule image. A positioning anchor point is a point that exists in both the first and second nodule images and whose position is relatively fixed in both; it may be preset according to the actual situation. For example, when matching lung nodules, the positioning anchor point may be set as the center point of the tracheal bifurcation, the center point of a vertebra, the center point of the sternum, the apex of the left or right lung, or a combination of these points. The first coordinates of each positioning anchor point in the two images can be calibrated manually or determined by a positioning anchor point detection model, which is obtained by training a convolutional neural network on a plurality of nodule images marked with the first coordinates of the positioning anchor points. A spatial transformation matrix is then determined according to the segmentation images of the first and second nodule images and the first coordinates of the positioning anchor points in them.
After the spatial transformation matrix is obtained, the first coordinates of the target nodule in the first nodule image may be converted, according to the matrix, into second coordinates in the calibration coordinate system. Finally, the target nodule in the second nodule image that matches the target nodule of the first nodule image is determined from those second coordinates. Wherein:
the process of determining the detection model of the positioning anchor point through the training of the convolutional neural network can adopt the following steps: acquiring a nodule image as a training sample; marking coordinates of a positioning anchor point in a training sample manually; and inputting the training sample into a 3D convolutional neural network for training, and determining a positioning anchor point detection model.
The process of determining the first coordinate of the positioning anchor point in the nodule image with the trained positioning anchor point detection model may include the following steps: extract the feature image of the nodule image sequentially through L 3D convolution feature extraction blocks, where 2 ≤ L ≤ 5; convert the feature image into a feature vector and map it, through a fully connected module, to the first coordinate of the positioning anchor point in the nodule image, the first coordinate being three-dimensional. The positioning anchor point detection model comprises an input layer, L 3D convolution feature extraction blocks, q fully connected modules, and an output layer, where 2 ≤ L ≤ 5 with the specific value of L determined by the actual situation, and q > 0. Each 3D convolution feature extraction block comprises a 3D convolution module and a max pooling layer; each 3D convolution module comprises a 3D convolution layer, a Batch Normalization (BN) layer, and an activation function layer; the size of each layer in each block can be determined according to the actual situation.
The process of obtaining the positioning anchor point detection model through training of the 3D convolutional neural network and the 2D convolutional neural network can adopt the following steps: acquiring a nodule image as a first type of training sample; manually marking the coordinates of a first type positioning anchor point in a first type training sample; based on the coordinates of the first type of positioning anchor points, intercepting a two-dimensional nodule image from the first type of training sample as a second type of training sample; manually marking the coordinates of a second type of positioning anchor points in a second type of training samples; and inputting the first type of training samples into a 3D convolutional neural network for training, inputting the second type of training samples into a 2D convolutional neural network for training, and determining a positioning anchor point detection model.
The process of determining the first coordinate of the positioning anchor point in the nodule image with the trained positioning anchor point detection model may include the following steps: extract a first feature image of the nodule image sequentially through M 3D convolution feature extraction blocks, where 2 ≤ M ≤ 5; convert the first feature image into a first feature vector and map it, through a first fully connected module, to the first coordinate of the first type of positioning anchor point in the nodule image, this first coordinate being three-dimensional; intercept a two-dimensional nodule image from the nodule image according to the first coordinate of the first type of positioning anchor point; extract a second feature image of the two-dimensional nodule image sequentially through N 2D convolution feature extraction blocks, where 2 ≤ N ≤ 5; convert the second feature image into a second feature vector and map it, through a second fully connected module, to the coordinates of the positioning anchor point in the two-dimensional nodule image; and determine the first coordinates of the second type of positioning anchor point from the coordinates of the positioning anchor point in the two-dimensional nodule image and the coordinates of the first type of positioning anchor point, the first coordinates of the second type being three-dimensional.
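Once anchor coordinates are available in both images, the spatial transformation matrix mentioned earlier can be estimated from the matched anchor pairs. The text does not fix the estimation method; a least-squares affine fit in homogeneous coordinates is one plausible stand-in, sketched here with hypothetical function names.

```python
import numpy as np

def fit_spatial_transform(anchors_src, anchors_dst):
    # Least-squares affine matrix mapping anchor coordinates in the first
    # image to those in the second; needs >= 4 non-coplanar anchor pairs.
    src = np.asarray(anchors_src, dtype=float)
    dst = np.asarray(anchors_dst, dtype=float)
    a = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    m, *_ = np.linalg.lstsq(a, dst, rcond=None)   # (4, 3) matrix
    return m

def apply_spatial_transform(m, points):
    # Map nodule coordinates from the first image into the second
    # (i.e., into the calibration coordinate system).
    pts = np.asarray(points, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ m
```

Nodules in the first image are mapped through the matrix and matched to the nearest nodule in the second image.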
After the matched target nodule is obtained, an ROI (region of interest) containing the target nodule is determined from the nodule image according to its three-dimensional coordinates. Specifically, a pixel cube containing the nodule is determined by extending a preset distance in every direction from the three-dimensional coordinates of the nodule, the preset distance being a preset multiple of the nodule radius, such as 1.25 times. This pixel cube is then cut out and interpolated to a fixed size. Next, a spatial information channel is added to each pixel in the cube, namely the distance between the pixel and the three-dimensional coordinates of the nodule, and the ROI is output. A nodule region is segmented from the nodule image according to the ROI and a nodule segmentation model, the segmentation model being determined by training a convolutional neural network on a plurality of nodule images with marked nodule regions. The long- and short-diameter lengths of the target nodule are determined by measuring the nodule region, and its volume can then be determined from them; in this way the volume of the target nodule in the first nodule image and in the second nodule image can both be determined, and the doubling time of the lung nodule follows from the two volumes.
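The ROI construction above can be sketched as follows. A hedged illustration: the interpolation to a fixed size is omitted for brevity, the cube is assumed to lie fully inside the CT volume, and the voxel-center offset convention is an assumption.

```python
import numpy as np

def roi_with_distance_channel(ct, center, radius, scale=1.25):
    # Channel 0: HU cube of side 2 * scale * radius around the nodule
    # (scale = 1.25 echoes the "1.25 times the radius" example).
    half = int(round(scale * radius))
    x, y, z = center
    cube = ct[x - half:x + half, y - half:y + half, z - half:z + half]
    # Channel 1: each voxel's distance to the nodule center, i.e. the
    # 'spatial information channel'; voxel centers sit at i + 0.5 - half.
    idx = np.indices(cube.shape, dtype=float) + 0.5 - half
    dist = np.sqrt((idx ** 2).sum(axis=0))
    return np.stack([cube.astype(float), dist])
```

The resulting two-channel array is what a segmentation network in the spirit of the text would consume.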
Fourth, the benign and malignant pulmonary nodules
The present disclosure may be implemented by a method for determining whether a pulmonary nodule is benign or malignant, as one of the schemes for determining pulmonary nodule information in the embodiments of the present disclosure.
The method comprises the following steps:
step 1: acquiring a lung CT image, determining the position of a lung nodule in the lung CT image of the patient, and extracting the lung nodule image from the lung CT image;
step 2: extracting the features of the pulmonary nodule image of the patient by adopting a preset feature extraction neural network model to obtain corresponding feature vectors;
and step 3: and inputting the corresponding feature vector into a preset benign-malignant classification neural network model, and obtaining the benign and malignant corresponding to the lung nodule output by the preset benign-malignant classification neural network model.
The above steps can be understood in combination with the foregoing content. The preset benign-malignant classification neural network model of step 3 comprises a first fully connected layer, a second fully connected layer, and a softmax layer. The feature vectors corresponding to confirmed diagnoses are calculated sequentially through the two fully connected layers, classified by the softmax layer, and the classification result is output, giving the benign or malignant nature of the patient's lung nodule. A plurality of feature vectors, together with the benign/malignant label of each patient's lung nodule, are input into the model to determine its parameters. Specifically, the feature vectors may be input into an initial benign-malignant classification neural network model to obtain the predicted benignity or malignancy of each patient's lung nodule, and back-propagation training is then performed against the actual labels to generate the preset benign-malignant classification neural network model.
Those skilled in the art will appreciate, in conjunction with clinical experience, that the above scheme is also applicable to determining the lobulation, spiculation (burr), vacuole, and pleural traction signs of a lung nodule.
Following the above detailed description of determining pulmonary nodule information: in the display method according to the embodiments of the present disclosure, the correspondence between the image parameters of the attention object and the distribution of the feature points of the attention object includes a correspondence between the distribution range of the CT values of the attention object and the particles included in the attention object;
the displaying the characterization content about the attention object in the display interface comprises: displaying the representation content in a first display mode and/or a second display mode;
the first display mode at least comprises: displaying a CT value histogram of the object of interest.
The second display mode at least comprises: displaying a ratio of the number of particles under the CT value of the attention object to the total number of particles included in the attention object, or a ratio of the number of pixels under the CT value of the attention object to the total number of pixels included in the attention object.
The 3D-CT value is taken as an example for further explanation. To allow the CT values of a lung nodule to be inspected more intuitively during image analysis and diagnosis, and to judge from their distribution whether the current nodule shows signs of malignant change, the CT values of the lung nodule can be selected in the image display area, or even the medical image list area, and displayed according to the particle percentage or frequency corresponding to each CT value. Through the user interaction interface, the CT values can be selected in an interactive display area, for example by drawing a rectangular box or a circle in the image display area, or marking a scale on the lung nodule, so as to determine the CT values of the lung nodule or ROI (region of interest).
In an embodiment of the present disclosure, when the CT values in the diagnostic information display region, such as -600 HU to -149 HU, are clicked, as one mode of operation, a CT value histogram of the current lung nodule is displayed in the form of a floating window; Fig. 2 is a schematic diagram of such a display interface of a medical image display method according to an embodiment of the present disclosure. In clinical practice, the CT value histogram is widely and effectively applied in the image analysis and diagnosis of the liver, lung, kidney, bone, and various glands, for example: the application of the CT value histogram in the diagnosis of primary liver cancer; the correlation, measured at different CT thresholds, between alveolar cell carcinoma content and ground-glass density content in small lung adenocarcinoma; the assessment of adrenal tumor differentiation; and the differential diagnosis of giant cell tumor of bone versus aneurysmal bone cyst. Those skilled in the art will appreciate that an attention object contained in a medical image, such as a lesion site, may be analyzed and diagnosed by CT value histograms. For example, a three-dimensional attention object such as a lung nodule is selected and, conventionally, its CT values are obtained from the image of the cross section where the major axis of the lung nodule lies. Therefore, the CT value histogram referred to in the embodiments of the present disclosure, which covers the whole three-dimensional nodule, may be called a 3D-CT value histogram to distinguish it from the prior-art CT value histogram. Hereinafter, "3D-CT value histogram" will also be used in the description and explanation of the embodiments of the present disclosure.
In the embodiments of the present disclosure, as one implementation, a histogram is generated with the CT value as the abscissa and the percentage or frequency of particles corresponding to that CT value as the ordinate. In general, combining the above: when the density of a lung nodule is ground glass, its CT value is between -600 HU and -500 HU; during the transition toward solid, between -300 HU and -200 HU; and when solid, greater than 0. The present disclosure therefore aims to visually display the correspondence between the image parameters of the attention object and the distribution of its feature points. In lung nodules, and especially ground-glass nodules, the higher the percentage of particles in the solid-transition range relative to the total number of particles, the higher the probability that the ground-glass nodule is malignant. The histogram makes the percentage of particles with CT values between -300 HU and -200 HU directly visible, so that whether the ground-glass nodule may be undergoing malignant change can be determined; analysis and diagnosis are thus assisted by the 3D-CT value histogram, providing convenience for analysis and diagnosis.
The particle percentage referred to in this disclosure may be defined as the ratio of the number of particles at a given CT value to the total number of particles included in the lung nodule. The frequency referred to in this disclosure may be defined as the ratio of the number of pixels at a given CT value to the total number of pixels included in the lung nodule. Through the 3D-CT value histogram, the particle percentage of the lung nodule within a certain CT range, such as between -300 HU and -200 HU, can be seen directly, and whether the nodule may be undergoing malignant change can then be judged. As an interaction, clicking any region outside the floating window, or clicking the window's close control, hides the 3D-CT value histogram of the lung nodule, so that other operations can continue on the interface of the chest CT auxiliary diagnosis software.
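The particle-percentage and in-range-fraction computations behind the 3D-CT value histogram can be sketched in numpy. The bin edges used below are illustrative assumptions; the text fixes only the definitions, not the binning.

```python
import numpy as np

def ct_value_histogram(nodule_hu, bin_width=50, lo=-1000, hi=400):
    # Particle percentage per CT-value bin, computed over all voxels of
    # the segmented 3D nodule (hence '3D-CT value histogram').
    edges = np.arange(lo, hi + bin_width, bin_width)
    counts, edges = np.histogram(nodule_hu, bins=edges)
    return edges, 100.0 * counts / max(nodule_hu.size, 1)

def fraction_in_range(nodule_hu, lo, hi):
    # e.g. the share of voxels in the solid-transition band [-300, -200] HU,
    # used to judge possible malignant change of a ground-glass nodule.
    return float(np.mean((nodule_hu >= lo) & (nodule_hu <= hi)))
```

Plotting the returned percentages against the bin edges reproduces the floating-window histogram described above.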
As a further interaction optimization, in order to avoid frequent user operations, the display interface that displays the characterization content of the attention object in the embodiments of the present disclosure is configured as an interface that the user can operate interactively. Wherein:
the display interface is configured into an interactive operation interface mode I
Fig. 3 is a schematic view illustrating an interaction mode of a display interface of a medical image display method according to an embodiment of the present disclosure. The display interface displays the characterization content of the attention object, such as a 3D-CT value histogram, provides an operable object, such as a text input box, a numerical input box, a CT value selection operation object, and the like, and a user performs an operation on the display interface to re-determine the CT value or the CT value range of the lung nodule to which the embodiment of the present disclosure needs to pay attention. In response to these operations by the user, the display method of the embodiment of the present disclosure can update the display of the characterizing content on the display interface in real time, updating the 3D-CT value histogram to correspond to the newly determined CT value or CT value range of the lung nodule.
Mode two: the display interface is configured as an interactive operation interface
Fig. 4 is a schematic view illustrating another interaction manner of a display interface of a display method of a medical image according to an embodiment of the present disclosure, and fig. 5 is a schematic view illustrating still another interaction manner of a display interface of a display method of a medical image according to an embodiment of the present disclosure. The display interface displays the characterization content related to the object of interest, such as a 3D-CT value histogram, provides an operable object, such as a guiding line, a guiding box, a CT value selection operation object, and the like, and the user performs an operation on the CT value coordinates of the display interface, such as moving the guiding line to align with the CT value on the CT value coordinates to be determined, moving the guiding box to box the CT value range on the CT value coordinates to be determined, and the like, so as to be able to re-determine the CT value or the CT value range of the lung nodule that needs to be focused on by the embodiment of the present disclosure. In response to these operations by the user, the display method of the embodiment of the present disclosure can update the display of the characterizing content on the display interface in real time, updating the 3D-CT value histogram to correspond to the newly determined CT value or CT value range of the lung nodule.
By configuring the display interface that displays the characterization content related to the attention object as an interactive operation interface in at least the two ways above, the embodiments of the present disclosure facilitate user operation: operations on the current display interface and operations to close it are both reduced, so that the user no longer needs to frequently call up and close interfaces. This improves the user interaction experience and the efficiency of medical image analysis and diagnosis.
As one aspect, an embodiment of the present disclosure provides a method for displaying a medical image, including:
in response to the selection of the same attention object contained in any group of medical images, displaying the characterization content related to the attention object in the same display interface; wherein:
the characterization content is used for characterizing the corresponding relation between the image parameters of the attention object and the distribution condition of the characteristic points of the attention object.
As described above, based on the inventive concept of the display method in the above embodiments, the embodiment of the present disclosure is also intended to provide a display method in a comparison mode. In medical image analysis and diagnosis, comparative analysis of different groups of medical images of the same attention object has very important clinical significance.
Specifically, the comparison display mode may be triggered by one or more operations in a toolbar area, a medical image list area, an image display area, or a diagnosis information display area similar to those shown in the figures; these may be implemented by operation modes known to those skilled in the art, and aim to display different groups of medical images of the same attention object in one or more image display areas at the same time, in a contrasting manner. Fig. 6 illustrates an interface of still another chest CT aided diagnosis software related to the medical image display method according to the embodiment of the present disclosure, in which a comparison mode is shown.
As one operation mode, when the doubling time of a certain lung nodule in the diagnostic information display area shown in fig. 6 is clicked, the display method of the embodiment of the disclosure may present the comparison mode. It should be understood that the comparison mode is defined by the interaction between the user and the interface of the imaging device; that is, it may be implemented by software configured in the imaging device as a dedicated mode, or it may be invoked by user operation and arranged in the current display area. The comparison mode can compare a plurality of groups of medical images, and their content is not uniquely limited: they may be medical images of different patients in similar time periods with similar attention objects, of the same patient in different time periods with the same attention object, of different patients on different scanning devices with similar attention objects, and so on. The sources of the plurality of groups of medical images are likewise not limited; they may come from local equipment, from local and remote equipment, from a real-time updated library of medical images and standard images, and so on. The comparison mode of the embodiment of the present disclosure is presented herein by taking a historical image and a current image of the same lung nodule as an example, referred to below as the historical comparison mode.
The historical comparison mode displays images of the same patient collected at different times in a contrasting manner. As shown, the right area of the figure may be arranged to display a historically acquired group of chest images and the left area to display a currently acquired group of chest images. It should be understood that the historical images in the embodiments of the present disclosure may be medical images at a single historical time point, or one or more groups of medical images over a historical time period. In order to clearly present to the user that the historical image and the current image contain the same lung nodule, the consistency between them may be characterized by outputting visual content to the user, for example the labels and layer numbers of the mutually consistent historical and current images. In a specific embodiment, the labels, layer numbers, and the like of the historical image and the current image may be displayed in the display area; for example, the historical image is the Mth layer and the current image is the M'th layer, and the numerical relationship between M and M' indicates that both images contain the same lung nodule. In the drawings herein, the lower left side shows the matching result between the historical image and the current image, for example as a registered nodule list: the lung nodule at the 30th layer in the current image matches the lung nodule at the 28th layer in the historical image, i.e. the two are the same lung nodule.
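The disclosure does not specify how the registered nodule list is computed. One common approach, sketched here under the assumption that each detected nodule carries a centroid in a shared physical (mm) coordinate space after registration, is greedy nearest-centroid pairing; all names and the distance threshold are hypothetical.

```python
from math import dist

def match_nodules(current, historical, max_dist_mm=10.0):
    # Pair each nodule in the current series with its nearest unused
    # counterpart in the historical series, producing a registered nodule
    # list of (current_index, historical_index) pairs.
    pairs, used = [], set()
    for i, cur in enumerate(current):
        best_j, best_d = None, max_dist_mm
        for j, hist in enumerate(historical):
            if j in used:
                continue
            d = dist(cur["centroid_mm"], hist["centroid_mm"])
            if d <= best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs
```

Each pair then links, for example, the layer-30 nodule of the current image to the layer-28 nodule of the historical image in the displayed list.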
In the registered nodule list, the lung nodule in the historical image and the lung nodule in the current image each correspond to a CT value. In order to check the CT value of the lung nodule more intuitively during image analysis and diagnosis, and further judge from the distribution of CT values whether the current lung nodule shows a degree of deterioration, a CT value in the historical image or the current image of the image display area can be selected, and the particle (mass point) percentage or frequency corresponding to that CT value displayed. Through the user interaction interface, the CT value can be selected in the interactive display area, for example by marking a rectangular box, a circle, or a scale on the lung nodule in the historical image or the current image of the image display area, so as to determine the CT value of the lung nodule or of an ROI (region of interest).
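The particle percentage for a selected CT value could be computed as below; the function name and the optional tolerance parameter (to absorb HU noise around the clicked value) are illustrative assumptions, not details given in the disclosure.

```python
def voxel_percentage_at(nodule_hu, hu_value, tolerance=0):
    # Percentage of nodule voxels whose HU equals the selected CT value,
    # optionally counting values within +/- tolerance HU of it.
    hits = sum(1 for v in nodule_hu if abs(v - hu_value) <= tolerance)
    return 100.0 * hits / len(nodule_hu)
```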
In this embodiment, fig. 7 is a schematic diagram of another display interface of the display method of medical images according to the embodiment of the present disclosure. The CT values of the lung nodules in the historical image or the current image can be clicked, and the system displays the 3D-CT values of the lung nodules at different periods in the form of floating windows. For example, when comparing a group of historical images with a group of current images, as shown in the figure, one single color is used to display the 3D-CT values of the lung nodule in the current image, and another single color is used to display the 3D-CT values of the lung nodule in the historical image. From the distribution of the 3D-CT values of the same lung nodule in different periods, the overall density change of the lung nodule can be seen, and whether the lung nodule may be deteriorating can then be judged rapidly. Specifically, as can be seen from fig. 7, compared with the distribution of the CT values of the lung nodule in the historical image, the distribution of the CT values of the lung nodule in the current image has shifted to the right as a whole; that is, the density of the lung nodule has transitioned (infiltrated) from ground-glass toward solid. Whether the lung nodule may be malignant, i.e. the possibility of deterioration, can thus be judged intuitively and quickly from the comparison of the 3D-CT values of the lung nodule in the historical image and in the current image. By comparatively displaying the 3D-CT values of the lung nodule at different periods in the same display interface, possible deterioration can be judged quickly and diagnosis efficiency is improved.
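The rightward shift described above can also be summarized numerically, for example by the change in median HU between the two periods. The disclosure compares the histograms visually; using the median shift as the summary statistic is an assumption made here purely for illustration.

```python
from statistics import median

def density_shift_hu(historical_hu, current_hu):
    # Shift of the median HU between the two periods. A positive value means
    # the CT value distribution moved right as a whole, i.e. the nodule
    # density increased (ground-glass transitioning toward solid).
    return median(current_hu) - median(historical_hu)
```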
As one implementation manner, fig. 8 is a schematic diagram of a display interface of the display method of medical images according to the embodiment of the present disclosure, which presents or hides the characterization content in response to an operation in the display interface. Specifically, when analyzing the 3D-CT values of a lung nodule of a certain period during lung nodule image analysis and diagnosis, for example one of the historical images or the current image, an operation can be performed on an operable object in the display interface, for example clicking the label of the lung nodule of that period, shown as "N3", so that only the 3D-CT value histogram of the lung nodule of that period is displayed. It is equally possible to operate on the operable objects related to the other period, for example clicking the lung nodule label of the other period, illustrated as "P3", to hide its 3D-CT value histogram and display only the histogram of the period of interest. Further preferably, all or part of the image parameter values related to the label "P3" and/or the label "N3" may be displayed in the display interface in the form of a list, a table, or the like.
As a further interactive optimization in the comparison mode of the embodiment of the present disclosure, in order to avoid frequent operations by the user, the display interface that displays the characterization content of the attention object for the plurality of groups of medical images is configured as an interface that can be interactively operated by the user. Wherein:
the display interface is configured into an interactive operation interface mode I
The display interface displays the characterization content related to the attention object, such as a 3D-CT value histogram, and provides operable objects, such as a text input box, a numerical input box, and a CT value selection operation object. By operating on the display interface, the user can re-determine the CT value or CT value range of the lung nodule that needs attention; that is, in the comparison mode, the CT value or CT value range of the historical image and/or the current image can be determined. In response to these operations, the display method of the embodiment of the present disclosure can update the display of the characterization content on the display interface in real time, updating the 3D-CT value histograms of the historical image and the current image to correspond to the re-determined CT value or CT value range of the lung nodule.
The display interface is configured as an interactive operation interface: mode two
The display interface displays the characterization content related to the attention object, such as a 3D-CT value histogram, and provides operable objects, such as a guiding line, a guiding box, and a CT value selection operation object. The user operates on the CT value coordinates of the display interface, for example moving the guiding line to align with the CT value to be determined, or moving the guiding box to frame the CT value range to be determined, so as to re-determine the CT value or CT value range of the lung nodule that needs attention; that is, in the comparison mode, the CT value or CT value range of the historical image and/or the current image can be determined. In response to these operations, the display method of the embodiment of the present disclosure can update the display of the characterization content on the display interface in real time, updating the 3D-CT value histograms of the historical image and the current image to correspond to the re-determined CT value or CT value range of the lung nodule.
By configuring the display interface that displays the characterization content of the attention object as an interactive operation interface in at least the two ways above, the comparison mode of the embodiments of the present disclosure further facilitates user operation: when comparing the attention objects of multiple groups of medical images, operations on the current display interface and operations to close it are both reduced, so that the user no longer needs to frequently call up and close interfaces. This improves the user interaction experience and the efficiency of medical image analysis and diagnosis.
As an implementation manner, the display method of the embodiment of the present disclosure further includes:
in response to selection of the same object of interest contained in any one group of medical images, determining image parameters of the object of interest;
and displaying the physical parameters of the attention object based on the image parameters of the attention object.
In the clinic, physical parameters of an attention object contained in a medical image, such as the size, volume, shape, and edges of the lung nodule referred to herein, are meaningful criteria in analysis and diagnosis. Therefore, whether the 3D-CT value histogram of a certain group of lung nodules is displayed on its own as described above, or the 3D-CT value histograms of a plurality of groups of lung nodules are displayed in the comparison mode, a preferred display scheme can also present the volume of the lung nodule or its volume ratio. For example, for the 3D-CT value histogram of a lung nodule at any period referred to in the drawings, the preferred display scheme may select a first CT value and then a second CT value; according to the technical scheme of the embodiments of the present disclosure, the volume of the lung nodule within the selected CT value range is then automatically calculated, and the volume and/or volume ratio for that range is displayed within the selected range. Here, the volume ratio may be defined as the ratio of the volume of the lung nodule within the CT value range to the total volume of the lung nodule. Of course, the volume and/or volume ratio for the CT value range may also be displayed in a blank space of the 3D-CT value histogram. By calculating and displaying the volume and/or volume ratio of the lung nodule in the selected CT value region, for example in the region of -200 HU to 0 HU, the progress of tumor infiltration can be effectively indicated in clinical analysis and diagnosis.
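A minimal sketch of the automatic volume calculation for a selected CT value range, assuming a voxel-counting approach over the segmented nodule (the function name, the voxel-volume parameter, and the default -200 HU to 0 HU range are illustrative; the disclosure does not state how the volume is computed):

```python
def volume_in_hu_range(nodule_hu, voxel_volume_mm3, lo=-200.0, hi=0.0):
    # Count the nodule voxels whose HU falls in [lo, hi], convert the count
    # to a volume, and also return the volume ratio, i.e. the volume within
    # the CT value range divided by the total volume of the nodule.
    n_in = sum(1 for v in nodule_hu if lo <= v <= hi)
    volume = n_in * voxel_volume_mm3
    total = len(nodule_hu) * voxel_volume_mm3
    return volume, volume / total
```

The voxel volume would come from the scan spacing (row spacing times column spacing times slice thickness).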
As one aspect, an embodiment of the present disclosure provides an information processing method, including:
extracting medical image information of an object of interest in a medical image, the medical image information at least comprising: the image parameters of the attention object and the distribution condition of the characteristic points of the attention object;
and obtaining the corresponding relation between the image parameters of the concerned object and the distribution situation of the characteristic points of the concerned object based on the medical image information of the concerned object.
Specifically, one of the inventive concepts of the present disclosure is to obtain the correspondence between an image parameter of the attention object, i.e. a lung nodule, and the distribution of the feature points of the lung nodule, so as to achieve at least the following technical objective: in medical image analysis and diagnosis, the image parameters of the attention object and the distribution of its feature points, such as the lung nodules and the particles they contain according to embodiments of the present disclosure, are known indirectly and conveniently. The corresponding image parameters and feature point distributions may include, but are not limited to, benignity and malignancy, volume, density, doubling time, CT value, and the like. With reference to the foregoing description, the information processing method according to the embodiment of the present disclosure may specifically obtain: the ratio of the number of particles at a given CT value of the attention object, for example a lung nodule, to the total number of particles included in the lung nodule; or the ratio of the number of pixels at a given CT value to the total number of pixels included in the lung nodule. In this way the inventive concept of the information processing method of the present disclosure is implemented.
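One way to realize the described correspondence, i.e. a mapping from each CT value present in the nodule to the fraction of pixels (or particles) at that value, is a simple frequency map; the function name is an illustrative assumption.

```python
from collections import Counter

def hu_ratio_map(nodule_hu):
    # For every CT value present in the nodule, the ratio of the number of
    # pixels (or particles) at that value to the total number in the nodule.
    counts = Counter(nodule_hu)
    total = sum(counts.values())
    return {hu: n / total for hu, n in sorted(counts.items())}
```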
Two kinds of information alone, i.e. the distribution range of the CT values of a lung nodule and the density of the lung nodule, do not allow one to intuitively determine whether the nodule may be deteriorating: from a given CT value range, the correspondence between that range and the particles included in the nodule cannot be known, how many particles are distributed within a certain part of the range cannot be known, and the degree of deterioration of the nodule cannot be determined. The information processing method of the present disclosure directly obtains the distribution trend and state of the CT values of the lung nodule, so that the degree of deterioration of the lung nodule can be determined accurately.
Based on the general knowledge of those skilled in the art, the following devices can be derived from the medical image display method of the present disclosure:
a display device comprising a display unit and a processor configured to: responding to selection of an attention object contained in the medical image, and displaying representation content related to the attention object in a display interface; wherein: the representation content is used for representing the corresponding relation between the image parameters of the attention object and the distribution condition of the characteristic points of the attention object; and
a display device comprising a display unit and a processor configured to: in response to the selection of the same attention object contained in any group of medical images, displaying the characterization content related to the attention object in the same display interface; wherein: the characterization content is used for characterizing the corresponding relation between the image parameters of the attention object and the distribution condition of the characteristic points of the attention object.
The display device according to the embodiments of the present disclosure, which belongs to the same inventive concept as the medical image display methods of the above embodiments, can further process and display the attention object contained in the medical image, so that its trend of change can be judged visually during analysis and diagnosis. This benefits the analysis and diagnosis of medical images, improves efficiency and accuracy, and provides great convenience for clinical application. On this basis, good effects can also be achieved in clinical strategy selection for diagnosis and treatment, medication, nursing, and rehabilitation, as well as in pathological analysis and the improvement of case libraries.
In some embodiments, the display device described above according to embodiments of the present disclosure may be integrated into an existing image processing platform in various ways. For example, using a development interface, it can be written as a program module on an existing chest image processing platform, achieving compatibility with, and updating of, the existing platform, reducing hardware cost, and facilitating the popularization and application of the display device.
The present disclosure also provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement a display method of medical images according to the above.
The present disclosure also provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement an information processing method according to the above.
In some embodiments, the processor executing the computer-executable instructions may be a processing device including one or more general-purpose processing devices, such as a microprocessor, Central Processing Unit (CPU), Graphics Processing Unit (GPU), or the like. More specifically, the processor may be a Complex Instruction Set Computing (CISC) microprocessor, Reduced Instruction Set Computing (RISC) microprocessor, Very Long Instruction Word (VLIW) microprocessor, a processor running another instruction set, or a processor running a combination of instruction sets. The processor may also be one or more special-purpose processing devices such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), a system on a chip (SoC), or the like.
In some embodiments, the computer-readable storage medium may be a memory, such as a read-only memory (ROM), a random-access memory (RAM), a phase-change random-access memory (PRAM), a static random-access memory (SRAM), a dynamic random-access memory (DRAM), an electrically erasable programmable read-only memory (EEPROM), other types of random-access memory (RAM), a flash disk or other form of flash memory, a cache, a register, a static memory, a compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD) or other optical storage, a tape cartridge or other magnetic storage device, or any other potentially non-transitory medium that may be used to store information or instructions that may be accessed by a computer device, and so forth.
In some embodiments, the computer-executable instructions may be implemented as a plurality of program modules that collectively implement the method for displaying medical images according to any one of the present disclosure.
The present disclosure describes various operations or functions that may be implemented as or defined as software code or instructions. The display unit may be implemented as software code or modules of instructions stored on a memory, which when executed by a processor may implement the respective steps and methods.
Such content may be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code). A software implementation of the embodiments described herein may be provided through an article of manufacture having the code or instructions stored thereon, or through a method of operating a communication interface to transmit data through the communication interface. A machine- or computer-readable storage medium may cause a machine to perform the functions or operations described, and includes any mechanism for storing information in a form accessible by a machine (e.g., a computing device, an electronic system, etc.), such as recordable/non-recordable media (e.g., Read Only Memory (ROM), Random Access Memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism for interfacing with a hardwired, wireless, optical, or other medium to communicate with another device, such as a memory bus interface, a processor bus interface, an Internet connection, a disk controller, etc. The communication interface may be configured by providing configuration parameters and/or transmitting signals to prepare it to provide a data signal describing the software content, and may be accessed by sending one or more commands or signals to it.
The computer-executable instructions of embodiments of the present disclosure may be organized into one or more computer-executable components or modules. Aspects of the disclosure may be implemented with any number and combination of such components or modules. For example, aspects of the disclosure are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more versions thereof) may be used in combination with each other. For example, other embodiments may be used by those of ordinary skill in the art upon reading the above description. In addition, in the foregoing detailed description, various features may be grouped together to streamline the disclosure. This should not be interpreted as an intention that a disclosed feature not claimed is essential to any claim. Rather, the subject matter of the present disclosure may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with each other in various combinations or permutations. The scope of the disclosure should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above embodiments are merely exemplary embodiments of the present disclosure, which is not intended to limit the present disclosure, and the scope of the present disclosure is defined by the claims. Various modifications and equivalents of the disclosure may occur to those skilled in the art within the spirit and scope of the disclosure, and such modifications and equivalents are considered to be within the scope of the disclosure.

Claims (10)

1. A display method of medical images comprises the following steps:
responding to selection of an attention object contained in the medical image, and displaying representation content related to the attention object in a display interface; wherein:
the characterization content is used for characterizing the corresponding relation between the image parameters of the attention object and the distribution condition of the characteristic points of the attention object.
2. The display method according to claim 1, wherein the selecting includes:
selecting the attention object through the operation of the operable interaction object in the display area; wherein:
the operable interactive object is linked to medical image information of the object of interest, and the medical image information at least comprises: and the image parameters of the attention object and the distribution condition of the characteristic points of the attention object.
3. The display method according to claim 2, wherein the display area includes a first image display area and/or a second image display area;
the first image display area is at least used for displaying the information of the identified attention object;
the second image display area is used for displaying at least one of the following medical images:
3D medical images, cross-sectional medical images, sagittal medical images, coronal medical images.
4. The display method of claim 1, wherein the selecting is performed based on a current display interface, wherein the display interface:
the current display interface is contained; or
Independent of the current display interface.
5. The display method according to claim 1,
the corresponding relationship between the image parameters of the attention object and the distribution condition of the feature points of the attention object comprises the following steps: a correspondence between a distribution range of the CT value of the object of interest and particles included in the object of interest;
the displaying the characterization content about the attention object in the display interface comprises: displaying the representation content in a first display mode and/or a second display mode;
the first display mode at least comprises: displaying a CT value histogram of the object of interest.
The second display mode at least comprises: displaying a ratio of the number of particles under the CT value of the attention object to the total number of particles included in the attention object, or a ratio of the number of pixels under the CT value of the attention object to the total number of pixels included in the attention object.
6. A display method of medical images comprises the following steps:
in response to the selection of the same attention object contained in any group of medical images, displaying the characterization content related to the attention object in the same display interface; wherein:
the characterization content is used for characterizing the corresponding relation between the image parameters of the attention object and the distribution condition of the characteristic points of the attention object.
7. The display method according to claim 6, further comprising:
in response to selection of the same object of interest contained in any one group of medical images, determining image parameters of the object of interest;
and displaying the physical parameters of the attention object based on the image parameters of the attention object.
8. The display method according to claim 6, further comprising:
and presenting or hiding the representation content in response to the operation in the display interface.
9. An information processing method comprising:
extracting medical image information of an object of interest in a medical image, the medical image information at least comprising: the image parameters of the attention object and the distribution condition of the characteristic points of the attention object;
and obtaining the corresponding relation between the image parameters of the concerned object and the distribution situation of the characteristic points of the concerned object based on the medical image information of the concerned object.
10. A computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement:
the display method according to any one of claims 1 to 5; or
The display method according to any one of claims 6 to 8; or
The information processing method according to claim 9.
CN201911122046.XA 2019-11-15 2019-11-15 Medical image display method, information processing method, and storage medium Pending CN110853743A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911122046.XA CN110853743A (en) 2019-11-15 2019-11-15 Medical image display method, information processing method, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911122046.XA CN110853743A (en) 2019-11-15 2019-11-15 Medical image display method, information processing method, and storage medium

Publications (1)

Publication Number Publication Date
CN110853743A true CN110853743A (en) 2020-02-28

Family

ID=69600655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911122046.XA Pending CN110853743A (en) 2019-11-15 2019-11-15 Medical image display method, information processing method, and storage medium

Country Status (1)

Country Link
CN (1) CN110853743A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160253443A1 (en) * 2015-02-26 2016-09-01 Washington University In St.Louis CT Simulation Optimization for Radiation Therapy Contouring Tasks
CN109583440A (en) * 2017-09-28 2019-04-05 北京西格码列顿信息技术有限公司 It is identified in conjunction with image and reports the medical image aided diagnosis method edited and system
CN110211672A (en) * 2019-06-14 2019-09-06 杭州依图医疗技术有限公司 Information display method, equipment and storage medium for image analysing computer


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIAO Na et al.: "Computer-aided diagnosis with quantitative analysis of lung adenocarcinoma in situ and atypical adenomatous hyperplasia presenting as ground-glass nodules", Chinese Journal of CT and MRI (《中国CT和MRI杂志》) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021120603A1 (en) * 2019-12-19 2021-06-24 北京市商汤科技开发有限公司 Target object display method and apparatus, electronic device and storage medium
CN111415341A (en) * 2020-03-17 2020-07-14 北京推想科技有限公司 Pneumonia stage evaluation method, pneumonia stage evaluation device, pneumonia stage evaluation medium and electronic equipment
CN111430014A (en) * 2020-03-31 2020-07-17 杭州依图医疗技术有限公司 Display method, interaction method and storage medium of glandular medical image
CN111583177A (en) * 2020-03-31 2020-08-25 杭州依图医疗技术有限公司 Medical image display method and device and storage medium
CN111583177B (en) * 2020-03-31 2023-08-04 杭州依图医疗技术有限公司 Medical image display method and device and storage medium
CN111430014B (en) * 2020-03-31 2023-08-04 杭州依图医疗技术有限公司 Glandular medical image display method, glandular medical image interaction method and storage medium
CN111524582A (en) * 2020-07-03 2020-08-11 嘉兴太美医疗科技有限公司 Method, device and system for loading medical image information and computer readable medium
CN111524582B (en) * 2020-07-03 2020-10-20 嘉兴太美医疗科技有限公司 Method, device and system for loading medical image information and computer readable medium
CN112925461A (en) * 2021-02-23 2021-06-08 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN113538599A (en) * 2021-07-30 2021-10-22 联合汽车电子有限公司 Neural network calibration efficiency evaluation method, device, medium, equipment and vehicle
CN114240880A (en) * 2021-12-16 2022-03-25 数坤(北京)网络科技股份有限公司 Medical scanning data processing method and device, medical equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110853743A (en) Medical image display method, information processing method, and storage medium
CN108022238B (en) Method, computer storage medium, and system for detecting object in 3D image
US20210104049A1 (en) System and method for segmentation of lung
US8335359B2 (en) Systems, apparatus and processes for automated medical image segmentation
US8355553B2 (en) Systems, apparatus and processes for automated medical image segmentation using a statistical model
JP6877868B2 (en) Image processing equipment, image processing method and image processing program
JP5687714B2 (en) System and method for prostate visualization
US9478022B2 (en) Method and system for integrated radiological and pathological information for diagnosis, therapy selection, and monitoring
US11017896B2 (en) Radiomic features of prostate bi-parametric magnetic resonance imaging (BPMRI) associate with decipher score
US8229188B2 (en) Systems, methods and apparatus automatic segmentation of liver in multiphase contrast-enhanced medical images
US7058210B2 (en) Method and system for lung disease detection
KR101805624B1 (en) Method and apparatus for generating organ medel image
CN111081352A (en) Medical image display method, information processing method, and storage medium
EP3796210A1 (en) Spatial distribution of pathological image patterns in 3d image data
CN112215799A (en) Automatic classification method and system for grinded glass lung nodules
CN111105414A (en) Processing method, interaction method, display method and storage medium
WO2022051344A1 (en) System and method for virtual pancreatography pipeline
US7653225B2 (en) Method and system for ground glass nodule (GGN) segmentation with shape analysis
CN101802877B (en) Path proximity rendering
Wang et al. Spatial attention lesion detection on automated breast ultrasound
Kishore et al. A multi-functional interactive image processing tool for lung CT images
KR102311472B1 (en) Method for predicting regions of normal tissue and device for predicting regions of normal tissue using the same
Skalski et al. Virtual Colonoscopy-Technical Aspects
Liu et al. Multimodal Imaging Radiomics and Machine Learning
WO2005013197A1 (en) System and method for ground glass nodule (ggn) segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200228