CN114334097A - Automatic assessment method based on lesion progress on medical image and related product - Google Patents


Publication number
CN114334097A
Authority
CN
China
Prior art keywords
focus
image
lesion
target
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210006013.4A
Other languages
Chinese (zh)
Inventor
陈海斌
李庆运
康军伟
王东鉴
王磊
李杭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Berui Biotechnology Co ltd
Original Assignee
Shenzhen Berui Biotechnology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Shenzhen Berui Biotechnology Co ltd filed Critical Shenzhen Berui Biotechnology Co ltd
Priority to CN202210006013.4A priority Critical patent/CN114334097A/en
Publication of CN114334097A publication Critical patent/CN114334097A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an automatic assessment method based on lesion development on medical images, and a related product. The method comprises the following steps: performing image segmentation on images in which lesions have been labeled, and constructing and training a lesion identification model; performing lesion identification and segmentation on a first-photographed image through the trained lesion identification model to obtain first target lesions; performing lesion identification and segmentation on a re-photographed image to obtain second target lesions; performing corresponding image registration and determining the lesion correspondence between the lesions on the two photographed images; and, based on the lesion correspondence, calculating the volume change and change rate of each lesion as well as the changes in its average, maximum and minimum gray levels, and generating a lesion development evaluation report. The invention realizes automatic identification, matching, and qualitative and quantitative evaluation of lesions on multiple images of the same patient taken at different times, without human intervention, thereby avoiding tedious manual work.

Description

Automatic assessment method based on lesion progress on medical image and related product
[ technical field ]
The invention relates to the technical field of medical image analysis, in particular to an automatic assessment method based on lesion progress on a medical image and a related product.
[ background of the invention ]
In clinical diagnosis, the development of solid lesions is generally evaluated by manually observing and measuring the size of the maximum cross section of the lesion. For solid tumors, RECIST (Response Evaluation Criteria In Solid Tumors) is generally used for evaluating the progress of tumor lesions or the efficacy of treatment, but it requires doctors to delineate and identify the tumors, match them correspondingly, and calculate the volume change rate. In addition, for chest images, in which diffuse lesions are very common, the efficiency of assessment in a RECIST-like manner is significantly compromised.
In view of the above, it is desirable to provide an automatic assessment method based on lesion development on medical images and related products to overcome the above-mentioned drawbacks.
[ summary of the invention ]
The invention aims to provide an automatic assessment method based on lesion development on a medical image, and a related product, so as to solve the problems that the existing lesion development assessment process is complex, requires manual intervention, and is labor-consuming; it thereby realizes automatic identification, matching, and qualitative and quantitative assessment of lesions on multiple images of the same patient taken at different times.
In order to achieve the above object, a first aspect of the present invention provides an automatic assessment method based on lesion development on a medical image, comprising the steps of:
step S100: performing image segmentation on the image subjected to the lesion marking, and constructing and training a lesion identification model based on a convolutional neural network;
step S200: performing focus recognition segmentation on the image shot for the first time through the trained focus recognition model to obtain a first target focus;
step S300: performing focus recognition segmentation on the image which is shot again on the same focus after a preset time interval through the trained focus recognition model to obtain a second target focus;
step S400: performing corresponding image registration according to the first target focus and the second target focus, and determining focus corresponding relation between focuses on the images shot twice;
step S500: based on the corresponding relation of the focus, calculating the volume change and the change rate of the focus and the average gray level, the maximum gray level and the minimum gray level change of the focus, and generating a focus development evaluation report.
In a preferred embodiment, the step S100 includes the steps of:
randomly dividing the case image data with the completed lesion marking into a training set and a verification set according to a preset proportion; wherein, each case image data comprises a medical image and a corresponding binary mask image marked with a focus;
segmenting each medical image into three-dimensional or two-dimensional focus image blocks according to a preset size, and segmenting the binary mask image into image blocks with the same size according to the same mode;
constructing a focus recognition model based on a convolutional neural network, and performing iterative training with the focus image blocks as input and the corresponding binary mask image blocks as output; after each round of training on the training set, verifying the focus recognition model once on the verification set, until a preset iteration target is reached.
In a preferred embodiment, when the focus recognition model is trained, a Dice loss function is used as the minimization target and an Adam optimizer is used to train the model; training stops once the Dice loss on the verification set no longer decreases for a preset number of consecutive rounds, and the model with the minimum Dice loss on the verification set among all trained rounds is stored as the final focus recognition model.
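The Dice loss used as the minimization target above can be written out as follows. This is a minimal NumPy sketch; the function name `dice_loss` and the smoothing constant `eps` are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 minus the Dice coefficient between a predicted
    probability map and a binary ground-truth mask."""
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()
    intersection = (pred * target).sum()
    # eps keeps the ratio defined when both masks are empty
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    return 1.0 - dice
```

In practice this quantity would be computed on the network's soft probability output, so that it remains differentiable for the Adam optimizer.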
In a preferred embodiment, the step S200 includes the steps of:
segmenting the obtained original medical image shot for the first time into segmented image blocks with the same size according to the specification of the focus image block;
sequentially inputting the segmentation image blocks obtained by segmentation into the trained focus recognition model to obtain a focus probability image with a probability value between 0 and 1;
merging the focus probability image blocks of the segmented image blocks into an integral focus probability image matched with the original medical image according to the segmentation sequence;
carrying out three-dimensional median filtering on the whole focus probability image by adopting a median filtering operator to obtain a filtered focus probability image; converting the filtered lesion probability image into a binary lesion mask image by taking 0.5 as a threshold value;
performing three-dimensional space connected domain analysis on the binary focus mask image, and if the focuses are connected in the three-dimensional space, taking the focuses as the same focus;
calculating the volume of each focus according to the pixel number of the focus area and the image resolution, sorting all focuses by volume to obtain the first target focuses, and numbering each focus correspondingly as A_1, A_2, …, A_m.
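The connected-domain analysis and volume sorting of step S200 can be sketched with SciPy's labelling routine. This is a hedged sketch: `lesions_by_volume` and the voxel-volume parameter are hypothetical names, and the default face connectivity of `scipy.ndimage.label` is assumed.

```python
import numpy as np
from scipy.ndimage import label

def lesions_by_volume(mask, voxel_volume_mm3=1.0):
    """Label 3-D connected components of a binary lesion mask and
    return (label, volume) pairs sorted by descending volume."""
    labeled, n = label(mask)  # face-connected components in 3-D
    vols = sorted(
        (((labeled == i).sum() * voxel_volume_mm3), i) for i in range(1, n + 1)
    )
    vols.reverse()
    return [(lab, vol) for vol, lab in vols]
```

The sorted list gives the numbering A_1, A_2, …, A_m directly, with A_1 the largest lesion.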
in a preferred embodiment, the step S300 includes the steps of:
obtaining the second target focuses of the re-photographed original medical image based on the same steps as step S200, and numbering each focus correspondingly as B_1, B_2, …, B_n.
in a preferred embodiment, the step S400 includes the steps of:
carrying out rigid registration on the original medical images photographed at the two times to obtain a spatial mapping matrix M between the two images; wherein M comprises the translation, scaling and rotation of the image, and the mapping point P' of an arbitrary point P in the floating image space into the reference image space can be calculated by the formula P' = P · M;
according to the obtained spatial mapping relation, mapping the binary focus mask image of the re-photographed original medical image onto the original medical image space of the first photographing to obtain the mapped focuses B'_1, B'_2, …, B'_n; then calculating, pair by pair, the coincidence between the second target focuses B_j and the first target focuses A_i through the spatial mapping relation between them (i.e., through the mapped focuses B'_j), and obtaining the matching relation A_i ↔ B_j;
wherein the coincidence degree between a matched pair A_i and B'_j is higher than the focus coincidence degree parameter between B'_j and any other focus in the first-photographed original medical image; if a focus in the re-photographed original medical image does not coincide with any focus on the first-photographed original medical image, the focus is regarded as a newly added focus.
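The row-vector mapping P' = P · M used in step S400 can be sketched in homogeneous coordinates. The helper names `map_points` and `translation_matrix` are illustrative; the patent does not prescribe a particular matrix convention beyond P' = P · M.

```python
import numpy as np

def map_points(points, M):
    """Map N x 3 floating-image points into the reference space with a
    4 x 4 homogeneous matrix, using the row-vector convention P' = P . M."""
    pts = np.asarray(points, dtype=float)
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # append w = 1
    mapped = homo @ M
    return mapped[:, :3]

def translation_matrix(tx, ty, tz):
    """4 x 4 row-vector translation matrix (hypothetical helper;
    rotation and scaling terms would occupy the upper-left 3 x 3 block)."""
    M = np.eye(4)
    M[3, :3] = [tx, ty, tz]
    return M
```

A full rigid registration would estimate M by optimizing an image-similarity metric; only the coordinate mapping itself is shown here.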
In a preferred embodiment, the step S500 includes the steps of:
calculating the image parameters of each lesion, and of all lesions together, in the first target lesions A_1, …, A_m; wherein the image parameters comprise volume, average gray level, maximum gray level and minimum gray level;
calculating one by one the image parameters of each lesion, and of all lesions together, in the second target lesions B_1, …, B_n; wherein the image parameters comprise volume, average gray level, maximum gray level and minimum gray level;
according to the lesion matching relation A_i ↔ B_j, calculating the change value and the change rate of the image parameters of each lesion relative to the first-photographed original medical image, and generating a lesion progress report.
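The change value and change rate of an image parameter between the two scans reduce to simple arithmetic. A minimal sketch follows; the function name and the handling of a zero baseline are assumptions.

```python
def parameter_change(before, after):
    """Absolute change and relative change rate of one lesion image
    parameter (volume, mean/max/min gray) between two scans."""
    delta = after - before
    # The relative rate is undefined for a zero baseline (e.g. a new lesion).
    rate = delta / before if before != 0 else float("inf")
    return delta, rate
```

Applied to every matched pair A_i ↔ B_j, these per-parameter deltas and rates populate the lesion progress report.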
A second aspect of the present invention provides an automatic evaluation apparatus based on lesion progress on a medical image, comprising:
the focus identification model acquisition module is used for carrying out image segmentation on the image subjected to focus labeling and constructing and training a focus identification model based on a convolutional neural network;
the first target focus acquisition module is used for carrying out focus identification segmentation on the image shot for the first time through the trained focus identification model to obtain a first target focus;
the second target focus acquisition module is used for carrying out focus identification and segmentation on the image which is shot again on the same focus after a preset time interval through the trained focus identification model to obtain a second target focus;
a lesion correspondence determining module for performing corresponding image registration according to the first target lesion and the second target lesion, and determining a lesion correspondence between the lesions on the two photographed images;
and the focus evaluation report generation module is used for calculating the volume change and the change rate of the focus and the average gray level, the maximum gray level and the minimum gray level change of the focus based on the focus corresponding relation and generating a focus development evaluation report.
A third aspect of the present invention provides a terminal comprising a memory, a processor and a computer program stored in the memory, wherein the computer program, when executed by the processor, implements the steps of the automatic assessment method based on lesion development on medical images according to any one of the above embodiments.
A fourth aspect of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the automatic assessment method based on lesion development on medical images according to any one of the above embodiments.
A fifth aspect of the present invention provides a computer program product comprising a computer program or instructions which, when executed by a processor, performs the steps of the method for automatic assessment based on lesion development on medical images according to any one of the above embodiments.
The invention provides an automatic assessment method based on lesion progress on medical images, and related products. The identification and delineation of lesions are realized automatically through a lesion identification model; the registration of medical images photographed at different time nodes is then completed fully automatically, establishing the correspondence between lesions, including lesion merging and splitting; finally, the measurement of image parameters of the lesions, such as gray level and volume, is completed fully automatically, the development trend of the lesions is evaluated qualitatively and quantitatively along a time line, and a lesion progress report is generated. Lesions on multiple images of the same patient taken at different times are thus identified, matched, and evaluated qualitatively and quantitatively without manual intervention, avoiding labor waste and providing the clinic with an automated method for qualitative and quantitative assessment of lesion progress.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a flow chart of a method for automated assessment of lesion progression based on medical images provided by the present invention;
FIG. 2 is a CT image of a patient with tuberculosis in the second embodiment;
FIG. 3 is a binary mask image of a tuberculosis lesion of the CT image of the tuberculosis patient shown in FIG. 2;
FIG. 4 is a three-dimensional image block of the CT image of the tuberculosis patient shown in FIG. 2 after being cut;
FIG. 5 is a diagram illustrating a convolutional neural network model according to a second embodiment;
FIG. 6 is an overall pulmonary tuberculosis lesion prediction probability image obtained by combining the segmented probability image blocks in the second embodiment;
fig. 7 is a binary mask image of the tuberculosis focus obtained by transforming the probability image of the tuberculosis focus prediction shown in fig. 6;
fig. 8 is a block diagram of an automatic evaluation apparatus based on lesion development on a medical image according to the present invention.
[ detailed description ] embodiments
In order to make the objects, technical solutions and advantageous effects of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and the detailed description. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Example one
In a first embodiment of the present invention, an automatic assessment method based on lesion development on a medical image is provided, which is used to solve the problem of automated assessment of the development of a disease of a patient in a clinical setting, perform automated identification, matching, qualitative and quantitative assessment on lesions on multiple images of the same patient at different times, and provide an automated qualitative and quantitative assessment method for lesion development in a clinical setting.
As shown in fig. 1, the automatic evaluation method based on lesion progress on a medical image includes the following steps S100 to S500.
Step S100: and carrying out image segmentation on the image subjected to the lesion marking, and constructing and training a lesion identification model based on the convolutional neural network.
Specifically, step S100 includes the following steps:
firstly, randomly dividing the case image data with finished lesion marking into a training set and a verification set according to a preset proportion; wherein, each case image data comprises a medical image and a corresponding binary mask image marked with a focus. The ratio of training set to validation set is typically set to 7: 3 or 8: 2, the setting can be made according to actual data.
Secondly, segmenting each medical image into three-dimensional or two-dimensional focus image blocks according to a preset size, and segmenting the binary mask image into image blocks with the same size according to the same mode.
Finally, a focus identification model is constructed based on a convolutional neural network and iteratively trained with the focus image blocks as input and the corresponding binary mask images (which may be the cut image blocks) as output; after each training round the model is verified once on the verification set, until a preset iteration target is reached. Specifically, a Dice loss function is used as the minimization target and an Adam optimizer is used to train the model. Training stops once the Dice loss on the verification set has not decreased for a preset number of consecutive rounds (for example, 10), and the model with the minimum Dice loss on the verification set among the trained rounds is stored as the final focus identification model.
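The early-stopping rule described above (stop once the verification Dice loss has not improved for a preset number of consecutive rounds, and keep the best round) can be sketched independently of any particular network; the per-round losses are fed in as a plain sequence, and `select_best_model` is a hypothetical name.

```python
def select_best_model(val_losses, patience=10):
    """Early stopping: return (best_round, best_loss), halting once the
    validation Dice loss has not improved for `patience` consecutive rounds."""
    best_round, best_loss, stale = -1, float("inf"), 0
    for rnd, loss in enumerate(val_losses):
        if loss < best_loss:
            best_round, best_loss, stale = rnd, loss, 0  # new best model
        else:
            stale += 1
            if stale >= patience:
                break  # no improvement for `patience` rounds
    return best_round, best_loss
```

In a real training loop, a checkpoint of the model weights would be saved whenever a new best round is found.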
Step S200: and performing focus recognition segmentation on the image shot for the first time through the trained focus recognition model to obtain a first target focus.
Specifically, step S200 includes the following steps:
(1) Dividing the acquired first-photographed original medical image into segmented image blocks of the same size, according to the specification of the focus image blocks; the image blocks used for model training and the original medical image are thus segmented into image blocks of the same size in the same way.
(2) Sequentially inputting the segmented image blocks into the trained focus recognition model to obtain focus probability images with values between 0 and 1; the larger the probability, the more likely the target pixel belongs to a lesion.
(3) Merging the focus probability image blocks of the segmented image blocks, in the segmentation order, into an overall focus probability image matching the original medical image.
(4) Carrying out three-dimensional median filtering on the overall focus probability image with a median filtering operator to obtain a filtered focus probability image; with 0.5 as a threshold, the filtered focus probability image is converted into a binary focus mask image.
(5) Performing three-dimensional connected-domain analysis on the binary focus mask image; focuses connected in three-dimensional space are regarded as the same focus.
(6) calculating the volume of each lesion according to the pixel number of the lesion area and the image resolution, sorting all lesions by volume to obtain the first target lesions, and numbering each lesion correspondingly as A_1, A_2, …, A_m.
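The cutting of a volume into equal blocks and the later merging of the per-block outputs in the same order (steps (1) and (3) above) can be sketched as follows. This is a toy sketch with a small block size, assuming the volume dimensions are divisible by the block size; real CT volumes would be padded first.

```python
import numpy as np

def split_volume(vol, patch=4):
    """Cut a 3-D volume into equal cubic patches, in fixed grid order."""
    z, y, x = vol.shape
    patches = []
    for i in range(0, z, patch):
        for j in range(0, y, patch):
            for k in range(0, x, patch):
                patches.append(vol[i:i + patch, j:j + patch, k:k + patch])
    return patches

def merge_volume(patches, shape, patch=4):
    """Reassemble patches in the same grid order used by split_volume."""
    out = np.zeros(shape, dtype=patches[0].dtype)
    z, y, x = shape
    idx = 0
    for i in range(0, z, patch):
        for j in range(0, y, patch):
            for k in range(0, x, patch):
                out[i:i + patch, j:j + patch, k:k + patch] = patches[idx]
                idx += 1
    return out
```

Because the grid order is deterministic, per-patch probability outputs merge back into a probability image aligned with the original volume.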
step S300: and performing focus recognition segmentation on the image which is shot again on the same focus after a preset time interval through the trained focus recognition model to obtain a second target focus.
Specifically, step S300 includes the following steps: the second target lesions of the re-photographed original medical image are obtained based on the same procedure as step S200, and each lesion is numbered correspondingly as B_1, B_2, …, B_n.
Therefore, for the specific way of obtaining the second target lesions and their numbering, reference can be made to the detailed sub-steps of step S200, which are not repeated here. It should be noted that the second photographing takes place a certain interval after the first photographing, during which the patient's lesions may change, for example in volume or in number.
Step S400: and carrying out corresponding image registration according to the first target focus and the second target focus, searching the focus in each focus area on the first shot image, and determining the focus corresponding relation between the focuses on the two shot images.
Specifically, step S400 includes the following steps:
firstly, carrying out rigid registration on the original medical images photographed at the two times to obtain a spatial mapping matrix M between the two images; wherein M comprises the translation, scaling and rotation of the image, and the mapping point P' of an arbitrary point P in the floating image space into the reference image space can be calculated by the formula P' = P · M;
secondly, according to the obtained spatial mapping relation, mapping the binary lesion mask image of the re-photographed original medical image onto the original medical image space of the first photographing to obtain the mapped lesions B'_1, B'_2, …, B'_n; the lesions B'_j are the image-space mapping of the second target lesions onto the space of the first target lesions.
Finally, calculating pair by pair the coincidence between the second target lesions B_j and the first target lesions A_i through the spatial mapping relation between them (i.e., through the mapped lesions B'_j), and obtaining the matching relation A_i ↔ B_j.
wherein the coincidence degree between a matched pair A_i and B'_j is higher than the lesion coincidence degree parameter (e.g., the Dice coefficient) between B'_j and any other lesion in the first-photographed original medical image. If a lesion in the re-photographed original medical image does not coincide with any lesion on the first-photographed original medical image, it is regarded as a newly added lesion and has no corresponding relation with the lesions on the first-photographed medical image.
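The pairwise coincidence test can be sketched on label images, using the Dice coefficient named above as the coincidence parameter. Function and variable names are illustrative, and the re-scan mask is assumed to have already been mapped into the reference space.

```python
import numpy as np

def match_lesions(first_labels, mapped_labels):
    """Match lesions by voxel overlap (Dice > 0) between the first-scan
    label image and the re-scan label image mapped into the same space.
    Re-scan lesions with no overlap are flagged as newly appeared."""
    matches, new_lesions = {}, []
    for j in np.unique(mapped_labels):
        if j == 0:
            continue  # 0 is background
        vj = mapped_labels == j
        best_i, best_dice = None, 0.0
        for i in np.unique(first_labels):
            if i == 0:
                continue
            vi = first_labels == i
            inter = np.logical_and(vi, vj).sum()
            dice = 2.0 * inter / (vi.sum() + vj.sum())
            if dice > best_dice:  # keep the highest-coincidence partner
                best_i, best_dice = i, dice
        if best_i is None:
            new_lesions.append(int(j))  # no overlap anywhere: new lesion
        else:
            matches[int(j)] = int(best_i)
    return matches, new_lesions
```

Keeping only the highest-Dice partner reflects the rule that a matched pair's coincidence exceeds that with any other lesion.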
Step S500: based on the corresponding relation of the focus, calculating the volume change and the change rate of the focus and the average gray level, the maximum gray level and the minimum gray level change of the focus, and generating a focus development evaluation report.
Specifically, step S500 includes the following steps:
first, a first target lesion is calculated
Figure BDA00034567598800001010
Image parameters of each lesion and all lesions; wherein the image parameters comprise one or more of volume, average gray scale, maximum gray scale, and minimum gray scale.
Secondly, the image parameters of each lesion, and of all lesions together, are calculated one by one for the second target lesions B_1, …, B_n; wherein the image parameters comprise one or more of volume, average gray level, maximum gray level and minimum gray level.
Finally, according to the lesion matching relation A_i ↔ B_j, the change value and the change rate of the image parameters of each lesion relative to the first-photographed original medical image are calculated, and a lesion progress report is generated.
In conclusion, the method automatically realizes the identification and delineation of lesions, evaluated with the Dice coefficient at an accuracy of 0.80. Meanwhile, the registration of images from different time nodes is completed fully automatically, and the relations among lesions, including lesion merging and splitting, are established. In addition, the measurement of image parameters of the lesions, such as gray level and volume, is completed automatically, the development trend of the lesions is evaluated qualitatively and quantitatively along a time line, and a lesion progress report is generated.
Next, embodiment two describes in detail a specific implementation of the method, taking the automatic qualitative and quantitative evaluation of the lesions of pulmonary tuberculosis patients as an example.
Example two
(1) Acquiring 100 CT images of tuberculosis patients as shown in FIG. 2, and having professionals such as imaging doctors mark the tuberculosis lesions on the CT images to obtain binary mask images of the tuberculosis lesions as shown in FIG. 3.
(2) Randomly dividing the CT image data of the pulmonary tuberculosis patients and the corresponding tuberculosis lesion labels, on a per-case basis, into a training set and a verification set at a ratio of 8:2.
(3) Cutting the CT images into three-dimensional CT image blocks with the size of 64 × 64 × 64 in the manner shown in FIG. 4, and cutting the labeled binary mask images into image blocks of the same size in the same manner.
(4) Constructing a convolutional neural network model as shown in FIG. 5, with the CT image blocks of the tuberculosis lesions as input and the correspondingly labeled binary mask images as output. A Dice loss function is used as the minimization target, and an Adam optimizer is used to train the convolutional neural network model. Each training round consists of one pass over all training set data, followed by one verification pass over the verification set data. Training continues until the Dice loss on the verification set has not decreased for 10 consecutive rounds, and the model with the minimum Dice loss on the verification set among the trained rounds is stored as the pulmonary tuberculosis lesion segmentation model.
(5) Acquiring CT images of 20 tuberculosis patients at three different time nodes: the first, second and third visits. The CT image of each time node is cut into CT image blocks of size 64 × 64 × 64 in the manner shown in FIG. 4, as in steps (1)-(3). If fewer than 64 pixels remain along a dimension during slicing, the remaining pixel locations are filled with the HU value of air: -1024.
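The padding rule in step (5), filling the remainder of an incomplete 64-pixel block with the HU value of air, can be sketched as follows (the helper name `pad_to_block` is an assumption):

```python
import numpy as np

def pad_to_block(ct, block=64, fill=-1024):
    """Pad a CT volume with the HU value of air (-1024) so that every
    dimension becomes a multiple of the block size."""
    pads = [(0, (-s) % block) for s in ct.shape]  # pad only at the far end
    return np.pad(ct, pads, mode="constant", constant_values=fill)
```

After padding, the volume splits exactly into 64 × 64 × 64 blocks with no leftover pixels.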
(6) Sequentially inputting the CT image blocks obtained by cutting the CT image of each time node into the pulmonary tuberculosis lesion segmentation model trained in the preceding steps, to obtain for each time node a pulmonary tuberculosis lesion prediction probability image with values between 0 and 1. The larger the predicted probability, the more likely the target pixel is a tuberculosis lesion.
(7) Merging the probability image blocks obtained in step (6), in the cutting order, into an overall pulmonary tuberculosis lesion prediction probability image matching the original CT image of the corresponding time node (as shown in FIG. 6).
(8) Carrying out three-dimensional median filtering on the pulmonary tuberculosis lesion prediction probability image with a median filtering operator to obtain the filtered probability image. With 0.5 as the threshold, the filtered probability image is converted into a binary tuberculosis lesion mask image (as shown in FIG. 7).
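Step (8) can be sketched with SciPy's median filter. The filter window size is an assumption; the patent only specifies a median filtering operator and the 0.5 threshold.

```python
import numpy as np
from scipy.ndimage import median_filter

def probability_to_mask(prob, size=3, threshold=0.5):
    """3-D median filtering of a probability image, followed by
    thresholding at 0.5 to produce a binary lesion mask."""
    filtered = median_filter(prob, size=size)  # removes isolated speckle
    return (filtered >= threshold).astype(np.uint8)
```

The median filter suppresses isolated false-positive voxels before the binary mask is formed.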
(9) Carrying out three-dimensional connected-domain detection on the binary tuberculosis lesion mask image of each time node; lesions connected in three-dimensional space are regarded as the same lesion. The volume of each pulmonary tuberculosis lesion is calculated from the pixel number N of the lesion area and the CT image resolution. The tuberculosis lesions of each time node are sorted by volume and numbered correspondingly (e.g., A_1, …, A_m for the first visit).
(10) Rigidly registering, with the CT image photographed at the first visit as the reference image, the CT images photographed at the second and third visits as floating images, to obtain the spatial mapping matrices M between them; the mapping point P' of an arbitrary point P in the floating image space into the reference image space is calculated by the formula P' = P · M.
(11) Mapping the binary pulmonary tuberculosis lesion mask images corresponding to the CT images of the second and third visits into the space of the first-visit CT image, using the spatial mapping relations obtained in step (10), to obtain the mapped lesions B'_1, B'_2, …, B'_n.
(12) Find the correspondence between the lesions L_i^(1) on the first-scan CT image and the lesions on the second-scan CT image. The specific calculation is as follows: in the reference-image space, compute, for each lesion L_i^(1) and each lesion L'_j^(2) mapped from the second (or third) CT image into the first-scan space, the Dice coefficient

DC_ij = 2|V_i ∩ V_j| / (|V_i| + |V_j|)

where V_i is the binary mask image of lesion i. If DC_ij > 0 for some i ∈ [1, m], lesion L'_j^(2) is considered to correspond to an existing lesion; otherwise L'_j^(2) is considered a newly appeared lesion. If L'_j^(2) overlaps the existing lesion L_i^(1), then lesion L_j^(2) and lesion L_i^(1) are taken to correspond. A newly appeared lesion is treated as a first-appearing lesion, and its correspondence with subsequent time nodes is analysed in the same way. The correspondence between the lesions L_i^(1) on the first CT image (or the lesions newly appearing at the second scan) and the lesions on the third-scan CT image is computed similarly.
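A minimal sketch of the Dice-based matching in step (12), for illustration only; the helper names and toy masks are assumptions, and the rule shown is the one stated above (any positive Dice means correspondence, no overlap means a new lesion):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient DC = 2|A ∩ B| / (|A| + |B|) of two binary masks."""
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 0.0

def match_lesions(first_masks, mapped_masks):
    """For each later-scan lesion mask (already mapped into first-scan space),
    return the 1-based index of the overlapping first-scan lesion, or None
    when it overlaps nothing, i.e. it is a newly appeared lesion."""
    matches = {}
    for j, mb in enumerate(mapped_masks, start=1):
        scores = [dice(ma, mb) for ma in first_masks]
        best = int(np.argmax(scores))
        matches[j] = best + 1 if scores[best] > 0 else None
    return matches

# Toy masks: the first scan has lesions A and B; the second scan has one
# lesion overlapping A and one lesion overlapping neither (a new lesion).
shape = (5, 5, 5)
A = np.zeros(shape, bool); A[0:2, 0:2, 0:2] = True
B = np.zeros(shape, bool); B[3:5, 3:5, 3:5] = True
m1 = np.zeros(shape, bool); m1[1:3, 0:2, 0:2] = True   # overlaps A
m2 = np.zeros(shape, bool); m2[3:5, 0:2, 0:2] = True   # overlaps nothing
matches = match_lesions([A, B], [m1, m2])
```

When several first-scan lesions overlap one mapped lesion, this sketch keeps the highest-Dice match, which is one reasonable reading of the correspondence rule.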
(13) For each lesion L_i^(t) in the correspondence obtained in step (12), calculate the lesion parameters of each lesion, such as volume, average gray level and maximum gray level.
(14) Based on the lesion parameters from step (13), calculate the change in volume, average gray level and maximum gray level of each lesion, and generate a lesion progression report as shown in Table 1 below.
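Steps (13) and (14) can be sketched as follows; this is an illustrative assumption (function names and toy data are not from the patent), computing per-lesion statistics and their change value and change rate between two scans:

```python
import numpy as np

def lesion_params(ct, mask, voxel_mm3=1.0):
    """Volume and gray-level statistics of one lesion on a CT image."""
    vals = ct[mask > 0]
    return {"volume": float(mask.sum()) * voxel_mm3,
            "mean_gray": float(vals.mean()),
            "max_gray": float(vals.max()),
            "min_gray": float(vals.min())}

def progression(old, new):
    """Change value and change rate of every parameter between two scans."""
    return {k: {"change": new[k] - old[k],
                "rate": (new[k] - old[k]) / old[k] if old[k] else float("inf")}
            for k in old}

# Toy example: a lesion doubles in volume and brightens by 10 gray levels
ct1 = np.full((4, 4, 4), 100.0)
m1 = np.zeros((4, 4, 4), bool); m1[0:2, 0:2, 0:2] = True     # 8 voxels
ct2 = np.full((4, 4, 4), 110.0)
m2 = np.zeros((4, 4, 4), bool); m2[0:2, 0:4, 0:2] = True     # 16 voxels
report = progression(lesion_params(ct1, m1), lesion_params(ct2, m2))
```

The resulting dictionary plays the role of one row of the lesion progression report in Table 1.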
Table 1: lesion progression report (provided as an image in the original; values not reproduced here).
The lesion segmentation results of the 20 tuberculosis patients from step (5) were compared with lesion segmentations manually delineated by a physician, and the Dice coefficient over the whole lesion was calculated for each patient. The statistical results are shown in Table 2; the average Dice coefficient is 0.815 ± 0.071.
Table 2: per-patient Dice coefficients of the automatic versus manual lesion segmentations (provided as an image in the original; mean 0.815 ± 0.071).
EXAMPLE III
The invention provides an automatic assessment apparatus 100 based on lesion progression on medical images, which addresses the clinical problem of automatically assessing the progression of a patient's disease: it automatically identifies, matches, and qualitatively and quantitatively assesses the lesions on multiple images of the same patient taken at different times, providing the clinic with an automatic qualitative and quantitative lesion-progression assessment device. It should be noted that the implementation principle and implementation of the automatic assessment apparatus 100 are consistent with the automatic assessment method based on lesion progression on medical images described above, so the details are not repeated below.
As shown in fig. 8, the automatic assessment apparatus 100 based on lesion progression on a medical image includes:
a lesion identification model obtaining module 10, configured to perform image segmentation on the lesion-annotated images and to construct and train a lesion identification model based on a convolutional neural network;
a first target lesion obtaining module 20, configured to perform lesion identification and segmentation on the first captured image through the trained lesion identification model to obtain a first target lesion;
a second target lesion obtaining module 30, configured to perform lesion identification and segmentation, through the trained lesion identification model, on an image of the same lesion re-captured after a predetermined time interval to obtain a second target lesion;
a lesion correspondence determining module 40, configured to perform image registration according to the first target lesion and the second target lesion and to determine the lesion correspondence between the lesions on the two captured images; and
a lesion assessment report generating module 50, configured to calculate, based on the lesion correspondence, the volume change and change rate of each lesion and the changes in its average, maximum and minimum gray levels, and to generate a lesion progression assessment report.
EXAMPLE IV
The invention provides a terminal comprising a memory, a processor and a computer program stored in the memory, wherein the computer program, when executed by the processor, implements the steps of the automatic assessment method based on lesion progress on medical images according to any one of the above embodiments.
EXAMPLE V
The present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the automatic assessment method based on lesion progress on medical images according to any one of the above embodiments.
EXAMPLE VI
The present invention provides a computer program product comprising a computer program or instructions which, when executed by a processor, implement the steps of the automatic assessment method based on lesion progress on medical images according to any one of the above embodiments.
In summary, the automatic assessment method based on lesion progress on medical images and the related products provided by the invention automatically identify and delineate lesions through the lesion identification model; fully automatically register the medical images captured at different time nodes and establish the relationships among lesions, including lesion merging and splitting; fully automatically measure image parameters of each lesion, such as gray level and volume; qualitatively and quantitatively evaluate the development trend of the lesions over time; and generate a lesion progression report. Lesions on multiple images of the same patient taken at different times are thus identified, matched and assessed qualitatively and quantitatively without manual intervention, avoiding labor- and time-consuming work and providing the clinic with an automatic qualitative and quantitative method for assessing lesion progression.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed system or apparatus/terminal device and method can be implemented in other ways. For example, the above-described system or apparatus/terminal device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The invention is not limited solely to that described in the specification and embodiments, and additional advantages and modifications will readily occur to those skilled in the art, so that the invention is not limited to the specific details, representative apparatus, and illustrative examples shown and described herein, without departing from the spirit and scope of the general concept as defined by the appended claims and their equivalents.

Claims (10)

1. An automatic assessment method based on lesion progress on medical images, characterized by comprising the following steps:
step S100: performing image segmentation on the lesion-annotated images, and constructing and training a lesion identification model based on a convolutional neural network;
step S200: performing lesion identification and segmentation on the first captured image through the trained lesion identification model to obtain a first target lesion;
step S300: performing lesion identification and segmentation, through the trained lesion identification model, on an image of the same lesion re-captured after a predetermined time interval to obtain a second target lesion;
step S400: performing image registration according to the first target lesion and the second target lesion, and determining the lesion correspondence between the lesions on the two captured images;
step S500: based on the lesion correspondence, calculating the volume change and change rate of each lesion and the changes in its average, maximum and minimum gray levels, and generating a lesion progression assessment report.
2. The automatic assessment method based on lesion progress on medical images according to claim 1, wherein the step S100 comprises the following steps:
randomly dividing the case image data with completed lesion annotation into a training set and a validation set according to a preset ratio, wherein each item of case image data comprises a medical image and a corresponding binary mask image in which the lesions are annotated;
segmenting each medical image into three-dimensional or two-dimensional lesion image blocks of a preset size, and segmenting the binary mask image into image blocks of the same size in the same manner;
constructing a lesion identification model based on a convolutional neural network, and iteratively training it with the lesion image blocks as input and the corresponding binary mask image blocks as output, validating the lesion identification model on the validation set after each training pass over the training set, until a preset iteration target is reached.
3. The method for automatically assessing lesion progress on a medical image according to claim 2, wherein, when the lesion identification model is trained, a Dice loss function is used as the minimization objective and an Adam optimizer is used to train the model; after multiple iterations, training stops when the Dice loss on the validation set no longer decreases for a preset number of consecutive rounds, and the model with the minimum Dice loss on the validation set among the trained rounds is saved as the final lesion identification model.
4. The automatic assessment method based on lesion progress on medical images according to claim 2, wherein the step S200 comprises the following steps:
segmenting the acquired first-captured original medical image into segmentation image blocks of the same size according to the specification of the lesion image blocks;
sequentially inputting the segmentation image blocks into the trained lesion identification model to obtain lesion probability image blocks with probability values between 0 and 1;
merging the lesion probability image blocks of the segmentation image blocks, in segmentation order, into a whole lesion probability image matching the original medical image;
performing three-dimensional median filtering on the whole lesion probability image with a median filtering operator to obtain a filtered lesion probability image, and converting the filtered lesion probability image into a binary lesion mask image using 0.5 as the threshold;
performing three-dimensional connected-component analysis on the binary lesion mask image, lesion voxels connected in three-dimensional space being regarded as the same lesion;
calculating the volume of each lesion from the number of pixels in the lesion region and the image resolution, and sorting all lesions by volume to obtain the first target lesions, each lesion being numbered accordingly as L_1^(1), L_2^(1), …, L_m^(1).
5. The automatic assessment method based on lesion progress on medical images according to claim 4, wherein the step S300 comprises the following steps:
obtaining the second target lesions of the re-captured original medical image by the same procedure as in step S200, each lesion being numbered accordingly as L_1^(2), L_2^(2), …, L_n^(2).
6. The automatic assessment method based on lesion progress on medical images according to claim 5, wherein the step S400 comprises the following steps:
rigidly registering the two captured original medical images to obtain a spatial mapping matrix M between them, wherein M covers translation, scaling and rotation of the image, and the mapping point P' of an arbitrary point P in floating-image space into reference-image space is calculated by the preset formula P' = P · M;
according to the obtained spatial mapping, mapping the binary lesion mask image of the re-captured original medical image into the space of the first-captured original medical image to obtain the mapped lesions L'_1^(2), …, L'_n^(2);
calculating, pair by pair, the degree of overlap between the second target lesions L'_j^(2) and the lesions L_i^(1) through the spatial mapping between them, and obtaining the matching relation L_j^(2) ↔ L_i^(1), wherein the overlap between L'_j^(2) and L_i^(1) is higher than the overlap between L'_j^(2) and any other lesion in the first-captured original medical image; and if a lesion in the re-captured original medical image does not overlap any lesion in the first-captured original medical image, it is regarded as a newly added lesion.
7. The automatic assessment method based on lesion progress on medical images according to claim 6, wherein the step S500 comprises the following steps:
calculating the image parameters of each and all of the first target lesions L_1^(1), …, L_m^(1), the image parameters comprising volume, average gray level, maximum gray level and minimum gray level;
calculating, one by one, the image parameters of each and all of the lesions L_1^(2), …, L_n^(2), the image parameters comprising volume, average gray level, maximum gray level and minimum gray level;
according to the matching relation L_j^(2) ↔ L_i^(1) of the lesions, calculating the change value and change rate of the image parameters of each lesion on the first-captured original medical image, and generating a lesion progression report.
8. An automatic assessment apparatus based on lesion progress on medical images, comprising:
a lesion identification model obtaining module, configured to perform image segmentation on the lesion-annotated images and to construct and train a lesion identification model based on a convolutional neural network;
a first target lesion obtaining module, configured to perform lesion identification and segmentation on the first captured image through the trained lesion identification model to obtain a first target lesion;
a second target lesion obtaining module, configured to perform lesion identification and segmentation, through the trained lesion identification model, on an image of the same lesion re-captured after a predetermined time interval to obtain a second target lesion;
a lesion correspondence determining module, configured to perform image registration according to the first target lesion and the second target lesion and to determine the lesion correspondence between the lesions on the two captured images; and
a lesion assessment report generating module, configured to calculate, based on the lesion correspondence, the volume change and change rate of each lesion and the changes in its average, maximum and minimum gray levels, and to generate a lesion progression assessment report.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, carries out the steps of the method for automatic assessment based on lesion progress on medical images according to any one of claims 1 to 7.
10. A computer program product comprising a computer program or instructions which, when executed by a processor, implement the steps of the method for automatic assessment based on lesion progress on medical images according to any one of claims 1 to 7.
CN202210006013.4A 2022-01-05 2022-01-05 Automatic assessment method based on lesion progress on medical image and related product Pending CN114334097A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210006013.4A CN114334097A (en) 2022-01-05 2022-01-05 Automatic assessment method based on lesion progress on medical image and related product

Publications (1)

Publication Number Publication Date
CN114334097A true CN114334097A (en) 2022-04-12

Family

ID=81025642

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210006013.4A Pending CN114334097A (en) 2022-01-05 2022-01-05 Automatic assessment method based on lesion progress on medical image and related product

Country Status (1)

Country Link
CN (1) CN114334097A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116092642A (en) * 2023-03-07 2023-05-09 福建智康云医疗科技有限公司 Medical image quality control method and system
CN116092642B (en) * 2023-03-07 2023-06-20 福建智康云医疗科技有限公司 Medical image quality control method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination