CN116630239A - Image analysis method, device and computer equipment - Google Patents

Image analysis method, device and computer equipment

Info

Publication number
CN116630239A
CN116630239A
Authority
CN
China
Prior art keywords
image
historical
current
region
analysis result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310478225.7A
Other languages
Chinese (zh)
Inventor
邵影
高耀宗
周翔
詹翊强
Current Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd filed Critical Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202310478225.7A priority Critical patent/CN116630239A/en
Publication of CN116630239A publication Critical patent/CN116630239A/en
Priority to PCT/CN2023/129173 priority patent/WO2024094088A1/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10104: Positron emission tomography [PET]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20092: Interactive image processing based on input by user
    • G06T 2207/20104: Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application relates to an image analysis method, an image analysis device and computer equipment. The method comprises: acquiring a historical PET image, a historical CT image, a current PET image and a current CT image of a subject; determining a historical analysis result from the historical PET image, the historical CT image and a baseline model; and determining a current analysis result from the current PET image, the current CT image, the historical analysis result and a follow-up model. The historical analysis result comprises a first region of interest in the historical CT image. The current analysis result comprises a third region of interest in the current CT image and/or a fourth region of interest in the current CT image corresponding to the first region of interest. The third region of interest corresponds to a fifth region of interest in the current PET image whose first parameter is greater than or equal to a first preset threshold; the fourth region of interest corresponds to the first region of interest and to a sixth region of interest in the current PET image whose first parameter is less than the first preset threshold.

Description

Image analysis method, device and computer equipment
Technical Field
The present application relates to the field of medical technology, and in particular, to an image analysis method, apparatus, and computer device.
Background
The physician typically observes metabolic activity in a PET (positron emission tomography) image of a subject, and then determines the region of interest in the corresponding CT (computed tomography) image from the metabolic activity in the PET image. For example, the physician observes standardized uptake values (SUV) in the PET image and maps the regions where SUV is greater than or equal to 2.5 onto the corresponding CT image to determine the regions of interest in the CT image.
However, current image analysis methods suffer from a limited range of application.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image analysis method, apparatus, and computer device that can expand the application range.
In a first aspect, the present application provides an image analysis method. The method comprises the following steps:
acquiring a historical PET image, a historical CT image, a current PET image and a current CT image of a subject;
determining a historical analysis result according to the historical PET image, the historical CT image and a baseline model; the historical analysis result comprises a first region of interest in the historical CT image, the first region of interest corresponds to a second region of interest in the historical PET image, and a first parameter of the second region of interest is greater than or equal to a first preset threshold;
determining a current analysis result according to the current PET image, the current CT image, the historical analysis result and a follow-up model; the current analysis result comprises a third region of interest in the current CT image and/or comprises a fourth region of interest in the current CT image corresponding to the first region of interest; the third region of interest corresponds to a fifth region of interest in the current PET image, the first parameter of the fifth region of interest being greater than or equal to the first preset threshold; the fourth region of interest corresponds to the first region of interest, and the fourth region of interest corresponds to a sixth region of interest in the current PET image, the first parameter of the sixth region of interest being less than the first preset threshold.
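The two-stage flow above can be sketched end to end as follows. The baseline and follow-up "models" here are hypothetical callables standing in for the trained networks; the threshold-based toy stand-ins (`toy_baseline`, `toy_followup`, the SUV value 2.5) are illustrative assumptions, not part of the disclosure.

```python
# Sketch of the two-stage analysis pipeline: a baseline model for the
# historical examination, then a follow-up model that also receives the
# historical analysis result. Images are 1-D voxel lists for simplicity.

def analyze_baseline(hist_pet, hist_ct, baseline_model):
    """Return the historical analysis result (a mask over the historical CT)."""
    return baseline_model(hist_pet, hist_ct)

def analyze_followup(cur_pet, cur_ct, hist_result, followup_model):
    """Return the current analysis result, conditioned on the historical one."""
    return followup_model(cur_pet, cur_ct, hist_result)

def image_analysis(hist_pet, hist_ct, cur_pet, cur_ct,
                   baseline_model, followup_model):
    hist_result = analyze_baseline(hist_pet, hist_ct, baseline_model)
    cur_result = analyze_followup(cur_pet, cur_ct, hist_result, followup_model)
    return hist_result, cur_result

# Toy stand-in models: the baseline thresholds the PET at SUV >= 2.5;
# the follow-up keeps voxels that are currently hot OR were hot at baseline,
# which is how fourth regions of interest can survive a drop below threshold.
def toy_baseline(pet, ct):
    return [v >= 2.5 for v in pet]

def toy_followup(pet, ct, hist_result):
    return [v >= 2.5 or h for v, h in zip(pet, hist_result)]
```

With these stand-ins, a voxel that was hot historically but has cooled below 2.5 still appears in the current analysis result, which is the behaviour the follow-up model is trained to provide.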
In one embodiment, the determining the historical analysis result according to the historical PET image, the historical CT image and the baseline model includes:
obtaining registered historical PET images according to the historical PET images and the historical CT images;
and determining the historical analysis result according to the historical CT image, the registered historical PET image and the baseline model.
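The disclosure does not fix a registration method. As a minimal illustration of bringing PET data onto the CT grid, the 1-D sketch below resamples a PET profile onto another grid by nearest-neighbour lookup; a real system would apply rigid or deformable registration of the 3-D volumes instead.

```python
# Hypothetical 1-D resampling sketch: map PET samples taken at one voxel
# spacing onto a CT grid with a different spacing, using nearest-neighbour
# lookup. This is only the trivial core of registration, assumed aligned
# origins and no rotation.

def resample_to_grid(values, src_spacing, dst_len, dst_spacing):
    out = []
    for i in range(dst_len):
        # position of the i-th destination voxel, expressed in source indices
        src_idx = round(i * dst_spacing / src_spacing)
        src_idx = min(src_idx, len(values) - 1)  # clamp at the volume edge
        out.append(values[src_idx])
    return out
```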
In one embodiment, the determining the historical analysis result based on the historical CT image, the registered historical PET image, and the baseline model includes:
preprocessing the historical CT image to obtain a preprocessed historical CT image;
preprocessing the registered historical PET image to obtain a preprocessed historical PET image;
and inputting the preprocessed historical CT image and the preprocessed historical PET image into the baseline model to obtain the historical analysis result.
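The preprocessing operations are left open by the disclosure; a common, minimal assumption is per-image min-max intensity normalization, sketched below.

```python
# Hypothetical preprocessing step: the patent does not fix the operations,
# so this sketch assumes simple min-max normalization of each image to [0, 1]
# before it is fed to the baseline model.

def preprocess(image):
    lo, hi = min(image), max(image)
    if hi == lo:                      # constant image: map everything to 0
        return [0.0 for _ in image]
    return [(v - lo) / (hi - lo) for v in image]
```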
In one embodiment, the method further comprises:
acquiring a historical PET image sample and a historical CT image sample;
training an initial baseline model according to the historical PET image sample, the historical CT image sample and the corresponding first segmentation gold standard to obtain the baseline model;
the first segmentation gold standard comprises a first area determined in the historical CT image sample, the first area corresponds to a second area in the historical PET image sample, and the first parameter of the second area is greater than or equal to the first preset threshold.
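Under the assumption that the PET and CT samples are already registered voxel-wise, the first segmentation gold standard described above reduces to thresholding the PET SUV values, which can be sketched as:

```python
# Sketch of deriving the first segmentation gold standard: a CT-space mask
# of the voxels whose (registered) PET SUV meets the first preset threshold.
# The voxel-list representation and the default of 2.5 are illustrative.

def gold_standard_mask(pet_suv, threshold=2.5):
    return [1 if v >= threshold else 0 for v in pet_suv]
```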
In one embodiment, the determining the current analysis result according to the current PET image, the current CT image, the historical analysis result and the follow-up model includes:
obtaining a registered current PET image according to the current PET image and the current CT image;
obtaining registered historical analysis results according to the current CT image and the historical analysis results;
and determining the current analysis result according to the current CT image, the registered current PET image, the registered historical analysis result and the follow-up model.
In one embodiment, the determining the current analysis result according to the current CT image, the registered current PET image, the registered historical analysis result, and the follow-up model includes:
preprocessing the current CT image, the registered current PET image and the registered historical analysis result respectively to obtain a preprocessed current CT image, a preprocessed current PET image and a preprocessed historical analysis result;
and inputting the preprocessed current CT image, the preprocessed current PET image and the preprocessed historical analysis result into the follow-up model to obtain the current analysis result.
In one embodiment, the method further comprises:
acquiring a historical PET image sample, a historical CT image sample, a current PET image sample and a current CT image sample;
determining a historical CT segmentation result sample according to the historical PET image sample, the historical CT image sample and the baseline model; the historical CT segmentation result sample comprises a first area in the historical CT image sample, the first area corresponds to a second area in the historical PET image sample, and a first parameter of the second area is greater than or equal to a first preset threshold;
training an initial follow-up model according to the historical CT segmentation result sample, the current PET image sample, the current CT image sample and a second segmentation gold standard to obtain the follow-up model; the second segmentation gold standard comprises a third region in the current CT image sample and/or comprises a fourth region in the current CT image sample corresponding to the first region; the third region corresponds to a fifth region in the current PET image sample, and the first parameter of the fifth region is greater than or equal to the first preset threshold; the fourth region corresponds to the first region, and the fourth region corresponds to a sixth region in the current PET image sample, the first parameter of the sixth region being less than the first preset threshold.
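The defining property of the second segmentation gold standard (third regions that are currently above threshold, plus fourth regions that were in the historical first region but are now below threshold) can be sketched as follows; the voxel-wise list representation and the threshold of 2.5 are illustrative assumptions.

```python
# Sketch of constructing the second segmentation gold standard for training
# the follow-up model. Assumes the current PET SUVs and the historical first
# region mask are already registered to the current CT grid.

def second_gold_standard(cur_pet_suv, first_region_mask, threshold=2.5):
    # third regions: currently hot voxels
    third = [v >= threshold for v in cur_pet_suv]
    # fourth regions: historically hot voxels that have cooled below threshold
    fourth = [bool(h) and v < threshold
              for v, h in zip(cur_pet_suv, first_region_mask)]
    return [int(t or f) for t, f in zip(third, fourth)]
```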
In one embodiment, determining the current analysis result according to the current CT image, the registered current PET image, the registered historical analysis result, and the follow-up model includes:
obtaining a historical PET image registered to the current CT image according to the historical PET image and the current CT image;
and obtaining the current analysis result according to the current CT image, the registered current PET image, the registered historical analysis result, the registered historical PET image to the current CT image and the follow-up model.
In one embodiment, the method further comprises:
and determining a first parameter analysis result according to the historical analysis result and the current analysis result.
In one embodiment, the method further comprises:
mapping the historical analysis result to the historical PET image to obtain a first target area in the historical PET image;
mapping the current analysis result to the current PET image to obtain a second target area in the current PET image;
and determining a second parameter analysis result according to the first target area and the second target area.
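The disclosure leaves the second parameter open. A hedged sketch, assuming the statistic of interest is the maximum SUV inside each target area and that the analysis results have already been mapped to the respective PET images:

```python
# Sketch of the second-parameter analysis: compare a statistic (here, max
# SUV, an illustrative choice) between the first target area in the
# historical PET image and the second target area in the current PET image.

def region_max_suv(pet_suv, mask):
    vals = [v for v, m in zip(pet_suv, mask) if m]
    return max(vals) if vals else 0.0

def second_parameter_analysis(hist_pet, hist_mask, cur_pet, cur_mask):
    before = region_max_suv(hist_pet, hist_mask)
    after = region_max_suv(cur_pet, cur_mask)
    return {"before": before, "after": after, "change": after - before}
```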
In a second aspect, the application further provides an image analysis device. The device comprises:
the acquisition module is used for acquiring a historical PET image, a historical CT image, a current PET image and a current CT image of a subject;
the first determining module is used for determining a historical analysis result according to the historical PET image, the historical CT image and the baseline model; the historical analysis result comprises a first region of interest in the historical CT image, the first region of interest corresponds to a second region of interest in the historical PET image, and a first parameter of the second region of interest is greater than or equal to a first preset threshold;
the second determining module is used for determining a current analysis result according to the current PET image, the current CT image, the historical analysis result and the follow-up model; the current analysis result comprises a third region of interest in the current CT image and/or comprises a fourth region of interest in the current CT image corresponding to the first region of interest; the third region of interest corresponds to a fifth region of interest in the current PET image, the first parameter of the fifth region of interest being greater than or equal to the first preset threshold; the fourth region of interest corresponds to the first region of interest, and the fourth region of interest corresponds to a sixth region of interest in the current PET image, the first parameter of the sixth region of interest being less than the first preset threshold.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of any of the methods described above when the processor executes the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of any of the methods described above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprising a computer program which, when executed by a processor, implements the steps of any of the methods described above.
With the image analysis method, apparatus and computer device described above, the computer device acquires the historical PET image, historical CT image, current PET image and current CT image of the subject, determines the historical analysis result according to the historical PET image, the historical CT image and the baseline model, and then determines the current analysis result according to the current PET image, the current CT image, the historical analysis result and the follow-up model. Because the historical analysis result comprises a first region of interest in the historical CT image, the first region of interest corresponds to a second region of interest in the historical PET image, and the first parameter of the second region of interest is greater than or equal to the first preset threshold, the computer device can first determine, from the historical PET image and the historical CT image, the first region of interest of the subject in the historical examination, for example the region on the historical CT image whose corresponding SUV is greater than or equal to 2.5.
Further, since the current analysis result includes a third region of interest in the current CT image and/or a fourth region of interest in the current CT image corresponding to the first region of interest, where the third region of interest corresponds to a fifth region of interest in the current PET image whose first parameter is greater than or equal to the first preset threshold, and the fourth region of interest corresponds both to the first region of interest and to a sixth region of interest in the current PET image whose first parameter is less than the first preset threshold, the computer device can determine the fourth region of interest even if the first parameter of the first region of interest has fallen below the first preset threshold in the current examination. For example, if a region that had SUV ≥ 2.5 on the CT image in the historical examination now has SUV less than 2.5 in the current examination, the computer device can still determine the corresponding fourth region of interest with SUV < 2.5. The method provided by this embodiment is therefore suitable not only for the case where the first parameter is greater than or equal to the first preset threshold in every examination, but also for the case where a region meets the threshold in the first examination and falls below it in the second examination, thereby expanding the range of application.
Drawings
FIG. 1 is a diagram of an application environment of an image analysis method according to an embodiment of the present application;
FIG. 2 is a flow chart of an image analysis method according to an embodiment of the application;
FIG. 3 is a schematic illustration of a baseline model usage or training process;
FIG. 4 is a schematic illustration of a use process or training process of a follow-up model;
FIG. 5 is a flowchart illustrating a method for determining a historical analysis result according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating another embodiment of determining a historical analysis result;
FIG. 7 is a schematic flow chart of obtaining a baseline model according to an embodiment of the present application;
FIG. 8 is a flow chart of determining a current analysis result according to an embodiment of the present application;
FIG. 9 is a flowchart illustrating another embodiment of determining a current analysis result;
FIG. 10 is a flow chart of obtaining a current analysis result according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a sample in an embodiment of the application;
FIG. 12 is a flowchart illustrating another embodiment of determining a current analysis result;
FIG. 13 is a flowchart illustrating a second parameter analysis and result determination according to an embodiment of the present application;
FIG. 14 is a schematic overall flow chart of an image analysis method according to an embodiment of the application;
FIG. 15 is a flowchart illustrating an image analysis method according to an embodiment of the present application;
fig. 16 is a block diagram illustrating an image analysis apparatus according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Follow-up, also called review, refers to the process by which a hospital periodically tracks changes in the condition of a patient who was previously seen there. In follow-up analysis combining PET and CT images, the doctor needs to observe not only the regions of interest obtained by mapping the areas with SUV of 2.5 or more in the PET image onto the CT image, but also how the metabolism of the regions of interest from the last examination has changed in the present examination. For example, invasive lymphomas generally have high metabolism, and it is likely that a region of interest whose SUV was 2.5 or more at the last examination will have an SUV of less than 2.5 by the time of this review.
That is, even if a region whose SUV was 2.5 or more in the PET image at the last examination has an SUV below 2.5 in the present examination, as long as the residual metabolism of that region is higher than the surrounding background metabolism, the doctor needs to combine the corresponding CT image, describe the size, SUV value and other properties in the report, and compare them against the result of the last examination.
However, in current image analysis techniques each examination is analyzed independently; for example, the same analysis algorithm is run on the PET and CT images from the previous examination and on those from the present examination. Based on the re-examined PET and CT images, only the areas whose SUV is still 2.5 or more in the corresponding PET image can be determined on the CT image. In other words, if a region had SUV of 2.5 or more at the previous examination but its SUV has dropped below 2.5 at the present examination, the corresponding region of interest cannot be determined in the current CT image, so the present analysis methods have the problem of a limited range of application.
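The limitation described above can be made concrete with a toy example: a fixed, per-examination SUV threshold finds nothing in the current examination even when a previously hot region still matters clinically. The voxel values below are invented for illustration.

```python
# Toy illustration of independent per-examination analysis with a fixed
# SUV >= 2.5 threshold, the approach criticized in the text above.

def independent_analysis(pet_suv, threshold=2.5):
    return [v >= threshold for v in pet_suv]

hist_pet = [3.2, 0.8]   # voxel 0 was hot at the last examination
cur_pet = [1.9, 0.7]    # ...but has dropped below 2.5 at this review

hist_mask = independent_analysis(hist_pet)
cur_mask = independent_analysis(cur_pet)
# Independent analysis of the current examination finds no region at all,
# so the cooled-down voxel 0 can no longer be located on the current CT.
```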
Based on this, it is necessary to provide an image analysis method, which will be described below.
Fig. 1 is an application environment diagram of an image analysis method in an embodiment of the present application. In an embodiment of the present application, a computer device is provided. The computer device may be a terminal, and its internal structure may be as shown in fig. 1. The computer device includes a processor, a memory, a communication interface, a display screen and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode can be realized through Wi-Fi, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by the processor to implement an image analysis method. The display screen of the computer device may be a liquid crystal display or an electronic ink display. The input device of the computer device may be a touch layer covering the display screen, keys, a trackball or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad or mouse.
It will be appreciated by those skilled in the art that the architecture shown in fig. 1 is merely a block diagram of some of the architecture relevant to the present inventive arrangements and is not limiting as to the computer device to which the present inventive arrangements may be implemented, as a particular computer device may include more or less components than those shown, or may be combined with some components, or may have a different arrangement of components.
This embodiment is illustrated with the method applied to a terminal; it is understood that the method can also be applied to a server, or to a system comprising a terminal and a server and implemented through interaction between the two. The terminal may be, but is not limited to, a personal computer, notebook computer, smartphone or tablet computer. The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
Fig. 2 is a flow chart of an image analysis method according to an embodiment of the present application, which can be applied to the computer device shown in fig. 1, and in one embodiment, as shown in fig. 2, the method includes the following steps:
s201, acquiring a historical PET image, a historical CT image, a current PET image and a current CT image of a subject.
The PET image is an image carrying functional information; for example, a doctor can determine the SUV of a target area from the PET image and thereby assess the metabolic condition of that area. When a PET image is acquired, the PET (positron emission tomography) equipment can determine the corresponding SUV values from parameters such as the dose of tracer administered to the subject and the subject's height and weight, and render them on the PET image: the higher the SUV value, the darker the corresponding area of the PET image.
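For reference, the body-weight-normalized SUV underlying these values is the tissue activity concentration divided by the injected dose per unit body weight. The sketch below omits decay correction of the injected dose and assumes consistent units; it illustrates the standard formula rather than any computation specific to this disclosure.

```python
# Body-weight-normalized SUV:
#   SUV = C_tissue [kBq/mL] / (injected dose [kBq] / body weight [g])
# Decay correction is omitted for simplicity.

def suv_bw(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    dose_kbq = injected_dose_mbq * 1000.0   # MBq -> kBq
    weight_g = body_weight_kg * 1000.0      # kg -> g
    return activity_kbq_per_ml / (dose_kbq / weight_g)
```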
However, since anatomical structures are not clearly visible in a PET image, the region whose first parameter is greater than or equal to the first preset threshold cannot be accurately analyzed from the PET image alone. A CT (computed tomography) image, by contrast, carries fine anatomical information, so image analysis can be performed by combining the PET image with the CT image.
In this embodiment, the computer device first acquires a historical PET image, a historical CT image, a current PET image, and a current CT image of the subject.
Here, "current" and "historical" denote two time points in chronological order; this embodiment does not limit the interval between them. The current PET image and current CT image are obtained from the examination at the subject's present visit; the historical PET image and historical CT image are obtained from an examination before the present follow-up. For example, the current PET image and current CT image may be acquired on the day of the visit, while the historical PET image and historical CT image may have been acquired one month earlier.
It is understood that the subject's historical PET image, historical CT image, current PET image and current CT image all cover at least one common anatomical location. To ensure the analysis effect, the historical PET image and the historical CT image may be acquired at the same historical time, and the current PET image and the current CT image at the same current time.
Optionally, the computer device may acquire a historical PET image, a historical CT image, a current PET image, and a current CT image acquired by the medical device, and may also receive a historical PET image, a historical CT image, a current PET image, and a current CT image sent by other electronic devices or the medical device. The computer device may also store the images in advance for direct recall when the image analysis method is to be executed.
S202, determining a historical analysis result according to a historical PET image, a historical CT image and a baseline model; the historical analysis result comprises a first region of interest in the historical CT image, the first region of interest corresponds to a second region of interest in the historical PET image, and a first parameter of the second region of interest is greater than or equal to a first preset threshold.
In this embodiment, after the computer device acquires the historical PET image, the historical CT image, the current PET image, and the current CT image of the subject, the historical analysis result is determined according to the historical PET image, the historical CT image, and the baseline model.
In the following, the first parameter is taken to be the SUV and the first preset threshold to be 2.5. It should be understood that the first parameter may be any other parameter indicating the metabolic condition of the body, and the first preset threshold may be set to other values; this embodiment is not limited thereto.
That is, the computer device determines a second region of interest in the historical PET image whose SUV is greater than or equal to 2.5, and then maps the second region of interest onto the historical CT image to obtain the first region of interest on the historical CT image.
FIG. 3 is a schematic illustration of a baseline model use or training process. As shown in FIG. 3, FIG. 3 (a) may represent a historical CT image, FIG. 3 (b) a historical PET image, and FIG. 3 (c) the historical analysis result determined from FIGS. 3 (a) and 3 (b). The darker developed portion on the left of the historical PET image in FIG. 3 (b) has SUV ≥ 2.5 and is the second region of interest in the historical PET image. Since the boundary and structure of this region cannot be seen clearly in FIG. 3 (b), the computer device maps this second region of interest with SUV ≥ 2.5 in FIG. 3 (b) onto FIG. 3 (a), thereby determining the white first region of interest shown by the white circular dashed box in FIG. 3 (c).
Alternatively, the computer device may input the historical PET image and the historical CT image directly into the baseline model, which outputs the historical analysis result. The computer device may also first apply optimization processing, such as registration, normalization or compression, to the historical PET image and the historical CT image before inputting them into the baseline model.
It will be appreciated that the computer device first needs to train the baseline model before using it. The baseline model may be a target classification model, which may be a convolutional neural network (Convolutional Neural Network, CNN), a recurrent neural network (Recurrent Neural Network, RNN), or another deep learning network, machine learning network, or the like.
One way to obtain the baseline model is as follows: the computer device trains an initial baseline model according to historical PET image samples, historical CT image samples and the corresponding sample labels, and stops training when a training stop condition is met, thereby determining the baseline model. The training stop condition may be that the difference between the analysis result output by the baseline model and the actual result in the historical PET image samples and historical CT image samples is smaller than a preset threshold. The training process of the baseline model will be described in detail below and is not expanded here.
S203, determining a current analysis result according to the current PET image, the current CT image, the historical analysis result and the follow-up model; the current analysis result comprises a third region of interest in the current CT image and/or comprises a fourth region of interest in the current CT image corresponding to the first region of interest; the third region of interest corresponds to a fifth region of interest in the current PET image, and a first parameter of the fifth region of interest is greater than or equal to a first preset threshold; the fourth region of interest corresponds to the first region of interest, and the fourth region of interest corresponds to a sixth region of interest in the current PET image, the first parameter of the sixth region of interest being less than a first preset threshold.
In this embodiment, after the computer device determines the historical analysis result, the current analysis result may be determined according to the current PET image, the current CT image, the historical analysis result, and the follow-up model.
Continuing with the SUV example, there are three possibilities for the current analysis result. In the first possibility, the current analysis result includes a third region of interest on the current CT image, which corresponds to a fifth region of interest with SUV greater than or equal to 2.5 in the current PET image; that is, at the current follow-up the subject's image still contains a region with SUV ≥ 2.5 that requires attention.
The second possibility is that the current analysis result includes a fourth region of interest on the current CT image corresponding to the first region of interest. The fourth region of interest corresponds to the first region of interest with SUV ≥ 2.5 in the historical PET image, and also corresponds to a sixth region of interest with SUV < 2.5 in the current PET image. That is, if the subject had a region with SUV ≥ 2.5 at the examination prior to the current follow-up, but the SUV of that region is already less than 2.5 at the current follow-up, attention still needs to be paid to this region where the SUV has dropped below 2.5.
A third possibility is that the current analysis result includes both the third region of interest and the fourth region of interest. For example, the subject has a site with SUV ≥ 2.5 in the current follow-up image, and also has a site that had SUV ≥ 2.5 at the examination prior to the current follow-up but whose SUV is less than 2.5 in the current follow-up image.
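The three possibilities above reduce, per lesion site, to comparing the historical and current SUV against the threshold. The following is a hedged sketch; the function name and the scalar-per-site simplification are illustrative and not taken from the patent.

```python
def classify_region(historical_suv, current_suv, threshold=2.5):
    """Which kind of region of interest a lesion site yields at follow-up."""
    if current_suv >= threshold:
        return "third"    # still at or above threshold: needs continued attention
    if historical_suv >= threshold:
        return "fourth"   # has regressed below threshold, but is still tracked
    return None           # never exceeded the threshold: not reported

classify_region(3.0, 2.8)  # "third"
classify_region(3.0, 1.7)  # "fourth"
classify_region(1.0, 1.2)  # None
```

A current analysis result containing sites of both the first and second kinds corresponds to the third possibility.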
Fig. 4 is a schematic diagram of the use or training process of the follow-up model. As shown in fig. 4, fig. 4 (a) may represent a historical CT image, fig. 4 (b) may represent a historical PET image, fig. 4 (c) may represent a current CT image, fig. 4 (d) may represent a current PET image, and fig. 4 (e) may represent the determined current analysis result.
It can be seen that the developed portion at the lower left of the subject's historical PET image is the second region of interest with SUV greater than or equal to 2.5 in the historical PET image, but the boundary and structure of this second region of interest cannot be seen clearly in fig. 4 (b). Therefore the historical analysis result determined by the computer device is the first region of interest shown by the white circular dotted box in fig. 4 (a), that is, the region obtained by mapping the second region of interest with SUV ≥ 2.5 in fig. 4 (b) onto fig. 4 (a).
Further, as can be seen in conjunction with fig. 4 (d), the development at the lower left in the subject's current PET image is already insignificant, with SUV = 1.7. That is, when the subject is re-examined after a period of time, the SUV value of the sixth region of interest in the current PET image, which corresponds to the second region of interest in the historical PET image, has become low. Likewise, the fourth region of interest in the current CT image corresponding to the first region of interest has also become smaller. Although the region has shrunk and its metabolism has decreased, the doctor still needs to pay attention to it.
Therefore, the computer device can still map the first region of interest, which had SUV ≥ 2.5 in the historical PET image at the previous examination, to the current examination to determine the sixth region of interest with SUV < 2.5 in the current PET image, and map the sixth region of interest onto the current CT image to obtain the fourth region of interest; that is, the computer device can determine the white fourth region of interest shown in the white circular dashed box of FIG. 4 (e).
Optionally, the computer device may input the current PET image, the current CT image and the historical analysis result directly to the follow-up model so that the follow-up model outputs the current analysis result, or it may perform optimization processing on the current PET image, the current CT image and the historical analysis result before inputting them into the follow-up model, so that the follow-up model outputs the current analysis result.
Likewise, before using the follow-up model, training is first required to obtain it. The follow-up model may be a target classification model, a convolutional neural network (Convolutional Neural Network, CNN), a recurrent neural network (Recurrent Neural Network, RNN), or another deep learning network, machine learning network, or the like.
The process of training the follow-up model is similar to that of obtaining the baseline model; it will be described in detail below and is not expanded here.
In this way, the physician can perform a comparative analysis based on the historical analysis results and the current analysis results. In some embodiments, the computer device may highlight the historical analysis results after registration with the current analysis results to facilitate the doctor's comparative analysis.
According to the image analysis method provided by this embodiment, the historical PET image, historical CT image, current PET image and current CT image of the subject are obtained, the historical analysis result is determined according to the historical PET image, the historical CT image and the baseline model, and the current analysis result is then determined according to the current PET image, the current CT image, the historical analysis result and the follow-up model. Because the historical analysis result includes a first region of interest in the historical CT image, the first region of interest corresponds to a second region of interest in the historical PET image, and a first parameter of the second region of interest is greater than or equal to a first preset threshold, the computer device can first determine, from the historical PET image and the historical CT image, the first region of interest of the subject in the historical examination, for example the region on the CT image for which SUV ≥ 2.5.
Further, since the current analysis result includes a third region of interest in the current CT image and/or a fourth region of interest in the current CT image corresponding to the first region of interest, where the third region of interest corresponds to a fifth region of interest in the current PET image whose first parameter is greater than or equal to the first preset threshold, and the fourth region of interest corresponds to the first region of interest as well as to a sixth region of interest in the current PET image whose first parameter is less than the first preset threshold, the computer device is able to determine the fourth region of interest even if the first parameter of the first region of interest has already fallen below the first preset threshold in the current examination. For example, if at the current examination the SUV of the first region of interest, which had SUV ≥ 2.5 on the CT image in the historical examination, is already less than 2.5, the computer device may still determine a fourth region of interest with SUV < 2.5. Thus, the method provided by this embodiment is applicable both to the case where the first parameter is greater than or equal to the first preset threshold in every examination, and to the case where the first parameter is greater than or equal to the first preset threshold in the first examination but has fallen below the first preset threshold in the second examination, thereby enlarging the range of application.
Fig. 5 is a schematic flow chart of determining a historical analysis result according to an embodiment of the present application, and referring to fig. 5, this embodiment relates to an alternative implementation of how to determine a historical analysis result. Based on the above embodiment, the step S202 of determining a historical analysis result according to the historical PET image, the historical CT image and the baseline model includes the following steps:
S501, obtaining a registered historical PET image according to the historical PET image and the historical CT image.
In some application scenarios, PET-CT dual-mode imaging may be used to obtain the historical PET image and the historical CT image. In PET-CT dual-mode imaging, the PET detector and the CT detector are mounted on the same gantry and share the same scanning bed, so that the subject can undergo a PET scan directly after the CT scan. Although PET-CT dual-mode imaging can obtain images of two different modalities, the two detectors still scan separately, so there is a difference in scan time, resulting in inconsistent spatial positions of the same tissue or organ in the two images.
Therefore, in this embodiment, in order to improve accuracy of image analysis, after the computer device obtains the historical PET image and the historical CT image, the registered historical PET image is obtained first according to the historical PET image and the historical CT image.
Registration refers to mapping images from different imaging modes into the same coordinate system through spatial transformation, so that the images of corresponding tissues and organs reach consistency of spatial position. Registration methods include, but are not limited to, rigid registration algorithms and elastic (deformable) registration algorithms.
Optionally, the computer device registers the historical PET image to the historical CT image with the historical CT image as a reference, so that the spatial resolution and the position of the registered historical PET image are consistent with those of the historical CT image.
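A full rigid or elastic registration is beyond a short example, but the resampling that makes the registered PET image's spatial resolution match the CT reference grid can be sketched with nearest-neighbour interpolation. This is an illustrative stand-in, assuming the spatial transformation itself has already been resolved.

```python
def resample_to_reference(moving, ref_shape):
    """Nearest-neighbour resampling of a 2-D image onto a reference grid,
    so that its resolution matches the reference (here: PET onto the CT grid)."""
    mh, mw = len(moving), len(moving[0])
    rh, rw = ref_shape
    return [[moving[i * mh // rh][j * mw // rw] for j in range(rw)]
            for i in range(rh)]

pet = [[1, 2],
       [3, 4]]
# Upsample the 2x2 PET grid to a 4x4 CT grid: each value is repeated in a 2x2 block.
pet_on_ct_grid = resample_to_reference(pet, (4, 4))
```

In practice a registration toolkit with proper interpolation and transform optimization would be used; the point here is only that, after this step, PET and CT voxels with the same indices refer to the same spatial position.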
S502, determining a historical analysis result according to the historical CT image, the registered historical PET image and the baseline model.
In this embodiment, after the registered historical PET image is obtained by the computer device, the historical analysis result is determined according to the historical CT image, the registered historical PET image and the baseline model.
Alternatively, the computer device may directly input the historical CT images and the registered historical PET images to the baseline model to output the historical analysis results from the baseline model. The computer device may also preprocess the historical CT images and the registered historical PET images before inputting them to the baseline model, so as to output the historical analysis result from the baseline model.
In this embodiment, the registered historical PET image is obtained according to the historical PET image and the historical CT image, which improves the matching degree between the registered historical PET image and the historical CT image, so that an accurate historical analysis result can be determined according to the historical CT image, the registered historical PET image and the baseline model.
Fig. 6 is a schematic flow chart of another embodiment of determining a historical analysis result according to the present application, and referring to fig. 6, this embodiment relates to an alternative implementation of how to determine a historical analysis result. On the basis of the above embodiment, the above determination history analysis result includes the following steps:
S601, preprocessing the historical CT image to obtain a preprocessed historical CT image.
In this embodiment, when the computer device needs to determine the historical analysis result according to the historical CT image, the registered historical PET image and the baseline model, the computer device performs preprocessing on the historical CT image to obtain a preprocessed historical CT image.
The preprocessing includes normalization processing, compression processing, standardization processing, and the like. This embodiment does not limit the specific manner of preprocessing, as long as it serves to reduce the amount of data.
Illustratively, the preprocessing includes normalization processing. The computer device sets a window level (wl) and a window width (ww), and truncates the gray values of the historical CT image using the window level and window width according to the following formula (1), limiting the value of each pixel point in the historical CT image to [wl − ww/2, wl + ww/2]:

I′ = min(max(I, wl − ww/2), wl + ww/2)   (1)
Further, the computer device normalizes the value of each pixel point of the truncated historical CT image to [−1, 1] according to the following formula (2):

I″ = (I′ − wl) / (ww/2)   (2)
Here ww = 400 and wl = 40; ww and wl can also be adjusted as needed.
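Taken together, the two normalization steps amount to a window/level clamp followed by a linear map to [−1, 1]. A minimal sketch with the quoted CT defaults (wl = 40, ww = 400); the function name is illustrative:

```python
def window_normalize(img, wl=40.0, ww=400.0):
    """Truncate gray values to [wl - ww/2, wl + ww/2], then rescale to [-1, 1]."""
    lo, hi = wl - ww / 2, wl + ww / 2
    return [[(min(max(v, lo), hi) - wl) / (ww / 2) for v in row] for row in img]

# A value at the window centre (40) maps to 0; anything at or above 240 saturates at 1.
window_normalize([[40.0, 240.0, 1000.0]])  # [[0.0, 1.0, 1.0]]
```

The same function applies to the PET image by passing its own window settings as arguments.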
S602, preprocessing the registered historical PET image to obtain a preprocessed historical PET image.
In this embodiment, the computer device may also preprocess the registered historical PET image to obtain a preprocessed historical PET image. The preprocessing process may refer to S601 and is not repeated here. When normalization processing is performed on the registered historical PET image, ww = 5 and wl = 2.5; these values can likewise be adjusted as needed.
S603, inputting the preprocessed historical CT image and the preprocessed historical PET image into a baseline model to obtain a historical analysis result.
In this embodiment, after obtaining the preprocessed historical CT image and the preprocessed historical PET image, the computer device may input the preprocessed historical CT image and the preprocessed historical PET image into the baseline model, so as to obtain a historical analysis result output by the baseline model.
In this embodiment, the historical CT image is preprocessed to obtain a preprocessed historical CT image, and the registered historical PET image is preprocessed to obtain a preprocessed historical PET image, which reduces the amount of data input to the baseline model and thereby improves the efficiency of determining the historical analysis result.
The process of training to obtain the baseline model will be described below. Fig. 7 is a schematic flow chart of obtaining a baseline model according to an embodiment of the present application, and referring to fig. 7, this embodiment relates to an alternative implementation manner of obtaining a baseline model. On the basis of the above embodiment, the image analysis method further includes the following steps:
S701, acquiring a historical PET image sample and a historical CT image sample.
In this embodiment, the computer device acquires historical PET image samples and historical CT image samples during the stage of training to obtain the baseline model. It will be appreciated that to ensure the accuracy of the baseline model, the historical PET image samples and the historical CT image samples should include PET images and CT images of different subjects at different historical times.
Further, there is a correspondence between the historical PET image sample and the historical CT image sample, for example, the historical PET image sample 1 is a PET image of the subject 1 three months ago, then the historical CT image sample 1 corresponding to the historical PET image sample 1 also needs to be a CT image of the subject 1 three months ago, and the historical PET image sample 1 and the historical CT image sample 1 include at least one identical portion.
S702, training an initial baseline model according to a historical PET image sample, a historical CT image sample and a corresponding first segmentation gold standard to obtain a baseline model; the first segmentation gold standard comprises a first area determined by a historical CT image sample, the first area corresponds to a second area in the historical PET image sample, and a first parameter of the second area is larger than or equal to a first preset threshold.
In this embodiment, after acquiring the historical PET image samples and historical CT image samples, the computer device also needs to determine the corresponding first segmentation gold standard. The first segmentation gold standard is the segmentation mask that serves as the standard annotation for training the baseline model.
With continued reference to fig. 3, in the training process, fig. 3 (a) may be a historical CT image sample, fig. 3 (b) may be a historical PET image sample, and the first segmentation gold standard is shown in fig. 3 (d). That is, the computer device determines a second region with SUV ≥ 2.5 in the historical PET image sample, maps the second region into the historical CT image sample, or annotates the historical CT image sample according to the second region, to obtain a first region on the historical CT image sample, and takes this first region as the first segmentation gold standard.
Further, the computer device takes the historical PET image sample and the historical CT image sample as training samples and the corresponding first segmentation gold standard as the label, so that the initial baseline model can be trained to obtain the baseline model. For example, the computer device may stop training and obtain the baseline model when the difference between the historical analysis result predicted by the initial baseline model and the first segmentation gold standard is less than a preset difference.
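The stop condition can be illustrated with a Dice-style difference between the predicted segmentation and the gold standard. Everything here is a hypothetical sketch: `model_step` stands in for one epoch of the real network's training, which the patent does not specify at this level of detail.

```python
def dice(pred, gold):
    """Dice overlap between two binary masks given as nested lists."""
    inter = sum(p & g for pr, gr in zip(pred, gold) for p, g in zip(pr, gr))
    total = sum(v for row in pred for v in row) + sum(v for row in gold for v in row)
    return 2.0 * inter / total if total else 1.0

def train_until_stop(model_step, gold, max_epochs=100, tol=0.05):
    """Stop when the difference between the model's analysis result and the
    gold standard (here 1 - Dice) falls below the preset threshold `tol`."""
    for epoch in range(max_epochs):
        pred = model_step(epoch)           # hypothetical: one training step, returns a mask
        if 1.0 - dice(pred, gold) < tol:
            return epoch                   # training stop condition met
    return max_epochs
```

With a stub `model_step` that first predicts an empty mask and reaches the gold standard at epoch 3, `train_until_stop` stops at epoch 3.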
Optionally, the computer device may obtain a registered historical PET image sample from the historical PET image sample and the historical CT image sample, and train the initial baseline model according to the historical CT image sample, the registered historical PET image sample, and the corresponding first segmentation gold standard.
Further optionally, the computer device may perform preprocessing on the historical CT image sample to obtain a preprocessed historical CT image sample, and perform preprocessing on the registered historical PET image sample to obtain a preprocessed historical PET image sample, and further train the initial baseline model according to the preprocessed historical CT image sample, the preprocessed historical PET image sample, and the corresponding first segmentation gold standard to obtain the baseline model.
Take the case where the initial baseline model is a VB-Net as an example. After obtaining the historical CT image samples and historical PET image samples, the computer device can obtain registered historical PET image samples from the historical PET image samples and historical CT image samples, normalize the registered historical PET image samples and the historical CT image samples respectively, and then input them into the VB-Net to train the initial baseline model and obtain the baseline model used in the end. The first segmentation gold standard serves as the learning label during the training of the initial baseline model.
In this embodiment, a historical PET image sample and a historical CT image sample are obtained, and an initial baseline model is trained according to the historical PET image sample, the historical CT image sample, and a corresponding first segmentation gold standard, so as to obtain a baseline model. Because the first segmentation gold standard comprises a first area determined by the historical CT image sample, the first area corresponds to a second area in the historical PET image sample, and a first parameter of the second area is larger than or equal to a first preset threshold value, a first region of interest with the first parameter larger than or equal to the first preset threshold value on the historical CT image can be determined according to the baseline model.
The process of determining the current analysis result is described next. Fig. 8 is a schematic flow chart of determining a current analysis result in an embodiment of the present application. Referring to fig. 8, this embodiment relates to an alternative implementation of how to determine the current analysis result. Based on the above embodiment, the step S203 of determining the current analysis result according to the current PET image, the current CT image, the historical analysis result and the follow-up model includes the following steps:
S801, obtaining a registered current PET image according to the current PET image and the current CT image.
S802, according to the current CT image and the historical analysis result, a registered historical analysis result is obtained.
S803, determining a current analysis result according to the current CT image, the registered current PET image, the registered historical analysis result and the follow-up model.
In this embodiment, following the same principle as S501, when determining the current analysis result according to the current PET image, the current CT image, the historical analysis result and the follow-up model, the computer device obtains the registered current PET image according to the current PET image and the current CT image, and obtains the registered historical analysis result according to the current CT image and the historical analysis result.
Optionally, the computer device registers the current PET image to the current CT image with the current CT image as a reference, and registers the history analysis result to the current CT image, so that the spatial resolution and the position of the registered current PET image are consistent with those of the current CT image, and the spatial resolution and the position of the history analysis result are consistent with those of the current CT image.
In this embodiment, the registered current PET image is obtained according to the current PET image and the current CT image, and the registered historical analysis result is obtained according to the current CT image and the historical analysis result, which improves the matching degree among the current CT image, the registered current PET image and the registered historical analysis result. The current analysis result can therefore be accurately determined according to the current CT image, the registered current PET image, the registered historical analysis result and the follow-up model.
Fig. 9 is a schematic flow chart of yet another embodiment of determining the current analysis result, and referring to fig. 9, this embodiment relates to an alternative implementation of how to determine the current analysis result. Based on the above embodiment, S803 described above determines a current analysis result according to the current CT image, the registered current PET image, the registered historical analysis result, and the follow-up model, including the following steps:
S901, preprocessing the current CT image, the registered current PET image and the registered historical analysis result respectively to obtain a preprocessed current CT image, a preprocessed current PET image and a preprocessed historical analysis result.
S902, inputting the preprocessed current CT image, the preprocessed current PET image and the preprocessed historical analysis result into a follow-up model to obtain a current analysis result.
In this embodiment, when the computer device obtains the current analysis result, it needs to perform preprocessing on the current CT image, the registered current PET image, and the registered history analysis result, so as to obtain the preprocessed current CT image, the preprocessed current PET image, and the preprocessed history analysis result.
The preprocessing may refer to S601, and will not be described herein. Illustratively, ww=400 and wl=40 when the current CT image is normalized; when the registered current PET image is normalized, ww=5 and wl=2.5; when normalization processing is performed on the registered history analysis results, ww=1 and wl=0.5.
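The per-input window settings quoted above can be kept in one table and applied uniformly. A sketch under stated assumptions: the dictionary keys and function name are illustrative, while the (wl, ww) pairs are the values given in the text.

```python
# (window level wl, window width ww) per input, as quoted in the text; adjustable.
WINDOWS = {"ct": (40.0, 400.0), "pet": (2.5, 5.0), "history": (0.5, 1.0)}

def normalize_input(img, kind):
    """Clamp each pixel to [wl - ww/2, wl + ww/2] and rescale to [-1, 1]."""
    wl, ww = WINDOWS[kind]
    lo, hi = wl - ww / 2, wl + ww / 2
    return [[(min(max(v, lo), hi) - wl) / (ww / 2) for v in row] for row in img]

normalize_input([[2.5]], "pet")      # [[0.0]]  - the SUV threshold sits at mid-window
normalize_input([[0.0]], "history")  # [[-1.0]] - background of the analysis-result mask
```

Choosing the PET window so that the SUV threshold 2.5 lands at the window centre (value 0 after normalization) keeps the decision boundary in the middle of the model's input range.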
Further, the preprocessed current CT image, the preprocessed current PET image and the preprocessed historical analysis result are input into the follow-up model, so that the current analysis result output by the follow-up model can be obtained. In addition, since preprocessing is performed before the data are input into the follow-up model, the amount of data input into the follow-up model can be reduced, thereby improving the efficiency of determining the current analysis result.
The process of training to obtain the follow-up model will be described below. Fig. 10 is a schematic flow chart of obtaining a follow-up model in an embodiment of the present application. Referring to fig. 10, this embodiment relates to an alternative implementation of how to obtain the follow-up model. Based on the above embodiment, the image analysis method further includes the following steps:
S1001, acquiring a historical PET image sample, a historical CT image sample, a current PET image sample and a current CT image sample.
In this embodiment, the computer device acquires a historical PET image sample, a historical CT image sample, a current PET image sample, and a current CT image sample during the training to obtain the follow-up model.
It is appreciated that the time of the historical PET image samples and the historical CT image samples is earlier than the current PET image samples and the current CT image samples.
Further, there is also a correspondence among the historical PET image sample, the historical CT image sample, the current PET image sample and the current CT image sample. For example, if the historical PET image sample 1 is a PET image of the subject 1 from three months ago, then the historical CT image sample 1 corresponding to the historical PET image sample 1 also needs to be a CT image of the subject 1 from three months ago; if the current PET image sample 1 is yesterday's PET image of the subject 1, then the current CT image sample 1 corresponding to the current PET image sample 1 also needs to be yesterday's CT image of the subject 1. Moreover, the historical PET image sample 1, the historical CT image sample 1, the current PET image sample 1 and the current CT image sample 1 all include at least one identical portion.
S1002, determining a historical CT segmentation result sample according to a historical PET image sample, a historical CT image sample and a baseline model; the historical CT segmentation result sample comprises a first area in the historical CT image sample, the first area corresponds to a second area in the historical PET image sample, and a first parameter of the second area is larger than or equal to a first preset threshold.
In this embodiment, a baseline model needs to be trained prior to training the follow-up model. Referring to the description of the baseline model in the above embodiment, after obtaining the historical PET image sample, the historical CT image sample, the current PET image sample and the current CT image sample, the computer device first determines the historical CT segmentation result sample according to the historical PET image sample, the historical CT image sample and the baseline model.
Fig. 11 is a schematic view of a sample according to an embodiment of the present application, as shown in fig. 11, in which fig. 11 (a) is a historical CT image sample, fig. 11 (b) is a historical PET image sample, fig. 11 (c) is a current CT image sample, and fig. 11 (d) is a current PET image sample.
Since the SUV of the dark developed portion on the right side of FIG. 11 (b) is 2.5 or more, this second region with SUV ≥ 2.5 in FIG. 11 (b) can be mapped to the first region shown by the white circular dashed box in FIG. 11 (a). Further, the computer device takes the first region in FIG. 11 (a) as the historical CT segmentation result sample. That is, the first region is the portion of the historical CT image sample, with a clear boundary, that corresponds to the portion of the historical PET image sample where SUV ≥ 2.5.
S1003, training an initial follow-up model according to a historical CT segmentation result sample, a current PET image sample, a current CT image sample and a second segmentation gold standard to obtain a follow-up model; the second segmentation gold standard comprises a third region in the current CT image sample and/or comprises a fourth region in the current CT image sample corresponding to the first region; the third region corresponds to a fifth region in the current PET image sample, and a first parameter of the fifth region is larger than or equal to a first preset threshold value; the fourth region corresponds to the first region, and the fourth region corresponds to a sixth region in the current PET image sample, the first parameter of the sixth region being less than a first preset threshold.
In this embodiment, the computer device may determine the corresponding second segmentation gold standard according to the first segmentation gold standard corresponding to the historical CT image sample, the current PET image sample, and the current CT image sample.
The second segmentation gold standard includes not only a third region on the current CT image sample corresponding to SUV ≥ 2.5, but also a fourth region determined, on the basis of the first segmentation gold standard of the historical CT image sample, from the region of low metabolism on the current PET image sample; the boundary of the fourth region can be seen clearly on the current CT image sample.
That is, on the one hand, the computer device determines a fifth region with SUV not less than 2.5 on the current PET image sample, and maps the fifth region to the current CT image sample to obtain a third region with a clear boundary in the current CT image sample; on the other hand, based on the first segmentation gold standard corresponding to the historical CT image sample, the computer device also maps the first region in the first segmentation gold standard to the current PET image sample, determines a sixth region with SUV less than 2.5 in the current PET image sample, and maps the sixth region to the current CT image sample, so as to obtain a fourth region with a clear boundary in the current CT image sample.
Thus, the computer device not only determines the third region with SUV greater than or equal to 2.5 on the current CT image sample, but also determines the fourth region, whose SUV was greater than or equal to 2.5 in the historical examination but is already less than 2.5 in the current examination. Further, the computer device uses the third region and the fourth region as the second segmentation gold standard.
With continued reference to FIG. 11, it can be seen from a combination of FIG. 11 (b) and FIG. 11 (d) that the portion of FIG. 11 (b) where the SUV was originally greater than or equal to 2.5 corresponds to a portion of FIG. 11 (d) where the SUV value is low, i.e., a developed portion of FIG. 11 (d) where the SUV is already less than 2.5. However, since the doctor still needs to pay attention to this portion, the computer device takes the developed portion in FIG. 11 (d) as the sixth region and maps the sixth region to the current CT image sample to obtain the white area outlined by the white circular dotted line in FIG. 11 (e) as the fourth region. Further, the computer device uses the fourth region as the second segmentation gold standard, as shown in FIG. 11 (f).
Similarly, please refer to fig. 4, wherein fig. 4 (a) is a historical CT image sample, fig. 4 (b) is a historical PET image sample, fig. 4 (c) is a current CT image sample, and fig. 4 (d) is a current PET image sample.
As shown in the lower left developed portion of FIG. 4 (b), there is a second region where the SUV is greater than or equal to 2.5 in the historical PET image sample but less than 2.5 in the current PET image sample, so the second region becomes smaller after being mapped to the current PET image sample, as shown in FIG. 4 (d).
Further, when the computer device trains the follow-up model, the portion where the SUV is greater than or equal to 2.5 in FIG. 4 (b) but less than 2.5 in FIG. 4 (d) is mapped to the current CT image sample to obtain the fourth region shown in FIG. 4 (e), and the fourth region is used as the second segmentation gold standard, as shown in FIG. 4 (f).
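The construction of the second segmentation gold standard described above can be sketched in code. This is a minimal illustration rather than the patented implementation: it assumes the current PET SUV volume and the historical first-region mask are already resampled onto the same voxel grid, and all names (`second_gold_standard`, `SUV_THRESHOLD`) are hypothetical.

```python
import numpy as np

SUV_THRESHOLD = 2.5  # the "first preset threshold" of the embodiment

def second_gold_standard(current_pet_suv, first_region_mask):
    """Build the two parts of the second segmentation gold standard.

    current_pet_suv   : SUV volume of the current PET sample, on the current CT grid
    first_region_mask : boolean mask of the first region from the historical
                        CT segmentation result, mapped onto the same grid
    """
    # Third region: voxels whose current SUV is still at or above the threshold.
    third_region = current_pet_suv >= SUV_THRESHOLD
    # Sixth region: historical lesion voxels whose current SUV has dropped
    # below the threshold; on the CT grid this yields the fourth region.
    fourth_region = first_region_mask & (current_pet_suv < SUV_THRESHOLD)
    return third_region, fourth_region

# Toy 1-D example: the first voxel has regressed below the threshold.
suv = np.array([1.8, 3.1, 0.4])
hist = np.array([True, True, False])
third, fourth = second_gold_standard(suv, hist)
print(third.tolist(), fourth.tolist())  # [False, True, False] [True, False, False]
```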
Furthermore, the computer device takes the historical CT segmentation result sample, the current PET image sample and the current CT image sample as training samples, and the corresponding second segmentation gold standard as a label, so that the initial follow-up model can be trained to obtain the follow-up model. For example, the computer device may stop training and obtain the follow-up model when the current analysis result predicted by the initial follow-up model differs from the second segmentation gold standard by less than a preset difference.
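The stopping criterion just mentioned can be illustrated with a simple overlap measure. This sketch assumes the "difference" between the predicted result and the second segmentation gold standard is measured as one minus the Dice score; the embodiment does not specify the measure, so this is only one plausible choice.

```python
import numpy as np

def dice_score(pred, gold):
    """Overlap between a predicted mask and the gold-standard mask."""
    pred, gold = pred.astype(bool), gold.astype(bool)
    denom = pred.sum() + gold.sum()
    return 1.0 if denom == 0 else 2.0 * (pred & gold).sum() / denom

def should_stop(pred, gold, preset_difference=0.05):
    """Stop training once the predicted result differs from the second
    segmentation gold standard by less than the preset difference
    (measured here as 1 - Dice)."""
    return (1.0 - dice_score(pred, gold)) < preset_difference

gold = np.array([1, 1, 0, 0])
print(should_stop(np.array([1, 0, 0, 0]), gold))  # False: Dice = 2/3
print(should_stop(np.array([1, 1, 0, 0]), gold))  # True: perfect overlap
```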
Optionally, a registered current PET image sample is obtained according to the current PET image sample and the current CT image sample, and a registered historical CT segmentation result sample is obtained according to the current CT image sample and the historical CT segmentation result sample, so that the initial follow-up model is trained according to the current CT image sample, the registered current PET image sample, the registered historical CT segmentation result sample and the corresponding second segmentation gold standard to obtain the follow-up model.
Further optionally, the computer device may perform preprocessing on the current CT image sample, the registered current PET image sample, and the registered historical CT segmentation result sample, respectively, to obtain a preprocessed current CT image sample, a preprocessed current PET image sample, and a preprocessed historical CT segmentation result sample, and perform training on the initial follow-up model by using the preprocessed current CT image sample, the preprocessed current PET image sample, the preprocessed historical CT segmentation result sample, and the corresponding second segmentation gold standard.
Further, the computer device may obtain a registered current PET image sample according to the current PET image sample and the current CT image sample, obtain a registered historical CT segmentation result sample according to the current CT image sample and the historical CT segmentation result sample, normalize the current CT image sample, the registered current PET image sample and the registered historical CT segmentation result sample respectively, and input the normalized samples into the initial follow-up model for training, so as to obtain the final required follow-up model. In some embodiments, the initial follow-up model may also employ VB-Net. The second segmentation gold standard serves as the learning label in the training process of the initial follow-up model.
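A minimal sketch of the normalization and input-assembly step, assuming z-score normalization per channel and a channels-first layout such as a VB-Net-style 3-D segmentation network might consume; the exact preprocessing used by the embodiment is not specified, so the function names here are hypothetical.

```python
import numpy as np

def normalize(volume):
    """Z-score normalize one input channel (a common, but here assumed, choice)."""
    v = volume.astype(np.float32)
    std = v.std()
    return (v - v.mean()) / std if std > 0 else v - v.mean()

def build_input(current_ct, registered_current_pet, registered_hist_seg):
    """Normalize each sample and stack them as channels of one network input."""
    channels = [normalize(current_ct),
                normalize(registered_current_pet),
                registered_hist_seg.astype(np.float32)]  # binary mask: left as-is
    return np.stack(channels, axis=0)

rng = np.random.default_rng(0)
x = build_input(rng.uniform(-500, 500, (4, 4, 4)),  # CT values in HU
                rng.uniform(0, 10, (4, 4, 4)),      # PET SUV values
                np.zeros((4, 4, 4)))                # historical segmentation mask
print(x.shape)  # (3, 4, 4, 4)
```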
According to the embodiment, a historical PET image sample, a historical CT image sample, a current PET image sample and a current CT image sample are obtained, and a historical CT segmentation result sample is determined according to the historical PET image sample, the historical CT image sample and a baseline model, so that an initial follow-up model is trained according to the historical CT segmentation result sample, the current PET image sample, the current CT image sample and a second segmentation gold standard, and a follow-up model is obtained. The second segmentation gold standard comprises a third area in the current CT image sample and/or comprises a fourth area in the current CT image sample corresponding to the first area, wherein the third area corresponds to a fifth area in the current PET image sample, and a first parameter of the fifth area is greater than or equal to a first preset threshold; the fourth region corresponds to the first region, and the fourth region corresponds to a sixth region in the current PET image sample, wherein the first parameter of the sixth region is smaller than the first preset threshold, so that a third region of interest and/or a fourth region of interest on the current CT image can be determined according to the follow-up model.
Fig. 12 is a schematic flow chart of yet another embodiment of determining the current analysis result, and referring to fig. 12, this embodiment relates to an alternative implementation of how to determine the current analysis result. Based on the above embodiment, S803 described above, determining the current analysis result according to the current CT image, the registered current PET image, the registered historical analysis result, and the follow-up model may further include the following steps:
And S1201, obtaining a historical PET image registered to the current CT image according to the historical PET image and the current CT image.
In this embodiment, the computer device may further obtain a historical PET image registered to the current CT image according to the historical PET image and the current CT image. That is, the computer device registers the historical PET image to the current CT image with the current CT image as a reference to obtain the historical PET image registered to the current CT image.
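Registration itself (rigid or elastic) is beyond a short example, but the resampling half of the step — placing the historical PET image onto the current CT grid once a transform is known — can be sketched as follows. The integer translation is purely illustrative; a real pipeline would first estimate the transform by registration.

```python
import numpy as np

def resample_to_reference(moving, offset):
    """Nearest-neighbour resampling of `moving` (e.g. the historical PET image)
    onto the reference grid (e.g. the current CT image) under a known integer
    translation `offset`. The transform is assumed given, not estimated."""
    out = np.zeros_like(moving)
    grid = np.indices(moving.shape)
    # Reference voxel r samples moving voxel r + offset (inverse mapping).
    coords = [grid[d] + offset[d] for d in range(moving.ndim)]
    valid = np.ones(moving.shape, dtype=bool)
    for d, c in enumerate(coords):
        valid &= (c >= 0) & (c < moving.shape[d])
    out[valid] = moving[tuple(c[valid] for c in coords)]
    return out

pet = np.arange(9.0).reshape(3, 3)
registered = resample_to_reference(pet, (1, 0))  # content shifts up one row
print(registered.tolist())  # [[3.0, 4.0, 5.0], [6.0, 7.0, 8.0], [0.0, 0.0, 0.0]]
```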
S1202, obtaining a current analysis result according to the current CT image, the registered current PET image, the registered historical analysis result, the registered historical PET image to the current CT image and the follow-up model.
In this embodiment, after S1201, the computer device may obtain the current analysis result according to the current CT image, the registered current PET image, the registered historical analysis result, the registered historical PET image to the current CT image, and the follow-up model.
Optionally, the computer device may directly input the current CT image, the registered current PET image, the registered historical analysis result, and the historical PET image registered to the current CT image into the follow-up model, so as to obtain the current analysis result output by the follow-up model. The computer device may also preprocess the current CT image, the registered current PET image, the registered historical analysis result and the historical PET image registered to the current CT image respectively, and then input the preprocessed images and analysis result into the follow-up model to obtain the current analysis result output by the follow-up model.
It can be appreciated that the above-mentioned historical PET images registered to the current CT image can further improve the accuracy of the current analysis result output by the follow-up model. Of course, in this case, it is also necessary to use historical PET image samples registered to the current CT image sample when training the follow-up model. For example, as shown in fig. 12, when training the follow-up model, not only the current CT image sample, the registered current PET image sample, the registered historical CT segmentation result sample and the corresponding second segmentation gold standard need to be normalized and then input to the initial follow-up model, but also the historical PET image sample registered to the current CT image sample needs to be normalized and then input to the initial follow-up model, so that the initial follow-up model is trained to obtain the final follow-up model.
In this embodiment, since the historical PET image registered to the current CT image is further obtained according to the historical PET image and the current CT image, and the first parameter condition in the previous inspection can be analyzed by combining the historical PET image, the current analysis result can be determined by assisting the follow-up model after registering the historical PET image to the current CT image. Thus, the obtained current analysis result is more accurate according to the current CT image, the registered current PET image, the registered historical analysis result, the registered historical PET image to the current CT image and the follow-up model.
In an embodiment, optionally, the image analysis method further includes the following steps:
and determining a first parameter analysis result according to the historical analysis result and the current analysis result.
The first parameter analysis result represents the comparative analysis of the historical analysis result and the current analysis result, and is used for reflecting the change in structural information, such as the size and shape, of the region of interest of the doctor between the historical CT image and the current CT image of the subject. The first parameter analysis result includes at least one of a text form, an image form and a chart form.
Optionally, the computer device may display the historical analysis results, the current analysis results, and the first parameter analysis results, so that a doctor can perform subsequent analysis work based on the CT image in time.
Because the first parameter analysis result can be determined according to the historical analysis result and the current analysis result, a doctor can conveniently conduct comparison analysis on the structural information such as the size, the shape and the like of the historical analysis result and the current analysis result.
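As an illustration of such a comparative analysis in text form, the following hypothetical sketch compares lesion volume between the two analysis results; the embodiment leaves the concrete structural measures (size, shape, etc.) open, so volume is only one example.

```python
import numpy as np

def first_parameter_analysis(hist_mask, curr_mask, voxel_volume_mm3=1.0):
    """Compare the historical and current analysis results by lesion volume
    and report the change in text (dictionary) form."""
    v_hist = float(hist_mask.sum()) * voxel_volume_mm3
    v_curr = float(curr_mask.sum()) * voxel_volume_mm3
    change = (v_curr - v_hist) / v_hist * 100.0 if v_hist else float("nan")
    return {"historical_volume_mm3": v_hist,
            "current_volume_mm3": v_curr,
            "volume_change_percent": change}

report = first_parameter_analysis(np.ones((2, 2, 2)), np.ones((1, 2, 2)))
print(report["volume_change_percent"])  # -50.0
```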
Fig. 13 is a schematic flow chart of determining the second parameter analysis result in the embodiment of the present application, and referring to fig. 13, this embodiment relates to an alternative implementation of how to determine the second parameter analysis result. On the basis of the above embodiment, the image analysis method may further include the following steps:
S1301, mapping the historical analysis result to the historical PET image to obtain a first target area in the historical PET image.
And S1302, mapping the current analysis result to the current PET image to obtain a second target area in the current PET image.
S1303, determining a second parameter analysis result according to the first target area and the second target area.
In this embodiment, the computer device may employ a rigid registration algorithm or an elastic registration method to map the historical analysis results to the historical PET images and the current analysis results to the current PET images. The mapping principle may refer to the registration procedure in the above-described embodiments.
Further, continuing to take the first parameter as the SUV as an example, the historical analysis result is mapped to the historical PET image, and the current analysis result is mapped to the current PET image. The first target area then represents a region with SUV greater than or equal to 2.5 in the historical PET image of the subject; the second target area represents a region with SUV greater than or equal to 2.5 in the current PET image of the subject and/or a region with SUV greater than or equal to 2.5 in the historical PET image but less than 2.5 in the current PET image. That is, the mapped historical PET image and current PET image only contain the hypermetabolic information of the abnormal region concerned by the doctor, and do not contain redundant information such as the hypermetabolic information of normal regions, which facilitates the comparison of the first target area with the second target area and the determination of the second parameter analysis result.
Further, the second parameter analysis result represents the comparative analysis of the first target area and the second target area, and is used for reflecting the change in the metabolic information of the region concerned by the doctor between the historical PET image and the current PET image of the subject. Likewise, the second parameter analysis result includes, but is not limited to, at least one of a text form, an image form and a chart form.
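A hedged sketch of the metabolic comparison: SUVmax inside the first target area of the historical PET image versus the second target area of the current PET image. SUVmax is used here only as an example statistic, and the function name is hypothetical.

```python
import numpy as np

def second_parameter_analysis(hist_pet_suv, curr_pet_suv, first_target, second_target):
    """Compare metabolic information inside the mapped target areas."""
    suvmax_hist = float(hist_pet_suv[first_target].max())
    suvmax_curr = float(curr_pet_suv[second_target].max())
    return {"historical_SUVmax": suvmax_hist,
            "current_SUVmax": suvmax_curr,
            "SUVmax_change": suvmax_curr - suvmax_hist}

hist_pet = np.array([[5.0, 1.0], [0.5, 0.2]])
curr_pet = np.array([[2.0, 0.8], [0.4, 0.2]])
target = np.array([[True, False], [False, False]])
out = second_parameter_analysis(hist_pet, curr_pet, target, target)
print(out["SUVmax_change"])  # -3.0
```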
Optionally, the computer device may display the first target area, the second target area, and the second parameter analysis result, so that a doctor can perform subsequent analysis work based on the PET image in time.
In the embodiment, the historical analysis result is mapped to the historical PET image to obtain the first target area in the historical PET image, and the current analysis result is mapped to the current PET image to obtain the second target area in the current PET image, so that the first target area and the second target area can describe the area focused by the doctor more accurately. Therefore, the second parameter analysis result is further beneficial to doctors to timely and conveniently compare and analyze the metabolic conditions of the first target area and the second target area.
Of course, in some embodiments, the computer device may also display the first and second parameter analysis results simultaneously for the physician to follow-up with the PET image and CT image.
In order to describe the image analysis method of the present application more clearly, it is described below with reference to fig. 14 and fig. 15. Fig. 14 is an overall flow chart of an image analysis method according to an embodiment of the application, and fig. 15 is another schematic flow chart of the image analysis method. As shown in fig. 14, the computer device performs the image analysis method as follows.
S1401, a historical PET image sample, a historical CT image sample, a current PET image sample, and a current CT image sample are acquired.
S1402, training the initial baseline model according to the historical PET image sample, the historical CT image sample and the corresponding first segmentation gold standard to obtain a baseline model.
S1403, determining a historical CT segmentation result sample according to the historical PET image sample, the historical CT image sample and the baseline model.
S1404, training the initial follow-up model according to the historical CT segmentation result sample, the current PET image sample, the current CT image sample and the second segmentation gold standard to obtain a follow-up model.
S1405, acquiring a historical PET image, a historical CT image, a current PET image and a current CT image of the subject.
S1406, obtaining registered historical PET images according to the historical PET images and the historical CT images.
S1407, preprocessing the historical CT image to obtain a preprocessed historical CT image.
S1408, preprocessing the registered historical PET image to obtain a preprocessed historical PET image.
S1409, inputting the preprocessed historical CT image and the preprocessed historical PET image into a baseline model to obtain a historical analysis result.
S1410, obtaining a registered current PET image according to the current PET image and the current CT image.
S1411, obtaining registered historical analysis results according to the current CT image and the historical analysis results.
And S1412, obtaining a historical PET image registered to the current CT image according to the historical PET image and the current CT image.
S1413, preprocessing the current CT image, the registered current PET image, the registered historical analysis result and the registered historical PET image to the current CT image respectively to obtain a preprocessed current CT image, a preprocessed current PET image, a preprocessed historical analysis result and a preprocessed registered historical PET image.
S1414, inputting the preprocessed current CT image, the preprocessed current PET image, the preprocessed historical analysis result and the preprocessed registration historical PET image into a follow-up model to obtain the current analysis result.
S1415, determining a first parameter analysis result according to the historical analysis result and the current analysis result.
S1416, mapping the historical analysis result to the historical PET image to obtain a first target area in the historical PET image.
S1417, mapping the current analysis result to the current PET image to obtain a second target area in the current PET image.
S1418, determining a second parameter analysis result according to the first target area and the second target area.
Wherein, S1401 to S1404 are training procedures and S1405 to S1418 are using procedures. Referring to fig. 15, in the use process, the computer device acquires a historical PET image and a historical CT image, registers the historical PET image to the historical CT image to obtain a registered historical PET image, performs normalization processing on the registered historical PET image and the registered historical CT image respectively, and inputs the normalized historical PET image and the normalized historical CT image to a baseline model to obtain a historical analysis result.
Further, the computer equipment acquires a current PET image and a current CT image, registers the current PET image, a historical analysis result and the historical PET image to the current CT image respectively to obtain a registered current PET image, a registered historical analysis result and a registered historical PET image to the current CT image, and then normalizes the current CT image, the registered current PET image, the registered historical analysis result and the registered historical PET image to the current CT image respectively by the computer equipment and inputs the normalized results to the follow-up model to obtain the current analysis result.
Thus, the computer device can determine the first parameter analysis result and the second parameter analysis result so that the doctor can carry out follow-up work. The steps in fig. 14 and 15 may refer to the description of the above embodiments, and are not repeated here.
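The use procedure of fig. 15 can be summarized as a small orchestration function. The `register`, `normalize`, and model callables below are placeholders standing in for the registration, normalization, baseline-model and follow-up-model steps; the toy demo wires them with trivial lambdas just to show the data flow, not real image processing.

```python
def run_image_analysis(baseline_model, follow_up_model, images, register, normalize):
    """End-to-end sketch of the use procedure (S1405-S1418).
    register(moving, reference) and normalize(img) stand in for the
    registration and preprocessing steps; the models are callables."""
    # Baseline stage: register historical PET to historical CT, then analyze.
    hist_pet_reg = register(images["hist_pet"], images["hist_ct"])
    historical = baseline_model(normalize(images["hist_ct"]), normalize(hist_pet_reg))

    # Follow-up stage: register everything to the current CT, then analyze.
    curr_pet_reg = register(images["curr_pet"], images["curr_ct"])
    hist_result_reg = register(historical, images["curr_ct"])
    hist_pet_to_curr = register(images["hist_pet"], images["curr_ct"])
    current = follow_up_model(normalize(images["curr_ct"]), normalize(curr_pet_reg),
                              normalize(hist_result_reg), normalize(hist_pet_to_curr))
    return historical, current

# Toy demo with placeholder callables (identity registration/normalization,
# models that just sum their inputs).
images = {"hist_pet": 1, "hist_ct": 2, "curr_pet": 3, "curr_ct": 4}
hist, curr = run_image_analysis(
    baseline_model=lambda ct, pet: ct + pet,
    follow_up_model=lambda ct, pet, prev, prev_pet: ct + pet + prev + prev_pet,
    images=images,
    register=lambda moving, reference: moving,
    normalize=lambda x: x)
print(hist, curr)  # 3 11
```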
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same time, but may be performed at different times; the order of execution of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or with sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides an image analysis device for realizing the image analysis method. The implementation of the solution provided by the device is similar to that described in the above method, so the specific limitation of one or more embodiments of the image analysis device provided below may be referred to the limitation of the image analysis method hereinabove, and will not be repeated herein.
Fig. 16 is a block diagram of an image analysis device according to an embodiment of the present application, as shown in fig. 16, in an embodiment of the present application, there is provided an image analysis device 1600, including: an acquisition module 1601, a first determination module 1602, and a second determination module 1603, wherein:
an acquiring module 1601 is configured to acquire a historical PET image, a historical CT image, a current PET image, and a current CT image of a subject.
A first determining module 1602, configured to determine a historical analysis result according to the historical PET image, the historical CT image, and the baseline model; the historical analysis result comprises a first region of interest in the historical CT image, the first region of interest corresponds to a second region of interest in the historical PET image, and a first parameter of the second region of interest is greater than or equal to a first preset threshold.
A second determining module 1603, configured to determine a current analysis result according to the current PET image, the current CT image, the historical analysis result, and the follow-up model; the current analysis result comprises a third region of interest in the current CT image and/or comprises a fourth region of interest in the current CT image corresponding to the first region of interest; the third region of interest corresponds to a fifth region of interest in the current PET image, and a first parameter of the fifth region of interest is greater than or equal to a first preset threshold; the fourth region of interest corresponds to the first region of interest, and the fourth region of interest corresponds to a sixth region of interest in the current PET image, the first parameter of the sixth region of interest being less than a first preset threshold.
The image analysis device provided by this embodiment acquires a historical PET image, a historical CT image, a current PET image and a current CT image of a subject, determines a historical analysis result according to the historical PET image, the historical CT image and a baseline model, and further determines a current analysis result according to the current PET image, the current CT image, the historical analysis result and a follow-up model. Because the historical analysis result comprises a first region of interest in the historical CT image, the first region of interest corresponds to a second region of interest in the historical PET image, and a first parameter of the second region of interest is greater than or equal to a first preset threshold, the computer device can first determine the first region of interest of the subject in the historical examination according to the historical PET image and the historical CT image, for example, the region on the CT image for which the SUV is greater than or equal to 2.5.
Further, the current analysis result includes a third region of interest in the current CT image and/or includes a fourth region of interest in the current CT image corresponding to the first region of interest; the third region of interest corresponds to a fifth region of interest in the current PET image, and a first parameter of the fifth region of interest is greater than or equal to the first preset threshold; the fourth region of interest corresponds to the first region of interest, and the fourth region of interest corresponds to a sixth region of interest in the current PET image, the first parameter of the sixth region of interest being less than the first preset threshold. Thereby, the computer device is able to determine the fourth region of interest even if the first parameter of the first region of interest has become less than the first preset threshold in the current examination. For example, if the SUV of the first region of interest, which was greater than or equal to 2.5 on the CT image in the historical examination, is already less than 2.5 in the current examination, the computer device can still determine the fourth region of interest with SUV less than 2.5. In this way, the device provided in this embodiment can be applied not only to the case where the first parameter is greater than or equal to the first preset threshold in each examination, but also to the case where the first parameter was greater than or equal to the first preset threshold in the first examination while, in the second examination, the first parameter of that region is already less than the first preset threshold, thereby expanding the application range.
Optionally, the first determining module 1602 includes:
and the first determining unit is used for obtaining registered historical PET images according to the historical PET images and the historical CT images.
And the second determining unit is used for determining a historical analysis result according to the historical CT image, the registered historical PET image and the baseline model.
Optionally, the second determining unit includes:
and the first preprocessing subunit is used for preprocessing the historical CT image to obtain a preprocessed historical CT image.
And the second preprocessing subunit is used for preprocessing the registered historical PET images to obtain preprocessed historical PET images.
And the first determination subunit is used for inputting the preprocessed historical CT image and the preprocessed historical PET image into the baseline model to obtain a historical analysis result.
Optionally, the image analysis device 1600 further includes:
the first acquisition module is used for acquiring historical PET image samples and historical CT image samples.
The first training module is used for training the initial baseline model according to the historical PET image sample, the historical CT image sample and the corresponding first segmentation gold standard to obtain a baseline model; the first segmentation gold standard comprises a first area determined by a historical CT image sample, the first area corresponds to a second area in the historical PET image sample, and a first parameter of the second area is larger than or equal to a first preset threshold.
Optionally, the second determining module 1603 includes:
and the third determining unit is used for obtaining the registered current PET image according to the current PET image and the current CT image.
And the fourth determining unit is used for obtaining a registered historical analysis result according to the current CT image and the historical analysis result.
And a fifth determining unit, configured to determine a current analysis result according to the current CT image, the registered current PET image, the registered historical analysis result, and the follow-up model.
Optionally, the fifth determining unit includes:
and the third preprocessing subunit is used for respectively preprocessing the current CT image, the registered current PET image and the registered history analysis result to obtain a preprocessed current CT image, a preprocessed current PET image and a preprocessed history analysis result.
And the second determination subunit is used for inputting the preprocessed current CT image, the preprocessed current PET image and the preprocessed historical analysis result into the follow-up model to obtain the current analysis result.
Optionally, the image analysis device 1600 further includes:
the second acquisition module is used for acquiring a historical PET image sample, a historical CT image sample, a current PET image sample and a current CT image sample.
The third determining module is used for determining a historical CT segmentation result sample according to the historical PET image sample, the historical CT image sample and the baseline model; the historical CT segmentation result sample comprises a first area in the historical CT image sample, the first area corresponds to a second area in the historical PET image sample, and a first parameter of the second area is larger than or equal to a first preset threshold value;
the second training unit is used for training the initial follow-up model according to the historical CT segmentation result sample, the current PET image sample, the current CT image sample and the second segmentation gold standard to obtain a follow-up model; the second segmentation gold standard comprises a third region in the current CT image sample and/or comprises a fourth region in the current CT image sample corresponding to the first region; the third region corresponds to a fifth region in the current PET image sample, and a first parameter of the fifth region is larger than or equal to a first preset threshold value; the fourth region corresponds to the first region, and the fourth region corresponds to a sixth region in the current PET image sample, the first parameter of the sixth region being less than a first preset threshold.
Optionally, the fifth determining unit further includes:
and the third determination subunit is used for obtaining a historical PET image registered to the current CT image according to the historical PET image and the current CT image.
And the fourth determination subunit is used for obtaining the current analysis result according to the current CT image, the registered current PET image, the registered historical analysis result, the registered historical PET image to the current CT image and the follow-up model.
Optionally, the image analysis device 1600 further includes:
and the fourth determining module is used for determining the first parameter analysis result according to the historical analysis result and the current analysis result.
Optionally, the image analysis device 1600 further includes:
and the first mapping module is used for mapping the historical analysis result to the historical PET image to obtain a first target area in the historical PET image.
And the second mapping module is used for mapping the current analysis result to the current PET image to obtain a second target area in the current PET image.
And a fifth determining module, configured to determine a second parameter analysis result according to the first target area and the second target area.
The modules in the image analysis device may be all or partially implemented by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
The user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) involved in the present application are information and data authorized by the user or fully authorized by the parties concerned.
Those skilled in the art will appreciate that all or part of the flows of the above method embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the flows of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, data processing logic units based on quantum computing, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The foregoing embodiments express only a few implementations of the application, and while their description is relatively specific and detailed, they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those of ordinary skill in the art without departing from the concept of the application, all of which fall within the protection scope of the application. Therefore, the protection scope of the application shall be subject to the appended claims.

Claims (10)

1. An image analysis method, the method comprising:
acquiring a historical PET image, a historical CT image, a current PET image and a current CT image of a subject;
determining a historical analysis result according to the historical PET image, the historical CT image and a baseline model; the historical analysis result comprises a first region of interest in the historical CT image, the first region of interest corresponds to a second region of interest in the historical PET image, and a first parameter of the second region of interest is greater than or equal to a first preset threshold;
Determining a current analysis result according to the current PET image, the current CT image, the historical analysis result and a follow-up model; the current analysis result comprises a third region of interest in the current CT image and/or comprises a fourth region of interest in the current CT image corresponding to the first region of interest; the third region of interest corresponds to a fifth region of interest in the current PET image, and the first parameter of the fifth region of interest is greater than or equal to the first preset threshold; the fourth region of interest corresponds to the first region of interest, and the fourth region of interest corresponds to a sixth region of interest in the current PET image, the first parameter of the sixth region of interest being less than the first preset threshold.
2. The method of claim 1, wherein determining a historical analysis result from the historical PET image, the historical CT image, and a baseline model comprises:
obtaining registered historical PET images according to the historical PET images and the historical CT images;
and determining the historical analysis result according to the historical CT image, the registered historical PET image and the baseline model.
3. The method of claim 2, wherein the determining the historical analysis results from the historical CT images, the registered historical PET images, and the baseline model comprises:
preprocessing the historical CT image to obtain a preprocessed historical CT image;
preprocessing the registered historical PET image to obtain a preprocessed historical PET image;
and inputting the preprocessed historical CT image and the preprocessed historical PET image into the baseline model to obtain the historical analysis result.
4. The method of any of claims 1-3, wherein said determining a current analysis result from the current PET image, the current CT image, the historical analysis result, and a follow-up model comprises:
obtaining a registered current PET image according to the current PET image and the current CT image;
obtaining registered historical analysis results according to the current CT image and the historical analysis results;
and determining the current analysis result according to the current CT image, the registered current PET image, the registered historical analysis result and the follow-up model.
5. The method of claim 4, wherein the determining the current analysis result from the current CT image, the registered current PET image, the registered historical analysis result, and the follow-up model comprises:
preprocessing the current CT image, the registered current PET image and the registered historical analysis result respectively to obtain a preprocessed current CT image, a preprocessed current PET image and a preprocessed historical analysis result;
and inputting the preprocessed current CT image, the preprocessed current PET image and the preprocessed historical analysis result into the follow-up model to obtain the current analysis result.
6. The method of claim 4, wherein the determining the current analysis result from the current CT image, the registered current PET image, the registered historical analysis result, and the follow-up model comprises:
obtaining a historical PET image registered to the current CT image according to the historical PET image and the current CT image;
and obtaining the current analysis result according to the current CT image, the registered current PET image, the registered historical analysis result, the historical PET image registered to the current CT image and the follow-up model.
7. A method according to any one of claims 1-3, wherein the method further comprises:
and determining a first parameter analysis result according to the historical analysis result and the current analysis result.
8. A method according to any one of claims 1-3, wherein the method further comprises:
mapping the historical analysis result to the historical PET image to obtain a first target area in the historical PET image;
mapping the current analysis result to the current PET image to obtain a second target area in the current PET image;
and determining a second parameter analysis result according to the first target area and the second target area.
9. An image analysis device, the device comprising:
the acquisition module is used for acquiring a historical PET image, a historical CT image, a current PET image and a current CT image of the detected person;
the first determining module is used for determining a historical analysis result according to the historical PET image, the historical CT image and the baseline model; the historical analysis result comprises a first region of interest in the historical CT image, the first region of interest corresponds to a second region of interest in the historical PET image, and a first parameter of the second region of interest is greater than or equal to a first preset threshold;
The second determining module is used for determining a current analysis result according to the current PET image, the current CT image, the historical analysis result and the follow-up model; the current analysis result comprises a third region of interest in the current CT image and/or comprises a fourth region of interest in the current CT image corresponding to the first region of interest; the third region of interest corresponds to a fifth region of interest in the current PET image, and the first parameter of the fifth region of interest is greater than or equal to the first preset threshold; the fourth region of interest corresponds to the first region of interest, and the fourth region of interest corresponds to a sixth region of interest in the current PET image, the first parameter of the sixth region of interest being less than the first preset threshold.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 8 when the computer program is executed.
CN202310478225.7A 2022-11-01 2023-04-28 Image analysis method, device and computer equipment Pending CN116630239A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202310478225.7A CN116630239A (en) 2023-04-28 2023-04-28 Image analysis method, device and computer equipment
PCT/CN2023/129173 WO2024094088A1 (en) 2022-11-01 2023-11-01 Systems and methods for image analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310478225.7A CN116630239A (en) 2023-04-28 2023-04-28 Image analysis method, device and computer equipment

Publications (1)

Publication Number Publication Date
CN116630239A true CN116630239A (en) 2023-08-22

Family

ID=87609064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310478225.7A Pending CN116630239A (en) 2022-11-01 2023-04-28 Image analysis method, device and computer equipment

Country Status (1)

Country Link
CN (1) CN116630239A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024094088A1 (en) * 2022-11-01 2024-05-10 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for image analysis


Similar Documents

Publication Publication Date Title
CN109493328B (en) Medical image display method, viewing device and computer device
CN109993726B (en) Medical image detection method, device, equipment and storage medium
WO2020238734A1 (en) Image segmentation model training method and apparatus, computer device, and storage medium
US9058545B2 (en) Automatic registration of image pairs of medical image data sets
CN111080584B (en) Quality control method for medical image, computer device and readable storage medium
CN111223554A (en) Intelligent AI PACS system and its checking report information processing method
US12056890B2 (en) Method for measuring volume of organ by using artificial neural network, and apparatus therefor
CN110706207A (en) Image quantization method, image quantization device, computer equipment and storage medium
CN111080583B (en) Medical image detection method, computer device, and readable storage medium
CN114092475B (en) Focal length determining method, image labeling method, device and computer equipment
CN111223158B (en) Artifact correction method for heart coronary image and readable storage medium
CN112001979B (en) Motion artifact processing method, system, readable storage medium and apparatus
KR101885562B1 (en) Method for mapping region of interest in first medical image onto second medical image and apparatus using the same
CN111681205B (en) Image analysis method, computer device, and storage medium
CN112530550A (en) Image report generation method and device, computer equipment and storage medium
CN116630239A (en) Image analysis method, device and computer equipment
CN111243052A (en) Image reconstruction method and device, computer equipment and storage medium
CN113989231A (en) Method and device for determining kinetic parameters, computer equipment and storage medium
CN112150485B (en) Image segmentation method, device, computer equipment and storage medium
CN110738639B (en) Medical image detection result display method, device, equipment and storage medium
CN114723723A (en) Medical image processing method, computer device and storage medium
CN114972026A (en) Image processing method and storage medium
CN113705807A (en) Neural network training device and method, ablation needle arrangement planning device and method
US11854126B2 (en) Methods and apparatus for deep learning based image attenuation correction
CN117745693A (en) Method, device, equipment and storage medium for delineating focus target area

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination