CN116342603A - Method for obtaining arterial input function

Method for obtaining arterial input function

Info

Publication number
CN116342603A
CN116342603A (application number CN202310619923.4A)
Authority
CN
China
Prior art keywords
input function
obtaining
time
image
arterial input
Prior art date
Legal status
Granted
Application number
CN202310619923.4A
Other languages
Chinese (zh)
Other versions
CN116342603B (en)
Inventor
方蕙
刘欣
单晔杰
何京松
向建平
Current Assignee
Arteryflow Technology Co., Ltd.
Original Assignee
Arteryflow Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Arteryflow Technology Co., Ltd.
Priority to CN202310619923.4A
Publication of CN116342603A
Application granted
Publication of CN116342603B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • G06T2207/30104Vascular flow; Blood flow; Perfusion
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application relates to a method for obtaining an arterial input function, comprising: acquiring 3D images at different times based on the CT perfusion images, obtaining the histogram of the 3D image at each time, and determining the contrast-agent filling time from the histograms; taking the 3D image at the filling time as the reference image and registering the remaining 3D images to it; constructing a training data set for training the convolutional neural network, thereby obtaining a trained neural network model; and constructing an input data set, using the neural network model to screen out the time density curves of the voxels belonging to the vessel class in the input data set, and then obtaining the arterial input function. Because the method runs fully automatically on the trained neural network model, the accuracy and robustness of the parameter maps can be improved; and because the filling time is found from the histograms, at which the brain tissue structure is clear, the reliability of the training and input data sets is improved, and so is the classification accuracy of the deep network model.

Description

Method for obtaining arterial input function
Technical Field
The present application relates to the field of medical image processing, and in particular, to a method for obtaining an arterial input function.
Background
CT perfusion (CTP) scanning is commonly used to examine cerebrovascular diseases such as acute stroke, subarachnoid hemorrhage, and carotid occlusion. After contrast agent is injected into the intracranial vessels, the brain is repeatedly scanned in 3D to obtain a 4D result that includes a time dimension; hemodynamic parameter maps such as cerebral blood volume (CBV), cerebral blood flow (CBF), mean transit time (MTT), and time to peak of the residue function (Tmax) are then computed automatically to assess the cerebral perfusion state.
CTP automated computation generally includes image preprocessing, arterial input function (AIF) selection, deconvolution, parameter map generation, lesion volume calculation, and so on. The AIF takes part in the deconvolution: deconvolving the brain tissue time density curve (TDC) with the AIF yields the contrast-agent residue curve, and further calculation on the residue curve gives the above hemodynamic parameters and their maps.
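The patent does not write out the deconvolution model itself; for reference, the standard indicator-dilution formulation it alludes to is the following (background material, not part of the disclosure):

```latex
% Tissue TDC = CBF times (AIF convolved with the residue function R(t)):
C_{\mathrm{tissue}}(t) = \mathrm{CBF}\int_{0}^{t}\mathrm{AIF}(\tau)\,R(t-\tau)\,\mathrm{d}\tau,
\quad
\mathrm{CBV} = \frac{\int C_{\mathrm{tissue}}(t)\,\mathrm{d}t}{\int \mathrm{AIF}(t)\,\mathrm{d}t},
\quad
\mathrm{MTT} = \frac{\mathrm{CBV}}{\mathrm{CBF}},
\quad
T_{\max} = \arg\max_{t} R(t).
```

Solving this integral equation for R(t), given the measured TDC and the AIF, is the deconvolution step referred to above.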
AIF points are typically vessel voxels on the middle cerebral artery (MCA), whose curves typically show a high peak, a narrow peak width, and an early peak time. At present, AIF selection methods mainly include manual selection, clustering, and building a weighted model of curve features. Manual selection, however, is time-consuming, poorly repeatable, and dependent on operator experience; clustering may find it hard to exclude noise curves introduced by non-brain-tissue points, and its outcome may also depend on how well the skull was removed beforehand; and building a weighted curve-feature model requires hand-crafted features and a complex mathematical model.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method for obtaining an arterial input function.
The method for obtaining the arterial input function comprises the following steps:
acquiring 3D images at different times based on the CT perfusion images, obtaining the histogram of the 3D image at each time, and determining the contrast-agent filling time from the histograms;
taking the 3D image at the filling time as the reference image and registering the remaining 3D images to it;
constructing a training data set for training the convolutional neural network, thereby obtaining a trained neural network model;
and constructing an input data set, using the neural network model to screen out the time density curves of the voxels belonging to the vessel class in the input data set, and then obtaining the arterial input function.
Optionally, obtaining the filling time of the contrast agent based on each histogram specifically includes:
obtaining the CT values of all voxels in the histogram and the number of voxels at each CT value;
among all the histograms, taking the time of the histogram with the largest number of voxels meeting the expectation as the filling time.
Optionally, meeting the expectation means that the CT value is greater than a first threshold.
Optionally, the method further includes performing a filtering process on each of the 3D images, where the filtering process includes:
for any target voxel, filtering is performed using a spatial neighboring voxel to the target voxel and a temporal neighboring voxel to the target voxel.
Optionally, the method further includes brain tissue extraction on the 3D image after the filtering process is completed, and specifically includes:
and removing the background of the 3D image by using a threshold method, obtaining the mass center of the rest part, and obtaining the brain tissue region by using a region growing method based on the mass center.
Optionally, constructing a training data set for training the convolutional neural network specifically includes:
taking the maximum CT value of each voxel of the 3D images over the time dimension to obtain a maximum intensity projection image in the time dimension;
labeling the voxels of the brain tissue region on the maximum intensity projection image, the labels including vessel labels.
Optionally, constructing a training data set for training the convolutional neural network, further includes:
reading the time labels of the CT perfusion images, and processing the time labels to uniform length and uniform spacing;
and obtaining the time density curve of each voxel in the 3D images, with its vessel label, to form the training data set.
Optionally, further obtaining an arterial input function specifically includes:
sorting the time density curves of the voxels belonging to the vessel class by peak time;
retaining the voxels whose peak time is smaller than a second threshold;
and obtaining the arterial input function based on the time density curves of the remaining voxels.
Optionally, obtaining an arterial input function based on the time density curve of the remaining voxels specifically includes:
and selecting an expected number of time density curves with larger peak values from the rest voxels, and further obtaining an average curve to obtain the arterial input function.
Optionally, the second threshold is a quantile.
The method for obtaining the arterial input function has at least the following effects:
it runs fully automatically on a trained neural network model, so the accuracy and robustness of CT perfusion parameter-map calculation and analysis can be improved;
the filling time is found from the histograms, at which the CT values are relatively high and the brain tissue structure is clear, improving the reliability of the training and input data sets and the classification accuracy of the deep network model.
Drawings
FIG. 1 is a flow chart of a method of obtaining an arterial input function according to an embodiment of the present application;
FIG. 2 is a graph showing the result of the arterial input function obtained in step S400 of FIG. 1;
FIG. 3 is a flow chart of a method for obtaining an arterial input function according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of the convolutional neural network model according to an embodiment of the present application;
fig. 5 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
When calculating and analyzing CT perfusion parameter maps, the arterial input function AIF of the brain tissue must first be acquired. The AIF is the time density curve of the artery feeding the brain tissue; it participates in the deconvolution calculation, so selecting it is an important step in perfusion parameter-map analysis.
Referring to fig. 1, in one embodiment of the present application, a method for obtaining an arterial input function is provided, which includes steps S100 to S400. Wherein:
step S100, obtaining 3D images at different moments based on CT perfusion images, obtaining histograms of the 3D images at each moment, and obtaining filling moments of contrast agent based on each histogram;
step S200, taking the 3D image at the filling time as a reference image, and carrying out image registration on the rest 3D images;
step S300, a training data set for training a convolutional neural network is constructed, and a trained neural network model is obtained;
step S400, an input data set is constructed, a neural network model is utilized to screen and obtain a time density curve of voxels belonging to a blood vessel category in the input data set, and then an artery input function is obtained.
The method for obtaining the artery input function provided by the embodiment is performed fully automatically based on the trained neural network model, so that the accuracy and the robustness of calculation and analysis of the CT perfusion image parameter map can be improved.
In the process of acquiring the original CT perfusion images, positional offsets between the images at different times often occur because of patient motion, which affects the smoothness of the subsequent time density curves.
In this embodiment, the filling time is obtained using the histograms; at that time the CT values are relatively high and the brain tissue structure is clear, which helps improve the reliability of the training and input data sets and the classification accuracy of the deep network model.
In step S100, the filling time of the contrast agent is obtained based on each histogram, specifically including:
step S110, CT values of all voxels in the histogram and the number of voxels corresponding to the CT values are obtained;
step S120, regarding all histograms, regarding the time at which the histogram with the largest number of voxels that meet the expectation is located as the filling time. Specifically, it is expected that the CT value is larger than the first threshold, which may be, for example, a 100Hu value.
During CT perfusion acquisition, the CT value of the skull does not change when contrast agent is injected, whereas the CT values inside the vessels are affected by the contrast agent. In this embodiment the histogram focuses on the region affected by the contrast agent when determining the filling time, so the tissue structure at the filling time is clearer, which further facilitates image registration.
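As a minimal illustration (not part of the disclosure), the histogram test of steps S110-S120 can be sketched in Python; counting the voxels above the threshold is equivalent to summing the histogram bins above it:

```python
import numpy as np

def filling_time_index(volumes_4d, first_threshold=100.0):
    """Return the index of the contrast-filling frame: the 3D image whose
    histogram has the most voxels above the CT-value threshold.
    volumes_4d: array of shape (T, Z, Y, X) in HU; 100 HU is the example
    threshold given in the text above."""
    counts = [(frame > first_threshold).sum() for frame in volumes_4d]
    return int(np.argmax(counts))
```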
In an embodiment of the present application, a method for obtaining an arterial input function is further provided, corresponding to step S100 to step S400, for describing and explaining the working process in detail.
Referring to fig. 3, the main flow of the method for obtaining the arterial input function comprises training of the convolutional neural network and the arterial input function AIF calculation process. The training of the convolutional neural network comprises: (1) image preprocessing of the CT perfusion images; (2) training data set construction; (3) convolutional neural network training, generating a trained neural network model.
(1) Preprocessing of CT perfusion images, comprising: motion correction (corresponding to step S100 to step S200), filtering processing, and brain tissue extraction.
Motion correction is performed on the 4D CT perfusion images, which are the 3D images at the different times. During motion correction, histogram analysis is used to find the 3D image at the time of maximum contrast filling, which serves as the reference image; the 3D images at the other times are registered to it in sequence.
The image registration method comprises an image similarity measure, a transformation method, and an optimizer. The similarity measure may be mutual information or root mean square error; the transformation may be a quaternion rigid transformation or an Euler 3D rigid transformation; and the optimizer may be gradient descent or the L-BFGS quasi-Newton method.
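A minimal sketch of one of the listed option combinations (mutual information metric, Euler 3D rigid transform, gradient-descent optimizer), written here with SimpleITK; the parameter values are illustrative assumptions, not values from the patent:

```python
import SimpleITK as sitk

def register_to_reference(fixed, moving):
    """Rigidly register one 3D frame to the reference (filling-time) frame."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetInitialTransform(
        sitk.CenteredTransformInitializer(fixed, moving, sitk.Euler3DTransform()))
    reg.SetInterpolator(sitk.sitkLinear)
    transform = reg.Execute(fixed, moving)
    # resample the moving frame onto the reference frame's grid
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```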
The filtering process is performed on the 3D image at each time, i.e., on the motion-corrected CT perfusion images. The filter type may be Gaussian or bilateral, and the filter kernel may be 3D or 4D. Using a 4D filter kernel specifically means: for any target voxel, filtering uses the voxels spatially adjacent to the target voxel and the voxels temporally adjacent to it. The spatially adjacent voxels may be, for example, the 8 voxels directly adjacent in space, and the temporally adjacent voxels may be, for example, the voxels at the same spatial location at the preceding and following sampling points.
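The 4D-kernel option can be sketched with a separable Gaussian whose support spans both the spatial neighbours and the neighbouring time frames; the sigma values below are assumptions:

```python
from scipy.ndimage import gaussian_filter

def filter_4d(volumes_4d, sigma_t=1.0, sigma_xyz=1.0):
    """Gaussian smoothing over (t, z, y, x): each voxel is averaged with its
    spatial neighbours and with itself at adjacent sampling points."""
    return gaussian_filter(volumes_4d,
                           sigma=(sigma_t, sigma_xyz, sigma_xyz, sigma_xyz))
```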
Brain tissue extraction is performed on the filtered CT perfusion images: first the image background is removed with a threshold method, then the centroid of the remaining part is taken as the seed point, and a brain tissue region mask is obtained with a region-growing method. This avoids interference of non-brain-tissue regions with the training and input data sets and improves the reliability of the neural network model.
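A minimal sketch of this extraction step, assuming illustrative HU thresholds (the patent gives no numeric values); region growing is approximated by keeping the connected component of the intensity window that contains the centroid seed:

```python
import numpy as np
from scipy import ndimage

def brain_mask(volume, background_hu=-200.0, tissue_lo=0.0, tissue_hi=100.0):
    """Threshold away the background, seed at the centroid of what remains,
    and grow the brain tissue region from that seed."""
    body = volume > background_hu                          # remove air background
    cz, cy, cx = (int(round(c)) for c in ndimage.center_of_mass(body))
    window = (volume > tissue_lo) & (volume < tissue_hi)   # soft-tissue range
    labels, _ = ndimage.label(window)                      # connected components
    return labels == labels[cz, cy, cx]                    # component with the seed
```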
(2) Training data set construction, corresponding to step S300. It specifically comprises: taking the maximum CT value of each voxel over the time dimension to obtain a maximum intensity projection image in the time dimension, and labeling the voxels of the brain tissue region on that image, the labels including vessel labels. The brain tissue voxels are manually labeled into 4 classes, e.g. vessel, brain tissue, ventricle, and other tissue. This can be done with a common labeling tool by painting the region of each class on the image; the tool then outputs the 3D coordinates of each class of voxels. It should be understood that the voxel class label is a label of a spatial position, and annotating on the maximum intensity projection image makes the structures clearer to observe.
Further, constructing the training data set also includes: reading the time labels of the CT perfusion images, and processing the time labels to uniform length and uniform spacing; and obtaining the time density curve of each voxel in the 3D images, with its vessel label, to form the training data set. The uniform-length and uniform-spacing processing also involves interpolating the CT perfusion images.
Specifically, the time density curve of each voxel is cubic-spline interpolated at 1-second intervals and extended to 100 seconds, i.e., 100 values in the time dimension, using the last value of each curve. According to the coordinates of each labeled voxel, the interpolated time density curves of that voxel and the 8 voxels around it are extracted from the CT perfusion images and rearranged into a 9 (voxels) × 100 (time points) matrix; each matrix carries its labeled class number, forming the training data set.
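A sketch of this sample construction, assuming the "8 voxels around each voxel" are the in-plane 3×3 neighbourhood (the patent does not specify which neighbourhood is meant):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def build_sample(volumes_4d, times, z, y, x):
    """Build one 9 x 100 matrix: cubic-spline-resampled TDCs of a voxel and
    its 8 in-plane neighbours on a 1-second grid, padded to 100 s with each
    curve's last value. volumes_4d: (T, Z, Y, X); times in seconds."""
    grid = np.arange(100.0)                            # 100 values, 1 s apart
    rows = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            tdc = volumes_4d[:, z, y + dy, x + dx].astype(float)
            spline = CubicSpline(times, tdc)
            curve = spline(np.clip(grid, times[0], times[-1]))
            curve[grid > times[-1]] = tdc[-1]          # pad with the last value
            rows.append(curve)
    return np.stack(rows)                              # shape (9, 100)
```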
Referring to fig. 4, the convolutional neural network may adopt the following eight-layer structure: the first layer is the input layer, which feeds the training data set into the convolutional neural network model; the second layer is a convolution layer with kernel size 3×11, 20 kernels, stride 1, a ReLU activation function, and an output of size 20×7×90; the third layer is a pooling layer using max pooling with filter size 1×2 and vertical and horizontal strides of 1 and 2, giving an output of size 20×7×45; the fourth layer is a convolution layer with kernel size 3×11, 40 kernels, stride 1, a ReLU activation function, and an output of size 40×5×35; the fifth layer is a pooling layer using max pooling with filter size 1×2 and vertical and horizontal strides of 1 and 2, giving an output of size 40×5×17; the sixth layer is a fully connected layer with 512 nodes; the seventh layer is a fully connected layer with 128 nodes; and the eighth layer is the output layer, which uses a Softmax function to give the probabilities that the result is each of the 4 tissue classes.
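The layer stack above can be written down directly; the following PyTorch sketch reproduces the stated kernel sizes and output shapes (the framework choice and the activations between the fully connected layers are our assumptions, not the patent's). The flattened size entering the sixth layer is 40·5·17 = 3400:

```python
import torch.nn as nn

class AIFClassifier(nn.Module):
    """CNN following the eight-layer design above; input is a (1, 9, 100)
    sample, output is 4 class logits (softmax applied at inference)."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=(3, 11)),            # -> 20 x 7 x 90
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2), stride=(1, 2)),  # -> 20 x 7 x 45
            nn.Conv2d(20, 40, kernel_size=(3, 11)),           # -> 40 x 5 x 35
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2), stride=(1, 2)),  # -> 40 x 5 x 17
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(40 * 5 * 17, 512), nn.ReLU(),           # sixth layer
            nn.Linear(512, 128), nn.ReLU(),                   # seventh layer
            nn.Linear(128, num_classes),                      # eighth layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```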
Training the convolutional neural network specifically comprises: using a cross-entropy loss function, updating the network parameters iteratively with gradient descent, and adjusting the learning rate according to the training progress. The structure and parameters of the convolutional neural network model are saved, yielding the trained neural network model used to calculate the arterial input function.
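A minimal training-loop sketch matching this description (cross-entropy loss, gradient-descent updates, stepwise learning-rate adjustment); the optimiser variant, epoch count, and schedule are assumptions:

```python
import torch

def train(model, loader, epochs=50, lr=1e-3):
    loss_fn = torch.nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)
    for _ in range(epochs):
        for samples, labels in loader:        # samples: (B, 1, 9, 100)
            opt.zero_grad()
            loss = loss_fn(model(samples), labels)
            loss.backward()
            opt.step()
        sched.step()                          # adjust the learning rate
    torch.save(model.state_dict(), "aif_classifier.pt")  # keep the parameters
```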
The convolutional neural network (CNN) approach handles classification problems well: a properly trained model can select the AIF curve correctly, and the result is objective, avoiding the subjectivity of manual selection. In addition, training of the model is performed separately; in actual computation only the input data set is fed into the already-trained neural network model, which improves computational efficiency and accuracy and speeds up the analysis.
The arterial input function AIF calculation process includes: (4) preprocessing of the CT perfusion images; (5) inputting the image data into the neural network model and outputting the classification result; (6) analyzing the peak time and peak value of the curves to finally obtain the arterial input function AIF curve.
Steps (4) to (6) correspond to step S400. It will be appreciated that constructing the input data set is substantially the same as constructing the training data set, except that the input data set does not carry the labels that the training data set does.
Specifically, after preprocessing is completed, the time labels of the preprocessed CT perfusion images are read, the time density curve of each voxel is cubic-spline interpolated at 1-second intervals, and each curve is extended to 100 seconds, i.e., 100 values in the time dimension, using its last value. According to the coordinates of each voxel, the interpolated time density curves of that voxel and the 8 voxels around it are extracted from the CT perfusion images and arranged into a 9 × 100 matrix, forming the input data set.
The input data set is fed into the trained neural network model to obtain the classification result of the voxels; the classification result includes at least the vessel class.
The curve peak-time analysis specifically comprises: sorting the time density curves of the voxels belonging to the vessel class by peak time, retaining the voxels whose peak time is smaller than a second threshold, and obtaining the arterial input function from the time density curves of the remaining voxels. In particular, the second threshold is a quantile, for example the 10% quantile.
Specifically, the original time density curves of the voxels classified as vessel are collected, the peak time of each curve is calculated, and the curves are sorted by it; the 10% quantile is used as the second threshold, and the voxels whose peak time exceeds it are filtered out.
Peak-value analysis, i.e. obtaining the arterial input function from the time density curves of the remaining voxels, specifically comprises: selecting, from the remaining voxels, an expected number of the time density curves with the largest peak values, and averaging them to obtain the arterial input function.
Specifically, among the remaining voxels the peak values of the time density curves are ranked, the expected number (e.g. 5) of curves with the largest peaks are selected, and their average curve is computed to obtain the arterial input function AIF.
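These two analysis steps reduce to a few array operations; a sketch with NumPy, where vessel_tdcs holds the original curves of the voxels the network classified as vessel:

```python
import numpy as np

def arterial_input_function(vessel_tdcs, times, q=0.10, n_curves=5):
    """Keep the curves whose time-to-peak lies below the q-quantile, then
    average the n_curves curves with the largest peak values.
    vessel_tdcs: (N, T) array; times: (T,) acquisition times in seconds."""
    ttp = times[np.argmax(vessel_tdcs, axis=1)]            # time to peak
    kept = vessel_tdcs[ttp < np.quantile(ttp, q)]          # early-peaking curves
    top = kept[np.argsort(kept.max(axis=1))[-n_curves:]]   # largest peaks
    return top.mean(axis=0)                                # averaged AIF curve
```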
It should be understood that, although the steps in the flowcharts of fig. 1 and fig. 3 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 1 and fig. 3 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and not necessarily sequentially, but possibly in turns or alternately with at least part of the other steps or of the sub-steps or stages of other steps.
An embodiment of the present application provides an apparatus for obtaining an arterial input function, where the apparatus is configured to perform a method for obtaining an arterial input function, and the method includes:
step S100, obtaining 3D images at different moments based on CT perfusion images, obtaining histograms of the 3D images at each moment, and obtaining filling moments of contrast agent based on each histogram;
step S200, taking the 3D image at the filling time as a reference image, and carrying out image registration on the rest 3D images;
step S300, a training data set for training a convolutional neural network is constructed, and a trained neural network model is obtained;
step S400, an input data set is constructed, a neural network model is utilized to screen and obtain a time density curve of voxels belonging to a blood vessel category in the input data set, and then an artery input function is obtained.
The means for obtaining an arterial input function may for example be a computer device, which may be a terminal, the internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of obtaining an arterial input function. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but as long as a combination is not contradictory it should be considered within the scope of this description. When technical features of different embodiments appear in the same drawing, the drawing can be regarded as also disclosing the combination of the embodiments concerned.
The above examples merely represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the invention. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (10)

1. A method of obtaining an arterial input function, the method comprising:
acquiring 3D images at different times based on the CT perfusion images, obtaining the histogram of the 3D image at each time, and determining the contrast-agent filling time from the histograms;
taking the 3D image at the filling time as the reference image and registering the remaining 3D images to it;
constructing a training data set for training the convolutional neural network, thereby obtaining a trained neural network model;
and constructing an input data set, using the neural network model to screen out the time density curves of the voxels belonging to the vessel class in the input data set, and then obtaining the arterial input function.
2. The method of deriving an arterial input function as claimed in claim 1, wherein deriving a filling time of the contrast agent based on each of said histograms comprises:
obtaining the CT values of all voxels in the histogram and the number of voxels at each CT value;
among all the histograms, taking the time of the histogram with the largest number of voxels meeting the expectation as the filling time.
3. The method of deriving an arterial input function as claimed in claim 2, wherein meeting the expectation means that the CT value is greater than a first threshold.
4. The method of deriving an arterial input function according to claim 1, further comprising filtering each of the 3D images, the filtering comprising:
for any target voxel, filtering is performed using the voxels spatially adjacent to the target voxel and the voxels temporally adjacent to it.
5. The method for obtaining an arterial input function according to claim 4, further comprising brain tissue extraction of the 3D image after the filtering process is completed, specifically comprising:
and removing the background of the 3D image with a threshold method, obtaining the centroid of the remaining part, and obtaining the brain tissue region with a region-growing method based on the centroid.
6. The method of deriving an arterial input function as claimed in claim 1, wherein constructing a training dataset for training a convolutional neural network comprises:
taking the maximum CT value of each voxel of the 3D images over the time dimension to obtain a maximum intensity projection image in the time dimension;
labeling the voxels of the brain tissue region on the maximum intensity projection image, the labels including vessel labels.
7. The method of deriving an arterial input function according to claim 6, wherein constructing a training dataset for training a convolutional neural network further comprises:
reading the time labels of the CT perfusion images, and processing the time labels to uniform length and uniform spacing;
and obtaining the time density curve of each voxel in the 3D images, with its vessel label, to form the training data set.
8. The method for obtaining an arterial input function as claimed in claim 1, further comprising:
sorting the time density curves of the voxels belonging to the vessel class by peak time;
retaining the voxels whose peak time is smaller than a second threshold;
and obtaining the arterial input function based on the time density curves of the remaining voxels.
9. The method of deriving an arterial input function as claimed in claim 8, wherein deriving an arterial input function based on the time density curve of the remaining voxels comprises:
and selecting, from the remaining voxels, an expected number of the time density curves with the largest peak values, and averaging them to obtain the arterial input function.
10. The method of deriving an arterial input function according to claim 8, wherein the second threshold is a quantile.
CN202310619923.4A 2023-05-30 2023-05-30 Method for obtaining arterial input function Active CN116342603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310619923.4A CN116342603B (en) 2023-05-30 2023-05-30 Method for obtaining arterial input function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310619923.4A CN116342603B (en) 2023-05-30 2023-05-30 Method for obtaining arterial input function

Publications (2)

Publication Number Publication Date
CN116342603A (application publication) 2023-06-27
CN116342603B (granted patent) 2023-08-29

Family

ID=86876264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310619923.4A Active CN116342603B (en) 2023-05-30 2023-05-30 Method for obtaining arterial input function

Country Status (1)

Country Link
CN (1) CN116342603B (en)


Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19645579A1 (en) * 1996-10-18 1998-04-23 N I R Control Medizintechnik G Heart and blood circulation system function evaluation device
CN105559810A (en) * 2015-12-10 2016-05-11 上海交通大学 Computing method of blood flow volume and blood flow velocity of blood vessel per unit time
US20190150764A1 (en) * 2016-05-02 2019-05-23 The Regents Of The University Of California System and Method for Estimating Perfusion Parameters Using Medical Imaging
CN106803241A (en) * 2017-01-20 2017-06-06 深圳市安健科技股份有限公司 The processing method and processing device of angiographic image
CN209269682U (en) * 2018-01-05 2019-08-20 博动医学影像科技(上海)有限公司 Quantitative blood flow Fraction analysis equipment with registration function
CN109726753A (en) * 2018-12-25 2019-05-07 脑玺(上海)智能科技有限公司 The dividing method and system of perfusion dynamic image based on time signal curve
CN110827236A (en) * 2019-09-25 2020-02-21 平安科技(深圳)有限公司 Neural network-based brain tissue layering method and device, and computer equipment
CN110992351A (en) * 2019-12-12 2020-04-10 南京邮电大学 sMRI image classification method and device based on multi-input convolutional neural network
WO2022069883A1 (en) * 2020-09-29 2022-04-07 King's College London Method and system for estimating arterial input function
CN114037626A (en) * 2021-10-28 2022-02-11 上海联影医疗科技股份有限公司 Blood vessel imaging method, device, equipment and storage medium
CN114119688A (en) * 2021-12-02 2022-03-01 北京邮电大学 Single-mode medical image registration method before and after coronary angiography based on deep learning
CN114451907A (en) * 2022-01-27 2022-05-10 上海市普陀区人民医院(上海纺织第一医院) Method and system for calculating CT angiography parameters
CN115861475A (en) * 2022-11-18 2023-03-28 上海联影医疗科技股份有限公司 Method and device for determining artery input function curve and computer equipment
CN115797387A (en) * 2022-12-16 2023-03-14 杭州脉流科技有限公司 Method and device for full-automatic selection of artery input function of CT perfusion image
CN115880159A (en) * 2022-12-16 2023-03-31 杭州脉流科技有限公司 Method and computer readable storage medium for CT perfusion image parameter map correction
CN116013533A (en) * 2022-12-30 2023-04-25 上海联影医疗科技股份有限公司 Training method, evaluation method and system of hemodynamic evaluation model of coronary artery
CN116091444A (en) * 2023-01-05 2023-05-09 东软医疗系统股份有限公司 Side branch evaluation method and device, storage medium and terminal
CN116109618A (en) * 2023-03-03 2023-05-12 上海联影医疗科技股份有限公司 Vascular imaging method, vascular imaging device, electronic equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JOSEPH KETTELKAMP et al.: "Arterial input function and tracer kinetic model-driven network for rapid inference of kinetic maps in dynamic contrast-enhanced MRI (AIF-TK-net)", 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI).
CHEN Xiaoxia et al.: "Echocardiographic study of left ventricular diastolic function in hypertensive patients", Medical Review, vol. 13, no. 14, p. 1111.

Also Published As

Publication number Publication date
CN116342603B (en) 2023-08-29

Similar Documents

Publication Publication Date Title
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
CN110689038B (en) Training method and device for neural network model and medical image processing system
WO2019200753A1 (en) Lesion detection method, device, computer apparatus and storage medium
CN110197492A (en) A kind of cardiac MRI left ventricle dividing method and system
CN110717905B (en) Brain image detection method, computer device, and storage medium
US11263744B2 (en) Saliency mapping by feature reduction and perturbation modeling in medical imaging
US20220335600A1 (en) Method, device, and storage medium for lesion segmentation and recist diameter prediction via click-driven attention and dual-path connection
CN112308846B (en) Blood vessel segmentation method and device and electronic equipment
CN113554131B (en) Medical image processing and analyzing method, computer device, system and storage medium
WO2021011775A1 (en) Systems and methods for generating classifying and quantitative analysis reports of aneurysms from medical image data
CN114170440A (en) Method and device for determining image feature points, computer equipment and storage medium
CN116681716B (en) Method, device, equipment and storage medium for dividing intracranial vascular region of interest
CN116342603B (en) Method for obtaining arterial input function
CN111160441B (en) Classification method, computer device, and storage medium
CN111932575A (en) Image segmentation method and system based on fuzzy C-means and probability label fusion
CN111971751A (en) System and method for evaluating dynamic data
CN114693671A (en) Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning
CN111210414B (en) Medical image analysis method, computer device, and readable storage medium
Korabelnikov et al. Liver tumor segmentation ct data based on alexnet-like convolutional neural nets
Li et al. Segmentation evaluation with sparse ground truth data: Simulating true segmentations as perfect/imperfect as those generated by humans
CN113160199A (en) Image recognition method and device, computer equipment and storage medium
CN112348796A (en) Cerebral hemorrhage segmentation method and system based on combination of multiple models
CN112862785A (en) CTA image data identification method, device and storage medium
CN112862786A (en) CTA image data processing method, device and storage medium
CN111428812A (en) Construction method and device of medical image data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant