JP4714228B2 — Index calculation method, apparatus and storage medium for blood flow dynamics of capillaries in brain tissue
Publication number: JP4714228B2 (application JP2008010951A)
Authority: JP (Japan)
Prior art keywords: pixel, time, value, brain tissue, image
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications

 A—HUMAN NECESSITIES
 A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
 A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
 A61B6/00—Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
 A61B6/50—Clinical applications
 A61B6/507—Clinical applications involving determination of haemodynamic parameters, e.g. perfusion CT
Description
The present invention relates to an index calculation method and calculation apparatus related to blood flow dynamics of capillaries in brain tissue.
Conventionally, in X-ray CT examinations, morphological information can be obtained from plain CT images, and dynamic information on blood flow around a lesion can be obtained as visual information by dynamic scanning with contrast-enhanced CT. In recent years, high-speed scanning by multi-slice CT has become possible, and the range of application of dynamic scanning with contrast-enhanced CT is expected to expand.
One such direction is a method called the CBP study, which calculates indices related to the blood flow dynamics of capillaries in brain tissue. In the CBP study, indices such as CBP, CBV, MTT, and Err that quantitatively indicate the dynamics of blood flow through the capillaries are obtained, and a map of these indices is output.
CBP represents the blood flow per unit volume and unit time in the brain tissue [ml/100 ml/min], CBV represents the blood volume per unit volume of brain tissue [ml/100 ml], and MTT represents the mean transit time of blood through the capillaries [seconds].
The indices CBP, CBV, and MTT, which quantitatively represent the blood flow dynamics of the capillaries in brain tissue, together with information on the time elapsed since the onset of cerebral ischemia, are expected to provide useful information for identifying the pathophysiology of ischemic cerebrovascular disorders and for evaluating the patency of capillaries and the blood flow velocity. For example, in an ischemic cerebrovascular disorder, the blood pressure of the feeding artery generally falls and the blood flow velocity in the vessel decreases; as a result, even if CBV is constant, MTT is prolonged and CBP is lowered. In the hyperacute phase of cerebral infarction, to compensate for the decrease in flow velocity caused by the fall in blood pressure, the capillaries dilate and the blood volume increases so as to suppress the decrease in CBP (autoregulation). Therefore, even when CBP decreases because MTT is prolonged, an accompanying increase in CBV is information suggesting that the capillaries may be recanalized.
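The qualitative relations above (constant CBV with prolonged MTT lowers CBP) are commonly linked by the central volume principle, CBP = CBV / MTT, with a factor of 60 converting MTT from seconds to minutes. A minimal sketch of that arithmetic; the function name and the numeric values are illustrative, not taken from the patent:

```python
def cbp_from_cbv_mtt(cbv_ml_per_100ml, mtt_seconds):
    """Central volume principle (illustrative helper):
    CBP [ml/100 ml/min] = CBV [ml/100 ml] / MTT [min].
    MTT is supplied in seconds, hence the factor of 60."""
    return cbv_ml_per_100ml * 60.0 / mtt_seconds

# Hypothetical values: CBV = 4 ml/100 ml, MTT = 4 s -> CBP = 60 ml/100 ml/min
print(cbp_from_cbv_mtt(4.0, 4.0))  # 60.0
# Prolonging MTT at constant CBV lowers CBP, as the text describes:
print(cbp_from_cbv_mtt(4.0, 8.0))  # 30.0
```
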
In the CBP study, a contrast agent having no cerebral vascular permeability, such as an iodinated contrast agent, is used as the tracer. The iodinated contrast agent is injected into, for example, the cubital vein by an injector. The intravenously injected contrast agent reaches the cerebral arteries via the heart and lungs, and then flows out from the cerebral arteries to the cerebral veins through the capillaries in the brain tissue. During this passage, the iodinated contrast agent passes through the capillaries of normal brain tissue without leaking out of the blood vessels. FIG. 1 schematically shows this state.
The passage of the contrast agent is imaged by dynamic CT, and from the resulting series of images the time-density curve Ca(t) of a cerebral artery pixel, the time-density curve Ci(t) of a brain tissue pixel containing capillaries, and the time-density curve Csss(t) of a cerebral vein pixel are measured.
Here, the CBP study analyzes, as a model, an ideal relationship that holds between the contrast agent concentration Ca(t) in the blood of a cerebral artery close to the brain tissue and the concentration Ci(t) in the capillaries. That is, if the contrast medium were injected into a blood vessel immediately before it enters the brain tissue, the time-density curve of a unit volume (one pixel) of brain tissue containing capillaries would rise vertically, remain constant for a while, and then fall steeply. This is approximated by a rectangular function (box-MTF method: box Modulation Transfer Function method).
Using the cerebral artery time-density curve Ca(t) as the input function and the brain tissue time-density curve Ci(t) as the output function, the characteristics of the process by which the tracer passes through the capillaries can be obtained as a rectangular transfer function.
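As a rough illustration of this model (not the patent's actual estimation procedure), the tissue curve can be sketched as the discrete convolution of the arterial input Ca(t) with a rectangular transfer function; the function name, sample counts, box width, and height below are all arbitrary assumptions:

```python
def box_mtf_response(ca, mtt_samples, height):
    """Convolve an arterial input curve Ca(t) with a rectangular (box)
    transfer function of width mtt_samples and the given height.
    Discrete sketch of the box-MTF idea; names are illustrative."""
    n = len(ca)
    ci = [0.0] * n
    for t in range(n):
        # Only the last mtt_samples input values contribute (box support).
        for tau in range(min(t + 1, mtt_samples)):
            ci[t] += height * ca[t - tau]
    return ci

# An impulse input reproduces the box itself: a vertical rise, a plateau,
# and a steep fall, as the text describes.
impulse = [1.0] + [0.0] * 9
print(box_mtf_response(impulse, 4, 0.25))
```
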
The first problem in such a CBP study is as follows. When a bolus injection is made into the cubital vein, the contrast effect observed by CT raises the CT value of blood (several tens of HU without contrast) to at most several hundred HU. However, in order to analyze cerebral blood flow effectively, the contrast effect must be measurable with an error of no more than a few percent. That is, a blood contrast effect (CT value increase) of only about 20 to 40 HU must still be detectable.
The volume ratio of capillaries in a unit volume of brain tissue is at most about 3 to 4 percent. Therefore, when the CT value of blood increases by 20 to 40 HU, the average CT value of brain tissue only increases by about 0.5 to 1.5 HU.
In CT images, the standard deviation (sd) of noise is inversely proportional to the square root of the irradiated X-ray dose; under typical irradiation conditions, sd is, for example, about 5 to 10 HU. Therefore, in order to detect a contrast effect of 0.5 HU, the X-ray dose would have to be increased by roughly 10 to 100 times, which would significantly increase the patient's exposure. Furthermore, in dynamic CT the same part is imaged several tens of times, so the skin exposure at the imaged part would be several hundred to several thousand times the normal dose, and radiation damage such as inflammation, hair loss, necrosis, and carcinogenesis could occur.
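Since sd scales as one over the square root of the dose, the dose multiplier needed to reach a target sd follows directly. A one-line sketch (the helper name and the example values are illustrative):

```python
def dose_factor_for_target_sd(sd_current, sd_target):
    """Noise sd is inversely proportional to the square root of the X-ray
    dose, so reaching sd_target requires scaling the dose by
    (sd_current / sd_target) ** 2.  Illustrative helper."""
    return (sd_current / sd_target) ** 2

# Halving the noise costs a fourfold dose:
print(dose_factor_for_target_sd(10.0, 5.0))  # 4.0
# Pushing sd from 5 HU down to 0.5 HU costs a hundredfold dose:
print(dose_factor_for_target_sd(5.0, 0.5))  # 100.0
```
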
Rather, in dynamic CT the X-ray dose must be reduced compared with normal imaging. In general, the X-ray dose per scan is reduced to, for example, about 1/2 to 1/10 of the usual amount. As a result, the total X-ray exposure can be kept to several to 20 times that of a single normal CT scan, a level that does not cause radiation damage. However, in such a reduced-dose CT image, sd is, for example, about 15 to 20 HU, and a contrast effect of about 0.5 to 1.5 HU cannot be detected.
Therefore, suppressing the noise component of the image is one of the most important issues in the CBP study. The generally available measures are 1) increasing the slice thickness, 2) bundling pixels, and 3) smoothing the image. However, these have the following problems.
To “increase the slice thickness”, the slice thickness is set large at the time of imaging, or a thick-slice image is generated by averaging several consecutive thin-slice images. Since the X-ray dose per pixel increases in proportion to the slice thickness, the image noise sd decreases in inverse proportion to the square root of the slice thickness. However, increasing the slice thickness causes a partial volume effect: one pixel no longer represents a uniform brain tissue but an average CT value of multiple tissues (white matter, gray matter, blood vessels, sulci, ventricles, etc.), and the error in the cerebral blood flow values obtained as the analysis result increases.
In particular, normal analysis is not possible for pixels affected by blood vessels. For this reason, when the slice thickness is increased, only a very poor result is obtained, containing many pixels that are inaccurate or cannot be analyzed.
“Pixel bundling” sacrifices some spatial resolution. The average value of a square area with n pixels on a side (containing n × n pixels) is obtained and taken as the average CT value of the entire square; each such square is then treated as one pixel, yielding a “pixel-bundled image”. For example, if the original image has 512 pixels on a side (512 × 512 pixels) and n = 2, the pixel-bundled image has 512/2 = 256 pixels on a side (256 × 256 pixels). With this method, noise can be reduced in inverse proportion to n. Furthermore, since the number of pixels to be analyzed becomes 1/(n × n) times, the amount of calculation is also reduced.
However, when n is increased, the spatial resolution falls, causing a corresponding partial volume effect: one pixel no longer represents a uniform brain tissue but is likely to represent an average CT value of multiple tissues (white matter, gray matter, blood vessels, sulci, ventricles, etc.), and the error in values such as cerebral blood flow obtained as the analysis result increases. In particular, normal analysis is not possible for pixels affected by blood vessels. For this reason, if n is increased, only a very poor result is obtained, with low spatial resolution and many inaccurate or unanalyzable pixels. In practice, therefore, n = 2 to 4 is the limit, and a sufficient noise suppression effect cannot be obtained by this alone.
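The n × n bundling described above can be sketched in a few lines of plain Python (the helper name and the toy image are ours):

```python
def bundle_pixels(image, n):
    """Average non-overlapping n x n blocks ('pixel bundling').
    A 512 x 512 image with n = 2 becomes 256 x 256; for uncorrelated
    noise, sd drops roughly in proportion to 1/n."""
    rows, cols = len(image), len(image[0])
    out = []
    for i in range(0, rows, n):
        row = []
        for j in range(0, cols, n):
            block = [image[i + di][j + dj] for di in range(n) for dj in range(n)]
            row.append(sum(block) / (n * n))
        out.append(row)
    return out

img = [[1.0, 3.0, 5.0, 7.0],
       [1.0, 3.0, 5.0, 7.0],
       [2.0, 2.0, 6.0, 6.0],
       [2.0, 2.0, 6.0, 6.0]]
print(bundle_pixels(img, 2))  # [[2.0, 6.0], [2.0, 6.0]]
```
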
With image smoothing, that is, applying a two-dimensional spatial filter to each CT image, spatial resolution is significantly impaired in exchange for a sufficient noise suppression effect. In particular, a pixel near a thick blood vessel (artery or vein) is affected by the contrast effect arising in that vessel, and the time-density curve of such a pixel becomes incorrect. Therefore, only slight smoothing can be performed, and it is important to keep the image filter very small, for example about 3 × 3. Even when the maximum noise suppression is sought with a 3 × 3 smoothing filter, the upper limit is a reduction of the noise sd to 1/3, and no further suppression is possible. Therefore, a sufficient noise suppression effect cannot be obtained.
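A uniform 3 × 3 mean filter of the kind discussed (interior pixels only, for brevity; the implementation is our sketch, not the patent's):

```python
def smooth3x3(image):
    """Uniform 3 x 3 spatial smoothing of interior pixels.
    Averaging 9 uncorrelated pixels reduces noise sd at best to 1/3;
    no further suppression is possible at this kernel size."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            out[i][j] = sum(image[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
    return out

flat = [[9.0] * 3 for _ in range(3)]
print(smooth3x3(flat)[1][1])  # 9.0: a flat region is unchanged
spike = [[0.0] * 3, [0.0, 9.0, 0.0], [0.0] * 3]
print(smooth3x3(spike)[1][1])  # 1.0: detail is blurred away, hence the
                               # loss of spatial resolution noted above
```
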
On the other hand, with temporal smoothing, that is, a technique in which the time-density curve obtained for each pixel is regarded as a curve and smoothed with a one-dimensional filter, the time resolution is significantly impaired in exchange for a sufficient noise suppression effect. Dynamic CT in a CBP study is performed with a short sampling period precisely to obtain high temporal resolution, so that slight, physiologically meaningful changes in the time-density curve can be captured; temporal smoothing is therefore not appropriate at all.
An object of the present invention is to improve the analysis accuracy of the CBP study by suppressing noise while suppressing the loss of spatial and temporal resolution.
In the present invention, the similarity between the pixels constituting each of a plurality of CT images, obtained by continuously imaging the brain of a subject into which a contrast medium having no cerebral vascular permeability has been injected, is calculated on the basis of the time-density curve of each pixel over the plurality of CT images; the pixels of each image are weighted-averaged with weights according to this similarity; from the plurality of weighted-averaged CT images, the time-density curve of a cerebral artery pixel and the time-density curve of a brain tissue pixel are created; the transfer function of the brain tissue time-density curve with respect to the cerebral artery time-density curve is calculated; and, based on this transfer function, multiple types of indices related to the blood flow dynamics of the capillaries in the brain tissue are calculated.
According to the present invention, by using a coherent filter or coherent regression, noise can be suppressed while the loss of spatial and temporal resolution is suppressed, thereby improving the analysis accuracy of the CBP study.
Hereinafter, the present invention will be described by way of preferred embodiments with reference to the drawings.
A feature of this embodiment is that a coherent filter is used to suppress noise effectively while suppressing the loss of spatial and temporal resolution, so that indices such as CBP, which quantitatively represent the blood flow dynamics of capillaries in brain tissue, are calculated with high accuracy.
(Device configuration)
FIG. 2 shows the configuration of the X-ray CT apparatus according to the present embodiment. The X-ray CT apparatus includes a gantry unit 10 and a computer device 20. The gantry unit 10 includes an X-ray tube 101, a high voltage generator 101a, an X-ray detector 102, and a data acquisition unit 103 (DAS: Data Acquisition System). The X-ray tube 101 and the X-ray detector 102 are mounted at positions facing each other across the subject P on a rotating ring (not shown) that rotates continuously at high speed.
The computer device 20 includes an image processing device 30, an image display unit 107, and an input unit 109. The image processing device 30 comprises, centered on a control unit 108: a preprocessing unit 104 that converts raw data output from the data acquisition unit 103 into projection data through correction processing and the like; a memory unit 105 that stores the projection data; an image reconstruction unit 106 that reconstructs CT image data from the projection data; a storage device 10M that stores the CT image data; a coherent filter processing unit 110 that applies coherent filter processing to the CT image data; and a CBP study processing unit 120 that executes CBP study processing using the coherent-filtered CT image data.
The coherent filter processing unit 110 includes a variance value estimation unit 111, a weight function calculation unit 112, and a pixel value calculation unit (coherent filter unit) 113. The functions of the variance value estimation unit 111, the weight function calculation unit 112, and the pixel value calculation unit 113 will be described in the detailed description of coherent filter processing described later.
The CBP study processing unit 120 comprises an ROI setting support unit 121, a time-density curve creation unit 122, a cerebral artery time-density curve correction unit 123, an MTF processing unit 124, an index calculation unit 125, a map creation unit 126, and a map composition unit 127.
The ROI setting support unit 121 creates and provides information (an AT map, PT map, TT map, etc. for the cerebral artery ROI) for supporting the operation of setting regions of interest (ROI) for the cerebral arteries and cerebral veins on the CT image.
Cerebral artery ROIs are set individually in the left and right brain regions for the anterior cerebral artery (ACA), middle cerebral artery (MCA), and posterior cerebral artery (PCA); accordingly, a total of six cerebral artery ROIs are set, three on each side. In addition, another time-density curve Csss(t) is used to correct the cerebral artery time-density curve Ca(t). This curve Csss(t) is created for an ROI set on a blood vessel thick enough to contain pixels free of partial volume; for example, it is preferable to set it on the superior sagittal sinus, the thickest of the cerebral vessels.
The time-density curve creation unit 122 creates time-density curves for the cerebral arteries, cerebral veins, and brain tissue (capillaries) from the dynamic CT image data (a plurality of temporally continuous image data sets) stored in the storage device 10M. The cerebral artery time-density curve Ca(t) is created individually for each of the six cerebral artery ROIs. The cerebral vein time-density curve Csss(t) is created for the cerebral vein ROI set on the superior sagittal sinus. The brain tissue time-density curve Ci(t) is created pixel by pixel for all pixels of the brain tissue.
The cerebral artery time-density curve correction unit 123 corrects the cerebral artery time-density curve Ca(t) on the basis of the superior sagittal sinus time-density curve Csss(t) in order to remove the influence of noise and the partial volume effect; this correction method will be described later. The MTF processing unit 124 calculates the transfer function MTF of the brain tissue by the box-MTF method from the corrected cerebral artery time-density curve Ca(t) and the brain tissue time-density curve Ci(t); the calculation is performed pixel by pixel for all pixels.
The index calculation unit 125 calculates the indices (CBP, CBV, MTT, Err) representing the blood flow dynamics of the brain tissue from the calculated transfer function MTF for every pixel of the brain tissue. The map creation unit 126 generates a map of each calculated index for each cerebral artery (ACA, MCA, PCA). For each slice, number of index types (4) × number of cerebral artery types (3: ACA, MCA, PCA) = 12 maps are created; with multi-slice imaging, this number is further multiplied by the number of slices. The map composition unit 127 is provided in order to reduce this enormous number of maps by composition processing and thereby improve diagnostic efficiency.
Hereinafter, the coherent filter process and the CBP study process will be described in order.
The principle of the coherent filter will be described briefly. It is based on taking a weighted average over a local neighborhood of pixels, such as 3 × 3, and using the weighted average value as the value of the central pixel of that neighborhood; its characteristic feature is that the weight of each surrounding pixel is changed according to its similarity to the central pixel. The similarity referred to here is an index indicating how likely it is that the pixels are close in anatomical structure, specifically, that they contain capillaries under the control of the same cerebral artery. By giving a high weight to pixels with high similarity and, conversely, a weight close to zero to pixels with low similarity, noise can be suppressed while the loss of spatial resolution is suppressed. The calculation of the similarity is therefore important. In this embodiment, a plurality of CT images (dynamic CT images) obtained by continuously imaging the brain of a subject into which a contrast medium having no cerebral vascular permeability, for example an iodinated contrast medium, has been injected intravenously are used, and the similarity is calculated by comparing the time-density curves of the pixels. The reliability of the similarity thus depends on the sampling frequency, that is, the number of images per unit time, and on the number of samples, that is, the total number of images. It is therefore effective to shorten the scan interval to, for example, 0.5 seconds.
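As a crude stand-in for the time-density-curve comparison just described (the patent's actual similarity is a statistical risk factor, defined below; this function and its scaling are purely illustrative), two per-pixel curves can be compared by their noise-normalized mean squared difference:

```python
def time_density_similarity(curve_a, curve_b, sd):
    """Crude dissimilarity score between two per-pixel time-density
    curves: mean squared difference scaled by the noise variance.
    Small values suggest the two pixels may belong to the same tissue.
    Illustrative only; not the patent's statistical-test definition."""
    k = len(curve_a)
    msd = sum((a - b) ** 2 for a, b in zip(curve_a, curve_b)) / k
    return msd / (sd * sd)

# Hypothetical 3-sample curves (CT values in HU), noise sd = 2 HU:
same = time_density_similarity([10, 40, 30], [11, 39, 31], sd=2.0)
differ = time_density_similarity([10, 40, 30], [10, 10, 10], sd=2.0)
print(same < differ)  # True: nearly identical curves score lower
```
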
(Coherent filter)
(General description of coherent filter)
(Pixel value v (x))
In general, a digital image acquired via an imaging means such as a camera is composed of a plurality of pixels (equivalently, the image can be regarded as a set of such pixels). In the following description, the position of a pixel is expressed as a vector x (that is, a vector of coordinate values), and the value of the pixel x (for example, a numerical value indicating shading, or the CT value in HU) is expressed as a K-dimensional vector. In the case of a two-dimensional image, x is a two-dimensional vector indicating the coordinate values (x, y) of a position on the image. The “pixel value v(x)” defined for a pixel x is written as

v(x) = (v_1(x), v_2(x), ..., v_K(x))   (1)

Each of v_1(x), v_2(x), ..., v_K(x) on the right side of equation (1) is hereinafter referred to as a “scalar value” of the pixel x.
For example, when the image is a color image, each pixel has the brightness (a scalar value) of each of the three primary colors (red, green, and blue), so the pixel value v(x) of each pixel can be regarded as a vector of dimension K = 3 (with the subscripts in each term on the right side of equation (1) read as “red”, “green”, and “blue”). Likewise, if the image is a moving image composed of K still images and each pixel of the n-th image has a scalar value v_n(x), then arranging the scalar values of the pixels x at the same point (same coordinates) across the images yields the K-dimensional vector value v(x) = (v_1(x), ..., v_K(x)); this is the pixel value as a vector value used below.
(Similarity (fitness or risk factor) p (x, y) and weight w (p (x, y)))
Consider an appropriate pixel set N(x) for the pixel x (this set N(x) may include the pixel x itself). Next, a weight w(p(x, y)) is defined between the pixel x and each pixel y that is an element of N(x). The weight w(p(x, y)) has the following properties.
(Similarity p (x, y))
First, the meaning of the function p(x, y), which determines the value of the weight w(p(x, y)), will be described. This p(x, y) is the means of quantifying the “similarity” referred to in this embodiment; generally speaking, it gives a specific numerical value indicating to what extent the pixel x and a pixel y ∈ N(x) are similar in some sense (for example, the degree of statistical difference between the pixel values v(x) and v(y) of the two pixels).
More specifically, when p(x, y) gives a small value, it is judged that there is likely a “statistically significant difference between the pixel values v(x) and v(y)” (that is, the degree of similarity is low), and when p(x, y) gives a large value, it is judged that there is likely no statistically significant difference (that is, the degree of similarity is high).
Incidentally, the pixel values v(x) and v(y) (or the scalar values v_1(x), ..., v_K(x) and v_1(y), ..., v_K(y)) must always be considered to contain noise. For example, when an image is acquired by a CCD image sensor, each pixel constituting the image contains noise due to, for example, dark current in the device or irregular fluctuation of the amount of light incident from outside.
Since such noise generally takes different values at every pixel, even if the pixel x and the pixel y reflect the same object (in the outside world), they may not have the same value on the actually observed image. In other words, if the respective noise could be removed from the pixel x and the pixel y reflecting the same object, they would appear on the image as representing the same object (and be recognized as such), and both would have essentially the same (or very close) pixel values.
Therefore, based on the noise characteristics described above, p(x, y) can be stated concretely using the concept of the “null hypothesis” well known in statistical testing. Consider the null hypothesis H: “pixel x and pixel y would have the same pixel value if their respective noise were removed”, in other words, “v(x) = v(y), excluding differences caused by noise” (when this proposition holds, the similarity between the two pixels x and y is considered high). Then p(x, y) can be regarded as the risk factor (or significance level) incurred when rejecting this hypothesis H; in this case, p(x, y) has the value range [0, 1], that is, p(x, y) ∈ [0, 1].
Therefore, when the risk factor p(x, y) is large, that is, when the risk that rejection would be incorrect is high, the hypothesis H is likely to be satisfied; conversely, when it is small, that is, when the risk that rejection would be incorrect is small, there is a high possibility that the hypothesis H is not satisfied. (Note that, as is well known in statistical testing, not rejecting the hypothesis H does not mean that it is “true”; it simply means that the proposition indicated by hypothesis H cannot be denied.)
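For a single scalar value with known Gaussian noise sd, such a risk factor can be sketched as an ordinary two-sided p-value for the null hypothesis v(x) = v(y) (a one-dimensional illustration under an assumed Gaussian noise model; the patent's concrete form of p(x, y) is given later and may differ):

```python
import math

def risk_factor(vx, vy, sd):
    """Two-sided p-value for the null hypothesis that scalar values vx
    and vy differ only through independent Gaussian noise of known sd.
    Each value carries variance sd**2, so their difference has
    standard deviation sd * sqrt(2).  Illustrative sketch."""
    z = abs(vx - vy) / (sd * math.sqrt(2.0))
    return math.erfc(z / math.sqrt(2.0))

print(risk_factor(50.0, 50.0, 5.0))  # 1.0: identical values, H plausible
print(risk_factor(50.0, 90.0, 5.0) < 0.01)  # True: clearly different pixels
```
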
(Weight w (p (x, y)))
The weight w(p(x, y)) is a function of the risk factor p(x, y) as described above (more generally, it can be configured as a function w(ρ(x, y)) of a fitness ρ(x, y)). Generally speaking, the weight function w applied to the risk factor p(x, y) obtained for each combination of x and y has the effect of embodying the “rejection”. Specifically, when the risk factor p(x, y) is large, the value of the weight function w, that is, the weight w(p(x, y)), takes a large positive value, and conversely, when it is small, a small positive value (or “0”). That is, the weight w(p(x, y)) takes a large value when the pixel x and the pixel y appear to satisfy the proposition of the hypothesis H, and a small value in the opposite case. (The specific form of the weight function w will be described later; it may also be configured so that w can take only two values, “0” or a fixed value other than “0”.)
Summarizing the relationship between the hypothesis H, the risk factor p(x, y), and the weight w(p(x, y)): when the null hypothesis H is likely to be correct, the similarity p is large and the weight w given to the pixel is increased; conversely, when the possibility that the null hypothesis H is correct is low, the similarity p is small and the weight w given to the pixel is decreased. By thus changing the contribution (weight) of each pixel to the weighted average according to the similarity, noise can be effectively suppressed while the loss of resolution is suppressed. More generally, the weight function w(t) can be any “non-negative monotonically increasing function defined on t ∈ [0, 1]”; this is the only property that w(t) must satisfy.
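Any non-negative, monotonically non-decreasing function on [0, 1] qualifies as such a w(t). A toy example with a hard rejection threshold (the threshold value and the soft/hard shape are our illustrative choices, not the patent's specific form, which appears later):

```python
def weight(p, threshold=0.05):
    """One admissible weight function w(t): non-negative and
    monotonically non-decreasing on [0, 1].  Pixels whose risk factor
    falls below `threshold` are rejected outright (weight 0);
    otherwise the risk factor itself is used as the weight."""
    return 0.0 if p < threshold else p

print(weight(0.01))  # 0.0: a likely-dissimilar pixel contributes nothing
print(weight(0.9))   # 0.9: a likely-similar pixel contributes strongly
```
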
(Coherent filter processing)
From the above description, the “coherent filter” is derived as follows. First, for a certain pixel x constituting the image, the weight w(p(x, y)) described above is calculated for all pixels y that are elements of the set N(x). Next, using these weights w(p(x, y)), a new scalar value v′_k(x) of the pixel x is calculated by the following equation (2):

v′_k(x) = Σ_{y∈N(x)} w(p(x, y)) v_k(y) / Σ_{y∈N(x)} w(p(x, y))   (2)

where k = 1, 2, ..., K. Then, using the v′_k(x) obtained by this equation, the pixel value of the pixel x after conversion (the new pixel value) v′(x) is
v′(x) = (v′_1(x), v′_2(x), ..., v′_K(x)).   (3)
Here, the transformation that converts the pixel values v(y) = (v_1(y), v_2(y), ..., v_K(y)) expressed by equation (1) (including the case y = x) into v′(x) = (v′_1(x), v′_2(x), ..., v′_K(x)) is the “coherent filter”. As is apparent from the expression, it represents a weighted average of the scalar values v_k(y) constituting the pixel values.
This processing yields the following result. The pixel value v′(x) is a weighted average that emphasizes the pixels y likely to satisfy the proposition of the hypothesis H, that is, likely to have the same pixel value as x apart from noise; it is the vector composed of such v′_k(x). Moreover, if there are a sufficient number of such pixels y, the pixel value v′(x) does not deviate from the true value that the pixel x should originally have, and only the noise is suppressed by the averaging operation described above.
Even when the risk factor p(x, y) is small, so that the null hypothesis H is “rejected” and the weight w(p(x, y)) is small, this is, as the above description shows, not necessarily a complete rejection. Although it depends on the specific form of the weight function w described later, even when the risk factor p(x, y) is close to “0” (= 0%), w(p(x, y)) ≠ 0 may hold, its value being a smaller positive value than when p(x, y) is close to “1” (the case p(x, y) = 1 occurs, for example, when v(x) = v(y), as will be described later).
In other words, rather than a complete rejection, a small contribution may be accepted (in this case, setting w(p(x, y)) = 0 would be synonymous with a complete rejection).
Such processing can be described generally as follows. For each pixel x constituting the image, the degree of matching between this pixel x and an arbitrary pixel y (above, y ∈ N(x)) is quantified (above, by p(x, y)). When the degree of matching is large, a large contribution of the pixel y is allowed in the weighted averaging using the pixel value v(y); when the degree of matching is small, only a small contribution is allowed. This image processing method thereby effectively suppresses the noise of the pixel x. In other words, when the pixel x and the pixel y are “similar”, the pixel y contributes more to the averaging; when they are “dissimilar”, the pixel y is almost or completely ignored, that is, the weight is zero or close to zero.
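The per-pixel weighted averaging described above can be sketched end to end; the helper names, the toy two-image stack, and the plug-in similarity and weight functions are our illustrative assumptions:

```python
def coherent_filter(stack, x, neighborhood, weight_fn, similarity_fn):
    """Equation-(2)-style coherent filtering of one pixel x: a weighted
    average of each scalar value v_k(y) over y in N(x), with a single
    weight w(p(x, y)) shared across all K images.
    stack[k][i][j] holds the k-th image; names are illustrative."""
    K = len(stack)
    vx = [stack[k][x[0]][x[1]] for k in range(K)]
    num, den = [0.0] * K, 0.0
    for y in neighborhood:
        vy = [stack[k][y[0]][y[1]] for k in range(K)]
        w = weight_fn(similarity_fn(vx, vy))
        den += w
        for k in range(K):
            num[k] += w * vy[k]
    return [n / den for n in num]

# Two constant 3 x 3 images (K = 2); every neighbor is maximally similar.
stack = [[[7.0] * 3 for _ in range(3)], [[3.0] * 3 for _ in range(3)]]
nbhd = [(i, j) for i in range(3) for j in range(3)]
sim = lambda a, b: 1.0 / (1.0 + sum((u - v) ** 2 for u, v in zip(a, b)))
out = coherent_filter(stack, (1, 1), nbhd, lambda p: p, sim)
print(out)  # [7.0, 3.0]: constant images pass through unchanged
```
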
By applying such processing to the entire image, an extremely high noise suppression effect can be achieved with almost no blurring of the image, that is, with almost no loss of spatial resolution. Furthermore, the present invention is not limited to noise suppression; for example, in the field of pattern recognition as well, an excellent effect can be obtained by choosing a suitable specific form of the weight function or coherent filter.
Here, “dynamic CT” imaging is an imaging method in which the X-ray tube 101 and the X-ray detector 102 repeatedly image the same part of the subject P (repetitive scanning; in a continuous rotation CT apparatus, repeated imaging by continuous rotation), projection data are acquired one after another, and reconstruction processing is performed one after another based on the projection data to obtain a series of time-series images. In this case, the image display on the image display unit 107 is controlled, for example by a counter (not shown), so as to take place a predetermined time after the start or end of the scan in which the projection data underlying the image were collected.
Therefore, the images acquired and displayed in this way form a so-called moving image composed of a plurality of time-series still images, as in a movie. Such an imaging method is typically used to inject a contrast medium into the subject P and observe and analyze changes over time, for example to analyze pathological conditions such as stenosis and occlusion of blood vessels. A method of performing CT imaging of the same part only twice, before and after contrast medium administration, can also be regarded as dynamic CT imaging in a broad sense.
Conventionally, in "dynamic CT" imaging as described above, some change in the subject P (for example, a change in contrast medium concentration, or respiratory motion) generally occurs while the K images are taken. To suppress image noise without impairing spatial resolution, smoothing in the time direction was therefore necessary, and the resulting loss of time resolution could not be avoided.
However, since the images acquired by dynamic CT form a moving image, taken precisely for the purpose of closely observing temporal changes, a loss of time resolution is inherently undesirable.
If a coherent filter is used, the following dynamic coherent filter processing can be performed, which suppresses noise in all K still images without degrading either spatial or temporal resolution.
First, for a pixel x defined over the K still images constituting the moving image obtained as described above, the pixel value v(x) can, as described above, be constructed as
v(x) = (v_1(x), v_2(x), ..., v_K(x))   (1, reprinted)
Here, the subscripts 1, 2, ..., K in each term on the right side are the serial numbers of the K still images.
Next, a specific form of the weight function w1 in this case is given, for example, by the following equation (4).
Here y ∈ N(x), and the set N(x) may be set arbitrarily for each pixel x (that is, according to any criterion). In practice, however, a pixel y far away from the pixel x rarely satisfies the hypothesis "v(x) = v(y), apart from differences caused by noise between the two pixels". Limiting the set N(x) to pixels close to x therefore has practical significance, such as an increase in calculation speed.
Therefore, here, as an example, the set N(x) is the set of pixels included in a rectangular area centered on the pixel x. More specifically, when one still image of interest consists of 128 × 128 pixels, N(x) may be, for example, the 3 × 3 pixel area centered on the pixel x; in the case of 512 × 512 pixels, it may be the 13 × 13 pixel area centered on the pixel x.
In the above equation (4), σ_k is the noise standard deviation of the kth still image, estimated on the assumption that it is common to all pixels of that image. C is a parameter that determines and adjusts the strength of the effect of the weight w1(p(x, y)) computed by equation (4).
Hereinafter, these σ _{k} and C will be described in order.
First, σ_k in equation (4) will be described (hereinafter in terms of the variance σ_k²). As described above, σ_k² is the variance of the noise component of the scalar value of each pixel on the kth still image. The variance σ_k² in equation (4) is estimated on the assumption that the scalar value of every pixel of the kth image contains noise of the same constant variance σ_k². In general, this assumption is sufficiently valid for the reasons described below.
When the size of the subject P, the structure of the X-ray tube 101, the X-ray detector 102, the reconstruction unit 106, and so on are fixed and the energy of the irradiated X-rays is constant, the noise of a CT image is determined by the irradiated X-ray dose, that is, by the product of the tube current and the irradiation time in the X-ray tube 101, which is proportional to the dose (the so-called tube current-time product (mA·s)).
It is also known that CT image noise is additive and generally follows a Gaussian distribution. That is, if the true value (the value with the noise contribution removed) of an arbitrary scalar value v_k(x) (k = 1, 2, ..., K) constituting the pixel value v(x) of a pixel x is denoted v_k⁰(x), the difference v_k(x) − v_k⁰(x) generally follows a Gaussian distribution with mean 0 and variance σ_k². (Note that the irradiated X-ray dose, or the tube current-time product (mA·s), and the noise variance σ_k² are generally in inverse proportion.)
Strictly, the variance σ_k² also depends on the position of the pixel x itself (for example, on its coordinates x = (x, y)), but in a normal X-ray CT apparatus 100 this dependence can be ignored, because a physical X-ray filter that adjusts the X-ray irradiation amount (a so-called "wedge", composed of copper foil, a metal block, or the like) is placed between the X-ray tube 101 and the X-ray detector 102. Exploiting the fact that the subject P consists largely of material with nearly the density of water, the wedge partially attenuates the irradiated X-rays so that roughly the same amount of X-rays is detected at every element of the X-ray detector 102. According to such a wedge, the noise variance σ_k² is consequently brought to an almost constant value nearly independent of the position of the pixel x. (Incidentally, this wedge is generally installed for the purpose of effectively using the dynamic range of the X-ray detector 102.)
From the above, for the K still images acquired by dynamic CT imaging, it is reasonable to estimate that the variance σ_k² is almost constant for all pixels of the kth still image. Of course, the present embodiment can easily be extended to the case where the variance differs from pixel to pixel.
Now, in order to actually calculate the above equation (2), the question arises of what value to assign to the variance σ_k² appearing in equation (4). This is a problem because, although the shape of the noise distribution can usually be assumed (Gaussian in the above), the specific value of the variance σ_k² is often unknown.
Furthermore, in general, imaging may be performed with a different irradiation dose (X-ray tube current × irradiation time (mAs)) for each acquisition.
For the kth image (k = 1, 2, ..., K), let the noise variance of the scalar value of each pixel be σ_k², and let the irradiation dose used for the kth image be R_k; then σ_k² is inversely proportional to R_k (see the parenthetical note above). Therefore, if σ_{k0}² can be specified for at least one k = k_0, σ_k² can be accurately estimated as
σ_k² = σ_{k0}² × (R_{k0} / R_k)   (5)
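As a sketch, this dose scaling (noise variance inversely proportional to irradiation dose, per the parenthetical note accompanying the Gaussian-noise discussion above) can be written directly; the function name and argument layout are assumptions of this sketch.

```python
def estimate_sigma2(sigma2_k0, dose_k0, dose_k):
    """Estimate the noise variance of image k from a reference image k0:
    sigma_k^2 = sigma_k0^2 * (R_k0 / R_k), using the inverse
    proportionality between noise variance and irradiation dose (mAs)."""
    return sigma2_k0 * dose_k0 / dose_k
```

For example, doubling the dose halves the estimated noise variance.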
In the present embodiment (where this situation applies), a specific numerical value of σ_k² can be estimated for at least one k by the following method.
It is effective to measure the expected value E[σ_k²] of the variance σ_k² using N (1 < N ≤ K) of the K acquired images during which the subject P can be assumed to change little. For simplicity of explanation below, assume that the irradiation dose is the same for these N images, so that σ_k² is constant (written σ²) for k = 1, 2, ..., N. The noise contained in each of the scalar values v_1(x_f), v_2(x_f), ..., v_N(x_f) constituting the pixel value v(x_f) of a pixel x_f in these N images is expected, as described above, to follow a Gaussian distribution with mean 0 and variance σ². Their temporal average is therefore given by the following equation (6):
v̄(x_f) = (1/N) Σ_{k=1,...,N} v_k(x_f)   (6)
and the expected value E[σ²] of the true variance σ² can be obtained as
E[σ²] = (1/(N−1)) Σ_{k=1,...,N} (v_k(x_f) − v̄(x_f))²   (7)
This expected value E[σ²] can be regarded as appropriate for all pixels x of all K still images, as described above, and can be used in place of the true variance σ² as a value whose reliability is guaranteed above a certain level. Therefore, in the actual calculation of equation (4), E[σ²] may be substituted for σ².
More specifically, such an E[σ²] may be obtained from measured values based on, for example, the first and second of the K still images (this corresponds to N = 2 in equations (6) and (7)). Further, as the pixel x_f actually subjected to the operations of (6) and (7), only appropriate pixels should be selected, excluding portions where, for example, air or bone is imaged; if several pixels x_f are selected, the average of all the resulting E[σ²] values may be taken. In general, it is also advisable to take measures to suppress the influence of movement of the subject P.
Even when the irradiation dose is not constant over these N images, it is easy to see that σ_k² can still be estimated correctly by using the fact that σ_k² is inversely proportional to R_k.
Next, the parameter C in the above equation (4) will be described. First, equation (4) contains the concept of the risk factor p(x, y), described above in general form, as follows. The expression under the root sign in the numerator on the right side of equation (4), divided by (2σ)², coincides with a χ² value following the so-called chi-square distribution, and the parenthesized expression as a whole is the risk factor p1(x, y) itself. That is,
and equation (4) is nothing other than equation (9), an expression in terms of the p1(x, y) of this equation (8). Here A is a constant, normalized so that p1 takes values in the range 0 to 1.
Ultimately, although the risk factor p(x, y) described in the general form above does not appear explicitly in equation (4), the weight w1(p(x, y)) can in substance be regarded as a function of the risk factor (= p1(x, y)) as in equation (9), that is, as a "function of the fitness" (where, as described above, the risk factor and the fitness are related such that when one increases, so does the other).
As can be seen from the above equation (9), the parameter C determines how sensitively the weight w1(p(x, y)) reacts to the risk factor p1(x, y). That is, when C is increased, even a slight decrease in p1(x, y) causes w1(p(x, y)) to approach 0; making C small suppresses such a sensitive reaction. Specifically, C may be about 1 to 10, and preferably C = 3.
In this embodiment, the similarity determination for the pixels x and y, in other words, the determination of rejection of the above-described null hypothesis H for the pixels x and y, is, as is clear from the above, based on the risk factor p1(x, y), that is, on the so-called chi-square test (a statistical test method).
As can also be seen from equation (4), in the present invention it is not always necessary to first calculate the risk factor p(x, y) for each combination of x and y and then obtain the weight w(p(x, y)); the weight may instead be computed directly as the composite function (w∘p), without explicitly obtaining the risk factor p(x, y).
As described above, once the variance σ² has been estimated (for example, as E[σ²] of equation (7)) and the parameter C has been chosen appropriately (for example, C = 3), the weight w1(p(x, y)) can be obtained for all pixels y included in the set N(x) defined for a pixel x (for example, the 3 × 3 pixel area centered on the pixel x). By then using this w1(p(x, y)) in place of w(p(x, y)) in the above equation (2), the coherent filter can be evaluated numerically. As a result, pixel values v′(x) = (v′_1(x), v′_2(x), ..., v′_K(x)) (= equation (3)) in which noise is strongly suppressed without impairing either the spatial or the temporal resolution, that is, K such still images or a moving image, can be obtained.
FIG. 3 illustrates this image processing conceptually. In FIG. 3A, for a pixel x in one of the 1, 2, ..., K still images, a rectangular area N^{3×3}(x) of 3 × 3 pixels centered on the pixel x is assumed. Let the pixel in the corner of the rectangular area N^{3×3}(x) be y_1; as also shown in FIG. 3, the pixel y_1 has a pixel value v(y_1).
Then, from the scalar values v_1(y_1), v_2(y_1), ..., v_K(y_1) constituting the pixel value v(y_1) and the scalar values v_1(x), v_2(x), ..., v_K(x) constituting the pixel value v(x), the weight w1(p(x, y_1)) is calculated by the above equation (4) (FIG. 3B). The same applies to the remaining pixels y_2, ..., y_8 in the rectangular area N^{3×3}(x); as shown in FIG. 3B, w1(p(x, y_1)), ..., w1(p(x, y_8)) and w1(p(x, x)) are obtained. (In this case, the risk factor p(x, x) is "1" from equation (8), and therefore the weight w1(p(x, x)) is also "1" from equation (9), i.e., the maximum weight.)
Next, the weights w1(p(x, y_1)), ..., w1(p(x, y_8)), w1(p(x, x)) obtained in this way are multiplied by the corresponding scalar values v_k(y_1), v_k(y_2), ..., v_k(y_8), v_k(x) in the kth image, the products are summed (corresponding to the numerator of equation (2)), and the sum is divided by the sum of the weights w1 over the rectangular area N^{3×3}(x) (corresponding to the denominator of equation (2)); this yields a noise-suppressed scalar value v′_k(x) for the pixel x in the kth image (FIG. 3C). By applying the same weights w1(p(x, y_1)), ..., w1(p(x, y_8)), w1(p(x, x)) to all images k = 1, 2, ..., K, noise-suppressed scalar values v′_k(x) are obtained, and hence a noise-suppressed pixel value v′(x) = (v′_1(x), v′_2(x), ..., v′_K(x)) for the pixel x. Repeating this calculation for all pixels x yields K images with reduced noise.
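The per-pixel procedure of FIG. 3 can be sketched for a whole image stack as follows. The Gaussian weight is an assumed stand-in for the patent's equation (4), and the brute-force loops are written for clarity, not speed; note that one set of weights is computed per pixel and then shared across all K images, as the text describes.

```python
import numpy as np

def dynamic_coherent_filter(stack, sigma, C=3.0, radius=1):
    """Neighborhood-weighted averaging over a (K, H, W) stack of K still
    images: for each pixel x, weights are computed once from the full
    K-vectors of x and its neighbors, then applied to every image k.
    Gaussian weight form is an assumed stand-in for equation (4)."""
    K, H, W = stack.shape
    out = np.empty_like(stack, dtype=float)
    for x in range(H):
        for y in range(W):
            vx = stack[:, x, y]
            num = np.zeros(K)
            den = 0.0
            # neighbors in the (2*radius+1)^2 rectangle, clipped at borders
            for i in range(max(0, x - radius), min(H, x + radius + 1)):
                for j in range(max(0, y - radius), min(W, y + radius + 1)):
                    d2 = float(np.sum((vx - stack[:, i, j]) ** 2))
                    w = np.exp(-d2 / (2.0 * (C * sigma) ** 2))
                    num += w * stack[:, i, j]   # same weight for all k
                    den += w
            out[:, x, y] = num / den
    return out
```

On a stack of identical, noise-free images the filter is an identity, since every neighbor receives the maximum weight and the weighted average reproduces the constant value.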
In the image composed of the pixel values v ′ (x) calculated by the coherent filter in this way, random noise seen in the original image is sufficiently suppressed.
The processes described above may be performed, for example, in accordance with the flowchart shown in FIG. 4. To realize the associated calculations and image display on an actual X-ray CT apparatus 100, an image processing unit 110 composed of a variance value estimation unit 111, a weight calculation unit 112, and a pixel value calculation unit 113 may be provided, as shown for example in FIG. 2.
Among these, the weight calculation unit 112 obtains the weight w1(p(x, y)) directly from the pixel values v(x) and v(y) and the variance σ², as described above; that is, it does not specifically calculate the value of the risk factor p1(x, y). It should be noted, however, that instead of this configuration, the weight calculation unit 112 may be configured to take a two-step procedure: a "risk factor calculation unit" (incorporating the "fitness quantification unit" according to the present invention) that specifically calculates the value of the risk factor p1(x, y), followed by a "weight calculation unit" that calculates the weight w1(p(x, y)) based on its output.
In addition, the pixel value calculation unit 113 calculates the pixel value v′(x) using the pixel values v(x) and v(y) and the weight w1(p(x, y)) numerically calculated by the weight calculation unit 112. That is, the calculation unit 113 actually performs the processing that suppresses the noise of the original image, i.e., the application of the coherent filter (hereinafter expressed as "the coherent filter is applied").
When applying the coherent filter to a moving image composed of K still images, as in the dynamic coherent filter processing described above, all still images may first be reconstructed and stored in the storage device 10M, and the coherent filter applied later in the image processing unit 110 as post-processing. However, the present embodiment is not limited to this form: in a flow of continuous projection-data collection, continuous reconstruction, and continuous display, the coherent filter may be applied in real time (hereinafter, "real-time coherent filter processing").
In a preferred embodiment of real-time coherent filter processing, each time a new image is captured and reconstructed, the following is performed. Among the images obtained so far, from the first (image number 1) to the latest (image number M), the pixel values (scalar values) of the pixel x at the same coordinates on the K still images with image numbers M, M−1, ..., M−K+1 are arranged into the K-dimensional vector value v(x) = (v_M(x), v_{M−1}(x), ..., v_{M−K+1}(x)). The coherent filter can then be applied in exactly the same manner as in the "dynamic coherent filter processing" described above. However, the pixel value calculation unit 113 does not calculate all the elements of the pixel value v′(x), but only the scalar value v_M′(x) corresponding to the latest image (image number M). Since this improves the calculation speed, the latest noise-suppressed image can be displayed in real time.
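A sketch of this real-time variant, keeping a sliding window of the last K images and computing only the latest denoised scalar per pixel; the class name and the Gaussian weight form are assumptions of this sketch, not the patent's equation (4).

```python
import numpy as np
from collections import deque

class RealTimeCoherentFilter:
    """Sliding-window sketch of 'real-time coherent filter processing':
    keep the last K reconstructed images; on each new image, compute only
    the denoised scalar v'_M(x) of the latest image. Weight form assumed."""
    def __init__(self, K, sigma, C=3.0, radius=1):
        self.K, self.sigma, self.C, self.radius = K, sigma, C, radius
        self.buf = deque(maxlen=K)   # ring buffer of the last K images

    def push(self, image):
        self.buf.append(np.asarray(image, dtype=float))
        if len(self.buf) < self.K:
            return self.buf[-1]      # not enough history yet: pass through
        stack = np.stack(self.buf)   # (K, H, W), oldest .. latest
        K, H, W = stack.shape
        latest = np.empty((H, W))
        r = self.radius
        for x in range(H):
            for y in range(W):
                vx = stack[:, x, y]
                num = den = 0.0
                for i in range(max(0, x - r), min(H, x + r + 1)):
                    for j in range(max(0, y - r), min(W, y + r + 1)):
                        d2 = float(np.sum((vx - stack[:, i, j]) ** 2))
                        w = np.exp(-d2 / (2.0 * (self.C * self.sigma) ** 2))
                        num += w * stack[-1, i, j]   # latest image only
                        den += w
                latest[x, y] = num / den
        return latest
```

Only the numerator/denominator terms of the newest image are accumulated, which is what makes the per-frame cost low enough for display in step with acquisition.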
As another preferred embodiment of this "real-time coherent filter processing", when the first K images have been obtained, the coherent filter is applied as described above to obtain v_1′(x), ..., v_K′(x); thereafter, the K-dimensional vector value v(x) = (v_M(x), v_{M−1}′(x), ..., v_{M−K+1}′(x)) is formed from the K still images with image numbers M, M−1, ..., M−K+1, and the real-time coherent filter processing described above is applied to it. Conveniently, the dimension K of the pixel value vector v(x) can be changed at any time during real-time coherent filter processing, by manual or automatic setting.
In this way, by performing the CBP study on CT images in which the coherent filter has effectively suppressed only the noise, without reducing spatial or temporal resolution, and by quantitatively analyzing the dynamics of blood in the brain tissue (capillaries) to obtain the indexes (CBP, CBV, MTT, Err), improved accuracy and reliability can be expected.
Below, the CBP study will be described.
(CBP study)
(principle)
In the CBP study, indexes of CBP, CBV, MTT, and Err that quantitatively indicate the dynamics of “blood flow through capillaries” in brain tissue are obtained, and a map of these indexes is output.
CBP: blood flow per unit volume and per unit time in the capillaries of brain tissue [ml/100ml/min]
CBV: blood volume per unit volume of brain tissue [ml/100ml]
MTT: mean transit time of blood through the capillaries [sec]
Err: an index of the residual deviation of the measured values from the analysis model. From the magnitude of this index, tissue perfused by the cerebral artery in question can be discriminated from tissue that is not.
In the CBP study, a contrast agent without cerebral vascular permeability, such as an iodinated contrast agent, is used as a tracer. The iodinated contrast agent is injected, for example, into the cubital vein by an injector. The injected contrast agent flows into the cerebral arteries via the heart and lungs, then flows out from the cerebral arteries through the capillaries in the brain tissue to the cerebral veins. A contrast agent without cerebral vascular permeability passes through the capillaries of normal brain tissue without leaking out of the blood vessels.
The passage of the contrast agent is imaged continuously by dynamic CT, and from the resulting series of images the time density curve Ca(t) of a pixel on a cerebral artery, the time density curve Ci(t) of a pixel on brain tissue including capillaries, and the time density curve Csss(t) of a pixel on a cerebral vein are measured.
In the CBP study, consider the relation between the time curve Ca(t) of the contrast agent blood concentration in a cerebral blood vessel close to the brain tissue and the time curve Ci(t) of the concentration in the capillaries. Ideally, if the contrast medium were injected into the blood vessel immediately before it enters the brain tissue, the time density curve in a unit volume (1 pixel) of brain tissue including capillaries would rise vertically and then fall with a slight gradient. This can be approximated by a rectangular function (box-MTF method: box-Modulation Transfer Function method).
Using the cerebral artery time concentration curve Ca(t) as the input function and the brain tissue time concentration curve Ci(t) as the output function, the characteristics of the passage through the capillaries can be obtained as a transfer function represented by a rectangular function.
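A minimal sketch of the box-MTF idea: convolve the arterial input Ca(t) with candidate rectangular transfer functions and keep the least-squares best fit to Ci(t). The grid search, the unit-area box, and the returned quantities are illustrative assumptions; the patent's exact formulas for CBP, CBV, MTT, and Err are not reproduced here.

```python
import numpy as np

def box_mtf_fit(ca, ci, dt=1.0, max_width=20):
    """Fit a rectangular transfer function: for each candidate box width,
    convolve the arterial input ca with a unit-area box, scale it by least
    squares to match the tissue curve ci, and keep the best fit. The
    residual plays the role of an Err-like goodness-of-fit measure."""
    best = None
    for w in range(1, max_width + 1):
        box = np.ones(w) / (w * dt)               # unit-area box, width w*dt
        model = np.convolve(ca, box)[: len(ci)] * dt
        denom = float(model @ model)
        if denom == 0.0:
            continue
        scale = float(model @ ci) / denom          # least-squares amplitude
        err = float(np.sum((ci - scale * model) ** 2))
        if best is None or err < best[0]:
            best = (err, w * dt, scale)
    err, width, height = best
    return {"width": width, "scale": height, "residual": err}
```

The recovered box width behaves like an MTT-like transit time and the amplitude like a flow-related scale; on a noise-free curve generated by a box of known width the fit recovers that width exactly.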
(Specific steps)
FIGS. 5 and 6 show a typical procedure of the CBP study according to the present embodiment.
First, a bolus injection (the contrast medium is administered all at once) is performed into a blood vessel such as the cubital vein, and dynamic CT (repeated imaging of the same part) is performed immediately before or immediately after. In the most typical procedure, when the bolus is injected into the cubital vein, imaging is repeated at intervals of, for example, 0.5 to 2 seconds over a period of approximately 20 to 40 seconds. The CT value of the pixel (x, y) in the jth of the N CT images obtained by dynamic CT is denoted v(x, y, j). This is nothing but a sample of the (smooth) time density curve f(t, x, y) at the pixel (x, y).
First, as preprocessing, in step S1 pixels clearly identified as tissues other than brain tissue are excluded from the analysis in each CT image. That is, a pixel whose value falls outside the range that can be regarded as a brain tissue CT value (for example, 10 to 60 HU) corresponds to air, bone, fat, or the like, and is irrelevant to cerebral blood flow quantification, so such pixels can be ignored. This analysis range defaults to 10 to 60 HU but can be set arbitrarily via the input unit 109.
Further, as preprocessing, the contrast effect is initialized in step S2. To obtain the contrast effect (increase in CT value) at each pixel (x, y), let the images acquired before the contrast agent reaches the tissue corresponding to that pixel (generally several) have serial numbers 1, 2, ..., K; their temporal average
b(x, y) = (1/K) Σ_{j=1,...,K} v(x, y, j)
is taken as the baseline value b(x, y). For the pixel values v(x, y, j) of each image j = K+1, K+2, ..., the contrast effect is
q(x, y, j) = v(x, y, j) − b(x, y)
and for j ≤ K,
q(x, y, j) = 0.
To simplify the processing, the same K may be adopted for every pixel. The q(x, y, j) obtained in this way can be regarded as nothing other than the (smooth) time density curve q(t, x, y) of the contrast effect sampled at t = t1, t2, .... Quantitative analysis of cerebral blood flow is performed using this q(t, x, y).
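The baseline subtraction of step S2 reduces to a few lines; the array layout (images stacked along the first axis) is an assumption of this sketch.

```python
import numpy as np

def contrast_effect(v, K):
    """q(x,y,j): contrast enhancement after subtracting the pre-arrival
    baseline b(x,y), the mean of images 1..K (step S2). v has shape
    (J, H, W); image numbers 1..J map to indices 0..J-1."""
    v = np.asarray(v, dtype=float)
    b = v[:K].mean(axis=0)   # baseline b(x, y): mean of pre-contrast images
    q = v - b                # enhancement relative to baseline
    q[:K] = 0.0              # q = 0 for j <= K
    return q
```

For a pixel whose value steps from 10 HU (pre-contrast) to 15 HU, q is 0 in the pre-contrast frames and 5 afterwards, which is the enhancement the analysis works on.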
In the quantitative analysis, the right and left brain areas on the CT image must first be separated. As described above, in the CBP study the blood flow dynamics of the capillaries are obtained as the transfer function MTF of the brain tissue time concentration curve Ci(t) with respect to the cerebral artery time concentration curve Ca(t), on the assumption that the brain tissue analyzed is perfused by the cerebral artery providing Ca(t). At minimum, the left brain and the right brain must therefore be analyzed separately, each using the time concentration curve Ca(t) of its own cerebral artery: the time concentration curve Ca(t) of a left cerebral artery must be used only for analyzing left-brain tissue, and that of a right cerebral artery only for analyzing right-brain tissue.
To divide the brain into the left and right brain areas, a dividing line is displayed on the screen as a figure superimposed on the CT image, as shown in FIG. 7 (S3). The dividing line may initially be displayed at the center of the image. The operator, referring to the image, moves the dividing line and moves the plurality of constituent points of the line to bend it arbitrarily, thereby dividing the left and right areas.
Next, cerebral artery ROIs are set on the cerebral arteries in the CT image. To make this setting more accurate and easier, a support map is created by the ROI setting support unit 121 and displayed separately from, or overlaid on, the CT image (S4). Examples of the support map include an AT (appearance time) map, a PT (peak time) map, and a TT (transit time) map. For each pixel, as shown in FIG. 8, the following are calculated and displayed as maps: the time AT from an arbitrary pre-contrast reference time T0 (for example, the data collection start time) until the contrast agent concentration reaches a few percent (for example, 1%) of its peak; the time (peak time) PT from T0 until the contrast agent concentration reaches its peak; and TT, representing the transit time of the contrast agent, expressed for example as a full width at half maximum. By default, all of the AT, PT, and TT maps are generated and displayed, but the operator can select any one or two types.
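A sketch of how AT, PT, and TT might be computed per pixel from a sampled curve. The 1% threshold and the FWHM definition of TT follow the text; the discrete-sample handling (first sample at or above threshold, no interpolation) is an assumption of this sketch.

```python
import numpy as np

def curve_times(q, t, frac=0.01):
    """AT: first time q reaches frac (e.g. 1%) of its peak; PT: time of the
    peak; TT: full width at half maximum. Times are measured from t[0],
    taken as the pre-contrast reference time T0."""
    q = np.asarray(q, dtype=float)
    t = np.asarray(t, dtype=float)
    peak = q.max()
    at = t[int(np.argmax(q >= frac * peak))] - t[0]   # appearance time
    pt = t[int(q.argmax())] - t[0]                    # peak time
    above = np.nonzero(q >= 0.5 * peak)[0]            # half-maximum support
    tt = t[above[-1]] - t[above[0]]                   # transit time (FWHM)
    return at, pt, tt
```

Mapping these three values over all pixels yields exactly the AT, PT, and TT support maps described above.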
Since these values for cerebral arteries tend to appear as values somewhat distinct from those of other tissue, displaying them in color through a color map (lookup table) set to extract and display a window centered on those values makes the locations of the cerebral arteries easy to identify, so that the cerebral artery ROIs can be set accurately (S5). Typically, three cerebral artery ROIs are set in each of the left and right brain areas: the anterior cerebral artery (ACA), the middle cerebral artery (MCA), and the posterior cerebral artery (PCA).
In multislice imaging, when for example four adjacent slices are to be analyzed, it is not necessary for the analysis to set the cerebral artery ROI individually for each slice as shown in FIG. Instead, the cerebral artery ROI set in one arbitrary slice is shared with the other slices. Alternatively, a cerebral artery time concentration curve Ca(t) usable in common for all slices may be created using the coherent regression method described later.
Next, a time density curve Ca(t) is created by the time density curve creating unit 122 for each set cerebral artery ROI, from the continuous image data acquired by dynamic CT (S6).
Here, many cerebral arteries are much thinner than a pixel, and since they are generally not orthogonal to the CT imaging slice, no pixel on the image accurately represents the CT value of arterial blood. A single pixel contains a mixture of cerebral artery and other tissue and, because of this partial volume effect, usually exhibits a reduced contrast effect. Image noise is also large in pixels that contain an artery as a partial volume. In particular, in an artery at a site of cerebral infarction, the effect of noise is severe because the contrast effect is relatively small. Although image noise is suppressed by the coherent filter described above, the influence of the partial volume effect still remains.
This problem can be mitigated by applying the coherent regression method described later, using the pixels in a volume containing the artery, rather than measuring the time density curve of the cerebral artery in a single slice image. Therefore, instead of the coherent filter method described above, the coherent regression method may be applied at this stage.
Moreover, according to this method, a corresponding cerebral artery time density curve can be obtained for each artery, and it can then be used for analysis of any part of any slice within the imaging range. This allows the slice in which the time density curve of a particular artery is most clearly obtained to be selected and that curve to be applied to all slices, so that the setting workload can be reduced.
(Coherent regression method)
In creating the time concentration curve, it is important to remove the effects of the partial volume effect and of random noise. First, a "time density curve" is a curve representing the change over time of the image density value at a specific site in a dynamic CT image. In particular, in the medical image diagnostic apparatus mentioned above, the temporal change of contrast medium concentration in a specific tissue of the human body is measured as a time concentration curve for the purpose of examining blood flow dynamics, metabolic function, and other details of human tissue. (In astronomical observation and the like, a time concentration curve is used to analyze changes in the luminous intensity of a particular celestial object.) More formally, if the density value of the site at time t_k is d_k, the time density curve is the sequence of pairs {<t_k, d_k> (k = 1, 2, ..., K)}. For many applications of time density curves, the absolute value of d_k is not needed; it suffices to obtain only the increment (d_k − d_1) relative to the first image. Moreover, in many such applications it suffices to obtain only data A(d_k − d_1) proportional to (d_k − d_1) (where A is an unknown proportionality coefficient). In that case, the sequence of pairs {<t_k, A(d_k − d_1)> (k = 1, 2, ..., K)} is the time density curve to be obtained.
To obtain such a time density curve, in principle one would use the scalar value v_k(x) of a pixel x included in the region of the site to be measured in each image k (k = 1, 2, ..., K) constituting the dynamic CT image, and measure {<t_k, v_k(x)>} or {<t_k, A(v_k(x) − v_1(x))>}.
In practice, however, random noise contained in dynamic CT images taken with a medical image diagnostic apparatus or the like prevents the time density curve to be measured from being obtained accurately.
Furthermore, in practice a so-called "partial volume effect" occurs in these dynamic CT images. The partial volume effect is the phenomenon whereby the image of a minute object in the subject is represented by only a few pixels, which also contain the image of adjacent objects, so that the values of these few pixels exhibit only relatively small fluctuations (although proportional to the fluctuations of the density value to be measured). In other words, the values of these few pixels contain only a weak signal. Therefore, when the partial volume effect occurs, whichever pixel x is taken, the sequence of pairs {<t_k, v_k(x)> (k = 1, 2, ..., K)} has an extremely low signal level, is affected by density changes in tissues other than the one to be measured, and contains random noise, so that the time concentration curve {<t_k, d_k>} to be measured cannot be obtained accurately.
Conventionally, therefore, temporal or spatial smoothing has been used to suppress random noise; but temporal averaging loses time resolution, and spatial averaging mixes into the measured values the temporal density changes of sites other than the one to be measured. To solve these problems and obtain a more accurate time density curve, a coherent filter is employed.
First, the null hypothesis used in the coherent filter of this embodiment will be described. Assume that the true time density curve at the site to be measured is {<t_k, d_k> (k = 1, 2, ..., K)} (or its linear transformation {<t_k, A(d_k - d_1)> (k = 1, 2, ..., K)}, where A is an unknown coefficient), and set a set R of pixels substantially corresponding to the site to be measured. For any pixel x ∈ R that is an element of the set R, consider the condition Q: "this pixel x reflects the true time density curve well and is hardly affected by temporal density changes in other sites." If Q holds, then by taking into account the partial volume effect and random noise on the pixel value vector v(x) = (v_1(x), v_2(x), ..., v_K(x)), the relation
v_k(x) = p(x)d_k + q(x) + γ_k(x)  (k = 1, 2, ..., K)   (11)
can be assumed to hold. Here, p(x) and q(x) are unknown coefficients that differ for each pixel x but do not change with the image number k (that is, with the imaging time t_k); they model the partial volume effect. γ_k(x) models random noise; its value differs for each pixel x and each image number k, but its expected value is 0, and its statistical distribution does not depend on the pixel x or the image number k.
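The pixel model of equation (11) can be made concrete with a short numerical sketch; all quantities below (the curve shape, p, q, the noise level) are illustrative assumptions, not values from the embodiment:

```python
import numpy as np

# Minimal sketch of the pixel model of equation (11):
#   v_k(x) = p(x) * d_k + q(x) + gamma_k(x),
# where p, q model the partial volume effect and gamma_k is random noise.
rng = np.random.default_rng(0)

K = 20                                  # number of images in the dynamic series
t = np.arange(K, dtype=float)           # acquisition times t_k
d = 50.0 * np.exp(-0.5 * (t - 8.0) ** 2 / 9.0)   # assumed true curve d_k

p, q, sigma = 0.3, 12.0, 1.0            # partial-volume scale/offset, noise s.d.
gamma = rng.normal(0.0, sigma, K)       # random noise gamma_k with expectation 0
v = p * d + q + gamma                   # observed pixel values v_k(x)

# The observed curve is only a scaled, offset, noisy copy of d_k, so the
# proportionality factor (p here) cannot be recovered from v alone.
corr = float(np.corrcoef(v, d)[0, 1])
print(round(corr, 3))
```

Because p and q are constant over k, the observed curve stays strongly correlated with the true curve even though its absolute scale is lost.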
According to the above assumption, for any two pixels x and y that are elements of the set R, if the proposition "both pixels x and y satisfy the condition Q (above)" holds, it can be proved that the following relationship holds:
v_k(x) = a_1 v_k(y) + a_2 + ξ_k  (k = 1, 2, ..., K)   (12)
Here, a_1 and a_2 are unknown coefficients that differ for each pixel pair x, y but do not change with the image number k (that is, with the imaging time t_k). ξ_k is random noise whose value differs for each pixel pair x, y and for each image number k, but whose expected value is 0.
Equation (12) is derived as follows. Substituting y for x in equation (11) gives
v_k(y) = p(y)d_k + q(y) + γ_k(y)   (13)
Solving equation (13) for d_k,
d_k = (v_k(y) - q(y) - γ_k(y)) / p(y)   (14)
and substituting this into equation (11) yields
v_k(x) = (p(x)/p(y)) v_k(y) + (q(x) - (p(x)/p(y)) q(y)) + (γ_k(x) - (p(x)/p(y)) γ_k(y))   (15)
Defining
a_1 = p(x)/p(y), a_2 = q(x) - a_1 q(y), ξ_k = γ_k(x) - a_1 γ_k(y)   (16)
equation (12) is derived. Here, a_1 and a_2 in equation (16) are parameters representing the partial volume effect, and ξ_k in equation (16) represents random noise.
From the above, the proposition "pixels x and y both satisfy condition Q" can be tested as the null hypothesis H_0′: "v_k(x) = a_1 v_k(y) + a_2 + ξ_k (k = 1, ..., K)".
Next, it will be described how to convert the null hypothesis H_0′ "v_k(x) = a_1 v_k(y) + a_2 + ξ_k (k = 1, ..., K)" into a substantially equivalent form that can actually be tested. Stated in mathematically exact terms, the null hypothesis H_0′ reads: "there exist constants a_1 and a_2 such that ξ_k = v_k(x) - a_1 v_k(y) - a_2 (k = 1, ..., K) follows a normal distribution with mean 0 and variance σ²h(a_1)". Here, the coefficient h(a_1) is
h(a_1) = 1 + a_1²   (17)
(Equation (17) is immediately derived from equation (16), which defines a_1 and ξ_k, and from the general properties of the variance of random variables.) The value of the variance σ² can be estimated simply and, for practical purposes, sufficiently accurately.
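For completeness, the one-line variance calculation behind equation (17), assuming γ_k(x) and γ_k(y) are independent with common variance σ²:

```latex
\operatorname{Var}(\xi_k)
  = \operatorname{Var}\!\bigl(\gamma_k(x) - a_1\,\gamma_k(y)\bigr)
  = \operatorname{Var}\bigl(\gamma_k(x)\bigr) + a_1^{2}\,\operatorname{Var}\bigl(\gamma_k(y)\bigr)
  = \sigma^{2}\bigl(1 + a_1^{2}\bigr)
  = \sigma^{2}\,h(a_1)
```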
From the above, if the constants a_1 and a_2 can be determined, the null hypothesis H_0′ can be tested. In practice, it is sufficient to obtain optimum estimates ã_1 and ã_2 of these constants.
A known fitting method can be used as it is to calculate the optimum estimates of the constants a_1 and a_2. As a typical example of such a fitting method, an outline of the linear least squares method is given below. To apply the linear least squares method to this embodiment, the sum of squares of ξ_k under the above null hypothesis is simply defined as S(a), that is,
S(a) = Σ_{k=1}^{K} (v_k(x) - a_1 v_k(y) - a_2)²   (18)
The value of S(a) depends on the constant vector a = (a_1, a_2), that is, on the values of the constants a_1 and a_2. By calculating the constant vector a for which S(a) takes its minimum value, the optimum estimates ã_1 and ã_2 of the constants a_1 and a_2, in the sense of unbiased estimates, are obtained. As the concrete computation of the linear least squares method, various known methods can be used, all of which are very simple and require only a very short calculation time.
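The least-squares step can be sketched as follows; the synthetic curves and the NumPy routine are assumptions for illustration, not the embodiment's implementation:

```python
import numpy as np

# Sketch of the linear least-squares step: estimate a1, a2 minimizing
# S(a) = sum_k (v_k(x) - a1 * v_k(y) - a2)^2.
rng = np.random.default_rng(1)

K = 25
d = 40.0 * np.sin(np.linspace(0.0, np.pi, K))          # stand-in true curve
v_y = 0.8 * d + 5.0 + rng.normal(0.0, 0.5, K)          # reference pixel y
v_x = 0.3 * d + 9.0 + rng.normal(0.0, 0.5, K)          # target pixel x

# Design matrix [v_k(y), 1] so that v_x ~ a1 * v_y + a2 in the LS sense.
G = np.column_stack([v_y, np.ones(K)])
(a1, a2), *_ = np.linalg.lstsq(G, v_x, rcond=None)

# Under the model of equation (12), a1 should approximate p(x)/p(y) = 0.3/0.8.
print(round(float(a1), 2), round(float(a2), 2))
```

Any equivalent solver (normal equations, QR) gives the same estimates; `lstsq` is used here only because it is the most direct NumPy call.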
When the optimum estimates ã_1 and ã_2 of the constants a_1 and a_2 have been calculated in this way, the residuals defined by the following equation
r̃_k(x, y) = v_k(x) - ã_1 v_k(y) - ã_2   (19)
can be calculated concretely. Therefore, using these residuals r̃_k, the null hypothesis H_0′ described above can be restated in the substantially equivalent form H_0″: "r̃_k(x, y) (k = 1, ..., K) follows a normal distribution with mean 0 and variance (1 + ã_1²)σ²". This is a concrete proposition on which the computation of the test can actually be performed.
In addition, expressed in vector form as
a = (a_1, a_2), ξ = (ξ_1, ξ_2, ..., ξ_K)   (20)
(where the vectors a and ξ depend on the pixel pair x, y), and defining the vector-valued function f by
f(ã, v(y)) = ã_1 v(y) + ã_2   (21)
the null hypothesis H_0′ can be rewritten as the null hypothesis H_0″: "v(x) = f(ã, v(y)) + ξ, where ξ follows a normal distribution with mean 0 and variance (1 + ã_1²)σ²". This has exactly the same form as the null hypothesis H_0 described above; that is, this embodiment is clearly a modification of the coherent filter described above. Here, f(ã, v(y)) is the pixel value vector v(y) of the pixel y converted, with the parameter a representing the partial volume effect optimally adjusted, so as to match the pixel value vector v(x) of the pixel x as closely as possible.
Next, the method of obtaining a time density curve by a coherent filter using the null hypothesis H_0″ described above will be described. For a set R of pixels roughly corresponding to the region to be measured, the following calculation is performed for one pixel x ∈ R against every pixel y ∈ R that is an element of the set R. That is, using the method described above, the residuals r̃_k(x, y) (k = 1, ..., K) are calculated; then the risk factor p(x, y) at which the null hypothesis H_0″ "r̃_k(x, y) (k = 1, ..., K) follows a normal distribution with mean 0 and variance (1 + ã_1²)σ²" would be rejected, or the corresponding weight w(p(x, y)), is calculated concretely. Then, the weighted average v′_k(x) is calculated by the following equation (22)
v′_k(x) = Σ_{y∈R} w(p(x, y)) (ã_1 v_k(y) + ã_2) / Σ_{y∈R} w(p(x, y))   (22)
(where ã_1 and ã_2 are the estimates obtained for the pixel pair x, y), and the time density curve at the pixel x is obtained as {<t_k, v′_k(x) - v′_1(x)> (k = 1, 2, ..., K)}.
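The whole coherent-regression loop can be sketched as follows. The exponential weight used here is a simplifying stand-in for the weight function w(p(x, y)) of the embodiment, and all data are synthetic assumptions:

```python
import numpy as np

# Sketch of coherent regression: for each pixel y, fit a1, a2, form a test
# statistic from the residuals, and average the partial-volume-corrected
# curves a1 * v(y) + a2 with weights w.
rng = np.random.default_rng(2)

K, n_pix = 20, 30
t = np.arange(K, dtype=float)
d = 60.0 * np.exp(-0.5 * (t - 7.0) ** 2 / 8.0)          # shared true curve
sigma = 1.0

v_x = 0.2 * d + 10.0 + rng.normal(0.0, sigma, K)         # weak target pixel
p = rng.uniform(0.2, 1.0, n_pix)                         # per-pixel PVE scales
q = rng.uniform(0.0, 20.0, n_pix)
v_ys = p[:, None] * d[None, :] + q[:, None] + rng.normal(0.0, sigma, (n_pix, K))

num = np.zeros(K)
den = 0.0
for v_y in v_ys:
    G = np.column_stack([v_y, np.ones(K)])
    (a1, a2), *_ = np.linalg.lstsq(G, v_x, rcond=None)
    r = v_x - a1 * v_y - a2                              # residuals, eq. (19)
    s = np.sum(r ** 2) / (sigma ** 2 * (1.0 + a1 ** 2))  # normalized statistic
    w = np.exp(-0.5 * max(float(s) - K, 0.0))            # assumed weight w(p)
    num += w * (a1 * v_y + a2)                           # PVE-corrected curve
    den += w
v_prime = num / den                                      # v'_k(x), eq. (22)

# The averaged curve should track the clean target with reduced noise.
print(float(np.std(v_prime - (0.2 * d + 10.0))) < sigma)
```

A real implementation would convert the statistic into a proper rejection risk factor before weighting; the exponential form above only preserves the monotone behaviour (good fits weigh more).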
The time density curve thus obtained is a measured value approximating {<t_k, A(d_k - d_1)>}, a linear transformation of the true time density curve {<t_k, d_k>} at the pixel x (A is an unknown coefficient), in which random noise is suppressed by the effect of the weighted average. In addition, as is apparent from the equation, the pixel value vector of each other pixel y is used after correction for the influence of the partial volume effect. Furthermore, this embodiment shares the common feature of the coherent filter: no time averaging is used at all, and the spatial average is calculated using weights based on the degree of fit with the pixel x. Therefore, according to this embodiment, a time density curve can be obtained in which the influence of the partial volume effect is suppressed and random noise is suppressed without impairing the temporal resolution. This method of obtaining the time density curve is referred to in particular as the "coherent regression method".
Next, an example of clinical use of the time density curve in dynamic CT images obtained by dynamic CT imaging or the like in medical X-ray CT will be described concretely. In this application example, imaging such as dynamic CT is performed while a contrast medium is rapidly injected into a blood vessel, and the change in density of the image of an artery present in human tissue is measured as a time density curve, whereby the blood flow dynamics in the tissue are diagnosed.
In this application example, arteries in human tissue are generally very thin in many cases, and thus the arterial image appearing on a CT tomographic image produces a partial volume effect. Furthermore, it goes without saying that the image contains random noise. For this reason, with the conventional method it is difficult to obtain a sufficiently accurate time concentration curve for the artery; if measurement is performed forcibly, only measured values <t_k, (v_k(x) - v_1(x))> approximating to some extent <t_k, A(D_k - D_1)>, a linear transformation of the true arterial time concentration curve <t_k, D_k> (where D_k represents the pixel value (a scalar value) at time t_k of a group of pixels corresponding to the arterial image, and k = 1, 2, ..., K), are obtained. These measured values contain random noise, and the coefficient A remains unknown owing to the partial volume effect.
By the coherent regression method described above, however, measured values <t_k, (v′_k(x) - v′_1(x))> (k = 1, 2, ..., K) sufficiently approximating <t_k, A(D_k - D_1)> can be obtained. On the other hand, some veins observable on the same tomographic image are considerably thick. For those veins, sufficiently good approximate values <t_k, (J_k - J_1)> (k = 1, 2, ..., K) can be obtained even by the conventional method, where J_k represents the pixel value at time t_k of a group of pixels corresponding to the vein image.
Incidentally, for time concentration curves relating to blood circulation, the proposition S is known to hold: "if the contrast medium concentration in the blood at time t_1 is 0, the time concentration curves <t_k, (d_k - d_1)> relating to any blood vessels have matching areas under the curve (AUC)." The area under the curve here means the integral of the time density curve <t_k, (d_k - d_1)> with respect to time t.
Therefore, the area under the curve AUC(d) of the time concentration curve <t_k, (d_k - d_1)> relating to a certain blood vessel d can be approximately calculated, for example, by the following equation:
AUC(d) = Σ_{k=1}^{K-1} [((d_k - d_1) + (d_{k+1} - d_1)) / 2] (t_{k+1} - t_k)   (23)
Therefore, the area under the curve AUC(J) of the time concentration curve {<t_k, (J_k - J_1)>} obtained for a vein by the conventional method can be calculated using equation (23) (substituting J for d). As for the artery, if the time concentration curve {<t_k, (D_k - D_1)>} were known, the area under the curve AUC(D) could be calculated in the same way using equation (23), and according to the proposition S above,
AUC(D) ≈ AUC(J)   (24)
should hold. In practice, however, the time concentration curve <t_k, (D_k - D_1)> is unknown, so AUC(D) cannot be calculated.
On the other hand, the time concentration curve <t_k, (v′_k(x) - v′_1(x))> obtained by the method according to the present invention approximates <t_k, A(D_k - D_1)>, and the latter contains the unknown coefficient A. For this reason, the area under the curve AUC(v′), which can be concretely calculated from {<t_k, (v′_k(x) - v′_1(x))>} using equation (23), must be approximately A times AUC(D). That is,
AUC(v′) ≈ A·AUC(D)   (25)
Therefore, from equations (24) and (25),
A ≈ AUC(v′) / AUC(J)   (26)
holds. Since the right-hand side of equation (26) can be concretely calculated using equation (23), the hitherto unknown value of the coefficient A can be concretely determined. Accordingly, if the time concentration curve <t_k, (v′_k(x) - v′_1(x)) / A> is constructed using this value of the coefficient A, it is nothing but an approximation of the arterial time concentration curve <t_k, (D_k - D_1)>. The method of constructing a time concentration curve in which the unknown value of the proportionality coefficient A is determined using the area under the curve in this way is referred to as the "AUC method".
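The AUC method reduces to a few lines; the bolus shape, the delays, and the value A = 0.25 below are illustrative assumptions:

```python
import numpy as np

# Sketch of the AUC method: recover the unknown proportionality coefficient A
# of the arterial curve from a thick vein's curve, using AUC(D) ~ AUC(J)
# (proposition S).
t = np.linspace(0.0, 40.0, 41)                 # imaging times t_k

def auc(y):
    """Trapezoidal area under the curve, as in equation (23)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def bolus(t0):
    """Assumed contrast bolus shape with arrival time t0."""
    s = np.clip(t - t0, 0.0, None)
    return s ** 2 * np.exp(-s / 3.0)

D = bolus(5.0)                                 # true arterial curve D_k - D_1
J = bolus(9.0) * (auc(bolus(5.0)) / auc(bolus(9.0)))   # venous curve, same AUC

A_true = 0.25                                  # unknown partial-volume factor
v_meas = A_true * D                            # measured v'_k(x) - v'_1(x)

A_est = auc(v_meas) / auc(J)                   # A ~ AUC(v') / AUC(J), eq. (26)
D_rec = v_meas / A_est                         # reconstructed arterial curve

print(round(A_est, 3))   # prints 0.25
```

The vein is deliberately delayed and reshaped here: only its area matters, which is exactly what proposition S licenses.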
From the above, in the clinical use of the time density curve in dynamic CT images obtained by dynamic CT imaging or the like, combining the AUC method with the coherent regression method yields, even for the time concentration curve of a fine artery, which is difficult or impossible to measure with the conventional method, a measured value in which the influences of the partial volume effect and random noise are eliminated and which does not contain the unknown proportionality coefficient A.
Of course, the AUC method alone can also be applied to an arterial time concentration curve <t_k, (v_k(x) - v_1(x))> measured by the conventional method; although the influences of random noise and the partial volume effect cannot then be excluded, a time density curve can still be constructed in which the hitherto unknown value of the proportionality coefficient A is determined.
(Correction of the cerebral artery time concentration curve using the superior sagittal sinus time concentration curve (suppression of the partial volume effect))
In order to suppress the influence of the partial volume effect, the time concentration curve Ca(t) of the cerebral artery may be corrected using the time concentration curve Csss(t) of the superior sagittal sinus, instead of or in combination with the coherent regression described above.
First, in step S7 of FIG. 5, as shown in FIG. 11, a large superior sagittal sinus ROI is set so as to surround the superior sagittal sinus on the CT image. Since the superior sagittal sinus is larger than the arteries and its position is fixed, setting the superior sagittal sinus ROI is easy. The large superior sagittal sinus ROI includes a plurality of pixels.
Next, the superior sagittal sinus ROI is reduced so that all the pixels in the ROI lie entirely within the superior sagittal sinus (S8). As the reduction processing, for example, threshold processing (binarization) is first performed on each pixel in the superior sagittal sinus ROI to create a binary map ("0", "1") within the ROI. The threshold is set to a value that separates the image of the superior sagittal sinus from the images of the surrounding tissues and bones: "1" represents a pixel on the image of the superior sagittal sinus, and "0" represents a pixel on an image of surrounding tissue or bone. Each pixel (central pixel) of this binary map is then replaced according to the values of its 4 or 8 neighboring pixels. Only when the central pixel is "1" and all of its 4 or 8 neighboring pixels are "1" is the value of the central pixel maintained at "1". That is, even if the central pixel is "1", if any one of its 4 or 8 neighboring pixels is "0", the value of the central pixel is replaced with "0". Accordingly, the superior sagittal sinus ROI is reduced by at least one pixel from the outline of the image of the superior sagittal sinus. As a result, the condition that all the pixels in the reduced superior sagittal sinus ROI are pixels on the superior sagittal sinus image can be realized with high accuracy.
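The erosion step described above can be sketched on a toy binary map (the map and the 4-neighbourhood choice are assumptions for illustration):

```python
import numpy as np

# Sketch of the ROI reduction step: binarize the ROI, then keep a pixel at "1"
# only if it and all of its 4-neighbours are "1" (a one-pixel erosion).
roi = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
], dtype=int)

def erode4(m):
    """One erosion pass using the 4-neighbourhood described in the text."""
    out = np.zeros_like(m)
    for i in range(1, m.shape[0] - 1):
        for j in range(1, m.shape[1] - 1):
            if m[i, j] == 1 and m[i-1, j] and m[i+1, j] and m[i, j-1] and m[i, j+1]:
                out[i, j] = 1
    return out

shrunk = erode4(roi)
print(int(shrunk.sum()))   # prints 1: only the centre pixel survives
```

Using the 8-neighbourhood instead only makes the test in the inner `if` stricter; the structure of the pass is the same.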
Instead of this method, the superior sagittal sinus ROI may be corrected using the area under the time density curve, AUC. In this case, the area under the curve AUC of the time density curve is calculated for each of the pixels in the large ROI serving as the search range. Owing to the contrast effect, the area under the curve AUC of a pixel on the superior sagittal sinus image is clearly higher than that of the surrounding pixels. Therefore, by applying threshold processing to the area under the curve AUC, only pixels on the superior sagittal sinus image can be selected from the ROI.
In this way, using either method, or both methods combined under an AND condition, a plurality of pixels on the superior sagittal sinus image are selected with high accuracy; the time density curves of these pixels are averaged, and the time concentration curve Csss(t) of the superior sagittal sinus is created (S9).
Here, since an iodine contrast medium does not pass through the blood-brain barrier, in principle the iodine concentration does not change between the cerebral arteries and the cerebral veins; that is, the area under the curve AUCsss of the time density curve Csss(t) of the superior sagittal sinus is substantially equivalent to the area under the curve AUCa of the time concentration curve Ca(t) of the cerebral artery created in S6. Therefore, as shown in FIG. 10, the time concentration curve Ca(t) of the cerebral artery created in S6 is corrected by multiplying each of its time values by the ratio (AUCsss / AUCa) of the area under the curve AUCsss of the superior sagittal sinus time concentration curve Csss(t) to the area under the curve AUCa of the cerebral artery time concentration curve Ca(t) (S10).
Next, using the time concentration curve Ca(t) of the cerebral artery shown in FIG. 13A, in which noise and the partial volume effect are suppressed as described above, the blood flow dynamics of the brain tissue (capillaries) are quantified. For this purpose, first, a time density curve Ci(t) shown in FIG. 13B is created for each pixel of the brain tissue (S11).
Next, as shown in S12, separate cerebral artery time density curves Ca(t) are used for the left and right areas; for each pixel, with the cerebral artery time density curve Ca(t) as the input function and the brain tissue time density curve Ci(t) as the output function, the characteristic of the process by which the tracer passes through the capillaries is obtained as the transfer function MTF. That is, for the time density curve Ci(t) of brain tissue in the left area, the time density curve Ca(t) of a cerebral artery in the same left area is used, and for the time density curve Ci(t) of brain tissue in the right area, the time density curve Ca(t) of a cerebral artery in the same right area is used, to obtain the transfer function MTF. Moreover, since the cerebral artery time concentration curve Ca(t) is created for each of the ACA, MCA, and PCA as described above, the calculation of the transfer function MTF is repeated for each Ca(t).
Here, as shown in FIG. 14, an ideal relationship established between the time concentration curve Ca(t) of the cerebral artery and the time concentration curve Ci(t) of the capillaries is used as the analysis model. That is, when a contrast medium (tracer) is injected instantaneously into the blood vessel just before it enters the brain tissue, the time density curve in a unit volume (one pixel) of brain tissue containing capillaries represents the impulse response of the tissue, that is, its transfer function. This transfer function is approximated by a rectangular function. This is called the box-MTF method.
FIG. 15 shows the principle of the box-MTF method. The transfer function box-MTF, represented by a rectangular function, is convolved with the time concentration curve Ca(t) of the cerebral artery; the residual between Ci′(t) obtained by the convolution and the measured Ci(t) created in S11 is calculated, and the transfer function box-MTF is corrected so as to reduce the sum of squares of the residual. By repeating this routine, the residual is minimized.
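The box-MTF fitting loop can be sketched with a crude grid search; the input-function shape, noise level, and grid are illustrative assumptions, and a real implementation would use a proper optimizer rather than exhaustive search:

```python
import numpy as np

# Sketch of the box-MTF principle: convolve a rectangular transfer function
# with Ca(t), compare with the measured Ci(t), and search the rectangle's
# height and width to minimize the sum of squared residuals.
rng = np.random.default_rng(3)

dt = 1.0
t = np.arange(0.0, 40.0, dt)
s = np.clip(t - 5.0, 0.0, None)
ca = s ** 2 * np.exp(-s / 3.0)                  # assumed arterial input Ca(t)

def box_response(height, width):
    """Ci'(t): Ca(t) convolved with a rectangle of given height and width."""
    box = height * (t < width)
    return np.convolve(ca, box)[: len(t)] * dt

ci = box_response(0.05, 8.0) + rng.normal(0.0, 0.05, len(t))   # measured Ci(t)

best = None
for h in np.linspace(0.01, 0.1, 10):            # candidate heights
    for w in np.arange(2.0, 16.0, 1.0):         # candidate widths
        err = float(np.sum((box_response(h, w) - ci) ** 2))    # residual sum
        if best is None or err < best[0]:
            best = (err, float(h), float(w))

err, h, w = best
print(round(h, 2), round(w, 1))                 # close to the true (0.05, 8.0)
```

The height and width of the winning rectangle are exactly the quantities from which CBP, CBV, and MTT are then read off.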
Based on the transfer function box-MTF that minimizes the residual, provisional values CBP′, CBV′, and MTT′ are calculated as shown in FIG. 14 (S13), and the sum of squares of the residual minimized in S12 is taken directly as Err. Strictly speaking, these are corrected as
CBP = CBP′
CBV = (1 - Ht) / (1 - b·Ht) × CBV′
MTT = (1 - Ht) / (1 - b·Ht) × MTT′
where Ht is the hematocrit value of large blood vessels and b·Ht is the hematocrit value of peripheral blood vessels (generally, b is about 0.7).
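The correction formulas can be sketched directly; Ht = 0.45 is an assumed example value (only b ≈ 0.7 is given in the text):

```python
# Sketch of the hematocrit correction step applied to the provisional values
# CBV' and MTT'; CBP is unchanged because CBV and MTT scale identically.
def hematocrit_correct(cbv_prime, mtt_prime, ht=0.45, b=0.7):
    """Scale CBV' and MTT' by (1 - Ht) / (1 - b * Ht)."""
    factor = (1.0 - ht) / (1.0 - b * ht)
    return factor * cbv_prime, factor * mtt_prime

cbv, mtt = hematocrit_correct(4.0, 5.0)   # toy provisional values
print(round(cbv, 3), round(mtt, 3))       # prints 3.212 4.015
```

Note that since CBP = CBV / MTT and both factors cancel, CBP = CBP′ follows consistently from the two scaled quantities.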
As shown in FIG. 16, output ranges (appropriate ranges) are individually set for CBP, CBV, MTT, and Err calculated from the transfer function obtained by the box-MTF method in this way (S14). For pixels whose value is within the corresponding output range, the value is maintained; for pixels whose value is outside the output range, the value is replaced with, for example, a value corresponding to black on the display (S15).
Next, as shown in FIG. 17, maps are generated from the output-optimized CBP, CBV, MTT, and Err (S16). The CBP, CBV, MTT, and Err indexes are calculated individually for each of the anterior cerebral artery ACA, the middle cerebral artery MCA, and the posterior cerebral artery PCA, and for each slice. Therefore, as shown in the figure, there are 4 × 3 = 12 maps even for a single slice. If the number of cerebral arteries set is increased, the number of maps increases by four for each additional artery. It is not realistic to evaluate many such maps comprehensively. Therefore, to reduce the number of maps, the maps are synthesized (S17).
As the synthesis method, the CBP map of the anterior cerebral artery ACA, the CBP map of the middle cerebral artery MCA, and the CBP map of the posterior cerebral artery PCA are synthesized based on the residual Err of the anterior cerebral artery ACA, the residual Err of the middle cerebral artery MCA, and the residual Err of the posterior cerebral artery PCA. For example, when the transfer function MTF is obtained from the time concentration curve Ca(t) of the anterior cerebral artery ACA and the time concentration curve Ci(t) of brain tissue under its control, the residual Err is relatively small; conversely, when the transfer function MTF is obtained from the time concentration curve Ci(t) of brain tissue not under its control, the residual Err is relatively large. That is, the residual Err represents the likelihood that each cerebral artery dominates the tissue.
Therefore, for each pixel, the CBP value corresponding to the lowest residual Err is selected from among the CBP value of the anterior cerebral artery ACA, the CBP value of the middle cerebral artery MCA, and the CBP value of the posterior cerebral artery PCA, and is adopted as the value of that pixel. The map synthesized in this way is composed of the CBP values of the brain tissue most likely to be under the control of the anterior cerebral artery ACA, the middle cerebral artery MCA, or the posterior cerebral artery PCA. The same applies to the map synthesis of the other indexes CBV and MTT.
Alternatively, for each pixel, the cerebral artery having the lowest residual Err may be selected from among the anterior cerebral artery ACA, the middle cerebral artery MCA, and the posterior cerebral artery PCA, and different color information may be assigned depending on whether the selected artery is the ACA, the MCA, or the PCA, thereby creating a dominance map in which the dominant regions of the anterior cerebral artery ACA, the middle cerebral artery MCA, and the posterior cerebral artery PCA are distinguished by color coding.
Here, the map synthesis will be described in detail. The time density curve obtained from a pixel located at a position corresponding to an artery reflects the arterial blood contrast medium concentration, and by applying the coherent regression method described above, a correct time density curve of the arterial blood contrast medium concentration can be obtained. Such a cerebral artery time concentration curve can be created for each artery, and the curves differ depending on the blood circulation state. In particular, this difference may be significant in cases with cerebrovascular disorders. Let the cerebral artery time concentration curves obtained from K arteries be represented by A_k(t) (k = 1, 2, ..., K).
By comparing the time concentration curve of a tissue with the cerebral artery time concentration curve of the artery that feeds (dominates) the tissue, indexes such as CBP reflecting the microcirculation (the structure and function of the capillary system in the tissue) can be obtained. Since these indexes are calculated for each site (x, y, z), an image having those values as pixel values can be constructed; such an image is an index map. For example, when R types of indexes (typically the four types CBP, CBV, MTT, and Err described above) are obtained, R maps can be constructed. The R maps created in this way can be regarded as one map (a vector value map) in which each pixel takes a vector value. That is,
V_k(x, y, z) = <P_{k,1}(x, y, z), P_{k,2}(x, y, z), ..., P_{k,R}(x, y, z)>
For example, in a CBP study, typically R = 4 as described above, and the maps can be configured so that P_{k,1}(x, y, z) represents the CBP value, P_{k,2}(x, y, z) the CBV value, P_{k,3}(x, y, z) the MTT value, and P_{k,4}(x, y, z) the value of the residual Err.
Among the sites (x, y, z), those clearly not corresponding to the organ to be analyzed are excluded from the analysis from the beginning, and P_{k,r}(x, y, z) may be replaced with a special value indicating that the site is not subject to analysis (steps S14 and S15). As such a value, it is convenient to use a negative value with a large absolute value. Alternatively, as yet another element added to the vector V_k(x, y, z),
P_{k,R+1}(x, y, z) = (0 if (x, y, z) is not subject to analysis, 1 otherwise)
may be used to make a map. Such a map is called a "mask".
Such a vector value map V_k is created for each referenced cerebral artery time density curve A_k. For example, if cerebral artery time concentration curves are obtained from the left and right middle cerebral arteries, anterior cerebral arteries, and posterior cerebral arteries, then K = 6; if cerebral artery time concentration curves are further obtained from several arteries around the lesion, K becomes about 10 to 15.
Of these, a cerebral artery time concentration curve obtained from an artery in the right hemisphere should be used only for the analysis of sites (x, y, z) belonging to the right hemisphere, and a cerebral artery time concentration curve obtained from an artery in the left hemisphere only for the analysis of sites (x, y, z) belonging to the left hemisphere. Therefore, it is desirable that the operator designates the boundary (midline) between the right and left hemispheres as a straight line, a curve, a polygonal line, a plane, a curved surface, or the like, and that a map is created for each hemisphere. Even so, there may still be as many as K = 3 to 10 cerebral artery time concentration curves for each hemisphere.
Thus, when the number K of cerebral artery time density curves A_k (k = 1, 2, ..., K) is large, the resulting vector value maps V_k (k = 1, 2, ..., K) are inconvenient to observe because of their sheer number. That is, if they are to be observed as normal grayscale or color scale images, one map consists of R images and there are K sets, so a total of K × R images must be compared. Further, it is not always obvious which site is nourished by which artery, and for each site one must decide, using anatomical knowledge, which map V_k (k = 1, 2, ..., K) to observe. In particular, in cases where a cerebrovascular disorder such as cerebral infarction has occurred, the artery that controls a tissue does not necessarily match anatomical knowledge, and abnormal control is often observed. Because of these problems, the vector value maps are difficult to interpret.
To solve this problem, map synthesis is performed. That is, the K vector value maps V_k (k = 1, 2, ..., K) are aggregated into one vector value map V using the residual maps. For example, if P_{k,R}(x, y, z) is the residual map,
V(x, y, z) = V_k(x, y, z), where k is chosen so that |P_{k,R}(x, y, z)| is minimized.
In addition, a map indicating which of k = 1, 2, ..., K was adopted,
P_0(x, y, z) = (the k among 1, 2, ..., K for which |P_{k,R}(x, y, z)| is smallest),
can also be added.
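The synthesis rule V(x, y, z) = V_k(x, y, z) with k minimizing |P_{k,R}| can be sketched as follows on random toy maps (all shapes and values are assumptions):

```python
import numpy as np

# Sketch of map synthesis: for each voxel, pick among the K per-artery maps
# the index vector whose residual component is smallest, and record which k
# was adopted (the dominance map P_0).
rng = np.random.default_rng(4)

K, R = 3, 4                     # 3 arteries (ACA/MCA/PCA), 4 indexes incl. Err
H, W = 8, 8
V = rng.uniform(0.0, 10.0, (K, H, W, R))   # V_k maps; last component is Err

err = np.abs(V[..., R - 1])                # |P_{k,R}(x, y)| residual maps
p0 = np.argmin(err, axis=0)                # dominance map P_0 (adopted k)

# Gather V(x, y) = V_{p0(x,y)}(x, y) for every pixel.
ii, jj = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
V_syn = V[p0, ii, jj, :]

# By construction, the synthesized residual is the per-pixel minimum.
assert np.allclose(V_syn[..., R - 1], err.min(axis=0))
print(V_syn.shape)   # prints (8, 8, 4)
```

The K × R images thus collapse to R images plus the single dominance map p0, which is exactly the reduction the text describes.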
According to this method, when the result is to be observed as normal grayscale or color scale images, only R (or R + 1) images need to be observed.
According to this method, there is a possibility that at a site (x, y, z) that should be calculated using the cerebral artery time density curve A_j, a calculation result using A_k is erroneously adopted instead. However, as is apparent from the definition of V(x, y, z), such an error requires |P_{k,R}(x, y, z)| < |P_{j,R}(x, y, z)|, and this relationship arises only when A_j and A_k are very similar. In that case, V_k(x, y, z) and V_j(x, y, z) are also considered to be similar to each other at the site (x, y, z), so there is almost no possibility of an error in the interpretation of the site (x, y, z).
When this method is actually applied, it is observed that, for each site in tissue that appears almost uniform, P_0(x, y, z) = k or P_0(x, y, z) = j occurs only when A_j and A_k are very similar; in that case P_{k,r}(x, y, z) ≈ P_{j,r}(x, y, z) (r = 1, 2, ..., R), and the result is almost the same whichever is adopted.
Conversely, for a tissue dominated by a specific artery, when the corresponding cerebral artery time density curve A_k is not similar to the other curves, this method selects V_k(x, y, z) almost certainly and automatically in the region (x, y, z) of that tissue. Therefore, by observing the above P_0(x, y, z), it is possible to see which tissue is controlled by which artery without anatomical knowledge.
Returning now to FIG.: as shown in FIGS. 20A, 20B, 20C, and 20D, a region of interest ROI including a plurality of pixels is set on the CBP map, CBV map, MTT map, and Err map, synthesized as described above or created for each cerebral artery alone (S18), and the average values of the pixel values in the ROI (the CBP average value, CBV average value, MTT average value, and Err average value) are calculated (S19); these average values may be used as diagnostic material. Note that in step S14 an appropriate range was set for each of CBP, CBV, MTT, and Err, values within the range were maintained, and values outside the range were replaced with, for example, a minimum value corresponding to black; if the average were taken including the replaced values, it would naturally contain an error. Therefore, in this averaging process, it is necessary to select only values within the appropriate range, or to exclude the replaced values.
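The range-restricted averaging can be sketched as follows; the sentinel value -1 and the toy map are assumptions standing in for the display's black replacement value:

```python
import numpy as np

# Sketch of the ROI averaging step: out-of-range pixels were replaced by a
# sentinel, so the average must be taken only over in-range values.
BLACK = -1.0                                  # assumed replacement value

cbp_map = np.array([[12.0, BLACK, 30.0],
                    [25.0, 18.0, BLACK],
                    [BLACK, 22.0, 14.0]])
roi = np.ones_like(cbp_map, dtype=bool)       # toy ROI covering the whole map

valid = roi & (cbp_map != BLACK)              # exclude replaced pixels
naive = float(cbp_map[roi].mean())            # biased by the sentinel values
correct = float(cbp_map[valid].mean())        # range-restricted average

print(round(naive, 2), round(correct, 2))     # prints 13.11 20.17
```

The same boolean ROI mask can be reused across the CBP, CBV, MTT, and Err maps, matching the shared-ROI behaviour described next.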
Further, in setting the region of interest ROI for averaging, if the ROI is set on any one of the CBP map, CBV map, MTT map, and Err map, the same ROI is used in common for the other maps; this simplifies the ROI setting work and makes it possible to calculate the average values (CBP average value, CBV average value, MTT average value, Err average value) for the same ROI.
As described above, according to the present embodiment, by using the coherent filter or coherent regression, noise can be suppressed while the reduction of spatial and temporal resolution is suppressed, so that the analysis accuracy of a CBP study can be improved.
Note that the present invention is not limited to the above-described embodiment as it is, and can be embodied by modifying the constituent elements without departing from the scope of the invention at the implementation stage. In addition, various inventions can be formed by appropriately combining a plurality of the constituent elements disclosed in the embodiment. For example, some constituent elements may be deleted from all the constituent elements shown in the embodiment. Furthermore, constituent elements across different embodiments may be appropriately combined.
10 ... Gantry unit,
20 ... Computer device,
101 ... X-ray tube,
101a ... High-voltage generator,
102 ... X-ray detector,
103 ... Data collection unit,
30 ... Image processing device,
107 ... Image display unit,
109 ... Input unit,
108 ... Control unit,
104 ... Preprocessing unit,
105 ... Memory unit,
106 ... Image reconstruction unit,
110 ... Coherent filter processing unit,
120 ... CBP study processing unit,
111 ... Variance value estimation unit,
112 ... Weight function calculation unit,
113 ... Pixel value calculation unit (coherent filter unit),
121 ... CBP study processing unit,
122 ... ROI setting support unit,
122 ... Time-density curve creation unit,
123 ... Cerebral artery time-density curve correction unit,
124 ... MTF processing unit,
125 ... Index calculation unit,
126 ... Map creation unit,
127 ... Map composition unit.
Claims (6)
1. An index calculation method for the blood flow dynamics of capillaries in brain tissue, comprising:
acquiring, as a time series, a plurality of CT images generated continuously with the brain of a subject injected with a contrast agent as the imaging target;
obtaining a similarity between a first pixel and a second pixel of the CT images by using the result of a statistical test between a first temporal density change, constructed from the corresponding pixel values of the plurality of CT images at the coordinates of the first pixel, and a second temporal density change, constructed from the corresponding pixel values of the plurality of CT images at the coordinates of the second pixel;
obtaining a weight according to the similarity;
locally weighted-averaging the pixels of the CT images based on the weight;
generating a time-density curve of a cerebral artery pixel and a time-density curve of a brain tissue pixel from the plurality of weighted-averaged CT images;
calculating a transfer function relating the time-density curve of the brain tissue pixel to the time-density curve of the cerebral artery pixel; and
calculating, based on the transfer function, a plurality of types of indices relating to the blood flow dynamics of the capillaries in the brain tissue.
2. The index calculation method according to claim 1, wherein, using an analysis model that assumes the impulse response of the tissue density change with respect to the time axis is represented by a rectangular function, the transfer function is determined by curve fitting so as to minimize a measure of the residual between the time-density curve obtained by convolving the cerebral artery pixel time-density curve with the transfer function and the time-density curve of the brain tissue pixel, and the indices are calculated from the transfer function thus obtained.
3. The index calculation method according to claim 2, wherein the following are calculated as the indices from the obtained transfer function:
CBP: blood flow volume per unit volume and unit time in the brain tissue;
CBV: blood volume per unit volume in the brain tissue;
MTT: mean transit time of blood through the capillaries; and
Err: the sum of squares of the minimized residuals.
4. The index calculation method according to claim 1, wherein a relatively high weighting factor is given when the similarity is relatively high, and a relatively low weighting factor is given when the similarity is relatively low.
5. An index calculation apparatus for the blood flow dynamics of capillaries in brain tissue, comprising:
means for acquiring, as a time series, a plurality of CT images generated continuously with the brain of a subject injected with a contrast agent as the imaging target;
means for obtaining a similarity between a first pixel and a second pixel of the CT images by using the result of a statistical test between a first temporal density change, constructed from the corresponding pixel values of the plurality of CT images at the coordinates of the first pixel, and a second temporal density change, constructed from the corresponding pixel values of the plurality of CT images at the coordinates of the second pixel;
means for obtaining a weight according to the similarity;
means for locally weighted-averaging the pixels of the CT images based on the weight;
means for generating a time-density curve of a cerebral artery pixel and a time-density curve of a brain tissue pixel from the plurality of weighted-averaged CT images;
means for calculating a transfer function relating the time-density curve of the brain tissue pixel to the time-density curve of the cerebral artery pixel; and
means for calculating, based on the transfer function, a plurality of types of indices relating to the blood flow dynamics of the capillaries in the brain tissue.
6. A computer-readable storage medium storing program code for causing a computer to calculate indices relating to the blood flow dynamics of capillaries in brain tissue, the program code comprising:
means for acquiring, as a time series, a plurality of CT images generated continuously with the brain of a subject injected with a contrast agent as the imaging target;
means for obtaining a similarity between a first pixel and a second pixel of the CT images by using the result of a statistical test between a first temporal density change, constructed from the corresponding pixel values of the plurality of CT images at the coordinates of the first pixel, and a second temporal density change, constructed from the corresponding pixel values of the plurality of CT images at the coordinates of the second pixel;
means for obtaining a weight according to the similarity;
means for locally weighted-averaging the pixels of the CT images based on the weight;
means for generating a time-density curve of a cerebral artery pixel and a time-density curve of a brain tissue pixel from the plurality of weighted-averaged CT images;
means for calculating a transfer function relating the time-density curve of the brain tissue pixel to the time-density curve of the cerebral artery pixel; and
means for calculating, based on the transfer function, a plurality of types of indices relating to the blood flow dynamics of the capillaries in the brain tissue.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
JP2008010951A (JP4714228B2) | 2008-01-21 | 2008-01-21 | Index calculation method, apparatus and storage medium for blood flow dynamics of capillaries in brain tissue
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date
JP2001318344 (Division) | | | 2001-10-16
Publications (2)
Publication Number | Publication Date
JP2008100121A (en) | 2008-05-01
JP4714228B2 (en) | 2011-06-29
Family
ID=39434858
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
JP2008010951A (granted as JP4714228B2, Active) | Index calculation method, apparatus and storage medium for blood flow dynamics of capillaries in brain tissue | 2008-01-21 | 2008-01-21
Country Status (1)
Country | Link
JP | JP4714228B2 (en)
Families Citing this family (4)
Publication Number | Priority Date | Publication Date | Assignee | Title
US8315449B2 * | 2008-06-24 | 2012-11-20 | Medrad, Inc. | Identification of regions of interest and extraction of time value curves in imaging procedures
JP5322548B2 * | 2008-09-17 | 2013-10-23 | Toshiba Corporation | X-ray CT apparatus, medical image processing apparatus, and medical image processing program
JP5897284B2 | 2010-09-01 | 2016-03-30 | Toshiba Corporation | Medical image processing device
EP2623457A1 | 2012-02-02 | 2013-08-07 | VTU Holding GmbH | Use of an ionic liquid for storing hydrogen
Citations (3)
Publication Number | Priority Date | Publication Date | Assignee | Title
JP2000050109A * | 1998-05-29 | 2000-02-18 | STMicroelectronics Inc | Non-linear image filter for removing noise
WO2000057777A1 * | 1999-03-26 | 2000-10-05 | Oestergaard Leif | Method for determining haemodynamic indices by use of tomographic data
JP2001061157A * | 1999-06-14 | 2001-03-06 | Nikon Corp | Image processing method, machine-readable recording medium with image processing program recorded therein, and image processor

2008
2008-01-21 | JP | Application JP2008010951A, patent JP4714228B2 (en) | Active
Legal Events
Date | Code | Title | Description
2008-02-20 | A621 | Written request for application examination | JAPANESE INTERMEDIATE CODE: A621
2010-12-07 | A131 | Notification of reasons for refusal | JAPANESE INTERMEDIATE CODE: A131
2011-02-07 | A521 | Written amendment | JAPANESE INTERMEDIATE CODE: A523
2011-03-01 | A01 | Written decision to grant a patent or to grant a registration (utility model) | JAPANESE INTERMEDIATE CODE: A01
2011-03-25 | A61 | First payment of annual fees (during grant procedure) | JAPANESE INTERMEDIATE CODE: A61
| FPAY | Renewal fee payment (event date is renewal date of database) | PAYMENT UNTIL: 2014-04-01; Year of fee payment: 3
| S111 | Request for change of ownership or part of ownership | JAPANESE INTERMEDIATE CODE: R313114, R313117, R313111
| R371 | Transfer withdrawn | JAPANESE INTERMEDIATE CODE: R371
| S111 | Request for change of ownership or part of ownership | JAPANESE INTERMEDIATE CODE: R313114, R313117, R313113
| R350 | Written notification of registration of transfer | JAPANESE INTERMEDIATE CODE: R350
| S533 | Written request for registration of change of name | JAPANESE INTERMEDIATE CODE: R313533
| R350 | Written notification of registration of transfer | JAPANESE INTERMEDIATE CODE: R350