CN110634117B - Method for quantitatively evaluating image by utilizing three-dimensional parameters - Google Patents


Info

Publication number: CN110634117B (grant of application CN110634117A)
Application number: CN201810548477.1A
Authority: CN (China)
Language: Chinese (zh)
Legal status: Active (granted)
Inventors: 杨鑫, 齐新宇, 林承光, 黄晓延, 夏云飞
Applicant/Assignee: Guangzhou Furui Value Medical And Health Industry Co., Ltd.


Classifications

    • G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06T7/13 — Segmentation; edge detection
    • G06T7/168 — Segmentation; edge detection involving transform domain methods
    • G06T2207/10081 — Image acquisition modality: computed X-ray tomography [CT]
    • G06T2207/20061 — Transform domain processing: Hough transform
    • G06T2207/30168 — Subject of image: image quality inspection


Abstract

The invention discloses a method for quantitatively evaluating an image by utilizing three-dimensional parameters, which comprises the following steps. S1, parameter scanning: scan with a measurement phantom and obtain a plurality of detection images of the target image. S2, two-dimensional parameter calculation: calculate the two-dimensional parameters corresponding to each detection image. S3, three-dimensional parameter expansion calculation: obtain the three-dimensional parameters of each image from the calculated two-dimensional parameters, and evaluate the target image by combining the three-dimensional parameters. With this technical scheme, image quality can be evaluated more objectively and accurately.

Description

Method for quantitatively evaluating image by utilizing three-dimensional parameters
Technical Field
The invention relates to the technical field of image processing, in particular to a method for quantitatively evaluating an image by utilizing three-dimensional parameters.
Background
CBCT image quality is typically evaluated with parameters such as uniformity, low-contrast resolution (Low Contrast Visibility, LCV), contrast-to-noise ratio (CNR), geometric accuracy, mean squared error (MSE), and peak signal-to-noise ratio (PSNR). At present, image quality is generally evaluated on the basis of two-dimensional parameters, and in such evaluation the choice of slice is in practice strongly influenced by human factors. In addition, the selection of the region of interest (ROI) is also subject to personal subjectivity, so that different operators, or even the same operator in different sessions with the same scanning parameters, may obtain different values for the image quality evaluation parameters.
Therefore, how to reduce the influence of error factors such as ROI selection, so that image quality is reflected more objectively and comprehensively and evaluated more objectively and accurately, is one of the technical problems of concern to the related industries.
Disclosure of Invention
In order to overcome the defects of the prior art, the technical problem solved by the invention is to provide a method that utilizes three-dimensional parameters to evaluate image quality more objectively and accurately.
In order to solve the technical problems, the technical scheme adopted by the invention comprises the following specific contents:
a method for quantitatively evaluating an image using three-dimensional parameters, the method comprising the steps of:
s1: parameter scanning: scanning with a measurement phantom and obtaining a plurality of detection images of the target image;
s2: two-dimensional parameter calculation: respectively calculating two-dimensional parameters corresponding to the detection images;
s3: three-dimensional parameter expansion calculation: according to the calculated two-dimensional parameters of each image, selecting a plurality of layers of adjacent images and ROIs from the detection images, obtaining the average value of the voxels of each three-dimensional ROI, and calculating the three-dimensional parameters of each image; or, according to the calculated two-dimensional parameters of each image, selecting a plurality of layers of adjacent images from the detection images, then selecting the preprocessed images of those layers, and calculating the three-dimensional parameters of each image;
and evaluating the target image by combining the three-dimensional parameters.
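As a minimal sketch of steps S1–S3 (a synthetic NumPy volume stands in for the phantom scan, and a simple central-ROI mean stands in for a generic quality parameter — both are illustrative assumptions, not the patent's exact procedure):

```python
import numpy as np

def two_d_param(slice_2d):
    # S2: a per-slice two-dimensional parameter (here: mean of a central 5x5 ROI).
    cy, cx = slice_2d.shape[0] // 2, slice_2d.shape[1] // 2
    return float(slice_2d[cy - 2:cy + 3, cx - 2:cx + 3].mean())

def three_d_param(volume):
    # S3: three-dimensional expansion -- the same ROI taken over several
    # adjacent slices becomes a voxel region, averaged as a whole.
    cy, cx = volume.shape[1] // 2, volume.shape[2] // 2
    return float(volume[:, cy - 2:cy + 3, cx - 2:cx + 3].mean())

# S1: "scan" -- a synthetic 5-slice detection volume stands in for the phantom scan.
volume = np.random.default_rng(0).normal(100.0, 5.0, size=(5, 64, 64))
per_slice = [two_d_param(s) for s in volume]   # S2: one value per layer
volume_value = three_d_param(volume)           # S3: one value for the voxel region
# For this parameter the 3-D value equals the mean of the per-slice 2-D values,
# which is what makes it less sensitive to any single slice choice.
assert abs(volume_value - sum(per_slice) / 5) < 1e-9
```

The point of the sketch is only the structural move from single-slice pixels to multi-slice voxels; the concrete parameters (uniformity, LCV, CNR, and so on) replace the central-ROI mean in the later sections.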
In order to improve the objectivity and accuracy of target image evaluation, the inventors further extend the two-dimensional parameters of the image in this technical scheme into a three-dimensional parameter expansion calculation, and evaluate the image quality with the calculated three-dimensional parameters. Calculating with three-dimensional parameters in this technical scheme has the following technical advantages:
(1) On the one hand, a quantitative analysis method makes the evaluated image quality more objective and accurate. In the prior art, image quality is evaluated either qualitatively or quantitatively. Qualitative evaluation judges image quality mainly by the naked eye; it is strongly influenced by human factors, depends on personal experience, and is also affected by the window width and window level of the image. This technical scheme adopts a quantitative evaluation method: quantitative two-dimensional and three-dimensional image quality evaluation parameters are calculated by a program, so the quality of the image is represented more accurately and intuitively, which is more advantageous than qualitative evaluation;
(2) On the other hand, evaluation with three-dimensional parameters expands the selected layers from a single layer to multiple layers and expands pixels to voxels during the calculation, which effectively mitigates the large error influence in two-dimensional parameter evaluation. Because the three-dimensional image quality parameters are determined comprehensively over multiple layers, the randomness of manual layer selection and the subjectivity of ROI selection are reduced to a certain extent, the image quality is reflected more comprehensively, the requirements of image quality evaluation are met, and the accuracy and objectivity of the evaluation are improved. Therefore, evaluating the image with three-dimensional parameters, determined comprehensively over multiple layers, effectively reduces the error problem of two-dimensional parameter evaluation and makes the evaluation result more accurate.
In this technical solution, the preprocessing may take various forms, for example median filtering of the image. In practice, however, the preprocessing is not limited to this method; it may be image enhancement, image denoising, or any of various other filtering methods, which is why the result is summarized as "a preprocessed image".
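For instance, the median-filter preprocessing mentioned above can be sketched as follows (a 3×3 filter with edge replication, implemented directly in NumPy; the kernel size and border handling are illustrative choices):

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filter with edge replication -- one possible preprocessing step."""
    padded = np.pad(img, 1, mode="edge")
    # Stack the nine shifted views covering each pixel's 3x3 neighbourhood...
    stack = np.stack([padded[di:di + img.shape[0], dj:dj + img.shape[1]]
                      for di in range(3) for dj in range(3)])
    # ...and take the median along the neighbourhood axis.
    return np.median(stack, axis=0)

img = np.full((8, 8), 10.0)
img[4, 4] = 200.0                  # an isolated noise spike
smoothed = median_filter_3x3(img)
assert smoothed[4, 4] == 10.0      # the spike is removed by the median
```

The filtered image then plays the role of "the preprocessed image" in the MSE and PSNR comparisons later in the text.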
Preferably, in the process of calculating the two-dimensional parameters, a program selects one layer of the detection images and a plurality of ROIs of that layer and substitutes them into a formula for calculation; and/or the program selects one layer of the detection images, then selects the preprocessed image of that layer, and substitutes it into a formula for calculation.
Preferably, the two-dimensional or three-dimensional parameter types include one or more of image uniformity, low-contrast resolution, contrast-to-noise ratio, geometric accuracy, mean square error, and peak signal-to-noise ratio; the measurement phantom comprises a first measurement module and a second measurement module; the first measurement module is used for measuring image uniformity; the second measurement module is used for measuring one or more of the low-contrast resolution, contrast-to-noise ratio, geometric accuracy, mean square error, and peak signal-to-noise ratio of the image.
More preferably, the two-dimensional or three-dimensional parameter types include image uniformity, low-contrast resolution, contrast-to-noise ratio, geometric accuracy, mean square error, and peak signal-to-noise ratio.
In a preferred embodiment, the target image is a cone-beam CT (CBCT) image; the quality of the target image is evaluated by quantitative analysis combining image uniformity, low-contrast resolution, contrast-to-noise ratio, geometric accuracy, mean square error, and peak signal-to-noise ratio, so that the evaluation is more comprehensive and accurate.
Preferably, the two-dimensional parameter calculation method of the image uniformity is as follows:
From the first measurement module, select one layer of the detection images, take one ROI at the centre of the cross-section and four ROIs in the orthogonal directions around the centre (five ROIs in total), calculate the pixel average value mean of each ROI, and obtain the two-dimensional image uniformity according to formulas (1) and (2):
mean = (1/(M×N)) × Σ_{i=1..M} Σ_{j=1..N} f(i,j) (1)
U = mean(high) − mean(low) (2)
wherein f(i,j) is the pixel value of the ith row and jth column in the ROI; M, N are the length and width of the ROI; mean(high) represents the maximum of the five ROI averages; mean(low) represents the minimum of the five ROI averages.
The three-dimensional parameter calculation method of the image uniformity is as follows:
From the detection images, select a plurality of adjacent layers and form five three-dimensional ROIs from the central region and the four orthogonal regions around the centre, extending the parameters of the two-dimensional formula into three dimensions; calculate the voxel average value T_mean of each ROI and substitute it into formulas (3) and (4) to obtain the three-dimensional image uniformity:
T_mean = (1/(M×N×O)) × Σ_{i=1..M} Σ_{j=1..N} Σ_{k=1..O} f(i,j,k) (3)
T_U = T_mean(high) − T_mean(low) (4)
wherein f(i,j,k) is the voxel value of the ith row and jth column of the kth layer in the ROI; M, N are the length and width of the ROI and O is the number of layers; T_mean(high) represents the maximum of the five three-dimensional ROI averages; T_mean(low) represents the minimum of the five three-dimensional ROI averages.
Relative to the two-dimensional parameter, the three-dimensional image uniformity difference is less affected by the choice of ROI size and lies between the single-layer values, so it represents the image uniformity more typically.
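A sketch of the two- and three-dimensional uniformity calculations described above (the ROI half-width and the offset of the four orthogonal ROIs from the centre are illustrative assumptions; uniformity is taken as max-minus-min of the five ROI means, as in formulas (1)–(4)):

```python
import numpy as np

def roi_mean_2d(img, cy, cx, half=2):
    """Mean pixel value of a (2*half+1)^2 square ROI centred at (cy, cx)."""
    return float(img[cy - half:cy + half + 1, cx - half:cx + half + 1].mean())

def uniformity_2d(img, offset=20, half=2):
    """Max minus min of the five ROI means: centre plus four orthogonal ROIs."""
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    centres = [(cy, cx), (cy - offset, cx), (cy + offset, cx),
               (cy, cx - offset), (cy, cx + offset)]
    means = [roi_mean_2d(img, y, x, half) for y, x in centres]
    return max(means) - min(means)

def uniformity_3d(vol, offset=20, half=2):
    """Same five ROIs, but each one spans every slice of `vol` (voxels)."""
    cy, cx = vol.shape[1] // 2, vol.shape[2] // 2
    centres = [(cy, cx), (cy - offset, cx), (cy + offset, cx),
               (cy, cx - offset), (cy, cx + offset)]
    t_means = [float(vol[:, y - half:y + half + 1, x - half:x + half + 1].mean())
               for y, x in centres]
    return max(t_means) - min(t_means)

# A perfectly uniform phantom volume has zero uniformity difference.
vol = np.full((3, 64, 64), 100.0)
assert uniformity_2d(vol[1]) == 0.0
assert uniformity_3d(vol) == 0.0
```

Pooling each ROI over several layers before averaging is exactly the pixel-to-voxel expansion: a bright or dark artefact confined to one slice moves the 3-D means less than it moves any single-slice mean.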
Preferably, the two-dimensional parameter calculation method of the image low-contrast resolution is as follows:
From the second measurement module, select one layer of the detection images, select ROIs of the same size within the image regions of the first substance and the second substance, calculate the pixel mean and standard deviation SD of each ROI, and substitute them into formulas (5), (6) and (7) to obtain the two-dimensional low-contrast resolution value LCV of the image:
mean = (1/(M×N)) × Σ_{i=1..M} Σ_{j=1..N} f(i,j) (5)
SD = √[(1/(M×N)) × Σ_{i=1..M} Σ_{j=1..N} (f(i,j) − mean)²] (6)
LCV = 2 × (SD_first + SD_second) / |CT_first − CT_second| (7)
wherein CT_first and CT_second represent the CT values (ROI means) of the first substance and the second substance respectively, SD_first and SD_second their standard deviations, f(i,j) is the pixel value of the ith row and jth column in the ROI, and M, N are the length and width of the ROI;
the three-dimensional parameter calculation method of the image low-contrast resolution comprises the following steps:
From the second measurement module, select a plurality of adjacent layers of the detection images, select ROIs of the same size within the image regions of the first and second substances to form two three-dimensional ROIs, calculate the voxel average T_mean and standard deviation T_SD of each three-dimensional ROI, and substitute them into the three-dimensional expansion formulas (8), (9) and (10) to obtain the three-dimensional LCV:
T_mean = (1/(M×N×O)) × Σ_{i=1..M} Σ_{j=1..N} Σ_{k=1..O} f(i,j,k) (8)
T_SD = √[(1/(M×N×O)) × Σ_{i=1..M} Σ_{j=1..N} Σ_{k=1..O} (f(i,j,k) − T_mean)²] (9)
LCV' = 2 × (T_SD_first + T_SD_second) / |T_CT_first − T_CT_second| (10)
where f(i,j,k) is the voxel value of the ith row and jth column of the kth layer in the ROI, and O is the number of layers.
It should be noted that, in some embodiments, if the second measurement module is the CTP404 module, then, as appropriate to the application requirements of the module, the first substance is a material with a CT value of about −35, such as polystyrene, and the second substance is a material with a CT value of about −100, such as LDPE (low-density polyethylene).
It should also be noted that, for the same ROI size, the LCV results obtained on different slices differ, which shows that the low-contrast resolution of the image is strongly affected by slice selection. For any given slice, the larger the selected ROI, the larger the LCV value and the worse the low-contrast resolution of the image. As the ROI grows, however, the three-dimensional LCV changes by a significantly smaller amplitude than the single-layer value, so the three-dimensional LCV of the image is less affected by ROI size. It follows that evaluating with the three-dimensional low-contrast-resolution parameter makes the evaluation more objective and accurate.
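A sketch of the LCV calculation (the original formula images are not reproduced in this text, so the form LCV = 2·(SD₁ + SD₂)/|CT₁ − CT₂| used here is an assumption, as are the insert CT values and noise levels):

```python
import numpy as np

def lcv(roi_first, roi_second):
    """Low-contrast visibility from two equal-sized ROIs.

    Assumed form: LCV = 2 * (SD_first + SD_second) / |CT_first - CT_second|,
    where each CT value is the ROI mean. A smaller LCV means better
    low-contrast resolution. Works for 2-D (pixel) and 3-D (voxel) ROIs alike.
    """
    ct1, ct2 = float(np.mean(roi_first)), float(np.mean(roi_second))
    sd1, sd2 = float(np.std(roi_first)), float(np.std(roi_second))
    return 2.0 * (sd1 + sd2) / abs(ct1 - ct2)

rng = np.random.default_rng(1)
# Illustrative inserts: "polystyrene" around -35 and "LDPE" around -100.
poly_3d = rng.normal(-35.0, 4.0, size=(3, 5, 5))   # three adjacent layers
ldpe_3d = rng.normal(-100.0, 4.0, size=(3, 5, 5))
lcv_3d = lcv(poly_3d, ldpe_3d)                      # pooled 3-D value
lcv_2d = [lcv(poly_3d[k], ldpe_3d[k]) for k in range(3)]  # per-layer values
assert lcv_3d > 0
```

Because `lcv_3d` pools all voxels of the three layers, it varies less with the choice of slice than any single `lcv_2d[k]`, which is the advantage the text describes.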
Preferably, the two-dimensional parameter calculation method of the image contrast-to-noise ratio is as follows:
From the second measurement module, select one layer of the detection images, select an ROI within the substance image region and an ROI of the same size in the adjacent background, calculate the pixel mean and standard deviation σ of each, and substitute them into formulas (11), (12) and (13) to obtain the two-dimensional contrast-to-noise ratio CNR of the image:
CNR = 2 × |mean_background − mean_ROI| / (σ_background + σ_ROI) (11)
mean = (1/(M×N)) × Σ_{i=1..M} Σ_{j=1..N} f(i,j) (12)
σ = √[(1/(M×N)) × Σ_{i=1..M} Σ_{j=1..N} (f(i,j) − mean)²] (13)
wherein mean_ROI and mean_background represent the means of the ROI and background pixel values respectively; σ_ROI and σ_background represent the standard deviations of the ROI and background pixel values; f(i,j) is the pixel value of the ith row and jth column in the ROI, and M, N are the length and width of the ROI;
the three-dimensional parameter calculation method of the image contrast noise ratio comprises the following steps:
From the second measurement module, select a plurality of adjacent layers of the detection images, select ROIs within the third-substance image region to form a three-dimensional ROI, select ROIs of the same size in the adjacent background, calculate the voxel average T_mean and standard deviation T_σ of each, and substitute them into formulas (14), (15) and (16) to obtain the three-dimensional CNR' of the image:
CNR' = 2 × |T_mean_background − T_mean_ROI| / (T_σ_background + T_σ_ROI) (14)
T_mean = (1/(M×N×O)) × Σ_{i=1..M} Σ_{j=1..N} Σ_{k=1..O} f(i,j,k) (15)
T_σ = √[(1/(M×N×O)) × Σ_{i=1..M} Σ_{j=1..N} Σ_{k=1..O} (f(i,j,k) − T_mean)²] (16)
where f(i,j,k) is the voxel value of the ith row and jth column of the kth layer in the ROI, and O is the number of layers.
In some embodiments, when the second measurement module is the CTP404 module, the third substance is a material with a CT value of about +990, such as Teflon (polytetrafluoroethylene), as appropriate to the application requirements of the module.
For the image contrast-to-noise ratio, the result is strongly affected by slice selection, and different ROIs give different results. The larger the ROI, the more the image is affected by noise, the smaller the CNR value, the smaller the difference between the ROI and the background, and the worse the reflected image quality. However, as the ROI grows, the three-dimensional CNR changes by a smaller amplitude than the single-layer value, showing that the three-dimensional CNR of the image is less affected by ROI size and has a certain advantage over the two-dimensional parameter. It follows that evaluating with the three-dimensional contrast-to-noise-ratio parameter also makes the evaluation more objective and accurate.
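The CNR of formulas (11)/(14) can be sketched directly; the same function covers the 2-D (pixel) and 3-D (voxel) cases, and the insert CT value and noise level below are illustrative assumptions:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio, formulas (11)/(14):
    CNR = 2 * |mean_background - mean_ROI| / (sigma_background + sigma_ROI).
    Accepts 2-D ROIs (pixels) or 3-D ROIs (voxels) of equal size."""
    m_roi, m_bg = float(np.mean(roi)), float(np.mean(background))
    s_roi, s_bg = float(np.std(roi)), float(np.std(background))
    return 2.0 * abs(m_bg - m_roi) / (s_bg + s_roi)

rng = np.random.default_rng(2)
# Illustrative "Teflon" insert (around +990) against a water-like background,
# taken over three adjacent layers to form the three-dimensional ROIs.
roi_3d = rng.normal(990.0, 10.0, size=(3, 5, 5))
bg_3d = rng.normal(0.0, 10.0, size=(3, 5, 5))
assert cnr(roi_3d, bg_3d) > 0
# Shrinking the contrast lowers the CNR, as the text describes.
assert cnr(roi_3d * 0.01, bg_3d) < cnr(roi_3d, bg_3d)
```
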
Preferably, the two-dimensional parameter calculation method of the image geometric accuracy is as follows:
From the second measurement module, select one layer of the detection images and perform circle detection on it to obtain the centre coordinates of the four circles in the horizontal and vertical directions; a program calculates the centre-to-centre distances of the two circles in the horizontal direction and of the two circles in the vertical direction, giving the geometric-accuracy values of the image in the horizontal and vertical directions respectively; finally, the geometric accuracy of the target image in the horizontal and vertical directions is judged by comparison with the actual physical distance;
the three-dimensional parameter calculation method of the geometric accuracy of the image comprises the following steps:
From the second measurement module, select a plurality of adjacent layers of the detection images and perform circle detection on each of them to obtain the centre coordinates of the four circles in the horizontal and vertical directions; a program calculates the horizontal and vertical centre-to-centre distances for each layer, and the three-dimensional extension of the horizontal and vertical geometric accuracy is the average of the corresponding centre-to-centre distances of the module inserts over the selected layers, i.e.
D = (l_1 + l_2 + … + l_n) / n (17)
wherein n represents the number of layers and l_n represents the horizontal or vertical distance of the nth slice; the geometric accuracy of the target image in the horizontal and vertical directions is then judged by comparison with the actual physical distance.
It should be noted that, regarding the geometric accuracy of the image, practical operation shows that the horizontal distances of the two-dimensional and three-dimensional parameters usually differ little from the actual distance, meaning that the horizontal distances of the three selected layers are consistent and all meet the requirements. The vertical distances of the two-dimensional and three-dimensional parameters usually differ, meaning that the two-dimensional vertical distances of the three selected layers are not identical and the geometric accuracy of the image in the vertical direction is slightly weaker than in the horizontal direction; a single two-dimensional vertical distance cannot display this information, so the three-dimensional geometric accuracy is more advantageous. It follows that evaluating with the three-dimensional geometric-accuracy parameter also makes the evaluation more objective and accurate.
In some embodiments, the circle detection may be performed on the image using a Hough transform.
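A sketch of the three-dimensional extension of formula (17). The circle-centre coordinates would come from a Hough-transform circle detector (e.g. OpenCV's `cv2.HoughCircles`); here they are supplied directly, with illustrative values in millimetres, so the example stays self-contained:

```python
import math

def geometric_accuracy_3d(per_layer_centres, nominal_mm):
    """Formula (17): D = (l_1 + ... + l_n) / n, compared against the nominal
    physical distance.

    per_layer_centres: list of (centre_a, centre_b) pairs, one pair per layer,
    e.g. the two horizontally opposed circles detected in each slice.
    Returns (mean distance D, absolute deviation from nominal_mm)."""
    distances = [math.dist(a, b) for a, b in per_layer_centres]
    d = sum(distances) / len(distances)
    return d, abs(d - nominal_mm)

# Illustrative detected centres for three adjacent layers; 50 mm nominal spacing.
layers = [((10.0, 30.0), (60.1, 30.0)),
          ((10.0, 30.0), (59.9, 30.0)),
          ((10.0, 30.0), (60.0, 30.0))]
d, deviation = geometric_accuracy_3d(layers, nominal_mm=50.0)
assert deviation < 0.1
```

Averaging the per-layer distances is what smooths out the slice-to-slice variation that a single two-dimensional measurement cannot reveal.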
Preferably, the two-dimensional parameter calculation method of the image mean square error is as follows:
Select one layer of the detection images and the processed image of the corresponding layer, and obtain the two-dimensional mean-square-error parameter through the following formula:
MSE = (1/(M×N)) × Σ_{i=1..M} Σ_{j=1..N} [f(i,j) − f(i,j)']²
where M, N are the length and width of the image, f(i,j) represents the pixel value of the original image, and f(i,j)' represents the pixel value of the processed image;
the three-dimensional parameter calculation method of the image mean square error comprises the following steps:
Select a plurality of adjacent layers of the detection images and the processed images of the corresponding layers, and obtain the three-dimensional mean-square-error parameter through the following formula:
T_MSE = (1/(M×N×O)) × Σ_{i=1..M} Σ_{j=1..N} Σ_{k=1..O} [f(i,j,k) − f(i,j,k)']²
where M, N are the length and width of the image, O is the number of layers, f(i,j,k) represents the voxel value of the original image, and f(i,j,k)' represents the voxel value of the processed image.
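The 2-D and 3-D MSE calculations above reduce to a mean of squared voxel differences, which also makes visible the relationship noted later in the text — the 3-D MSE equals the average of the per-layer 2-D MSEs:

```python
import numpy as np

def mse_2d(f, f_proc):
    """Two-dimensional MSE between an original slice and its processed version."""
    return float(np.mean((f - f_proc) ** 2))

def mse_3d(vol, vol_proc):
    """Three-dimensional MSE over O adjacent layers, computed voxel-wise."""
    return float(np.mean((vol - vol_proc) ** 2))

vol = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
vol_proc = vol + 1.0                 # every voxel shifted by 1 -> MSE = 1
assert mse_3d(vol, vol_proc) == 1.0
# The 3-D MSE equals the mean of the per-layer 2-D MSEs:
per_layer = [mse_2d(vol[k], vol_proc[k]) for k in range(2)]
assert mse_3d(vol, vol_proc) == sum(per_layer) / 2
```
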
Preferably, the two-dimensional parameter calculation method of the image peak signal-to-noise ratio is as follows:
Select one layer of the detection images and the processed image of the corresponding layer, and obtain the two-dimensional peak-signal-to-noise-ratio parameter through the following formula:
PSNR = 10 × log10(L² / MSE)
wherein L is the maximum signal amount, replaced here by the maximum pixel value of the image;
the three-dimensional parameter calculation method of the image peak signal-to-noise ratio comprises the following steps:
Select a plurality of adjacent layers of the original images and the processed images of the corresponding layers, and obtain the three-dimensional peak-signal-to-noise-ratio parameter through the following formula:
T_PSNR = 10 × log10(L² / T_MSE)
where T_MSE is the three-dimensional mean square error and O is the number of layers.
When evaluating image distortion with the peak signal-to-noise ratio, the three-dimensional MSE equals the average of the individual single-layer MSEs of the three-layer image, and the three-dimensional PSNR value is higher than the corresponding two-dimensional parameter, indicating that the three-dimensional PSNR is more sensitive to the degree of image distortion and more advantageous for evaluating it. It follows that evaluating with the three-dimensional peak-signal-to-noise-ratio parameter also makes the evaluation more objective and accurate.
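The PSNR formula above can be sketched with a single function covering both the 2-D and 3-D cases; taking L as the maximum value of the original image follows the text:

```python
import math
import numpy as np

def psnr(f, f_proc, peak=None):
    """PSNR = 10 * log10(L^2 / MSE); L defaults to the maximum pixel/voxel
    value of the original image, as in the text. Works in 2-D or 3-D."""
    mse = float(np.mean((f - f_proc) ** 2))
    peak = float(np.max(f)) if peak is None else peak
    return 10.0 * math.log10(peak ** 2 / mse)

vol = np.full((3, 4, 4), 255.0)
vol_proc = vol - 1.0                 # MSE = 1, so PSNR = 10 * log10(255^2)
assert abs(psnr(vol, vol_proc) - 20.0 * math.log10(255.0)) < 1e-9
```
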
Preferably, the measurement phantom is the Catphan 504.
More preferably, the first measurement module is a CTP486 module.
More preferably, the second measurement module is CTP404 module.
Further, the CT value of the first substance is about −35; that of the second substance is about −100; and that of the third substance is about +990.
Preferably, in the three-dimensional parameter expansion calculation of each image quality parameter, the ROI size selected for image uniformity, low-contrast resolution, and contrast-to-noise ratio is 5 × 5 × 3 mm.
It should be noted that this ROI size has the following beneficial effects: the small circles of the insert materials in the module have a radius of 6 pixels, so the maximum inscribed square is 8 × 8 pixels; the larger the ROI, the more easily the calculated parameters are affected by edge pixels, while a smaller ROI increases random error. An ROI of 5 × 5 pixels is therefore appropriate, and the calculation results of the corresponding parameters are optimal.
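The 8 × 8 figure follows from the geometry of the insert: a square inscribed in a circle of radius r has side r·√2, so for r = 6 pixels the largest whole-pixel square is ⌊6√2⌋ = 8 pixels:

```python
import math

radius_px = 6                                          # small-circle radius of the insert
inscribed_side = math.floor(radius_px * math.sqrt(2))  # side of the inscribed square
assert inscribed_side == 8
# A 5x5 ROI therefore sits comfortably inside the insert, away from edge pixels.
assert 5 < inscribed_side
```
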
Compared with the prior art, the invention has the beneficial effects that:
1. in the method of the invention for quantitatively evaluating an image by utilizing three-dimensional parameters, during the calculation of the two-dimensional parameters, one layer of the selected target image is used, a plurality of ROIs of that image are acquired, and the two-dimensional parameters are calculated from those ROIs; this makes the calculation of the two-dimensional parameters more objective and accurate;
2. in the method of the invention for quantitatively evaluating an image by utilizing three-dimensional parameters, during the calculation of the three-dimensional parameters, the selected layers are expanded from a single layer to multiple layers, pixels are expanded to voxels, and the optimal selection range of the ROI is discussed, so that errors of manual layer selection and ROI selection are reduced and quantitative evaluation of image quality by three-dimensional parameters is realized. Because the three-dimensional image quality parameters are determined comprehensively over multiple layers, the randomness of manual layer selection and the subjectivity of ROI selection are reduced to a certain extent, the image quality is reflected comprehensively, the requirements of image quality evaluation are met, and the accuracy of the evaluation is improved;
3. in the method of the invention for quantitatively evaluating an image by utilizing three-dimensional parameters, the target image is a cone-beam CT (CBCT) image; the quality of the target image is evaluated by quantitative analysis combining image uniformity, low-contrast resolution, contrast-to-noise ratio, geometric accuracy, mean square error, and peak signal-to-noise ratio, so that the evaluation is more comprehensive and accurate.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the invention may be understood more clearly and implemented according to the contents of the description, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a flow chart of a basic embodiment of the method of the invention for quantitatively evaluating an image by utilizing three-dimensional parameters.
Detailed Description
In order to further explain the technical means adopted by the invention to achieve the intended purpose and their effects, the specific implementation, structure, characteristics and effects of the invention are described in detail below with reference to the accompanying drawings and preferred embodiments:
example 1
The invention provides a method for quantitatively evaluating an image by utilizing three-dimensional parameters, as shown in fig. 1, comprising the following steps:
s1: parameter scanning: scanning with a measurement phantom and obtaining a plurality of detection images of the target image;
s2: two-dimensional parameter calculation: respectively calculating two-dimensional parameters corresponding to the detection images;
s3: three-dimensional parameter expansion calculation: according to the calculated two-dimensional parameters of each image, selecting a plurality of layers of adjacent images and ROIs from the detection images, obtaining the average value of the voxels of each three-dimensional ROI, and calculating the three-dimensional parameters of each image; or, according to the calculated two-dimensional parameters of each image, selecting a plurality of layers of adjacent images from the detection images, then selecting the preprocessed images of those layers, and calculating the three-dimensional parameters of each image;
and evaluating the target image by combining the three-dimensional parameters.
The inventors further extend the two-dimensional parameters of the image in this technical scheme into a three-dimensional parameter expansion calculation and evaluate the image quality with the calculated three-dimensional parameters. Calculating with three-dimensional parameters in this technical scheme has the following technical advantages:
(1) On the one hand, a quantitative analysis method makes the evaluated image quality more objective and accurate. In the prior art, image quality is evaluated either qualitatively or quantitatively. Qualitative evaluation judges image quality mainly by the naked eye; it is strongly influenced by human factors, depends on personal experience, and is also affected by the window width and window level of the image. This technical scheme adopts a quantitative evaluation method: quantitative two-dimensional and three-dimensional image quality evaluation parameters are calculated by a program, so the quality of the image is represented more accurately and intuitively, which is more advantageous than qualitative evaluation;
(2) On the other hand, evaluation with three-dimensional parameters expands the selected layers from a single layer to multiple layers and expands pixels to voxels during the calculation, which effectively mitigates the large error influence in two-dimensional parameter evaluation. Because the three-dimensional image quality parameters are determined comprehensively over multiple layers, the randomness of manual layer selection and the subjectivity of ROI selection are reduced to a certain extent, the image quality is reflected more comprehensively, the requirements of image quality evaluation are met, and the accuracy and objectivity of the evaluation are improved. Therefore, evaluating the image with three-dimensional parameters, determined comprehensively over multiple layers, effectively reduces the error problem of two-dimensional parameter evaluation.
In combination with the above embodiment, in another preferred embodiment, in the process of calculating the two-dimensional parameters, a program selects one layer of the target image and a plurality of ROIs of that layer and substitutes them into a formula for calculation; and/or the program selects one layer of the target image, then selects the preprocessed image of that layer, and substitutes it into a formula for calculation.
The foregoing summarizes one embodiment of calculating the two-dimensional parameters; the method is explained in more detail in the examples below.
Example 2
This example is a preferred embodiment based on example 1 above, and differs from example 1 above in that:
the types of the two-dimensional parameters or the three-dimensional parameters comprise one or more of image uniformity, low contrast resolution, contrast-to-noise ratio, geometric accuracy, mean square error and peak signal-to-noise ratio; the measurement phantom comprises a first measurement module and a second measurement module; the first measurement module is used for measuring image uniformity; the second measurement module is used for measuring one or more of low contrast resolution, contrast-to-noise ratio, geometric accuracy, mean square error and peak signal-to-noise ratio of the image.
As a more preferred embodiment, the two-dimensional parameters or the types of three-dimensional parameters include image uniformity, low contrast resolution, contrast-to-noise ratio, geometric accuracy, mean square error, and peak signal-to-noise ratio.
The technical scheme can be applied to the evaluation of CBCT image quality. In this process, parameters such as Uniformity, Low Contrast Visibility (LCV), Contrast-to-Noise Ratio (CNR), geometric accuracy, Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are often selected for analysis. More preferably, these six parameters are expanded into three dimensions and the two- and three-dimensional values are quantitatively analyzed in combination, so that the CBCT image quality evaluation is more comprehensive and accurate.
In a more specific embodiment, the measurement phantom may be a Catphan 504, which is composed of 4 modules for detecting different image quality indicators. In the technical scheme, two modules are used: the CTP486 module, which is composed of a uniform solid substance with a CT value close to that of water and serves for measuring image uniformity; and the CTP404 module, which is used for measuring geometric accuracy and CT value accuracy. In this scheme, uniformity is measured by the CTP486 module, i.e., it serves as the first measurement module; the remaining five indices are measured by the CTP404 module, i.e., it serves as the second measurement module.
Therefore, in a specific embodiment, the obtained CBCT images are imported into MATLAB R2015b, the above six image quality parameters are selected, the calculation of the two-dimensional parameters and the three-dimensional expansion and calculation of the image quality parameters are realized through self-written MATLAB programs, and the results are analyzed.
Example 3
This example is a preferred implementation based on examples 1 and 2 above, and is merely a specific implementation, and is not meant to be limited to the implementation of this example.
The two-dimensional parameter calculation method of the image uniformity comprises the following steps:
selecting five ROIs in one layer of image in the CTP486 module (one at the center of the cross section and four in the orthogonal directions 4.5 cm away from the center), respectively calculating the pixel average value (mean) of each ROI, and obtaining the uniformity of the two-dimensional image according to formulas (1) and (2):

mean = (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} f(i,j)   (1)

Uniformity = [mean(high) − mean(low)] / [mean(high) + mean(low)]   (2)

wherein f(i,j) is the pixel value of the i-th row and j-th column in the ROI, and M, N are the length and width of the ROI; mean(high) represents the maximum of the five ROI averages; mean(low) represents the minimum of the five ROI averages.
For example, three sets of ROIs with m=2, n=2; m=4, n=4; and m=6, n=6 may be selected for analysis, the ROIs being 3*3, 5*5 and 7*7 pixels respectively.
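As an illustration, this two-dimensional uniformity computation can be sketched in Python/NumPy (the patent's own programs were written in MATLAB; the synthetic slice, the ROI positions, and the integral-uniformity form (mean(high) − mean(low)) / (mean(high) + mean(low)) are assumptions here, not taken from the patent):

```python
import numpy as np

def roi_mean(img, center, half):
    """Mean pixel value of a (2*half+1) x (2*half+1) ROI around center=(row, col)."""
    r, c = center
    return float(img[r - half:r + half + 1, c - half:c + half + 1].mean())

def uniformity_2d(img, center, offset_px, half):
    """Integral uniformity from five ROIs: one at the slice centre and four
    placed offset_px pixels away along the orthogonal directions."""
    r, c = center
    positions = [(r, c), (r - offset_px, c), (r + offset_px, c),
                 (r, c - offset_px), (r, c + offset_px)]
    means = [roi_mean(img, p, half) for p in positions]
    hi, lo = max(means), min(means)
    return (hi - lo) / (hi + lo)

# Synthetic "uniform water" slice with mild noise, standing in for a CTP486 image.
rng = np.random.default_rng(0)
img = 1000.0 + rng.normal(0.0, 2.0, size=(101, 101))
u = uniformity_2d(img, center=(50, 50), offset_px=30, half=2)  # 5*5-pixel ROIs
print(round(u, 4))
```

A larger noise level in the slice drives the five ROI means apart and raises the uniformity value, matching the analysis below that a larger value means worse uniformity.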
The three-dimensional parameter calculation method of the image uniformity comprises the following steps:
selecting, in the three-dimensional direction, the same five regions (one at the center and four in the orthogonal directions 4.5 cm from the center) on a plurality of adjacent images to form five three-dimensional ROIs, expanding the parameters in the two-dimensional formula into three dimensions, respectively calculating the average voxel value (T_mean) of each ROI, and substituting into formulas (3) and (4) to calculate the three-dimensional uniformity of the image:

T_mean = (1/(M×N×O)) Σ_{k=1}^{O} Σ_{i=1}^{M} Σ_{j=1}^{N} f(i,j,k)   (3)

Uniformity' = [T_mean(high) − T_mean(low)] / [T_mean(high) + T_mean(low)]   (4)

wherein f(i,j,k) is the voxel value of the i-th row and j-th column of the k-th layer in the ROI, and O is the number of layers; T_mean(high) represents the maximum of the five three-dimensional ROI averages; T_mean(low) represents the minimum of the five three-dimensional ROI averages.
For example, three sets of ROIs with m=2, n=2, O=2; m=4, n=4, O=2; and m=6, n=6, O=2 may be selected, the ROIs being 3*3*3, 5*5*3 and 7*7*3 voxels respectively.
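The three-dimensional extension replaces each square ROI by a small voxel block spanning several adjacent slices. A minimal Python/NumPy sketch under the same assumptions as before (synthetic data, integral-uniformity form):

```python
import numpy as np

def t_mean(vol, center, half, layers):
    """Mean voxel value of a (2*half+1) x (2*half+1) x layers 3-D ROI.
    vol has shape (slices, rows, cols); center=(k, r, c) is the ROI centre."""
    k, r, c = center
    k0 = k - layers // 2
    return float(vol[k0:k0 + layers,
                     r - half:r + half + 1,
                     c - half:c + half + 1].mean())

def uniformity_3d(vol, center, offset_px, half, layers=3):
    """Three-dimensional integral uniformity from five stacked ROIs."""
    k, r, c = center
    positions = [(k, r, c), (k, r - offset_px, c), (k, r + offset_px, c),
                 (k, r, c - offset_px), (k, r, c + offset_px)]
    means = [t_mean(vol, p, half, layers) for p in positions]
    hi, lo = max(means), min(means)
    return (hi - lo) / (hi + lo)

# Three synthetic adjacent "uniform water" slices.
rng = np.random.default_rng(1)
vol = 1000.0 + rng.normal(0.0, 2.0, size=(3, 101, 101))
u3 = uniformity_3d(vol, center=(1, 50, 50), offset_px=30, half=2)  # 5*5*3 ROIs
print(round(u3, 4))
```

Because each three-dimensional ROI averages three times as many samples, its mean is steadier, which is why the spread of the 3-D uniformity with ROI size tends to sit between the single-layer spreads.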
Compared with the two-dimensional parameters, the variation of the three-dimensional image uniformity with ROI size lies between those of the single-layer parameters, so it can more representatively characterize image uniformity.
The following calculation results can be obtained:
from the above calculations we can have the following analysis:
Since the uniformity results obtained differ between slices when the same ROI size is selected, the uniformity parameter of the image is greatly affected by slice selection. In addition, for each layer of image, different ROI sizes give different uniformity results, and the larger the selected ROI, the larger the uniformity value, meaning more noise and worse image uniformity. Within a single layer, the amplitude of variation of the uniformity parameter with the ROI differs from layer to layer: in the first layer the uniformity of the 3*3-pixel ROI is the minimum and that of the 7*7-pixel ROI is the maximum, 0.0716, the two differing by 0.0022; in the second layer the maximum uniformity is 0.0621 and the minimum 0.0552, a difference of 0.0069; in the third layer the maximum is 0.0647 and the minimum 0.0628, a difference of 0.0019. For the three-dimensional uniformity, the maximum is 0.0661 and the minimum 0.0625, a difference of 0.0036. Thus, compared with the two-dimensional parameters, the variation of the three-dimensional uniformity with ROI size lies between those of the single-layer parameters, and it can more representatively characterize image uniformity. Among the ROIs above, a smaller ROI gives a larger error, while a larger ROI introduces more noise and hence a larger calculation error; the present scheme therefore further recommends selecting an ROI of 5*5*3 size, i.e., the three-dimensional uniformity of the image is 0.0650.
Example 4
This example is a preferred implementation based on examples 1 and 2 above, and is merely a specific implementation, and is not meant to be limited to the implementation of this example.
The two-dimensional parameter calculation method of the image low-contrast resolution comprises the following steps:
selecting a layer of image in the CTP404 module, selecting ROIs of the same size within the image range of a first substance with a CT value of about -35, such as polystyrene, and of a second substance with a CT value of about -100, such as LDPE (low-density polyethylene), respectively calculating the pixel mean (mean) and standard deviation (SD) of each ROI, and substituting into formulas (5), (6) and (7) to calculate the two-dimensional LCV value of the image:

mean = (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} f(i,j)   (5)

SD = sqrt( (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [f(i,j) − mean]² )   (6)

LCV = 2 × (SD_first + SD_second) / |CT_first − CT_second|   (7)

wherein CT_first and CT_second represent the CT values of the first and second substances respectively, and f(i,j) is the pixel value of the i-th row and j-th column in the ROI.
For example, three sets of ROIs with m=2, n=2; m=4, n=4; and m=6, n=6 are selected for analysis, the ROIs being 3*3, 5*5 and 7*7 pixels respectively.
The three-dimensional parameter calculation method of the image low-contrast resolution comprises the following steps:
selecting a plurality of adjacent images in the CTP404 module, respectively selecting ROIs of the same size within the range of the substance with a CT value of about -35, such as polystyrene, and of the substance with a CT value of about -100, such as LDPE (low-density polyethylene), to form two groups of three-dimensional ROIs, respectively calculating the voxel mean (T_mean) and standard deviation (T_SD) of each three-dimensional ROI, and substituting into the three-dimensional expansion formulas (8), (9) and (10) to obtain the three-dimensional LCV':

T_mean = (1/(M×N×O)) Σ_{k=1}^{O} Σ_{i=1}^{M} Σ_{j=1}^{N} f(i,j,k)   (8)

T_SD = sqrt( (1/(M×N×O)) Σ_{k=1}^{O} Σ_{i=1}^{M} Σ_{j=1}^{N} [f(i,j,k) − T_mean]² )   (9)

LCV' = 2 × (T_SD_first + T_SD_second) / |CT_first − CT_second|   (10)

wherein f(i,j,k) is the voxel value of the k-th layer, i-th row and j-th column in the ROI. For example, three sets of ROIs with m=2, n=2, O=2; m=4, n=4, O=2; and m=6, n=6, O=2 may be selected, the ROIs being 3*3*3, 5*5*3 and 7*7*3 voxels respectively.
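Since the two- and three-dimensional LCV differ only in whether each ROI is a 2-D pixel patch or a 3-D voxel block, one routine can serve both. A Python/NumPy sketch, assuming an LCV of the form 2*(SD_first + SD_second)/|CT_first - CT_second| (an assumption; the patent's formula images are not reproduced here) and invented noise levels:

```python
import numpy as np

def lcv(roi_first, roi_second):
    """Low-contrast visibility from two same-sized ROIs; works unchanged for
    2-D pixel patches or 3-D voxel blocks. Assumed form:
    LCV = 2 * (SD_first + SD_second) / |mean_first - mean_second|."""
    m1, m2 = float(roi_first.mean()), float(roi_second.mean())
    s1, s2 = float(roi_first.std()), float(roi_second.std())
    return 2.0 * (s1 + s2) / abs(m1 - m2)

# Simulated 5*5*3-voxel ROIs: polystyrene (CT ~ -35) and LDPE (CT ~ -100).
rng = np.random.default_rng(2)
poly = -35.0 + rng.normal(0.0, 16.0, size=(3, 5, 5))
ldpe = -100.0 + rng.normal(0.0, 16.0, size=(3, 5, 5))
val = lcv(poly, ldpe)
print(round(val, 4))
```

With this form, noisier ROIs raise the LCV value, consistent with the analysis below that a larger LCV means worse low-contrast resolution.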
The following calculation results can be obtained:
from the above calculations we can have the following analysis:
since the LCV results are different for different slices with the same ROI size selected, this indicates that the low contrast resolution of the image is greatly affected by the slice selection. For each slice, the larger the selected ROI, the larger the LCV value, and the worse the low contrast resolution of the image. As the ROI increases, the resulting amplitude of change of the three-dimensional LCV is significantly smaller than that of a monolayer, and the three-dimensional LCV of the image is less affected by the size of the ROI. In the above-mentioned ROI, the smaller the ROI is, the larger the LCV value error is, the larger the ROI is, and the edge pixels of the polystyrene, LDPE module cannot truly represent the CT values of both, and the result is also affected. It is further recommended to choose an ROI of 5 x 3 size, i.e. an LCV size of 2.0015 for the three dimensions of the image.
Example 5
This example is a preferred implementation based on examples 1 and 2 above, and is merely a specific implementation, and is not meant to be limited to the implementation of this example.
The two-dimensional parameter calculation method of the image contrast noise ratio comprises the following steps:
selecting a layer of image in the CTP404 module, selecting an ROI of suitable size within the image range of a substance with a CT value of 990, such as Teflon (polytetrafluoroethylene), selecting an ROI of the same size in the adjacent background, respectively calculating the pixel mean (mean) and standard deviation (σ) of each, and substituting into formulas (11), (12) and (13) to obtain the two-dimensional CNR of the image:

mean = (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} f(i,j)   (11)

σ = sqrt( (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [f(i,j) − mean]² )   (12)

CNR = |mean_ROI − mean_Background| / sqrt(σ_ROI² + σ_Background²)   (13)

wherein mean_ROI and mean_Background represent the means of the ROI and background pixel values respectively; σ_ROI and σ_Background represent the standard deviations of the ROI and background pixel values; f(i,j) is the pixel value of the i-th row and j-th column in the ROI. The largest square inscribed in the Teflon image is 8*8 pixels, so m=6, n=6 (ROI size 7*7 pixels) can be selected in this scheme; alternatively, two further sets with m=4, n=4 and m=2, n=2, of size 5*5 and 3*3 pixels, can be compared and analyzed.
The three-dimensional parameter calculation method of the image contrast noise ratio comprises the following steps:
selecting a plurality of adjacent images in the CTP404 module, selecting ROIs of suitable size within the Teflon (polytetrafluoroethylene) image range to form a three-dimensional ROI, selecting ROIs of the same size in the adjacent background, respectively calculating the voxel mean (T_mean) and standard deviation (T_σ) of each, and substituting into formulas (14), (15) and (16) to obtain the three-dimensional CNR' of the image:

T_mean = (1/(M×N×O)) Σ_{k=1}^{O} Σ_{i=1}^{M} Σ_{j=1}^{N} f(i,j,k)   (14)

T_σ = sqrt( (1/(M×N×O)) Σ_{k=1}^{O} Σ_{i=1}^{M} Σ_{j=1}^{N} [f(i,j,k) − T_mean]² )   (15)

CNR' = |T_mean_ROI − T_mean_Background| / sqrt(T_σ_ROI² + T_σ_Background²)   (16)

wherein f(i,j,k) is the voxel value of the k-th layer, i-th row and j-th column in the ROI. For example, three sets of ROIs with m=6, n=6, O=2; m=4, n=4, O=2; and m=2, n=2, O=2 may be selected, the three-dimensional ROI sizes being 7*7*3, 5*5*3 and 3*3*3 voxels respectively.
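A corresponding Python/NumPy sketch for the CNR, assuming a denominator of sqrt(σ_ROI² + σ_Background²) (one common convention; the patent's formula images are not reproduced, so this form is an assumption) and simulated Teflon/background ROIs:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio between an insert ROI and a background ROI;
    assumed form: |mean_roi - mean_bg| / sqrt(sd_roi**2 + sd_bg**2).
    Works for 2-D pixel patches or 3-D voxel blocks alike."""
    contrast = abs(float(roi.mean()) - float(background.mean()))
    noise = float(np.sqrt(roi.std() ** 2 + background.std() ** 2))
    return contrast / noise

# Simulated 5*5*3-voxel ROIs: Teflon insert (CT ~ 990) and nearby background;
# the background CT level and noise level here are invented for illustration.
rng = np.random.default_rng(3)
teflon = 990.0 + rng.normal(0.0, 12.0, size=(3, 5, 5))
backgd = 100.0 + rng.normal(0.0, 12.0, size=(3, 5, 5))
val = cnr(teflon, backgd)
print(round(val, 2))
```

More noise in either ROI enlarges the denominator and lowers the CNR, matching the analysis below.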
The following calculation results can be obtained:
from the above calculations we can have the following analysis:
since the image contrast to noise ratio results are greatly affected by the layer selection and the different ROI results are different. The larger the ROI, the larger the image is affected by noise, the smaller the CNR value of the image, the smaller the difference between the ROI and the background, and the worse the reflected image quality. However, as the ROI increases, the three-dimensional CNR value change amplitude is smaller than that of a single layer, and the reflected image three-dimensional CNR is less affected by the ROI size and has certain advantages over the two-dimensional parameters. In the selection of the ROI, too small an ROI error is larger, the CNR value of 3*3 pixels in the table above shows that the second layer is the largest, but the ROIs of 5*5 and 7*7 pixels both show that the third layer is the largest, because the ROI of 3*3 pixels is smaller, the larger ROI can always select the noise point under the condition of low noise frequency, but the smaller ROI does not always contain the noise pixel, and when the noise pixel is not taken by the smaller ROI of a certain layer, the higher CNR value is obtained, but the contrast noise ratio of the real image cannot be reflected. Too large an ROI is easily affected by the module edge pixels, and also results in inaccurate contrast-to-noise ratio, so the scheme further selects an ROI with a size of 5×5×3, i.e. the CNR size of the image three-dimensional is 46.6659.
Example 6
This example is a preferred implementation based on examples 1 and 2 above, and is merely a specific implementation, and is not meant to be limited to the implementation of this example.
The two-dimensional parameter calculation method of the geometric accuracy of the image comprises the following steps:
selecting an image in the second module, carrying out circle detection on the whole image by Hough transformation to obtain the circle center coordinates of the four circles in the horizontal and vertical directions of the target image, calculating by program the center distance of the two horizontal circles and of the two vertical circles, i.e., obtaining the values of the geometric accuracy of the image in the horizontal and vertical directions respectively, and finally judging the geometric accuracy of the target image in the horizontal and vertical directions by comparison with the actual physical distance;
the three-dimensional parameter calculation method of the geometric accuracy of the image comprises the following steps:
selecting a plurality of adjacent images in the second module, respectively carrying out circle detection on the plurality of images by Hough transformation to obtain the circle center coordinates of the four circles in the horizontal and vertical directions of the target image, calculating by program the center distance of the two horizontal circles and of the two vertical circles on each image, and expanding the geometric accuracy in the horizontal and vertical directions into three dimensions as the average of the corresponding center distances over the selected layers, namely
D=(l 1 +l 2 +…+l n )/n (17)
wherein n represents the number of layers and l_n represents the horizontal or vertical center distance of the n-th slice; the geometric accuracy of the target image in the horizontal and vertical directions is likewise judged by comparison with the actual physical distance.
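Once the circle centers have been detected on each slice (e.g., with a Hough transform), formula (17) is a plain average of per-slice center distances. A Python sketch of that averaging step; the center coordinates and the 0.5 mm pixel spacing below are hypothetical:

```python
import math

def mean_center_distance(centers_per_slice, px_spacing_mm):
    """Formula (17): D = (l_1 + ... + l_n) / n, where l_n is the distance
    between the two detected circle centres on the n-th slice (in mm)."""
    dists = [math.hypot(r1 - r2, c1 - c2) * px_spacing_mm
             for (r1, c1), (r2, c2) in centers_per_slice]
    return sum(dists) / len(dists)

# Hypothetical centres of the two horizontal circles on three adjacent slices.
slices = [((200.0, 100.0), (200.0, 333.1)),
          ((200.0, 100.2), (200.0, 333.3)),
          ((200.1, 100.0), (200.1, 333.0))]
d = mean_center_distance(slices, px_spacing_mm=0.5)
print(round(d, 2))  # judged against the actual physical distance of the phantom
```

Averaging over layers is what lets the three-dimensional value expose a layer-to-layer spread that any single two-dimensional measurement hides.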
The following calculation results can be obtained:
from the above results we can have the following analysis:
the horizontal distances of the two-dimensional parameter and the three-dimensional parameter are 116.55mm, have little difference with the actual distance, represent that the horizontal distances of the three selected layers are consistent, and meet the requirements. The vertical distances of the two-dimensional parameter and the three-dimensional parameter are different and are 116.55mm and 116.21mm respectively, the two-dimensional vertical distances of the three selected layers are different, the geometric accuracy of the image in the vertical direction is slightly weaker than that in the horizontal direction, but the two-dimensional vertical distance of the image cannot display the information, and the three-dimensional geometric accuracy is more advantageous.
Example 7
This example is a preferred implementation based on examples 1 and 2 above, and is merely a specific implementation, and is not meant to be limited to the implementation of this example.
The two-dimensional parameter calculation method of the image mean square error comprises the following steps:
selecting a layer of original image and the processed image of the corresponding layer, and obtaining the two-dimensional parameter of the mean square error through the following formula:

MSE = (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [f(i,j) − f(i,j)']²

wherein M, N are the length and width of the image, f(i,j) represents the pixel value of the original image, and f(i,j)' represents the pixel value of the processed image;
the three-dimensional parameter calculation method of the image mean square error comprises the following steps:
selecting a plurality of layers of adjacent original images and the processed images of the corresponding adjacent layers, and obtaining the three-dimensional parameter of the mean square error through the following formula:

MSE' = (1/(M×N×O)) Σ_{k=1}^{O} Σ_{i=1}^{M} Σ_{j=1}^{N} [f(i,j,k) − f(i,j,k)']²

wherein M, N are the length and width of the image, O is the number of layers, f(i,j,k) represents the voxel value of the original image, and f(i,j,k)' represents the voxel value of the processed image.
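The MSE definitions are standard, and the three-dimensional MSE over O adjacent layers is simply the mean of the O single-layer MSEs. A small Python/NumPy check with toy data:

```python
import numpy as np

def mse_2d(orig, proc):
    """Two-dimensional MSE: (1/(M*N)) * sum of (f(i,j) - f(i,j)')**2."""
    return float(np.mean((np.asarray(orig, float) - np.asarray(proc, float)) ** 2))

def mse_3d(orig_vol, proc_vol):
    """Three-dimensional MSE: (1/(M*N*O)) * sum over all voxels, which equals
    the average of the O single-layer MSEs."""
    return float(np.mean((np.asarray(orig_vol, float) - np.asarray(proc_vol, float)) ** 2))

orig = np.arange(12, dtype=float).reshape(3, 2, 2)   # 3 adjacent 2x2 layers
proc = orig + 2.0                                    # every voxel shifted by 2
layer_mses = [mse_2d(orig[k], proc[k]) for k in range(3)]
vol_mse = mse_3d(orig, proc)
print(layer_mses, vol_mse)  # each layer 4.0; volume 4.0 (= their average)
```

This identity is exactly the property noted in the analysis below: the three-dimensional MSE equals the average of the single-layer MSEs.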
The following calculation results can be obtained:
from the above results we can have the following analysis:
the three-dimensional MSE is equal to the average value of the single-layer MSE of each three-layer image, can reflect more multi-layer information, is more representative, and has certain advantages compared with the two-dimensional MSE.
Example 8
This example is a preferred implementation based on examples 1 and 2 above, and is merely a specific implementation, and is not meant to be limited to the implementation of this example.
The two-dimensional parameter calculation method of the image peak signal-to-noise ratio comprises the following steps:
selecting an original image and the processed image of the corresponding layer, and obtaining the two-dimensional parameter of the peak signal-to-noise ratio through the following formula:

PSNR = 10 × log10(L² / MSE)

wherein L is the maximum signal value, replaced here by the maximum pixel value of the image;
the three-dimensional parameter calculation method of the image peak signal-to-noise ratio comprises the following steps:
selecting a plurality of layers of adjacent original images and the processed images of the corresponding adjacent layers, and obtaining the three-dimensional parameter of the peak signal-to-noise ratio through the following formula:

PSNR' = 10 × log10(L² / MSE')

wherein MSE' is the three-dimensional mean square error computed over the k selected layers, k being the number of layers.
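A Python sketch of the PSNR in the standard form 10*log10(L²/MSE); since the patent's formula images are missing, this form is an assumption consistent with the surrounding text. It applies unchanged to a single slice or a stack of adjacent slices:

```python
import math
import numpy as np

def psnr(orig, proc, peak=None):
    """PSNR = 10 * log10(L**2 / MSE), with L the maximum pixel (or voxel)
    value of the original image unless an explicit peak is given."""
    o = np.asarray(orig, float)
    p = np.asarray(proc, float)
    mse = float(np.mean((o - p) ** 2))
    L = float(o.max()) if peak is None else float(peak)
    return 10.0 * math.log10(L ** 2 / mse)

orig = np.full((3, 4, 4), 255.0)   # 3-layer stack, peak value 255
proc = orig - 5.0                  # uniform error of 5 -> MSE = 25
val = psnr(orig, proc)
print(round(val, 2))               # -> 34.15
```

A higher PSNR corresponds to a smaller MSE relative to the peak value, i.e., less distortion between the original and processed images.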
The following calculation results can be obtained:
from the above results we can have the following analysis:
the three-dimensional PSNR value is higher than the two-dimensional corresponding parameter, so that the three-dimensional PSNR is more sensitive to the image distortion degree and has more advantages in evaluating the image distortion degree.
The above embodiments are only preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, but any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention are intended to be within the scope of the present invention as claimed.

Claims (9)

1. A method for quantitatively evaluating an image using three-dimensional parameters, the method comprising the steps of:
s1: scanning parameters: scanning the target image by using a measuring die body, and obtaining a plurality of detection images;
s2: two-dimensional parameter calculation: respectively calculating two-dimensional parameters corresponding to the detection images;
s3: three-dimensional parameter expansion calculation: selecting a plurality of layers of adjacent images and ROIs from the detection images according to the calculated two-dimensional parameters of each image, obtaining the average value of voxels of each three-dimensional ROI, and calculating to obtain the three-dimensional parameters of each image; or respectively selecting a plurality of layers of adjacent images from the detection images according to the calculated two-dimensional parameters of each image, and then selecting the preprocessed images of the images and calculating to obtain the three-dimensional parameters of each image; wherein the three-dimensional parameters include one or more of image uniformity, low contrast resolution, contrast-to-noise ratio, geometric accuracy, mean square error and peak signal-to-noise ratio;
and evaluating the target image by combining the three-dimensional parameters.
2. The method for quantitatively evaluating an image according to claim 1, wherein in the process of calculating the two-dimensional parameters, one layer of image in the detection images and a plurality of ROIs of that image are selected by a program and substituted into a formula for calculation; and/or, in the process of calculating the two-dimensional parameters, one layer of image in the detection images is selected by a program, the preprocessed image of that layer is then selected, and the images are substituted into a formula for calculation.
3. The method of quantitatively evaluating an image of claim 1, wherein the measurement phantom comprises a first measurement module and a second measurement module; the first measuring module is used for measuring image uniformity; the second measurement module is used for measuring one or more of low contrast resolution, contrast-to-noise ratio, geometric accuracy, mean square error and peak signal-to-noise ratio of the image.
4. The method for quantitatively evaluating an image according to claim 3, wherein the two-dimensional parameter calculation method of the image uniformity is:
selecting, from the first measurement module, five ROIs in one layer of the detection images (one at the center of the cross section and four in the orthogonal directions away from the center), respectively calculating the pixel average value mean of each ROI, and obtaining the uniformity of the two-dimensional image according to formulas (1) and (2):

mean = (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} f(i,j)   (1)

Uniformity = [mean(high) − mean(low)] / [mean(high) + mean(low)]   (2)

wherein f(i,j) is the pixel value of the i-th row and j-th column in the ROI; M, N are the length and width of the ROI; mean(high) represents the maximum of the five ROI averages; mean(low) represents the minimum of the five ROI averages;
the three-dimensional parameter calculation method of the image uniformity comprises the following steps:
selecting, in the three-dimensional direction, the same regions (one at the center and four in the orthogonal directions from the center) on a plurality of adjacent images of the detection images to form five three-dimensional ROIs, expanding the parameters in the two-dimensional formula into three dimensions, respectively calculating the average voxel value T_mean of each ROI, and substituting into formulas (3) and (4) to calculate the three-dimensional uniformity of the image:

T_mean = (1/(M×N×O)) Σ_{k=1}^{O} Σ_{i=1}^{M} Σ_{j=1}^{N} f(i,j,k)   (3)

Uniformity' = [T_mean(high) − T_mean(low)] / [T_mean(high) + T_mean(low)]   (4)

wherein f(i,j,k) is the voxel value of the i-th row and j-th column of the k-th layer in the ROI; M, N are the length and width of the ROI and O is the number of layers; T_mean(high) and T_mean(low) represent the maximum and minimum, respectively, of the five three-dimensional ROI averages.
5. A method of quantitatively evaluating an image as claimed in claim 3, wherein the two-dimensional parameter calculation method of the image low contrast resolution is:
selecting, from the second measurement module, one layer of image in the detection images, selecting ROIs of the same size within the image ranges of the first substance and the second substance, respectively calculating the pixel mean mean and standard deviation SD of each ROI, and substituting into formulas (5), (6) and (7) to calculate the two-dimensional low-contrast resolution value LCV of the image:

mean = (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} f(i,j)   (5)

SD = sqrt( (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [f(i,j) − mean]² )   (6)

LCV = 2 × (SD_first + SD_second) / |CT_first − CT_second|   (7)

wherein CT_first represents the CT value of the first substance, CT_second represents the CT value of the second substance, f(i,j) is the pixel value of the i-th row and j-th column in the ROI, and M, N are the length and width of the ROI;
the three-dimensional parameter calculation method of the image low-contrast resolution comprises the following steps:
selecting, from the second measurement module, a plurality of adjacent images of the detection images, respectively selecting ROIs of the same size within the first substance and second substance image ranges to form two groups of three-dimensional ROIs, respectively calculating the voxel mean T_mean and standard deviation T_SD of each three-dimensional ROI, and substituting into the three-dimensional expansion formulas (8), (9) and (10) to obtain the three-dimensional LCV':

T_mean = (1/(M×N×O)) Σ_{k=1}^{O} Σ_{i=1}^{M} Σ_{j=1}^{N} f(i,j,k)   (8)

T_SD = sqrt( (1/(M×N×O)) Σ_{k=1}^{O} Σ_{i=1}^{M} Σ_{j=1}^{N} [f(i,j,k) − T_mean]² )   (9)

LCV' = 2 × (T_SD_first + T_SD_second) / |CT_first − CT_second|   (10)

wherein f(i,j,k) is the voxel value of the i-th row and j-th column of the k-th layer in the ROI, and O is the number of layers.
6. A method of quantitatively evaluating an image as claimed in claim 3, wherein the two-dimensional parameter calculation method of geometric accuracy of the image is:
selecting, from the second module, one layer of image in the detection images, carrying out circle detection on the image to obtain the circle center coordinates of the four circles in the horizontal and vertical directions, calculating by program the center distance of the two horizontal circles and of the two vertical circles, i.e., obtaining the values of the geometric accuracy of the image in the horizontal and vertical directions respectively, and finally judging the geometric accuracy of the target image in the horizontal and vertical directions by comparison with the actual physical distance;
the three-dimensional parameter calculation method of the geometric accuracy of the image comprises the following steps:
selecting, from the second module, a plurality of adjacent images in the detection images, respectively carrying out circle detection on the plurality of images to obtain the circle center coordinates of the four circles in the horizontal and vertical directions, calculating by program the center distance of the two horizontal circles and of the two vertical circles on each image, and expanding the geometric accuracy in the horizontal and vertical directions into three dimensions as the average of the corresponding center distances over the selected layers, namely
D=(l 1 +l 2 +…+l n )/n (17)
wherein n represents the number of layers and l_n represents the horizontal or vertical center distance of the n-th slice; the geometric accuracy of the target image in the horizontal and vertical directions is judged by comparison with the actual physical distance.
7. A method of quantitatively evaluating an image as claimed in claim 3, wherein the two-dimensional parameter calculation method of the image mean square error is:
selecting one layer of image and the processed image of the corresponding layer from the detection images, and obtaining the two-dimensional parameter of the mean square error through the following formula:

MSE = (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [f(i,j) − f(i,j)']²

wherein M, N are the length and width of the image, f(i,j) represents the pixel value of the original image, and f(i,j)' represents the pixel value of the processed image;
the three-dimensional parameter calculation method of the image mean square error comprises the following steps:
selecting a plurality of layers of adjacent images and the processed images of the corresponding adjacent layers from the detection images, and obtaining the three-dimensional parameter of the mean square error through the following formula:

MSE' = (1/(M×N×O)) Σ_{k=1}^{O} Σ_{i=1}^{M} Σ_{j=1}^{N} [f(i,j,k) − f(i,j,k)']²

wherein M, N are the length and width of the image, O is the number of layers, f(i,j,k) represents the voxel value of the original image, and f(i,j,k)' represents the voxel value of the processed image.
8. The method for quantitatively evaluating an image of claim 7, wherein the two-dimensional parameter calculation method of the peak signal-to-noise ratio of the image is:
selecting one layer of image and the processed image of the corresponding layer from the detection images, and obtaining the two-dimensional parameter of the peak signal-to-noise ratio through the following formula:

PSNR = 10 × log10(L² / MSE)

wherein L is the maximum signal value, replaced here by the maximum pixel value of the image;
the three-dimensional parameter calculation method of the image peak signal-to-noise ratio comprises the following steps:
selecting a plurality of layers of adjacent original images and the processed images of the corresponding adjacent layers, and obtaining the three-dimensional parameter of the peak signal-to-noise ratio through the following formula:

PSNR' = 10 × log10(L² / MSE')

wherein MSE' is the three-dimensional mean square error computed over the k selected layers, k being the number of layers.
9. The method of quantitatively evaluating an image of any one of claims 1-8, wherein the measurement phantom is the Catphan 504.
CN201810548477.1A 2018-05-31 2018-05-31 Method for quantitatively evaluating image by utilizing three-dimensional parameters Active CN110634117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810548477.1A CN110634117B (en) 2018-05-31 2018-05-31 Method for quantitatively evaluating image by utilizing three-dimensional parameters


Publications (2)

Publication Number Publication Date
CN110634117A CN110634117A (en) 2019-12-31
CN110634117B true CN110634117B (en) 2023-11-24

Family

ID=68966152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810548477.1A Active CN110634117B (en) 2018-05-31 2018-05-31 Method for quantitatively evaluating image by utilizing three-dimensional parameters

Country Status (1)

Country Link
CN (1) CN110634117B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679808A (en) * 2013-12-24 2014-03-26 通号通信信息集团有限公司 Method and system for rebuilding three-dimensional head model by two-dimensional nuclear magnetic images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170337682A1 (en) * 2016-05-18 2017-11-23 Siemens Healthcare Gmbh Method and System for Image Registration Using an Intelligent Artificial Agent

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679808A (en) * 2013-12-24 2014-03-26 通号通信信息集团有限公司 Method and system for rebuilding three-dimensional head model by two-dimensional nuclear magnetic images

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Quantitative analysis of imaging parameters and image quality of accelerator on-board kV-CBCT; Liu Lei et al.; Journal of Anhui Medical University; Sep. 30, 2014; page 1336, paragraph 5 *
Study on quantitative evaluation methods for QC images of the on-board imaging system of a medical linear accelerator; Zhuang Yongdong et al.; Chinese Journal of Radiation Oncology; Apr. 30, 2017; page 444, paragraph 2 and page 445, paragraph 5 *
Zhuang Yongdong et al. Study on quantitative evaluation methods for QC images of the on-board imaging system of a medical linear accelerator. Chinese Journal of Radiation Oncology. 2017, pages 442-447. *
Li Yueqing et al. Chapter 6, Magnetic Resonance Imaging. Principles of Medical Imaging. 2009, page 174. *
Wang Yanjiang et al. Chapter 4, Image Restoration. Digital Image Processing. 2016, pages 63-64. *


Similar Documents

Publication Publication Date Title
CN109211904B (en) Detection system and detection method for two-dimensional internal structure of asphalt mixture
CN105931257B (en) SAR image method for evaluating quality based on textural characteristics and structural similarity
US20120330447A1 (en) Surface data acquisition, storage, and assessment system
CN104838422B (en) Image processing equipment and method
CN108550145B (en) SAR image quality evaluation method and device
CN110210448B (en) Intelligent face skin aging degree identification and evaluation method
CN111476159A (en) Method and device for training and detecting detection model based on double-angle regression
CN106780584B (en) The fine evaluation method of grain direction based on gray level co-occurrence matrixes
CN112991287B (en) Automatic indentation measurement method based on full convolution neural network
CN113066064B (en) Cone beam CT image biological structure identification and three-dimensional reconstruction system based on artificial intelligence
CN110348459B (en) Sonar image fractal feature extraction method based on multi-scale rapid carpet covering method
CN106618572A (en) Automatic evaluation method of image quantitation of medical magnetic resonance model body
Hussein et al. A novel edge detection method with application to the fat content prediction in marbled meat
CN112085675A (en) Depth image denoising method, foreground segmentation method and human motion monitoring method
CN110956618B (en) CT image small defect quantification method based on coefficient of variation method
Girón et al. Nonparametric edge detection in speckled imagery
Kerut et al. Review of methods for texture analysis of myocardium from echocardiographic images: a means of tissue characterization
CN110634117B (en) Method for quantitatively evaluating image by utilizing three-dimensional parameters
CN116309608B (en) Coating defect detection method using ultrasonic image
CN111595247B (en) Crude oil film absolute thickness inversion method based on self-expansion convolution neural network
CN107478656A (en) Paper pulp mixing effect method of determination and evaluation based on machine vision, device, system
Berger et al. Automated ice-bottom tracking of 2D and 3D ice radar imagery using Viterbi and TRW-S
Yu et al. Analysis and processing of decayed log CT image based on multifractal theory
Khoje et al. A comprehensive survey of fruit grading systems for tropical fruits of Maharashtra
CN109242823B (en) Reference image selection method and device for positioning calculation and automatic driving system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220706

Address after: 510610 512, building a, No. 11, Nanyun fifth road, Huangpu District, Guangzhou, Guangdong Province

Applicant after: Guangzhou Furui value medical and Health Industry Co.,Ltd.

Address before: 510060 No. 651 Dongfeng East Road, Guangdong, Guangzhou

Applicant before: Sun Yat-sen University Cancer Center (Sun Yat-sen University Affiliated to Cancer Center, Sun Yat-sen University Cancer Institute)

GR01 Patent grant