CN115760891A - CT image quality evaluation method and system based on edge detection algorithm


Publication number
CN115760891A
CN115760891A
Authority
CN
China
Prior art keywords
image
information
edge detection
shooting
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211360778.4A
Other languages
Chinese (zh)
Other versions
CN115760891B (en)
Inventor
张超
阮狄克
徐成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
6th Medical Center of PLA General Hospital
Original Assignee
6th Medical Center of PLA General Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 6th Medical Center of PLA General Hospital filed Critical 6th Medical Center of PLA General Hospital
Priority to CN202211360778.4A
Publication of CN115760891A
Application granted
Publication of CN115760891B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a CT image quality evaluation method and system based on an edge detection algorithm, and relates to the technical field of image processing. CT image acquisition parameter information and shooting target information are obtained and used to determine predicted shooting effect information. An actually shot CT image is then obtained, global and focus edge detection is performed on it with an edge detection algorithm, and an image edge detection result is determined. The image edge detection information and the focus target object edge detection result are traversed and compared according to the predicted shooting effect information, deviation information of each region of the image is determined on the basis of the comparison result, and an image quality evaluation result is obtained. The invention solves the technical problems in the prior art that the evaluation process is insufficiently intelligent, the entry point and the evaluation direction are not rigorous enough, and the image quality evaluation result is insufficiently accurate; it optimizes the evaluation method, refines the analysis dimensions, and completes intelligent and accurate quality evaluation of the CT image.

Description

CT image quality evaluation method and system based on edge detection algorithm
Technical Field
The invention relates to the technical field of image processing, in particular to a CT image quality evaluation method and system based on an edge detection algorithm.
Background
CT has outstanding clinical and market value and is widely applied in the medical and industrial fields. Information diagnosis can be performed on the basis of CT image information for targeted diagnosis, treatment and correction, so the quality requirements on CT images are high and CT image quality directly influences the subsequent diagnosis and treatment effect. At present, CT image quality is mainly evaluated by assessing the various performance indexes of the CT system. However, because of the generic nature of the CT system and of the image evaluation flow, the final image evaluation effect has certain defects and fits the actual target insufficiently, so the existing evaluation methods still have room for improvement.
In the prior art, when CT image quality is evaluated, the evaluation process is insufficiently intelligent, the entry point and the evaluation direction are not rigorous enough, and the accuracy of the image quality evaluation result is insufficient.
Disclosure of Invention
The application provides a CT image quality evaluation method and system based on an edge detection algorithm, which are used to solve the technical problems in the prior art that the evaluation process is insufficiently intelligent, the entry point and the evaluation direction are not rigorous enough, and the image quality evaluation result is insufficiently accurate.
In view of the foregoing problems, the present application provides a method and a system for evaluating quality of a CT image based on an edge detection algorithm.
In a first aspect, the present application provides a method for evaluating quality of a CT image based on an edge detection algorithm, the method including: acquiring CT image acquisition parameter information and shooting target information; acquiring predicted shooting effect information according to the CT image acquisition parameter information and the shooting target information; acquiring an actual shot CT image, carrying out global and focus edge detection on the actual shot CT image by using an edge detection algorithm, and determining an image edge detection result, wherein the image edge detection result comprises image edge detection information and a focus target object edge detection result; traversing and comparing the image edge detection information and the focus target object edge detection result according to the predicted shooting effect information to obtain a comparison result; and determining deviation information of each region of the image according to the comparison result, and obtaining an image quality evaluation result based on the deviation information of each region of the image.
In a second aspect, the present application provides a CT image quality evaluation system based on an edge detection algorithm, the system comprising: the information acquisition module is used for acquiring CT image acquisition parameter information and shooting target information; the information prediction module is used for obtaining predicted shooting effect information according to the CT image acquisition parameter information and the shooting target information; the image detection module is used for obtaining an actual shooting CT image, carrying out global and focus edge detection on the actual shooting CT image by utilizing an edge detection algorithm, and determining an image edge detection result, wherein the image edge detection result comprises image edge detection information and a focus target object edge detection result; the result comparison module is used for performing traversal comparison on the image edge detection information and the focus target object edge detection result according to the predicted shooting effect information to obtain a comparison result; and the deviation evaluation module is used for determining deviation information of each area of the image according to the comparison result and obtaining an image quality evaluation result based on the deviation information of each area of the image.
One or more technical solutions provided in the present application have at least the following technical effects or advantages. According to the CT image quality evaluation method based on the edge detection algorithm, CT image acquisition parameter information and shooting target information are obtained and the predicted shooting effect information is determined from them; the actually shot CT image is then obtained and global and focus edge detection is carried out with the edge detection algorithm to determine the image edge detection result, which comprises the image edge detection information and the focus target object edge detection result; the image edge detection information and the focus target object edge detection result are traversed and compared according to the predicted shooting effect information to obtain the comparison result; and the deviation information of each region of the image is determined on the basis of the comparison result, after which the image quality evaluation result is obtained.
Drawings
FIG. 1 is a schematic flow chart of a CT image quality evaluation method based on an edge detection algorithm according to the present application;
FIG. 2 is a schematic diagram illustrating a process of obtaining an image edge detection result in a CT image quality evaluation method based on an edge detection algorithm according to the present application;
FIG. 3 is a schematic diagram illustrating an image quality evaluation result obtaining process in a CT image quality evaluation method based on an edge detection algorithm according to the present application;
fig. 4 is a schematic structural diagram of a CT image quality evaluation system based on an edge detection algorithm according to the present application.
Description of reference numerals: an information acquisition module 11, an information prediction module 12, an image detection module 13, a result comparison module 14 and a deviation evaluation module 15.
Detailed Description
The application provides a CT image quality evaluation method and system based on an edge detection algorithm. CT image acquisition parameter information and shooting target information are obtained and the predicted shooting effect information is determined from them; the actually shot CT image is obtained and global and focus edge detection is carried out with the edge detection algorithm to determine the image edge detection result; the image edge detection information and the focus target object edge detection result are traversed and compared according to the predicted shooting effect information; the deviation information of each region of the image is determined on the basis of the comparison result, and the image quality evaluation result is then obtained.
Example one
As shown in fig. 1, the present application provides a method for evaluating quality of a CT image based on an edge detection algorithm, the method comprising:
step S100: acquiring CT image acquisition parameter information and shooting target information;
specifically, CT has outstanding clinical and market value and is widely applied in the medical and industrial fields, so the quality requirements on CT images are increasingly high. According to the CT image quality evaluation method based on the edge detection algorithm, the imaging effect of the shooting target is predicted on the basis of the associated influencing parameters, the predicted effect is then compared and analysed against the actually shot target image, evaluation analysis is performed on the basis of the degree of deviation, and the image quality evaluation result is determined.
Further, acquiring CT image acquisition parameter information and shooting target information, in step S100 of the present application, the method further includes:
step S110: acquiring a CT shooting request sheet;
step S120: performing semantic recognition on the CT shooting request list to obtain a CT shooting target and CT shooting requirements;
step S130: performing target feature analysis according to the CT shooting target, and determining a target depth feature, a target ray absorption feature, a target size feature and a target attribute feature;
step S140: constructing the shooting target information according to the target depth feature, the target ray absorption feature, the target size feature and the target attribute feature;
step S150: performing shooting parameter characteristic analysis according to the CT shooting requirement and the shooting target information, and determining X-ray dose, pixel size, thickness, window setting and scanning parameters;
step S160: and acquiring the CT image acquisition parameter information based on the X-ray dose, the pixel size, the thickness, the window setting and the scanning parameters.
Specifically, a CT shooting request sheet, i.e. a request sheet for the content to be shot, is obtained. Semantic recognition is performed on the CT shooting request sheet on the basis of object category, behaviour category, property category and the like, shooting key points are analysed and extracted from the different part-of-speech information, and the CT shooting target and the CT shooting requirement are determined. The CT shooting target is the object to be shot, for example a part of the body. Feature analysis is carried out on the CT shooting target, and the multi-dimensional features of the shooting target are determined, including the target depth feature, the target ray absorption feature, the target size feature and the target attribute feature, together with the degree to which each feature is expressed. The target depth feature, the target ray absorption feature, the target size feature and the target attribute feature are then integrated to form the shooting target information.
Shooting parameter feature analysis is then carried out according to the CT shooting requirement and the shooting target information. Noise is a key factor influencing image quality and directly affects the density resolution and the spatial resolution of a CT image; when the noise is too high, the focus cannot be identified in the CT image and the accuracy of focus identification is impaired. The key factors influencing noise, including the X-ray dose, pixel size, slice thickness, window setting and scanning parameters, are therefore determined as the shooting parameter feature analysis result, and this result is taken as the CT image acquisition parameter information. Performing feature analysis on the shooting target and analysing the parameters that influence shooting effectively guarantees the accuracy of the shooting target information and of the CT image acquisition parameter information.
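By way of illustration only, the following Python sketch shows one possible way of organizing the shooting target information and the CT image acquisition parameter information described in steps S110 to S160; the field names, data types and default values are assumptions made for this sketch and are not prescribed by the embodiment.

```python
# Sketch of steps S110-S160: assembling shooting-target information and CT acquisition
# parameters. All field names and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ShootingTargetInfo:
    depth_mm: float        # target depth feature
    attenuation_hu: float  # target ray-absorption feature (approximate mean HU)
    size_mm: float         # target size feature
    attribute: str         # target attribute feature, e.g. "lumbar vertebra"

@dataclass
class AcquisitionParams:
    xray_dose_mas: float       # X-ray dose (tube current-time product)
    pixel_size_mm: float
    slice_thickness_mm: float
    window: tuple              # (window level, window width)
    scan_params: dict          # remaining scanner settings

def build_target_info(parsed_request: dict) -> ShootingTargetInfo:
    """Map semantically recognised request-sheet fields to target features."""
    return ShootingTargetInfo(
        depth_mm=parsed_request.get("depth_mm", 80.0),
        attenuation_hu=parsed_request.get("attenuation_hu", 300.0),
        size_mm=parsed_request.get("size_mm", 40.0),
        attribute=parsed_request.get("target", "lumbar vertebra"),
    )

def build_acquisition_params(requirement: dict, target: ShootingTargetInfo) -> AcquisitionParams:
    """Choose acquisition parameters from the CT shooting requirement and target features."""
    dose = 150.0 + 0.5 * target.depth_mm  # toy heuristic: deeper target -> higher dose
    return AcquisitionParams(
        xray_dose_mas=dose,
        pixel_size_mm=requirement.get("pixel_size_mm", 0.6),
        slice_thickness_mm=requirement.get("slice_thickness_mm", 1.0),
        window=requirement.get("window", (400, 1800)),
        scan_params={"kVp": requirement.get("kvp", 120)},
    )
```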
Step S200: acquiring predicted shooting effect information according to the CT image acquisition parameter information and the shooting target information;
specifically, the CT image acquisition parameter information and the shooting target information are the data sources for predicting the image shooting effect. Shooting effect prediction can be performed by constructing a preset prediction model, which is trained on a historical experience data set to guarantee the objectivity and accuracy of the prediction result and the efficiency of the analysis. The CT image acquisition parameter information and the shooting target information are input into the preset prediction model, which performs model simulation analysis and prediction and outputs the shooting-effect-related parameter information, comprising image edge detection information and focus target object edge detection information; this output is taken as the predicted shooting effect information. The predicted shooting effect information serves as the standard comparison reference and lays the foundation for the quality evaluation of the subsequently actually shot CT image.
Further, obtaining predicted shooting effect information according to the CT image acquisition parameter information and the shooting target information, in step S200 of the present application, further comprising:
step S210: obtaining a preset prediction model, wherein the preset prediction model is a neural network model obtained by training convergence through a historical experience data set;
step S220: acquiring shooting target attribute information and shooting target focus setting information according to the shooting target information;
step S230: and inputting the CT image acquisition parameter information, the shooting target attribute information and the shooting target focus setting information into the preset prediction model, predicting shooting effect parameters, and outputting the predicted shooting effect information.
Specifically, standardized historical CT data information is collected and used as the historical experience data set, and a neural network model is trained on it until convergence to generate the preset prediction model, which effectively improves the accuracy of the model output. The preset prediction model predicts the shooting effect from the input shooting parameter data. The obtained shooting target information is further analysed and identified, and the shooting target attribute information and the shooting target focus setting information are extracted, for example the specific part to be shot, the shooting focal length and the focus position coordinates. The CT image acquisition parameter information, the shooting target attribute information and the shooting target focus setting information are input into the preset prediction model, the shooting effect is predicted on the basis of these shooting-related parameters, and the shooting-effect-related parameter data are determined and output as the predicted shooting effect information. Performing shooting effect prediction by constructing the preset prediction model effectively guarantees an objective prediction result.
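By way of illustration only, the following Python sketch shows one possible preset prediction model of the kind described in steps S210 to S230, here a small multi-layer perceptron trained on a synthetic historical experience data set. The feature layout, the use of scikit-learn's MLPRegressor and the toy training targets are assumptions of this sketch; the embodiment only requires a neural network model trained to convergence.

```python
# Sketch of steps S210-S230: neural-network prediction of the shooting effect.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy "historical experience data set": each row is
# [dose_mAs, pixel_size_mm, slice_thickness_mm, target_depth_mm, target_size_mm, focus_x, focus_y]
rng = np.random.default_rng(0)
X_hist = rng.uniform([100, 0.3, 0.5, 20, 10, 0, 0],
                     [300, 1.0, 5.0, 200, 100, 512, 512], size=(500, 7))
# Illustrative shooting-effect targets: expected global edge strength and focus-edge sharpness.
y_hist = np.c_[X_hist[:, 0] / 300 - X_hist[:, 2] / 10,
               X_hist[:, 0] / 300 - X_hist[:, 1]]

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_hist, y_hist)  # train to convergence on the historical data set

# Predict the shooting effect for one planned acquisition.
planned = np.array([[180, 0.6, 1.0, 80, 40, 256, 256]])
predicted_effect = model.predict(planned)
print("predicted shooting-effect parameters:", predicted_effect)
```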
Step S300: acquiring an actual shooting CT image, carrying out global and focus edge detection on the actual shooting CT image by utilizing an edge detection algorithm, and determining an image edge detection result, wherein the image edge detection result comprises image edge detection information and a focus target object edge detection result;
specifically, the shooting range is determined on the basis of the CT image acquisition parameter information and the shooting target information, and the actually shot CT image is obtained by shooting the target. The actually shot image is filtered to reduce image noise and improve image clarity. The pixel points of the actually shot CT image are then determined, and the gradient strength of each pixel point is obtained by convolution with templates in several directions; because the edge points found from the gradient images of different direction templates differ, the templates complement one another and combining them yields a more accurate edge image. Pixel-point gradient detection is then performed, analysis and judgement are carried out separately for the global image and the focus edge, and the image edge detection information and the focus target object edge detection result are generated as the image edge detection result.
Further, as shown in fig. 2, the actually captured CT image is subjected to global and focus edge detection by using an edge detection algorithm to determine an image edge detection result, where step S300 further includes:
step S310-1: acquiring a target focus shooting position range according to the shooting target information and the CT image acquisition parameter information;
step S320-1: according to the target focus shooting position range, performing focus area rough calibration, wherein the focus area rough calibration range is larger than the target focus shooting position range;
step S330-1: and carrying out global and focus edge detection on the roughly calibrated actual shooting CT image by using a preset edge detection algorithm to obtain image edge detection information and a focus target object edge detection result.
Specifically, parameter analysis is performed on the acquired shooting target information and the CT image acquisition parameter information, a shooting region is delimited for the shooting target, and the position coordinates and size parameters of that region are determined as the target focus shooting position range. The focus area is then determined on the basis of the target focus shooting position range. In order to ensure that the shot image completely covers the shooting target, the region outline of the shooting focus area is calibrated and the coarse calibration of the focus area is completed; the coarse calibration range must be larger than the target focus shooting position range so that coverage of the shooting target reaches 100%. The preset edge detection algorithm, i.e. the algorithm to be applied for image edge detection, for example edge detection based on an edge detection operator, is then determined. Global edge detection is performed on the coarsely calibrated actually shot CT image with the preset edge detection algorithm, points with obvious brightness changes, such as discontinuities in image depth or in surface direction, are identified, and the image edge detection information is acquired; focus edge detection is performed in the same way on the coarsely calibrated focus area to obtain the focus target object edge detection result. Edge detection discards data that is clearly irrelevant while retaining the important structural attributes of the image, which facilitates the subsequent analysis.
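By way of illustration only, the following Python sketch shows one possible coarse calibration of the focus area as described in steps S310-1 and S320-1, namely enlarging the target focus shooting position range by a fixed margin so that the calibrated area fully covers the shooting target; the margin value and the coordinate convention are assumptions of this sketch.

```python
# Sketch of steps S310-1 to S320-1: coarse calibration of the focus area by padding
# the target focus shooting-position range; the margin is an assumed value.
def coarse_calibrate(position_range, image_shape, margin=20):
    """position_range = (row0, row1, col0, col1); returns an enlarged range clipped to the image."""
    r0, r1, c0, c1 = position_range
    h, w = image_shape
    return (max(0, r0 - margin), min(h, r1 + margin),
            max(0, c0 - margin), min(w, c1 + margin))

# Example: a 512x512 slice with the lesion expected in rows 200-260, cols 180-240.
roi = coarse_calibrate((200, 260, 180, 240), (512, 512))
# Global edge detection runs on the whole slice; focus edge detection runs on the roi sub-image.
print("coarsely calibrated focus area:", roi)
```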
Further, step S330-1 of performing global and focus edge detection on the coarsely calibrated actually shot CT image by using a preset edge detection algorithm further includes:
step S331-1: filtering the actually shot CT image, and traversing each pixel point of the actually shot CT image;
step S332-1: based on all pixel points of the whole actually shot CT image, respectively convolving with multi-direction templates, and calculating the gradient intensity of each pixel point, wherein the multi-direction template convolution at least comprises X and Y directions;
step S333-1: adding the multi-directional gradient strengths to obtain an approximate gradient of the pixel points;
step S334-1: and obtaining a preset double threshold, and performing edge judgment on the approximate gradient of the pixel point by using the preset double threshold to determine edge detection information.
Specifically, the actually shot CT image is obtained and image filtering is applied to it, the image target features are extracted, and the noise introduced during image digitization is removed, so that the processed image is clearer. Each pixel point of the actually shot CT image is then determined, and the pixel points are traversed and convolved with the multi-direction templates. Because image edges are not only horizontal and vertical, i.e. in the X and Y directions, but also occur in several other directions such as 0 degrees, 45 degrees, 90 degrees and 135 degrees, the gradient strength of each pixel point is calculated for a number of different edge directions. The edge points corresponding to the direction gradient maps formed by the different templates differ, so detecting in multiple directions effectively guarantees the edge detection precision of the operator; the templates complement one another, and combining them yields a more accurate edge image. The gradient strengths obtained in the several directions are added together as the approximate gradient of the pixel point. A preset double threshold, consisting of a high threshold and a low threshold, is then obtained, the approximate gradient of each pixel point is judged against the double threshold for both the global image and the focus edge, and the edge detection information is determined.
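By way of illustration only, the following Python sketch shows one possible implementation of steps S331-1 to S334-1: filtering, convolution with multi-direction templates, summation of the gradient strengths into an approximate gradient, and dual-threshold edge judgement. The Sobel-style kernels and the threshold values are assumptions of this sketch; the embodiment only requires templates covering at least the X and Y directions and a preset double threshold.

```python
# Sketch of steps S331-1 to S334-1: multi-direction gradient and dual-threshold edge judgement.
import numpy as np
from scipy import ndimage

def edge_detect(image, low_thr=30.0, high_thr=90.0):
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma=1.0)  # filtering / noise suppression

    # Direction templates for 0, 90, 45 and 135 degrees (Sobel-style, assumed kernels).
    k0   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # X direction
    k90  = k0.T                                                    # Y direction
    k45  = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float)
    k135 = np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]], float)

    # Convolve every pixel with each template and add the gradient strengths.
    approx_grad = sum(np.abs(ndimage.convolve(smoothed, k)) for k in (k0, k90, k45, k135))

    strong = approx_grad >= high_thr              # definite edge points
    weak = (approx_grad >= low_thr) & ~strong     # candidate edge points
    return approx_grad, strong, weak

# Toy usage on a synthetic slice with a bright square "lesion".
img = np.zeros((128, 128)); img[40:80, 40:80] = 200.0
grad, strong, weak = edge_detect(img)
print("strong edge pixels:", int(strong.sum()), "weak edge pixels:", int(weak.sum()))
```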
Further, step S300 of the present application further includes:
step S310-2: setting a uniform detection object;
step S320-2: constructing a detection module based on the uniform detection object, carrying out whole scanning field CT value detection on the actually shot CT image, and determining the CT value of the uniform detection object in each region in the scanning field;
step S330-2: determining an average CT value according to the CT value of each region, and determining the change information of the CT value of each region in the scanning field based on the average CT value;
step S340-2: and according to the CT value change information, determining the uniformity and noise of the CT image of the pixel point in each region.
Specifically, a uniform detection object is set; the uniform detection object is an auxiliary substance for detecting the CT values of the regions in the image. A detection module is constructed on the basis of the uniform detection object, for example distilled water or water-equivalent plastic is used to form a liquid or solid water phantom, and this phantom serves as the detection module. CT value detection is performed over the whole scanning field of the actually shot CT image with the detection module, the scanning field being the whole detection range of the image, and the CT values of the uniform detection object in the different regions of the scanning field are determined; the CT value of the uniform detection object indicates the CT value of the corresponding image region. Because the CT values within one region may differ, the CT values of each region are averaged to obtain the average CT value, the CT values of the regions in the scanning field are compared on the basis of the average CT value, the variation trend and the variation scale of the CT values of the regions are determined, and the CT value change information is generated. The CT image uniformity and noise of the pixel points in each region are then determined from the CT value change information: the larger the variation of the CT values, the lower the uniformity of the CT image and the higher the noise.
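By way of illustration only, the following Python sketch shows one possible way of computing the CT value change information, uniformity and noise described in steps S310-2 to S340-2 from a uniform detection object; the region positions, region size and the synthetic water phantom are assumptions of this sketch.

```python
# Sketch of steps S310-2 to S340-2: per-region CT values, uniformity and noise on a water phantom.
import numpy as np

def uniformity_and_noise(ct_slice, roi_size=21):
    h, w = ct_slice.shape
    half = roi_size // 2
    centers = {"centre": (h // 2, w // 2), "top": (half, w // 2), "bottom": (h - half - 1, w // 2),
               "left": (h // 2, half), "right": (h // 2, w - half - 1)}
    region_mean, region_std = {}, {}
    for name, (r, c) in centers.items():
        roi = ct_slice[r - half:r + half + 1, c - half:c + half + 1]
        region_mean[name] = float(roi.mean())   # CT value of the region
        region_std[name] = float(roi.std())     # noise of the region
    average_ct = float(np.mean(list(region_mean.values())))
    # CT value change information: deviation of each region from the average CT value.
    variation = {name: m - average_ct for name, m in region_mean.items()}
    # Larger variation -> lower uniformity; the maximum deviation is used as the uniformity figure here.
    uniformity = max(abs(v) for v in variation.values())
    return variation, uniformity, region_std

# Toy water phantom: about 0 HU with Gaussian noise.
phantom = np.random.default_rng(1).normal(0.0, 5.0, size=(256, 256))
variation, uniformity, noise = uniformity_and_noise(phantom)
print("per-region deviation (HU):", {k: round(v, 2) for k, v in variation.items()})
print("uniformity (max deviation, HU):", round(uniformity, 2))
```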
Step S400: traversing and comparing the image edge detection information and the focus target object edge detection result according to the predicted shooting effect information to obtain a comparison result;
step S500: and determining deviation information of each region of the image according to the comparison result, and obtaining an image quality evaluation result based on the deviation information of each region of the image.
Specifically, the predicted shooting effect information is used as the evaluation reference information. An information mapping is established between the predicted shooting effect information on the one hand and the image edge detection information and the focus target object edge detection result on the other, the mapping result is traversed, the information deviation between the predicted shooting effect information and the detection results is determined, and the deviation items together with the corresponding deviation scale are taken as the comparison result. The deviation information is then extracted from the comparison result. The image feature requirements are determined on the basis of the image feature information identification requirements, the CT image is divided into a number of image regions on the basis of those requirements, and the evaluation weight value of each region is determined on the basis of the image feature requirements, the deviation information and the evaluation weight values corresponding one-to-one to the image regions. The deviation information of the regions of the CT image is weighted and summed to generate the image quality evaluation result, which agrees as closely as possible with the actual quality of the image, so the accuracy of the quality evaluation result is guaranteed.
Further, as shown in fig. 3, step S500 of the present application further includes:
step S510: determining an image detection fusion result according to the image edge detection information, the focus target object edge detection result, and the CT image uniformity and noise of each region pixel point;
step S520: acquiring CT image acquisition parameter information and shooting target information, and determining image characteristic requirements;
step S530: obtaining global and focus division information, and carrying out region division on the image detection fusion result based on the image feature requirement and the global and focus division information;
step S540: traversing and comparing each area by using the predicted shooting effect information, and determining deviation information of each area;
step S550: determining the weight information of each image feature requirement according to the image feature requirements;
step S560: and obtaining the image quality evaluation result according to the weight information of each image feature requirement and the deviation information of each region.
Specifically, the image edge detection information, the focus target object edge detection result, and the uniformity and noise of the pixel points in each region are mapped according to image position, and the detection information at the same position is fused to generate the image detection fusion result. The evaluation and analysis requirements of the different regions of the image are then determined on the basis of the CT image acquisition parameter information and the shooting target information; for example, some regions may only need a single image to identify the focus region, while other regions may need three-dimensional modelling, so the image feature requirements of the regions differ and are obtained accordingly. The ranges of the global and focus regions of the CT image are delimited and the corresponding position information is determined as the global and focus division information. On the basis of the global and focus division information and the different image feature requirement levels, the image detection fusion result is divided into a number of image regions. The predicted shooting effect information is mapped onto the region-divided image detection fusion result, the mapping result is traversed and compared, the deviation data and the degree of deviation between the predicted shooting effect information and the region-divided image detection fusion result are determined, and the deviation information of each region is obtained. A weight value is then set for each image feature requirement, the higher the degree of the image feature requirement, the higher the corresponding weight value, so as to determine the weight information of each image feature requirement. The weight information of the image feature requirements and the deviation information of the regions are weighted and combined, the calculation result is taken as the criterion of image quality, and the image quality evaluation result is generated, which effectively improves the accuracy of the image quality evaluation result.
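By way of illustration only, the following Python sketch shows one possible weighted combination of the per-region deviation information into an image quality evaluation result, as described in steps S540 to S560; the deviation definition (relative error against the predicted shooting effect) and the normalised weights are assumptions of this sketch.

```python
# Sketch of steps S540-S560: weighting per-region deviations into a single quality score.
def quality_score(predicted, measured, weights):
    """predicted/measured: dicts of region -> effect value; weights: region -> importance."""
    total_w = sum(weights.values())
    score = 0.0
    for region, w in weights.items():
        deviation = abs(measured[region] - predicted[region]) / max(abs(predicted[region]), 1e-6)
        score += (w / total_w) * deviation   # weighted sum of per-region deviations
    return 1.0 - min(score, 1.0)             # 1.0 means perfect agreement with the prediction

# Example: the focus region is weighted more heavily than the global background.
predicted = {"focus": 0.85, "global": 0.60}
measured  = {"focus": 0.78, "global": 0.58}
weights   = {"focus": 3.0, "global": 1.0}
print("image quality score:", round(quality_score(predicted, measured, weights), 3))
```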
Example two
Based on the same inventive concept as the CT image quality evaluation method based on the edge detection algorithm in the foregoing embodiment, as shown in fig. 4, the present application provides a CT image quality evaluation system based on the edge detection algorithm, the system includes:
the information acquisition module 11, the information acquisition module 11 is used for acquiring CT image acquisition parameter information and shooting target information;
the information prediction module 12, the information prediction module 12 is configured to obtain predicted shooting effect information according to the CT image acquisition parameter information and the shooting target information;
the image detection module 13 is configured to obtain an actually shot CT image, perform global and focus edge detection on the actually shot CT image by using an edge detection algorithm, and determine an image edge detection result, where the image edge detection result includes image edge detection information and a focus target edge detection result;
the result comparison module 14 is configured to perform traversal comparison on the image edge detection information and the focus target object edge detection result according to the predicted shooting effect information, so as to obtain a comparison result;
and the deviation evaluation module 15 is used for determining deviation information of each area of the image according to the comparison result, and obtaining an image quality evaluation result based on the deviation information of each area of the image.
Further, the system further comprises:
the model acquisition module is used for acquiring a preset prediction model, and the preset prediction model is a neural network model obtained by training and converging through a historical experience data set;
the target information acquisition module is used for acquiring shooting target attribute information and shooting target focus setting information according to the shooting target information;
and the effect prediction module is used for inputting the CT image acquisition parameter information, the shooting target attribute information and the shooting target focus setting information into the preset prediction model, predicting shooting effect parameters and outputting the predicted shooting effect information.
Further, the system further comprises:
the range acquisition module is used for acquiring a target focus shooting position range according to the shooting target information and the CT image acquisition parameter information;
the area calibration module is used for carrying out rough calibration on a focus area according to the target focus shooting position range, wherein the rough calibration range of the focus area is larger than the target focus shooting position range;
and the edge detection module is used for carrying out global and focus edge detection on the actually shot CT image subjected to rough calibration by utilizing a preset edge detection algorithm to obtain image edge detection information and a focus target object edge detection result.
Further, the system further comprises:
the pixel point traversing module is used for filtering the actually shot CT image and traversing each pixel point of the actually shot CT image;
the gradient calculation module is used for respectively convolving each pixel point of the whole actually shot CT image with a multi-direction template and calculating the gradient strength of each pixel point, wherein the multi-direction template convolution at least comprises the X direction and the Y direction;
the gradient acquisition module is used for adding the multi-direction gradient strength to obtain the approximate gradient of the pixel point;
and the edge judgment module is used for acquiring a preset double threshold value, performing edge judgment on the approximate gradient of the pixel point by using the preset double threshold value and determining edge detection information.
Further, the system further comprises:
the request list acquisition module is used for acquiring a CT shooting request list;
the request list identification module is used for carrying out semantic identification on the CT shooting request list to obtain a CT shooting target and a CT shooting requirement;
the characteristic analysis module is used for carrying out target characteristic analysis according to the CT shooting target and determining a target depth characteristic, a target ray absorption characteristic, a target size characteristic and a target attribute characteristic;
the target information construction module is used for constructing the shooting target information according to the target depth characteristic, the target ray absorption characteristic, the target size characteristic and the target attribute characteristic;
the parameter analysis module is used for carrying out shooting parameter characteristic analysis according to the CT shooting requirement and the shooting target information and determining X-ray dosage, pixel size, thickness, window setting and scanning parameters;
and the parameter information acquisition module is used for acquiring the CT image acquisition parameter information based on the X-ray dosage, the pixel size, the thickness, the window setting and the scanning parameters.
Further, the system further comprises:
the detection object setting module is used for setting a uniform detection object;
constructing a detection module based on the uniform detection object, detecting the CT value in the whole scanning field of the actually shot CT image, and determining the CT value of the uniform detection object in each area in the scanning field;
the information determining module is used for determining an average CT value according to the CT value of each area and determining the change information of the CT value of each area in the scanning field based on the average CT value;
and the image information determining module is used for determining the uniformity and the noise of the CT image of the pixel points in each region according to the change information of the CT values.
Further, the system further comprises:
the result determining module is used for determining an image detection fusion result according to the image edge detection information, the focus target object edge detection result, and the CT image uniformity and noise of each region pixel point;
the characteristic requirement determining module is used for acquiring CT image acquisition parameter information and shooting target information and determining image characteristic requirements;
the region division module is used for obtaining global and focus division information and carrying out region division on the image detection fusion result based on the image feature requirement and the global and focus division information;
the deviation information determining module is used for performing traversal comparison on each area by using the predicted shooting effect information to determine the deviation information of each area;
the weight determining module is used for determining weight information of each image feature requirement according to the image feature requirements;
and the quality evaluation module is used for obtaining the image quality evaluation result according to the weight information of each image feature requirement and the deviation information of each region.
In the present specification, through the foregoing detailed description of the CT image quality evaluation method based on the edge detection algorithm, those skilled in the art can clearly understand the CT image quality evaluation method and system based on the edge detection algorithm of this embodiment.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A CT image quality evaluation method based on an edge detection algorithm is characterized by comprising the following steps:
acquiring CT image acquisition parameter information and shooting target information;
acquiring predicted shooting effect information according to the CT image acquisition parameter information and the shooting target information;
acquiring an actual shooting CT image, carrying out global and focus edge detection on the actual shooting CT image by utilizing an edge detection algorithm, and determining an image edge detection result, wherein the image edge detection result comprises image edge detection information and a focus target object edge detection result;
traversing and comparing the image edge detection information and the focus target object edge detection result according to the predicted shooting effect information to obtain a comparison result;
and determining deviation information of each region of the image according to the comparison result, and obtaining an image quality evaluation result based on the deviation information of each region of the image.
2. The method of claim 1, wherein obtaining predicted shooting effect information according to the CT image acquisition parameter information and the shooting target information comprises:
obtaining a preset prediction model, wherein the preset prediction model is a neural network model obtained by training convergence through a historical experience data set;
acquiring shooting target attribute information and shooting target focus setting information according to the shooting target information;
and inputting the CT image acquisition parameter information, the shooting target attribute information and the shooting target focus setting information into the preset prediction model, predicting shooting effect parameters, and outputting the predicted shooting effect information.
3. The method of claim 1, wherein performing global and focus edge detection on the actually shot CT image using an edge detection algorithm to determine an image edge detection result comprises:
acquiring a target focus shooting position range according to the shooting target information and the CT image acquisition parameter information;
according to the target focus shooting position range, performing focus area rough calibration, wherein the focus area rough calibration range is larger than the target focus shooting position range;
and carrying out global and focus edge detection on the actually shot CT image subjected to rough calibration by utilizing a preset edge detection algorithm to obtain image edge detection information and a focus target object edge detection result.
4. The method as claimed in claim 3, wherein performing global and focus edge detection on the coarsely calibrated actually shot CT image by using a preset edge detection algorithm comprises:
filtering the actually shot CT image, and traversing each pixel point of the actually shot CT image;
based on all pixel points of the whole actually shot CT image, respectively convolving with multi-direction templates, and calculating the gradient intensity of each pixel point, wherein the multi-direction template convolution at least comprises X and Y directions;
adding the multidirectional gradient strengths to obtain a pixel point approximate gradient;
and obtaining a preset double threshold, and performing edge judgment on the approximate gradient of the pixel point by using the preset double threshold to determine edge detection information.
5. The method of claim 1, wherein acquiring CT image acquisition parameter information and shooting target information comprises:
acquiring a CT shooting request sheet;
performing semantic recognition on the CT shooting request list to obtain a CT shooting target and CT shooting requirements;
performing target feature analysis according to the CT shooting target, and determining a target depth feature, a target ray absorption feature, a target size feature and a target attribute feature;
constructing the shooting target information according to the target depth feature, the target ray absorption feature, the target size feature and the target attribute feature;
performing shooting parameter characteristic analysis according to the CT shooting requirement and the shooting target information, and determining X-ray dose, pixel size, thickness, window setting and scanning parameters;
and acquiring the CT image acquisition parameter information based on the X-ray dose, the pixel size, the thickness, the window setting and the scanning parameters.
6. The method of claim 1, wherein the method further comprises:
setting a uniform detection object;
constructing a detection module based on the uniform detection object, carrying out whole scanning field CT value detection on the actually shot CT image, and determining the CT value of the uniform detection object in each region in the scanning field;
determining an average CT value according to the CT value of each region, and determining the change information of the CT value of each region in the scanning field based on the average CT value;
and according to the CT value change information, determining the uniformity and noise of the CT image of each area pixel point.
7. The method of claim 6, wherein the method further comprises:
determining an image detection fusion result according to the image edge detection information, the focus target object edge detection result, and the CT image uniformity and noise of each region pixel point;
acquiring CT image acquisition parameter information and shooting target information, and determining image characteristic requirements;
obtaining global and focus division information, and carrying out region division on the image detection fusion result based on the image feature requirement and the global and focus division information;
traversing and comparing each area by using the predicted shooting effect information, and determining deviation information of each area;
determining the weight information of each image feature requirement according to the image feature requirements;
and obtaining the image quality evaluation result according to the weight information of each image feature requirement and the deviation information of each region.
8. An edge detection algorithm-based CT image quality assessment system, characterized in that the system comprises:
the information acquisition module is used for acquiring CT image acquisition parameter information and shooting target information;
the information prediction module is used for obtaining predicted shooting effect information according to the CT image acquisition parameter information and the shooting target information;
the image detection module is used for obtaining an actual shooting CT image, carrying out global and focus edge detection on the actual shooting CT image by utilizing an edge detection algorithm, and determining an image edge detection result, wherein the image edge detection result comprises image edge detection information and a focus target object edge detection result;
the result comparison module is used for performing traversal comparison on the image edge detection information and the focus target object edge detection result according to the predicted shooting effect information to obtain a comparison result;
and the deviation evaluation module is used for determining deviation information of each region of the image according to the comparison result and obtaining an image quality evaluation result based on the deviation information of each region of the image.
CN202211360778.4A 2022-11-02 2022-11-02 CT image quality evaluation method and system based on edge detection algorithm Active CN115760891B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211360778.4A CN115760891B (en) 2022-11-02 2022-11-02 CT image quality evaluation method and system based on edge detection algorithm


Publications (2)

Publication Number Publication Date
CN115760891A 2023-03-07
CN115760891B 2023-05-05

Family

ID=85355287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211360778.4A Active CN115760891B (en) 2022-11-02 2022-11-02 CT image quality evaluation method and system based on edge detection algorithm

Country Status (1)

Country Link
CN (1) CN115760891B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017153590A (en) * 2016-02-29 2017-09-07 東芝メディカルシステムズ株式会社 X-ray CT apparatus
CN110324679A (en) * 2018-03-29 2019-10-11 优酷网络技术(北京)有限公司 A kind of video data handling procedure and device
US11170500B1 (en) * 2018-11-09 2021-11-09 United States Of America As Represented By The Administrator Of Nasa Pyramid image quality indicator (IQI) for X-ray computed tomography
CN114913183A (en) * 2021-02-07 2022-08-16 上海交通大学 Image segmentation method, system, apparatus and medium based on constraint

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHENG-TA CHUANG: "Deep Learning for Improving Image Quality with Uneven Illumination Images" *
颜溶標: "基于多尺度边缘提取和加权卷积稀疏编码的低剂量CT去噪算法" *

Also Published As

Publication number Publication date
CN115760891B (en) 2023-05-05

Similar Documents

Publication Publication Date Title
CN107480677B (en) Method and device for identifying interest region in three-dimensional CT image
CN109035187B (en) Medical image labeling method and device
US20230263463A1 (en) Osteoporosis diagnostic support apparatus
JP6099479B2 (en) Crack detection method
CN111862044A (en) Ultrasonic image processing method and device, computer equipment and storage medium
KR102073468B1 (en) System and method for scoring color candidate poses against a color image in a vision system
JP5852919B2 (en) Crack detection method
CN104838422A (en) Image processing device and method
CN104574312A (en) Method and device of calculating center of circle for target image
CN116563262A (en) Building crack detection algorithm based on multiple modes
CN111241331A (en) Image searching method, device, equipment and medium based on artificial intelligence
KR20230132686A (en) A method for damage identification and volume quantification of concrete pipes based on PointNet++ neural network
CN114565722A (en) Three-dimensional model monomer realization method
CN113749646A (en) Monocular vision-based human body height measuring method and device and electronic equipment
CN116309608B (en) Coating defect detection method using ultrasonic image
CN115760891B (en) CT image quality evaluation method and system based on edge detection algorithm
CN113012127A (en) Cardiothoracic ratio measuring method based on chest medical image
CN115661152B (en) Target development condition analysis method based on model prediction
CN116858102A (en) Weld joint size detection method, system, medium and equipment based on point cloud matching
CN116363104A (en) Automatic diagnosis equipment and system for image medicine
CN110175977B (en) Three-dimensional choroid neovascularization growth prediction method and device and quantitative analysis method
CN113436120A (en) Image fuzzy value identification method and device
CN111325747A (en) Disease detection method and device for rectangular tunnel
CN106033602A (en) Image segmentation device, image segmentation method and image processing system
CN112668621B (en) Image quality evaluation method and system based on cross-source image translation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant