CN114241077B - CT image resolution optimization method and device - Google Patents

CT image resolution optimization method and device

Info

Publication number
CN114241077B
Authority
CN
China
Prior art keywords
image
resolution
low
matrix
matching
Prior art date
Legal status
Active
Application number
CN202210164396.8A
Other languages
Chinese (zh)
Other versions
CN114241077A (en)
Inventor
潘博洋
龚南杰
Current Assignee
Nanchang Ruidu Medical Technology Co ltd
Original Assignee
Nanchang Ruidu Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanchang Ruidu Medical Technology Co ltd filed Critical Nanchang Ruidu Medical Technology Co ltd
Priority to CN202210164396.8A priority Critical patent/CN114241077B/en
Publication of CN114241077A publication Critical patent/CN114241077A/en
Application granted granted Critical
Publication of CN114241077B publication Critical patent/CN114241077B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T 3/4076 Super resolution, i.e. output image resolution higher than sensor resolution, by iteratively correcting the provisional high resolution image using the original low-resolution image
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/40 Image enhancement or restoration by the use of histogram techniques
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Abstract

The invention discloses a method and a device for optimizing the resolution of CT images. A plurality of CT matched image pairs are acquired, and a training data set is constructed from the acquired pairs; a neural network model for reconstructing a low-resolution CT image into a high-resolution CT image is constructed; the neural network model is trained with the training data set to obtain a trained neural network model; and a target CT image to be optimized is input into the trained neural network model to obtain a high-resolution target CT image. The method and the device can therefore reconstruct a low-resolution CT image into a high-resolution CT image by deep learning, that is, a high-resolution CT image can be obtained without a CT super-resolution scan, so that the resolution of the CT image is improved while the health risk to the patient during scanning is reduced. Moreover, because the training data set uses real clinical patient data, the constructed neural network model has stronger generality.

Description

CT image resolution optimization method and device
Technical Field
The invention relates to the field of medical imaging, in particular to a method and a device for optimizing the resolution of a CT image.
Background
CT (computed tomography) scanning is a medical imaging technique used in radiology to obtain detailed images of the body non-invasively for diagnosis. The principle of the CT scanner is as follows: the X-ray attenuation of different tissues in the body is measured using a rotating X-ray tube and an array of detectors placed in a gantry, and the X-ray measurements taken from multiple angles are then processed on a computer with a reconstruction algorithm to generate tomographic (cross-sectional) images of the body.
Conventional chest CT scanning cannot fully resolve the image features of lung nodules because of its large field of view and thick slices. CT super-resolution scanning, also known as magnification scanning or region-of-interest scanning, is a small-range local scan of lung lesions smaller than 3 cm, which are called "lung nodules". In short, CT super-resolution scanning is a small-field, small-range scan in a lung CT examination: the scanning range is smaller than the whole lung, so that a lesion can be located and magnified in thin slices.
CT super-resolution scanning allows the shape and characteristics of a lesion, including its edge and central structure, to be observed more comprehensively, from more angles and more intuitively; combined with computer processing and the experience of physicians, valuable diagnostic clues can be found and a more accurate conclusion reached. By improving the spatial resolution, CT super-resolution scanning can better display the features of pulmonary nodule lesions and provides more evidence for the qualitative diagnosis of nodules in the lung. However, to improve scanning accuracy and image resolution, CT super-resolution scanning usually requires a higher radiation dose, which carries a higher potential cancer risk for the patient.
Therefore, how to provide a solution to the above technical problems is a problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a method and a device for optimizing the resolution of a CT image, which can reconstruct a low-resolution CT image into a high-resolution CT image by deep learning; that is, a high-resolution CT image can be obtained without a CT super-resolution scan, thereby improving the resolution of the CT image while reducing the health risk to the patient during scanning. Moreover, because the training data set uses real clinical patient data, the constructed neural network model has stronger generality.
In order to solve the technical problem, the invention provides a method for optimizing the resolution of a CT image, which comprises the following steps:
collecting a plurality of CT matched image pairs, and constructing a training data set based on the collected CT matched image pairs; each CT matching image pair comprises a high-resolution CT image with the resolution higher than a preset resolution threshold value and a low-resolution CT image with the resolution lower than the preset resolution threshold value; the high-resolution CT image and the low-resolution CT image in the same CT matching image pair correspond to the same scanning position of the same patient;
constructing a neural network model for reconstructing the low-resolution CT image into a high-resolution CT image;
training the neural network model by using the training data set to obtain a trained neural network model;
and inputting the target CT image to be optimized into the trained neural network model to obtain the target CT image with high resolution.
Optionally, several CT matched image pairs are acquired, including:
two successive CT scans of different resolution are performed on the same patient to obtain a CT matched image pair consisting of a high resolution CT image and a low resolution CT image.
Optionally, after obtaining a CT matching image pair composed of a high resolution CT image and a low resolution CT image, before constructing a training data set based on the acquired CT matching image pair, the CT image resolution optimization method further includes:
normalizing the first CT image with high resolution and the second CT image with low resolution in the same CT matching image pair;
fitting the first CT image and the second CT image in the same coordinate system based on the spatial range of the first CT image and the spatial range of the second CT image after normalization processing, and correspondingly obtaining a first CT fitting image and a second CT fitting image;
extracting a plurality of low-resolution image matrixes with preset sizes from an original second CT image and the second CT fitting image respectively; any low-resolution image matrix extracted from the original second CT image and a low-resolution image matrix of the same image position extracted from the second CT fitting image form the same group of low-resolution image matrices;
respectively finding high-resolution image matrixes matched with two low-resolution image matrixes in each group of low-resolution image matrixes from the first CT fitting image, and forming a CT matching image matrix pair by the low-resolution image matrix with higher matching degree in each group of low-resolution image matrixes and the high-resolution image matrix matched with the low-resolution image matrix to obtain a plurality of CT matching image matrix pairs formed by the low-resolution image matrix and the high-resolution image matrix matched with the low-resolution image matrix;
and performing data amplification processing on the obtained CT matching image matrix pair to construct a training data set based on the CT matching image matrix pair subjected to data amplification.
Optionally, normalizing the first CT image with high resolution and the second CT image with low resolution in the same CT matching image pair includes:
normalizing the first CT image and the second CT image according to a preset first normalization relation X' = (X-min)/(max-min);
wherein x is image data to be normalized; x' is normalized image data; max is the maximum image data or the preset maximum image data in the CT image to be normalized; and min is minimum image data or preset minimum image data in the CT image to be normalized.
Optionally, normalizing the first CT image with high resolution and the second CT image with low resolution in the same CT matching image pair includes:
normalizing the first CT image and the second CT image according to a preset second normalization relation X' = (X-u)/v;
wherein x is image data to be normalized; x' is normalized image data; u is the image data mean value of the CT image to be normalized; v is the image data standard deviation of the CT image to be normalized.
Optionally, normalizing the first CT image with high resolution and the second CT image with low resolution in the same CT matching image pair includes:
normalizing the first CT image and the second CT image according to a preset third normalization relational expression X' = (X-air)/(bone-air); wherein x is image data to be normalized; x' is normalized image data; the bone represents a characteristic CT value of the bone, the value of the characteristic CT value is a bone numerical value corresponding to a standard Hounsfield unit calculation formula, or the maximum peak value of a statistical histogram representing the number of the same pixel values on a CT image to be normalized, or a foreground threshold value calculated by a window width window level in a CT display image with the resolution higher than a preset resolution threshold value; air represents a characteristic CT value of an image background, and the value of the characteristic CT value is a minimum CT value of all CT images, or an air numerical value corresponding to a standard Hounsfield unit calculation formula, or a minimum peak value of a statistical histogram representing the number of the same pixel values on the CT images to be normalized, or a background threshold value calculated by a window width window level in a CT display image with the resolution higher than a preset resolution threshold value.
Optionally, finding a high resolution image matrix from the first CT fit image that matches each of the low resolution image matrices comprises:
screening out a target high-resolution image matrix with the maximum peak signal-to-noise ratio and/or the highest image similarity index and/or the minimum average absolute error and/or the minimum average square error of a target low-resolution image matrix from the high-resolution images in the preset range of the coordinate system where the first CT fitting image is located, and taking the target high-resolution image matrix as a high-resolution image matrix matched with the target low-resolution image matrix;
wherein the target low-resolution image matrix is any one of the low-resolution image matrices.
Optionally, several CT matched image pairs are acquired, including:
carrying out high-resolution CT scanning on a target patient to obtain a high-resolution CT image of the target patient;
carrying out one or more times of downsampling processing on the high-resolution CT image of the target patient to obtain a low-resolution CT image of the target patient so as to obtain a CT matched image pair consisting of the high-resolution CT image and the low-resolution CT image; wherein the down-sampling process comprises a blurring process and/or a reduced image resolution process and/or an additive noise process and/or an additive ringing effect process.
Optionally, after obtaining the high resolution CT image of the target patient, before performing one or more downsampling processes on the high resolution CT image of the target patient, the CT image resolution optimization method further includes:
and carrying out normalization processing on the high-resolution CT image of the target patient so as to carry out one or more times of downsampling processing on the normalized high-resolution CT image.
In order to solve the above technical problem, the present invention further provides a device for optimizing the resolution of a CT image, comprising:
a memory for storing a computer program;
a processor for implementing the steps of any of the above-mentioned methods for optimizing resolution of CT images when executing said computer program.
The invention provides a CT image resolution optimization method: a plurality of CT matched image pairs are acquired, and a training data set is constructed from the acquired pairs; a neural network model for reconstructing a low-resolution CT image into a high-resolution CT image is constructed; the neural network model is trained with the training data set to obtain a trained neural network model; and a target CT image to be optimized is input into the trained neural network model to obtain a high-resolution target CT image. The method can therefore reconstruct a low-resolution CT image into a high-resolution CT image by deep learning, that is, a high-resolution CT image can be obtained without a CT super-resolution scan, so that the resolution of the CT image is improved while the health risk to the patient during scanning is reduced; moreover, because the training data set uses real clinical patient data, the constructed neural network model has stronger generality.
The invention also provides a CT image resolution optimization device, which has the same beneficial effects as the optimization method.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments and the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a method for optimizing a resolution of a CT image according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a device for optimizing resolution of a CT image according to an embodiment of the present invention.
Detailed Description
The core of the invention is to provide a method and a device for optimizing the resolution of a CT image, which can reconstruct a low-resolution CT image into a high-resolution CT image by deep learning; that is, a high-resolution CT image can be obtained without a CT super-resolution scan, thereby improving the resolution of the CT image while reducing the health risk to the patient during scanning. Moreover, because the training data set uses real clinical patient data, the constructed neural network model has stronger generality.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a method for optimizing a resolution of a CT image according to an embodiment of the present invention.
The CT image resolution optimization method comprises the following steps:
step S1: a plurality of CT matched image pairs are acquired, and a training data set is constructed based on the acquired CT matched image pairs.
Specifically, the method acquires a plurality of CT matched image pairs, each CT matched image pair comprises a high-resolution CT image and a low-resolution CT image, and the high-resolution CT image and the low-resolution CT image in the same CT matched image pair correspond to the same scanning position of the same patient (the acquired CT matched image pair is real clinical data of the patient). It should be noted that the resolution of the high-resolution CT image is higher than the preset resolution threshold, and the resolution of the low-resolution CT image is lower than the preset resolution threshold, that is, the resolution of the high-resolution CT image > the preset resolution threshold > the resolution of the low-resolution CT image.
A training data set is constructed from the acquired CT matched image pairs for use in subsequently training the constructed neural network model.
Step S2: a neural network model is constructed for reconstructing the low resolution CT image into a high resolution CT image.
Specifically, a neural network model for reconstructing a low-resolution CT image into a high-resolution CT image is constructed; that is, when the neural network model is applied, a low-resolution CT image is input and the corresponding high-resolution CT image is output, so that a high-quality CT image is obtained.
More specifically, the neural network model constructed by the method may be composed of a backbone generation network and a plurality of bypass generation networks. The backbone generation network may be formed by combining a plurality of convolution layers, nonlinear activation function layers, skip connection layers, up-sampling layers and down-sampling layers, and may further include a channel attention module connected to the output of a skip connection layer. The bypass generation network has the same structure as the backbone generation network: it may likewise be formed by combining a plurality of convolution layers, nonlinear activation function layers, skip connection layers, up-sampling layers and down-sampling layers, and may also include a channel attention module connected to the output of a skip connection layer. The output of the bypass generation network may be concatenated into the last layer of the backbone generation network to constrain the final image generated by the backbone generation network.
The input image of the bypass generation network is a feature domain image of the input image of the backbone generation network, such as a wavelet transform image, a gradient index image, a projection data image, and the like.
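As a rough illustration of this architecture, the following PyTorch sketch shows a minimal backbone branch with convolution layers, a nonlinear activation, a skip connection feeding a channel attention module, and an identically structured bypass branch whose output is concatenated before the final layer. The layer counts, channel widths, class names and the absence of up/down-sampling layers are simplifying assumptions, not the patented configuration; a gradient image of the low-resolution input could stand in for the feature-domain image fed to the bypass branch.

```python
# Minimal illustrative sketch of the backbone/bypass generator described above (assumed sizes).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention applied to the skip-connection output."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.fc(self.pool(x))

class GeneratorBranch(nn.Module):
    """Convolutions + nonlinear activation + skip connection + channel attention."""
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.head = nn.Conv2d(in_ch, feat, 3, padding=1)
        self.body = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1))
        self.attn = ChannelAttention(feat)

    def forward(self, x):
        h = self.head(x)
        return self.attn(h + self.body(h))   # channel attention on the skip-connection output

class SuperResolutionGenerator(nn.Module):
    """Backbone branch fed with the CT image, bypass branch fed with a feature-domain image;
    the bypass output is concatenated into the last layer of the backbone."""
    def __init__(self, feat=32):
        super().__init__()
        self.backbone = GeneratorBranch(1, feat)
        self.bypass = GeneratorBranch(1, feat)
        self.tail = nn.Conv2d(2 * feat, 1, 3, padding=1)

    def forward(self, lr_image, lr_feature_image):
        return self.tail(torch.cat([self.backbone(lr_image),
                                    self.bypass(lr_feature_image)], dim=1))
```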
Step S3: and training the neural network model by using the training data set to obtain the trained neural network model.
Specifically, the neural network model is trained by using the constructed training data set to obtain the trained neural network model, and the trained neural network model can be put into practical application and used for reconstructing a low-resolution CT image into a high-resolution CT image.
More specifically, the training principle for training the neural network model with the training data set is as follows: the training data set is fed into the neural network model, and the high-resolution CT image reconstructed by the neural network model is compared against the originally acquired high-resolution CT image; the loss of the neural network model is calculated, and the weights of the neural network model are iteratively updated by stochastic gradient descent or an Adam (Adaptive Moment Estimation) optimizer until the loss finally falls below a preset loss threshold, at which point the training of the neural network model is complete.
It should be noted that the loss of the neural network model depends on the structure of the neural network model. The loss may be composed of one or more of pixel feature loss, image feature loss, adversarial generation loss and bypass branch loss. The pixel feature loss includes L1 loss (mean absolute error) and L2 loss (mean squared error). The image feature loss includes SSIM (Structural Similarity, an index measuring the similarity of two images) loss, PSNR (Peak Signal-to-Noise Ratio) loss, and losses computed on the feature maps of a publicly available pre-trained network. The adversarial generation loss is composed of a discriminator network loss and a GAN (Generative Adversarial Network) loss; the discriminator network may be a VGG network, a basic FCN (Fully Convolutional Network) or a U-Net of varying depth; the GAN loss may be a basic GAN loss, a WGAN (Wasserstein GAN) loss or a Relativistic GAN loss. The bypass branch loss includes a bypass L1 loss and a bypass adversarial generation loss (the adversarial generation loss and the bypass adversarial generation loss arise when the backbone generation network and the bypass generation network of the neural network model both include an adversarial generation network).
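The following sketch illustrates, under assumed weights, how such a composite loss might be assembled from a pixel term (L1), an image-feature term (SSIM) and an adversarial term; the simplified SSIM, the weighting coefficients, the feature-extractor interface and the discriminator-logit interface are all placeholders rather than the values or networks used by the invention.

```python
# Illustrative composite loss (weights, SSIM window and discriminator interface are assumptions).
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified single-scale SSIM over 8x8 uniform windows (not the full Gaussian-window SSIM)."""
    mu_x, mu_y = F.avg_pool2d(x, 8), F.avg_pool2d(y, 8)
    var_x = F.avg_pool2d(x * x, 8) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 8) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 8) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
        ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return s.mean()

def composite_loss(sr, hr, disc_fake_logits=None, feat_extractor=None,
                   w_l1=1.0, w_ssim=0.1, w_feat=0.1, w_gan=0.01):
    """Weighted sum of pixel, image-feature and adversarial terms."""
    loss = w_l1 * F.l1_loss(sr, hr)                      # pixel feature loss (L1)
    loss = loss + w_ssim * (1.0 - ssim(sr, hr))          # image feature loss (SSIM)
    if feat_extractor is not None:                       # loss on a pre-trained network's feature maps
        loss = loss + w_feat * F.l1_loss(feat_extractor(sr), feat_extractor(hr))
    if disc_fake_logits is not None:                     # generator side of a basic GAN loss
        loss = loss + w_gan * F.binary_cross_entropy_with_logits(
            disc_fake_logits, torch.ones_like(disc_fake_logits))
    return loss
```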
Step S4: and inputting the target CT image to be optimized into the trained neural network model to obtain the target CT image with high resolution.
Specifically, after the neural network model is trained, the target CT image to be optimized can be input into the trained neural network model, and the neural network model can output the target CT image with high resolution.
Therefore, the method and the device can reconstruct a low-resolution CT image into a high-resolution CT image by deep learning, that is, a high-resolution CT image can be obtained without a CT super-resolution scan, so that the resolution of the CT image is improved while the health risk to the patient during scanning is reduced; moreover, because the training data set uses real clinical patient data, the constructed neural network model has stronger generality.
On the basis of the above-described embodiment:
as an alternative embodiment, several CT matching image pairs are acquired, including:
two successive CT scans of different resolution are performed on the same patient to obtain a CT matched image pair consisting of a high resolution CT image and a low resolution CT image.
Specifically, the first acquisition mode for acquiring the CT matching image pair in the present application is: the method comprises the steps of continuously carrying out two times of CT scanning with different resolutions on the same patient, specifically, continuously carrying out one time of high-resolution CT scanning and one time of low-resolution CT scanning on the same patient to obtain a CT matching image pair consisting of a high-resolution CT image and a low-resolution CT image so as to meet the requirement that the high-resolution CT image and the low-resolution CT image in the same CT matching image pair correspond to the same scanning position of the same patient. Similarly, a plurality of patients are collected, and each patient is subjected to two consecutive CT scans with different resolutions, so that a plurality of CT matching image pairs consisting of high-resolution CT images and low-resolution CT images are obtained.
As an alternative embodiment, after obtaining a CT matching image pair consisting of a high resolution CT image and a low resolution CT image, before constructing a training data set based on the acquired CT matching image pair, the CT image resolution optimization method further comprises:
normalizing the first CT image with high resolution and the second CT image with low resolution in the same CT matching image pair;
fitting the first CT image and the second CT image in the same coordinate system based on the spatial range of the first CT image and the spatial range of the second CT image after normalization processing, and correspondingly obtaining a first CT fitting image and a second CT fitting image;
extracting a plurality of low-resolution image matrixes with preset sizes from the original second CT image and the second CT fitting image respectively; any low-resolution image matrix extracted from the original second CT image and a low-resolution image matrix at the same image position extracted from the second CT fitting image form the same group of low-resolution image matrixes;
respectively finding high-resolution image matrixes matched with two low-resolution image matrixes in each group of low-resolution image matrixes from the first CT fitting image, and combining the low-resolution image matrix with higher matching degree in each group of low-resolution image matrixes and the high-resolution image matrix matched with the low-resolution image matrix into a CT matching image matrix pair to obtain a plurality of CT matching image matrix pairs consisting of the low-resolution image matrixes and the high-resolution image matrixes matched with the low-resolution image matrixes;
and performing data amplification processing on the obtained CT matching image matrix pair to construct a training data set based on the CT matching image matrix pair subjected to data amplification.
Further, two consecutive CT scans of the same patient at different resolutions suffer from different slice thicknesses at the same level and from spatial mismatch caused by patient motion. Therefore, after obtaining a CT matched image pair consisting of a high-resolution CT image and a low-resolution CT image, the present application performs the following processing on each CT matched image pair: 1) the high-resolution CT image (called the first CT image) and the low-resolution CT image (called the second CT image) of the same CT matched image pair are normalized; 2) each acquired CT image is stored in a DICOM file (the standard format for CT scan files), which also stores the spatial information of that CT image (the spatial information indicates the coordinate range of the image along the X, Y and Z axes of a coordinate system whose Z axis runs from the feet to the head, whose X axis runs along the direction in which the arms are spread, whose Y axis runs from the back to the chest, and whose origin is preset by the CT scanner); on this basis, the spatial ranges of the normalized first CT image and the normalized second CT image are determined from the spatial information stored in the DICOM files; 3) the first CT image is fitted in a coordinate system based on its normalized spatial range to obtain a first CT fitted image, and the second CT image is fitted in the same coordinate system as the first CT image of the same CT matched image pair based on its normalized spatial range (the fitting methods include linear, bilinear and bicubic fitting and Lanczos interpolation) to obtain a second CT fitted image; 4) a plurality of first low-resolution image matrices of preset size (between 4x4 and 256x256) are extracted from the original second CT image, and a plurality of second low-resolution image matrices at the same image positions as the first ones are extracted from the second CT fitted image obtained from the original second CT image; any first low-resolution image matrix and the second low-resolution image matrix at the same image position form one group of low-resolution image matrices; 5) for each group, the high-resolution image matrices matching the two low-resolution image matrices are found in the first CT fitted image, and the low-resolution image matrix of each group with the higher matching degree and its matched high-resolution image matrix form a CT matched image matrix pair, yielding a plurality of CT matched image matrix pairs each consisting of a low-resolution image matrix and its matched high-resolution image matrix.
Based on the above processing on each CT matched image pair, a plurality of CT matched image matrix pairs can be obtained, then data amplification processing is carried out on the obtained CT matched image matrix pairs (for example, a new CT matched image matrix pair can be obtained by carrying out the same rotation processing on a low-resolution image matrix and a high-resolution image matrix in the same CT matched image matrix pair), and a training data set is constructed based on the CT matched image matrix pairs after data amplification.
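A minimal NumPy/SciPy sketch of steps 3) and 4) follows, assuming both volumes are already normalized and that their DICOM spatial metadata has been reduced to per-axis voxel-spacing ratios. Bicubic spline resampling stands in for the Lanczos interpolation named above, the function and parameter names are hypothetical, and the 32x32 patch size is one choice within the 4x4-256x256 range.

```python
# Sketch of resampling the low-resolution volume onto the high-resolution grid and
# extracting same-position low-resolution patch groups (simplifying assumptions noted above).
import numpy as np
from scipy.ndimage import zoom

def fit_to_grid(lr_vol, lr_spacing_mm, hr_spacing_mm):
    """Resample the low-resolution volume onto the voxel grid implied by the
    high-resolution DICOM spacing (bicubic spline; Lanczos or bilinear could be substituted)."""
    factors = np.asarray(lr_spacing_mm, float) / np.asarray(hr_spacing_mm, float)
    return zoom(lr_vol, factors, order=3)

def extract_patch_group(lr_vol_orig, lr_vol_fit, z, y, x, size=32):
    """Return one group of low-resolution patches: one from the original second CT image
    and one at the same image position from the second CT fitted image."""
    fit_patch = lr_vol_fit[z, y:y + size, x:x + size]
    # map the fitted-grid position back onto the original low-resolution grid
    sz, sy, sx = np.asarray(lr_vol_fit.shape, float) / np.asarray(lr_vol_orig.shape, float)
    oz, oy, ox = int(z / sz), int(y / sy), int(x / sx)
    orig_patch = lr_vol_orig[oz, oy:oy + size, ox:ox + size]
    return orig_patch, fit_patch
```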
As an alternative embodiment, the normalization process is performed on the first CT image with high resolution and the second CT image with low resolution in the same CT matching image pair, and includes:
normalizing the first CT image and the second CT image according to a preset first normalization relation X' = (X-min)/(max-min);
wherein x is image data to be normalized; x' is normalized image data; max is the maximum image data or the preset maximum image data in the CT image to be normalized; and min is minimum image data or preset minimum image data in the CT image to be normalized.
Specifically, the first normalization method of the CT image to be normalized in the present application is: normalizing by the maximum image data max and the minimum image data min in the CT image to be normalized: x' = (X-min)/(max-min).
The second normalization method of the CT image to be normalized in the present application is: normalizing by a certain threshold (preset minimum image data min, preset maximum image data max): x' = (X-min)/(max-min).
As an alternative embodiment, the normalizing process performed on the first CT image with high resolution and the second CT image with low resolution in the same CT matching image pair includes:
normalizing the first CT image and the second CT image according to a preset second normalization relational expression X' = (X-u)/v;
wherein x is image data to be normalized; x' is normalized image data; u is the image data mean value of the CT image to be normalized; v is the image data standard deviation of the CT image to be normalized.
Specifically, the third normalization method for the CT image to be normalized in the present application is: normalizing by the image data mean u and the image data standard deviation v of the CT image to be normalized: x' = (X-u)/v.
As an alternative embodiment, the normalizing process performed on the first CT image with high resolution and the second CT image with low resolution in the same CT matching image pair includes:
normalizing the first CT image and the second CT image according to a preset third normalization relational expression X' = (X-air)/(bone-air); wherein x is image data to be normalized; x' is normalized image data; bone represents the characteristic CT value of the bone, and its value is the bone value corresponding to a standard Hounsfield unit calculation formula, or the maximum peak value of a statistical histogram representing the number of the same pixel values on the CT image to be normalized, or the foreground threshold value calculated by the window width and window level in a CT display image with a resolution higher than the preset resolution threshold value; air represents a characteristic CT value of the image background, and its value is the minimum CT value of all CT images, or the air value corresponding to a standard Hounsfield unit calculation formula, or the minimum peak value of a statistical histogram representing the number of the same pixel values on the CT images to be normalized, or the background threshold value calculated by the window width and window level in a CT display image with a resolution higher than the preset resolution threshold value.
Specifically, the fourth normalization method for the CT image to be normalized in the present application is: normalization with the characteristic CT value bone of bone and the characteristic CT value air of the image background: X' = (X-air)/(bone-air). The bone value can be obtained in at least the following three ways: 1. taking the bone value corresponding to the standard Hounsfield unit calculation formula; 2. generating, from the image pixel values of the CT image to be normalized, a statistical histogram counting identical pixel values (the horizontal axis lists the pixel values from small to large, the vertical axis gives the number of pixels with that value on the CT image to be normalized) and taking the maximum peak of this histogram as the bone value; 3. taking as the bone value the foreground threshold calculated from the window width and window level of a CT display image (with better visual quality) whose resolution is higher than the preset resolution threshold. The air value can be obtained in at least the following four ways: 1. taking the minimum CT value (-1024) of all CT images; 2. taking the air value corresponding to the standard Hounsfield unit calculation formula; 3. taking the minimum peak of the statistical histogram corresponding to the CT image to be normalized; 4. taking the background threshold calculated from the window width and window level of a CT display image whose resolution is higher than the preset resolution threshold.
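The four normalization variants described above can be summarized in the following sketch. The -1000/1000 HU defaults for air/bone and the -1024 global minimum follow common CT conventions and the text; the histogram helper is only a rough stand-in for the "maximum peak" / "minimum peak" selection, and all function names are illustrative.

```python
# Sketch of the normalization formulas described above.
import numpy as np

def normalize_minmax(img, vmin=None, vmax=None):
    """X' = (X - min) / (max - min); min/max from the image itself or preset thresholds."""
    vmin = float(np.min(img)) if vmin is None else vmin
    vmax = float(np.max(img)) if vmax is None else vmax
    return (img - vmin) / (vmax - vmin)

def normalize_zscore(img):
    """X' = (X - u) / v with image mean u and standard deviation v."""
    return (img - img.mean()) / img.std()

def normalize_bone_air(img, bone=1000.0, air=-1000.0):
    """X' = (X - air) / (bone - air); defaults are standard Hounsfield values. bone/air could
    instead be histogram peaks or window-width/window-level thresholds of a display image."""
    return (img - air) / (bone - air)

def histogram_peaks(img, bins=256):
    """Rough stand-in for the 'maximum peak' / 'minimum peak' of the pixel-value histogram."""
    counts, edges = np.histogram(img, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argmax(counts)], centers[np.argmin(counts)]
```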
As an alternative embodiment, finding a high resolution image matrix matching each low resolution image matrix from the first CT fit image comprises:
screening out a target high-resolution image matrix with the maximum peak signal-to-noise ratio and/or the highest image similarity index and/or the minimum average absolute error and/or the minimum average square error of the target low-resolution image matrix from the high-resolution image in the preset range of the coordinate system where the first CT fitting image is located, and taking the target high-resolution image matrix as a high-resolution image matrix matched with the target low-resolution image matrix; the target low-resolution image matrix is any low-resolution image matrix.
Specifically, the principle of finding the high-resolution image matrix matching each low-resolution image matrix in the first CT fitted image is as follows (taking a target low-resolution image matrix as an example): a target high-resolution image matrix matching the target low-resolution image matrix is screened out, according to preset screening conditions, from the high-resolution image within a preset range of the coordinate system of the first CT fitted image (for example, within 0-40 layers before and after along the Z axis, the X axis and the Y axis of the first CT fitted image). The preset screening condition is to find, within that range, the target high-resolution image matrix with the maximum peak signal-to-noise ratio (PSNR) and/or the maximum image similarity index (SSIM) and/or the minimum mean absolute error and/or the minimum mean squared error relative to the target low-resolution image matrix.
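The neighborhood search could be sketched as follows, using PSNR and SSIM from scikit-image as the screening metrics. Summing the two metrics into a single score, the brute-force scan, and the fixed patch size are assumptions (the text allows PSNR and/or SSIM and/or mean absolute error and/or mean squared error); the search radius corresponds to the 0-40 layer window mentioned above.

```python
# Illustrative neighborhood search for the matching high-resolution image matrix.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def find_matching_hr_patch(lr_patch, hr_fit_vol, center_zyx, search=20, size=32):
    """Scan high-resolution patches within +/- `search` voxels of the expected position and
    return the candidate scoring best against the target low-resolution image matrix."""
    cz, cy, cx = center_zyx
    best_patch, best_pos, best_score = None, None, -np.inf
    for z in range(max(cz - search, 0), min(cz + search + 1, hr_fit_vol.shape[0])):
        for y in range(max(cy - search, 0), min(cy + search + 1, hr_fit_vol.shape[1] - size + 1)):
            for x in range(max(cx - search, 0), min(cx + search + 1, hr_fit_vol.shape[2] - size + 1)):
                cand = hr_fit_vol[z, y:y + size, x:x + size]
                score = (peak_signal_noise_ratio(lr_patch, cand, data_range=1.0)
                         + structural_similarity(lr_patch, cand, data_range=1.0))
                if score > best_score:
                    best_patch, best_pos, best_score = cand, (z, y, x), score
    return best_patch, best_pos, best_score
```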
As an alternative embodiment, several CT matching image pairs are acquired, including:
carrying out high-resolution CT scanning on a target patient to obtain a high-resolution CT image of the target patient;
carrying out one or more times of downsampling processing on the high-resolution CT image of the target patient to obtain a low-resolution CT image of the target patient so as to obtain a CT matched image pair consisting of the high-resolution CT image and the low-resolution CT image; wherein the down-sampling process comprises a blurring process and/or a process of reducing the resolution of the image and/or a process of adding noise and/or a process of adding ringing effect.
Specifically, the second acquisition mode for acquiring the CT matching image pair in the present application is: the method comprises the steps of carrying out high-resolution CT scanning on a patient (called a target patient) to obtain a high-resolution CT image of the target patient, and then carrying out one or more times of downsampling processing on the high-resolution CT image of the target patient to obtain a relatively real low-resolution CT image of the target patient. The high-resolution CT image and the low-resolution CT image of the same patient form a CT matched image pair, and similarly, high-resolution CT scanning is carried out on a plurality of patients, and one or more times of downsampling processing is carried out on the obtained high-resolution CT image, so that a plurality of CT matched image pairs can be obtained.
More specifically, the down-sampling processing includes one or more of image blurring, image-resolution reduction, noise addition and ringing-effect addition. The image blurring includes one or more of Gaussian blur (isotropic or anisotropic), motion blur, disk blur, image shading, tilt-shift blur, path blur, scene blur, rotation blur, and realistic blur simulation based on deep learning. The image-resolution reduction includes one or more of linear, bilinear, bicubic and Lanczos interpolation. The noise addition includes adding one or more of Gaussian noise, Rician noise, salt-and-pepper noise, Poisson noise, and noise learned from real CT images by a deep learning technique.
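A minimal sketch of such a degradation pipeline is given below. It covers Gaussian blur, spline-based resolution reduction and Gaussian/Poisson noise applied to a normalized slice; all probabilities, parameter ranges and the function name are placeholders (the values of the preferred embodiment appear in the step list further below), and ringing-effect addition is omitted.

```python
# Illustrative down-sampling (degradation) pipeline for simulating a low-resolution CT image.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def degrade(hr_img, scale=2, rng=None, p_blur=0.5, p_gauss=0.5, p_poisson=0.5):
    """Simulate a low-resolution CT image from a normalized high-resolution one."""
    rng = np.random.default_rng() if rng is None else rng
    img = hr_img.astype(np.float32)
    if rng.random() < p_blur:                         # blurring (isotropic Gaussian here)
        img = gaussian_filter(img, sigma=rng.uniform(0.5, 2.0))
    img = zoom(img, 1.0 / scale, order=3)             # reduce image resolution
    if rng.random() < p_gauss:                        # additive Gaussian noise
        img = img + rng.normal(0.0, 0.01, img.shape)
    if rng.random() < p_poisson:                      # Poisson (photon-counting) noise
        img = rng.poisson(np.clip(img, 0.0, None) * 255.0) / 255.0
    return np.clip(img, 0.0, 1.0)
```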
As an alternative embodiment, after obtaining the high resolution CT image of the target patient, before performing one or more downsampling processes on the high resolution CT image of the target patient, the CT image resolution optimization method further includes:
and carrying out normalization processing on the high-resolution CT image of the target patient so as to carry out one or more times of downsampling processing on the high-resolution CT image after the normalization processing.
Furthermore, after the high-resolution CT image of the target patient is obtained, the high-resolution CT image of the target patient is normalized, and then the high-resolution CT image after the normalization of the target patient is subjected to one or more times of downsampling processing to obtain the low-resolution CT image of the target patient. The high-resolution CT image after normalization processing of the same patient and the low-resolution CT image obtained by down-sampling thereof form a CT matching image pair.
Based on the above description of the embodiments, a preferred solution is as follows:
In the first step, real acquired data and simulated low-resolution data are mixed to form the training data.
For the scheme of acquiring two consecutive CT scans with different resolutions:
1. and carrying out normalization processing on the acquired low-resolution CT image and the high-resolution CT image. Normalization is performed by analyzing a statistical histogram of image pixel values: x' = (X-air)/(bone-air).
2. And determining the spatial range of each CT image after normalization processing through the spatial information stored in the DICOM file for storing the CT images.
3. And obtaining a fitting image corresponding to each CT image by a Lanczos interpolation method, thereby obtaining an expanded database matched with space coordinates.
4. A 32x32 image matrix under a low resolution CT image is extracted.
5. And finding out a matched high-resolution image matrix near the high-resolution CT image corresponding to the low-resolution image matrix through a preset screening condition.
5.1 vicinity image means an image within 20 layers before and after the Z axis, within 20 layers before and after the X axis, and within 20 layers before and after the Y axis of the high-resolution CT image;
5.2 the screening condition was that the peak signal-to-noise ratio (PSNR) and the image similarity index (SSIM) were optimal and greater than the threshold of 0.93.
6. And (4) carrying out data amplification on the CT matching image matrix pair obtained by screening to obtain a corresponding training data set.
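Under the parameters of steps 4-5.2 (32x32 patches and a 20-layer neighborhood), the matching sketch given earlier might be driven roughly as follows; applying the 0.93 cut-off to SSIM is an assumed reading of step 5.2, and the helper name is hypothetical.

```python
# Illustrative use of find_matching_hr_patch() from the earlier sketch with the preferred parameters.
from skimage.metrics import structural_similarity

def screen_pair(lr_patch, hr_fit_vol, expected_zyx):
    hr_patch, pos, _ = find_matching_hr_patch(lr_patch, hr_fit_vol,
                                              expected_zyx, search=20, size=32)
    ssim_val = structural_similarity(lr_patch, hr_patch, data_range=1.0)
    return (lr_patch, hr_patch, pos) if ssim_val > 0.93 else None  # keep only well-matched pairs
```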
For the scheme of acquiring a high-resolution CT image and simulating the corresponding low-resolution CT image by down-sampling:
1. and normalizing the acquired high-resolution CT image.
2. And carrying out down-sampling processing on the high-resolution CT image after the normalization processing.
2.1 the down sampling process comprises a fuzzy process, an image resolution reduction process, a noise adding process and a ringing effect adding process;
2.2 the image blurring processing adopts Gaussian blurring (the probability of 0.4 is isotropic, and the probability of 0.6 is anisotropic);
2.3 the method for reducing the image resolution is that bicubic linear interpolation is used for reducing the image resolution;
2.4 additive noise processing includes adding 30% probability gaussian noise, 10% probability rice noise, 60% probability poisson noise;
2.5 downsampling method two repeated downsampling processes are performed.
3. A training data set is constructed based on the acquired CT matched image pairs.
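Read against the degradation sketch shown earlier, the configuration of steps 2.2-2.5 could look roughly like the following for a 2D slice; only the probabilities and the two repeated passes come from the text, while the blur sigmas, the noise levels and the Rician approximation are assumptions.

```python
# Rough transcription of steps 2.2-2.5 (applied to a normalized 2D slice; parameters are assumptions).
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def degrade_preferred(hr_img, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    img = hr_img.astype(np.float32)
    for _ in range(2):                                      # 2.5: two repeated down-sampling passes
        if rng.random() < 0.4:                              # 2.2: isotropic Gaussian blur, p = 0.4
            img = gaussian_filter(img, sigma=rng.uniform(0.5, 2.0))
        else:                                               #      anisotropic Gaussian blur, p = 0.6
            img = gaussian_filter(img, sigma=(rng.uniform(0.5, 2.0), rng.uniform(0.5, 2.0)))
        img = zoom(img, 0.5, order=3)                       # 2.3: bicubic-style resolution reduction
        r = rng.random()
        if r < 0.3:                                         # 2.4: Gaussian noise, p = 0.3
            img = img + rng.normal(0.0, 0.01, img.shape)
        elif r < 0.4:                                       #      Rician-like noise, p = 0.1
            img = np.sqrt((img + rng.normal(0.0, 0.01, img.shape)) ** 2
                          + rng.normal(0.0, 0.01, img.shape) ** 2)
        else:                                               #      Poisson noise, p = 0.6
            img = rng.poisson(np.clip(img, 0.0, None) * 255.0) / 255.0
    return np.clip(img, 0.0, 1.0)
```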
In the second step, a neural network model consisting of 18 RRDB (Residual-in-Residual Dense Block, i.e. densely connected convolution modules nested inside residual connections) layers is constructed.
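An RRDB block of the kind referred to here (dense convolutions nested inside residual connections, as popularized by ESRGAN) might be sketched as follows; the channel width, growth rate and the 0.2 residual scaling are assumptions, and only the count of 18 blocks comes from the text.

```python
# Sketch of an RRDB block and an 18-block trunk (channel sizes assumed).
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Five densely connected 3x3 convolutions with a local residual connection."""
    def __init__(self, ch=64, growth=32):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(ch + i * growth, growth if i < 4 else ch, 3, padding=1) for i in range(5))
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        feats = [x]
        for i, conv in enumerate(self.convs):
            out = conv(torch.cat(feats, dim=1))
            feats.append(self.act(out) if i < 4 else out)
        return x + 0.2 * feats[-1]

class RRDB(nn.Module):
    """Residual-in-Residual Dense Block: dense blocks nested inside an outer residual connection."""
    def __init__(self, ch=64):
        super().__init__()
        self.blocks = nn.Sequential(DenseBlock(ch), DenseBlock(ch), DenseBlock(ch))

    def forward(self, x):
        return x + 0.2 * self.blocks(x)

# 18 RRDB layers as described in this step (channel count assumed):
trunk = nn.Sequential(*[RRDB(64) for _ in range(18)])
```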
In the third step, the loss of the neural network model is constructed as the combination of L1 loss, SSIM loss, VGGNet-19 feature-map loss and basic GAN loss.
In the fourth step, the neural network model is trained with the training data set.
The low-resolution image matrices of the training data set are taken as network input and the corresponding high-resolution image matrices as reference; the network output is compared against the reference, the loss of the neural network model is calculated, and the weights of the neural network model are iteratively updated by an Adam optimizer with a learning rate of 0.001. Training runs for 200 epochs (sufficient to reduce the loss of the neural network model below the preset loss threshold), yielding the trained neural network model.
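The training step described here might look roughly like the following loop, assuming a dataset yielding (low-resolution, high-resolution) patch tensors and a loss function such as the composite loss sketched earlier. Only the Adam learning rate of 0.001 and the 200 epochs come from the text; the batch size, device handling and the single-input generator interface are assumptions.

```python
# Rough training-loop sketch for the step above.
import torch
from torch.utils.data import DataLoader

def train(model, dataset, loss_fn, epochs=200, lr=1e-3, device="cuda"):
    """Iteratively update the network weights with Adam until the loss is low enough."""
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loader = DataLoader(dataset, batch_size=16, shuffle=True)
    for epoch in range(epochs):
        running = 0.0
        for lr_patch, hr_patch in loader:
            lr_patch, hr_patch = lr_patch.to(device), hr_patch.to(device)
            sr = model(lr_patch)              # reconstructed high-resolution patch
            loss = loss_fn(sr, hr_patch)      # e.g. the composite loss sketched earlier
            opt.zero_grad()
            loss.backward()
            opt.step()
            running += loss.item()
        print(f"epoch {epoch + 1}: mean loss {running / max(len(loader), 1):.4f}")
    return model
```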
In the fifth step, the target CT image to be optimized is input into the trained neural network model to obtain the high-resolution target CT image.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a device for optimizing resolution of a CT image according to an embodiment of the present invention.
The present application further provides a device for optimizing the resolution of a CT image, comprising:
a memory 1 for storing a computer program;
a processor 2, for implementing the steps of any of the above-mentioned methods for optimizing the resolution of CT images when executing a computer program.
For introduction of the optimization apparatus provided in the present application, reference is made to the embodiments of the optimization method, which are not repeated herein.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A method for optimizing the resolution of a CT image is characterized by comprising the following steps:
collecting a plurality of CT matched image pairs, and constructing a training data set based on the collected CT matched image pairs; each CT matching image pair comprises a high-resolution CT image with the resolution higher than a preset resolution threshold value and a low-resolution CT image with the resolution lower than the preset resolution threshold value; the high-resolution CT image and the low-resolution CT image in the same CT matching image pair correspond to the same scanning position of the same patient;
constructing a neural network model for reconstructing the low-resolution CT image into a high-resolution CT image;
training the neural network model by using the training data set to obtain a trained neural network model;
inputting a target CT image to be optimized into a trained neural network model to obtain a high-resolution target CT image;
acquiring a plurality of CT matched image pairs, comprising:
carrying out two times of CT scanning with different resolutions on the same patient continuously to obtain a CT matched image pair consisting of a high-resolution CT image and a low-resolution CT image;
after obtaining a CT matching image pair consisting of a high resolution CT image and a low resolution CT image, before constructing a training data set based on the acquired CT matching image pair, the CT image resolution optimization method further comprises:
normalizing the first CT image with high resolution and the second CT image with low resolution in the same CT matching image pair;
fitting the first CT image in a coordinate system based on the spatial range of the first CT image after normalization processing to obtain a first CT fitted image, and fitting the second CT image in the same coordinate system as the first CT image based on the spatial range of the second CT image after normalization processing to obtain a second CT fitted image, wherein the fitting method comprises a Lanczos interpolation method, bilinear fitting and bicubic linear fitting;
extracting a plurality of low-resolution image matrixes with preset sizes from an original second CT image and the second CT fitting image respectively; any low-resolution image matrix extracted from the original second CT image and a low-resolution image matrix at the same image position extracted from the second CT fitting image form the same group of low-resolution image matrices;
respectively finding high-resolution image matrixes matched with two low-resolution image matrixes in each group of low-resolution image matrixes from the first CT fitting image, and forming a CT matching image matrix pair by the low-resolution image matrix with higher matching degree in each group of low-resolution image matrixes and the high-resolution image matrix matched with the low-resolution image matrix to obtain a plurality of CT matching image matrix pairs formed by the low-resolution image matrix and the high-resolution image matrix matched with the low-resolution image matrix;
performing data amplification processing on the obtained CT matching image matrix pair to construct a training data set based on the CT matching image matrix pair subjected to data amplification;
the method for normalizing the first CT image with high resolution and the second CT image with low resolution in the same CT matching image pair comprises the following steps:
normalizing the first CT image and the second CT image according to a preset third normalization relational expression X' = (X-air)/(bone-air);
wherein x is image data to be normalized; x' is normalized image data; the bone represents a characteristic CT value of the bone, the value of the characteristic CT value is a bone numerical value corresponding to a standard Hounsfield unit calculation formula, or the maximum peak value of a statistical histogram representing the number of the same pixel values on a CT image to be normalized, or a foreground threshold value calculated by a window width window level in a CT display image with the resolution higher than a preset resolution threshold value; air represents the characteristic CT value of the image background, and the value of the characteristic CT value is the minimum CT value of all CT images, or the air value corresponding to a standard Hounsfield unit calculation formula, or the minimum peak value of a statistical histogram representing the number of the same pixel values on the CT images to be normalized, or the background threshold value calculated by the window width window level in a CT display image with the resolution higher than the preset resolution threshold value.
2. The method for optimizing the resolution of a CT image according to claim 1, wherein the normalizing the first CT image with high resolution and the second CT image with low resolution in the same CT matching image pair comprises:
normalizing the first CT image and the second CT image according to a preset first normalization relational expression X' = (X-min)/(max-min);
wherein x is image data to be normalized; x' is normalized image data; max is the maximum image data or the preset maximum image data in the CT image to be normalized; and min is minimum image data or preset minimum image data in the CT image to be normalized.
3. The method for optimizing the resolution of a CT image according to claim 1, wherein the normalizing the first CT image with high resolution and the second CT image with low resolution in the same CT matching image pair comprises:
normalizing the first CT image and the second CT image according to a preset second normalization relational expression X' = (X-u)/v;
wherein x is image data to be normalized; x' is normalized image data; u is the image data mean value of the CT image to be normalized; v is the image data standard deviation of the CT image to be normalized.
4. A method for optimizing the resolution of CT images according to any of claims 1 to 3, wherein finding a high resolution image matrix from the first CT fit image that matches each of the low resolution image matrices comprises:
screening out a target high-resolution image matrix with the maximum peak signal-to-noise ratio and/or the highest image similarity index and/or the minimum average absolute error and/or the minimum average square error of a target low-resolution image matrix from high-resolution images in a preset range of a coordinate system where the first CT fitting image is located, and taking the target high-resolution image matrix as a high-resolution image matrix matched with the target low-resolution image matrix;
wherein the target low-resolution image matrix is any one of the low-resolution image matrices.
5. The method for optimizing resolution of a CT image of claim 1 wherein acquiring a plurality of CT matched image pairs comprises:
carrying out high-resolution CT scanning on a target patient to obtain a high-resolution CT image of the target patient;
carrying out one or more times of downsampling processing on the high-resolution CT image of the target patient to obtain a low-resolution CT image of the target patient so as to obtain a CT matching image pair consisting of the high-resolution CT image and the low-resolution CT image; wherein the down-sampling process comprises a blurring process and/or a reduced image resolution process and/or an additive noise process and/or an additive ringing effect process.
6. The CT image resolution optimization method of claim 5, wherein after obtaining the high resolution CT image of the target patient, the CT image resolution optimization method further comprises, before performing one or more downsampling processes on the high resolution CT image of the target patient:
and carrying out normalization processing on the high-resolution CT image of the target patient so as to carry out one or more times of downsampling processing on the normalized high-resolution CT image.
7. A CT image resolution optimization apparatus, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the method for optimizing the resolution of a CT image according to any one of claims 1 to 6 when executing said computer program.
CN202210164396.8A 2022-02-23 2022-02-23 CT image resolution optimization method and device Active CN114241077B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210164396.8A CN114241077B (en) 2022-02-23 2022-02-23 CT image resolution optimization method and device

Publications (2)

Publication Number Publication Date
CN114241077A CN114241077A (en) 2022-03-25
CN114241077B true CN114241077B (en) 2022-07-15

Family

ID=80747742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210164396.8A Active CN114241077B (en) 2022-02-23 2022-02-23 CT image resolution optimization method and device

Country Status (1)

Country Link
CN (1) CN114241077B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419183B (en) * 2022-03-31 2022-07-01 南昌睿度医疗科技有限公司 Optimization method, system, equipment and storage medium of MRA acceleration image
CN115936983A (en) * 2022-11-01 2023-04-07 青岛哈尔滨工程大学创新发展中心 Method and device for super-resolution of nuclear magnetic image based on style migration and computer storage medium
CN115578263B (en) * 2022-11-16 2023-03-10 之江实验室 CT super-resolution reconstruction method, system and device based on generation network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105427308A (en) * 2015-11-20 2016-03-23 中国地质大学(武汉) Sparse and dense characteristic matching combined image registration method
CN113298854A (en) * 2021-05-27 2021-08-24 广州柏视医疗科技有限公司 Image registration method based on mark points

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102027507B (en) * 2008-05-15 2015-02-04 皇家飞利浦电子股份有限公司 Using non-attenuation corrected PET emission images to compensate for incomplete anatomic images
CN107154023B (en) * 2017-05-17 2019-11-05 电子科技大学 Based on the face super-resolution reconstruction method for generating confrontation network and sub-pix convolution
CN108682020B (en) * 2018-04-28 2019-04-12 中国石油大学(华东) Rock core micron CT pore structure reconstructing method
CN111899177A (en) * 2020-08-05 2020-11-06 苏州深透智能科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN112435309A (en) * 2020-12-07 2021-03-02 苏州深透智能科技有限公司 Method for enhancing quality and resolution of CT image based on deep learning
CN113359077A (en) * 2021-06-08 2021-09-07 苏州深透智能科技有限公司 Magnetic resonance imaging method and related equipment
CN113506333A (en) * 2021-09-09 2021-10-15 之江实验室 Medical image registration network training data set expansion method based on deformable atlas

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
James Winslow et al., "A method for characterizing and matching CT image quality across CT scanners from different manufacturers", Medical Physics, vol. 44, no. 11, pp. 5705-5717, November 2017 *
马露凡 et al., "Advances in deep medical image registration: towards unsupervised learning" (深度医学图像配准研究进展：迈向无监督学习), Journal of Image and Graphics (中国图象图形学报), vol. 26, no. 9, pp. 2037-2057, September 2021 *

Also Published As

Publication number Publication date
CN114241077A (en) 2022-03-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant