CN111915516B - Motion compensation method, motion compensation device, CT equipment and CT system - Google Patents

Motion compensation method, motion compensation device, CT equipment and CT system

Info

Publication number
CN111915516B
CN111915516B (application CN202010705846.0A)
Authority
CN
China
Prior art keywords
motion vector
vector field
image
motion
target
Prior art date
Legal status
Active
Application number
CN202010705846.0A
Other languages
Chinese (zh)
Other versions
CN111915516A
Inventor
郭志飞 (Guo Zhifei)
Current Assignee
Neusoft Medical Systems Co Ltd
Original Assignee
Neusoft Medical Systems Co Ltd
Priority date
Filing date
Publication date
Application filed by Neusoft Medical Systems Co Ltd
Priority to CN202010705846.0A
Publication of CN111915516A
Application granted
Publication of CN111915516B


Classifications

    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30048Heart; Cardiac

Abstract

The embodiment of the invention provides a motion compensation method, a motion compensation device, CT equipment and a CT system. According to the embodiment of the invention, M initial motion vector fields are determined within a preset motion range according to a preset motion vector field model. For a target reconstruction position, motion compensation is performed on cardiac CT scan data to be processed according to each initial motion vector field, and image reconstruction is then performed to obtain M preliminary reconstructed images. The M preliminary reconstructed images are classified using a trained classification model, at least one image whose motion artifact is smaller than a preset threshold is selected according to the classification result, a target motion vector field is determined from the at least one image, and motion compensation is performed on the cardiac CT scan data according to the target motion vector field to obtain target data. Because images whose motion artifact is smaller than the preset threshold are screened out by the classification model for motion artifact estimation, the effectiveness of the motion vector field is improved, and the accuracy of motion compensation is improved.

Description

Motion compensation method, motion compensation device, CT equipment and CT system
Technical Field
The present invention relates to the field of medical image processing technologies, and in particular, to a motion compensation method, a motion compensation device, a CT apparatus, and a CT system.
Background
CT (Computed Tomography) coronary angiography is a safe, noninvasive imaging technology in wide clinical use that can accurately diagnose cardiovascular diseases, and is a popular research direction in cardiac imaging. The difficulty of cardiac imaging is that the heart is always in motion during the CT scan, so the acquired images often contain motion artifacts, which affect image quality.
To address motion artifacts, the related art reconstructs images using a cardiac coronary motion compensation technique, which performs image reconstruction after motion compensation according to a coronary motion vector field (motion field for short).
Disclosure of Invention
In order to overcome the problems in the related art, the invention provides a motion compensation method, a motion compensation device, CT equipment and a CT system, which improve the accuracy of motion compensation.
According to a first aspect of an embodiment of the present invention, there is provided a motion compensation method, including:
determining M initial motion vector fields in a preset motion range according to a preset motion vector field model; m is a natural number;
for a target reconstruction position, respectively performing motion compensation on cardiac CT scan data to be processed according to each initial motion vector field in the M initial motion vector fields and then performing image reconstruction, to obtain M preliminary reconstructed images;
classifying the M preliminary reconstructed images by using a trained classification model, and selecting at least one image with motion artifact smaller than a preset threshold from the M preliminary reconstructed images according to a classification result; the classification model is a deep learning network model;
determining a target motion vector field from the at least one image;
and performing motion compensation on the cardiac CT scanning data according to the target motion vector field to obtain target data.
According to a second aspect of an embodiment of the present invention, there is provided a motion compensation apparatus including:
the initial field determining module is used for determining M initial motion vector fields in a preset motion range according to a preset motion vector field model; m is a natural number;
the preliminary reconstruction module is used for carrying out image reconstruction on heart CT scanning data to be processed after carrying out motion compensation according to each initial motion vector field in the M initial motion vector fields respectively aiming at a target reconstruction position to obtain M preliminary reconstruction images;
the classifying and selecting module is used for classifying the M preliminary reconstructed images by using the trained classifying model, and selecting at least one image with motion artifact smaller than a preset threshold from the M preliminary reconstructed images according to the classifying result; the classification model is a deep learning network model;
A target field determining module for determining a target motion vector field based on the at least one image;
and the compensation module is used for carrying out motion compensation on the cardiac CT scanning data according to the target motion vector field to obtain target data.
According to a third aspect of embodiments of the present invention, there is provided a CT apparatus comprising: an internal bus, and a memory, a processor and an external interface connected through the internal bus; the external interface is used for being connected with a detector of the CT system, and the detector comprises a plurality of detector chambers and corresponding processing circuits;
the memory is used for storing machine-readable instructions corresponding to control logic of image reconstruction;
the processor is configured to read the machine-readable instructions on the memory and perform operations comprising:
determining M initial motion vector fields in a preset motion range according to a preset motion vector field model; m is a natural number;
for a target reconstruction position, respectively performing motion compensation on cardiac CT scan data to be processed according to each initial motion vector field in the M initial motion vector fields and then performing image reconstruction, to obtain M preliminary reconstructed images;
classifying the M preliminary reconstructed images by using a trained classification model, and selecting at least one image with motion artifact smaller than a preset threshold from the M preliminary reconstructed images according to a classification result; the classification model is a deep learning network model;
determining a target motion vector field from the at least one image;
and performing motion compensation on the cardiac CT scanning data according to the target motion vector field to obtain target data.
According to a fourth aspect of embodiments of the present invention, there is provided a CT system comprising a detector, a scan bed and a CT apparatus, the detector comprising a plurality of detector cells and corresponding processing circuitry; wherein:
the detector chamber is used for detecting X-rays passing through a scanning object and converting the X-rays into electric signals in the scanning process of the CT system;
the processing circuit is used for converting the electric signal into a pulse signal and collecting energy information of the pulse signal;
the CT device is used for:
determining M initial motion vector fields in a preset motion range according to a preset motion vector field model; m is a natural number;
for a target reconstruction position, respectively performing motion compensation on cardiac CT scan data to be processed according to each initial motion vector field in the M initial motion vector fields and then performing image reconstruction, to obtain M preliminary reconstructed images;
classifying the M preliminary reconstructed images by using a trained classification model, and selecting at least one image with motion artifact smaller than a preset threshold from the M preliminary reconstructed images according to a classification result; the classification model is a deep learning network model;
determining a target motion vector field from the at least one image;
and performing motion compensation on the cardiac CT scanning data according to the target motion vector field to obtain target data.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
according to the embodiment of the invention, M initial motion vector fields are determined within a preset motion range according to a preset motion vector field model. For a target reconstruction position, motion compensation is performed on the cardiac CT scan data to be processed according to each of the M initial motion vector fields, and image reconstruction is then performed to obtain M preliminary reconstructed images. The M preliminary reconstructed images are classified using the trained classification model, and at least one image whose motion artifact is smaller than a preset threshold is selected from the M preliminary reconstructed images according to the classification result. A target motion vector field is determined from the at least one image, and motion compensation is performed on the cardiac CT scan data according to the target motion vector field to obtain target data. Since images whose motion artifact is smaller than the preset threshold are screened out by the classification model for motion artifact estimation, the effectiveness of the motion vector field is improved, and the accuracy of motion compensation is further improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the specification and together with the description, serve to explain the principles of the specification.
Fig. 1 is a flowchart illustrating a motion compensation method according to an embodiment of the present invention.
Fig. 2 is an exemplary diagram of different categories of images.
Fig. 3 is a functional block diagram of a motion compensation apparatus according to an embodiment of the invention.
Fig. 4 is a hardware configuration diagram of a CT apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of embodiments of the invention as detailed in the accompanying claims.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments of the invention only and is not intended to be limiting of embodiments of the invention. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present invention to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of embodiments of the present invention. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
Phase compensation requires only half a rotation of view data to perform motion compensation, so in terms of computation amount and computational complexity it is well suited to practical application.
In the related art, motion compensation is performed according to an estimated motion artifact; because the accuracy of quantitative motion artifact estimation in that technique is low, the motion compensation effect is affected.
The motion compensation method provided by the embodiment of the invention can be applied to the process of heart imaging, in particular coronary artery blood vessel imaging.
The motion compensation method is described in detail by way of examples.
Fig. 1 is a flowchart illustrating a motion compensation method according to an embodiment of the present invention. As shown in fig. 1, in this embodiment, the motion compensation method may include:
s101, determining M initial motion vector fields in a preset motion range according to a preset motion vector field model; m is a natural number.
S102, for a target reconstruction position, performing motion compensation on the cardiac CT scan data to be processed according to each initial motion vector field in the M initial motion vector fields respectively, and then performing image reconstruction, to obtain M preliminary reconstructed images.
S103, classifying the M preliminary reconstructed images by using the trained classification model, and selecting at least one image with motion artifact smaller than a preset threshold from the M preliminary reconstructed images according to a classification result; the classification model is a deep learning network model.
S104, determining a target motion vector field according to the at least one image.
And S105, performing motion compensation on the cardiac CT scanning data according to the target motion vector field to obtain target data.
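Read together, steps S101 to S105 form a single pipeline. The Python sketch below is only an illustration of that flow; the callables it accepts (reconstruct, is_small_artifact, determine_target_field, compensate) and the function name itself are hypothetical placeholders for the operations of S102–S105, not part of the disclosure.

```python
from typing import Callable, Sequence
import numpy as np

def motion_compensation_pipeline(
    scan_data: np.ndarray,
    initial_fields: Sequence,          # S101: M candidate motion vector fields
    reconstruct: Callable,             # S102: compensate with one field, then reconstruct
    is_small_artifact: Callable,       # S103: trained classification model (True if artifact < threshold)
    determine_target_field: Callable,  # S104: entropy-based estimation / optimization
    compensate: Callable,              # S105: final motion compensation
):
    # S102: one preliminary reconstructed image per initial motion vector field
    preliminary_images = [reconstruct(scan_data, f) for f in initial_fields]
    # S103: keep only images the classifier judges to have motion artifact below the preset threshold
    selected = [(img, f) for img, f in zip(preliminary_images, initial_fields)
                if is_small_artifact(img)]
    # S104: determine the target motion vector field from the selected images
    target_field = determine_target_field(selected)
    # S105: motion-compensate the cardiac CT scan data with the target field
    return compensate(scan_data, target_field)
```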
In this embodiment, the cardiac CT scan data to be processed is scan data corresponding to the target reconstruction location.
In this embodiment, the motion vector field model may be a polynomial. For example, the motion vector field model shown in the following formula (1) can be used:

s = a_1(t − t_0) + a_2(t − t_0)^2    (1)

In formula (1), s is the motion vector field and the a_i are model parameters, with a_i = (a_xi, a_yi, a_zi) corresponding to the three directions (x, y, z). t_0 is the time point of the target phase and t is the time point of another phase.
The motion vector field in formula (1) is a quadratic polynomial.
In this application, the preset motion range can be set based on empirical statistics of the maximum amplitude of vascular motion at different heart rates. The preset motion range may be used to initialize the motion vector field.
At initialization, the vascular motion in the x, y and z directions is assumed to be linear, i.e. a_2 in formula (1) is set to 0. The maximum offset of the vessel in the x, y and z directions is calculated and denoted S_m = {S_xm, S_ym, S_zm}; the motion range of s is then [−S_m, S_m].
When a_2 = 0, formula (1) becomes s = a_1(t − t_0). Given t, t_0 and s = S_m, the value of a_1 can be calculated, and the motion vector field of formula (1) determined by this value of a_1 is an initial motion vector field (in which a_2 = 0).
Within the range [−S_m, S_m], suppose k samples (k is a natural number) are taken in each of the x, y and z directions (i.e., k samples each for S_x, S_y and S_z). This yields k^3 parameter combinations; letting M = k^3, a motion vector field is calculated for each of the M parameter combinations as an initial motion vector field. Each parameter combination corresponds to one initial motion vector field.
For example, assume that the current sample is S_x0, the view phase range for reconstructing the target point is 60–80, and the target phase is 75, so that the maximum distance from any view to the target phase is 15. Substituting into s = a_1(t − t_0) gives a_1 = S_x0/15.
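As a hedged illustration of this initialization, the sketch below enumerates the M = k^3 parameter combinations under the linear-motion assumption (a_2 = 0); the numerical values of S_m, k and the maximum view-to-target-phase distance are made-up examples, not values taken from the embodiment.

```python
import itertools
import numpy as np

# Illustrative values (assumptions, not from the embodiment)
S_m = np.array([4.0, 4.0, 2.0])   # maximum vessel offset in x, y, z (e.g. in mm)
k = 5                             # samples per axis -> M = k**3 parameter combinations
max_phase_dist = 15.0             # maximum |t - t0| from any view to the target phase

# k sample offsets per axis within [-S_m, S_m]
samples = [np.linspace(-S_m[d], S_m[d], k) for d in range(3)]

# Each (S_x, S_y, S_z) combination gives one initial (linear) field: s = a1 * (t - t0)
initial_fields = []
for s_x, s_y, s_z in itertools.product(*samples):
    a1 = np.array([s_x, s_y, s_z]) / max_phase_dist   # a1 chosen so that s = S at the maximum distance
    a2 = np.zeros(3)                                  # linear-motion assumption (a2 = 0)
    initial_fields.append((a1, a2))

M = len(initial_fields)   # M = k**3
```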
The process of performing image reconstruction after performing motion compensation on cardiac CT scan data to be processed according to the initial motion vector field may be as follows:
For any reconstruction position (x_0, y_0, z_0), let an initial motion vector field be s(x, y, z, t); its value at the target reconstruction position (x_0, y_0, z_0) is s(x_0, y_0, z_0, t). From the initial motion vector field, the compensated position (x_k, y_k, z_k) of (x_0, y_0, z_0) is obtained as follows:

x_k = x_0 + s(x_0, y_0, z_0, t)
y_k = y_0 + s(x_0, y_0, z_0, t)
z_k = z_0 + s(x_0, y_0, z_0, t)
The coronary image I_k(x, y) at the position (x_k, y_k, z_k) is reconstructed by the FDK (Feldkamp) algorithm, as shown in formula (2):
In formula (2), the symbols are as follows:
nViewHalfPerRot: the number of circumferential samples in half a rotation of the gantry;
ChannelPos: the channel position at which the reconstruction point (i.e., the reconstruction position) is projected onto the detector;
SlicePos: the layer position at which the reconstruction point is projected onto the detector;
nBegView: the starting sampling view index of the reconstruction point;
nEndView: the ending sampling view index of the reconstruction point;
P: the projection data value in the view.
In formula (2), the channel position at which the reconstruction point (x_0, y_0, z_0) is projected onto the detector can be obtained by the following formula (3):

ChannelPos_i = x_k · cosθ_i − y_k · sinθ_i    (3)

In formula (3), θ_i represents the ray sampling angle of the i-th view.
In formula (2), the layer position at which the reconstruction point (x_0, y_0, z_0) is projected onto the detector can be obtained by formula (4). In formula (4), R represents the rotation radius of the gantry, i.e., the distance from the focal spot of the X-ray tube to the rotation center; ΔZ represents the distance in Z from the reconstruction plane to the source position at the reconstruction point.
For each initial motion vector field, a corresponding preliminary reconstructed image may be obtained according to the method described above.
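The sketch below illustrates, for one view, the position compensation above and the channel-position computation of formula (3); the FDK weighting and summation of formula (2) and the layer-position computation of formula (4) are not reproduced. Applying the motion field through a single vector-valued s is an assumption about notation, not a statement of the patent's exact implementation.

```python
import numpy as np

def motion_field(a1, a2, t, t0):
    """Quadratic motion vector field of formula (1): s = a1*(t - t0) + a2*(t - t0)**2."""
    dt = t - t0
    return a1 * dt + a2 * dt ** 2               # 3-vector (s_x, s_y, s_z)

def compensated_position(p0, a1, a2, t, t0):
    """Shift the reconstruction point (x0, y0, z0) by the motion field evaluated at view time t."""
    return np.asarray(p0, dtype=float) + motion_field(a1, a2, t, t0)

def channel_pos(p_k, theta_i):
    """Formula (3): channel position of the compensated point for ray sampling angle theta_i."""
    x_k, y_k, _ = p_k
    return x_k * np.cos(theta_i) - y_k * np.sin(theta_i)

# Example usage (illustrative names):
#   p_k = compensated_position((x0, y0, z0), a1, a2, t, t0)
#   c_i = channel_pos(p_k, theta_i)
```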
In this embodiment, the classification model is a pre-trained AI (Artificial Intelligence) model whose function is to classify the input image. The image categories in the classification model of this embodiment are of two types: one type is images with motion artifact smaller than a preset threshold (including images without motion artifact), i.e., small-motion-artifact images and motion-artifact-free images; the other type is images with motion artifact greater than or equal to the preset threshold, including large-motion-artifact images and background images. Fig. 2 is an exemplary diagram of the different categories of images. Referring to fig. 2, the first column of images are blood vessel images, which belong to the category of images with motion artifact smaller than the preset threshold; the second column of images are large-artifact images and the third column of images are background images, both of which belong to the category of images with motion artifact greater than or equal to the preset threshold.
In one exemplary implementation, the training method of the classification model may include:
setting a deep learning network model and initial parameter values;
obtaining a plurality of sets of training data, wherein each set of training data comprises a label image and an image category corresponding to the label image; the image category comprises images with motion artifacts smaller than a preset threshold value and images with motion artifacts larger than or equal to the preset threshold value;
and training the deep learning network model by using the training data to obtain a trained deep learning network model, and taking the trained deep learning network model as a classification model.
In this embodiment, the deep learning network model may be a model obtained by adjusting the network structure on the basis of AlexNet.
The training data are divided into two types, wherein the label image in one type of training data is an image with motion artifact smaller than a preset threshold value, and the label image in the other type of training data is an image with motion artifact larger than or equal to the preset threshold value. Quantitatively, the ratio of the two types of training data may be 1:2.
During training, stochastic gradient descent can be used. Of course, this embodiment does not limit the training method; in practical applications, other training methods may be selected as appropriate.
After training, the classification model can also be verified by using verification data. The verification data also includes two types of images, namely, one type of image is an image with motion artifact smaller than a preset threshold value, and the other type of image is an image with motion artifact larger than or equal to the preset threshold value.
Through verification, the classification accuracy of the classification model can be determined, so as to judge whether the classification model meets the application requirements. If not, the classification model needs to be retrained; if it does, the model can be stored and used as the trained model.
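As a rough, non-authoritative illustration of this training procedure (two-class artifact classification trained with stochastic gradient descent), the sketch below uses PyTorch with a small stand-in CNN; the architecture, hyperparameters and the train_loader are assumptions rather than the AlexNet-derived model actually used in the embodiment.

```python
import torch
import torch.nn as nn

# Small stand-in CNN; the patent's AlexNet-derived architecture is not reproduced here.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),   # class 0: artifact < threshold; class 1: artifact >= threshold or background
)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # stochastic gradient descent
criterion = nn.CrossEntropyLoss()

def train_epoch(train_loader):
    """One pass over labelled (image, category) pairs; train_loader is assumed to be provided,
    e.g. with small-artifact and large-artifact/background samples in roughly a 1:2 ratio."""
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```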
At present, some motion artifact estimation methods cannot accurately distinguish the magnitude of the motion artifact in every blood vessel image; when the motion artifact in an image is large, estimation errors occur or the estimation accuracy is low, so that the accuracy of the motion vector field used for motion compensation is low, which in turn affects the motion compensation effect.
In this method, the images are classified using the classification model, so that large-motion-artifact images and background images, for which estimation accuracy is low, are excluded from the calculation of the motion vector field; only small-motion-artifact images and motion-artifact-free images are used to estimate the motion artifact and determine the motion vector field, thereby improving the accuracy of the motion vector field.
In an exemplary implementation, determining the target motion vector field according to the at least one image in step S104 may include:
determining motion artifact estimation values of all images in the at least one image based on a preset motion artifact estimation algorithm;
determining an initial parameter value of a motion vector field based on a minimum value of motion artifact estimates of the at least one image;
and optimizing the motion vector field according to a preset optimization algorithm and the initial parameter value to obtain a target motion vector field.
Wherein determining motion artifact estimates for each of the at least one image based on a preset motion artifact estimation algorithm may include:
and acquiring entropy values of each image in the at least one image, and taking the entropy values as motion artifact estimation values of the images.
In the present embodiment, the entropy value Ent of the image can be calculated by the following formula (5):

Ent = − ∑_g P(g | f(s)) · log P(g | f(s))    (5)

In formula (5), g represents a gray level in the image histogram, and f(s) represents the image after motion compensation with the motion vector field s. The function P(·) is a probability function representing the probability of each gray level occurring in the image.
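A minimal sketch of the entropy computation of formula (5), assuming the gray-level probabilities P(g) are taken from a normalized histogram of the compensated image; the number of histogram bins is an assumed detail.

```python
import numpy as np

def image_entropy(image: np.ndarray, bins: int = 256) -> float:
    """Entropy of the gray-level histogram of a motion-compensated image (formula (5))."""
    hist, _ = np.histogram(image, bins=bins)
    p = hist.astype(np.float64) / hist.sum()   # probability of each gray level
    p = p[p > 0]                               # drop empty bins (0 * log 0 -> 0)
    return float(-np.sum(p * np.log(p)))       # smaller entropy ~ fewer motion artifacts
```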
Wherein, optimizing the motion vector field according to a preset optimization algorithm and the initial parameter value to obtain a target motion vector field may include:
Making the parameter value of the motion vector field equal to the initial parameter value to obtain a basic motion vector field;
searching a motion vector field which minimizes the entropy value of the image in a preset range as a target motion vector field.
In one example, the basic motion vector field may be optimized using the Powell optimization method; that is, the motion vector field found by the Powell method that minimizes the entropy value of the image is used as the target motion vector field.
It should be noted that, in practical applications, an appropriate optimization method may be selected according to specific needs to optimize the basic motion vector field, which is not limited to the Powell optimization method in this example.
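The following sketch shows one way such a search could look, using SciPy's Powell method as a concrete stand-in (the embodiment names the Powell method but does not prescribe a library); reconstruct_with_field and image_entropy are hypothetical callables for motion-compensated reconstruction and formula (5), respectively.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_motion_field(initial_params, reconstruct_with_field, image_entropy):
    """Powell search, starting from the basic motion vector field, for parameters
    that minimize the entropy of the reconstructed image.

    Both callables are hypothetical: reconstruct_with_field(params) performs
    motion-compensated reconstruction; image_entropy(image) implements formula (5).
    """
    def objective(params):
        return image_entropy(reconstruct_with_field(params))

    # The preset search range can be enforced inside the objective, or via the
    # bounds support that minimize() offers for the Powell method in recent SciPy versions.
    result = minimize(objective, x0=np.asarray(initial_params, dtype=float), method="Powell")
    return result.x   # parameters of the target motion vector field
```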
In this embodiment, the target motion vector field is determined by the image entropy value on the basis of the small-motion-artifact images selected according to the classification result, which eliminates the interference of large-motion-artifact images and background images with low estimation accuracy and improves the effectiveness of the motion vector field; the better the effectiveness of the motion vector field, the higher the accuracy of motion compensation.
In an exemplary implementation process, after step S105, the method may further include:
and carrying out image reconstruction according to the target data to obtain a target reconstructed image.
In this embodiment, the target data is obtained by performing motion compensation based on a target motion vector field with high accuracy, and the compensation effect is good, so that a reconstructed image with higher quality can be obtained by reconstructing according to the target data.
According to the motion compensation method provided by the embodiment of the invention, M initial motion vector fields are determined within a preset motion range according to the preset motion vector field model. For a target reconstruction position, motion compensation is performed on the cardiac CT scan data to be processed according to each initial motion vector field in the M initial motion vector fields, and image reconstruction is then performed to obtain M preliminary reconstructed images. The M preliminary reconstructed images are classified using the trained classification model, and at least one image whose motion artifact is smaller than a preset threshold is selected from the M preliminary reconstructed images according to the classification result. A target motion vector field is determined from the at least one image, and motion compensation is performed on the cardiac CT scan data according to the target motion vector field to obtain target data. Since images whose motion artifact is smaller than the preset threshold are screened out by the classification model for motion artifact estimation, the effectiveness of the motion vector field is improved, and the accuracy of motion compensation is improved.
Based on the method embodiment, the embodiment of the invention also provides a corresponding device, equipment and storage medium embodiment.
Fig. 3 is a functional block diagram of a motion compensation apparatus according to an embodiment of the invention. As shown in fig. 3, in this embodiment, the motion compensation apparatus may include:
an initial field determining module 310, configured to determine M initial motion vector fields within a preset motion range according to a preset motion vector field model; m is a natural number;
the preliminary reconstruction module 320 is configured to perform image reconstruction on cardiac CT scan data to be processed according to each initial motion vector field of the M initial motion vector fields, for a target reconstruction position, to obtain M preliminary reconstructed images;
the classifying and selecting module 330 is configured to classify the M preliminary reconstructed images by using a trained classification model, and select at least one image with a motion artifact smaller than a preset threshold from the M preliminary reconstructed images according to a classification result; the classification model is a deep learning network model;
a target field determining module 340 for determining a target motion vector field from the at least one image;
and the compensation module 350 is configured to perform motion compensation on the cardiac CT scan data according to the target motion vector field, so as to obtain target data.
In an exemplary implementation, the apparatus further includes:
and the final reconstruction module is used for carrying out image reconstruction according to the target data to obtain a target reconstruction image.
In one exemplary implementation, the object field determination module 340 is specifically configured to:
determining motion artifact estimation values of all images in the at least one image based on a preset motion artifact estimation algorithm;
determining an initial parameter value of a motion vector field based on a minimum value of motion artifact estimates of the at least one image;
and optimizing the motion vector field according to a preset optimization algorithm and the initial parameter value to obtain a target motion vector field.
In an exemplary implementation, the object field determining module 340, when configured to determine the motion artifact estimation of each image in the at least one image based on a preset motion artifact estimation algorithm, may be specifically configured to:
and acquiring entropy values of each image in the at least one image, and taking the entropy values as motion artifact estimation values of the images.
In an exemplary implementation, the target field determining module 340, when configured to optimize the motion vector field according to a preset optimization algorithm and the initial parameter value, may be specifically configured to:
Making the parameter value of the motion vector field equal to the initial parameter value to obtain a basic motion vector field;
searching a motion vector field which minimizes the entropy value of the image in a preset range as a target motion vector field.
In one exemplary implementation, the motion vector field model is a polynomial.
In an exemplary implementation, the training method of the classification model includes:
setting a deep learning network model and initial parameter values;
obtaining a plurality of sets of training data, wherein each set of training data comprises a label image and an image category corresponding to the label image; the image category comprises images with motion artifacts smaller than a preset threshold value and images with motion artifacts larger than or equal to the preset threshold value;
and training the deep learning network model by using the training data to obtain a trained deep learning network model, and taking the trained deep learning network model as a classification model.
The embodiment of the invention also provides CT equipment. Fig. 4 is a hardware configuration diagram of a CT apparatus according to an embodiment of the present invention. As shown in fig. 4, the CT apparatus includes: an internal bus 401, and a memory 402, a processor 403 and an external interface 404 connected by the internal bus, wherein the external interface is used for connecting a detector of the CT system, and the detector comprises a plurality of detector chambers and corresponding processing circuits;
The memory 402 is configured to store machine-readable instructions corresponding to the motion compensation logic;
the processor 403 is configured to read the machine readable instructions on the memory 402 and execute the instructions to implement the following operations:
determining M initial motion vector fields in a preset motion range according to a preset motion vector field model; m is a natural number;
aiming at a target reconstruction position, respectively carrying out image reconstruction after carrying out motion compensation on heart CT scanning data to be processed according to each initial motion vector field in the M initial motion vector fields to obtain M initial reconstructed images;
classifying the M preliminary reconstructed images by using the trained classification model, and selecting at least one image with motion artifact smaller than a preset threshold from the M preliminary reconstructed images according to classification results; the classification model is a deep learning network model;
determining a target motion vector field from the at least one image;
and performing motion compensation on the cardiac CT scanning data according to the target motion vector field to obtain target data.
In an exemplary implementation, the motion compensation is performed on the cardiac CT scan data according to the target motion vector field, and after obtaining the target data, the method further includes:
And carrying out image reconstruction according to the target data to obtain a target reconstructed image.
In one exemplary implementation, determining a target motion vector field from the at least one image includes:
determining motion artifact estimation values of all images in the at least one image based on a preset motion artifact estimation algorithm;
determining an initial parameter value of a motion vector field based on a minimum value of motion artifact estimates of the at least one image;
and optimizing the motion vector field according to a preset optimization algorithm and the initial parameter value to obtain a target motion vector field.
In an exemplary implementation, determining motion artifact estimates for each of the at least one image based on a preset motion artifact estimation algorithm includes:
and acquiring entropy values of each image in the at least one image, and taking the entropy values as motion artifact estimation values of the images.
In an exemplary implementation, optimizing the motion vector field according to a preset optimization algorithm and the initial parameter value to obtain a target motion vector field includes:
making the parameter value of the motion vector field equal to the initial parameter value to obtain a basic motion vector field;
and optimizing the basic motion vector field by adopting the Powell optimization method, searching, within a preset range, for the motion vector field that minimizes the entropy value of the image, and taking it as the target motion vector field.
In one exemplary implementation, the motion vector field model is a polynomial.
In an exemplary implementation, the training method of the classification model includes:
setting a deep learning network model and initial parameter values;
obtaining a plurality of sets of training data, wherein each set of training data comprises a label image and an image category corresponding to the label image; the image category comprises images with motion artifacts smaller than a preset threshold value and images with motion artifacts larger than or equal to the preset threshold value;
and training the deep learning network model by using the training data to obtain a trained deep learning network model, and taking the trained deep learning network model as a classification model.
The embodiment of the invention also provides a CT system, which comprises a detector, a scanning bed and CT equipment, wherein the detector comprises a plurality of detector chambers and corresponding processing circuits; wherein:
the detector chamber is used for detecting X-rays passing through a scanning object and converting the X-rays into electric signals in the scanning process of the CT system;
The processing circuit is used for converting the electric signal into a pulse signal and collecting energy information of the pulse signal;
the CT device is used for:
determining M initial motion vector fields in a preset motion range according to a preset motion vector field model; m is a natural number;
aiming at a target reconstruction position, respectively carrying out image reconstruction after carrying out motion compensation on heart CT scanning data to be processed according to each initial motion vector field in the M initial motion vector fields to obtain M initial reconstructed images;
classifying the M preliminary reconstructed images by using the trained classification model, and selecting at least one image with motion artifact smaller than a preset threshold from the M preliminary reconstructed images according to classification results; the classification model is a deep learning network model;
determining a target motion vector field from the at least one image;
and performing motion compensation on the cardiac CT scanning data according to the target motion vector field to obtain target data.
In an exemplary implementation, the motion compensation is performed on the cardiac CT scan data according to the target motion vector field, and after obtaining the target data, the method further includes:
And carrying out image reconstruction according to the target data to obtain a target reconstructed image.
In one exemplary implementation, determining a target motion vector field from the at least one image includes:
determining motion artifact estimation values of all images in the at least one image based on a preset motion artifact estimation algorithm;
determining an initial parameter value of a motion vector field based on a minimum value of motion artifact estimates of the at least one image;
and optimizing the motion vector field according to a preset optimization algorithm and the initial parameter value to obtain a target motion vector field.
In an exemplary implementation, determining motion artifact estimates for each of the at least one image based on a preset motion artifact estimation algorithm includes:
and acquiring entropy values of each image in the at least one image, and taking the entropy values as motion artifact estimation values of the images.
In an exemplary implementation, optimizing the motion vector field according to a preset optimization algorithm and the initial parameter value to obtain a target motion vector field includes:
making the parameter value of the motion vector field equal to the initial parameter value to obtain a basic motion vector field;
Searching a motion vector field which minimizes the entropy value of the image in a preset range as a target motion vector field.
In one exemplary implementation, the motion vector field model is a polynomial.
In an exemplary implementation, the training method of the classification model includes:
setting a deep learning network model and initial parameter values;
obtaining a plurality of sets of training data, wherein each set of training data comprises a label image and an image category corresponding to the label image; the image category comprises images with motion artifacts smaller than a preset threshold value and images with motion artifacts larger than or equal to the preset threshold value;
and training the deep learning network model by using the training data to obtain a trained deep learning network model, and taking the trained deep learning network model as a classification model.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, wherein the program when executed by a processor realizes the following operations:
determining M initial motion vector fields in a preset motion range according to a preset motion vector field model; m is a natural number;
aiming at a target reconstruction position, respectively carrying out image reconstruction after carrying out motion compensation on heart CT scanning data to be processed according to each initial motion vector field in the M initial motion vector fields to obtain M initial reconstructed images;
Classifying the M preliminary reconstructed images by using the trained classification model, and selecting at least one image with motion artifact smaller than a preset threshold from the M preliminary reconstructed images according to classification results; the classification model is a deep learning network model;
determining a target motion vector field from the at least one image;
and performing motion compensation on the cardiac CT scanning data according to the target motion vector field to obtain target data.
In an exemplary implementation, the motion compensation is performed on the cardiac CT scan data according to the target motion vector field, and after obtaining the target data, the method further includes:
and carrying out image reconstruction according to the target data to obtain a target reconstructed image.
In one exemplary implementation, determining a target motion vector field from the at least one image includes:
determining motion artifact estimation values of all images in the at least one image based on a preset motion artifact estimation algorithm;
determining an initial parameter value of a motion vector field based on a minimum value of motion artifact estimates of the at least one image;
and optimizing the motion vector field according to a preset optimization algorithm and the initial parameter value to obtain a target motion vector field.
In an exemplary implementation, determining motion artifact estimates for each of the at least one image based on a preset motion artifact estimation algorithm includes:
and acquiring entropy values of each image in the at least one image, and taking the entropy values as motion artifact estimation values of the images.
In an exemplary implementation, optimizing the motion vector field according to a preset optimization algorithm and the initial parameter value to obtain a target motion vector field includes:
making the parameter value of the motion vector field equal to the initial parameter value to obtain a basic motion vector field;
searching a motion vector field which minimizes the entropy value of the image in a preset range as a target motion vector field.
In one exemplary implementation, the motion vector field model is a polynomial.
In an exemplary implementation, the training method of the classification model includes:
setting a deep learning network model and initial parameter values;
obtaining a plurality of sets of training data, wherein each set of training data comprises a label image and an image category corresponding to the label image; the image category comprises images with motion artifacts smaller than a preset threshold value and images with motion artifacts larger than or equal to the preset threshold value;
And training the deep learning network model by using the training data to obtain a trained deep learning network model, and taking the trained deep learning network model as a classification model.
For the device and apparatus embodiments, reference is made to the description of the method embodiments for the relevant points, since they essentially correspond to the method embodiments. The apparatus embodiments described above are merely illustrative, wherein the modules illustrated as separate components may or may not be physically separate, and the components shown as modules may or may not be physical, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present description. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It is to be understood that the present description is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The foregoing description of the preferred embodiments is provided for the purpose of illustration only, and is not intended to limit the scope of the disclosure, since any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the disclosure are intended to be included within the scope of the disclosure.

Claims (6)

1. A method of motion compensation, comprising:
determining M initial motion vector fields in a preset motion range according to a preset motion vector field model; m is a natural number;
Aiming at a target reconstruction position, respectively carrying out image reconstruction after carrying out motion compensation on heart CT scanning data to be processed according to each initial motion vector field in the M initial motion vector fields to obtain M initial reconstructed images;
classifying the M preliminary reconstructed images by using the trained classification model, and selecting at least one image with motion artifact smaller than a preset threshold from the M preliminary reconstructed images according to classification results; the classification model is a deep learning network model;
determining a target motion vector field from the at least one image;
performing motion compensation on the cardiac CT scanning data according to the target motion vector field to obtain target data;
said determining a target motion vector field from said at least one image comprising:
determining motion artifact estimation values of all images in the at least one image based on a preset motion artifact estimation algorithm;
determining an initial parameter value of a motion vector field based on a minimum value of motion artifact estimates of the at least one image;
optimizing the motion vector field according to a preset optimization algorithm and the initial parameter value to obtain a target motion vector field;
The determining motion artifact estimation values of each image in the at least one image based on a preset motion artifact estimation algorithm comprises:
acquiring entropy values of all images in the at least one image, and taking the entropy values as motion artifact estimation values of the images;
optimizing the motion vector field according to a preset optimization algorithm and the initial parameter value to obtain a target motion vector field, wherein the optimizing comprises the following steps:
making the parameter value of the motion vector field equal to the initial parameter value to obtain a basic motion vector field;
searching a motion vector field which enables the entropy value of the image to be minimum in a preset range, and taking the motion vector field as a target motion vector field;
the training method of the classification model comprises the following steps:
setting a deep learning network model and initial parameter values;
obtaining a plurality of sets of training data, wherein each set of training data comprises a label image and an image category corresponding to the label image; the image category comprises images with motion artifacts smaller than a preset threshold value and images with motion artifacts larger than or equal to the preset threshold value;
and training the deep learning network model by using the training data to obtain a trained deep learning network model, and taking the trained deep learning network model as a classification model.
2. The method of claim 1, further comprising, after said motion compensating said cardiac CT scan data based on said target motion vector field to obtain target data:
and carrying out image reconstruction according to the target data to obtain a target reconstructed image.
3. The method of claim 1, wherein the motion vector field model is a polynomial.
4. A motion compensation apparatus, comprising:
the initial field determining module is used for determining M initial motion vector fields in a preset motion range according to a preset motion vector field model; m is a natural number;
the preliminary reconstruction module is used for carrying out image reconstruction on heart CT scanning data to be processed after carrying out motion compensation according to each initial motion vector field in the M initial motion vector fields respectively aiming at a target reconstruction position to obtain M preliminary reconstruction images;
the classifying and selecting module is used for classifying the M preliminary reconstructed images by using the trained classifying model, and selecting at least one image with motion artifact smaller than a preset threshold from the M preliminary reconstructed images according to the classifying result; the classification model is a deep learning network model;
A target field determining module for determining a target motion vector field based on the at least one image;
the compensation module is used for carrying out motion compensation on the cardiac CT scanning data according to the target motion vector field to obtain target data;
the target field determining module is specifically configured to:
determining motion artifact estimation values of all images in the at least one image based on a preset motion artifact estimation algorithm;
determining an initial parameter value of a motion vector field based on a minimum value of motion artifact estimates of the at least one image;
optimizing the motion vector field according to a preset optimization algorithm and the initial parameter value to obtain a target motion vector field;
the object field determining module is used for determining motion artifact estimation values of each image in the at least one image based on a preset motion artifact estimation algorithm, and is specifically used for:
acquiring entropy values of all images in the at least one image, and taking the entropy values as motion artifact estimation values of the images;
the target field determining module is used for optimizing the motion vector field according to a preset optimizing algorithm and the initial parameter value to obtain a target motion vector field, and is specifically used for:
Making the parameter value of the motion vector field equal to the initial parameter value to obtain a basic motion vector field;
searching a motion vector field which enables the entropy value of the image to be minimum in a preset range, and taking the motion vector field as a target motion vector field;
the training method of the classification model comprises the following steps:
setting a deep learning network model and initial parameter values;
obtaining a plurality of sets of training data, wherein each set of training data comprises a label image and an image category corresponding to the label image; the image category comprises images with motion artifacts smaller than a preset threshold value and images with motion artifacts larger than or equal to the preset threshold value;
and training the deep learning network model by using the training data to obtain a trained deep learning network model, and taking the trained deep learning network model as a classification model.
5. A CT apparatus, comprising: an internal bus, and a memory, a processor and an external interface connected through the internal bus; the external interface is used for being connected with a detector of the CT system, and the detector comprises a plurality of detector chambers and corresponding processing circuits;
the memory is used for storing machine-readable instructions corresponding to control logic of image reconstruction;
The processor is configured to read the machine-readable instructions on the memory and perform operations comprising:
determining M initial motion vector fields in a preset motion range according to a preset motion vector field model; m is a natural number;
for a target reconstruction position, performing motion compensation on the cardiac CT scan data to be processed according to each of the M initial motion vector fields, and then performing image reconstruction to obtain M preliminary reconstructed images;
classifying the M preliminary reconstructed images by using the trained classification model, and selecting, according to the classification result, at least one image with a motion artifact smaller than a preset threshold from the M preliminary reconstructed images; the classification model is a deep learning network model;
determining a target motion vector field from the at least one image;
performing motion compensation on the cardiac CT scan data according to the target motion vector field to obtain target data;
wherein the determining a target motion vector field from the at least one image comprises:
determining a motion artifact estimation value for each image in the at least one image based on a preset motion artifact estimation algorithm;
determining an initial parameter value of a motion vector field based on the minimum of the motion artifact estimation values of the at least one image;
optimizing the motion vector field according to a preset optimization algorithm and the initial parameter value to obtain the target motion vector field;
wherein the determining a motion artifact estimation value for each image in the at least one image based on the preset motion artifact estimation algorithm comprises:
acquiring an entropy value of each image in the at least one image, and taking the entropy value as the motion artifact estimation value of that image;
wherein the optimizing the motion vector field according to the preset optimization algorithm and the initial parameter value to obtain the target motion vector field comprises:
setting the parameter value of the motion vector field to the initial parameter value to obtain a base motion vector field;
searching, within a preset range around the base motion vector field, for the motion vector field that minimizes the entropy value of the image, and taking that motion vector field as the target motion vector field;
the training method of the classification model comprises the following steps:
setting a deep learning network model and initial parameter values;
obtaining a plurality of sets of training data, wherein each set of training data comprises a label image and an image category corresponding to the label image, the image categories being images with a motion artifact smaller than the preset threshold and images with a motion artifact greater than or equal to the preset threshold;
and training the deep learning network model with the training data to obtain a trained deep learning network model, and taking the trained deep learning network model as the classification model.
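A compact sketch of the two-class classifier training recited at the end of the claim is given below, written in PyTorch. The architecture, optimizer, batch size and epoch count are illustrative assumptions, and ArtifactClassifier and train_classifier are hypothetical names; the claims only require a deep learning network trained on label images belonging to the two recited categories.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class ArtifactClassifier(nn.Module):
    """Small CNN predicting whether a reconstructed slice has a motion
    artifact below the preset threshold (class 0) or at/above it (class 1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_classifier(label_images, labels, epochs=10, lr=1e-3):
    """label_images: float tensor (N, 1, H, W); labels: long tensor (N,)."""
    model = ArtifactClassifier()
    loader = DataLoader(TensorDataset(label_images, labels),
                        batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return model  # trained network used as the classification model
```

Any deep learning network that separates the two recited image categories could stand in for this toy CNN.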
6. A CT system, comprising a detector, a scan bed and a CT apparatus, the detector comprising a plurality of detector cells and corresponding processing circuits; wherein:
the detector cells are used for detecting X-rays passing through a scan object and converting the X-rays into electrical signals during a scan performed by the CT system;
the processing circuits are used for converting the electrical signals into pulse signals and collecting energy information of the pulse signals;
the CT apparatus is used for:
determining M initial motion vector fields in a preset motion range according to a preset motion vector field model; m is a natural number;
for a target reconstruction position, performing motion compensation on the cardiac CT scan data to be processed according to each of the M initial motion vector fields, and then performing image reconstruction to obtain M preliminary reconstructed images;
classifying the M preliminary reconstructed images by using the trained classification model, and selecting, according to the classification result, at least one image with a motion artifact smaller than a preset threshold from the M preliminary reconstructed images; the classification model is a deep learning network model;
determining a target motion vector field from the at least one image;
performing motion compensation on the cardiac CT scan data according to the target motion vector field to obtain target data;
wherein the determining a target motion vector field from the at least one image comprises:
determining a motion artifact estimation value for each image in the at least one image based on a preset motion artifact estimation algorithm;
determining an initial parameter value of a motion vector field based on the minimum of the motion artifact estimation values of the at least one image;
optimizing the motion vector field according to a preset optimization algorithm and the initial parameter value to obtain the target motion vector field;
wherein the determining a motion artifact estimation value for each image in the at least one image based on the preset motion artifact estimation algorithm comprises:
acquiring an entropy value of each image in the at least one image, and taking the entropy value as the motion artifact estimation value of that image;
wherein the optimizing the motion vector field according to the preset optimization algorithm and the initial parameter value to obtain the target motion vector field comprises:
setting the parameter value of the motion vector field to the initial parameter value to obtain a base motion vector field;
searching, within a preset range around the base motion vector field, for the motion vector field that minimizes the entropy value of the image, and taking that motion vector field as the target motion vector field;
the training method of the classification model comprises the following steps:
setting a deep learning network model and initial parameter values;
obtaining a plurality of sets of training data, wherein each set of training data comprises a label image and an image category corresponding to the label image, the image categories being images with a motion artifact smaller than the preset threshold and images with a motion artifact greater than or equal to the preset threshold;
and training the deep learning network model with the training data to obtain a trained deep learning network model, and taking the trained deep learning network model as the classification model.
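To show how the claimed steps fit together, the sketch below traces one possible end-to-end flow: sampling M candidate motion vector fields within a preset motion range, compensating and reconstructing once per candidate, classifying the preliminary images, taking the entropy-minimizing survivor as the initial parameter value, and searching a preset range for the target motion vector field. Here reconstruct, compensate and classify are placeholders for the scanner's reconstruction routine, its motion-compensation routine and the trained classification model; the three-parameter motion model and the local search step are assumptions made only for illustration.

```python
import numpy as np

def _entropy(image, n_bins=256):
    """Histogram entropy used as the motion artifact estimate."""
    hist, _ = np.histogram(image, bins=n_bins)
    p = hist[hist > 0].astype(np.float64)
    p /= p.sum()
    return float(-np.sum(p * np.log2(p)))

def motion_compensate_scan(scan_data, classify, reconstruct, compensate,
                           m=32, motion_range=(-2.0, 2.0), search_steps=11):
    """End-to-end flow of the claimed method (illustrative only)."""
    # 1. Sample M candidate motion vector field parameters in the preset range.
    rng = np.random.default_rng(0)
    candidates = rng.uniform(*motion_range, size=(m, 3))  # e.g. a 3-D shift per candidate

    # 2. Motion-compensate and reconstruct once per candidate.
    images = [reconstruct(compensate(scan_data, p)) for p in candidates]

    # 3. Keep candidates whose images the classifier labels as low-artifact (class 0).
    keep = [i for i, img in enumerate(images) if classify(img) == 0]
    if not keep:
        keep = list(range(m))  # fall back to all candidates

    # 4. Entropy of the kept images gives the initial parameter value.
    entropies = [_entropy(images[i]) for i in keep]
    base = candidates[keep[int(np.argmin(entropies))]]

    # 5. Search a preset range around the base field for the entropy-minimizing field.
    best, best_entropy = base, min(entropies)
    for delta in np.linspace(-0.5, 0.5, search_steps):
        trial = base + delta
        entropy = _entropy(reconstruct(compensate(scan_data, trial)))
        if entropy < best_entropy:
            best, best_entropy = trial, entropy

    # 6. Final compensation with the target motion vector field yields the target data.
    return compensate(scan_data, best)
```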
CN202010705846.0A 2020-07-21 Motion compensation method, motion compensation device, CT equipment and CT system, Active, granted as CN111915516B (en)

Priority Applications (1)

Application Number: CN202010705846.0A; Priority Date / Filing Date: 2020-07-21; Title: Motion compensation method, motion compensation device, CT equipment and CT system

Publications (2)

CN111915516A (en), published 2020-11-10
CN111915516B (en), published 2024-03-08

Family

ID=73281407

Family Applications (1)

Application Number: CN202010705846.0A; Status: Active; Granted Publication: CN111915516B (en); Priority Date / Filing Date: 2020-07-21

Country Status (1)

CN (1): CN111915516B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8224056B2 (en) * 2009-12-15 2012-07-17 General Electric Company Method for computed tomography motion estimation and compensation
US10565744B2 (en) * 2016-06-30 2020-02-18 Samsung Electronics Co., Ltd. Method and apparatus for processing a medical image to reduce motion artifacts

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103027705A (en) * 2011-09-28 2013-04-10 Siemens Method and system for generating a motion-compensated CT image data set
CN107945850A (en) * 2016-10-13 2018-04-20 Samsung Electronics Co., Ltd. Method and apparatus for processing medical images
CN109727203A (en) * 2017-10-27 2019-05-07 Siemens Healthcare GmbH Method and system for compensating motion artifacts by means of machine learning
CN110298447A (en) * 2018-03-23 2019-10-01 Siemens Healthcare GmbH Method for processing parameters of a machine learning method, and reconstruction method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Deep-Learning-Based CT Motion Artifact Recognition in Coronary Arteries; T. Elss et al.; Medical Imaging 2018: Image Processing; 1-7 *
Motion artifact recognition and quantification in coronary CT angiography using convolutional neural networks; T. Lossau et al.; Medical Image Analysis; 68-79 *
Motion compensation in the region of the coronary arteries based on partial angle reconstructions from short-scan CT data; Juliane Hahn et al.; Medical Physics; Vol. 44, No. 11; 5795-5813 *
High-quality compressed-sensing 4D-CBCT reconstruction based on motion compensation; Yang Xuan, Zhang Hua, He Ji, Zeng Dong, Zhang Xinyu, Bian Zhaoying, Zhang Jing, Ma Jianhua; Journal of Southern Medical University; Vol. 36, No. 07; 969-973, 978 *
Super-resolution reconstruction of satellite video based on motion segmentation and optical flow estimation; Bu Lijing, Zheng Xinjie, Zhang Zhengpeng, Xiao Yiming; Science of Surveying and Mapping; Vol. 41, No. 12; 233-237, 242 *


Similar Documents

Publication Publication Date Title
JP6855223B2 (en) Medical image processing device, X-ray computer tomographic imaging device and medical image processing method
Shaw et al. MRI k-space motion artefact augmentation: model robustness and task-specific uncertainty
CN110751702B (en) Image reconstruction method, system, device and storage medium
CN111192228B (en) Image processing method, device, CT equipment and CT system
CN111986181A (en) Intravascular stent image segmentation method and system based on double-attention machine system
CN110400298B (en) Method, device, equipment and medium for detecting heart clinical index
CN110298447B (en) Method for processing parameters of machine learning method and reconstruction method
CN111462020A (en) Method, system, storage medium and device for correcting motion artifact of heart image
CN106530236B (en) Medical image processing method and system
CN110969633B (en) Automatic optimal phase identification method for cardiac CT imaging
CN111815735A (en) Human tissue self-adaptive CT reconstruction method and reconstruction system
Wu et al. Improving the ability of deep neural networks to use information from multiple views in breast cancer screening
CN110728730B (en) Image reconstruction method, device, CT equipment and CT system
Mohebbian et al. Classifying MRI motion severity using a stacked ensemble approach
Marin et al. Numerical surrogates for human observers in myocardial motion evaluation from SPECT images
JP5364009B2 (en) Image generating apparatus, image generating method, and program thereof
Bagher-Ebadian et al. Neural network and fuzzy clustering approach for automatic diagnosis of coronary artery disease in nuclear medicine
CN111915516B (en) Motion compensation method, motion compensation device, CT equipment and CT system
US11308660B2 (en) Motion compensated cardiac valve reconstruction
CN111311531B (en) Image enhancement method, device, console device and medical imaging system
CN112244884B (en) Bone image acquisition method, device, console equipment and CT system
Amin et al. Semi-supervised learning for limited medical data using generative adversarial network and transfer learning
CN110706338B (en) Image reconstruction method, device, CT equipment and CT system
CN111985485A (en) Pyramid attention cycle network-based surgical interventional instrument tracking method
CN112258596A (en) Image generation method and device, console equipment and CT system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
     Effective date of registration: 2024-02-04
     Address after: 110167 No. 177-1 Innovation Road, Hunnan District, Shenyang City, Liaoning Province
     Applicant after: Shenyang Neusoft Medical Systems Co.,Ltd. (China)
     Address before: Room 336, 177-1, Chuangxin Road, Hunnan New District, Shenyang City, Liaoning Province
     Applicant before: Shenyang advanced medical equipment Technology Incubation Center Co.,Ltd. (China)
GR01 Patent grant