CN115147320A - Coronary artery CT image subtraction method and device, electronic equipment and storage medium - Google Patents

Coronary artery CT image subtraction method and device, electronic equipment and storage medium

Info

Publication number
CN115147320A
CN115147320A (application CN202210746283.9A)
Authority
CN
China
Prior art keywords
image
neural network
scan
feature point
flat
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210746283.9A
Other languages
Chinese (zh)
Inventor
张瑜
马骏
郑凌霄
兰宏志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Raysight Intelligent Medical Technology Co Ltd
Original Assignee
Shenzhen Raysight Intelligent Medical Technology Co Ltd
Application filed by Shenzhen Raysight Intelligent Medical Technology Co Ltd filed Critical Shenzhen Raysight Intelligent Medical Technology Co Ltd
Priority to CN202210746283.9A
Publication of CN115147320A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30048 Heart; Cardiac

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application provides a coronary CT image subtraction method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring an enhanced CT image and a flat-scan CT image of a target patient; extracting a first feature point set of the enhanced CT image through a first feature point extraction model; extracting a second feature point set of the flat-scan CT image through a second feature point extraction model; determining a target transformation matrix based on the first feature point set and the second feature point set; performing coordinate transformation processing based on the flat-scan CT image and the target transformation matrix to obtain a transformed flat-scan CT image; and subtracting the gray values of the corresponding pixel points of the transformed flat-scan CT image from the gray values of all the pixel points in the enhanced CT image to obtain the subtraction CT image. The subtraction method provided by this scheme obtains the subtraction CT image automatically, and accurate registration of the enhanced CT image and the flat-scan CT image improves the accuracy of the subtraction CT image.

Description

Coronary artery CT image subtraction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a coronary CT image subtraction method and apparatus, an electronic device, and a storage medium.
Background
Coronary heart disease is known for its high incidence and mortality and is often called the leading killer both at home and abroad. Early diagnosis and accurate treatment of coronary heart disease are of great importance for prevention in the general population and for prolonging the lives of affected patients. Flat-scan CT, a common screening tool for coronary heart disease, is often used to compute indices such as the calcium score, while enhanced CT is used to build a three-dimensional coronary model and to observe plaque quantitatively and qualitatively. In practice, the lumen is often difficult to observe on enhanced CT in heavily calcified or stented regions, because those high-density structures obscure it. High-density structures are little affected by the contrast agent, so their appearance is consistent across enhanced and flat-scan CT; the stents and calcifications shown on the enhanced CT can therefore be removed by subtraction against the flat-scan CT, displaying the lumen more clearly and making subsequent three-dimensional lumen modeling or functional computations such as CT-FFR more accurate. How to perform this subtraction automatically from the two images is therefore an urgent technical problem.
Disclosure of Invention
In view of the above, an object of the present application is to provide a coronary CT image subtraction method and apparatus, an electronic device, and a storage medium that obtain a subtraction CT image automatically and improve its accuracy through accurate registration of the enhanced CT image and the flat-scan CT image.
The embodiment of the application provides a subtraction method of coronary artery CT images, which comprises the following steps:
acquiring an enhanced CT image and a flat-scan CT image of a target patient; the enhanced CT image and the flat-scan CT image are both CT images at the coronary artery of the target patient;
inputting the enhanced CT image into a first feature point extraction model trained in advance, and extracting a first feature point set of the enhanced CT image;
inputting the flat-scan CT image into a second feature point extraction model trained in advance, and extracting a second feature point set of the flat-scan CT image;
determining a target transformation matrix between the first feature point set and the second feature point set through iterative computation based on the first feature point set and the second feature point set;
performing coordinate transformation processing based on the flat-scan CT image and the target transformation matrix to obtain a transformed flat-scan CT image so as to realize image registration of the flat-scan CT image and the enhanced CT image;
and subtracting the gray values of the corresponding pixel points of the transformed flat-scan CT image from the gray values of all the pixel points in the enhanced CT image to obtain a subtraction CT image.
Optionally, the first feature point extraction model and the second feature point extraction model are constructed by the following steps:
acquiring a CT image training sample set; the CT image training sample set comprises enhanced CT images to be trained and flat-scan CT images to be trained at coronary arteries of a plurality of patients, and each CT image in the CT image training sample set has a corresponding feature point set label;
determining the enhanced CT image to be trained as a first input feature of a first neural network, determining a feature point set label corresponding to the enhanced CT image to be trained as a first output feature of the first neural network, determining the flat-scan CT image to be trained as a second input feature of a second neural network, and determining a feature point set label corresponding to the flat-scan CT image to be trained as a second output feature of the second neural network;
performing an alternating iterative training of the first neural network and the second neural network using a back gradient propagation algorithm based on the first input features, the first output features, the second input features, and the second output features; wherein, during the alternating iterative training, network parameters of the first neural network and the second neural network are shared;
and when the first neural network and the second neural network reach a convergence state, stopping training to obtain a first characteristic point extraction model and a second characteristic point extraction model.
Optionally, the determining, based on the first feature point set and the second feature point set, a target transformation matrix between the first feature point set and the second feature point set through iterative computation includes:
acquiring an initial transformation matrix; the initial transformation matrix comprises a plurality of parameters corresponding to a plurality of transformation types, and each parameter is provided with a random initial value;
performing initial transformation on the feature points in the second feature point set based on the initial transformation matrix to obtain a transformed second feature point set;
setting an objective function based on the coordinates of the transformed second feature points in the second feature point set and the coordinates of the first feature points in the first feature point set; the objective function is used for determining the sum of the Euclidean distances between the transformed second feature points and the first feature points;
and iteratively updating the random initial value corresponding to each parameter in the initial transformation matrix based on the objective function until the value of the objective function is smaller than a preset threshold value, at which point updating stops and the target transformation matrix is obtained.
Optionally, the initial network parameters of the first neural network and the second neural network are the same.
Optionally, the alternating iterative training of the first neural network and the second neural network using the back gradient propagation algorithm based on the first input feature, the first output feature, the second input feature, and the second output feature includes:
during each iteration, determining a first loss function based on the first input feature and the first output feature, and a second loss function based on the second input feature and the second output feature;
alternately selecting the first loss function or the second loss function as a target loss function, and determining a target neural network corresponding to the selected target loss function; the target neural network corresponding to the first loss function is a first neural network, and the target neural network corresponding to the second loss function is a second neural network;
and updating the network parameters of the target neural network by using a reverse gradient propagation algorithm based on the target loss function, and updating the network parameters of the other neural network simultaneously so as to enable the network parameters of the two neural networks to be consistent.
Optionally, the transformation types include at least a translation transformation, a rotation transformation, a perspective transformation, and a scaling transformation.
The embodiment of the present application further provides a subtraction apparatus for coronary CT images, the subtraction apparatus includes:
an acquisition module for acquiring an enhanced CT image and a scout CT image of a target patient; the enhanced CT image and the flat scan CT image are both CT images at the coronary artery of the target patient;
the first extraction module is used for inputting the enhanced CT image into a first feature point extraction model trained in advance and extracting a first feature point set of the enhanced CT image;
the second extraction module is used for inputting the flat-scan CT image into a second feature point extraction model trained in advance and extracting a second feature point set of the flat-scan CT image;
a matrix determination module, configured to determine, based on the first feature point set and the second feature point set, a target transformation matrix between the first feature point set and the second feature point set through iterative computation;
a registration module, configured to perform coordinate transformation processing based on the flat-scan CT image and the target transformation matrix to obtain a transformed flat-scan CT image, so as to implement image registration between the flat-scan CT image and the enhanced CT image;
and the subtraction module is used for subtracting the gray values of the corresponding pixel points of the transformed flat-scan CT image from the gray values of all the pixel points in the enhanced CT image to obtain a subtracted CT image.
Optionally, the subtraction apparatus further includes a model building module, and the model building module is configured to:
acquiring a CT image training sample set; the CT image training sample set comprises enhanced CT images to be trained and flat-scan CT images to be trained at coronary arteries of a plurality of patients, and each CT image in the CT image training sample set has a corresponding feature point set label;
determining the enhanced CT image to be trained as a first input feature of a first neural network, determining a feature point set label corresponding to the enhanced CT image to be trained as a first output feature of the first neural network, determining the flat-scan CT image to be trained as a second input feature of a second neural network, and determining a feature point set label corresponding to the flat-scan CT image to be trained as a second output feature of the second neural network;
performing an alternating iterative training of the first neural network and the second neural network using a back gradient propagation algorithm based on the first input features, the first output features, the second input features, and the second output features; wherein, during the alternating iterative training, network parameters of the first neural network and the second neural network are shared;
and when the first neural network and the second neural network reach a convergence state, stopping training to obtain a first characteristic point extraction model and a second characteristic point extraction model.
Optionally, when the matrix determination module is configured to determine, through iterative computation, a target transformation matrix between the first feature point set and the second feature point set based on the first feature point set and the second feature point set, the matrix determination module is configured to:
acquiring an initial transformation matrix; the initial transformation matrix comprises a plurality of parameters corresponding to a plurality of transformation types, and each parameter is provided with a random initial value;
performing initial transformation on the feature points in the second feature point set based on the initial transformation matrix to obtain a transformed second feature point set;
setting an objective function based on the coordinates of the transformed second feature points in the second feature point set and the coordinates of the first feature points in the first feature point set; the objective function is used for determining the sum of the Euclidean distances between the transformed second feature points and the first feature points;
and iteratively updating the random initial value corresponding to each parameter in the initial transformation matrix based on the objective function until the value of the objective function is smaller than a preset threshold value, at which point updating stops and the target transformation matrix is obtained.
Optionally, the initial network parameters of the first neural network and the second neural network are the same.
Optionally, when the model building module is configured to perform an alternating iterative training on the first neural network and the second neural network by using an inverse gradient propagation algorithm based on the first input feature, the first output feature, the second input feature, and the second output feature, the model building module is configured to:
during each iteration, determining a first loss function based on the first input feature and the first output feature, and a second loss function based on the second input feature and the second output feature;
alternately selecting the first loss function or the second loss function as a target loss function, and determining a target neural network corresponding to the selected target loss function; the target neural network corresponding to the first loss function is a first neural network, and the target neural network corresponding to the second loss function is a second neural network;
and updating the network parameters of the target neural network by using a reverse gradient propagation algorithm based on the target loss function, and updating the network parameters of the other neural network simultaneously so as to enable the network parameters of the two neural networks to be consistent.
Optionally, the transformation types include at least a translation transformation, a rotation transformation, a perspective transformation, and a scaling transformation.
An embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the subtraction method as described above.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, and the computer program is executed by a processor to perform the steps of the subtraction method as described above.
The embodiments of the application provide a coronary artery CT image subtraction method and apparatus, an electronic device, and a storage medium, wherein the coronary artery CT image subtraction method comprises the following steps: acquiring an enhanced CT image and a flat-scan CT image of a target patient; the enhanced CT image and the flat-scan CT image are both CT images at the coronary artery of the target patient; inputting the enhanced CT image into a first feature point extraction model trained in advance, and extracting a first feature point set of the enhanced CT image; inputting the flat-scan CT image into a second feature point extraction model trained in advance, and extracting a second feature point set of the flat-scan CT image; determining a target transformation matrix between the first feature point set and the second feature point set through iterative computation based on the first feature point set and the second feature point set; performing coordinate transformation processing based on the flat-scan CT image and the target transformation matrix to obtain a transformed flat-scan CT image so as to realize image registration of the flat-scan CT image and the enhanced CT image; and subtracting the gray values of the corresponding pixel points of the transformed flat-scan CT image from the gray values of all the pixel points in the enhanced CT image to obtain a subtraction CT image.
In this way, feature points are extracted from the enhanced CT image and the flat-scan CT image by machine learning, matched pairwise to form point pairs, and used to construct the target transformation matrix; the flat-scan CT image is transformed into the enhanced-CT image space through the target transformation matrix, and finally the gray values at corresponding positions of the enhanced and flat-scan CT images are subtracted, completing the image subtraction task end to end. The entire scheme is fully automatic and requires no manual intervention.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be regarded as limiting their scope; those skilled in the art can obtain other related drawings from them without inventive effort.
Fig. 1 is a flowchart of a subtraction method for coronary CT images according to an embodiment of the present disclosure;
FIG. 2 is a schematic representation of heart region feature points provided herein;
FIG. 3 is a schematic diagram of an alternating iterative training neural network as provided herein;
FIG. 4 is a schematic view of a subtracted CT image provided herein;
FIG. 5 is a schematic structural diagram of a subtraction apparatus for coronary CT images according to an embodiment of the present disclosure;
fig. 6 is a second schematic structural diagram of a subtraction apparatus for coronary CT images according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. Every other embodiment that one skilled in the art can obtain without inventive effort based on the embodiments of the present application falls within the scope of protection of the present application.
Coronary heart disease is known for its high incidence and mortality and is often called the leading killer both at home and abroad. Early diagnosis and accurate treatment of coronary heart disease are of great importance for prevention in the general population and for prolonging the lives of affected patients. Flat-scan CT, a common screening tool for coronary heart disease, is often used to compute indices such as the calcium score, while enhanced CT is used to build a three-dimensional coronary model and to observe plaque quantitatively and qualitatively. In practice, the lumen is often difficult to observe on enhanced CT in heavily calcified or stented regions, because those high-density structures obscure it. High-density structures are little affected by the contrast agent, so their appearance is consistent across enhanced and flat-scan CT; the stents and calcifications shown on the enhanced CT can therefore be removed by subtraction against the flat-scan CT, displaying the lumen more clearly and making subsequent three-dimensional lumen modeling or functional computations such as CT-FFR more accurate.
However, when subtraction is performed from enhanced CT and flat-scan CT, the techniques generally adopted are the following two. (1) Feature points are selected manually in the coronary enhanced CT and the flat-scan CT, in one-to-one correspondence; a spatial transformation matrix is constructed from these feature points, the flat-scan CT is transformed into the enhanced-CT space through the matrix, and finally the gray values at corresponding positions of the enhanced and flat-scan CT are subtracted to complete the subtraction. (2) No feature points are selected manually; instead, global matching is performed with all points as the reference, a global spatial transformation matrix is constructed, the flat-scan CT is transformed into the enhanced-CT space through the matrix, and the gray values at corresponding positions are subtracted to complete the subtraction.
However, technique (1) has the following disadvantages: selecting feature points manually is time-consuming and labor-intensive, increases the burden on medical staff, and human factors such as fatigue easily make the selected feature points inaccurate, which degrades the final subtraction. Technique (2) has the following disadvantage: although no feature points are selected manually, a scheme based on global positions does not focus on the heart region, so even when the overall registration of the image is good, the heart region of greatest interest may be poorly registered, and the final subtraction is biased.
Based on this, the embodiment of the application provides a coronary artery CT image subtraction method that fully automatically completes feature point selection, target transformation matrix construction, and subtraction CT image generation, thereby saving substantial manpower and time, reducing the burden on medical staff, and improving the accuracy of the subtraction CT image.
Referring to fig. 1, fig. 1 is a flowchart illustrating a subtraction method for coronary CT images according to an embodiment of the present disclosure. As shown in fig. 1, a subtraction method provided in an embodiment of the present application includes:
s101, obtaining an enhanced CT image and a flat-scan CT image of a target patient.
Here, the enhanced CT image and the flat-scan CT image are both CT images at the coronary artery of the target patient.
The main difference between the enhanced CT image and the flat-scan CT image is whether a contrast agent is administered during acquisition. The flat-scan CT image, acquired without contrast agent, is often used for primary screening of coronary heart disease and for computing the calcium score, while the enhanced CT image is angiographic CT acquired with contrast agent. Both show high-density objects such as calcifications and stents; the difference is that the vessel lumen can be observed clearly on the enhanced CT image, whereas vessels are difficult to observe on flat-scan CT.
S102, inputting the enhanced CT image into a first feature point extraction model trained in advance, and extracting a first feature point set of the enhanced CT image.
Here, the first feature point extraction model performs feature point extraction on the input enhanced CT image: a plurality of first feature points are extracted from the enhanced CT image and form the first feature point set.
When the first feature points are extracted, information such as a prediction label and the spatial coordinates of each first feature point is determined at the same time; the prediction label is used to match the feature points subsequently extracted from the flat-scan CT image.
The extracted first feature points are key feature points of the heart region in the enhanced CT image. It should be noted that the heart region has many such feature points, for example the junction between the aorta and the coronary artery, the junction between the heart and the vertebra, and the junction between the aorta and the pulmonary artery, all of which are salient in both enhanced and flat-scan CT.
For example, referring to fig. 2, fig. 2 is a schematic diagram of a feature point of a heart region provided in the present application. As shown in fig. 2, the dots on the graph are feature points to be extracted.
S103, inputting the flat-scan CT image into a pre-trained second feature point extraction model, and extracting a second feature point set of the flat-scan CT image.
Here, the second feature point extraction model performs feature point extraction on the input flat-scan CT image: a plurality of second feature points are extracted from the flat-scan CT image and form the second feature point set.
When the second feature points are extracted, information such as a prediction label and the spatial coordinates of each second feature point is determined at the same time; the prediction label is used to match the feature points extracted from the enhanced CT image. The second feature point set contains the same number of feature points as the first feature point set, and each first feature point is matched by exactly one second feature point; the matching can be performed through the prediction labels.
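Both models thus output labelled points, so pairing the two sets reduces to grouping by prediction label. The following is a minimal sketch of this matching step, assuming each feature point is represented as a (label, (x, y, z)) tuple; the tuple format and the function name are illustrative assumptions, not the patent's interface.

import numpy as np

def match_feature_points(first_set, second_set):
    """Pair the feature points of the two extraction models by their
    prediction labels. Each set is assumed to contain every label
    exactly once, matching the one-to-one correspondence above."""
    second_by_label = {label: xyz for label, xyz in second_set}
    pairs = [(xyz, second_by_label[label]) for label, xyz in first_set]
    q = np.array([enh for enh, _ in pairs], dtype=float)    # enhanced-CT points
    p = np.array([flat for _, flat in pairs], dtype=float)  # flat-scan points
    return q, p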
In one embodiment provided by the present application, the first feature point extraction model and the second feature point extraction model are constructed by: acquiring a CT image training sample set; the CT image training sample set comprises enhanced CT images to be trained and flat-scan CT images to be trained at coronary arteries of a plurality of patients, and each CT image in the CT image training sample set has a corresponding feature point set label; determining the enhanced CT image to be trained as a first input feature of a first neural network, determining a feature point set label corresponding to the enhanced CT image to be trained as a first output feature of the first neural network, determining the flat-scan CT image to be trained as a second input feature of a second neural network, and determining a feature point set label corresponding to the flat-scan CT image to be trained as a second output feature of the second neural network; performing an alternating iterative training of the first neural network and the second neural network using a back gradient propagation algorithm based on the first input features, the first output features, the second input features, and the second output features; wherein, during the alternating iterative training, network parameters of the first neural network and the second neural network are shared; and when the first neural network and the second neural network reach a convergence state, stopping training to obtain a first characteristic point extraction model and a second characteristic point extraction model.
Here, the CT image training sample set is composed of a plurality of CT image pairs, each of which includes an enhanced CT image to be trained at a coronary artery of a patient and a flat-scan CT image to be trained. And each enhanced CT image to be trained and each flat-scan CT image to be trained are subjected to characteristic point labeling, namely each enhanced CT image to be trained and each flat-scan CT image to be trained have corresponding characteristic point set labels.
When labeling the feature points of the enhanced CT images to be trained and the flat-scan CT images to be trained, the labeling may be performed manually.
In another embodiment provided herein, the initial network parameters of the first neural network and the second neural network are the same.
Here, the first neural network and the second neural network are two sub-networks in a twin neural network.
It should be noted that a twin neural network is an artificial neural network (ANN) containing two sub-networks whose structures and parameter weights are identical. Typically, the inputs of a twin neural network are two signals (one-dimensional or multi-dimensional) that are largely similar yet differ in certain respects, and such a network is particularly good at measuring the similarity of its two inputs. Flat-scan CT and enhanced CT differ in the coronary region but are similar over most other regions, so the feature point extraction for the two images is completed through multi-layer information interaction based on the twin neural network. The scheme thus makes full use of the twin network's strength in extracting the similar features of its two inputs while handling their differences through intermediate multi-layer information interaction.
When determining the input features of the first and second neural networks, each network takes only a single type of image as input: the enhanced CT image serves as the input feature of the first neural network and the flat-scan CT as the input feature of the second, giving the first and second input features, and the corresponding feature point set labels serve as the output features for the respective inputs.
After the input and output features of each network are determined, model training can proceed. The training scheme adopted here alternately and iteratively trains the two neural networks using the back gradient propagation algorithm, and during training the network parameters of the first and second neural networks are shared. Parameter sharing means that whenever the parameters of one network are updated, the parameters of the other network are synchronously updated in the same way, ensuring that the two networks always hold identical parameters.
In another embodiment provided herein, the alternating iterative training of the first neural network and the second neural network with the back gradient propagation algorithm based on the first input features, the first output features, the second input features, and the second output features comprises: during each iteration, determining a first loss function based on the first input feature and the first output feature, and a second loss function based on the second input feature and the second output feature; alternately selecting the first loss function or the second loss function as the target loss function and determining the target neural network corresponding to the selected target loss function, where the target neural network corresponding to the first loss function is the first neural network and that corresponding to the second loss function is the second neural network; and updating the network parameters of the target neural network by back gradient propagation based on the target loss function while simultaneously updating the network parameters of the other neural network so that the parameters of the two networks remain identical.
Here, determining the first loss function based on the first input feature and the first output feature specifically means inputting the first input feature into the first neural network to obtain a first prediction result, then comparing the first prediction result with the first output feature to compute the first loss function. Likewise, determining the second loss function based on the second input feature and the second output feature means inputting the second input feature into the second neural network to obtain a second prediction result and comparing it with the second output feature to compute the second loss function.
For example, referring to fig. 3, fig. 3 is a schematic diagram of the alternating iterative training of the neural networks provided in the present application. As shown in fig. 3, the network parameters of the two neural networks are shared. In one training iteration, the enhanced CT is first fed into the upper network (the first neural network); the resulting feature point prediction is compared with the standard feature point result (the feature point set label) to obtain a loss value (the first loss function), the parameters of the upper network are updated by back gradient propagation, and the parameters of the lower network (the second neural network) are then updated to match, so the two networks again share the same parameters; the updated parameters bring the predicted feature points closer to the standard result. Next, the flat-scan CT is fed into the lower network; its feature point prediction is compared with the standard feature point result to obtain a loss value (the second loss function), the parameters of the lower network are updated by back gradient propagation, and the parameters of the upper network (the first neural network) are updated to match, again leaving the two networks identical. After alternating iterative training over a large number of enhanced CTs and flat-scan CTs to be trained, the loss values of the two networks no longer decrease, i.e. the networks reach the convergence state; training stops, and the first and second feature point extraction models are obtained.
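As a concrete illustration of this alternating, parameter-shared training, the sketch below uses PyTorch. Because all weights are shared, the two sub-networks are represented by a single module: updating it on an enhanced-CT batch and then on a flat-scan batch is equivalent to updating one sub-network and copying its parameters into the other. The keypoint-regression model, the data loaders, and the MSE loss are assumptions made for illustration; the patent does not specify them.

import torch
import torch.nn as nn

def train_alternating(model, enhanced_loader, flat_loader,
                      epochs=50, lr=1e-4):
    """Alternating iterative training with shared parameters. Each
    loader is assumed to yield (volume, feature_point_label) batches."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # stand-in for the patent's unspecified loss
    for _ in range(epochs):
        for (enh, enh_pts), (flat, flat_pts) in zip(enhanced_loader,
                                                    flat_loader):
            # Step 1: enhanced-CT branch (first loss function).
            optimizer.zero_grad()
            loss_fn(model(enh), enh_pts).backward()
            optimizer.step()  # the other branch inherits the update (shared weights)
            # Step 2: flat-scan branch (second loss function).
            optimizer.zero_grad()
            loss_fn(model(flat), flat_pts).backward()
            optimizer.step()
    return model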
Thus, after training is completed, in practical application the enhanced CT and the flat-scan CT can be fed in parallel into the upper and lower networks (the first and second neural networks) of the twin neural network, and the feature points of the two images are extracted in parallel.
And S104, determining a target transformation matrix between the first characteristic point set and the second characteristic point set through iterative computation based on the first characteristic point set and the second characteristic point set.
It should be noted that, in general, after the flat-scan CT is acquired, the patient is injected with contrast agent and the enhanced CT is acquired immediately afterwards, and the patient's position changes little throughout, so the heart region in the enhanced and flat-scan CT can be regarded as approximately unchanged. The flat-scan CT can therefore be registered to the spatial position of the enhanced CT using rigid registration (also referred to as rigid-body registration). The essence of a rigid-body transformation is a spatial transformation of coordinate points, i.e. of the feature points on the two images, and during the transformation the transformation matrix is the key that determines how points are mapped, so the target transformation matrix must also be computed.
In an embodiment provided by the present application, determining the target transformation matrix between the first feature point set and the second feature point set through iterative computation based on the two sets includes: acquiring an initial transformation matrix, which contains a plurality of parameters corresponding to a plurality of transformation types, each parameter set to a random initial value; performing an initial transformation on the feature points in the second feature point set based on the initial transformation matrix to obtain a transformed second feature point set; setting an objective function based on the coordinates of the transformed second feature points and the coordinates of the first feature points, the objective function giving the sum of the Euclidean distances between the transformed second feature points and the first feature points; and iteratively updating the random initial value of each parameter in the initial transformation matrix based on the objective function until the value of the objective function is smaller than a preset threshold, at which point updating stops and the target transformation matrix is obtained.
Here, the transformation types include at least a translation transformation, a rotation transformation, a perspective transformation, and a scaling transformation.
The coordinate conversion calculation formula corresponding to the transformation matrix is as follows:
$$
\begin{bmatrix} x' \\ y' \\ z' \\ w \end{bmatrix}
=
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & t_x \\
a_{21} & a_{22} & a_{23} & t_y \\
a_{31} & a_{32} & a_{33} & t_z \\
v_x & v_y & v_z & s
\end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
$$
As shown in the above formula, the transformation matrix contains four types of parameters: $a_{11}$ to $a_{33}$ denote the rotation parameters, $t_x$ to $t_z$ the translation parameters, $v_x$ to $v_z$ the perspective parameters, and $s$ the scaling factor. In essence, the rotation sub-matrix has only three free parameters, the rotation angles about the x, y, and z axes, so there are 10 unknown parameters in total; determining the target transformation matrix amounts to determining the specific values of these 10 unknowns.
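Before turning to the objective function, the following sketch shows how such a matrix can be assembled from its 10 free parameters and applied to points in homogeneous coordinates. The per-axis Euler-angle factorisation of the rotation block is an assumption; the patent states only that three angles parameterise it.

import numpy as np

def build_transform(params):
    """Build the 4x4 matrix from (rx, ry, rz, tx, ty, tz, vx, vy, vz, s):
    three rotation angles, a translation, three perspective terms,
    and a scale factor -- 10 unknowns in total."""
    rx, ry, rz, tx, ty, tz, vx, vy, vz, s = params
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx   # the a11..a33 rotation block
    M[:3, 3] = (tx, ty, tz)    # translation
    M[3, :3] = (vx, vy, vz)    # perspective
    M[3, 3] = s                # scale
    return M

def apply_transform(M, points):
    """Map an (n, 3) point array through M with the homogeneous divide."""
    homogeneous = np.c_[points, np.ones(len(points))]  # (n, 4)
    mapped = homogeneous @ M.T
    return mapped[:, :3] / mapped[:, 3:4]              # divide by w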
Denote the first feature point set of the enhanced CT image by $Q = \{q_1, q_2, \ldots, q_n\}$ and the second feature point set of the flat-scan CT image by $P = \{p_1, p_2, \ldots, p_n\}$. First, random initial values are set for the 10 unknown parameters of the initial transformation matrix, and all feature points in $P$ are transformed by this initial matrix to new coordinate points $P' = \{p_1', p_2', \ldots, p_n'\}$. An objective function is then set as the sum of the Euclidean distances between the transformed second feature points and the first feature points:
$$
f = \sum_{i=1}^{n} \left\lVert p_i' - q_i \right\rVert_2
$$
The objective function thus measures the Euclidean distance between the transformed feature points of the flat-scan CT image and the feature points of the enhanced CT image. The objective function is then optimised iteratively; when a transformation matrix composed of the 10 unknown parameters drives the value of the objective function below the preset threshold, the condition is met and the final target transformation matrix is obtained.
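The patent leaves the iterative update scheme unspecified. As one hedged realisation, the sketch below minimises the objective with SciPy's derivative-free Powell method, reusing build_transform and apply_transform from the previous sketch; the initial-value range and the threshold value are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def fit_transform(q, p, threshold=1.0):
    """Fit the 10 parameters so the transformed flat-scan points p land
    on the enhanced-CT points q (already matched pairwise)."""
    def objective(params):
        p_prime = apply_transform(build_transform(params), p)
        return np.linalg.norm(p_prime - q, axis=1).sum()  # sum of distances

    x0 = np.random.uniform(-0.01, 0.01, size=10)  # random initial values
    x0[9] = 1.0                                   # start near unit scale
    result = minimize(objective, x0, method="Powell")
    if result.fun >= threshold:
        # The patent keeps iterating (e.g. from a fresh random start)
        # until the objective falls below the preset threshold.
        raise RuntimeError("objective still above the preset threshold")
    return build_transform(result.x)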
And S105, carrying out coordinate transformation processing based on the flat-scan CT image and the target transformation matrix to obtain a transformed flat-scan CT image so as to realize image registration of the flat-scan CT image and the enhanced CT image.
Here, the transformed flat-scan CT image is obtained from the flat-scan CT image and the target transformation matrix: in essence, all pixel points of the flat-scan CT image are transformed, through the determined target transformation matrix, into the same space as the enhanced CT image, which yields the transformed flat-scan CT image.
Wherein the image registration is an image rigid registration.
It should be noted that image registration is a process of matching and superimposing two or more images acquired at different times and under different sensors (imaging devices) or under different conditions (weather, illumination, camera positions and angles, etc.), and is widely applied to the fields of remote sensing data analysis, computer vision, image processing, etc.
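As a sketch of this resampling step, suppose the target transformation matrix T maps flat-scan voxel coordinates into the enhanced-CT grid and its perspective terms are negligible (consistent with the rigid registration described above). SciPy's affine_transform pulls each output voxel back through the inverse mapping; the fill value of -1024 HU for voxels falling outside the flat-scan volume is an assumption.

import numpy as np
from scipy.ndimage import affine_transform

def warp_to_enhanced_space(flat_ct, T, output_shape):
    """Resample a flat-scan volume onto the enhanced-CT voxel grid.
    affine_transform expects the output->input mapping, so T (flat-scan
    coordinates -> enhanced coordinates) is inverted first."""
    T_inv = np.linalg.inv(T)
    return affine_transform(
        flat_ct,
        T_inv[:3, :3],           # linear part of the inverse mapping
        offset=T_inv[:3, 3],     # translation part
        output_shape=output_shape,
        order=1,                 # trilinear interpolation
        cval=-1024.0,            # air-like value outside the volume
    )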
And S106, subtracting the gray values of the corresponding pixel points of the transformed flat-scan CT image from the gray values of all the pixel points in the enhanced CT image to obtain a subtraction CT image.
For each pixel point of the enhanced CT image, the gray value of the pixel point at the corresponding position of the transformed flat-scan CT image is subtracted from that pixel's gray value; the image obtained after the gray values of all pixel points have been subtracted is the subtraction CT image.
For example, please refer to fig. 4, fig. 4 is a schematic diagram of a subtraction CT image provided in the present application. As shown in fig. 4, the flat-scan CT image subjected to spatial coordinate transformation is subtracted from the enhanced CT image to obtain the subtraction CT image.
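The final step is then a plain voxel-wise difference. A minimal sketch follows; casting to float before subtracting, to avoid unsigned wrap-around, is an implementation detail not stated in the patent.

import numpy as np

def subtract_ct(enhanced_ct, warped_flat_ct):
    """Subtract the registered flat-scan gray values from the enhanced
    gray values voxel by voxel; shared high-density structures such as
    calcifications and stents cancel, leaving the contrast-filled lumen."""
    return enhanced_ct.astype(np.float32) - warped_flat_ct.astype(np.float32)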
In this way, the machine-learning-based automatic feature point extraction removes the need to select feature points manually during image registration; it is fast and accurate, saves substantial manpower and time, reduces the burden on medical staff, and avoids inaccurate matching points caused by human factors. Moreover, because the target transformation matrix is constructed from heart-region feature point pairs, the registration keeps its emphasis on the heart region and prioritizes alignment of the coronary arteries.
The embodiment of the application provides a coronary artery CT image subtraction method, which comprises the following steps: acquiring an enhanced CT image and a flat-scan CT image of a target patient; the enhanced CT image and the flat-scan CT image are both CT images at the coronary artery of the target patient; inputting the enhanced CT image into a first feature point extraction model trained in advance, and extracting a first feature point set of the enhanced CT image; inputting the flat-scan CT image into a second feature point extraction model trained in advance, and extracting a second feature point set of the flat-scan CT image; determining a target transformation matrix between the first feature point set and the second feature point set through iterative computation based on the first feature point set and the second feature point set; performing coordinate transformation processing based on the flat-scan CT image and the target transformation matrix to obtain a transformed flat-scan CT image so as to realize image registration of the flat-scan CT image and the enhanced CT image; and subtracting the gray values of the corresponding pixel points of the transformed flat-scan CT image from the gray values of all the pixel points in the enhanced CT image to obtain a subtraction CT image.
In this way, feature points are extracted from the enhanced CT image and the flat-scan CT image by machine learning, matched pairwise to form point pairs, and used to construct the target transformation matrix; the flat-scan CT image is transformed into the enhanced-CT image space through the target transformation matrix, and finally the gray values at corresponding positions of the enhanced and flat-scan CT images are subtracted, completing the image subtraction task end to end. The entire scheme is fully automatic and requires no manual intervention.
Referring to fig. 5 and 6, fig. 5 is a first schematic structural diagram of a coronary CT image subtraction apparatus provided in an embodiment of the present application, and fig. 6 is a second schematic structural diagram of a coronary CT image subtraction apparatus provided in an embodiment of the present application. As shown in fig. 5, the subtraction apparatus 500 includes:
an acquisition module 510 for acquiring an enhanced CT image and a scout CT image of a target patient; the enhanced CT image and the flat scan CT image are both CT images at the coronary artery of the target patient;
a first extraction module 520, configured to input the enhanced CT image into a first feature point extraction model trained in advance, and extract a first feature point set of the enhanced CT image;
a second extracting module 530, configured to input the flat-scan CT image into a second feature point extracting model trained in advance, and extract a second feature point set of the flat-scan CT image;
a matrix determining module 540, configured to determine, through iterative computation, a target transformation matrix between the first feature point set and the second feature point set based on the first feature point set and the second feature point set;
a registration module 550, configured to perform coordinate transformation processing based on the flat-scan CT image and the target transformation matrix to obtain a transformed flat-scan CT image, so as to implement image registration between the flat-scan CT image and the enhanced CT image;
and a subtraction module 560, configured to subtract the gray values at the corresponding pixel points of the transformed flat-scan CT image from the gray values of all the pixel points in the enhanced CT image to obtain a subtracted CT image.
Optionally, as shown in fig. 6, the subtraction apparatus 500 further includes a model building module 570, where the model building module 570 is configured to:
acquiring a CT image training sample set; the CT image training sample set comprises enhanced CT images to be trained and flat-scan CT images to be trained at the coronary arteries of a plurality of patients, and each CT image in the CT image training sample set has a corresponding feature point set label;
determining the enhanced CT image to be trained as a first input feature of a first neural network, determining a feature point set label corresponding to the enhanced CT image to be trained as a first output feature of the first neural network, determining the to-be-trained flat-scan CT image as a second input feature of a second neural network, and determining a feature point set label corresponding to the to-be-trained flat-scan CT image as a second output feature of the second neural network;
performing an alternating iterative training of the first neural network and the second neural network using a back gradient propagation algorithm based on the first input features, the first output features, the second input features, and the second output features; wherein, during the alternating iterative training, network parameters of the first neural network and the second neural network are shared;
and when the first neural network and the second neural network reach a convergence state, stopping training to obtain a first characteristic point extraction model and a second characteristic point extraction model.
Optionally, when the matrix determining module 540 is configured to determine the target transformation matrix between the first feature point set and the second feature point set through iterative computation based on the first feature point set and the second feature point set, the matrix determining module 540 is configured to:
acquiring an initial transformation matrix; the initial transformation matrix comprises a plurality of parameters corresponding to a plurality of transformation types, and each parameter is provided with a random initial value;
performing initial transformation on the feature points in the second feature point set based on the initial transformation matrix to obtain a transformed second feature point set;
setting an objective function based on the coordinates of the transformed second feature points in the second feature point set and the coordinates of the first feature points in the first feature point set; the objective function is used for determining the sum of the Euclidean distances between the transformed second feature points and the first feature points;
and iteratively updating the random initial value corresponding to each parameter in the initial transformation matrix based on the objective function until the value of the objective function is smaller than a preset threshold value, at which point updating stops and the target transformation matrix is obtained.
Optionally, the initial network parameters of the first neural network and the second neural network are the same.
Optionally, when the model building module 570 is configured to perform an alternating iterative training on the first neural network and the second neural network by using an inverse gradient propagation algorithm based on the first input feature, the first output feature, the second input feature, and the second output feature, the model building module 570 is configured to:
during each iteration, determining a first loss function based on the first input feature and the first output feature, and a second loss function based on the second input feature and the second output feature;
alternately selecting the first loss function or the second loss function as a target loss function, and determining a target neural network corresponding to the selected target loss function; the target neural network corresponding to the first loss function is a first neural network, and the target neural network corresponding to the second loss function is a second neural network;
and updating the network parameters of the target neural network by using a reverse gradient propagation algorithm based on the target loss function, and updating the network parameters of the other neural network simultaneously so as to enable the network parameters of the two neural networks to be consistent.
Optionally, the transformation types include at least a translation transformation, a rotation transformation, a perspective transformation, and a scaling transformation.
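For concreteness, one common 3x3 homogeneous parameterization that composes all four types (a notational sketch, not necessarily the embodiment's exact factorization) is

$$
T =
\begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ p_1 & p_2 & 1 \end{pmatrix},
$$

where $t_x, t_y$ are the translations, $\theta$ the rotation angle, $s_x, s_y$ the scale factors, and $p_1, p_2$ the perspective terms. A point $(x, y)$ is mapped by applying $T$ to $(x, y, 1)^\top$ and dividing by the resulting third coordinate.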
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 7, the electronic device 700 includes a processor 710, a memory 720, and a bus 730.
The memory 720 stores machine-readable instructions executable by the processor 710. When the electronic device 700 runs, the processor 710 communicates with the memory 720 through the bus 730; when the machine-readable instructions are executed by the processor 710, the steps in the method embodiments shown in fig. 1 to 4 can be performed.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps in the method embodiments shown in fig. 1 to 4 can be executed.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that modifications or changes may still be made to the embodiments described above, and equivalent substitutions may be made for some features, within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application and are intended to be covered by its scope. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A subtraction method for coronary CT images, the subtraction method comprising:
acquiring an enhanced CT image and a flat-scan CT image of a target patient; the enhanced CT image and the flat-scan CT image are both CT images at the coronary artery of the target patient;
inputting the enhanced CT image into a first feature point extraction model trained in advance, and extracting a first feature point set of the enhanced CT image;
inputting the flat-scan CT image into a second feature point extraction model trained in advance, and extracting a second feature point set of the flat-scan CT image;
determining a target transformation matrix between the first feature point set and the second feature point set through iterative computation based on the first feature point set and the second feature point set;
performing coordinate transformation processing based on the flat-scan CT image and the target transformation matrix to obtain a transformed flat-scan CT image so as to realize image registration of the flat-scan CT image and the enhanced CT image;
and subtracting the gray values of the corresponding pixel points of the transformed flat-scan CT image from the gray values of all the pixel points in the enhanced CT image to obtain a subtraction CT image.
2. The subtraction method according to claim 1, wherein the first feature point extraction model and the second feature point extraction model are constructed by:
acquiring a CT image training sample set; the CT image training sample set comprises enhanced CT images to be trained and flat-scan CT images to be trained at coronary arteries of a plurality of patients, and each CT image in the CT image training sample set has a corresponding feature point set label;
determining the enhanced CT image to be trained as a first input feature of a first neural network, determining a feature point set label corresponding to the enhanced CT image to be trained as a first output feature of the first neural network, determining the flat-scan CT image to be trained as a second input feature of a second neural network, and determining a feature point set label corresponding to the flat-scan CT image to be trained as a second output feature of the second neural network;
performing alternating iterative training of the first neural network and the second neural network by using a gradient back-propagation algorithm based on the first input features, the first output features, the second input features, and the second output features; wherein, during the alternating iterative training, the network parameters of the first neural network and the second neural network are shared;
and when the first neural network and the second neural network reach a convergence state, stopping training to obtain a first feature point extraction model and a second feature point extraction model.
3. The subtraction method according to claim 1, wherein determining the target transformation matrix between the first feature point set and the second feature point set by iterative computation based on the first feature point set and the second feature point set comprises:
acquiring an initial transformation matrix; the initial transformation matrix comprises a plurality of parameters corresponding to a plurality of transformation types, and each parameter is provided with a random initial value;
performing initial transformation on the feature points in the second feature point set based on the initial transformation matrix to obtain a transformed second feature point set;
setting an objective function based on the coordinates of the transformed second feature points in the second feature point set and the coordinates of the first feature points in the first feature point set; the objective function is used for determining the sum of Euclidean distances between the transformed second feature points and the corresponding first feature points;
and iteratively updating, based on the objective function, the value of each parameter in the initial transformation matrix starting from its random initial value, and stopping the updating when the value of the objective function is smaller than a preset threshold, thereby obtaining the target transformation matrix.
4. The subtraction method according to claim 2, wherein the initial network parameters of the first neural network and the second neural network are the same.
5. The subtraction method according to claim 2, wherein performing alternating iterative training of the first neural network and the second neural network by using a gradient back-propagation algorithm based on the first input features, the first output features, the second input features, and the second output features comprises:
determining, during each iteration, a first loss function based on the first input features and the first output features, and a second loss function based on the second input features and the second output features;
alternately selecting the first loss function or the second loss function as a target loss function, and determining the target neural network corresponding to the selected target loss function; the target neural network corresponding to the first loss function is the first neural network, and the target neural network corresponding to the second loss function is the second neural network;
and updating the network parameters of the target neural network by using the gradient back-propagation algorithm based on the target loss function, while synchronously updating the network parameters of the other neural network so that the network parameters of the two neural networks remain consistent.
6. The subtraction method according to claim 3, wherein the transformation types include at least a translation transformation, a rotation transformation, a perspective transformation, and a scaling transformation.
7. A subtraction apparatus for coronary CT images, comprising:
an acquisition module for acquiring an enhanced CT image and a flat-scan CT image of a target patient; the enhanced CT image and the flat-scan CT image are both CT images at the coronary artery of the target patient;
the first extraction module is used for inputting the enhanced CT image into a first feature point extraction model trained in advance and extracting a first feature point set of the enhanced CT image;
the second extraction module is used for inputting the flat-scan CT image into a second feature point extraction model trained in advance and extracting a second feature point set of the flat-scan CT image;
a matrix determination module, configured to determine, based on the first feature point set and the second feature point set, a target transformation matrix between the first feature point set and the second feature point set through iterative computation;
a registration module, configured to perform coordinate transformation processing based on the flat-scan CT image and the target transformation matrix to obtain a transformed flat-scan CT image, so as to implement image registration between the flat-scan CT image and the enhanced CT image;
and the subtraction module is used for subtracting the gray values of the corresponding pixel points of the transformed flat-scan CT image from the gray values of all the pixel points in the enhanced CT image to obtain a subtraction CT image.
8. The subtraction apparatus according to claim 7, wherein the subtraction apparatus further comprises a model construction module for:
acquiring a CT image training sample set; the CT image training sample set comprises enhanced CT images to be trained and flat-scan CT images to be trained at the coronary arteries of a plurality of patients, and each CT image in the CT image training sample set has a corresponding feature point set label;
determining the enhanced CT image to be trained as a first input feature of a first neural network, determining a feature point set label corresponding to the enhanced CT image to be trained as a first output feature of the first neural network, determining the flat-scan CT image to be trained as a second input feature of a second neural network, and determining a feature point set label corresponding to the flat-scan CT image to be trained as a second output feature of the second neural network;
performing alternating iterative training of the first neural network and the second neural network by using a gradient back-propagation algorithm based on the first input features, the first output features, the second input features, and the second output features; wherein, during the alternating iterative training, the network parameters of the first neural network and the second neural network are shared;
and when the first neural network and the second neural network reach a convergence state, stopping training to obtain a first feature point extraction model and a second feature point extraction model.
9. An electronic device, comprising: processor, memory and bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is run, the machine-readable instructions when executed by the processor performing the steps of the subtraction method according to any of claims 1 to 6.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, performs the steps of the subtraction method according to any of the claims 1 to 6.
Priority Applications (1)

Application Number: CN202210746283.9A
Priority Date / Filing Date: 2022-06-28
Title: Coronary artery CT image subtraction method and device, electronic equipment and storage medium
Status: Pending

Publications (1)

Publication Number: CN115147320A
Publication Date: 2022-10-04

Family

ID=83410529

Country Status (1)

Country: CN
Publication: CN (1) CN115147320A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination