CN115511995A - Angiography image processing method and system


Info

Publication number
CN115511995A
Authority
CN
China
Prior art keywords
image
reconstruction
corrected
kernel
reconstructed
Legal status
Pending
Application number
CN202211288955.2A
Other languages
Chinese (zh)
Inventor
张佳胤
王佳宇
吴迪嘉
陈子融
宋燕丽
Current Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202211288955.2A
Publication of CN115511995A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 11/006 Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular
    • G06T 2211/00 Image generation
    • G06T 2211/40 Computed tomography
    • G06T 2211/404 Angiography
    • G06T 2211/424 Iterative

Abstract

An embodiment of the present specification provides an angiography image processing method, including: identifying one or more image regions to be corrected from an initial angiography image, where the initial angiography image is a computed tomography angiography image reconstructed using a first reconstruction kernel; for each image region to be corrected, generating a second-kernel reconstructed image corresponding to the image region to be corrected using an image reconstruction model, where the image reconstruction model is a trained deep learning model; and generating a target angiography image based on the initial angiography image and the second-kernel reconstructed image corresponding to each image region to be corrected, where the first reconstruction kernel is one of a smooth reconstruction kernel and a sharp reconstruction kernel, and the second reconstruction kernel is the other of the two.

Description

Angiography image processing method and system
Technical Field
The present disclosure relates to the field of medical imaging, and more particularly, to a method and system for processing angiographic images.
Background
Vascular disease may reduce the lumen diameter of blood vessels and thereby decrease blood flow, which in turn causes ischemia, hypoxia, and, in severe cases, necrosis in the corresponding portion of the heart. Computed tomography angiography (CTA) has become a primary means of clinically examining vascular conditions because it is non-invasive and simple to perform. The quality of angiographic images affects the accuracy of subsequent diagnosis.
Therefore, it is desirable to provide an angiographic image processing method and system that improve the clarity of angiographic images and facilitate observation and evaluation of the state of blood vessels.
Disclosure of Invention
One of the embodiments of the present specification provides an angiographic image processing method, including: identifying one or more image regions to be corrected from an initial angiography image, where the initial angiography image is a computed tomography angiography image reconstructed using a first reconstruction kernel; for each image region to be corrected, generating a second-kernel reconstructed image corresponding to the image region to be corrected using an image reconstruction model, where the image reconstruction model is a trained deep learning model; and generating a target angiography image based on the initial angiography image and the second-kernel reconstructed image corresponding to each image region to be corrected, where the first reconstruction kernel is one of a smooth reconstruction kernel and a sharp reconstruction kernel, and the second reconstruction kernel is the other of the two.
In some embodiments, the method identifies one or more image regions to be corrected from the initial angiographic image, that is, regions of the initial angiographic image where the reconstruction quality is poor. For each such region, the method generates a corresponding second-kernel reconstructed image using the image reconstruction model, and then, based on the initial angiographic image and the second-kernel reconstructed images, effectively re-reconstructs the poorly reconstructed regions. This improves the clarity of those regions and produces a target angiographic image that is clearer than the initial one, making it easier for a doctor to view and evaluate the vascular state.
In some embodiments, the identifying one or more image regions to be corrected from the initial angiographic image includes: performing vessel segmentation on the initial angiographic image to obtain a vessel mask image; identifying one or more target vessel window masks in the vessel mask image based on a correction region determination model; and determining the one or more image regions to be corrected in the initial angiographic image based on the one or more target vessel window masks.
In some embodiments, identifying one or more target vessel window masks in the vessel mask image based on the correction region determination model allows the one or more image regions to be corrected to be determined in the initial angiographic image relatively quickly and accurately.
In some embodiments, the identifying one or more target vessel window masks in the vessel mask image based on the correction region determination model includes: determining, for each sampling point on a vessel centerline in the vessel mask image, a vessel window mask corresponding to the sampling point; determining, for each sampling point, image features related to the vessel window mask; and determining, based on the correction region determination model and the image features, whether the vessel window mask corresponding to the sampling point is a target vessel window mask.
In some embodiments, by determining a vessel window mask for each sampling point on the vessel centerline, extracting image features related to each vessel window mask, and classifying each mask with the correction region determination model, the target vessel window masks can be determined relatively quickly and accurately.
In some embodiments, the determining the one or more image regions to be corrected in the initial angiographic image based on the one or more target vessel window masks includes: determining one or more positive vessel segment masks based on the one or more target vessel window masks; and for each positive vessel segment mask, segmenting the region corresponding to that mask from the initial angiographic image as an image region to be corrected.
In some embodiments, determining positive vessel segment masks from the target vessel window masks and segmenting the corresponding regions from the initial angiographic image allows the image regions to be corrected to be extracted at the level of whole vessel segments.
In some embodiments, the generating, using an image reconstruction model, a second-kernel reconstructed image corresponding to the image region to be corrected includes: preprocessing the image region to be corrected to generate a preprocessed image region to be corrected; processing the preprocessed image region with the image reconstruction model to determine the deviation between the image region to be corrected and the second-kernel reconstructed image; and generating the second-kernel reconstructed image based on the deviation and the image region to be corrected.
In some embodiments, the second-kernel reconstructed image can be generated quickly by preprocessing the image region to be corrected, processing the preprocessed region with the image reconstruction model to determine its deviation from the second-kernel reconstructed image, and combining the deviation with the image region to be corrected. Furthermore, the deviation is easier to learn: the image reconstruction model outputs the deviation between the smooth-kernel image and the sharp-kernel image rather than directly predicting the second-kernel reconstructed image, so the model can process images faster.
In some embodiments, the image reconstruction model is trained using the following process: obtaining at least two training samples, where each training sample includes a sample first-kernel reconstructed image and a corresponding sample second-kernel reconstructed image; training an initial model with the at least two training samples to generate a trained model; and determining the image reconstruction model based on the trained model.
In some embodiments, the identifying one or more image regions to be corrected from the initial angiographic image includes: extracting a vessel centerline in the initial angiographic image; generating a vessel-straightened image corresponding to the initial angiographic image according to the vessel centerline; and identifying the one or more image regions to be corrected from the vessel-straightened image.
In some embodiments, extracting the vessel centerline, generating the corresponding vessel-straightened image, and identifying the image regions to be corrected from the vessel-straightened image for reconstruction makes the subsequently generated target angiographic image more intuitive, so that a doctor can view and evaluate the vascular state more conveniently.
In some embodiments, the first reconstruction kernel is a smooth reconstruction kernel and the second reconstruction kernel is a sharp reconstruction kernel.
One of the embodiments of the present specification provides an angiographic image processing system, the system comprising: a region identification module configured to identify one or more image regions to be corrected from an initial angiography image, where the initial angiography image is a computed tomography angiography image reconstructed using a first reconstruction kernel; an image reconstruction module configured to generate, for each image region to be corrected, a second-kernel reconstructed image corresponding to the image region using an image reconstruction model, where the image reconstruction model is a trained deep learning model; and an image generation module configured to generate a target angiographic image based on the initial angiographic image and the second-kernel reconstructed image corresponding to each image region to be corrected, where the first reconstruction kernel is one of a smooth reconstruction kernel and a sharp reconstruction kernel, and the second reconstruction kernel is the other of the two.
One of the embodiments of the present specification provides an angiographic image processing apparatus, including a processor, configured to execute the angiographic image processing method described above.
One of the embodiments of the present specification provides a computer-readable storage medium storing computer instructions, and when the computer instructions in the storage medium are read by a computer, the computer executes the above-mentioned angiographic image processing method.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of an angiographic image processing system according to some embodiments of the present description;
FIG. 2 is a block schematic diagram of an angiographic image processing system in accordance with some embodiments of the present description;
FIG. 3 is an exemplary flow diagram of an angiographic image processing method according to some embodiments of the present description;
FIG. 4 is an exemplary flow diagram illustrating the determination of one or more image regions to be corrected by vessel segmentation of an initial angiographic image according to some embodiments of the present description;
FIG. 5 is an exemplary flow diagram illustrating the generation of a second-kernel reconstructed image corresponding to an image region to be corrected using an image reconstruction model, according to some embodiments of the present description;
FIG. 6 is a schematic illustration of a process of training an initial model, according to some embodiments of the present description;
FIG. 7 is a schematic illustration of an initial angiographic image and a target angiographic image according to some embodiments of the present description;
FIG. 8 is a schematic illustration of an initial angiographic image and a target angiographic image according to further embodiments of the present description;
FIG. 9 is a schematic view of a vessel window according to further embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that a person skilled in the art can, without inventive effort, apply the present description to other similar contexts on the basis of these drawings. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the terms "a," "an," "one," and/or "the" do not refer specifically to the singular and may include the plural unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" indicate only that the explicitly identified steps or elements are covered; these steps and elements do not constitute an exclusive list, and the method or apparatus may also include other steps or elements.
Flowcharts are used in this specification to illustrate the operations performed by the system according to embodiments of the present specification. It should be understood that the operations need not be performed exactly in the order shown. Rather, the various steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to the processes, or one or more steps may be removed from them.
Reconstruction of CTA images is generally based on either a smooth reconstruction kernel or a sharp reconstruction kernel, and the kernel used differs with the target site; for example, cardiovascular and cerebrovascular CTA images are generally reconstructed with a smooth reconstruction kernel. With a smooth reconstruction kernel, the blood vessels and tissues in the reconstructed CTA image appear smoother, which is suitable for observing soft plaque or stenosis in mildly calcified vessels. However, due to beam hardening artifacts, the vessels in heavily calcified regions or stent regions of such an image are blurred, which hinders observing and evaluating the degree of stenosis there. Heavily calcified or stent regions are therefore generally reconstructed with a sharp kernel: in a sharp-kernel reconstruction, the vessels in heavily calcified or stent regions are relatively clear, which aids in observing and evaluating the degree of stenosis, but the remaining vessels and tissues are not smooth enough.
Therefore, when soft plaque or mildly calcified vessels coexist with heavily calcified or stented vessels in a target site, reconstruction with a single kernel does not allow the degree of stenosis of all vessels to be observed and evaluated well. It is thus desirable to provide an angiographic image processing method and system that combine the two reconstruction kernels to reconstruct a CTA image, so that regions with different degrees of calcification, stent regions, and non-stent regions are all clear in the reconstructed CTA image, facilitating observation and evaluation of vascular stenosis.
Fig. 1 is a schematic diagram of an application scenario 100 of an angiographic image processing system according to some embodiments of the present description.
As shown in fig. 1, in some embodiments, the application scenario 100 may include a processing device 110, a network 120, a user terminal 130, a storage device 140, and a vessel imaging device 150. The application scenario 100 may generate a target angiographic image by implementing the methods and/or processes disclosed herein.
The processing device 110 may be used to process data from at least one component of the application scenario 100 or from an external data source (e.g., a cloud data center). For example, the processing device 110 may acquire an initial angiographic image from the storage device 140 or the vessel imaging device 150 and process the acquired data. For example, the processing device 110 may identify one or more image regions to be corrected from the initial angiographic image; for each image region to be corrected, generate a corresponding second-kernel reconstructed image using the image reconstruction model; and generate a target angiographic image based on the initial angiographic image and the second-kernel reconstructed image corresponding to each image region to be corrected. In some embodiments, the processing device 110 may be a single server or a group of servers, and it may be local or remote.
The network 120 may include any suitable network capable of facilitating the exchange of information and/or data within the application scenario 100. In some embodiments, information and/or data may be exchanged between one or more components of the application scenario 100 (e.g., the processing device 110, the user terminal 130, the storage device 140, and/or the vessel imaging device 150) via the network 120.
In some embodiments, the network 120 may be any one or more of a wired network or a wireless network. In some embodiments, network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, e.g., base stations and/or network switching points, through which one or more components of the application scenario 100 may connect to the network 120 to exchange data and/or information.
The user terminal 130 refers to one or more terminals or software used by a user (e.g., a nurse or a doctor). In some embodiments, the user terminal 130 may include, but is not limited to, a smartphone, a tablet, a laptop, a desktop computer, and the like. In some embodiments, the user terminal 130 may interact with other components in the application scenario 100 through the network 120. For example, the user terminal 130 may send one or more control instructions to the processing device 110 to control it to generate the target angiographic image based on the initial angiographic image. As another example, the user terminal 130 may acquire the target angiographic image from the processing device 110 and present it to the user.
The storage device 140 may be used to store data, instructions, and/or any other information. In some embodiments, the storage device 140 may store data and/or information obtained from the processing device 110, the user terminal 130, the vessel imaging device 150, and the like. For example, the storage device 140 may store an initial angiographic image acquired from the vessel imaging device 150. In some embodiments, the storage device 140 may include mass storage, removable storage, and the like, or any combination thereof.
The vessel imaging device 150 may image the blood vessels of a target object. For example, the vessel imaging device 150 may include a CT device that scans a target object to acquire CTA images. By way of example only, after a drug (e.g., a contrast agent) is injected intravenously into a target object, the target site may be scanned with the CT device while the drug passes through the blood vessels of the target site, thereby acquiring a CTA image.
It should be noted that the application scenario 100 is provided for illustrative purposes only and is not intended to limit the scope of the present description. It will be apparent to those skilled in the art that various modifications and variations can be made in light of the description of the present specification. For example, the application scenario 100 may also include one or more other components, or one or more of the components described above may be omitted. However, such changes and modifications do not depart from the scope of the present specification.
Fig. 2 is a block schematic diagram of an angiographic image processing system 200 according to some embodiments of the present description. As shown in fig. 2, the angiographic image processing system 200 may include a region identification module 210, an image reconstruction module 220, and an image generation module 230.
The region identification module 210 may be configured to identify one or more image regions to be corrected from an initial angiographic image, where the initial angiographic image is a computed tomography angiography image reconstructed using a first reconstruction kernel. In some embodiments, the region identification module 210 may be configured to perform vessel segmentation on the initial angiographic image to obtain a vessel mask image; identify one or more target vessel window masks in the vessel mask image based on the correction region determination model; and determine the one or more image regions to be corrected in the initial angiographic image based on the one or more target vessel window masks. In some embodiments, the region identification module 210 may be further configured to determine, for each sampling point on the vessel centerline in the vessel mask image, a vessel window mask corresponding to the sampling point; determine, for each sampling point, image features related to the vessel window mask; and determine, based on the correction region determination model and the image features, whether the vessel window mask corresponding to the sampling point is a target vessel window mask. In some embodiments, the region identification module 210 may be configured to determine one or more positive vessel segment masks based on the one or more target vessel window masks, and, for each positive vessel segment mask, segment the corresponding region from the initial angiographic image as an image region to be corrected. In some embodiments, the region identification module 210 may be used to extract the vessel centerline in the initial angiographic image, generate a vessel-straightened image corresponding to the initial angiographic image according to the vessel centerline, and identify one or more image regions to be corrected from the vessel-straightened image.
The image reconstruction module 220 may be configured to generate, for each image region to be corrected, a second-kernel reconstructed image corresponding to the image region using an image reconstruction model, where the image reconstruction model is a trained deep learning model. In some embodiments, the image reconstruction module 220 may be further configured to preprocess the image region to be corrected to generate a preprocessed image region; process the preprocessed image region with the image reconstruction model to determine the deviation between the image region to be corrected and the second-kernel reconstructed image; and generate the second-kernel reconstructed image based on the deviation and the image region to be corrected.
The image generation module 230 may be configured to generate a target angiographic image based on the initial angiographic image and the second-kernel reconstructed image corresponding to each image region to be corrected.
It should be noted that the above description of the angiographic image processing system and its modules is for convenience only and does not limit the scope of the present disclosure. It will be appreciated that, with an understanding of the principles of the system, those skilled in the art may combine the modules arbitrarily or form subsystems connected to other modules without departing from these principles. In some embodiments, the region identification module 210, the image reconstruction module 220, and the image generation module 230 disclosed in fig. 2 may be different modules of a single system, or one module may implement the functions of two or more of them. For example, the modules may share a single storage module, or each module may have its own storage module. Such variations are within the scope of the present description.
Fig. 3 is an exemplary flow diagram of an angiographic image processing method 300 in accordance with some embodiments of the present description. In some embodiments, the method 300 may be performed by the processing device 110 or the angiographic image processing system 200. As shown in fig. 3, the method 300 includes the following steps.
At step 310, one or more image regions to be corrected are identified from the initial angiographic image. In some embodiments, step 310 may be performed by the area identification module 210.
The initial angiographic image may be a computed tomography (CT) image generated by angiographic imaging of the target object using a CT device. For example, after a drug is injected intravenously into a target object and passes through the blood vessels of a target site, the target site may be scanned using the vessel imaging device 150 to acquire CT scan data. The CT scan data are then reconstructed with the first reconstruction kernel to obtain the initial angiographic image.
The target object may include a human body, an animal, and the like. The target site may be the entire target object or a portion of the target object. For example, the target site may be the head, chest, abdomen, heart, liver, upper limbs, lower limbs, or the like, or any combination of the foregoing. By way of example only, the initial angiographic image may be an angiographic image of a cerebral artery, a carotid artery, and/or a coronary artery, etc., of the target object.
The initial angiographic image may be a two-dimensional (2D) image or a three-dimensional (3D) image, in a format such as Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), or Graphics Interchange Format (GIF).
The first reconstruction kernel may be a mathematical algorithm used to process the scan data acquired in angiographic imaging to reconstruct the initial angiographic image. In some embodiments, the first reconstruction kernel may be one of a smooth reconstruction kernel and a sharp reconstruction kernel. A smooth reconstruction kernel is a low-pass filter that passes only low-frequency components; a sharp reconstruction kernel is a high-pass filter that passes only high-frequency components. In this specification, a high-frequency component may be a component whose frequency exceeds a first threshold, and a low-frequency component may be a component whose frequency is below a second threshold; the first and second thresholds may be the same or different. Images reconstructed with a smooth kernel and with a sharp kernel have different characteristics. For example, in an image reconstructed with a smooth kernel, vessels and tissues appear smoother, which is suitable for observing soft plaque or stenosis of mildly calcified vessels. However, due to beam hardening artifacts, calcified vessels or vessels in stent regions appear blurred in such an image, which hinders observing and evaluating their degree of stenosis. In an image reconstructed with a sharp kernel (e.g., the sharp kernel U70u), the vessels in calcified or stent regions are relatively clear, which aids in observing and evaluating their degree of stenosis, but the smoothness of other regions is poor. In some embodiments, the first reconstruction kernel may be a smooth reconstruction kernel (e.g., the smooth kernel B20f).
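As a rough intuition only (an image-space analogy, not the patent's projection-domain reconstruction), a smooth kernel behaves like a low-pass averaging filter while a sharp kernel boosts high-frequency detail. The kernels below are common textbook examples, not the clinical B20f/U70u kernels:

```python
import numpy as np
from scipy.ndimage import convolve

smooth_kernel = np.full((3, 3), 1.0 / 9.0)              # low-pass: local averaging blurs edges
sharpen_kernel = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=float)  # boosts high-frequency edges

def kernel_demo(image: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (smoothed, sharpened) versions of a 2D gray-value image."""
    return convolve(image, smooth_kernel), convolve(image, sharpen_kernel)
```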
In some embodiments, the region identification module 210 may acquire the initial angiographic image in any manner. For example, the region identification module 210 may acquire an initial angiographic image from the processing device 110, the user terminal 130, the storage device 140, the vessel imaging device 150, and/or an external data source. Alternatively, the region identification module 210 may acquire scan data acquired in the angiographic imaging from the angiographic device 150 and reconstruct the scan data using the first reconstruction kernel to generate an initial angiographic image.
In some embodiments, the region identification module 210 may extract a vessel centerline in the initial angiographic image, generate a vessel-straightened image corresponding to the initial angiographic image according to the vessel centerline, and identify one or more image regions to be corrected from the vessel-straightened image. The vessel-straightened image may be an image obtained by straightening and resampling the blood vessels along their centerlines in the initial angiographic image.
In some embodiments, the region identification module 210 may extract the vessel centerline in the initial angiographic image in any manner. For example, the region identification module 210 may use morphological erosion to strip the outer layers of the vessels in the initial angiographic image until only their skeleton remains, and then perform a tree traversal ordering of the skeleton tree to obtain the vessel centerline. As another example, the region identification module 210 may process the initial angiographic image with a vessel segmentation model to obtain a vessel segmentation map and determine the vessel centerline from it. As yet another example, the region identification module 210 may process the initial angiographic image with a centerline extraction model to obtain the vessel centerline directly. The vessel segmentation model and the centerline extraction model may be models trained using machine learning algorithms.
In some embodiments, the region identification module 210 may generate the vessel-straightened image corresponding to the initial angiographic image based on the vessel centerline in any manner. For example only, the region identification module 210 may extend each point on the vessel centerline in the positive and negative directions to obtain a specified number of points, connect each group of four adjacent points into a quadrilateral, and combine the quadrilaterals into a curved surface. The region identification module 210 may map the world coordinates of each point on the curved surface to image coordinates in the initial angiographic image and obtain the corresponding pixel gray value by interpolation, yielding a gray-value surface. The region identification module 210 may then straighten and project the curved surface onto a rectangular plane to obtain the vessel-straightened image of the initial angiographic image.
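A minimal 2D sketch of this straightening idea: sample the gray values along lines perpendicular to the centerline and stack them into a rectangular image. The function name, the use of SciPy interpolation, and the 2D simplification are illustrative assumptions, not the patent's implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def straighten_vessel(image: np.ndarray, centerline: np.ndarray, half_width: int = 20) -> np.ndarray:
    """Resample `image` along lines perpendicular to the centerline.

    image:      2D array of gray values.
    centerline: (N, 2) array of (row, col) points ordered along the vessel.
    Returns an (N, 2*half_width+1) straightened image.
    """
    # Tangent direction at each centerline point (central differences).
    tangents = np.gradient(centerline.astype(float), axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True) + 1e-8
    # Normal = tangent rotated by 90 degrees.
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)

    offsets = np.arange(-half_width, half_width + 1)       # signed distances from the centerline
    # For every centerline point, sample points along its normal direction.
    rows = centerline[:, 0:1] + normals[:, 0:1] * offsets  # (N, W)
    cols = centerline[:, 1:2] + normals[:, 1:2] * offsets  # (N, W)
    # Bilinear interpolation of gray values at non-integer coordinates.
    return map_coordinates(image, [rows, cols], order=1, mode="nearest")
```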
In some embodiments, extracting the vessel centerline in the initial angiographic image, generating the corresponding vessel-straightened image according to the centerline, and identifying one or more image regions to be corrected from the vessel-straightened image for reconstruction makes the subsequently generated target angiographic image more intuitive, so that a doctor can view and evaluate the vascular state more conveniently. For convenience of description, the following takes the initial angiographic image as an example. It will be appreciated that in some other embodiments, the vessel-straightened image may be subjected to the same or similar processing to generate the target angiographic image.
The image region to be corrected may be a region of the initial angiographic image that is poorly reconstructed and needs correction.
As described above, in an image reconstructed with a smooth kernel, the vessels in heavily calcified or stent regions are blurred while other regions are smoother, whereas in an image reconstructed with a sharp kernel, the vessels in heavily calcified or stent regions are relatively clear while other regions lack smoothness. Thus, when the first reconstruction kernel is a smooth kernel and the second is a sharp kernel, the image regions to be corrected may include the heavily calcified regions and/or stent regions of the initial angiographic image. In some embodiments, the image regions to be corrected may include regions of the initial angiographic image with larger gray values, which are more likely to be stent regions or heavily calcified regions.
When the first reconstruction kernel is a sharp kernel and the second is a smooth kernel, the image regions to be corrected may include the non-heavily-calcified and non-stent regions of the initial angiographic image. In some embodiments, the image regions to be corrected may include regions with smaller gray values, which are more likely to be non-heavily-calcified, non-stent regions.
For the convenience of understanding, the present solution is described below by taking the first reconstruction kernel as a smooth reconstruction kernel and the second reconstruction kernel as a sharp reconstruction kernel as an example.
In some embodiments, the region identification module 210 may identify one or more image regions to be corrected from the initial angiographic image in any manner. For example, the region identification module 210 may divide the initial angiographic image into a plurality of sub-regions (e.g., sub-regions of the same size and shape), compute the mean gray value of each sub-region, and judge whether a sub-region is an image region to be corrected based on its mean gray value and a gray threshold. For example only, when the mean gray value of a sub-region is 1.5 times the gray threshold, the region identification module 210 may treat that sub-region as an image region to be corrected. The gray threshold may be a system default or set empirically by the user; alternatively, it may be determined statistically from the mean gray values of the image regions to be corrected in historical angiographic images.
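A minimal sketch of this screening rule, assuming a 2D image, a hypothetical block size, and a hypothetical gray threshold (the 1.5 factor follows the example above):

```python
import numpy as np

def find_regions_to_correct(image: np.ndarray, block: int = 32,
                            gray_threshold: float = 500.0) -> list[tuple[int, int]]:
    """Return top-left corners of blocks whose mean gray value is at least
    1.5 times the gray threshold (candidate heavily calcified/stent regions)."""
    candidates = []
    h, w = image.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if image[r:r + block, c:c + block].mean() >= 1.5 * gray_threshold:
                candidates.append((r, c))
    return candidates
```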
In some embodiments, the region identification module 210 may perform vessel segmentation on the initial angiographic image to obtain a vessel mask image; identify one or more target vessel window masks in the vessel mask image based on the correction region determination model; and determine one or more image regions to be corrected in the initial angiographic image based on the one or more target vessel window masks. Further description of identifying target vessel window masks and determining image regions to be corrected from them can be found in fig. 4 and its related description, which are not repeated here.
Step 320: for each image region to be corrected, generate a second-kernel reconstructed image corresponding to the image region using the image reconstruction model. In some embodiments, step 320 may be performed by the image reconstruction module 220.
The second-kernel reconstructed image is a virtual image generated by the image reconstruction model. It may be used to simulate the image that would be generated by reconstructing the scan data of the target object with the second reconstruction kernel. That is, some embodiments of this specification do not reconstruct the scan data again with the second reconstruction kernel; instead, they generate the second-kernel reconstructed image for the image region to be corrected using the image reconstruction model. Generating the second-kernel reconstructed image is thus an image processing step rather than a scan image generation step performed by the vessel imaging device 150, and it may be carried out independently of the vessel imaging device 150. The second reconstruction kernel is an image reconstruction algorithm different from the first reconstruction kernel. In some embodiments, the second reconstruction kernel may be the other of the smooth reconstruction kernel and the sharp reconstruction kernel; for example, the first reconstruction kernel may be a smooth kernel and the second a sharp kernel.
The image reconstruction model may be a trained deep learning model used to simulate the process of reconstructing an image with the second reconstruction kernel. The image reconstruction model may be one of a convolutional neural network (CNN), a deep neural network (DNN), a recurrent neural network (RNN), a multilayer perceptron (MLP), a generative adversarial network (GAN), or any combination thereof. For example, the image reconstruction model may combine a convolutional neural network with a deep neural network. In some embodiments, the image reconstruction model may be the generative model of a generative adversarial network.
In some embodiments, the image reconstruction model may be pre-trained by the processing device 110 or another device and stored in the storage device 140 or another storage device, from which the image reconstruction module 220 may retrieve the trained model directly. In some embodiments, the image reconstruction module 220 may itself train an initial model to generate the image reconstruction model.
For ease of understanding, the training process is described below as performed by the image reconstruction module 220. For example, the image reconstruction module 220 may obtain at least two training samples (also referred to as first training samples), where each training sample includes a sample first-kernel reconstructed image and a corresponding sample second-kernel reconstructed image. The sample first-kernel reconstructed image may be generated by reconstructing angiographic scan data of a sample object with the first reconstruction kernel, and the sample second-kernel reconstructed image by reconstructing the same scan data with the second reconstruction kernel. The image reconstruction module 220 may train an initial model with the at least two training samples to generate a trained model, and determine the image reconstruction model based on it. In some embodiments, the image reconstruction module 220 may take the trained model, or the part of it that satisfies a preset condition after training, as the image reconstruction model. The preset condition may be that the loss function converges, that the loss value falls below a preset value, that the number of iterations exceeds a preset number, or the like. In some embodiments, the image reconstruction module 220 may obtain training samples from the processing device 110, the user terminal 130, the storage device 140, and/or an external data source. Further description of training the image reconstruction model may be found elsewhere in this specification, e.g., in the description of FIG. 6.
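A minimal PyTorch training sketch under this scheme, in which the network regresses the deviation between paired smooth-kernel and sharp-kernel patches of the same scan (the architecture, loss, and hyperparameters are illustrative assumptions, not the patent's):

```python
import torch
import torch.nn as nn

model = nn.Sequential(               # stand-in for the deep learning model
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(smooth_patch: torch.Tensor, sharp_patch: torch.Tensor) -> float:
    """One update: predict the sharp-minus-smooth deviation from the smooth patch."""
    target_deviation = sharp_patch - smooth_patch     # what the model should learn
    predicted_deviation = model(smooth_patch)
    loss = loss_fn(predicted_deviation, target_deviation)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```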
In some embodiments, the image reconstruction module 220 may input the image region to be corrected into the image reconstruction model, which generates the corresponding second-kernel reconstructed image from it. In some embodiments, a preprocessing operation is applied to the image region to be corrected before it is input into the model, and a corresponding post-processing operation is applied to the model output to generate the second-kernel reconstructed image. In some embodiments, the image reconstruction model may output information related to the second-kernel reconstructed image, e.g., the deviation between the second-kernel reconstructed image and the image region to be corrected, and the image reconstruction module 220 may generate the second-kernel reconstructed image from that information.
By way of example only, fig. 5 provides an exemplary flowchart for generating a second-kernel reconstructed image corresponding to an image region to be corrected using an image reconstruction model, according to some embodiments of the present disclosure. As shown in fig. 5, in some embodiments, the image reconstruction module 220 may preprocess the image region to be corrected 510 to generate a preprocessed image region to be corrected 520; process the preprocessed region 520 with the image reconstruction model 530 to determine a deviation image 540 between the image region to be corrected 510 and the second-kernel reconstructed image; and generate the second-kernel reconstructed image 550 based on the deviation image 540 and the image region to be corrected 510.
In some embodiments, the preprocessing may include dilating the image region to be corrected 510 to obtain a dilated image region. Dilating the image region avoids under-segmentation and reduces errors.
In some embodiments, the preprocessing may include normalizing the image region to be corrected 510 or the dilated image region. For example, the image reconstruction module 220 may normalize it using the following formula:
y1 = (x1 - mean) / std;
where x1 denotes the image gray value of the image region to be corrected 510 or of the dilated image region, y1 denotes the image gray value after normalization, mean is a preset density average, and std is a standard-deviation constant. In some embodiments, mean may be 600 and std may be 800.
In some embodiments, the pre-processing may also include other image processing operations, such as image denoising, image enhancement, and the like.
In some embodiments, the image reconstruction module 220 may input the preprocessed image region to be corrected into the image reconstruction model 530, and the model 530 outputs the deviation image between the image region to be corrected 510 and the second-kernel reconstructed image 550. In some embodiments, the image reconstruction module 220 may denormalize this deviation image. For example only, the image reconstruction module 220 may denormalize the deviation image using the following formula:
x2 = y2 * std + mean;
where x2 is the image gray value of the denormalized deviation image and y2 is the image gray value of the deviation image output by the model.
Further, the image reconstruction module 220 may add the denormalized deviation image to the image region to be corrected to generate the second-kernel reconstructed image 550.
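Putting the steps of fig. 5 together, a minimal sketch of the normalize-predict-denormalize-add pipeline, using the mean = 600 and std = 800 values from the text (the function name is illustrative, and `model` stands for the trained image reconstruction model):

```python
import numpy as np

MEAN, STD = 600.0, 800.0

def reconstruct_with_second_kernel(region: np.ndarray, model) -> np.ndarray:
    normalized = (region - MEAN) / STD   # y1 = (x1 - mean) / std
    deviation = model(normalized)        # model outputs the normalized deviation image
    deviation = deviation * STD + MEAN   # x2 = y2 * std + mean (denormalization, per the text)
    return region + deviation            # add the deviation to the region to be corrected
```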
In some embodiments, the second-kernel reconstructed image can be generated quickly by processing the image region to be corrected, or its preprocessed version, with the image reconstruction model. Further, the image reconstruction model outputs the deviation image between the image region to be corrected and the second-kernel reconstructed image rather than directly predicting the second-kernel reconstructed image, so the model can process images faster; and because the deviation is easier to learn, training the image reconstruction model is also more efficient.
In some embodiments, when there are multiple image regions to be corrected, the image reconstruction module 220 may process them sequentially or simultaneously. Compared with processing the entire initial angiographic image directly, first determining the image regions to be corrected and then processing only those regions reduces the amount of data to process and improves the efficiency of generating the target angiographic image.
Step 330: generate a target angiographic image based on the initial angiographic image and the second-kernel reconstructed image corresponding to each image region to be corrected. In some embodiments, step 330 may be performed by the image generation module 230.
The target angiographic image may be an angiographic image obtained by processing the initial angiographic image based on the second-kernel reconstructed image corresponding to each image region to be corrected.
In some embodiments, for each image region to be corrected in the initial angiographic image, the image generation module 230 may replace the region with its corresponding second-kernel reconstructed image to generate the target angiographic image.
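A minimal sketch of this replacement step, assuming each corrected patch is tracked by the top-left corner of its region (the bookkeeping and function name are illustrative):

```python
import numpy as np

def generate_target_image(initial: np.ndarray,
                          patches: list[tuple[tuple[int, int], np.ndarray]]) -> np.ndarray:
    """patches: list of ((row, col), patch) pairs, one per corrected region."""
    target = initial.copy()
    for (r, c), patch in patches:
        h, w = patch.shape
        target[r:r + h, c:c + w] = patch  # replace the region to be corrected
    return target
```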
In some embodiments, the method 300 first identifies the image regions to be corrected in the initial angiographic image, i.e., the regions where the first reconstruction kernel performed poorly, and reconstructs them again using the image reconstruction model, thereby improving the clarity of those regions, generating the target angiographic image, and making it easier for a doctor to view and evaluate the vascular state. Taking the first reconstruction kernel as the smooth kernel and the second as the sharp kernel as an example, the method 300 may identify the regions of the initial angiographic image where stents or heavily calcified vessels are located and generate corresponding second-kernel (sharp-kernel) reconstructed images to replace those regions, yielding the target angiographic image. In the target angiographic image, the vessel regions with soft plaque or mild calcified plaque and the other regions are produced by smooth-kernel reconstruction and have good smoothness, while the stent and heavily calcified vessel regions are predicted sharp-kernel reconstructions with good clarity. The target angiographic image thus combines the advantages of smooth-kernel and sharp-kernel reconstruction, improving the accuracy and efficiency of subsequent diagnosis.
Fig. 7 is a schematic illustration of an initial angiographic image and a target angiographic image according to some embodiments of the present description. The left image in fig. 7 is the initial angiographic image 710, and the right image is the target angiographic image 720 obtained by correcting the initial angiographic image with the method 300. As shown in fig. 7, the clarity of the calcified region 740 of the target angiographic image 720 is effectively improved compared to the calcified region 730 of the initial angiographic image 710. Fig. 8 is a schematic illustration of an initial angiographic image and a target angiographic image according to further embodiments of the present description. The left image in fig. 8 is the initial angiographic image 810, and the right image is the target angiographic image 820 obtained by correcting the initial angiographic image with the method 300. As shown in fig. 8, the clarity of the calcified region 840 of the target angiographic image 820 is effectively improved compared to the calcified region 830 of the initial angiographic image 810.
It should be noted that the above description of the angiographic image processing method 300 is for illustration and description only and is not intended to limit the scope of applicability of the present description. Various modifications and alterations to the angiographic image processing method 300 will be apparent to those skilled in the art in light of the present description. However, such modifications and variations are intended to be within the scope of the present description.
Fig. 4 is an exemplary flow chart illustrating the determination of one or more image regions to be corrected by vessel segmentation of an initial angiographic image according to some embodiments of the present description. In some embodiments, the flow 400 may be performed by the region identification module 210. In some embodiments, the flow 400 may be used to implement step 310. As shown in Fig. 4, the flow 400 may include the following steps.
Step 410, performing vessel segmentation on the initial angiography image to obtain a blood vessel mask image. In some embodiments, step 410 may be performed by the region identification module 210.
The vessel segmentation may be an operation of segmenting the initial angiographic image to obtain a blood vessel mask image. In some embodiments, the region identification module 210 may perform vessel segmentation on the initial angiography image by means of matched filtering, morphology, deep learning, and the like, so as to obtain the blood vessel mask image. For example only, the region identification module 210 may input the initial angiographic image into a vessel segmentation model, which outputs the blood vessel mask image corresponding to the initial angiographic image. The vessel segmentation model may be one of a convolutional neural network (CNN), a deep neural network (DNN), a recurrent neural network (RNN), a multilayer perceptron (MLP), a generative adversarial network (GAN), or any combination thereof. The vessel segmentation model may be trained using a plurality of sample angiography images annotated with vessel regions.
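For example only, a hedged sketch of how such a segmentation model might be invoked is shown below; the two-layer network is a toy stand-in (an assumption for illustration), whereas a production model would be a trained 3D U-Net or comparable architecture.

```python
import torch
import torch.nn as nn

# Toy stand-in for a vessel segmentation network; the patent allows CNN,
# DNN, RNN, MLP, GAN, or any combination thereof.
seg_model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=3, padding=1),
)
seg_model.eval()

volume = torch.randn(1, 1, 64, 64, 64)  # initial angiographic image (toy size)
with torch.no_grad():
    logits = seg_model(volume)
    # Threshold the per-voxel probabilities into a binary vessel mask image.
    vessel_mask = (torch.sigmoid(logits) > 0.5).float()
```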
Step 420, identifying one or more target blood vessel window masks in the blood vessel mask image based on a correction region determination model. In some embodiments, step 420 may be performed by the region identification module 210.
The target blood vessel window mask may be a mask image of a vessel segment that requires image correction. For example, the target blood vessel window mask may be a region of the blood vessel mask image that contains a stent and/or calcified vessels.
The correction region determination model may be a machine learning model for determining target blood vessel window masks in the blood vessel mask image.
In some embodiments, the region identification module 210 may input the blood vessel mask image to the correction region determination model and determine the target blood vessel window masks based on the output of the correction region determination model. For example only, the region identification module 210 may divide the blood vessel mask image into a plurality of sub-regions and input each sub-region to the correction region determination model, which outputs a determination result of whether the sub-region is a target blood vessel window mask. As another example, the region identification module 210 may input the image features of each sub-region to the correction region determination model, which outputs a determination result of whether the sub-region is a target blood vessel window mask.
In some embodiments, the region identification module 210 may extract a blood vessel centerline in the blood vessel mask image and determine a plurality of sampling points on the blood vessel centerline. For each sampling point on the blood vessel centerline, the region identification module 210 may determine a blood vessel window mask corresponding to that sampling point in the blood vessel mask image, and may then determine image features related to the blood vessel window mask. Further, the region identification module 210 may determine, based on the correction region determination model and the image features, whether the blood vessel window mask corresponding to the sampling point is a target blood vessel window mask. For more description of extracting the blood vessel centerline, reference may be made to Fig. 3 and its related description, which are not repeated herein.
The blood vessel window mask corresponding to a sampling point may be a mask image of the blood vessel window in which the sampling point is located. In some embodiments, the region identification module 210 may determine a sampling point at every fixed distance (e.g., 10 millimeters, 20 millimeters, etc.) along the blood vessel centerline and generate a blood vessel window from each sampling point, as sketched below. After the blood vessel window is generated, the region identification module 210 may crop the region of the blood vessel mask image that lies within the blood vessel window as the blood vessel window mask. In some embodiments, the region identification module 210 may obtain the fixed distance from the processing device 110, the user terminal 130, the storage device 140, and/or an external data source.
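A minimal sketch of the fixed-distance sampling, assuming the centerline is already available as an ordered polyline of physical coordinates (the function name and the 10 mm default are illustrative assumptions):

```python
import numpy as np

def sample_centerline(points, spacing_mm=10.0):
    """Return one sampling point per `spacing_mm` of arc length along an
    ordered centerline polyline (an N x 3 array of coordinates)."""
    seg_len = np.linalg.norm(np.diff(points, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg_len)])     # cumulative arc length
    targets = np.arange(0.0, arc[-1], spacing_mm)         # one target per spacing
    return points[np.searchsorted(arc, targets)]

# A straight 100 mm toy centerline yields ~10 sampling points.
centerline = np.stack([np.linspace(0, 100, 200),
                       np.zeros(200), np.zeros(200)], axis=1)
samples = sample_centerline(centerline, spacing_mm=10.0)
```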
For example only, Fig. 9 is a schematic diagram of a blood vessel window according to some embodiments of the present disclosure. As shown in Fig. 9, the white region is the blood vessel centerline in a blood vessel mask image 910. The region identification module 210 may determine a sampling point 920 and generate a blood vessel window (i.e., the rectangular dashed frame in Fig. 9) according to the sampling point 920, and may then crop the region of the blood vessel mask image located within the rectangular dashed frame as a blood vessel window mask 930.
In some embodiments, the region identification module 210 may determine the blood vessel window corresponding to a sampling point in any suitable manner. Taking sampling point a as an example, the region identification module 210 may take a region of a specific size and shape centered on sampling point a as the blood vessel window. As another example, the region identification module 210 may extract, from the blood vessel centerline on which sampling point a lies, a local centerline centered on sampling point a. For each pixel in the blood vessel mask image, the region identification module 210 may determine the point on the blood vessel centerline closest to that pixel (also referred to as blood vessel center point b) and judge whether center point b lies on the local centerline corresponding to sampling point a. If it does, the pixel is judged to belong to the blood vessel window corresponding to sampling point a, i.e., the pixel lies within the blood vessel window mask. After all pixels in the blood vessel mask image have been traversed, the blood vessel window mask corresponding to the sampling point is determined, as in the sketch below.
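A sketch of the nearest-centerline-point assignment just described, using a k-d tree for the closest-point query (the helper name and toy shapes are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def window_mask_for_sample(vessel_mask, centerline_pts, local_ids):
    """A vessel voxel belongs to sampling point a's window when its nearest
    centerline point b lies on a's local centerline (indices `local_ids`)."""
    tree = cKDTree(centerline_pts)
    voxels = np.argwhere(vessel_mask > 0)
    _, nearest = tree.query(voxels)        # index of center point b per voxel
    keep = np.isin(nearest, local_ids)     # does b lie on the local centerline?
    window = np.zeros(vessel_mask.shape, dtype=bool)
    window[tuple(voxels[keep].T)] = True
    return window

# Toy vessel along the z-axis; the local centerline covers indices 8..15.
mask = np.zeros((32, 32, 32), dtype=np.uint8)
mask[16, 16, 4:28] = 1
pts = np.array([[16, 16, z] for z in range(4, 28)])
window_mask = window_mask_for_sample(mask, pts, local_ids=np.arange(8, 16))
```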
In some embodiments, the length of the local centerline corresponding to each sampling point may be uniform. In some embodiments, the region identification module 210 may obtain the length of the local centerline from the processing device 110, the user terminal 130, the storage device 140, and/or an external data source.
The image features may characterize information about the blood vessel window mask. In some embodiments, the image features may include local vessel image features and/or global vessel image features. The local vessel image features may be image features of the blood vessel window mask, for example, its image gray-level mean, image gray-level standard deviation, and/or image gradient mean. The global vessel image features may characterize the blood vessel mask image as a whole, such as the image gray-level mean of the blood vessel mask image. In some embodiments, the local vessel image features may further include the proportion of target pixels in the blood vessel window mask, where a target pixel is a pixel in the blood vessel window mask whose gray value is greater than the image gray-level mean of the blood vessel mask image; for example only, a target pixel may be a pixel whose gray value is greater than 1.5 times the image gray-level mean of the blood vessel mask image.
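These features can be computed directly; the sketch below assumes the window is a grayscale crop with zero-valued background, which is an illustrative assumption rather than the patent's exact definition:

```python
import numpy as np

def window_features(window, vessel_mask_image):
    """Local features of one blood vessel window plus a global feature of the
    whole blood vessel mask image, returned as one flat feature vector."""
    vals = window[window > 0].astype(np.float32)          # in-window gray values
    global_mean = float(vessel_mask_image[vessel_mask_image > 0].mean())
    grad_mag = np.sqrt(sum(g ** 2 for g in np.gradient(window.astype(np.float32))))
    target_ratio = float((vals > 1.5 * global_mean).mean())  # bright-pixel ratio
    return np.array([vals.mean(), vals.std(), grad_mag.mean(),
                     global_mean, target_ratio], dtype=np.float32)

img = np.random.rand(64, 64).astype(np.float32)  # toy blood vessel mask image
features = window_features(img[8:24, 8:24], img)
```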
In some embodiments, the region identification module 210 may determine, based on the correction region determination model and the image features, whether the blood vessel window mask corresponding to a sampling point is a target blood vessel window mask. For example only, for the blood vessel window mask corresponding to each sampling point, the region identification module 210 may input the image features of the blood vessel window mask (e.g., local vessel image features, global vessel image features, etc.) to the correction region determination model, which outputs a determination result of whether the blood vessel window mask is a target blood vessel window mask. In some embodiments, the correction region determination model may be one of a support vector machine (SVM), a random forest classifier, or any combination thereof.
In some embodiments, the region identification module 210 may update the parameters of an initial correction region determination model based on second training samples until the trained initial correction region determination model satisfies a preset condition, resulting in the trained correction region determination model. Each second training sample may include the image features of a sample blood vessel window mask, and the label of the second training sample indicates whether the sample blood vessel window mask is a target blood vessel window mask. The preset condition may be that the loss function converges, that the loss function value is smaller than a preset value, that the number of iterations is greater than a preset number, or the like. In some embodiments, the labels of the second training samples may be obtained in any manner, for example, by manual annotation, or from an external data source.
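A minimal sketch of training and applying such a classifier with scikit-learn; the synthetic features and labels stand in for the second training samples (a random forest fits in one pass, so the iterative preset conditions above apply mainly to models trained by gradient descent):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Rows: feature vectors of sample blood vessel window masks (see above);
# labels: 1 if the sample window is a target blood vessel window mask.
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 4] > 0.5).astype(int)   # synthetic labels for the sketch

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Inference: decide for each new window whether it needs correction.
X_new = rng.normal(size=(3, 5))
is_target_window = clf.predict(X_new)
```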
In some embodiments, by determining, for each sampling point on the blood vessel centerline, the blood vessel window mask corresponding to that sampling point, the region where the vessels are located in the blood vessel mask image can be divided into a plurality of blood vessel window masks relatively quickly. Further, for each blood vessel window mask, the related image features are determined, and the correction region determination model decides, based on those features, whether the blood vessel window mask corresponding to the sampling point is a target blood vessel window mask. Training the correction region determination model with a machine learning algorithm makes it possible to mine the relations between data of various dimensions (e.g., local vessel image features, global vessel image features, etc.). Such relations often include deep relations that are difficult to obtain with other methods of determining a target blood vessel window mask. Therefore, using the correction region determination model can improve the accuracy of judging whether the blood vessel window mask corresponding to a sampling point is a target blood vessel window mask.
Step 430, determining one or more image regions to be corrected in the initial angiographic image based on the one or more target blood vessel window masks. In some embodiments, step 430 may be performed by the region identification module 210.
In some embodiments, the region identification module 210 may process one or more target vessel window masks to determine one or more image regions to be corrected in the initial angiographic image. For example, for each target vessel window mask, the region identification module 210 may regard the corresponding region of the target vessel window mask in the initial angiographic image as an image region to be corrected.
In some embodiments, the region identification module 210 may determine one or more positive vessel segment masks based on the one or more target blood vessel window masks. For each positive vessel segment mask, the region identification module 210 may segment the region corresponding to that mask from the initial angiographic image as an image region to be corrected.
The positive vessel segment mask may be a mask image of a positive vessel segment. A positive vessel segment is an individual vessel segment that requires image correction, for example, a separate vessel segment in a stent or calcified region. The positive vessel segments corresponding to different positive vessel segment masks are independent of (i.e., not connected to) each other. In some embodiments, the region identification module 210 may generate the positive vessel segment masks by stitching the target blood vessel window masks together. For example, the region identification module 210 may stitch two or more target blood vessel window masks corresponding to connected vessel segments into one positive vessel segment mask, as in the sketch below.
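The stitching can be implemented as a union of the target window masks followed by connected-component labeling, as in this hedged sketch (the function name is an assumption):

```python
import numpy as np
from scipy import ndimage

def positive_segment_masks(target_window_masks):
    """Union the target blood vessel window masks, then split the union into
    connected components; each component is one positive vessel segment mask."""
    union = np.any(np.stack(target_window_masks), axis=0)
    labels, n = ndimage.label(union)
    return [labels == i for i in range(1, n + 1)]

a = np.zeros((16, 16), dtype=bool); a[2:6, 2:10] = True
b = np.zeros((16, 16), dtype=bool); b[4:9, 6:12] = True     # overlaps a
c = np.zeros((16, 16), dtype=bool); c[12:15, 12:15] = True  # isolated
segments = positive_segment_masks([a, b, c])  # a+b merge, c stays: 2 masks
```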
In some embodiments, for each positive blood vessel segment mask, the region identification module 210 may regard the corresponding region of the positive blood vessel segment mask in the initial angiographic image as an image region to be corrected.
In some embodiments, the region identification module 210 may first dilate the positive vessel segment in the positive vessel segment mask to obtain an expanded positive vessel segment. Further, the region identification module 210 may determine a bounding box of the expanded positive vessel segment and take the region corresponding to the bounding box in the initial angiography image as the image region to be corrected. Expanding the positive vessel segment helps avoid the problem of under-segmentation and reduces errors.
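A sketch of the dilate-then-bounding-box step with SciPy morphology (the iteration count is an assumed illustrative value):

```python
import numpy as np
from scipy import ndimage

def corrected_region_box(segment_mask, dilate_iterations=3):
    """Dilate a positive vessel segment, then return the bounding box of the
    expanded segment as slices into the initial angiographic image."""
    expanded = ndimage.binary_dilation(segment_mask, iterations=dilate_iterations)
    return ndimage.find_objects(expanded.astype(np.uint8))[0]

mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 24:28] = True
box = corrected_region_box(mask)      # (slice(17, 33), slice(21, 31))
# image_region_to_correct = initial_image[box]
```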
In some embodiments, determining one or more positive vessel segment masks based on the one or more target blood vessel window masks, and, for each positive vessel segment mask, segmenting the corresponding region from the initial angiography image as an image region to be corrected, reduces the number of image regions to be corrected and improves the efficiency of subsequently generating the target angiographic image.
In some embodiments, the initial model may include a generation module and a discrimination module. The input of the generation module includes a sample first reconstruction kernel reconstructed image, and its output includes a predicted second reconstruction kernel reconstructed image, or information related to that prediction (e.g., the deviation between the sample first reconstruction kernel reconstructed image and the predicted second reconstruction kernel reconstructed image). The input of the discrimination module includes a sample first reconstruction kernel reconstructed image paired with either a sample second reconstruction kernel reconstructed image or a predicted second reconstruction kernel reconstructed image. The output of the discrimination module may be an authenticity determination for the predicted or sample second reconstruction kernel reconstructed image.
During training, the value of the loss function may be determined based on the difference between the predicted second reconstruction kernel reconstructed image and the sample second reconstruction kernel reconstructed image, together with the determination result of the discrimination module. Through backpropagation, the generation module and the discrimination module may be updated based on the value of the loss function, so that the generation module produces increasingly realistic predicted second reconstruction kernel reconstructed images while the discrimination module improves its discrimination ability. The two modules are trained iteratively against each other until the generation module can produce predicted second reconstruction kernel reconstructed images whose authenticity the discriminator can no longer judge (a Nash equilibrium), or until the number of iterations reaches a threshold, at which point training of the initial model is complete.
In some embodiments, the image reconstruction module 220 may first train the discrimination module until it converges, then fix the parameters of the discrimination module and train the generation module until it converges. These steps are repeated, training the discrimination module and the generation module alternately, until the discrimination module can no longer distinguish whether a second reconstruction kernel reconstructed image is real. After training is complete, the image reconstruction module 220 may use the generation module of the trained model as the image reconstruction model.
In some embodiments, the loss function of the initial model may be:
Loss = Loss(G, D) + λ·Loss(G)
Loss(G, D) = arg min_G max_D { E_{x,y}[log D(x, y)] + E_x[log(1 − D(x, G(x)))] }
Loss(G) = E_{x,y}[|y − G(x)|]
where G denotes the generation module, D denotes the discrimination module, x denotes a sample first reconstruction kernel reconstructed image, y denotes a real sample second reconstruction kernel reconstructed image, λ denotes the loss weight, Loss(G, D) is the adversarial loss involving the discrimination module, Loss(G) is the loss function of the generation module, D(x, y) is the probability assigned by the discrimination module that the input image pair is real, and G(x) is the predicted second reconstruction kernel reconstructed image produced by the generation module.
Fig. 6 is a schematic diagram of a process of training the initial model according to some embodiments of the present description. The following description takes the first reconstruction kernel as a smooth reconstruction kernel and the second reconstruction kernel as a sharp reconstruction kernel as an example. The training samples of the initial model include a sample smooth kernel image reconstructed with the smooth reconstruction kernel (i.e., a sample first reconstruction kernel reconstructed image) and a sample sharp kernel image reconstructed with the sharp reconstruction kernel (i.e., a sample second reconstruction kernel reconstructed image). As shown in Fig. 6, the input of the generation module includes a sample smooth kernel image, and the output of the generation module includes a deviation image between the sample smooth kernel image and the corresponding predicted sharp kernel image. The deviation image is superposed on the original sample smooth kernel image to generate the predicted sharp kernel image. The input of the discrimination module includes a sample smooth kernel image paired with either a real sample sharp kernel image or a predicted sharp kernel image. The output of the discrimination module may be a true/false determination for the input sample sharp kernel image or predicted sharp kernel image.
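A condensed PyTorch sketch of one alternating training step consistent with Fig. 6 and the losses above; the two tiny networks, the patch sizes, and λ = 100 are assumptions for illustration, not the patent's architecture:

```python
import torch
import torch.nn as nn

# G maps a smooth kernel patch to a deviation image; D scores an (input,
# output) image pair, as in a conditional GAN.
G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))
D = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                  nn.Flatten(), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1, lam = nn.BCELoss(), nn.L1Loss(), 100.0

x = torch.randn(4, 1, 64, 64)   # sample smooth kernel patches
y = torch.randn(4, 1, 64, 64)   # matching real sample sharp kernel patches
fake = x + G(x)                 # deviation image superposed on the input

# Discrimination step: real pairs labeled 1, generated pairs labeled 0.
d_real = D(torch.cat([x, y], dim=1))
d_fake = D(torch.cat([x, fake.detach()], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generation step: fool D, plus the lambda-weighted L1 term Loss(G).
d_fake = D(torch.cat([x, fake], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + lam * l1(fake, y)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```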
In some embodiments, when training the initial model, the input of the generation module may be the sample first reconstruction kernel reconstructed image after a preprocessing operation. The preprocessing of the sample first reconstruction kernel reconstructed image may be similar to the preprocessing of the image region to be corrected 510 described above, and is not repeated here.
It should be noted that the above description of the initial model and of the process of training it is for illustration and description only and does not limit the scope of application of the present specification. Various modifications and changes to the initial model and its training process will be apparent to those skilled in the art in light of the present description; such modifications and variations remain within the scope of the present specification. For example, the generation module may directly generate the predicted sharp kernel image instead of the deviation image.
In some embodiments, the image reconstruction model trained with the process described above can mine the relations between data of various dimensions (e.g., sample smooth kernel images reconstructed with the smooth reconstruction kernel and sample sharp kernel images reconstructed with the sharp reconstruction kernel), so that the image reconstruction model can simulate the second reconstruction kernel and generate second reconstruction kernel reconstructed images with higher definition.
In some embodiments, the image reconstruction method disclosed herein may also be applied to other scenarios. For example, in magnetic resonance scanning, different magnetic resonance images may be acquired using different scan sequences, and different tissues exhibit different characteristics in different magnetic resonance images. The image reconstruction method disclosed herein can be used to fuse magnetic resonance images generated by different scan sequences. For example only, an image region to be corrected may be identified in a first magnetic resonance image corresponding to a first scan sequence, and that region may be processed with the image reconstruction model to obtain the corresponding portion of a second magnetic resonance image corresponding to a second scan sequence. Based on the original first magnetic resonance image and the generated second magnetic resonance image, a fused target magnetic resonance image may be obtained.
It should be noted that the above description related to the flow 400 is only for illustration and description, and does not limit the applicable scope of the present specification. Various modifications and changes to flow 400 will be apparent to those skilled in the art in light of this description. However, such modifications and variations are intended to be within the scope of the present description.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such alterations, modifications, and improvements are intended to be suggested in this specification, and are intended to be within the spirit and scope of the exemplary embodiments of this specification.
Also, the description uses specific words to describe embodiments of the description. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means a feature, structure, or characteristic described in connection with at least one embodiment of the specification. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, claimed subject matter may lie in less than all features of a single disclosed embodiment.
Where numerals describing the number of components, attributes, or the like are used in some embodiments, it is to be understood that such numerals used in the description of the embodiments are modified in some instances by the modifier "about", "approximately", or "substantially". Unless otherwise indicated, "about", "approximately", or "substantially" indicates that the number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general method of preserving significant digits. Notwithstanding that the numerical ranges and parameters used to confirm the breadth of their ranges are approximations, in specific embodiments such numerical values are set forth as precisely as practicable.
For each patent, patent application publication, and other material cited in this specification, such as articles, books, specifications, publications, and documents, the entire contents are hereby incorporated by reference into this specification, excepting any prosecution file history associated with the same, any of the same that is inconsistent with or in conflict with the present document, and any of the same that may have a limiting effect on the broadest scope of the claims now or later associated with the present document. If the description, definition, and/or use of a term in the materials accompanying this specification is inconsistent with or contrary to what is stated in this specification, the description, definition, and/or use of the term in this specification shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the present specification can be seen as consistent with the teachings of the present specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. An angiographic image processing method, comprising:
identifying one or more image regions to be corrected from an initial angiography image, wherein the initial angiography image is a computed tomography angiography image reconstructed by using a first reconstruction kernel;
for each image region to be corrected, generating a second reconstruction kernel reconstructed image corresponding to the image region to be corrected by using an image reconstruction model, wherein the image reconstruction model is a trained deep learning model; and
generating a target angiographic image based on the initial angiography image and the second reconstruction kernel reconstructed image corresponding to each image region to be corrected, wherein the first reconstruction kernel is one of a smooth reconstruction kernel and a sharp reconstruction kernel, and the second reconstruction kernel is the other one of the smooth reconstruction kernel and the sharp reconstruction kernel.
2. The method of claim 1, wherein identifying one or more image regions to be corrected from the initial angiographic image comprises:
performing blood vessel segmentation on the initial angiography image to obtain a blood vessel mask image;
identifying one or more target blood vessel window masks in the blood vessel mask image based on a correction region determination model; and
determining the one or more image regions to be corrected in the initial angiographic image based on the one or more target blood vessel window masks.
3. The method of claim 2, wherein the identifying one or more target blood vessel window masks in the blood vessel mask image based on a correction region determination model comprises:
determining a blood vessel window mask corresponding to each sampling point on a blood vessel centerline in the blood vessel mask image; and
for each of the sampling points:
determining image features related to the blood vessel window mask; and
determining, based on the correction region determination model and the image features, whether the blood vessel window mask corresponding to the sampling point is the target blood vessel window mask.
4. The method of claim 2, wherein the determining the one or more image regions to be corrected in the initial angiographic image based on the one or more target blood vessel window masks comprises:
determining one or more positive vessel segment masks based on the one or more target blood vessel window masks; and
for each positive vessel segment mask, segmenting the region corresponding to the positive vessel segment mask from the initial angiography image as the image region to be corrected.
5. The method of claim 1, wherein the generating a second reconstruction kernel reconstructed image corresponding to the image region to be corrected by using the image reconstruction model comprises:
preprocessing the image region to be corrected to generate a preprocessed image region to be corrected;
processing the preprocessed image region to be corrected by using the image reconstruction model to determine a deviation between the image region to be corrected and the second reconstruction kernel reconstructed image; and
generating the second reconstruction kernel reconstructed image based on the deviation and the image region to be corrected.
6. The method of claim 1, wherein the image reconstruction model is trained using the following process:
obtaining at least two training samples, wherein each training sample comprises a sample first reconstruction kernel reconstructed image and a corresponding sample second reconstruction kernel reconstructed image;
training an initial model by using the at least two training samples to generate a trained model; and
determining the image reconstruction model based on the trained model.
7. The method of claim 1, wherein identifying one or more image regions to be corrected from the initial angiographic image comprises:
extracting a blood vessel centerline in the initial angiographic image;
generating a blood vessel straightened image corresponding to the initial angiographic image according to the blood vessel centerline; and
identifying the one or more image regions to be corrected from the blood vessel straightened image.
8. The method of claim 1, wherein the first reconstruction kernel is a smooth reconstruction kernel and the second reconstruction kernel is a sharp reconstruction kernel.
9. An angiographic image processing system comprising:
a region identification module, configured to identify one or more image regions to be corrected from an initial angiography image, wherein the initial angiography image is a computed tomography angiography image reconstructed by using a first reconstruction kernel;
an image reconstruction module, configured to generate, for each image region to be corrected, a second reconstruction kernel reconstructed image corresponding to the image region to be corrected by using an image reconstruction model, wherein the image reconstruction model is a trained deep learning model; and
an image generation module, configured to generate a target angiographic image based on the initial angiography image and the second reconstruction kernel reconstructed image corresponding to each image region to be corrected, wherein the first reconstruction kernel is one of a smooth reconstruction kernel and a sharp reconstruction kernel, and the second reconstruction kernel is the other one of the smooth reconstruction kernel and the sharp reconstruction kernel.
10. An angiographic image processing apparatus comprising a processor for performing the angiographic image processing method according to any one of claims 1 to 8.