CN114943682A - Method and device for detecting anatomical key points in three-dimensional angiography image - Google Patents

Method and device for detecting anatomical key points in three-dimensional angiography image

Info

Publication number
CN114943682A
Authority
CN
China
Prior art keywords
image
blood vessel
data set
key point
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210179800.9A
Other languages
Chinese (zh)
Inventor
冯建江
谭子萌
杨光明
印胤
卢旺盛
秦岚
刘文哲
周杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Union Strong Beijing Technology Co ltd
Tsinghua University
Original Assignee
Union Strong Beijing Technology Co ltd
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Union Strong Beijing Technology Co ltd, Tsinghua University filed Critical Union Strong Beijing Technology Co ltd
Priority to CN202210179800.9A priority Critical patent/CN114943682A/en
Publication of CN114943682A publication Critical patent/CN114943682A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2211/00Image generation
    • G06T2211/40Computed tomography
    • G06T2211/404Angiography

Abstract

The application provides a method for detecting anatomical key points in a three-dimensional angiography image, relating to the technical field of medical image processing. The method comprises the following steps: acquiring a three-dimensional angiography image as a test image; preprocessing the test image, inputting the preprocessed image into a pre-trained multi-task deep learning network, and outputting an anatomical key point prediction probability map, wherein the multi-task deep learning network is obtained by training on a data set consisting of three-dimensional angiography training images containing the same blood vessel type as the test image together with their labeling results; and generating the detection result of the anatomical key points according to the prediction probabilities at the voxel positions in the anatomical key point prediction probability map. With this scheme, the application can make full use of the synergy between different tasks, explicitly model the types of vascular topological variation, and combine spatial prior information to achieve good detection performance.

Description

Method and device for detecting anatomical key points in three-dimensional angiography image
Technical Field
The present application relates to the field of medical image processing technologies, and in particular, to a method and an apparatus for detecting anatomical key points in a three-dimensional angiography image.
Background
Three-dimensional angiography techniques, including magnetic resonance angiography (MRA), computed tomography angiography (CTA) and digital subtraction angiography (DSA), exploit the imaging characteristics of blood flow to display blood vessels and blood flow signals in the body clearly and stereoscopically. They cover various vascular structures such as intracranial vessels, coronary arteries, carotid arteries and the aorta, and examination and analysis of these structures are important auxiliary means for diagnosing and treating related diseases. Taking intracranial vascular magnetic resonance angiography as an example, it can reveal whether intracranial arteries and veins are malformed, and can non-invasively, safely and clearly display the sac and the parent-artery morphology of an intracranial aneurysm, making it a preferred method for diagnosing intracranial aneurysms. In recent years, computer-based intelligent analysis of medical images has developed rapidly, and tasks such as automatic vessel extraction, lesion localization and lesion measurement in three-dimensional angiography images have been widely researched and clinically applied.
Detection of anatomical key points in three-dimensional angiography images focuses on the vessel bifurcation points at every level; because these points lie at the bifurcations between vessel segments of all levels, the detection task has unique and important anatomical significance. Taking intracranial vessel key point detection as an example, according to the definition of cerebral vessel topology, the complete Willis ring region can be divided into 20 vessel segments with independent anatomical names (ICA-C4 and beyond are not included here), and 19 key points can then be defined at the boundaries between these vessel segments. The anatomical key points therefore explicitly model the overall topological structure of the vasculature and can provide rich semantic information for vessel semantic segmentation, lesion localization and disease diagnosis. In addition, anatomical key point detection is an important enabling step for downstream tasks in intelligent medical image analysis: it can provide initialization for vessel tracking and centerline extraction, and assist vessel tree registration between multi-phase images of the same patient or images of different patients. However, anatomical key point detection faces great challenges because vessels are slender and curved, their structural distribution is complex and varies considerably across individuals, the identification of some vessel segments depends on the positions of surrounding tissues, and the local appearance and gray-level distribution of the image may be affected by pathology.
On the other hand, unlike other anatomical tree structures such as the trachea and the aorta, the vascular structures of intracranial vessels and coronary arteries may exhibit topological variation. Taking the Willis ring region of the intracranial vessels as an example, related studies report that only about 52% of individuals have a complete Willis ring, and physiological variations such as unilateral or bilateral absence of the PCoA and absence of the PCA-P1 segment (fetal-type posterior cerebral artery) are widespread. Studies have indicated that these physiological variations may be associated with potential disease risk, and how to model these types of variation is one of the key issues in intracranial vascular analysis. Notably, the absence of a vessel segment causes the associated key points to lose their local bifurcation characteristics and become difficult to discern. For example, when a PCoA is present on one side, the key point PCoA-a (the bifurcation of the vessel segment PCoA with the ICA) lies at a bifurcation site; when the PCoA is absent, the key point lies on a smooth ICA segment with no local bifurcation features. In clinical labeling, the locations of such key points often have to be determined from the physician's experience and spatial symmetry, which makes their automatic detection particularly difficult.
Disclosure of Invention
The present application is directed to solving, at least in part, one of the technical problems in the related art.
Therefore, the first objective of the present application is to provide a method for detecting anatomical key points in a three-dimensional angiography image which can make full use of the synergy between different tasks, explicitly model the types of vascular topological variation, and combine spatial prior information to achieve good detection performance.
A second object of the present application is to propose a device for detecting anatomical key points in a three-dimensional angiographic image.
To achieve the above object, a first aspect of the present application provides a method for detecting anatomical key points in a three-dimensional angiography image, including: acquiring a three-dimensional angiography image as a test image; preprocessing a test image, inputting the preprocessed image into a pre-trained multi-task deep learning network, and outputting an anatomic key point prediction probability map, wherein the multi-task deep learning model is obtained by training a three-dimensional angiography training image containing the same blood vessel type as the test image and a labeling result of the three-dimensional angiography training image as a training data set; and generating a detection result of the anatomical key points according to the prediction probability of the voxel positions in the anatomical key point prediction probability map.
According to the method for detecting the anatomical key points in the three-dimensional angiography image, the three-dimensional angiography image data containing the specific blood vessel structure is obtained in an off-line stage and is preprocessed, the image is manually marked, and then the prediction targets of corresponding anatomical key point detection, blood vessel segment semantic segmentation, blood vessel segment missing classification and key point local bifurcation feature classification tasks are generated, a training data set is formed together, and the multi-task deep learning network is trained; and in the online stage, a key point probability heat map prediction result is output from the same type of image by using the trained network model, and the final anatomical key point detection position is obtained from the key point probability heat map prediction result. The method explicitly introduces structure prior knowledge, models spatial semantic information, and can realize good detection performance.
Optionally, in an embodiment of the present application, the test image is preprocessed, including unifying resolution, clipping to a preset size, and voxel gray-level normalization.
Optionally, in an embodiment of the present application, the pre-training of the multitask deep learning network includes:
acquiring a three-dimensional angiography image containing the same blood vessel type as the test image as an original data set;
preprocessing an original data set, and acquiring a labeling result corresponding to the preprocessed data set, wherein the labeling result comprises a vessel anatomy key point labeling result, a vessel binary segmentation labeling result and a vessel segment semantic segmentation labeling result;
generating a training data set according to the preprocessed data set and the corresponding labeling result;
and constructing a multi-task deep learning network, and training the multi-task deep learning network by using a training data set to obtain the trained multi-task deep learning network.
Optionally, in an embodiment of the present application, the obtaining of the labeling result corresponding to the preprocessed data set includes:
using medical image processing software, and manually marking each image in the preprocessed data set with predefined vascular anatomy key points and a vascular binary segmentation part, wherein the marking result of the vascular anatomy key points is a three-dimensional coordinate corresponding to each key point, and the marking result of the vascular binary segmentation is a voxel-by-voxel binary image with the same size as the image;
and generating a semantic segmentation labeling result of the blood vessel segment corresponding to each image in the data set by using an automatic method based on the blood vessel anatomical key points and the labeling result of the binary segmentation of the blood vessel.
Optionally, in an embodiment of the present application, the generating a semantic segmentation labeling result of a blood vessel segment corresponding to each image in the data set using an automated method based on the blood vessel anatomical key point and the blood vessel binary segmentation labeling result includes:
obtaining a corresponding lumen center line by using a thinning algorithm through a blood vessel binary segmentation labeling result, and dividing the center line into different semantic segments according to anatomical key point labeling;
for each vessel voxel in the blood vessel binary segmentation label, determining its semantic label according to the nearest centerline voxel;
and manually correcting the semantic segmentation automatic labeling result obtained by each image in medical image processing software to obtain a final semantic segmentation labeling result, wherein the semantic segmentation automatic labeling result comprises a semantic segment and a semantic label.
Optionally, in an embodiment of the present application, generating a training data set according to the preprocessed data set and the corresponding labeling result includes:
processing each image in the preprocessed data set according to the labeling result to obtain an anatomic key point multi-channel probability heat map, a blood vessel segment semantic segmentation multi-channel probability map, a blood vessel segment missing classification vector and a key point local bifurcation feature classification vector which are used as prediction targets corresponding to the images;
and forming a training data pair by each image in the preprocessed data set and the corresponding prediction target, wherein all the training data pairs jointly form a training data set.
Optionally, in an embodiment of the present application, processing each image in the preprocessed data set according to the labeling result to obtain an anatomical key point multi-channel probability heat map, a blood vessel segment semantic segmentation multi-channel probability map, a blood vessel segment missing classification vector, and a key point local bifurcation feature classification vector, which are used as prediction targets corresponding to the images, includes:
outputting an anatomical key point multi-channel probability heat map with the same size as the input image to each predefined key point according to the labeling result of the vascular anatomical key point for each preprocessed image in the data set, wherein for each target key point, the corresponding probability heat map takes the key point as the center and presents three-dimensional Gaussian distribution;
generating a blood vessel segment semantic segmentation multi-channel probability map according to the blood vessel segment semantic segmentation labeling result, wherein the last channel of the blood vessel segment semantic segmentation multi-channel probability map is a background channel, and the rest channels respectively reflect the position distribution of each blood vessel segment in the input image;
and obtaining a vessel segment missing classification vector and a key point local bifurcation feature classification vector according to the labeling result of the vessel segment semantic segmentation, wherein when a certain vessel segment is missing in the vessel segment semantic segmentation labeling result, the anatomical key points at the two ends of the vessel segment lose the local bifurcation feature, otherwise, the anatomical key points at the two ends of the vessel segment have the local bifurcation feature.
Optionally, in one embodiment of the present application, the multitasking deep learning network comprises a trunk section and four branch sections, wherein,
the main part is used for carrying out feature extraction on the input image and outputting a feature map;
the first branch is used for processing the characteristic diagram and generating a prediction result of the multichannel probability heat map of the anatomical key points;
the second branch is used for processing the characteristic graph and generating a prediction result of the blood vessel segment semantic segmentation multi-channel probability graph;
the third branch is used for processing the characteristic diagram and generating a prediction result of the vessel segment missing classification vector;
and the fourth branch is used for processing the feature map and generating a prediction result of the local bifurcation feature classification vector of the key point.
Optionally, in an embodiment of the present application, training the initialized network using a training data set includes:
step S1: randomly selecting a training data pair from a training data set, inputting the preprocessed three-dimensional angiography image in the training data pair into a constructed multi-task deep learning network, and acquiring the output result of each branch of the network as a prediction result;
step S2: inputting the prediction result and the prediction target in the training data pair into a loss function to obtain a loss function value;
step S3: minimizing the loss function by using a gradient descent method based on the calculated loss function value, and adjusting network parameters;
step S4: repeating steps S1 to S3 to continuously adjust the network parameters; when the number of training iterations exceeds the preset upper limit, training ends, the multi-task deep learning network parameters are determined, and the trained multi-task deep learning network is obtained.
In order to achieve the above object, a second aspect of the present invention provides an apparatus for detecting anatomical key points in a three-dimensional angiography image, including an obtaining module, a processing module, and a result generating module, wherein,
the acquisition module is used for acquiring a three-dimensional angiography image as a test image;
the processing module is used for preprocessing a test image, inputting the preprocessed image into a pre-trained multi-task deep learning network and outputting an anatomical key point prediction probability map, wherein the multi-task deep learning model is obtained by training a three-dimensional angiography training image containing the same blood vessel type as the test image and a labeling result of the three-dimensional angiography training image as a training data set;
and the result generation module is used for generating the detection result of the anatomical key point according to the prediction probability of the voxel position in the anatomical key point prediction probability map.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart illustrating a method for detecting anatomical key points in a three-dimensional angiographic image according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of an embodiment of the present application;
FIG. 3 is a schematic diagram of the intracranial vascular labeling and data generation results according to an embodiment of the present application;
FIG. 4 is a diagram illustrating correspondence between vessel segment deletion and key point local bifurcation feature variation in an intracranial blood vessel according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an offline stage multitask deep learning network according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an online phase multitask deep learning network according to an embodiment of the present application;
FIG. 7 is a graph of the detection of anatomical key points in an intracranial vascular magnetic resonance angiography image according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a device for detecting an anatomical key point in a three-dimensional angiography image according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
If the attribute of whether each blood vessel segment is missing or not is introduced into the detection algorithm of the key points of the blood vessel anatomy, the algorithm can be helped to model the blood vessel variation type in an explicit mode, and better detection performance is achieved. In addition, the anatomical key points are located at the end points of the two ends of the corresponding blood vessel sections, and have very distinct structural characteristics. In consideration of the fact that the position distribution of a specific blood vessel section often has strong regularity and consistency, an auxiliary task for segmenting each blood vessel section (namely semantic segmentation of the blood vessel) is introduced into a key point detection algorithm, and structure prior information can be introduced to further improve detection precision, so that the method for detecting the anatomical key points explicitly combined with the topological structure variation type is provided.
The following describes a method and apparatus for detecting an anatomical key point in a three-dimensional angiographic image according to an embodiment of the present application with reference to the drawings.
Fig. 1 is a flowchart illustrating a method for detecting anatomical key points in a three-dimensional angiography image according to an embodiment of the present disclosure.
As shown in fig. 1, the method for detecting anatomical key points in a three-dimensional angiographic image includes the following steps:
step 101, acquiring a three-dimensional angiography image as a test image;
102, preprocessing a test image, inputting the preprocessed image into a pre-trained multi-task deep learning network, and outputting an anatomical key point prediction probability map, wherein the multi-task deep learning model is obtained by training a three-dimensional angiography training image and a labeling result of the three-dimensional angiography training image, which contain the same blood vessel type as the test image, as a training data set;
and 103, generating a detection result of the anatomical key points according to the prediction probability of the voxel positions in the anatomical key point prediction probability map.
According to the detection method of the anatomical key points in the three-dimensional angiography image, the three-dimensional angiography image data containing the specific blood vessel structure is obtained in the off-line stage and is preprocessed, the image is manually labeled, and then the prediction targets of corresponding tasks such as anatomical key point detection, blood vessel segment semantic segmentation, blood vessel segment missing classification and key point local bifurcation feature classification are generated, and a training data set is formed together to train a multi-task deep learning network; and in the online stage, a key point probability heat map prediction result is output from the same type of image by using the trained network model, and the final anatomical key point detection position is obtained from the key point probability heat map prediction result. The method explicitly introduces structure prior knowledge, models spatial semantic information, and can realize good detection performance.
Optionally, in an embodiment of the present application, the test image is preprocessed, including unifying resolution, clipping to a preset size, and voxel gray-level normalization.
Optionally, in an embodiment of the present application, the pre-training of the multitask deep learning network includes:
acquiring a three-dimensional angiography image containing the same blood vessel type as the test image as an original data set;
preprocessing an original data set, and acquiring a labeling result corresponding to the preprocessed data set, wherein the labeling result comprises a blood vessel anatomy key point labeling result, a blood vessel binary segmentation labeling result and a blood vessel segment semantic segmentation labeling result;
generating a training data set according to the preprocessed data set and the corresponding labeling result;
and constructing a multi-task deep learning network, and training the multi-task deep learning network by using a training data set to obtain the trained multi-task deep learning network.
Optionally, in an embodiment of the present application, the obtaining of the labeling result corresponding to the preprocessed data set includes:
using medical image processing software, and manually marking each image in the preprocessed data set with predefined vascular anatomy key points and a vascular binary segmentation part, wherein the marking result of the vascular anatomy key points is a three-dimensional coordinate corresponding to each key point, and the marking result of the vascular binary segmentation is a voxel-by-voxel binary image with the same size as the image;
and generating a semantic segmentation labeling result of the blood vessel segment corresponding to each image in the data set by using an automatic method based on the blood vessel anatomical key points and the labeling result of the binary segmentation of the blood vessel.
Optionally, in an embodiment of the present application, the generating a semantic segmentation labeling result of a blood vessel segment corresponding to each image in the data set using an automated method based on the blood vessel anatomical key point and the blood vessel binary segmentation labeling result includes:
obtaining a corresponding lumen center line by using a thinning algorithm through a blood vessel binary segmentation labeling result, and dividing the center line into different semantic segments according to anatomical key point labeling;
determining semantic labels of each vessel voxel in the vessel binary segmentation labels according to the nearest centerline voxel;
and manually correcting the semantic segmentation automatic labeling result obtained from each image in medical image processing software to obtain a final semantic segmentation labeling result, wherein the semantic segmentation automatic labeling result comprises a semantic segment and a semantic label.
Optionally, in an embodiment of the present application, generating a training data set according to the preprocessed data set and the corresponding labeling result includes:
processing each image in the preprocessed data set according to the labeling result to obtain an anatomic key point multi-channel probability heat map, a blood vessel segment semantic segmentation multi-channel probability map, a blood vessel segment missing classification vector and a key point local bifurcation feature classification vector which are used as prediction targets corresponding to the images;
and forming a training data pair by each image in the preprocessed data set and the corresponding prediction target, wherein all the training data pairs jointly form a training data set.
Optionally, in an embodiment of the present application, processing each image in the preprocessed data set according to the labeling result to obtain an anatomical key point multi-channel probability heat map, a blood vessel segment semantic segmentation multi-channel probability map, a blood vessel segment missing classification vector, and a key point local bifurcation feature classification vector, which are used as prediction targets corresponding to the images, includes:
outputting an anatomical key point multi-channel probability heat map with the same size as the input image to each predefined key point according to the labeling result of the vascular anatomical key point for each preprocessed image in the data set, wherein for each target key point, the corresponding probability heat map takes the key point as the center and presents three-dimensional Gaussian distribution;
generating a blood vessel segment semantic segmentation multi-channel probability map according to the blood vessel segment semantic segmentation labeling result, wherein the last channel of the blood vessel segment semantic segmentation multi-channel probability map is a background channel, and the rest channels respectively reflect the position distribution of each blood vessel segment in the input image;
and obtaining a vessel segment missing classification vector and a key point local bifurcation feature classification vector according to the labeling result of the vessel segment semantic segmentation, wherein when a certain vessel segment is missing in the vessel segment semantic segmentation labeling result, the anatomical key points at the two ends of the vessel segment lose the local bifurcation feature, otherwise, the anatomical key points at the two ends of the vessel segment have the local bifurcation feature.
Optionally, in one embodiment of the present application, the multitasking deep learning network comprises a trunk section and four branch sections, wherein,
the main part is used for carrying out feature extraction on the input image and outputting a feature map;
the first branch is used for processing the characteristic diagram and generating a prediction result of the multi-channel probability heat map of the anatomical key points;
the second branch is used for processing the characteristic graph and generating a prediction result of the blood vessel segment semantic segmentation multi-channel probability graph;
the third branch is used for processing the characteristic diagram and generating a prediction result of the blood vessel segment missing classification vector;
and the fourth branch is used for processing the feature map and generating a prediction result of the local bifurcation feature classification vector of the key point.
Optionally, in an embodiment of the present application, training the initialized network using a training data set includes:
step S1: randomly selecting a training data pair from a training data set, inputting a preprocessed three-dimensional angiography image in the training data pair into a constructed multi-task deep learning network, and acquiring output results of each branch of the network as prediction results;
step S2: inputting the prediction result and the prediction target in the training data pair into a loss function to obtain a loss function value;
step S3: minimizing the loss function by using a gradient descent method based on the calculated loss function value, and adjusting network parameters;
step S4: repeating steps S1 to S3 to continuously adjust the network parameters; when the number of training iterations exceeds the preset upper limit, training ends, the multi-task deep learning network parameters are determined, and the trained multi-task deep learning network is obtained.
The method is suitable for complex topological structure variation situations of blood vessels, whether a blood vessel segment is missing or not and local bifurcation feature changes of key points (namely whether the key points have local bifurcation features or not) caused by the missing of the blood vessel segment are respectively modeled into additional attributes of each blood vessel segment and each bifurcation point, and an algorithm is required to classify and predict the attributes. The method is realized based on a deep learning network, takes a multi-task model as a framework, and simultaneously completes four subtasks of anatomy key point detection, blood vessel segment semantic segmentation, blood vessel segment deletion classification and key point local bifurcation feature classification. The subtasks are highly correlated and share the spatial semantic features extracted by the network trunk part, and the synergistic effect among the tasks is fully utilized to explicitly model the vascular variation type and the structure prior information. The method can be widely applied to detection tasks of various vascular anatomy key points, such as intracranial vessels, coronary arteries and the like, and can achieve good detection performance.
The method of an embodiment of the present invention is described in detail below as a specific embodiment.
The anatomical key point detection method provided by the application is applied to partial key point detection of an intracranial vascular magnetic resonance angiography image, and the whole process is shown in fig. 2 and comprises an off-line stage and an on-line stage.
(1) Off-line phase
(1-1) acquiring an original data set and preprocessing the original data set;
a large number of three-dimensional angiographic images containing the same vessel type (intracranial vascular magnetic resonance angiographic images are used in this embodiment) are used as the raw data set, and the images may be derived from public data sets or cooperative hospitals, and the number should be no less than 50. And performing preprocessing on each image in the original data set, wherein the preprocessing comprises three parts of unifying resolution, cutting to the same size and normalizing voxel gray value. The present invention has no special requirements for the resolution and the specific value of the size after cutting (in this embodiment, the resolution is set to be 0.5 × 0.5 × 0.8 mm) 3 The size after cutting is 192 multiplied by 160 multiplied by 60); the clipped image should include the whole blood vessel structure to be detected (for example, in this embodiment, the image is required to include a Willis ring region), the clipping process can remove the interference of noise such as bones and other unrelated tissues, and the size of the clipping region can be determined according to the average statistical distribution of the blood vessel structure.
The preprocessed magnetic resonance angiography image of the present embodiment is shown in fig. 3 (a).
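For illustration, the following is a minimal preprocessing sketch in Python for step (1-1), assuming the volume has already been loaded as a NumPy array together with its voxel spacing; the use of linear resampling, center cropping/padding and z-score normalization is an assumption, since the embodiment only specifies the target resolution, the crop size and that voxel gray values are normalized.

```python
# Minimal preprocessing sketch for step (1-1): resample, crop/pad, normalize.
import numpy as np
from scipy.ndimage import zoom

TARGET_SPACING = (0.5, 0.5, 0.8)   # mm, as in the embodiment
TARGET_SIZE = (192, 160, 60)       # voxels, as in the embodiment

def preprocess(volume: np.ndarray, spacing: tuple) -> np.ndarray:
    # 1. Resample to the unified resolution.
    factors = [s / t for s, t in zip(spacing, TARGET_SPACING)]
    volume = zoom(volume, factors, order=1)

    # 2. Center-crop (or zero-pad) to the preset size covering the Willis ring region.
    out = np.zeros(TARGET_SIZE, dtype=np.float32)
    src_slices, dst_slices = [], []
    for n, m in zip(volume.shape, TARGET_SIZE):
        if n >= m:                      # crop this dimension
            start = (n - m) // 2
            src_slices.append(slice(start, start + m))
            dst_slices.append(slice(0, m))
        else:                           # pad this dimension
            start = (m - n) // 2
            src_slices.append(slice(0, n))
            dst_slices.append(slice(start, start + n))
    out[tuple(dst_slices)] = volume[tuple(src_slices)]

    # 3. Voxel gray-level normalization (here: z-score, an assumed choice).
    out = (out - out.mean()) / (out.std() + 1e-8)
    return out
```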
(1-2) labeling the preprocessed data set;
each image in the preprocessed data set is manually labeled by using medical image processing software (in this embodiment, 3D Slicer software is used), and two parts, namely, predefined anatomical key points of blood vessels and binary segmentation of blood vessels, need to be labeled. The labeling result of the anatomical key points is a three-dimensional coordinate corresponding to each key point, and the labeling result of the blood vessel binary segmentation is a voxel-by-voxel binary image with the same size as the image (wherein the voxel values belonging to the blood vessel region are 1, and the voxel values of the rest background regions are 0).
Based on the anatomical key point and blood vessel binary segmentation labels, an automated method can be used to generate the blood vessel segment semantic segmentation label for each image in the data set. Specifically, a thinning algorithm is applied to the blood vessel binary segmentation label to obtain the corresponding lumen centerline, and the centerline is divided into different semantic segments according to the anatomical key point labels (for example, in this embodiment, the centerline portion between the key points PCoA-a and PCoA-P is the PCoA semantic segment). Then, for each vessel voxel in the binary segmentation label (i.e., each voxel with value 1), its semantic label is determined by the nearest centerline voxel. At the ends of peripheral vessel segments (e.g., the outer end of the MCA-M1 segment in this embodiment), the semantic segmentation label is cut so that the cutting plane is perpendicular to the centerline. Finally, the automatically generated semantic segmentation labels of each image are manually corrected in medical image processing software to obtain the final semantic segmentation labels.
In this embodiment, the blood vessel binary segmentation label in the magnetic resonance angiography image is shown as (B) in fig. 3; the corresponding anatomical key points of the image are labeled as shown in (C) of FIG. 3, wherein the number is the sequence number of the predefined 19 key points; the semantic segmentation labels corresponding to the image are shown in fig. 3 (D), where the regions with different gray levels represent different blood vessel segments (i.e. different semantic labels), and english is abbreviated as the anatomical name of the blood vessel segment.
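The nearest-centerline-voxel label propagation described above can be sketched as follows, assuming the lumen centerline has already been extracted by a thinning algorithm (e.g., a 3-D skeletonization) and split into semantic segments at the annotated key points; the array layout and function names here are illustrative only.

```python
# Sketch of the automated vessel-segment label generation in step (1-2).
# centerline_labels: 0 = background, k = voxel belongs to the k-th centerline segment.
import numpy as np
from scipy.spatial import cKDTree

def propagate_segment_labels(vessel_mask: np.ndarray,
                             centerline_labels: np.ndarray) -> np.ndarray:
    """Assign each vessel voxel the label of its nearest centerline voxel."""
    cl_coords = np.argwhere(centerline_labels > 0)
    cl_values = centerline_labels[tuple(cl_coords.T)]
    tree = cKDTree(cl_coords)

    vessel_coords = np.argwhere(vessel_mask > 0)
    _, nearest = tree.query(vessel_coords)          # index of the nearest centerline voxel
    semantic = np.zeros_like(centerline_labels)
    semantic[tuple(vessel_coords.T)] = cl_values[nearest]
    return semantic
```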
(1-3) preparing a training data set;
and (3) completing preparation work of a training data set by using the original data set preprocessed in the step (1-1) and the artificial labeling result obtained in the step (1-2), namely obtaining four subtask prediction targets of anatomical key point detection, blood vessel segment semantic segmentation, blood vessel segment missing classification and key point local bifurcation feature classification in the multi-task network for each image in the data set.
(1-3-1) detecting and predicting target generation by anatomical key points;
In the present application, the anatomical key point detection target is modeled as a multi-channel Gaussian heat map regression task. Specifically, for each preprocessed image in the data set, the network is required to output, for every predefined key point, a probability heat map of the same size as the input image. For each target key point, the corresponding probability heat map follows a three-dimensional Gaussian distribution centered on that key point, and the value at each voxel reflects the probability that the voxel belongs to the target key point. The probability value decreases from 1 towards 0 with the Euclidean distance from the voxel to the target key point, at a rate determined by the standard deviation δ of the Gaussian distribution. Specifically, for any preprocessed image in the data set, let the spatial coordinate of the ith key point be x_i; the value G_i(x) of the corresponding probability heat map at any voxel position x can be defined as:
G_i(x) = exp( −‖x − x_i‖² / (2δ²) ),  i = 1, 2, …, N
where N is the predefined total number of anatomical keypoints in each image. In this embodiment, a thermal map generated for each anatomical key point in the magnetic resonance angiography image is shown in fig. 3 (E). For ease of viewing, the three-dimensional heat map of all key points is projected into the same plane.
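A minimal sketch of the heat-map target generation in step (1-3-1) is given below; it assumes key points are provided as voxel coordinates and implements the Gaussian definition of G_i(x) above, with δ as a free parameter.

```python
# Heat-map target generation for step (1-3-1).
import numpy as np

def keypoint_heatmaps(shape, keypoints, delta=3.0):
    """Return an (N, D, H, W) array of 3-D Gaussian heat maps, one per key point."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"))
    heatmaps = np.zeros((len(keypoints), *shape), dtype=np.float32)
    for i, x_i in enumerate(keypoints):
        sq_dist = ((grid - np.asarray(x_i).reshape(3, 1, 1, 1)) ** 2).sum(axis=0)
        heatmaps[i] = np.exp(-sq_dist / (2.0 * delta ** 2))   # G_i(x)
    return heatmaps
```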
(1-3-2) generating a prediction target by semantic segmentation of the blood vessel segment;
In the present application, the blood vessel segment semantic segmentation task is modeled as a multi-channel set of per-segment binary segmentation tasks, i.e., the prediction target is a multi-channel probability map generated from the blood vessel segment semantic segmentation labels. For S predefined vessel segments (i.e., S semantic classes), the prediction target comprises S+1 channels: the first S channels respectively reflect the location distribution of each vessel segment in the input image (when a voxel belongs to the ith vessel segment, the ith channel at that voxel position takes the value 1 and the remaining channels take the value 0), and the (S+1)-th channel is a background channel (when a voxel does not belong to any vessel segment, the background channel at that voxel position takes the value 1 and the remaining channels take the value 0).
In this embodiment, the prediction target of semantic segmentation of a blood vessel segment in a magnetic resonance angiography image is shown as (F) in fig. 3, which shows a channel corresponding to an MCA-M1 blood vessel segment in the prediction target.
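The multi-channel target described in step (1-3-2) can be sketched as a simple one-hot encoding, assuming the semantic segmentation label volume encodes the S vessel segments as integers 1 to S and the background as 0.

```python
# Semantic-segmentation target for step (1-3-2).
import numpy as np

def onehot_segment_target(seg_labels: np.ndarray, num_segments: int) -> np.ndarray:
    """Return an (S+1, D, H, W) one-hot target; the last channel is the background."""
    target = np.zeros((num_segments + 1, *seg_labels.shape), dtype=np.float32)
    for s in range(1, num_segments + 1):
        target[s - 1] = (seg_labels == s)
    target[num_segments] = (seg_labels == 0)      # background channel
    return target
```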
(1-3-3) generating a blood vessel segment deletion classification and key point local bifurcation feature classification prediction target;
whether the blood vessel segment is missing or not and whether key points at two ends of the blood vessel segment have local bifurcation characteristics or not are in one-to-one correspondence, and the result can be obtained by artificially marking the semantic segmentation of the blood vessel segment. Specifically, when a certain vessel segment is missing in the vessel segment semantic segmentation labeling (that is, the number of voxels belonging to the vessel segment in the labeling is 0), the anatomical key points at the two ends of the vessel segment lose the local bifurcation feature; when a certain vessel segment exists (namely the number of voxels belonging to the vessel segment in the label is more than 0), the anatomical key points at two ends of the vessel segment have local bifurcation characteristics. The above correspondence relationship can be intuitively explained by fig. 4.
In the present application, vessel segment missing classification and key point local bifurcation feature classification are treated as a set of mutually independent binary classification tasks. For N predefined anatomical key points and S vessel segments, the prediction targets of the key point local bifurcation feature classification and the vessel segment missing classification are vectors y_N and y_S of length N and S respectively. The value of each element indicates whether the corresponding key point has a local bifurcation feature, or whether the corresponding vessel segment exists (for any input image, when the ith vessel segment is absent, the ith element of y_S is 0, otherwise it is 1; similarly, when the ith anatomical key point has no local bifurcation feature, the ith element of y_N is 0, otherwise it is 1).
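The two classification targets of step (1-3-3) can be derived from the semantic segmentation label as sketched below; the table mapping each vessel segment to the key points at its two ends is implied by the key point and segment definitions, but its exact form here is an assumption.

```python
# Classification targets y_S and y_N for step (1-3-3).
# segment_to_keypoints: {segment index (1-based) -> indices of its end key points} (assumed table).
import numpy as np

def missing_and_bifurcation_targets(seg_labels, num_segments, num_keypoints,
                                    segment_to_keypoints):
    y_S = np.ones(num_segments, dtype=np.float32)    # 1 = segment present
    y_N = np.ones(num_keypoints, dtype=np.float32)   # 1 = local bifurcation feature present
    for s in range(1, num_segments + 1):
        if not np.any(seg_labels == s):              # segment missing in the labels
            y_S[s - 1] = 0.0
            for k in segment_to_keypoints[s]:        # its end points lose the bifurcation feature
                y_N[k] = 0.0
    return y_S, y_N
```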
(1-3-4) constructing each preprocessed image and an anatomical key point multichannel probability heat map, a blood vessel segment semantic segmentation multichannel probability map, a blood vessel segment missing classification vector and a key point local bifurcation feature vector which are generated by corresponding manual labeling into a training data pair. All pairs of training data together constitute a training data set.
(1-4) constructing a multitask deep learning network;
the input of the multitask deep learning network is a single preprocessed three-dimensional angiography image, and the input images are required to have uniform size and resolution, but no limitation is imposed on specific numerical values (the size of the input image used in the embodiment is 192 × 160 × 60, and the resolution is 0.5 × 0.5 × 0.8 mm) 3 ). The network is composed of a trunk part and four branch parts, and the structure is shown in fig. 4. A main part is improved from a medical image processing classical network U-Net model and comprises a symmetrical encoderAnd a decoder structure. The encoder comprises 5 residual modules and 4 maximum pooling layers, and the maximum pooling layers are sequentially distributed between the two residual modules. The residual error module does not change the size of the input feature map, and comprises two convolution layers with convolution kernel size of 3 multiplied by 3 and a short-connection structure between the input and the output of the module, so that the problem of gradient disappearance possibly occurring in the deep learning network training process is solved. Each max pooling layer reduces the dimension size of the feature map to 1/2. The decoder comprises 3 residual error modules and 4 deconvolution layers, and the residual error modules are sequentially distributed between every two deconvolution layers. The decoder and the residual module in the encoder have the same structure, and each deconvolution layer expands the dimension of the feature map to 2 times of the original dimension. The output end of the 5 th residual error module in the encoder is connected with the input end of the 1 st deconvolution layer in the decoder, and the number of the maximum pooling layers in the encoder is kept consistent with that of the deconvolution layers in the decoder, so that the input and the output of the main part are ensured to have the same size.
In addition, in order to fuse low-level local spatial features with high-level global semantic information, skip connections are added between symmetric layers of the encoder and decoder. Specifically, the output feature map of the 4th residual module in the encoder is concatenated with the output feature map of the 1st deconvolution layer in the decoder as the input of the 1st residual module in the decoder; the output feature map of the 3rd residual module in the encoder is concatenated with the output feature map of the 2nd deconvolution layer in the decoder as the input of the 2nd residual module in the decoder; the output feature map of the 2nd residual module in the encoder is concatenated with the output feature map of the 3rd deconvolution layer in the decoder as the input of the 3rd residual module in the decoder; and the output feature map of the 1st residual module in the encoder is concatenated with the output feature map of the 4th deconvolution layer in the decoder to jointly form the output feature map of the trunk part. The trunk output feature map is then fed into all four branches of the network simultaneously.
Each of the four branch parts of the network consists of a residual module followed by a convolution layer with 1 × 1 convolution kernels. The outputs of the four branches correspond, respectively, to the predictions of the anatomical key point multi-channel probability heat map, the vessel segment semantic segmentation multi-channel probability map, the vessel segment missing classification vector and the key point local bifurcation feature classification vector; the size and resolution of the first two branch outputs are consistent with the input image, and the lengths of the latter two output vectors are consistent with the numbers of predefined vessel segments and key points.
The multitask deep learning network constructed in the embodiment is shown in fig. 5, and an intracranial vascular magnetic resonance angiography image is taken as an example, it is noted that the numerical values in the figure are only examples, and other numerical values may be adopted in practice.
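For illustration only, the following PyTorch sketch reproduces the trunk-plus-four-branch layout described above (a 3-D residual U-Net trunk with skip connections and four task heads). The channel widths, normalization layers, kernel depths and the global average pooling used to turn the two classification branches into vectors are assumptions not fixed by the text; spatial dimensions are assumed divisible by 16.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Two convolutions plus a shortcut; keeps the spatial size unchanged."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch))
        self.skip = nn.Conv3d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return F.relu(self.body(x) + self.skip(x))

class MultiTaskNet(nn.Module):
    def __init__(self, n_keypoints=19, n_segments=20, base=16):
        super().__init__()
        chs = [base * 2 ** i for i in range(5)]          # assumed channel widths
        # Encoder: 5 residual modules separated by 4 max-pooling layers.
        self.enc = nn.ModuleList([ResBlock(1 if i == 0 else chs[i - 1], chs[i]) for i in range(5)])
        self.pool = nn.MaxPool3d(2)
        # Decoder: 4 deconvolutions with skip concatenation and 3 residual modules.
        self.up = nn.ModuleList([nn.ConvTranspose3d(chs[4 - i], chs[3 - i], 2, stride=2) for i in range(4)])
        self.dec = nn.ModuleList([ResBlock(chs[3 - i] * 2, chs[3 - i]) for i in range(3)])
        trunk_ch = chs[0] * 2                            # 4th deconv output ++ 1st encoder features
        # Four task branches: residual module + 1x1 convolution.
        self.head_kpt = nn.Sequential(ResBlock(trunk_ch, base), nn.Conv3d(base, n_keypoints, 1))
        self.head_seg = nn.Sequential(ResBlock(trunk_ch, base), nn.Conv3d(base, n_segments + 1, 1))
        self.head_missing = nn.Sequential(ResBlock(trunk_ch, base), nn.Conv3d(base, n_segments, 1))
        self.head_bifur = nn.Sequential(ResBlock(trunk_ch, base), nn.Conv3d(base, n_keypoints, 1))

    def _vec(self, t):
        # Global average pooling turns a branch output into a classification vector
        # (an assumption; the text only specifies residual module + 1x1 convolution).
        return F.adaptive_avg_pool3d(t, 1).flatten(1)

    def forward(self, x):
        feats = []
        for i, block in enumerate(self.enc):
            x = block(x)
            feats.append(x)
            if i < 4:
                x = self.pool(x)
        for i in range(3):
            x = self.dec[i](torch.cat([self.up[i](x), feats[3 - i]], dim=1))
        trunk = torch.cat([self.up[3](x), feats[0]], dim=1)
        return (self.head_kpt(trunk),                    # key point heat maps
                self.head_seg(trunk),                    # vessel-segment segmentation logits
                self._vec(self.head_missing(trunk)),     # segment-missing logits
                self._vec(self.head_bifur(trunk)))       # bifurcation-feature logits
```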
(1-5) applying the training data set generated in the step (1-3) and training the multitask deep learning network constructed in the step (1-4) in an off-line manner, wherein the off-line training comprises the following steps:
(1-5-1) randomly initializing the multitask deep learning network parameters constructed in the step (1-4).
(1-5-2) Randomly selecting a training data pair from the training data set generated in step (1-3), inputting the preprocessed three-dimensional angiography image into the multi-task deep learning network constructed in step (1-4), and obtaining the output of each network branch as the prediction result of each subtask. The prediction result of each subtask and the corresponding prediction target in the training data pair are input into the corresponding loss function to obtain the loss value. Specifically, the invention uses an L2 loss function for the anatomical key point detection task, a Dice loss function for the vessel segment semantic segmentation task, and cross entropy loss functions for the vessel segment missing classification and key point local bifurcation feature classification tasks. To avoid convergence problems caused by severe class imbalance during training, the loss functions of the anatomical key point detection and vessel segment semantic segmentation tasks are weighted, the weights being the ratio of the number of input-image voxels to the number of Gaussian hot-spot voxels and to the volume of each vessel segment, respectively.
In addition, considering that whether a vessel segment is missing and whether the key points at its two ends have local bifurcation features are in one-to-one correspondence, a consistency loss function L_self is introduced to supervise the two classification tasks so that their predictions obey this rule. Specifically, from the vessel segment missing classification prediction ŷ_S, the implied key point local bifurcation feature classes ỹ_N are deduced, and the key point local bifurcation feature classification prediction ŷ_N actually output by the network is required to stay consistent with them. The consistency loss function can be defined using a cross entropy loss:
L_self = − Σ_{i∈Θ} [ ỹ_N^i · log ŷ_N^i + (1 − ỹ_N^i) · log(1 − ŷ_N^i) ]
wherein the superscript i represents the ith element (i.e. corresponding to the ith anatomical key point) in the vector, and Θ is a set of serial numbers of all key points that may cause the local bifurcation feature change due to the vascular variation (for example, in the intracranial vascular magnetic resonance angiography image used in this embodiment, among predefined Willis ring anatomical key points, the common key points that may cause the local bifurcation feature change due to the physiological variation include two side PCoA, ACoA, PCA-P1, two side endpoints of ACA-a1, and the like).
The total loss function of the network training is obtained by linearly combining the loss functions:
L = L_1 + α·L_2 + β·(L_3 + L_4) + γ·L_self   (0 < α, β < 1)
where L_1, L_2, L_3 and L_4 are the loss functions of the anatomical key point detection, vessel segment semantic segmentation, vessel segment missing classification and key point local bifurcation feature classification tasks respectively, and L_self is the consistency loss function. The hyper-parameters α, β and γ can be adjusted flexibly in the actual scenario so that all the loss terms are of the same order of magnitude.
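A hedged PyTorch sketch of the loss combination in step (1-5-2) follows. Only the choice of L2 / Dice / cross-entropy terms, the consistency term L_self and the linear combination follow the text; the class-balancing weights are omitted, and the segment-to-key-point table and the use of logits throughout are assumptions.

```python
import torch
import torch.nn.functional as F

def dice_loss(prob, target, eps=1e-6):
    # prob, target: (B, C, D, H, W); target is one-hot.
    inter = (prob * target).sum(dim=(2, 3, 4))
    denom = prob.sum(dim=(2, 3, 4)) + target.sum(dim=(2, 3, 4))
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def consistency_loss(bifur_logits, missing_logits, segment_to_keypoints, theta):
    # L_self: the bifurcation prediction must agree with what the predicted segment
    # presence implies (an end key point loses its bifurcation feature when the
    # adjacent segment is predicted missing). Only key points in theta are supervised.
    implied = torch.ones_like(bifur_logits)
    presence = (torch.sigmoid(missing_logits) > 0.5).float()
    for s, kpts in segment_to_keypoints.items():
        for k in kpts:
            implied[:, k] = torch.minimum(implied[:, k], presence[:, s])
    mask = torch.zeros_like(bifur_logits)
    mask[:, theta] = 1.0
    return F.binary_cross_entropy_with_logits(bifur_logits, implied, weight=mask)

def total_loss(pred, target, segment_to_keypoints, theta,
               alpha=0.5, beta=0.1, gamma=0.1):
    kpt, seg, missing, bifur = pred
    gt_kpt, gt_seg, y_S, y_N = target
    L1 = F.mse_loss(kpt, gt_kpt)                               # key point heat maps (L2)
    L2 = dice_loss(torch.softmax(seg, dim=1), gt_seg)          # semantic segmentation (Dice)
    L3 = F.binary_cross_entropy_with_logits(missing, y_S)      # segment-missing classification
    L4 = F.binary_cross_entropy_with_logits(bifur, y_N)        # bifurcation-feature classification
    L_self = consistency_loss(bifur, missing, segment_to_keypoints, theta)
    return L1 + alpha * L2 + beta * (L3 + L4) + gamma * L_self
```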
(1-5-3) Circularly executing the above training steps: in each iteration, minimizing the loss function by gradient descent based on the computed total loss value and continuously adjusting the network parameters. When the number of training iterations exceeds the preset upper limit (generally no less than 5000), training ends and the multi-task deep learning network parameters are obtained.
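For completeness, a minimal training-loop sketch covering steps (1-5-1) to (1-5-3), using the total_loss sketch above and assuming a data loader that yields the training pairs of step (1-3); the optimizer choice (Adam), learning rate and MAX_ITERS value are assumptions, the text only requiring gradient descent and at least 5000 iterations.

```python
import torch

MAX_ITERS = 5000

def train(net, loader, segment_to_keypoints, theta, device="cuda"):
    net.to(device).train()                              # (1-5-1) parameters randomly initialized by PyTorch
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    it = 0
    while it < MAX_ITERS:
        for image, target in loader:                    # randomly sampled training pairs
            image = image.to(device)
            target = [t.to(device) for t in target]
            pred = net(image)                           # four branch outputs
            loss = total_loss(pred, target, segment_to_keypoints, theta)
            opt.zero_grad()
            loss.backward()                             # gradient-descent step
            opt.step()
            it += 1
            if it >= MAX_ITERS:
                break
    return net
```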
(2) An online stage;
(2-1) acquiring a three-dimensional angiographic image containing the same blood vessel type as the original data set of step (1-1) as a test image.
And (2-2) preprocessing the test image acquired in the step (2-1), wherein parameters such as image resolution, cut image size and the like in preprocessing operation are consistent with the preprocessing step in the step (1-1).
(2-3) Inputting the preprocessed three-dimensional angiography image obtained in step (2-2) into the multi-task deep learning network trained in the offline stage to obtain the predicted probability maps of the anatomical key points. In each predicted probability map, the voxel position with the maximum prediction probability is selected as the final detection result of the key point corresponding to that heat map. The multi-task deep learning network used in this step is shown in fig. 6. Note that the values in the figure are examples only; other values may be used in practice.
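The per-heat-map maximum selection of step (2-3) reduces to an argmax over each predicted channel, as in the following sketch.

```python
# Key point extraction for step (2-3).
import numpy as np

def heatmaps_to_keypoints(heatmaps: np.ndarray) -> np.ndarray:
    """heatmaps: (N, D, H, W) predicted probability maps -> (N, 3) voxel coordinates."""
    coords = [np.unravel_index(np.argmax(h), h.shape) for h in heatmaps]
    return np.asarray(coords)
```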
By applying the anatomical key point detection method provided by the invention, the detection result of part of key points of the intracranial vascular magnetic resonance angiography image in the embodiment is shown in fig. 7.
In order to implement the above embodiments, the present application further proposes a device for detecting anatomical key points in a three-dimensional angiographic image,
fig. 8 is a schematic structural diagram of a device for detecting an anatomical key point in a three-dimensional angiography image according to an embodiment of the present disclosure.
As shown in fig. 8, the apparatus for detecting anatomical key points in a three-dimensional angiography image includes an acquisition module, a processing module, and a result generation module, wherein,
the acquisition module is used for acquiring a three-dimensional angiography image as a test image;
the processing module is used for preprocessing the test image, inputting the preprocessed image into a pre-trained multi-task deep learning network and outputting an anatomical key point prediction probability map, wherein the multi-task deep learning network is obtained through training with a three-dimensional angiography training image containing the same blood vessel type as the test image and a labeling result of the three-dimensional angiography training image as a training data set;
and the result generation module is used for generating the detection result of the anatomical key point according to the prediction probability of the voxel position in the anatomical key point prediction probability map.
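Purely as an illustrative sketch of how the three modules described above could be organised in code (the class and method names are assumptions, not part of this disclosure):

class KeypointDetectionDevice:
    """Hypothetical arrangement of the acquisition, processing and result generation modules."""

    def __init__(self, network, preprocess):
        self.network = network        # pre-trained multi-task deep learning network
        self.preprocess = preprocess  # same preprocessing as used for the training data

    def acquire(self, source):
        # Acquisition module: obtain a three-dimensional angiography image as the test image.
        return source.read_volume()   # assumed interface of the image source

    def process(self, test_image):
        # Processing module: preprocess the image and run the network to obtain
        # the anatomical key point prediction probability maps.
        return self.network(self.preprocess(test_image))

    def generate_result(self, probability_maps):
        # Result generation module: take the maximum-probability voxel of each map,
        # e.g. with the keypoints_from_heatmaps sketch above.
        return keypoints_from_heatmaps(probability_maps)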
It should be noted that the foregoing explanation of the embodiment of the method for detecting anatomical key points in a three-dimensional angiography image is also applicable to the apparatus for detecting anatomical key points in a three-dimensional angiography image of this embodiment, and details are not repeated here.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Moreover, various embodiments or examples and features of various embodiments or examples described in this specification can be combined and combined by one skilled in the art without being mutually inconsistent.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a compact disc read-only memory (CDROM). Further, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having appropriate combinational logic gates, Programmable Gate Arrays (PGAs), Field Programmable Gate Arrays (FPGAs), etc.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware that can be related to instructions of a program, which can be stored in a computer-readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A method for detecting anatomical key points in a three-dimensional angiographic image, comprising the steps of:
acquiring a three-dimensional angiography image as a test image;
preprocessing the test image, inputting the preprocessed image into a pre-trained multi-task deep learning network, and outputting an anatomical key point prediction probability map, wherein the multi-task deep learning network is obtained through training with a three-dimensional angiography training image containing the same blood vessel type as the test image and a labeling result of the three-dimensional angiography training image as a training data set;
and generating a detection result of the anatomical key points according to the prediction probability of the voxel positions in the anatomical key point prediction probability map.
2. The method of claim 1, wherein the preprocessing of the test image comprises resolution unification, cropping to a preset size, and voxel grey value normalization.
3. The method of claim 1, wherein pre-training the multitask deep learning network comprises:
acquiring a three-dimensional angiography image containing the same blood vessel type as the test image as an original data set;
preprocessing the original data set, and acquiring a labeling result corresponding to the preprocessed data set, wherein the labeling result comprises a vessel anatomy key point labeling result, a vessel binary segmentation labeling result and a vessel segment semantic segmentation labeling result;
generating a training data set according to the preprocessed data set and the corresponding labeling result;
and constructing the multi-task deep learning network, and training the multi-task deep learning network by using the training data set to obtain the trained multi-task deep learning network.
4. The method of claim 3, wherein the obtaining the labeling result corresponding to the preprocessed data set comprises:
using medical image processing software, manually labeling, for each image in the preprocessed data set, the predefined blood vessel anatomical key points and the blood vessel binary segmentation, wherein the labeling result of the blood vessel anatomical key points is a three-dimensional coordinate corresponding to each key point, and the labeling result of the blood vessel binary segmentation is a voxel-wise binary image of the same size as the image;
and generating a semantic segmentation labeling result of the blood vessel segment corresponding to each image in the data set by using an automatic method based on the blood vessel anatomy key point and the labeling result of the blood vessel binary segmentation.
5. The method of claim 4, wherein the generating of the vessel segment semantic segmentation labeling result corresponding to each image in the data set based on the vessel anatomical key points and the vessel binary segmentation labeling result by using an automated method comprises:
obtaining a corresponding lumen center line by the blood vessel binary segmentation labeling result by using a thinning algorithm, and dividing the center line into different semantic segments according to anatomical key point labeling;
determining the semantic label of each blood vessel voxel in the blood vessel binary segmentation labeling according to its nearest centerline voxel;
and manually correcting, in medical image processing software, the automatic semantic segmentation labeling result obtained for each image to obtain a final semantic segmentation labeling result, wherein the automatic semantic segmentation labeling result comprises the semantic segments and the semantic labels.
6. The method of claim 3, wherein generating a training data set from the preprocessed data set and corresponding labeling results comprises:
processing each image in the preprocessed data set according to the labeling result to obtain an anatomical key point multi-channel probability heat map, a blood vessel segment semantic segmentation multi-channel probability map, a blood vessel segment missing classification vector and a key point local bifurcation feature classification vector which are used as prediction targets corresponding to the images;
and forming a training data pair by each image in the preprocessed data set and the corresponding prediction target, wherein all the training data pairs form a training data set together.
7. The method of claim 6, wherein the processing each image in the preprocessed data set according to the labeling result to obtain an anatomical keypoint multi-channel probability heat map, a blood vessel segment semantic segmentation multi-channel probability map, a blood vessel segment missing classification vector, and a keypoint local bifurcation feature classification vector as the prediction targets corresponding to the images comprises:
for each image in the preprocessed data set, generating, according to the blood vessel anatomical key point labeling result, an anatomical key point multi-channel probability heat map in which each channel has the same size as the input image and corresponds to one predefined key point, wherein for each target key point the corresponding probability heat map is centered on that key point and follows a three-dimensional Gaussian distribution;
generating a blood vessel segment semantic segmentation multi-channel probability map according to the blood vessel segment semantic segmentation labeling result, wherein the last channel of the blood vessel segment semantic segmentation multi-channel probability map is a background channel, and the rest channels respectively reflect the position distribution of each blood vessel segment in an input image;
and obtaining a vessel segment missing classification vector and a key point local bifurcation feature classification vector according to the labeling result of the vessel segment semantic segmentation, wherein when a certain vessel segment is missing in the vessel segment semantic segmentation labeling result, the anatomical key points at the two ends of the vessel segment lose the local bifurcation feature, otherwise, the anatomical key points at the two ends of the vessel segment have the local bifurcation feature.
8. The method of claim 3, wherein the multitasking deep learning network includes a trunk portion and four branch portions, wherein,
the main part is used for carrying out feature extraction on the input image and outputting a feature map;
the first branch is used for processing the characteristic map and generating a prediction result of the multi-channel probability heat map of the anatomical key points;
the second branch is used for processing the characteristic graph and generating a prediction result of the blood vessel segment semantic segmentation multi-channel probability graph;
the third branch is used for processing the characteristic diagram and generating a prediction result of the blood vessel segment missing classification vector;
and the fourth branch is used for processing the feature map and generating a prediction result of the local bifurcation feature classification vector of the key point.
9. The method of claim 6, wherein training the initialized network using the training data set comprises:
step S1: randomly selecting a training data pair from the training data set, inputting the preprocessed three-dimensional angiography image in the training data pair into a constructed multi-task deep learning network, and acquiring the output result of each branch of the network as a prediction result;
step S2: inputting the prediction result and the prediction target in the training data pair into a loss function to obtain a loss function value;
step S3: minimizing the loss function by using a gradient descent method based on the calculated loss function value, and adjusting network parameters;
step S4: and repeating the steps S1, S2, S3 and S4, continuously adjusting the network parameters, finishing training when the training times exceed the set upper limit times, determining the multitask deep learning network parameters, and obtaining the trained multitask deep learning network.
10. A device for detecting anatomical key points in a three-dimensional angiography image is characterized by comprising an acquisition module, a processing module and a result generation module, wherein,
the acquisition module is used for acquiring a three-dimensional angiography image as a test image;
the processing module is used for preprocessing the test image, inputting the preprocessed image into a pre-trained multi-task deep learning network and outputting an anatomical key point prediction probability map, wherein the multi-task deep learning network is obtained through training with a three-dimensional angiography training image containing the same blood vessel type as the test image and a labeling result of the three-dimensional angiography training image as a training data set;
and the result generation module is used for generating the detection result of the anatomical key point according to the prediction probability of the voxel position in the anatomical key point prediction probability map.
CN202210179800.9A 2022-02-25 2022-02-25 Method and device for detecting anatomical key points in three-dimensional angiography image Pending CN114943682A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210179800.9A CN114943682A (en) 2022-02-25 2022-02-25 Method and device for detecting anatomical key points in three-dimensional angiography image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210179800.9A CN114943682A (en) 2022-02-25 2022-02-25 Method and device for detecting anatomical key points in three-dimensional angiography image

Publications (1)

Publication Number Publication Date
CN114943682A true CN114943682A (en) 2022-08-26

Family

ID=82905875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210179800.9A Pending CN114943682A (en) 2022-02-25 2022-02-25 Method and device for detecting anatomical key points in three-dimensional angiography image

Country Status (1)

Country Link
CN (1) CN114943682A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115471499A (en) * 2022-10-19 2022-12-13 中国科学院空间应用工程与技术中心 Image target detection and segmentation method, system, storage medium and electronic equipment
CN116309591A (en) * 2023-05-19 2023-06-23 杭州健培科技有限公司 Medical image 3D key point detection method, model training method and device
CN116309591B (en) * 2023-05-19 2023-08-25 杭州健培科技有限公司 Medical image 3D key point detection method, model training method and device
CN116704248A (en) * 2023-06-07 2023-09-05 南京大学 Serum sample image classification method based on multi-semantic unbalanced learning
CN116524548A (en) * 2023-07-03 2023-08-01 中国科学院自动化研究所 Vascular structure information extraction method, device and storage medium
CN116524548B (en) * 2023-07-03 2023-12-26 中国科学院自动化研究所 Vascular structure information extraction method, device and storage medium

Similar Documents

Publication Publication Date Title
CN114943682A (en) Method and device for detecting anatomical key points in three-dimensional angiography image
CN109035255B (en) Method for segmenting aorta with interlayer in CT image based on convolutional neural network
Ecabert et al. Segmentation of the heart and great vessels in CT images using a model-based adaptation framework
EP3660785A1 (en) Method and system for providing an at least 3-dimensional medical image segmentation of a structure of an internal organ
Bouraoui et al. 3D segmentation of coronary arteries based on advanced mathematical morphology techniques
US7817831B2 (en) Method for identification of a contrasted blood vessel in digital image data
Aljabri et al. A review on the use of deep learning for medical images segmentation
CN111709925B (en) Devices, systems, and media for vascular plaque analysis
US20220284583A1 (en) Computerised tomography image processing
Xian et al. Main coronary vessel segmentation using deep learning in smart medical
US9189866B2 (en) Vascular tree from anatomical landmarks and a clinical ontology
CN113506310B (en) Medical image processing method and device, electronic equipment and storage medium
CN110852987B (en) Vascular plaque detection method and device based on deep morphology and storage medium
Li et al. Lumen segmentation of aortic dissection with cascaded convolutional network
Wang et al. A two-stage U-net model for 3D multi-class segmentation on full-resolution cardiac data
Vukadinovic et al. Segmentation of the outer vessel wall of the common carotid artery in CTA
Hepp et al. Fully automated segmentation and shape analysis of the thoracic aorta in non–contrast-enhanced magnetic resonance images of the German National Cohort Study
CN115546570A (en) Blood vessel image segmentation method and system based on three-dimensional depth network
Mirunalini et al. Segmentation of Coronary Arteries from CTA axial slices using Deep Learning techniques
CN112541893A (en) Method for detecting tree structure branching key points in three-dimensional tomography image
Lyu et al. Dissected aorta segmentation using convolutional neural networks
CN113192069A (en) Semantic segmentation method and device for tree structure in three-dimensional tomography image
Roy et al. Vessels segmentation in angiograms using convolutional neural network: A deep learning based approach
Luong et al. A computer-aided detection to intracranial hemorrhage by using deep learning: a case study
Gao et al. Automatic detection of aorto-femoral vessel trajectory from whole-body computed tomography angiography data sets

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination