CN112784928A - DSA image recognition method, device and storage medium - Google Patents

DSA image recognition method, device and storage medium

Info

Publication number
CN112784928A
CN112784928A
Authority
CN
China
Prior art keywords
image
dsa
aneurysm
identification
dsa image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110183361.4A
Other languages
Chinese (zh)
Inventor
马学升 (Ma Xuesheng)
刘伟奇 (Liu Weiqi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunming Tongxin Yilian Technology Co ltd
Original Assignee
Kunming Tongxin Yilian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming Tongxin Yilian Technology Co ltd filed Critical Kunming Tongxin Yilian Technology Co ltd
Priority to CN202110183361.4A priority Critical patent/CN112784928A/en
Publication of CN112784928A publication Critical patent/CN112784928A/en
Pending legal-status Critical Current

Classifications

    • G06F18/24 Classification techniques (G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06F ELECTRIC DIGITAL DATA PROCESSING → G06F18/00 Pattern recognition → G06F18/20 Analysing)
    • G06N3/04 Architecture, e.g. interconnection topology (G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS → G06N3/00 Computing arrangements based on biological models → G06N3/02 Neural networks)
    • G06N3/08 Learning methods (G06N3/00 Computing arrangements based on biological models → G06N3/02 Neural networks)
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds (G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING → G06V10/00 Arrangements for image or video recognition or understanding → G06V10/20 Image preprocessing → G06V10/26 Segmentation of patterns in the image field; detection of occlusion)
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components (G06V10/40 Extraction of image or video features)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a DSA image identification method, device and storage medium, comprising the following steps: receiving 2D-DSA image data, wherein the 2D-DSA image data comprises one or more 2D-DSA images; performing first-stage region identification on the 2D-DSA image based on a CNN model to locate a specific region in the 2D-DSA image, and performing second-stage aneurysm identification on the 2D-DSA image based on the CNN model after the specific region is located, to obtain identification result data; and outputting the identification result data, wherein the identification result data comprises an aneurysm label annotating the 2D-DSA image. The invention processes the 2D-DSA image in two stages via a CNN algorithm, progressively narrowing the 2D-DSA image down to the aneurysm, which effectively reduces interference from other factors and improves detection accuracy.

Description

DSA image recognition method, device and storage medium
Technical Field
The present invention relates to DSA image recognition technology, and more particularly to a DSA image recognition method, device and storage medium.
Background
Intracranial aneurysms are cerebrovascular lesions caused by abnormal local dilation of a cerebral artery wall. They usually arise from weakness of the vessel intima and appear to be associated with polycystic kidney disease, fibroplasia, aortic stenosis, and similar conditions. If not discovered and treated in time, rupture of an intracranial aneurysm can lead to subarachnoid hemorrhage, which in turn can cause severe neurological sequelae and carries high mortality.
Intracranial aneurysms are common; statistically, their incidence is 3% worldwide and 7% in China. Many people harbor aneurysms without knowing it, so detection and screening of intracranial aneurysms are essential. Digital subtraction angiography (DSA) is considered the gold standard for diagnosing intracranial aneurysms and is recommended for their identification and evaluation when surgical or endovascular treatment is being considered. DSA offers higher resolution and detection sensitivity than CT angiography (CTA) and magnetic resonance angiography (MRA), especially for intracranial aneurysms smaller than 3 millimeters.
DSA stores the acquired two-dimensional contrast images in the standard Digital Imaging and Communications in Medicine (DICOM) format. A DSA sequence samples the flow of contrast agent through the artery at 3 to 5 frames per second, and the acquisition typically lasts 3 to 15 seconds. During imaging, the patient's head should be held in a stable position to obtain a clear image sequence. The physician must examine the entire two-dimensional DSA sequence to identify an intracranial aneurysm. This examination can take longer than expected, and small intracranial aneurysms are sometimes misdiagnosed because their location and shape can be misleading. The traditional diagnostic procedure is laborious, requires skilled and experienced physicians, and even then intracranial aneurysms can still be missed. An automated system for diagnosing intracranial aneurysms can therefore reduce the burden on physicians to some extent.
For patients with subarachnoid hemorrhage, the physician needs to obtain anteroposterior and lateral two-dimensional DSA (2D-DSA) views of the four intracranial arteries (the left and right common carotid arteries and the left and right vertebral arteries). If an intracranial aneurysm is found on an artery, a three-dimensional DSA (3D-DSA) reconstruction of the entire artery should be performed immediately. The reconstructed 3D-DSA provides vital information about vessel morphology and pathology; physicians can use it to analyze vessel geometry in detail, e.g. vessel diameter and the location and size of the aneurysm, in order to make clinical decisions. The quality of 3D-DSA images, however, varies from patient to patient. In clinical practice, many factors affect 3D-DSA image quality, including the amount of contrast agent injected, the timing of injection, hemodynamic indices of the contrast agent, and differences in blood flow and cardiac output; in addition, the reconstruction parameters may differ between patients. These factors can bias the physician's choice of treatment.
On the other hand, a 3D-DSA model contains more information than a 2D-DSA model and makes an aneurysm easier to identify, but 3D devices are expensive, so most hospitals in developing countries can only afford 2D angiography equipment, and 3D-DSA data from invasive examinations are rather limited.
Current research on automatic computer-aided DSA diagnosis is based on classical digital image processing methods and uses 2D-DSA images. However, classical digital image processing cannot automatically localize aneurysms, current feature extraction methods are severely hampered by complex vasculature, and sliding-window search and feature extraction are very time consuming, so this is not the optimal approach.
The convolutional neural network (CNN) method has proven more effective at feature extraction than classical digital image processing. In recent years, CNN architectures have been widely used for object detection with good results, and CNN methods also perform well in several medical image detection tasks. However, no existing solution combines a CNN architecture with aneurysm identification.
Disclosure of Invention
Embodiments of the present invention provide a DSA image recognition method, a DSA image recognition device, and a storage medium, which combine a CNN framework with aneurysm recognition, and improve accuracy of aneurysm recognition and detection.
In a first aspect of the embodiments of the present invention, a DSA image recognition method is provided, including:
receiving 2D-DSA image data, wherein the 2D-DSA image data comprises one or more 2D-DSA images;
performing first-stage region identification on the 2D-DSA image based on a CNN model, positioning a specific region in the 2D-DSA image, and performing second-stage aneurysm identification on the 2D-DSA image based on the CNN model after positioning the specific region to obtain identification result data;
outputting the identification result data, wherein the identification result data comprises an aneurysm label labeling the 2D-DSA image.
Optionally, in a possible implementation manner of the first aspect, the performing, based on the CNN model, a first-stage region identification on the 2D-DSA image, and locating a specific region in the 2D-DSA image includes:
the framework of the CNN model is constructed based on a feature pyramid network, and the backbone is a residual neural network;
the 2D-DSA image includes a frontal image and a lateral image, where F_i^j denotes the j-th image of the i-th patient's frontal image sequence and L_i^j denotes the j-th image of the i-th patient's lateral image sequence, F_i^j and L_i^j corresponding to each other;
and inputting the frontal image and the lateral image into the feature pyramid network respectively, determining a target region in the frontal image, and determining the corresponding region in the lateral image.
Optionally, in a possible implementation manner of the first aspect, the performing a second-stage aneurysm identification on the 2D-DSA image based on the CNN model after locating the specific region to obtain identification result data includes:
acquiring the target region in the frontal image and the corresponding region in the lateral image;
feeding the target region and the corresponding region to the feature pyramid network as a dual input to obtain identification result data;
the identification result data includes an aneurysm label for the image pair, the aneurysm label marking the confidence that a location in the images is an aneurysm.
Optionally, in a possible implementation manner of the first aspect, performing region-average gray suppression processing on the identification result data after obtaining it includes:
comparing the aneurysm confidence with a standard threshold, and applying the region-average gray suppression algorithm if the confidence is below the threshold;
roughly extracting a vessel model from the original target aneurysm region image by an adaptive gray-threshold selection method;
logically computing the vessels of the target aneurysm region image based on the vessel model and the original aneurysm region image;
wherein, given the total aneurysm gray value G_aneurysm and the aneurysm pixel count P_aneurysm, the mean gray value of the aneurysm is:
AG_aneurysm = G_aneurysm / P_aneurysm
and the mean gray value of the vessels surrounding the aneurysm is:
AG_periphery = (G_expanded − G_aneurysm) / (P_expanded − P_aneurysm)
If AG_aneurysm is less than AG_periphery, the region image is overlapping vessels; if AG_aneurysm is greater than AG_periphery, the region image is an aneurysm.
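The region-average gray suppression comparison above can be sketched as a small helper. This is a minimal illustration, not the patented implementation; in particular, reading G as a summed gray value and P as a pixel count is an interpretation of the claim wording.

```python
def classify_region(g_aneurysm, p_aneurysm, g_expanded, p_expanded):
    """Region-average gray suppression comparison.

    Assumes g_* are summed gray values and p_* are pixel counts for the
    candidate aneurysm region and its expanded surrounding region (this
    reading of G and P is an interpretation, not stated verbatim).
    """
    ag_aneurysm = g_aneurysm / p_aneurysm
    ag_periphery = (g_expanded - g_aneurysm) / (p_expanded - p_aneurysm)
    # Per the rule above: lower mean gray than the surrounding vessels
    # means overlapping vessels (a false positive); higher means aneurysm.
    return "aneurysm" if ag_aneurysm > ag_periphery else "overlapping vessels"
```

For example, a candidate region whose mean gray clearly exceeds that of its expanded surroundings is kept as an aneurysm, while a dimmer candidate is suppressed as overlapping vessels.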
Optionally, in a possible implementation manner of the first aspect, the CNN model is trained by the following steps, including:
acquiring 2D-DSA image data of a patient, the 2D-DSA image data including one or more 2D-DSA images;
removing images below a preset value from the 2D-DSA images to obtain 2D-DSA images satisfying the preset value;
and dividing the 2D-DSA image meeting the preset value into a training data set and a testing data set based on a preset proportion.
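A preset-ratio split as described above can be sketched as follows; the 80/20 ratio and fixed shuffle seed are assumed examples, since the claim does not state the actual proportion.

```python
import random

def split_dataset(images, train_ratio=0.8, seed=42):
    """Split screened 2D-DSA images into training and test sets by a
    preset ratio. The 80/20 ratio and fixed seed are assumed examples;
    the text does not state the actual proportion."""
    indices = list(range(len(images)))
    random.Random(seed).shuffle(indices)  # reproducible shuffle
    cut = int(len(images) * train_ratio)
    train = [images[i] for i in indices[:cut]]
    test = [images[i] for i in indices[cut:]]
    return train, test

train_set, test_set = split_dataset([f"img{i}" for i in range(10)])
```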
Optionally, in a possible implementation manner of the first aspect, after dividing the 2D-DSA image satisfying the preset value into a training data set and a test data set based on a preset ratio, the method further includes:
the plurality of 2D-DSA images form a DSA image sequence;
and performing local normalization on the DSA images so that the local mean of the DSA images is 0 and the SD is 1.
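The mean-0 / SD-1 normalization above can be sketched on a flattened pixel list; the use of the population standard deviation is an assumption.

```python
def normalize(pixels):
    """Normalize a flattened DSA image (or local window) to mean 0, SD 1.
    Population standard deviation is assumed."""
    m = sum(pixels) / len(pixels)
    sd = (sum((x - m) ** 2 for x in pixels) / len(pixels)) ** 0.5
    return [(x - m) / sd for x in pixels]

window = normalize([10.0, 12.0, 14.0, 16.0])
```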
In a second aspect of the embodiments of the present invention, a DSA image recognition apparatus is provided, including:
the 2D-DSA image receiving module is used for receiving 2D-DSA image data, and the 2D-DSA image data comprises one or more 2D-DSA images;
the identification module is used for carrying out first-stage region identification on the 2D-DSA image based on a CNN model, positioning a specific region in the 2D-DSA image, and carrying out second-stage aneurysm identification on the 2D-DSA image based on the CNN model after positioning the specific region to obtain identification result data;
and the output module is used for outputting the identification result data, wherein the identification result data comprises an aneurysm label for marking the 2D-DSA image.
Optionally, in a possible implementation manner of the second aspect, the identification module includes a stage identification unit, and the stage identification unit is configured to perform the following steps:
the framework of the CNN model is constructed based on a feature pyramid network, and the backbone is a residual neural network;
the 2D-DSA image includes a frontal image and a lateral image, where F_i^j denotes the j-th image of the i-th patient's frontal image sequence and L_i^j denotes the j-th image of the i-th patient's lateral image sequence, F_i^j and L_i^j corresponding to each other;
and inputting the frontal image and the lateral image into the feature pyramid network respectively, determining a target region in the frontal image, and determining the corresponding region in the lateral image.
Optionally, in a possible implementation manner of the second aspect, the identification module includes a two-stage identification unit, and the two-stage identification unit is configured to perform the following steps:
acquiring the target region in the frontal image and the corresponding region in the lateral image;
feeding the target region and the corresponding region to the feature pyramid network as a dual input to obtain identification result data;
the identification result data includes an aneurysm label for the image pair, the aneurysm label marking the confidence that a location in the images is an aneurysm.
In a third aspect of the embodiments of the present invention, a readable storage medium is provided, in which a computer program is stored, which, when being executed by a processor, is adapted to carry out the method according to the first aspect of the present invention and various possible designs of the first aspect of the present invention.
According to the DSA image identification method, device and storage medium, the 2D-DSA image is processed in two stages by a CNN algorithm, progressively localizing the aneurysm, which effectively reduces interference from other factors and improves detection accuracy. The accuracy of the first-stage region identification is 93%: of 100 test images, the aneurysm region is correctly located in 93. This effectively alleviates background interference and improves detection accuracy.
In addition, compared with the single-input model, the dual-input model's convergence curve in five-fold cross-validation fluctuates between 0.59 and 0.72, whereas the single-input model's fluctuates between 0.56 and 0.82; the dual-input model, which combines frontal and lateral information, is thus more stable. With the added region-average gray suppression algorithm, false positives are effectively suppressed and intracranial aneurysms are better distinguished from overlapping vessels. The sensitivity, specificity and accuracy of the detection results exceed those of senior physicians.
Drawings
FIG. 1 is a schematic diagram of a first implementation of an embodiment of the invention;
FIG. 2 is a schematic diagram of a second implementation of an embodiment of the invention;
FIG. 3 is a model diagram of an embodiment of the invention;
FIG. 4 is a schematic diagram of a network architecture in the area identification phase according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a network architecture of an aneurysm identification stage according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of recognition algorithm training according to an embodiment of the present invention;
FIG. 7 is a schematic illustration of a third implementation of an embodiment of the invention;
FIG. 8 is a schematic illustration of a fourth implementation of an embodiment of the invention;
FIG. 9 is a schematic view of a vessel centerline according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a fifth implementation of an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the internal logic of the processes, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
It should be understood that in the present application, "comprising" and "having" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that, in the present invention, "a plurality" means two or more. "And/or" merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects. "Comprises A, B and C" and "comprises A, B, C" mean that all three of A, B and C are included; "comprises A, B or C" means that one of A, B and C is included; "comprises A, B and/or C" means that any one, any two, or all three of A, B and C are included.
It should be understood that, in the present invention, "B corresponding to A", "A corresponds to B", or "B corresponds to A" means that B is associated with A and that B can be determined from A. Determining B from A does not mean determining B from A alone; B may be determined from A and/or other information. The matching of A and B means that the similarity between A and B is greater than or equal to a preset threshold.
As used herein, "if" may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Digital subtraction angiography (DSA) is often used to assess the structural basis of neurovascular diseases such as stenosis, arteriovenous malformations, and intracranial aneurysms. In addition to the geometry and connectivity shown by standard 2D-DSA, angiographic parametric imaging helps clinicians assess the functional properties of these lesions. Angiographic parametric imaging is a method that uses digital subtraction angiography to describe the dynamic properties of contrast agent throughout the vasculature; here, intracranial aneurysms in DSA are identified and segmented by convolutional neural networks (CNN), and aneurysm imaging features are extracted with minimal error. This can be useful for understanding the natural history of the disease, predicting disease progression, or planning an intervention.
2D-DSA is considered the gold standard for diagnosing aneurysms. However, for patients who are screened for subarachnoid hemorrhage, the physician needs to obtain anteroposterior two-dimensional DSA and lateral views of the four intracranial arteries. If an intracranial aneurysm is found on an artery, the physician should immediately perform a three-dimensional DSA (3D-DSA) reconstruction of the entire artery. The reconstructed three-dimensional DSA provides more information to help select a treatment. However, the three-dimensional DSA needs to be compared with the two-dimensional DSA to avoid errors caused by differences in patient image quality.
An embodiment of the present invention provides a DSA image recognition method, as shown in fig. 1 and 2, including:
step S110, receiving 2D-DSA image data, wherein the 2D-DSA image data comprises one or more 2D-DSA images;
s120, performing first-stage region identification on the 2D-DSA image based on a CNN model, positioning a specific region in the 2D-DSA image, and performing second-stage aneurysm identification on the 2D-DSA image based on the CNN model after positioning the specific region to obtain identification result data;
and S130, outputting the identification result data, wherein the identification result data comprises an aneurysm label for marking the 2D-DSA image.
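Steps S110–S130 can be sketched as a minimal pipeline. The function names and the fixed return values of the two stage stubs below are hypothetical stand-ins for the trained first-stage and second-stage CNN models, not the patented implementation.

```python
# Minimal sketch of the recognition flow in steps S110-S130. The stage
# functions are hypothetical stubs standing in for the trained CNN models.

def locate_region(image):
    # First stage (region identification): return a bounding box
    # (x, y, w, h) for the specific region. Placeholder prediction.
    return (100, 120, 64, 64)

def identify_aneurysm(region_pair):
    # Second stage (aneurysm identification): return a label and a
    # confidence for the cropped region pair. Placeholder prediction.
    return "aneurysm", 0.87

def recognize(dsa_image_pairs):
    results = []
    for frontal, lateral in dsa_image_pairs:             # S110: receive data
        box_f = locate_region(frontal)                   # S120: stage one
        box_l = locate_region(lateral)
        label, conf = identify_aneurysm((box_f, box_l))  # S120: stage two
        results.append({"box": box_f, "label": label, "confidence": conf})
    return results                                       # S130: output labels

out = recognize([("frontal_image", "lateral_image")])
```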
The present embodiment mainly involves two stages. In the first stage, two-dimensional DSA images in DICOM format are uploaded from the DSA device through a server to the aneurysm identification system. Through a distinctive two-stage CNN algorithm model, the aneurysm is automatically identified and measured, and the identification result, including a series of morphological parameters such as aneurysm size and aspect ratio, can be output directly. In the second stage, the two-stage CNN aneurysm detection algorithm outputs a standard 2D-DSA segmentation result that serves as the gold standard for correcting the 3D-DSA analysis. When an aneurysm is found, 3D-DSA data are generated by three-dimensional DSA, a centerline is generated algorithmically, ellipse-based three-dimensional vessel segmentation is performed, and future results are predicted; the 2D/3D image optimization results, integrated with reference to the 2D-DSA standard segmentation, are fed back into the three-dimensional vessel segmentation, and finally the verified 3D-DSA result is output.
2D-DSA image data are acquired for aneurysm identification; DSA sequences average 25 frames, and aneurysm contours are typically about 1 mm to 40 mm. All frames are rescaled to 512 × 512 with a pixel pitch of 0.391 mm, giving a field of view of 200 × 200 mm². Since the frame in which the aneurysm is most conspicuous is unlikely to lie at the beginning or end of the DSA sequence, only the 16 frames around the central frame are used. To reduce background noise and thereby enhance the vessel signal, the frames of the DSA sequence captured when the artery and aorta are fully opacified are averaged into a single frame. The averaged frame is passed through a 3 × 3-pixel two-dimensional median filter to compensate for background effects caused by motion artifacts. An intensity threshold is then applied to separate the background from the vasculature: background pixels take the value 0, pixels in the vasculature class the value 1, and pixels in the aneurysm sac the value 2. Processing each DSA sequence takes approximately 2 minutes.
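The preprocessing chain just described — frame averaging, 3 × 3 median filtering, and three-class intensity thresholding — can be sketched on small 2D lists. The helper names and the two threshold parameters are illustrative assumptions; the text does not specify threshold values.

```python
from statistics import median

def average_frames(frames):
    """Average the selected central frames of a DSA sequence into one
    image (each frame is a 2D list of gray values)."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / len(frames) for x in range(w)]
            for y in range(h)]

def median_filter3(img):
    """3x3 two-dimensional median filter (borders left unchanged) to
    damp background effects from motion artifacts."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = median(img[yy][xx]
                               for yy in (y - 1, y, y + 1)
                               for xx in (x - 1, x, x + 1))
    return out

def label_pixels(img, vessel_t, aneurysm_t):
    """Intensity thresholding into 0 = background, 1 = vasculature,
    2 = aneurysm sac. Both thresholds are assumed tunable parameters."""
    return [[0 if v < vessel_t else (1 if v < aneurysm_t else 2) for v in row]
            for row in img]

avg = average_frames([[[0, 8], [20, 40]], [[2, 12], [20, 40]]])
labels = label_pixels(avg, vessel_t=5, aneurysm_t=30)
```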
The CNN architecture uses a Visual Geometry Group (VGG-16) encoder, two fully convolutional layers (with rectified linear unit activations), and a decoder of transposed convolutional layers that upsamples the feature maps back to the original input image size. The system poses a pixel-wise semantic segmentation problem to this network, with each pixel assigned to one of three classes: background, vessel, or aneurysm. The system runs the model on an NVIDIA GPU. The network uses an adaptive gradient-descent optimizer that adjusts the learning rate over time so that learning can continue even after many epochs. After each training epoch, the loss between the network's prediction mask and the ground-truth mask on the training queue is computed as the sum of the DICE loss (the complement of the DICE similarity coefficient) and the binary cross-entropy, and guides the gradient descent during training. After the network weights are optimized, the model is evaluated on a test queue; network training relies on accurate ground truth for proper weight optimization. Compared with shallower structures, using VGG-16 as the encoder costs computation time, requiring about 24 hours of training for the model weights to converge. Compared with other segmentation architectures such as U-Net, which has 31 million adjustable parameters, the network used here has fewer parameters and can optimize its weights faster and with less data.
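The training loss described above — DICE loss plus binary cross-entropy — can be written out directly. Equal weighting of the two terms is an assumption; the text only says the two are summed.

```python
import math

def dice_loss(pred, target, eps=1e-7):
    """Complement of the DICE similarity coefficient over soft
    predictions in [0, 1]."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1 - (2 * inter + eps) / (sum(pred) + sum(target) + eps)

def bce_loss(pred, target, eps=1e-7):
    """Mean binary cross-entropy."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def combined_loss(pred, target):
    """DICE loss plus binary cross-entropy, as described in the text
    (equal weighting of the two terms is assumed)."""
    return dice_loss(pred, target) + bce_loss(pred, target)
```

A perfect prediction drives both terms toward zero, while an uncertain all-0.5 prediction is penalized by both terms at once.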
To address the technical problems above, the aneurysm identification process automatically detects intracranial aneurysms on 2D-DSA images through a two-stage convolutional neural network (CNN) architecture, the two stages being region identification and aneurysm detection. In the first stage, region identification, the system locates a specific region to reduce interference from other regions. In the second stage, aneurysm detection, the detection algorithm applies a false-positive suppression algorithm that combines information from the frontal and lateral angiograms to identify intracranial aneurysms. In the aneurysm identification system, a CNN-based two-stage detection network was developed to automatically detect intracranial aneurysms on 2D-DSA images, as shown in fig. 3. The region identification stage locates the aneurysm region in the 2D-DSA image, and the subsequent aneurysm detection stage identifies the intracranial aneurysm from the output of region identification.
In one embodiment, the CNN model-based first-stage region identification of the 2D-DSA image, and locating a specific region in the 2D-DSA image comprises:
the framework of the CNN model is constructed based on a feature pyramid network, and the backbone is a residual neural network;
the 2D-DSA image includes a frontal image and a lateral image, where F_i^j denotes the j-th image of the i-th patient's frontal image sequence and L_i^j denotes the j-th image of the i-th patient's lateral image sequence, F_i^j and L_i^j corresponding to each other;
and inputting the frontal image and the lateral image into the feature pyramid network respectively, determining a target region in the frontal image, and determining the corresponding region in the lateral image.
The two-stage CNN framework is based on a feature pyramid network, with a residual neural network (ResNet50) as its backbone. The 2D-DSA device simultaneously generates two matching sequences for each patient, a frontal view and a lateral view of the head. The system therefore introduces F_i^j to denote the j-th image of the i-th patient's frontal sequence and L_i^j to denote the corresponding lateral image. The system uses F_i^j as the target image to be detected, and L_i^j is its corresponding image at the same position in the other angiographic view. In the region identification stage, the system takes the target image and its corresponding image as input, then applies the region identification model to obtain the corresponding region images. In the aneurysm detection stage, the two target region images are concatenated as a dual input and fed to the feature pyramid network to detect intracranial aneurysms. Finally, the system employs a region-average gray suppression algorithm to suppress false positives.
The high resolution of the original 2D-DSA image introduces additional image interference and analysis time, so the influence of extraneous tissue structures on image identification must be reduced. The system automatically locates a specific region; the network architecture of the region identification stage is shown in fig. 4. In this figure, "Conv f×f, c, /s" indicates a convolutional layer with filter size f, channel count c, and stride s (default stride 1). Note that each Convolutional Layer (Conv Layer) is followed by a Batch Normalization Layer (BN Layer) and a Rectified Linear Unit (ReLU) activation layer. "Pool f×f, /s" denotes a max pooling layer with filter size f and stride s. "Up-sample, /r" denotes nearest-neighbor upsampling with an upsampling rate of r. "Anchor" is the anchor box used to predict the aneurysm region. The first part receives the three-channel image input; the subsequent feature extraction module is a feature pyramid network whose backbone is a ResNet50 network. Finally, the anchor box outputs the aneurysm region of the original input image.
The features of the input image are extracted by the feature pyramid network, which can adapt to various input resolutions and extract multi-scale features. ResNet50 serves as the backbone of the feature pyramid network, where each convolutional layer is followed by a batch normalization layer (BN layer) and a linear rectification function (ReLU) activation layer. The algorithm then sends the extracted features to the anchor box to predict the aneurysm region of the original input image. Specifically, the detector predicts a total of 6k parameters, where the features are extracted by a feature pyramid network that is sensitive to objects of various scales, k represents the number of objects, and 6 corresponds to one classification label, four bounding-box values (x, y, w, h: abscissa, ordinate, width, and height) and one classification confidence. According to this network architecture, other aneurysm locations can easily be added by supplementing labeled data. Finally, the model outputs an image of the aneurysm region according to the resulting bounding box, and the system resizes the images to a uniform 288 × 288 size so that the aneurysm identification detector can easily be applied to different internal carotid artery regions.
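For illustration only, the following Python sketch decodes a hypothetical (k, 6) detector output — one row per object holding (label, x, y, w, h, confidence), an assumed layout rather than the patent's exact tensor format — and keeps detections above a confidence threshold:

```python
import numpy as np

def decode_predictions(pred, conf_threshold=0.6):
    """Decode a (k, 6) detector output into kept bounding boxes.

    Each row is assumed to be (class_label, x, y, w, h, confidence):
    x, y are the box centre, w, h are the width and height.
    """
    pred = np.asarray(pred, dtype=float)
    kept = []
    for label, x, y, w, h, conf in pred:
        if conf >= conf_threshold:  # keep confident detections only
            kept.append({"label": int(label),
                         "box": (x, y, w, h),
                         "confidence": conf})
    return kept

# Two candidate regions; only the first passes the 0.6 threshold.
preds = [[1, 144.0, 150.0, 32.0, 30.0, 0.91],
         [1, 60.0, 70.0, 12.0, 12.0, 0.35]]
print(decode_predictions(preds))
```

The 0.6 cut-off mirrors the output step described later for the aneurysm detection stage.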
In one embodiment, performing the second-stage aneurysm identification on the 2D-DSA image based on the CNN model after locating the specific region to obtain identification data comprises:

acquiring the target region in the frontal image and the corresponding region in the lateral image;

connecting the target region and the corresponding region to a feature pyramid network in a dual-input mode to obtain identification result data;

the identification result data includes an aneurysm label for the image pair, the aneurysm label marking the confidence that a location in the images is an aneurysm.
The second stage is the aneurysm identification stage, which distinguishes intracranial aneurysms from vessel overlaps. The overall architecture of aneurysm identification is shown in fig. 5. The system architecture comprises four steps: dual input, feature extraction, result output, and region-averaged gray scale suppression. The dual input concatenates the target aneurysm region image and its corresponding aneurysm region image as the input variables. Features are extracted by a feature pyramid network that is sensitive to objects of various scales. The anchor box then outputs the detector results. As in the first-stage region identification, the detector predicts 6k parameters in total, and the output step retains objects with confidence above 0.6. Finally, since vessel overlap affects the detection result, the region-averaged gray scale suppression algorithm is applied to objects predicted as aneurysms whose confidence is lower than a preset threshold.
In the aneurysm identification algorithm of fig. 5, "t ×" in parentheses indicates that the layers are repeated t times. The dual-input layer concatenates the target aneurysm region image and the corresponding aneurysm region image, after which a feature pyramid network is applied to aneurysms of various scales. The anchor box then outputs the prediction results, followed by the region-averaged gray scale suppression algorithm to suppress false positive cases.
The design of the dual input mainly addresses information loss and image-overlap recognition. Clinically, even experienced neurosurgeons have difficulty distinguishing intracranial aneurysms from regions of vessel overlap. Given frontal and lateral DSA images, the system can correctly infer whether a region is an aneurysm or a vessel overlap from the vessel trends. The basic input framework of the identification algorithm combines the DSA frontal- and lateral-view information, concatenating the target aneurysm image and the corresponding aneurysm image output from the region identification stage as the input variable of the aneurysm identification stage. In the concatenation, the system places the target image first, followed by the corresponding image. The neural network can therefore use the dual-input information to partially compensate, during feature extraction, for the information lost in the 2D-DSA modality.
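As a minimal illustration of the dual-input concatenation described above (the channel layout is an assumption; the patent does not specify the tensor arrangement):

```python
import numpy as np

def make_dual_input(target_region, corresponding_region):
    """Concatenate the target region image (placed first) with its
    corresponding region from the other angiographic view along the
    channel axis, forming one dual-input tensor for the detector."""
    t = np.asarray(target_region, dtype=np.float32)
    c = np.asarray(corresponding_region, dtype=np.float32)
    if t.shape != c.shape:
        raise ValueError("region images must share the same size")
    # shape (H, W, 2): channel 0 = target view, channel 1 = corresponding view
    return np.stack([t, c], axis=-1)

front = np.zeros((288, 288), dtype=np.float32)  # target (frontal) region
side = np.ones((288, 288), dtype=np.float32)    # corresponding (lateral) region
dual = make_dual_input(front, side)
print(dual.shape)  # (288, 288, 2)
```

The 288 × 288 size matches the uniform region size mentioned for the first stage.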
The dual-input model improves accuracy and stability to some extent, but vessel overlap is still easily mistaken for an aneurysm. Therefore, the system must suppress such false positives algorithmically. According to the DSA principle, the gray level of an image is determined by the density of the radiographic contrast agent: the gray level of a pixel in the 2D-DSA image depends on the density of the radiocontrast agent in the blood of the 3D space to which the pixel corresponds. A region where blood vessels overlap corresponds to more blood than the surrounding region, which results in darker gray levels. Large aneurysms generally show higher intensity because they contain a large amount of contrast agent, and they are easily found by computer-aided identification methods with a confidence close to 1.0. In contrast, small aneurysms are typically smaller than the diameter of the peripheral vessels, so their gray level may be lighter.
In one embodiment, after obtaining the recognition result data, performing region-averaged gray scale suppression on the recognition result data comprises:

comparing the confidence of the aneurysm with a standard threshold, and applying the region-averaged gray scale suppression algorithm if the confidence of the aneurysm is below the standard;

roughly extracting a vessel model of the original target aneurysm region image by an adaptive gray-threshold selection method;

logically computing the vessels of the target aneurysm region image based on the vessel model and the original aneurysm region image;

wherein the aneurysm gray-value sum is G_aneurysm and the number of aneurysm pixels is P_aneurysm, so the mean gray value of the aneurysm is:

AG_aneurysm = G_aneurysm / P_aneurysm

and the mean gray value of the peri-aneurysmal vessels is:

AG_periphery = (G_expanded − G_aneurysm) / (P_expanded − P_aneurysm)

where G_expanded and P_expanded are the gray-value sum and pixel count of the expanded region. If AG_aneurysm is less than AG_periphery, the region image is an overlapped vessel; if AG_aneurysm is greater than AG_periphery, the region image is an aneurysm.
The region-averaged gray scale suppression algorithm evaluates output results whose aneurysm classification label carries below-standard confidence. The system compares the confidence of the predicted aneurysm with a standard threshold to decide whether to retain the aneurysm label; if the confidence is below the standard, the region-averaged gray scale suppression algorithm is applied. First, a vessel model of the original target aneurysm region image is roughly extracted using an adaptive gray-threshold selection method. The vessels of the target aneurysm region image are then logically computed using the vessel model and the original aneurysm region image. For the defined region, the system denotes the aneurysm gray-value sum as G_aneurysm and the number of aneurysm pixels as P_aneurysm. The mean gray value of the aneurysm is:

AG_aneurysm = G_aneurysm / P_aneurysm

Meanwhile, the mean gray value of the peripheral vessels is:

AG_periphery = (G_expanded − G_aneurysm) / (P_expanded − P_aneurysm)

Finally, AG_aneurysm is compared with AG_periphery. If AG_aneurysm is smaller than AG_periphery, the candidate is darker in intensity than the peripheral vessels, and the object is identified as an overlapping vessel; otherwise, the system retains the object as an aneurysm.
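A minimal sketch of the comparison above, assuming the aneurysm and expanded regions are given as boolean masks over the gray-level image (mask representation and function names are illustrative, not from the patent):

```python
import numpy as np

def keep_as_aneurysm(image, aneurysm_mask, expanded_mask):
    """Return True if the candidate is retained as an aneurysm,
    False if it is classified as overlapping vessels.

    AG_aneurysm  = sum(gray in aneurysm) / #pixels(aneurysm)
    AG_periphery = (sum(gray in expanded) - sum(gray in aneurysm))
                   / (#pixels(expanded) - #pixels(aneurysm))
    """
    img = np.asarray(image, dtype=float)
    g_aneurysm = img[aneurysm_mask].sum()
    p_aneurysm = aneurysm_mask.sum()
    g_expanded = img[expanded_mask].sum()
    p_expanded = expanded_mask.sum()
    ag_aneurysm = g_aneurysm / p_aneurysm
    ag_periphery = (g_expanded - g_aneurysm) / (p_expanded - p_aneurysm)
    # greater mean gray than periphery -> aneurysm; smaller -> vessel overlap
    return bool(ag_aneurysm > ag_periphery)

image = np.zeros((8, 8))
aneurysm = np.zeros((8, 8), dtype=bool)
aneurysm[3:5, 3:5] = True
expanded = np.zeros((8, 8), dtype=bool)
expanded[2:6, 2:6] = True
image[aneurysm] = 200.0                  # candidate brighter than periphery
image[expanded & ~aneurysm] = 50.0
print(keep_as_aneurysm(image, aneurysm, expanded))  # True
```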
The loss formula for object classification is defined as follows:

FL(p_t) = −α (1 − p_t)^γ log(p_t)

where α is a balance parameter and γ controls the rate at which the weight of easy examples is decreased. In addition, p_t is defined as:

p_t = p if y = 1, and p_t = 1 − p otherwise.
where p represents the prediction confidence, and y ∈ {±1} is the ground-truth class. For bounding-box regression, the system employs the loss formula:

SL(t, v) = Σ_{i ∈ {x, y, w, h}} smooth_L1(t_i − v_i)

wherein:

smooth_L1(x) = 0.5 σ² x² if |x| < 1/σ², and smooth_L1(x) = |x| − 0.5/σ² otherwise,

where t is the bounding box of the predicted object, v represents the ground-truth bounding box, and σ is a trade-off factor. The first indicator in the region identification stage is accuracy, defined as the number of test images in which the network finds the aneurysm region. Since aneurysm identification focuses on the image intersection, slight movement or size change of the bounding box has little influence on the actual detection. Therefore, the performance of region identification is evaluated by a normalized offset relative to the manual annotation, defined as follows:
f_x = |x_annotation − x_prediction| / W

f_y = |y_annotation − y_prediction| / H

f_x represents the normalized offset on the X axis, where x_annotation is the X coordinate of the manually annotated centroid and x_prediction is the X coordinate of the centroid predicted by the region identification algorithm; f_y is defined likewise for the Y axis. W and H represent the width and height of the bounding box, respectively.
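Under the standard focal-loss and smooth-L1 conventions, the two losses above can be sketched as follows (the exact placement of σ in the patent's smooth-L1 is not preserved, so the Faster R-CNN convention is assumed here):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """FL(p_t) = -alpha * (1 - p_t)**gamma * log(p_t)."""
    p_t = p if y == 1 else 1.0 - p
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

def smooth_l1(x, sigma=1.0):
    """Smooth L1 with trade-off factor sigma (assumed convention)."""
    if abs(x) < 1.0 / sigma ** 2:
        return 0.5 * sigma ** 2 * x * x
    return abs(x) - 0.5 / sigma ** 2

def box_regression_loss(t, v, sigma=1.0):
    """SL(t, v) = sum over (x, y, w, h) of smooth_l1(t_i - v_i)."""
    return sum(smooth_l1(t[k] - v[k], sigma) for k in ("x", "y", "w", "h"))

pred = {"x": 1.2, "y": 0.9, "w": 2.0, "h": 2.0}
truth = {"x": 1.0, "y": 1.0, "w": 2.0, "h": 2.5}
print(round(focal_loss(0.9, 1), 6))          # confident correct -> tiny loss
print(round(box_regression_loss(pred, truth), 4))
```

Note how the (1 − p_t)^γ factor down-weights well-classified examples, which is the point of the focal loss for the rare-aneurysm class imbalance.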
In addition, a five-fold cross-validation scheme is employed in the aneurysm identification algorithm. To compare performance, five dual-input and single-input models were trained on the same dataset. The aneurysm identification system uses the average accuracy to assess the classification accuracy between aneurysms and vessel overlap. Bounding-box regression was evaluated analytically by the Smooth L1 loss. Moreover, the clinical goal of the aneurysm identification system is to give physicians an accurate indication of whether a patient has an aneurysm. To avoid missed diagnoses, a strict assessment criterion is defined as follows:
Diag_i = ⋁_{j,V} Pred(I^V_{i,j}), V ∈ {F, L}

where Diag_i indicates whether the i-th patient has an aneurysm, and Pred(I^V_{i,j}) is true when the j-th image of the i-th patient is predicted to contain an aneurysm from view V (F or L). In other words, the formula indicates that the patient is classified as having an aneurysm even if only one frontal or lateral image with an aneurysm is detected in the entire DSA sequence. In addition, a confusion matrix and receiver operating characteristics are employed in the evaluation. The system counts True Positives (TP), False Positives (FP), True Negatives (TN) and False Negatives (FN) to calculate sensitivity, specificity and accuracy, defined as follows:

Sensitivity = TP / (TP + FN)

Specificity = TN / (TN + FP)

Accuracy = (TP + TN) / (TP + TN + FP + FN)
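The strict patient-level criterion (a logical OR over all frames and views) and the three metrics can be sketched as follows (function names are illustrative):

```python
def patient_diagnosis(image_predictions):
    """Diag_i is true if ANY frontal or lateral image of the patient
    is predicted to contain an aneurysm (logical OR over the sequence)."""
    return any(image_predictions)

def metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# One positive frame in the whole sequence is enough to flag the patient.
print(patient_diagnosis([False, False, True, False]))  # True
print(metrics(tp=45, fp=5, tn=40, fn=10))
```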
In one embodiment, the CNN model is trained by steps comprising:

acquiring 2D-DSA image data of patients, the 2D-DSA image data including one or more 2D-DSA images;

removing images whose quality is below a preset value from the 2D-DSA images to obtain 2D-DSA images satisfying the preset value;

and dividing the 2D-DSA images satisfying the preset value into a training dataset and a test dataset according to a preset ratio.
The CNN algorithm model requires a large dataset to optimize the weights of each network layer. The system uses a data augmentation scheme written in Python to increase the size of the training dataset. The training and test sets are extended using a combination of rotation and scaling. Each frame of the DSA sequence is rotated by 90°, 180°, and 270° and enlarged to 200% of its original size. This augmentation expands the training set to 1500 examples. Enlarging the image data region during augmentation also produces new true-negative examples, improving the generalization ability of the CNN network in the absence of aneurysms.
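A minimal sketch of the rotation-and-scaling augmentation, assuming nearest-neighbour enlargement (the patent does not state the interpolation used):

```python
import numpy as np

def augment_frame(frame):
    """Yield rotated (90, 180, 270 degrees) and 2x-enlarged variants
    of one DSA frame, mirroring the rotation + scaling scheme above."""
    variants = [np.rot90(frame, k) for k in (1, 2, 3)]  # 90, 180, 270
    # nearest-neighbour 2x enlargement (each pixel repeated twice per axis)
    enlarged = np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1)
    variants.append(enlarged)
    return variants

frame = np.arange(16, dtype=np.float32).reshape(4, 4)
augmented = augment_frame(frame)
print([a.shape for a in augmented])  # [(4, 4), (4, 4), (4, 4), (8, 8)]
```

Applied to every frame, this yields the 4× growth in examples that the text describes.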
In an embodiment, after dividing the 2D-DSA images satisfying the preset value into a training dataset and a test dataset according to a preset ratio, the method further includes:

forming a DSA image sequence from the plurality of 2D-DSA images;

and locally normalizing the DSA images so that their local mean is 0 and their standard deviation (SD) is 1.
A large amount of two-dimensional DSA image-sequence data is used to train the two-stage CNN aneurysm identification algorithm. The whole training process is divided into two steps, as shown in fig. 6. The first step is image quality control, removing DSA image sequences with poor image quality; the second step divides the data into a training dataset and a test dataset. Before being input into the CNN network, the pixel intensities of the DSA sequences are locally normalized to a local mean of 0 and an SD of 1. Next, the proposed network is trained and validated on the patients and DSA sequences, and its accuracy is tested on the test patients and DSA sequences. During training, the input frames and the corresponding ground-truth masks are cropped to 256 × 256. During testing on the test set, the system inputs the original 512 × 512 frames and the network operates in fully convolutional mode.
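The local normalization step (per-frame mean 0, SD 1) can be sketched as follows; treating "local" as per-frame is an assumption, since the patent does not define the neighbourhood:

```python
import numpy as np

def local_normalize(frames):
    """Normalize each DSA frame to mean 0 and standard deviation 1,
    the per-frame ("local") normalization described above."""
    frames = np.asarray(frames, dtype=np.float64)
    out = np.empty_like(frames)
    for i, f in enumerate(frames):
        out[i] = (f - f.mean()) / f.std()
    return out

seq = np.random.default_rng(0).uniform(0, 255, size=(3, 8, 8))
norm = local_normalize(seq)
print(np.allclose(norm.mean(axis=(1, 2)), 0.0))  # True
print(np.allclose(norm.std(axis=(1, 2)), 1.0))   # True
```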
An embodiment of the present invention further provides a DSA image recognition apparatus, as shown in fig. 7, including:
the 2D-DSA image receiving module is used for receiving 2D-DSA image data, and the 2D-DSA image data comprises one or more 2D-DSA images;
the identification module is used for carrying out first-stage region identification on the 2D-DSA image based on a CNN model, positioning a specific region in the 2D-DSA image, and carrying out second-stage aneurysm identification on the 2D-DSA image based on the CNN model after positioning the specific region to obtain identification result data;
and the output module is used for outputting the identification result data, wherein the identification result data comprises an aneurysm label for marking the 2D-DSA image.
In one embodiment, the identification module comprises a stage identification unit configured to perform the steps of:
the framework of the CNN model is constructed on a feature pyramid network, and the backbone is a residual neural network;

the 2D-DSA image includes a frontal image and a lateral image, where I^F_{i,j} denotes the j-th image of the i-th patient's frontal image sequence, I^L_{i,j} denotes the j-th image of the i-th patient's lateral image sequence, and I^F_{i,j} and I^L_{i,j} correspond to each other;

and respectively inputting the frontal image and the lateral image into the feature pyramid network, determining a target region in the frontal image, and determining the region corresponding to the target region in the lateral image.
In one embodiment, the identification module comprises a second-stage identification unit configured to perform the steps of:

acquiring the target region in the frontal image and the corresponding region in the lateral image;

connecting the target region and the corresponding region to a feature pyramid network in a dual-input mode to obtain identification result data;

the identification result data includes an aneurysm label for the image pair, the aneurysm label marking the confidence that a location in the images is an aneurysm.
Wherein the identification result data comprises 2D-DSA gold-standard segmentation result data. After the recognition result data are output, the physician needs to acquire more information through 3D-DSA. In modern neuroradiology, 3D rotational angiography (3D-DSA) is a technique that can both visualize complex cerebrovascular lesions and guide interventional therapy. Increasing amounts of 3D-DSA data are also used for further quantitative analysis to support treatment planning and procedures for patients with cerebrovascular disease. Such analysis requires a reliable three-dimensional vessel segmentation method that delineates vessel boundaries as accurately as possible, in order to derive accurate vessel measurements such as vessel diameter, vessel bifurcation angle, and aneurysm dome size; the segmentation results and their corresponding mesh representations are the basis for more reliable analysis. Small variations in the segmentation result may lead to completely different flow patterns or wall shear stress distributions, so vessel segmentation results are all the more important for post-processing applications in a clinical setting.

Accurate vessel segmentation based on 3D angiography is challenging because it depends on the quality of the 3D-DSA images, which varies from patient to patient. Many factors affect image quality, including the amount of contrast agent injected, the injection timing, the hemodynamic mixing of the contrast agent with the blood flow, cardiac output, and so on, and the reconstruction parameters may also vary from patient to patient. The edge gradient between background and vessel intensity varies with the applied reconstruction. These contours give different segmentation methods the opportunity to interpret the edge gradients in different ways, which may shift the position of the vessel boundary.
With different injections of contrast agent, the contrast agent may seep out, blurring local vessel boundaries. Even if all acquisition parameters are exactly the same, the final 3D-DSA image volumes will differ slightly due to the non-uniform hemodynamic mixing of the contrast agent with the blood. Verification of 3D-DSA vessel segmentation results is therefore quite difficult, and 2D-DSA imaging is still considered the gold standard for quantitative evaluation and measurement.
The present invention further provides a DSA image recognition method, as shown in fig. 8, including:
step S210, receiving 3D-DSA image data, wherein the 3D-DSA image data comprises one or more 3D-DSA images;
s220, acquiring a central point of the 3D-DSA image, and generating a central line of the 3D-DSA image based on the central point;
s230, segmenting the 3D-DSA image based on the central line through an ellipsoid blood vessel model to obtain a segmented 3D-DSA image;
and S240, receiving 2D-DSA gold standard segmentation result data, and verifying the 3D-DSA image based on the 2D-DSA gold standard segmentation result data to obtain verified 3D-DSA image data.
In order to calibrate the 3D-DSA image, 3D-DSA image information is collected, a centerline is then generated, and an ellipsoid vessel model is used for three-dimensional segmentation. To evaluate the accuracy of the three-dimensional vessel segmentation, its forward projections are iteratively superimposed on the corresponding two-dimensional DSA projections. Local vessel differences are modeled by a global 2D/3D optimization function that adjusts the 3D vessel segmentation result to correspond to the 2D vessel contours. Verifying any three-dimensional vessel segmentation method is a challenge; to solve this problem, the invention uses a novel three-dimensional vessel segmentation method based on additional two-dimensional vessel information. The 2D-DSA acquisition serves as the verification basis for the three-dimensional segmentation and also as the driving force for adapting the initial three-dimensional segmentation. Each 3D-DSA dataset is automatically accompanied by 133 2D-DSA projections showing the patient's anatomy from different viewing angles. In addition, the projections are registered and calibrated with the 3D-DSA dataset. The selected 2D-DSA images have the advantage that the vessel boundaries are clearly visible, rather than blurred as often happens in 3D-DSA images. The innovation of the invention is an automatic segmentation algorithm that formalizes a new global 2D/3D optimization function smoothly combining 2D vessel information with the 3D image.
In one embodiment, acquiring a center point of the 3D-DSA image and generating a centerline of the 3D-DSA image based on the center point comprises:

selecting the bottommost slice of the 3D-DSA image and applying a circular Hough transform to the bottommost slice to locate the vessel with the largest diameter, the center of the largest circle being the center point.
After the three-dimensional DSA image data are input into the system, the main purpose of the system's 3D-DSA segmentation method is to detect and segment large and medium vessels in the 3D-DSA dataset and provide input for the subsequent 2D-DSA information adaptation. 3D-DSA is based on direct intra-arterial injection of contrast agent. Due to the high contrast and the subtraction, it can be assumed a priori that the vessels occupy the highest intensity bins of the histogram. The 3D-DSA dataset thus consists of three types of intensity levels: foreground (vessels), artifact, and background. Because of the a priori known acquisition settings, the anatomy of the cerebrovascular tree, and the patient's position on the C-arm system table, the vessels of the 3D-DSA dataset enter the image volume from the bottom; the contrast agent therefore flows from bottom to top. To automatically find the location of the largest vessel entering the volume, a circular Hough transform is applied to the bottommost slice of the 3D-DSA image to locate the vessel with the largest diameter, and this position is then used as the seed point for centerline computation. Fig. 9 shows an example of the vessel centerline computation result. This seed-point initialization is limited to a selected specific volume for the secondary reconstruction. After centerline calculation, each vessel branch is associated with a centerline. All centerline sections are stored in a tree, with the centerline of the vessel lumen entering the volume serving as the root. This structure readily reveals the parent-child relationships between different vessel branches. The vessel centerline tree is used as the initialization for the ellipsoid-based vessel segmentation.
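As a hedged, brute-force stand-in for the circular Hough step: on a thresholded bottom slice, the sketch below scans candidate centers and radii and returns the largest disc fully covered by vessel foreground (a toy search adequate for small images, not the patent's Hough implementation):

```python
import numpy as np

def largest_vessel_circle(binary_slice, max_radius=20):
    """Find (row, col, radius) of the largest foreground disc in a
    binary slice -- a brute-force substitute for a circular Hough
    transform, used here to pick the centerline seed point."""
    h, w = binary_slice.shape
    yy, xx = np.mgrid[0:h, 0:w]
    best = (0, 0, 0)  # (radius, row, col)
    for r in range(1, max_radius + 1):
        for cy in range(r, h - r):
            for cx in range(r, w - r):
                disc = (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r
                if binary_slice[disc].all() and r > best[0]:
                    best = (r, cy, cx)
    radius, row, col = best
    return row, col, radius

slice_ = np.zeros((32, 32), dtype=bool)
yy, xx = np.mgrid[0:32, 0:32]
slice_[(yy - 10) ** 2 + (xx - 10) ** 2 <= 36] = True  # radius-6 vessel
slice_[(yy - 24) ** 2 + (xx - 24) ** 2 <= 9] = True   # radius-3 vessel
print(largest_vessel_circle(slice_))  # (10, 10, 6)
```

A production system would use a proper Hough accumulator; the seed-point idea (center of the largest vessel cross-section on the bottom slice) is the same.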
In one embodiment, segmenting the 3D-DSA image based on the centerline with the ellipsoid vessel model to obtain a segmented 3D-DSA image includes:

constructing an ellipsoid vessel model from the unit sphere, represented by the following formula:

f(x) = x_1² + x_2² + x_3² − 1 = 0

where x ∈ R^3 represents a point on the sphere surface;

setting 9 transformation parameters, comprising 3 rotation parameters, 3 scaling parameters and 3 translation parameters, the transformation function being expressed by the following formula:

T(m, x) = R(r) S(s) x + t

where the rotation parameters r ∈ R^3 define the 3 × 3 rotation matrix R(r), the scaling parameters s ∈ R^3 define the 3 × 3 diagonal matrix S(s), and t ∈ R^3 represents the translation vector;

collecting all transformation parameters in the vector m = (r, s, t) ∈ R^9, the shape model can be rewritten as:

f_T(m, x) := f(T(m, x)) = f(R(r) S(s) x + t)

and the space is divided into three distinct regions:

S(m) = {x ∈ O : f_T(m, x) = 0}, I(m) = {x ∈ O : f_T(m, x) < 0}, B(m) = {x ∈ O : f_T(m, x) > 0}

where S(m) and I(m) represent the surface and interior of the ellipsoid, respectively, B(m) the exterior, and O represents the image domain.
A superellipsoid, ellipsoid or sphere is a geometry well suited to describing a local vessel segment. The unit sphere centered at the coordinate origin is defined as follows:

f(x) = x_1² + x_2² + x_3² − 1 = 0

where x ∈ R^3 represents a point on the sphere. To approximate a local vessel segment, a coordinate transformation must be applied to allow arbitrary rotation, scaling and translation; that is, nine transformation parameters must be estimated (three rotations, three scalings, three translations). The entire transformation function is expressed as:

T(m, x) = R(r) S(s) x + t

where the rotation parameters r ∈ R^3 define the 3 × 3 rotation matrix R(r), the scaling parameters s ∈ R^3 define the 3 × 3 diagonal matrix S(s), and t ∈ R^3 represents the translation vector. All transformation parameters are collected in the vector m = (r, s, t) ∈ R^9. The intrinsic shape model can therefore be rewritten as:

f_T(m, x) := f(T(m, x)) = f(R(r) S(s) x + t)

The space is divided into three distinct regions:

S(m) = {x ∈ O : f_T(m, x) = 0}, I(m) = {x ∈ O : f_T(m, x) < 0}, B(m) = {x ∈ O : f_T(m, x) > 0}

where S(m) and I(m) represent the surface and interior of the ellipsoid, respectively, B(m) the exterior, and O represents the image domain. These parameters are used as the extrinsic parameter terms in the global objective function so that the parameters are estimated correctly. The previously calculated centerline is used to initialize the parameters, i.e. the center of each ellipsoid is placed on the centerline, and the centerline is rotated so that its tangent vector coincides with the local Z axis of the ellipsoid's internal coordinate system. The curvature of the centerline determines the number of ellipsoids needed to approximate a vessel branch: in centerline sections of high curvature, the number of ellipsoids increases while the Z-direction scale of the ellipsoids simultaneously decreases.
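The transform T(m, x) = R(r)S(s)x + t and the surface/interior/exterior classification can be sketched as below. For simplicity only a single Z-axis rotation is used, and world points are classified by inverting T and evaluating the unit-sphere function — an equivalent reading of f_T, and an assumption about how the implicit test is applied:

```python
import numpy as np

def rotation_z(theta):
    """3x3 rotation about the Z axis (one of the three rotations)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def transform(m, x):
    """T(m, x) = R(r) S(s) x + t for a single Z rotation angle."""
    theta, scale, t = m
    return rotation_z(theta) @ (np.diag(scale) @ x) + t

def classify(m, x, tol=1e-9):
    """Classify a world point against the transformed unit sphere
    by inverting T and evaluating f(x) = |x|^2 - 1."""
    theta, scale, t = m
    local = np.diag(1.0 / np.asarray(scale)) @ (rotation_z(theta).T @ (x - t))
    f = float(local @ local) - 1.0
    if abs(f) < tol:
        return "surface"
    return "interior" if f < 0 else "exterior"

# axis-aligned ellipsoid: radii (2, 1, 1), centred at (5, 0, 0)
m = (0.0, np.array([2.0, 1.0, 1.0]), np.array([5.0, 0.0, 0.0]))
print(classify(m, np.array([5.0, 0.0, 0.0])))                 # interior
print(classify(m, transform(m, np.array([1.0, 0.0, 0.0]))))   # surface
print(classify(m, np.array([9.0, 0.0, 0.0])))                 # exterior
```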
In one embodiment, the segmenting the 3D-DSA image based on the centerline by the ellipsoid vessel model to obtain a segmented 3D-DSA image includes:
rotating the central line to make the tangent vector of the central line coincide with the local Z axis of the internal coordinate system of the ellipsoid;
the ellipsoid vessel-model parameter vector k = (m_0, m_1, …, m_{M−1}) ∈ R^(9×M) represents all ellipsoid parameters of the whole model, where M is the total number of ellipsoids used;

dividing each ellipsoid's local image domain into foreground and background intensities to obtain a specific local intensity threshold, the thresholds being stored in the vector h = (h_0, h_1, …, h_{M−1})^T ∈ R^M.

The ellipsoid scale perpendicular to the vessel direction is initialized with the cross-sectional radius intensity profile. The parameter vector k = (m_0, m_1, …, m_{M−1}) ∈ R^(9×M) represents all ellipsoid parameters of the whole model, with M the total number of ellipsoids used. For each ellipsoid, the local image domain is divided into foreground and background intensities, and a specific local intensity threshold is estimated; these thresholds are stored in the vector h = (h_0, h_1, …, h_{M−1})^T ∈ R^M.
In one embodiment, the initialized ellipsoid tube model is optimized so that the model evolves toward the vessel boundary according to a predefined energy; the total energy G_tot(k, h) consists of two terms:

G_tot(k, h) = G_ext(k, h) + α · G_int(k), G_tot : R^(9×M) × R^M → R

where G_ext(k, h) and G_int(k) represent the external energy term and the internal energy term, respectively, and α defines a weighting factor.
In one embodiment, the intrinsic properties of the internal energy term relate to the ellipsoids; the internal energy term consists of three cubic B-spline curves to ensure smoothness of rotation, scaling and translation between adjacent ellipsoids, expressed by the following formula:

G_int(k) = Σ_i ∫_0^{L_i} ( |B_i(l, k)′|² + |B_i(l, k)″|² ) dl

where B_i(l, k)′ and B_i(l, k)″ represent the first and second derivatives of B_i, L_i is the total length of the B-spline curve, and l is the position on the spline curve.
The initialized ellipsoid tube model is now optimized so that the model evolves toward the vessel boundary according to the predefined energy. The total energy consists of two terms:

G_tot(k, h) = G_ext(k, h) + α · G_int(k), G_tot : R^(9×M) × R^M → R

where G_ext(k, h) and G_int(k) represent the external energy term and the internal energy term, respectively, and α defines a weighting factor.

The intrinsic properties of the internal energy term relate to the ellipsoids: the parameters of two subsequent ellipsoids must vary in a smooth manner. To this end, the internal energy term consists of three cubic B-spline curves ensuring smoothness of rotation, scaling and translation between adjacent ellipsoids:

G_int(k) = Σ_i ∫_0^{L_i} ( |B_i(l, k)′|² + |B_i(l, k)″|² ) dl

where B_i(l, k)′ and B_i(l, k)″ represent the first and second derivatives of B_i, L_i is the total length of the B-spline curve, and l is the position on the spline curve.
In one embodiment, the external energy term is responsible for driving the model toward the surrounding vascular structure; the ellipsoid vessel shape model segments the image domain into foreground and background voxels, obtained by counting the foreground and background voxel intensities in the sets S(k_i) and I(k_i), where the external energy term is:

G_ext(k, h) = −Σ_{i=0}^{M−1} ( |{x ∈ S(k_i) ∪ I(k_i) : v(x) ≥ h_i}| − |{x ∈ S(k_i) ∪ I(k_i) : v(x) < h_i}| )

where v(x) is the voxel intensity at x; the initialization and local density of the ellipsoid parameters ensure that the number of foreground voxels is always larger than the number of background voxels.
The external energy term is responsible for driving the model toward the surrounding vascular structure. The system's ellipsoid vessel shape model attempts to segment the image domain as well as possible into foreground and background voxels, obtained by counting the foreground and background voxel intensities in the sets S(k_i) and I(k_i):

G_ext(k, h) = −Σ_{i=0}^{M−1} ( |{x ∈ S(k_i) ∪ I(k_i) : v(x) ≥ h_i}| − |{x ∈ S(k_i) ∪ I(k_i) : v(x) < h_i}| )

The initialization and local density of the ellipsoid parameters ensure that the number of foreground voxels is always larger than the number of background voxels. The local intensity threshold is estimated within a box centered on the current ellipsoid's center, between 32 × 32 and 64 × 64 depending on the scale of the current vessel. The objective is a minimization such that the ellipsoid parameters match the local vessel structure. Such local foreground/background estimation ensures that the external energy adapts to vessel structures even when they exhibit high intensity variations across the dataset. The optimal ellipsoid parameters are obtained by minimizing the total energy G_tot(k, h) together with the local intensity thresholds. Since the model is initialized from the pre-computed centerline and cross-sectional radius profile estimates, the initial parameter set lies close to the global optimum.
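A toy rendering of the foreground/background voxel count: for each ellipsoid, interior voxels above and below its local threshold h_i are counted, and minimizing the negative difference favours ellipsoids filled with vessel intensities. The counting form itself is an assumption — the patent's exact expression is not preserved in the source:

```python
import numpy as np

def external_energy(volume, interior_masks, thresholds):
    """Negative (foreground - background) voxel count summed over all
    ellipsoids: minimizing it favours ellipsoids whose interiors are
    filled with above-threshold (vessel) intensities."""
    total = 0.0
    for mask, h in zip(interior_masks, thresholds):
        vals = volume[mask]
        foreground = int((vals >= h).sum())
        background = int((vals < h).sum())
        total -= foreground - background
    return total

vol = np.zeros((4, 4, 4))
vol[1:3, 1:3, 1:3] = 100.0           # bright vessel block
good = np.zeros_like(vol, dtype=bool)
good[1:3, 1:3, 1:3] = True           # ellipsoid covering the vessel
bad = np.zeros_like(vol, dtype=bool)
bad[0:2, 0:2, 0:2] = True            # ellipsoid half off the vessel
print(external_energy(vol, [good], [50.0]))  # -8.0
print(external_energy(vol, [bad], [50.0]))   # 6.0
```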
In one embodiment, receiving the 2D-DSA gold standard segmentation result data and verifying the 3D-DSA image based on the 2D-DSA gold standard segmentation result data to obtain verified 3D-DSA image data includes:
measuring the difference between the 2D-DSA gold standard segmentation result data and the 3D-DSA segmented image by a two-dimensional/three-dimensional external energy function Gext2D/3D(k), wherein the formula of the two-dimensional/three-dimensional external energy function Gext2D/3D(k) is:
Figure BDA0002942712170000221
in the formula, Q represents the number of projection images used to adjust the three-dimensional segmentation result, w(i, u) represents the intensity value of the two-dimensional ground truth of the projection image at the position u = (x1, x2) ∈ R², and the intensity value of the forward projection v(i, u, k) is obtained by maximum intensity projection;
the superposition between the forward projection of the 3D-DSA segmented image and the gold standard of the two-dimensional DSA image is evaluated through two indexes to obtain an evaluation result, wherein the two indexes are the Dice coefficient and the accuracy rate, and the Dice coefficient is:
Dice = 2|X ∩ Y| / (|X| + |Y|)
wherein X represents all pixels of the gold standard segmentation and Y represents all pixels of the forward projection;
the definition formula of the accuracy is as follows:
Accuracy = TP / (TP + FP)
wherein TP is the number of true positive cases and FP is the number of false positive cases.
The appearance of 3D-DSA volume data depends on four influencing factors: the amount of contrast agent used, the injection time, the hemodynamic mixing of the contrast agent, and the applied reconstruction kernel. The 3D-DSA segmentation results are compared against the two-dimensional DSA segmentation results as reference. By selecting the 2D-DSA image analysis results in the system, the three-dimensional segmentation result is verified by superposition matching; the comparison is realized by forward-projecting the three-dimensional vessel segmentation using ray casting. The local difference between the two-dimensional vessel information and the forward projection is formalized as a new objective function that operates on the three-dimensional parametric segmentation. This yields a new two-dimensional/three-dimensional external energy function Gext2D/3D(k) that measures the difference between the two-dimensional and three-dimensional segmentations. It is defined as follows:
Figure BDA0002942712170000231
in the formula, Q denotes the number of projection images used to adjust the three-dimensional segmentation result, and w(i, u) denotes the intensity value of the two-dimensional ground truth of the projection image at the position u = (x1, x2) ∈ R². The intensity values of the forward projection v(i, u, k) are given by maximum intensity projection. The new 2D/3D external energy term forms a new total energy, in which the internal energy term ensures that the three-dimensional ellipsoid model does not deform toward non-conforming vessels. The three-dimensional vessel segmentation method is verified through the system's 2D/3D image optimization, and the verified 3D-DSA image result is finally output.
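The forward-projection comparison can be sketched as follows. A maximum-intensity projection along a volume axis stands in for the ray-casting forward projection v(i, u, k), and an absolute-difference sum over Q views stands in for the exact Gext2D/3D(k) expression, which appears only as an equation image in the source.

```python
import numpy as np

def mip_forward_projection(seg3d, axis=0):
    """Maximum-intensity projection of a 3-D segmentation along one axis,
    standing in for the ray-casting forward projection v(i, u, k)."""
    return seg3d.max(axis=axis)

def external_energy_2d3d(seg3d, ground_truths, axes):
    """Illustrative Gext2D/3D(k): accumulated per-pixel difference between
    the 2-D ground truth w(i, u) and the forward projection v(i, u, k)
    over Q projection views. An absolute-difference sum is used here as an
    assumed stand-in for the undisclosed formula."""
    energy = 0.0
    for gt, axis in zip(ground_truths, axes):  # Q projection images
        proj = mip_forward_projection(seg3d, axis=axis)
        energy += np.abs(gt.astype(float) - proj.astype(float)).sum()
    return energy
```

A three-dimensional segmentation whose projections exactly match the two-dimensional ground truth yields zero energy; any mismatch increases it, which is what the optimizer minimizes during verification.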
The superposition between the forward projection of the 3D-DSA segmentation result and the gold standard of the two-dimensional DSA image is evaluated through two indexes to obtain an evaluation result. These two metrics are the Dice coefficient and the accuracy rate. The Dice coefficient is defined as:
Dice = 2|X ∩ Y| / (|X| + |Y|)
where X represents all pixels of the gold standard segmentation and Y represents all pixels of the forward projection.
The definition formula of the accuracy is as follows:
Accuracy = TP / (TP + FP)
wherein TP is the number of true positive cases and FP is the number of false positive cases.
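The two evaluation indexes follow directly from the definitions in the text; a minimal sketch on binary pixel masks:

```python
import numpy as np

def dice_coefficient(gold, pred):
    """Dice = 2|X ∩ Y| / (|X| + |Y|), where X is the gold-standard mask
    and Y is the forward-projection mask."""
    gold = gold.astype(bool)
    pred = pred.astype(bool)
    inter = np.count_nonzero(gold & pred)
    return 2.0 * inter / (np.count_nonzero(gold) + np.count_nonzero(pred))

def accuracy_rate(gold, pred):
    """Accuracy rate as defined in the text: TP / (TP + FP), with TP the
    true-positive and FP the false-positive pixel counts."""
    gold = gold.astype(bool)
    pred = pred.astype(bool)
    tp = np.count_nonzero(gold & pred)
    fp = np.count_nonzero(~gold & pred)
    return tp / (tp + fp)
```

Identical masks score 1.0 on both metrics; partial overlap lowers both, with Dice penalizing missed gold-standard pixels as well as false positives.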
An embodiment of the present invention further provides a DSA image recognition apparatus, as shown in fig. 10, including:
the 3D-DSA image receiving module is used for receiving 3D-DSA image data, and the 3D-DSA image data comprises one or more 3D-DSA images;
the center line obtaining module is used for obtaining a center point of the 3D-DSA image and generating a center line of the 3D-DSA image based on the center point;
the segmentation module is used for segmenting the 3D-DSA image based on the central line through an ellipsoid blood vessel model to obtain a segmented 3D-DSA image;
and the checking module is used for receiving the 2D-DSA gold standard segmentation result data, checking the 3D-DSA image based on the 2D-DSA gold standard segmentation result data, and obtaining the checked 3D-DSA image data.
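The four modules of the apparatus can be wired together as a simple pipeline; the callables below are hypothetical stand-ins for the receiving, center-line, segmentation, and checking modules shown in Fig. 10, not the disclosed implementation.

```python
class DSARecognitionDevice:
    """Minimal sketch of the four-module DSA image recognition apparatus.

    Each module is injected as a callable so the data flow between the
    modules is explicit: receive -> centerline -> segment -> verify.
    """
    def __init__(self, receive, centerline, segment, verify):
        self.receive = receive        # 3D-DSA image receiving module
        self.centerline = centerline  # center-line obtaining module
        self.segment = segment        # segmentation module
        self.verify = verify          # checking (verification) module

    def run(self, raw_3d_dsa, gold_2d_dsa):
        vol = self.receive(raw_3d_dsa)
        line = self.centerline(vol)
        seg = self.segment(vol, line)
        return self.verify(seg, gold_2d_dsa)
```

Keeping the modules as injected callables mirrors the module decomposition in the apparatus claim and makes each stage testable in isolation.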
The readable storage medium may be a computer storage medium or a communication medium. Communication media includes any medium that facilitates transfer of a computer program from one place to another. Computer storage media may be any available media that can be accessed by a general-purpose or special-purpose computer. For example, a readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC). Additionally, the ASIC may reside in user equipment. Of course, the processor and the readable storage medium may also reside as discrete components in a communication device. The readable storage medium may be a read-only memory (ROM), a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The present invention also provides a program product comprising execution instructions stored in a readable storage medium. The at least one processor of the device may read the execution instructions from the readable storage medium, and the execution of the execution instructions by the at least one processor causes the device to implement the methods provided by the various embodiments described above.
In the above embodiments of the terminal or the server, it should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of hardware and software modules within the processor.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A DSA image recognition method is characterized by comprising the following steps:
receiving 2D-DSA image data, wherein the 2D-DSA image data comprises one or more 2D-DSA images;
performing first-stage region identification on the 2D-DSA image based on a CNN model, positioning a specific region in the 2D-DSA image, and performing second-stage aneurysm identification on the 2D-DSA image based on the CNN model after positioning the specific region to obtain identification result data;
outputting the identification result data, wherein the identification result data comprises an aneurysm label labeling the 2D-DSA image.
2. The DSA image recognition method according to claim 1,
the CNN-based model performs a first-stage region identification on the 2D-DSA image, and the positioning of a specific region in the 2D-DSA image comprises the following steps:
the framework of the CNN model is constructed based on a feature pyramid network, and the backbone is a residual neural network;
the 2D-DSA image includes a frontal image and a lateral image, denoted by the symbols in equation images FDA0002942712160000011 to FDA0002942712160000014 (not reproduced here): one symbol represents the j-th image of the i-th patient's frontal image sequence, another represents the j-th image of the i-th patient's lateral image sequence, and the two correspond to each other;
and respectively inputting the frontal image and the lateral image into the feature pyramid network, determining a target region in the frontal image, and determining the region in the lateral image corresponding to the target region.
3. The DSA image recognition method according to claim 2,
the second stage of aneurysm identification on the 2D-DSA image based on a CNN model after the specific region is located to obtain identification data comprises:
acquiring a target area in the front image and a corresponding area in the side image;
connecting the target area and the corresponding area to a feature pyramid network in a double input mode to obtain identification result data;
the identification result data includes an aneurysm label for the pair of images, the aneurysm label for labeling a confidence that a location in the images is an aneurysm.
4. The DSA image recognition method according to claim 3,
after obtaining the identification result data, performing area average gray suppression processing on the identification result data, including:
comparing the confidence coefficient of the aneurysm with a standard threshold, and if the confidence coefficient of the aneurysm is lower than the standard threshold, applying a regional average gray suppression algorithm;
roughly extracting a blood vessel model from the original target aneurysm region image by an adaptive gray-threshold selection method;
logically calculating a blood vessel of the target aneurysm region image based on the blood vessel model and the original aneurysm region image;
wherein the gray-value sum of the aneurysm is G_aneurysm and its pixel count is P_aneurysm, so the average gray value of the aneurysm is:
AG_aneurysm = G_aneurysm / P_aneurysm
The average gray value of the peri-aneurysmal vessels is:
AG_periphery = (G_expanded - G_aneurysm) / (P_expanded - P_aneurysm)
If AG_aneurysm is less than AG_periphery, the region image is an overlapped blood vessel; if AG_aneurysm is greater than AG_periphery, the region image is an aneurysm.
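The suppression rule of claim 4 amounts to comparing two average gray values. A minimal sketch follows, where the aneurysm mask and the expanded-area mask are assumed to be given (the claim obtains them by adaptive gray-threshold extraction and region expansion):

```python
import numpy as np

def classify_low_confidence_region(region, aneurysm_mask, expanded_mask):
    """Region-average-gray suppression for a low-confidence detection.

    region        : 2-D gray image of the candidate aneurysm region.
    aneurysm_mask : boolean mask of the candidate aneurysm pixels.
    expanded_mask : boolean mask of the expanded area around it.
    Returns "aneurysm" or "overlapped vessel" per the claim's rule.
    """
    g_aneurysm = float(region[aneurysm_mask].sum())
    p_aneurysm = int(np.count_nonzero(aneurysm_mask))
    g_expanded = float(region[expanded_mask].sum())
    p_expanded = int(np.count_nonzero(expanded_mask))

    ag_aneurysm = g_aneurysm / p_aneurysm
    ag_periphery = (g_expanded - g_aneurysm) / (p_expanded - p_aneurysm)

    # AG_aneurysm > AG_periphery -> true aneurysm; otherwise the bright
    # candidate is attributed to overlapping vessels.
    return "aneurysm" if ag_aneurysm > ag_periphery else "overlapped vessel"
```

A candidate brighter on average than its surrounding vessels is kept as an aneurysm; one no brighter than its periphery is suppressed as an overlapped vessel.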
5. The DSA image recognition method according to claim 1,
the CNN model is trained by the following steps, including:
acquiring 2D-DSA image data of a patient, the 2D-DSA image data including one or more 2D-DSA images;
removing images lower than a preset value in the 2D-DSA images to obtain 2D-DSA images meeting the preset value;
and dividing the 2D-DSA image meeting the preset value into a training data set and a testing data set based on a preset proportion.
6. The DSA image recognition method of claim 5,
after the 2D-DSA image meeting the preset value is divided into a training data set and a testing data set based on a preset proportion, the method further comprises the following steps:
the plurality of 2D-DSA images form a DSA image sequence;
and performing local normalization processing on the DSA images so that the local average value of the DSA images is 0 and the standard deviation (SD) is 1.
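The local normalization of claim 6 can be sketched as a per-image standardization of the sequence:

```python
import numpy as np

def local_normalize(images):
    """Normalize each 2D-DSA image in the sequence to mean 0 and
    standard deviation 1, as required before feeding it to the CNN."""
    images = np.asarray(images, dtype=float)
    out = []
    for img in images:
        out.append((img - img.mean()) / img.std())
    return np.stack(out)
```

Normalizing each image independently (rather than the whole sequence at once) keeps the statistics local, which is what "local normalization" refers to here.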
7. A DSA image recognition apparatus, comprising:
the 2D-DSA image receiving module is used for receiving 2D-DSA image data, and the 2D-DSA image data comprises one or more 2D-DSA images;
the identification module is used for carrying out first-stage region identification on the 2D-DSA image based on a CNN model, positioning a specific region in the 2D-DSA image, and carrying out second-stage aneurysm identification on the 2D-DSA image based on the CNN model after positioning the specific region to obtain identification result data;
and the output module is used for outputting the identification result data, wherein the identification result data comprises an aneurysm label for marking the 2D-DSA image.
8. The DSA image recognition apparatus according to claim 7,
the identification module comprises a first-stage identification unit, the first-stage identification unit being used for executing the following steps:
the framework of the CNN model is constructed based on a feature pyramid network, and the backbone is a residual neural network;
the 2D-DSA image includes a frontal image and a lateral image, denoted by the symbols in equation images FDA0002942712160000031 to FDA0002942712160000034 (not reproduced here): one symbol represents the j-th image of the i-th patient's frontal image sequence, another represents the j-th image of the i-th patient's lateral image sequence, and the two correspond to each other;
and respectively inputting the frontal image and the lateral image into the feature pyramid network, determining a target region in the frontal image, and determining the region in the lateral image corresponding to the target region.
9. The DSA image recognition apparatus according to claim 8,
the identification module comprises a second-stage identification unit for performing the following steps:
acquiring a target area in the front image and a corresponding area in the side image;
connecting the target area and the corresponding area to a feature pyramid network in a double input mode to obtain identification result data;
the identification result data includes an aneurysm label for the pair of images, the aneurysm label for labeling a confidence that a location in the images is an aneurysm.
10. A readable storage medium, in which a computer program is stored which, when being executed by a processor, is adapted to carry out the method of any one of claims 1 to 6.
CN202110183361.4A 2021-02-10 2021-02-10 DSA image recognition method, device and storage medium Pending CN112784928A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110183361.4A CN112784928A (en) 2021-02-10 2021-02-10 DSA image recognition method, device and storage medium


Publications (1)

Publication Number Publication Date
CN112784928A true CN112784928A (en) 2021-05-11

Family

ID=75761460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110183361.4A Pending CN112784928A (en) 2021-02-10 2021-02-10 DSA image recognition method, device and storage medium

Country Status (1)

Country Link
CN (1) CN112784928A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114638799A (en) * 2022-03-11 2022-06-17 深圳市澈影医生集团有限公司 Automatic detection system and method for intracranial aneurysm
CN114638799B (en) * 2022-03-11 2023-03-21 深圳市澈影医生集团有限公司 Automatic detection system for intracranial aneurysm
CN117635616A (en) * 2024-01-26 2024-03-01 江西科技学院 Image diagnosis system for mutual recognition of medical examination results


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination