CN116862877A - Scanning image analysis system and method based on convolutional neural network - Google Patents
- Publication number
- CN116862877A (application CN202310849947.9A)
- Authority
- CN
- China
- Prior art keywords
- feature
- expansion
- vectors
- coronary artery
- classification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Abstract
The application relates to the field of image analysis, and in particular discloses a scanning image analysis system and method based on a convolutional neural network.
Description
Technical Field
The present application relates to the field of image analysis, and more particularly to a scanned image analysis system based on a convolutional neural network, and a corresponding method.
Background
The chest pain "triple rule-out" (TRO) covers the three diseases and complications that most commonly underlie acute chest pain attacks: acute coronary syndrome (ACS), pulmonary embolism (PE), and aortic dissection (AD).
Acute chest pain is among the most common presentations in emergency departments and cardiovascular medicine; its onset is sudden, it can be immediately life-threatening, and its clinical manifestations mainly include chest pain, dyspnea, and hemoptysis. Acute chest pain is, however, a heterogeneous group of diseases whose principal manifestation is chest pain: pain caused by different underlying conditions is similar in presentation yet differs in location, character, and severity, and the accompanying symptoms also vary. A timely diagnosis is therefore difficult to establish from clinical symptoms alone, and laboratory and ultrasound examinations take too long to support early diagnosis of TRO. Moreover, a conventional CT examination can cover only a single CTA study and cannot assess all three diseases at once, so one or two of ACS, PE, and AD are easily overlooked, making misdiagnosis or missed diagnosis likely.
Therefore, an optimized scan image analysis system based on a convolutional neural network is desired to assist doctors in diagnosing the chest pain triple rule-out, improving their working efficiency and reducing the misdiagnosis rate.
Disclosure of Invention
The present application was made to address the technical problems described above. Embodiments of the application provide a scanning image analysis system, and a corresponding method, based on a convolutional neural network. A deep-learning neural network model mines the implicit features of the coronary arteries in a coronary axial CTA scan image in order to detect whether the coronary CTA scan result is normal, thereby assisting doctors in diagnosing the chest pain triple rule-out, improving their working efficiency, reducing the misdiagnosis rate, and facilitating early treatment of affected patients.
According to one aspect of the present application, there is provided a scanned image analysis system based on a convolutional neural network, comprising:
an image acquisition module for acquiring a coronary artery axial CTA scan image;
an image preprocessing module for preprocessing the coronary artery axial CTA scan image to obtain a preprocessed coronary artery axial CTA scan image;
a coronary artery axial image feature extraction module for passing the preprocessed coronary artery axial CTA scan image through a convolutional neural network model comprising a depth feature fusion module to obtain a coronary artery axial CTA scan feature map;
a matrix expansion module for expanding each feature matrix of the coronary artery axial CTA scan feature map along the channel dimension to obtain a plurality of local feature expansion feature vectors;
a global semantic association module for passing the plurality of local feature expansion feature vectors through a transformer-based context encoder to obtain a plurality of context local feature expansion feature vectors;
a feature optimization module for performing feature optimization on the plurality of context local feature expansion feature vectors to obtain a plurality of optimized context local feature expansion feature vectors;
a cascade fusion module for cascading the plurality of optimized context local feature expansion feature vectors to obtain a classification feature vector; and
a scan result detection module for passing the classification feature vector through a classifier to obtain a classification result, the classification result indicating whether the coronary artery axial CTA scan result is normal.
According to another aspect of the present application, there is provided a scanned image analysis method based on a convolutional neural network, including:
acquiring a coronary artery axial CTA scan image;
performing image preprocessing on the coronary artery axial CTA scan image to obtain a preprocessed coronary artery axial CTA scan image;
passing the preprocessed coronary artery axial CTA scan image through a convolutional neural network model comprising a depth feature fusion module to obtain a coronary artery axial CTA scan feature map;
expanding each feature matrix of the coronary artery axial CTA scan feature map along the channel dimension to obtain a plurality of local feature expansion feature vectors;
passing the plurality of local feature expansion feature vectors through a transformer-based context encoder to obtain a plurality of context local feature expansion feature vectors;
performing feature optimization on the plurality of context local feature expansion feature vectors to obtain a plurality of optimized context local feature expansion feature vectors;
cascading the plurality of optimized context local feature expansion feature vectors to obtain a classification feature vector; and
passing the classification feature vector through a classifier to obtain a classification result, the classification result indicating whether the coronary artery axial CTA scan result is normal.
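The claimed pipeline (acquire, preprocess, CNN feature extraction, matrix expansion, context encoding, feature optimization, cascade fusion, classification) can be sketched end-to-end. The sketch below is a minimal, hypothetical NumPy mock of the data flow only: the CNN, transformer encoder, and classifier are random stand-ins, and all function names and shapes are illustrative assumptions rather than the patent's actual implementation.

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Normalize a coronary axial CTA slice to [0, 1] (placeholder for the
    full preprocessing chain of grayscale, denoising, and enhancement)."""
    img = image.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def extract_feature_map(image: np.ndarray, channels: int = 8) -> np.ndarray:
    """Stand-in for the CNN with depth-feature fusion: returns a (C, H/4, W/4)
    feature map. A real model would be a trained network."""
    h, w = image.shape[0] // 4, image.shape[1] // 4
    rng = np.random.default_rng(0)
    return rng.standard_normal((channels, h, w))

def expand_feature_matrices(fmap: np.ndarray) -> list:
    """Matrix expansion along the channel dimension: flatten each per-channel
    feature matrix into a local feature expansion vector."""
    return [fmap[c].reshape(-1) for c in range(fmap.shape[0])]

def classify(vector: np.ndarray) -> int:
    """Stand-in classifier: 0 = normal scan, 1 = abnormal scan."""
    return int(vector.mean() < 0)

image = np.zeros((64, 64))
image[20:40, 20:40] = 500.0                  # toy CTA slice
fmap = extract_feature_map(preprocess(image))
vectors = expand_feature_matrices(fmap)      # plurality of local feature vectors
clf_vector = np.concatenate(vectors)         # cascade (concatenation) fusion
result = classify(clf_vector)                # classification result
```

The transformer-based context encoding and the feature optimization steps are omitted from this mock; they would sit between `expand_feature_matrices` and the concatenation.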
Compared with the prior art, the scanning image analysis system and method based on a convolutional neural network provided by the application use a deep-learning neural network model to mine the implicit features of the coronary arteries in the coronary CTA scan image, thereby detecting whether the coronary CTA scan result is normal. This assists doctors in diagnosing the chest pain triple rule-out, improves their working efficiency, reduces the misdiagnosis rate, and facilitates early treatment of affected patients.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The above and other objects, features, and advantages of the present application will become more apparent from the following detailed description of its embodiments with reference to the accompanying drawings. The drawings are included to provide a further understanding of the embodiments, are incorporated in and constitute a part of this specification, and serve to illustrate the application together with its embodiments without limiting it. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a block diagram of a scanned image analysis system based on a convolutional neural network in accordance with an embodiment of the present application;
FIG. 2 is a system architecture diagram of a convolutional neural network-based scanned image analysis system in accordance with an embodiment of the present application;
FIG. 3 is a flow chart of convolutional neural network encoding in a convolutional neural network-based scanned image analysis system in accordance with an embodiment of the present application;
FIG. 4 is a block diagram of a global semantic association module in a convolutional neural network based scanned image analysis system in accordance with an embodiment of the present application;
FIG. 5 is a flow chart of a method of analysis of scanned images based on convolutional neural networks in accordance with an embodiment of the present application;
Fig. 6 is a schematic view of a scanned image analysis system based on a convolutional neural network according to an embodiment of the present application.
Detailed Description
Hereinafter, exemplary embodiments of the present application are described in detail with reference to the accompanying drawings. The described embodiments are clearly only some, not all, of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
FIG. 1 is a block diagram of a scanned image analysis system based on a convolutional neural network in accordance with an embodiment of the present application; FIG. 2 is a system architecture diagram of the same system. As shown in FIG. 1 and FIG. 2, a scanned image analysis system 300 based on a convolutional neural network according to an embodiment of the present application includes: an image acquisition module 310 for acquiring a coronary artery axial CTA scan image; an image preprocessing module 320 for preprocessing the coronary artery axial CTA scan image to obtain a preprocessed coronary artery axial CTA scan image; a coronary artery axial image feature extraction module 330 for passing the preprocessed image through a convolutional neural network model comprising a depth feature fusion module to obtain a coronary artery axial CTA scan feature map; a matrix expansion module 340 for expanding each feature matrix of the scan feature map along the channel dimension to obtain a plurality of local feature expansion feature vectors; a global semantic association module 350 for passing these vectors through a transformer-based context encoder to obtain a plurality of context local feature expansion feature vectors; a feature optimization module 360 for performing feature optimization on these vectors to obtain a plurality of optimized context local feature expansion feature vectors; a cascade fusion module 370 for cascading the optimized vectors to obtain a classification feature vector; and a scan result detection module 380 for passing the classification feature vector through a classifier to obtain a classification result indicating whether the coronary artery axial CTA scan result is normal.
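The core operation of a transformer-style context encoder such as the one in the global semantic association module 350 is scaled dot-product self-attention over the stack of local feature expansion vectors. The following is a minimal single-head NumPy sketch; the random projection weights, vector count, and dimensions are illustrative assumptions, not the patent's actual encoder (which would have trained weights, multiple heads, and feed-forward layers).

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_context(vectors: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product self-attention over an (N, d) stack of
    local feature expansion vectors, yielding N context-aware vectors in which
    each output mixes information from all N inputs."""
    n, d = vectors.shape
    rng = np.random.default_rng(42)
    w_q, w_k, w_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = vectors @ w_q, vectors @ w_k, vectors @ w_v
    attn = softmax(q @ k.T / np.sqrt(d), axis=-1)  # (N, N) attention weights
    return attn @ v                                # context-encoded vectors

# 6 hypothetical local feature expansion vectors of dimension 32
local_vecs = np.random.default_rng(0).standard_normal((6, 32))
context_vecs = self_attention_context(local_vecs)  # same shape, globally contextualized
```

Each row of the attention matrix sums to 1, so every output vector is a convex combination of the value projections of all local vectors — this is what lets the encoder capture global semantic associations between local regions of the feature map.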
Specifically, during operation of the convolutional neural network-based scan image analysis system 300, the image acquisition module 310 acquires a coronary artery axial CTA scan image. When such a system is actually used to assist doctors in diagnosing the chest pain triple rule-out, fully capturing the implicit feature information of the coronary arteries is particularly critical: if early coronary problems such as acute coronary syndrome are not found and treated in time, the patient may suffer acute myocardial infarction. Therefore, in the technical scheme of the application, the implicit characterization of the coronary arteries is performed on the basis of analysis and feature capture of the coronary artery axial CTA scan image, so as to detect and evaluate the coronary artery axial CTA scan result.
CTA (computed tomography angiography) is a medical imaging technique that combines computed tomography (CT) scanning with contrast agent injection to generate high-resolution vessel images. Compared with conventional X-ray angiography, CTA is non-invasive, rapid, accurate, and comprehensive. CTA images can be used to detect vascular diseases such as atherosclerosis, thrombosis, and aneurysms, and to assess the condition of the heart and pulmonary vessels; clinically, they are often used in the diagnosis and treatment planning of heart disease, cerebrovascular disease, and pulmonary vascular disease. To acquire a CTA image, the patient is injected with a contrast agent and then scanned; the scan produces multi-level slice images that a computer can reconstruct into a three-dimensional vessel image. CTA images therefore offer high resolution, high contrast, and high sensitivity, helping doctors diagnose and treat diseases more accurately.
The coronary axial view is a medical imaging examination used to examine lesions of the coronary arteries of the heart. The images are obtained by computed tomography (CT) and display cross-sections of the coronary arteries, helping the doctor determine the position and degree of a lesion. This examination is commonly used to assess coronary stenosis or blockage and to evaluate the effectiveness of coronary stenting.
Specifically, during the operation of the scan image analysis system 300, the image preprocessing module 320 performs image preprocessing on the coronary artery axial CTA scan image to obtain a preprocessed coronary artery axial CTA scan image. In practice, the quality of a coronary artery axial CTA scan image is often degraded by factors such as patient movement, artifacts, and noise. Therefore, to improve the expressiveness of the subsequently extracted implicit coronary features, the technical scheme of the application first applies image preprocessing, which includes, but is not limited to, grayscale conversion, noise reduction, and image enhancement.
Specifically, grayscale conversion turns the coronary axial CTA scan into a single-channel image, which facilitates the subsequent extraction of structural features and makes the image analysis more convenient and accurate. Noise reduction effectively removes noise that would affect image quality and accuracy, ensuring the accuracy of subsequent processing and analysis while improving the clarity and recognizability of the image. Image enhancement increases contrast and removes interfering information, making subtle coronary lesions clearer and further improving the accuracy and reliability of the image.
Accordingly, in one possible implementation, the image preprocessing of the coronary artery axial CTA scan image proceeds as follows. First, the raw data of the scan is acquired, typically DICOM image data generated by a computed tomography (CT) device. Because CT scan images are generally noisy, the image is then denoised, typically by median or Gaussian filtering. Because a coronary axial CTA scan is a three-dimensional volume, it is cropped to a two-dimensional image for subsequent processing, usually by extracting a particular slice of the volume. Next, the image is enhanced to improve its quality and contrast, typically by histogram equalization or contrast stretching. Because the image contains multiple tissues and structures, it is then segmented to extract the coronary region of interest, usually by threshold segmentation or region growing. Finally, the preprocessed two-dimensional images can be reconstructed in three dimensions, typically by stacking multiple slices, so that the doctor can comprehensively assess the coronary arteries. These steps are usually performed in sequence to obtain a high-quality preprocessed coronary artery axial CTA scan image.
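Two of the preprocessing steps named above, median-filter denoising and histogram equalization, can be illustrated directly. The following is a minimal NumPy sketch on a toy 32×32 slice with illustrative intensity values; a real implementation would operate on DICOM pixel data and would typically use a library such as scipy.ndimage or OpenCV instead of these hand-rolled versions.

```python
import numpy as np

def median_denoise_3x3(img: np.ndarray) -> np.ndarray:
    """3x3 median filter; edge pixels are handled by reflective padding.
    Removes isolated noise spikes while preserving edges."""
    p = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    windows = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)

def equalize_histogram(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """Histogram equalization for an image already scaled to [0, levels-1]:
    maps intensities through the cumulative distribution to spread contrast."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    cdf = hist.cumsum() / img.size
    return np.interp(img.ravel(), np.arange(levels), cdf * (levels - 1)).reshape(img.shape)

# Toy slice: a low-contrast square plus one salt-noise spike
slice_ = np.full((32, 32), 60.0)
slice_[8:24, 8:24] = 90.0
slice_[5, 5] = 255.0          # noise spike
den = median_denoise_3x3(slice_)   # spike removed: den[5, 5] is back to 60
eq = equalize_histogram(den)       # contrast spread over the full range
```

The median filter suppresses the isolated 255 spike because its 3×3 neighbourhood is dominated by the background value, which is exactly the behaviour wanted for salt-and-pepper CT noise.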
Specifically, during the operation of the convolutional neural network-based scan image analysis system 300, the coronary artery axial image feature extraction module 330 passes the preprocessed coronary artery axial CTA scan image through a convolutional neural network model comprising a depth feature fusion module to obtain a coronary artery axial CTA scan feature map. That is, feature mining of the preprocessed image is performed using a convolutional neural network model, which excels at extracting implicit image features. In particular, to detect pathological features of the coronary arteries more accurately, shallow features such as the shape, contour, and texture of the coronary arteries should be emphasized, since they are important for evaluating the scan result. However, as a convolutional neural network deepens, shallow features become blurred and may even be buried in noise. Therefore, in the technical scheme of the application, the preprocessed image is processed by a convolutional neural network model comprising a depth feature fusion module. Compared with a standard convolutional neural network, this model retains both the shallow and the deep features of the coronary artery axial image, so that the feature information is not only richer but also spans different depths, improving the accuracy of detection and evaluation of the coronary artery axial CTA scan result.
Accordingly, in one possible implementation manner, as shown in fig. 3, passing the preprocessed coronary artery axial CTA scan image through a convolutional neural network model including a depth feature fusion module to obtain a coronary artery axial CTA scan feature map includes: S210, extracting a shallow feature map from a shallow layer of the convolutional neural network model; S220, extracting a deep feature map from a deep layer of the convolutional neural network model; and S230, fusing the shallow feature map and the deep feature map to obtain the coronary artery axial CTA scan feature map; wherein the ratio between the deep layer and the shallow layer is greater than or equal to 5 and less than or equal to 10.
In particular, the convolutional neural network model including the depth feature fusion module is a deep learning model for tasks such as image classification and target detection. The model is mainly characterized in that a depth feature fusion module is added on the basis of a convolutional neural network to fuse the shallow and deep features extracted by the convolutional neural network, so as to improve the classification performance and robustness of the model. The depth feature fusion module generally comprises two parts: feature extraction and feature fusion. In the feature extraction part, the model performs convolution operations on the input image using convolution kernels of different sizes and extracts multi-layer feature maps. In the feature fusion part, the model fuses the feature maps of different layers to improve the classification performance and robustness of the model. Common depth feature fusion methods include concatenation, element-wise addition, element-wise multiplication, and the like. In practical applications, the specific implementation of the depth feature fusion module can be adjusted and optimized according to the task requirements and the characteristics of the data set.
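A minimal NumPy sketch of the fusion idea described above, assuming 3x3 kernels, ReLU activations, and channel-wise concatenation as the fusion operator (the application also names element-wise addition and multiplication as alternatives). The one-shallow-layer / five-deep-layer split used in the test is a hypothetical choice consistent with the stated depth ratio of 5 to 10:

```python
import numpy as np

def conv2d(x, kernels):
    """Naive 'same' 3x3 convolution: x is (C_in, H, W), kernels (C_out, C_in, 3, 3)."""
    padded = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    win = np.lib.stride_tricks.sliding_window_view(padded, (3, 3), axis=(1, 2))
    return np.einsum("cijxy,ocxy->oij", win, kernels)  # -> (C_out, H, W)

def fuse_shallow_deep(x, shallow_kernel, deep_kernels):
    """Keep the shallow feature map from an early layer, push it through
    further layers to obtain a deep feature map, and fuse both by channel-wise
    concatenation so that shallow shape/contour cues are not buried."""
    shallow = np.maximum(conv2d(x, shallow_kernel), 0.0)  # shallow features
    deep = shallow
    for k in deep_kernels:                                # deep features
        deep = np.maximum(conv2d(deep, k), 0.0)
    return np.concatenate([shallow, deep], axis=0)        # fused feature map
```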
Specifically, during the operation of the scan image analysis system 300 based on the convolutional neural network, the matrix expansion module 340 is configured to perform feature matrix expansion on each feature matrix of the coronary artery axial CTA scan feature map along the channel dimension to obtain a plurality of local feature expansion feature vectors. Considering that there is an association relationship between the implicit features of the respective local regions in the axial position of the coronary artery, it is desirable to sufficiently capture the associated features of the respective local regions so as to fully express the implicit features of the coronary artery. However, due to the inherent limitations of the convolution operation, pure CNN methods have difficulty learning explicit global and long-range semantic information interactions. Therefore, feature matrix expansion is first performed on each feature matrix of the coronary artery axial CTA scan feature map along the channel dimension to obtain a plurality of local feature expansion feature vectors.
Accordingly, in one possible implementation manner, performing feature matrix expansion on each feature matrix of the coronary artery axial CTA scan feature map along a channel dimension to obtain a plurality of local feature expansion feature vectors, including: firstly, splitting a CTA scanning feature map into a plurality of feature matrixes along a channel dimension; for each feature matrix, sliding the feature matrix according to a certain step length and window size to obtain a plurality of local feature matrices; and, for each local feature matrix, developing it into a feature vector. The unfolding mode can be selected to unfold the matrix row by row or column by column, and other unfolding modes can be used; then, the unfolding feature vectors of all the local feature matrixes are spliced into a large feature vector to be used as a final feature representation; further, subsequent classification, regression or other tasks may be performed on this feature vector. It should be noted that in performing feature matrix expansion, appropriate step sizes and window sizes need to be selected to fully utilize the information in the feature matrix while avoiding overfitting. In addition, different expansion modes may affect the expressive power of the feature vectors, and need to be selected and adjusted according to specific tasks.
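The splitting, sliding, and row-by-row unfolding steps above can be sketched as follows; the window size and stride are illustrative assumptions, and in practice they would be chosen, as the text notes, to use the feature-matrix information fully while avoiding overfitting:

```python
import numpy as np

def expand_feature_map(fmap, window=2, stride=2):
    """Split a (C, H, W) feature map into its C per-channel feature matrices,
    slide a window over each, and unfold every local patch row by row into a
    local feature expansion vector."""
    c, h, w = fmap.shape
    vectors = []
    for mat in fmap:                                  # one (H, W) matrix per channel
        for i in range(0, h - window + 1, stride):
            for j in range(0, w - window + 1, stride):
                patch = mat[i:i + window, j:j + window]
                vectors.append(patch.reshape(-1))     # row-by-row unfolding
    return vectors

def splice(vectors):
    """Splice all local expansion vectors into one large feature vector."""
    return np.concatenate(vectors)
```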
Specifically, during operation of the convolutional neural network-based scanned image analysis system 300, the global semantic association module 350 is configured to pass the plurality of local feature expansion feature vectors through a context encoder based on a converter to obtain a plurality of context local feature expansion feature vectors. That is, the plurality of local feature expansion feature vectors are encoded in a context encoder based on a converter, so as to extract global context semantic association feature information based on each local implicit feature of the coronary artery axis, thereby obtaining a plurality of context local feature expansion feature vectors.
The contextual semantic encoder (Contextual Semantic Encoder) is a technique in natural language processing that converts input text into a vector representation for subsequent processing. Its principle is based on deep learning models such as the recurrent neural network (RNN) or its variant, the long short-term memory (LSTM) network, which encode the input text word by word to obtain a vector representation. In this process, the model takes the context information into account, i.e., the influence of the preceding and following words on the current word, thereby better capturing the semantic information of the text. Specifically, the contextual semantic encoder converts each word in the input text into a vector representation, and then processes these representations through a model such as a recurrent neural network or a long short-term memory network to finally obtain an overall vector representation. This vector representation can be used in text classification, emotion analysis, machine translation, and so on. In general, the contextual semantic encoder is a technique that converts natural language into a vector representation, helping computers better understand and process natural language.
Accordingly, in one possible implementation, as shown in fig. 4, the global semantic association module 350 includes: a query vector construction unit 351, configured to perform one-dimensional arrangement on the plurality of local feature expansion feature vectors to obtain a global feature vector; a self-attention unit 352 configured to calculate a product between the global feature vector and a transpose vector of each of the plurality of local feature expansion feature vectors to obtain a plurality of self-attention correlation matrices; a normalization unit 353, configured to perform normalization processing on each of the plurality of self-attention correlation matrices to obtain a plurality of normalized self-attention correlation matrices; a degree of attention calculation unit 354 configured to obtain a plurality of probability values by using a Softmax classification function for each normalized self-attention correlation matrix in the plurality of normalized self-attention correlation matrices; the attention applying unit 355 is configured to weight each of the local feature expansion feature vectors with each of the probability values as a weight to obtain the context local feature expansion feature vectors.
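Units 351-355 can be read, under one interpretation, as the following NumPy sketch. The patent text leaves the exact matrix product, normalization, and matrix-to-scalar reduction unspecified, so the outer product, zero-mean/unit-variance normalization, and max-then-Softmax reduction used here are all assumptions for illustration:

```python
import numpy as np

def softmax(x):
    """Numerically stable Softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def contextual_encode(local_vecs):
    """Interpretation of units 351-355: (351) arrange the local vectors into
    one global vector; (352) form a correlation matrix per local vector via
    an outer product with the global vector; (353) normalize each matrix;
    (354) reduce each matrix to a scalar and apply Softmax to obtain
    probability values; (355) weight each local vector by its probability."""
    g = np.concatenate(local_vecs)                              # 351
    mats = [np.outer(g, v) for v in local_vecs]                 # 352
    mats = [(m - m.mean()) / (m.std() + 1e-8) for m in mats]    # 353
    scores = softmax(np.array([m.max() for m in mats]))         # 354
    return [p * v for p, v in zip(scores, local_vecs)]          # 355
```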
Accordingly, in another possible implementation, passing the plurality of local feature expansion feature vectors through a context encoder based on a converter to obtain a plurality of context local feature expansion feature vectors includes: first, arranging the plurality of local feature expansion feature vectors in a certain order to form a matrix as an input matrix; the input matrix is then encoded using the converter-based context encoder, which is a neural network model capable of learning context information, commonly known as a Transformer; then, the model performs context encoding on each local feature expansion feature vector in the input matrix to obtain a plurality of context local feature expansion feature vectors; further, the final feature representation can be obtained by arranging the plurality of context local feature expansion feature vectors in the original order. It should be noted that, when performing the converter-based context encoder encoding, appropriate hyper-parameters and model structures need to be selected to make full use of the information in the input matrix while avoiding overfitting. Furthermore, different context encoder models may affect the expressive power of the feature vectors and need to be selected and adjusted for the particular task.
Specifically, during operation of the convolutional neural network-based scanned image analysis system 300, the feature optimization module 360 is configured to perform feature optimization on the plurality of context local feature expansion feature vectors to obtain a plurality of optimized context local feature expansion feature vectors. In particular, in the technical solution of the present application, when the plurality of local feature expansion feature vectors are passed through the converter-based context encoder, context-dependent encoding based on the channel relevance of the convolutional neural network model including the depth feature fusion module is performed on the local channel image semantic features of the coronary artery axial CTA scan image expressed by the local feature expansion feature vectors, so as to obtain an image semantic feature representation with channel-global relevance. In this case, in order to make full use of both the channel-local expression and the channel-global expression, the context local feature expansion feature vectors are preferably optimized by fusing each local feature expansion feature vector with the corresponding context local feature expansion feature vector. Moreover, considering that context encoding causes a spatial migration of the feature distribution of the context local feature expansion feature vector in the high-dimensional feature space, it is desirable to promote the fusion effect of the local feature expansion feature vector and the context local feature expansion feature vector in the presence of such spatial migration, so as to improve the expression effect of the classification feature vector.
Accordingly, in one possible implementation, a class-transformer spatial migration permutation fusion is used to fuse each of the local feature expansion feature vectors, e.g., denoted as V_{1i}, and each of the context local feature expansion feature vectors, e.g., denoted as V_{2i}, which is specifically expressed as follows:
wherein V_{1i} and V_{2i} are respectively the i-th local feature expansion feature vector of the plurality of local feature expansion feature vectors and the i-th context local feature expansion feature vector of the plurality of context local feature expansion feature vectors, D(V_1, V_2) is a distance matrix between the vectors, d(V_1, V_2) represents the Euclidean distance between the vectors, t is a mask threshold hyper-parameter, the vectors are row vectors, ⊗ represents matrix multiplication, Mask(·) represents a mask function, and V'_{2i} is the i-th optimized context local feature expansion feature vector of the plurality of optimized context local feature expansion feature vectors. Here, the class-transformer spatial migration permutation fusion performs mask prediction under a class-transformer mechanism on the spatial distance of the feature value pairs through the differential representation of the local feature expansion feature vector V_{1i} and the context local feature expansion feature vector V_{2i}, so as to realize edge affine encoding of the optimized context local feature expansion feature vector V'_{2i} in the high-dimensional feature space; and, by applying a hidden state bias under the self-attention mechanism of the transformer, the optimized context local feature expansion feature vector V'_{2i} is made invariant to global rotation and translation under the transformer mechanism with respect to the local feature expansion feature vector V_{1i} and the context local feature expansion feature vector V_{2i} to be fused, thereby realizing the spatial-migration displaceability of the feature distributions of the local feature expansion feature vector V_{1i} and the context local feature expansion feature vector V_{2i}.
In this way, the optimized context local feature expansion feature vectors V'_{2i} are cascaded to obtain the classification feature vector, so that the expression effect of the classification feature vector is improved. Therefore, whether the coronary artery CTA scanning result is normal can be detected and judged based on the actual condition of the patient, so as to assist a doctor in diagnosing chest pain triple sign, improve the working efficiency of the doctor, reduce the misdiagnosis rate, and facilitate the early treatment of chest pain triple sign patients.
Specifically, during operation of the scan image analysis system 300 based on the convolutional neural network, the cascade fusion module 370 is configured to cascade the plurality of optimization context local feature expansion feature vectors to obtain classification feature vectors. That is, the plurality of optimized context local feature expansion feature vectors are cascaded to represent the global semantic association feature information of the coronary artery axial position and serve as the classification feature vector.
Cascade (cascades) refers, in machine learning, to the process of concatenating multiple classifiers in a certain order to form a cascade of classifiers. Cascaded classifiers are typically used to solve complex classification problems such as face recognition and object detection. A cascade classifier can effectively reduce the false detection rate and the missed detection rate, thereby improving the accuracy and efficiency of the classifier. In a cascade classifier, each classifier classifies the input data; if the output of the current classifier is negative, the input data is directly judged as negative and no subsequent classifier judgment is performed. This effectively reduces the computation of the classifier and improves the classification speed. In this specific example, the plurality of optimized context local feature expansion feature vectors are cascaded according to the following cascade formula to obtain the classification feature vector, where the formula is: V = Concat[V'_1, V'_2, ..., V'_n], wherein V'_1, V'_2, ..., V'_n represent the plurality of optimized context local feature expansion feature vectors, V represents the classification feature vector, and Concat[·,·] represents the cascading function.
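The cascade formula V = Concat[V'_1, V'_2, ..., V'_n] amounts to an ordered vector concatenation, e.g.:

```python
import numpy as np

def cascade(vectors):
    """Concat[...]: splice the optimized context local feature expansion
    feature vectors, in order, into one classification feature vector."""
    return np.concatenate(vectors)

# e.g. three optimized vectors V'_1, V'_2, V'_3 of possibly different lengths
v = cascade([np.array([1.0, 2.0]), np.array([3.0]), np.array([4.0, 5.0])])
# v is the 5-dimensional classification feature vector [1, 2, 3, 4, 5]
```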
Specifically, during the operation of the scan image analysis system 300 based on the convolutional neural network, the scan result detection module 380 is configured to pass the classification feature vector through a classifier to obtain a classification result, where the classification result is used to indicate whether the coronary artery axial CTA scan result is normal. That is, the classification feature vector is passed through the classifier to obtain the classification result. Specifically, in the classification processing of the classifier, the classification feature vector is subjected to multiple rounds of full-connection coding using the multiple fully connected layers of the classifier to obtain a coded classification feature vector; the coded classification feature vector is then input into the Softmax layer of the classifier, that is, it is classified using the Softmax classification function to obtain a classification label. In the technical scheme of the application, the labels of the classifier include a normal coronary artery axial CTA scan result (first label) and an abnormal coronary artery axial CTA scan result (second label), and the classifier determines which classification label the classification feature vector belongs to through the soft maximum function. It should be noted that the first label p1 and the second label p2 do not involve a manually set concept; in fact, during training, the computer model has no concept of "whether the coronary artery axial CTA scan result is normal". There are simply two classification labels and the probabilities that the output feature falls under these two labels, i.e., the sum of p1 and p2 is one.
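A hedged sketch of such a classifier head: a few fully connected layers with ReLU, followed by a Softmax layer producing the two label probabilities p1 and p2 with p1 + p2 = 1. The layer widths, the omission of biases, and the random initialization are illustrative; no trained weights from the application are implied:

```python
import numpy as np

def softmax(z):
    """Numerically stable Softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

class TwoLabelClassifier:
    """Fully connected layers followed by a Softmax layer that yields the
    probabilities p1 (normal) and p2 (abnormal), with p1 + p2 = 1."""

    def __init__(self, dims, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        # one weight matrix per fully connected layer (biases omitted)
        self.weights = [rng.standard_normal((a, b)) * 0.1
                        for a, b in zip(dims[:-1], dims[1:])]

    def __call__(self, v):
        for w in self.weights[:-1]:
            v = np.maximum(v @ w, 0.0)        # full-connection coding + ReLU
        return softmax(v @ self.weights[-1])  # Softmax layer -> (p1, p2)
```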
Therefore, the classification result of whether the coronary artery axial CTA scan result is normal is actually converted, through the classification labels, into a classification probability distribution conforming to natural rules; what is used is essentially the physical meaning of the natural probability distribution of the labels, rather than the linguistic meaning of "whether the coronary artery axial CTA scan result is normal". It should be understood that, in the technical solution of the present application, the classification label of the classifier is a detection and evaluation label for whether the coronary artery axial CTA scan result is normal, so after the classification result is obtained, whether the coronary artery CTA scan result is normal can be detected and judged based on the classification result, so as to assist a doctor in diagnosing chest pain triple symptoms.
Wherein the classifier is a machine learning model for classifying data into different categories. The principle of a classifier is to learn a classification function or decision rule from the sample features and class labels in the training data set, for classifying new data. The training process of the classifier is to learn from and optimize over the training data set to obtain the optimal classification function or decision rule, so as to predict the class of new data as well as possible. The specific principles and implementations of classifiers vary from algorithm to algorithm. For example, a naive Bayes classifier classifies by calculating, based on Bayes' theorem, the probability of a certain class given the features. A decision tree classifier is based on a tree structure and recursively partitions on the features to finally obtain a decision tree for classifying new data. A support vector machine classifier maps the data into a high-dimensional space and, based on the maximum margin principle, finds the optimal hyperplane for classification. In summary, a classifier is a model learned from training data that can be used to classify new data into known classes. Different classifiers have different principles and implementations, and an appropriate classifier can be selected according to the specific application scenario.
As described above, the scanned image analysis system based on the convolutional neural network according to the embodiment of the present application can be implemented in various terminal devices. In one example, the convolutional neural network-based scanned image analysis system 300 according to an embodiment of the present application may be integrated into a terminal device as one software module and/or hardware module. For example, the convolutional neural network-based scanned image analysis system 300 may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the convolutional neural network-based scanned image analysis system 300 may likewise be one of a number of hardware modules of the terminal device.
Alternatively, in another example, the convolutional neural network-based scanned image analysis system 300 and the terminal device may be separate devices, and the convolutional neural network-based scanned image analysis system 300 may be connected to the terminal device through a wired and/or wireless network and transmit interactive information in an agreed data format.
Further, a scanned image analysis method based on the convolutional neural network is also provided.
Fig. 5 is a flowchart of a scanned image analysis method based on a convolutional neural network according to an embodiment of the present application. As shown in fig. 5, a scan image analysis method based on a convolutional neural network according to an embodiment of the present application includes: s110, acquiring a coronary artery axial CTA scanning image; s120, carrying out image preprocessing on the coronary artery axial CTA scanning image to obtain a preprocessed coronary artery axial CTA scanning image; s130, passing the preprocessed coronary artery axial CTA scanning image through a convolutional neural network model comprising a depth feature fusion module to obtain a coronary artery axial CTA scanning feature map; s140, performing feature matrix expansion on each feature matrix of the coronary artery axial CTA scanning feature map along the channel dimension to obtain a plurality of local feature expansion feature vectors; s150, the local feature expansion feature vectors pass through a context encoder based on a converter to obtain a plurality of context local feature expansion feature vectors; s160, performing feature optimization on the plurality of context local feature expansion feature vectors to obtain a plurality of optimized context local feature expansion feature vectors; s170, cascading the plurality of optimization context local feature expansion feature vectors to obtain classification feature vectors; and S180, the classification feature vector passes through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the coronary artery axial position CTA scanning result is normal or not.
In summary, according to the convolutional neural network-based scan image analysis system and method provided by the embodiments of the present application, a deep-learning-based neural network model is used to mine the implicit features of the coronary artery in the coronary artery axial CTA scan image, so as to detect and judge whether the coronary artery CTA scan result is normal, thereby assisting a doctor in diagnosing chest pain triple symptoms, improving the working efficiency of the doctor, reducing the misdiagnosis rate, and facilitating the early treatment of patients with chest pain triple symptoms.
Fig. 6 is a schematic view of a scanned image analysis system based on a convolutional neural network according to an embodiment of the present application. As shown in fig. 6, in this application scenario, a coronary artery axial CTA scan image is acquired by an image scanner (e.g., S1 as illustrated in fig. 6). Next, the image is input to a server (e.g., S as illustrated in fig. 6) in which a convolutional neural network-based scan image analysis algorithm is deployed, wherein the server is capable of processing the input image with the convolutional neural network-based scan image analysis algorithm to generate a classification result indicating whether the coronary artery axial CTA scan result is normal.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (10)
1. A scanned image analysis system based on a convolutional neural network, comprising:
the image acquisition module is used for acquiring a CTA scanning image of the axial position of the coronary artery;
the image preprocessing module is used for preprocessing the coronary artery axial CTA scanning image to obtain a preprocessed coronary artery axial CTA scanning image;
the coronary artery axial position image feature extraction module is used for enabling the preprocessed coronary artery axial position CTA scanning image to pass through a convolutional neural network model comprising a depth feature fusion module to obtain a coronary artery axial position CTA scanning feature image;
The matrix expansion module is used for expanding the feature matrix of each feature matrix of the coronary artery axial CTA scanning feature map along the channel dimension so as to obtain a plurality of local feature expansion feature vectors;
the global semantic association module is used for enabling the local feature expansion feature vectors to pass through a context encoder based on a converter to obtain a plurality of context local feature expansion feature vectors;
the feature optimization module is used for performing feature optimization on the plurality of context local feature expansion feature vectors to obtain a plurality of optimized context local feature expansion feature vectors;
the cascade fusion module is used for cascading the plurality of optimization context local feature expansion feature vectors to obtain classification feature vectors; and
and the scanning result detection module is used for passing the classification feature vector through a classifier to obtain a classification result, and the classification result is used for indicating whether the coronary artery axial position CTA scanning result is normal or not.
2. The convolutional neural network-based scanned image analysis system of claim 1, wherein the coronary artery axial image feature extraction module is configured to:
extracting a shallow feature map from a shallow layer of the convolutional neural network model;
Extracting a deep feature map from the deep layer of the convolutional neural network model; and
fusing the shallow feature map and the deep feature map to obtain the coronary artery axial CTA scanning feature map;
wherein the ratio between the deep layer and the shallow layer is more than or equal to 5 and less than or equal to 10.
3. The convolutional neural network-based scanned image analysis system of claim 2, wherein the global semantic association module comprises:
the query vector construction unit is used for carrying out one-dimensional arrangement on the plurality of local feature expansion feature vectors to obtain global feature vectors;
a self-attention unit, configured to calculate a product between the global feature vector and a transpose vector of each of the plurality of local feature expansion feature vectors to obtain a plurality of self-attention correlation matrices;
the normalization unit is used for respectively performing normalization processing on each self-attention correlation matrix in the plurality of self-attention correlation matrices to obtain a plurality of normalized self-attention correlation matrices;
the attention calculating unit is used for obtaining a plurality of probability values through a Softmax classification function by each normalized self-attention correlation matrix in the normalized self-attention correlation matrices;
And the attention applying unit is used for weighting each local characteristic expansion characteristic vector in the local characteristic expansion characteristic vectors by taking each probability value in the probability values as a weight so as to obtain the local characteristic expansion characteristic vectors of the contexts.
4. The convolutional neural network-based scanned image analysis system of claim 3, wherein the feature optimization module is configured to: fusing each local characteristic expansion characteristic vector and each context local characteristic expansion characteristic vector by adopting a class converter space migration displacement fusion mode in a fusion formula to obtain a plurality of optimized context local characteristic expansion characteristic vectors;
wherein, the fusion formula is:
wherein V_{1i} and V_{2i} are respectively the i-th local feature expansion feature vector of the plurality of local feature expansion feature vectors and the i-th context local feature expansion feature vector of the plurality of context local feature expansion feature vectors, D(V_1, V_2) is a distance matrix between the vectors, d(V_1, V_2) represents the Euclidean distance between the vectors, t is a mask threshold hyper-parameter, the vectors are row vectors, ⊗ represents matrix multiplication, Mask(·) represents a mask function, and V'_{2i} is the i-th optimized context local feature expansion feature vector of the plurality of optimized context local feature expansion feature vectors.
5. The convolutional neural network-based scanned image analysis system of claim 4, wherein the cascade fusion module is configured to: cascading the plurality of optimization context local feature expansion feature vectors with the following cascading formula to obtain classification feature vectors;
wherein, the formula is:
V = Concat[V'_1, V'_2, ..., V'_n]
wherein V'_1, V'_2, ..., V'_n represent the plurality of optimized context local feature expansion feature vectors, V represents the classification feature vector, and Concat[·,·] represents the cascading function.
6. The convolutional neural network-based scanned image analysis system of claim 5, wherein the scan result detection module comprises:
the full-connection coding unit is used for carrying out full-connection coding on the classification characteristic vectors by using a plurality of full-connection layers of the classifier so as to obtain coded classification characteristic vectors; and
and the classification result generation unit is used for passing the coding classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
7. A scanned image analysis method based on a convolutional neural network, comprising:
acquiring a CTA scanning image of the axial position of a coronary artery;
performing image preprocessing on the coronary artery axial CTA scanning image to obtain a preprocessed coronary artery axial CTA scanning image;
the preprocessed coronary artery axial CTA scanning image is passed through a convolutional neural network model comprising a depth feature fusion module to obtain a coronary artery axial CTA scanning feature map;
performing feature matrix expansion on each feature matrix of the coronary artery axial CTA scanning feature map along the channel dimension to obtain a plurality of local feature expansion feature vectors;
passing the plurality of local feature expansion feature vectors through a converter-based context encoder to obtain a plurality of context local feature expansion feature vectors;
performing feature optimization on the plurality of context local feature expansion feature vectors to obtain a plurality of optimized context local feature expansion feature vectors;
cascading the plurality of optimization context local feature expansion feature vectors to obtain classification feature vectors; and
and the classification feature vector passes through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the coronary artery axial CTA scanning result is normal or not.
8. The convolutional neural network-based scanned image analysis method of claim 7, wherein passing the preprocessed coronary artery axial CTA scan image through a convolutional neural network model comprising a depth feature fusion module to obtain a coronary artery axial CTA scan feature map comprises:
extracting a shallow feature map from a shallow layer of the convolutional neural network model;
extracting a deep feature map from a deep layer of the convolutional neural network model; and
fusing the shallow feature map and the deep feature map to obtain the coronary artery axial CTA scan feature map;
wherein the ratio of the depth of the deep layer to the depth of the shallow layer is greater than or equal to 5 and less than or equal to 10.
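A minimal sketch of this shallow/deep fusion and the layer-ratio constraint. The fusion operator itself is not specified in the claim, so a weighted element-wise sum over spatially aligned maps is assumed here:

```python
import numpy as np

def layer_ratio_ok(shallow_idx, deep_idx):
    # claim 8: the deep/shallow layer-depth ratio must lie in [5, 10]
    return 5 <= deep_idx / shallow_idx <= 10

def fuse(shallow_map, deep_map, alpha=0.5):
    """Fuse a shallow feature map (edges, texture) with a deep one (semantics).
    A weighted element-wise sum is one common choice (assumed, not specified
    by the claim); the two maps must already be spatially aligned."""
    assert shallow_map.shape == deep_map.shape
    return alpha * shallow_map + (1.0 - alpha) * deep_map

shallow = np.ones((4, 8, 8))   # toy shallow feature map
deep = np.zeros((4, 8, 8))     # toy deep feature map
fused = fuse(shallow, deep, alpha=0.25)
```

For example, layers 4 and 24 satisfy the constraint (ratio 6), while layers 4 and 48 do not (ratio 12).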
9. The method of claim 9, wherein performing feature optimization on the plurality of context local feature expansion feature vectors to obtain a plurality of optimized context local feature expansion feature vectors comprises: fusing each of the local feature expansion feature vectors with the corresponding context local feature expansion feature vector in a transformer-like spatial migration and permutation fusion manner according to a fusion formula, to obtain the plurality of optimized context local feature expansion feature vectors;
wherein the fusion formula is:
V'_{2i} = V_{1i} ⊗ Mask(D(V_1, V_2)) + V_{2i}
wherein V_{1i} and V_{2i} are respectively the i-th local feature expansion feature vector of the plurality of local feature expansion feature vectors and the i-th context local feature expansion feature vector of the plurality of context local feature expansion feature vectors, D(V_1, V_2) is the distance matrix between the vectors, d(V_1, V_2) denotes the Euclidean distance between the vectors, t is the mask threshold hyper-parameter, the vectors are row vectors, ⊗ denotes matrix multiplication, Mask(·) denotes the mask function that zeroes entries of the distance matrix whose distance is not less than t, and V'_{2i} is the i-th optimized context local feature expansion feature vector of the plurality of optimized context local feature expansion feature vectors.
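Under one plausible reading of the claim-9 fusion (the published formula is an image, so its exact form is an assumption here), the operation builds a pairwise distance matrix between the components of the two row vectors, masks out entries at or above the threshold t, matrix-multiplies the local vector by the masked matrix, and adds the context vector as a residual:

```python
import numpy as np

def migration_fusion(v1, v2, t):
    """Sketch of the claim-9 fusion (one plausible reading, assumed).
    v1, v2: row vectors; t: mask threshold hyper-parameter."""
    D = np.abs(v1[:, None] - v2[None, :])  # pairwise distance matrix D(V1, V2)
    M = np.where(D < t, D, 0.0)            # Mask(.): zero entries >= threshold t
    return v1 @ M + v2                     # matrix multiplication, residual add

v1 = np.array([0.2, 0.5, 0.1])
v2 = np.array([0.3, 0.4, 0.6])
out = migration_fusion(v1, v2, t=0.3)
```

The mask keeps only component pairs that are close in feature space, so distant (likely irrelevant) components contribute nothing to the migrated update of the context vector.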
10. The method of claim 9, wherein passing the classification feature vector through a classifier to obtain a classification result, the classification result indicating whether the coronary artery axial CTA scan result is normal, comprises:
performing full-connection coding on the classification feature vector using a plurality of fully connected layers of the classifier to obtain a coded classification feature vector; and
passing the coded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310849947.9A CN116862877A (en) | 2023-07-12 | 2023-07-12 | Scanning image analysis system and method based on convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116862877A true CN116862877A (en) | 2023-10-10 |
Family
ID=88230073
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107563983A (en) * | 2017-09-28 | 2018-01-09 | 上海联影医疗科技有限公司 | Image processing method and medical imaging devices |
US11194972B1 (en) * | 2021-02-19 | 2021-12-07 | Institute Of Automation, Chinese Academy Of Sciences | Semantic sentiment analysis method fusing in-depth features and time sequence models |
US20220287668A1 (en) * | 2021-03-09 | 2022-09-15 | Siemens Healthcare Gmbh | Multi-task learning framework for fully automated assessment of coronary artery disease |
CN115761813A (en) * | 2022-12-13 | 2023-03-07 | 浙大城市学院 | Intelligent control system and method based on big data analysis |
CN116051853A (en) * | 2022-11-04 | 2023-05-02 | 河南科技学院 | Automatic water adding dough kneading tank and application method thereof |
CN116189179A (en) * | 2023-04-28 | 2023-05-30 | 北京航空航天大学杭州创新研究院 | Circulating tumor cell scanning analysis equipment |
CN116405326A (en) * | 2023-06-07 | 2023-07-07 | 厦门瞳景智能科技有限公司 | Information security management method and system based on block chain |
Non-Patent Citations (1)
Title |
---|
FAN Siqi: "Research on Coronary Plaque Detection Method Based on Convolutional Neural Network", China Master's Theses Full-text Database, Medicine & Health Sciences, no. 12, pages 062 - 34 *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117252926A (en) * | 2023-11-20 | 2023-12-19 | 南昌工控机器人有限公司 | Mobile phone shell auxiliary material intelligent assembly control system based on visual positioning |
CN117252926B (en) * | 2023-11-20 | 2024-02-02 | 南昌工控机器人有限公司 | Mobile phone shell auxiliary material intelligent assembly control system based on visual positioning |
CN117274270A (en) * | 2023-11-23 | 2023-12-22 | 吉林大学 | Digestive endoscope real-time auxiliary system and method based on artificial intelligence |
CN117274270B (en) * | 2023-11-23 | 2024-01-26 | 吉林大学 | Digestive endoscope real-time auxiliary system and method based on artificial intelligence |
CN117438024A (en) * | 2023-12-15 | 2024-01-23 | 吉林大学 | Intelligent acquisition and analysis system and method for acute diagnosis patient sign data |
CN117438024B (en) * | 2023-12-15 | 2024-03-08 | 吉林大学 | Intelligent acquisition and analysis system and method for acute diagnosis patient sign data |
CN117618708A (en) * | 2024-01-26 | 2024-03-01 | 吉林大学 | Intelligent monitoring system and method for intravenous infusion treatment |
CN117618708B (en) * | 2024-01-26 | 2024-04-05 | 吉林大学 | Intelligent monitoring system and method for intravenous infusion treatment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||