CN112185550A - Typing method, device and equipment

Typing method, device and equipment

Info

Publication number
CN112185550A
CN112185550A
Authority
CN
China
Prior art keywords
image data
module
processed
classification
global
Prior art date
Legal status
Pending
Application number
CN202011052646.6A
Other languages
Chinese (zh)
Inventor
金海岚
卢旺盛
印胤
杨光明
秦岚
Current Assignee
Qianglian Zhichuang Beijing Technology Co ltd
Original Assignee
Qianglian Zhichuang Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Qianglian Zhichuang Beijing Technology Co ltd filed Critical Qianglian Zhichuang Beijing Technology Co ltd
Priority to CN202011052646.6A priority Critical patent/CN112185550A/en
Publication of CN112185550A publication Critical patent/CN112185550A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/20 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/20 - ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10088 - Magnetic resonance imaging [MRI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30016 - Brain
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30101 - Blood vessel; Artery; Vein; Vascular
    • G06T2207/30104 - Vascular flow; Blood flow; Perfusion

Abstract

The embodiment of the specification discloses a typing method, a typing device and typing equipment, belonging to the technical field of medical imaging and computers. The method comprises the following steps: acquiring image data to be processed; preprocessing the image data to be processed to obtain preprocessed image data; and classifying the preprocessed image data based on a convolution module, a global feature fusion module and a classification module to obtain a classification result of the image data to be processed, wherein the convolution module comprises a first convolution module and a second convolution module. With the method provided by the embodiment of the specification, imaging diagnosis of non-acute middle cerebral artery occlusion can be carried out quickly, comprehensively and accurately, typing of cerebral infarction or cerebral infarction scoring, such as the ASPECTS score, is realized, and a reference basis is provided for clinical treatment.

Description

Typing method, device and equipment
Technical Field
The present disclosure relates to the field of medical imaging and computer technologies, and in particular, to a typing method, device and apparatus.
Background
Non-acute occlusion of the intracranial arteries is an important cause of ischemic stroke, accounting for about 10 percent of all ischemic strokes, with an annual stroke recurrence risk of 3.6 to 22.0 percent; middle cerebral artery occlusion is common in clinical practice and accounts for 79.6 percent of occlusive cerebrovascular disease. At present, the main treatment for symptomatic non-acute occlusion in which the intracranial artery has been occluded for more than 24 hours is still medication; for patients in whom medication is ineffective, extracranial-intracranial bypass surgery and intravascular treatment can also be used to re-establish blood circulation.
Research shows that intravascular treatment of non-acute middle cerebral artery occlusion has a certain degree of feasibility and safety, but the development of the technique is limited by inconsistent patency rates, a high incidence of complications and poor prognosis. The main reason is that, after occlusion of the middle cerebral artery, compensation cannot occur through the primary collateral circulation (circle of Willis) or the ophthalmic artery; the main compensatory route is pial arterial collateral compensation, which is delayed, making imaging diagnosis difficult, thereby limiting the effectiveness of intravascular treatment and increasing the occurrence of complications. Therefore, rapid and accurate determination of the cerebral infarction type is of great significance for establishing the optimal intravascular treatment strategy.
Disclosure of Invention
The embodiment of the specification provides a typing method, a typing device and typing equipment, which are used for solving the following technical problems: after occlusion of the middle cerebral artery, imaging diagnosis is difficult due to compensation delay, thereby limiting effectiveness of intravascular treatment and increasing occurrence of complications.
In order to solve the above technical problem, the embodiments of the present specification are implemented as follows:
the typing method provided by the embodiment of the specification comprises the following steps:
acquiring image data to be processed;
preprocessing the image data to be processed to obtain preprocessed image data;
based on a convolution module, a global feature fusion module and a classification module, the pre-processed image data is classified to obtain a classification result of the image data to be processed, and the convolution module comprises a first convolution module and a second convolution module.
Further, the preprocessing the image data to be processed to obtain preprocessed image data specifically includes:
and after removing the skull in the image data to be processed, carrying out normalization processing to obtain preprocessed image data.
Further, the classifying the preprocessed image data based on the convolution module, the global feature fusion module and the classification module to obtain a classification result of the image data to be processed specifically includes:
obtaining local features of the preprocessed image data based on the convolution module;
obtaining the global features of the preprocessed image data based on the global feature fusion module;
and inputting the local features of the preprocessed image data and the global features of the preprocessed image data into the classification module to obtain the classification result of the image data to be processed.
Further, the convolution module further includes a third convolution module, and the classifying module classifies the preprocessed image data based on the convolution module, the global feature fusion module and the classifying module to obtain a classification result of the image data to be processed, which specifically includes:
the first convolution module is used for extracting local features of the preprocessed image data to obtain a first feature map;
the second convolution module is used for extracting local features of the first feature map to obtain a second feature map;
the global feature fusion module is used for extracting global features of the first feature map to obtain a third feature map;
the second convolution module and the global fusion module are in parallel connection, so that the second feature map and the third feature map are fused to obtain a fourth feature map;
and inputting the fourth feature map into the third convolution module for convolution and pooling, and inputting the fourth feature map into the classification module for classification to obtain a classification result of the image data to be processed.
Further, the classifying the preprocessed image data based on the convolution module, the global feature fusion module and the classification module to obtain a classification result of the image data to be processed specifically includes:
the first convolution module is used for extracting local features of the preprocessed image data to obtain a first feature map;
the global feature fusion module is used for extracting global features of the first feature map to obtain a second feature map;
inputting the second feature map into the second convolution module to carry out convolution and pooling operation, and obtaining a third feature map;
and inputting the third feature map into the classification module for classification to obtain a classification result of the image data to be processed.
Further, the inputting the local features of the preprocessed image data and the global features of the preprocessed image data into the classification module to obtain the classification result of the image data to be processed specifically includes:
the first layer of the classification module is a global pooling layer or a one-dimensional processing layer, the classification module further comprises a full connection layer, and the classification result of the image data to be processed is output after the local features of the preprocessed image data and the global features of the preprocessed image data pass through the global pooling layer or the one-dimensional processing layer and then pass through the full connection layer.
An embodiment of the present specification further provides a typing device, including:
the acquisition module acquires image data to be processed;
the preprocessing module is used for preprocessing the image data to be processed to obtain preprocessed image data;
and the typing module is used for typing the preprocessed image data based on the convolution module, the global feature fusion module and the classification module to obtain a typing result of the image data to be processed, and the convolution module comprises a first convolution module and a second convolution module.
Further, the preprocessing the image data to be processed to obtain preprocessed image data specifically includes:
and after removing the skull in the image data to be processed, carrying out normalization processing to obtain preprocessed image data.
Further, the classifying the preprocessed image data based on the convolution module, the global feature fusion module and the classification module to obtain a classification result of the image data to be processed specifically includes:
obtaining local features of the preprocessed image data based on the convolution module;
obtaining the global features of the preprocessed image data based on the global feature fusion module;
and inputting the local features of the preprocessed image data and the global features of the preprocessed image data into the classification module to obtain the classification result of the image data to be processed.
Further, the convolution module further includes a third convolution module, and the classifying module classifies the preprocessed image data based on the convolution module, the global feature fusion module and the classifying module to obtain a classification result of the image data to be processed, which specifically includes:
the first convolution module is used for extracting local features of the preprocessed image data to obtain a first feature map;
the second convolution module is used for extracting local features of the first feature map to obtain a second feature map;
the global feature fusion module is used for extracting global features of the first feature map to obtain a third feature map;
the second convolution module and the global fusion module are in parallel connection, so that the second feature map and the third feature map are fused to obtain a fourth feature map;
and inputting the fourth feature map into the third convolution module for convolution and pooling, and inputting the fourth feature map into the classification module for classification to obtain a classification result of the image data to be processed.
Further, the classifying the preprocessed image data based on the convolution module, the global feature fusion module and the classification module to obtain a classification result of the image data to be processed specifically includes:
the first convolution module is used for extracting local features of the preprocessed image data to obtain a first feature map;
the global feature fusion module is used for extracting global features of the first feature map to obtain a second feature map;
inputting the second feature map into the second convolution module to carry out convolution and pooling operation, and obtaining a third feature map;
and inputting the third feature map into the classification module for classification to obtain a classification result of the image data to be processed.
Further, the inputting the local features of the preprocessed image data and the global features of the preprocessed image data into the classification module to obtain the classification result of the image data to be processed specifically includes:
the first layer of the classification module is a global pooling layer or a one-dimensional processing layer, the classification module further comprises a full connection layer, and the classification result of the image data to be processed is output after the local features of the preprocessed image data and the global features of the preprocessed image data pass through the global pooling layer or the one-dimensional processing layer and then pass through the full connection layer.
An embodiment of the present specification further provides an electronic device, including:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring image data to be processed;
preprocessing the image data to be processed to obtain preprocessed image data;
based on a convolution module, a global feature fusion module and a classification module, the pre-processed image data is classified to obtain a classification result of the image data to be processed, and the convolution module comprises a first convolution module and a second convolution module.
The embodiment of the present specification further provides a neural network model, which is applied to the typing method described in the present application, and the neural network model includes:
the input layer is used for receiving image data to be processed;
the convolution module is used for extracting local features of the image data to be processed and comprises a first convolution module and a second convolution module;
the global feature fusion module is used for extracting global features of the image data to be processed, and the convolution module and the global feature fusion module are in a parallel connection relation;
the classification module is used for performing classification based on the local features of the image data to be processed and the global features of the image data to be processed to obtain a classification result of the image data to be processed, the first layer of the classification module is a global pooling layer or a one-dimensional processing layer, and the classification module further comprises a full connection layer.
Further, when the convolution module and the global feature fusion module are in a parallel relationship, the convolution module further comprises a third convolution module.
The embodiments of the specification acquire image data to be processed; preprocess the image data to be processed to obtain preprocessed image data; and classify the preprocessed image data based on a convolution module, a global feature fusion module and a classification module to obtain the classification result of the image data to be processed, where the convolution module comprises a first convolution module and a second convolution module. In this way, imaging diagnosis of non-acute middle cerebral artery occlusion can be carried out quickly, comprehensively and accurately, typing of cerebral infarction or cerebral infarction scoring, such as the ASPECTS score, is realized, and a reference basis is provided for clinical treatment.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only some embodiments described in the present specification, and for those skilled in the art, other drawings can be obtained according to the drawings without any creative effort.
FIG. 1 is a schematic diagram of a typing method provided in an embodiment of the present disclosure;
FIG. 2 is a system configuration diagram of a typing method provided in example 1 of the present specification;
FIG. 3 is a detailed schematic diagram of a typing method provided in example 1 of the present specification;
fig. 4 is a schematic diagram of global feature fusion provided in an embodiment of the present specification;
fig. 5 is a schematic diagram of a training process of an MCA typing model provided in an embodiment of the present specification;
fig. 6 is a schematic diagram of an improved neural network model for ASPECTS scoring provided by an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a neural network model for ASPECTS scoring after further modification provided by an embodiment of the present disclosure;
fig. 8 is a schematic diagram of a typing device provided in an embodiment of the present disclosure.
Detailed Description
In the prior art, there are various methods for typing cerebral infarction, and different typing results can be obtained depending on the typing criteria used. Proper clinical typing is critical to acute-phase treatment, secondary prevention, and stroke-related studies. The current typing methods mainly include OCSP (Oxfordshire Community Stroke Project), TOAST and CISS typing, all of which are based on clinical symptoms. Among them, OCSP and TOAST typing are the most commonly used, and CISS typing has been gaining increasing recognition.
OCSP typing is based on the clinical manifestations of the most prominent neurological deficits caused by the initial stroke; it rapidly suggests the size and location of the affected vessels and of the infarct, and comprises total anterior circulation infarction (TACI), partial anterior circulation infarction (PACI), lacunar infarction (LACI) and posterior circulation infarction (POCI). Its advantages are as follows: the method is simple and rapid, can be widely applied clinically, has good reliability and validity, can accurately predict the location and size of the cerebral infarction focus, and is helpful for evaluating the prognosis of a patient. However, this typing method also has certain limitations: the classification is coarse, and the etiology and pathogenesis of the different subtypes are not clear; different subtypes of stroke can have similar or atypical clinical manifestations, and clinical signs change over time, so a certain bias exists in the typing process.
TOAST typing is likewise based on clinical manifestations and divides ischemic stroke into: large-artery atherosclerosis, cardioembolism, small-artery occlusion, stroke of other determined etiology, and stroke of undetermined etiology. This typing method overlooks perforating-artery atherosclerotic disease, so the typing results are not comprehensive.
CISS typing is also based on clinical manifestations and divides ischemic stroke into: large artery atherosclerosis (LAA), cardiogenic stroke (CS), penetrating artery disease (PAD), other etiology (OE) and undetermined etiology (UE).
Because the existing typing approaches consider only clinical manifestations, their typing results have certain limitations. A new typing method that yields a good typing result is therefore needed, so that the pathophysiological mechanism of part of ischemic strokes can be explained, and so that the method can also be used for evaluating clinical symptoms, treatment and prognosis and for providing a reference basis for clinical treatment.
In the typing method provided by the embodiment of the specification, in addition to the consideration of clinical symptoms, the imaging characteristics are further considered during typing, and the convolutional neural network is improved so as to overcome the defect that the ordinary convolutional neural network cannot meet the requirement of typing due to the compensation complexity after middle cerebral artery occlusion.
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any inventive step based on the embodiments of the present disclosure, shall fall within the scope of protection of the present application.
Fig. 1 is a schematic diagram of a typing method provided in an embodiment of the present disclosure, where the typing method includes:
step S101: and acquiring image data to be processed.
In an embodiment of the present description, the image data to be processed is brain image data, and may specifically be CTA or MRA image data, or other image data such as DSA, or CT perfusion/magnetic perfusion imaging or cerebrovascular angiography, or high resolution magnetic resonance (HR-MRI).
Step S103: and preprocessing the image data to be processed to obtain preprocessed image data.
In an embodiment of this specification, the preprocessing the image data to be processed to obtain preprocessed image data specifically includes:
after removing the skull in the image data to be processed, normalization processing is carried out to obtain the preprocessed image data.
Since irrelevant tissues such as the skull are present in the image data to be processed, the skull needs to be removed to ensure the accuracy of subsequent typing. In a specific implementation process, removing the skull from the image data to be processed specifically includes: extracting the skull from the first image by threshold segmentation according to a first threshold to obtain a skull mask image, which divides the cranial volume into the parts inside and outside the skull. In practical applications, the threshold for extracting the skull is > 100. Further, pixel points below a second threshold are assigned to the skull, and the skull is removed from the skull mask image to obtain a tissue mask image with the skull removed. In a specific implementation, the second threshold may be 80. Other methods for removing the skull can also be adopted; the specific skull-removal method does not constitute a limitation of the present application.
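For illustration only, the first thresholding step described above can be sketched as follows with NumPy; the threshold value follows the text, the second-threshold refinement is omitted, and the function name is an assumption rather than the patented implementation.

```python
import numpy as np

def remove_skull(volume, skull_threshold=100):
    """Minimal sketch: extract a skull mask by thresholding and zero it out.

    volume: 3-D array of voxel intensities (e.g. a head CT volume).
    The second-threshold refinement described in the text is omitted here.
    """
    skull_mask = volume > skull_threshold          # bone candidates (> 100 per the text)
    stripped = np.where(skull_mask, 0, volume)     # zero out skull voxels
    return stripped, skull_mask
```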
In the embodiment of the present specification, the normalization process includes: one or more of coordinate centering, x-sharpening normalization, scaling normalization or rotation normalization. Other methods may be used for normalization, and the specific method of normalization is not limited in this application.
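As a rough illustration of two of the normalization steps listed above, the sketch below uses SciPy for coordinate centering and scaling normalization; rotation correction is omitted, and the target grid of 512 x 512 x 256 is taken from embodiment 1 as an assumption.

```python
import numpy as np
from scipy import ndimage

def normalize_volume(volume, target_shape=(512, 512, 256)):
    """Illustrative spatial normalization: centre the brain within the grid,
    then rescale to a fixed target shape."""
    # coordinate centering: move the centre of mass of nonzero voxels to the grid centre
    com = ndimage.center_of_mass(volume > 0)
    shift = [dim / 2 - c for c, dim in zip(com, volume.shape)]
    centred = ndimage.shift(volume, shift, order=1)

    # scaling normalization: resample to the target shape
    zoom = [t / s for t, s in zip(target_shape, centred.shape)]
    return ndimage.zoom(centred, zoom, order=1)
```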
Step S105: based on a convolution module, a global feature fusion module and a classification module, the pre-processed image data is classified to obtain a classification result of the image data to be processed, and the convolution module comprises a first convolution module and a second convolution module.
In an embodiment of the present specification, the classifying the preprocessed image data based on a convolution module, a global feature fusion module, and a classification module to obtain a classification result of the image data to be processed specifically includes:
obtaining local features of the preprocessed image data based on the convolution module;
obtaining the global features of the preprocessed image data based on the global feature fusion module;
and inputting the local features of the preprocessed image data and the global features of the preprocessed image data into the classification module to obtain the classification result of the image data to be processed.
In an embodiment of the present specification, the classifying the preprocessed image data based on the convolution module, the global feature fusion module, and the classification module to obtain a classification result of the image data to be processed specifically includes:
the first convolution module is used for extracting local features of the preprocessed image data to obtain a first feature map;
the global feature fusion module is used for extracting global features of the first feature map to obtain a second feature map;
inputting the second feature map into the second convolution module to carry out convolution and pooling operation, and obtaining a third feature map;
and inputting the third feature map into the classification module for classification to obtain a classification result of the image data to be processed.
In an embodiment of the present specification, the convolution module further includes a third convolution module, and the classifying module classifies the preprocessed image data based on the convolution module, the global feature fusion module, and the classifying module to obtain a classification result of the image data to be processed, where the classifying module specifically includes:
the first convolution module is used for extracting local features of the preprocessed image data to obtain a first feature map;
the second convolution module is used for extracting local features of the first feature map to obtain a second feature map;
the global feature fusion module is used for extracting global features of the first feature map to obtain a third feature map;
the second convolution module and the global fusion module are in parallel connection, so that the second feature map and the third feature map are fused to obtain a fourth feature map;
and inputting the fourth feature map into the third convolution module for convolution and pooling, and inputting the fourth feature map into the classification module for classification to obtain a classification result of the image data to be processed.
It should be particularly noted that, in this embodiment of the present specification, when the second convolution module and the global feature fusion module are in a parallel relationship, the numbers of the pooling layers of the second convolution module and the global feature fusion module are the same, so as to ensure that the sizes of the feature maps respectively corresponding to the second convolution module and the global feature fusion module are the same.
In an embodiment of this specification, the inputting the local feature of the preprocessed image data and the global feature of the preprocessed image data into the classification module to obtain a classification result of the image data to be processed specifically includes:
the first layer of the classification module is a global pooling layer or a one-dimensional processing layer, the classification module further comprises a full connection layer, and the classification result of the image data to be processed is output after the local features of the preprocessed image data and the global features of the preprocessed image data pass through the global pooling layer or the one-dimensional processing layer and then pass through the full connection layer.
In this embodiment of the present specification, the global pooling layer or the one-dimensional processing layer of the classification module is used to flatten the tensors of the local features of the preprocessed image data and the global features of the preprocessed image data obtained in the previous step, or to reconstruct the dimensionality of the tensors, after they are input into the classification module, so as to flatten the multidimensional vector into a one-dimensional vector. In a specific implementation process, flattening (flatten) may be performed under the framework of the open-source machine learning platform TensorFlow, or the view function under the framework of the open-source machine learning platform PyTorch may be adopted to reconstruct the dimensionality of the tensor. Of course, other methods capable of flattening a multidimensional vector into a one-dimensional vector under an open-source machine learning platform framework can also be regarded as falling within the protection scope of the present application.
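The two options mentioned above can be sketched as follows; the dummy feature-map shape and the dense-layer width are illustrative assumptions, not values fixed by the text.

```python
import tensorflow as tf

# Dummy feature map standing in for the output of the last convolution module
# (batch of 1, an illustrative spatial size, 192 channels).
features = tf.random.normal([1, 32, 32, 16, 192])

flat = tf.keras.layers.GlobalAveragePooling3D()(features)     # or tf.keras.layers.Flatten()(features)
probs = tf.keras.layers.Dense(9, activation="softmax")(flat)  # 9 MCA types (Table 1)

# PyTorch equivalent of the "one-dimensional processing layer":
#   flat = features.view(features.size(0), -1)
```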
In order to further understand that the convolution-based module, the global feature fusion module and the classification module provided in the present application classify the preprocessed image data to obtain the classification result of the image data to be processed, a detailed description will be given below by taking a specific embodiment as an example. Fig. 2 is a system configuration diagram of a typing method provided in embodiment 1 of the present specification. As shown in fig. 2, the features extracted by the first convolution module are input into the second convolution module and the global feature fusion module, wherein the second convolution module and the global feature fusion module are in a parallel relationship, and after being processed by the second convolution module and the global feature fusion module, the features are input into the third convolution module for further convolution and pooling operations, so that the features can be classified by the subsequent classification module.
In one embodiment of the present description, the convolution module is in a parallel relationship with the global feature fusion module. The first convolution module is composed of several convolution layers and pooling layers and is used for extracting local features of the preprocessed image data; the first feature map it produces is mainly based on operations between a pixel and its neighbouring pixels, capturing textures, edges and the like. The global feature fusion module and the second convolution module process this feature map in parallel: the second convolution module extracts local features of the first feature map to obtain a second feature map, so the second feature map carries local features; the global feature fusion module extracts global features of the first feature map to obtain a third feature map, so the third feature map carries global features. Because the second convolution module and the global feature fusion module are connected in parallel, the second feature map and the third feature map are fused to obtain a fourth feature map, so that the fourth feature map carries local and global features at the same time;
and inputting the fourth feature map into a third convolution module for convolution and pooling, and inputting the fourth feature map into a classification module for classification to obtain a classification result of the image data to be processed.
It should be particularly noted that, in the embodiment of the present specification, the output typing result mainly covers the following: whether the MCA (middle cerebral artery) is occluded, whether the M1 trunk is visualized, whether the distal M1 bifurcation is visualized, and whether M2 is visualized. The typing result is determined based on whether the MCA is occluded, whether the M1 trunk is visualized, whether the distal M1 bifurcation is visualized, whether M2 is visualized, and their respective probability values.
To facilitate understanding of MCA typing, table 1 is a schematic representation of MCA typing provided in the examples herein. The MCA typing comprises 9 types of typing, and the details are shown in Table 1.
Table 1 (the MCA typing table itself is provided as an image in the original publication)
To further understand the specific implementation process of the system structure diagram of the typing method shown in fig. 2, the whole operation flow is specifically described below with a specific embodiment by taking CTA or MRA image data as an example. Fig. 3 is a specific schematic diagram of a typing method provided in example 1 of the present specification.
Example 1
After normalization and resampling, the CTA or MRA image data is a 512 x 512 x 256 matrix, i.e., the image consists of 256 tomographic frames and each frame is a 512 x 512 grayscale image, so the number of channels is 1, and the input of the network is a 512 x 512 x 256 x 1 matrix under the framework of the open-source machine learning platform TensorFlow or the open-source machine learning platform PyTorch. The first convolution module (within the dashed box) is composed of two convolution blocks. Each convolution block has a convolution operation and a pooling operation, typically using a 3 x 3 convolution kernel; after the convolution operation there may also be a dropout (random deactivation) operation, a normalization operation, an activation operation and the like, and the pooling operation may be max pooling or average pooling. In the current embodiment, the stride of the pooling operation is 2. The first convolution block uses 32 convolution kernels, resulting in 32 feature maps; after the pooling operation, the image size is reduced from 512 x 512 x 256 to 256 x 256 x 128. The second convolution block uses 64 convolution kernels, resulting in 64 feature maps; after the pooling operation, the image size is reduced from 256 x 256 x 128 to 128 x 128 x 64, i.e., the size of the first feature map is 128 x 128 x 64 and the number of channels is 64. It should be particularly noted that the framework of the open-source machine learning platform may also be another framework, such as Caffe, and the specific type of framework is not limited in this application.
The second convolution module is similar to the first convolution module, and is distinguished in that the second convolution block in the second convolution module has no pooling operation, and correspondingly, the second global feature fusion block of the global feature fusion module has no pooling operation.
In one embodiment of the present description, the first convolution module consists of two convolution blocks, each convolution block having a convolution operation and a pooling operation. In the convolutional layer, the size of the convolutional kernel is 3 × 3, the step size of the convolutional operation is 1, the number of convolutional kernels in the first convolutional block is 32, the number of convolutional kernels in the second convolutional block is 64, and the step size of the pooling operation is 2. The second convolution module is composed of two convolution blocks, wherein the number of convolution kernels in the first convolution block is 128, and the number of convolution kernels in the second convolution block is 128. And the global feature fusion module consists of two convolution blocks, wherein the number of convolution kernels in the first convolution block is 64, and the number of convolution kernels in the second convolution block is also 64. In the third convolution module, the convolution module is composed of 1 convolution block, and the number of convolution kernels is 192.
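For orientation, the parallel wiring of embodiment 1 can be sketched in Keras as below. This is a minimal sketch under stated assumptions: batch normalization, ReLU activation and the width of the first fully connected layer are not fixed by the text, dropout is omitted, and the global feature fusion block is stood in for by an ordinary convolution block purely to show the topology (its actual correlation-based operation is described with Fig. 4 below).

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters, pool=True):
    """Convolution + normalization + activation, optionally followed by pooling."""
    x = layers.Conv3D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    if pool:
        x = layers.MaxPooling3D(pool_size=2, strides=2)(x)
    return x

inputs = layers.Input(shape=(512, 512, 256, 1))      # normalized, resampled CTA/MRA volume

# First convolution module: 32 then 64 kernels, each block pooled with stride 2
x = conv_block(inputs, 32)
x = conv_block(x, 64)                                 # first feature map: 128 x 128 x 64, 64 channels

# Second convolution module (local branch): 128/128 kernels, no pooling in the second block
local = conv_block(x, 128)
local = conv_block(local, 128, pool=False)

# Global feature fusion module (global branch): 64/64 kernels, no pooling in the second block.
# A plain conv block stands in for the correlation-weighted fusion here.
global_feat = conv_block(x, 64)
global_feat = conv_block(global_feat, 64, pool=False)

# Branches fused, then the third convolution module (192 kernels) and the classifier
fused = layers.Concatenate()([local, global_feat])
x = conv_block(fused, 192)
x = layers.GlobalAveragePooling3D()(x)
x = layers.Dense(64, activation="relu")(x)            # illustrative width
outputs = layers.Dense(9, activation="softmax")(x)    # 9 MCA types (Table 1)

model = tf.keras.Model(inputs, outputs)
```

Both branches contain exactly one pooling layer, so their feature maps have the same size and can be concatenated, matching the requirement stated earlier.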
Fig. 4 is a schematic diagram of global feature fusion provided in an embodiment of the present disclosure, and the operation of global feature fusion is shown in Fig. 4. For the first global feature fusion block there are 64 feature maps in total (the number of channels is 64), and the 128 x 128 x 64 feature volume can be regarded as 64 slices, each slice being 128 x 128. For the pixel point (the small cube in the figure) located at (j, k) in the i-th slice, the feature vector of that point has length 64 and is written as x = (x1, x2, ..., x64). In order to fuse the features of this pixel with the features of pixels far away in the same slice and with the features of pixels in other slices, a correlation operation can be performed between this pixel vector and every other pixel vector, and each other vector is added into this pixel vector with a certain weight, with less-correlated pixels contributing with smaller weight than more-correlated ones. Let another vector be y = (y1, y2, ..., y64); the correlation between the two vectors can be expressed as w = (x1·y1 + x2·y2 + ... + x64·y64) / (|x|·|y|). The total number of vectors y is 128 x 128 x 64; denote them y1, y2, ..., yN with correlations w1, w2, ..., wN. After global feature fusion, the point becomes x' = w1·y1 + w2·y2 + ... + wN·yN. It should be noted that the operation of calculating the correlation between two vectors is essentially a distance calculation, and the specific method of calculating the correlation between two vectors does not constitute a specific limitation of the present application.
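The correlation-weighted fusion above resembles a non-local operation and can be sketched as follows in PyTorch (the patent mentions both TensorFlow and PyTorch; the choice here is an assumption). Normalizing the weights with a softmax is also an assumption; the text only requires that more-correlated positions contribute with larger weight.

```python
import torch
import torch.nn.functional as F

def global_feature_fusion(feat):
    """Correlation-weighted fusion over all voxel positions (a sketch).

    feat: tensor of shape (C, D, H, W). Each position's feature vector is
    replaced by a weighted sum of all positions' vectors, weighted by the
    cosine similarity w = (x . y) / (|x| |y|).
    """
    c = feat.shape[0]
    vecs = feat.reshape(c, -1).t()          # (N, C), one row per voxel position
    vecs_n = F.normalize(vecs, dim=1)       # unit-length feature vectors
    corr = vecs_n @ vecs_n.t()              # (N, N) correlations w
    weights = F.softmax(corr, dim=1)        # assumed normalisation of the weights
    fused = weights @ vecs                  # x' = sum_i w_i * y_i
    return fused.t().reshape(feat.shape)
```

Note that for a full 128 x 128 x 64 feature volume the N x N correlation matrix is very large, so a practical implementation would compute it per slice, in tiles, or on a downsampled map.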
In embodiment 1 of the present specification, the output of the third convolution module may further be converted into a vector through a flattening operation, and the typing result may then be output through two fully connected layers.
In the embodiment of the present specification, the convolution module, the global feature fusion module, and the classification module together form an MCA typing model, and the MCA typing model is a model obtained through neural network pre-training based on the brain image data and the corresponding clinical features thereof. The neural network may use the model structure, the number of model layers, and the number of convolution kernels described in embodiment 1 of the present specification. In the specific implementation process, the neural network preferably adopts the model structure, the number of layers of the model, and the number of convolution kernels shown in embodiment 1.
The embodiment of the present specification further provides a neural network model, which is applied to the typing method described in the present application, and the neural network model includes:
the input layer is used for receiving image data to be processed;
the convolution module is used for extracting local features of the image data to be processed and comprises a first convolution module and a second convolution module;
the global feature fusion module is used for extracting global features of the image data to be processed, and the convolution module and the global feature fusion module are in a parallel connection relation;
the classification module is used for performing classification based on the local features of the image data to be processed and the global features of the image data to be processed to obtain a classification result of the image data to be processed, the first layer of the classification module is a global pooling layer or a one-dimensional processing layer, and the classification module further comprises a full connection layer.
Furthermore, the model also comprises a preprocessing module which is used for preprocessing the image data to be processed,
wherein:
the pretreatment step comprises: after removing the skull in the image data to be processed, carrying out normalization processing to obtain preprocessed image data so as to extract the local features of the image data to be processed and the global features of the image data to be processed.
Further, when the convolution module and the global feature fusion module are in a parallel relationship, the convolution module further comprises a third convolution module.
Further, when the convolution module and the global feature fusion module are in a parallel relationship, the numbers of pooling layers in the second convolution module and the global feature fusion module are the same, so as to ensure that the sizes of the feature maps respectively corresponding to the second convolution module and the global feature fusion module are the same.
Further, the global pooling layer or the one-dimensional processing layer of the classification module is configured to perform flattening processing on the local features of the image data to be processed and the tensor of the global features of the image data to be processed or reconstruct the dimensionality of the tensor, and flatten the multidimensional vector into a one-dimensional vector.
It should be particularly noted that the neural network model provided in the embodiments of the present disclosure may be used for the typing of image data, preferably CTA image data or MRA image data, where the typing is middle cerebral artery typing of the image data.
Fig. 5 is a schematic diagram of a training process of an MCA typing model provided in an embodiment of the present specification, and as shown in fig. 5, the training of the MCA typing model includes:
step S501: and dividing the acquired learning sample set data into training set data, tuning set data and test set data.
In an embodiment of the present description, the learning sample set data is brain image data, and may specifically be CTA or MRA image data, or other image data such as DSA, CT perfusion/nuclear magnetic perfusion imaging, cerebrovascular angiography, or high-resolution magnetic resonance (HR-MRI) images. The learning sample set data is labeled, and the type of intracranial artery occlusion in each sample is determined. The labeled learning sample set data is then randomly divided into training set data, tuning set data and test set data in a ratio of 5:2:3.
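A simple way to realise the 5:2:3 random split is sketched below, assuming the samples and labels are NumPy arrays; the function name and random seed are illustrative.

```python
import numpy as np

def split_532(samples, labels, seed=0):
    """Random 5:2:3 split into training, tuning and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train = int(0.5 * len(samples))
    n_tune = int(0.2 * len(samples))
    train, tune, test = np.split(idx, [n_train, n_train + n_tune])
    return ((samples[train], labels[train]),
            (samples[tune], labels[tune]),
            (samples[test], labels[test]))
```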
Step S503: carrying out normalization processing on the learning sample set data to obtain normalized learning sample set data.
The learning sample set data is normalized and resampled to obtain 512 x 512 x 256 matrices, each paired with its corresponding type of middle cerebral artery occlusion.
Step S505: and training the normalized learning sample set data to obtain an MCA typing model.
The optimal hyper-parameters are selected based on the constructed neural network. The image matrices in the training set data and their corresponding typing classes are input in pairs to train the neural network. The model is first warmed up with a lower learning rate, and the learning rate is then gradually increased. During training, a cross-entropy cost function can be used as the loss function. If the loss function on the tuning set data no longer decreases, training is stopped to prevent overfitting. For each model structure, different hyper-parameters are used and several models are trained under the same initialization conditions; the average value of the models' loss function on the tuning set data is taken as the evaluation index of the hyper-parameters, and the hyper-parameters with the smallest average loss function value are taken as the optimal hyper-parameters for that model structure.
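A hedged Keras sketch of this training recipe (warm-up, cross-entropy loss, early stopping on the tuning set) is given below; the optimizer, learning rates, batch size, epoch counts and the variable names train_images, tune_images, etc. are illustrative assumptions, and `model` refers to a network such as the sketch above.

```python
import tensorflow as tf

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",      # cross-entropy cost function; labels one-hot over 9 types
    metrics=["accuracy"],
)

callbacks = [
    # warm-up: start from a lower learning rate, then gradually raise it
    tf.keras.callbacks.LearningRateScheduler(
        lambda epoch, lr: lr * 2.0 if epoch < 3 else lr),
    # stop when the loss on the tuning (validation) set no longer decreases
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True),
]

model.fit(train_images, train_labels,
          validation_data=(tune_images, tune_labels),
          epochs=100, batch_size=2, callbacks=callbacks)
```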
And then, testing the MCA typing model corresponding to the optimal hyper-parameter by using the test set data, and selecting the optimal MCA typing model for subsequent typing.
It should be noted that, in the case of insufficient data in the test set, the optimal MCA typing model may also be selected by using a cross validation method.
To further understand the neural network in the training process of the MCA typing model provided in the embodiments of the present specification, the output of the neural network will be described below with reference to specific embodiments:
For a given image, the output of the neural network is a vector of length 9, [p1, p2, ..., p9], where pi represents the probability that the image belongs to the i-th type and p1 + p2 + ... + p9 = 1. If p2 = 0.5, p3 = 0.5 and the rest are 0, then the probability that the image belongs to type 2 [MCA occluded, M1 trunk visualized, M1 distal bifurcation visualized, M2 visualized] or type 3 [MCA occluded, M1 trunk visualized, M1 distal bifurcation visualized, M2 not visualized] is 0.5 each, indicating that M2 may be partially visualized.
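The example above can be reproduced with a couple of lines; the vector values are those given in the text and the variable names are illustrative.

```python
import numpy as np

# p[i-1] = probability that the volume belongs to type i (Table 1); probabilities sum to 1
p = np.array([0.0, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])

best_type = int(np.argmax(p)) + 1   # most likely type; here types 2 and 3 tie at 0.5
print(best_type, p.sum())           # -> 2, 1.0
```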
By adopting the MCA typing model provided by the embodiment of the specification, the local features and the global features can be fused, a better typing result can be obtained, and the requirement of typing is met.
It should be noted that, the improvement based on the neural network model provided in the embodiments of the present disclosure can also be used for cerebral infarction related scoring, such as ASPECTS scoring.
In a specific implementation process, the improvement of the neural network model provided based on the embodiment of the present specification is mainly to improve a classification module in the neural network model, and replace the classification module with a regression module. That is, the improved neural network model includes: the system comprises an input layer, a convolution module, a global feature fusion module and a regression module.
In the embodiment of the present specification, the output of the regression module is a continuous numerical value used to output a cerebral infarction-related score, such as the ASPECTS score. In the embodiment of the specification, when the improved neural network model is used for cerebral infarction-related scoring, such as the ASPECTS score, thin-layer NCCT image data is input, local features and global features of the thin-layer NCCT image data are extracted by the improved neural network model, and the cerebral infarction-related score, such as the ASPECTS score, is output. The improved neural network model provided by the embodiment of the specification is suitable for scoring acute ischemic stroke caused by MCA occlusion, such as the ASPECTS score, but is not suitable for the case of bilateral infarction. In a specific implementation, if the left brain is infarcted in n regions the label is n, and if the right brain is infarcted in n regions the label is -n; the set of all labels is {n is an integer | -10 <= n <= 10}, and n = 0 means that neither the left nor the right brain is infarcted.
In the embodiment of the present specification, the improved neural network model is trained with thin-layer NCCT image data of unilateral infarction as samples; real labels are obtained by manual labeling, and the improved neural network model is obtained after training. In practical application, the input of the improved neural network model is thin-layer NCCT image data, and the output is the infarction-related score, such as the ASPECTS score.
In the embodiment of the present specification, the improved neural network model can be used for anterior-circulation ASPECTS scoring as well as for posterior-circulation ASPECTS scoring. When the improved neural network model is used for anterior-circulation ASPECTS scoring, the target regions for scoring comprise the nuclei level and the supra-nuclei level: the nuclei level comprises 14 regions, namely M1, M2, M3, the insula, the putamen, the caudate nucleus and the posterior limb of the internal capsule of the left and right brain, and the supra-nuclei level, which lies 2 cm above the nuclei level, comprises the M4, M5 and M6 regions of the left and right brain.
When the improved neural network model is used for posterior-circulation ASPECTS scoring, the target regions for scoring comprise: the pons and the bilateral cerebellum at the cerebellar level, the midbrain at the midbrain level, and the thalamus and the posterior cerebral artery supply territory at the cerebral level. To further understand how the improved neural network model provided by the embodiments of the present disclosure is used for ASPECTS scoring, a detailed description is given below with reference to specific embodiments.
Taking the embodiment shown in Fig. 3 as an example, when the improved neural network model is used for ASPECTS scoring, the input module, the first convolution module, the global feature fusion module and the second convolution module remain unchanged. To facilitate understanding of the improved neural network model for ASPECTS scoring, Fig. 6 is a schematic diagram of an improved neural network model for ASPECTS scoring provided by an embodiment of the present disclosure. As shown in Fig. 6, the only difference from Fig. 3 lies in the regression module: in the fully connected step, a softmax step is passed through, so that the scoring result is output.
Fig. 7 is a schematic diagram of a further modified neural network model for ASPECTS scoring provided by an embodiment of the present disclosure. As shown in Fig. 7, the only difference from Fig. 6 is that, in the fully connected step, the softmax step is not passed through and a specific numerical value is output directly. For example, an output of -4 indicates 4 regional infarcts in the left brain, giving an ASPECTS score of 6.
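The score conversion implied by this example can be written as a tiny helper, under the assumption that the magnitude of the output is the number of infarcted regions while the sign merely indicates the hemisphere.

```python
def aspects_from_output(value):
    """ASPECTS score from the regression output: e.g. an output of -4
    (four infarcted regions) gives an ASPECTS score of 10 - 4 = 6."""
    return 10 - abs(round(value))
```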
It should be noted that, in Figs. 3, 6 and 7 of the present specification, taking the first convolution module in Fig. 3 as an example, in the notation 128 x 128 x 64, the 128 x 128 represents the size of each corresponding feature map, and 64 represents that the corresponding number of channels is 64, which is also the number of convolution kernels.
By adopting the typing method provided by the embodiment of the specification, the imaging diagnosis of the non-acute-stage occlusion of the middle cerebral artery can be quickly, comprehensively and accurately carried out, the typing of the cerebral infarction or the scoring of the cerebral infarction, such as the ASPECTS scoring, is realized, and a reference basis is provided for clinical treatment.
The above details a typing method, and accordingly, the present specification also provides a typing device, as shown in fig. 8. Fig. 8 is a schematic diagram of a typing device provided in an embodiment of the present specification, where the typing device includes:
an obtaining module 801, configured to obtain image data to be processed;
a preprocessing module 803, configured to preprocess the image data to be processed to obtain preprocessed image data;
a typing module 805, configured to type the preprocessed image data based on a convolution module, a global feature fusion module and a classification module to obtain a typing result of the image data to be processed, where the convolution module includes a first convolution module and a second convolution module. A minimal code sketch of this module layout is given below.
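As a hedged illustration of how these three modules can be wired together in software, the following sketch chains them into one pipeline; the class and parameter names are assumptions for illustration, not names used by this specification.

```python
class TypingDevice:
    """Sketch of the apparatus of fig. 8: obtaining module (801),
    preprocessing module (803) and typing module (805) chained into a pipeline."""

    def __init__(self, reader, preprocessor, typing_model):
        self.obtaining_module = reader              # 801: obtains image data to be processed
        self.preprocessing_module = preprocessor    # 803: e.g. skull removal + normalization
        self.typing_module = typing_model           # 805: convolution + global feature fusion + classification

    def run(self, source):
        image = self.obtaining_module(source)
        preprocessed = self.preprocessing_module(image)
        return self.typing_module(preprocessed)     # typing result of the image data
```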
Further, preprocessing the image data to be processed to obtain preprocessed image data specifically includes:
removing the skull from the image data to be processed and then performing normalization to obtain the preprocessed image data. One possible implementation of this step is sketched below.
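The following is a minimal preprocessing sketch. The thresholding-based skull removal and the min-max normalization shown here are assumptions chosen for illustration; the embodiment does not prescribe a particular skull-stripping algorithm.

```python
import numpy as np
from scipy import ndimage

def preprocess_ncct(volume_hu: np.ndarray) -> np.ndarray:
    """Skull removal followed by normalization (illustrative sketch only)."""
    # Assumed brain window: keep voxels in a typical soft-tissue HU range,
    # excluding bone (> ~100 HU) and air/background (< 0 HU).
    brain_mask = (volume_hu > 0) & (volume_hu < 100)

    # Keep only the largest connected soft-tissue component as the brain region.
    labeled, n = ndimage.label(brain_mask)
    if n > 0:
        sizes = ndimage.sum(brain_mask, labeled, index=range(1, n + 1))
        brain_mask = labeled == (int(np.argmax(sizes)) + 1)

    brain = np.where(brain_mask, volume_hu, 0.0)

    # Min-max normalization of the skull-stripped image to [0, 1].
    lo, hi = float(brain.min()), float(brain.max())
    return (brain - lo) / (hi - lo + 1e-6)
```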
Further, classifying the preprocessed image data based on the convolution module, the global feature fusion module and the classification module to obtain a classification result of the image data to be processed specifically includes:
obtaining local features of the preprocessed image data based on the convolution module;
obtaining the global features of the preprocessed image data based on the global feature fusion module;
and inputting the local features of the preprocessed image data and the global features of the preprocessed image data into the classification module to obtain the classification result of the image data to be processed.
Further, the convolution module further includes a third convolution module, and classifying the preprocessed image data based on the convolution module, the global feature fusion module and the classification module to obtain a classification result of the image data to be processed specifically includes the following (an illustrative code sketch of this parallel arrangement is given after the list):
the first convolution module is used for extracting local features of the preprocessed image data to obtain a first feature map;
the second convolution module is used for extracting local features of the first feature map to obtain a second feature map;
the global feature fusion module is used for extracting global features of the first feature map to obtain a third feature map;
the second convolution module and the global feature fusion module are connected in parallel, so that the second feature map and the third feature map are fused to obtain a fourth feature map;
and inputting the fourth feature map into the third convolution module for convolution and pooling, and then inputting the result into the classification module for classification, to obtain a classification result of the image data to be processed.
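The following is a minimal PyTorch sketch of this parallel arrangement, offered under stated assumptions: the layer sizes, the use of channel concatenation as the fusion operation, and the form of the global feature fusion module (a global-average-pooling context block) are all illustrative choices, not details fixed by this embodiment.

```python
import torch
import torch.nn as nn

class GlobalFeatureFusion(nn.Module):
    """Assumed form of the global feature fusion module: the input feature map is
    globally average-pooled, projected by a 1x1 convolution, and the resulting
    global descriptor is broadcast and added back, fusing global context into
    every spatial position."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        g = self.proj(self.pool(x))      # global descriptor, spatially 1 x 1
        return x + g                     # broadcast add: global context fused in

class ParallelTypingNet(nn.Module):
    """conv1 -> [conv2 || global feature fusion] -> fuse -> conv3 -> classification."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.conv1 = nn.Sequential(      # first convolution module
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.conv2 = nn.Sequential(      # second convolution module (local features)
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.global_fusion = GlobalFeatureFusion(64)   # global feature fusion module
        self.conv3 = nn.Sequential(      # third convolution module (convolution + pooling)
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Sequential( # classification module
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_classes))

    def forward(self, x):
        f1 = self.conv1(x)                    # first feature map
        f2 = self.conv2(f1)                   # second feature map (local)
        f3 = self.global_fusion(f1)           # third feature map (global context)
        f4 = torch.cat([f2, f3], dim=1)       # fourth feature map (fused)
        return self.classifier(self.conv3(f4))
```

For example, calling ParallelTypingNet() on a tensor of shape (1, 1, 128, 128) returns a tensor of shape (1, 3), one logit per assumed typing class.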
Further, classifying the preprocessed image data based on the convolution module, the global feature fusion module and the classification module to obtain a classification result of the image data to be processed specifically includes the following (a sketch of this serial arrangement is given after the list):
the first convolution module is used for extracting local features of the preprocessed image data to obtain a first feature map;
the global feature fusion module is used for extracting global features of the first feature map to obtain a second feature map;
inputting the second feature map into the second convolution module to carry out convolution and pooling operation, and obtaining a third feature map;
and inputting the third feature map into the classification module for classification to obtain a classification result of the image data to be processed.
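Reusing the hypothetical building blocks from the previous sketch, the serial arrangement differs only in wiring: the output of the global feature fusion module is fed into the second convolution module rather than being fused with it in parallel.

```python
import torch.nn as nn
# Assumes GlobalFeatureFusion from the previous sketch is in scope.

class SerialTypingNet(nn.Module):
    """conv1 -> global feature fusion -> conv2 (convolution + pooling) -> classification."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.conv1 = nn.Sequential(      # first convolution module
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.global_fusion = GlobalFeatureFusion(64)   # global feature fusion module
        self.conv2 = nn.Sequential(      # second convolution module
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Sequential( # classification module
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_classes))

    def forward(self, x):
        f1 = self.conv1(x)               # first feature map (local features)
        f2 = self.global_fusion(f1)      # second feature map (global features)
        f3 = self.conv2(f2)              # third feature map
        return self.classifier(f3)       # classification result
```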
Further, inputting the local features of the preprocessed image data and the global features of the preprocessed image data into the classification module to obtain the classification result of the image data to be processed specifically includes:
The first layer of the classification module is a global pooling layer or a one-dimensional (flattening) processing layer, and the classification module further comprises a fully connected layer. The local features of the preprocessed image data and the global features of the preprocessed image data pass through the global pooling layer or the one-dimensional processing layer and then through the fully connected layer, after which the classification result of the image data to be processed is output. Both admissible forms of the first layer are sketched below.
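A minimal sketch of the two admissible classification heads follows, assuming for illustration a fused 128-channel, 32 × 32 feature map and 3 typing classes.

```python
import torch
import torch.nn as nn

fused = torch.randn(1, 128, 32, 32)   # assumed fused local + global feature map

# Variant A: a global pooling layer first, then the fully connected layer.
head_pool = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 3))

# Variant B: a one-dimensional (flatten) processing layer first, then the fully connected layer.
head_flat = nn.Sequential(nn.Flatten(), nn.Linear(128 * 32 * 32, 3))

print(head_pool(fused).shape, head_flat(fused).shape)   # both: torch.Size([1, 3])
```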
An embodiment of the present specification further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring image data to be processed;
preprocessing the image data to be processed to obtain preprocessed image data;
classifying the preprocessed image data based on a convolution module, a global feature fusion module and a classification module to obtain a classification result of the image data to be processed, wherein the convolution module comprises a first convolution module and a second convolution module.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the electronic device, and the nonvolatile computer storage medium, since they are substantially similar to the embodiments of the method, the description is simple, and the relevant points can be referred to the partial description of the embodiments of the method.
The apparatus, the electronic device, the nonvolatile computer storage medium and the method provided in the embodiments of the present description correspond to each other, and therefore, the apparatus, the electronic device, and the nonvolatile computer storage medium also have similar advantageous technical effects to the corresponding method.
In the 1990s, an improvement in a technology could be clearly distinguished as either an improvement in hardware (for example, an improvement in a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement in a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, such programming is now mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, while the source code to be compiled is written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logic method flow can be readily obtained simply by lightly programming the method flow into an integrated circuit using the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer-readable program code, the same functionality can be implemented by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for performing the various functions may also be regarded as structures within the hardware component. Or even the means for performing the functions may be regarded both as software modules for performing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the various elements may be implemented in the same one or more software and/or hardware implementations in implementing one or more embodiments of the present description.
As will be appreciated by one skilled in the art, the present specification embodiments may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present specification, and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (15)

1. A typing method, characterized in that the method comprises:
acquiring image data to be processed;
preprocessing the image data to be processed to obtain preprocessed image data;
classifying the preprocessed image data based on a convolution module, a global feature fusion module and a classification module to obtain a classification result of the image data to be processed, wherein the convolution module comprises a first convolution module and a second convolution module.
2. The method according to claim 1, wherein the preprocessing the image data to be processed to obtain preprocessed image data specifically comprises:
and after removing the skull in the image data to be processed, carrying out normalization processing to obtain preprocessed image data.
3. The method according to claim 1, wherein the classifying the pre-processed image data based on the convolution module, the global feature fusion module and the classification module to obtain the classification result of the image data to be processed specifically comprises:
obtaining local features of the preprocessed image data based on the convolution module;
obtaining the global features of the preprocessed image data based on the global feature fusion module;
and inputting the local features of the preprocessed image data and the global features of the preprocessed image data into the classification module to obtain the classification result of the image data to be processed.
4. The method according to claim 1, wherein the convolution module further includes a third convolution module, and the classifying the preprocessed image data based on the convolution module, the global feature fusion module and the classification module to obtain a classification result of the image data to be processed specifically includes:
the first convolution module is used for extracting local features of the preprocessed image data to obtain a first feature map;
the second convolution module is used for extracting local features of the first feature map to obtain a second feature map;
the global feature fusion module is used for extracting global features of the first feature map to obtain a third feature map;
the second convolution module and the global feature fusion module are connected in parallel, so that the second feature map and the third feature map are fused to obtain a fourth feature map;
and inputting the fourth feature map into the third convolution module for convolution and pooling, and then inputting the result into the classification module for classification, to obtain a classification result of the image data to be processed.
5. The method according to claim 1, wherein the classifying the pre-processed image data based on the convolution module, the global feature fusion module and the classification module to obtain the classification result of the image data to be processed specifically comprises:
the first convolution module is used for extracting local features of the preprocessed image data to obtain a first feature map;
the global feature fusion module is used for extracting global features of the first feature map to obtain a second feature map;
inputting the second feature map into the second convolution module to carry out convolution and pooling operation, and obtaining a third feature map;
and inputting the third feature map into the classification module for classification to obtain a classification result of the image data to be processed.
6. The method according to claim 3, wherein the inputting the local features of the pre-processed image data and the global features of the pre-processed image data into the classification module to obtain the classification result of the image data to be processed specifically comprises:
wherein the first layer of the classification module is a global pooling layer or a one-dimensional processing layer, and the classification module further comprises a fully connected layer; the local features of the preprocessed image data and the global features of the preprocessed image data pass through the global pooling layer or the one-dimensional processing layer and then through the fully connected layer, after which the classification result of the image data to be processed is output.
7. A typing device, said device comprising:
an acquisition module, configured to acquire image data to be processed;
a preprocessing module, configured to preprocess the image data to be processed to obtain preprocessed image data;
and a typing module, configured to type the preprocessed image data based on a convolution module, a global feature fusion module and a classification module to obtain a typing result of the image data to be processed, wherein the convolution module comprises a first convolution module and a second convolution module.
8. The apparatus according to claim 7, wherein the pre-processing the image data to be processed to obtain pre-processed image data specifically comprises:
and after removing the skull in the image data to be processed, carrying out normalization processing to obtain preprocessed image data.
9. The apparatus according to claim 7, wherein the classifying the pre-processed image data based on the convolution module, the global feature fusion module and the classification module to obtain the classification result of the image data to be processed specifically includes:
obtaining local features of the preprocessed image data based on the convolution module;
obtaining the global features of the preprocessed image data based on the global feature fusion module;
and inputting the local features of the preprocessed image data and the global features of the preprocessed image data into the classification module to obtain the classification result of the image data to be processed.
10. The apparatus according to claim 7, wherein the convolution module further includes a third convolution module, and the classifying the preprocessed image data based on the convolution module, the global feature fusion module and the classification module to obtain a classification result of the image data to be processed specifically includes:
the first convolution module is used for extracting local features of the preprocessed image data to obtain a first feature map;
the second convolution module is used for extracting local features of the first feature map to obtain a second feature map;
the global feature fusion module is used for extracting global features of the first feature map to obtain a third feature map;
the second convolution module and the global feature fusion module are connected in parallel, so that the second feature map and the third feature map are fused to obtain a fourth feature map;
and inputting the fourth feature map into the third convolution module for convolution and pooling, and then inputting the result into the classification module for classification, to obtain a classification result of the image data to be processed.
11. The apparatus according to claim 7, wherein the classifying the pre-processed image data based on the convolution module, the global feature fusion module and the classification module to obtain the classification result of the image data to be processed specifically includes:
the first convolution module is used for extracting local features of the preprocessed image data to obtain a first feature map;
the global feature fusion module is used for extracting global features of the first feature map to obtain a second feature map;
inputting the second feature map into the second convolution module to carry out convolution and pooling operation, and obtaining a third feature map;
and inputting the third feature map into the classification module for classification to obtain a classification result of the image data to be processed.
12. The apparatus of claim 9, wherein the inputting the local features of the pre-processed image data and the global features of the pre-processed image data into the classification module to obtain the classification result of the image data to be processed comprises:
wherein the first layer of the classification module is a global pooling layer or a one-dimensional processing layer, and the classification module further comprises a fully connected layer; the local features of the preprocessed image data and the global features of the preprocessed image data pass through the global pooling layer or the one-dimensional processing layer and then through the fully connected layer, after which the classification result of the image data to be processed is output.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring image data to be processed;
preprocessing the image data to be processed to obtain preprocessed image data;
classifying the preprocessed image data based on a convolution module, a global feature fusion module and a classification module to obtain a classification result of the image data to be processed, wherein the convolution module comprises a first convolution module and a second convolution module.
14. A neural network model applied to the typing method according to any one of claims 1 to 6, the neural network model comprising:
an input layer, configured to receive image data to be processed;
a convolution module, configured to extract local features of the image data to be processed, the convolution module comprising a first convolution module and a second convolution module;
a global feature fusion module, configured to extract global features of the image data to be processed, the convolution module and the global feature fusion module being connected in parallel;
and a classification module, configured to perform classification based on the local features of the image data to be processed and the global features of the image data to be processed to obtain a classification result of the image data to be processed, wherein the first layer of the classification module is a global pooling layer or a one-dimensional processing layer, and the classification module further comprises a fully connected layer.
15. The model of claim 14, wherein said convolution module further comprises a third convolution module when said convolution module is connected in parallel with said global feature fusion module.
CN202011052646.6A 2020-09-29 2020-09-29 Typing method, device and equipment Pending CN112185550A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011052646.6A CN112185550A (en) 2020-09-29 2020-09-29 Typing method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011052646.6A CN112185550A (en) 2020-09-29 2020-09-29 Typing method, device and equipment

Publications (1)

Publication Number Publication Date
CN112185550A true CN112185550A (en) 2021-01-05

Family

ID=73946553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011052646.6A Pending CN112185550A (en) 2020-09-29 2020-09-29 Typing method, device and equipment

Country Status (1)

Country Link
CN (1) CN112185550A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993732A (en) * 2019-03-22 2019-07-09 杭州深睿博联科技有限公司 The pectoral region image processing method and device of mammography X
CN110826629A (en) * 2019-11-08 2020-02-21 华南理工大学 Otoscope image auxiliary diagnosis method based on fine-grained classification
CN111080596A (en) * 2019-12-11 2020-04-28 浙江工业大学 Auxiliary screening method and system for pneumoconiosis fusing local shadows and global features

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862022A (en) * 2021-04-26 2021-05-28 南京钺曦医疗科技有限公司 ASPECTS scoring method for calculating non-enhanced CT
CN113593678A (en) * 2021-08-03 2021-11-02 北京安德医智科技有限公司 Cerebral apoplexy typing method and device based on blood vessel image completion

Similar Documents

Publication Publication Date Title
CN109685123B (en) Scoring method and system based on skull CT image
US20220198230A1 (en) Auxiliary detection method and image recognition method for rib fractures based on deep learning
Meijs et al. Robust segmentation of the full cerebral vasculature in 4D CT of suspected stroke patients
Chetoui et al. Explainable end-to-end deep learning for diabetic retinopathy detection across multiple datasets
CN111081378B (en) Aneurysm rupture risk assessment method and system
Patel et al. Multi-resolution CNN for brain vessel segmentation from cerebrovascular images of intracranial aneurysm: a comparison of U-Net and DeepMedic
CN109448004B (en) Centerline-based intracranial blood vessel image interception method and system
CN111584077A (en) Aneurysm rupture risk assessment method and system
CN112185550A (en) Typing method, device and equipment
CN109712122B (en) Scoring method and system based on skull CT image
CN109671069B (en) Method and system for measuring core infarction volume based on skull CT image
Chen et al. Generative adversarial network based cerebrovascular segmentation for time-of-flight magnetic resonance angiography image
CN109671067B (en) Method and system for measuring core infarction volume based on skull CT image
CN111584076A (en) Aneurysm rupture risk assessment method and system
Tariq et al. Diabetic retinopathy detection using transfer and reinforcement learning with effective image preprocessing and data augmentation techniques
CN112734726B (en) Angiography typing method, angiography typing device and angiography typing equipment
Zhang et al. A unified mammogram analysis method via hybrid deep supervision
CN115082405A (en) Training method, detection method, device and equipment of intracranial focus detection model
CN112927815B (en) Method, device and equipment for predicting intracranial aneurysm information
CN112801996A (en) Grading method, grading device and grading equipment
Yoon et al. Collaborative multi-modal deep learning and radiomic features for classification of strokes within 6 h
CN113160165A (en) Blood vessel segmentation method, device and equipment
CN110517244B (en) Positioning method and system based on DSA image
Bidwai et al. Detection of Diabetic Retinopathy using Deep Learning
CN110517243B (en) Positioning method and system based on DSA image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room 2301, 23rd Floor, Building 3, No. 2 Ronghua South Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing, 100176

Applicant after: UNION STRONG (BEIJING) TECHNOLOGY Co.,Ltd.

Address before: 100176 901, building 3, yard 2, Ronghua South Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant before: UNION STRONG (BEIJING) TECHNOLOGY Co.,Ltd.