CN113781385B - Joint attention graph convolution method for automatic classification of brain medical images - Google Patents
Joint attention graph convolution method for automatic classification of brain medical images
- Publication number
- CN113781385B (application CN202110299393.0A)
- Authority
- CN
- China
- Prior art keywords
- brain
- network
- attention
- node
- nodes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/10092—Diffusion tensor magnetic resonance imaging [DTI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
Abstract
The invention discloses an automatic medical image recognition technology based on a graph convolutional neural network. Building on brain diffusion tensor imaging (DTI) and functional magnetic resonance imaging (FMRI), a machine-learning automatic diagnosis method is designed that fuses the two modal data and reasonably retains and exploits the graph structure of the brain network. Current brain-image classification lacks an effective method for fusing multi-modal images and learns the graph-structure information in the data poorly. Specifically, we first aggregate neighbor information between brain-region nodes on the brain DTI image and extract joint brain-region attention. A graph convolution operation is then performed on the brain FMRI image with the brain-region attention as a reference. Finally, the features are input to a multi-layer perceptron for automatic recognition and classification.
Description
Technical field: machine learning, in particular automatic recognition of brain medical images
Background: the technology belongs to the field of automatic recognition of medical images by machine learning, in which scientists have proposed a number of novel models for the automatic recognition and analysis of brain medical images. The graph convolutional neural network is a deep neural network suited to the analysis and processing of graph-structured data; researchers in deep learning address graph-structured data by designing graph convolutional network frameworks.
Disclosure of Invention
Object of the Invention
Automatic recognition of medical images is an important aid to the work of doctors, and the invention and application of a good technology in this field can greatly improve the diagnostic level of hospitals. Existing automatic brain-image recognition methods suffer from poor robustness and insufficient interpretability, and existing machine-learning models neither adapt well to the graph-structural characteristics of brain functional networks nor provide a multi-modal framework that organically fuses functional magnetic resonance imaging (FMRI) and diffusion tensor imaging (DTI).
To solve these problems, a technical scheme is sought that organically fuses the FMRI and DTI of the brain, adapts well to the graph structure of the brain network, and offers robustness and good interpretability.
Technical proposal
To achieve the above purpose, the technical scheme adopted by the invention comprises the following four steps:
(I) A higher-order functional brain network is built from FMRI: the FMRI signal vectors of the brain regions are used to construct a higher-order functional brain network; the fully connected brain network built by sparse representation over all brain regions better reflects higher-order relations and latent hidden information in brain-region function.
(II) A structural network for extracting brain-region attention information is constructed from DTI: to extract brain-network attention weights, DTI structural data serve as the edges between nodes and FMRI as the node feature vectors of the brain network graph.
(III) First-, second-, and third-order attention scores of the brain regions are calculated by node aggregation; joint attention over the DTI and FMRI modalities is extracted and used in the pooling of the GCN. This way of calculating attention requires no additional parameters, so the calculation is very cheap; the extracted attention scores are applied in the pooling layer of the GCN. The intensity of functional activity of each edge is measured on the structural network represented by DTI, and from it an attention score is evaluated for each brain-region node: if a node's attention score on the structural network is relatively high, the corresponding brain region carries relatively much functional activity through its edges to other brain regions in the DTI structural network.
(IV) Graph convolution and pooling operations are performed, and the output features are classified with a multi-layer perceptron to obtain the final recognition result. The nodes are ranked by the joint attention scores learned from the DTI and FMRI modalities; unimportant nodes are screened out and high-attention nodes are retained. In the downsampling of the GCN, the pooling layer discards nodes layer by layer to improve the fusion efficiency of higher-order neighbors, but this breaks the structural integrity of the brain network graph, so the lower layers lack the ability to fuse all nodes. A readout layer is therefore added after each pooling layer of the GCN; it performs one aggregation of global information over the current layer's graph nodes, and the results are fed to the multi-layer perceptron for the final classification.
Drawings
FIG. 1: Construction of the higher-order functional brain network graph
FIG. 2: Construction of the DTI structural brain network graph
FIG. 3: Aggregation of node attention
FIG. 4: GCN network structure
Detailed Description
The implementation mode of the technical scheme is specifically introduced as follows:
This work first builds a higher-order brain network by sparse representation and uses it as the edge information of the brain network graph, with the signal vectors of the original brain-region FMRI time series as the node information, thereby defining the complete graph structure. Unlike previous brain-network classification and diagnosis methods and frameworks, this work constructs a higher-order functional brain network graph from brain FMRI data, and constructs a structural brain network graph with DTI structural data as the edges between nodes and FMRI as the node feature vectors, from which the attention weights of the brain regions are extracted.
(I) Construction of the higher-order functional brain network based on FMRI
We use X = [x_1, x_2, ..., x_90] ∈ R^(240×90) to represent the feature vectors of the nodes in the brain network graph, i.e. the FMRI data of a single sample in the sample set, where 90 is the number of brain regions corresponding to each sample, x_i is the FMRI feature of the i-th brain region of the sample, and 240 is the length of the brain-region feature vector.
The brain-region FMRI signal vectors are used to construct the higher-order functional brain network; the fully connected brain network built by sparse representation over all brain regions better embodies higher-order relations and latent hidden information in brain-region function.
Each brain region is sparsely coded over the others; the sparse representation is obtained as

E_i = argmin_(E_i) (1/2)||x_i − A_i E_i||_2^2 + λ||E_i||_1

In this expression A_i is the dictionary corresponding to the i-th brain region, composed of the features of the other N−1 brain regions; the column of A_i corresponding to brain region i is zero. E_i is a 90×1 vector, the indicator vector of the dictionary representation of the i-th brain region, whose entries are the weights of the edges between brain-region node i and the other nodes. Computing the indicator vectors of the sparse representation for all brain regions yields the indicator-vector matrix:

E = {E_1, E_2, ..., E_90}

E is the adjacency matrix of the graph and represents the edges of the higher-order functional brain network graph. Since E is constructed by sparse representation, many elements of the matrix are zero, meaning there is no edge between the corresponding nodes. FIG. 1 illustrates the construction; we name this network the higher-order functional brain network graph.
(II) Construction of the structural network for extracting brain-region attention information based on DTI
Meanwhile, to extract the brain-network attention weights, we construct a brain network graph with DTI structural data as the edges between nodes and FMRI as the node feature vectors. As shown in FIG. 2, the edge matrix of this graph, built from DTI structural information, is denoted Ā; as in the functional brain network graph, the node features are X = [x_1, x_2, ..., x_90]. We name this the structural brain network graph; its purpose is to extract the attention weights of the brain-region nodes.
(III) Calculating first-, second-, and third-order attention scores of the brain regions by node aggregation
This section describes how we extract joint attention over the DTI and FMRI modalities and use it in the pooling of the GCN. In the first two steps we defined a DTI structural brain network graph whose node features (the FMRI signal vectors) are represented by X and whose edges are represented by Ā. The node joint attention score Z is obtained by aggregating over the nodes:

Z_i = Σ_j Ā_ij ⟨x_i, x_j⟩

where ⟨x_i, x_j⟩, the inner product of node i and node j, measures the correlation of the two node features and reflects the functional connection between brain regions on an edge of the brain network graph. That is, we measure the intensity of functional activity on each edge of the structural network represented by DTI, and from it evaluate an attention score for each brain-region node: if a node's attention score on the structural network is relatively high, this brain region carries relatively much functional activity through its edges to other brain regions in the DTI structural network. Denoting the layer-l aggregated attention score Z^l, the layer l+1 score is computed as

Z^(l+1) = Ā Z^l

This way of computing attention requires no additional parameters, so the calculation is very convenient. The attention scores extracted by the formulas above are then applied in the pooling layer of the GCN; nodes are screened with the TopK mechanism.
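The parameter-free attention aggregation described above can be sketched as follows. Since the original formula images are not reproduced in the source, the exact expressions (structural edge weight times functional inner product, then repeated propagation for higher orders) are an assumption consistent with the surrounding description.

```python
import numpy as np

def joint_attention(X, A_dti, layers=3):
    """First-, second-, and third-order joint attention scores.

    X: (T, N) FMRI node features; A_dti: (N, N) DTI structural adjacency.
    The inner product <x_i, x_j> measures functional connectivity; weighting it
    by the structural edge A_dti[i, j] and summing over neighbours gives a
    parameter-free attention score per brain region. Repeating the aggregation
    yields the higher-order scores Z^l.
    """
    F = X.T @ X                      # (N, N) functional connectivity <x_i, x_j>
    Z = (A_dti * F).sum(axis=1)      # first-order score per node
    scores = [Z]
    for _ in range(layers - 1):
        Z = A_dti @ Z                # propagate to higher-order neighbours
        scores.append(Z)
    return scores
```

No trainable weights appear anywhere in this computation, which is what the text means by attention requiring no additional parameters.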
(IV) performing graph convolution and pooling operations and classifying the output features by using a multi-layer perceptron to obtain a final recognition result
The TopK pooling mechanism imitates the idea of the max-pooling operation in CNNs and keeps only the most valuable information. In this work, the nodes are ranked by the joint attention scores learned from the DTI and FMRI modalities; unimportant nodes are screened out and high-attention nodes are retained.
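The node screening just described can be sketched as a TopK selection over the attention scores; the `ratio` parameter and function signature below are illustrative, not taken from the patent.

```python
import numpy as np

def topk_pool(X_nodes, A, Z, ratio=0.5):
    """TopK pooling: keep the ceil(ratio*N) nodes with the highest attention Z.

    X_nodes: (N, F) node features, A: (N, N) adjacency, Z: (N,) attention
    scores. Returns the induced subgraph, mirroring max-pooling in CNNs:
    low-attention brain-region nodes are discarded layer by layer.
    """
    N = X_nodes.shape[0]
    k = max(1, int(np.ceil(ratio * N)))
    idx = np.argsort(-Z)[:k]                 # indices of the k largest scores
    return X_nodes[idx], A[np.ix_(idx, idx)], idx
```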
After determining the sub-graph partition and the corresponding adjacency matrix, the information of the non-grid graph must be integrated and extracted; the aggregation of node attention was defined in the previous step. We first introduce the node aggregation of the convolutional layer:

H = GNN(X, E)

Concretely, this can be written as

H = σ(D̂^(−1/2) Ê D̂^(−1/2) X W)

where W is a parameter to be trained, Ê = E + I is the adjacency matrix with added self-loops, and D̂ is the degree matrix of Ê. Layer l+1 is defined as

H^(l+1) = σ(D̂^(−1/2) Ê D̂^(−1/2) H^l W^l)
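One convolutional layer of this kind can be sketched directly; the garbled formula in the source appears to describe the standard degree-normalized propagation rule, which is what is assumed below.

```python
import numpy as np

def gcn_layer(H, E, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (E+I) D^-1/2 H W).

    H: (N, F_in) node features, E: (N, N) adjacency, W: (F_in, F_out)
    trainable weights. Self-loops are added and the aggregation normalised by
    the degree matrix D of the self-looped adjacency.
    """
    N = E.shape[0]
    E_hat = E + np.eye(N)                        # add self-loops
    d = E_hat.sum(axis=1)                        # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ E_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU
```

Stacking this layer and interleaving it with the TopK pooling above reproduces the downsampling pipeline the text describes.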
Because the pooling layer uses the TopK method in the downsampling of the GCN, discarding nodes layer by layer to improve the fusion efficiency of higher-order neighbors, the structural integrity of the brain network graph is broken and the lower layers lack the ability to fuse all nodes. A readout layer is therefore added after each pooling layer of the GCN; it performs one aggregation of global information over the current layer's graph nodes, and the result is fed to the multi-layer perceptron for the final classification. The readout of layer l is computed as

r^l = mean_i(h_i^l) || max_i(h_i^l)

that is, the results of global average pooling and global max pooling over the layer's nodes are stitched together, where || denotes the concatenation operation. Max pooling is commonly used in neural networks to extract salient features and suppress irrelevant information; average pooling complements it by preserving background information and mitigating overfitting caused by uneven feature distributions. The readout results of all layers are summed to give the full-graph representation:

r = Σ_l r^l

The readout layer plays a role similar to the global pooling commonly placed after the convolutional layers of CNN models; both obtain a global expression by a single aggregation of the global input, and, like global pooling in CNNs, the readout layer may also use common operations such as summation, averaging, or maximization. The final GCN network containing the readout layers is shown in FIG. 4.
Neighbor nodes are aggregated on the DTI structural brain network graph to obtain joint attention over the DTI and FMRI modalities; unimportant brain-region nodes are then pooled away with TopK according to the attention values. To learn global information at each layer, a readout layer is placed after each pooling layer to aggregate the information of all nodes, and the summed readout results are finally passed to a multi-layer perceptron for classification.
Claims (1)
1. A joint attention graph convolution method for automatic classification of brain medical images, characterized by comprising the following steps:
(I) constructing a higher-order functional brain network based on FMRI: a higher-order functional brain network is constructed by a sparse-representation method from the correlations among the FMRI signal vectors of the brain regions, and after the brain network is constructed the original vectors are taken as the node features of the network nodes;
with X = [x_1, x_2, ..., x_90] ∈ R^(240×90) representing the feature vectors of the nodes in the brain network graph, where 90 is the number of brain regions corresponding to each sample, x_i is the FMRI feature of the i-th brain region of the sample, and 240 is the length of the brain-region feature vector; the fully connected brain network constructed by sparse representation is given by

E_i = argmin_(E_i) (1/2)||x_i − A_i E_i||_2^2 + λ||E_i||_1

in which A_i is the dictionary corresponding to the i-th brain region, composed of the features of the other N−1 brain regions, the column of A_i corresponding to brain region i being zero; E_i is a 90×1 vector, the indicator vector of the dictionary representation of the i-th brain region, whose entries are the weights of the edges between brain-region node i and the other nodes;
computing the indicator vectors of the sparse representation for all brain regions yields the indicator-vector matrix:

E = {E_1, E_2, ..., E_90}

E is the adjacency matrix of the graph and represents the edges of the higher-order functional brain network graph; since E is constructed by sparse representation, many elements of the matrix are zero, indicating that there is no edge between the corresponding nodes;
(II) constructing, based on DTI, a structural network for extracting brain-region attention information: to extract the brain-network attention weights, a brain network graph is constructed with DTI structural data as the edges between nodes and FMRI as the node feature vectors;
(III) calculating first-, second-, and third-order attention scores of the brain regions by node aggregation, extracting joint attention over the DTI and FMRI modalities, and using it in the pooling of the GCN;
the brain-region node attention in this step is extracted by computing the importance of each brain region through graph-convolution node aggregation, specifically as follows:
defining a DTI structural brain network graph with the FMRI signal vectors as node features, the node features of the graph being represented by X and the edges by Ā, the node joint attention score Z is obtained by aggregating the nodes:

Z_i = Σ_j Ā_ij ⟨x_i, x_j⟩

wherein ⟨x_i, x_j⟩, the inner product of node i and node j, is the correlation of the two node features and reflects the functional connection between brain regions on an edge of the brain network graph; that is, the intensity of functional activity of each edge is measured on the structural network represented by DTI, and from it an attention score is evaluated for each brain-region node;
if the attention score of a node on the structural network is high, it is understood that this brain region carries relatively more functional activity through its edges to other brain regions in the DTI structural network;
the attention score of the first layer aggregate is Z l To indicate, the calculation mode of the first+1 layer is as follows:
this way of computing attention requires no additional parameters; the attention scores extracted by the above formula are applied in the pooling layer of the GCN;
(IV) performing graph convolution and pooling operations and classifying the output features with a multi-layer perceptron to obtain the final recognition result: the nodes are ranked by the joint attention scores learned from the DTI and FMRI modalities, unimportant nodes are screened out with the TopK strategy, and high-attention nodes are retained;
both DTI and FMRI modal data are used simultaneously while learning brain structural and functional information and extracting features, and the functional network and the structural network are fused within the graph convolutional neural network to improve the performance of automatic diagnosis.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110299393.0A CN113781385B (en) | 2021-03-19 | 2021-03-19 | Joint attention graph convolution method for automatic classification of brain medical images |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113781385A CN113781385A (en) | 2021-12-10 |
CN113781385B true CN113781385B (en) | 2024-03-08 |
Family
ID=78835561
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110299393.0A Active CN113781385B (en) | Joint attention graph convolution method for automatic classification of brain medical images | 2021-03-19 | 2021-03-19 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113781385B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114743053B (en) * | 2022-04-14 | 2023-04-25 | 电子科技大学 | Magnetic resonance image auxiliary processing system based on graph neural network and self-attention |
CN115359297B (en) * | 2022-08-24 | 2024-01-26 | 南京航空航天大学 | Classification method, system, electronic equipment and medium based on higher-order brain network |
CN116206752B (en) * | 2023-02-22 | 2024-07-30 | 南通大学 | Mental disease identification method based on structure-function brain network |
CN118298491B (en) * | 2024-06-04 | 2024-08-06 | 烟台大学 | Expression recognition method and system based on multi-scale features and spatial attention |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110522448A (en) * | 2019-07-12 | 2019-12-03 | 东南大学 | A kind of brain network class method based on figure convolutional neural networks |
CN111127441A (en) * | 2019-12-25 | 2020-05-08 | 兰州大学 | Multi-modal brain image depression recognition method and system based on graph node embedding |
CN112329801A (en) * | 2020-12-03 | 2021-02-05 | 中国石油大学(华东) | Convolutional neural network non-local information construction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||