CN116246112A - Data processing method and system based on neural image database training classification model


Info

Publication number
CN116246112A
Authority
CN
China
Prior art keywords
model
classification
training
image
data
Prior art date
Legal status
Granted
Application number
CN202310180625.XA
Other languages
Chinese (zh)
Other versions
CN116246112B (en)
Inventor
姚洪祥
崔津津
刘贯中
张恒
韩邵军
胡兴和
王新江
安宁豫
周波
刘勇
Current Assignee
Second Medical Center of PLA General Hospital
Original Assignee
Second Medical Center of PLA General Hospital
Priority date
Filing date
Publication date
Application filed by Second Medical Center of PLA General Hospital
Priority to CN202310180625.XA
Publication of CN116246112A
Application granted
Publication of CN116246112B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10104 Positron emission tomography [PET]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

An embodiment of the invention relates to a data processing method and system for training a classification model based on a neuroimage database. The method comprises the following steps: creating a multi-modal neuroimage database in each region; performing image-label annotation on each multi-modal neuroimage database based on a plurality of unified single-modality image classification prediction models; performing SL (Swarm Learning) node docking between each annotated multi-modal neuroimage database and an SL model learning network to generate corresponding database docking nodes; performing M rounds of full-node model training on a multi-modal classification model over the SL model learning network, where M is a preset maximum training count; and, once the M rounds of full-node model training are finished, obtaining the latest model parameter template of the multi-modal classification model through each database docking node. The invention reduces model training difficulty, improves training sufficiency, and ensures model consistency across the model-using institutions of all regions.

Description

Data processing method and system based on neural image database training classification model
Technical Field
The invention relates to the technical field of data processing, and in particular to a data processing method and system for training a classification model based on a neuroimage database.
Background
Alzheimer's disease (AD) is the major type of dementia, a progressive and irreversible neurological disorder; early identification and early intervention are effective ways to slow its progression. With the application and development of artificial-intelligence models in the field of neuroimaging, early AD classification prediction from neuroimages, such as magnetic resonance imaging (MRI), electroencephalography (EEG) and positron emission tomography (PET), has become an effective auxiliary recognition means. However, a precondition for the normal use of an artificial-intelligence classification model is that the institution using the model must collect a large number of multi-modal neuroimages, construct a corresponding neuroimage database, and fully train the classification model on that database. Owing to the regional distribution of institutions, the difficulty of acquiring raw data and other factors, the neuroimage databases created by most institutions fall short of the total data volume and data complexity that full training of the model requires. Consequently, even when artificial-intelligence classification models of identical structure are used, the model performance achieved by institutions in different regions can hardly reach a consistent level.
Disclosure of Invention
In view of the defects of the prior art, the invention provides a data processing method, system and electronic device for training a classification model based on a neuroimage database. A multi-modal neuroimage database storing several types of neuroimage data (MRI, EEG and PET data) is created in each region, and each region is provided with a plurality of unified single-modality image classification prediction models that perform early AD classification prediction from single-modality images. Based on these single-modality models, each item of neuroimage data in every multi-modal neuroimage database is given an image label (a normal-type label or one of several early-AD classification-type labels). All labeled multi-modal neuroimage databases are then connected to an SL (Swarm Learning) model learning network to form a plurality of database docking nodes, and the SL model learning network performs multiple rounds of full-node model training, through all database docking nodes, on the multi-modal classification model used for early AD classification prediction, so as to obtain a model parameter template that satisfies the required training volume and training complexity. In this way, the model-using institutions of each region obtain identical model parameter templates from their corresponding database docking nodes, and the model performance of every institution can reach a consistent level.
To achieve the above object, a first aspect of an embodiment of the present invention provides a data processing method for training a classification model based on a neuroimage database, the method including:
creating a multi-modal neuroimage database in each region;
performing image-label annotation on each multi-modal neuroimage database based on a plurality of unified single-modality image classification prediction models;
performing SL node docking between each annotated multi-modal neuroimage database and an SL model learning network to generate corresponding database docking nodes;
performing M rounds of full-node model training on the multi-modal classification model over the SL model learning network, where M is a preset maximum training count;
and obtaining the latest model parameter template of the multi-modal classification model through each database docking node when the M rounds of full-node model training are finished.
Preferably, the multi-modal neuroimage database includes a plurality of first data records; each first data record comprises a first image modality, first image data and a first image label; the first image modality is an MRI modality, an EEG modality or a PET modality; the first image data is the MRI data, EEG data or PET data corresponding to that modality; the first image label is either a normal classification label or one of a plurality of early Alzheimer's disease classification labels; all first image labels are initialized to empty;
The unified single-modality image classification prediction models consist of a plurality of pre-trained, mature single-modality models, namely an MRI image classification prediction model, an EEG image classification prediction model and a PET image classification prediction model;
the SL model learning network is a decentralized model learning network built on an SL blockchain network; the SL model learning network comprises a plurality of SL nodes, each of which is connected pairwise with every other SL node;
the SL model learning network presets a model smart contract used to provide a model data packet to any SL node that signs the contract; the model data packet consists of the multi-modal classification model, a model loss function, a training objective function, a loss convergence range and a model parameter template.
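The contract-and-packet arrangement described above can be sketched as a plain object that hands each signing node its own copy of the model data packet. This is an illustrative sketch, not the patent's implementation; all class and field names (`ModelPacket`, `ModelContract`, the placeholder field values) are hypothetical.

```python
from copy import deepcopy
from dataclasses import dataclass, field

@dataclass
class ModelPacket:
    # the five items the text names as the model data packet
    model: dict                    # multi-modal classification model (placeholder)
    loss_fn: str                   # model loss function identifier
    objective_fn: str              # training objective function identifier
    loss_convergence: tuple        # loss convergence range, e.g. (0.0, 0.1)
    parameter_template: list = field(default_factory=list)

class ModelContract:
    """Stand-in for the preset model smart contract on the SL network."""
    def __init__(self, packet: ModelPacket):
        self._packet = packet
        self.signers = set()

    def sign(self, node_id: str) -> ModelPacket:
        # a docking node that signs the contract receives a local copy
        # of the packet, which it then stores and trains on locally
        self.signers.add(node_id)
        return deepcopy(self._packet)

contract = ModelContract(
    ModelPacket({}, "cross_entropy", "sgd", (0.0, 0.1), [0.0, 0.0]))
local_packet = contract.sign("node-A")
local_packet.parameter_template[0] = 1.0  # local copy; contract master untouched
```

Each node mutating only its deep copy mirrors the text's point that the packet is "stored locally" per docking node.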
Further, the MRI image classification prediction model comprises a first feature extraction module, a first classification module and a first label output module; the input end of the first feature extraction module is a model input end, and the output end of the first feature extraction module is connected with the input end of the first classification module; the output end of the first classification module is connected with the input end of the first label output module; the output end of the first label output module is a model output end; the first feature extraction module is used for carrying out full brain feature extraction processing on MRI data input by the model according to a preset full brain structure template to obtain a corresponding first feature map; the first classification module is used for carrying out classification prediction according to the first feature map to obtain a first classification prediction vector with a plurality of classification probabilities; the first tag output module is used for outputting a prediction type corresponding to the maximum classification probability in the first classification prediction vector as a corresponding type tag; the prediction type of the first classification prediction vector comprises a normal type and a plurality of early Alzheimer disease classification types;
The EEG image classification prediction model comprises a second feature extraction module, a second classification module and a second label output module; the input end of the second feature extraction module is a model input end, and its output end is connected to the input end of the second classification module; the output end of the second classification module is connected to the input end of the second label output module; the output end of the second label output module is a model output end; the second feature extraction module performs whole-brain feature extraction on the EEG data input to the model to obtain a corresponding second feature map; the second classification module performs classification prediction on the second feature map to obtain a second classification prediction vector holding a plurality of classification probabilities; the second label output module outputs the prediction type corresponding to the maximum classification probability in the second classification prediction vector as the corresponding type label; the prediction types of the second classification prediction vector comprise a normal type and a plurality of early Alzheimer's disease classification types;
the PET image classification prediction model comprises a third feature extraction module, a third classification module and a third label output module; the input end of the third feature extraction module is a model input end, and the output end of the third feature extraction module is connected with the input end of the third classification module; the output end of the third classification module is connected with the input end of the third tag output module; the output end of the third tag output module is a model output end; the third feature extraction module is used for carrying out full brain feature extraction processing on PET data input by the model to obtain a corresponding third feature map; the third classification module is used for carrying out classification prediction according to the third feature map to obtain a third classification prediction vector with a plurality of classification probabilities; the third tag output module is configured to output, as a corresponding type tag, a prediction type corresponding to a maximum classification probability in the third classification prediction vector; the prediction type of the third classification prediction vector comprises a normal type and a plurality of early Alzheimer disease classification types;
The multi-modal classification model comprises a multi-modal feature extraction module, a multi-modal feature fusion module and a fused-feature classification module; the multi-modal feature extraction module consists of three parallel feature extraction units, namely a first, a second and a third feature extraction unit; the input ends of the three feature extraction units are model input ends, and their output ends are connected to the input end of the multi-modal feature fusion module; the output end of the multi-modal feature fusion module is connected to the input end of the fused-feature classification module; the output end of the fused-feature classification module is a model output end; the first, second and third feature extraction units perform whole-brain feature extraction on the input MRI, EEG and PET data, respectively, to obtain three corresponding whole-brain feature maps; the multi-modal feature fusion module performs multi-modal feature fusion on the three whole-brain feature maps to obtain a corresponding first fused feature map; the fused-feature classification module performs classification prediction on the first fused feature map to obtain a fourth classification prediction vector holding a plurality of classification probabilities; each classification probability of the fourth classification prediction vector corresponds to a prediction type, and these prediction types comprise a normal type and a plurality of early Alzheimer's disease classification types.
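The data flow of the multi-modal classification model (three parallel extraction units, feature fusion, fused-feature classification) can be sketched as below. This is a toy illustration with made-up dimensions and random weights; concatenation for fusion and a linear softmax classifier are assumptions, since the patent does not specify the fusion or classification operators.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x, weights):
    # stand-in for a per-modality whole-brain feature extraction unit
    return np.tanh(weights @ x)

def multimodal_classify(mri, eeg, pet, params):
    # three parallel feature extraction units, one per modality
    f1 = extract_features(mri, params["w_mri"])
    f2 = extract_features(eeg, params["w_eeg"])
    f3 = extract_features(pet, params["w_pet"])
    fused = np.concatenate([f1, f2, f3])   # multi-modal feature fusion
    logits = params["w_cls"] @ fused       # fused-feature classification
    probs = np.exp(logits - logits.max())  # softmax -> classification probabilities
    return probs / probs.sum()             # the "fourth classification prediction vector"

# toy dimensions: 3 input values per modality, 4 features each, 5 prediction types
params = {
    "w_mri": rng.standard_normal((4, 3)),
    "w_eeg": rng.standard_normal((4, 3)),
    "w_pet": rng.standard_normal((4, 3)),
    "w_cls": rng.standard_normal((5, 12)),  # 12 = 3 modalities x 4 features
}
probs = multimodal_classify(rng.standard_normal(3), rng.standard_normal(3),
                            rng.standard_normal(3), params)
label = int(np.argmax(probs))  # prediction type with the highest probability
```

The `argmax` at the end plays the role of the label-output modules in the single-modality models: the type with the maximum classification probability becomes the predicted label.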
Preferably, performing image-label annotation on each multi-modal neuroimage database based on the plurality of unified single-modality image classification prediction models specifically includes:
traversing each first data record of the multi-modal neuroimage database; taking the currently traversed first data record as the corresponding current data record, and extracting its first image modality and first image data as the corresponding current image modality and current image data; identifying the current image modality; if the current image modality is the MRI modality, performing classification prediction on the current image data with the MRI image classification prediction model to obtain a corresponding first type label; if it is the EEG modality, performing classification prediction with the EEG image classification prediction model to obtain a corresponding first type label; and if it is the PET modality, performing classification prediction with the PET image classification prediction model to obtain a corresponding first type label; and setting the first image label of the current data record to the corresponding first type label.
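The traversal-and-dispatch just described reduces to looking up a single-modality predictor by the record's modality and writing back the predicted label. A minimal sketch, assuming the database is a list of dicts and each predictor is a callable (the predictor lambdas and label strings are placeholders, not the patent's models):

```python
def label_database(records, predictors):
    """Annotate every record of a multi-modal neuroimage database in place.

    records: list of dicts with keys "modality", "data" and "label".
    predictors: mapping from modality name to a single-modality
    classification-prediction callable returning a type label.
    """
    for rec in records:                          # traverse each first data record
        predictor = predictors[rec["modality"]]  # dispatch on the image modality
        rec["label"] = predictor(rec["data"])    # set the first image label
    return records

# toy predictors standing in for the MRI / EEG / PET prediction models
predictors = {
    "MRI": lambda data: "normal",
    "EEG": lambda data: "early_AD_1",
    "PET": lambda data: "normal",
}
db = [{"modality": "EEG", "data": [0.1], "label": None},
      {"modality": "MRI", "data": [0.2], "label": None}]
label_database(db, predictors)
```

Dispatching through a mapping keeps the traversal identical for all three modalities, matching the three symmetric branches in the text.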
Preferably, performing SL node docking between each annotated multi-modal neuroimage database and the SL model learning network to generate a corresponding database docking node specifically includes:
selecting any SL node in the SL model learning network that is not yet docked with a multi-modal neuroimage database as the database docking node corresponding to the current multi-modal neuroimage database; and having the database docking node sign the model smart contract to obtain a corresponding model data packet, which is stored locally.
Preferably, performing M rounds of full-node model training on the multi-modal classification model based on the SL model learning network according to the preset maximum training count M specifically includes:
in each round of full-node model training, every database docking node performs one round of local model training according to its corresponding multi-modal neuroimage database and locally stored model data packet, updating its locally stored model parameter template; one of the database docking nodes is selected as the current leader node; all other database docking nodes send their latest model parameter templates to the current leader node; the current leader node performs full-node model parameter combination on its own locally stored model parameter template and the templates sent by all other docking nodes to generate a corresponding global model parameter template; the current leader node sends the global model parameter template back to all other database docking nodes; and every database docking node of the SL model learning network updates its locally stored model parameter template according to the global model parameter template.
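One full-node round (local training at every node, template collection at a leader, merge, broadcast) can be sketched as follows. The element-wise mean is shown as the combination rule because it is the common swarm/federated choice; the patent does not specify how the leader combines templates, so treat the merge step as an assumption.

```python
import numpy as np

def full_node_round(node_templates, local_updates, leader_id):
    """One round of full-node model training, sketched.

    node_templates: dict node_id -> parameter template (np.ndarray).
    local_updates:  dict node_id -> this round's local training delta
                    (a stand-in for actual local training).
    """
    # 1) every docking node trains locally and updates its own template
    trained = {nid: node_templates[nid] + local_updates[nid]
               for nid in node_templates}
    # 2) non-leader nodes send their latest templates to the leader,
    #    which merges them with its own into a global template
    global_template = np.mean(list(trained.values()), axis=0)
    # 3) the leader sends the global template back; every node adopts it
    return {nid: global_template.copy() for nid in trained}

templates = {"n1": np.array([0.0]), "n2": np.array([1.0]), "n3": np.array([2.0])}
updates   = {"n1": np.array([1.0]), "n2": np.array([1.0]), "n3": np.array([1.0])}
after = full_node_round(templates, updates, leader_id="n2")
```

After the round every node holds the same global template, which is exactly the consistency property the invention claims for institutions in different regions.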
Further, the step in which every database docking node performs one round of model training according to its corresponding multi-modal neuroimage database and locally stored model data packet, and updates the locally stored model parameter template through training, specifically includes:
each database docking node selects corresponding first, second and third training data sets from its multi-modal neuroimage database; extracts the locally stored model parameter template as the corresponding first model parameter template; performs one pass of local model training on the locally stored multi-modal classification model according to the first training data set, the first model parameter template and the locally stored model loss function, training objective function and loss convergence range, generating a corresponding second model parameter template; performs a second pass of local model training according to the second training data set, the second model parameter template and the same locally stored loss function, objective function and convergence range, generating a corresponding third model parameter template; performs a third pass of local model training according to the third training data set, the third model parameter template and the same locally stored loss function, objective function and convergence range, generating a corresponding fourth model parameter template; and uses the fourth model parameter template to update the locally stored model parameter template;
The first training data set comprises a plurality of first training data records, each consisting of a group of MRI, EEG and PET data whose image labels are all the normal classification label; the second training data set comprises a plurality of second training data records, each consisting of a group of MRI, EEG and PET data whose image labels are the same early Alzheimer's disease classification label, and the second training data set includes second training data records for every early Alzheimer's disease classification label; the third training data set comprises a plurality of third training data records, each consisting of a group of MRI, EEG and PET data with identical image labels, and it includes third training data records for the normal classification label as well as for every early Alzheimer's disease classification label; the numbers of training data records in the first, second and third training data sets follow a preset ratio A:B:C, where A, B and C are integers and B > C > A; the total number of training data records across the three sets is not less than a preset number threshold; and no training data record appears in more than one of the three sets.
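The three-set construction above (normal-only, early-AD-only, mixed; ratio A:B:C with B > C > A; mutually disjoint) can be sketched as a partition over record ids. This is a simplified sketch: per-class coverage inside sets 2 and 3 and the minimum-total threshold are omitted, and the scaling factor `k` is an assumption used to realize the ratio.

```python
def partition_training_sets(normal_ids, early_ids, A, B, C, k):
    """Draw three mutually disjoint training sets of sizes k*A, k*B, k*C.

    normal_ids: record ids labeled with the normal classification label.
    early_ids:  record ids labeled with early-AD classification labels.
    Per the text: set 1 holds normal records, set 2 early-AD records,
    set 3 a mix of both, at the preset ratio A:B:C with B > C > A.
    """
    assert B > C > A > 0, "ratio constraint from the text"
    first = normal_ids[:k * A]                        # normal-label records only
    second = early_ids[:k * B]                        # early-AD records only
    remainder = normal_ids[k * A:] + early_ids[k * B:]
    third = remainder[:k * C]                         # mixed labels, disjoint
    assert len(third) == k * C, "not enough records left for the third set"
    return first, second, third

first, second, third = partition_training_sets(
    list(range(100)), list(range(100, 300)), A=1, B=3, C=2, k=10)
```

Slicing disjoint prefixes of the pools is the simplest way to satisfy the text's requirement that no record is shared between the three sets.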
Further preferably, performing one pass of local model training on the locally stored multi-modal classification model specifically includes:
step 81, taking the first, second or third training data set processed in this pass as the corresponding current training data set, and the first, second or third model parameter template corresponding to it as the corresponding current model parameter template; the current training data set comprises a plurality of training data records, each consisting of MRI, EEG and PET data with identical image labels;
step 82, extracting a first training data record of the current training data set as a corresponding current training data record;
step 83, forming the corresponding current multi-modal image set from the MRI, EEG and PET data of the current training data record; initializing an all-zero vector, with the same length as the classification prediction vector output by the multi-modal classification model, as the corresponding label classification prediction vector, and setting the classification probability in that vector corresponding to the image label of the current training data record to 1; the label classification prediction vector holds a plurality of classification probabilities, each initialized to 0;
step 84, initializing the model parameters of the multi-modal classification model according to the current model parameter template; inputting the current multi-modal image set into the multi-modal classification model for model inference to generate a corresponding training classification prediction vector; and performing model loss estimation on the training classification prediction vector and the label classification prediction vector with the model loss function to generate a corresponding model estimated loss value;
step 85, identifying whether the model estimated loss value meets the loss convergence range; if yes, go to step 87; if not, go to step 86;
step 86, performing model parameter reverse modulation processing according to the current model parameter template and the label classification prediction vector based on the training objective function to generate a corresponding current modulation parameter template; taking the current modulation parameter template as a new current model parameter template; and returns to step 84 to continue training;
step 87, identifying whether the current training data record is the last training data record in the current training data set; if yes, go to step 88; if not, taking the next training data record in the current training data set as a new current training data record and returning to the step 83 to continue training;
And step 88, outputting the latest current model parameter template as the corresponding second, third or fourth model parameter template according to the corresponding relation between the current training data set and the first, second or third training data set.
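Steps 81 through 88 amount to a per-record training loop with an inner loop that repeats inference and parameter adjustment until the loss falls inside the convergence range. A toy sketch follows, with a softmax-regression stand-in for the multi-modal classification model; the "reverse modulation" of step 86 is shown as a plain gradient step, which is an assumption, since the patent does not give the adjustment rule.

```python
import numpy as np

def one_hot(label_idx, num_classes):
    # step 83: all-zero vector with the labeled class's probability set to 1
    v = np.zeros(num_classes)
    v[label_idx] = 1.0
    return v

def local_training_pass(records, params, eps=0.1, lr=0.5, max_iters=500):
    """records: list of (fused_feature_vector, label_index) pairs.
    params: current model parameter template (here, a weight matrix)."""
    for x, label in records:                   # steps 82/87: iterate the records
        target = one_hot(label, params.shape[0])
        for _ in range(max_iters):
            logits = params @ x                # step 84: model inference
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            loss = -np.log(probs[label])       # step 84: model loss estimation
            if loss <= eps:                    # step 85: convergence check
                break
            # step 86: parameter "reverse modulation" (here: a gradient step)
            params = params - lr * np.outer(probs - target, x)
    return params                              # step 88: latest parameter template

params0 = np.zeros((3, 2))                     # 3 prediction types, 2 toy features
data = [(np.array([1.0, 0.0]), 0), (np.array([0.0, 1.0]), 2)]
params1 = local_training_pass(data, params0)
```

Because the inner loop runs until the per-record loss enters the convergence range, the trained template classifies the records it was trained on correctly, which is the observable effect of steps 84 through 86.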
A second aspect of the embodiments of the present invention provides a system for implementing the data processing method of the first aspect, the system comprising: a database processing end, a multi-modal neuroimage database, a label annotation processing end, a node processing end and an SL model learning network;
the database processing end is used for creating a multi-modal neuroimage database in each region;
the label annotation processing end is used for performing image-label annotation on each multi-modal neuroimage database based on the plurality of unified single-modality image classification prediction models;
the node processing end is used for performing SL node docking between each annotated multi-modal neuroimage database and the SL model learning network to generate corresponding database docking nodes; performing M rounds of full-node model training on the multi-modal classification model based on the SL model learning network according to the preset maximum training count M; and obtaining the latest model parameter template of the multi-modal classification model through each database docking node when the M rounds of full-node model training are finished.
A third aspect of an embodiment of the present invention provides an electronic device, including: memory, processor, and transceiver;
the processor is configured to couple to the memory, and read and execute the instructions in the memory, so as to implement the method steps described in the first aspect;
the transceiver is coupled to the processor and is controlled by the processor to transmit and receive messages.
The embodiments of the invention provide a data processing method, system and electronic device for training a classification model based on a neuroimage database. A multi-modal neuroimage database storing several types of neuroimage data (MRI, EEG and PET data) is created in each region, and each region is provided with a plurality of unified single-modality image classification prediction models that perform early AD classification prediction from single-modality images. Based on these models, every item of neuroimage data in each multi-modal neuroimage database is given an image label (a normal-type label or one of several early-AD classification-type labels); all labeled databases are connected to an SL (Swarm Learning) model learning network to form a plurality of database docking nodes, and the SL model learning network performs multiple rounds of full-node model training, through all database docking nodes, on the multi-modal classification model used for early AD classification prediction, so as to obtain a model parameter template that satisfies the required training volume and training complexity. The invention thereby resolves the inconsistent model performance of institutions in different regions caused by differing regional distribution and data acquisition difficulty, reduces the training difficulty of the multi-modal classification model, improves its training sufficiency, and strengthens the auxiliary support the model provides to institutions across all regions.
Drawings
Fig. 1 is a schematic diagram of a data processing method based on training a classification model of a neuro-image database according to an embodiment of the present invention;
FIG. 2 is a system architecture diagram of a data processing system based on a training classification model of a neuro-image database according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
An embodiment of the present invention provides a data processing method based on a neural image database training classification model, as shown in fig. 1, which is a schematic diagram of a data processing method based on a neural image database training classification model according to an embodiment of the present invention, the method mainly includes the following steps:
Step 1, creating a multi-mode neural image database in each region;
the multi-modal neuroimaging database comprises a plurality of first data records; each first data record comprises a first image mode, first image data and a first image tag; the first image mode is an MRI mode, an EEG mode or a PET mode; the first image data corresponding to an MRI mode, an EEG mode or a PET mode is MRI data, EEG data or PET data, respectively; the first image tag is either a normal classification tag or one of a plurality of early Alzheimer's disease classification tags; all first image tags are initialized to null.
Here, the multimodal neural image database according to the embodiment of the present invention is used to store the neural image data acquired by the local model using mechanism.
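The record structure described above can be sketched as follows; the field and class names are illustrative assumptions, not part of the claimed method. Each record carries a modality, the modality-specific image data, and an image label that remains null until step 2 assigns it:

```python
# Minimal sketch (assumed names) of a first data record: modality, payload,
# and an image label initialized to null (None) at database creation time.
from dataclasses import dataclass
from typing import Any, Optional

MODALITIES = ("MRI", "EEG", "PET")

@dataclass
class FirstDataRecord:
    modality: str                       # one of "MRI", "EEG", "PET"
    image_data: Any                     # modality-specific payload (e.g. voxel array)
    image_label: Optional[str] = None   # "normal" or an early-AD class; null at creation

    def __post_init__(self):
        if self.modality not in MODALITIES:
            raise ValueError(f"unknown modality: {self.modality}")

# a regional multi-mode database is then simply a collection of such records
database = [
    FirstDataRecord("MRI", image_data=[[0.1, 0.2]]),
    FirstDataRecord("EEG", image_data=[0.3, 0.4]),
]
```

At creation time every `image_label` is `None`, matching the requirement that all first image tags be initialized to null before labeling.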
Step 2, image label labeling processing is carried out on each multi-mode neural image database based on a plurality of unified single-mode image classification prediction models;
wherein the plurality of unified single-mode image classification prediction models consists of a plurality of pre-trained, mature single-mode image classification prediction models, namely an MRI image classification prediction model, an EEG image classification prediction model and a PET image classification prediction model;
The method specifically comprises the following steps: traversing each first data record of the multi-mode neural image database; taking the first data record of the current traversal as the corresponding current data record, and extracting the first image mode and the first image data of the current data record as the corresponding current image mode and current image data; identifying the current image mode; if the current image mode is an MRI mode, performing classification prediction processing on the current image data based on the MRI image classification prediction model to obtain a corresponding first type tag; if the current image mode is an EEG mode, performing classification prediction processing on the current image data based on the EEG image classification prediction model to obtain a corresponding first type tag; if the current image mode is a PET mode, performing classification prediction processing on the current image data based on the PET image classification prediction model to obtain a corresponding first type tag; and setting the first image tag of the current data record to the corresponding first type tag.
Here, the classification prediction type ranges of the MRI image classification prediction model, the EEG image classification prediction model and the PET image classification prediction model according to the embodiment of the present invention are the same, each consisting of one normal type and a plurality of early Alzheimer's disease classification types that are set according to the classification rules of the specific application. In the embodiment of the invention, single-mode image label marking is performed on each piece of neural image data in the current step 2, so that multi-mode image data extraction can be performed based on like labels in subsequent steps to form the corresponding training data sets.
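The step-2 traversal amounts to a per-modality dispatch: each record's mode selects one of the three pre-trained predictors, and the predicted type becomes the record's image label. A hedged sketch, in which the three predictor functions are stand-ins for the actual single-mode models:

```python
# Stand-in predictors for the MRI / EEG / PET classification prediction models;
# the real models return one type from the shared classification range.
def mri_predict(data):
    return "early_AD_1"

def eeg_predict(data):
    return "normal"

def pet_predict(data):
    return "normal"

PREDICTORS = {"MRI": mri_predict, "EEG": eeg_predict, "PET": pet_predict}

def label_database(records):
    """Traverse the database, setting each record's image label in place."""
    for rec in records:
        predictor = PREDICTORS[rec["modality"]]        # identify current image mode
        rec["image_label"] = predictor(rec["image_data"])
    return records

records = [
    {"modality": "MRI", "image_data": None, "image_label": None},
    {"modality": "EEG", "image_data": None, "image_label": None},
]
label_database(records)
```

Because the three predictors share one classification range, records of different modalities can later be grouped by like labels regardless of which model produced the label.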
It should be noted that, the MRI image classification prediction model according to the embodiment of the present invention includes a first feature extraction module, a first classification module, and a first tag output module; the input end of the first feature extraction module is a model input end, and the output end of the first feature extraction module is connected with the input end of the first classification module; the output end of the first classification module is connected with the input end of the first label output module; the output end of the first label output module is a model output end; the first feature extraction module is used for carrying out full brain feature extraction processing on MRI data input by the model according to a preset full brain structure template to obtain a corresponding first feature map; the first classification module is used for carrying out classification prediction according to the first feature map to obtain a first classification prediction vector with a plurality of classification probabilities; the first label output module is used for outputting a prediction type corresponding to the maximum classification probability in the first classification prediction vector as a corresponding type label; the prediction type of the first classified prediction vector includes a normal type and a plurality of early Alzheimer's disease classification types. 
The whole brain structure template is a preset 3D whole-brain functional structure feature template, such as the conventional Anatomical Automatic Labeling (AAL) template, or a customized template obtained in advance by acquiring and combining the brain functional structure features of designated normal people across the whole region; the first feature extraction module may be implemented based on a 2D convolutional neural network (2D-Convolutional Neural Networks, 2D-CNN) combined with a Long Short-Term Memory (LSTM) network, or based on a 3D convolutional neural network (3D-Convolutional Neural Networks, 3D-CNN); the first classification module is implemented based on a multi-layer perceptron (Multilayer Perceptron, MLP) network.
It should be noted that the EEG image classification prediction model according to the embodiment of the invention includes a second feature extraction module, a second classification module and a second tag output module; the input end of the second feature extraction module is the model input end, and the output end of the second feature extraction module is connected with the input end of the second classification module; the output end of the second classification module is connected with the input end of the second tag output module; the output end of the second tag output module is the model output end; the second feature extraction module is used for extracting the whole-brain features of the EEG data input into the model to obtain a corresponding second feature map; the second classification module is used for performing classification prediction according to the second feature map to obtain a second classification prediction vector with a plurality of classification probabilities; the second tag output module is used for outputting the prediction type corresponding to the maximum classification probability in the second classification prediction vector as the corresponding type tag; the prediction types of the second classification prediction vector include a normal type and a plurality of early Alzheimer's disease classification types. The second feature extraction module may be implemented based on a graph attention network (Graph Attention Network, GAT); the second classification module is implemented based on an MLP network.
It should be noted that the PET image classification prediction model according to the embodiment of the present invention includes a third feature extraction module, a third classification module and a third tag output module; the input end of the third feature extraction module is the model input end, and the output end of the third feature extraction module is connected with the input end of the third classification module; the output end of the third classification module is connected with the input end of the third tag output module; the output end of the third tag output module is the model output end; the third feature extraction module is used for performing whole-brain feature extraction processing on the PET data input into the model to obtain a corresponding third feature map; the third classification module is used for performing classification prediction according to the third feature map to obtain a third classification prediction vector with a plurality of classification probabilities; the third tag output module is used for outputting the prediction type corresponding to the maximum classification probability in the third classification prediction vector as the corresponding type tag; the prediction types of the third classification prediction vector include a normal type and a plurality of early Alzheimer's disease classification types. The third feature extraction module may be implemented based on a Transformer neural network consisting of an encoder and a decoder; the third classification module is implemented based on an MLP network.
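All three tag output modules apply the same rule: output the prediction type whose classification probability in the prediction vector is largest. A minimal sketch, with an assumed class ordering (index 0 for the normal type, then the early-AD subtypes):

```python
# Argmax tag output shared by the MRI, EEG and PET models; the class list
# and its ordering are illustrative assumptions.
CLASS_NAMES = ["normal", "early_AD_1", "early_AD_2", "early_AD_3"]

def label_from_prediction(prob_vector):
    """Return the class name at the argmax of a classification prediction vector."""
    best_index = max(range(len(prob_vector)), key=lambda i: prob_vector[i])
    return CLASS_NAMES[best_index]

label = label_from_prediction([0.1, 0.7, 0.15, 0.05])  # → "early_AD_1"
```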
Step 3, carrying out SL node docking processing on each multi-mode neural image database subjected to image label marking processing and an SL model learning network to generate corresponding database docking nodes;
the SL (Swarm Learning) model learning network is a decentralised model learning network constructed based on an SL blockchain network; the SL model learning network comprises a plurality of SL nodes; each SL node is connected with all other SL nodes in pairs; the SL model learning network presets a model intelligent contract which is used for providing a model data packet for SL nodes signing the contract; the model data packet consists of a multi-mode classification model, a model loss function, a training objective function, a loss convergence range and a model parameter template;
the method specifically comprises the following steps: selecting one SL node which is not docked with any multi-modal neural image database in the SL model learning network as a database docking node corresponding to the current multi-modal neural image database; and signing the model intelligent contract by the database docking node to obtain a corresponding model data packet which is stored locally.
Here, in the embodiment of the present invention, an SL model learning network is pre-constructed; it is a decentralized model learning network constructed based on an SL blockchain network, and the network principle of the SL blockchain network is implemented with reference to the technical paper Demystifying Swarm Learning: A New Paradigm of Blockchain-based Decentralized Federated Learning, with the blockchain platform used herein defaulting to the Ethereum blockchain platform. Specifically, the embodiment of the invention constructs a network similar to that of the paper as the SL model learning network; from the standard network nodes given in the paper, it can be known that the SL model learning network should comprise a plurality of SL nodes, a plurality of SN (Swarm Network) nodes, SWCI (Swarm Learning Command Interface) nodes, SPIFFE SPIRE server nodes and LS (License Server) nodes; each SL node is connected with all other SL nodes in pairs; each SN node is respectively connected with one or more SL nodes, and each SN node is connected with all other SN nodes in pairs; the SWCI nodes are respectively connected with the SN nodes; the SPIFFE SPIRE server nodes are respectively connected with each SL node and each SN node; the LS node is connected with a plurality of SL nodes; the SPIFFE SPIRE server node is configured to manage the verifiable identity documents (SPIFFE Verifiable Identity Document, SVID) of each SL node and each SN node; the LS node is used for SL node license management; the SN node is used for SL node registration management, SL node classification model training state monitoring processing and SL node state global synchronization processing; the SWCI node monitors all the SL nodes through all the SN nodes and performs full-node state monitoring and full-node control management; each SL node used for joint training acquires a license from the LS node, acquires an SVID from the SPIFFE SPIRE server node, registers with the SN node before starting its local model training processing flow, and then starts the local model training processing flow.
It should be noted that, based on the implementation of the above paper scheme, the SL model learning network according to the embodiment of the present invention is configured with a model intelligent contract corresponding to the multi-mode classification model to be trained, where the model intelligent contract consists of the specific model to be trained, namely the multi-mode classification model, together with a model loss function, a training objective function, a loss convergence range and a model parameter template; in the embodiment of the invention, besides completing the standard flow required by the paper before starting the local model training processing flow (license acquisition, SVID acquisition and SN node registration), each database docking node is required to perform a contract signing operation on the model intelligent contract so as to acquire and locally store a model data packet comprising the multi-mode classification model, the model loss function, the training objective function, the loss convergence range and the model parameter template;
it should be further noted that, on the basis of the above paper scheme, the SL model learning network according to the embodiment of the present invention is further extended with a scheme for combining the local training processing of all the database docking nodes and synchronizing the global model parameter template based on the full-node training result of each round, which is described in detail in the subsequent step 4.
Step 4, performing M times of full-node model training on the multi-mode classification model according to a preset maximum training number M based on the SL model learning network;
the multi-modal classification model comprises a multi-modal feature extraction module, a multi-modal feature fusion module and a fusion feature classification module; the multi-modal feature extraction module consists of three parallel feature extraction units, namely a first feature extraction unit, a second feature extraction unit and a third feature extraction unit; the input ends of the first, second and third feature extraction units are the model input ends, and their output ends are connected with the input end of the multi-modal feature fusion module; the output end of the multi-modal feature fusion module is connected with the input end of the fusion feature classification module; the output end of the fusion feature classification module is the model output end; the first, second and third feature extraction units are respectively used for performing whole-brain feature extraction processing on the input MRI data, EEG data and PET data to obtain three corresponding whole-brain feature maps; the multi-modal feature fusion module is used for performing multi-modal feature fusion processing on the three whole-brain feature maps to obtain a corresponding first fusion feature map; the fusion feature classification module is used for performing classification prediction according to the first fusion feature map to obtain a fourth classification prediction vector with a plurality of classification probabilities; each classification probability of the fourth classification prediction vector corresponds to a prediction type, and the prediction types corresponding to the fourth classification prediction vector include a normal type and a plurality of early Alzheimer's disease classification types. Further, the first feature extraction unit may be implemented based on a 3D-CNN network, the second feature extraction unit may be implemented based on a GAT network, the third feature extraction unit may be implemented based on a Transformer neural network, the multi-modal feature fusion module may be implemented based on a feature fusion neural network using a self-attention mechanism, and the fusion feature classification module may be implemented based on an MLP network;
Here, the maximum training number M is a preset integer greater than zero; the M times of full-node model training consist of M independent full-node model training passes executed in sequence;
each time the full-node model is trained, the method specifically comprises the following steps:
step 41, performing a round of model training by all database docking nodes according to the corresponding multi-modal neural image database and the locally stored model data packet, and updating the locally stored model parameter template through training;
the method specifically comprises the following steps: step 411, selecting, by each database docking node, a corresponding first, second, and third training data set from the corresponding multimodal neural image database; extracting a locally stored model parameter template to serve as a corresponding first model parameter template; performing one-time local model training processing on the locally stored multi-model classification model according to the first training data set, the first model parameter template, the locally stored model loss function, the training objective function and the loss convergence range to generate a corresponding second model parameter template; performing one-time local model training processing on the locally stored multi-model classification model according to the second training data set, the second model parameter template, the locally stored model loss function, the training objective function and the loss convergence range to generate a corresponding third model parameter template; performing one-time local model training processing on the locally stored multi-model classification model according to the third training data set, the third model parameter template, the locally stored model loss function, the training objective function and the loss convergence range to generate a corresponding fourth model parameter template; and using a fourth model parameter template to update the locally stored model parameter template;
The first training data set comprises a plurality of first training data records, each of which consists of a group of MRI data, EEG data and PET data whose image labels are all the normal classification label; the second training data set comprises a plurality of second training data records, each of which consists of a group of MRI data, EEG data and PET data whose image labels are the same early Alzheimer's disease classification label, the second training data set including second training data records corresponding to each of the early Alzheimer's disease classification labels; the third training data set comprises a plurality of third training data records, each of which consists of a group of MRI data, EEG data and PET data with the same image label, the third training data set including third training data records corresponding to the normal classification label and to each of the early Alzheimer's disease classification labels; the ratio of training data records among the first, second and third training data sets is a preset ratio A:B:C, where A, B and C are integers and B>C>A; the total number of training data records of the first, second and third training data sets is not less than a preset number threshold; and no training data record appears in more than one of the three training data sets. Here, because the number of data records of the multi-modal neural image databases in the respective regions may differ, the preset number threshold corresponding to each database docking node may be customized based on the actual situation of the corresponding multi-modal neural image database, and likewise the preset ratio A:B:C corresponding to each database docking node may be customized; however, to further improve the training effect, the embodiment of the present invention suggests that the preset ratios A:B:C corresponding to the database docking nodes be kept as consistent as possible, and that, when a preset ratio A, B, C must be adjusted, the adjustment be made minimally with reference to a predetermined desired ratio A*:B*:C*, where A*, B* and C* are integers and B*>C*>A*.
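A sketch of how a database docking node might size its three training data sets from the preset ratio A:B:C with the constraint B > C > A; the function name and the floor-division rounding policy are illustrative assumptions:

```python
# Split a total record budget into first/second/third training set sizes
# according to the preset ratio A:B:C (B > C > A, per the constraint above).
def training_set_sizes(total, a, b, c):
    """Return (first, second, third) set sizes in the ratio a:b:c."""
    if not (b > c > a > 0):
        raise ValueError("ratio must satisfy B > C > A > 0")
    unit = total // (a + b + c)         # records per ratio unit (floor division)
    return a * unit, b * unit, c * unit

sizes = training_set_sizes(total=600, a=1, b=3, c=2)  # → (100, 300, 200)
```

With A:B:C = 1:3:2, the early-AD per-class set (B) dominates, the mixed-label set (C) is next, and the all-normal set (A) is smallest, as the inequality requires.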
The processing mechanisms of the three local model training processes that generate the second, third and fourth model parameter templates in the embodiment of the present invention are similar; that is, each performs one local model training process on the locally stored multi-mode classification model, which specifically comprises:
step A1, taking the first, second or third training data set processed this time as the corresponding current training data set; and taking the first, second or third model parameter template corresponding to the current training data set as the corresponding current model parameter template;
the current training data set comprises a plurality of training data records consisting of MRI data, EEG data and PET data with the same image label;
step A2, extracting a first training data record of the current training data set to serve as a corresponding current training data record;
step A3, forming a corresponding current multi-mode image set from the MRI data, EEG data and PET data of the current training data record; initializing an all-zero vector as the corresponding label classification prediction vector according to the vector length of the classification prediction vector output by the multi-mode classification model, and setting the classification probability corresponding to the image label of the current training data record in the label classification prediction vector to 1; the label classification prediction vector comprises a plurality of classification probabilities, each of which is initialized to 0 when the label classification prediction vector is initialized;
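The label classification prediction vector of step A3 is a one-hot vector: all classification probabilities start at 0, and the position matching the record's image label is set to 1. A minimal sketch with an assumed class ordering:

```python
# One-hot label vector construction per step A3; the class list is an
# illustrative assumption matching the model's output vector length.
CLASS_NAMES = ["normal", "early_AD_1", "early_AD_2", "early_AD_3"]

def one_hot_label(image_label):
    vec = [0.0] * len(CLASS_NAMES)              # every probability initialized to 0
    vec[CLASS_NAMES.index(image_label)] = 1.0   # label's probability set to 1
    return vec

vec = one_hot_label("early_AD_2")  # → [0.0, 0.0, 1.0, 0.0]
```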
Step A4, initializing model parameters of the multi-mode classification model according to the current model parameter template; inputting the current multi-modal image set into a multi-modal classification model for model reasoning to generate a corresponding training classification prediction vector; performing model loss estimation processing according to the training classification prediction vector and the label classification prediction vector based on the model loss function to generate a corresponding model estimation loss value;
step A5, identifying whether the model estimated loss value meets the loss convergence range; if yes, go to step A7; if not, turning to the step A6;
step A6, performing model parameter reverse modulation processing according to the current model parameter template and the label classification prediction vector based on the training objective function to generate a corresponding current modulation parameter template; taking the current modulation parameter template as a new current model parameter template; and returning to the step A4 to continue training;
step A7, identifying whether the current training data record is the last training data record in the current training data set; if yes, go to step A8; if not, taking the next training data record in the current training data set as a new current training data record and returning to the step A3 to continue training;
step A8, outputting the latest current model parameter template as the corresponding second, third or fourth model parameter template according to whether the current training data set is the first, second or third training data set;
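Steps A2 through A8 can be condensed into the following sketch: iterate over the training records, and for each record repeat inference, loss estimation and parameter modulation until the estimated loss falls inside the convergence range. The toy linear model, squared-error loss and update rule are illustrative stand-ins, not the patented networks:

```python
# Condensed sketch of the per-record train-until-convergence loop (steps A2-A8).
LOSS_CONVERGENCE = 1e-3   # stand-in for the loss convergence range

def toy_infer(params, sample):
    """Stand-in model inference (step A4)."""
    return [p * sample for p in params]

def toy_loss(pred, target):
    """Stand-in model loss estimation (step A4): squared error."""
    return sum((p - t) ** 2 for p, t in zip(pred, target))

def toy_modulate(params, sample, target, lr=0.5):
    """Stand-in parameter reverse modulation (step A6): one gradient step."""
    pred = toy_infer(params, sample)
    return [p - lr * 2 * (y - t) * sample for p, y, t in zip(params, pred, target)]

def local_training_pass(params, dataset):
    for sample, target in dataset:                  # steps A2 / A7: record traversal
        while True:
            loss = toy_loss(toy_infer(params, sample), target)   # steps A4-A5
            if loss <= LOSS_CONVERGENCE:            # loss inside convergence range
                break
            params = toy_modulate(params, sample, target)        # step A6
    return params                                   # step A8: latest template

params = local_training_pass([0.0, 0.0], [(1.0, [1.0, 0.0]), (1.0, [1.0, 0.0])])
```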
step 42, selecting one of all the database docking nodes as the corresponding current leader node;
here, it can be known from the foregoing that, in the SL model learning network according to the embodiment of the present invention, the SWCI node may perform full-node state monitoring on all the SL nodes through all the SN nodes; that is, the SWCI node may obtain all the state information of all the SL nodes, and may therefore obtain the state information of all the database docking nodes, including the single-round model training progress state information, the total computing power resource state information, the computing power resource usage state information and the remaining computing power resource state information of each database docking node. Therefore, the embodiment of the invention uses the SWCI node to select one node from all the database docking nodes as the corresponding current leader node, which specifically comprises: the SWCI node periodically acquires the single-round model training progress state information, the total computing power resource state information and the computing power resource usage state information of all the database docking nodes, and, when the single-round model training progress state information of all the database docking nodes is in the training-ended state, selects the database docking node with the most remaining computing power resources from all the database docking nodes as the corresponding current leader node; the selected current leader node performs the global model parameter template merging processing in the subsequent steps;
Step 43, causing all database docking nodes other than the current leader node to send their latest model parameter templates to the current leader node;
here, it can be known from the foregoing that, in the SL model learning network according to the embodiment of the present invention, the SL nodes are connected in pairs, and the SWCI node may perform full-node control over all the SL nodes through all the SN nodes; that is, the SWCI node may send control instructions to all the SL nodes, and may therefore also send control instructions to all the database docking nodes. Based on these characteristics, the specific operation by which the embodiment of the invention causes all the database docking nodes other than the current leader node to send their latest model parameter templates to the current leader node is as follows: the SWCI node sends a model parameter template sending control instruction carrying the leader node address (or identifier) to all database docking nodes except the current leader node, and each of these database docking nodes, after receiving the model parameter template sending control instruction, sends its locally stored latest model parameter template to the current leader node corresponding to the leader node address (or identifier);
Step 44, the current leader node performs full-node model parameter combination processing on the model parameter templates stored locally and the model parameter templates sent by all other database docking nodes to generate corresponding global model parameter templates;
the model parameter template of the embodiment of the invention is composed of a plurality of fixed parameter objects, each parameter object having an object name and a parameter value; the current leader node performs the full-node model parameter merging processing by default using a mean value method, namely: the current leader node forms a corresponding model parameter template set from the locally stored model parameter template and the model parameter templates sent by all other database docking nodes; a template whose parameter values are all empty is constructed based on the template format of the model parameter template as the initialized global model parameter template; each parameter object on the initialized global model parameter template is traversed; the parameter object of the current traversal is taken as the corresponding current parameter object, the object name of the current parameter object is taken as the corresponding current object name, the parameter values of all parameter objects whose object names match the current object name in the model parameter template set are extracted to form a corresponding current parameter value sequence, the mean of the current parameter value sequence is calculated to obtain a corresponding average parameter value, and the parameter value of the current parameter object on the global model parameter template is set to the average parameter value; when the traversal ends, the global model parameter template with the parameter values of all parameter objects fully set is output as the processing result of the current full-node model parameter merging processing;
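The default mean-value merge in step 44 can be sketched as follows; representing a template as a flat name-to-value dictionary is an illustrative assumption:

```python
# Step 44 sketch: for each named parameter object, the leader averages the
# values collected from every docking node's template.
from statistics import fmean

def merge_templates(templates):
    """Average same-named parameter values across all node templates."""
    names = templates[0].keys()
    return {name: fmean(t[name] for t in templates) for name in names}

merged = merge_templates([
    {"w1": 0.25, "w2": 1.0},   # leader's locally stored template
    {"w1": 0.75, "w2": 3.0},   # template received from another docking node
])  # → {"w1": 0.5, "w2": 2.0}
```

The merged template then replaces every node's local template in steps 45 and 46, so all docking nodes enter the next full-node training round with identical parameters.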
Step 45, the current leader node sends the global model parameter template back to all other database docking nodes;
here, it can be known from the foregoing that, in the SL model learning network according to the embodiment of the present invention, after the current leader node obtains the global model parameter template, the current leader node returns the global model parameter template to all other database docking nodes through connection channels with other database docking nodes;
step 46, making all database docking nodes of the SL model learning network perform template updating processing on the locally stored model parameter templates according to the global model parameter templates;
here, the embodiment of the invention provides that all the database docking nodes need to update the locally stored model parameter templates according to the global model parameter templates immediately after receiving the global model parameter templates, i.e. use the global model parameter templates to replace the parameter values of the locally stored model parameter templates.
Here, it can be known from the above step 4 that in the M full-node model training processes, global model parameter template synchronization is performed on all the database docking nodes once every time full-node model training is performed, and the last synchronized global model parameter template is used as the initial model parameter template of the current training in the next full-node model training; and when the M times of full-node model training are finished, the model parameter templates of the databases local to the butt joint nodes are the latest global model parameter templates obtained after the M rounds of global iteration.
Step 5, obtaining, through each database docking node, the latest model parameter template of the multi-mode classification model when the M times of full-node model training end.
Here, at the end of the M times of full-node model training, the model using mechanism in each region can obtain the latest global model parameter template of the multi-mode classification model, that is, the latest model parameter template of the multi-mode classification model, from the local storage of the corresponding database docking node. Each model using mechanism configures the parameters of the multi-mode classification model based on the latest model parameter template, so that the multi-mode classification model can be used to perform the corresponding prediction classification processing. Because the model parameters used by all model using mechanisms are consistent, the consistency of the results of the prediction classification processing is naturally ensured.
Fig. 2 is a system configuration diagram of a data processing system based on a neural image database training classification model according to the second embodiment of the present invention. The system may be a system, terminal device or server that implements the foregoing first method embodiment, or an apparatus that enables such a system, terminal device or server to implement that embodiment (for example, an apparatus or a chip system of the foregoing terminal device or server). As shown in fig. 2, the system includes: a database processing end 201, a multi-modal neuroimage database 202, a label labeling processing end 203, a node processing end 204 and a SL model learning network 205.
The database processing end 201 is configured to create a multi-modal neural image database 202 in each region.
The labeling processing end 203 is configured to perform image labeling processing on each of the multi-modal neuro-image databases 202 based on a plurality of unified single-modal image classification prediction models.
The node processing end 204 is configured to perform SL node docking processing between each multimodal neural image database 202 that has completed the image label labeling processing and the SL model learning network 205 to generate corresponding database docking nodes; to perform M times of full-node model training on the multi-mode classification model according to a preset maximum training count M based on the SL model learning network 205; and, when the M times of full-node model training are finished, to obtain the latest model parameter templates of the multi-mode classification model through each database docking node.
The data processing system based on the neural image database training classification model provided in the second embodiment of the present invention may perform the method steps in the first embodiment of the method, and its implementation principle and technical effects are similar and will not be described herein.
It should be noted that the division into processing ends, database and network in the above system is merely a division by logical function; in actual implementation they may be fully or partially integrated into one physical entity, or may be physically separated. The processing ends, database and network may all be implemented in the form of software invoked by a processing element, all in hardware, or partly as software invoked by a processing element and partly as hardware. For example, the database processing end may be a separately established processing element, or may be integrated in a chip of the apparatus, device or server, or may be stored in the memory of the apparatus, device or server in the form of program code that is invoked by a processing element of the apparatus, device or server to implement the function of that processing end. The other processing ends, the database and the network are implemented similarly. In addition, all or part of the processing ends, the database and the network may be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In the implementation process, each step of the above method, or each processing step of the processing ends, database and network of the above system, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the processing ends, database and network of the above system may be one or more integrated circuits configured to implement the foregoing methods, such as: one or more application specific integrated circuits (Application Specific Integrated Circuit, ASIC), or one or more digital signal processors (Digital Signal Processor, DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, FPGA), etc. For another example, when one of the above units is implemented in the form of program code scheduled by a processing element, the processing element may be a general purpose processor, such as a central processing unit (Central Processing Unit, CPU) or another processor that can invoke program code. For another example, the modules may be integrated together and implemented in the form of a system-on-a-chip (System-on-a-Chip, SOC).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the foregoing method embodiments are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (Digital Subscriber Line, DSL)) or wireless (e.g., infrared, radio, Bluetooth, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by the computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid state disk (SSD)), etc.
Fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention. The electronic device may be the aforementioned terminal device or server, or may be a terminal device or server connected to the aforementioned terminal device or server for implementing the method of the embodiment of the present invention. As shown in fig. 3, the electronic device may include: a processor 301 (e.g., a CPU), a memory 302, a transceiver 303; the transceiver 303 is coupled to the processor 301, and the processor 301 controls the transceiving actions of the transceiver 303. The memory 302 may store various instructions for performing the various processing functions and implementing the processing steps described in the method embodiments previously described. Preferably, the electronic device according to the embodiment of the present invention further includes: a power supply 304, a system bus 305, and a communication port 306. The system bus 305 is used to implement communication connections between the elements. The communication port 306 is used for connection communication between the electronic device and other peripheral devices.
The system bus 305 referred to in fig. 3 may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus, or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The system bus may be classified into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus. The communication interface is used to enable communication between the database access apparatus and other devices (e.g., clients, read-write libraries, and read-only libraries). The memory may comprise random access memory (Random Access Memory, RAM) and may also include non-volatile memory (Non-Volatile Memory), such as at least one disk memory.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), a graphics processor (Graphics Processing Unit, GPU), etc.; it may also be a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The embodiment of the invention provides a data processing method, system and electronic device based on a neural image database training classification model. A multi-mode neural image database storing various types of neural image data (MRI data, EEG data and PET data) is created in each region, and a plurality of unified single-mode image classification prediction models for early AD classification prediction based on single-mode images are provided for each region. Based on the plurality of single-mode image classification prediction models, image label labeling (normal type labels and a plurality of early AD classification type labels) is performed on each neural image data record of each multi-mode neural image database. All multi-mode neural image databases with completed image labeling are then connected to a SL (Swarm Learning) model learning network to form a plurality of database docking nodes, and the SL model learning network performs multiple rounds of full-node model training on the multi-mode classification model used for early AD classification prediction through all the database docking nodes, so as to obtain a model parameter template satisfying the preset maximum training count and training sufficiency. The invention solves the problem that model using mechanisms in different regions obtain inconsistent model results due to factors such as differing regional distribution and differing data acquisition difficulty, reduces the training difficulty of the multi-mode classification model, improves its training sufficiency, and thereby strengthens the auxiliary support the multi-mode classification model provides to the mechanisms in all regions.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of function in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing description of the embodiments illustrates the general principles of the invention and is not intended to limit the invention to the particular embodiments disclosed; any modifications, equivalent substitutions, improvements, etc. made within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (10)

1. A data processing method for training a classification model based on a neuro-image database, the method comprising:
creating a multi-modal neuroimage database in each region;
performing image label labeling processing on each multi-mode neural image database based on a plurality of unified single-mode image classification prediction models;
carrying out SL node docking processing on each multi-modal neural image database subjected to image label labeling processing and an SL model learning network to generate corresponding database docking nodes;
carrying out M times of full-node model training on the multi-mode classification model according to a preset maximum training frequency M based on the SL model learning network;
and obtaining the latest model parameter templates of the multi-mode classification model through the database docking nodes when the M times of full-node model training is finished.
2. The method for processing data based on training classification models of neuro-imaging database as claimed in claim 1,
the multi-modal neuroimaging database includes a plurality of first data records; the first data record comprises a first image mode, first image data and a first image tag; the first image mode includes an MRI mode, an EEG mode and a PET mode; when the first image mode is the MRI mode, the EEG mode or the PET mode, the first image data is the corresponding MRI data, EEG data or PET data; the first image tag comprises a normal classification tag and a plurality of classes of early Alzheimer disease classification tags; all the first image tags are initialized to be empty;
The plurality of unified single-mode image classification prediction models consist of a plurality of pre-trained, mature single-mode image classification prediction models, which are respectively an MRI image classification prediction model, an EEG image classification prediction model and a PET image classification prediction model;
the SL model learning network is a decentralized model learning network constructed on the basis of an SL blockchain network; the SL model learning network comprises a plurality of SL nodes; each SL node is connected pairwise with all the other SL nodes;
the SL model learning network presets a model intelligent contract which is used for providing a model data packet for the SL node signing the contract; the model data packet is composed of the multi-mode classification model, a model loss function, a training objective function, a loss convergence range and a model parameter template.
3. The method for processing data based on training classification models of neuro-image database as claimed in claim 2,
the MRI image classification prediction model comprises a first feature extraction module, a first classification module and a first label output module; the input end of the first feature extraction module is a model input end, and the output end of the first feature extraction module is connected with the input end of the first classification module; the output end of the first classification module is connected with the input end of the first label output module; the output end of the first label output module is a model output end; the first feature extraction module is used for carrying out full brain feature extraction processing on MRI data input by the model according to a preset full brain structure template to obtain a corresponding first feature map; the first classification module is used for carrying out classification prediction according to the first feature map to obtain a first classification prediction vector with a plurality of classification probabilities; the first tag output module is used for outputting a prediction type corresponding to the maximum classification probability in the first classification prediction vector as a corresponding type tag; the prediction type of the first classification prediction vector comprises a normal type and a plurality of early Alzheimer disease classification types;
The EEG image classification prediction model comprises a second feature extraction module, a second classification module and a second label output module; the input end of the second feature extraction module is a model input end, and the output end of the second feature extraction module is connected with the input end of the second classification module; the output end of the second classification module is connected with the input end of the second label output module; the output end of the second label output module is a model output end; the second feature extraction module is used for extracting the whole brain features of the EEG data input by the model to obtain a corresponding second feature map; the second classification module is used for carrying out classification prediction according to the second feature map to obtain a second classification prediction vector with a plurality of classification probabilities; the second tag output module is configured to output a prediction type corresponding to a maximum classification probability in the second classification prediction vector as a corresponding type tag; the prediction type of the second classification prediction vector comprises a normal type and a plurality of early Alzheimer disease classification types;
the PET image classification prediction model comprises a third feature extraction module, a third classification module and a third label output module; the input end of the third feature extraction module is a model input end, and the output end of the third feature extraction module is connected with the input end of the third classification module; the output end of the third classification module is connected with the input end of the third tag output module; the output end of the third tag output module is a model output end; the third feature extraction module is used for carrying out full brain feature extraction processing on PET data input by the model to obtain a corresponding third feature map; the third classification module is used for carrying out classification prediction according to the third feature map to obtain a third classification prediction vector with a plurality of classification probabilities; the third tag output module is configured to output, as a corresponding type tag, a prediction type corresponding to a maximum classification probability in the third classification prediction vector; the prediction type of the third classification prediction vector comprises a normal type and a plurality of early Alzheimer disease classification types;
The multi-modal classification model comprises a multi-modal feature extraction module, a multi-modal feature fusion module and a fusion feature classification module; the multi-modal feature extraction module consists of three parallel feature extraction units, namely a first feature extraction unit, a second feature extraction unit and a third feature extraction unit, wherein the input ends of the first feature extraction unit, the second feature extraction unit and the third feature extraction unit are model input ends, and the output ends of the first feature extraction unit, the second feature extraction unit and the third feature extraction unit are connected with the input end of the multi-modal feature fusion module; the output end of the multi-mode feature fusion module is connected with the input end of the fusion feature classification module; the output end of the fusion characteristic classification module is a model output end; the first, second and third feature extraction units are respectively used for performing full brain feature extraction processing on the input MRI data, EEG data and PET data to obtain three corresponding full brain feature graphs; the multi-modal feature fusion module is used for carrying out multi-modal feature fusion processing on the three brain feature images to obtain a corresponding first fusion feature image; the fusion feature classification module is used for carrying out classification prediction according to the first fusion feature map to obtain a fourth classification prediction vector with a plurality of classification probabilities; each classification probability of the fourth classification prediction vector corresponds to a prediction type, and the prediction type corresponding to the fourth classification prediction vector comprises a normal type and a plurality of early Alzheimer disease classification types.
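As an illustration of the three-branch structure in this claim, the following minimal forward pass mirrors the described data flow: three parallel feature extraction units, concatenation-style multi-modal feature fusion, and a softmax classification head. The layer sizes, the linear-plus-ReLU extraction units and the concatenation fusion rule are stand-in assumptions for sketching only, not the patent's actual networks.

```python
import math
import random

def relu_linear(x, w):
    # Stand-in for a feature extraction unit: linear map followed by ReLU.
    return [max(0.0, sum(xi * wi for xi, wi in zip(x, col))) for col in w]

def linear(x, w):
    return [sum(xi * wi for xi, wi in zip(x, col)) for col in w]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def multimodal_forward(mri, eeg, pet, params):
    # Three parallel feature extraction units (first, second, third).
    f1 = relu_linear(mri, params["w_mri"])
    f2 = relu_linear(eeg, params["w_eeg"])
    f3 = relu_linear(pet, params["w_pet"])
    # Multi-modal feature fusion (here: simple concatenation).
    fused = f1 + f2 + f3
    # Fusion-feature classification over one normal type + early-AD types.
    return softmax(linear(fused, params["w_cls"]))

random.seed(0)
def rand_mat(n_in, n_out):
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

params = {
    "w_mri": rand_mat(16, 8),   # illustrative sizes only
    "w_eeg": rand_mat(4, 8),
    "w_pet": rand_mat(16, 8),
    "w_cls": rand_mat(24, 4),   # 4 prediction types: normal + 3 early-AD
}
probs = multimodal_forward([0.1] * 16, [0.2] * 4, [0.3] * 16, params)
```

The fourth classification prediction vector of the claim corresponds to `probs`: one classification probability per prediction type, from which the maximum-probability type would be taken as the label.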
4. The data processing method based on the neural image database training classification model according to claim 3, wherein the image label labeling process is performed on each of the multi-modal neural image databases based on a plurality of unified single-mode image classification prediction models, specifically comprising:
traversing each first data record of the multi-modal neuro-image database; the first data record of the current traversal is used as a corresponding current data record, and the first image mode and the first image data of the current data record are extracted to be used as corresponding current image mode and current image data; identifying the current image mode; if the current image mode is an MRI mode, performing classification prediction processing on the current image data based on the MRI image classification prediction model to obtain a corresponding first type tag, if the current image mode is an EEG mode, performing classification prediction processing on the current image data based on the EEG image classification prediction model to obtain a corresponding first type tag, and if the current image mode is a PET mode, performing classification prediction processing on the current image data based on the PET image classification prediction model to obtain a corresponding first type tag; and setting the first image tag of the current data record as the corresponding first type tag.
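The traversal in claim 4 amounts to dispatching each data record to the single-mode prediction model that matches its image mode and writing the arg-max type label back into the record. A minimal sketch follows; the record field names and the `types` list are hypothetical illustrations, not the patent's identifiers.

```python
def label_record(record, predictors, types):
    """Dispatch one data record to the prediction model for its modality
    and set its image label to the highest-probability prediction type."""
    modality = record["modality"]  # "MRI", "EEG" or "PET"
    if modality not in predictors:
        raise ValueError("unknown image modality: %s" % modality)
    probs = predictors[modality](record["data"])
    # Output the prediction type with the maximum classification probability.
    record["label"] = types[probs.index(max(probs))]
    return record

# Stub predictors standing in for the pre-trained MRI/EEG/PET models.
types = ["normal", "early_AD_1", "early_AD_2"]
predictors = {
    "MRI": lambda data: [0.1, 0.7, 0.2],
    "EEG": lambda data: [0.8, 0.1, 0.1],
    "PET": lambda data: [0.2, 0.2, 0.6],
}
record = label_record({"modality": "EEG", "data": None, "label": None},
                      predictors, types)
```

Running the labeling over a database would simply call `label_record` once per first data record during the traversal.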
5. The data processing method based on neural image database training classification model according to claim 2, wherein the performing SL node docking processing on each of the multi-modal neural image databases subjected to image tag labeling processing and the SL model learning network to generate a corresponding database docking node specifically comprises:
optionally selecting one SL node which is not docked with any multi-modal neural image database in the SL model learning network as the database docking node corresponding to the current multi-modal neural image database; and signing the model intelligent contract by the database docking node to obtain the corresponding model data packet which is stored locally.
6. The data processing method based on training a classification model of a neuroimage database according to claim 3, wherein the training of the multi-modal classification model based on the SL model learning network for M times of full-node model training according to a preset maximum training time M specifically includes:
when each full-node model training is performed, performing one round of model training by all the database docking nodes according to the corresponding multi-mode neural image database and the locally stored model data packet, and updating the locally stored model parameter templates through training; selecting one of all the database docking nodes as a corresponding current leader node; all the rest database docking nodes except the current leader node send the latest model parameter templates to the current leader node; the current leader node performs full-node model parameter combination processing on the model parameter templates locally stored by the current leader node and the model parameter templates sent by all other database docking nodes to generate corresponding global model parameter templates; the current leader node sends the global model parameter template back to all other database docking nodes; and enabling all database docking nodes of the SL model learning network to update the locally stored model parameter templates according to the global model parameter templates.
7. The neural image database-based training classification model data processing method of claim 6, wherein the performing, by all the database docking nodes, a round of model training according to the corresponding multimodal neural image database and the locally stored model data packet and updating the locally stored model parameter templates through training, specifically comprises:
selecting corresponding first, second and third training data sets from the corresponding multi-modal neuroimage databases by each of the database docking nodes; extracting the locally stored model parameter template to serve as a corresponding first model parameter template; performing local model training processing on the locally stored multi-mode classification model for one time according to the first training data set, the first model parameter template, the locally stored model loss function, the training objective function and the loss convergence range to generate a corresponding second model parameter template; performing local model training processing on the locally stored multi-mode classification model once according to the second training data set, the second model parameter template, the locally stored model loss function, the training objective function and the loss convergence range to generate a corresponding third model parameter template; performing local model training processing on the locally stored multi-mode classification model once according to the third training data set, the third model parameter template, the locally stored model loss function, the training objective function and the loss convergence range to generate a corresponding fourth model parameter template; and using the fourth model parameter template to update the locally stored model parameter template;
The first training data set comprises a plurality of first training data records, each first training data record consisting of a group of MRI data, EEG data and PET data whose image labels are all the normal classification label; the second training data set comprises a plurality of second training data records, each second training data record consisting of a group of MRI data, EEG data and PET data whose image labels are the same early Alzheimer disease classification label, and the second training data set includes second training data records corresponding to each class of early Alzheimer disease classification label; the third training data set comprises a plurality of third training data records, each third training data record consisting of a group of MRI data, EEG data and PET data with the same image label, and the third training data set includes third training data records corresponding to the normal classification label and to each class of early Alzheimer disease classification label; the ratio of the numbers of training data records in the first, second and third training data sets is a preset proportion A:B:C, where A, B and C are integers and B > C > A; the total number of training data records in the first, second and third training data sets is not less than a preset number threshold; and no two of the first, second and third training data records are identical.
8. The data processing method based on neural image database training classification model according to claim 7, wherein the performing the local model training processing on the locally stored multi-modal classification model comprises:
step 81, using the first, second or third training data set processed this time as a corresponding current training data set; the first, second or third model parameter templates corresponding to the current training data set are used as corresponding current model parameter templates; the current training data set comprises a plurality of training data records consisting of MRI data, EEG data and PET data with the same image label;
step 82, extracting a first training data record of the current training data set as a corresponding current training data record;
step 83, forming a corresponding current multi-mode image set by MRI data, EEG data and PET data recorded by the current training data; initializing an all-zero vector as a corresponding label classification prediction vector according to the vector length of the classification prediction vector output by the multi-mode classification model, and setting the classification probability corresponding to the image label recorded by the current training data in the label classification prediction vector as 1; the label classification prediction vector comprises a plurality of classification probabilities, and each classification probability is initialized to 0 when the label classification prediction vector is initialized;
Step 84, initializing model parameters of the multi-mode classification model according to the current model parameter template; inputting the current multi-modal image set into the multi-modal classification model for model reasoning to generate a corresponding training classification prediction vector; performing model loss estimation processing according to the training classification prediction vector and the label classification prediction vector based on the model loss function to generate a corresponding model estimation loss value;
step 85, identifying whether the model estimated loss value meets the loss convergence range; if yes, go to step 87; if not, go to step 86;
step 86, performing model parameter reverse modulation processing according to the current model parameter template and the label classification prediction vector based on the training objective function to generate a corresponding current modulation parameter template; taking the current modulation parameter template as a new current model parameter template; and returns to step 84 to continue training;
step 87, identifying whether the current training data record is the last training data record in the current training data set; if yes, go to step 88; if not, taking the next training data record in the current training data set as a new current training data record and returning to the step 83 to continue training;
And step 88, outputting the latest current model parameter template as the corresponding second, third or fourth model parameter template according to the corresponding relation between the current training data set and the first, second or third training data set.
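Steps 81-88 describe a per-record loop: build a one-hot label classification prediction vector, run model reasoning, estimate the loss, and reverse-modulate the parameter template until the loss enters the convergence range. The sketch below captures that control flow with pluggable `infer`, `loss_fn` and `adjust` callables (hypothetical names standing in for the model, the model loss function and the training objective function); the toy one-parameter model at the end is purely illustrative.

```python
def train_on_dataset(dataset, template, infer, loss_fn, adjust,
                     loss_bound, max_iters=100):
    """Run steps 81-88 over one training data set and return the
    latest current model parameter template."""
    params = dict(template)                        # current parameter template
    for record in dataset:                         # steps 82, 87, 88
        # Step 83: all-zero label vector with a 1 at the record's label index.
        label_vec = [0.0] * record["n_classes"]
        label_vec[record["label_index"]] = 1.0
        for _ in range(max_iters):
            pred = infer(record["images"], params)   # step 84: model reasoning
            loss = loss_fn(pred, label_vec)          # step 84: loss estimation
            if loss <= loss_bound:                   # step 85: convergence check
                break
            params = adjust(params, label_vec)       # step 86: reverse modulation
    return params                                    # step 88: latest template

# Toy one-parameter "model": prediction [w, 1 - w], squared-error loss,
# and an adjustment that nudges w towards the labelled class.
infer = lambda images, p: [p["w"], 1.0 - p["w"]]
loss_fn = lambda pred, lab: sum((a - b) ** 2 for a, b in zip(pred, lab))
adjust = lambda p, lab: {"w": p["w"] + 0.1 * (lab[0] - p["w"])}
data = [{"n_classes": 2, "label_index": 0, "images": None}]
final = train_on_dataset(data, {"w": 0.0}, infer, loss_fn, adjust,
                         loss_bound=0.05)
```

In the claimed method this loop would be run three times in sequence, once per training data set, each run starting from the template produced by the previous one.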
9. A system for implementing the neural image database-based data processing method of training classification models of any of claims 1-8, the system comprising: the system comprises a database processing end, a multi-mode neural image database, a label labeling processing end, a node processing end and an SL model learning network;
the database processing end is used for creating a multi-mode neural image database in each region;
the label labeling processing end is used for carrying out image label labeling processing on each multi-mode neural image database based on a plurality of unified single-mode image classification prediction models;
the node processing end is used for carrying out SL node docking processing on each multi-mode neural image database which is subjected to image label marking processing and the SL model learning network to generate corresponding database docking nodes; carrying out M times of full-node model training on the multi-mode classification model according to a preset maximum training frequency M based on the SL model learning network; and obtaining the latest model parameter templates of the multi-mode classification model through the database docking nodes when the M times of full-node model training is finished.
10. An electronic device, comprising: memory, processor, and transceiver;
the processor being adapted to couple with the memory, read and execute instructions in the memory to implement the method of any one of claims 1-8;
the transceiver is coupled to the processor and is controlled by the processor to transmit and receive messages.
CN202310180625.XA 2023-02-17 2023-02-17 Data processing method and system based on neural image database training classification model Active CN116246112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310180625.XA CN116246112B (en) 2023-02-17 2023-02-17 Data processing method and system based on neural image database training classification model

Publications (2)

Publication Number Publication Date
CN116246112A true CN116246112A (en) 2023-06-09
CN116246112B CN116246112B (en) 2024-03-22

Family

ID=86623812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310180625.XA Active CN116246112B (en) 2023-02-17 2023-02-17 Data processing method and system based on neural image database training classification model

Country Status (1)

Country Link
CN (1) CN116246112B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215291A (en) * 2020-10-19 2021-01-12 中国计量大学 Method for extracting and classifying medical image features under cascade neural network
CN112465058A (en) * 2020-12-07 2021-03-09 中国计量大学 Multi-modal medical image classification method under improved GoogLeNet neural network
US20210398017A1 (en) * 2020-06-23 2021-12-23 Hewlett Packard Enterprise Development Lp Systems and methods for calculating validation loss for models in decentralized machine learning
CN114330757A (en) * 2021-12-02 2022-04-12 刘维炜 Swarm learning method and device, blockchain node and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SABAT, SAMRAT L. ET AL.: "Integrated Learning Particle Swarm Optimizer for global optimization", APPLIED SOFT COMPUTING, 31 January 2011 (2011-01-31), pages 574 - 584, XP027260587 *
XING JIE; XIAO DEYUN: "CSTR State Prediction Based on Integrated Neural Networks", COMPUTERS AND APPLIED CHEMISTRY, no. 04, 28 April 2007 (2007-04-28), pages 433 - 436 *

Similar Documents

Publication Publication Date Title
CN109919928B (en) Medical image detection method and device and storage medium
CN110288049B (en) Method and apparatus for generating image recognition model
WO2020019738A1 (en) Plaque processing method and device capable of performing magnetic resonance vessel wall imaging, and computing device
JP2024019441A (en) Training method for specializing artificial intelligence model in deployed institution, apparatus performing the same
CN111753865A (en) Recognition of realistic synthetic images generated using generative confrontation networks based on deep neural networks
KR102229218B1 (en) Signal translation system and signal translation method
CN111445440A (en) Medical image analysis method, equipment and storage medium
WO2021179692A1 (en) Head ct image segmentation method and apparatus, electronic device and storage medium
JP2017059090A (en) Generation device, generation method, and generation program
US20210312263A1 (en) Techniques For Matching Disparate Input Data
CN111797672A (en) Object recognition system and object recognition method
CN114708465B (en) Image classification method and device, electronic equipment and storage medium
WO2023020214A1 (en) Retrieval model training method and apparatus, retrieval method and apparatus, device and medium
CN116484867A (en) Named entity recognition method and device, storage medium and computer equipment
CN116245086A (en) Text processing method, model training method and system
CN113705276A (en) Model construction method, model construction device, computer apparatus, and medium
CN114169339A (en) Medical named entity recognition model training method, recognition method and federal learning system
CN114266920A (en) Deep learning image classification method and system based on knowledge driving
CN113344067A (en) Method, device and equipment for generating customer portrait
EP4160488A1 (en) Adaptive aggregation for federated learning
US20210168195A1 (en) Server and method for controlling server
EP3683733A1 (en) A method, an apparatus and a computer program product for neural networks
JP2017134853A (en) Generation device, generation method, and generation program
CN114822866B (en) Medical data learning system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant