WO2022146050A1 - Federated artificial intelligence training method and system for diagnosing depression - Google Patents

Federated artificial intelligence training method and system for diagnosing depression

Info

Publication number
WO2022146050A1
Authority
WO
WIPO (PCT)
Prior art keywords
local
model
global
data
learning
Prior art date
Application number
PCT/KR2021/020216
Other languages
English (en)
Korean (ko)
Inventor
김현승
최준희
이종민
최민규
Original Assignee
성균관대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 성균관대학교산학협력단
Publication of WO2022146050A1

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 - Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 - Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 - Evaluating the state of mind, e.g. depression, anxiety
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/042 - Knowledge-based neural networks; Logical representations of neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • The present invention relates to a federated artificial intelligence learning method and system for diagnosing depression.
  • Artificial intelligence technology is expected to have a significant impact on various medical fields in the near future.
  • Artificial intelligence-based medical care is expected to improve reading accuracy and to contribute to disease prediction and prevention.
  • Artificial intelligence-based medical care can improve performance and efficiency compared to existing medical practice.
  • For example, a convolutional neural network from the field of computer vision can be applied directly to medical image analysis.
  • However, previously published artificial intelligence models for diagnosing depression diagnose depression only from the words and intonation captured in an interview with a clinician.
  • The AI models developed in this way utilize text and voice data and respond more readily to text information.
  • Embodiments of the present invention are intended to provide a federated artificial intelligence learning method and system for diagnosing depression that improve the accuracy of the artificial intelligence model through federated learning between a global model and a plurality of local models.
  • Embodiments of the present invention are also intended to provide a federated learning method for an artificial intelligence model that diagnoses depression using brain wave (EEG) data and brain imaging (fMRI) data.
  • Embodiments of the present invention are further intended to provide a federated artificial intelligence learning method and system for diagnosing depression that avoid the risk of personal information leakage, since the patient data held by each institution is never shared, while still improving the accuracy of the global artificial intelligence model.
  • According to one embodiment, there is provided an artificial intelligence federated learning method for diagnosing depression, performed by an artificial intelligence federated learning apparatus, the method comprising: pre-training a global model using global training data; pre-training a local model based on the weight parameters of the pre-trained global model; updating the weight parameters of the pre-trained local model using a feature vector extracted from pre-stored local training data; and updating the weight parameters of the pre-trained global model based on the updated weight parameters of the local model.
  • the method may further include retraining the local model based on a weight parameter of the updated global model.
  • Here, the global model may be configured as any one of a support vector machine (SVM), a convolutional neural network (CNN), and a recurrent neural network (RNN).
  • the local model may be configured with the same neural network as that of the global model.
  • Here, the global model may be configured as a support vector machine if the amount of global training data is less than a preset threshold; if the amount of global training data is greater than or equal to the preset threshold, the global model may be configured as a fully-connected neural network (NN) in which a fully-connected layer is attached to the last stage of a convolutional neural network or a recurrent neural network.
  • Here, the updating of the weight parameters of the local model may include extracting a feature vector using a convolutional neural network if the pre-stored local training data is brain image data, or using a recurrent neural network if the pre-stored local training data is time-series data.
  • the updating of the weight parameter of the local model may include updating the weight parameter of the pre-trained local model by using individual feature vectors for each of the plurality of local models.
  • the updating of the weight parameter of the local model may include updating the weight parameter of the pre-trained local model based on the extracted feature vector using stochastic gradient descent.
  • In the stochastic gradient descent method, the weight and bias are updated by the gradient of the loss, scaled by a step size that indicates the learning rate.
  • Here, the updating of the weight parameters of the global model may include individually receiving the updated weight parameters from each of the plurality of local models, and integrating the individually received weight parameters of the plurality of local models to update the weight parameters of the pre-trained global model.
  • According to another embodiment, there is provided a system comprising: a global federated learning apparatus for pre-training a global model using global training data; and a local federated learning apparatus that pre-trains a local model based on the weight parameters of the pre-trained global model, and updates the weight parameters of the pre-trained local model using a feature vector extracted from pre-stored local training data.
  • the global federated learning apparatus updates the weight parameter of the pre-trained global model based on the weight parameter of the updated local model.
  • the local federated learning apparatus may re-learn the local model based on the weight parameter of the updated global model.
  • Here, the global model may be configured as any one of a support vector machine (SVM), a convolutional neural network (CNN), and a recurrent neural network (RNN).
  • the local model may be configured with the same neural network as that of the global model.
  • Here, the global model is configured as a support vector machine if the amount of global training data is less than a preset threshold; if the amount of global training data is greater than or equal to the preset threshold, the global model may be configured as a fully-connected neural network (NN) in which a fully-connected layer is attached to the last stage of a convolutional neural network or a recurrent neural network.
  • Here, the local federated learning apparatus may extract a feature vector using a convolutional neural network if the pre-stored local training data is image data, or using a recurrent neural network if the pre-stored local training data is time-series data.
  • the local federated learning apparatus may update each of the weight parameters of the pre-trained local model by using individual feature vectors for each of the plurality of local models.
  • the local federated learning apparatus may update the weight parameter of the pre-trained local model based on the extracted feature vector using stochastic gradient descent.
  • In the stochastic gradient descent method, the local federated learning apparatus may update the weights and biases by the gradient of the loss, scaled by a step size indicating the learning rate.
  • Here, the global federated learning apparatus individually receives the updated weight parameters from each of the plurality of local models, and may integrate the individually received weight parameters of the plurality of local models to update the weight parameters of the pre-trained global model.
  • According to another embodiment, there is provided a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to execute a method comprising: pre-training a global model using global training data; pre-training a local model based on the weight parameters of the pre-trained global model; updating the weight parameters of the pre-trained local model using a feature vector extracted from pre-stored local training data; and updating the weight parameters of the pre-trained global model based on the updated weight parameters of the local model.
  • the disclosed technology may have the following effects. However, this does not mean that a specific embodiment should include all of the following effects or only the following effects, so the scope of the disclosed technology should not be understood as being limited thereby.
  • Embodiments of the present invention can accurately diagnose depression by using brain wave data and brain imaging (fMRI) data in each institution.
  • Specifically, embodiments of the present invention allow each institution to train an artificial intelligence model internally using such data, and improve the accuracy of the global artificial intelligence model by using the weights of the locally trained models.
  • embodiments of the present invention can ensure privacy protection by individually managing the patient's personal data by each institution.
  • FIG. 1 is a block diagram of an artificial intelligence joint learning system for diagnosing depression according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating a process of extracting a feature vector from a brain image used in an embodiment of the present invention.
  • FIG. 3 is a diagram showing the configuration of a CNN model used in an embodiment of the present invention.
  • FIG. 4 is a diagram illustrating a feature vector extraction process from EEG data according to an embodiment of the present invention.
  • FIG. 5 is a diagram illustrating a learning process of a global model according to an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating a process of transmitting a weight parameter of a global model according to an embodiment of the present invention.
  • FIG. 7 is a diagram illustrating an update process of a local model according to an embodiment of the present invention.
  • FIG. 8 is a diagram illustrating a process of transmitting a weight parameter of a local model according to an embodiment of the present invention.
  • FIGS. 9 and 10 are diagrams showing learning results according to an embodiment of the present invention.
  • The terms used in the present invention should be interpreted based on their meaning and the overall content of the present invention, rather than on the bare names of the terms alone.
  • FIG. 1 is a block diagram of an artificial intelligence joint learning system for diagnosing depression according to an embodiment of the present invention.
  • the artificial intelligence federated learning system 100 for diagnosing depression includes a global federated learning device 120 and a plurality of local federated learning devices.
  • However, not all of the illustrated components are essential. The artificial intelligence federated learning system 100 may be implemented with more components than those illustrated, or with fewer.
  • The artificial intelligence federated learning system 100 aims to diagnose depression using brain image (fMRI) data and electroencephalogram (EEG) data.
  • Although individual brain imaging (fMRI) and EEG recordings are large, the number of such data samples is small, so it is difficult to implement a high-accuracy artificial intelligence model for diagnosing depression.
  • Accordingly, the artificial intelligence federated learning system 100 extracts feature vectors from brain image data and EEG data using a Convolutional Neural Network (CNN) model and a Recurrent Neural Network (RNN) model, respectively, and diagnoses depression from those feature vectors.
  • an artificial intelligence model for diagnosing depression using CNN, RNN, or Support Vector Machine (SVM) may be trained by using brain wave data and fMRI data.
  • an embodiment of the present invention may receive and update a weight parameter of a local AI model of each institution using a global AI model.
  • the weight parameters of the artificial intelligence models of each institution may be updated by utilizing the brain wave data and fMRI data owned by each institution.
  • the weight parameter of the global AI model may be updated by transmitting the weight parameter of the local AI model of each institution to the global AI model.
  • the weight parameters of the updated global artificial intelligence model may be transmitted back to the local artificial intelligence models of various institutions.
  • a support vector machine (SVM) technique capable of effectively performing binary classification may be used among machine learning techniques.
  • the global federated learning apparatus 120 pre-trains the global SVM model using global learning data (S101).
  • the global federated learning apparatus 120 updates the weight of the global model (S102).
  • the local federated learning devices A, B, and C (111, 112, 113) implemented in each institution receive the weight parameters of the pre-trained global SVM model (S103).
  • the local federated learning devices A, B, and C (111, 112, 113) start learning the local SVM model of each institution based on the received weight parameter (S104, S106, S108).
  • The local federated learning devices A, B, and C feed the data owned by each institution into a 3D CNN model shared by all institutions to extract feature vectors, and then use these feature vectors to update the weight parameters of each local model with the stochastic gradient descent method (S105, S107, S109).
  • When the local model training of each institution is completed, the local federated learning devices A, B, and C (111, 112, 113) transfer the learned local SVM model weights of each institution back to the global SVM model (S110, S111, S112).
  • The global federated learning apparatus 120 updates the weights of the global SVM model with the arithmetic average or a weighted average of the weights received from each institution, producing a high-accuracy global SVM model (S113).
  • The artificial intelligence federated learning system 100 may likewise improve the accuracy of depression diagnosis by performing federated learning on EEG data using a similar method.
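The round described in steps S101-S113 can be sketched in code. The following is a minimal illustration, assuming a linear SVM trained with hinge-loss stochastic gradient descent and a plain arithmetic average of the institutions' weights; all function and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def local_sgd(w, b, X, y, step_size=1e-3, epochs=10):
    """Local training (S104-S109): hinge-loss SGD on a linear SVM,
    starting from the weights received from the global model."""
    w = w.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:   # sample violates the margin
                w += step_size * yi * xi
                b += step_size * yi
    return w, b

def federated_round(w_g, b_g, institution_data, step_size=1e-3, epochs=10):
    """One federated round: broadcast the global (w, b) to every institution
    (S103), train locally (S104-S109), collect the trained weights
    (S110-S112), and average them into a new global model (S113)."""
    results = [local_sgd(w_g, b_g, X, y, step_size, epochs)
               for X, y in institution_data]
    w_new = np.mean([w for w, _ in results], axis=0)   # arithmetic average
    b_new = float(np.mean([b for _, b in results]))
    return w_new, b_new
```

A weighted average by each institution's training-data count, which the text also mentions, would replace the plain mean with size-proportional coefficients.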
  • the global federated learning apparatus 120 learns the global model in advance by using the global learning data.
  • The local federated learning device pre-trains a local model based on the weight parameters of the global model trained in advance by the global federated learning apparatus 120, and updates the weight parameters of the pre-trained local model using a feature vector extracted from pre-stored local training data.
  • The global federated learning apparatus 120 updates the weight parameters of the pre-trained global model based on the updated weight parameters of the local model.
  • the local federated learning apparatus may relearn the local model based on the updated weight parameter of the global model.
  • The global model may be configured as any one of a support vector machine (SVM), a convolutional neural network (CNN), and a recurrent neural network (RNN).
  • the local model may be configured with the same neural network as that of the global model.
  • The global model is configured as a support vector machine when the amount of global training data is less than a preset threshold; when the amount of global training data is greater than or equal to the preset threshold, the global model may be configured as a fully-connected neural network (NN) with a fully-connected layer connected to the last stage of the convolutional neural network or the recurrent neural network.
  • That is, when the number of medical data samples currently possessed is small, a feature vector may be extracted and an SVM may be used as the global model and the local model.
  • Otherwise, a fully-connected layer is attached to the last stage of a CNN or RNN, and this part can be used as the global model and the local model instead of the SVM. However, a fully-connected layer contains a large number of parameters, so it cannot be trained with a small amount of data. Accordingly, in another embodiment of the present invention, if a sufficient amount of data is available, whatever its format, a fully-connected layer may be used instead of the SVM.
  • The CNN or RNN in the preceding stage may vary depending on the format of the medical data: a 3D CNN is used for brain image data, and an RNN is used for EEG data because it is time-series data.
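This format-based choice can be sketched as a small dispatch function. The shape conventions below (a 3-D voxel volume per fMRI scan, a timesteps-by-channels array per EEG recording) are illustrative assumptions, not the patent's specification.

```python
import numpy as np

def choose_extractor(sample: np.ndarray) -> str:
    """Pick the feature-extractor family from the data format described in
    the text: a volumetric brain image goes to a 3D CNN, a time-series EEG
    recording goes to an RNN."""
    if sample.ndim == 3:        # (x, y, z) voxel volume from an fMRI scan
        return "3d-cnn"
    if sample.ndim == 2:        # (timesteps, channels) EEG recording
        return "rnn"
    raise ValueError("unsupported data format: ndim=%d" % sample.ndim)
```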
  • In other words, when classifying a feature extracted through a CNN or RNN, whether to use an SVM or a fully-connected neural network (NN) as the global and local classifier model may depend on the amount of data available.
  • The local federated learning apparatus extracts a feature vector using a convolutional neural network if the pre-stored local training data is image data, or using a recurrent neural network if the pre-stored local training data is time-series data.
  • the local federated learning apparatus may update each of the weight parameters of the pre-trained local model by using individual feature vectors for each of the plurality of local models.
  • the local federated learning apparatus may update the weight parameter of the pre-trained local model based on the extracted feature vector using stochastic gradient descent.
  • In the stochastic gradient descent method, the local federated learning apparatus may update the weight and bias by the gradient of the loss, scaled by the step size indicating the learning rate.
  • In the stochastic gradient descent method, the weight will be referred to as w, the bias as b, the feature vector as x, and the feature label as y.
  • Stochastic gradient descent updates w and b by the gradient of the loss, scaled by the step size (step_size, i.e., the learning rate).
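With that notation, a single update step can be written out. The sketch below assumes a hinge loss L = max(0, 1 - y * (w·x + b)), the standard loss for an SVM; the patent does not spell out its loss function, so this is an illustrative choice.

```python
def sgd_step(w, b, x, y, step_size=1e-3):
    """One stochastic-gradient update of (w, b) on a single sample (x, y),
    moving against the hinge-loss gradient scaled by step_size."""
    margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    if margin < 1:  # the hinge loss has a nonzero gradient only here
        w = [wi + step_size * y * xi for wi, xi in zip(w, x)]
        b = b + step_size * y
    return w, b
```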
  • The global federated learning apparatus 120 individually receives the updated weight parameters from each of the plurality of local models, and may integrate the individually received weight parameters of the plurality of local models to update the weight parameters of the pre-trained global model.
  • FIG. 2 is a diagram illustrating a process of extracting a feature vector from a brain image used in an embodiment of the present invention.
  • An embodiment of the present invention extracts a feature vector from a brain image (fMRI image) through a convolutional neural network.
  • FIG. 3 is a diagram showing the configuration of a CNN model used in an embodiment of the present invention.
  • FIG. 4 is a diagram illustrating a feature vector extraction process from EEG data according to an embodiment of the present invention.
  • An embodiment of the present invention passes the EEG data through a recurrent neural network to extract a feature vector.
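A minimal sketch of this idea uses a plain tanh recurrence whose final hidden state serves as the feature vector; the weight shapes and the use of the last hidden state are assumptions for illustration, not the patent's specified architecture.

```python
import numpy as np

def rnn_features(eeg, W_in, W_h, b):
    """Run a (timesteps, channels) EEG array through a simple tanh RNN and
    return the final hidden state as the feature vector."""
    h = np.zeros(W_h.shape[0])
    for x_t in eeg:                     # one timestep at a time
        h = np.tanh(W_in @ x_t + W_h @ h + b)
    return h
```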
  • FIG. 5 is a diagram illustrating a learning process of a global model according to an embodiment of the present invention.
  • An embodiment of the present invention obtains a corresponding feature vector through global training data, and learns a global support vector machine model based on the feature vector.
  • FIG. 6 is a diagram illustrating a process of transmitting a weight parameter of a global model according to an embodiment of the present invention.
  • An embodiment of the present invention transmits the weight parameters of the pre-trained global model, that is, the weight and bias, to each local support vector machine model (Local SVM model).
  • FIG. 7 is a diagram illustrating an update process of a local model according to an embodiment of the present invention.
  • An embodiment of the present invention updates the pre-trained weight parameters received from the global model using a feature vector extracted from the local training data possessed by each local federated learning device.
  • a weight parameter of a local model is updated using a stochastic gradient descent method.
  • FIG. 8 is a diagram illustrating a process of transmitting a weight parameter of a local model according to an embodiment of the present invention.
  • An embodiment of the present invention transfers the weight parameters of the local support vector machine model (Local SVM model) learned by each local federated learning apparatus to the global model (Global model).
  • The weight parameters in the global support vector machine model may be updated using an arithmetic average or a weighted average.
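The two update rules named here can be sketched as follows; the function name and the (w, b) pair representation are illustrative assumptions, not the patent's interface.

```python
def aggregate(local_params, n_samples=None):
    """Combine locally trained (w, b) pairs into the global SVM parameters:
    an arithmetic average when n_samples is None, otherwise an average
    weighted by each institution's number of training samples."""
    k = len(local_params)
    if n_samples is None:
        coeffs = [1.0 / k] * k                      # arithmetic average
    else:
        total = float(sum(n_samples))
        coeffs = [n / total for n in n_samples]     # weighted average
    dim = len(local_params[0][0])
    w = [sum(c * wv[i] for c, (wv, _) in zip(coeffs, local_params))
         for i in range(dim)]
    b = sum(c * bv for c, (_, bv) in zip(coeffs, local_params))
    return w, b
```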
  • FIGS. 9 and 10 are diagrams showing learning results according to an embodiment of the present invention.
  • Brain image (fMRI) data was fed into a pre-trained 3D CNN model to extract a feature vector of the 3D image, and the value of the last convolution layer of the 3D CNN model was used as the feature vector. Depression was diagnosed by applying the extracted feature vector to a machine learning technique.
  • a linear support vector machine (SVM) was used among the machine learning techniques.
  • The federated learning method of the SVM model is as follows. After training the global SVM model using the global training data set, the weights and bias values of the global SVM model are transferred to model A, model B, and model C. Model A, model B, and model C then train their respective models, starting from the received SVM weights and biases, using the data available to each.
  • the stochastic gradient descent method is used as a method for learning the SVM.
  • the global SVM model receives weights and bias values from each model and updates the weights and biases of the global SVM model.
  • As the update method, the arithmetic average of the weight and bias values of each model, or a weighted average according to the number of training data, was used.
  • The number of training epochs used in the stochastic gradient descent method was set to 10, and the step size was set to 10e-3. The federated learning results for the SVM model according to the size of the training data set are as follows.
  • As described above, the patient's personal information data held by each institution is not used directly; only the weight values of each model are used, improving the accuracy of the global classification model.
  • According to another embodiment, there is provided a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to execute a method comprising: pre-training a global model using global training data; pre-training a local model based on the weight parameters of the pre-trained global model; updating the weight parameters of the pre-trained local model using a feature vector extracted from pre-stored local training data; and updating the weight parameters of the pre-trained global model based on the updated weight parameters of the local model.
  • The various embodiments described above may be implemented as software including instructions stored in a storage medium readable by a machine (e.g., a computer).
  • The machine is a device capable of calling a stored instruction from the storage medium and operating according to the called instruction, and may include an electronic device (e.g., electronic device A) according to the disclosed embodiments.
  • the processor may perform a function corresponding to the instruction by using other components directly or under the control of the processor.
  • Instructions may include code generated or executed by a compiler or interpreter.
  • the device-readable storage medium may be provided in the form of a non-transitory storage medium.
  • Here, 'non-transitory' means that the storage medium does not include a signal and is tangible; it does not distinguish whether data is stored semi-permanently or temporarily in the storage medium.
  • the methods according to the various embodiments described above may be provided by being included in a computer program product.
  • Computer program products may be traded between sellers and buyers as commodities.
  • the computer program product may be distributed in the form of a device-readable storage medium (eg, compact disc read only memory (CD-ROM)) or online through an application store (eg, PlayStoreTM).
  • at least a portion of the computer program product may be temporarily stored or temporarily generated in a storage medium such as a memory of a server of a manufacturer, a server of an application store, or a relay server.
  • The various embodiments described above may be implemented in a recording medium readable by a computer or a similar device using software, hardware, or a combination thereof. In some cases, the embodiments described herein may be implemented by the processor itself. According to the software implementation, embodiments such as the procedures and functions described in this specification may be implemented as separate software modules. Each of the software modules may perform one or more of the functions and operations described herein.
  • non-transitory computer-readable medium refers to a medium that stores data semi-permanently, not a medium that stores data for a short moment, such as a register, cache, memory, etc., and can be read by a device.
  • Specific examples of the non-transitory computer-readable medium may include a CD, DVD, hard disk, Blu-ray disk, USB, memory card, ROM, and the like.
  • Each of the components (e.g., a module or a program) may be composed of a single entity or a plurality of entities, and some of the above-described sub-components may be omitted, or other sub-components may be further included in the various embodiments.
  • Operations performed by a module, program, or other component may be executed sequentially, in parallel, repetitively, or heuristically; at least some operations may be executed in a different order or omitted, or other operations may be added.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Software Systems (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Psychiatry (AREA)
  • Databases & Information Systems (AREA)
  • Social Psychology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Computational Linguistics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Surgery (AREA)
  • Educational Technology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a federated artificial intelligence training method and system for diagnosing depression. According to one embodiment of the present invention, the federated artificial intelligence training method for diagnosing depression comprises the steps of: pre-training a global model using global training data; pre-training a local model based on weight parameters of the pre-trained global model; updating the weight parameters of the pre-trained local model using a feature vector extracted from pre-stored local training data; and updating the weight parameters of the pre-trained global model based on the updated weight parameters of the local model.
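The loop described in the abstract (pre-train a global model, initialize local models from the global weights, update locally on private data, then update the global weights from the local results) resembles a federated averaging scheme. The abstract does not specify the aggregation rule, so the sketch below assumes FedAvg-style weighted averaging; all function and variable names (`federated_round`, `toy_local_update`, etc.) are hypothetical illustrations, not the patent's implementation.

```python
import numpy as np

def federated_round(global_weights, local_datasets, local_update, agg_weights=None):
    """One federated round: each site copies the global weights, trains on its
    own (private) data, and the server averages the returned weights.
    Aggregation weights default to relative dataset sizes (FedAvg-style)."""
    if agg_weights is None:
        sizes = [len(d) for d in local_datasets]
        agg_weights = [s / sum(sizes) for s in sizes]
    # Each client starts from the pre-trained global parameters (local pre-training step)
    local_results = [local_update(np.copy(global_weights), data) for data in local_datasets]
    # Server-side weighted average (global update step): raw data never leaves the clients
    return sum(w * params for w, params in zip(agg_weights, local_results))

# Toy local update: nudge the weights toward the local data mean
def toy_local_update(weights, data):
    return weights + 0.1 * (np.mean(data) - weights)

g = np.zeros(3)  # pre-trained global weights (global pre-training step)
sites = [np.array([1.0, 1.0]), np.array([3.0, 3.0, 3.0, 3.0])]
for _ in range(5):  # alternate local and global updates over several rounds
    g = federated_round(g, sites, toy_local_update)
```

In this toy run the global weights drift toward the size-weighted mean of the site data, which is the intended behavior of averaging-based federated training: each hospital's depression-diagnosis data stays on site, and only weight parameters are exchanged.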
PCT/KR2021/020216 2020-12-29 2021-12-29 Federated artificial intelligence training method and system for depression diagnosis WO2022146050A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200186728A KR102562377B1 (ko) 2020-12-29 2020-12-29 Artificial intelligence federated learning method and system for providing depression diagnosis information
KR10-2020-0186728 2020-12-29

Publications (1)

Publication Number Publication Date
WO2022146050A1 true WO2022146050A1 (fr) 2022-07-07

Family

ID=82259550

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/020216 WO2022146050A1 (fr) 2020-12-29 2021-12-29 Federated artificial intelligence training method and system for depression diagnosis

Country Status (2)

Country Link
KR (1) KR102562377B1 (fr)
WO (1) WO2022146050A1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20240103473A (ko) 2022-12-27 2024-07-04 성균관대학교산학협력단 Federated learning method, chest disease detection method, and computing device
WO2024162728A1 (fr) * 2023-01-30 2024-08-08 울산과학기술원 Meta-learning apparatus and method for personalized federated learning
CN116564356A (zh) * 2023-04-26 2023-08-08 新疆大学 Depression diagnosis method and system based on a time-delay neural network and gated recurrent unit algorithm
KR102643869B1 (ko) * 2023-07-31 2024-03-07 주식회사 몰팩바이오 Pathology diagnosis system using a federated learning model and processing method thereof
KR102670189B1 (ko) 2023-12-18 2024-05-29 스타라이크 주식회사 Operating server for providing AI-based digital therapeutics using a rhythm game, and operating method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190032433A (ko) * 2016-07-18 2019-03-27 난토믹스, 엘엘씨 Distributed machine learning systems, apparatus, and methods
KR20190136825A (ko) * 2018-05-31 2019-12-10 가천대학교 산학협력단 Method for determining optimal content for depression diagnosis using a fuzzy neural network based on weighted fuzzy membership functions
US20190385043A1 (en) * 2018-06-19 2019-12-19 Adobe Inc. Asynchronously training machine learning models across client devices for adaptive intelligence
KR102089014B1 (ko) * 2018-09-07 2020-03-13 연세대학교 산학협력단 Apparatus and method for generating an image reconstructing the brain activity of a subject
US20200356878A1 (en) * 2019-05-07 2020-11-12 Cerebri AI Inc. Predictive, machine-learning, time-series computer models suitable for sparse training sets

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102021515B1 (ko) * 2018-12-27 2019-09-16 (주)제이엘케이인스펙션 Cerebrovascular disease learning apparatus, cerebrovascular disease detection apparatus, cerebrovascular disease learning method, and cerebrovascular disease detection method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190032433A (ko) * 2016-07-18 2019-03-27 난토믹스, 엘엘씨 Distributed machine learning systems, apparatus, and methods
KR20190136825A (ko) * 2018-05-31 2019-12-10 가천대학교 산학협력단 Method for determining optimal content for depression diagnosis using a fuzzy neural network based on weighted fuzzy membership functions
US20190385043A1 (en) * 2018-06-19 2019-12-19 Adobe Inc. Asynchronously training machine learning models across client devices for adaptive intelligence
KR102089014B1 (ko) * 2018-09-07 2020-03-13 연세대학교 산학협력단 Apparatus and method for generating an image reconstructing the brain activity of a subject
US20200356878A1 (en) * 2019-05-07 2020-11-12 Cerebri AI Inc. Predictive, machine-learning, time-series computer models suitable for sparse training sets

Also Published As

Publication number Publication date
KR20220094967A (ko) 2022-07-06
KR102562377B1 (ko) 2023-08-01

Similar Documents

Publication Publication Date Title
WO2022146050A1 (fr) Federated artificial intelligence training method and system for depression diagnosis
WO2017213398A1 (fr) Learning model for salient facial region detection
WO2021002549A1 (fr) Deep-learning-based system and method for automatically determining the degree of damage for each vehicle area
WO2018212494A1 (fr) Method and device for object identification
WO2017022882A1 (fr) Apparatus for pathological diagnosis classification of medical images, and pathological diagnosis system using the same
WO2021054706A1 (fr) Teaching GANs (generative adversarial networks) to generate per-pixel annotation
WO2019235828A1 (fr) Two-sided disease diagnosis system and method therefor
WO2017164478A1 (fr) Method and apparatus for micro-expression recognition using deep learning analysis of micro-facial dynamics
WO2018135696A1 (fr) Artificial intelligence platform using deep-learning-based self-adaptive learning technology
WO2020111754A9 (fr) Method for providing a diagnosis system using semi-supervised learning, and diagnosis system using the same
WO2020204364A2 (fr) Word embedding method and device based on contextual and morphological information of a word
WO2022149894A1 (fr) Method for training an artificial neural network providing a determination result for a pathological specimen, and computing system for implementing the same
WO2020045848A1 (fr) System and method for disease diagnosis using a neural network performing segmentation
WO2018212584A2 (fr) Method and apparatus for classifying the category to which a sentence belongs using a deep neural network
WO2020071854A1 (fr) Electronic apparatus and control method thereof
WO2020101457A2 (fr) Supervised-learning-based consensus diagnosis method and system therefor
WO2014106979A1 (fr) Method for statistical spoken language recognition
WO2021075742A1 (fr) Deep-learning-based value evaluation method and apparatus
WO2020159241A1 (fr) Method for processing an image and apparatus therefor
WO2021010671A2 (fr) Disease diagnosis system and method performing segmentation using a neural network and a non-local block
WO2022191513A1 (fr) Data-augmentation-based knowledge tracing model training device and system, and operating method thereof
WO2019240330A1 (fr) Image-based force prediction system and method therefor
WO2023113437A1 (fr) Device and method for semantic segmentation using memory
WO2019198900A1 (fr) Electronic apparatus and control method thereof
WO2022004970A1 (fr) Apparatus and method for training keypoints based on an artificial neural network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21915847

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21915847

Country of ref document: EP

Kind code of ref document: A1