CN112966307B - Medical privacy data protection method based on federal learning tensor factorization - Google Patents

Medical privacy data protection method based on federal learning tensor factorization

Info

Publication number
CN112966307B
CN112966307B
Authority
CN
China
Prior art keywords
factor matrix
tensor
global
gradient
medical institution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110422402.0A
Other languages
Chinese (zh)
Other versions
CN112966307A (en
Inventor
郑子彬
麦成源
陈川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongai Health Technology Guangdong Co ltd
Original Assignee
Zhongai Health Technology Guangdong Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongai Health Technology Guangdong Co ltd filed Critical Zhongai Health Technology Guangdong Co ltd
Priority to CN202110422402.0A priority Critical patent/CN112966307B/en
Publication of CN112966307A publication Critical patent/CN112966307A/en
Application granted granted Critical
Publication of CN112966307B publication Critical patent/CN112966307B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Bioethics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention discloses a medical privacy data protection method based on federal learning tensor factorization, which comprises the following specific steps. Step one: each medical institution maintains a locally decomposed tensor factor matrix and a global tensor non-patient factor matrix, and initializes both when the federal process begins. Step two: each medical institution performs local tensor factorization training and gradient descent using a loss function. Step three: the corresponding factor matrix update gradients are computed from the locally decomposed factor matrix and the global non-patient factor matrix. The medical privacy data protection method based on federal learning tensor factorization can improve communication efficiency, further protect user data privacy, reduce the computation required for homomorphic encryption, and solve the problem that local training on clients with non-independent and identically distributed (non-IID) data lowers the accuracy of the aggregated global factor matrix.

Description

Medical privacy data protection method based on federal learning tensor factorization
Technical Field
The invention relates to the field of privacy data protection, in particular to a medical privacy data protection method based on federal learning tensor factorization.
Background
Through retrieval, Chinese patent CN109510712A discloses a remote medical data privacy protection method, system, and terminal; this approach is prone to privacy-protection limitations in the privacy protection process, and at the same time its communication efficiency is low and its homomorphic encryption workload is large;
in a medical scenario, a patient's Electronic Health Records (EHRs) contain comprehensive information on the patient's clinical medical history, and EHR data can be used to compute phenotypes (phenotyping), so that disease risk can be predicted from the phenotypes and precision medical assistance can be realized. Tensor decomposition, an unsupervised learning technique, is an efficient alternative method for computing phenotypes, but the limited EHR data of a single medical institution limits the performance of tensor decomposition for predicting disease risk, and centralized machine learning brings privacy risks, so a distributed, privacy-preserving learning method is urgently needed. At present, a federated learning framework can better meet this scenario requirement, jointly learning the knowledge or information of each institution while protecting the privacy of the original data; therefore, a federated tensor decomposition method for protecting existing medical privacy data is proposed. However, the shared local phenotype information still carries certain sensitive information, so a corresponding privacy protection strategy is needed to solve this problem; meanwhile, because the patient user data of most medical institutions are also non-independent and identically distributed (non-IID), guaranteeing a general and accurate global phenotype is very important.
Disclosure of Invention
The invention aims to solve the defects in the prior art and provides a medical privacy data protection method based on federal learning tensor factorization.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a medical privacy data protection method based on federal learning tensor factorization comprises the following specific steps:
step one: each medical institution maintains a locally decomposed tensor factor matrix and a global tensor non-patient factor matrix and initializes the tensor factor matrix and the global tensor non-patient factor matrix when the federal process begins;
step two: each medical institution performs local tensor factorization training and gradient descent by using a loss function;
step three: according to the locally decomposed factor matrix and the global non-patient factor matrix, a corresponding factor matrix update gradient is obtained;
step four: the medical institution sparsifies the factor matrix updating gradient through a gradient compression strategy;
step five: each medical institution encrypts the non-zero gradient updated by the non-patient factor matrix of the round by using the homomorphic encryption algorithm and sends the non-zero gradient to the central server;
step six: the central server carries out homomorphic addition aggregation on the encrypted gradients updated by the non-patient factor matrix of all the clients, and returns the aggregated gradients to each medical institution;
step seven: the medical institution client decrypts the global encryption gradient and performs gradient descent on the global non-patient factor matrix;
step eight: the client side continues the next round of tensor factor decomposition training after obtaining the global factor matrix;
step nine: and stopping the federal training when the global factor matrix converges or reaches a certain round.
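The nine-step federated round above can be sketched in code. The following is an illustrative sketch only, not the patented implementation: the tensor shapes, CP rank, learning rate, penalty weight `mu`, and the placement of the (omitted here) sparsification and encryption steps are all assumptions. The patient factor matrix `A` stays local to each institution, while the non-patient factors `B` and `C` are updated from an aggregated gradient.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product; row (u, v) of the result is U[u] * V[v]."""
    r = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, r)

def local_gradients(X, A, B, C, B_glob, C_glob, mu=0.1):
    """Gradients of 0.5*||X - [[A,B,C]]||_F^2 plus an L2 penalty keeping the
    shared non-patient factors B, C close to the global factors."""
    I, J, K = X.shape
    X1 = X.reshape(I, J * K)                      # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)   # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)   # mode-3 unfolding
    gA = A @ ((B.T @ B) * (C.T @ C)) - X1 @ khatri_rao(B, C)
    gB = B @ ((A.T @ A) * (C.T @ C)) - X2 @ khatri_rao(A, C) + mu * (B - B_glob)
    gC = C @ ((A.T @ A) * (B.T @ B)) - X3 @ khatri_rao(A, B) + mu * (C - C_glob)
    return gA, gB, gC

def federated_round(tensors, As, B, C, lr=0.02, mu=0.1):
    """One federal round: local descent on the patient factors, then descent on
    the global non-patient factors using the aggregated gradient (in the patent,
    the per-client gradients would be sparsified and homomorphically encrypted
    before aggregation; that step is omitted in this sketch)."""
    sumB, sumC = np.zeros_like(B), np.zeros_like(C)
    for idx, X in enumerate(tensors):
        gA, gB, gC = local_gradients(X, As[idx], B, C, B, C, mu)
        As[idx] = As[idx] - lr * gA               # patient factor never leaves
        sumB += gB
        sumC += gC
    n = len(tensors)
    return As, B - lr * sumB / n, C - lr * sumC / n
```

Iterating `federated_round` until the global factors stop changing (or a round budget is exhausted) corresponds to steps eight and nine.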
As a further aspect of the present invention, in step one, a tensor is used to represent the EHR data of all patients of the medical institution, the factor matrices are obtained by each medical institution performing tensor factorization locally, and the original tensor is approximately reconstructed from the factor matrices via their tensor product.
As a further aspect of the present invention, the loss function in step two is a function that maps a random event, or the value of its related random variable, to a non-negative real number representing the risk or loss of that event; the loss function is used to evaluate the degree to which the model's predicted value differs from the true value.
As a further aspect of the present invention, the loss function is divided into the following two types: an empirical risk loss function and a structural risk loss function;
wherein the empirical risk loss function refers to the difference between the predicted result and the actual result, and the structural risk loss function refers to the empirical risk loss function plus a regularization term.
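The distinction between the two loss types can be illustrated with a minimal sketch (the function names, the choice of squared error for the empirical risk, and the L2 regularization term are illustrative assumptions, not taken from the patent):

```python
def empirical_risk(preds, targets):
    # empirical risk: mean squared difference between predicted and actual results
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def structural_risk(preds, targets, params, lam=0.1):
    # structural risk: the empirical risk plus a regularization term
    # (here an L2 penalty on the model parameters, weighted by lam)
    return empirical_risk(preds, targets) + lam * sum(w ** 2 for w in params)
```

Because the regularization term is non-negative, the structural risk is always at least the empirical risk; the penalty discourages large parameters rather than measuring prediction error.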
As a further scheme of the invention, the specific compression mode of the gradient compression strategy in the fourth step is as follows:
and (3) locally, thinning the update gradient of the non-patient factor matrix by using a hard threshold method, taking the absolute value of the gradient matrix element to be zero within a threshold range, homomorphic encryption is only carried out on the non-zero gradient element, and the non-zero gradient element is sent to a central server for aggregation, so that unimportant gradient update is shielded.
As a further scheme of the invention, the specific steps for encrypting the homomorphic encryption algorithm in the fifth step are as follows:
s1: maintaining a local factorization factor matrix and a global non-patient factor matrix at a medical institution, and performing local tensor factorization;
s2: calculating gradients according to the updated local phenotype and the global phenotype maintained locally, encrypting the gradients, and sending the gradients to a server;
s3: homomorphic addition is carried out on a central server, and a global update gradient is obtained;
s4: and returning to each medical institution to perform decryption and calculate an updated global phenotype.
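The S1-S4 flow can be demonstrated with a toy stand-in for an additively homomorphic cryptosystem. Everything here is an assumption for illustration: a real deployment would use an actual scheme such as Paillier, where ciphertext addition yields an encryption of the plaintext sum under a public key; the blinding trick below is not secure and only mimics the data flow.

```python
import random

class MockAdditiveHE:
    """Toy stand-in for an additively homomorphic scheme (e.g. Paillier).

    A real scheme satisfies Dec(Enc(a) + Enc(b)) = a + b; here we merely blind
    each plaintext with a secret key so the aggregation flow can be shown.
    NOT secure -- illustrative only.
    """
    def __init__(self, seed=42):
        self._key = random.Random(seed).uniform(1.0, 2.0)

    def encrypt(self, x):
        return x + self._key              # blinded "ciphertext"

    @staticmethod
    def add(c1, c2):
        return c1 + c2                    # homomorphic addition on ciphertexts

    def decrypt(self, c, n_terms):
        return c - n_terms * self._key    # remove one key per aggregated term

# S1-S4 for one shared gradient vector from two institutions
he = MockAdditiveHE()
grads_a = [0.1, -0.2]                                   # institution A's entries
grads_b = [0.3, 0.05]                                   # institution B's entries
enc_a = [he.encrypt(g) for g in grads_a]                # S2: encrypt and upload
enc_b = [he.encrypt(g) for g in grads_b]
agg = [MockAdditiveHE.add(x, y) for x, y in zip(enc_a, enc_b)]  # S3: server adds
result = [he.decrypt(c, n_terms=2) for c in agg]        # S4: clients decrypt
```

The key property exercised here is that the server in S3 only ever sees ciphertexts, yet the decrypted result equals the elementwise sum of the clients' gradients.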
Compared with the prior art, the invention has the beneficial effects that:
according to the medical privacy data protection method based on the federal learning tensor factorization, as the gradient is updated instead of the original factor matrix and sent by each medical institution in the proposed homomorphic encryption privacy protection strategy, the gradient compression strategy can be combined to mutually cooperate, so that the communication efficiency is improved, the user data privacy is further protected, meanwhile, the calculation amount of homomorphic encryption is reduced, the penalty term of L2 norm is added into the loss function of tensor factorization, the shared factor matrix is enabled not to deviate from the global factor matrix in the local training, and the problem that the accuracy of the aggregated global factor matrix is lower due to the local training of clients with non-independent homomorphic distribution can be solved.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
FIG. 1 is a flow chart of the present invention.
Detailed Description
The following is a clear and complete description of embodiments of the present invention with reference to the accompanying drawings; the described embodiments are evidently only some, not all, of the embodiments of the present invention.
In the description of the present invention, it should be understood that the terms "upper," "lower," "front," "rear," "left," "right," "top," "bottom," "inner," "outer," and the like indicate or are based on the orientation or positional relationship shown in the drawings, merely to facilitate description of the present invention and to simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Referring to fig. 1, a medical privacy data protection method based on federal learning tensor factorization includes the following specific steps:
step one: each medical institution maintains a locally decomposed tensor factor matrix and a global tensor non-patient factor matrix and initializes the tensor factor matrix and the global tensor non-patient factor matrix when the federal process begins;
step two: each medical institution performs local tensor factorization training and gradient descent by using a loss function;
step three: according to the locally decomposed factor matrix and the global non-patient factor matrix, a corresponding factor matrix update gradient is obtained;
step four: the medical institution sparsifies the factor matrix updating gradient through a gradient compression strategy;
step five: each medical institution encrypts the non-zero gradient updated by the non-patient factor matrix of the round by using the homomorphic encryption algorithm and sends the non-zero gradient to the central server;
step six: the central server carries out homomorphic addition aggregation on the encrypted gradients updated by the non-patient factor matrix of all the clients, and returns the aggregated gradients to each medical institution;
step seven: the medical institution client decrypts the global encryption gradient and performs gradient descent on the global non-patient factor matrix;
step eight: the client side continues the next round of tensor factor decomposition training after obtaining the global factor matrix;
step nine: and stopping the federal training when the global factor matrix converges or reaches a certain round.
In step one, tensors are used to represent the EHR data of all patients of the medical institutions; the factor matrices are obtained by each medical institution performing tensor factorization locally, and the original tensor is approximately reconstructed from the factor matrices via their tensor product.
In step two, the loss function is a function that maps a random event, or the value of its related random variable, to a non-negative real number representing the risk or loss of that event. The loss function is used to evaluate the degree to which the model's predicted value differs from the true value; generally, the better the loss function, the better the model's performance, and different models typically use different loss functions.
The loss function is divided into two types: an empirical risk loss function and a structural risk loss function;
wherein the empirical risk loss function refers to the difference between the predicted result and the actual result, and the structural risk loss function refers to the empirical risk loss function plus a regularization term.
A penalty term on the L2 norm is added to the loss function of tensor factorization, so that the shared factor matrices do not deviate from the global factor matrix during local training; this solves the problem that local training on non-independent and identically distributed (non-IID) clients lowers the accuracy of the aggregated global factor matrix.
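The patent text announces the formula without reproducing it at this point; a plausible form of the regularized objective for institution $k$ is sketched below. The notation is assumed: $\mathcal{X}^{(k)}$ is the local EHR tensor, $A^{(k)}$ the local patient factor matrix, $B^{(k)}, C^{(k)}$ the shared non-patient factor matrices, $a_r, b_r, c_r$ their columns, $R$ the decomposition rank, and $\mu$ the penalty weight.

```latex
\min_{A^{(k)},\,B^{(k)},\,C^{(k)}}
\;\frac{1}{2}\Bigl\lVert \mathcal{X}^{(k)}
 - \sum_{r=1}^{R} a^{(k)}_r \circ b^{(k)}_r \circ c^{(k)}_r \Bigr\rVert_F^{2}
 + \frac{\mu}{2}\Bigl( \bigl\lVert B^{(k)} - B_{\mathrm{global}} \bigr\rVert_F^{2}
 + \bigl\lVert C^{(k)} - C_{\mathrm{global}} \bigr\rVert_F^{2} \Bigr)
```

The first term is the usual CP reconstruction error; the second is the L2 penalty that keeps the shared non-patient factors from drifting away from the global factors during non-IID local training.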
the specific compression mode of the gradient compression strategy in the fourth step is as follows:
and (3) locally, thinning the update gradient of the non-patient factor matrix by using a hard threshold method, taking the absolute value of the gradient matrix element to be zero within a threshold range, homomorphic encryption is only carried out on the non-zero gradient element, and the non-zero gradient element is sent to a central server for aggregation, so that unimportant gradient update is shielded.
The specific steps of encrypting by the homomorphic encryption algorithm in the fifth step are as follows:
s1: maintaining a local factorization factor matrix and a global non-patient factor matrix at a medical institution, and performing local tensor factorization;
s2: calculating gradients according to the updated local phenotype and the global phenotype maintained locally, encrypting the gradients, and sending the gradients to a server;
s3: homomorphic addition is carried out on a central server, and a global update gradient is obtained;
s4: and returning to each medical institution to perform decryption and calculate an updated global phenotype.
Through the technical scheme: according to the medical privacy data protection method based on federal learning tensor factorization, as the gradient is updated instead of the original factor matrix sent by each medical institution in the proposed homomorphic encryption privacy protection strategy, the gradient compression strategy can be combined to mutually cooperate, so that the communication efficiency is improved, the user data privacy is further protected, and meanwhile, the calculation amount of homomorphic encryption is reduced.
The working principle and usage flow of the invention are as follows: first, each medical institution maintains a locally decomposed tensor factor matrix and a global tensor non-patient factor matrix, and initializes both at the beginning of the federal process. Each medical institution then performs local tensor factorization training, carrying out gradient descent with a loss function; a penalty term on the L2 norm is added to the tensor factorization loss so that the shared factor matrices do not deviate from the global factor matrix during local training, which solves the problem that local training on non-independent and identically distributed (non-IID) clients lowers the accuracy of the aggregated global factor matrix. Next, the corresponding factor matrix update gradients are computed from the locally decomposed factor matrices and the global non-patient factor matrix, and each medical institution sparsifies these gradients through a gradient compression strategy. The specific compression mode is as follows: the non-patient factor matrix update gradient is sparsified locally using a hard threshold method, gradient matrix elements whose absolute value falls within the threshold range are set to zero, and only the non-zero gradient elements are homomorphically encrypted and sent to the central server for aggregation, thereby masking unimportant gradient updates and improving communication efficiency. The homomorphic encryption procedure is then carried out, s1: maintaining a local factorization factor matrix and a global non-patient factor matrix at a medical institution, and performing local tensor factorization; s2: calculating gradients according to the updated local phenotype and the locally maintained global phenotype, encrypting the gradients, and sending them to a server; s3: performing homomorphic addition on the central server to obtain a global update gradient; s4: returning to each medical institution to execute decryption and calculate an updated global phenotype. The central server then performs homomorphic addition aggregation on the encrypted non-patient factor matrix update gradients of all clients and returns the aggregated gradient to each medical institution; each medical institution client decrypts the global encrypted gradient and performs gradient descent on the global non-patient factor matrix; after obtaining the global factor matrix, the client continues with the next round of tensor factorization training; finally, federal training stops when the global factor matrix converges or a certain number of rounds is reached.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art, who is within the scope of the present invention, should make equivalent substitutions or modifications according to the technical scheme of the present invention and the inventive concept thereof, and should be covered by the scope of the present invention.

Claims (4)

1. The medical privacy data protection method based on federal learning tensor factorization is characterized by comprising the following specific steps of:
step one: each medical institution maintains a locally decomposed tensor factor matrix and a global tensor non-patient factor matrix and initializes the tensor factor matrix and the global tensor non-patient factor matrix when the federal process begins;
step two: each medical institution performs local tensor factorization training and gradient descent by using a loss function;
step three: according to the locally decomposed factor matrix and the global non-patient factor matrix, a corresponding factor matrix update gradient is obtained;
step four: the medical institution sparsifies the factor matrix updating gradient through a gradient compression strategy;
step five: each medical institution encrypts the non-zero gradient updated by the non-patient factor matrix of the round by using the homomorphic encryption algorithm and sends the non-zero gradient to the central server;
step six: the central server carries out homomorphic addition aggregation on the encrypted gradients updated by the non-patient factor matrix of all the clients, and returns the aggregated gradients to each medical institution;
step seven: the medical institution client decrypts the global encryption gradient and performs gradient descent on the global non-patient factor matrix;
step eight: the client side continues the next round of tensor factor decomposition training after obtaining the global factor matrix;
step nine: stopping federal training when the global factor matrix converges or reaches a certain round;
the specific compression mode of the gradient compression strategy in the fourth step is as follows:
at the local side of each medical institution, the non-patient factor matrix update gradient is sparsified using a hard threshold method, so that gradient matrix elements whose absolute value falls within the threshold range are set to zero, and only the non-zero gradient elements are homomorphically encrypted and sent to the central server for aggregation, thereby masking unimportant gradient updates;
the specific steps of encrypting by the homomorphic encryption algorithm in the fifth step are as follows:
s1: maintaining a local factorization factor matrix and a global non-patient factor matrix at a medical institution, and performing local tensor factorization;
s2: calculating gradients according to the updated local phenotype and the global phenotype maintained locally, encrypting the gradients, and sending the gradients to a server;
s3: homomorphic addition is carried out on a central server, and a global update gradient is obtained;
s4: and returning to each medical institution to execute decryption and calculate an updated global phenotype.
2. The method of claim 1, wherein in the step one, the tensor is used to represent EHR data of all patients of the medical institution, the factor matrix is obtained by locally performing tensor factorization on each medical institution, and the factor matrix is obtained by approximating the tensor product to obtain the original tensor.
3. The method according to claim 1, wherein the loss function in the second step represents a function of mapping the random event or the value of the random variable related thereto to a non-negative real number to represent the risk or loss of the random event, and the loss function is used to evaluate the degree to which the predicted value and the true value of the model are different.
4. A medical privacy data protection method based on federal learning tensor factorization according to claim 3, wherein the loss function is divided into two types: an empirical risk loss function and a structural risk loss function;
wherein the empirical risk loss function refers to the difference between the predicted result and the actual result, and the structural risk loss function refers to the empirical risk loss function plus a regularization term.
CN202110422402.0A 2021-04-20 2021-04-20 Medical privacy data protection method based on federal learning tensor factorization Active CN112966307B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110422402.0A CN112966307B (en) 2021-04-20 2021-04-20 Medical privacy data protection method based on federal learning tensor factorization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110422402.0A CN112966307B (en) 2021-04-20 2021-04-20 Medical privacy data protection method based on federal learning tensor factorization

Publications (2)

Publication Number Publication Date
CN112966307A CN112966307A (en) 2021-06-15
CN112966307B true CN112966307B (en) 2023-08-22

Family

ID=76280846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110422402.0A Active CN112966307B (en) 2021-04-20 2021-04-20 Medical privacy data protection method based on federal learning tensor factorization

Country Status (1)

Country Link
CN (1) CN112966307B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113724092A (en) * 2021-08-20 2021-11-30 同盾科技有限公司 Cross-feature federated marketing modeling method and device based on FM and deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110909865A (en) * 2019-11-18 2020-03-24 福州大学 Federated learning method based on hierarchical tensor decomposition in edge calculation
WO2020177392A1 (en) * 2019-03-01 2020-09-10 深圳前海微众银行股份有限公司 Federated learning-based model parameter training method, apparatus and device, and medium
CN112231756A (en) * 2020-10-29 2021-01-15 湖南科技学院 FL-EM-GMM medical user privacy protection method and system
CN112600697A (en) * 2020-12-07 2021-04-02 中山大学 QoS prediction method and system based on federal learning, client and server

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11431470B2 (en) * 2019-08-19 2022-08-30 The Board Of Regents Of The University Of Texas System Performing computations on sensitive data while guaranteeing privacy

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020177392A1 (en) * 2019-03-01 2020-09-10 深圳前海微众银行股份有限公司 Federated learning-based model parameter training method, apparatus and device, and medium
CN110909865A (en) * 2019-11-18 2020-03-24 福州大学 Federated learning method based on hierarchical tensor decomposition in edge calculation
CN112231756A (en) * 2020-10-29 2021-01-15 湖南科技学院 FL-EM-GMM medical user privacy protection method and system
CN112600697A (en) * 2020-12-07 2021-04-02 中山大学 QoS prediction method and system based on federal learning, client and server

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Domain adaptation algorithm based on tensor decomposition; Xu Shuyan, Han Lixin, Xu Guoxia; Computer Science (Issue 12), pp. 95-100 *

Also Published As

Publication number Publication date
CN112966307A (en) 2021-06-15

Similar Documents

Publication Publication Date Title
Wang et al. An efficient and privacy-preserving outsourced support vector machine training for internet of medical things
US10635833B2 (en) Uniform-frequency records with obscured context
WO2021051610A1 (en) Data training method, apparatus and system
US20210058229A1 (en) Performing computations on sensitive data while guaranteeing privacy
CN109670626B (en) Prediction model distribution method and prediction model distribution system
US9524370B2 (en) Method for privacy-preserving medical risk test
US11769584B2 (en) Face reattachment to brain imaging data
Stripelis et al. Scaling neuroscience research using federated learning
US11868506B2 (en) Systems and methods for implementing a secure database for storing a patient operational longitudinal record
CN112966307B (en) Medical privacy data protection method based on federal learning tensor factorization
Alabdulkarim et al. A Privacy-Preserving Algorithm for Clinical Decision-Support Systems Using Random Forest.
CN112465819A (en) Image abnormal area detection method and device, electronic equipment and storage medium
CN112289448A (en) Health risk prediction method and device based on joint learning
WO2021159814A1 (en) Text data error detection method and apparatus, terminal device, and storage medium
Reynolds et al. Three‐dimensional visualization of skin lymphatic drainage patterns of the head and neck
Sun et al. Privacy-preserving self-helped medical diagnosis scheme based on secure two-party computation in wireless sensor networks
Xiang et al. BMIF: Privacy-preserving blockchain-based medical image fusion
Qamar Healthcare data analysis by feature extraction and classification using deep learning with cloud based cyber security
CN113849828A (en) Anonymous generation and attestation of processed data
Chen et al. Hadoop-based healthcare information system design and wireless security communication implementation
CN112259238A (en) Electronic device, disease type detection method, apparatus, and medium
Zhang et al. SIP: An efficient and secure information propagation scheme in e-health networks
CN113591154B (en) Diagnosis and treatment data de-identification method and device and query system
US20240106627A1 (en) Computer-implemented method for providing an encrypted dataset providing a global trained function, computer-implemented method for recovering personal information, computer system and computer program
CN117150562B (en) Blood glucose monitoring method, device, equipment and storage medium based on blockchain

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant