CN114861931A - Front-end and back-end separated asynchronous federated learning method, system, device and storage medium

Front-end and back-end separated asynchronous federated learning method, system, device and storage medium

Info

Publication number
CN114861931A
CN114861931A (application number CN202210373640.1A)
Authority
CN
China
Prior art keywords
training
service
back end
local
global
Prior art date
Legal status
Pending
Application number
CN202210373640.1A
Other languages
Chinese (zh)
Inventor
郭子晗
由林麟
吴承瀚
林俊龙
李浩源
侯英威
Current Assignee
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202210373640.1A
Publication of CN114861931A

Classifications

    • G06N 20/00: Machine learning
    • G06F 16/23: Information retrieval of structured (e.g. relational) data; updating
    • G06F 16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
    • G06F 21/602: Protecting data; providing cryptographic facilities or services
    • G06F 9/5027: Allocation of resources (e.g. CPU) to service a request, the resource being a machine, e.g. CPUs, servers, terminals


Abstract

The invention discloses a front-end and back-end separated asynchronous federated learning method, system, device and storage medium, wherein the method comprises the following steps: according to a service to be trained, a service user sends service request information to the back end; the back end pushes the received service request information to a system manager; the system manager sets a global training strategy for the task to be executed corresponding to the service to be trained and sends the global training strategy to the back end, and the back end starts asynchronous federated learning of the task to be executed; the back end sends a training notification to the training participants; the training participants train local models and upload the trained local models to the back end; the back end generates a global model from the received local models and evaluates it; and when the evaluation passes, the back end updates the use state of the service to be trained and returns the updated use state to the service user. The front end and the back end of the asynchronous federated learning system provided by the embodiments of the application are relatively independent and loosely coupled, which effectively improves system compatibility.

Description

Front-end and back-end separated asynchronous federated learning method, system, device and storage medium
Technical Field
The application relates to the technical field of federated learning applications, and in particular to a front-end and back-end separated asynchronous federated learning method, system, device and storage medium.
Background
With the development of big data technology, large amounts of multi-source heterogeneous data give rise to scattered data islands. Fine-grained data islands not only increase the difficulty of data processing but also the difficulty of data supervision, and the risk of sensitive-information leakage rises accordingly. To alleviate these problems, federated learning frameworks have in recent years been used to train on user data, meeting the requirements of multi-source heterogeneous data fusion and user privacy protection.
Traditional federated learning adopts a synchronous optimization mode, which is inflexible and easily causes problems such as network congestion at the central server and inefficient global model updates. Accordingly, asynchronous-mode federated learning has become a new research direction. However, most existing federated learning frameworks do not support the asynchronous mode, are unsuitable for edge devices, and are usually developed for specific deployment scenarios, so their extensibility is poor and configuration is difficult.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art. To this end, the application provides a front-end and back-end separated asynchronous federated learning method, system, device and storage medium.
In a first aspect, an embodiment of the present application provides a front-end and back-end separated asynchronous federated learning method, applied to a front-end and back-end separated asynchronous federated learning system, where the system includes a front end and a back end, and the front end and the back end exchange data in interface form; the user identities of the front end comprise a service user, a system manager and training participants; the method comprises the following steps: according to the service to be trained, the service user sends service request information to the back end; the back end pushes the received service request information to the system manager; according to the received service request information, the system manager sets a global training strategy for the task to be executed corresponding to the service to be trained, and sends the global training strategy to the back end; according to the global training strategy, the back end starts asynchronous federated learning of the task to be executed; during asynchronous federated learning, the back end sends a training notification to the training participants; according to the training notification, the training participants train local models and upload the trained local models to the back end; the back end generates a global model from the received local models and evaluates the global model; when the evaluation passes, the back end updates the use state of the service to be trained and returns the updated use state to the service user; after receiving the use state, the service user sends a local service request corresponding to the service to be trained to the back end; according to the local service request, the back end sends the global model to the service user; and the service user receives the global model and performs local service calculation to obtain a service calculation result, which is visualized on the front-end interface.
Optionally, the method further comprises: after the back end receives the service request information, the back end records the service request information in a back-end database; the back end sends response information to the service user and declares the response information to be an information stream; according to the response information, the service user updates the front-end interface; after the use state is received, the information-stream channel is closed, and a pop-up prompt is shown or a local service request button is provided on the front-end interface, where the local service request button is an interface button for sending the local service request to the back end.
Optionally, the training participants training local models according to the training notification and uploading the trained local models to the back end comprises: the training notification comprises a special event identifier corresponding to the task to be executed and a local model upload interface; according to the special event identifier, the training participant sends a user information query request to the back end; according to the user information query request, the back end judges whether the training participant is training the task to be executed for the first time; if so, the back end returns the global training strategy, the global encryption model, the local training container image and the container configuration file corresponding to the task to be executed to the training participant; the training participant deploys a local training container according to the received local training container image and container configuration file and stores the global training strategy; the training participant imports the global encryption model into the local training container and trains it according to the global training strategy to obtain the local model; and the training participant uploads the local model to the back end through the local model upload interface.
Optionally, the method further comprises: according to the local service request, the back end sends a local computing container image and the container configuration file to the service user; the service user deploys a local computing container according to the local computing container image and the container configuration file; and the service user receiving the global model and performing local service calculation to obtain a service calculation result specifically comprises: importing the global model into the local computing container, and running the local computing container to obtain the service calculation result.
Optionally, the method further comprises: when the evaluation does not pass, the back end receives a new local model, updates the global model according to the new local model, and re-evaluates the updated global model.
In a second aspect, an embodiment of the present application provides a front-end and back-end separated asynchronous federated learning system, comprising a front end and a back end, where the front end and the back end exchange data in interface form. The front end comprises a service user terminal, a training participant terminal and a system manager terminal; the back end comprises a service user module, a training participant module and a system manager module. The service user terminal is used for the service user to call the service user module; the service user module is used for sending service request information for the service to be trained to the back end, sending a local service request to the back end, and performing local service calculation. The training participant terminal is used for a training participant to call the training participant module; the training participant module is used for training a local model according to the training notification sent by the back end and uploading the trained local model to the back end. The system manager terminal is used for a system manager to call the system manager module; the system manager module is used for setting the global training strategy of the service to be trained and sending the global training strategy to the back end.
Optionally, the interface communication mechanism of the system includes an instant response mode and an asynchronous processing mode.
Optionally, the front end further comprises a universal terminal, and the back end further comprises a universal function module; the universal terminal is used for users to call the universal function module; the universal function module provides at least one of the functions of user registration and login, interface resource response, database management, data statistics processing, and data encryption and decryption.
In a third aspect, an embodiment of the present application provides a front-end and back-end separated asynchronous federated learning apparatus, comprising: at least one processor; and at least one memory for storing at least one program; when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the front-end and back-end separated asynchronous federated learning method described above.
In a fourth aspect, an embodiment of the present application provides a computer storage medium in which a processor-executable program is stored, the program, when executed by the processor, being configured to implement the front-end and back-end separated asynchronous federated learning method described above.
The beneficial effects of the embodiments of the application are as follows. The method provided by the application is applied to a front-end and back-end separated asynchronous federated learning system, where the system includes a front end and a back end and the user identities of the front end comprise a service user, a system manager and training participants. The method comprises the following steps: according to the service to be trained, the service user sends service request information to the back end; the back end pushes the received service request information to the system manager; according to the received service request information, the system manager sets a global training strategy for the task to be executed corresponding to the service to be trained and sends it to the back end; according to the global training strategy, the back end starts asynchronous federated learning of the task to be executed; during asynchronous federated learning, the back end sends a training notification to the training participants; the training participants train local models according to the training notification and upload the trained local models to the back end; the back end generates a global model from the received local models and evaluates it; when the evaluation passes, the back end updates the use state of the service to be trained and returns the updated use state to the service user; after receiving the use state, the service user sends a local service request corresponding to the service to be trained to the back end; according to the local service request, the back end sends the global model to the service user; and the service user receives the global model and performs local service calculation to obtain a service calculation result, which is visualized on the front-end interface. Because the front end and the back end of the asynchronous federated learning system provided by the embodiments of the application are separated and exchange data in interface form, they are relatively independent and loosely coupled; front-end devices are therefore not limited by the back end, can adapt to configuration, development and deployment in complex environments, and the multi-end compatibility of the system is effectively improved. In addition, the asynchronous federated learning method of the embodiments of the application provides the front-end roles of service user, training participant and system manager, adapts well to multi-threaded asynchronous processing of the asynchronous federated learning procedure by different terminal devices and different users in a multi-end environment, and provides a feasible implementation scheme for multi-end edge devices to participate in asynchronous federated learning.
Drawings
The accompanying drawings are included to provide a further understanding of the claimed subject matter and are incorporated in and constitute a part of this specification, illustrate embodiments of the subject matter and together with the description serve to explain the principles of the subject matter and not to limit the subject matter.
Fig. 1 is a system architecture diagram of a front-end and back-end separated asynchronous federated learning system provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of a service user terminal interface according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a training participant terminal interface according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a system administrator terminal interface according to an embodiment of the present application;
Fig. 5 is a flowchart illustrating the steps of a front-end and back-end separated asynchronous federated learning method provided in an embodiment of the present application;
Fig. 6 is a schematic diagram of the module division of the module interface layer according to an embodiment of the present application;
Fig. 7 is a schematic flowchart of a service user initiating service request information according to an embodiment of the present application;
Fig. 8 is a schematic flowchart of a system administrator configuring a global training strategy and initiating asynchronous federated learning according to an embodiment of the present application;
Fig. 9 is a flowchart of the steps by which training participants train local models, provided by an embodiment of the present application;
Fig. 10 is a flowchart of the steps by which a system administrator and training participants complete asynchronous federated learning in an embodiment of the present application;
Fig. 11 is a flowchart of the steps by which a service user performs local service calculation according to an embodiment of the present application;
Fig. 12 is a schematic diagram of a front-end and back-end separated asynchronous federated learning device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that although functional block divisions are provided in the system drawings and logical orders are shown in the flowcharts, in some cases, the steps shown and described may be performed in different orders than the block divisions in the systems or in the flowcharts. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
With the development of big data technology, large amounts of multi-source heterogeneous data give rise to scattered data islands; fine-grained data islands not only increase the difficulty of data processing but also the difficulty of data supervision, and the risk of sensitive-information leakage rises accordingly. To alleviate these problems, federated learning frameworks have in recent years been used to train on user data, meeting the requirements of multi-source heterogeneous data fusion and user privacy protection.
Traditional federated learning adopts a synchronous optimization mode: in each training round, the central server synchronously sends the global model to multiple clients, the clients train the model on local data and return the updated models to the central server, and the central server waits for all clients to upload their local models before performing global model aggregation and updating. It can be understood that because the global model is sent to many clients synchronously, the central server may consume excessive resources in a short time and network congestion may occur; moreover, the synchronous optimization mode updates the global model inefficiently and is difficult to control. Accordingly, asynchronous-mode federated learning has become a new research direction. For example, in the related art, frameworks such as FATE initiated by the AI department of WeBank, PySyft from OpenMined, and Flower from the University of Cambridge support the asynchronous mode.
However, federated learning frameworks that support the asynchronous mode are few, and most frameworks neither support the asynchronous mode nor suit edge devices. Moreover, federated learning frameworks are usually developed for a specific deployment scenario (for example, an enterprise's self-developed federated learning system project); for reasons of security, special purpose and the like, they are generally poor in extensibility, complex to configure, and difficult to use in practice.
Based on this, the embodiments of the present application provide a front-end and back-end separated asynchronous federated learning method, system, device and storage medium. The method is applied to a front-end and back-end separated asynchronous federated learning system, where the system includes a front end and a back end and the user identities of the front end comprise a service user, a system manager and training participants. The method comprises the following steps: according to the service to be trained, the service user sends service request information to the back end; the back end pushes the received service request information to the system manager; according to the received service request information, the system manager sets a global training strategy for the task to be executed corresponding to the service to be trained and sends it to the back end; according to the global training strategy, the back end starts asynchronous federated learning of the task to be executed; during asynchronous federated learning, the back end sends a training notification to the training participants; the training participants train local models according to the training notification and upload the trained local models to the back end; the back end generates a global model from the received local models and evaluates it; when the evaluation passes, the back end updates the use state of the service to be trained and returns the updated use state to the service user; after receiving the use state, the service user sends a local service request corresponding to the service to be trained to the back end; according to the local service request, the back end sends the global model to the service user; and the service user receives the global model and performs local service calculation to obtain a service calculation result, which is visualized on the front-end interface. Because the front end and the back end of the system are separated and exchange data in interface form, they are relatively independent and loosely coupled; front-end devices are therefore not limited by the back end, can adapt to configuration, development and deployment in complex environments, and the multi-end compatibility of the asynchronous federated learning system provided by the embodiments of the application is effectively improved. In addition, the method provides the front-end roles of service user, training participant and system manager, adapts well to multi-threaded asynchronous processing of the asynchronous federated learning procedure by different terminal devices and different users in a multi-end environment, and provides a feasible implementation scheme for multi-end edge devices to participate in asynchronous federated learning.
The embodiments of the present application will be further explained with reference to the drawings.
First, for convenience of explanation, unless otherwise specified, every "system" described below is the front-end and back-end separated asynchronous federated learning system provided in the embodiments of the present application, and every "method" described below is the front-end and back-end separated asynchronous federated learning method provided in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a schematic diagram of a system architecture of an asynchronous federated learning system with separated front and back ends provided in an embodiment of the present application, and as shown in fig. 1, the system architecture of the system provided in the embodiment of the present application includes, but is not limited to, an interactive terminal layer, a module interface layer, a container deployment layer, a system data layer, and a basic algorithm layer.
In the system, the front end generally refers to terminal devices that have an operation interface and local data storage and data processing systems; it serves static resources from local data storage, completes local service processing through the local data processing system, and sends HTTP requests to the back end. The back end generally refers to the device responsible for receiving, parsing and processing requests, usually a server; the back end performs operations such as data processing and resource response based on the front end's requests.
Therefore, under the system's front-end/back-end separated development mode, the interactive terminal layer sits at the front end and completes the interactive operations between users and the system, realizing the issuing of front-end device requests and the visualization of back-end server responses through interface calls and data communication. The module interface layer sits at the back end, adopts technologies including but not limited to Python for function implementation, receives requests issued by the front end, executes the back-end function modules, and calls the contents of the algorithm library or the container pool as needed to complete back-end resource responses. The front end calls interfaces through the interactive terminal layer and the back end executes functions through the module interface layer, so the front end and the back end exchange data in interface form, either as information streams or as file streams. Therefore, by setting different communication protocols, the system of the embodiments of the application can realize cross-language and cross-platform management of multi-terminal devices, further improving the compatibility of the asynchronous federated learning system.
The interface communication mechanism of the module interface layer comprises an instant response mode and an asynchronous processing mode. In the instant response mode, the front end initiates an HTTP request, the back end processes the request logically and responds with the processing result, and the communication session ends. In the asynchronous processing mode, the front end initiates an HTTP request and the back end declares the response data to be an information stream; the front end opens a channel to stay connected with the back end, and while logically processing the request the back end can continuously push processing results to the front end until processing finishes and an information-stream end mark is declared in the response data, whereupon the front end closes the channel and the communication session ends.
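As an illustration of the asynchronous processing mode, the following is a minimal sketch of a back-end interface that declares its response to be an information stream and keeps pushing the service state until an end mark. It assumes a Flask back end with a server-sent-events style stream; the application specifies Python for the back end, but Flask, the endpoint path, the status values and the helper shown here are illustrative.

```python
import json
import time

from flask import Flask, Response

app = Flask(__name__)

_polls = {"count": 0}

def query_service_status(service_id: str) -> str:
    # Stand-in for reading the service state from the back-end database;
    # reports "available" after a few polls so the example terminates.
    _polls["count"] += 1
    return "available" if _polls["count"] >= 3 else "training"

@app.route("/api/services/<service_id>/status")
def service_status_stream(service_id: str) -> Response:
    def event_stream():
        while True:
            status = query_service_status(service_id)
            # Keep pushing intermediate states over the open channel.
            yield f"data: {json.dumps({'status': status})}\n\n"
            if status == "available":
                # Declare the information-stream end mark; the front end
                # closes the channel when it sees this.
                yield "data: [DONE]\n\n"
                break
            time.sleep(1.0)
    return Response(event_stream(), mimetype="text/event-stream")
```

A sketch of the front-end side of this exchange is given after the next two paragraphs.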
In addition, as the operational basis supporting the front-end/back-end separated system, the basic algorithm layer, the container deployment layer and the system data layer are arranged correspondingly at the front end and the back end. The basic algorithm layer stores the system's core algorithms, forming a core algorithm library that supports system operation. The container deployment layer stores the system's core container images and configuration files in the back-end server, forming a core container pool that supports system operation, and deploys packaged containers on front-end devices as needed during the federated learning process; for example, a container deployment module is arranged at the front end and the container pool at the back end. The container images in the back-end container pool virtualize part of the required applications and dependency packages (including but not limited to PyTorch machine-learning scripts and their environments) so that they can be migrated, meeting the requirements of the asynchronous federated learning system while reducing the dependency of application programs on the system's operating environment. The system data layer stores the asynchronous federated learning system's data on front-end devices and the back-end server and provides database operations, adopting MySQL as the database management system. For example, a front-end database is deployed on the front-end device to store the asynchronous federated learning system's front-end device data, including but not limited to end-user information and the user data used for local training or calculation, and to provide database management tools or commands supporting the functional implementation of the terminal services. A back-end database is deployed in the back-end server to store the asynchronous federated learning system's back-end server data, including but not limited to service request information, service availability status, training participant information and global model information, and to provide database management tools or commands supporting the functional implementation of the back-end interfaces.
In some embodiments, the system builds the communication connections and interface calls between the front end and the back end with RESTful APIs, completing the front-end terminal devices' calls to the back-end server's function modules, with JSON as the front-end/back-end data exchange format. To ensure data security, the front end and the back end adopt technologies including but not limited to Token and Session for identity authentication and management.
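Continuing the back-end sketch above, the front-end side of the exchange might look as follows, assuming the `requests` library. The base URL, header name and token handling are illustrative; the application states only that RESTful APIs, JSON and Token/Session authentication are used.

```python
import requests

BASE_URL = "http://backend.example.com/api"  # hypothetical back-end address

def wait_for_service(service_id: str, token: str) -> None:
    # Token-based identity is carried on each interface call.
    headers = {"Authorization": f"Bearer {token}"}
    with requests.get(
        f"{BASE_URL}/services/{service_id}/status",
        headers=headers,
        stream=True,        # keep the information-stream channel open
        timeout=(5, None),  # no read timeout while the stream is live
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines(decode_unicode=True):
            if not line:
                continue  # skip keep-alive blank lines
            if line == "data: [DONE]":  # information-stream end mark
                break                   # the front end closes the channel
            print("pushed state:", line.removeprefix("data: "))
```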
Further, the interactive terminal layer and the module interface layer are divided according to user identity. The interactive terminal layer arranged at the front end comprises a service user terminal, a training participant terminal and a system manager terminal; the module interface layer arranged at the back end comprises a service user module, a training participant module and a system manager module corresponding to the three terminals. The three types of terminals in the interactive terminal layer are described first below.
The service user terminal is used for the service user to call the service user module. It can be deployed on a mobile phone using interface construction technologies including but not limited to JAVA and XML; when the service user uses the terminal on a mobile phone, interfaces such as the service list page, the service result page and the service-available-state pop-up can be browsed and operated on its display interface. Referring to fig. 2, fig. 2 is a schematic diagram of a service user terminal interface according to an embodiment of the present application, showing the service list page and the service-available-state pop-up. The training participant terminal is used for the training participant to call the training participant module. It can likewise be deployed on a mobile phone using interface construction technologies including but not limited to JAVA and XML; when the training participant uses the terminal on a mobile phone, the task information list can be browsed and operated on its display interface. Referring to fig. 3, fig. 3 is a schematic diagram of a training participant terminal interface according to an embodiment of the present application, showing the task information list. The system administrator terminal is used for the system administrator to call the system administrator module. It is generally deployed as a web page using HTML, CSS and JavaScript as interface construction technologies; when the system administrator uses the terminal, the request list page, the task information list page and the strategy-setting pop-up can be browsed and operated on its display interface. Referring to fig. 4, fig. 4 is a schematic diagram of a system administrator terminal interface according to an embodiment of the present application, showing the task information list.
Corresponding to the three terminals in the interactive terminal layer, the module interface layer has three matching modules to be called. The service user module is used for sending service request information for the service to be trained to the back end, sending a local service request to the back end, and performing local service calculation; the training participant module is used for training the local model according to the training notification sent by the back end and uploading the trained local model to the back end; the system manager module is used for setting the global training strategy of the service to be trained and sending the global training strategy to the back end. The more specific functions and sub-module divisions of the three modules in the module interface layer are described further below in conjunction with the method of the embodiments of the present application.
In addition, the interactive terminal layer also comprises a universal terminal, and the module interface layer comprises a corresponding universal function module. The universal terminal is used for users to call the universal function module; the universal function module provides at least one of the functions of user registration and login, interface resource response, database management, data statistics processing, and data encryption and decryption.
In the following, based on the front-end and back-end separated asynchronous federated learning system of one or more embodiments, the front-end and back-end separated asynchronous federated learning method proposed in the embodiments of the present application is explained.
Referring to fig. 5, fig. 5 is a flowchart illustrating steps of an asynchronous federated learning method with separated front and back ends according to an embodiment of the present application, where the method is applied to an asynchronous federated learning system with separated front and back ends according to an embodiment of the present application, where the system includes a front end and a back end, and data exchange is performed between the front end and the back end in an interface form; the user identities of the front end comprise a service user, a system manager and a training participant; the following describes a specific process of implementing asynchronous federated learning by cooperation of these three user identities with reference to fig. 5, where the method includes, but is not limited to, S500-S5100:
S500, according to the service to be trained, the service user sends service request information to the back end;
specifically, in order to meet the requirement of implementing asynchronous federal learning in a multi-terminal and multi-user multithreading mode, the embodiment of the application designs three user identities at the front end, namely a service user, a system manager and a training participant. A service user can initiate a service training request of federal learning according to requirements and perform local calculation service according to a trained global model; configuring a specific training strategy for federal learning by a system manager; and the training participants carry out local model training locally. In this way, the whole process of the federal learning is split into request-configuration-training-service, which is asynchronously carried out by different users, thereby greatly improving the flexibility and portability of the federal learning system.
At the beginning of the whole federated learning process, the service user first sends service request information to the back end according to the service to be trained. In the embodiments of the application, the service to be trained refers to a service whose task is trainable but has not yet been trained; a service user can make a service request for such a service.
The following describes the process of the service user initiating the service request information with reference to fig. 6 and fig. 7. Fig. 6 is a schematic diagram of the module division of the module interface layer provided in the embodiment of the present application, and fig. 7 is a schematic flowchart of a service user initiating service request information provided in the embodiment of the present application. As shown in fig. 6, the module interface layer comprises a service user module, a training participant module and a system manager module; the service user module comprises a service request issuing sub-module and a local service calculation sub-module, the training participant module comprises a local model training sub-module, and the system manager module comprises a training strategy setting sub-module and an asynchronous federated training sub-module.
As shown in fig. 7, the service user accesses the service user terminal on a mobile phone; referring to fig. 2, the interface contains a service list page. When the service user performs an interactive operation on the terminal, for example clicking a service in the service list, the service user module in the module interface layer is called, and the service request issuing sub-module in the service user module sends service request information to the back end in asynchronous mode. The back end records the service request information in the back-end database and then sends response information to the service user, declaring the response to be an information stream; the information-stream channel between front end and back end stays open, the interface of the service user terminal is updated asynchronously and displays a waiting state, and the back end keeps reading the back-end database and continuously pushes the service state of the current service to be trained to the service user terminal. If the service state of the current service to be trained becomes available, the back end returns the service state to the service user terminal through the information-stream channel and notifies the terminal to close the channel; the service user terminal then closes the information-stream channel and shows a pop-up prompt or provides a local service request button on the front-end interface so the service user can use the trained service.
It can be understood that if the service list of the service user terminal contains a service with a trained model, the service user can click that service directly to perform local calculation. In addition, if the service list contains non-trainable services, the service user terminal prompts with a pop-up when the service user clicks on them.
S510, the back end pushes the received service request information to the system manager;
specifically, according to the step S500, the back end receives the service request message sent by the service user. In the embodiment of the present application, the system administrator terminal may continuously receive the information stream of the back end, so that the back end pushes the received service request information to the system administrator, and the request list of the system administrator terminal may asynchronously update the service request information, for example, the request list displays information of a task to be executed corresponding to the service to be trained that has issued the request.
S520, according to the received service request information, a system manager sets a global training strategy of a task to be executed corresponding to the service to be trained, and sends the global training strategy to a back end;
specifically, the system manager terminal asynchronously updates the request list according to the received service request information. Referring to fig. 8, fig. 8 is a schematic flow chart illustrating a process of configuring a global training policy and initiating asynchronous federated learning by a system administrator according to an embodiment of the present application. As shown in fig. 8, the system administrator interacts with the terminal interface of the system administrator, the sub-module may pop up a setting window of the task to be executed on the interface, the system administrator may browse and operate the setting window, the system administrator fills the global training policy of the task to be executed in the window, and sends the HTTP request with the global training policy to the back end through the interactive button, and invokes the training policy setting sub-module in fig. 6. And after receiving the global training strategy, the training strategy setting submodule sends the global training strategy to the back-end database, and the back-end database stores or updates the global training strategy. The back end continuously reads the database and returns a response to the system manager terminal, the current response is an information flow, the information flow channels of the system manager terminal and the back end are kept open, the back end can continuously push the training state of the current task to be executed to the system manager terminal, and the system manager terminal asynchronously updates the interface into the training state.
S530, according to the global training strategy, the back end starts asynchronous federated learning of the task to be executed;
specifically, referring to fig. 8, after the global training strategy is set, the asynchronous federated training submodule in fig. 6 is automatically called to start training the task to be executed. The asynchronous federation sub-module queries the back-end database for training participants who may participate in the current round of global training, e.g., to determine the number of training participants and the specific front-end devices, etc.
S540, during asynchronous federated learning, the back end sends a training notification to the training participants;
specifically, referring to step S530, the asynchronous federation submodule determines the training participants participating in the current round of training through the backend database, and the training participants receive the training notification sent by the background because the training participants will always be connected to the backend after the training participants are online.
S550, the training participants train local models according to the training notification and upload the trained local models to the back end;
specifically, referring to fig. 9, fig. 9 is a flowchart illustrating steps provided by an embodiment of the present application for training a participant to train a local model. As shown in fig. 9, since the training notification received by the training participant includes the special event identifier corresponding to the task to be executed and the local model upload interface, the local model training sub-module in fig. 6 is invoked, and sends a user information query request to the database at the back end according to the special event identifier. The back-end database returns information used by the training participants to the local model training submodule, the local model training submodule judges whether the training participants train the tasks to be executed for the first time according to the user information query request, if not, the back end only returns the current global encryption model to the training participant terminal, and according to the global encryption model; if yes, the back end returns a global training strategy, a global encryption model, a local training container mirror image and a container configuration file corresponding to the task to be executed to the training participant terminal.
It can be appreciated that the global training strategy is pushed by the back-end database; the untrained global model is encrypted by the basic algorithm layer and the resulting global encryption model is pushed to the training participant terminal; and the local training container image and the container configuration file are pushed by the back-end container pool. All of these files are pushed to the front-end training participant terminal by the local model training sub-module.
In some embodiments, the encryption and decryption algorithms in the basic algorithm layer include, but are not limited to, the AES algorithm used to encrypt the model files transferred between the front and back ends and the RSA algorithm used to encrypt the AES keys between the front and back ends.
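A minimal sketch of such a hybrid scheme is given below, using the third-party `cryptography` package. The application names AES and RSA but not a library, mode or key size, so AES-256-GCM, RSA-2048 and OAEP padding are assumptions.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_model(model_bytes: bytes, receiver_public_key):
    # AES encrypts the model file transferred between front and back ends.
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, model_bytes, None)
    # RSA encrypts the AES key itself, as described above.
    wrapped_key = receiver_public_key.encrypt(aes_key, OAEP)
    return wrapped_key, nonce, ciphertext

def decrypt_model(wrapped_key, nonce, ciphertext, receiver_private_key):
    aes_key = receiver_private_key.decrypt(wrapped_key, OAEP)
    return AESGCM(aes_key).decrypt(nonce, ciphertext, None)

# Round trip over a stand-in "model file".
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped, nonce, ct = encrypt_model(b"serialized global model",
                                   receiver_key.public_key())
assert decrypt_model(wrapped, nonce, ct, receiver_key) == b"serialized global model"
```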
The training participants deploy local training containers on the training participant terminals according to the received local training container images and container configuration files and save the global training strategy. The training participant terminal then imports the global encryption model and the local training data read from the front-end database into the local training container and runs the container so that it trains the global encryption model according to the global training strategy; after the script in the local training container finishes running, the local model is obtained. The training participant terminal exports the trained local model to the local device and then uploads it to the back end through the local model upload interface.
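As an illustration of what the script inside a local training container might do, the following sketch assumes PyTorch (which the application names as an example of a containerized dependency); the model, the stand-in training data and the strategy fields are hypothetical.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def train_local_model(global_model: nn.Module, strategy: dict) -> nn.Module:
    # Stand-in for local training data read from the front-end database.
    xs, ys = torch.randn(64, 10), torch.randint(0, 2, (64,))
    loader = DataLoader(TensorDataset(xs, ys), batch_size=16, shuffle=True)

    optimizer = torch.optim.SGD(global_model.parameters(),
                                lr=strategy["learning_rate"])
    loss_fn = nn.CrossEntropyLoss()
    # Train for the number of epochs given by the global training strategy.
    for _ in range(strategy["local_epochs"]):
        for batch_x, batch_y in loader:
            optimizer.zero_grad()
            loss = loss_fn(global_model(batch_x), batch_y)
            loss.backward()
            optimizer.step()
    return global_model

local_model = train_local_model(nn.Linear(10, 2),
                                {"learning_rate": 0.01, "local_epochs": 5})
# Export to the local device before uploading via the upload interface.
torch.save(local_model.state_dict(), "local_model.pt")
```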
S560, the back end generates a global model according to the received local model and evaluates the global model;
specifically, when the back end receives the global model sent by the training participant terminal, the asynchronous federal training submodule in fig. 6 uses an asynchronous federal aggregation algorithm to aggregate the local models to generate the global model, and evaluates the updated global model, if the evaluation fails, the asynchronous federal training submodule continues to organize a new round of global training, receives and receives a new local model from the training participant terminal, updates the global model according to the new local model, and re-evaluates the updated global model.
In some embodiments, the asynchronous federated aggregation algorithm is an asynchronous aggregation algorithm based on weight summaries and version awareness, which records at the back end the model parameters uploaded by each training participant together with their versions and updates them dynamically. When a training participant uploads locally trained model parameters, it uploads their version number as well; when the back end receives the model parameters, it updates that node's latest model parameters in the weight summary and updates their version.
In some embodiments, the specific procedure of model parameter aggregation in the asynchronous federated aggregation algorithm is as follows: 1) compute the latest version among all current training participants; 2) compute an aggregation weight for each training participant from the difference between its current version and the latest version, and aggregate the global model; 3) compute a version-difference threshold: if the sum of the version differences of all training participants is greater than the threshold, release the aggregated global model to all training participants; otherwise release the global model only to the training participants who uploaded model parameters. The calculation formulas for model parameter aggregation in the asynchronous federated aggregation algorithm are:
w_latest = Σ_{i=1}^{n} α^(v_latest − server_v[i]) · server_w[i]

w'_latest = w_latest / Σ_{i=1}^{n} α^(v_latest − server_v[i])

wherein i denotes the i-th training participant and n denotes the total number of training participants; v_latest denotes the latest version of the uploaded model parameters; α denotes the version proportion hyper-parameter, α ∈ (0, 1); server_v[i] denotes the version of the parameters currently uploaded by the i-th training participant, with initial version 1; server_w[i] denotes the model uploaded by the i-th training participant; w_latest denotes the version-weighted aggregated global model parameters; and w'_latest denotes the normalized global model parameters.
The update formulas for v_latest, server_v[i] and server_w[i] are:

v_latest = v_latest + 1

server_v[i] = v_latest

server_w[i] = w

wherein w is the model parameter uploaded by a training participant. When the server receives model parameters uploaded by a training participant, it increments v_latest; updates server_v[i], recording the version of the model parameters uploaded by the corresponding training participant; and updates server_w[i], recording the model parameters uploaded by that participant.
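Since the aggregation formulas above are reconstructed from the surrounding variable definitions, the following sketch should likewise be read as one plausible implementation of the staleness-weighted, version-aware aggregation rather than the application's verbatim algorithm; the initialization of v_latest and the NumPy representation are assumptions.

```python
import numpy as np

class AsyncAggregator:
    def __init__(self, n_participants: int, dim: int, alpha: float = 0.5):
        assert 0.0 < alpha < 1.0              # version proportion hyper-parameter
        self.alpha = alpha
        self.v_latest = 1                     # assumed starting version
        self.server_v = np.ones(n_participants, dtype=int)  # initial version is 1
        self.server_w = np.zeros((n_participants, dim))     # per-participant models

    def receive_upload(self, i: int, w: np.ndarray) -> None:
        # Update rules from the text: v_latest = v_latest + 1;
        # server_v[i] = v_latest; server_w[i] = w.
        self.v_latest += 1
        self.server_v[i] = self.v_latest
        self.server_w[i] = w

    def aggregate(self) -> np.ndarray:
        # Staler uploads (larger version gap) get geometrically smaller weight.
        weights = self.alpha ** (self.v_latest - self.server_v)
        w_latest = (weights[:, None] * self.server_w).sum(axis=0)
        return w_latest / weights.sum()       # normalized global parameters

agg = AsyncAggregator(n_participants=3, dim=4)
agg.receive_upload(0, np.ones(4))
agg.receive_upload(2, 2.0 * np.ones(4))
print(agg.aggregate())
```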
S570, when the evaluation is passed, the back end updates the use state of the service to be trained and returns the updated use state to the service user;
specifically, referring to step S560, when the updated global model evaluation passes, the global training of the task to be currently executed is finished. And the back-end database updates the use state of the current service to be trained and updates the task training state of the current task to be executed. Then, the back end informs the updated task training state to a system manager terminal, and the system manager terminal refreshes a task list correspondingly and changes the state of the current task into the trained state; and the back end informs the updated service using state of the service to be trained to the service user terminal, the service user terminal correspondingly refreshes the service list, changes the using state of the current service into the trained service, and performs popup notification in the interface.
At this point, the system administrator and the training participants have completed all of their steps in the asynchronous federated learning method of the embodiments of the present application. Referring to fig. 10, fig. 10 is a flowchart of the steps by which a system administrator and training participants complete asynchronous federated learning in an embodiment of the present application. As shown in fig. 10, the system administrator terminal first sends a task request to the local model training sub-module; the local model training sub-module reads the available training participants from the back-end database and notifies the training participant terminals to start training. After a training participant finishes training the local model, the local model is encrypted and uploaded to the local model training sub-module at the back end; the back end decrypts it with the encryption/decryption algorithms of the basic algorithm layer and aggregates it into the global model with the asynchronous federated aggregation algorithm. The local model training sub-module evaluates the global model and repeats training until the evaluation passes; it then informs the system administrator that the current training is finished, and the training participant terminals clear the related containers, scripts and other files accordingly.
S580, after receiving the use state, the service user sends a local service request corresponding to the service to be trained to the back end;
specifically, after the service user receives the use status of the service corresponding to the trained service, referring to the content of step S500, the service user terminal interface performs a pop-up window prompt or provides a local service request button for the service user to use the trained service. The service user can send a local service request corresponding to the service to be trained to the backend by clicking the local service request button.
S590, according to the local service request, the back end sends a global model to the service user;
specifically, referring to fig. 11, fig. 11 is a flowchart illustrating steps of a service user performing local service computation according to an embodiment of the present disclosure. As shown in fig. 11, the service user clicks the local service request button by the service user, sends a local service request corresponding to the service to be trained to the back end, invokes the local service computation submodule in fig. 6, reads the local computation container image and the container configuration file from the back end container pool, reads the global model from the back end database, encrypts the global model by the encryption and decryption algorithm of the basic algorithm layer to obtain a global encryption model, and then sends the global model, the local computation container image and the container configuration file to the service user terminal of the front end by the local service computation submodule of the back end. The service user deploys the local computation container according to the local computation container mirror image and the container configuration file, decrypts the global encryption model to obtain a global model, reads data related to local service computation in the front-end database, introduces the data and the global model into the local computation container, and operates the local computation container to obtain a service computation result.
S5100, the service user receives the global model, performs local service computation to obtain a service computation result, and visualizes the service computation result on the front-end interface;
specifically, following step S590, the service user receives the global model and performs local service computation to obtain a service computation result; the service user terminal then exports the local computation result and visualizes it on the service user terminal interface.
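Step S5100 only requires that the result be exported and visualized; the concrete widget is terminal-specific. A desktop-style stand-in, assuming a dictionary-shaped result and the matplotlib package, could be:

```python
import json
import matplotlib.pyplot as plt

def export_and_visualize(result: dict, path: str = "service_result.json") -> None:
    """Export the local computation result, then render a simple bar chart."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(result, f)
    plt.bar(list(result.keys()), list(result.values()))
    plt.title("Local service computation result")
    plt.show()
```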
Through steps S500-S5100, the embodiment of the present application describes a front-end and back-end separated asynchronous federated learning method in combination with a front-end and back-end separated asynchronous federated learning system. The embodiment proposes three user identities at the front end: the service user, the system administrator, and the training participant. The method comprises: according to the received service request information, the system administrator sets a global training strategy for the task to be executed corresponding to the service to be trained and sends the global training strategy to the back end; according to the global training strategy, the back end starts asynchronous federated learning of the task to be executed; during asynchronous federated learning, the back end sends a training notice to the training participants; the training participants train local models according to the training notice and upload the trained local models to the back end; the back end generates a global model from the received local models and evaluates it; when the evaluation passes, the back end updates the use state of the service to be trained and returns the updated use state to the service user; after receiving the use state, the service user sends a local service request corresponding to the service to be trained to the back end; according to the local service request, the back end sends the global model to the service user; and the service user receives the global model, performs local service computation to obtain a service computation result, and visualizes the service computation result on the front-end interface.
In the asynchronous federated learning system provided by the embodiment of the present application, the front end and the back end are separated and exchange data through interfaces, so that the two ends are relatively independent and loosely coupled. Front-end devices are therefore not constrained by the back end, can adapt to configuration, development, and deployment in complex environments, and the multi-end compatibility of the system is effectively improved. Moreover, the asynchronous federated learning method provided by the embodiment of the present application defines the roles of service user, training participant, and system administrator at the front end, adapts well to multithreaded asynchronous processing of the asynchronous federated learning process by different terminal devices (such as web pages, the iOS system, the Android system, and the like) and different users in a multi-end environment, and provides a feasible, complete workflow for multi-end edge devices to participate in asynchronous federated learning. Furthermore, the platform architecture of the embodiment is clear: the whole process follows a modular design, the module interfaces coordinate with one another, developers do not need to actively manage the service interfaces, and users can independently build new module interfaces based on their service requirements, so loose coupling is achieved in both the development and deployment stages of the system. Finally, the system employs container technology: container deployment layers are arranged at both the front end and the back end, and the code and environments required by federated learning local training, local computation, and similar processes are encapsulated in containers. These processes are therefore unaffected by the device's runtime environment, and the dependency of the application programs on the operating system is greatly reduced, giving the front-end and back-end separated asynchronous federated learning system provided by the embodiment of the present application the ability to be ported to users, enterprises, platforms, and other multi-party systems and to break down the data barriers between such systems.
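The container deployment layer is described abstractly. As one way a terminal might realize "deploy the container from the image and configuration file", the sketch below uses the third-party docker (docker-py) client; the image reference and the configuration keys are hypothetical names chosen for the example.

```python
import docker  # docker-py, the third-party Docker SDK for Python

def deploy_local_container(image_ref: str, config: dict):
    """Pull the container image delivered by the back end and start it with
    the environment and volume mounts taken from the configuration file."""
    client = docker.from_env()
    client.images.pull(image_ref)
    return client.containers.run(
        image_ref,
        detach=True,
        environment=config.get("environment", {}),
        volumes=config.get("volumes", {}),
    )
```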
Referring to fig. 12, fig. 12 is a schematic diagram of a front-end and back-end separated asynchronous federated learning apparatus according to an embodiment of the present application. The apparatus 1200 includes at least one processor 1210 and at least one memory 1220 for storing at least one program; fig. 12 takes one processor and one memory as an example.
The processor and the memory may be connected by a bus or by other means; fig. 12 takes a bus connection as an example.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The above-described apparatus embodiment is merely illustrative; the units described as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The embodiment of the present application also discloses a computer storage medium in which a processor-executable program is stored; when executed by a processor, the program implements the method provided by the present application.
One of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, as is known to those skilled in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.
While the preferred embodiments of the present invention have been described, the present invention is not limited to the above embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and such equivalent modifications or substitutions are included in the scope of the present invention defined by the claims.

Claims (10)

1. The front-end and back-end separated asynchronous federated learning method is applied to a front-end and back-end separated asynchronous federated learning system, the system comprises a front end and a back end, and data exchange is carried out between the front end and the back end in an interface form; the user identities of the front end comprise a service user, a system manager and a training participant; characterized in that the method comprises:
according to the service to be trained, the service user sends service request information to the back end;
the back end pushes the received service request information to the system manager;
according to the received service request information, the system manager sets a global training strategy of a task to be executed corresponding to the service to be trained, and sends the global training strategy to the back end;
according to the global training strategy, the back end starts asynchronous federal learning of the task to be executed;
during the asynchronous federal learning, the back-end sends a training notice to the training participants;
according to the training notice, the training participants train local models and upload the trained local models to the back end;
the back end generates a global model according to the received local model and evaluates the global model;
when the evaluation is passed, the back end updates the use state of the service to be trained and returns the updated use state to the service user;
after receiving the use state, the service user sends a local service request corresponding to the service to be trained to the back end;
according to the local service request, the back end sends the global model to the service user;
and the service user receives the global model and performs local service calculation to obtain a service calculation result, and the service calculation result is visualized on a front-end interface.
2. The front-end and back-end separated asynchronous federated learning method of claim 1, further comprising:
after the back end receives the service request information, the back end records the service request information in a back end database;
the back end sends response information to the service user and declares the response information as an information flow;
according to the response information, the service user updates the front-end interface;
after receiving the use state, the service user closes an information flow channel, and a popup prompt is shown or a local service request button is provided on the front-end interface;
wherein the local service request button is an interface button for sending the local service request to the back end.
3. The front-end and back-end separated asynchronous federated learning method of claim 1, wherein the training participants training local models according to the training notice and uploading the trained local models to the back end comprises:
the training notification comprises a special event identifier corresponding to the task to be executed and a local model uploading interface;
according to the special event identifier, the training participant sends a user information query request to the back end;
according to the user information query request, the back end judges whether the training participants train the task to be executed for the first time;
if the training participant trains the task to be executed for the first time, the back end returns the global training strategy, a global encryption model, a local training container image and a container configuration file corresponding to the task to be executed to the training participant;
the training participants deploy local training containers according to the received local training container image and container configuration file, and store the global training strategy;
the training participants import the global encryption model into the local training container, and train the global encryption model according to the global training strategy to obtain the local model;
and the training participants upload the local model to the back end through the local model uploading interface.
4. The front-end and back-end separated asynchronous federated learning method of claim 1, further comprising:
according to the local service request, the back end sends a local computing container image and a container configuration file to the service user;
the service user deploys a local computing container according to the local computing container image and the container configuration file;
the service user receives the global model and performs local service calculation to obtain a service calculation result, which specifically comprises the following steps:
importing the global model into the local computing container, and running the local computing container to obtain the service calculation result.
5. The front-end and back-end separated asynchronous federated learning method of any one of claims 1-4, wherein the method further comprises:
and when the evaluation is not passed, the back end receives a new local model, updates the global model according to the new local model, and re-evaluates the updated global model.
6. An asynchronous federated learning system with separated front and back ends, comprising a front end and a back end:
the front end and the back end exchange data in an interface mode;
the front end comprises a service user terminal, a training participant terminal and a system manager terminal;
the back end comprises a service user module, a training participant module and a system manager module;
the service user terminal is used for the service user to call the service user module;
the service user module is used for sending service request information of the service to be trained to the back end, sending a local service request to the back end and carrying out local service calculation;
the training participant terminal is used for a training participant to call the training participant module;
the training participant module is used for training a local model according to the training notice sent by the back end and uploading the trained local model to the back end;
the system manager terminal is used for a system manager to call the system manager module;
the system manager module is used for setting a global training strategy of a service to be trained and sending the global training strategy to the back end.
7. The front-end and back-end separated asynchronous federated learning system of claim 6, wherein the interface communication mechanisms of the system include an immediate response mode and an asynchronous processing mode.
8. The front-end and back-end separated asynchronous federated learning system of any one of claims 6-7, wherein the front end further comprises a general terminal and the back end further comprises a general function module;
the general terminal is used for a user to call the general function module;
the general function module is used for providing at least one function of user registration and login, interface resource response, database management, data statistics processing, and data encryption and decryption.
9. A front-end and back-end separated asynchronous federated learning device, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the front-end and back-end separated asynchronous federated learning method of any one of claims 1-5.
10. A computer storage medium having stored therein a processor-executable program, wherein the processor-executable program, when executed by a processor, is configured to implement the front-end and back-end separated asynchronous federated learning method of any one of claims 1-5.
CN202210373640.1A 2022-04-11 2022-04-11 Front-end and back-end separated asynchronous federal learning method, system, device and storage medium Pending CN114861931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210373640.1A CN114861931A (en) 2022-04-11 2022-04-11 Front-end and back-end separated asynchronous federal learning method, system, device and storage medium

Publications (1)

Publication Number Publication Date
CN114861931A true CN114861931A (en) 2022-08-05

Family

ID=82629200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210373640.1A Pending CN114861931A (en) 2022-04-11 2022-04-11 Front-end and back-end separated asynchronous federal learning method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN114861931A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination