CN114489950A - Component adapting method and device, electronic equipment and storage medium - Google Patents

Component adapting method and device, electronic equipment and storage medium

Info

Publication number
CN114489950A
CN114489950A
Authority
CN
China
Prior art keywords
federal learning
information
task object
task
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210099403.0A
Other languages
Chinese (zh)
Inventor
伊世林
卞阳
杜浩
朱崇炳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Fudata Technology Co ltd
Original Assignee
Shanghai Fudata Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Fudata Technology Co ltd filed Critical Shanghai Fudata Technology Co ltd
Priority to CN202210099403.0A priority Critical patent/CN114489950A/en
Publication of CN114489950A publication Critical patent/CN114489950A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. a local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/60 Software deployment
    • G06F 8/65 Updates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioethics (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Stored Programmes (AREA)

Abstract

The application provides a component adaptation method and apparatus, an electronic device, and a storage medium, wherein the method comprises: acquiring different task object information from a component container; and acquiring target federal learning information, and adaptively converting the task object information according to the target federal learning information to obtain a converted task object. In the implementation of this scheme, the target federal learning information is obtained and the task object information is adaptively converted according to it into a task object used for creating and running the federal learning actual service. An adaptation conversion step is thereby inserted between the federal learning actual service and the component container, so that when the component container in the privacy computing platform is upgraded or modified, only the adaptation conversion service needs to be upgraded or modified correspondingly. The adaptation conversion step decouples the dependency between the federal learning actual service and the component container, improving the problem of excessive coupling between the task object and the federal learning actual service.

Description

Component adapting method and device, electronic equipment and storage medium
Technical Field
The present application relates to the technical field of federal learning, machine learning, and software engineering, and in particular, to a component adaptation method, apparatus, electronic device, and storage medium.
Background
The privacy computing platform is a federal learning information processing platform proposed to address data privacy leakage; its main goal is to solve the data-silo problem caused by privacy protection.
At present, a privacy computing platform processes a data stream as follows: a flow task is created through the engineering management module of the data stream, and after the flow task passes through a component container, the flow engine directly calls the federal learning actual service. In practice, however, when a component container in the privacy computing platform is upgraded or modified, developers of the federal learning actual service must make corresponding secondary modifications to remain compatible with the change; that is, the task objects running in the component container are excessively coupled with the federal learning actual service.
Disclosure of Invention
An object of the embodiments of the present application is to provide a component adaptation method, apparatus, electronic device, and storage medium, which are used to improve the problem that task objects running in a component container are excessively coupled with the federal learning actual service.
In a first aspect, an embodiment of the present application provides a component adaptation method, comprising: acquiring different task object information from the component container; and acquiring target federal learning information, and adaptively converting the task object information according to the target federal learning information to obtain a converted task object. In the implementation of this scheme, the target federal learning information is obtained and the task object information is adaptively converted according to it into a task object used for creating and running the federal learning actual service. An adaptation conversion step is thereby inserted between the federal learning actual service and the component container, so that when the component container in the privacy computing platform is upgraded or modified, only the adaptation conversion service needs to be upgraded or modified correspondingly. The adaptation conversion step decouples the dependency between the federal learning actual service and the component container, improving the problem of excessive coupling between the task object and the federal learning actual service.
In an optional implementation manner of the first aspect, performing adaptive conversion on task object information according to target federal learning information to obtain a converted task object includes: disassembling task object information to obtain object data in a tree format; and carrying out adaptation conversion on the object data in the tree format according to the target federal learning information to obtain a converted task object.
In the implementation process of the scheme, the task object information is disassembled to obtain the object data in the tree format, and the object data in the tree format is subjected to adaptive conversion according to the target federal learning information, so that the dependency relationship between the federal learning actual service and the component container is effectively decoupled, and the problem of excessive coupling between the task object and the federal learning actual service is solved.
In an optional implementation manner of the first aspect, the disassembling of the task object information includes: disassembling the task object information using the JavaScript Object Notation (JSON) format or the eXtensible Markup Language (XML) format.
In an optional implementation manner of the first aspect, the object data in the tree format includes: at least one data record, the data record including an identifier and a specific value; carrying out adaptive conversion on object data in a tree format according to target federal learning information, wherein the adaptive conversion comprises the following steps: judging whether the identifier of the data record is in the target federal learning information; if yes, moving the position of the data record in the object data in the tree format according to the target federal learning information.
In the implementation process of this scheme, when the identifier of a data record is in the target federal learning information, the position of that data record in the tree-format object data is moved according to the target federal learning information, so that the dependency between the federal learning actual service and the component container is effectively decoupled, and the problem of excessive coupling between the task object and the federal learning actual service is improved.
In an optional implementation manner of the first aspect, after determining whether the identifier of the data record is included in the target federal learning information, the method further includes: and if the identifier of the data record is not in the target federal learning information, removing the data record from the object data in the tree format.
In the implementation process of the scheme, the data records are removed from the object data in the tree format under the condition that the identifiers of the data records are not in the target federal learning information, so that the dependency relationship between the federal learning actual service and the component container is effectively decoupled, and the problem of excessive coupling between the task object and the federal learning actual service is solved.
In an optional implementation manner of the first aspect, after the converted task object is obtained, the method further includes: creating and running the task corresponding to the target federal learning by using the converted task object. In the implementation of this scheme, the converted task object is used to create and run the task corresponding to the target federal learning, so that the program interface of the target federal learning can run without modification, thereby decoupling the federal learning actual service from the component container and reducing the development and maintenance difficulty of the federal learning actual service.
In a second aspect, an embodiment of the present application provides a component adapting apparatus, including: the task information acquisition module is used for acquiring different task object information from the component container; and the task adaptation conversion module is used for acquiring target federal learning information and performing adaptation conversion on the task object information according to the target federal learning information to obtain a converted task object.
In an optional implementation manner of the second aspect, the task adaptation conversion module includes: the object data acquisition module is used for disassembling the task object information to obtain object data in a tree format; and the task object conversion module is used for carrying out adaptive conversion on the object data in the tree format according to the target federal learning information to obtain a converted task object.
In an optional implementation manner of the second aspect, the object data obtaining module includes: a task object disassembling module, configured to disassemble the task object information using the JavaScript Object Notation (JSON) format or the eXtensible Markup Language (XML) format.
In an optional implementation manner of the second aspect, the object data in the tree format includes: at least one data record, the data record including an identifier and a specific value; a task object conversion module comprising: the data record judging module is used for judging whether the identifier of the data record is in the target federal learning information or not; and the data record moving module is used for moving the position of the data record in the object data in the tree format according to the target federal learning information if the identifier of the data record is in the target federal learning information.
In an optional implementation manner of the second aspect, the task object conversion module further includes: and the data record removing module is used for removing the data record from the object data in the tree format if the identifier of the data record is not in the target federal learning information.
In an optional implementation manner of the second aspect, the component adapting apparatus further includes: and the task creating and running module is used for creating and running a task corresponding to the target federal learning by using the converted task object.
In a third aspect, an embodiment of the present application provides an electronic device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor which, when executed by the processor, perform the method described above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method described above.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of the scope; for those skilled in the art, other relevant drawings may be obtained from these drawings without inventive effort.
Fig. 1 is a schematic diagram of an application scenario to which a component adaptation method provided in an embodiment of the present application is applied;
FIG. 2 is a flow chart of a component adaptation method provided by an embodiment of the present application;
FIG. 3 illustrates a schematic diagram of a component adaptation service in a privacy computing platform provided by an embodiment of the present application;
fig. 4 is a schematic diagram illustrating task object information in JSON format according to an embodiment of the present application;
FIG. 5 is a schematic diagram of target federated learning information provided by an embodiment of the present application;
FIG. 6 is a diagram illustrating several adaptive transformation methods provided by an embodiment of the present application;
fig. 7 is a schematic structural diagram of a component adapting device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of configurations. Thus, the following detailed description of the embodiments is not intended to limit the claimed scope but merely represents selected embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without creative effort fall within the protection scope of the embodiments of the present application.
It is to be understood that "first" and "second" in the embodiments of the present application are used to distinguish similar objects; those skilled in the art will appreciate that these terms do not denote any order, quantity, or importance.
Before introducing the component adaptation method provided in the embodiment of the present application, some concepts related to the embodiment of the present application are introduced:
federal Machine Learning (FML), also known as federal Learning, joint Learning, or league Learning, is a Machine Learning technique, specifically, one that trains algorithms on multiple distributed edge devices or servers that have local data samples. This approach is significantly different from traditional centralized machine learning techniques that upload all local data sets onto one server, whereas more classical decentralized approaches typically assume that the local data samples are all equally distributed. Federal machine learning is a machine learning framework, and can effectively help a plurality of organizations to perform data use and machine learning modeling under the condition of meeting the requirements of user privacy protection, data safety and laws and regulations.
Front end and back end are general terms describing a system or process: the front end collects input and presents output, while the back end processes the input to produce the output. For example, the interface style and visual presentation rendered in a browser belong to the front end, while the business-logic and micro-service processing performed by a computer program on the server belong to the back end.
JavaScript Object Notation (JSON) is a lightweight data exchange format. JSON is based on a subset of ECMAScript, the scripting-language specification published by Ecma International, and stores and represents data in a text format that is completely independent of any programming language.
eXtensible Markup Language (XML) is a subset of the Standard Generalized Markup Language (SGML); it is a markup language used to structure electronic documents.
Domain-Specific Language (DSL) refers to a computer language dedicated to a particular application domain.
It should be noted that the component adaptation method provided in the embodiments of the present application may be executed by an electronic device, i.e., a device terminal or a server capable of executing a computer program. Device terminals include, for example, smart phones, personal computers, tablet computers, personal digital assistants, and mobile internet devices. A server is a device that provides computing services over a network, such as an x86 server or a non-x86 server; non-x86 servers include mainframes, minicomputers, and UNIX servers.
Please refer to fig. 1, which is a schematic diagram of an application scenario to which the component adaptation method provided in an embodiment of the present application is applied. The privacy computing platform in the figure comprises: a front end, engineering management of flows, a flow engine, the federal learning service, and so on. The front end receives a user's operations of creating, saving, and modifying a flow chart, automatically converts the modified flow chart into a workflow, and hands the workflow to the engineering management of flows for processing (i.e., the front end's create, save, select-template, and modify operations are executed), yielding a workflow described in a domain-specific language. The flow engine then parses this workflow to generate a directed acyclic graph (DAG); after arrangement and optimization, the DAG is stored in the component container in the form of task object information, and the federal learning actual service is executed and issued according to that task object information.
From the above, the federal learning actual service is strongly associated with the flow engine: the flow engine parses and executes the directed acyclic graph (DAG) and schedules component execution through the component container according to state-machine changes, so the task object information in the component container is strongly bound to the federal learning actual service. In other words, the operation of the federal learning actual service depends on the component container, and the component container depends on the flow engine. When the task object running in the component container is excessively coupled with the federal learning actual service, the component adaptation method can be used to insert an adaptation conversion step between the federal learning actual service and the component container; this step decouples their dependency and thereby improves the coupling problem.
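As one illustration of the arrangement step described above, a flow engine could topologically order the components of the DAG before their task object information is stored in the component container. This is a minimal sketch using Python's standard graphlib; the four-component graph is an invented example, not part of the patent's disclosure:

```python
from graphlib import TopologicalSorter

# Each key is a component of the DAG; its value is the set of components
# it depends on (an illustrative federated-learning pipeline).
dag = {
    "intersect": set(),
    "feature_engineering": {"intersect"},
    "train": {"feature_engineering"},
    "evaluate": {"train"},
}

# static_order yields the components in a dependency-respecting order,
# i.e. the order in which the container could schedule their execution.
order = list(TopologicalSorter(dag).static_order())
print(order)
# ['intersect', 'feature_engineering', 'train', 'evaluate']
```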
Please refer to fig. 2, which is a schematic flow chart of the component adaptation method provided in an embodiment of the present application. The main idea is to obtain target federal learning information and adaptively convert the task object information according to it into a task object used for creating and running the federal learning actual service; the adaptation conversion step thereby inserted between the federal learning actual service and the component container decouples their dependency, improving the problem of excessive coupling between the task object and the federal learning actual service. The component adaptation method may include:
step S110: different task object information is obtained from the component container.
The embodiment of step S110 is, for example: an engine in the privacy computing platform parses the workflow described in the domain-specific language; the directed acyclic graph (DAG) generated by parsing is arranged, optimized, and stored in the component container in the form of task object information, after which different task object information can be read from the component container through the engine. The task object information is generated in the component container by the flow engine's parsing of the domain-specific-language workflow, and refers to the metadata of one task execution in the component container.
Step S120: and acquiring target federal learning information, and performing adaptive conversion on task object information according to the target federal learning information to obtain a converted task object.
Please refer to fig. 3, which is a schematic diagram of the component adaptation service in the privacy computing platform provided by an embodiment of the present application. The embodiment of step S120 is, for example: target federal learning information is obtained, and the component adaptation service in the privacy computing platform adaptively converts the task object information according to it to obtain a converted task object, i.e., a task object executable by the federal learning actual service. From the viewpoint of process scheduling, the task object information is delivered linearly by the upstream component (i.e., the component agent in the flow engine) to the current component (i.e., the component adaptation service); the component adaptation service thus receives the task object information and adaptively converts it according to the target federal learning information, as detailed below.
As an optional implementation manner of step S120, performing adaptive conversion on task object information according to the target federal learning information to obtain a converted task object, including:
step S121: and disassembling the task object information to obtain object data in a tree format.
Please refer to fig. 4, which is a schematic diagram of task object information in JSON format provided in an embodiment of the present application. The task object information may include input information and static information, and may be in a tree format (i.e., a format expressed as a tree data structure) such as JSON or XML.
The embodiment of step S121 is, for example: the task object information is disassembled using the JavaScript Object Notation (JSON) format to obtain object data in a tree format, such as: {"ware": {"wareID": "123", "version": "456"}, "jobA": {"jobId": "789"}, "inputs": [{}, {}], "setting": {}, "info": "task information"}. The object data in the tree format includes at least one data record, each data record comprising an identifier (i.e., the key in a key-value pair) and a specific value (i.e., the value in a key-value pair); here, version is the identifier of a data record and 456 is its specific value, indicating that the version number is 456.
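The disassembly in step S121 can be sketched as follows. This is a minimal illustration with hypothetical names (the function disassemble and the sample JSON are not from the patent): a JSON string is parsed into tree-format object data and flattened into data records, each pairing an identifier path with a specific value.

```python
import json

def disassemble(task_object_info):
    """Parse task object information and flatten it into data records."""
    tree = json.loads(task_object_info)          # parse into a tree
    records = []

    def walk(node, path):
        if isinstance(node, dict):
            for identifier, value in node.items():
                walk(value, path + (identifier,))   # key = identifier
        elif isinstance(node, list):
            for i, item in enumerate(node):
                walk(item, path + (str(i),))
        else:
            records.append((path, node))            # leaf = specific value

    walk(tree, ())
    return tree, records

info = '{"ware": {"wareID": "123", "version": "456"}, "jobA": {"jobId": "789"}}'
tree, records = disassemble(info)
print(records)
# [(('ware', 'wareID'), '123'), (('ware', 'version'), '456'), (('jobA', 'jobId'), '789')]
```

The flat records make the later adaptation steps (moving, removing, or filling individual identifiers) straightforward to express.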
Step S122: and carrying out adaptation conversion on the object data in the tree format according to the target federal learning information to obtain a converted task object.
Please refer to fig. 5, which is a schematic diagram of target federal learning information provided in an embodiment of the present application; since the target federal learning information (i.e., the task object information required by the target federal learning service) may not match the task object information stored in the component container, the object data in the tree format needs to be adaptively converted according to the format of the target federal learning information to obtain the converted task object.
Please refer to fig. 6 for a schematic diagram of several adaptive transformation methods provided in the embodiment of the present application; there are many adaptation conversion manners in step S122, including but not limited to the following:
the first adaptive conversion manner, the moved or removed adaptive conversion manner, may specifically include: determining whether the identifiers (e.g., ware id and info) of the data records are in the target federal learning information; if the identifier of the data record is in the target federal learning information, the position of the data record in the object data in the tree format is moved according to the target federal learning information, for example: the ware ID is in the task object information and the target federal learning information at the same time, and only the positions of the two are different, so that the data record corresponding to the ware ID needs to be moved to the position. If the identifier of the data record is not in the target federal learning information, removing the data record from the object data in the tree format, for example: the ware ID is only in the task object information but not in the target federal learning information, so that the data record corresponding to the ware ID needs to be removed from the object data in the tree format.
The second adaptation conversion manner is splitting, specifically for example: a data record whose identifier is inputs has a specific value containing two input values, so it needs to be split into the form of two separate input records required by the target federal learning information (i.e., the data record corresponding to input1 and the data record corresponding to input2 in the figure).
The third adaptation conversion manner is filling, specifically for example: the identifier author is only in the target federal learning information and not in the task object information, so the data record corresponding to author needs to be filled in; similarly, the data record whose identifier is other needs to be filled into the target federal learning information.
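The three adaptation conversion manners above can be sketched together as one pass over flat object data. Everything here is an illustrative assumption rather than the platform's actual format: target_info is imagined as a map of prescribed positions plus a map of default values to fill.

```python
def adapt(object_data, target_info):
    """Apply move-or-remove, split, and fill to flat object data."""
    converted = {}

    # Move or remove: a record whose identifier appears in the target
    # federal learning information is moved to the section it prescribes;
    # any other record is removed.
    for identifier, value in object_data.items():
        if identifier == "inputs":
            continue                       # handled by the split step below
        if identifier in target_info["positions"]:
            section = target_info["positions"][identifier]
            converted.setdefault(section, {})[identifier] = value

    # Split: one record holding several input values becomes one record
    # per input value (input1, input2, ...).
    for i, value in enumerate(object_data.get("inputs", []), start=1):
        converted[f"input{i}"] = value

    # Fill: identifiers required by the target federal learning information
    # but absent from the task object information get default values.
    for identifier, default in target_info["defaults"].items():
        converted.setdefault(identifier, default)

    return converted

object_data = {"wareID": "123", "version": "456",
               "inputs": ["a.csv", "b.csv"], "info": "task information"}
target_info = {"positions": {"wareID": "meta", "info": "meta"},
               "defaults": {"author": "", "other": {}}}
print(adapt(object_data, target_info))
# {'meta': {'wareID': '123', 'info': 'task information'}, 'input1': 'a.csv',
#  'input2': 'b.csv', 'author': '', 'other': {}}
```

In this run, version is removed (its identifier is absent from the prescribed positions), wareID and info are moved under meta, the two inputs are split into input1 and input2, and author and other are filled with defaults.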
In the implementation process of this scheme, the target federal learning information is obtained and the task object information is adaptively converted according to it into a task object used for creating and running the federal learning actual service. An adaptation conversion step is thereby inserted between the federal learning actual service and the component container, so that when the component container in the privacy computing platform is upgraded or modified, only the adaptation conversion service needs to be upgraded or modified correspondingly. The adaptation conversion step decouples the dependency between the federal learning actual service and the component container, improving the problem of excessive coupling between the task object and the federal learning actual service.
As an optional implementation of the above component adaptation method, after the converted task object is obtained, a federal learning task may be created and run with it; the implementation includes:
Step S130: creating and running the task corresponding to the target federal learning by using the converted task object.
An embodiment of step S130 above is, for example: the task corresponding to the target federal learning is created and run using the converted task object, so that the program interface of the target federal learning can run without modification. This decouples the actual federal learning service from the component container and reduces the difficulty of developing and maintaining the actual federal learning service.
In the implementation of this scheme, the converted task object is used to create and run the task corresponding to the target federal learning, so that the program interface of the target federal learning runs unchanged, decoupling the actual federal learning service from the component container and reducing its development and maintenance difficulty.
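A hypothetical sketch of step S130, showing how the adaptation layer lets the target federated-learning interface stay unchanged while the component container's task format varies (both function names and the stand-in interface are assumptions for illustration):

```python
def run_target_task(task_object: dict) -> str:
    """Stand-in for the target federal learning program interface;
    it is never modified when the component container changes."""
    return f"running task for ware {task_object['ware_id']}"

def create_and_run(raw_task_info: dict, target_info: dict) -> str:
    """Adapt first, so run_target_task's expectations always hold."""
    converted = {k: raw_task_info.get(k, v) for k, v in target_info.items()}
    return run_target_task(converted)

print(create_and_run({"ware_id": "w-01", "extra": 1},
                     {"ware_id": None, "author": "unknown"}))
# running task for ware w-01
```

Only `create_and_run` (the adaptation side) would need changes when the container's task format evolves; `run_target_task` is untouched, which is the decoupling the scheme describes.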
Please refer to fig. 7 for a schematic structural diagram of the component adapting device provided in the embodiment of the present application. The embodiment of the present application provides a component adapting device 200, including:
the task information obtaining module 210 is configured to obtain different task object information from the component container.
The task adaptation conversion module 220 is configured to obtain the target federal learning information and adaptively convert the task object information according to it, obtaining a converted task object.
Optionally, in an embodiment of the present application, the task adaptation converting module includes:
and the object data acquisition module is used for disassembling the task object information to obtain object data in a tree format.
And the task object conversion module is used for carrying out adaptive conversion on the object data in the tree format according to the target federal learning information to obtain a converted task object.
Optionally, in an embodiment of the present application, the object data obtaining module includes:
and the task object disassembling module is used for disassembling the task object information by using a script object numbered notation JSON format or an extensible markup language XML format.
Optionally, in this embodiment of the present application, the object data in the tree format includes: at least one data record, the data record including an identifier and a specific value; a task object conversion module comprising:
and the data record judging module is used for judging whether the identifier of the data record is in the target federal learning information.
And the data record moving module is used for moving the position of the data record in the object data in the tree format according to the target federal learning information if the identifier of the data record is in the target federal learning information.
Optionally, in an embodiment of the present application, the task object conversion module further includes:
and the data record removing module is used for removing the data record from the object data in the tree format if the identifier of the data record is not in the target federal learning information.
Optionally, in an embodiment of the present application, the component adapting apparatus further includes:
and the task creating and running module is used for creating and running a task corresponding to the target federal learning by using the converted task object.
It should be understood that the apparatus corresponds to the component adaptation method embodiment above and can perform the steps of that method embodiment; the specific functions of the apparatus can be found in the description above, and a detailed description is omitted here to avoid redundancy. The apparatus includes at least one software functional module that can be stored in memory in the form of software or firmware, or solidified in the operating system (OS) of the apparatus.
An electronic device provided in an embodiment of the present application includes: a processor and a memory, the memory storing processor-executable machine-readable instructions, the machine-readable instructions when executed by the processor performing the method as above.
Embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method above is performed.
The computer-readable storage medium may be implemented by any type of volatile or nonvolatile Memory device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
In addition, functional modules of the embodiments in the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part. Furthermore, in the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the embodiments of the present application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an alternative embodiment of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present application, and all the modifications and substitutions should be covered by the scope of the embodiments of the present application.

Claims (10)

1. A method of component adaptation, comprising:
acquiring different task object information from the component container;
and acquiring target federal learning information, and performing adaptive conversion on the task object information according to the target federal learning information to acquire a converted task object.
2. The method according to claim 1, wherein the adaptively converting the task object information according to the target federal learning information to obtain a converted task object comprises:
disassembling the task object information to obtain object data in a tree format;
and performing adaptation conversion on the object data in the tree format according to the target federal learning information to obtain the converted task object.
3. The method of claim 2, wherein the disassembling the task object information comprises:
and disassembling the task object information by using a script object notation JSON format or an extensible markup language XML format.
4. The method of claim 2, wherein the object data in tree format comprises: at least one data record, the data record comprising an identifier and a specific value; the adapting and converting the object data in the tree format according to the target federal learning information comprises the following steps:
determining whether an identifier of the data record is in the target federated learning information;
and if so, moving the position of the data record in the object data in the tree format according to the target federal learning information.
5. The method of claim 4, further comprising, after the determining whether the identifier of the data record is included in the target federated learning information:
and if the identifier of the data record is not in the target federal learning information, removing the data record from the object data in the tree format.
6. The method of claim 1, further comprising, after said obtaining the transformed task object:
and creating and operating the task corresponding to the target federal learning by using the converted task object.
7. A component adapting device, comprising:
the task information acquisition module is used for acquiring different task object information from the component container;
and the task adaptation conversion module is used for acquiring target federal learning information and carrying out adaptation conversion on the task object information according to the target federal learning information to obtain a converted task object.
8. The apparatus of claim 7, wherein the task adaptation transformation module comprises:
the object data acquisition module is used for disassembling the task object information to obtain object data in a tree format;
and the task object conversion module is used for performing adaptive conversion on the object data in the tree format according to the target federal learning information to obtain the converted task object.
9. An electronic device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor, the machine-readable instructions, when executed by the processor, performing the method of any of claims 1 to 6.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the method of any one of claims 1 to 6.
CN202210099403.0A 2022-01-27 2022-01-27 Component adapting method and device, electronic equipment and storage medium Pending CN114489950A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210099403.0A CN114489950A (en) 2022-01-27 2022-01-27 Component adapting method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210099403.0A CN114489950A (en) 2022-01-27 2022-01-27 Component adapting method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114489950A true CN114489950A (en) 2022-05-13

Family

ID=81475588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210099403.0A Pending CN114489950A (en) 2022-01-27 2022-01-27 Component adapting method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114489950A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114691342A (en) * 2022-05-31 2022-07-01 蓝象智联(杭州)科技有限公司 Method and device for realizing priority scheduling of federated learning algorithm component and storage medium
CN117742928A (en) * 2024-02-20 2024-03-22 蓝象智联(杭州)科技有限公司 Algorithm component execution scheduling method for federal learning


Similar Documents

Publication Publication Date Title
US11080493B2 (en) Translation review workflow systems and methods
US7200668B2 (en) Document conversion with merging
US7340534B2 (en) Synchronization of documents between a server and small devices
US8583413B2 (en) Computer method and apparatus for chaining of model-to-model transformations
CN103294475A (en) Automatic service generating system and automatic service generating method both of which are based on imaging service scene and field template
CN114489950A (en) Component adapting method and device, electronic equipment and storage medium
CN110968325A (en) Applet conversion method and device
CN110244941B (en) Task development method and device, electronic equipment and computer readable storage medium
US20080222216A1 (en) Application migration file scanning and conversion
US9053450B2 (en) Automated business process modeling
CN110837356A (en) Data processing method and device
US9378115B2 (en) Base line for code analysis
US10656922B2 (en) Systems and methods for providing an application transformation tool
JP7346332B2 (en) Database migration method, database migration system, and database migration program
CN111126965B (en) Auditing rule optimization method, auditing rule optimization device, computer equipment and storage medium
CN111061469B (en) WEB front-end source code generation method and device, storage medium and processor
US11740995B2 (en) Source quality check service
US20190018663A1 (en) Code lineage tool
Le Zou et al. On synchronizing with web service evolution
US8543969B2 (en) Method and apparatus for developing a computer program utilizing a service oriented architecture
CN111273901B (en) File format and deployment method of machine learning model capable of being rapidly deployed online
CN111125743B (en) Authority management method, system, computer device and computer readable storage medium
CN113033177A (en) Method and device for analyzing electronic medical record data
CN115378996B (en) Method, device, equipment and storage medium for data transmission between systems
CN116226066B (en) Low code platform code synchronization method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination