CN112416887B - Information interaction method and device and electronic equipment - Google Patents

Information interaction method and device and electronic equipment Download PDF

Info

Publication number
CN112416887B
CN112416887B (Application CN202011299414.0A)
Authority
CN
China
Prior art keywords
log
federal learning
public domain
opposite end
predefined
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011299414.0A
Other languages
Chinese (zh)
Other versions
CN112416887A (en)
Inventor
李龙一佳
赵鹏
吴迪
刘小兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lemon Inc Cayman Island
Original Assignee
Lemon Inc Cayman Island
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lemon Inc Cayman Island filed Critical Lemon Inc Cayman Island
Priority to CN202011299414.0A priority Critical patent/CN112416887B/en
Publication of CN112416887A publication Critical patent/CN112416887A/en
Application granted granted Critical
Publication of CN112416887B publication Critical patent/CN112416887B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/18 File system types
    • G06F 16/1805 Append-only file systems, e.g. using logs or journals to store data
    • G06F 16/1815 Journaling file systems
    • G06F 16/17 Details of further file system functions
    • G06F 16/172 Caching, prefetching or hoarding of files
    • G06F 16/174 Redundancy elimination performed by the file system
    • G06F 16/1744 Redundancy elimination performed by the file system using compression, e.g. sparse files
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/20 Ensemble learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Debugging And Monitoring (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Embodiments of the disclosure disclose an information interaction method and apparatus, and an electronic device. One embodiment of the method comprises the following steps: acquiring a log generated by a local end in the federal learning process; determining a predefined public domain log from the obtained log, wherein the public domain log is a predefined log to be shared to an opposite end of federal learning; and sending the public domain log to the opposite end of federal learning. A new information interaction mode for federal learning is thereby provided.

Description

Information interaction method and device and electronic equipment
Technical Field
The disclosure relates to the technical field of internet, and in particular relates to an information interaction method, an information interaction device and electronic equipment.
Background
Federal machine learning (federated machine learning / federated learning), also known as federal learning or joint learning, is a machine learning framework under which the problem of different data owners collaborating without exchanging data is solved by designing a virtual model. The virtual model is the optimal model obtained as if all parties had aggregated their data, and each party serves its local objective according to the model. Because the data itself is not transferred, user privacy is not revealed and data regulations are not violated, which can effectively help multiple institutions use data and build machine learning models while meeting the requirements of user privacy protection, data security and government regulation.
Disclosure of Invention
This summary is provided to introduce concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, an embodiment of the present disclosure provides an information interaction method, including: acquiring a log generated by a local end in the federal learning process; determining a predefined public domain log from the obtained log, wherein the public domain log is a predefined log to be shared to an opposite end of federal learning; and sending the public domain log to an opposite end of federal learning.
In a second aspect, an embodiment of the present disclosure provides an information interaction apparatus, including: the acquisition unit is used for acquiring logs generated by the local end in the federal learning process; the determining unit is used for determining a predefined public domain log from the obtained logs, wherein the public domain log is a predefined log to be shared to an opposite end of federal learning; and the sending unit is used for sending the public domain log to an opposite end of federal learning.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the information interaction method as described in the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium, on which a computer program is stored, which program, when being executed by a processor, implements the steps of the information interaction method according to the first aspect.
According to the information interaction method, the information interaction device and the electronic equipment, the predefined public domain log is determined from the log of the local end, and then the public domain log is shared to the opposite end of federal learning, so that multi-terminal cooperation of federal learning can be realized under the condition that the private data related to a user is not revealed, and the efficiency of federal learning is improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of one embodiment of an information interaction method according to the present disclosure;
FIG. 2 is a schematic diagram of a federal learning framework;
FIG. 3 is an application scenario of the information interaction method according to the present disclosure;
FIG. 4 is a schematic structural view of one embodiment of an information interaction device according to the present disclosure;
FIG. 5 is an exemplary system architecture in which the information interaction method of one embodiment of the present disclosure may be applied;
fig. 6 is a schematic diagram of a basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that the modifications of "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Referring to fig. 1, a flow of one embodiment of an information interaction method according to the present disclosure is shown. The information interaction method is applied to the terminal equipment. The information interaction method as shown in fig. 1 comprises the following steps:
step 101, obtaining a log generated by the local end in the federal learning process.
In this embodiment, the execution body (e.g., a server) of the information interaction method may obtain a log generated by the local end in the federal learning process.
In this embodiment, federal machine learning (federated machine learning / federated learning), also known as federal learning or joint learning, is a machine learning framework under which the problem of different data owners collaborating without exchanging data is solved by designing a virtual model. The virtual model is the optimal model obtained as if all parties had aggregated their data, and each party serves its local objective according to the model. Because the data itself is not transferred, user privacy is not revealed and data regulations are not violated, which can effectively help multiple institutions use data and build machine learning models while meeting the requirements of user privacy protection, data security and government regulation. As a distributed machine learning paradigm, federal learning can effectively solve the problem of data islands, allowing participants to model jointly without sharing data, technically breaking data islands and realizing artificial intelligence (AI) collaboration.
As an example, referring to fig. 2, the concept of the local end is introduced to distinguish one participant of federal learning, as a node, from the other participants of federal learning. Referring to fig. 2, if the executing entity is the party having the Y samples, a series of actions performed by that party, such as loss value calculation, encryption, and model updating, belong to the local end. It will be appreciated that the loss value calculation, encryption, model updating, and so on described above may be performed on different physical machines, yet all belong to the local end of the party that owns the Y samples.
In this embodiment, a log may be understood in the sense generally known in the computer field: a log file records events that occur in a running operating system or other software. The types of events recorded may vary and are not limited herein.
In this embodiment, the log generated in the federal learning process may be used to record some relevant matters in the federal learning process.
As an example, logs generated during the federal learning process may be used to record various stages during the federal learning process. For example, matters of a sample alignment phase of federal learning may be recorded, matters of a federal learning model training phase may be recorded, and so on.
In other words, the specific content or form of the logs generated during the federal learning process is not limited herein.
Step 102, determining a predefined public domain log from the obtained logs.
In this embodiment, the execution body may determine a predefined public domain log from the obtained log.
Here, the log obtained in step 101 may be a plurality of logs, or may be understood as a log set including at least one log.
In this embodiment, the public domain log may be a log predefined to be shared to the opposite end of federal learning. In other words, a public domain log is a log that can be openly transmitted over the public network.
It should be noted that whether a log may be disclosed can be determined according to the actual application scenario, for example according to the local laws and regulations in force when this embodiment is implemented.
In this embodiment, the specific content included in the public domain log may vary according to the actual application scenario, which is not limited herein.
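Step 102 can be pictured as a simple filter over the obtained log set. The sketch below is illustrative only, not the patented implementation; the field names ("key", "payload") and the contents of the predefined key set are assumptions made for the example:

```python
# Hypothetical predefined set of public domain log keys (assumption for
# illustration; in practice this set is defined per application scenario
# and applicable regulations).
PUBLIC_DOMAIN_KEYS = {"latency_ms", "qps", "cpu_idle", "error_code"}

def select_public_domain(logs):
    """Return only the log entries predefined as shareable with the peer."""
    return [entry for entry in logs if entry.get("key") in PUBLIC_DOMAIN_KEYS]

logs = [
    {"key": "latency_ms", "payload": 37},
    {"key": "user_feature", "payload": "..."},   # private: never leaves the local end
    {"key": "error_code", "payload": "TIMEOUT"},
]
public = select_public_domain(logs)
```

Only the entries whose keys appear in the predefined set would then be forwarded in step 103; everything else stays at the local end.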
Referring to fig. 2, if the executing entity is the party having the Y sample, the opposite end of federal learning may be the party having the X sample.
Step 103, the public domain log is sent to the opposite end of federal learning.
In this embodiment, the execution body may send the common domain log to the opposite end of federal learning.
Here, the opposite end of federal learning may be a non-home end of federal learning.
In some application scenarios, federal learning may include two participants that are opposite each other.
In some application scenarios, federal learning may include at least two participants. For any participant, all ends in the federal learning other than its local end are opposite ends.
It should be noted that, in the information interaction method applied to federal learning provided by this embodiment, the predefined public domain log is determined from the logs of the local end and then shared to the opposite end of federal learning, so that multi-end collaboration in federal learning can be realized, and the efficiency of federal learning improved, without revealing private data related to users.
It should be noted that in some related technologies, no data interaction is performed between federal learning participants other than the parameters used to update the model. When a problem occurs in the federal learning execution system, the staff of the participants typically communicate person to person; that is, data that is unrelated to user data but beneficial to the stability of the federal learning system is not exchanged. The inventors realized that exchanging between federal learning participants the portion of the logs that is safe and unrelated to user data greatly reduces the communication cost of federal learning and greatly improves the coordination and efficiency between federal learning participants.
In some embodiments, the public domain log may include system related parameters.
Here, system-related parameters may be used to characterize system performance.
In some embodiments, the system-related parameters may include, but are not limited to, at least one of: system performance metrics and system frame metrics.
In some embodiments, the system performance metrics may include at least one of, but are not limited to: task time consumption, throughput, resource usage.
Here, task time consumption (latency) may refer to the time consumed at the service level.
Here, throughput (QPS, queries per second) may refer to the number of operations that can be executed in a period of time.
As an example, the resource usage may include, but is not limited to, at least one of: CPU idle rate and memory usage.
As an example, in a federal learning training scenario, if training takes too long, either end of federal learning can determine from the per-phase time consumption recorded in the public domain log at which stage of model training performance optimization is possible, and then optimize the training efficiency of both sides at that stage.
In some embodiments, the system frame metrics may include, but are not limited to, at least one of: failure error codes and timeout information.
As an example, in a federal learning data exchange scenario, if the success rate of the data exchange does not meet expectations, it can be determined whether the cause is an abnormality at the system level or a problem with the data content itself, for example by identifying the error codes in the public domain log.
From the above examples, it can be seen that federal learning may include multiple scenarios, where different public domain log entries may be selected for problem investigation.
It should be noted that, because the public domain log includes system-related parameters, the opposite end of federal learning can learn about the state of the federal learning process at the local end, which facilitates problem investigation and improves the speed of determining the cause of a problem.
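The system-related parameters described above can be pictured as one structured public domain log entry. The sketch below is a minimal illustration; the function name and field layout are assumptions, not taken from the disclosure:

```python
import time

def make_system_metrics_entry(latency_ms, qps, cpu_idle_pct, mem_mb,
                              error_code=None, timed_out=False):
    """Build one public domain log entry carrying only system-level
    parameters: performance metrics (latency, throughput, resource usage)
    and frame metrics (failure error code, timeout information)."""
    return {
        "timestamp": time.time(),
        "performance": {
            "latency_ms": latency_ms,      # task time consumption
            "qps": qps,                    # throughput
            "cpu_idle_pct": cpu_idle_pct,  # resource usage
            "mem_mb": mem_mb,
        },
        "frame": {
            "error_code": error_code,      # failure error code, if any
            "timed_out": timed_out,        # timeout condition information
        },
    }
```

Note that no field carries user data; only such entries would be eligible for the public domain.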
In some embodiments, the step 103 may include: packaging and compressing at least two public domain logs; and sending the packed and compressed public domain log to an opposite end of federal learning by using a public network channel.
Here, the number of the public domain logs packed at one time may be set according to an actual application scenario. As an example, ten logs may be packaged in batches, then compressed, and the packaged and compressed data may be sent to the opposite end of federal learning.
Here, the public network is defined relative to an intranet. A computer on an intranet obtains an IP address that is reserved on the Internet, whereas a computer on the public network obtains a public, unreserved Internet address. Computers on the public network and other computers on the Internet can access each other.
Here, the file size of the public domain logs can be reduced by packing and compressing at least two public domain logs. Transmitting the packed and compressed public domain logs over a public network channel reduces the amount of data transmitted on the public network relative to the logs at their original size.
It should be noted that, in the federal learning scenario, the largest resource bottleneck is the bandwidth of the public network communication, that is, the cost of the public network bandwidth is high. By packing a plurality of public domain logs and then compressing the logs for transmission, the overall time consumption and the transmission data volume of public domain log transmission can be reduced, and therefore the cost of model training based on federal learning is reduced.
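The packing-and-compressing step described above might be sketched as follows, assuming JSON-serializable log entries and gzip as the compression scheme (both are assumptions; the disclosure does not fix a serialization or compression format):

```python
import gzip
import json

def pack_and_compress(entries, batch_size=10):
    """Pack public domain log entries into batches of `batch_size`,
    serialize each batch as JSON, and gzip-compress it for transmission
    over the public network channel."""
    packets = []
    for i in range(0, len(entries), batch_size):
        batch = entries[i:i + batch_size]
        packets.append(gzip.compress(json.dumps(batch).encode("utf-8")))
    return packets

def unpack(packet):
    """Peer-side inverse: decompress and deserialize one packet."""
    return json.loads(gzip.decompress(packet).decode("utf-8"))
```

Batching amortizes per-message overhead, and compression reduces the bytes sent over the costly public network link.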
In some embodiments, the above method may further comprise: from the obtained logs, a predefined private domain log is determined.
Here, the private domain log is a predefined privacy log. In other words, the private domain log is a predefined log that must not be transmitted anywhere other than the local end.
It should be noted that which logs must not be transmitted can be determined according to the actual application scenario, for example according to the local laws and regulations in force when this embodiment is implemented.
It should be noted that determining the private domain logs and the public domain logs from the obtained logs may be understood as classifying the obtained logs by level. The private domain logs are visible only inside the local end, while the public domain logs can be exchanged between participants of federal learning. In this way, the communication cost between the federal learning participants can be reduced, and the maintenance cost of the federal learning framework reduced as well.
In some embodiments, the step 101 may further include: adding the obtained log into a cache queue; the method further comprises: and storing the information in the cache queue into a log storage area preset by the local end.
In some application scenarios, the local end may adopt a distributed deployment, that is, multiple devices perform the computation of the federal learning local end. In other words, a computer cluster may be employed to implement the local end's federal learning computation. A computer cluster may be divided into a plurality of nodes. Each node device may be deployed with a log collection instance, which actively or passively receives various log information from log sources.
The log information of each node may be sent to a cache queue as a log message.
Here, the cache queue may be used to cache log information.
It should be noted that caching log information in the cache queue makes it convenient to package the logs in batches and send the packaged log information, which can improve transmission efficiency.
Here, the log storage area may be used to store logs. The log storage area preset by the local end can be used for storing logs which can be obtained by the local end.
It should be noted that setting a log storage area at the local end centralizes the storage of the logs available to the local end, which facilitates problem investigation and query computation at the local end.
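The cache queue and log storage area described above can be sketched as a small collector object. This is a hypothetical illustration only; a real deployment would use a distributed message queue and durable storage rather than the in-process stand-ins used here:

```python
import queue

class LogCollector:
    """Local-end collector sketch: node log collection instances push
    entries into a cache queue; flush() drains the queue in batches
    into the local log storage area."""

    def __init__(self):
        self.cache = queue.Queue()
        self.storage = []            # stand-in for the preset log storage area

    def submit(self, entry):
        """Called by each node device's log collection instance."""
        self.cache.put(entry)

    def flush(self, batch_size=10):
        """Drain up to `batch_size` cached entries into centralized storage."""
        batch = []
        while not self.cache.empty() and len(batch) < batch_size:
            batch.append(self.cache.get())
        self.storage.extend(batch)   # centralized storage for later queries
        return batch
```

The same flush path could also hand public domain batches to the packing-and-compression step before transmission to the peer.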
In some embodiments, the above method may further comprise: and storing the log received from the opposite end to a log storage area preset by the local end.
It can be understood that the opposite terminal can also determine the public domain log generated by the opposite terminal itself and then send the public domain log to the local terminal.
As an example, referring to fig. 2, if a party having a Y sample is taken as the home terminal, a party having an X sample may be taken as the opposite terminal. The party possessing the X sample can send the public domain log generated by the party to the party possessing the Y sample.
It should be noted that obtaining the logs generated and sent by the opposite end and then storing them in the log storage area for centralized storage facilitates problem investigation by the local end, improves the speed at which the local end determines the cause when a problem occurs, and improves the stability of the system.
Referring to fig. 3, an application scenario of some embodiments of the present application is shown.
In fig. 3, federal learning participant 301 may include a log storage area, a log cache queue, and several node devices; federal learning participant 302 likewise may include a log storage area, a log cache queue, and a number of node devices. It will be appreciated that the log storage area may refer to a device that stores logs, and the log cache queue may likewise refer to a device that maintains a cache queue of logs; for convenience of explanation, these devices are referred to herein simply as the log storage area and the log cache queue.
Here, in federal learning participant 301, the node devices may send their logs to the log cache queue. Optionally, the logs are classified (i.e., it is determined whether they are public domain logs or private domain logs), either at the node device or by the device maintaining the log cache queue. The log cache queue may send the public domain logs of the local end to federal learning participant 302, and may send all logs of the local end, together with the logs received from federal learning participant 302, to the log storage area for storage.
Here, in federal learning participant 302, the node devices may likewise send their logs to the log cache queue. Optionally, the logs are classified (i.e., it is determined whether they are public domain logs or private domain logs), either at the node device or by the device maintaining the log cache queue. The log cache queue may send the public domain logs of the local end to federal learning participant 301, and may send all logs of the local end, together with the logs received from federal learning participant 301, to the log storage area for storage.
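The two-participant flow of FIG. 3 can be condensed into a toy sketch, with in-memory objects standing in for the node devices, cache queues, and storage areas (all names and fields are assumptions for illustration):

```python
class Participant:
    """Sketch of one federal learning participant: every local log is
    stored in the local storage area, and only entries whose keys are
    predefined as public domain are forwarded to the peer."""

    def __init__(self, public_keys):
        self.public_keys = public_keys
        self.storage = []                 # local log storage area
        self.peer = None                  # the opposite end

    def ingest(self, entry):
        self.storage.append(entry)        # all local logs stored locally
        if entry.get("key") in self.public_keys:
            self.peer.receive(entry)      # only public domain logs cross over

    def receive(self, entry):
        self.storage.append(entry)        # peer logs also stored centrally

a = Participant({"latency_ms"})
b = Participant({"latency_ms"})
a.peer, b.peer = b, a

a.ingest({"key": "latency_ms", "payload": 41})    # shared with b
a.ingest({"key": "user_feature", "payload": "secret"})  # stays at a
```

The private entry never leaves participant `a`, while the public domain entry ends up in both storage areas, mirroring the exchange depicted in FIG. 3.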
With further reference to fig. 4, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of an information interaction device, where the embodiment of the device corresponds to the embodiment of the method shown in fig. 1, and the device may be specifically applied to various electronic devices.
As shown in fig. 4, the information interaction device of the present embodiment includes: an acquisition unit 401, a determination unit 402, and a transmission unit 403. The acquisition unit is used for acquiring logs generated by the local end in the federal learning process; the determining unit is used for determining a predefined public domain log from the obtained logs, wherein the public domain log is a predefined log to be shared to an opposite end of federal learning; and the sending unit is used for sending the public domain log to an opposite end of federal learning.
In this embodiment, the specific processes of the acquiring unit 401, the determining unit 402, and the transmitting unit 403 of the information interaction device and the technical effects thereof may refer to the descriptions related to the steps 101, 102, and 103 in the corresponding embodiment of fig. 1, and are not repeated here.
In some embodiments, the public domain log includes system-related parameters.
In some embodiments, the system-related parameters include at least one of: system performance index and system frame index; wherein the system performance index comprises at least one of: task time consumption, throughput, resource usage; the system frame index includes at least one of: failure error code, timeout condition information.
In some embodiments, the sending the public domain log to an opposite end of federal learning includes: packaging and compressing at least two public domain logs; and sending the packed and compressed public domain log to an opposite end of federal learning by using a public network channel.
In some embodiments, the apparatus is further to: and determining a predefined private domain log from the logs, wherein the private domain log is a predefined secret log.
In some embodiments, the obtaining the log generated by the local end in the federal learning process includes: storing the obtained log into a cache queue; the apparatus is further for: and storing the information in the log cache queue to a log storage area preset by a local end.
In some embodiments, the apparatus is further to: and storing the log received from the opposite end to a log storage area preset by the local end.
Referring to fig. 5, fig. 5 illustrates an exemplary system architecture in which the information interaction method of one embodiment of the present disclosure may be applied.
As shown in fig. 5, the system architecture may include a server 501, a network 502, and a server 503. Network 502 is the medium used to provide communications links between servers 501 and 503. Network 502 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The server 501 may interact with the server 503 through the network 502 to receive or send messages, etc.
The servers 501 and 503 may be hardware or software. Various applications, such as a data alignment application and a model training application, may be installed on the servers 501 and 503. The applications in the servers 501 and 503 may be scheduled by the system and perform corresponding functions according to instructions, such as performing model training. A server may be a server providing various services, for example receiving messages sent by other servers and performing corresponding actions based on the received messages. The server 501 or 503 may be a single computer device or a computer cluster.
It should be noted that the information interaction method provided by the embodiments of the present disclosure may be executed by the server 501, and accordingly, the information interaction device may be disposed in the server 501. Alternatively, the information interaction method provided by the embodiments of the present disclosure may be executed by the server 503, and accordingly, the information interaction device may be disposed in the server 503.
It should be understood that the numbers of servers 501, networks 502, and servers 503 in fig. 5 are merely illustrative. There may be any number of servers 501, networks 502, and servers 503, as desired for the implementation.
Referring now to fig. 6, a schematic diagram of a configuration of an electronic device (e.g., a terminal device or server in fig. 5) suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 601.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a log generated by a local end in the federal learning process; determining a predefined public domain log from the obtained log, wherein the public domain log is a predefined log to be shared to an opposite end of federal learning; and sending the public domain log to an opposite end of federal learning.
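The three steps carried by those programs (acquire the local end's logs, filter them to the predefined public domain, send the result to the federated-learning peer) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the whitelist field names, the sample log schema, and the transport stub are all assumptions, since the patent does not define a concrete log format.

```python
import json

# Hypothetical whitelist of predefined public-domain fields (system-related
# parameters only); the patent does not specify a concrete log schema.
PUBLIC_DOMAIN_KEYS = {"task_time_ms", "throughput", "resource_usage",
                      "error_code", "timeout"}

def acquire_local_logs():
    # Stand-in for logs generated by the local end during federated learning.
    return [
        {"task_time_ms": 120, "throughput": 64, "user_id": "u-123"},  # mixed
        {"error_code": 504, "timeout": True},                         # public
    ]

def to_public_domain(entry):
    # Keep only predefined system-related fields; user-private data is dropped.
    return {k: v for k, v in entry.items() if k in PUBLIC_DOMAIN_KEYS}

def send_to_peer(payload):
    # Stand-in for transmission to the federated-learning opposite end.
    print(f"sending {len(payload)} bytes to peer")

public_logs = [p for p in (to_public_domain(e) for e in acquire_local_logs()) if p]
send_to_peer(json.dumps(public_logs).encode("utf-8"))
```

Note that filtering happens before anything leaves the local end, so fields like the hypothetical `user_id` never reach the peer.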
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit is not limited to the unit itself in some cases, and for example, the acquisition unit may also be described as "a unit that acquires a log generated by the local end in the federal learning process".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments in which the above features are replaced with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (9)

1. An information interaction method, comprising:
acquiring a log generated by a local end in the federal learning process;
determining a predefined public domain log from the obtained log, wherein the public domain log is a predefined log to be shared to an opposite end of federal learning; the public domain log comprises system-related parameters and does not comprise privacy data related to users;
and sending the public domain log to an opposite end of federal learning.
2. The method of claim 1, wherein the system-related parameters include at least one of: a system performance index and a system frame index; wherein
The system performance index includes at least one of: task time consumption, throughput, resource usage;
the system frame index includes at least one of: failure error code, timeout condition information.
3. The method of claim 1, wherein said sending the public domain log to an opposite end of federal learning comprises:
packaging and compressing at least two public domain logs;
and sending the packed and compressed public domain log to an opposite end of federal learning by using a public network channel.
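The pack-compress-send step of claim 3 can be sketched as below. This is a hypothetical illustration: the patent names neither a compression scheme nor a wire format, so gzip and newline-delimited JSON are assumptions.

```python
import gzip
import json

def pack_and_compress(logs):
    # Pack at least two public-domain log entries into one newline-delimited
    # payload, then compress it before it travels over the public network channel.
    if len(logs) < 2:
        raise ValueError("packing applies to at least two public domain logs")
    payload = "\n".join(json.dumps(e, sort_keys=True) for e in logs)
    return gzip.compress(payload.encode("utf-8"))

def decompress_and_unpack(blob):
    # Opposite-end inverse: decompress and split back into log entries.
    lines = gzip.decompress(blob).decode("utf-8").splitlines()
    return [json.loads(line) for line in lines]

blob = pack_and_compress([{"throughput": 64}, {"error_code": 504}])
```

Round-tripping through `decompress_and_unpack` recovers the original entries; batching many small log entries into one compressed payload reduces both the number of round trips and the bytes sent over the shared channel.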
4. The method according to claim 1, wherein the method further comprises:
determining, from the obtained logs, a predefined private domain log, wherein the private domain log is a predefined secret log.
5. The method of claim 1, wherein the obtaining the log generated by the home terminal during the federal learning process comprises:
storing the obtained log into a cache queue; and
the method further comprises:
storing the information in the log cache queue to a log storage area preset by the local end.
6. The method according to claim 1, wherein the method further comprises:
storing the log received from the opposite end to a log storage area preset by the local end.
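Claims 5 and 6 describe buffering logs in a cache queue and persisting them, together with logs received from the opposite end, to a preset local storage area. A minimal sketch, with the queue type, entry format, and file layout all assumed:

```python
import queue

log_queue = queue.Queue()  # in-memory log cache queue (hypothetical structure)

def enqueue_log(entry: str) -> None:
    # Obtained logs (local or received from the opposite end) enter the queue.
    log_queue.put(entry)

def flush_to_storage(path: str) -> int:
    # Drain the cache queue into the preset local log storage area,
    # appending one entry per line; returns how many entries were written.
    count = 0
    with open(path, "a", encoding="utf-8") as f:
        while True:
            try:
                f.write(log_queue.get_nowait() + "\n")
                count += 1
            except queue.Empty:
                break
    return count
```

Decoupling log production from persistence this way lets the federated-learning task keep running while writes to the storage area happen in batches.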
7. An information interaction device, comprising:
the acquisition unit is used for acquiring logs generated by the local end in the federal learning process;
the determining unit is used for determining a predefined public domain log from the obtained logs, wherein the public domain log is a predefined log to be shared to an opposite end of federal learning; the public domain log comprises system-related parameters and does not comprise privacy data related to users;
and the sending unit is used for sending the public domain log to an opposite end of federal learning.
8. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
9. A computer readable medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-6.
CN202011299414.0A 2020-11-18 2020-11-18 Information interaction method and device and electronic equipment Active CN112416887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011299414.0A CN112416887B (en) 2020-11-18 2020-11-18 Information interaction method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011299414.0A CN112416887B (en) 2020-11-18 2020-11-18 Information interaction method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112416887A CN112416887A (en) 2021-02-26
CN112416887B true CN112416887B (en) 2024-01-30

Family

ID=74774071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011299414.0A Active CN112416887B (en) 2020-11-18 2020-11-18 Information interaction method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112416887B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113282475B (en) * 2021-06-02 2022-12-06 青岛海尔科技有限公司 Method and device for evaluating interactive performance of interactive system

Citations (6)

Publication number Priority date Publication date Assignee Title
CN109063059A (en) * 2018-07-20 2018-12-21 腾讯科技(深圳)有限公司 User behaviors log processing method, device and electronic equipment
CN110290162A (en) * 2018-03-19 2019-09-27 北京京东尚科信息技术有限公司 Document transmission method and its system, computer system
CN110874646A (en) * 2020-01-16 2020-03-10 支付宝(杭州)信息技术有限公司 Exception handling method and device for federated learning and electronic equipment
CN111224807A (en) * 2018-11-27 2020-06-02 中国移动通信集团江西有限公司 Distributed log processing method, device, equipment and computer storage medium
CN111539731A (en) * 2020-06-19 2020-08-14 支付宝(杭州)信息技术有限公司 Block chain-based federal learning method and device and electronic equipment
CN111553485A (en) * 2020-04-30 2020-08-18 深圳前海微众银行股份有限公司 View display method, device, equipment and medium based on federal learning model

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8281370B2 (en) * 2006-11-27 2012-10-02 Therap Services LLP Managing secure sharing of private information across security domains

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN110290162A (en) * 2018-03-19 2019-09-27 北京京东尚科信息技术有限公司 Document transmission method and its system, computer system
CN109063059A (en) * 2018-07-20 2018-12-21 腾讯科技(深圳)有限公司 User behaviors log processing method, device and electronic equipment
CN111224807A (en) * 2018-11-27 2020-06-02 中国移动通信集团江西有限公司 Distributed log processing method, device, equipment and computer storage medium
CN110874646A (en) * 2020-01-16 2020-03-10 支付宝(杭州)信息技术有限公司 Exception handling method and device for federated learning and electronic equipment
CN111553485A (en) * 2020-04-30 2020-08-18 深圳前海微众银行股份有限公司 View display method, device, equipment and medium based on federal learning model
CN111539731A (en) * 2020-06-19 2020-08-14 支付宝(杭州)信息技术有限公司 Block chain-based federal learning method and device and electronic equipment

Non-Patent Citations (1)

Title
Survey on Security and Privacy Protection in Federated Learning; Chen Bing et al.; Journal of Nanjing University of Aeronautics & Astronautics; Vol. 52, No. 5; pp. 675-684 *

Also Published As

Publication number Publication date
CN112416887A (en) 2021-02-26

Similar Documents

Publication Publication Date Title
CN110795022B (en) Terminal testing method, system and storage medium
CN110753089B (en) Method, device, medium and electronic equipment for managing client
CN110909521B (en) Online document information synchronous processing method and device and electronic equipment
WO2021082649A1 (en) List update method and apparatus, readable medium, and electronic device
CN112434620B (en) Scene text recognition method, device, equipment and computer readable medium
CN113760536A (en) Data caching method and device, electronic equipment and computer readable medium
CN112416887B (en) Information interaction method and device and electronic equipment
CN111858381B (en) Application fault tolerance capability test method, electronic device and medium
CN112486825B (en) Multi-lane environment architecture system, message consumption method, device, equipment and medium
CN111596992B (en) Navigation bar display method and device and electronic equipment
CN115022328B (en) Server cluster, testing method and device of server cluster and electronic equipment
US20230418794A1 (en) Data processing method, and non-transitory medium and electronic device
CN111309497B (en) Information calling method and device, server, terminal and storage medium
CN111756833B (en) Node processing method, node processing device, electronic equipment and computer readable medium
CN112507676B (en) Method and device for generating energy report, electronic equipment and computer readable medium
CN112346661B (en) Data processing method and device and electronic equipment
CN110941683B (en) Method, device, medium and electronic equipment for acquiring object attribute information in space
WO2022017458A1 (en) Data synchronization method and apparatus, electronic device, and medium
CN111444457B (en) Data release method and device, storage medium and electronic equipment
CN111680754B (en) Image classification method, device, electronic equipment and computer readable storage medium
CN112163176A (en) Data storage method and device, electronic equipment and computer readable medium
CN113157365B (en) Program running method, program running device, electronic equipment and computer readable medium
US11349950B1 (en) Remotely interacting with database
CN111258670B (en) Method and device for managing component data, electronic equipment and storage medium
CN116755889B (en) Data acceleration method, device and equipment applied to server cluster data interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant