CN116957062A - Federal learning method and device based on calculation network - Google Patents

Federal learning method and device based on calculation network Download PDF

Info

Publication number
CN116957062A
CN116957062A CN202211514683.3A CN202211514683A CN116957062A CN 116957062 A CN116957062 A CN 116957062A CN 202211514683 A CN202211514683 A CN 202211514683A CN 116957062 A CN116957062 A CN 116957062A
Authority
CN
China
Prior art keywords
model
security
parameter
node
force
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211514683.3A
Other languages
Chinese (zh)
Inventor
唐云洁
蒋家驹
吕严
陆田
王钊
吴晓
顾珺菲
陈卓然
朱张琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Zijin Jiangsu Innovation Research Institute Co ltd
China Mobile Communications Group Co Ltd
China Mobile Group Jiangsu Co Ltd
Original Assignee
China Mobile Zijin Jiangsu Innovation Research Institute Co ltd
China Mobile Communications Group Co Ltd
China Mobile Group Jiangsu Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Zijin Jiangsu Innovation Research Institute Co ltd, China Mobile Communications Group Co Ltd, China Mobile Group Jiangsu Co Ltd filed Critical China Mobile Zijin Jiangsu Innovation Research Institute Co ltd
Priority to CN202211514683.3A priority Critical patent/CN116957062A/en
Publication of CN116957062A publication Critical patent/CN116957062A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/96Management of image or video recognition tasks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/04Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/502Proximity

Abstract

The application relates to the technical field of computers, and provides a federal learning method and device based on a computing power network. The method comprises the following steps: receiving a parameter request sent by a security node, wherein the security node is deployed in a security federation learning system; transmitting the neural network model and the first model parameters to the security node based on the parameter request; receiving a second model parameter sent by the security node, wherein the second model parameter is obtained by performing model training on the basis of a neural network model and the first model parameter by the security node; and determining a third model parameter according to the second model parameters of the plurality of security nodes, and sending the third model parameter to the security nodes with the same neural network model. According to the embodiment of the application, under the condition of integrating the federal learning algorithm under the power network architecture, under the condition of ensuring that all user data is not directly transmitted to the side end server, the parameters are updated and issued by encrypting the transmission parameters, so that the improvement of the recognition precision of all models is realized.

Description

Federal learning method and device based on calculation network
Technical Field
The application relates to the technical field of computers, in particular to a federal learning method and device based on a computing power network.
Background
At present, an intelligent security system generally integrates a target detection function and is used for identifying alarm events in a monitoring video, such as illegal invasion, dangerous behaviors, sensitive target detection and the like. Implementation of these recognition algorithms typically relies on various convolutional neural network models, such as YOLOv5, far-RCNN, and the like. However, to ensure information security, security systems are typically deployed locally in groups of companies, communities, government, and public security, and data between different institutions is not typically disclosed.
If a consumer adopting the security system in the prior art uses a neural network model in a special scene (such as a machine room with a complex structure, a construction site with a complex light environment, a dim warehouse and the like), the recognition accuracy of the model can be reduced. Meanwhile, due to the information sensitivity of the security system, the accuracy of the intelligent recognition model is usually improved by repeated upgrading after the security system is deployed, and a manufacturer can hardly update and upgrade the security system in time after the security system is deployed, so that the recognition accuracy of the model is low.
Disclosure of Invention
The embodiment of the application provides a federal learning method and a federal learning device based on a computational power network, which are used for solving the problem of low recognition accuracy of a model in a security system.
In a first aspect, an embodiment of the present application provides a federal learning method based on a power network, including:
receiving a parameter request sent by a security node, wherein the security node is deployed in a security federation learning system;
transmitting a neural network model and first model parameters to the security node based on the parameter request;
receiving a second model parameter sent by the security node, wherein the second model parameter is obtained by performing model training on the basis of the neural network model and the first model parameter by the security node;
and determining a third model parameter according to the second model parameters of the plurality of security nodes, and sending the third model parameter to the security nodes with the same neural network model.
In one embodiment, the determining the third model parameter according to the second model parameters of the plurality of security nodes includes:
grouping second model parameters of a plurality of security nodes to determine parameter distances between each group of second model parameters;
and if the parameter distance is smaller than the set value, determining a third model parameter based on the second model parameters of the security nodes.
In one embodiment, before receiving the parameter request sent by the security node, the method includes:
Receiving a connection request of the security node;
acquiring a first calculated force value and a first data information quantization value based on the connection request, comparing the first calculated force value with a second calculated force value, and comparing the first data information quantization value with a second data information quantization value;
and establishing connection with the security node based on the comparison result, and sending a response message of the connection request to the security node.
In one embodiment, determining the calculated force value includes:
determining a first number of logic computing capabilities and logic computing chips, a second number of parallel computing capabilities and parallel computing chips, a neural network computing capability and a third number of neural network chips;
based on the logic computation capability and the first number, the parallel computation capability and the second number, the neural network computation capability and the third number, and the performance bias value, a computation force value is determined.
In a second aspect, an embodiment of the present application provides a federal learning method based on a computing power network, including:
sending a parameter request to a force edge calculation node;
receiving a neural network model and first model parameters sent by the computing force edge node;
Model training is carried out based on the neural network model and the first model parameters to obtain second model parameters, and the second model parameters are sent to the force calculation edge nodes;
and receiving a third model parameter sent by the force edge node, and updating the second model parameter based on the third model parameter.
In one embodiment, before the sending the parameter request to the computing edge node, the method includes:
transmitting a connection request to each force edge node based on the force edge node table;
receiving response messages of the connection requests sent by the force edge computing nodes, and establishing connection with the force edge computing nodes.
In one embodiment, before the sending the connection request to each edge node based on the edge node table, the method includes:
broadcasting a connection establishment message to the local network segment to receive the address information of the computing power edge node;
and establishing the force calculation edge node table based on the address information of the force calculation edge node.
In a third aspect, an embodiment of the present application provides a federal learning apparatus based on a power network, including:
the parameter request receiving module is used for receiving parameter requests sent by security nodes, and the security nodes are deployed in a security federation learning system;
The sending module of the model information is used for sending a neural network model and first model parameters to the security node based on the parameter request;
the second model parameter receiving module is used for receiving second model parameters sent by the security node, and the second model parameters are obtained by model training of the security node based on the neural network model and the first model parameters;
and the third model parameter sending module is used for determining third model parameters according to the second model parameters of the plurality of security nodes and sending the third model parameters to the security nodes with the same neural network model.
In a fourth aspect, an embodiment of the present application provides a federal learning apparatus based on a power network, including:
the parameter request sending module is used for sending a parameter request to the force calculation edge node;
the receiving module of the model information is used for receiving the neural network model and the first model parameters sent by the computing force edge node;
the training module is used for carrying out model training based on the neural network model and the first model parameters to obtain second model parameters, and sending the second model parameters to the computing force edge nodes;
And the updating module is used for receiving a third model parameter sent by the computing force edge node and updating the second model parameter based on the third model parameter.
In a fifth aspect, an embodiment of the present application provides an electronic device, including a processor and a memory storing a computer program, where the processor implements the federal learning method based on a computing network according to the first aspect or the second aspect when executing the program.
In a sixth aspect, an embodiment of the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the federal learning method based on a computing power network according to the first or second aspect.
According to the federal learning method and device based on the computational power network, the security nodes are deployed in the security federal learning system by receiving the parameter requests sent by the security nodes; transmitting the neural network model and the first model parameters to the security node based on the parameter request; receiving a second model parameter sent by the security node, wherein the second model parameter is obtained by performing model training on the basis of a neural network model and the first model parameter by the security node; and determining a third model parameter according to the second model parameters of the plurality of security nodes, and sending the third model parameter to the security nodes with the same neural network model. According to the embodiment of the application, under the condition of integrating the federal learning algorithm under the power network architecture, under the condition of ensuring that all user data is not directly transmitted to the side end server, the parameters are updated and issued by encrypting the transmission parameters, so that the improvement of the recognition precision of all models is realized.
Drawings
In order to more clearly illustrate the application or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow diagram of a federal learning method based on a computing network according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a computing power network overlay architecture according to an embodiment of the present application;
FIG. 3 is a diagram illustrating a message format of a power network layer according to an embodiment of the present application;
FIG. 4 is a second flow chart of a federal learning method based on a computing network according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of establishing connection between a security node and a computing power edge node according to an embodiment of the present application;
FIG. 6 is an information interaction diagram of security nodes and computing force edge nodes provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a federal learning device based on a power network according to an embodiment of the present application;
FIG. 8 is a second schematic diagram of a federal learning device based on a power network according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 is a schematic flow chart of a federal learning method based on a computing power network according to an embodiment of the present application. Referring to fig. 1, an embodiment of the present application provides a federal learning method based on a computing power network, which may include:
step 100, receiving a parameter request sent by a security node, wherein the security node is deployed in a security federal learning system;
it should be noted that the embodiment of the application provides a federal learning method facing to a monitoring security scene based on a computing network. The security scene has less user overlap and more feature overlap, and belongs to a horizontal federal learning scene.
Federal learning is a distributed federal learning technology, and aims to realize common modeling among different organizations on the basis of ensuring data privacy safety and legal compliance so as to jointly promote the effect of an AI model.
The computing power network is a novel information infrastructure for integrating network, cloud, digital, intelligent, edge, end, chain and other multi-level computing power resources by means of high-speed, mobile, safe and ubiquitous network connection, and providing data sensing, transmission, storage, operation and other integrated services by combining with various emerging digital technologies such as AI, blockchain, cloud, big data and edge calculation.
The existing federation learning system is deployed in an edge computing network, relies on a cloud server, a data center and the like, achieves the effect of federation learning, and is a federation learning method based on the existing network structure. The embodiment of the application establishes a computational power network communication mechanism to help the security system to realize heterogeneous federal learning.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a power network overlay architecture provided by an embodiment of the present application, where the power network overlay architecture includes a transport layer, a power network layer, a data link layer, and a physical layer, and interactions of power information operate between the network layer and the transport layer in the form of independent protocols.
Further, referring to fig. 3, fig. 3 is a schematic diagram of a message format of a power network layer according to an embodiment of the present application. The message format of the power calculation network layer comprises state information, power calculation information, data information, system information and IP datagram.
And the state information determines the processing mode of the message after the first part of state information is transmitted to the terminal for unpacking. The state information occupies 16 bits, which are respectively:
acknowledgement ACK (acknowledgment) occupies 8 bits and the segment content is valid when ack+.0. When ack=1, the message segment content is the routing information of the potential computing edge node, and is used for establishing a potential computing edge node address table; when ACK is not equal to 1, but is a random number in any range, the random number is expressed by ack=rand, the message segment is used for establishing connection between the security node and the computing edge node, and when ack=0, the content in the message segment is invalid.
Synchronization SYN (synchronization) occupies 1 bit and when syn=1, ack=0, the segment is a request connection broadcast segment. When syn=1, ack=1, the segment is a connection establishment segment.
The return BCK (back) occupies 1 bit, and when bck=1, ack=1, syn=1, the segment is a segment with the information of the edge node. The operation state information of the computing force edge node comprises the current operation load information of the computing force edge node and the predicted waiting time, and the information is encapsulated in an IP datagram.
Reset RST (reset) occupies 1 bit, and when rst=1, it indicates that serious errors occur in the communication process, such as a user end host crash, a network fault, or a power edge node fault, etc.
In the state information, the remaining 6 bits are reserved positions.
The second part of calculation force information relates to calculation force modeling, and three key indexes of calculation force are respectively: logic computing power, parallel computing power, neural network computing power. Among them, the CPU (Central Processing Unit ) is a representative logical operation hardware chip, the GPU (Graphics Processing Unit, graphics processor) is a common parallel computation hardware chip, and the NPU (Neural network Processing Unit, neural network processor), the TPU (Tensor Processing Unit, tensor processor) are common neural network processing chips.
For example, assuming that the average time for a chip to run 10 times on the COCO dataset at yolov5 with a default parameter is t seconds, the average time for the other chip to run 10 times under the same conditions is t 'seconds, if t'/t=1.5, the computational power of the chip is 1.5.
Wherein, the calculation formula of the total calculated force value (namely calculated force value) is as follows:
assume that: the logic operation capability is a, and the number of logic operation chips is alpha;
The parallel computing capability is b, and the number of parallel computing chips is beta;
the operation capability of the neural network is c, and the number of the neural network chips is gamma;
delta is the performance bias due to other components of the host;
the total calculated force value C can be expressed as:
C=aα+bβ+cγ+δ
the third part of data information is used for model training and reasoning of a security system based on a convolutional neural network model in a video frame extraction mode, and the shooting scene of a security camera is relatively constant, so that the size of each frame of data is relatively consistent, the average picture size s of the camera can be obtained by extracting one frame of pictures of the camera in different time periods under the same scene and taking average, and for the scene of the convolutional neural network under security monitoring, the frame extraction frequency is 5-10 frames per second, so that the model accuracy can be ensured under the condition of ensuring real-time performance.
The quantization of the data information is represented as follows:
assume that the average picture sizes extracted by all cameras of the security system are s respectively 0 ,s 1 ,s 2 ,...s n The unit is GB, n is the number of pictures, and the frame frequency of each second set by the security system is f, so that the data information quantization value of the security system is expressed as follows:
the fourth part is system information, occupies 8 bits, records the function information requested by the child node, and for the security node, the filling content of the field is security system codes; for the semantic analysis node, the content filled in the field is semantic analysis code.
The fifth part is the IP datagram encapsulated by the network layer.
It should be noted that, the power table is a table that all hosts in the network need to maintain, and the power table includes information of the hosts. Whether the host is a calculation edge node or not, the table needs to be maintained, and the information of the calculation table comprises: the address of the force calculation edge node ip, the total force of the force calculation edge node, the current force calculation, the data processing type, whether federal learning is supported or not, and a federal learning support system.
The execution main body of the embodiment of the application is a force edge calculation node. After determining the message format of the power network overlay architecture and the power network layer, the security system can access the power network, and then establish communication connection between the security node and the power edge node, wherein the security node is deployed in the security federal learning system, in other words, the security node is deployed in the security system, and the security system supports federal learning. One security system corresponds to one security node.
After the security node and the force calculation side node establish communication connection, the force calculation side node receives a parameter request sent by the security node, wherein a message of the parameter request comprises: status information: ack=ack+1, syn=1 remaining 0; calculating force information: the total calculation force value of the node; data information: data information quantization; message segment data: the data information type of the node, the functions to be realized (target detection and behavior recognition).
Step 200, a neural network model and a first model parameter are sent to the security node based on the parameter request;
it should be noted that, the computing power edge node supporting federal learning maintains an algorithm table, where the algorithm table includes various system types supported by the node, algorithms, and detailed information of the algorithms, where the information includes a data input format of the algorithms, a result output format of the algorithms, a type of the algorithms (such as a neural network, a random forest, a hybrid type, etc.), a usage scenario of the algorithms (such as object detection, behavior recognition, semantic analysis, data analysis, etc.), and current parameter values of the algorithms. For example, the form of a yolov5 algorithm in the algorithm table for supporting federal learning of security systems is shown in table 1:
TABLE 1
The computing force edge node matches the corresponding neural network model and the first model parameter based on the parameter request sent by the security node, and then sends the matched neural network model and the matched first model parameter to the security node, for example, the first model parameter is w ik ,w ik Representing the ith round and the kth model.
It should be noted that, the security monitoring data contains a large amount of sensitive information, and if the monitoring picture data is disclosed to be transmitted on the external network, data leakage may be caused. The federal learning based on the computing power network avoids the public transmission of sensitive information on the network by transmitting model types and model parameters, and meanwhile, the original data cannot be traced only by the model types and the model parameters, so that the safety of the data is ensured.
Step 300, receiving a second model parameter sent by the security node, wherein the second model parameter is obtained by model training of the security node based on the neural network model and the first model parameter;
it should be noted that, the force calculation edge node stores a maintenance list, and the maintenance list is used for recording security nodes using various models and model parameter states.
After the power calculation edge node sends the matched neural network model and the first model parameters to the security node, the security node carries out model training based on the received neural network model and the first model parameters to obtain second model parameters, the second model parameters are sent to the power calculation edge node, and the power calculation edge node updates a maintenance list based on the received second model parameters.
For example, the security node receives a neural network model and parameters w issued by the computing edge node ik Then training the neural network model by using the current data to generate a new parameter w ikj Wherein w is ikj And the ith round and the jth security node under the kth class model are represented. New parameter w ikj After training, the security node encrypts the message and returns the message to the power computing edge node, and the power computing edge node decrypts the message and according to the new parameter w ikj Updating the maintenance list.
Step 400, determining a third model parameter according to the second model parameters of the plurality of security nodes, and sending the third model parameter to the security nodes with the same neural network model.
In order to further improve the accuracy of the model, the force edge node determines a third model parameter according to the second model parameters of the plurality of security nodes, and sends the third model parameter to the security nodes with the same neural network model. Specifically, the second model parameters of the plurality of security nodes are grouped to determine the parameter distance between each group of the second model parameters, if the parameter distance is smaller than a set value, a third model parameter is determined based on the second model parameters of the plurality of security nodes, and then the third model parameter is sent to the security nodes with the same neural network model, so that the security nodes update the second model parameter based on the received third model parameter, and the recognition accuracy of the model is improved.
For example, the force edge nodes are in a maintenance list, and a certain number of security nodes are selected from each type of model at random in proportion, wherein the proportion selection standard relates to the total number of security nodes using the model. For models with a small number of use (such as less than 10 security nodes), the proportion can be fully selected or increased appropriately, for example 80%; for a model with medium use quantity (more than 10 and less than 100 security nodes are used), 50% of models can be selected; for security nodes with larger scale or ultra-large scale, 20% -30% of nodes can be selected.
Calculating the Euclidean distance between the second model parameters of each selected security node, for example:
wherein L represents the Euclidean distance between two second model parameters, w ikj Represents the ith round, the jth security node under the kth class model, w ikj’ And (3) representing the j' security node under the ith round and the kth model. If the Euclidean distance between the two second model parameters is smaller than the set value, the security node j and the security node j' are judged to be similar, for example:
wherein a depends on the number of groupings of the computing force edge nodes, the smaller a, the greater the number of groupings. And calculating the distance between every two second model parameters, and after the second model parameters are judged to be similar, repeating calculation is not performed. Third model parameter w bikh Representing the reference parameter w b In the ith round, the kth model and the h group, after grouping is finished, updating the reference parameters of the approximate model to w bikh =Avg(w ikj ,w ikj’ ,......). Where Avg () represents an averaging function. Calculating third model parameters w bikh The model and the data are similar, but are not independently distributed, but are dependent.
Taking yolov5 as an example, for independent parameters, such as the Euclidean distance or mean value of the parameters can be directly calculated, and for complex parameters such as convolution kernels, the Euclidean distance or mean value of each subdivision parameter can be calculated in a one-to-one correspondence manner. For example, for cores [ [10, 20], [4,7] ] and cores [ [14, 22], [6,7] ], the average value thereof is [ [12, 21] [5,7] ].
The force edge node calculates a third model parameter w bikh As a new parameter, all the parameters using the model are issuedSecurity nodes, e.g. assuming that the third model parameter w of model A is currently calculated bikh Then a third model parameter w is sent to all security nodes using the model A bikh . At the same time based on the third model parameter w bikh The parameter field of the maintenance list is updated.
Judging the similarity of the models by judging the Euclidean distance of the model parameters, and combining the training of the similar models, thereby improving the recognition accuracy of the models under similar or related scenes. For example: in an office building, the decoration style and the display are consistent, and the model is intensively trained, so that the recognition accuracy of the model is improved.
According to the federal learning method based on the computational power network, which is provided by the embodiment of the application, the security node is deployed in a security federal learning system by receiving a parameter request sent by the security node; transmitting the neural network model and the first model parameters to the security node based on the parameter request; receiving a second model parameter sent by the security node, wherein the second model parameter is obtained by performing model training on the basis of a neural network model and the first model parameter by the security node; and determining a third model parameter according to the second model parameters of the plurality of security nodes, and sending the third model parameter to the security nodes with the same neural network model. According to the embodiment of the application, under the condition of integrating the federal learning algorithm under the power network architecture, under the condition of ensuring that all user data is not directly transmitted to the side end server, the parameters are updated and issued by encrypting the transmission parameters, so that the improvement of the recognition precision of all models is realized.
Based on the above embodiment, before receiving the parameter request sent by the security node, the method includes: receiving a connection request of the security node; acquiring a first calculated force value and a first data information quantization value based on the connection request, comparing the first calculated force value with a second calculated force value, and comparing the first data information quantization value with a second data information quantization value; and establishing connection with the security node based on the comparison result, and sending a response message of the connection request to the security node.
The first calculation force value and the first data information quantization value refer to values determined from the connection request; the second calculated force value and the second data information quantized value refer to the value of the current calculated force edge node.
After the force calculation side node receives the connection request, a first force calculation value and a first data information quantization value are obtained based on a message of the connection request, then the first force calculation value is compared with a second force calculation value of the current force calculation side node, the first data information value is compared with a second data information value of the current force calculation side node, if the first force calculation value is smaller than the second force calculation value and the first data information value is smaller than the second data information value, the condition that the load capacity of the current force calculation side node is not exceeded is indicated, connection is established with a security node, meanwhile, a security node response message is returned, and the message is: ack=rand, syn=1, bck=1 remaining 0; calculating force information: calculating a total force value of the force edge node; data information: calculating a force edge node data processing capacity quantization value; system information: a security federal learning system; message segment data: the IP value in the power calculation table maintained by the power calculation edge node is the information of the power calculation edge node IP.
According to the embodiment of the application, the communication connection between the security node and the force calculation edge node is established, so that the information interaction between the security node and the force calculation edge node is realized, the encryption transmission parameters are further realized, the parameters are updated, the parameters are issued, and the recognition precision of the model is improved.
Based on the above embodiment, referring to fig. 4, the embodiment of the present application further provides a federal learning method based on a computing power network, including:
step 500, sending a parameter request to a force edge node;
it should be noted that, the execution main body of the embodiment of the present application is a security node, where the security node is deployed in a security federal learning system.
After the security node and the force calculation edge node establish communication connection, the security node sends a parameter request to the force calculation edge node, wherein a message of the parameter request comprises: status information: ack=ack+1, syn=1 remaining 0; calculating force information: the total calculation force value of the node; data information: data information quantization; message segment data: the data information type of the node, the functions to be realized (target detection and behavior recognition).
Step 600, receiving a neural network model and a first model parameter sent by the computing force edge node;
the security node receives a neural network model and a first model parameter, for example, the first model parameter is w, which are matched by the computing power edge node based on the parameter request ik ,w ik Representing the ith round and the kth model.
Step 700, performing model training based on the neural network model and the first model parameters to obtain second model parameters, and sending the second model parameters to the computing force edge nodes;
and the security node carries out model training based on the received neural network model and the first model parameter to obtain a second model parameter, and sends the second model parameter to the computing force edge node. For example, the security node receives a neural network model and parameters w issued by the computing edge node ik Then training the neural network model by using the current data to generate a new parameter w ikj Wherein w is ikj And the ith round and the jth security node under the kth class model are represented. New parameter w ikj After training, the security node encrypts the message and returns the message to the power computing edge node, and the power computing edge node decrypts the message and according to the new parameter w ikj Updating the maintenance list.
Step 800, receiving a third model parameter sent by the computing force edge node, and updating the second model parameter based on the third model parameter.
The security node receives a third model parameter sent by the computing force edge node, and updates the second model parameter based on the third model parameter, so that the recognition accuracy of the model is improved.
According to the federal learning method based on the power calculation network, which is provided by the embodiment of the application, a parameter request is sent to a power calculation edge node; receiving a neural network model and first model parameters sent by a computing force edge node; model training is carried out based on the neural network model and the first model parameters to obtain second model parameters, and the second model parameters are sent to the computing force edge nodes; and receiving a third model parameter sent by the computing force edge node, and updating the second model parameter based on the third model parameter. According to the embodiment of the application, under the condition of integrating the federal learning algorithm under the power network architecture, under the condition of ensuring that all user data is not directly transmitted to the side end server, the parameters are updated and issued by encrypting the transmission parameters, so that the improvement of the recognition precision of all models is realized.
Based on the above embodiment, before sending the parameter request to the force edge node, the method includes: transmitting a connection request to each force edge node based on the force edge node table; receiving response messages of the connection requests sent by the force edge computing nodes, and establishing connection with the force edge computing nodes.
The security node can pre-establish a force calculation edge node table, and the force calculation edge node table is used for storing address information of each force calculation edge node. Specifically, a connection establishment message is broadcast to the local network segment to receive address information of the computing force edge node, and then a computing force edge node table is established based on the address information of the computing force edge node. After the calculated edge node table is determined, a connection request is sent to each calculated edge node based on the calculated edge node table; and receiving a response message of the connection request sent by each computing edge node, and establishing connection with each computing edge node.
For example, when a security system is connected to a network, security system information is encapsulated in a header message of a computing layer, and at this time, the header of the computing layer is: status information: ack=0, syn=1 remaining 0; calculating force information: the total calculation force value of the node; data information: data information quantization; system information: a security federal learning system; message segment data: empty.
After the connection message is established by broadcasting in the local network segment, the other hosts in the local network segment unpack the message to the power calculation layer, if the hosts are the security nodes, the host is matched with the power calculation table maintained by the host, the first matched power calculation side node information in the power calculation table is used as the message segment data, ACK=1, SYN=1, BCK=1, the other fields are empty, and the content is returned to the security nodes. Namely: the message content sent to the security system by the host is as follows: status information: ack=1, syn=1, bck=1 remaining 0; calculating force information: empty; data information: empty; system information: empty; system information: empty; message segment data: and calculating force side node information in the force calculation table. If the host is a force edge node, the same message information is returned to the security system.
The security system stores the IP addresses of all the computing power edge nodes supporting federal learning, and then establishes a computing power edge node table based on the IP addresses. The waiting time for establishing the table is determined according to the congestion state of the network, and the higher the congestion degree is, the longer the duration is. And after the waiting time is exceeded, the security system discards the returned computing force side node information.
According to the embodiment of the application, the communication connection between the security node and the force calculation edge node is established, so that the information interaction between the security node and the force calculation edge node is realized, the encryption transmission parameters are further realized, the parameters are updated, the parameters are issued, and the recognition precision of the model is improved.
Based on the above embodiments, referring to fig. 5, fig. 5 is a schematic flow chart of establishing connection between a security node and a force edge node according to an embodiment of the present application.
The step of establishing connection between the security node and the computing edge node is as follows:
step one: and broadcasting a connection establishment message to the local network segment.
When a security system is connected to a network, security system information is packaged in a header message of a force calculation layer, and the header of the force calculation layer is: status information: ack=0, syn=1 remaining 0; calculating force information: the total calculation force value of the node; data information: data information quantization; system information: a security federal learning system; message segment data: empty.
Step two: waiting for the host to respond to the request and returning the address information of the computing power edge node.
After the connection message is established by broadcasting in the local network segment, the other hosts in the local network segment unpack the message to the power calculation layer, if the hosts are the security nodes, the host is matched with the power calculation table maintained by the host, the first matched power calculation side node information in the power calculation table is used as the message segment data, ACK=1, SYN=1, BCK=1, the other fields are empty, and the content is returned to the security nodes. Namely: the message content sent to the security system by the host is as follows: status information: ack=1, syn=1, bck=1 remaining 0; calculating force information: empty; data information: empty; system information: empty; system information: empty; message segment data: and calculating force side node information in the force calculation table. If the host is a force edge node, the same message information is returned to the security system.
Step three: and (5) establishing a table of the returned address information of the computing force edge node.
And storing the IP addresses of all the computing power edge nodes supporting federal learning, and establishing a computing power edge node table based on the IP addresses. The table-building waiting time is determined according to the network congestion state, and the higher the congestion degree is, the longer the duration is.
Step four: the security system no longer receives the computing force side node information.
And after the waiting time is exceeded, the security system discards the returned computing force side node information.
Step five: and the security system sequentially sends connection establishment requests to the force calculation edge node table.
The format of the connection establishment request message is as follows: status information: ack=0, syn=1 remaining 1; calculating force information: a total calculated force value; data information: data information quantization; system information: a security federal learning system; message segment data: empty.
Step six: the force edge node responds to the request.
The computing force edge node compares the computing force value and the data information quantization value in the received request message with the current computing force value and the data processing capacity quantization value of the computing force edge node, establishes connection if the load capacity is not exceeded, and returns a message, wherein the message is ACK=rand, SYN=1, and BCK=1 and the rest is set to 0; calculating force information: calculating a total force value of the force edge node; data information: calculating a force edge node data processing capacity quantization value; system information: a security federal learning system; message segment data: the IP value in the power calculation table maintained by the power calculation edge node is the information of the power calculation edge node IP.
Step seven: and the security system adds unresponsive computing edge nodes into a connection blacklist.
When the request is sent to all the computing edge nodes in the computing edge node list and the response message is not received within the set time, the computing edge information message is received, the non-responded computing edge nodes IP are added into the blacklist, and the connection is not requested to the computing edge nodes for a period of time.
Step eight: and entering a second stage, and sending a parameter request by the security system.
The security system returns a parameter request message to the back calculation force edge node, wherein the parameter request message comprises: status information: ack=ack+1, syn=1 remaining 0; calculating force information: the total calculation force value of the node; data information: data information quantization; message segment data: the data information type of the node, the functions to be realized (target detection and behavior recognition).
Step nine: and forwarding the request to establish the connection message through the router.
And when the connection establishment request sent in the step one is not answered in the set time, the same message is sent to the router of the network segment, 3 routers (less than or equal to 3 routers if the number of the adjacent routers is less than 3) in the adjacent 1-hop routers are randomly selected by the router for preventing network congestion, and the message is forwarded, and other routers broadcast the message in the network segment.
Step ten: the security system starts waiting and enters a waiting period.
According to the network congestion condition, setting the waiting time as t, and if no message is received after the waiting period, considering that the edge node is difficult to find, and entering a new edge node from the beginning of the step to search.
According to the embodiment of the application, the communication connection between the security node and the force calculation edge node is established, so that the information interaction between the security node and the force calculation edge node is realized, the encryption transmission parameters are further realized, the parameters are updated, the parameters are issued, and the recognition precision of the model is improved.
Based on the above embodiments, referring to fig. 6, fig. 6 is an information interaction diagram of a security node and a computing force edge node according to an embodiment of the present application.
The information interaction between the security node and the computing force edge node comprises the following steps:
step one: and the security node is connected with the force calculation edge node.
Step two: the force edge node sends the neural network model and the first model parameters to the security node based on the parameter request of the security node.
Step three: and the security node carries out model training based on the received neural network model and the first model parameter to obtain a second model parameter, and calculates the force edge node of the second model parameter.
Step four: and randomly selecting security nodes by the force edge calculation nodes, and calculating parameter distances among the security nodes to determine a third model parameter.
Step five: and the force calculation edge node sends the third model parameter to the security node.
According to the embodiment of the application, under the construction of the computational power network, a transverse federal learning algorithm is fused for security application scenes, and a method for improving the neural network precision is provided for scenes with more overlapping characteristics and less overlapping samples of security, and under the condition that the heterogeneous federal learning algorithm ensures that all user data are not directly transmitted to a side server, parameters are updated and issued through encryption of transmission parameters, so that the recognition precision of all models is improved. Meanwhile, the embodiment of the application also solves the inherent defects of centralized federal learning.
Referring to fig. 7, fig. 7 is a schematic block diagram of a federal learning device based on a power network according to an embodiment of the present application, where the federal learning device based on a power network includes a first determining module 701, a second determining module 702, a third determining module 703, and a scheduling module 704.
The parameter request receiving module 701 is configured to receive a parameter request sent by a security node, where the security node is deployed in a security federal learning system;
A sending module 702 of model information, configured to send a neural network model and a first model parameter to the security node based on the parameter request;
a second model parameter receiving module 703, configured to receive a second model parameter sent by the security node, where the second model parameter is obtained by performing model training by the security node based on the neural network model and the first model parameter;
and a third model parameter sending module 704, configured to determine a third model parameter according to the second model parameters of the plurality of security nodes, and send the third model parameter to a security node having the same neural network model.
According to the federal learning device based on the computational power network, which is provided by the embodiment of the application, the security node is deployed in a security federal learning system by receiving the parameter request sent by the security node; transmitting the neural network model and the first model parameters to the security node based on the parameter request; receiving a second model parameter sent by the security node, wherein the second model parameter is obtained by performing model training on the basis of a neural network model and the first model parameter by the security node; and determining a third model parameter according to the second model parameters of the plurality of security nodes, and sending the third model parameter to the security nodes with the same neural network model. According to the embodiment of the application, under the condition of integrating the federal learning algorithm under the power network architecture, under the condition of ensuring that all user data is not directly transmitted to the side end server, the parameters are updated and issued by encrypting the transmission parameters, so that the improvement of the recognition precision of all models is realized.
In one embodiment, the third model parameter sending module 704 is specifically configured to:
grouping second model parameters of a plurality of security nodes to determine parameter distances between each group of second model parameters;
and if the parameter distance is smaller than the set value, determining a third model parameter based on the second model parameters of the security nodes.
In one embodiment, the parameter request receiving module 701 is further configured to:
receiving a connection request of the security node;
acquiring a first calculated force value and a first data information quantization value based on the connection request, comparing the first calculated force value with a second calculated force value, and comparing the first data information quantization value with a second data information quantization value;
and establishing connection with the security node based on the comparison result, and sending a response message of the connection request to the security node.
In one embodiment, the parameter request receiving module 701 is further configured to:
determining a first number of logic computing capabilities and logic computing chips, a second number of parallel computing capabilities and parallel computing chips, a neural network computing capability and a third number of neural network chips;
Based on the logic computation capability and the first number, the parallel computation capability and the second number, the neural network computation capability and the third number, and the performance bias value, a computation force value is determined.
Referring to fig. 8, fig. 8 is a second schematic block diagram of a federal learning device based on a power network according to an embodiment of the present application, where the federal learning device based on a power network includes a first determining module 801, a second determining module 802, a third determining module 803, and a scheduling module 804.
A parameter request sending module 801, configured to send a parameter request to a force edge node;
a receiving module 802 of model information, configured to receive a neural network model and a first model parameter sent by the computing power edge node;
the training module 803 is configured to perform model training based on the neural network model and the first model parameter, obtain a second model parameter, and send the second model parameter to the computing force edge node;
an updating module 804, configured to receive a third model parameter sent by the computing force edge node, and update the second model parameter based on the third model parameter.
According to the federal learning device based on the computational power network, which is provided by the embodiment of the application, the security node is deployed in a security federal learning system by receiving the parameter request sent by the security node; transmitting the neural network model and the first model parameters to the security node based on the parameter request; receiving a second model parameter sent by the security node, wherein the second model parameter is obtained by performing model training on the basis of a neural network model and the first model parameter by the security node; and determining a third model parameter according to the second model parameters of the plurality of security nodes, and sending the third model parameter to the security nodes with the same neural network model. According to the embodiment of the application, under the condition of integrating the federal learning algorithm under the power network architecture, under the condition of ensuring that all user data is not directly transmitted to the side end server, the parameters are updated and issued by encrypting the transmission parameters, so that the improvement of the recognition precision of all models is realized.
In one embodiment, the parameter request sending module 801 is further configured to:
transmitting a connection request to each force edge node based on the force edge node table;
receiving response messages of the connection requests sent by the force edge computing nodes, and establishing connection with the force edge computing nodes.
In one embodiment, the parameter request sending module 801 is further configured to:
broadcasting a connection establishment message to the local network segment to receive the address information of the computing power edge node;
and establishing the force calculation edge node table based on the address information of the force calculation edge node.
Fig. 9 illustrates a physical schematic diagram of an electronic device, as shown in fig. 9, which may include: processor 910, communication interface (Communication Interface), memory 930, and communication bus 940, wherein processor 910, communication interface 920, and memory 930 communicate with each other via communication bus 940. Processor 910 may invoke computer programs in memory 930 to perform the steps of the federal learning method based on a computational power network, including, for example:
receiving a parameter request sent by a security node, wherein the security node is deployed in a security federation learning system;
Transmitting a neural network model and first model parameters to the security node based on the parameter request;
receiving a second model parameter sent by the security node, wherein the second model parameter is obtained by performing model training on the basis of the neural network model and the first model parameter by the security node;
and determining a third model parameter according to the second model parameters of the plurality of security nodes, and sending the third model parameter to the security nodes with the same neural network model.
Or, sending a parameter request to the force edge node;
receiving a neural network model and first model parameters sent by the computing force edge node;
model training is carried out based on the neural network model and the first model parameters to obtain second model parameters, and the second model parameters are sent to the force calculation edge nodes;
and receiving a third model parameter sent by the force edge node, and updating the second model parameter based on the third model parameter.
Further, the logic instructions in the memory 930 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In another aspect, embodiments of the present application further provide a computer program product, where the computer program product includes a computer program, where the computer program may be stored on a non-transitory computer readable storage medium, where the computer program when executed by a processor is capable of executing the steps of the federal learning method based on a computing power network provided in the foregoing embodiments, for example, including:
receiving a parameter request sent by a security node, wherein the security node is deployed in a security federation learning system;
transmitting a neural network model and first model parameters to the security node based on the parameter request;
receiving a second model parameter sent by the security node, wherein the second model parameter is obtained by performing model training on the basis of the neural network model and the first model parameter by the security node;
and determining a third model parameter according to the second model parameters of the plurality of security nodes, and sending the third model parameter to the security nodes with the same neural network model.
Or, sending a parameter request to the force edge node;
receiving a neural network model and first model parameters sent by the computing force edge node;
Model training is carried out based on the neural network model and the first model parameters to obtain second model parameters, and the second model parameters are sent to the force calculation edge nodes;
and receiving a third model parameter sent by the force edge node, and updating the second model parameter based on the third model parameter.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (11)

1. A federal learning method based on a power network, comprising:
receiving a parameter request sent by a security node, wherein the security node is deployed in a security federation learning system;
transmitting a neural network model and first model parameters to the security node based on the parameter request;
receiving a second model parameter sent by the security node, wherein the second model parameter is obtained by performing model training on the basis of the neural network model and the first model parameter by the security node;
and determining a third model parameter according to the second model parameters of the plurality of security nodes, and sending the third model parameter to the security nodes with the same neural network model.
2. The method of claim 1, wherein determining a third model parameter from the second model parameters of the plurality of security nodes comprises:
grouping second model parameters of a plurality of security nodes to determine parameter distances between each group of second model parameters;
and if the parameter distance is smaller than the set value, determining a third model parameter based on the second model parameters of the security nodes.
3. The federal learning method based on a power network according to claim 1, wherein before receiving the parameter request sent by the security node, the method comprises:
receiving a connection request of the security node;
acquiring a first calculated force value and a first data information quantization value based on the connection request, comparing the first calculated force value with a second calculated force value, and comparing the first data information quantization value with a second data information quantization value;
and establishing connection with the security node based on the comparison result, and sending a response message of the connection request to the security node.
4. The method of federal learning over a power network according to claim 3, wherein determining the power value comprises:
Determining a first number of logic computing capabilities and logic computing chips, a second number of parallel computing capabilities and parallel computing chips, a neural network computing capability and a third number of neural network chips;
based on the logic computation capability and the first number, the parallel computation capability and the second number, the neural network computation capability and the third number, and the performance bias value, a computation force value is determined.
5. A federal learning method based on a power network, comprising:
sending a parameter request to a force edge calculation node;
receiving a neural network model and first model parameters sent by the computing force edge node;
model training is carried out based on the neural network model and the first model parameters to obtain second model parameters, and the second model parameters are sent to the force calculation edge nodes;
and receiving a third model parameter sent by the force edge node, and updating the second model parameter based on the third model parameter.
6. The method of claim 5, wherein before sending the parameter request to the edge node, the method comprises:
Transmitting a connection request to each force edge node based on the force edge node table;
receiving response messages of the connection requests sent by the force edge computing nodes, and establishing connection with the force edge computing nodes.
7. The method of federal learning over a power network according to claim 6, wherein before the power edge node-based table sends a connection request to each power edge node, comprising:
broadcasting a connection establishment message to the local network segment to receive the address information of the computing power edge node;
and establishing the force calculation edge node table based on the address information of the force calculation edge node.
8. A federal learning device based on a power network, comprising:
the parameter request receiving module is used for receiving parameter requests sent by security nodes, and the security nodes are deployed in a security federation learning system;
the sending module of the model information is used for sending a neural network model and first model parameters to the security node based on the parameter request;
the second model parameter receiving module is used for receiving second model parameters sent by the security node, and the second model parameters are obtained by model training of the security node based on the neural network model and the first model parameters;
And the third model parameter sending module is used for determining third model parameters according to the second model parameters of the plurality of security nodes and sending the third model parameters to the security nodes with the same neural network model.
9. A federal learning device based on a power network, comprising:
the parameter request sending module is used for sending a parameter request to the force calculation edge node;
the receiving module of the model information is used for receiving the neural network model and the first model parameters sent by the computing force edge node;
the training module is used for carrying out model training based on the neural network model and the first model parameters to obtain second model parameters, and sending the second model parameters to the computing force edge nodes;
and the updating module is used for receiving a third model parameter sent by the computing force edge node and updating the second model parameter based on the third model parameter.
10. An electronic device comprising a processor and a memory storing a computer program, wherein the processor implements the federal learning method based on a computational power network of any one of claims 1 to 7 when executing the computer program.
11. A computer program product comprising a computer program which, when executed by a processor, implements the federal learning method based on a computational power network according to any one of claims 1 to 7.
CN202211514683.3A 2022-11-29 2022-11-29 Federal learning method and device based on calculation network Pending CN116957062A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211514683.3A CN116957062A (en) 2022-11-29 2022-11-29 Federal learning method and device based on calculation network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211514683.3A CN116957062A (en) 2022-11-29 2022-11-29 Federal learning method and device based on calculation network

Publications (1)

Publication Number Publication Date
CN116957062A true CN116957062A (en) 2023-10-27

Family

ID=88441590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211514683.3A Pending CN116957062A (en) 2022-11-29 2022-11-29 Federal learning method and device based on calculation network

Country Status (1)

Country Link
CN (1) CN116957062A (en)

Similar Documents

Publication Publication Date Title
JP4729262B2 (en) Location awareness architecture and systems
CN110209820B (en) User identification detection method, device and storage medium
CN113452676B (en) Detector distribution method and Internet of things detection system
US20210158353A1 (en) Methods, systems, apparatuses, and devices for processing request in consortium blockchain
US20240089343A1 (en) Service layer-based methods to enable efficient analytics of iot data
CN114567650A (en) Data processing method and Internet of things platform system
CN112184436A (en) Data synchronization method, electronic device and readable storage medium
CN108833195B (en) Process-based network data flow analysis method
CN108696418B (en) Privacy protection method and device in social network
CN114338510A (en) Data forwarding method and system with separated control and forwarding
CN113986811A (en) High-performance kernel-mode network data packet acceleration method
CN111315026B (en) Channel selection method, device, gateway and computer readable storage medium
CN116957062A (en) Federal learning method and device based on calculation network
CN116806038A (en) Decentralizing computer data sharing method and device
CN116545871A (en) Multi-mode network traffic prediction method, device and medium
CN113703996B (en) Access control method, equipment and medium based on user and YANG model grouping
CN110020166B (en) Data analysis method and related equipment
CN106254375B (en) A kind of recognition methods of hotspot equipment and device
CN114090927A (en) Page loading method and device, computer equipment and storage medium
CN114567678A (en) Resource calling method and device of cloud security service and electronic equipment
CN112152854A (en) Information processing method and device
KR20200066133A (en) Electronic device implementing mobile device and gateway device operating on platform
CN112738808B (en) DDoS attack detection method in wireless network, cloud server and mobile terminal
CN113687870B (en) Terminal operation optimization method and device, storage medium and terminal
CN110943973B (en) Data stream classification method and device, model training method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination