CN113255937A - Federal learning method and system for different intelligent agents in intelligent workshop - Google Patents

Federal learning method and system for different intelligent agents in intelligent workshop

Info

Publication number
CN113255937A
CN113255937A (application CN202110715806.9A)
Authority
CN
China
Prior art keywords
robot
local model
client
local
parameters
Prior art date
Legal status
Granted
Application number
CN202110715806.9A
Other languages
Chinese (zh)
Other versions
CN113255937B (en)
Inventor
凌婧
翟晓东
汝乐
凌涛
Current Assignee
Jiangsu Austin Photoelectric Technology Co ltd
Original Assignee
Jiangsu Austin Photoelectric Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Austin Photoelectric Technology Co ltd filed Critical Jiangsu Austin Photoelectric Technology Co ltd
Priority to CN202110715806.9A priority Critical patent/CN113255937B/en
Publication of CN113255937A publication Critical patent/CN113255937A/en
Application granted granted Critical
Publication of CN113255937B publication Critical patent/CN113255937B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20064Wavelet transform [DWT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The invention belongs to the field of artificial intelligence, and in particular relates to a federated learning method and system for different intelligent agents in an intelligent workshop. The method comprises: establishing a graph-theory model and clustering the clients, namely the robots; then performing federated learning on the clients within the same cluster, specifically: each client in the same cluster trains a local model, performing edge computing on its own data to generate local model parameters; the local model parameters of each client are then aggregated based on a ResNet residual network to generate a global model and obtain global model parameters, improving training precision and thereby improving the recognition precision of the workshop robots.

Description

Federal learning method and system for different intelligent agents in intelligent workshop
Technical Field
The invention relates to the field of artificial intelligence, and in particular to a federated learning method and system for different intelligent agents in an intelligent workshop.
Background
Generally speaking, a company's production line is composed of intelligent devices, i.e. intelligent agents, produced by many manufacturers. Some of these devices each inspect part of a product's quality, while others jointly carry and assemble a device; however, the learning and control systems of these devices are likely to be mutually independent, and the technical barriers between their manufacturers make coordinated operation difficult, since the manufacturers are unwilling to share their devices' data.
federated learning is a distributed machine learning/deep learning framework capable of protecting data privacy, and has a wide application scene due to the characteristics of privacy protection and indirect fusion of multi-party data. The object/face recognition task of the vision robot in the intelligent automatic workshop environment has a problem, specifically, a plurality of vision robots exist in a workshop, but generally the robots are isolated from each other, if high automation is realized in the workshop, the first task is to enable the robot in the workshop to recognize the surrounding environment, namely, to recognize objects, but a single vision robot (client) can only see the part of an object to be recognized at the same time, so that the recognition accuracy of the single robot is not high, especially for some large-sized objects in the workshop, the information of each robot needs to be fully utilized and fused, and therefore, the information acquired by a plurality of robots needs to be fused through federal learning to improve the overall performance;
the traditional federal learning fusion algorithm directly fuses clients without distinguishing, and can possibly generate adverse effects, so that the performance of the clients can be reduced; on the other hand, if the original data are directly transmitted between the robots, the privacy information of the goods of the user can be easily stolen by a third party. Although traditional federal learning can protect privacy and fuse client models by only transmitting model parameters, the aggregation process of manual design at the present stage is difficult to consider comprehensively, and damage to the model performance caused by environmental interference factors cannot be reduced well.
Disclosure of Invention
Aiming at the problems of conventional federated learning algorithms, in particular the current situation in which a vision robot operates in isolation and cannot adequately fuse data from other related robots, the invention provides a federated learning method and system for different intelligent agents in an intelligent workshop.
In order to solve the technical problems, the technical scheme of the invention is as follows: a federated learning method for different intelligent agents in an intelligent workshop comprises the following steps:
establishing a graph-theory model and clustering the clients, namely the robots;
performing federated learning on the clients within the same cluster, specifically: each client in the same cluster trains a local model, performing edge computing on its own data to generate local model parameters; the local model parameters of each client are then aggregated based on a ResNet residual network to generate a global model and obtain global model parameters.
According to the scheme, the method comprises the following specific steps:
Step 1: establishing the graph-theory model and clustering the related clients, namely robots;
Step 2: each client in the same cluster performs data preprocessing on its acquired image data;
Step 3: performing federated learning on the clients within the same cluster;
Step 3.1: local model training of the robots: a local model is constructed using the convolutional neural network MobileNet, and the data of the different robots are used to train the local models locally, generating local model parameters;
Step 3.2: aggregating the global model: a global model is constructed using the residual neural network ResNet; the local model parameters are taken as features, the features are extracted by ResNet, and the global model parameters are generated.
According to the scheme, the step 1 is specifically as follows:
Since a plurality of robots exist in the collaborative workshop, consider an environment with N clients; the environment is expressed as a topological structure G(V, A, E).
$V = \{v_1, v_2, \ldots, v_N\}$ is the node set in the topological structure G, representing the position of each robot;
$e_{ij} \in E$ is the edge between node i and node j, where i, j are the node numbers, i.e. the robot numbers. The edges are defined according to the relative position and deflection angle between different robots; each robot is equipped with a positioning sensor and a vision sensor, and the relative position between two robots is
$d_{ij} = \left\| p_i - p_j \right\|_2$,
where $p_i$ is the position of robot i, $p_j$ is the position of robot j, and $\|\cdot\|_2$ denotes the 2-norm. Taking due south as the reference direction, the difference of deflection angles between two robots is
$\Delta\theta_{ij} = \left| \theta_i - \theta_j \right|$,
where $\theta_i$ is the deflection angle of robot i relative to due south, and $\theta_j$ is the deflection angle of robot j relative to due south;
$A = [a_{ij}]$ is the adjacency matrix of the topological structure G: if the relative position between two robots is less than 1/3 of the robots' visual detection range, and the difference of their deflection angles is less than 90 degrees, then $a_{ij} = 1$; otherwise $a_{ij} = 0$.
From the above data V, A and E, the graph-theory model G(V, A, E) is established.
The robots whose adjacency-matrix entries equal 1 are classified into the same cluster, so that federated learning can be carried out on them.
According to the scheme, the data preprocessing comprises the following steps:
Step 2.1: denoising preprocessing, used to reduce the noise in the original image data. A wavelet-transform algorithm based on the convolutional neural network CNN is used: first, single-scale discrete wavelet decomposition is performed on the image collected by the robot to obtain one low-frequency component and three high-frequency components, and then the noise in the four components is separated by four CNNs. Specifically:
Step 2.1.1: wavelet decomposition of the original image: single-scale discrete wavelet decomposition is applied to the image 2 times to obtain a low-frequency component in one direction and high-frequency components in the other three directions, as shown in formula (1):

$[L, H, V, D] = \mathrm{dwt2}(X, N, \mathrm{haar})$   (1)

where X denotes the image to be decomposed; N denotes the decomposition scale; L denotes the low-frequency component obtained after discrete wavelet decomposition of the image; H denotes the horizontal high-frequency component; V denotes the vertical high-frequency component; D denotes the diagonal high-frequency component; and haar indicates that the Haar basis is used for the wavelet decomposition. On the four components produced by the wavelet decomposition, four CNN networks are trained respectively, in order to remove the noise in each high-frequency and low-frequency component and obtain the corresponding predicted components $\hat{L}, \hat{H}, \hat{V}, \hat{D}$, as shown in formula (2):

$\hat{C} = f(W * C + b), \quad C \in \{L, H, V, D\}$   (2)

where $f(\cdot)$ denotes the ReLU activation function, and W and b are both training parameters;
Step 2.1.2: the image is reconstructed by the inverse discrete wavelet transform to obtain the final denoised image $\hat{X}$, as shown in formula (3):

$\hat{X} = \mathrm{idwt2}(\hat{L}, \hat{H}, \hat{V}, \hat{D})$   (3)

where idwt2 denotes the 2-fold inverse discrete wavelet decomposition used to reconstruct the image;
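A minimal pure-Python sketch of the Haar analysis and synthesis of formulas (1) and (3), assuming an orthonormal Haar basis, a single decomposition level and an even-sized grayscale image; the four CNN denoisers of formula (2) are omitted, so the round trip reproduces the input exactly:

```python
def haar_dwt2(img):
    """One level of 2-D Haar DWT on an even-sized grayscale image.
    Returns (L, H, V, D): approximation, horizontal, vertical and
    diagonal detail sub-bands."""
    h, w = len(img), len(img[0])
    L = [[0.0] * (w // 2) for _ in range(h // 2)]
    H = [[0.0] * (w // 2) for _ in range(h // 2)]
    V = [[0.0] * (w // 2) for _ in range(h // 2)]
    D = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            L[i // 2][j // 2] = (a + b + c + d) / 2.0   # low-frequency
            H[i // 2][j // 2] = (a - b + c - d) / 2.0   # horizontal detail
            V[i // 2][j // 2] = (a + b - c - d) / 2.0   # vertical detail
            D[i // 2][j // 2] = (a - b - c + d) / 2.0   # diagonal detail
    return L, H, V, D

def haar_idwt2(L, H, V, D):
    """Inverse transform: rebuild the image from the four sub-bands."""
    h2, w2 = len(L), len(L[0])
    img = [[0.0] * (w2 * 2) for _ in range(h2 * 2)]
    for i in range(h2):
        for j in range(w2):
            l, hh, v, d = L[i][j], H[i][j], V[i][j], D[i][j]
            img[2 * i][2 * j] = (l + hh + v + d) / 2.0
            img[2 * i][2 * j + 1] = (l - hh + v - d) / 2.0
            img[2 * i + 1][2 * j] = (l + hh - v - d) / 2.0
            img[2 * i + 1][2 * j + 1] = (l - hh - v + d) / 2.0
    return img
```

In the patent's pipeline the four sub-bands would pass through their respective CNNs before `haar_idwt2` is applied.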
Step 2.2: the denoised image is grayed using the maximum-value method, as shown in formula (4), removing irrelevant information, reducing the number of parameters and reducing the influence of the background on recognition, so as to enhance the real-time performance of the client:

$\mathrm{Gray}(x, y) = \max\big(R(x, y), G(x, y), B(x, y)\big)$   (4)

where $R(x, y)$, $G(x, y)$ and $B(x, y)$ are respectively the color components of the R, G and B channels of the denoised image $\hat{X}$, and $\mathrm{Gray}(x, y)$ is the final input grayscale image.
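The maximum-value graying of formula (4) can be sketched for an image stored as rows of (R, G, B) tuples (the storage format is an assumption for the example):

```python
def to_gray_max(rgb):
    """Maximum-value graying: Gray(x, y) = max(R(x, y), G(x, y), B(x, y))."""
    return [[max(pixel) for pixel in row] for row in rgb]
```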
According to the scheme, the step 3.1 is specifically as follows:
Step 3.1.1: the training data are the image sets collected by the robots within the same cluster, $D = \{D_1, D_2, \ldots, D_K\}$, where $D_k$ is the data set of the kth robot, namely the grayscale images obtained in step 2 (the image set after denoising preprocessing). An initial local model of the robot is constructed; the initial local model uses the lightweight convolutional neural network MobileNet, whose basic structure comprises a 3×3 depthwise separable convolution layer, batch normalization, a ReLU activation function and a 1×1 conventional convolution;
Step 3.1.2: the loss function of the robot's local model is set as shown in formula (5):

$F_k(w_t) = \dfrac{1}{n_k} \sum_{i=1}^{n_k} \ell\big(f(x_i; w_t),\, y_i\big)$   (5)

where $f(x_i; w_t)$ is the predicted value under the current local model parameters; $w_t$ is the weight at time t; i indexes the samples of the kth robot's data set $D_k$; the feature $x_i$ is an n×m RGB picture, with n the length and m the width of the picture; $y_i$ is the true label value; and $n_k$ is the number of samples of the kth data set;
Step 3.1.3: updating the local model parameters: from formula (5), the loss function of the kth client's local model is minimized, and the local model parameters $w$ are updated according to the Adam optimizer, as shown in formulas (6) and (7), where formula (7) is the parameter-update formula:

$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \qquad v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t \odot g_t$   (6)

$\hat{m}_t = \dfrac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \dfrac{v_t}{1 - \beta_2^t}, \qquad w_t = w_{t-1} - \eta\, \dfrac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}$   (7)

where m is the first-moment estimate of the gradient, i.e. the mean of the gradient; v is the second-moment estimate of the gradient, i.e. the biased variance of the gradient; g is the gradient; t denotes the current learning iteration; $\epsilon$ is a constant added to maintain numerical stability; $\odot$ is element-wise multiplication; $(\beta_1, \beta_2)$ is a set of hyper-parameters; $\hat{m}_t$ and $\hat{v}_t$ are the bias-corrected mean and biased variance of the gradient; and $\eta$ is the learning rate.
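One Adam update over a flat parameter list, following formulas (6) and (7); a sketch using the commonly adopted default hyper-parameters, which the patent does not fix:

```python
import math

def adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a flat parameter list w, given gradient g,
    moment buffers m and v, and 1-based iteration counter t."""
    for k in range(len(w)):
        m[k] = beta1 * m[k] + (1 - beta1) * g[k]          # first moment (6)
        v[k] = beta2 * v[k] + (1 - beta2) * g[k] * g[k]   # second moment (6)
        m_hat = m[k] / (1 - beta1 ** t)                   # bias correction (7)
        v_hat = v[k] / (1 - beta2 ** t)
        w[k] -= lr * m_hat / (math.sqrt(v_hat) + eps)     # parameter update (7)
    return w, m, v
```

On the first step the bias correction cancels the (1 − β) factors, so the update is close to −lr times the gradient's sign and magnitude.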
According to the scheme, the step 3.2 is specifically as follows: the parameters $w^{(l)}$ of each layer of the local models are taken as features; the features are extracted by the ResNet residual neural network, which adaptively generates the parameters of the corresponding layer of the global model. The number of neurons in the input layer of ResNet is consistent with that of the corresponding layer of the local model, and the number of neurons in its output layer, U, corresponds to the number of neurons in the corresponding layer of the global model; the number of local training iterations of the local model is E, and the number of communication rounds is R. After training its local model on local data, each client uploads the local model parameters to the terminal server; when the local model parameters of each layer reach a preset number, ResNet is trained to extract features and generate the global model, as shown in formula (8):

$w_{\mathrm{global}}^{(l)} = \mathrm{ResNet}\big(w_1^{(l)}, w_2^{(l)}, \ldots, w_K^{(l)}\big)$   (8)

where ResNet denotes the ResNet residual neural network, $w_{\mathrm{global}}^{(l)}$ are the global neural-network parameters, i.e. the global model parameters, and l denotes the layer number.
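The server-side aggregation loop of step 3.2 can be sketched as below. Feeding per-layer client parameters through a trained ResNet is the patent's method; here a plain element-wise mean stands in as a placeholder aggregator, since the trained network itself is not specified, and the function names are assumptions for the example:

```python
def federated_round(client_params, aggregate):
    """One communication round: clients upload per-layer parameter lists;
    the server aggregates each layer and returns the global model.
    `aggregate` stands in for the trained ResNet of formula (8)."""
    n_layers = len(client_params[0])
    global_model = []
    for l in range(n_layers):
        layer_stack = [p[l] for p in client_params]  # features for layer l
        global_model.append(aggregate(layer_stack))
    return global_model

def mean_aggregate(layer_stack):
    """Placeholder aggregator: element-wise mean across clients."""
    n = len(layer_stack)
    return [sum(ws) / n for ws in zip(*layer_stack)]
```

After each round, the returned global model would be broadcast back to the clients to overwrite their local parameters before the next E local iterations.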
The invention also relates to a federated learning system oriented to different intelligent agents in an intelligent workshop, comprising clients and a terminal server in communication connection with the clients.
Each client, namely a robot, comprises a positioning sensor, a vision sensor, a node memory and a node processor; the positioning sensor is used to locate the robot's current position, the vision sensor is used to acquire image data, and the node memory stores a node computer program which, when executed by the node processor, implements the steps in which each client in the same cluster trains a local model and performs edge computing on its own data to generate local model parameters.
The terminal server comprises a main memory and a main processor; the main memory stores a main computer program which, when executed by the main processor, implements the steps of establishing the graph-theory model, clustering the clients, namely the robots, and aggregating the local model parameters of each client based on the ResNet residual network to generate a global model and obtain the global model parameters.
The invention has the following beneficial effects:
The invention establishes a new federated learning framework. First, by considering the relevance of the clients, related clients are automatically grouped into clusters; second, the aggregation parameters are learned automatically, so that more accurate client model parameters are aggregated and the training precision of federated learning is improved, thereby improving the overall recognition precision of the workshop robots. The graph theory used in the invention describes the similarity between clients well, allowing client clusters to be identified and federated learning to be performed separately on different clusters, which reduces the damage that irrelevant image data does to the global model. Furthermore, the Residual Network (ResNet) is currently the most widely used convolutional feature-extraction network; the invention inputs the last fully connected layer parameters of the client models into ResNet as features, from which the global model is generated by aggregation. Through this federated learning framework, multiple client models are fused, the recognition performance of the robots is improved, and data privacy is protected at the same time.
Drawings
FIG. 1 is a schematic diagram of a plant environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an embodiment of the present invention;
FIG. 3 is a diagram of a ResNet-based federated learning framework according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a basic neural network structure of a client according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a ResNet residual neural network according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
The invention provides a federated learning method for different intelligent agents in an intelligent workshop.
The learning method of the invention is divided into two modules. The first module establishes the graph-theory model and clusters the related federated-learning clients. The second module performs federated learning among the robots in the same cluster, and is itself divided into two parts: in the first part, the robot clients train local models, performing edge computing on their own data to generate local model parameters; in the second part, the local model parameters of each client are aggregated based on a ResNet residual network to generate a global model and obtain the global model parameters, fusing the information of the associated robots, after which the global model parameters are sent back to each client to update the parameters of its local model.
The invention further provides a federated learning system oriented to different intelligent agents in an intelligent workshop, comprising clients and a terminal server in communication connection with the clients. In this embodiment, referring to fig. 1, the workshop environment comprises a plurality of vision robots and a terminal server, where a vision robot, hereinafter simply called a robot, is a client in the learning system. Each robot is equipped with at least a positioning sensor, a vision sensor, a node memory, a node processor and a power system. The vision sensor is used to acquire image data so that the robot can complete the recognition task; the GPS positioning sensor is used to locate the robot's current position; the power system, comprising a power supply and a drive motor, drives the robot's movement; and the node memory and node processor constitute an edge computer serving as the computing resource. In this embodiment, the vision sensor is a KEYENCE vision sensor (models CV-035M, CV-035C and CV-200M); the GPS positioning sensor adopts the "punctual atomic" GPS/BeiDou dual-positioning module ATK1218-BD; and the edge computer adopts a rainbow mini quad-core micro host. The terminal server comprises a main memory and a main processor for global computation. The robots and the terminal server communicate with each other, establishing communication connections over one or more of wifi, zigbee, 4G and 5G to transmit the model parameters.
Referring to fig. 2, the federated learning method for different intelligent agents in an intelligent workshop according to the invention comprises the following steps:
Step 1: establishing the graph-theory model and clustering the related clients, namely robots.
Referring to fig. 1, since there are a plurality of federated-learning clients in the workshop environment, that is, in this embodiment a plurality of robots cooperating in the workshop, the information collected by these clients may or may not be related. Therefore, before jointly training the federated-learning global model over a number of clients, the associated clients must be clustered: the clients are divided into M clusters, and federated learning is carried out on each cluster separately to obtain M global models, where cluster $C_m$ contains $N_m$ robots. Because irregular relations exist among the clients in most environments, the whole environment is described by means of graph theory.
Considering an environment with N clients, the environment can be expressed as a topological structure G(V, A, E).
$V = \{v_1, v_2, \ldots, v_N\}$ is the node set in the topological structure G, representing the position of each robot;
$e_{ij} \in E$ is the edge between node i and node j, where i, j are the node numbers, i.e. the robot numbers. In this embodiment, the edges are defined according to the relative position and deflection angle between the robots; each robot is equipped with a GPS positioning sensor and a vision sensor, from which the relative position between two robots can be obtained as
$d_{ij} = \left\| p_i - p_j \right\|_2$,
where $p_i$ is the position of robot i and $p_j$ is the position of robot j, the concrete values being collected by the GPS positioning sensor carried on each robot, and $\|\cdot\|_2$ denotes the 2-norm. Taking due south as the reference direction, the difference of deflection angles between two robots is
$\Delta\theta_{ij} = \left| \theta_i - \theta_j \right|$,
where $\theta_i$ is the deflection angle of robot i relative to due south, and $\theta_j$ is the deflection angle of robot j relative to due south.
$A = [a_{ij}]$ is the adjacency matrix of the topological graph G: if the relative position between two robots is less than 1/3 of the robots' visual detection range, and the difference of their deflection angles is less than 90 degrees, then $a_{ij} = 1$; otherwise $a_{ij} = 0$.
The graph-theory model G(V, A, E) is established from the above known data.
The robots whose adjacency-matrix entries equal 1 are thus grouped into the same cluster, and clustering is carried out so that federated learning can be performed on them.
In other embodiments, besides the robots described above for joint transport and assembly tasks, the method can also face immovable upstream and downstream inspection devices, i.e. devices that each inspect part of a product's quality. For these, the connection graph-theory model is built manually and directly, that is, the entries $a_{ij}$ are set by hand: setting $a_{ij} = 1$ means the devices belong to the same cluster, and setting $a_{ij} = 0$ means there is no data intersection between them.
Step 2: each client in the same cluster carries out data preprocessing on the acquired image data; the data preprocessing is divided into two parts, wherein the first part is used for reducing noise in original image data; the second part removes some irrelevant information in order to reduce the parameter quantity, thereby enhancing the real-time performance of the client.
Step 2.1: denoising pretreatment: by combining the superior characteristic extraction capability of the convolutional neural network and the denoising capability of the wavelet transform,
the wavelet transformation algorithm based on the convolutional neural network CNN is characterized in that firstly, single-scale discrete wavelet decomposition is carried out on an image collected by a robot to obtain a low-frequency component and three high-frequency components, and then noise images in the four components are separated through four CNNs; the method specifically comprises the following steps:
step 2.1.1: wavelet decomposition of an original image: performing single-scale discrete wavelet decomposition on the image for 2 times to obtain a low-frequency component in one direction and high-frequency components in the other three directions, as shown in formula (1):
Figure 492692DEST_PATH_IMAGE072
(1)
wherein X represents an image to be decomposed; l represents a low-frequency component obtained after the image is subjected to discrete wavelet decomposition; h represents a horizontal direction high-frequency component; v represents a vertical-direction high-frequency component; d represents a high-frequency component in the diagonal direction; haar represents that haar base is adopted during wavelet decomposition; based on the four components generated by the decomposition of the wavelet transform, 4 CNN networks were trained respectively in order to remove the noise in each of the high-frequency and low-frequency componentsObtaining corresponding prediction components
Figure 427149DEST_PATH_IMAGE073
As shown in equation (2):
Figure 827038DEST_PATH_IMAGE074
(2)
wherein the content of the first and second substances,
Figure 112526DEST_PATH_IMAGE075
a function of the ReLU activation is represented,
Figure 302068DEST_PATH_IMAGE076
and
Figure 966398DEST_PATH_IMAGE077
are all training parameters;
step 2.1.2: reconstructing the image by inverse discrete wavelet transform to obtain the final denoised image
Figure 851702DEST_PATH_IMAGE078
As shown in equation (3):
Figure 93328DEST_PATH_IMAGE079
(3)
wherein idwt2 represents 2 discrete wavelet inverse decompositions;
step 2.2: graying the denoised image, so as to reduce the number of parameters and the influence of the background on recognition; the maximum-value method is adopted, as shown in formula (4):

Gray(i, j) = max(R(i, j), G(i, j), B(i, j))   (4)

wherein R(i, j), G(i, j), B(i, j) are respectively the color components of the three channels R, G, B of the denoised image Ŷ, and Gray is the final input grayscale image.
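The maximum-value graying of formula (4) keeps, for every pixel, the largest of its three channel values. A sketch under the assumption that the image is represented as a nested list of (R, G, B) tuples:

```python
def gray_max(rgb_image):
    # Maximum-value graying (formula (4)): each output pixel is
    # max(R, G, B) of the corresponding input pixel.
    return [[max(pixel) for pixel in row] for row in rgb_image]
```

For example, a pure-green pixel (10, 200, 30) maps to the gray value 200.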
And step 3: carry out federated learning on the clients in the same cluster; referring to FIG. 3, the training data is the set of images collected by the visual sensors in the same cluster, {D_1, D_2, …, D_K}, wherein D_k is the dataset of the k-th robot, namely the grayscale images obtained in step 2 (the image set after denoising preprocessing). Because the clients in the same class use the same visual sensors but obtain different image data containing different features, this embodiment is homogeneous vertical federated learning. The federated learning is divided into two parts: local model training on the clients, and aggregation of the local models to generate a global model. Step 3 comprises the following steps:
step 3.1: local model training of the robot: in this embodiment, a classical convolutional neural network is trained locally on the data of the different robots. The convolutional neural network is initialized with network parameters pre-trained on the public ImageNet dataset, and, following the idea of transfer learning, only the parameters of the last layer of the network are updated during local training, thereby reducing the number of parameters and improving real-time performance.
Step 3.1.1: construct the initial local model of the robot. So that the robot can process image data locally, the initial local model uses the lightweight convolutional neural network MobileNet; referring to fig. 4, the basic structure of the model comprises a 3 × 3 depthwise separable convolution layer, batch normalization, a ReLU activation function and a 1 × 1 conventional convolution. One convolution layer and one pooling layer are added for every 1000 samples of robot i, the added convolution layer having a kernel size of 5 × 5.
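The appeal of the depthwise separable convolution in MobileNet's basic block is its parameter count. The following back-of-the-envelope comparison (an illustration, not part of the patent) shows why the lightweight model suits on-robot processing:

```python
def conv_params(k, c_in, c_out):
    # Weights of a standard k x k convolution, biases ignored.
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    # MobileNet-style block: a k x k depthwise convolution (one filter
    # per input channel) followed by a 1 x 1 pointwise convolution.
    return k * k * c_in + c_in * c_out

# 3 x 3 kernel, 32 -> 64 channels: the separable block needs roughly
# one eighth of the parameters of the standard convolution.
standard = conv_params(3, 32, 64)        # 18432
separable = separable_params(3, 32, 64)  # 2336
```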
Step 3.1.2: set the loss function of the robot local model, as shown in formula (5):

F_k(w_t) = (1 / n_k) Σ_{i=1}^{n_k} ℓ(f(x_i; w_t), y_i)   (5)

wherein f(x_i; w_t) is the predicted value under the current local model parameters, w_t denotes the weights at time t, (x_i, y_i) denotes sample i of the k-th robot dataset D_k, the feature x_i is an n × m RGB picture with n the length and m the width of the picture, y_i represents the true value of the label, and n_k represents the number of samples of the k-th dataset;
step 3.1.3: update the local model parameters: from formula (5), the loss function F_k(w_t) of the k-th client's local model is updated according to the Adam optimizer, as shown in formula (6) and formula (7), formula (7) being the parameter-update formula:

m_t = β1 m_{t−1} + (1 − β1) g_t,  v_t = β2 v_{t−1} + (1 − β2) g_t ⊙ g_t,  m̂_t = m_t / (1 − β1^t),  v̂_t = v_t / (1 − β2^t)   (6)

w_{t+1} = w_t − η m̂_t / (√v̂_t + ε)   (7)

wherein m is the first-moment estimate of the gradient, i.e. the mean of the gradient; v is the second-moment estimate of the gradient, i.e. the biased variance of the gradient; g is the gradient; t denotes the iteration number of the current learning; ε is a constant added to maintain numerical stability, set in this embodiment to 10⁻⁸; ⊙ is element-wise multiplication; (β1, β2) is a set of hyper-parameters whose values are fixed in this embodiment; m̂_t and v̂_t are the bias-corrected mean and biased variance of the gradient; and η is the learning rate. After the local model parameters are updated through this process, they are uploaded to the terminal server for aggregating the global model.
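A scalar sketch of one Adam step as in formulas (6) and (7); the defaults β1 = 0.9 and β2 = 0.999 are common choices, not values disclosed by the patent:

```python
def adam_step(w, g, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update over parameter list `w` with gradient `g`:
    # biased moment estimates, bias correction, then the parameter step.
    m = [beta1 * mi + (1 - beta1) * gi for mi, gi in zip(m, g)]
    v = [beta2 * vi + (1 - beta2) * gi * gi for vi, gi in zip(v, g)]
    m_hat = [mi / (1 - beta1 ** t) for mi in m]  # corrected mean
    v_hat = [vi / (1 - beta2 ** t) for vi in v]  # corrected variance
    w = [wi - lr * mh / (vh ** 0.5 + eps)
         for wi, mh, vh in zip(w, m_hat, v_hat)]
    return w, m, v
```

On the first iteration (t = 1) with a unit gradient the step size is essentially the learning rate, since the bias-corrected moments both start near 1.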
Step 3.2: aggregate the global model: a residual neural network (ResNet) is used to construct the global model; the local model parameters are taken as features, the features are extracted by the ResNet, and the global model parameters are generated. Specifically:

The terminal server aggregates the local model parameters uploaded in step 3.1. Unlike the traditional federated averaging aggregation algorithm, which only averages the local model parameters, this embodiment, inspired by the stacking combination strategy in ensemble learning, designs an aggregation module based on a ResNet residual network for aggregating the client model parameters to generate the global model; the aggregation module can aggregate the client parameters automatically without additionally formulating an aggregation rule.
The parameters W_k^l of each layer l of the local models are taken as features, the features are extracted by the ResNet residual neural network, and the parameters of the corresponding layer of the global model are then generated adaptively, as shown in FIG. 5. The number of neurons in the input layer of the ResNet is consistent with that of the corresponding layer of the local model, and the number of neurons in the output layer is U, matching the number of neurons in the corresponding layer of the global model; the number of local training iterations of the local model is E, and the number of communication rounds is R. After the local model is trained with local data, the client uploads the local model parameters to the terminal server; when the local model parameters of each layer reach a preset number, the ResNet is trained to extract features and generate the global model, as shown in formula (8):

W_G^l = ResNet(W_1^l, W_2^l, …, W_K^l)   (8)

wherein ResNet(·) represents the ResNet residual neural network, W_G^l are the global neural network parameters, i.e. the global model parameters, and l denotes the layer index.
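For contrast with the ResNet-based aggregation of formula (8), the conventional federated averaging baseline that step 3.2 departs from can be sketched as a (sample-weighted) mean over same-layer client parameter vectors; the function name fedavg is an assumption of this sketch:

```python
def fedavg(client_params, client_sizes=None):
    # Conventional federated averaging: the aggregate of one layer's
    # parameters is the mean of the clients' parameter vectors,
    # optionally weighted by each client's number of samples.
    n = len(client_params)
    sizes = client_sizes or [1] * n
    total = sum(sizes)
    dim = len(client_params[0])
    return [sum(sizes[k] * client_params[k][i] for k in range(n)) / total
            for i in range(dim)]
```

The patent's aggregation module instead feeds the stacked client parameters through a trained ResNet, letting the network learn the fusion rule rather than fixing it to a mean.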
Considering the relevance of each participant, i.e. client, the method clusters the related clients for federated learning; secondly, the local model parameters can be aggregated automatically instead of by a manually fixed fusion strategy. These two aspects improve the training accuracy of the federated learning. Each local robot uploads its trained MobileNet model parameters to the terminal server in stages; the terminal server automatically aggregates the parameters to obtain the global model parameters. The aggregated global model parameters contain the training experience of every local robot and indirectly fuse the features learned by the local robots. The global model parameters are then sent back to each local robot to update the parameters of each local model, so that a MobileNet model with high generalization performance is trained to recognize the environment, as described in step 3.
The parts not described in detail in the present invention are the same as, or implemented using, the prior art.
The foregoing is a detailed description of the present invention in conjunction with specific embodiments, and the practice of the invention is not to be considered limited to these descriptions. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as falling within the protection scope of the invention.

Claims (7)

1. A federated learning method for different intelligent agents in an intelligent workshop, characterized by comprising the following steps:
establishing a graph-theory model and clustering the clients, i.e. the robots;
carrying out federated learning on the clients in the same cluster, specifically: each client in the same cluster performs local model training, carrying out edge computing with its own data to generate local model parameters; and the local model parameters of each client are aggregated based on a ResNet residual network to generate a global model and obtain the global model parameters.
2. The federated learning method for different intelligent agents in an intelligent workshop according to claim 1, characterized by the following specific steps:
step 1: establishing a graph-theory model and clustering the related clients, i.e. the robots;
step 2: each client in the same cluster performs data preprocessing on the acquired image data;
step 3: carrying out federated learning on the clients in the same cluster;
step 3.1: local model training of the robot: a local model is constructed with the convolutional neural network MobileNet, and the local model is trained locally with the data of the different robots to generate the local model parameters;
step 3.2: aggregating the global model: a residual neural network ResNet is used to construct the global model; the local model parameters are taken as features, the features are extracted by the ResNet, and the global model parameters are generated.
3. The federated learning method for different intelligent agents in an intelligent workshop according to claim 2, characterized in that step 1 specifically comprises:
since a plurality of robots exist in the collaborative workshop, an environment with N clients is set and expressed as a topological structure G(V, A, E);
V is the node set in the topological structure G, representing the position of each robot;
e_ij ∈ E is the edge between node i and node j, wherein i, j denote the numbers of the nodes, i.e. the numbers of the robots; the edges are defined according to the relative position and the deflection angle between different robots; each robot is provided with a positioning sensor and a visual sensor; the relative position between two robots is p_ij = ‖p_i − p_j‖₂, wherein p_i is the position of robot i, p_j is the position of robot j, and ‖·‖₂ denotes the 2-norm; taking due south as the reference direction, the difference in deflection angle between two robots is θ_ij = |θ_i − θ_j|, wherein θ_i is the deflection angle of robot i relative to due south, and θ_j is the deflection angle of robot j relative to due south;
A is the adjacency matrix of the topological structure G; if the relative position between two robots is less than 1/3 of the robots' visual detection range and the difference in deflection angle is less than 90 degrees, A_ij = 1; otherwise A_ij = 0;
the graph-theory model G(V, A, E) is established from the above V, E and A;
and the robots whose adjacency-matrix entries are 1 are classified into one cluster, so as to carry out federated learning on these robots.
4. The federated learning method for different intelligent agents in an intelligent workshop according to claim 2, characterized in that the data preprocessing steps are as follows:
step 2.1: denoising preprocessing: a wavelet-transform algorithm based on the convolutional neural network CNN: first, single-scale discrete wavelet decomposition is performed on the image collected by the robot to obtain a low-frequency component and three high-frequency components, and then the noise images in the four components are separated by four CNNs; specifically:
step 2.1.1: wavelet decomposition of the original image: perform a single-scale two-dimensional discrete wavelet decomposition on the image to obtain a low-frequency component and high-frequency components in three directions, as shown in formula (1):

[L, H, V, D] = dwt2(X, 'haar')   (1)

wherein X represents the image to be decomposed; L represents the low-frequency component obtained after discrete wavelet decomposition of the image; H represents the high-frequency component in the horizontal direction; V represents the high-frequency component in the vertical direction; D represents the high-frequency component in the diagonal direction; 'haar' indicates that the Haar basis is adopted for the wavelet decomposition; based on the four components generated by the wavelet decomposition, four CNN networks are trained respectively to remove the noise in each high-frequency and low-frequency component, obtaining the corresponding predicted components, as shown in formula (2):

Ĉ = f(W ∗ C + b),  C ∈ {L, H, V, D}   (2)

wherein f(·) represents the ReLU activation function, and W and b are both training parameters;
step 2.1.2: reconstruct the image by the inverse discrete wavelet transform to obtain the final denoised image Ŷ, as shown in formula (3):

Ŷ = idwt2(L̂, Ĥ, V̂, D̂, 'haar')   (3)

wherein idwt2 denotes the two-dimensional inverse discrete wavelet transform used to reconstruct the image;
step 2.2: graying the denoised image with the maximum-value method, as shown in formula (4):

Gray(i, j) = max(R(i, j), G(i, j), B(i, j))   (4)

wherein R(i, j), G(i, j), B(i, j) are respectively the color components of the three channels R, G, B of the denoised image Ŷ, and Gray is the final input grayscale image.
5. The federated learning method for different intelligent agents in an intelligent workshop according to claim 2, characterized in that step 3.1 specifically comprises:
step 3.1.1: the training data is the set of images collected by the robots in the same cluster, {D_1, D_2, …, D_K}, wherein D_k is the dataset of the k-th robot; construct the initial local model of the robot, which uses the lightweight convolutional neural network MobileNet, its basic structure comprising a 3 × 3 depthwise separable convolution layer, batch normalization, a ReLU activation function and a 1 × 1 conventional convolution;
step 3.1.2: set the loss function of the robot local model, as shown in formula (5):

F_k(w_t) = (1 / n_k) Σ_{i=1}^{n_k} ℓ(f(x_i; w_t), y_i)   (5)

wherein f(x_i; w_t) is the predicted value under the current local model parameters, w_t denotes the weights at time t, (x_i, y_i) denotes sample i of the k-th robot dataset D_k, the feature x_i is an n × m RGB picture with n the length and m the width of the picture, y_i represents the true value of the label, and n_k represents the number of samples of the k-th dataset;
step 3.1.3: update the local model parameters: from formula (5), the loss function F_k(w_t) of the k-th client's local model is updated according to the Adam optimizer, as shown in formula (6) and formula (7), formula (7) being the parameter-update formula:

m_t = β1 m_{t−1} + (1 − β1) g_t,  v_t = β2 v_{t−1} + (1 − β2) g_t ⊙ g_t,  m̂_t = m_t / (1 − β1^t),  v̂_t = v_t / (1 − β2^t)   (6)

w_{t+1} = w_t − η m̂_t / (√v̂_t + ε)   (7)

wherein m is the first-moment estimate of the gradient, i.e. the mean of the gradient; v is the second-moment estimate of the gradient, i.e. the biased variance of the gradient; g is the gradient; t denotes the iteration number of the current learning; ε represents a constant added to maintain numerical stability; ⊙ is element-wise multiplication; (β1, β2) is a set of hyper-parameters; m̂_t and v̂_t are the bias-corrected mean and biased variance of the gradient; and η is the learning rate.
6. The federated learning method for different intelligent agents in an intelligent workshop according to claim 5, characterized in that step 3.2 specifically comprises: the parameters W_k^l of each layer of the local models are taken as features, the features are extracted by the ResNet residual neural network, and the parameters of the corresponding layer of the global model are then generated adaptively; the number of neurons in the input layer of the ResNet is consistent with that of the corresponding layer of the local model, and the number of neurons in the output layer is U, matching the number of neurons in the corresponding layer of the global model; the number of local training iterations of the local model is E, and the number of communication rounds is R; after the local model is trained with local data, the client uploads the local model parameters to the terminal server; when the local model parameters of each layer reach a preset number, the ResNet is trained to extract features and generate the global model, as shown in formula (8):

W_G^l = ResNet(W_1^l, W_2^l, …, W_K^l)   (8)

wherein ResNet(·) represents the ResNet residual neural network, W_G^l are the global neural network parameters, i.e. the global model parameters, and l denotes the layer index.
7. A federated learning system for different intelligent agents in an intelligent workshop, characterized by comprising: clients and a terminal server in communication connection with the clients;
each client, i.e. robot, comprises a positioning sensor, a visual sensor, a node memory and a node processor; the positioning sensor is used to locate the current position of the robot, the visual sensor is used to acquire image data, and the node memory stores a node computer program which, when executed by the node processor, realizes the steps of claim 1 in which each client in the same cluster performs local model training and carries out edge computing with its own data to generate local model parameters;
the terminal server comprises a main memory and a main processor; the main memory stores a main computer program which, when executed by the main processor, realizes the steps of claim 1 of establishing a graph-theory model, clustering the clients, i.e. the robots, and aggregating the local model parameters of each client based on a ResNet residual network to generate a global model and obtain the global model parameters.
CN202110715806.9A 2021-06-28 2021-06-28 Federal learning method and system for different intelligent agents in intelligent workshop Active CN113255937B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110715806.9A CN113255937B (en) 2021-06-28 2021-06-28 Federal learning method and system for different intelligent agents in intelligent workshop


Publications (2)

Publication Number Publication Date
CN113255937A true CN113255937A (en) 2021-08-13
CN113255937B CN113255937B (en) 2021-11-09

Family

ID=77189943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110715806.9A Active CN113255937B (en) 2021-06-28 2021-06-28 Federal learning method and system for different intelligent agents in intelligent workshop

Country Status (1)

Country Link
CN (1) CN113255937B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113406974A (en) * 2021-08-19 2021-09-17 南京航空航天大学 Learning and resource joint optimization method for unmanned aerial vehicle cluster federal learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254067A (en) * 2011-07-05 2011-11-23 重庆大学 Large-scale grouping optimizing method of parts based on feed characteristic
CN111079639A (en) * 2019-12-13 2020-04-28 中国平安财产保险股份有限公司 Method, device and equipment for constructing garbage image classification model and storage medium
CN112181971A (en) * 2020-10-27 2021-01-05 华侨大学 Edge-based federated learning model cleaning and equipment clustering method, system, equipment and readable storage medium
US20210051169A1 (en) * 2019-08-15 2021-02-18 NEC Laboratories Europe GmbH Thwarting model poisoning in federated learning
CN112990276A (en) * 2021-02-20 2021-06-18 平安科技(深圳)有限公司 Federal learning method, device, equipment and storage medium based on self-organizing cluster


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHAOYANG HE 等: "Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge", 《HTTPS://ARXIV.ORG/ABS/2007.14513》 *
周翔翔 等: "基于图论的作战指挥决策群组划分算法", 《系统工程与电子技术》 *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant