CN114677648A - Network training method, pedestrian re-identification method, network training device, pedestrian re-identification device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114677648A
Authority
CN
China
Prior art keywords
network
client
training
neural network
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210393711.4A
Other languages
Chinese (zh)
Inventor
庄伟铭
张帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sensetime International Pte Ltd
Original Assignee
Sensetime International Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sensetime International Pte Ltd filed Critical Sensetime International Pte Ltd
Priority to CN202210393711.4A
Publication of CN114677648A
Legal status: Withdrawn

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F18/00 Pattern recognition
                    • G06F18/20 Analysing
                        • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                            • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
                        • G06F18/22 Matching criteria, e.g. proximity measures
                        • G06F18/23 Clustering techniques
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00 Computing arrangements based on biological models
                    • G06N3/02 Neural networks
                        • G06N3/04 Architecture, e.g. interconnection topology
                            • G06N3/045 Combinations of networks
                        • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a network training method and apparatus, a pedestrian re-identification method and apparatus, an electronic device, and a storage medium. The method includes: a server receives second network parameters sent by a plurality of clients, where a second neural network and a local image data set are deployed in each client, and the second network parameters are obtained by training the second neural network based on the local image data set; the plurality of clients are clustered to obtain a plurality of client groups; for each client group, a first neural network is updated based on the second network parameters sent by the clients included in the group, to obtain updated first network parameters corresponding to that group; the updated first network parameters are sent to each client included in the group, so as to update the second network parameters corresponding to those clients; and the above steps are executed iteratively until a preset training condition is met, after which the first neural network and/or the second neural network is used to perform image processing on an image to be processed.

Description

Network training method, pedestrian re-identification method, network training device, pedestrian re-identification device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for network training and pedestrian re-identification, an electronic device, and a storage medium.
Background
Pedestrian re-identification (also known as person re-identification, or ReID) is a technique that uses computer vision to determine whether a particular pedestrian is present in an image or video sequence. At present, pedestrian re-identification is widely applied across fields and industries such as intelligent video surveillance and intelligent security. Because pedestrian re-identification involves private data such as faces, bodies, and personal identities when processing an image or video frame sequence, a pedestrian re-identification method that avoids leaking private data is urgently needed.
Disclosure of Invention
The present disclosure provides a technical scheme for network training and pedestrian re-identification methods and apparatuses, an electronic device, and a storage medium.
According to an aspect of the present disclosure, a network training method is provided, where the method is applied to a server, where a first neural network is deployed in the server, the first neural network having first network parameters, and the method includes: receiving a plurality of second network parameters sent by a plurality of clients, wherein a second neural network and a local image data set are respectively deployed in each client, and the second network parameters are obtained by training the second neural network based on the local image data set; clustering the plurality of clients to obtain a plurality of client groups; for any client group, updating the first neural network based on the second network parameters sent by each client included in the client group to obtain updated first network parameters corresponding to the client group; sending the updated first network parameters to each client included in the client group so as to update the second network parameters corresponding to each client included in the client group; and iteratively executing the steps until iterative training meets a preset training condition, wherein the first neural network and/or the second neural network after iterative training are/is used for carrying out image processing on the image to be processed.
In a possible implementation manner, the clustering the plurality of clients to obtain a plurality of client groups includes: receiving a first feature vector sent by each client, wherein for any client, the first feature vector is obtained by performing feature extraction on a shared image by using the second neural network trained in the current training round; clustering the first feature vectors to obtain a plurality of feature vector groups; and aiming at any one feature vector group, dividing the client corresponding to each first feature vector included in the feature vector group into the same client group.
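As an illustrative sketch only (the clustering algorithm, the function name `group_clients`, and the group count are assumptions, not specified by the present disclosure), the grouping step above could look like the following: the server collects one feature vector per client, all extracted from the same shared image, clusters the vectors, and maps clients whose vectors fall into the same cluster into one client group.

```python
import numpy as np

def group_clients(client_ids, first_feature_vectors, num_groups=2, iters=20):
    """Cluster clients by their shared-image feature vectors (simple k-means)."""
    feats = np.asarray(first_feature_vectors, dtype=float)
    # Deterministic farthest-point initialization of the cluster centers.
    centers = [feats[0]]
    while len(centers) < num_groups:
        dists = np.min([np.linalg.norm(feats - c, axis=1) for c in centers], axis=0)
        centers.append(feats[dists.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each client's feature vector to the nearest center.
        d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center; keep the old one if its cluster is empty.
        for k in range(num_groups):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(axis=0)
    groups = {}
    for cid, lab in zip(client_ids, labels):
        groups.setdefault(int(lab), []).append(cid)
    return list(groups.values())
```

Clients with similar feature vectors (and hence, presumably, similar data distributions) end up in the same group; any clustering method over the first feature vectors would serve the same purpose.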
In a possible implementation manner, the updating the first neural network based on the second network parameter sent by each of the clients included in the client group to obtain an updated first network parameter corresponding to the client group includes: determining the weight of each second network parameter corresponding to the client group; and performing weighted fusion on the plurality of second network parameters corresponding to the client group according to the weight of each second network parameter corresponding to the client group so as to update the first neural network and obtain the updated first network parameter.
In one possible implementation manner, the determining the weight of each second network parameter corresponding to the client group includes: receiving a training variation parameter sent by each client included in the client group, wherein the training variation parameter is used for indicating the variation degree of the second neural network deployed in the client before and after training in the current training turn for any client; and determining the weight of the second network parameter corresponding to each client according to the training change parameters sent by each client.
In a possible implementation manner, for any one of the clients, the training variation parameter sent by the client is determined based on the feature similarity between a first feature vector, obtained by the client performing feature extraction on a shared image using the second neural network trained in the current training round, and a second feature vector, obtained by performing feature extraction on the shared image using the second neural network not yet trained in the current training round.
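The weighting and fusion steps in the implementations above could be sketched as follows. This is an assumption-laden illustration, not the patent's exact method: the disclosure leaves the mapping from training-change parameter to weight open, so a simple proportional normalization is used here, and `fuse_group_parameters` is a hypothetical name.

```python
import numpy as np

def fuse_group_parameters(param_dicts, change_params):
    """Weighted fusion of per-client second network parameters for one group.

    `param_dicts` holds one parameter dict per client in the group;
    `change_params` holds the matching training-change parameters. A larger
    change parameter gets a larger weight here (an assumed mapping).
    """
    change = np.asarray(change_params, dtype=float)
    weights = change / change.sum()          # normalize weights to sum to 1
    fused = {}
    for name in param_dicts[0]:
        # Weighted sum of each parameter tensor across the group's clients.
        fused[name] = sum(w * np.asarray(p[name], dtype=float)
                          for w, p in zip(weights, param_dicts))
    return fused
```

The fused dict plays the role of the updated first network parameters for that client group.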
According to an aspect of the present disclosure, there is provided a network training method applied to a target client in which a second neural network and a local image data set are deployed, the second neural network having second network parameters, the method including: sending the second network parameters to a server, wherein the second network parameters are obtained by training the second neural network based on the local image data set; receiving a first network parameter returned by the server, wherein a first neural network is deployed in the server, the first network parameter is obtained by the server by updating the first neural network based on the second network parameters sent by the clients included in a client group, and the client group is obtained by the server clustering a plurality of clients including the target client; training the second neural network according to the first network parameters and the local image data set to obtain updated second network parameters; and iteratively executing the above steps until iterative training meets a preset training condition, wherein the first neural network and/or the second neural network after iterative training is used for performing image processing on an image to be processed.
In one possible implementation, the method further includes: performing feature extraction on the shared image by using the trained second neural network in the current training round to obtain a first feature vector; and sending the first feature vector to the server.
In one possible implementation, the method further includes: determining a training variation parameter based on the second neural network before training in the current training round and the second neural network after training in the current training round, wherein the training variation parameter is used for indicating the variation degree of the second neural network before and after training in the current training round; and sending the training change parameters to the server.
In one possible implementation, the determining a training variation parameter based on the second neural network before training in the current training round and the second neural network after training in the current training round includes: performing feature extraction on the shared image by using the second neural network before training in the current training round to obtain a second feature vector; performing feature extraction on the shared image by using the trained second neural network in the current training round to obtain a first feature vector; determining the training variation parameter based on feature similarity between the first feature vector and the second feature vector.
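A minimal sketch of this client-side computation follows. The mapping `change = 1 - cosine similarity` is an assumption of this sketch; the disclosure only states that the training variation parameter is determined based on the feature similarity between the two vectors.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def training_change_parameter(first_feature_vec, second_feature_vec):
    """Degree of change of the second neural network before/after this round.

    `first_feature_vec`: shared-image features from the trained network;
    `second_feature_vec`: shared-image features from the untrained network.
    """
    return 1.0 - cosine_similarity(first_feature_vec, second_feature_vec)
```

Identical features give a change of 0 (the round barely altered the network); orthogonal features give a change of 1.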
According to an aspect of the present disclosure, there is provided a pedestrian re-identification method including: carrying out pedestrian re-recognition on the image to be recognized through a target pedestrian re-recognition network, and determining a pedestrian re-recognition result; the target pedestrian re-identification network is a first neural network or a second neural network obtained by adopting the network training method.
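For illustration, the inference step could be sketched as below; `embed` stands in for the trained first or second neural network, and nearest-neighbour ranking by cosine similarity is a common re-identification convention assumed here, not mandated by the disclosure.

```python
import numpy as np

def reidentify(query_image, gallery_images, embed):
    """Return the index of the gallery image best matching the query.

    `embed` maps an image to a feature vector (here, the trained network).
    """
    q = np.asarray(embed(query_image), dtype=float)
    q = q / np.linalg.norm(q)
    best_idx, best_sim = -1, -np.inf
    for i, g in enumerate(gallery_images):
        v = np.asarray(embed(g), dtype=float)
        sim = float(q @ (v / np.linalg.norm(v)))   # cosine similarity
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx
```

In practice the pedestrian re-identification result would also include the similarity score or a ranked list rather than a single index.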
According to an aspect of the present disclosure, there is provided a network training apparatus, which is applied to a server, in which a first neural network is deployed, the first neural network having a first network parameter, the apparatus including: the system comprises a receiving module, a processing module and a processing module, wherein the receiving module is used for receiving a plurality of second network parameters sent by a plurality of clients, a second neural network and a local image data set are respectively deployed in each client, and the second network parameters are obtained by training the second neural network based on the local image data sets; the clustering module is used for clustering the plurality of clients to obtain a plurality of client groups; an updating module, configured to update the first neural network based on the second network parameter sent by each client included in the client group, to obtain an updated first network parameter corresponding to the client group, for any client group; a sending module, configured to send the updated first network parameter to each client included in the client group, so as to update the second network parameter corresponding to each client included in the client group; and the iteration module is used for iteratively executing the steps until the iterative training meets a preset training condition, and the first neural network and/or the second neural network after the iterative training is used for carrying out image processing on the image to be processed.
According to an aspect of the present disclosure, there is provided a network training apparatus applied to a target client in which a second neural network and a local image data set are deployed, the second neural network having second network parameters, the apparatus comprising: a sending module, configured to send the second network parameters to a server, where the second network parameters are obtained by training the second neural network based on the local image data set; a receiving module, configured to receive a first network parameter returned by the server, where a first neural network is deployed in the server, the first network parameter is obtained by the server by updating the first neural network based on the second network parameters sent by the clients included in a client group, and the client group is obtained by the server clustering a plurality of clients including the target client; a training module, configured to train the second neural network according to the first network parameters and the local image data set to obtain updated second network parameters; and an iteration module, configured to iteratively execute the above steps until iterative training meets a preset training condition, where the first neural network and/or the second neural network after iterative training is used for performing image processing on an image to be processed.
According to an aspect of the present disclosure, there is provided a pedestrian re-recognition apparatus including: the pedestrian re-identification module is used for carrying out pedestrian re-identification on the image to be identified through the target pedestrian re-identification network and determining a pedestrian re-identification result; the target pedestrian re-identification network is a first neural network or a second neural network obtained by training by adopting the network training method.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiment of the present disclosure, a server deployed with a first neural network having first network parameters receives a plurality of second network parameters sent by a plurality of clients, where a second neural network and a local image data set are deployed in each client, and the second network parameters are obtained by training the second neural network based on the local image data set; the plurality of clients are clustered to obtain a plurality of client groups; for any client group, the first neural network is updated based on the second network parameters sent by each client included in the client group, to obtain updated first network parameters corresponding to the client group; the updated first network parameters are sent to each client included in the client group so as to update the second network parameters corresponding to each client included in the client group; and the above steps are executed iteratively until iterative training meets a preset training condition, where the first neural network and/or the second neural network after iterative training is used for performing image processing on an image to be processed. In the process of federated network training by the server in combination with the clients, the plurality of clients are clustered into client groups, and joint training is performed in units of client groups to obtain personalized network parameters adapted to the clients in each group. Because the image data sets remain stored on the clients throughout training and need not be uploaded to the server, personalized updating of the neural network in each client is effectively achieved during federated training while protecting data privacy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow diagram of a network training method in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a diagram of a federated learning-based network training system in accordance with an embodiment of the present disclosure;
FIG. 3 shows a flow diagram of a network training method in accordance with an embodiment of the present disclosure;
FIG. 4 shows a block diagram of a network training apparatus according to an embodiment of the present disclosure;
FIG. 5 shows a block diagram of a network training apparatus according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure;
fig. 7 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the group consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
The pedestrian re-identification technology aims to judge whether a specific pedestrian exists in an image or a video sequence by utilizing a computer vision technology. At present, the pedestrian re-identification technology is widely applied to multiple fields and industries, such as intelligent video monitoring and intelligent security.
In the related art, developing pedestrian re-identification technology and optimizing the pedestrian re-identification network require uploading a large amount of video surveillance data to a central server, where data processing and network optimization are then performed. However, with the strengthening of personal privacy protection by laws and regulations in various countries, such as the European Union's General Data Protection Regulation, uploading sensitive information such as faces, bodies, and personal identities to a central server during the processing of an image or video frame sequence risks leaking personal privacy data. This poses great challenges to the development and application of the related art, and some countries have prohibited research related to pedestrian re-identification. If the problem of personal privacy protection is not effectively solved, the development of pedestrian re-identification technology faces severe challenges.
Federated learning is a new distributed artificial intelligence training technique that can combine multiple parties to train and optimize a pedestrian re-recognition network while satisfying privacy protection. Applying federated learning to pedestrian re-recognition can effectively solve the data privacy problem. However, the image data in different clients differ in resolution, data amount, data distribution, and the like; that is, data heterogeneity exists between different clients. Negatively affected by this data heterogeneity, the trained pedestrian re-recognition network still needs further improvement.
In order to solve the influence of data heterogeneity among different clients in the federal training process, the embodiment of the disclosure provides a network training method, wherein a plurality of second network parameters sent by a plurality of clients are received in a server side deployed with a first neural network with first network parameters, a second neural network and a local image data set are respectively deployed in each client, and the second network parameters are obtained by training the second neural network based on the local image data set; clustering a plurality of clients to obtain a plurality of client groups; for any client group, updating the first neural network based on the second network parameters sent by each client included in the client group to obtain updated first network parameters of the corresponding client group; sending the updated first network parameters to each client included in the client group so as to update the second network parameters corresponding to each client included in the client group; and iteratively executing the steps until the iterative training meets the preset training condition, wherein the first neural network and/or the second neural network after the iterative training is used for carrying out image processing on the image to be processed.
In the process of federated network training by the server in combination with the clients, the plurality of clients are clustered to obtain a plurality of client groups. Clients in the same client group have similar data distributions, so joint training in units of client groups yields personalized network parameters adapted to the clients in each group.
The specific training process of the network training method according to the embodiment of the present disclosure is described in detail below.
Fig. 1 shows a flow diagram of a network training method according to an embodiment of the present disclosure. The network training method can be executed by a server, wherein a first neural network is deployed in the server, and the first neural network has first network parameters. The server may be a central server. In some possible implementations, the network training method may be implemented by the central server invoking computer readable instructions stored in memory. As shown in fig. 1, the network training method may include:
in step S11, a plurality of second network parameters sent by a plurality of clients are received, where a second neural network and a local image dataset are deployed in each client, and the second network parameters are obtained by training the second neural network based on the local image dataset.
The server is connected with at least two clients, and a second neural network is respectively deployed in each client. The server side can combine at least two clients to conduct federated network training based on a federated learning algorithm. The specific number of the at least two clients may be determined according to actual situations, and may be two or more, and this disclosure does not specifically limit this.
The first neural network and the second neural network have the same network structure, and the training data needed in the training process relates to privacy data such as a human face, a human body, and an individual identity, for example, the first neural network and the second neural network may be a pedestrian re-recognition network, a human face recognition network, an identity recognition network, and the like, which is not particularly limited in this disclosure.
Aiming at any one of the at least two clients, the client can acquire image data to obtain a local image data set, and then local network training is performed on a second neural network deployed locally according to the local image data set to obtain a second network parameter. The client sends the trained second network parameters to the server side without sending image data to the server side, so that data privacy can be protected.
In step S12, the plurality of clients are clustered to obtain a plurality of client groups.
Data heterogeneity exists between different clients because the local image data sets deployed in different clients have different data distributions, e.g., differences in the number of images, image identities (person IDs), and the views, resolutions, lighting, and scenes of the image capture devices. Therefore, the plurality of clients are clustered to obtain a plurality of client groups, where the data heterogeneity among the clients in the same client group is small.
Hereinafter, a process of clustering a plurality of clients will be described in detail with reference to possible implementation manners of the present disclosure, and details are not described herein.
In step S13, for any client group, the first neural network is updated based on the second network parameters sent by each client included in the client group, so as to obtain updated first network parameters of the corresponding client group.
Aiming at any one client group, the server side updates the first neural network based on the second network parameters sent by each client side included in the client group so as to realize the personalized federal network training of the server side and a plurality of client sides included in the client group and obtain the updated first network parameters adapted to each client side included in the client group.
Hereinafter, a process of updating the first neural network based on the second network parameter sent by each client included in the client group to obtain the updated first network parameter corresponding to the client group will be described in detail with reference to possible implementation manners of the present disclosure, and details of the process are not described herein.
In step S14, the updated first network parameter is sent to each client included in the client group, so as to update the second network parameter corresponding to each client included in the client group.
For any client group, after the updated first network parameters corresponding to the client group are obtained, the updated first network parameters adapted to that group are sent to each client included in the group. Each such client then performs personalized training and updating of its locally deployed second neural network based on the local image data set and the received first network parameters, so as to update its second network parameters, and sends the updated second network parameters to the server again for the next round of iterative training.
In step S15, the above steps are iteratively performed until the iterative training satisfies a preset training condition, and the first neural network and/or the second neural network after the iterative training is used for performing image processing on the image to be processed.
The preset training condition may be determined according to the actual situation; for example, it may be a preset number of training rounds or a degree of network convergence, which is not specifically limited by the present disclosure.
And the server side and the clients are combined to carry out iterative training so as to obtain a first trained neural network in the server side and a second trained neural network in each client. The trained first neural network and the trained second neural network can be used for image processing of the image to be processed. The trained first neural network obtained by the server is obtained by combining a plurality of clients for training, and has higher universality and robustness. The trained second neural network obtained by each client is obtained by training based on the local image data set, has higher individuation, and has higher processing precision when the client performs image processing locally.
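The iterative loop of steps S11 to S15 can be illustrated with a deliberately simplified toy round, in which scalar values stand in for network parameters and a fixed two-group split stands in for the clustering step; all names here are illustrative, not from the disclosure.

```python
def server_round(client_params, groups):
    """One aggregation round: fuse each group's parameters, dispatch per group.

    `client_params`: client id -> that client's trained second network
    parameter (a scalar here, standing in for a full parameter set).
    `groups`: the client groups produced by the clustering step.
    """
    updated = {}
    for group in groups:
        # Per-group fusion (plain averaging here, standing in for the
        # weighted fusion described above).
        group_avg = sum(client_params[c] for c in group) / len(group)
        for c in group:
            # Each client in the group receives its group's fused parameters.
            updated[c] = group_avg
    return updated

params = {"c1": 1.0, "c2": 3.0, "c3": 10.0}
new_params = server_round(params, [["c1", "c2"], ["c3"]])
```

Note that "c3" keeps parameters adapted to its own group instead of being pulled toward a single global average, which is the personalization effect described above.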
In the embodiment of the present disclosure, a server in which a first neural network with first network parameters is deployed receives a plurality of second network parameters sent by a plurality of clients, where a second neural network and a local image data set are deployed in each client, and the second network parameters are obtained by training the second neural network based on the local image data set; the plurality of clients are clustered to obtain a plurality of client groups; for any client group, the first neural network is updated based on the second network parameters sent by each client included in the client group to obtain updated first network parameters corresponding to that client group; the updated first network parameters are sent to each client included in the client group so as to update the second network parameters corresponding to each client; and the above steps are performed iteratively until the iterative training satisfies the preset training condition, where the first neural network and/or the second neural network after the iterative training is used for processing the image to be processed. In the process of federated network training performed by the server together with the clients, the plurality of clients are clustered to obtain a plurality of client groups, and joint training is performed in units of client groups, so that personalized network parameters adapted to each client in a client group are obtained.
In a possible implementation manner, clustering the plurality of clients to obtain a plurality of client groups includes: receiving a first feature vector sent by each client, where for any client the first feature vector is obtained by performing feature extraction on a shared image with the trained second neural network; clustering the first feature vectors to obtain a plurality of feature vector groups; and, for any feature vector group, dividing the clients corresponding to the first feature vectors in that feature vector group into the same client group.
For any client, after the client trains the second neural network based on the local image data set to obtain the second network parameters, the client performs feature extraction on the shared image by using the trained second neural network to obtain a first feature vector, and then the client sends the first feature vector and the second network parameters to the server.
The shared image may be derived from a shared image data set that is available for network training at both the server and the clients. The shared image data set is a public image data set and therefore raises no data privacy concerns; its specific form may be set according to the actual situation, which is not specifically limited in this disclosure.
After receiving the plurality of first feature vectors sent by the plurality of clients, the server clusters them to obtain a plurality of feature vector groups. Clients whose first feature vectors fall into the same feature vector group have similar data distributions, so the clients corresponding to the first feature vectors in the same feature vector group are divided into the same client group.
For example, suppose the server corresponds to five clients: a first client, a second client, a third client, a fourth client and a fifth client. The server receives the first feature vectors f_1, f_2, f_3, f_4 and f_5 sent by the five clients respectively.
The server clusters the five first feature vectors to obtain two feature vector groups: the first feature vector group includes f_1 and f_4, and the second feature vector group includes f_2, f_3 and f_5. Accordingly, the first client and the fourth client are divided into a first client group, and the second client, the third client and the fifth client are divided into a second client group.
In an example, the FINCH clustering algorithm, which has low computational requirements and needs no additional clustering parameters, can be used to quickly cluster the plurality of first feature vectors into a plurality of feature vector groups.
Besides the FINCH clustering algorithm, other clustering algorithms may be adopted for clustering the plurality of first feature vectors according to actual conditions, which is not specifically limited in this disclosure.
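The grouping step described above can be sketched as follows. This is an illustrative Python sketch of first-nearest-neighbor clustering in the spirit of FINCH (link each feature vector to its nearest neighbor by cosine similarity and merge the connected components); the full FINCH algorithm additionally builds coarser partitions recursively, and all names and data here are hypothetical.

```python
import math

def first_neighbor_clustering(features):
    """Cluster feature vectors by linking each vector to its nearest
    neighbor (by cosine similarity) and taking connected components --
    an illustrative sketch of the first partition built by FINCH."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    n = len(features)
    # For each vector, the index of its most similar other vector.
    nearest = [
        max((j for j in range(n) if j != i),
            key=lambda j: cos(features[i], features[j]))
        for i in range(n)
    ]
    # Union-find over the (i, nearest[i]) links to merge components.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in enumerate(nearest):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    roots = [find(i) for i in range(n)]
    # Relabel components as consecutive group ids in order of appearance.
    ids = {r: g for g, r in enumerate(dict.fromkeys(roots))}
    return [ids[r] for r in roots]

# Hypothetical first feature vectors of five clients: clients 0 and 3
# have similar data distributions, clients 1, 2 and 4 likewise.
feats = [[1.0, 0.1], [0.1, 1.0], [0.2, 0.9], [0.9, 0.2], [0.1, 0.8]]
groups = first_neighbor_clustering(feats)  # -> [0, 1, 1, 0, 1]
```

Clients sharing a group label would then be placed into the same client group on the server side.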
FIG. 2 illustrates a diagram of a federated learning-based network training system in accordance with an embodiment of the present disclosure. As shown in fig. 2, the system includes a server and five clients (a first client, a second client, a third client, a fourth client, and a fifth client). And the server and the five clients perform joint network training based on a federated learning algorithm.
Before training starts, the server initializes the locally deployed first neural network to obtain initial first network parameters, and sends the initial first network parameters to the five clients respectively for the first round of network training.
The first round of network training includes the following steps:
1) after receiving the initial first network parameters sent by the server, each client locally trains the locally deployed second neural network based on the initial first network parameters and the locally stored local image data set, so that each client obtains the second network parameters generated by the first round of network training;
2) each client performs feature extraction on the shared image with the second neural network after the first round of network training, obtaining the first feature vector generated by that client after the first round of network training;
3) each client sends the second network parameters and the first feature vector generated by the first round of network training to the server;
4) the server clusters the plurality of first feature vectors generated after the first round of network training, thereby clustering the plurality of clients into a plurality of client groups;
5) in units of client groups, the server updates the first neural network for any client group based on the second network parameters, generated by the first round of network training, sent by each client included in that client group, obtaining the first network parameters generated by the first round of network training (i.e., the updated first network parameters after the first round) adapted to each client in the group, and sends the corresponding first network parameters to each client so that the clients perform the second round of network training.
The second round of network training includes the following steps:
1) after receiving the first network parameters generated by the first round of network training and sent by the server, each client locally trains the locally deployed second neural network based on those first network parameters and the locally deployed local image data set, so that each client obtains the second network parameters generated by the second round of network training;
2) each client performs feature extraction on the shared image with the second neural network after the second round of network training, obtaining the first feature vector generated by that client after the second round of network training;
3) each client sends the second network parameters and the first feature vector generated by the second round of network training to the server;
4) the server clusters the plurality of first feature vectors generated after the second round of network training, thereby clustering the plurality of clients into a plurality of client groups;
5) in units of client groups, the server updates the first neural network for any client group based on the second network parameters, generated by the second round of network training, sent by each client included in that client group, obtaining the first network parameters generated by the second round of network training (i.e., the updated first network parameters after the second round) adapted to each client in the group, and sends the corresponding first network parameters to each client so that the clients perform the third round of network training.
And repeating the steps until the iterative training meets the preset training condition to obtain the first neural network after final training and the second neural network after final training.
The trained first neural network obtained by the server results from federated training performed by the server together with a plurality of clients based on a federated learning algorithm. It has higher universality and robustness and can be applied, according to actual needs, to image processing of images to be processed in various general scenarios (such as intelligent security and video surveillance) and various geographic regions (such as regions where no neural network has been deployed) to obtain image processing results.
The trained second neural network obtained by each client results from local training based on that client's local image data set. It is therefore more personalized and yields higher-precision image processing results when processing images in the geographic area corresponding to the client (for example, the community or enterprise corresponding to that client).
Taking the above fig. 2 as an example, in the federated learning-based network training system shown in fig. 2, denote the second network parameters generated by the k-th client after the r-th round of network training as θ_k^r. After the r-th round of network training, the first client generates the second network parameters θ_1^r, performs feature extraction on the shared image with the trained second neural network to obtain the first feature vector f_1^r, and sends the second network parameters θ_1^r and the first feature vector f_1^r to the server. Similarly, the second client sends θ_2^r and f_2^r, the third client sends θ_3^r and f_3^r, the fourth client sends θ_4^r and f_4^r, and the fifth client sends θ_5^r and f_5^r to the server. The server clusters the five first feature vectors f_1^r, f_2^r, f_3^r, f_4^r and f_5^r, thereby clustering the five clients and obtaining a first client group (the first client and the fourth client) and a second client group (the second client, the third client and the fifth client).

For the first client group, the server fuses the second network parameters θ_1^r and θ_4^r to obtain the first network parameters θ_{G1}^r generated by the r-th round of network training and adapted to the first client and the fourth client. For the second client group, the server fuses the second network parameters θ_2^r, θ_3^r and θ_5^r to obtain the first network parameters θ_{G2}^r generated by the r-th round of network training and adapted to the second client, the third client and the fifth client.

The server sends the first network parameters θ_{G1}^r to the first client and the fourth client for the (r+1)-th round of network training, and sends the first network parameters θ_{G2}^r to the second client, the third client and the fifth client for the (r+1)-th round of network training.
And repeating the steps until the iterative training meets the preset training condition to obtain the first neural network after final training and the second neural network after final training. For example, the preset training condition is a preset training round, and at this time, based on the process of the r-th round of network training shown in fig. 2, network training of the preset training round is iteratively performed until the trained first and second neural networks are finally obtained. The specific value of the preset training round can be determined according to the actual situation, and the disclosure does not specifically limit the value.
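The per-round server-side procedure described above (receive each client's uploaded parameters and feature vector, cluster the clients by their feature vectors, fuse the second network parameters within each group, and send the fused first network parameters back to that group's members) can be sketched as follows. All function and variable names are illustrative, not from the patent; the clustering and fusion rules are stand-ins passed in as arguments.

```python
def server_round(client_updates, cluster, fuse):
    """One round on the server: cluster clients by their uploaded
    feature vectors, fuse parameters per group, and return the fused
    parameters to send back to each client."""
    ids = list(client_updates)
    feature_vectors = [client_updates[c][1] for c in ids]
    labels = cluster(feature_vectors)          # e.g. FINCH-style clustering
    groups = {}
    for cid, g in zip(ids, labels):
        groups.setdefault(g, []).append(cid)
    # Fuse the second network parameters within each client group and
    # address the fused (first) parameters to that group's members.
    downlink = {}
    for members in groups.values():
        fused = fuse([client_updates[c][0] for c in members])
        for c in members:
            downlink[c] = fused
    return downlink

# Toy usage: two obvious groups, plain averaging as the fusion rule.
# Each upload is (second network parameters, first feature vector).
cluster = lambda fs: [0 if f[0] > 0.5 else 1 for f in fs]
fuse = lambda params: [sum(v) / len(v) for v in zip(*params)]
updates = {
    "c1": ([1.0, 2.0], [0.9, 0.1]),
    "c2": ([3.0, 4.0], [0.1, 0.9]),
    "c3": ([5.0, 6.0], [0.2, 0.8]),
}
out = server_round(updates, cluster, fuse)
# c1 forms one group by itself; c2 and c3 are averaged together.
```

Iterating this round until the preset training condition is met mirrors the loop described in the surrounding text.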
In a possible implementation manner, updating the first neural network based on the second network parameter sent by each client included in the client group to obtain an updated first network parameter corresponding to the client group includes: determining the weight of each second network parameter corresponding to the client group; and performing weighted fusion on the plurality of second network parameters corresponding to the client group according to the weight of each second network parameter corresponding to the client group so as to update the first neural network and obtain the updated first network parameters.
Because of data heterogeneity among different clients, the second network parameters of different clients in a client group carry different weights in the federated network training process. Determining the weight of each client's second network parameters within the client group allows those second network parameters to be weighted and fused, effectively updating the first neural network and yielding the first network parameters adapted to that client group.
In one possible implementation manner, determining the weight of each second network parameter corresponding to the client group includes: receiving a training variation parameter sent by each client included in the client group, where for any client the training variation parameter indicates the degree of change of the second neural network deployed in that client before and after training in the current training round; and determining the weight of the second network parameters corresponding to each client according to the training variation parameter sent by that client.
In one round of federated network training, a client can determine a training variation parameter based on the degree of change of its second neural network before and after the current training round, and then send the training variation parameter to the server, so that the server can determine the weight of the second network parameters corresponding to each client based on that client's training variation parameter.
In an example, for any client group, the weight of the second network parameter corresponding to one client in the client group is a ratio of the training variation parameter sent by the client to the sum of the training variation parameters sent by all clients in the client group.
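The ratio rule just described can be sketched in a few lines; the client names and variation values below are hypothetical.

```python
def variation_weights(variation_params):
    """Within a client group, weight each client's second network
    parameters by its training variation parameter divided by the sum
    of the group's training variation parameters."""
    total = sum(variation_params.values())
    return {cid: v / total for cid, v in variation_params.items()}

# Hypothetical variation parameters for the first client group
# (first and fourth clients).
weights = variation_weights({"client1": 3.0, "client4": 1.0})
# client1 -> 3 / 4 = 0.75, client4 -> 1 / 4 = 0.25
```

The weights always sum to one within a group, so the subsequent weighted fusion is a convex combination of the clients' parameters.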
Taking the above fig. 2 as an example, in the r-th round of network training process of the network training system based on federal learning shown in fig. 2, the server clusters five clients to obtain a first client group: a first client and a fourth client, a second client group: the system comprises a second client, a third client and a fifth client.
For the first client group, the server uses the received training variation parameter Δ_1^r sent by the first client and the training variation parameter Δ_4^r sent by the fourth client to determine, based on the following formula (1), the weight λ_1^r of the second network parameters θ_1^r sent by the first client and the weight λ_4^r of the second network parameters θ_4^r sent by the fourth client:

λ_1^r = Δ_1^r / (Δ_1^r + Δ_4^r), λ_4^r = Δ_4^r / (Δ_1^r + Δ_4^r)   (1)
For the second client group, the server uses the received training variation parameters Δ_2^r, Δ_3^r and Δ_5^r sent by the second, third and fifth clients to determine, based on the following formula (2), the weights λ_2^r, λ_3^r and λ_5^r of the second network parameters θ_2^r, θ_3^r and θ_5^r sent by those clients:

λ_k^r = Δ_k^r / (Δ_2^r + Δ_3^r + Δ_5^r), k ∈ {2, 3, 5}   (2)
In a possible implementation manner, for any client, the training variation parameter sent by the client is determined based on the feature similarity between a first feature vector, obtained by performing feature extraction on the shared image with the second neural network after training in the current training round, and a second feature vector, obtained by performing feature extraction on the shared image with the second neural network before training in the current training round.
In an example, the feature similarity may be a cosine similarity. For any client, the cosine similarity between the first feature vector (extracted from the shared image by the second neural network after training in the current training round) and the second feature vector (extracted from the shared image by the second neural network before training in the current training round) characterizes the degree to which the second neural network changed during the current training round, so the training variation parameter can be determined from this cosine similarity.
The feature similarity may be a cosine similarity, and may also be in other forms capable of characterizing the similarity between two feature vectors, which is not specifically limited by the present disclosure.
How the client determines the training variation parameter is described in detail below in connection with possible implementations of the present disclosure, and is not repeated here.
In one possible implementation, determining a weight of each second network parameter corresponding to the client group includes: receiving the data volume of the local image dataset sent by each client included in the client group; and determining the weight of the second network parameter corresponding to each client according to the data volume of the local image data set sent by each client.
The second network parameters sent by a client are generated after the client trains on its corresponding local image data set. For any client group, the data volumes of the local image data sets deployed in the different clients of the group differ. Therefore, when the server fuses the second network parameters sent by the clients in the group, it can determine the weight of each client's second network parameters according to the data volume of that client's local image data set, realizing a weighted fusion of the second network parameters and effectively improving the accuracy of the fused first network parameters corresponding to the client group.
In an example, for any client group, the weight of the second network parameter corresponding to any client in the client group is a ratio of the size of the data volume sent by the client to the sum of the sizes of the data volumes sent by all clients in the client group.
Taking the above fig. 2 as an example, in the r-th round of network training process of the network training system based on federal learning shown in fig. 2, the server clusters five clients to obtain a first client group: a first client and a fourth client, a second client group: the system comprises a second client, a third client and a fifth client.
For the first client group, the server uses the received data volume n_1 of the local image data set sent by the first client and the data volume n_4 of the local image data set sent by the fourth client to determine, based on the following formula (3), the weight λ_1^r of the second network parameters θ_1^r sent by the first client and the weight λ_4^r of the second network parameters θ_4^r sent by the fourth client:

λ_1^r = n_1 / (n_1 + n_4), λ_4^r = n_4 / (n_1 + n_4)   (3)
For the second client group, the server uses the received data volumes n_2, n_3 and n_5 of the local image data sets sent by the second, third and fifth clients to determine, based on the following formula (4), the weights λ_2^r, λ_3^r and λ_5^r of the second network parameters θ_2^r, θ_3^r and θ_5^r sent by those clients:

λ_k^r = n_k / (n_2 + n_3 + n_5), k ∈ {2, 3, 5}   (4)
Weighted fusion is then performed within each client group based on the weight of each client's second network parameters, so as to update the first neural network and obtain the updated first network parameters corresponding to each client group.
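The weighted fusion step can be sketched as follows, assuming (as is common in federated averaging) that fusion means a per-parameter weighted sum; the layer names and values below are hypothetical.

```python
def weighted_fuse(params_list, weights):
    """Fuse a group's second network parameters into first network
    parameters by a per-parameter weighted sum. Each element of
    params_list is a dict mapping a parameter name to its value."""
    fused = {}
    for name in params_list[0]:
        fused[name] = sum(w * p[name] for w, p in zip(weights, params_list))
    return fused

theta_1 = {"conv.w": 1.0, "fc.w": 10.0}   # second params, first client
theta_4 = {"conv.w": 3.0, "fc.w": 30.0}   # second params, fourth client
first_params = weighted_fuse([theta_1, theta_4], [0.75, 0.25])
# conv.w -> 0.75*1 + 0.25*3 = 1.5 ; fc.w -> 0.75*10 + 0.25*30 = 15.0
```

In a real deployment the per-parameter values would be weight tensors rather than scalars, but the fusion rule is the same applied elementwise.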
Taking the above fig. 2 as an example, for the first client group, the second network parameters θ_1^r and θ_4^r are weighted and fused based on their weights λ_1^r and λ_4^r, obtaining the first network parameters generated by the r-th round of network training and adapted to the first client and the fourth client: θ_{G1}^r = λ_1^r θ_1^r + λ_4^r θ_4^r. For the second client group, the second network parameters θ_2^r, θ_3^r and θ_5^r are weighted and fused based on their weights λ_2^r, λ_3^r and λ_5^r, obtaining the first network parameters generated by the r-th round of network training and adapted to the second client, the third client and the fifth client: θ_{G2}^r = λ_2^r θ_2^r + λ_3^r θ_3^r + λ_5^r θ_5^r.
In the embodiment of the disclosure, in the process of federate network training by a server side in combination with a client, a plurality of client groups are obtained by clustering the plurality of clients, and personalized network parameters adapted to the clients in the client groups are obtained by joint training with the client groups as units.
Fig. 3 shows a schematic flow diagram of a network training method according to an embodiment of the present disclosure. The network training method may be performed by a target client, in which a second neural network and a local image dataset are deployed, the second neural network having second network parameters. In some possible implementations, the network training method may be implemented by the target client invoking computer-readable instructions stored in memory. The target client may be any one of the plurality of clients in the above embodiments, and the present disclosure is not limited thereto. As shown in fig. 3, the network training method may include:
in step S31, the second network parameters are sent to the server, where the second network parameters are obtained by training a second neural network based on the local image data set.
In step S32, first network parameters returned by the server are received, where a first neural network is deployed in the server, the first network parameters are determined by the server by updating the first neural network based on the second network parameters sent by the clients included in the client group to which the target client belongs, and the client group is obtained by the server by clustering a plurality of clients including the target client.
In step S33, the second neural network is trained according to the first network parameters and the local image data set, so as to obtain updated second network parameters.
In step S34, the above steps are iteratively performed until the iterative training satisfies a preset training condition, and the first neural network and/or the second neural network after the iterative training is used to perform image processing on the image to be processed.
The target client performs federated network training jointly with the server and receives the personalized network parameters, generated by the federated training and adapted to the target client, sent by the server. During training, the image data set remains stored in the target client and does not need to be uploaded to the server, so personalized updating of the neural network in the target client during federated training is effectively achieved while protecting data privacy.
In one possible implementation, the target client may be an image capture device; the local image dataset may be acquired from an image acquisition device.
In the case where the target client is an image capture device (for example, a smart camera) that can directly communicate with the server, the image capture device needs a certain level of computing power, storage capacity, and communication capability. The image capture device stores the images it captures locally, obtaining the local image data set, and periodically deletes invalid image data (for example, image data cached longer than a preset threshold) from the local image data set to reduce storage pressure.
In one possible implementation, a target client (e.g., may be an edge device with network training functionality) may be connected with at least one image capture device, the target client and the at least one image capture device being located in the same geographic area; the local image dataset is obtained by the target client from at least one image acquisition device.
When at least one image capture device is arranged within the same geographic area, a target client can be arranged within that area; in this case the image capture devices themselves are not required to have storage capacity or computing power. The target client connects to each image capture device and acquires images from each of them to construct the local image data set.
In a possible implementation manner, the network training method further includes: performing feature extraction on the shared image by using a second neural network trained in the current training round to obtain a first feature vector; and sending the first feature vector to the server.
The target client side utilizes the trained second neural network to extract the features of the shared image to obtain a first feature vector, and sends the first feature vector to the server side, so that the server side can cluster the plurality of client sides based on the first feature vectors sent by the plurality of client sides to obtain a plurality of client side groups.
The process that the server performs clustering on the multiple clients based on the first feature vectors sent by the multiple clients to obtain multiple client groups may refer to the relevant description of the server embodiment, which is not described herein again.
In a possible implementation manner, the network training method further includes: determining a training variation parameter based on a second neural network before training in the current training round and the second neural network after training in the current training round, wherein the training variation parameter is used for indicating the variation degree of the second neural network before and after training in the current training round; and sending the training change parameters to the server.
In one round of federated network training, the target client can determine the training variation parameter based on the degree of change of the second neural network before and after the current training round, and then send the training variation parameter to the server, so that the server can determine the weight of the second network parameters corresponding to each client based on the training variation parameters of the plurality of clients.
The process of determining the weight of the second network parameter corresponding to each client by the server based on the training variation parameters of the multiple clients may refer to the related description of the above server embodiment, which is not described herein again.
In one possible implementation, determining a training variation parameter based on the second neural network before training in the current training round and the second neural network after training in the current training round includes: performing feature extraction on the shared image by using a second neural network before training in the current training round to obtain a second feature vector; performing feature extraction on the shared image by using a second neural network trained in the current training round to obtain a first feature vector; determining a training variation parameter based on the feature similarity between the first feature vector and the second feature vector.
The feature similarity between the first feature vector obtained by extracting the features of the shared image by the second neural network after training in the current training round and the second feature vector obtained by extracting the features of the shared image by the second neural network before training in the current training round can be used for representing the change degree of the second neural network before and after training, so that the target client can determine the training change parameters based on the feature similarity.
In an example, the feature similarity may be a cosine similarity.
Taking the above fig. 2 as an example, in the r-th round of network training of the federated learning-based network training system shown in fig. 2, consider the k-th client (the target client), where k may take values 1 to 5. The k-th client performs feature extraction on the shared image with the second neural network after training to obtain the first feature vector f_k^r, and performs feature extraction on the shared image with the second neural network before training to obtain the second feature vector f_k^{r-1}. The training variation parameter Δ_k^r of the k-th client after the r-th round of network training may then be determined by the following formula (5):

Δ_k^r = 1 − cos(f_k^r, f_k^{r-1})   (5)

where cos(f_k^r, f_k^{r-1}) is the cosine similarity between the first feature vector f_k^r and the second feature vector f_k^{r-1}.
First feature vector
Figure BDA0003596519020000158
And a second feature vector
Figure BDA0003596519020000159
The larger the cosine similarity between them, the first feature vector is represented
Figure BDA00035965190200001510
And a second feature vector
Figure BDA00035965190200001511
The greater the feature similarity between the client terminals is, in this case, it can be shown that the smaller the change degree of the second neural network after the kth client terminal passes through the r-th round of network training is, the smaller the training change parameter is
Figure BDA00035965190200001512
The smaller.
First feature vector
Figure BDA00035965190200001513
And a second feature vector
Figure BDA00035965190200001514
The smaller the cosine similarity between them, the first feature vector is represented
Figure BDA00035965190200001515
And a second feature vector
Figure BDA00035965190200001516
The smaller the feature similarity between the client and the client is, at this time, it can be shown that the greater the change degree of the second neural network after the kth client passes through the r-th network training is, the larger the training change parameter is
Figure BDA00035965190200001517
The larger.
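The relationship above can be made concrete with a short sketch. Assuming, consistently with formula (five), that the training variation parameter is one minus the cosine similarity of the two feature vectors; the function and variable names below are illustrative, not from the original filing:

```python
import numpy as np

def training_variation_parameter(feat_after, feat_before):
    """d = 1 - cos(first feature vector, second feature vector): small when the
    second neural network barely changed in this round, large when it changed a lot."""
    cos_sim = float(np.dot(feat_after, feat_before)
                    / (np.linalg.norm(feat_after) * np.linalg.norm(feat_before)))
    return 1.0 - cos_sim

# Identical features before and after training: the network did not change.
d_small = training_variation_parameter(np.array([1.0, 0.0]), np.array([1.0, 0.0]))
# Orthogonal features: the network changed substantially.
d_large = training_variation_parameter(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

The monotonic relationship described above holds by construction: the larger the cosine similarity, the smaller the returned parameter.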
By using the cosine similarity, the degree of change of the second neural network after the target client completes the current round of network training can be effectively determined, yielding a training variation parameter with higher accuracy.
The training variation parameter of the target client may be determined using the cosine similarity method above, or any other method capable of effectively determining the degree of change of the second neural network after the current round of network training may be selected according to the actual situation; the present disclosure does not specifically limit this.
For other specific training processes when the target client performs federated network training, reference may be made to the description related to the above server embodiment, which is not described herein again.
In the federated training process, the target client performs local training based on its local image data set and is therefore more personalized. When image processing is performed on images to be processed from the geographic area corresponding to the target client (for example, the community or enterprise corresponding to the target client), an image processing result with higher precision can thus be obtained.
The embodiment of the present disclosure also provides a pedestrian re-identification method. The pedestrian re-identification method may be performed by a terminal device or other processing device, where the terminal device may be an image capture device (e.g., a smart camera), a user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. The other processing device may be a server, a cloud server, or the like. In some possible implementations, the pedestrian re-identification method may be implemented by a processor calling computer-readable instructions stored in a memory. The method may include the following step:
performing pedestrian re-identification on an image to be recognized through a target pedestrian re-identification network, and determining a pedestrian re-identification result, where the target pedestrian re-identification network is the first neural network or the second neural network obtained by training with the network training method of the above embodiments.
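As a rough illustration of how a pedestrian re-identification result might be determined from the target network's output, the sketch below matches a query feature vector (extracted from the image to be recognized) against a gallery of known pedestrian features by cosine similarity. The names and the gallery-matching step are assumptions for illustration, not part of the claimed method:

```python
import numpy as np

def reidentify(query_feat, gallery_feats, gallery_ids):
    """Match a query pedestrian feature against gallery features by cosine
    similarity and return the best-matching identity and its score."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    scores = g @ q                      # cosine similarity to every gallery entry
    best = int(np.argmax(scores))
    return gallery_ids[best], float(scores[best])

gallery = np.array([[1.0, 0.0], [0.0, 1.0]])   # features of two known pedestrians
pid, score = reidentify(np.array([0.9, 0.1]), gallery, ["person_A", "person_B"])
```

In practice the feature vectors would come from the trained first or second neural network described above.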
The image to be recognized may be an image frame or a sequence of video frames, which is not specifically limited by the present disclosure.
In one example, the image to be recognized may be acquired within a target geographic area. The pedestrian re-identification network can perform pedestrian re-identification processing on the image to be identified collected in the target geographic area range, and determine whether a specific pedestrian exists in the image to be identified.
In an example, when both the first neural network and the second neural network are pedestrian re-identification networks, the trained first neural network obtained by the server is universal, that is, it can be applied to any application scene. The trained first neural network obtained by the server can therefore be used as the target pedestrian re-identification network to perform pedestrian re-identification processing on images to be recognized collected within the target geographic area, so as to obtain a pedestrian re-identification result.
In an example, when the first neural network and the second neural network are both pedestrian re-identification networks, due to the data heterogeneity between different clients, the trained second neural networks obtained by different clients from their local image data sets are personalized and better adapted to their local scenes. As a result, in the local scene, a client's trained second neural network performs better than the trained first neural network obtained by the server.
Therefore, when the second neural network is deployed within the target geographic area corresponding to the image to be recognized, the trained second neural network, which is better adapted to the local scene of the target geographic area, can be used as the target pedestrian re-identification network to perform pedestrian re-identification processing on the image to be recognized, obtaining a recognition result with higher accuracy.
It is understood that the above method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principle and logic; due to space limitations, details are not repeated in the present disclosure. Those skilled in the art will appreciate that, in the methods of the specific embodiments above, the specific execution order of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a network training/pedestrian re-identification apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any network training/pedestrian re-identification method provided by the present disclosure; for the corresponding technical solutions and descriptions, reference may be made to the corresponding descriptions in the method section, which are not repeated here.
Fig. 4 shows a block diagram of a network training apparatus according to an embodiment of the present disclosure. The network training device is applied to a server side, wherein a first neural network is deployed in the server side, and the first neural network has first network parameters. As shown in fig. 4, the network training apparatus 40 includes:
a receiving module 41, configured to receive a plurality of second network parameters sent by a plurality of clients, where a second neural network and a local image data set are deployed in each client, and the second network parameters are obtained by training the second neural network based on the local image data set;
the clustering module 42 is configured to cluster the plurality of clients to obtain a plurality of client groups;
an updating module 43, configured to, for any client group, update the first neural network based on the second network parameters sent by the clients included in the client group, to obtain updated first network parameters for the corresponding client group;
a sending module 44, configured to send the updated first network parameter to each client included in the client group, so as to update a second network parameter corresponding to each client included in the client group;
and an iteration module 45, configured to iteratively execute the above steps until the iterative training meets a preset training condition, where the first neural network and/or the second neural network after the iterative training is used to perform image processing on an image to be processed.
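The cooperation of the modules above can be sketched as one server-side round. The helper names `cluster_fn` and `aggregate_fn` are placeholders standing in for the clustering module 42 and updating module 43, and are illustrative only:

```python
def server_round(client_params, client_features, cluster_fn, aggregate_fn):
    """One federated round on the server: cluster clients by their uploaded
    feature vectors (clustering module), fuse the second network parameters
    within each client group (updating module), and return the updated first
    network parameters to send back to each client (sending module)."""
    groups = cluster_fn(client_features)          # {group_id: [client_id, ...]}
    updates = {}
    for members in groups.values():
        fused = aggregate_fn([client_params[c] for c in members])
        for c in members:
            updates[c] = fused                    # every client in a group gets the same update
    return updates

# Toy run: one group containing both clients, aggregation by plain averaging.
updates = server_round(
    {"a": 1.0, "b": 3.0},
    {"a": None, "b": None},                       # feature vectors unused by this toy cluster_fn
    lambda feats: {0: ["a", "b"]},
    lambda ps: sum(ps) / len(ps),
)
```

The iteration module 45 would simply repeat this round until the preset training condition is met.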
In a possible implementation manner, the clustering module 42 is specifically configured to:
receiving a first feature vector sent by each client, wherein the first feature vector is obtained by performing feature extraction on a shared image by using a trained second neural network for any client;
clustering the first feature vectors to obtain a plurality of feature vector groups;
and aiming at any characteristic vector group, dividing the client corresponding to each first characteristic vector in the characteristic vector group into the same client group.
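A minimal sketch of this grouping step follows, using a bare-bones k-means over the clients' first feature vectors. The disclosure does not fix a particular clustering algorithm, so k-means and all names here are assumptions for illustration:

```python
import numpy as np

def group_clients(features, k=2, iters=20, seed=0):
    """Cluster clients into k client groups by the first feature vectors they
    uploaded. Returns {group_index: [client_id, ...]}."""
    ids = list(features)
    X = np.stack([features[i] for i in ids]).astype(float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each client's feature vector to its nearest cluster center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    groups = {}
    for cid, lab in zip(ids, labels):
        groups.setdefault(int(lab), []).append(cid)
    return groups

# Two clients near the origin and two far away fall into two client groups.
groups = group_clients({
    "c1": np.array([0.0, 0.0]), "c2": np.array([0.1, 0.0]),
    "c3": np.array([5.0, 5.0]), "c4": np.array([5.1, 5.0]),
}, k=2)
```

Clients whose second neural networks produce similar features on the shared image thus end up in the same client group.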
In one possible implementation, the update module 43 includes:
the determining submodule is used for determining the weight of each second network parameter corresponding to the client group;
and the fusion submodule is used for performing weighted fusion on the plurality of second network parameters corresponding to the client group according to the weight of each second network parameter corresponding to the client group so as to update the first neural network and obtain the updated first network parameter.
In a possible implementation, the determining submodule is specifically configured to:
receiving a training variation parameter sent by each client included in a client group, wherein the training variation parameter is used for indicating the variation degree of a second neural network deployed in the client before and after training in the current training turn aiming at any client;
and determining the weight of the second network parameter corresponding to each client according to the training change parameter sent by each client.
In a possible implementation, for any client, the training variation parameter sent by the client is determined based on the feature similarity between a first feature vector, obtained by the client performing feature extraction on the shared image using the second neural network trained in the current training round, and a second feature vector, obtained by performing feature extraction on the shared image using the second neural network before training in the current training round.
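A possible sketch of the weight determination and weighted fusion is given below. The disclosure does not specify here how the weight depends on the training variation parameter; normalising each client's variation parameter by the group sum (so that a client whose network changed more contributes more) is an assumption for illustration only, and all names are hypothetical:

```python
import numpy as np

def group_weights(variation):
    """Normalise each client's training variation parameter over the client
    group so the weights sum to 1 (larger change -> larger weight, by assumption)."""
    total = sum(variation.values())
    return {c: d / total for c, d in variation.items()}

def fuse_parameters(params, weights):
    """Weighted fusion of a client group's second network parameters into the
    group's updated first network parameter."""
    return sum(weights[c] * params[c] for c in params)

w = group_weights({"a": 1.0, "b": 3.0})
fused = fuse_parameters({"a": np.array([0.0, 4.0]), "b": np.array([4.0, 0.0])}, w)
```

In a real deployment the parameters would be per-layer tensors rather than single vectors, fused layer by layer with the same weights.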
Fig. 5 shows a block diagram of a network training apparatus according to an embodiment of the present disclosure. The network training device is applied to a target client, wherein a second neural network is deployed in the target client, and the second neural network has second network parameters. As shown in fig. 5, the network training apparatus 50 includes:
a sending module 51, configured to send a second network parameter to the server, where the second network parameter is obtained by training a second neural network based on a local image data set;
the receiving module 52 is configured to receive a first network parameter returned by the server, where a first neural network is deployed in the server, the first network parameter is determined by the server updating the first neural network based on the second network parameters sent by the clients in the client group to which the target client belongs, and the client group is obtained by the server clustering a plurality of clients including the target client;
a training module 53, configured to train a second neural network according to the first network parameter and the local image data set, to obtain an updated second network parameter;
and the iteration module 54 is configured to iteratively perform the above steps until the iterative training meets a preset training condition, where the first neural network and/or the second neural network after the iterative training is used to perform image processing on the image to be processed.
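One client-side round built from the modules above might be sketched as follows; `local_train_fn` and `send_fn` stand in for the training module 53 and sending module 51 and are illustrative placeholders:

```python
def client_round(first_params, local_train_fn, send_fn):
    """One round on the target client: adopt the first network parameters
    received from the server, train the second neural network on the local
    image data set, and send the updated second network parameters back."""
    second_params = first_params                    # receiving module: adopt server params
    second_params = local_train_fn(second_params)   # training module: local training
    send_fn(second_params)                          # sending module: upload to server
    return second_params

sent = []
updated = client_round(1.0, lambda p: p + 0.5, sent.append)
```

The iteration module 54 would repeat this round until the preset training condition is met.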
In a possible implementation manner, the network training apparatus 50 further includes:
the feature extraction module is used for extracting features of the shared image by using the trained second neural network in the current training round to obtain a first feature vector;
and a sending module 51, configured to send the first feature vector to the server.
In a possible implementation manner, the network training apparatus 50 further includes:
the determining module is used for determining a training change parameter based on a second neural network before training in the current training round and the second neural network after training in the current training round, wherein the training change parameter is used for indicating the change degree of the second neural network before and after training in the current training round;
and a sending module 51, configured to send the training change parameter to the server.
In a possible implementation manner, the determining module is specifically configured to:
performing feature extraction on the shared image by using a second neural network before training in the current training round to obtain a second feature vector;
performing feature extraction on the shared image by using a second neural network trained in the current training round to obtain a first feature vector;
determining a training variation parameter based on the feature similarity between the first feature vector and the second feature vector.
The embodiment of the present disclosure further provides a pedestrian re-identification apparatus, including: the pedestrian re-recognition module is used for performing pedestrian re-recognition processing on the image to be recognized through the target pedestrian re-recognition network and determining a pedestrian re-recognition result; the target pedestrian re-identification network is a first neural network or a second neural network obtained by training by adopting the network training method.
The method has a specific technical association with the internal structure of the computer system and can solve technical problems of improving hardware operation efficiency or execution effect (including reducing the amount of data stored, reducing the amount of data transmitted, increasing hardware processing speed, and the like), thereby obtaining a technical effect, in conformity with the laws of nature, of improving the internal performance of the computer system.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, the processor in the electronic device performs the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 6 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure. Referring to fig. 6, the electronic device 800 may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or other terminal device.
Referring to fig. 6, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as wireless network (Wi-Fi), second generation mobile communication technology (2G), third generation mobile communication technology (3G), fourth generation mobile communication technology (4G), long term evolution of universal mobile communication technology (LTE), fifth generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
The disclosure relates to the field of augmented reality, and aims to detect or identify relevant features, states and attributes of a target object by means of various visual correlation algorithms by acquiring image information of the target object in a real environment, so as to obtain an AR effect combining virtual and reality matched with specific applications. For example, the target object may relate to a face, a limb, a gesture, an action, etc. associated with a human body, or a marker, a marker associated with an object, or a sand table, a display area, a display item, etc. associated with a venue or a place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application can not only relate to interactive scenes such as navigation, explanation, reconstruction, virtual effect superposition display and the like related to real scenes or articles, but also relate to special effect treatment related to people, such as interactive scenes such as makeup beautification, limb beautification, special effect display, virtual model display and the like. The detection or identification processing of the relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
Fig. 7 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure. Referring to fig. 7, the electronic device 1900 may be provided as a server or a terminal device. Referring to fig. 7, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system from Apple Inc. (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical encoding device, such as punch cards or in-groove raised structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized with state information of the computer-readable program instructions, and the electronic circuitry may execute the computer-readable program instructions, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, it is embodied in a software product, such as a software development kit (SDK).
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
It will be understood by those skilled in the art that, in the method of the present invention, the order in which the steps are written does not imply a strict order of execution and imposes no limitation on the implementation; the specific order of execution of the steps should be determined by their function and possible inherent logic.
Where the technical solution of this application involves personal information, a product applying this solution clearly informs users of the personal-information processing rules and obtains their separate consent before processing any personal information. Where the technical solution involves sensitive personal information, a product applying this solution obtains the individual's separate consent before processing such information and additionally satisfies the requirement of "express consent". For example, at a personal-information collection device such as a camera, a clear and prominent sign informs people that they are entering a personal-information collection range and that personal information will be collected; a person who voluntarily enters the collection range is regarded as consenting to the collection of his or her personal information. Alternatively, on a device that processes personal information, after the processing rules have been communicated through a prominent sign or notice, personal authorization is obtained by means such as a pop-up message or by asking the person to upload his or her personal information. The personal-information processing rules may include information such as the identity of the personal-information processor, the purpose of processing, the processing method, and the types of personal information to be processed.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technology found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (15)

1. A network training method applied to a server, wherein a first neural network is deployed in the server, the first neural network has first network parameters, and the method comprises the following steps:
receiving a plurality of second network parameters sent by a plurality of clients, wherein a second neural network and a local image data set are respectively deployed in each client, and the second network parameters are obtained by training the second neural network based on the local image data set;
clustering the plurality of clients to obtain a plurality of client groups;
for any client group, updating the first neural network based on the second network parameters sent by each client included in the client group to obtain updated first network parameters corresponding to the client group;
sending the updated first network parameters to each client included in the client group so as to update the second network parameters corresponding to each client included in the client group;
and iteratively executing the steps until iterative training meets a preset training condition, wherein the first neural network and/or the second neural network after iterative training are/is used for carrying out image processing on the image to be processed.
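As a minimal illustration of the per-group update in claim 1, the following sketch fuses the second network parameters uploaded by the clients of each group into that group's updated first network parameters. The function name `server_round`, the flat-vector parameter representation, and plain (unweighted) averaging are illustrative assumptions only; the claims leave the fusion rule open, and claim 3 introduces a weighted variant.

```python
import numpy as np

def server_round(client_params, group_labels):
    """One aggregation round: the parameters of clients sharing a group
    label are averaged into that group's updated first network
    parameters, which the server would then send back to those clients."""
    params = np.asarray(client_params, dtype=float)
    labels = np.asarray(group_labels)
    return {g: params[labels == g].mean(axis=0) for g in np.unique(labels)}
```

In a full system each entry of `client_params` would be the flattened weights of one client's second neural network rather than a toy vector.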
2. The method of claim 1, wherein clustering the plurality of clients to obtain a plurality of client groups comprises:
receiving a first feature vector sent by each client, wherein for any client, the first feature vector is obtained by performing feature extraction on a shared image by using the second neural network trained in the current training round;
clustering the first feature vectors to obtain a plurality of feature vector groups;
and aiming at any one characteristic vector group, dividing the client corresponding to each first characteristic vector in the characteristic vector group into the same client group.
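The grouping step of claim 2 can be sketched with a small k-means-style routine over the clients' first feature vectors: clients whose vectors land in the same cluster form one client group. The fixed iteration count, the initialisation from the first few vectors, and the function name `cluster_clients` are assumptions for illustration, not details from the claims.

```python
import numpy as np

def cluster_clients(feature_vectors, n_groups):
    """Assign each client (one feature vector per client, extracted from a
    shared image) to one of n_groups clusters by simple k-means."""
    feats = np.asarray(feature_vectors, dtype=float)
    centroids = feats[:n_groups].copy()  # naive initialisation
    for _ in range(10):
        # distance of every client vector to every centroid
        dists = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for g in range(n_groups):
            members = feats[labels == g]
            if len(members):  # guard against an empty cluster
                centroids[g] = members.mean(axis=0)
    return labels
```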
3. The method according to claim 1 or 2, wherein the updating the first neural network based on the second network parameters sent by each of the clients included in the client group to obtain updated first network parameters corresponding to the client group comprises:
determining the weight of each second network parameter corresponding to the client group;
and performing weighted fusion on the plurality of second network parameters corresponding to the client group according to the weight of each second network parameter corresponding to the client group so as to update the first neural network and obtain the updated first network parameter.
4. The method of claim 3, wherein determining the weight of each of the second network parameters corresponding to the client group comprises:
receiving a training variation parameter sent by each client included in the client group, wherein, for any client, the training variation parameter indicates the degree of change of the second neural network deployed in the client before and after training in the current training round;
and determining the weight of the second network parameter corresponding to each client according to the training change parameter sent by each client.
5. The method according to claim 4, wherein, for any one of the clients, the training variation parameter sent by the client is determined based on a feature similarity between a first feature vector, obtained by the client performing feature extraction on a shared image using the second neural network trained in the current training round, and a second feature vector, obtained by performing feature extraction on the shared image using the second neural network not yet trained in the current training round.
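Claims 3 to 5 leave the exact mapping from training variation parameters to fusion weights unspecified. One plausible rule, assumed here for illustration, is to weight each client in proportion to how much its network changed, normalised so the weights sum to one:

```python
import numpy as np

def weights_from_variation(variation_params):
    """Assumed rule: a client's weight is its training variation
    parameter divided by the group's total variation."""
    v = np.asarray(variation_params, dtype=float)
    return v / v.sum()

def fuse_parameters(client_params, variation_params):
    """Weighted fusion of the second network parameters of one client
    group, yielding the group's updated first network parameters."""
    w = weights_from_variation(variation_params)
    return sum(wi * np.asarray(p, dtype=float)
               for wi, p in zip(w, client_params))
```

An inverse weighting (favouring clients that changed least) would be equally consistent with the claims; the choice is a design decision the claims do not fix.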
6. A network training method applied to a target client, wherein a second neural network and a local image data set are deployed in the target client, and the second neural network has second network parameters, the method comprising:
sending the second network parameters to a server, wherein the second network parameters are obtained by training the second neural network based on the local image data set;
receiving a first network parameter returned by the server, wherein a first neural network is deployed in the server, the first network parameter is obtained by the server updating the first neural network based on the second network parameters sent by the clients included in a client group, and the client group is obtained by the server clustering a plurality of clients including the target client;
training the second neural network according to the first network parameters and the local image data set to obtain updated second network parameters;
and iteratively executing the steps until iterative training meets a preset training condition, wherein the first neural network and/or the second neural network after iterative training are/is used for carrying out image processing on the image to be processed.
7. The method of claim 6, further comprising:
performing feature extraction on the shared image by using the trained second neural network in the current training round to obtain a first feature vector;
and sending the first feature vector to the server.
8. The method according to claim 6 or 7, characterized in that the method further comprises:
determining a training variation parameter based on the second neural network before training in the current training round and the second neural network after training in the current training round, wherein the training variation parameter is used for indicating the variation degree of the second neural network before and after training in the current training round;
and sending the training change parameters to the server.
9. The method of claim 8, wherein determining training variation parameters based on the second neural network before training in the current training round and the second neural network after training in the current training round comprises:
performing feature extraction on the shared image by using the second neural network before training in the current training round to obtain a second feature vector;
performing feature extraction on the shared image by using the trained second neural network in the current training round to obtain a first feature vector;
determining the training variation parameter based on a feature similarity between the first feature vector and the second feature vector.
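The client-side computation of claim 9 can be sketched as follows, assuming the feature similarity is cosine similarity and the variation parameter is one minus that similarity; the claim names the similarity but does not fix this exact formula, so both choices are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def training_variation(first_vec, second_vec):
    """Degree of change across one training round: identical features
    before and after training give 0; orthogonal features give 1."""
    return 1.0 - cosine_similarity(first_vec, second_vec)
```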
10. A pedestrian re-identification method is characterized by comprising the following steps:
performing pedestrian re-identification on an image to be identified through a target pedestrian re-identification network, and determining a pedestrian re-identification result;
the target pedestrian re-identification network is a first neural network or a second neural network obtained by training by using the network training method of any one of claims 1 to 9.
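For claim 10, a toy sketch of the matching stage of pedestrian re-identification: a query feature produced by the trained network is compared against a gallery of known identities by cosine similarity, and the best match is returned. The gallery representation and the function name `reidentify` are illustrative assumptions, not part of the claims.

```python
import numpy as np

def reidentify(query_feat, gallery_feats, gallery_ids):
    """Return the gallery identity whose feature vector is most similar
    (by cosine similarity) to the query feature."""
    q = np.asarray(query_feat, dtype=float)
    q = q / np.linalg.norm(q)
    g = np.asarray(gallery_feats, dtype=float)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)  # row-normalise gallery
    return gallery_ids[int((g @ q).argmax())]
```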
11. A network training device applied to a server, wherein a first neural network is deployed in the server, the first neural network has first network parameters, and the device comprises:
the system comprises a receiving module, a processing module and a processing module, wherein the receiving module is used for receiving a plurality of second network parameters sent by a plurality of clients, a second neural network and a local image data set are respectively deployed in each client, and the second network parameters are obtained by training the second neural network based on the local image data sets;
the clustering module is used for clustering the plurality of clients to obtain a plurality of client groups;
an updating module, configured to update the first neural network based on the second network parameter sent by each client included in the client group to obtain an updated first network parameter corresponding to the client group, for any client group;
a sending module, configured to send the updated first network parameter to each client included in the client group, so as to update the second network parameter corresponding to each client included in the client group;
and the iteration module is used for iteratively executing the steps until the iterative training meets a preset training condition, and the first neural network and/or the second neural network after the iterative training is used for carrying out image processing on the image to be processed.
12. A network training apparatus, applied to a target client, in which a second neural network and a local image data set are deployed, the second neural network having second network parameters, the apparatus comprising:
a sending module, configured to send the second network parameter to a server, where the second network parameter is obtained by training the second neural network based on the local image data set;
a receiving module, configured to receive a first network parameter returned by the server, wherein a first neural network is deployed in the server, the first network parameter is obtained by the server updating the first neural network based on the second network parameters sent by the clients included in a client group, and the client group is obtained by the server clustering a plurality of clients including the target client;
the training module is used for training the second neural network according to the first network parameters and the local image data set to obtain updated second network parameters;
and the iteration module is used for iteratively executing the steps until the iterative training meets a preset training condition, and the first neural network and/or the second neural network after the iterative training is used for carrying out image processing on the image to be processed.
13. A pedestrian re-identification device, comprising:
the pedestrian re-identification module is used for carrying out pedestrian re-identification on the image to be identified through the target pedestrian re-identification network and determining a pedestrian re-identification result;
the target pedestrian re-identification network is a first neural network or a second neural network trained by the network training method of any one of claims 1 to 9.
14. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any one of claims 1 to 10.
15. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 10.
CN202210393711.4A 2022-04-14 2022-04-14 Network training method, pedestrian re-identification method, network training device, pedestrian re-identification device, electronic equipment and storage medium Withdrawn CN114677648A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210393711.4A CN114677648A (en) 2022-04-14 2022-04-14 Network training method, pedestrian re-identification method, network training device, pedestrian re-identification device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210393711.4A CN114677648A (en) 2022-04-14 2022-04-14 Network training method, pedestrian re-identification method, network training device, pedestrian re-identification device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114677648A (en) 2022-06-28

Family

ID=82078145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210393711.4A Withdrawn CN114677648A (en) 2022-04-14 2022-04-14 Network training method, pedestrian re-identification method, network training device, pedestrian re-identification device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114677648A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115311692A (en) * 2022-10-12 2022-11-08 Shenzhen University Federal pedestrian re-identification method, system, electronic device and storage medium
CN115311692B (en) * 2022-10-12 2023-07-14 Shenzhen University Federal pedestrian re-identification method, federal pedestrian re-identification system, electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN112001321B (en) Network training method, pedestrian re-identification method, device, electronic equipment and storage medium
CN109740516B (en) User identification method and device, electronic equipment and storage medium
JP6852150B2 (en) Biological detection methods and devices, systems, electronic devices, storage media
WO2020135127A1 (en) Pedestrian recognition method and device
CN110569777B (en) Image processing method and device, electronic device and storage medium
CN111553864B (en) Image restoration method and device, electronic equipment and storage medium
CN110458218B (en) Image classification method and device and classification network training method and device
CN110532956B (en) Image processing method and device, electronic equipment and storage medium
CN111243011A (en) Key point detection method and device, electronic equipment and storage medium
CN111401230B (en) Gesture estimation method and device, electronic equipment and storage medium
CN111310664B (en) Image processing method and device, electronic equipment and storage medium
CN111242303A (en) Network training method and device, and image processing method and device
CN111582383A (en) Attribute identification method and device, electronic equipment and storage medium
CN108171222B (en) Real-time video classification method and device based on multi-stream neural network
CN114581525A (en) Attitude determination method and apparatus, electronic device, and storage medium
CN114445753A (en) Face tracking recognition method and device, electronic equipment and storage medium
CN112613447B (en) Key point detection method and device, electronic equipment and storage medium
CN114677648A (en) Network training method, pedestrian re-identification method, network training device, pedestrian re-identification device, electronic equipment and storage medium
CN111859003B (en) Visual positioning method and device, electronic equipment and storage medium
CN113283343A (en) Crowd positioning method and device, electronic equipment and storage medium
CN111178115B (en) Training method and system for object recognition network
CN111062407A (en) Image processing method and device, electronic equipment and storage medium
CN115035440A (en) Method and device for generating time sequence action nomination, electronic equipment and storage medium
CN114550265A (en) Image processing method, face recognition method and system
CN112734015B (en) Network generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220628