WO2023272995A1 - Pedestrian re-identification method, apparatus, device and readable storage medium - Google Patents


Info

Publication number
WO2023272995A1
WO2023272995A1 (PCT/CN2021/121901)
Authority
WO
WIPO (PCT)
Prior art keywords
isomorphic
network
branch
loss value
training
Prior art date
Application number
PCT/CN2021/121901
Other languages
English (en)
French (fr)
Inventor
王立
范宝余
Original Assignee
苏州浪潮智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州浪潮智能科技有限公司 filed Critical 苏州浪潮智能科技有限公司
Priority to US18/265,242 priority Critical patent/US11830275B1/en
Publication of WO2023272995A1 publication Critical patent/WO2023272995A1/zh


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G06V40/25 Recognition of walking or running movements, e.g. gait recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Definitions

  • the present application relates to the technical field of image recognition, and more specifically, to a pedestrian re-identification method, device, equipment and readable storage medium.
  • Pedestrian re-identification is an important image recognition technology, widely used in public security systems, traffic supervision and other fields. Pedestrian re-identification searches cameras distributed at different locations to determine whether the pedestrians in the fields of view of different cameras are the same pedestrian, and can be used in scenarios such as searching for criminal suspects or missing children. Pedestrian re-identification is mainly realized through deep learning technology, and with the continuous development of deep learning, network models emerge in an endless stream. To further improve the accuracy and performance of pedestrian re-identification networks, researchers tend to design new networks by making the model deeper or wider. It is undeniable that as the network becomes deeper or wider, the learning ability of the model continues to increase. However, improving network performance in this way has the following disadvantages:
  • The purpose of this application is to provide a pedestrian re-identification method, device, equipment and readable storage medium that improve the accuracy and performance of a deep learning network on pedestrian re-identification tasks without increasing the number of parameters or the amount of computation; this reduces the storage space occupied on the device, which is more conducive to storage and deployment on portable devices, reduces the amount of computation for performing pedestrian re-identification tasks, and improves the processing rate of such tasks.
  • this application provides a pedestrian re-identification method, including:
  • determining an isomorphic training network corresponding to the initial pedestrian re-identification network; wherein the isomorphic training network has a plurality of isomorphic branches with the same network structure;
  • using a target loss function to train the isomorphic training network, and determining the final weight parameters of each network layer in the isomorphic training network; wherein the target loss function includes a dynamic classification probability loss function based on knowledge synergy, and the dynamic classification probability loss function is used to: use the classification-layer output features of each training sample in every two isomorphic branches to determine the one-way knowledge synergy loss value between the isomorphic branches;
  • said using the target loss function to train the isomorphic training network and determining the final weight parameters of each network layer in the isomorphic training network includes:
  • the process of determining the one-way knowledge collaborative loss value of the dynamic classification probability loss function includes:
  • the dynamic classification probability loss function is:
  • L ksp is the one-way knowledge collaboration loss value
  • N is the total number of training samples
  • u represents the u-th isomorphic branch
  • v represents the v-th isomorphic branch
  • K is the dimension of the output feature of the classification layer
  • x n is the nth sample
  • θ u represents the network parameters of the u-th isomorphic branch
  • θ v represents the network parameters of the v-th isomorphic branch.
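The patent's formula itself appears as an image in the original publication and is not reproduced in this text. Purely as an illustrative sketch, a common way to realize a one-way knowledge-synergy term between every ordered pair (u, v) of isomorphic branches is a KL divergence between their softmax (classification-layer) outputs over the K classes, averaged over the N training samples; all function names below are hypothetical, not the patent's actual implementation.

```python
import math

def softmax(logits):
    """Convert raw classification-layer outputs into K-class probabilities."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """One-way divergence KL(p || q) between two probability vectors."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def knowledge_synergy_loss(branch_logits):
    """
    Sum the one-way divergence over every ordered pair (u, v) of isomorphic
    branches, u != v, averaged over the N samples in the batch.
    branch_logits: list over branches; each entry is a list over samples of
    K-dimensional classification-layer outputs.
    """
    num_branches = len(branch_logits)
    num_samples = len(branch_logits[0])
    total = 0.0
    for u in range(num_branches):
        for v in range(num_branches):
            if u == v:
                continue
            for n in range(num_samples):
                p = softmax(branch_logits[u][n])
                q = softmax(branch_logits[v][n])
                total += kl_divergence(p, q)
    return total / num_samples
```

When all branches produce identical outputs, the loss is zero; branches that disagree are pulled toward similar prediction distributions, which is the mutual-imitation effect described in this application.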
  • the determination of the isomorphic training network corresponding to the initial pedestrian re-identification network includes:
  • An auxiliary training branch is drawn from the middle layer of the initial person re-identification network to generate an isomorphic training network with an asymmetric network structure.
  • the determination of the isomorphic training network corresponding to the initial pedestrian re-identification network includes:
  • An auxiliary training branch is drawn from the middle layer of the initial person re-identification network to generate an isomorphic training network with a symmetrical network structure.
  • the process of determining the triplet loss value of the triplet loss function includes:
  • the first triplet loss function is:
  • N is the total number of training samples
  • a is the anchor sample
  • y is the classification label of the sample
  • p is the sample with the largest intra-class distance belonging to the same classification label as the anchor sample
  • q is the sample with the smallest inter-class distance belonging to different classification labels from the anchor sample
  • m is the first parameter
  • d( , ) is used to calculate the distance
  • min d( , ) means to find the minimum distance
  • y a means the classification label of the anchor sample
  • y p means the classification label of the p sample
  • y q means the classification label of the q sample.
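The symbols above correspond to the widely used batch-hard formulation of the triplet loss. The following sketch assumes Euclidean distance for d( , ) and that [ ]+ is the hinge max(·, 0); both are assumptions for illustration, since the patent's formula is not reproduced in this text.

```python
import math

def euclidean(f1, f2):
    """Distance d( , ) between two embedding-layer feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def batch_hard_triplet_loss(features, labels, margin=0.3):
    """
    For every anchor a: take the hardest positive p (largest intra-class
    distance, same label y) and the hardest negative q (smallest inter-class
    distance, different label), then apply the hinge
    [d(a, p) - d(a, q) + m]+, averaged over the N anchors.
    """
    n = len(features)
    total = 0.0
    for a in range(n):
        pos = [euclidean(features[a], features[i])
               for i in range(n) if i != a and labels[i] == labels[a]]
        neg = [euclidean(features[a], features[i])
               for i in range(n) if labels[i] != labels[a]]
        if not pos or not neg:
            continue  # anchor has no valid positive or negative in the batch
        total += max(0.0, max(pos) - min(neg) + margin)
    return total / n
```

With well-separated classes the hinge clips every term to zero, so the loss vanishes; overlapping classes yield a positive loss that pushes positives together and negatives apart.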
  • after the first loss value of each isomorphic branch is determined, the method also includes:
  • the second triplet loss function is:
  • the selection of the first loss value with the smallest value from each isomorphic branch as the triplet loss value includes:
  • the second loss value with the smallest value is selected from each isomorphic branch as the triplet loss value.
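A minimal sketch of this selection step, assuming each isomorphic branch has already produced its own per-batch loss value (names are hypothetical):

```python
def select_branch_triplet_loss(loss_per_branch):
    """Return (index, value) of the isomorphic branch whose loss value is
    smallest; that value is used as the network's triplet loss value for
    this batch, so the currently best-performing branch drives training."""
    best = min(range(len(loss_per_branch)), key=lambda b: loss_per_branch[b])
    return best, loss_per_branch[best]
```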
  • the present application further provides a pedestrian re-identification device, including:
  • a network determination module configured to determine an isomorphic training network corresponding to the initial pedestrian re-identification network; wherein, the isomorphic training network has a plurality of isomorphic branches with the same network structure;
  • a parameter determination module configured to use a target loss function to train the isomorphic training network and determine the final weight parameters of each network layer in the isomorphic training network; wherein the target loss function includes a dynamic classification probability loss function based on knowledge synergy, and the dynamic classification probability loss function is used to: use the classification-layer output features of each training sample in every two isomorphic branches to determine the one-way knowledge synergy loss value between isomorphic branches;
  • a parameter loading module configured to load the final weight parameters through the initial pedestrian re-identification network to obtain the final pedestrian re-identification network
  • a pedestrian re-identification module configured to use the final pedestrian re-identification network to perform a pedestrian re-identification task
  • the parameter determination module includes:
  • the loss value determination unit is used to: during the training process of the isomorphic training network, determine the cross-entropy loss value of the cross-entropy loss function, determine the triplet loss value of the triplet loss function, and determine the one-way knowledge synergy loss value of the dynamic classification probability loss function;
  • a weight determination unit configured to use the total loss value of the cross-entropy loss value, the triplet loss value, and the one-way knowledge collaborative loss value to determine the final weight of each network layer in the isomorphic training network parameter;
  • the loss value determination unit includes:
  • the calculation subunit is used to calculate the one-way knowledge collaborative loss value by using the classification layer output features of each sample in each isomorphic branch and the dynamic classification probability loss function;
  • the dynamic classification probability loss function is:
  • L ksp is the one-way knowledge collaboration loss value
  • N is the total number of training samples
  • u represents the u-th isomorphic branch
  • v represents the v-th isomorphic branch
  • K is the dimension of the output feature of the classification layer
  • x n is the nth sample
  • θ u represents the network parameters of the u-th isomorphic branch
  • θ v represents the network parameters of the v-th isomorphic branch.
  • the loss value determination unit includes:
  • the first determination subunit is used to determine the first loss value of each isomorphism branch according to the embedding layer output feature of each sample in each isomorphism branch and the first triplet loss function;
  • the first triplet loss function is:
  • N is the total number of training samples
  • a is the anchor sample
  • y is the classification label of the sample
  • p is the sample with the largest intra-class distance belonging to the same classification label as the anchor sample
  • q is the sample with the minimum inter-class distance belonging to different classification labels from the anchor sample
  • m is the first parameter
  • d( , ) is used to calculate the distance
  • [ ] + denotes the hinge function, i.e. taking the maximum of the enclosed value and zero, and max d( , ) means to find the maximum distance
  • min d( , ) means to find the minimum distance
  • y a means the classification label of the anchor sample
  • y p means the classification label of the p sample
  • y q means the classification label of the q sample.
  • the loss value determination unit also includes:
  • the second determining subunit is used to determine the second loss value of each isomorphic branch by using the first loss value of each isomorphic branch and the second triplet loss function;
  • the second triplet loss function is:
  • the selection subunit is specifically configured to: select the second loss value with the smallest value from each isomorphic branch as the triplet loss value.
  • an electronic device including:
  • a processor configured to implement the steps of the above pedestrian re-identification method when executing the computer program.
  • the present application further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the above-mentioned pedestrian re-identification method are implemented.
  • The embodiment of the present application provides a pedestrian re-identification method, device, equipment and readable storage medium. Before the pedestrian re-identification task is performed, this scheme first constructs an isomorphic training network from the initial pedestrian re-identification network. Since the isomorphic training network has multiple isomorphic branches with the same network structure, the scheme can mine the feature information between the isomorphic branches during training so that they regularize each other, pushing each isomorphic branch toward higher accuracy. Moreover, this scheme trains the isomorphic training network through a dynamic classification probability loss function based on knowledge synergy, which realizes the interaction of different levels of information between the isomorphic branches during training: the multiple isomorphic branches provide different perspectives on the same data, and mutual regularization between the branches is realized through knowledge synergy between those perspectives, thereby improving the accuracy of the network.
  • The initial person re-identification network can then load the final weight parameters to perform pedestrian re-identification tasks, thereby improving the accuracy and performance of the network on those tasks. This reduces the storage space occupied on the device, which is more conducive to the storage and deployment of portable devices, reduces the amount of computation for performing the pedestrian re-identification task, and improves its processing rate. Moreover, because this scheme only changes the network training process and does not complicate the network in the application stage, it can maximize the potential of the network and improve network performance without increasing the number of parameters or the amount of computation.
  • FIG. 1 is a schematic flow diagram of a pedestrian re-identification method disclosed in an embodiment of the present application
  • Fig. 2a is a schematic diagram of a network structure disclosed in an embodiment of the present application.
  • FIG. 2b is a schematic diagram of another network structure disclosed in the embodiment of the present application.
  • FIG. 2c is a schematic diagram of another network structure disclosed in the embodiment of the present application.
  • Fig. 3a is a schematic diagram of an initial pedestrian re-identification network structure disclosed in the embodiment of the present application.
  • Fig. 3b is a schematic diagram of an isomorphic training network with an asymmetric network structure disclosed in the embodiment of the present application;
  • Fig. 3c is a schematic diagram of an isomorphic training network with a symmetrical network structure disclosed in the embodiment of the present application;
  • FIG. 4 is a schematic diagram of an isomorphic training network disclosed in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a final pedestrian re-identification network structure disclosed in the embodiment of the present application.
  • Fig. 6a is a schematic diagram of a specific isomorphic training network structure disclosed in the embodiment of the present application.
  • Fig. 6b is a schematic diagram of a specific final pedestrian re-identification network structure disclosed in the embodiment of the present application.
  • Fig. 6c is a schematic diagram of the execution flow of a pedestrian re-identification task disclosed in the embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a pedestrian re-identification device disclosed in an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application.
  • FIG. 1 is a schematic flow chart of a pedestrian re-identification method provided in an embodiment of the present application; as can be seen from FIG. 1, the method specifically includes the following steps:
  • The initial person re-identification network in this embodiment is an untrained original deep learning network. This embodiment does not limit the specific network structure of the initial person re-identification network, as long as the person re-identification operation can be performed after the network is trained. Moreover, this solution can be applied in many fields such as image classification, segmentation and retrieval; in this embodiment, the specific application field of person re-identification is taken as an example to describe the solution in detail.
  • a convolutional neural network is usually a deep structure composed of multi-layer networks.
  • Figs. 2a, 2b and 2c show three different network structures provided by the embodiment of the present application. Fig. 2a represents a 34-layer ResNet (Residual Network) containing Shortcut Connections.
  • Figure 2b represents a 34-layer Plain network
  • Figure 2c represents a 19-layer VGG (Visual Geometry Group) network.
  • the above networks are all multi-layer stacked structures, and the above single-branch network is called the backbone network in this solution.
  • Figure 3a provides a schematic diagram of an initial pedestrian re-identification network structure for the embodiment of this application. As can be seen from Figure 3a, in this embodiment the initial pedestrian re-identification network is illustrated with five layers, network layer A to network layer E, which constitute the backbone network.
  • An auxiliary training branch can be drawn from a middle layer of the initial person re-identification network to generate an isomorphic training network with an asymmetric network structure, or to generate an isomorphic training network with a symmetric network structure.
  • FIG. 3b: the embodiment of the present application provides a schematic diagram of an isomorphic training network with an asymmetric network structure.
  • FIG. 3c: an embodiment of the present application provides a schematic diagram of an isomorphic training network with a symmetrical network structure. As can be seen from Fig. 3b and Fig. 3c, the middle layers of the backbone network from which auxiliary training branches are derived in this embodiment are network layer C and network layer D. The auxiliary training branch derived from network layer C in Fig. 3b is network layer D′-network layer E″, and the auxiliary training branch derived from network layer D in Fig. 3b is network layer E′; network layer D′ has the same structure as network layer D, and network layers E′ and E″ have the same structure as network layer E. Therefore, the isomorphic training network with an asymmetric network structure generated in this embodiment has three isomorphic branches with the same network structure, which are:
  • The auxiliary training branches derived from network layer C in Fig. 3c are network layer D′-network layer E″ and network layer D′-network layer E‴, and the auxiliary training branch derived from network layer D is network layer E′; network layer D′ has the same structure as network layer D, and network layers E′, E″ and E‴ have the same structure as network layer E. Therefore, the isomorphic training network with a symmetrical network structure generated in this embodiment has four isomorphic branches with the same network structure, which are:
  • Since the network structure of each layer in a derived auxiliary training branch is the same as the network structure of the corresponding layer in the backbone network, the finally generated isomorphic training network has multiple isomorphic branches with the same network structure.
  • Although this solution derives the auxiliary training branch from a middle layer of the backbone network, it does not specifically limit which middle layer the auxiliary training branch is drawn from; this can be set according to the actual situation.
  • An isomorphic training network with an asymmetric network structure based on auxiliary derivation can be generated (as shown in Figure 3b), or an isomorphic training network with a symmetric network structure based on hierarchical derivation can be generated (as shown in Figure 3c); in actual application, which type of isomorphic training network to generate can be customized according to resource conditions.
  • If the computing performance of the hardware device is strong, an isomorphic training network with a symmetrical network structure can be generated; if the computing performance is average, an isomorphic training network with an asymmetric network structure can be generated, and so on.
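The branch derivation described above can be sketched structurally. Here the backbone is just an ordered list of layer names, and each auxiliary training branch copies the backbone layers after the chosen middle layer; this is an illustrative simplification (in a real network the copied layers would receive their own freshly initialized weights), and all names are hypothetical.

```python
import copy

def build_isomorphic_branches(backbone, split_points):
    """
    backbone: ordered list of layer identifiers, e.g. ["A", "B", "C", "D", "E"].
    split_points: indices of the middle layers after which an auxiliary
    training branch is derived. Each branch is a structurally identical
    copy of the remaining backbone layers.
    """
    branches = []
    for idx in split_points:
        branches.append(copy.deepcopy(backbone[idx + 1:]))
    return branches
```

With split points at network layers C and D (indices 2 and 3), this yields the two auxiliary branches of the Fig. 3b example: a copy of D-E and a copy of E.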
  • Heterogeneous auxiliary classification network structures are very common, such as GoogLeNet.
  • A heterogeneous auxiliary classification network refers to an auxiliary classification branch derived from the backbone network whose network structure is very different from that of the backbone. Designing heterogeneity-based auxiliary branches therefore requires rich experience: introducing heterogeneous branches at some locations will not increase network performance, and because a heterogeneous branch differs from the main branch in structure, it needs to be designed separately.
  • the auxiliary training branch based on the homogeneous network disclosed in the present application has at least the following advantages:
  • the network structure of the isomorphic auxiliary training branch is the same as the backbone network, and there is no need to design the network structure separately, so the network design is relatively simple.
  • the isomorphic auxiliary training branches have natural branch similarity, that is, each auxiliary training branch has the same structure and the same input, but the initialized weight values are different, so each branch provides its own view of the input data.
  • the branches can be regularized with each other, thus promoting the development of each branch in the direction of higher accuracy.
  • using the target loss function to train the isomorphic training network, and determining the final weight parameters of each network layer in the isomorphic training network;
  • the target loss function includes a dynamic classification probability loss function based on knowledge synergy, which is used to utilize the classification-layer output features of each training sample in every two isomorphic branches to determine the one-way knowledge synergy loss value between the isomorphic branches;
  • the isomorphic training network needs to be trained to converge through the target loss function, and the final weight parameters of the trained network are obtained after convergence.
  • the final weight parameters trained by the network are preloaded to perform final classification on the input data.
  • the current general training process for the network can be used for training, so as to obtain the final weight parameters.
  • the loss functions used can include the cross-entropy loss function, the triplet loss function, etc.
  • Based on the special structure of the isomorphic training network, this scheme proposes a dynamic classification probability loss function based on knowledge synergy. Using this loss function to train the isomorphic training network makes the probability distributions of the final prediction results similar across the isomorphic branches through mutual imitation learning; at the same time, by strengthening the information exchange among the branches, the backbone network supports the simultaneous convergence of multiple branch networks, which improves its generalization ability and further improves the performance of the network.
  • a training procedure for an isomorphic training network including the following steps:
  • According to the network structure of the initial pedestrian re-identification network, select an appropriate lead-out position in the backbone network to determine the middle layer(s) from which the auxiliary training branches are derived, and construct the auxiliary training branches based on the isomorphic network to obtain the isomorphic training network.
  • the target loss function includes: cross-entropy loss function, triplet loss function and knowledge synergy loss function
  • Correspondingly, the loss obtained for each isomorphic branch includes: the cross-entropy loss value, the triplet loss value and the knowledge synergy loss value.
  • the current network training process usually includes the following two stages: the first stage is the stage in which data is propagated from the low level to the high level, that is, the forward propagation stage.
  • the other stage is when the result of the forward propagation does not match the expectation, the stage of propagating the error from the high level to the bottom level, that is, the back propagation stage.
  • the specific training process is:
  • the network layer weights are initialized, generally using random initialization
  • the input training image data is passed through the forward propagation of each network layer such as the convolutional layer, the downsampling layer, and the fully connected layer to obtain the output value;
  • the error calculation method is: calculate the output value of the network, and obtain the total loss value based on the above target loss function;
  • the error is reversely transmitted back to the network, and the backpropagation error of each network layer such as the fully connected layer and the convolutional layer is obtained in turn.
  • Each layer of the network adjusts all weight coefficients in the network according to the backpropagation error of each layer, that is, updates the weights.
  • the final weight parameters of each network layer in the isomorphic training network can be obtained.
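The two training stages above (forward propagation to compute the output and loss, then backpropagation of the error to update the weights) can be illustrated on a deliberately tiny one-parameter model. This is a generic gradient-descent sketch, not the patent's network; the learning rate and data are arbitrary illustrative values.

```python
def train_step(w, x, y, lr=0.1):
    pred = w * x                 # forward propagation through the "network"
    loss = (pred - y) ** 2       # total loss from the target loss function
    grad = 2 * (pred - y) * x    # error propagated back to the weight
    return w - lr * grad, loss   # weight update

w = 0.0                          # step 1: weight initialization
for _ in range(50):              # repeated forward + backward passes
    w, loss = train_step(w, x=2.0, y=4.0)
```

After the loop, the weight converges to the value that minimizes the loss (here w approaches 2, since 2 × 2.0 = 4.0), mirroring how the final weight parameters of the isomorphic training network are obtained at convergence.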
  • the network performs image processing tasks such as pedestrian re-identification, it needs to remove all auxiliary training branches and load the final weight parameters.
  • this embodiment loads the final weight parameters through the initial pedestrian re-identification network without adding auxiliary training branches to obtain the final pedestrian re-identification network, and uses the final pedestrian re-identification network to perform image processing tasks such as pedestrian re-identification;
  • The weight parameters obtained by training the isomorphic training network include the weight parameters of the backbone network and the weight parameters of the auxiliary training branches, so when the final weight parameters are loaded through the initial person re-identification network, only the weight parameters of the backbone network are loaded.
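This selective loading can be sketched as follows, assuming (purely for illustration) that the trained parameters live in a flat name-to-value mapping whose prefix before the first dot is the layer name:

```python
def load_final_weights(trained_params, backbone_layer_names):
    """After training, the auxiliary training branches are discarded: only
    the weight parameters whose layer name belongs to the backbone network
    are loaded into the initial pedestrian re-identification network."""
    backbone = set(backbone_layer_names)
    return {name: value for name, value in trained_params.items()
            if name.split(".")[0] in backbone}
```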
  • Since the isomorphic training network has multiple isomorphic branches with the same network structure, this scheme can mine the feature information between the isomorphic branches during training so that the branches regularize each other and each isomorphic branch reaches higher accuracy. Moreover, training the isomorphic training network with the dynamic classification probability loss function based on knowledge synergy realizes the interaction of different levels of information between the isomorphic branches during training: multiple isomorphic branches provide different perspectives on the same data, and mutual regularization is realized through knowledge synergy between those perspectives.
  • The present application allows the final person re-identification network to avoid occupying additional storage space due to a huge number of parameters when performing the person re-identification task, thereby reducing storage space occupation; the final person re-identification network can therefore be deployed in a portable device and run on it to perform the task. Moreover, the final person re-identification network adds no extra computation when performing the task, so it can perform person re-identification in real time, improving both the accuracy and the execution speed of the task.
  • the isomorphic training network is trained using the target loss function, and the final weight parameters of each network layer in the isomorphic training network are determined, including:
  • during training of the isomorphic training network, determine the cross-entropy loss value of the cross-entropy loss function, the triplet loss value of the triplet loss function, and the one-way knowledge synergy loss value of the dynamic classification probability loss function; then use the total of the cross-entropy loss value, the triplet loss value, and the one-way knowledge synergy loss value to determine the final weight parameters of each network layer in the isomorphic training network.
  • this embodiment mainly trains the network with the cross-entropy loss function, the triplet loss function (Triplet Loss), and the dynamic classification probability loss function (knowledge synergy for dynamic classified probability, KSP).
  • f c (x n , ⁇ b ) represents the output features of the network model, and the subscript c represents the classification layer features obtained after the network passes through the softmax layer.
  • the process of determining the triplet loss value of the triplet loss function in this embodiment includes:
  • the first triplet loss function (Formula 3; the formula image in the source is reconstructed here from the surrounding definitions) is:

  $L_{tri}^{b}=\frac{1}{N}\sum_{a=1}^{N}\Big[\max_{p:\,y_p=y_a} d\big(f_e(x_a),f_e(x_p)\big)-\min_{q:\,y_q\neq y_a} d\big(f_e(x_a),f_e(x_q)\big)+m\Big]_{+}$
  • N is the total number of training samples
  • a is the anchor sample
  • y is the classification label of the sample
  • p is the sample with the largest intra-class distance belonging to the same classification label as the anchor sample
  • q is the sample with the minimum inter-class distance belonging to different classification labels from the anchor sample
  • m is the first parameter
  • d(·,·) computes the distance between two vectors
  • [·]_+ denotes max(·, 0); max d(·,·) takes the maximum distance
  • min d(·,·) takes the minimum distance
  • y a means the classification label of the anchor sample
  • y p means the classification label of the p sample
  • y q means the classification label of the q sample.
  • the triplet loss function mines the hard samples in the input data: it computes the maximum intra-class distance and the minimum inter-class distance within the triplet data and constrains them in the loss, so that the maximum intra-class distance is as small as possible and the minimum inter-class distance is as large as possible. After the samples are mapped into feature space (the features computed by the deep learning network), samples of different classes are pushed apart while samples of the same class are pulled together as much as possible, improving recognition accuracy.
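The batch-hard mining described above can be sketched as follows. This is an illustrative implementation of the stated idea (for each anchor, take the hardest positive and hardest negative inside a batch), not the patent's own code; the function and variable names are hypothetical.

```python
import numpy as np

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-hard triplet loss: for each anchor a, use the positive with
    the largest intra-class distance and the negative with the smallest
    inter-class distance, hinged at `margin` (the parameter m)."""
    n = len(labels)
    # pairwise Euclidean distances between embedding-layer features
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)
    losses = []
    for a in range(n):
        same = labels == labels[a]
        pos = dist[a][same & (np.arange(n) != a)]   # same label, not self
        neg = dist[a][~same]                        # different label
        if len(pos) == 0 or len(neg) == 0:
            continue  # no valid triplet for this anchor
        losses.append(max(pos.max() - neg.min() + margin, 0.0))
    return float(np.mean(losses))
```

With well-separated classes the hinge is inactive and the loss is zero; when classes overlap, the loss grows toward the margin.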
  • Formula 3 above is the triplet loss function provided in this embodiment; d(·,·) denotes the distance between vectors, for which the Euclidean distance, cosine distance, etc. may be used.
  • f_e(·) denotes the features of the image at the network's embedding layer. That is, all samples in each batch are traversed; each traversed sample is called the anchor sample, whose maximum intra-class distance and minimum inter-class distance are found and substituted into Formula 3.
  • f_p denotes the features of an image of the same class as the anchor sample.
  • f_q denotes the features of an image of a different class from the anchor sample. Note that in this example both are features extracted at the network's embedding layer.
  • the first triplet loss function in Formula 3 can push apart samples of different classes and pull together samples of the same class, improving recognition accuracy. However, it only considers the gap between the intra-class and inter-class differences, ignoring the absolute distance (i.e., absolute value) of the intra-class difference. If the absolute intra-class difference can be further constrained, same-class samples will be clustered even more tightly, further improving recognition accuracy. Therefore, in this embodiment, after determining the first loss value of each isomorphic branch, the following steps are also included:
  • the total loss function can be obtained from the cross-entropy loss of Formula 2 and the triplet loss of Formula 4, as in Formula 5, where λ is a hyperparameter that can be either trained or preset.
  • this embodiment provides a schematic flowchart of calculating the loss value by using the cross-entropy loss function and triplet loss function:
  • the process of determining the one-way knowledge collaborative loss value of the dynamic classification probability loss function in this embodiment includes:
  • the dynamic classification probability loss function (the formula image in the source is reconstructed here from the surrounding definitions, as a sum of KL-divergence terms over ordered branch pairs) is:

  $L_{ksp}=\frac{1}{N}\sum_{n=1}^{N}\sum_{(u,v)\in\Omega}\sum_{k=1}^{K} f_{c}^{k}(x_n,\theta_u)\log\frac{f_{c}^{k}(x_n,\theta_u)}{f_{c}^{k}(x_n,\theta_v)}$
  • L ksp is the one-way knowledge collaboration loss value
  • N is the total number of training samples
  • u represents the u-th isomorphic branch
  • v represents the v-th isomorphic branch
  • K is the dimension of the output feature of the classification layer
  • Ω denotes the selectable space formed by any two isomorphic branches
  • x_n is the n-th sample
  • ⁇ u represents the network parameters of the u-th isomorphic branch
  • ⁇ v represents the network parameters of the v-th isomorphic branch.
  • a knowledge-synergy-based loss function is added between every two branches to realize the interaction of information at different levels between the isomorphic branches.
  • Multiple isomorphic branches provide different perspectives on the same data, and the mutual regularization between branches is realized through knowledge collaboration between different perspectives, so that the network can develop toward a more accurate recognition rate with the help of group wisdom.
  • formula 6 is decomposed into the following two formulas here:
  • the one-way knowledge synergy loss values of all branch combinations over all samples are summed and then averaged to obtain the final one-way knowledge synergy loss value L_ksp.
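The pairwise-then-average computation just described can be sketched as below. It assumes a KL-divergence form consistent with the listed definitions (the exact formula image is not reproduced in the source); names are illustrative.

```python
import numpy as np
from itertools import permutations

def ksp_loss(branch_probs):
    """One-way knowledge synergy loss: sum a KL-like term over every
    ordered pair (u, v) of isomorphic branches, then average over the
    N samples. `branch_probs` is a list of (N, K) softmax outputs,
    one array per branch."""
    n = branch_probs[0].shape[0]
    total = 0.0
    for u, v in permutations(range(len(branch_probs)), 2):
        pu, pv = branch_probs[u], branch_probs[v]
        # sum over samples and over the K classification dimensions
        total += np.sum(pu * np.log((pu + 1e-12) / (pv + 1e-12)))
    return total / n
```

Identical branch outputs give a zero loss; the further the branches' predicted distributions diverge, the larger the loss, which is what drives the branches to imitate one another.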
  • the classification-layer output features of all isomorphic branches are first summed to obtain a total classification-layer output feature, and its average is taken as the virtual label of the virtual branch; that is, the virtual label f_v is computed as:

  $f_v=\frac{1}{B}\sum_{b=1}^{B} f_c(x_n,\theta_b)$
  • B is the total number of isomorphic branches
  • b represents the b-th isomorphic branch
  • x n is the n-th sample
  • ⁇ b represents the network parameters of the b-th isomorphic branch
  • f c (x n , ⁇ b ) is x n outputs features at the classification layer of the b-th isomorphic branch.
  • the target loss function in this application also includes: a virtual branch-based knowledge collaborative loss function, and the virtual branch-based knowledge collaborative loss function is specifically:
  • L v is the virtual branch knowledge synergy loss value.
  • the total loss value of the cross-entropy loss value, the triplet loss value, and the one-way knowledge synergy loss value is:
  • when the target loss function also includes the virtual-branch-based knowledge synergy loss function, the total loss additionally includes the virtual-branch knowledge synergy loss value, that is:
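The overall objective can be sketched as the sum of the four terms. This assumes the λ hyperparameter from Formula 5 weights the triplet term and the other terms enter with unit weight — the exact weighting is not spelled out in the extracted text.

```python
def total_loss(ce, tri, ksp, lv, lam=1.0):
    """Total training objective: cross-entropy + λ·triplet
    + one-way knowledge synergy + virtual-branch synergy.
    The unit weights on `ksp` and `lv` are an assumption."""
    return ce + lam * tri + ksp + lv
```

During training, this scalar is the value that is backpropagated to update all branches jointly.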
  • the embodiment of the present application provides a knowledge-synergy auxiliary training method that performs collaborative training by reconstructing network layers, adding knowledge synergy loss functions, and similar means, improving network performance without increasing the parameter count or the amount of computation.
  • FIG. 6a is a schematic diagram of a specific isomorphic training network structure provided by the embodiment of the present application.
  • Figure 6a shows a typical MobileNet v2 network structure.
  • the Bottleneck structure of MobileNet is a residual structure formed by stacking multiple depthwise-separable convolution layers; it is a fixed structure and is not described further here.
  • Conv represents the convolutional layer
  • the arrow 1 of each isomorphic branch represents the Global pool layer
  • the arrow 2 of each isomorphic branch represents Conv 1 ⁇ 1.
  • the structure in the figure is exactly the same as that of MobileNet V2.
  • as shown in Fig. 6a, in this embodiment, based on the MobileNet v2 network structure, one isomorphic branch is drawn from the output of the third Bottleneck and another from the output of the fifth Bottleneck.
  • cross-entropy loss, triplet loss, and dynamic classification probability loss are established at the output layer, and training is performed.
  • the double-headed arrows in Figure 6a represent the knowledge synergy relationship between two branches.
  • Fig. 6b is a schematic diagram of a specific final pedestrian re-identification network structure provided by the embodiment of the present application.
  • FIG. 6c is a schematic diagram of the execution flow of a pedestrian re-identification task provided by the embodiment of the present application. As seen from Figure 6c, when the final pedestrian re-identification network is applied to the task in this embodiment, input image 1, input image 2, and input image 3 are fed into the final network to obtain their embedding-layer features.
  • images 1, 2, and 3 constitute the query data set for the pedestrian re-identification task. The image to be queried is then input into the network to obtain its embedding-layer features.
  • the comparison method is: compute the distance (i.e., the vector distance) between the embedding-layer features of the image to be queried and all features in the query data set; the query data sample with the smallest distance is the same person as the image to be queried.
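The nearest-feature comparison just described can be sketched as follows; the Euclidean distance is used here as one of the distances the text allows, and all names are illustrative.

```python
import numpy as np

def query_identity(query_feat, gallery_feats, gallery_ids):
    """Rank the gallery (query data set) by vector distance to the
    query image's embedding-layer feature; the closest gallery sample
    is predicted to be the same person as the query."""
    d = np.linalg.norm(gallery_feats - query_feat, axis=1)
    return gallery_ids[int(np.argmin(d))]
```

In practice the gallery features are extracted once and cached, so each query costs only one distance computation per gallery entry.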
  • an auxiliary training method based on isomorphic branches is proposed to establish multiple views of the input data, and a triplet loss function based on the auxiliary branches is proposed; the head network of each auxiliary branch is trained with this loss function. Further, to carry out knowledge synergy and realize information interaction between isomorphic branches, this application adds a knowledge-synergy-based loss function between every two branches so that different levels of information interact between branches: multiple branches provide different views of the same data, and mutual regularization between branches is realized through knowledge synergy across those views, improving the accuracy of the network.
  • the following is an introduction to the pedestrian re-identification device, equipment, and medium provided by the embodiments of the present application.
  • the pedestrian re-identification device, equipment, and medium described below and the pedestrian re-identification method described above can be referred to each other.
  • FIG. 7 is a schematic structural diagram of a pedestrian re-identification device provided by an embodiment of the present application, including:
  • a network determination module 11 configured to determine an isomorphic training network corresponding to the initial pedestrian re-identification network; wherein, the isomorphic training network has a plurality of isomorphic branches with the same network structure;
  • a parameter determination module 12, configured to train the isomorphic training network with a target loss function and determine the final weight parameters of each network layer in the isomorphic training network, where the target loss function includes a knowledge-synergy-based dynamic classification probability loss function used to determine the one-way knowledge synergy loss value between isomorphic branches from the classification-layer output features of each training sample in every two isomorphic branches;
  • a parameter loading module 13 configured to load the final weight parameters through the initial pedestrian re-identification network to obtain the final pedestrian re-identification network;
  • the pedestrian re-identification module 14 is configured to use the final pedestrian re-identification network to perform a pedestrian re-identification task.
  • the network determination module 11 is specifically configured to: lead an auxiliary training branch out of an intermediate layer of the initial pedestrian re-identification network to generate an isomorphic training network with an asymmetric network structure; or lead an auxiliary training branch out of an intermediate layer of the initial pedestrian re-identification network to generate an isomorphic training network with a symmetric network structure.
  • the parameter determination module 12 includes:
  • a loss value determination unit, configured to determine, during training of the isomorphic training network, the cross-entropy loss value of the cross-entropy loss function, the triplet loss value of the triplet loss function, and the one-way knowledge synergy loss value of the dynamic classification probability loss function;
  • a weight determination unit, configured to use the total of the cross-entropy loss value, the triplet loss value, and the one-way knowledge synergy loss value to determine the final weight parameters of each network layer in the isomorphic training network.
  • the loss value determination unit includes:
  • the first determination subunit is used to determine the first loss value of each isomorphism branch according to the embedding layer output feature of each sample in each isomorphism branch and the first triplet loss function;
  • the first triplet loss function is:
  • N is the total number of training samples
  • a is the anchor sample
  • y is the classification label of the sample
  • p is the sample with the largest intra-class distance belonging to the same classification label as the anchor sample
  • q is the sample with the minimum inter-class distance belonging to different classification labels from the anchor sample
  • m is the first parameter
  • d(·,·) computes the distance between two vectors
  • [·]_+ denotes max(·, 0); max d(·,·) takes the maximum distance
  • min d(·,·) takes the minimum distance
  • y a means the classification label of the anchor sample
  • y p means the classification label of the p sample
  • y q means the classification label of the q sample.
  • the loss value determination unit also includes:
  • the second determining subunit is used to determine the second loss value of each isomorphic branch by using the first loss value of each isomorphic branch and the second triplet loss function;
  • the second triplet loss function is:
  • the selection subunit is specifically configured to: select the second loss value with the smallest value from each isomorphic branch as the triplet loss value.
  • the loss value determination unit includes:
  • the calculation subunit is used to calculate the one-way knowledge collaborative loss value by using the classification layer output features of each sample in each isomorphic branch and the dynamic classification probability loss function;
  • the dynamic classification probability loss function is:
  • L ksp is the one-way knowledge collaboration loss value
  • N is the total number of training samples
  • u represents the u-th isomorphic branch
  • v represents the v-th isomorphic branch
  • K is the dimension of the output feature of the classification layer
  • x n is the nth sample
  • ⁇ u represents the network parameters of the u-th isomorphic branch
  • ⁇ v represents the network parameters of the v-th isomorphic branch.
  • FIG. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, including:
  • a memory 21, used to store a computer program;
  • the processor 22 is configured to implement the steps of the pedestrian re-identification method described in any method embodiment above when executing the computer program.
  • the device may be a PC (Personal Computer, personal computer), or may be a terminal device such as a smart phone, a tablet computer, a palmtop computer, or a portable computer.
  • the device may include a memory 21 , a processor 22 and a bus 23 .
  • the memory 21 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card type memory (for example, SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc.
  • the storage 21 may be an internal storage unit of the device in some embodiments, such as a hard disk of the device.
  • Memory 21 may also be an external storage device of the device in other embodiments, such as a plug-in hard disk equipped on the device, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash memory card (Flash Card) etc.
  • the memory 21 may also include both an internal storage unit of the device and an external storage device.
  • the memory 21 can not only be used to store application software and various data installed in the device, such as program codes for implementing the pedestrian re-identification method, but also be used to temporarily store data that has been output or will be output.
  • in some embodiments, the processor 22 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, used to run the program code stored in the memory 21 or to process data, such as the program code implementing the pedestrian re-identification method.
  • the bus 23 may be a peripheral component interconnect standard (PCI for short) bus or an extended industry standard architecture (EISA for short) bus or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 8, but this does not mean there is only one bus or one type of bus.
  • the device can also include a network interface 24, which may optionally include a wired interface and/or a wireless interface (such as a WI-FI interface or a Bluetooth interface), and is usually used to establish a communication connection between the device and other electronic devices.
  • the device may further include a user interface 25, which may include a display (Display), an input unit such as a keyboard (Keyboard), and the optional user interface 25 may also include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, and the like.
  • the display may also be properly referred to as a display screen or a display unit, and is used for displaying information processed in the device and for displaying a visualized user interface.
  • FIG. 8 shows a device with components 21-25. Those skilled in the art can understand that the structure shown in FIG. 8 does not constitute a limitation of the device; the device may include fewer or more components than shown, combine certain components, or arrange the components differently.
  • the embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the pedestrian re-identification method described in any of the above method embodiments are implemented.
  • the storage medium may include various media capable of storing program code, such as a USB flash drive, removable hard disk, read-only memory (ROM), random access memory (RAM), magnetic disk, or optical disk.


Abstract

A pedestrian re-identification method, apparatus, device, and readable storage medium: an isomorphic training network corresponding to an initial pedestrian re-identification network is trained with target functions including a dynamic classification probability loss function, yielding a final pedestrian re-identification network carrying more accurate final weight parameters, which then performs the pedestrian re-identification task. In this way, the accuracy and performance of the re-identification network on re-identification tasks are improved, the storage space occupied on the device is reduced (favoring storage and deployment on portable devices), the computation required to perform the task is reduced, and the processing speed of the task is increased.

Description

Pedestrian re-identification method, apparatus, device, and readable storage medium
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on June 29, 2021, with application number 202110727876.6 and invention title "Pedestrian re-identification method, apparatus, device, and readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of image recognition, and more particularly to a pedestrian re-identification method, apparatus, device, and readable storage medium.
Background
Person re-identification (Re-ID) is an important image recognition technology widely used in fields such as public security systems and traffic supervision. It searches across cameras distributed at different locations to determine whether the pedestrians in the fields of view of different cameras are the same person, and can be used in scenarios such as searching for criminal suspects or lost children. Person re-identification is mainly implemented with deep learning, and as deep learning technology keeps developing, network models emerge one after another. To further improve the accuracy and performance of re-identification networks, researchers often design new networks by making them deeper or wider. Admittedly, as a network becomes deeper or wider its learning ability grows, but improving network performance this way has the following drawbacks:
1. A deeper, wider, or more complex re-identification network usually brings a surge in the number of parameters, which is unfavorable for storage and deployment on portable devices. For example, deploying a real-time pedestrian detection and recognition program on a network camera requires a network with a small parameter count (easy to store) and high recognition accuracy.
2. A deeper, wider, or more complex re-identification network usually increases the amount of computation, which is unfavorable for applications with high real-time requirements. For example, in the retrieval and tracking of criminal suspects, a large computational delay can make the whole system miss the best opportunity, negatively affecting system functionality.
Therefore, how to reduce the storage space occupied by the re-identification network (facilitating storage and deployment on portable devices), reduce the computation needed to perform the re-identification task, and increase the task's processing speed, while improving the network's accuracy and performance on the task, is a technical problem to be solved by those skilled in the art.
Summary
The purpose of this application is to provide a pedestrian re-identification method, apparatus, device, and readable storage medium that, without increasing the parameter count or the amount of computation, improve the accuracy and performance of a deep learning network on re-identification tasks, reduce the storage space occupied on the device (favoring storage and deployment on portable devices), reduce the computation required for the task, and increase the task's processing speed.
To achieve the above purpose, this application provides a pedestrian re-identification method, including:
determining an isomorphic training network corresponding to an initial pedestrian re-identification network, where the isomorphic training network has a plurality of isomorphic branches with the same network structure;
training the isomorphic training network with a target loss function, and determining the final weight parameters of each network layer in the isomorphic training network, where the target loss function includes a knowledge-synergy-based dynamic classification probability loss function used to determine the one-way knowledge synergy loss value between isomorphic branches from the classification-layer output features of each training sample in every two isomorphic branches;
loading the final weight parameters through the initial pedestrian re-identification network to obtain a final pedestrian re-identification network, so as to perform a pedestrian re-identification task with the final network;
where training the isomorphic training network with the target loss function and determining the final weight parameters of each network layer includes:
during training of the isomorphic training network, determining the cross-entropy loss value of a cross-entropy loss function, the triplet loss value of a triplet loss function, and the one-way knowledge synergy loss value of the dynamic classification probability loss function;
using the total of the cross-entropy loss value, the triplet loss value, and the one-way knowledge synergy loss value to determine the final weight parameters of each network layer in the isomorphic training network;
where the process of determining the one-way knowledge synergy loss value of the dynamic classification probability loss function includes:
computing the one-way knowledge synergy loss value from the classification-layer output features of each sample in each isomorphic branch and the dynamic classification probability loss function; the dynamic classification probability loss function (the formula image in the source is reconstructed here from the surrounding definitions) is:

$L_{ksp}=\frac{1}{N}\sum_{n=1}^{N}\sum_{(u,v)\in\Omega}\sum_{k=1}^{K} f_{c}^{k}(x_n,\theta_u)\log\frac{f_{c}^{k}(x_n,\theta_u)}{f_{c}^{k}(x_n,\theta_v)}$

where L_ksp is the one-way knowledge synergy loss value, N is the total number of training samples, u denotes the u-th isomorphic branch, v denotes the v-th isomorphic branch, Ω denotes the selectable space formed by any two isomorphic branches, K is the dimension of the classification-layer output feature, x_n is the n-th sample, $f_{c}^{k}(x_n,\theta_u)$ is the k-th dimension of x_n's classification-layer output feature in the u-th isomorphic branch, $f_{c}^{k}(x_n,\theta_v)$ is the k-th dimension of x_n's classification-layer output feature in the v-th isomorphic branch, θ_u denotes the network parameters of the u-th isomorphic branch, and θ_v denotes the network parameters of the v-th isomorphic branch.
Where determining the isomorphic training network corresponding to the initial pedestrian re-identification network includes:
leading an auxiliary training branch out of an intermediate layer of the initial pedestrian re-identification network to generate an isomorphic training network with an asymmetric network structure.
Where determining the isomorphic training network corresponding to the initial pedestrian re-identification network includes:
leading an auxiliary training branch out of an intermediate layer of the initial pedestrian re-identification network to generate an isomorphic training network with a symmetric network structure.
Where the process of determining the triplet loss value of the triplet loss function includes:
determining a first loss value of each isomorphic branch from the embedding-layer output feature of each sample in each isomorphic branch and a first triplet loss function;
selecting the smallest first loss value among the isomorphic branches as the triplet loss value;
where the first triplet loss function (the formula image in the source is reconstructed here from the surrounding definitions) is:

$L_{tri}^{b}=\frac{1}{N}\sum_{a=1}^{N}\Big[\max_{p:\,y_p=y_a} d\big(f_e(x_a),f_e(x_p)\big)-\min_{q:\,y_q\neq y_a} d\big(f_e(x_a),f_e(x_q)\big)+m\Big]_{+}$

where $L_{tri}^{b}$ is the first loss value of the b-th isomorphic branch, N is the total number of training samples, a is the anchor sample, $f_e(x_a)$ is the embedding-layer output feature of the anchor sample, y is the classification label of a sample, p is the sample with the largest intra-class distance that shares the anchor sample's classification label, $f_e(x_p)$ is the embedding-layer output feature of sample p, q is the sample with the smallest inter-class distance whose classification label differs from the anchor sample's, $f_e(x_q)$ is the embedding-layer output feature of sample q, m is the first parameter, d(·,·) computes the distance, [·]_+ denotes max(·,0), max d(·,·) takes the maximum distance, min d(·,·) takes the minimum distance, y_a denotes the anchor sample's classification label, y_p the classification label of sample p, and y_q the classification label of sample q.
Where after determining the first loss value of each isomorphic branch, the method further includes:
determining a second loss value of each isomorphic branch from the first loss value of each isomorphic branch and a second triplet loss function;
the second triplet loss function (the formula image in the source is reconstructed here from the stated intent of additionally constraining the absolute intra-class distance) is:

$\tilde{L}_{tri}^{b}=L_{tri}^{b}+\beta\,\frac{1}{N}\sum_{a=1}^{N}\max_{p:\,y_p=y_a} d\big(f_e(x_a),f_e(x_p)\big)$

where $\tilde{L}_{tri}^{b}$ is the second loss value of the b-th isomorphic branch and β is the second parameter;
correspondingly, selecting the smallest first loss value among the isomorphic branches as the triplet loss value includes:
selecting the smallest second loss value among the isomorphic branches as the triplet loss value.
To achieve the above purpose, this application further provides a pedestrian re-identification apparatus, including:
a network determination module, configured to determine an isomorphic training network corresponding to an initial pedestrian re-identification network, where the isomorphic training network has a plurality of isomorphic branches with the same network structure;
a parameter determination module, configured to train the isomorphic training network with a target loss function and determine the final weight parameters of each network layer in the isomorphic training network, where the target loss function includes a knowledge-synergy-based dynamic classification probability loss function used to determine the one-way knowledge synergy loss value between isomorphic branches from the classification-layer output features of each training sample in every two isomorphic branches;
a parameter loading module, configured to load the final weight parameters through the initial pedestrian re-identification network to obtain a final pedestrian re-identification network;
a pedestrian re-identification module, configured to perform a pedestrian re-identification task with the final pedestrian re-identification network;
where the parameter determination module includes:
a loss value determination unit, configured to determine, during training of the isomorphic training network, the cross-entropy loss value of the cross-entropy loss function, the triplet loss value of the triplet loss function, and the one-way knowledge synergy loss value of the dynamic classification probability loss function;
a weight determination unit, configured to use the total of the cross-entropy loss value, the triplet loss value, and the one-way knowledge synergy loss value to determine the final weight parameters of each network layer in the isomorphic training network;
where the loss value determination unit includes:
a calculation subunit, configured to compute the one-way knowledge synergy loss value from the classification-layer output features of each sample in each isomorphic branch and the dynamic classification probability loss function;
the dynamic classification probability loss function (formula image reconstructed as above) is:

$L_{ksp}=\frac{1}{N}\sum_{n=1}^{N}\sum_{(u,v)\in\Omega}\sum_{k=1}^{K} f_{c}^{k}(x_n,\theta_u)\log\frac{f_{c}^{k}(x_n,\theta_u)}{f_{c}^{k}(x_n,\theta_v)}$

where L_ksp is the one-way knowledge synergy loss value, N is the total number of training samples, u denotes the u-th isomorphic branch, v denotes the v-th isomorphic branch, Ω denotes the selectable space formed by any two isomorphic branches, K is the dimension of the classification-layer output feature, x_n is the n-th sample, $f_{c}^{k}(x_n,\theta_u)$ is the k-th dimension of x_n's classification-layer output feature in the u-th isomorphic branch, $f_{c}^{k}(x_n,\theta_v)$ is the k-th dimension of x_n's classification-layer output feature in the v-th isomorphic branch, θ_u denotes the network parameters of the u-th isomorphic branch, and θ_v denotes the network parameters of the v-th isomorphic branch.
Where the loss value determination unit includes:
a first determination subunit, configured to determine a first loss value of each isomorphic branch from the embedding-layer output feature of each sample in each isomorphic branch and a first triplet loss function;
a selection subunit, configured to select the smallest first loss value among the isomorphic branches as the triplet loss value;
where the first triplet loss function (formula image reconstructed as above) is:

$L_{tri}^{b}=\frac{1}{N}\sum_{a=1}^{N}\Big[\max_{p:\,y_p=y_a} d\big(f_e(x_a),f_e(x_p)\big)-\min_{q:\,y_q\neq y_a} d\big(f_e(x_a),f_e(x_q)\big)+m\Big]_{+}$

where $L_{tri}^{b}$ is the first loss value of the b-th isomorphic branch, N is the total number of training samples, a is the anchor sample, $f_e(x_a)$ is the embedding-layer output feature of the anchor sample, y is the classification label of a sample, p is the sample with the largest intra-class distance that shares the anchor sample's classification label, $f_e(x_p)$ is the embedding-layer output feature of sample p, q is the sample with the smallest inter-class distance whose classification label differs from the anchor sample's, $f_e(x_q)$ is the embedding-layer output feature of sample q, m is the first parameter, d(·,·) computes the distance, [·]_+ denotes max(·,0), max d(·,·) takes the maximum distance, min d(·,·) takes the minimum distance, y_a denotes the anchor sample's classification label, y_p the classification label of sample p, and y_q the classification label of sample q.
Where the loss value determination unit further includes:
a second determination subunit, configured to determine a second loss value of each isomorphic branch from the first loss value of each isomorphic branch and a second triplet loss function;
the second triplet loss function (formula image reconstructed as above) is:

$\tilde{L}_{tri}^{b}=L_{tri}^{b}+\beta\,\frac{1}{N}\sum_{a=1}^{N}\max_{p:\,y_p=y_a} d\big(f_e(x_a),f_e(x_p)\big)$

where $\tilde{L}_{tri}^{b}$ is the second loss value of the b-th isomorphic branch and β is the second parameter;
correspondingly, the selection subunit is specifically configured to: select the smallest second loss value among the isomorphic branches as the triplet loss value.
To achieve the above purpose, this application further provides an electronic device, including:
a memory, used to store a computer program;
a processor, used to implement the steps of the above pedestrian re-identification method when executing the computer program.
To achieve the above purpose, this application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the above pedestrian re-identification method.
From the above solutions, the embodiments of this application provide a pedestrian re-identification method, apparatus, device, and readable storage medium. Before performing the re-identification task, this scheme first constructs the isomorphic training network of the initial pedestrian re-identification network. Because the isomorphic training network has multiple isomorphic branches with the same network structure, feature information between the branches can be mined during training so that the branches regularize one another, raising each branch's accuracy. Moreover, training the isomorphic training network with the knowledge-synergy-based dynamic classification probability loss function allows different levels of information to interact between branches during training: the multiple isomorphic branches provide different views of the same data, and mutual regularization between branches is realized through knowledge synergy across those views, improving the network's accuracy. Therefore, after the isomorphic training network is trained through the above operations to obtain more accurate final weight parameters, the initial pedestrian re-identification network can load these parameters and perform the re-identification task, improving the accuracy and performance of the network on the task, reducing the storage space occupied on the device (favoring storage and deployment on portable devices), reducing the computation required, and increasing the processing speed. In addition, since this scheme only changes the training process and does not complicate the network at application time, it can maximize the network's potential and improve its performance without adding any parameters or computation.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a pedestrian re-identification method disclosed in an embodiment of this application;
FIG. 2a is a schematic diagram of a network structure disclosed in an embodiment of this application;
FIG. 2b is a schematic diagram of another network structure disclosed in an embodiment of this application;
FIG. 2c is a schematic diagram of another network structure disclosed in an embodiment of this application;
FIG. 3a is a schematic diagram of an initial pedestrian re-identification network structure disclosed in an embodiment of this application;
FIG. 3b is a schematic diagram of an isomorphic training network with an asymmetric network structure disclosed in an embodiment of this application;
FIG. 3c is a schematic diagram of an isomorphic training network with a symmetric network structure disclosed in an embodiment of this application;
FIG. 4 is a schematic diagram of an isomorphic training network disclosed in an embodiment of this application;
FIG. 5 is a schematic diagram of a final pedestrian re-identification network structure disclosed in an embodiment of this application;
FIG. 6a is a schematic diagram of a specific isomorphic training network structure disclosed in an embodiment of this application;
FIG. 6b is a schematic diagram of a specific final pedestrian re-identification network structure disclosed in an embodiment of this application;
FIG. 6c is a schematic diagram of a pedestrian re-identification task execution flow disclosed in an embodiment of this application;
FIG. 7 is a schematic structural diagram of a pedestrian re-identification apparatus disclosed in an embodiment of this application;
FIG. 8 is a schematic structural diagram of an electronic device disclosed in an embodiment of this application.
Detailed Description
In this application, it was found that multiple views of the same data provide additional regularization information, improving network accuracy; that is, multiple results for the same image can assist one another, so that more accurate results are obtained through the wisdom of the group. These results include both final and intermediate results. On this basis, this application discloses a pedestrian re-identification method, apparatus, device, and readable storage medium. By introducing a knowledge synergy method, the training process is optimized to tap the network's potential and improve its accuracy and performance without introducing any extra network parameters or computation, so that the network can reach optimal performance and show better results at application time. Here, knowledge is defined in this application as the feature maps in the network.
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. Based on the embodiments of this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
Referring to FIG. 1, a schematic flowchart of a pedestrian re-identification method provided by an embodiment of this application; as can be seen from FIG. 1, the method specifically includes the following steps:
S101: determine an isomorphic training network corresponding to an initial pedestrian re-identification network; the isomorphic training network has a plurality of isomorphic branches with the same network structure.
It should be noted that the initial pedestrian re-identification network in this embodiment is an untrained original deep learning network, and this embodiment does not limit its specific network structure, as long as the network can perform re-identification after training. Moreover, this scheme can be applied in many fields such as image classification, segmentation, and retrieval; this embodiment only takes the specific application field of pedestrian re-identification as an example.
In this embodiment, after the initial pedestrian re-identification network is obtained, the corresponding isomorphic training network can be obtained by reconstructing it. Specifically, a convolutional neural network is usually a deep structure formed by stacking multiple layers. Referring to FIGS. 2a, 2b, and 2c, schematic diagrams of three different network structures provided by embodiments of this application: FIG. 2a represents a 34-layer ResNet (Residual Network) containing shortcut connections, FIG. 2b represents a 34-layer plain network, and FIG. 2c represents a 19-layer VGG (Visual Geometry Group) network. All of these are multi-layer stacked structures, and such single-branch networks are called backbone networks in this scheme. To describe the isomorphic training network clearly, refer to FIG. 3a, a schematic diagram of an initial pedestrian re-identification network structure provided by an embodiment of this application. As can be seen from FIG. 3a, this embodiment uses as an example an initial network with five layers, network layer A through network layer E, which form the backbone network.
When creating the isomorphic training network corresponding to the initial network, auxiliary training branches can be led out of intermediate layers of the initial network to generate an isomorphic training network with an asymmetric network structure, or one with a symmetric network structure. FIG. 3b shows an isomorphic training network with an asymmetric structure, and FIG. 3c shows one with a symmetric structure. From FIGS. 3b and 3c, the intermediate layers from which auxiliary branches are led are network layer C and network layer D. In FIG. 3b, the auxiliary branch led from layer C is layer D'-layer E'', and the branch led from layer D is layer E', where layer D' has the same structure as layer D, and layers E' and E'' have the same structure as layer E. Thus, the asymmetric isomorphic training network generated in this embodiment has three isomorphic branches with the same structure:
1. layer A - layer B - layer C - layer D - layer E;
2. layer A - layer B - layer C - layer D - layer E';
3. layer A - layer B - layer C - layer D' - layer E''.
Further, in FIG. 3c, the auxiliary branches led from layer C are layer D'-layer E'' and layer D'-layer E''', and the branch led from layer D is layer E', where layer D' has the same structure as layer D and layers E', E'', and E''' have the same structure as layer E. Thus, the symmetric isomorphic training network generated in this embodiment has four isomorphic branches with the same structure:
1. layer A - layer B - layer C - layer D - layer E;
2. layer A - layer B - layer C - layer D - layer E';
3. layer A - layer B - layer C - layer D' - layer E'';
4. layer A - layer B - layer C - layer D' - layer E'''.
It can be seen that since the network layers in the led-out auxiliary branches have the same structures as the corresponding backbone layers, the finally generated isomorphic training network has multiple isomorphic branches with the same structure. Moreover, this scheme does not limit which intermediate layer of the backbone the auxiliary branches are led from; it can be set according to the actual situation. After leading out auxiliary branches, either an auxiliary-derived isomorphic training network with an asymmetric structure (FIG. 3b) or a hierarchically derived one with a symmetric structure (FIG. 3c) can be generated; in practice, the choice can be customized according to resources, e.g., hardware with strong computing performance can generate a symmetric isomorphic training network, while ordinary hardware can generate an asymmetric one.
It should be understood that heterogeneous auxiliary classification structures are very common in current deep learning networks, e.g., GoogleNet. A heterogeneous auxiliary classification network leads an auxiliary classification branch from the backbone, but the branch's structure is very different from the backbone's. Designing heterogeneous auxiliary branches therefore requires rich experience; simply introducing heterogeneous branches at certain layers does not improve performance, and because the branch differs structurally from the backbone it must be designed separately. Compared with auxiliary training branches based on heterogeneous networks, the isomorphic-network-based auxiliary training branches disclosed in this application have at least the following advantages:
1) The isomorphic auxiliary training branch has the same structure as the backbone and needs no separately designed structure, so the network design is simple.
2) Isomorphic auxiliary training branches have a natural branch similarity: each branch has the same structure and the same input but different initial weight values, so each branch provides its own view of the input data. Mining the feature information between auxiliary branches lets the branches regularize one another, pushing each branch toward higher accuracy.
S102: train the isomorphic training network with a target loss function and determine the final weight parameters of each network layer in the isomorphic training network; the target loss function includes a knowledge-synergy-based dynamic classification probability loss function that uses the classification-layer output features of each training sample in every two isomorphic branches to determine the one-way knowledge synergy loss value between isomorphic branches.
S103: load the final weight parameters through the initial pedestrian re-identification network to obtain the final pedestrian re-identification network, so as to perform the pedestrian re-identification task with the final network.
In this embodiment, after the isomorphic training network is built, it needs to be trained to convergence with the target loss function, and the trained final weight parameters of the network are obtained upon convergence. When performing tasks such as re-identification, the trained final weight parameters are loaded in advance to perform the final classification of the input data. It should be noted that the isomorphic training network can be trained with the current general training procedure to obtain the final weight parameters; the loss functions used during training can include the cross-entropy loss function, the triplet loss function, and so on. Moreover, because the isomorphic training network in this embodiment has multiple branches with the same structure, this scheme proposes, based on this special structure, a knowledge-synergy-based dynamic classification probability loss function. Training the isomorphic training network with it lets the branches imitate and learn from one another so that the probability distributions of their final predictions become similar; at the same time, strengthening the information exchange between branches lets the backbone support the convergence of multiple branch networks simultaneously, improving the backbone's generalization ability and thus further improving network performance.
This embodiment provides a training procedure for the isomorphic training network, including the following steps:
1. According to the structure of the initial pedestrian re-identification network, choose suitable lead-out positions in the backbone, thereby determining the intermediate layers from which auxiliary training branches are led, and build the isomorphic-network-based auxiliary training branches to obtain the isomorphic training network.
2. Determine the target loss function and use it to compute losses for all isomorphic branches in the isomorphic training network; the losses correspond to the target loss function: if the target loss function includes the cross-entropy loss function, the triplet loss function, and the knowledge synergy loss function, the obtained branch losses likewise include the cross-entropy loss value, the triplet loss value, and the knowledge synergy loss value.
3. Train the network to convergence with the above loss functions.
4. Store the trained weight parameters.
具体来说,目前的网络训练过程通常包括如下两个阶段:第一个阶段是数据由低层次向高层次传播的阶段,即前向传播阶段;另一个阶段是,当前向传播得出的结果与预期不相符时,将误差从高层次向低层次进行传播训练的阶段,即反向传播阶段。具体训练过程为:
1、网络层权值进行初始化,一般采用随机初始化;
2、输入训练图像数据经过卷积层、下采样层、全连接层等各网络层的前向传播得到输出值;
3、求出网络的输出值与目标值(标签)之间的误差,误差求取方法为:求取网络的输出值,并基于上述目标损失函数得出总损失值;
4、将误差反向传回网络中,依次求得网络各层:全连接层,卷积层等各网络层的反向传播误差。
5、网络各层根据各层的反向传播误差对网络中的所有权重系数进行调整,即进行权重的更新。
6、重新随机选取新的训练图像数据,然后进入到第2步,获得网络前向传播得到输出值。
7、无限往复迭代,当求出网络的输出值与目标值(标签)之间的误差小于某个阈值,或者迭代次数超过某个阈值时,结束训练。
8、保存训练好的所有层的网络参数。
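上述第1~8步的训练控制流程,可用如下纯Python的玩具示例示意(以单参数线性模型代替真实网络,学习率、阈值等数值均为假设,真实场景中前向/反向传播由深度学习框架完成):

```python
import random

random.seed(0)
w = random.uniform(-1, 1)          # 1、权值随机初始化
target_w = 2.0                     # 玩具任务:拟合 y = 2x
lr, threshold, max_iters = 0.1, 1e-6, 10000

for it in range(max_iters):        # 7、往复迭代
    x = random.uniform(0.5, 1.5)   # 6、重新随机选取训练数据
    out = w * x                    # 2、前向传播得到输出值
    err = out - target_w * x       # 3、求输出值与目标值(标签)之间的误差
    grad = err * x                 # 4、误差反向传播,求得梯度
    w -= lr * grad                 # 5、根据梯度更新权重
    if err * err < threshold:      # 7、误差小于阈值时结束训练
        break

print(round(w, 2))                 # 8、保存(此处打印)训练好的参数,约为 2.0
```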
通过上述流程对网络训练结束后,即可得到同构训练网络中每个网络层的最终权重参数,该网络在执行行人重识别等图像处理任务时,需要去掉所有辅助训练分支后加载最终权重参数进行处理,也就是说:本实施例通过未添加辅助训练分支的初始行人重识别网络加载最终权重参数得到最终行人重识别网络,并利用该最终行人重识别网络执行行人重识别等图像处理任务;需要说明的是,由于初始行人重识别网络只包括主干网络,不包括辅助训练分支,而对同构训练网络进行训练得到的权重参数包括:主干网络的权重参数和辅助训练分支的权重参数,因此通过初始行人重识别网络加载最终权重参数时,只会加载主干网络的权重参数。
综上可以看出,本方案执行行人重识别操作之前,首先需要构建初始行人重识别网络的同构训练网络,由于该同构训练网络具有多个网络结构相同的同构分支,因此本方案在训练过程中,可挖掘同构分支之间的特征信息,使同构分支之间相互正则化,从而促使各个同构分支的准确率更高;并且,本方案通过基于知识协同的动态分类概率损失函数对该同构训练网络进行训练,可在训练过程中实现同构分支之间不同层次信息的交互,多个同构分支对同一数据提供各自不同的视角,通过不同视角之间的知识协同实现分支之间的相互正则化,从而提高网络的准确率。因此,本方案通过上述操作对同构训练网络进行训练得到更为准确的最终权重参数后,初始行人重识别网络便可加载该最终权重参数执行行人重识别操作,从而提升网络性能;并且,由于本方案只需要更改网络训练过程,而在网络应用过程中,并没有对网络进行复杂化处理,因此本方案可在不增加任何参数量和计算量的前提下最大化地挖掘网络潜能,提升网络性能。进一步地,本申请可以让最终行人重识别网络在执行行人重识别任务时,避免因参数量巨大而占用额外的存储空间,从而减少了存储空间的占用,因此可将该最终行人重识别网络部署在便携式设备中,通过便携式设备运行该最终行人重识别网络执行行人重识别任务;并且,该最终行人重识别网络在执行行人重识别任务时,并不会增加额外的计算量,因此,在本申请中,该最终行人重识别网络可以执行实时性较高的行人重识别任务,从而提升行人重识别任务的准确率及执行速度。
基于上述实施例,在本实施例中,利用目标损失函数对所述同构训练网络进行训练,确定所述同构训练网络中每个网络层的最终权重参数,包括:
在对同构训练网络的训练过程中,确定交叉熵损失函数的交叉熵损失值、确定三元组损失函数的三元组损失值、确定动态分类概率损失函数的单向知识协同损失值,并利用交叉熵损失值、三元组损失值、单向知识协同损失值的总损失值,确定同构训练网络中每个网络层的最终权重参数。
也就是说,本实施例主要基于交叉熵损失函数(cross-entropy)、三元组损失函数(Triplet Loss)和动态分类概率损失函数(Knowledge synergy for dynamic classified probability,KSP)对网络进行训练,在此,对上述各个损失函数进行具体说明。参见图4,本申请实施例提供了一种同构训练网络示意图;通过图4可以看出,该同构训练网络为非对称网络结构,在原主干网络的基础上引出两个辅助训练分支,目前共有三个同构分支:Branch1、Branch2、Branch3。该同构训练网络在训练结束获得最终权重参数后,会将辅助训练分支去掉,保留原主干网络,参见图5,为本申请实施例提供的一种最终行人重识别网络结构示意图,通过图5所示的网络加载训练获得的权重参数后,即可执行行人重识别等图像处理任务。
在本实施例中,首先求取每个分支的交叉熵损失函数(cross-entropy loss),公式如下:
$$L_{ce}^{b}=-\frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K}\mathbb{1}[y_{n}=k]\log f_{c}^{k}(x_{n},\theta_{b})\qquad(1)$$

$$L_{ce}=\sum_{b=1}^{B}\alpha_{b}L_{ce}^{b}\qquad(2)$$

其中,网络输入表示为:$D_t=\{(x_n,y_n)|n\in[1,N]\}$,N代表样本图像的总数,$x_n$代表第n张图像,$y_n$代表该张图像对应的分类标签。$f_c(x_n,\theta_b)$代表网络模型输出特征,下标c代表获取网络经过softmax层以后的分类层特征。如图4所示,计算交叉熵损失函数获取网络分类层的输出特征$f_c(\cdot)$,K代表网络输出的分类层特征向量的维度,B代表同构分支数目,$L_{ce}^{b}$代表第b个同构分支的交叉熵损失函数,$\theta_b$代表第b个同构分支的网络参数,$\alpha_b\in(0,1]$是超参数,代表各分支交叉熵损失的权重。以上公式即求取输入图像的每个同构分支的交叉熵损失并进行加权求和。
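上述"求取每个同构分支的交叉熵损失并按权重加权求和"的过程,可用如下纯Python代码示意(各分支的softmax输出概率与权重α_b均为假设数值):

```python
import math

def branch_cross_entropy(probs, label):
    """单个样本在单个同构分支上的交叉熵:probs 为 softmax 后的分类层输出。"""
    return -math.log(probs[label])

def total_cross_entropy(branch_probs, labels, alphas):
    """对各分支的交叉熵损失按 α_b 加权求和(每个分支内对样本取平均)。"""
    total = 0.0
    for probs_b, alpha_b in zip(branch_probs, alphas):
        ce_b = sum(branch_cross_entropy(p, y) for p, y in zip(probs_b, labels)) / len(labels)
        total += alpha_b * ce_b
    return total

# 3 个同构分支、2 个样本、K=3 类的玩具数据
branch_probs = [
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],   # Branch1 的分类层输出
    [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]],   # Branch2
    [[0.8, 0.1, 0.1], [0.1, 0.7, 0.2]],   # Branch3
]
labels = [0, 1]
loss = total_cross_entropy(branch_probs, labels, alphas=[1.0, 1.0, 1.0])
print(loss > 0)  # True
```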
进一步,本实施例确定三元组损失函数的三元组损失值的过程包括:
根据每个样本在每个同构分支的嵌入层输出特征,以及第一三元组损失函数,确定每个同构分支的第一损失值;
从每个同构分支中选取数值最小的第一损失值作为三元组损失值;
其中,所述第一三元组损失函数为:
$$L_{tri1}^{b}=\frac{1}{N}\sum_{a=1}^{N}\Big[m+\max_{y_p=y_a}d\big(f_e^{a},f_e^{p}\big)-\min_{y_q\neq y_a}d\big(f_e^{a},f_e^{q}\big)\Big]_{+}\qquad(3)$$

其中,$L_{tri1}^{b}$为第b个同构分支的第一损失值,N为训练样本的总数,a为锚点样本,$f_e^{a}$为锚点样本的嵌入层输出特征,y为样本的分类标签,p为与锚点样本属于同一分类标签的具有最大类内距离的样本,$f_e^{p}$为p样本的嵌入层输出特征,q为与锚点样本属于不同分类标签的具有最小类间距离的样本,$f_e^{q}$为q样本的嵌入层输出特征,m为第一参数,$d(\cdot,\cdot)$用于求取距离,$[\cdot]_{+}$表示与0取最大值,$\max d(\cdot,\cdot)$表示求取最大距离,$\min d(\cdot,\cdot)$表示求取最小距离,$y_a$表示锚点样本的分类标签,$y_p$表示p样本的分类标签,$y_q$表示q样本的分类标签。
具体来说,三元组损失函数通过对输入数据中的困难样本进行挖掘,计算三元组数据中的最大类内距离和最小类间距离,并在损失函数中对以上距离进行约束,使最大类内距离尽可能地小,最小类间距离尽可能地大,从而使样本在其映射后(深度学习网络计算后得到的特征)的特征空间中,不同类别样本之间的距离增大,同类别样本尽量聚集,提高了识别准确率。上述公式3即为本实施例提供的一种三元组损失函数,$d(\cdot,\cdot)$代表求取向量之间的距离,可以使用欧式距离、余弦距离等。公式3中的$f_e^{a}=f_e(x_a,\theta_b)$,a代表anchor,即锚点样本,$f_e(\cdot)$代表获取图像在网络Embedding层的特征。也即:在本实施例中,需要遍历每个batch中的所有样本,所遍历的样本称为锚点样本,求取锚点样本特征的最大类内距离和最小类间距离,代入如上公式3。$f_e^{p}$代表与锚点样本同类的图像特征,$f_e^{q}$代表与锚点样本不同类的图像特征。需要注意的是,本实施例中的$f_e^{a}$、$f_e^{p}$、$f_e^{q}$均抽取网络中Embedding层的特征。
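公式3所述的困难样本挖掘(求最大类内距离与最小类间距离后做hinge约束)可用如下纯Python代码示意(为简明起见,嵌入特征取一维,d(·,·)用绝对值距离代替,数值均为假设):

```python
def batch_hard_triplet_loss(feats, labels, m=0.3):
    """遍历 batch 中的每个锚点样本 a:
    取其最大类内距离与最小类间距离,[·]+ 即 max(·, 0)。"""
    n = len(feats)
    total = 0.0
    for a in range(n):
        # 最大类内距离:与锚点同标签的最远样本
        intra = max(abs(feats[a] - feats[p])
                    for p in range(n) if p != a and labels[p] == labels[a])
        # 最小类间距离:与锚点不同标签的最近样本
        inter = min(abs(feats[a] - feats[q])
                    for q in range(n) if labels[q] != labels[a])
        total += max(m + intra - inter, 0.0)
    return total / n

# 两类样本的一维嵌入特征:同类聚集、异类远离时损失应为 0
feats = [0.0, 0.1, 5.0, 5.2]
labels = [0, 0, 1, 1]
print(batch_hard_triplet_loss(feats, labels))  # 0.0
```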
进一步,上述公式3所述的第一三元组损失函数虽然可以使不同类别样本之间的距离增大,同类别样本尽量聚集,提高了识别准确率,但是,该第一三元组损失函数仅仅考虑样本的类内差和类间差之间的差值,忽略了类内差的绝对距离大小(即:绝对值),如果能进一步限制类内差的绝对值大小,则可进一步使同类别样本尽量聚集,从而进一步提高识别准确率。因此在本实施例中,确定每个同构分支的第一损失值之后,还包括如下步骤:
利用每个同构分支的第一损失值及第二三元组损失函数,确定每个同构分支的第二损失值;其中,该第二三元组损失函数为:
$$L_{tri2}^{b}=L_{tri1}^{b}+\beta\,\frac{1}{N}\sum_{a=1}^{N}\Big[\max_{y_p=y_a}d\big(f_e^{a},f_e^{p}\big)-\min_{y_q\neq y_a}d\big(f_e^{a},f_e^{q}\big)\Big]\qquad(4)$$

其中,$L_{tri1}^{b}$为第b个同构分支的第一损失值,$L_{tri2}^{b}$为第b个同构分支的第二损失值,β为第二参数;通过以上约束可以使最大类内距离$\max d(f_e^{a},f_e^{p})$朝着更小的趋势发展,最小类间距离$\min d(f_e^{a},f_e^{q})$朝着更大的趋势发展,即:限制类内差的绝对距离大小。相应的,计算出第二损失值后,即可根据公式2计算的交叉熵损失函数及公式4计算的三元组损失函数得到总损失函数如公式5,其中,公式中的γ为超参数,可以训练或预先设定。

$$L=L_{ce}+\gamma\,\min_{b}L_{tri2}^{b}\qquad(5)$$
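按上文"从每个同构分支中选取数值最小的损失值作为三元组损失值",再与交叉熵损失按超参数γ组合的总损失计算,可用如下示意代码表示(各损失值与γ均为假设数值):

```python
def select_triplet_loss(branch_losses):
    """从各同构分支的三元组损失中选取数值最小者作为最终三元组损失值。"""
    return min(branch_losses)

def total_loss(ce_loss, branch_triplet_losses, gamma=1.0):
    """总损失 = 交叉熵损失 + γ · 三元组损失(按分支取最小)。"""
    return ce_loss + gamma * select_triplet_loss(branch_triplet_losses)

# 假设交叉熵总损失为 0.8,三个同构分支的三元组损失分别为 0.5/0.3/0.6
print(round(total_loss(0.8, [0.5, 0.3, 0.6], gamma=1.0), 2))  # 1.1
```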
基于上述内容,本实施例提供一种利用交叉熵损失函数及三元组损失函数计算损失值的具体流程:
1)对每个batch的所有样本进行遍历,如上所述,假设每个batch包含N个样本,则遍历N次。
2)求取每个样本在每个batch中的最大类内距离和最小类间距离,其中:每个样本在一个batch中总有一个最大类内距离样本和一个最小类间距离样本。
3)通过公式3、公式4计算三元组损失函数的损失值$L_{tri1}^{b}$、$L_{tri2}^{b}$。
4)通过公式2计算交叉熵损失函数的损失值$L_{ce}^{b}$。
5)遍历每个同构分支,按如上步骤求取每个分支的三元组损失值和交叉熵损失值。
6)通过公式5求取总损失值。
进一步,得益于同构分支的天然相似性,不同的同构分支可以作为彼此的软标签进行模仿学习,也就是说:同构分支之间可以通过相互模仿学习使其最终预测结果的概率分布相似,因此在本实施例中,可通过基于知识协同的动态分类概率损失函数来实现分支间的相互正则化。具体来说,本实施例确定动态分类概率损失函数的单向知识协同损失值的过程包括:
利用每个样本在每个同构分支的分类层输出特征及动态分类概率损失函数计算单向知识协同损失值;
其中,动态分类概率损失函数为:
$$L_{ksp}=\frac{1}{N}\sum_{n=1}^{N}\sum_{(u,v)\in\Omega}\sum_{k=1}^{K}f_{c}^{k}(x_{n},\theta_{u})\log\frac{f_{c}^{k}(x_{n},\theta_{u})}{f_{c}^{k}(x_{n},\theta_{v})}\qquad(6)$$

其中,$L_{ksp}$为单向知识协同损失值,N为训练样本的总数,u表示第u个同构分支,v表示第v个同构分支,$\Omega$表示任意两个同构分支构成的可选空间,K为分类层输出特征的维度,$x_n$为第n个样本,$f_{c}^{k}(x_{n},\theta_{u})$为$x_n$在第u个同构分支中的第k个维度的分类层输出特征,$f_{c}^{k}(x_{n},\theta_{v})$为$x_n$在第v个同构分支中的第k个维度的分类层输出特征,$\theta_u$表示第u个同构分支的网络参数,$\theta_v$表示第v个同构分支的网络参数。
具体来说,本方案为了进行知识协同,实现分支之间的信息交互,在本实施例中,在两两分支之间添加了基于知识协同的loss函数,实现同构分支之间不同层次信息的交互。多个同构分支对同一数据提供各自不同的视角,通过不同视角之间的知识协同实现分支之间的相互正则化,从而促使网络借助群体智慧向识别率更为准确的方向发展。为了方便说明,在此将公式6分解为如下两个公式:
$$L_{ksp}^{(u,v)}(x_{n})=\sum_{k=1}^{K}f_{c}^{k}(x_{n},\theta_{u})\log\frac{f_{c}^{k}(x_{n},\theta_{u})}{f_{c}^{k}(x_{n},\theta_{v})}\qquad(7)$$

$$L_{ksp}=\frac{1}{N}\sum_{n=1}^{N}\sum_{(u,v)\in\Omega}L_{ksp}^{(u,v)}(x_{n})\qquad(8)$$
如公式7和公式8所示,知识协同损失函数的具体执行步骤可以归纳如下:
1)对每个batch的所有样本进行遍历,如上所述,假设每个batch包含N个样本,则遍历N次。
2)将样本依次经过网络,获取样本在网络各个同构分支的分类层输出结果,例如:对于样本$x_n$,假设网络包含3个同构分支,则共有3个同构分支分类层输出结果:$f_c(x_n,\theta_1)$、$f_c(x_n,\theta_2)$、$f_c(x_n,\theta_3)$。
3)对于所有分支输出结果,两两进行遍历,例如:本申请举例共有3个分支1、2、3,任意两个同构分支构成的可选空间$\Omega$共有6种组合:(1,2)(1,3)(2,1)(2,3)(3,1)(3,2)。可以看出,本方案是一种单向的知识协同方式,也即:同构分支u向同构分支v学习时,同构分支v并不向同构分支u学习,通过公式7即可求取每种组合的单向知识协同损失值。
4)根据公式8对所有样本的所有组合的单向知识协同损失值求和后求平均,得到最终的单向知识协同损失值$L_{ksp}$。
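上述步骤1)~4)可用如下纯Python代码示意。其中,单向知识协同损失按KL散度形式实现,这是依据"使预测结果的概率分布相似"这一描述所作的一种假设性实现;分支输出数值均为假设:

```python
import math
from itertools import permutations

def one_way_ksp(p_u, p_v):
    """分支 u 向分支 v 的单向知识协同损失:
    分类概率分布间的 KL 散度 Σ_k p_u[k]·log(p_u[k]/p_v[k])。"""
    return sum(pu * math.log(pu / pv) for pu, pv in zip(p_u, p_v))

def ksp_loss(branch_probs):
    """对所有有序分支对 (u,v) 与所有样本求和后,按样本数取平均。"""
    n_samples = len(branch_probs[0])
    total = 0.0
    for u, v in permutations(range(len(branch_probs)), 2):  # (1,2)(1,3)(2,1)...
        for n in range(n_samples):
            total += one_way_ksp(branch_probs[u][n], branch_probs[v][n])
    return total / n_samples

branch_probs = [
    [[0.7, 0.2, 0.1]],   # Branch1 对样本 x_1 的分类层输出
    [[0.6, 0.3, 0.1]],   # Branch2
    [[0.8, 0.1, 0.1]],   # Branch3
]
print(ksp_loss(branch_probs) > 0)  # True:三个分布互不相同,KL 散度为正
```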
进一步的,对于所有分支输出结果,虽然分支之间相互学习可以增加系统的鲁棒性和泛化能力,但不可避免会引入分类噪声。例如:在两两分支相互学习时,A分支向B分支学习,而B分支输出的软标签难免存在错误,这种情况下便不可避免地引入噪声信息。因此,在本实施例中,为了使系统能够更稳定地收敛,构建了一种新型的虚拟标签学习技术。
具体来说,在本实施例中,首先对所有同构分支的分类层输出特征求和,得到总分类层输出特征,然后计算总分类层输出特征的平均值,作为虚拟分支的虚拟标签,也即:虚拟标签$f_v$的计算方法为:

$$f_{v}(x_{n})=\frac{1}{B}\sum_{b=1}^{B}f_{c}(x_{n},\theta_{b})\qquad(9)$$

其中,B为同构分支的总数,b表示第b个同构分支,$x_n$为第n个样本,$\theta_b$表示第b个同构分支的网络参数,$f_c(x_n,\theta_b)$为$x_n$在第b个同构分支的分类层输出特征。
计算出虚拟标签$f_v$后,需要将所有同构分支的分类层输出特征与虚拟标签$f_v$计算基于虚拟分支的知识协同损失函数。也就是说,本申请中的目标损失函数还包括:基于虚拟分支的知识协同损失函数,该基于虚拟分支的知识协同损失函数具体为:

$$L_{v}=\frac{1}{N}\sum_{n=1}^{N}\sum_{b=1}^{B}\sum_{k=1}^{K}f_{v}^{k}(x_{n})\log\frac{f_{v}^{k}(x_{n})}{f_{c}^{k}(x_{n},\theta_{b})}\qquad(10)$$

其中,$L_v$为虚拟分支知识协同损失值,$f_{v}^{k}(x_{n})$为虚拟标签的第k个维度。
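虚拟标签的求取(公式9:对各同构分支的分类层输出求平均)可用如下示意代码表示(分支输出数值均为假设):

```python
def virtual_label(branch_probs_n):
    """对单个样本,将所有同构分支的分类层输出逐维求和后取平均,
    得到虚拟分支的虚拟标签 f_v。"""
    b = len(branch_probs_n)       # 同构分支总数 B
    k = len(branch_probs_n[0])    # 分类层特征维度 K
    return [sum(p[i] for p in branch_probs_n) / b for i in range(k)]

# 3 个同构分支对同一样本的分类层输出
probs = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.8, 0.1, 0.1]]
fv = virtual_label(probs)
print([round(x, 2) for x in fv])  # [0.7, 0.2, 0.1]
```

随后即可用该 $f_v$ 作为各分支共同的模仿目标,避免两两学习时直接引入某个分支的错误软标签。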
综上可见,在本实施例中,通过在目标损失函数中添加基于虚拟分支的知识协同损失函数确定虚拟分支知识协同损失值,并将虚拟分支知识协同损失值添加至总损失值中,可以在分支之间相互学习时,避免引入分类噪声,从而提高行人重识别网络在执行行人重识别任务时的准确度。
综上所述,基于上文所述的利用交叉熵损失函数及三元组损失函数计算损失值的过程,结合动态分类概率损失函数的单向知识协同损失值,可以得出交叉熵损失值、三元组损失值、单向知识协同损失值的总损失值为:
$L_{sum}=L+L_{ksp}$    (11)
进一步,若目标损失函数还包括基于虚拟分支的知识协同损失函数,则在本实施例中,该总损失还要包括虚拟分支知识协同损失值,也即:
$L_{sum}=L+L_{ksp}+L_{v}$   (12)
综上可见,本申请实施例为了能够提高网络在训练、应用的精度,并且不增加网络在应用时的参数量和计算量,提供了一种知识协同辅助训练方法,通过对网络层进行重构、添加知识协同损失函数等方式进行协同训练,以在不增加参数量和计算量的前提下提升网络的性能。
在此以执行行人重识别任务为例,提供一完整实施例对本方案进行清楚说明:
一、网络训练过程:
1、首先确定初始行人重识别网络,并建立该初始行人重识别网络对应的同构训练网络,参见图6a,本申请实施例提供的一种具体的同构训练网络结构示意图。其中,图6a所示的是一个典型的MobileNet v2的网络结构,MobileNet的Bottleneck网络结构是由多层深度可分离卷积网络堆叠而成的残差结构,是一种固定结构,这里不赘述。Conv代表卷积层,每个同构分支的箭头1表示Global pool层,每个同构分支的箭头2代表Conv 1×1。图中结构与MobileNet V2结构完全一致。参见图6a,在本实施例中,在MobileNet v2的网络结构基础上,从第3个Bottleneck输出位置引出同构分支,从第5个Bottleneck输出位置引出同构分支。
2、本实施例在输出层位置建立交叉熵损失、三元组损失、动态分类概率损失,并进行训练,如图6a中的双头箭头代表两两分支知识协同关系。
3、通过训练使网络收敛,存储网络训练好的权重参数。
二、网络应用过程:
1、在同构训练网络中去掉辅助训练分支,只保留原主干分支,得到初始行人重识别网络,该初始行人重识别网络通过加载对应的权重参数,得到训练好的最终行人重识别网络,参见图6b,本申请实施例提供的一种具体的最终行人重识别网络结构示意图。
2、参见图6c,本申请实施例提供的一种行人重识别任务执行流程示意图,通过图6c可以看出,本实施例将最终行人重识别网络应用在行人重识别任务中时,将输入图像1、输入图像2、输入图像3输入到最终行人重识别网络中,获取其网络中embedding层特征,图像1、2、3构成行人重识别任务的查询数据集。将待查询图像输入到网络中,获取待查询图像的embedding层特征。
3、将待查询图像的embedding层特征与查询数据集中所有特征(输入图像1、输入图像2、输入图像3的embedding层特征)进行比对,比对方法为:求待查询图像的embedding层特征与查询数据集中所有特征的距离,即向量求距离,距离最小的查询数据样本与待查询图像是同一个人。
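上述比对流程可用如下纯Python代码示意(embedding特征取4维,数值均为假设,距离采用欧式距离):

```python
import math

def euclidean(u, v):
    """向量间的欧式距离。"""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def query(gallery, probe):
    """将待查询图像特征 probe 与查询数据集 gallery 中所有特征求距离,
    返回距离最小的样本编号(视为同一个行人)。"""
    dists = [euclidean(feat, probe) for feat in gallery]
    return min(range(len(gallery)), key=lambda i: dists[i])

# 玩具示例:3 张查询数据集图像的 embedding 特征
gallery = [
    [0.9, 0.1, 0.0, 0.2],   # 输入图像1
    [0.1, 0.8, 0.3, 0.0],   # 输入图像2
    [0.2, 0.2, 0.9, 0.7],   # 输入图像3
]
probe = [0.85, 0.15, 0.05, 0.25]   # 待查询图像的 embedding 特征
print(query(gallery, probe))  # 0,即与输入图像1是同一个行人
```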
综上可见,在本方案中,提出了基于同构分支的辅助训练方法来建立对输入数据的多重视图,并提出一种基于辅助分支的三元组损失函数,对每个辅助分支的头部网络应用该损失函数进行训练;进一步,为了进行知识协同、实现同构分支之间的信息交互,本申请在两两分支之间添加了基于知识协同的loss函数,实现分支之间不同层次信息的交互:多个分支对同一数据提供各自不同的视角,通过不同视角之间的知识协同实现分支之间的相互正则化,提高网络的准确率。
下面对本申请实施例提供的行人重识别装置、设备及介质进行介绍,下文描述的行人重识别装置、设备及介质与上文描述的行人重识别方法可以相互参照。
参见图7,本申请实施例提供的一种行人重识别装置结构示意图,包括:
网络确定模块11,用于确定与初始行人重识别网络对应的同构训练网络;其中,所述同构训练网络具有多个网络结构相同的同构分支;
参数确定模块12,用于利用目标损失函数对所述同构训练网络进行训练,确定所述同构训练网络中每个网络层的最终权重参数;其中,所述目标损失函数包括基于知识协同的动态分类概率损失函数,所述动态分类概率损失函数用于:利用每个训练样本在每两个同构分支的分类层输出特征,确定同构分支间的单向知识协同损失值;
参数加载模块13,用于通过所述初始行人重识别网络加载所述最终权重参数,得到最终行人重识别网络;
行人重识别模块14,用于利用最终行人重识别网络执行行人重识别任务。
其中,网络确定模块11具体用于:在所述初始行人重识别网络的中间层引出辅助训练分支,生成具有非对称网络结构的同构训练网络;或者,在所述初始行人重识别网络的中间层引出辅助训练分支,生成具有对称网络结构的同构训练网络。
其中,参数确定模块12包括:
损失值确定单元,用于在对所述同构训练网络的训练过程中,确定交叉熵损失函数的交叉熵损失值、确定三元组损失函数的三元组损失值、确定所述动态分类概率损失函数的单向知识协同损失值;
权重确定单元,用于利用所述交叉熵损失值、所述三元组损失值、所述单向知识协同损失值的总损失值,确定所述同构训练网络中每个网络层的最终权重参数。
其中,所述损失值确定单元包括:
第一确定子单元,用于根据每个样本在每个同构分支的嵌入层输出特征,以及第一三元组损失函数,确定每个同构分支的第一损失值;
选取子单元,用于从每个同构分支中选取数值最小的第一损失值作为所述三元组损失值;
其中,所述第一三元组损失函数为:
$$L_{tri1}^{b}=\frac{1}{N}\sum_{a=1}^{N}\Big[m+\max_{y_p=y_a}d\big(f_e^{a},f_e^{p}\big)-\min_{y_q\neq y_a}d\big(f_e^{a},f_e^{q}\big)\Big]_{+}$$

其中,$L_{tri1}^{b}$为第b个同构分支的第一损失值,N为训练样本的总数,a为锚点样本,$f_e^{a}$为锚点样本的嵌入层输出特征,y为样本的分类标签,p为与锚点样本属于同一分类标签的具有最大类内距离的样本,$f_e^{p}$为p样本的嵌入层输出特征,q为与锚点样本属于不同分类标签的具有最小类间距离的样本,$f_e^{q}$为q样本的嵌入层输出特征,m为第一参数,$d(\cdot,\cdot)$用于求取距离,$[\cdot]_{+}$表示与0取最大值,$\max d(\cdot,\cdot)$表示求取最大距离,$\min d(\cdot,\cdot)$表示求取最小距离,$y_a$表示锚点样本的分类标签,$y_p$表示p样本的分类标签,$y_q$表示q样本的分类标签。
其中,所述损失值确定单元还包括:
第二确定子单元,用于利用每个同构分支的第一损失值及第二三元组损失函数,确定每个同构分支的第二损失值;
所述第二三元组损失函数为:
$$L_{tri2}^{b}=L_{tri1}^{b}+\beta\,\frac{1}{N}\sum_{a=1}^{N}\Big[\max_{y_p=y_a}d\big(f_e^{a},f_e^{p}\big)-\min_{y_q\neq y_a}d\big(f_e^{a},f_e^{q}\big)\Big]$$

其中,$L_{tri2}^{b}$为第b个同构分支的第二损失值,β为第二参数;
相应的,所述选取子单元具体用于:从每个同构分支中选取数值最小的第二损失值作为所述三元组损失值。
其中,所述损失值确定单元包括:
计算子单元,用于利用每个样本在每个同构分支的分类层输出特征及所述动态分类概率损失函数计算单向知识协同损失值;
所述动态分类概率损失函数为:
$$L_{ksp}=\frac{1}{N}\sum_{n=1}^{N}\sum_{(u,v)\in\Omega}\sum_{k=1}^{K}f_{c}^{k}(x_{n},\theta_{u})\log\frac{f_{c}^{k}(x_{n},\theta_{u})}{f_{c}^{k}(x_{n},\theta_{v})}$$

其中,$L_{ksp}$为单向知识协同损失值,N为训练样本的总数,u表示第u个同构分支,v表示第v个同构分支,$\Omega$表示任意两个同构分支构成的可选空间,K为分类层输出特征的维度,$x_n$为第n个样本,$f_{c}^{k}(x_{n},\theta_{u})$为$x_n$在第u个同构分支中的第k个维度的分类层输出特征,$f_{c}^{k}(x_{n},\theta_{v})$为$x_n$在第v个同构分支中的第k个维度的分类层输出特征,$\theta_u$表示第u个同构分支的网络参数,$\theta_v$表示第v个同构分支的网络参数。
参见图8,本申请实施例提供的一种电子设备结构示意图,包括:
存储器21,用于存储计算机程序;
处理器22,用于执行所述计算机程序时实现上述任意方法实施例所述的行人重识别方法的步骤。
在本实施例中,设备可以是PC(Personal Computer,个人电脑),也可以是智能手机、平板电脑、掌上电脑、便携计算机等终端设备。
该设备可以包括存储器21、处理器22和总线23。
其中,存储器21至少包括一种类型的可读存储介质,所述可读存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等)、磁性存储器、磁盘、光盘等。存储器21在一些实施例中可以是设备的内部存储单元,例如该设备的硬盘。存储器21在另一些实施例中也可以是设备的外部存储设备,例如设备上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,存储器21还可以既包括设备的内部存储单元也包括外部存储设备。存储器21不仅可以用于存储安装于设备的应用软件及各类数据,例如执行行人重识别方法的程序代码等,还可用于暂时地存储已经输出或者将要输出的数据。
处理器22在一些实施例中可以是一中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器或其他数据处理芯片,用于运行存储器21中存储的程序代码或处理数据,例如执行行人重识别方法的程序代码等。
该总线23可以是外设部件互连标准(peripheral component interconnect,简称PCI)总线或扩展工业标准结构(extended industry standard architecture,简称EISA)总线等。该总线可以分为地址总线、数据总线、控制总线等。为便于表示,图8中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
进一步地,设备还可以包括网络接口24,网络接口24可选的可以包括有线接口和/或无线接口(如WI-FI接口、蓝牙接口等),通常用于在该设备与其他电子设备之间建立通信连接。
可选地,该设备还可以包括用户接口25,用户接口25可以包括显示器(Display)、输入单元比如键盘(Keyboard),可选的用户接口25还可以包括标准的有线接口、无线接口。可选地,在一些实施例中,显示器可以是LED显示器、液晶显示器、触控式液晶显示器以及OLED(Organic Light-Emitting Diode,有机发光二极管)触摸器等。其中,显示器也可以适当的称为显示屏或显示单元,用于显示在设备中处理的信息以及用于显示可视化的用户界面。
图8示出了具有组件21-25的设备,本领域技术人员可以理解的是,图8示出的结构并不构成对设备的限定,可以包括比图示更少或者更多的部件,或者组合某些部件,或者不同的部件布置。
本申请实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现上述任意方法实施例所述的行人重识别方法的步骤。
其中,该存储介质可以包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
本说明书中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似部分互相参见即可。
对所公开的实施例的上述说明,使本领域专业技术人员能够实现或使用本申请。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本申请的精神或范围的情况下,在其它实施例中实现。因此,本申请将不会被限制于本文所示的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。

Claims (10)

  1. 一种行人重识别方法,其特征在于,包括:
    确定与初始行人重识别网络对应的同构训练网络;其中,所述同构训练网络具有多个网络结构相同的同构分支;
    利用目标损失函数对所述同构训练网络进行训练,确定所述同构训练网络中每个网络层的最终权重参数;其中,所述目标损失函数包括基于知识协同的动态分类概率损失函数,所述动态分类概率损失函数用于:利用每个训练样本在每两个同构分支的分类层输出特征,确定同构分支间的单向知识协同损失值;
    通过所述初始行人重识别网络加载所述最终权重参数,得到最终行人重识别网络,以利用所述最终行人重识别网络执行行人重识别任务;
    其中,所述利用目标损失函数对所述同构训练网络进行训练,确定所述同构训练网络中每个网络层的最终权重参数,包括:
    在对所述同构训练网络的训练过程中,确定交叉熵损失函数的交叉熵损失值、确定三元组损失函数的三元组损失值、确定所述动态分类概率损失函数的单向知识协同损失值;
    利用所述交叉熵损失值、所述三元组损失值、所述单向知识协同损失值的总损失值,确定所述同构训练网络中每个网络层的最终权重参数;
    其中,确定所述动态分类概率损失函数的单向知识协同损失值的过程包括:
    利用每个样本在每个同构分支的分类层输出特征及所述动态分类概率损失函数计算单向知识协同损失值;所述动态分类概率损失函数为:
    $$L_{ksp}=\frac{1}{N}\sum_{n=1}^{N}\sum_{(u,v)\in\Omega}\sum_{k=1}^{K}f_{c}^{k}(x_{n},\theta_{u})\log\frac{f_{c}^{k}(x_{n},\theta_{u})}{f_{c}^{k}(x_{n},\theta_{v})}$$
    其中,$L_{ksp}$为单向知识协同损失值,N为训练样本的总数,u表示第u个同构分支,v表示第v个同构分支,$\Omega$表示任意两个同构分支构成的可选空间,K为分类层输出特征的维度,$x_n$为第n个样本,$f_{c}^{k}(x_{n},\theta_{u})$为$x_n$在第u个同构分支中的第k个维度的分类层输出特征,$f_{c}^{k}(x_{n},\theta_{v})$为$x_n$在第v个同构分支中的第k个维度的分类层输出特征,$\theta_u$表示第u个同构分支的网络参数,$\theta_v$表示第v个同构分支的网络参数。
  2. 根据权利要求1所述的行人重识别方法,其特征在于,所述确定与初始行人重识别网络对应的同构训练网络,包括:
    在所述初始行人重识别网络的中间层引出辅助训练分支,生成具有非对称网络结构的同构训练网络。
  3. 根据权利要求1所述的行人重识别方法,其特征在于,所述确定与初始行人重识别网络对应的同构训练网络,包括:
    在所述初始行人重识别网络的中间层引出辅助训练分支,生成具有对称网络结构的同构训练网络。
  4. 根据权利要求1所述的行人重识别方法,其特征在于,所述确定三元组损失函数的三元组损失值的过程包括:
    根据每个样本在每个同构分支的嵌入层输出特征,以及第一三元组损失函数,确定每个同构分支的第一损失值;
    从每个同构分支中选取数值最小的第一损失值作为所述三元组损失值;
    其中,所述第一三元组损失函数为:
    $$L_{tri1}^{b}=\frac{1}{N}\sum_{a=1}^{N}\Big[m+\max_{y_p=y_a}d\big(f_e^{a},f_e^{p}\big)-\min_{y_q\neq y_a}d\big(f_e^{a},f_e^{q}\big)\Big]_{+}$$
    其中,$L_{tri1}^{b}$为第b个同构分支的第一损失值,N为训练样本的总数,a为锚点样本,$f_e^{a}$为锚点样本的嵌入层输出特征,y为样本的分类标签,p为与锚点样本属于同一分类标签的具有最大类内距离的样本,$f_e^{p}$为p样本的嵌入层输出特征,q为与锚点样本属于不同分类标签的具有最小类间距离的样本,$f_e^{q}$为q样本的嵌入层输出特征,m为第一参数,$d(\cdot,\cdot)$用于求取距离,$[\cdot]_{+}$表示与0取最大值,$\max d(\cdot,\cdot)$表示求取最大距离,$\min d(\cdot,\cdot)$表示求取最小距离,$y_a$表示锚点样本的分类标签,$y_p$表示p样本的分类标签,$y_q$表示q样本的分类标签。
  5. 根据权利要求4所述的行人重识别方法,其特征在于,所述确定每个同构分支的第一损失值之后,还包括:
    利用每个同构分支的第一损失值及第二三元组损失函数,确定每个同构分支的第二损失值;
    所述第二三元组损失函数为:
    $$L_{tri2}^{b}=L_{tri1}^{b}+\beta\,\frac{1}{N}\sum_{a=1}^{N}\Big[\max_{y_p=y_a}d\big(f_e^{a},f_e^{p}\big)-\min_{y_q\neq y_a}d\big(f_e^{a},f_e^{q}\big)\Big]$$
    其中,$L_{tri2}^{b}$为第b个同构分支的第二损失值,β为第二参数;
    相应的,所述从每个同构分支中选取数值最小的第一损失值作为所述三元组损失值,包括:
    从每个同构分支中选取数值最小的第二损失值作为所述三元组损失值。
  6. 一种行人重识别装置,其特征在于,包括:
    网络确定模块,用于确定与初始行人重识别网络对应的同构训练网络;其中,所述同构训练网络具有多个网络结构相同的同构分支;
    参数确定模块,用于利用目标损失函数对所述同构训练网络进行训练,确定所述同构训练网络中每个网络层的最终权重参数;其中,所述目标损失函数包括基于知识协同的动态分类概率损失函数,所述动态分类概率损失函数用于:利用每个训练样本在每两个同构分支的分类层输出特征,确定同构分支间的单向知识协同损失值;
    参数加载模块,用于通过所述初始行人重识别网络加载所述最终权重参数,得到最终行人重识别网络;
    行人重识别模块,用于利用所述最终行人重识别网络执行行人重识别任务;
    其中,所述参数确定模块包括:
    损失值确定单元,用于在对所述同构训练网络的训练过程中,确定交叉熵损失函数的交叉熵损失值、确定三元组损失函数的三元组损失值、确定所述动态分类概率损失函数的单向知识协同损失值;
    权重确定单元,用于利用所述交叉熵损失值、所述三元组损失值、所述单向知识协同损失值的总损失值,确定所述同构训练网络中每个网络层的最终权重参数;
    其中,所述损失值确定单元包括:
    计算子单元,用于利用每个样本在每个同构分支的分类层输出特征及所述动态分类概率损失函数计算单向知识协同损失值;
    所述动态分类概率损失函数为:
    $$L_{ksp}=\frac{1}{N}\sum_{n=1}^{N}\sum_{(u,v)\in\Omega}\sum_{k=1}^{K}f_{c}^{k}(x_{n},\theta_{u})\log\frac{f_{c}^{k}(x_{n},\theta_{u})}{f_{c}^{k}(x_{n},\theta_{v})}$$
    其中,$L_{ksp}$为单向知识协同损失值,N为训练样本的总数,u表示第u个同构分支,v表示第v个同构分支,$\Omega$表示任意两个同构分支构成的可选空间,K为分类层输出特征的维度,$x_n$为第n个样本,$f_{c}^{k}(x_{n},\theta_{u})$为$x_n$在第u个同构分支中的第k个维度的分类层输出特征,$f_{c}^{k}(x_{n},\theta_{v})$为$x_n$在第v个同构分支中的第k个维度的分类层输出特征,$\theta_u$表示第u个同构分支的网络参数,$\theta_v$表示第v个同构分支的网络参数。
  7. 根据权利要求6所述的行人重识别装置,其特征在于,所述损失值确定单元包括:
    第一确定子单元,用于根据每个样本在每个同构分支的嵌入层输出特征,以及第一三元组损失函数,确定每个同构分支的第一损失值;
    选取子单元,用于从每个同构分支中选取数值最小的第一损失值作为所述三元组损失值;
    其中,所述第一三元组损失函数为:
    $$L_{tri1}^{b}=\frac{1}{N}\sum_{a=1}^{N}\Big[m+\max_{y_p=y_a}d\big(f_e^{a},f_e^{p}\big)-\min_{y_q\neq y_a}d\big(f_e^{a},f_e^{q}\big)\Big]_{+}$$
    其中,$L_{tri1}^{b}$为第b个同构分支的第一损失值,N为训练样本的总数,a为锚点样本,$f_e^{a}$为锚点样本的嵌入层输出特征,y为样本的分类标签,p为与锚点样本属于同一分类标签的具有最大类内距离的样本,$f_e^{p}$为p样本的嵌入层输出特征,q为与锚点样本属于不同分类标签的具有最小类间距离的样本,$f_e^{q}$为q样本的嵌入层输出特征,m为第一参数,$d(\cdot,\cdot)$用于求取距离,$[\cdot]_{+}$表示与0取最大值,$\max d(\cdot,\cdot)$表示求取最大距离,$\min d(\cdot,\cdot)$表示求取最小距离,$y_a$表示锚点样本的分类标签,$y_p$表示p样本的分类标签,$y_q$表示q样本的分类标签。
  8. 根据权利要求7所述的行人重识别装置,其特征在于,所述损失值确定单元还包括:
    第二确定子单元,用于利用每个同构分支的第一损失值及第二三元组损失函数,确定每个同构分支的第二损失值;
    所述第二三元组损失函数为:
    $$L_{tri2}^{b}=L_{tri1}^{b}+\beta\,\frac{1}{N}\sum_{a=1}^{N}\Big[\max_{y_p=y_a}d\big(f_e^{a},f_e^{p}\big)-\min_{y_q\neq y_a}d\big(f_e^{a},f_e^{q}\big)\Big]$$
    其中,$L_{tri2}^{b}$为第b个同构分支的第二损失值,β为第二参数;
    相应的,所述选取子单元具体用于:从每个同构分支中选取数值最小的第二损失值作为所述三元组损失值。
  9. 一种电子设备,其特征在于,包括:
    存储器,用于存储计算机程序;
    处理器,用于执行所述计算机程序时实现如权利要求1至5任一项所述的行人重识别方法的步骤。
  10. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1至5任一项所述的行人重识别方法的步骤。
PCT/CN2021/121901 2021-06-29 2021-09-29 一种行人重识别方法、装置、设备及可读存储介质 WO2023272995A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/265,242 US11830275B1 (en) 2021-06-29 2021-09-29 Person re-identification method and apparatus, device, and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110727876.6 2021-06-29
CN202110727876.6A CN113191338B (zh) 2021-06-29 2021-06-29 一种行人重识别方法、装置、设备及可读存储介质

Publications (1)

Publication Number Publication Date
WO2023272995A1 true WO2023272995A1 (zh) 2023-01-05

Family

ID=76976703

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/121901 WO2023272995A1 (zh) 2021-06-29 2021-09-29 一种行人重识别方法、装置、设备及可读存储介质

Country Status (3)

Country Link
US (1) US11830275B1 (zh)
CN (1) CN113191338B (zh)
WO (1) WO2023272995A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612500A (zh) * 2023-07-20 2023-08-18 深圳须弥云图空间科技有限公司 行人重识别模型训练方法及装置
CN116665019A (zh) * 2023-07-31 2023-08-29 山东交通学院 一种用于车辆重识别的多轴交互多维度注意力网络

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN113191338B (zh) 2021-06-29 2021-09-17 苏州浪潮智能科技有限公司 一种行人重识别方法、装置、设备及可读存储介质
CN113191461B (zh) * 2021-06-29 2021-09-17 苏州浪潮智能科技有限公司 一种图片识别方法、装置、设备及可读存储介质
CN114299442A (zh) * 2021-11-15 2022-04-08 苏州浪潮智能科技有限公司 一种行人重识别方法、系统、电子设备及存储介质

Citations (6)

Publication number Priority date Publication date Assignee Title
CN109784182A (zh) * 2018-12-17 2019-05-21 北京飞搜科技有限公司 行人重识别方法和装置
CN110414368A (zh) * 2019-07-04 2019-11-05 华中科技大学 一种基于知识蒸馏的无监督行人重识别方法
CN111368815A (zh) * 2020-05-28 2020-07-03 之江实验室 一种基于多部件自注意力机制的行人重识别方法
CN111488833A (zh) * 2020-04-08 2020-08-04 苏州浪潮智能科技有限公司 一种行人重识别方法、装置及电子设备和存储介质
CN112633417A (zh) * 2021-01-18 2021-04-09 天津大学 一种用于行人重识别的将神经网络模块化的行人深度特征融合方法
CN113191338A (zh) * 2021-06-29 2021-07-30 苏州浪潮智能科技有限公司 一种行人重识别方法、装置、设备及可读存储介质

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
GB2564668B (en) * 2017-07-18 2022-04-13 Vision Semantics Ltd Target re-identification
KR20190068000A (ko) * 2017-12-08 2019-06-18 이의령 다중 영상 환경에서의 동일인 재식별 시스템
CN108764308B (zh) * 2018-05-16 2021-09-14 中国人民解放军陆军工程大学 一种基于卷积循环网络的行人重识别方法
US11537817B2 (en) * 2018-10-18 2022-12-27 Deepnorth Inc. Semi-supervised person re-identification using multi-view clustering
US11138469B2 (en) * 2019-01-15 2021-10-05 Naver Corporation Training and using a convolutional neural network for person re-identification
CN110008842A (zh) * 2019-03-09 2019-07-12 同济大学 一种基于深度多损失融合模型的行人重识别方法
CN110826424B (zh) * 2019-10-21 2021-07-27 华中科技大学 一种基于行人重识别驱动定位调整的行人搜索方法
CN110796057A (zh) * 2019-10-22 2020-02-14 上海交通大学 行人重识别方法、装置及计算机设备
CN111325111A (zh) * 2020-01-23 2020-06-23 同济大学 一种融合逆注意力和多尺度深度监督的行人重识别方法
CN111597887B (zh) * 2020-04-08 2023-02-03 北京大学 一种行人再识别方法及系统

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN109784182A (zh) * 2018-12-17 2019-05-21 北京飞搜科技有限公司 行人重识别方法和装置
CN110414368A (zh) * 2019-07-04 2019-11-05 华中科技大学 一种基于知识蒸馏的无监督行人重识别方法
CN111488833A (zh) * 2020-04-08 2020-08-04 苏州浪潮智能科技有限公司 一种行人重识别方法、装置及电子设备和存储介质
CN111368815A (zh) * 2020-05-28 2020-07-03 之江实验室 一种基于多部件自注意力机制的行人重识别方法
CN112633417A (zh) * 2021-01-18 2021-04-09 天津大学 一种用于行人重识别的将神经网络模块化的行人深度特征融合方法
CN113191338A (zh) * 2021-06-29 2021-07-30 苏州浪潮智能科技有限公司 一种行人重识别方法、装置、设备及可读存储介质

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN116612500A (zh) * 2023-07-20 2023-08-18 深圳须弥云图空间科技有限公司 行人重识别模型训练方法及装置
CN116612500B (zh) * 2023-07-20 2023-09-29 深圳须弥云图空间科技有限公司 行人重识别模型训练方法及装置
CN116665019A (zh) * 2023-07-31 2023-08-29 山东交通学院 一种用于车辆重识别的多轴交互多维度注意力网络
CN116665019B (zh) * 2023-07-31 2023-09-29 山东交通学院 一种用于车辆重识别的多轴交互多维度注意力网络

Also Published As

Publication number Publication date
US11830275B1 (en) 2023-11-28
CN113191338A (zh) 2021-07-30
CN113191338B (zh) 2021-09-17
US20230394866A1 (en) 2023-12-07

Similar Documents

Publication Publication Date Title
WO2023272995A1 (zh) 一种行人重识别方法、装置、设备及可读存储介质
US10586350B2 (en) Optimizations for dynamic object instance detection, segmentation, and structure mapping
CN110322446B (zh) 一种基于相似性空间对齐的域自适应语义分割方法
US10275719B2 (en) Hyper-parameter selection for deep convolutional networks
US20180181592A1 (en) Multi-modal image ranking using neural networks
WO2021057056A1 (zh) 神经网络架构搜索方法、图像处理方法、装置和存储介质
WO2022104540A1 (zh) 一种跨模态哈希检索方法、终端设备及存储介质
EP3493105A1 (en) Optimizations for dynamic object instance detection, segmentation, and structure mapping
EP3493106B1 (en) Optimizations for dynamic object instance detection, segmentation, and structure mapping
CN108596944A (zh) 一种提取运动目标的方法、装置及终端设备
CN115438215B (zh) 图文双向搜索及匹配模型训练方法、装置、设备及介质
WO2023272994A1 (zh) 基于深度学习网络的行人重识别方法、装置、设备及介质
EP3493104A1 (en) Optimizations for dynamic object instance detection, segmentation, and structure mapping
CN112749300B (zh) 用于视频分类的方法、装置、设备、存储介质和程序产品
WO2023082561A1 (zh) 一种行人重识别方法、系统、电子设备及存储介质
WO2021169453A1 (zh) 用于文本处理的方法和装置
CN115455171A (zh) 文本视频的互检索以及模型训练方法、装置、设备及介质
CN114863440A (zh) 订单数据处理方法及其装置、设备、介质、产品
CN111222534A (zh) 一种基于双向特征融合和更平衡l1损失的单发多框检测器优化方法
Li et al. Alpha-SGANet: A multi-attention-scale feature pyramid network combined with lightweight network based on Alpha-IoU loss
Chua et al. Visual IoT: ultra-low-power processing architectures and implications
US20210357647A1 (en) Method and System for Video Action Classification by Mixing 2D and 3D Features
CN117152438A (zh) 一种基于改进DeepLabV3+网络的轻量级街景图像语义分割方法
CN113239215B (zh) 多媒体资源的分类方法、装置、电子设备及存储介质
US20220156502A1 (en) Lingually constrained tracking of visual objects

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE