CN113255695A - Feature extraction method and system for target re-identification - Google Patents

Feature extraction method and system for target re-identification

Info

Publication number
CN113255695A
CN113255695A (application CN202110557889.3A)
Authority
CN
China
Prior art keywords
feature vector
loss function
feature
neural network
sub
Prior art date
Legal status
Pending
Application number
CN202110557889.3A
Other languages
Chinese (zh)
Inventor
张超捷
黄宇恒
徐天适
张华俊
魏东
Current Assignee
GRG Banking Equipment Co Ltd
Original Assignee
GRG Banking Equipment Co Ltd
Priority date
Filing date
Publication date
Application filed by GRG Banking Equipment Co Ltd
Priority to CN202110557889.3A
Publication of CN113255695A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a feature extraction method and system for target re-identification. The feature extraction method comprises the following steps: acquiring a picture to be extracted; and extracting features from the picture based on a neural network model trained in advance using a slicing method. The neural network model is trained as follows: acquiring an input image; inputting the input image into a preset neural network model to obtain a first feature vector; slicing the first feature vector to obtain a plurality of second feature vectors; slicing the class center parameters to obtain a plurality of class center parameter slices; determining a total loss function based on the class center parameter slices, the second feature vectors, the first feature vector and the class center parameters; and updating the class center parameters and the parameters of the neural network model based on the total loss function. The feature extraction method for target re-identification can simultaneously meet the different requirements of different services on retrieval accuracy, retrieval time and storage space.

Description

Feature extraction method and system for target re-identification
Technical Field
The invention relates to the technical field of image recognition, and in particular to a feature extraction method and system for target re-identification.
Background
At present, the requirements of target re-identification on retrieval accuracy, retrieval time and storage space differ across service scenarios: some services need very high retrieval accuracy and have relatively loose requirements on retrieval time and storage space, while other services require faster retrieval and less storage space. To handle these different services, the conventional practice is to train a plurality of deep learning models, each extracting feature vectors of a different length, and then select a model according to the service requirements.
If multiple different service requirements exist simultaneously in the same target re-identification system, and these services differ in their requirements on retrieval accuracy, retrieval time and storage space, the conventional approach requires training multiple models, which increases the complexity and space cost of the whole system as well as the development cost.
Disclosure of Invention
One of the objectives of the present invention is to provide a feature extraction method for target re-identification that can simultaneously meet the different requirements of different services on retrieval accuracy, retrieval time and storage space.
The feature extraction method for target re-identification provided by the embodiment of the invention comprises the following steps:
acquiring a picture to be extracted;
and extracting features from the picture to be extracted based on a neural network model trained in advance using a slicing method.
Preferably, the training step of the neural network model is as follows:
acquiring an input image;
inputting the input image into a preset neural network model to obtain a first feature vector;
slicing the first feature vector to obtain a plurality of second feature vectors;
slicing the class center parameters to obtain a plurality of class center parameter slices;
determining a total loss function based on the class center parameter slices, the second feature vectors, the first feature vector and the class center parameters;
updating the class center parameters and the parameters of the neural network model based on the total loss function.
Preferably, determining the total loss function based on the class center parameter slices, the second feature vectors, the first feature vector and the class center parameters comprises the following steps:
determining a plurality of first sub-loss functions based on the class center parameter slices and the second feature vectors;
determining a second sub-loss function based on the first feature vector and the class center parameters;
determining the total loss function based on the plurality of first sub-loss functions and the second sub-loss function; the total loss function is calculated as follows:

$$L = \sum_i \mu_i L_i$$

wherein L is the total loss function; μ_N L_N is the second sub-loss function; μ_i is the coefficient of the sub-loss function of feature length i, where μ_N ≠ 0; N is the length of the feature vector output by the model; and L_i denotes the sub-loss function of feature length i.
Preferably,

$$L_i = -\frac{1}{BS}\sum_{b=1}^{BS} \log \frac{e^{s(\cos(\theta_{i,y_b}+m)-k)}}{e^{s(\cos(\theta_{i,y_b}+m)-k)} + \sum_{j=1,\,j \neq y_b}^{Cn} e^{s\cos\theta_{i,j}}}$$

where BS denotes the number of images in a training batch; θ_{i,y_b} denotes the angle between the length-i feature vector of the target in the b-th picture and the corresponding actual class center vector; θ_{i,j} denotes the angle between the length-i feature vector of the target in the b-th picture and the center vectors of the other classes; Cn denotes the number of classification categories; and s, m, k are all hyper-parameters. (The published formula is only an image placeholder; the form above is reconstructed from these definitions, and the exact placement of k is an assumption.)
Preferably, slicing the first feature vector to obtain a plurality of second feature vectors comprises the following step:
starting from the foremost element of the first feature vector, sequentially taking preset numbers of leading elements as the second feature vectors.
Preferably, the class center parameter slices correspond one-to-one to the second feature vectors.
Preferably, the class center parameters and the parameters of the neural network model are updated by a back propagation method.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of model training in a feature extraction method for target re-identification in an embodiment of the present invention;
FIG. 2 is a diagram illustrating the positions of the selected feature vector elements and their corresponding loss functions;
FIG. 3 is a diagram illustrating the angles between the target feature vectors and the class center vectors.
Detailed Description
The preferred embodiments of the present invention will be described below in conjunction with the accompanying drawings; it should be understood that they are presented only to illustrate and explain the invention, not to limit it.
Referring to fig. 1 to 3, batch_size in fig. 1 is the number of images in one batch during model training; h is the image height; w is the image width; c is the number of image channels; N is the length of the feature vector output by the model; and class_num is the number of target classes in the training images. A feature vector slice is obtained by taking, starting from the foremost element, a number of leading elements of the feature vector as a new feature vector; this new feature vector is the feature vector slice, as the sketch below illustrates.
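To make the slicing operation concrete, the following minimal sketch (assuming NumPy; the feature length N and the slice lengths are illustrative, not prescribed by the patent) shows that a feature vector slice is simply the leading m elements of the full vector:

```python
import numpy as np

N = 8                            # illustrative full feature length output by the model
feature = np.random.randn(N)     # stand-in for one extracted feature vector

for m in (2, 4, N):              # assumed example slice lengths
    feature_slice = feature[:m]  # the slice: the front m elements, taken in order
    print(m, feature_slice)      # each slice is itself usable as a shorter feature vector
```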
The existing, conventional training process is as follows:
an image classification network is trained, with each distinct target treated as a separate class. By function, the classification network can be divided into two sub-networks: the first, denoted sub-network A, extracts picture features; the second, denoted sub-network B, classifies based on the features output by A. After the classification network has been trained, sub-network B is discarded and sub-network A is retained to extract picture features for the target re-identification task.
the neural network model of the present application is trained as follows:
acquiring an input image;
inputting the input image into a preset neural network model to obtain a first feature vector;
slicing the first feature vector to obtain a plurality of second feature vectors;
slicing the class center parameters to obtain a plurality of class center parameter slices;
determining a total loss function based on the class center parameter slices, the second feature vectors, the first feature vector and the class center parameters;
updating the class center parameters and the parameters of the neural network model based on the total loss function.
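As a rough illustration only, one iteration of these steps might look as follows in PyTorch; the backbone, image size, slice lengths, logit scale and the plain softmax stand-in for the angular sub-loss are all assumptions, not the patent's prescribed implementation:

```python
import torch
import torch.nn.functional as F

batch_size, N, class_num = 32, 256, 1000          # illustrative sizes
slice_lengths = [64, 128, N]                      # assumed slice lengths; N itself must be included

backbone = torch.nn.Sequential(                   # stand-in for sub-network A
    torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, N))
centers = torch.nn.Parameter(torch.randn(N, class_num))  # class center parameters (sub-network B)
opt = torch.optim.SGD(list(backbone.parameters()) + [centers], lr=0.01)

images = torch.randn(batch_size, 3, 64, 64)       # one batch of input images
labels = torch.randint(0, class_num, (batch_size,))

feat = backbone(images)                           # first feature vector: batch_size x N
loss = 0.0
for i in slice_lengths:                           # second feature vectors / center parameter slices
    logits = F.normalize(feat[:, :i]) @ F.normalize(centers[:i], dim=0)
    loss = loss + F.cross_entropy(logits * 30.0, labels)  # softmax stand-in for each sub-loss
opt.zero_grad()
loss.backward()                                   # back propagation updates A and B together
opt.step()
```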
The "category-centric parameter" in fig. 1 is the parameter (weight) of the sub-network B; the slice is a part of the original feature vector, the slice of the category center parameter corresponds to the slice of the feature vector, and the dimension (shape) of the original feature vector is assumed to be: batch _ size × N, then the dimensionality of the original category center parameter needs to be N × class _ num (batch _ size is the number of pictures for training, and class _ num is the number of categories for classification); at the moment, when the network is calculated forwards, the feature vector is multiplied by the class center parameter, and the output dimension is batch _ size × class _ num, namely, the classification result can be judged; when slicing is performed, if the slice dimension taken by the feature vector is batch _ size _ m (m < ═ N), then the class center parameter also needs to take the slice dimension as m class _ num, so that the output dimension after multiplication is ensured to be the same. In addition, the parameter adjustment uses a "back propagation method", all neural networks are trained based on the method, parameters in the network are updated once per iteration, and the parameters to be adjusted (updated) are: all parameters (weights) participating in the training in the network, including subnetwork a and subnetwork B.
Determining the total loss function based on the class center parameter slices, the second feature vectors, the first feature vector and the class center parameters comprises the following steps:
determining a plurality of first sub-loss functions based on the class center parameter slices and the second feature vectors;
determining a second sub-loss function based on the first feature vector and the class center parameters;
determining the total loss function based on the plurality of first sub-loss functions and the second sub-loss function.
The total loss function used in this technique has the following structure:

$$L = \sum_i \mu_i L_i$$

wherein L is the total loss function; μ_N L_N is the second sub-loss function; μ_i is the coefficient of the sub-loss function of feature length i, where μ_N ≠ 0; N is the length of the feature vector output by the model; and L_i denotes the sub-loss function of feature length i.
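In code, the total loss is just this weighted sum over the chosen feature lengths; the lengths, coefficients and placeholder sub-loss values below are illustrative:

```python
N = 256
slice_lengths = [64, 128, N]               # assumed lengths; the full length N must appear
mu = {64: 0.5, 128: 0.5, N: 1.0}           # coefficients; mu_N must be non-zero
sub_loss = {64: 2.31, 128: 2.10, N: 1.95}  # placeholder values of L_i for one batch

total_loss = sum(mu[i] * sub_loss[i] for i in slice_lengths)
```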
Preferably,

$$L_i = -\frac{1}{BS}\sum_{b=1}^{BS} \log \frac{e^{s(\cos(\theta_{i,y_b}+m)-k)}}{e^{s(\cos(\theta_{i,y_b}+m)-k)} + \sum_{j=1,\,j \neq y_b}^{Cn} e^{s\cos\theta_{i,j}}}$$

where BS denotes the number of images in a training batch; θ_{i,y_b} denotes the angle between the length-i feature vector of the target in the b-th picture and the corresponding actual class center vector; θ_{i,j} denotes the angle between the length-i feature vector of the target in the b-th picture and the center vectors of the other classes; Cn denotes the number of classification categories; and s, m, k are all hyper-parameters.
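A sketch of one first sub-loss L_i in PyTorch is shown below. Since the published equation is an image, the placement of the hyper-parameter k (here, an offset applied together with the additive angular margin m on the true class) is an assumption reconstructed from the variable definitions:

```python
import torch
import torch.nn.functional as F

def sub_loss(feat_slice, center_slice, labels, s=30.0, m=0.5, k=0.0):
    # cos(theta_{i,j}): cosine between each length-i feature and each class center
    cos = F.normalize(feat_slice) @ F.normalize(center_slice, dim=0)
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, cos.size(1)).bool()
    # margin m (and assumed offset k) applied only on the true-class angle theta_{i,y_b}
    logits = s * torch.where(target, torch.cos(theta + m) - k, cos)
    # cross_entropy averages -log softmax over the BS images of the batch
    return F.cross_entropy(logits, labels)
```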
FIG. 2 illustrates the positions of the selected feature vector elements and the loss functions they correspond to; in the figure, V_i denotes an element of the feature vector, where i denotes its position within the feature vector.
Preferably, slicing the first feature vector to obtain a plurality of second feature vectors comprises the following step:
starting from the foremost element of the first feature vector, sequentially taking preset numbers of leading elements as the second feature vectors.
With this target feature extraction technique, the retrieval accuracy of the full length-N feature vector is preserved while the feature elements that contribute most to the system are moved toward the front of the vector, so that retrieval accuracy remains good when only the front M feature elements of the feature vector are used for calculation, as the sketch below illustrates.
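For example, a single stored N-dimensional feature can then serve both a high-accuracy service and a fast, low-storage service; in the sketch below, N, M and the use of cosine similarity are illustrative choices:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

N, M = 256, 64                               # illustrative full and truncated lengths
query, gallery = np.random.randn(N), np.random.randn(N)

score_full = cosine(query, gallery)          # high-accuracy service: all N elements
score_fast = cosine(query[:M], gallery[:M])  # fast / low-storage service: front M only
```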
This target feature extraction technique merges the multiple models of the conventional approach: while guaranteeing retrieval accuracy, the functions of several models (each extracting feature vectors of a different length) are integrated into a single model, efficiently balancing retrieval accuracy, retrieval time and storage space. The integration is also flexible for business use, since the extracted feature length can be switched freely for different services, greatly reducing development cost, system complexity and space cost.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is intended to include such modifications and variations.

Claims (8)

1. A feature extraction method for target re-identification is characterized by comprising the following steps:
acquiring a picture to be extracted;
and extracting features from the picture to be extracted based on a neural network model trained in advance using a slicing method.
2. The feature extraction method for target re-identification according to claim 1, wherein the training of the neural network model comprises the following steps:
acquiring an input image;
inputting the input image into a preset neural network model to obtain a first feature vector;
slicing the first feature vector to obtain a plurality of second feature vectors;
slicing the class center parameters to obtain a plurality of class center parameter slices;
determining a total loss function based on the class center parameter slices, the second feature vectors, the first feature vector and the class center parameters;
updating the class center parameters and the parameters of the neural network model based on the total loss function.
3. The feature extraction method for target re-identification according to claim 2, wherein determining the total loss function based on the class center parameter slices, the second feature vectors, the first feature vector and the class center parameters comprises:
determining a plurality of first sub-loss functions based on the class center parameter slices and the second feature vectors;
determining a second sub-loss function based on the first feature vector and the class center parameters;
determining the total loss function based on the plurality of first sub-loss functions and the second sub-loss function; the total loss function is calculated as follows:

$$L = \sum_i \mu_i L_i$$

wherein L is the total loss function; μ_N L_N is the second sub-loss function; μ_i is the coefficient of the sub-loss function of feature length i, where μ_N ≠ 0; N is the length of the feature vector output by the model; and L_i denotes the sub-loss function of feature length i.
4. The feature extraction method for target re-identification according to claim 3, wherein

$$L_i = -\frac{1}{BS}\sum_{b=1}^{BS} \log \frac{e^{s(\cos(\theta_{i,y_b}+m)-k)}}{e^{s(\cos(\theta_{i,y_b}+m)-k)} + \sum_{j=1,\,j \neq y_b}^{Cn} e^{s\cos\theta_{i,j}}}$$

where BS denotes the number of images in a training batch; θ_{i,y_b} denotes the angle between the length-i feature vector of the target in the b-th picture and the corresponding actual class center vector; θ_{i,j} denotes the angle between the length-i feature vector of the target in the b-th picture and the center vectors of the other classes; Cn denotes the number of classification categories; and s, m, k are all hyper-parameters.
5. The feature extraction method for target re-identification according to claim 2, wherein slicing the first feature vector to obtain a plurality of second feature vectors comprises the following step:
starting from the foremost element of the first feature vector, sequentially taking preset numbers of leading elements as the second feature vectors.
6. The feature extraction method for target re-identification according to claim 2, wherein the class center parameter slices correspond one-to-one to the second feature vectors.
7. The feature extraction method for target re-identification according to claim 2, wherein the class center parameters and the parameters of the neural network model are updated using a back propagation method.
8. A feature extraction system for object re-recognition, comprising:
the acquisition module is used for acquiring a picture to be extracted;
and the extraction module is used for extracting features from the picture to be extracted based on a neural network model trained in advance using a slicing method.
CN202110557889.3A 2021-05-21 2021-05-21 Feature extraction method and system for target re-identification Pending CN113255695A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110557889.3A CN113255695A (en) 2021-05-21 2021-05-21 Feature extraction method and system for target re-identification

Publications (1)

Publication Number Publication Date
CN113255695A true CN113255695A (en) 2021-08-13

Family

ID=77183631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110557889.3A Pending CN113255695A (en) 2021-05-21 2021-05-21 Feature extraction method and system for target re-identification

Country Status (1)

Country Link
CN (1) CN113255695A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018068416A1 (en) * 2016-10-14 2018-04-19 广州视源电子科技股份有限公司 Neural network-based multilayer image feature extraction modeling method and device and image recognition method and device
CN108875693A (en) * 2018-07-03 2018-11-23 北京旷视科技有限公司 A kind of image processing method, device, electronic equipment and its storage medium
CN111401113A (en) * 2019-01-02 2020-07-10 南京大学 Pedestrian re-identification method based on human body posture estimation
CN110414368A (en) * 2019-07-04 2019-11-05 华中科技大学 A kind of unsupervised pedestrian recognition methods again of knowledge based distillation
CN111814845A (en) * 2020-03-26 2020-10-23 同济大学 Pedestrian re-identification method based on multi-branch flow fusion model
CN111814584A (en) * 2020-06-18 2020-10-23 北京交通大学 Vehicle weight identification method under multi-view-angle environment based on multi-center measurement loss

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116912535A (en) * 2023-09-08 2023-10-20 中国海洋大学 Unsupervised target re-identification method, device and medium based on similarity screening
CN116912535B (en) * 2023-09-08 2023-11-28 中国海洋大学 Unsupervised target re-identification method, device and medium based on similarity screening


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination