CN111881842A - Pedestrian re-identification method and device, electronic equipment and storage medium - Google Patents

Pedestrian re-identification method and device, electronic equipment and storage medium

Info

Publication number
CN111881842A
CN111881842A
Authority
CN
China
Prior art keywords
pedestrian
attribute
identification
neural
neural unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010749564.0A
Other languages
Chinese (zh)
Inventor
张浩
李一力
邵新庆
刘强
徐明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen ZNV Technology Co Ltd
Nanjing ZNV Software Co Ltd
Original Assignee
Shenzhen ZNV Technology Co Ltd
Nanjing ZNV Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen ZNV Technology Co Ltd, Nanjing ZNV Software Co Ltd filed Critical Shenzhen ZNV Technology Co Ltd
Priority to CN202010749564.0A
Publication of CN111881842A
Legal status: Pending (Current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention discloses a pedestrian re-identification method and device, an electronic device and a storage medium. The method comprises: acquiring image information containing a pedestrian; inputting the image information containing the pedestrian into a pre-trained neural network to obtain a feature vector of the pedestrian in the image information, wherein the feature vector of the pedestrian comprises a feature vector of pedestrian attributes and a feature vector of pedestrian re-identification; and comparing the feature vector of the pedestrian with the feature vectors in a preset feature vector set, and re-identifying the pedestrian in the image information according to the comparison result. Because pedestrian attribute image data are added in the training stage of the neural network for pedestrian re-identification, the pre-trained neural network carries both the fine-grained features of pedestrian re-identification and the coarse-grained features of pedestrian attributes, which alleviates the problem of low pedestrian re-identification accuracy when images cross domains.

Description

Pedestrian re-identification method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a pedestrian re-identification method, a pedestrian re-identification device, electronic equipment and a storage medium.
Background
Pedestrian re-identification refers to retrieving, from images captured by different capturing devices, the pedestrian that appears in a given pedestrian image. The biggest problem in pedestrian re-identification is image cross-domain: when image data set B is fed into a neural network trained on image data set A, the performance of the network usually drops sharply, which reduces the accuracy of pedestrian re-identification.
Disclosure of Invention
The invention mainly solves the technical problem of low pedestrian re-identification accuracy when images cross domains.
According to a first aspect, there is provided in an embodiment a pedestrian re-identification method comprising:
acquiring image information containing pedestrians;
inputting the image information containing the pedestrian into a pre-trained neural network to obtain a feature vector of the pedestrian in the image information, wherein the feature vector of the pedestrian comprises a feature vector of the attribute of the pedestrian and a feature vector of re-identification of the pedestrian;
comparing the characteristic vector of the pedestrian with the characteristic vector in a preset characteristic vector set, and re-identifying the pedestrian in the image information according to a comparison result.
Further, the neural network comprises a public neural unit, an attribute neural unit and a re-identification neural unit, wherein the input end of the public neural unit is connected with the input end of the neural network, the output end of the public neural unit is respectively connected with the input end of the attribute neural unit and the input end of the re-identification neural unit, and the output end of the attribute neural unit and the output end of the re-identification neural unit are connected with the output end of the neural network; the output end of the attribute neural unit is used for outputting a feature vector of the attribute of the pedestrian, and the output end of the re-identification neural unit is used for outputting a feature vector of re-identification of the pedestrian;
the common neural unit, attribute neural unit, and re-identification neural unit each include at least one pooling layer and/or at least one convolutional layer.
Further, the pre-trained neural network is trained by:
acquiring a pedestrian attribute training data set and a pedestrian re-identification training data set;
constructing a neural network;
inputting the pedestrian attribute training data set into the public neural unit and the attribute neural unit in the neural network for training to obtain an attribute-trained neural network;
and inputting the pedestrian re-identification training data set into the public neural unit and the re-identification neural unit in the neural network after attribute training, so as to obtain a pre-trained neural network.
Further, the feature vector of the pedestrian in the image information is obtained by splicing the feature vector of the attribute of the pedestrian and the feature vector of the re-identification of the pedestrian.
According to a second aspect, there is provided in one embodiment a pedestrian re-identification apparatus comprising:
the image acquisition module is used for acquiring image information containing pedestrians;
the feature extraction module is used for inputting the image information containing the pedestrian into a pre-trained neural network to obtain a feature vector of the pedestrian in the image information, wherein the feature vector of the pedestrian comprises a feature vector of pedestrian attributes and a feature vector of pedestrian re-identification;
and the pedestrian re-identification module is used for comparing the characteristic vector of the pedestrian with the characteristic vector in a preset characteristic vector set and re-identifying the pedestrian in the image information according to a comparison result.
Further, the neural network comprises a public neural unit, an attribute neural unit and a re-identification neural unit, wherein the input end of the public neural unit is connected with the input end of the neural network, the output end of the public neural unit is respectively connected with the input end of the attribute neural unit and the input end of the re-identification neural unit, and the output end of the attribute neural unit and the output end of the re-identification neural unit are connected with the output end of the neural network; the output end of the attribute neural unit is used for outputting a feature vector of the attribute of the pedestrian, and the output end of the re-identification neural unit is used for outputting a feature vector of re-identification of the pedestrian;
the common neural unit, attribute neural unit, and re-identification neural unit each include at least one pooling layer and/or at least one convolutional layer.
Further, the apparatus comprises a training module for training the neural network in the following manner:
acquiring a pedestrian attribute training data set and a pedestrian re-identification training data set;
constructing a neural network;
inputting the pedestrian attribute training data set into the public neural unit and the attribute neural unit in the neural network for training to obtain an attribute-trained neural network;
and inputting the pedestrian re-identification training data set into the public neural unit and the re-identification neural unit in the neural network after attribute training, so as to obtain a pre-trained neural network.
Further, the feature vector of the pedestrian in the image information is obtained by splicing the feature vector of the attribute of the pedestrian and the feature vector of the re-identification of the pedestrian.
According to a third aspect, an embodiment provides an electronic device comprising:
a memory for storing a program;
a processor for implementing the method of the above embodiment by executing the program stored in the memory.
According to a fourth aspect, an embodiment provides a computer-readable storage medium comprising a program executable by a processor to implement the method of the above-described embodiment.
According to the pedestrian re-identification method and device, electronic device, and storage medium of the above embodiments, because pedestrian attribute image data are added in the training stage of the neural network for pedestrian re-identification, the pre-trained neural network carries both the fine-grained features of pedestrian re-identification and the coarse-grained features of pedestrian attributes, which alleviates the problem of low pedestrian re-identification accuracy when images cross domains.
Drawings
FIG. 1 is a flow chart of a pedestrian re-identification method of an embodiment;
FIG. 2 is a schematic diagram of a neural network according to an embodiment;
fig. 3 is a block diagram showing the construction of a pedestrian re-identification apparatus according to an embodiment;
fig. 4 is a block diagram of an electronic device according to an embodiment.
Detailed Description
The present invention will be described in further detail below with reference to the detailed description and the accompanying drawings, wherein like elements in different embodiments are given like reference numerals. In the following description, numerous details are set forth in order to provide a better understanding of the present application. However, those skilled in the art will readily recognize that in different instances some of these features may be omitted or replaced with other elements, materials, or methods. In some instances, certain operations related to the present application are not shown or described in detail in order to avoid obscuring the core of the present application with excessive description; a detailed description of these operations is unnecessary, as those skilled in the art can fully understand them from the description in the specification and from general knowledge in the art.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Also, the various steps or actions in the method descriptions may be swapped or reordered in ways that will be apparent to one of ordinary skill in the art. Thus, the various sequences in the specification and drawings are for the purpose of clearly describing certain embodiments only and are not intended to imply a required order unless it is otherwise indicated that a particular order must be followed.
The ordinal numbers used herein for components, e.g. "first" and "second", are used only to distinguish the described objects and do not have any sequential or technical meaning. Unless otherwise indicated, the terms "connected" and "coupled" as used in this application include both direct and indirect connection (coupling).
Embodiment one:
referring to fig. 1, fig. 1 is a flowchart illustrating a pedestrian re-identification method according to an embodiment, which can be executed on a server and includes steps S10 to S30, which are described in detail below.
In step S10, image information containing a pedestrian is acquired. The image information in this embodiment may be a video or a picture of the pedestrian captured by a surveillance camera, and may be an RGB image, an infrared image, or an image in another form.
Step S20, inputting the image information containing the pedestrian into a pre-trained neural network to obtain the characteristic vector of the pedestrian in the image information, wherein the characteristic vector of the pedestrian comprises the characteristic vector of the pedestrian attribute and the characteristic vector of the pedestrian re-identification.
In this embodiment, the attributes of a pedestrian refer to characteristics such as gender, age, clothing, and whether a backpack is carried, and the feature vector of the pedestrian attributes refers to the feature vector corresponding to these attributes in the image information (picture or video).
Referring to fig. 2, fig. 2 is a schematic structural diagram of an implemented neural network, the neural network includes a common neural unit 101, an attribute neural unit 102 and a re-identification neural unit 103, an input end of the common neural unit 101 is connected to an input end of the neural network, an output end of the common neural unit 101 is respectively connected to an input end of the attribute neural unit 102 and an input end of the re-identification neural unit 103, and an output end of the attribute neural unit 102 and an output end of the re-identification neural unit 103 are connected to an output end of the neural network; the output end of the attribute neural unit 102 is used for outputting a feature vector of the attribute of the pedestrian, and the output end of the re-identification neural unit 103 is used for outputting a feature vector of the re-identification of the pedestrian;
wherein the common neural unit, the attribute neural unit, and the re-identified neural unit each include at least one pooling layer and/or at least one convolutional layer.
In one embodiment, the neural network comprises five layers. The first three layers serve as the public neural unit 101, which is shared by the pedestrian attribute task and the pedestrian re-identification task; the pedestrian attribute features and the pedestrian re-identification features are separated at the fourth and fifth layers, where the attribute neural unit 102 outputs the feature vector of the pedestrian attributes and the re-identification neural unit 103 outputs the feature vector of pedestrian re-identification.
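The patent gives no source code, so the following is only a minimal PyTorch sketch of one way such a branched network could be organized, assuming a ResNet-50 backbone whose stem and first three residual stages act as the shared public neural unit 101 and whose last stage is duplicated to form the attribute neural unit 102 and the re-identification neural unit 103. All class names, layer choices, and dimensions are illustrative assumptions, not the patented implementation.

```python
import copy
import torch.nn as nn
from torchvision import models

class BranchedReIDNet(nn.Module):
    """Shared trunk with an attribute head and a re-identification head."""

    def __init__(self, num_attributes=10, reid_dim=256):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Public (common) neural unit: stem plus the first three residual stages.
        self.common = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1, backbone.layer2, backbone.layer3,
        )
        # Attribute neural unit: its own copy of the last stage plus pooling.
        self.attr_branch = nn.Sequential(
            copy.deepcopy(backbone.layer4), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.attr_head = nn.Linear(2048, num_attributes)
        # Re-identification neural unit: a second copy of the last stage.
        self.reid_branch = nn.Sequential(
            copy.deepcopy(backbone.layer4), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.reid_head = nn.Linear(2048, reid_dim)

    def forward(self, x):
        shared = self.common(x)
        attr_vec = self.attr_head(self.attr_branch(shared))   # feature vector of pedestrian attributes
        reid_vec = self.reid_head(self.reid_branch(shared))   # feature vector of pedestrian re-identification
        return attr_vec, reid_vec
```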
In one embodiment, the pre-trained neural network is trained by:
step S201, a pedestrian attribute training data set and a pedestrian re-identification training data set are obtained. The image data in the pedestrian attribute training data set has a pedestrian attribute label, the pedestrian attribute label comprises attributes such as gender, age and dressing, the image data in the pedestrian re-identification training data set has a pedestrian re-identification label, the pedestrian re-identification label is an ID of the image data, and different image data have different IDs.
Step S202, a neural network is constructed. The neural network constructed in this embodiment may be a ResNet feature extraction network.
Step S203, inputting the pedestrian attribute training data set into the public neural unit and the attribute neural unit in the neural network for training to obtain the neural network after attribute training. The training process comprises: activating the public neural unit and the attribute neural unit in the neural network; inputting image data with pedestrian attribute labels from the pedestrian attribute training data set at the input end of the neural network; calculating the loss of the neural network; adjusting the parameters of the neural network according to the loss; and inputting the image data with pedestrian attribute labels from the input end again, repeating this process until the loss of the neural network reaches a minimum and becomes stable, at which point the training of the pedestrian attributes is complete.
Step S204, inputting the pedestrian re-identification training data set into the public neural unit and the re-identification neural unit in the neural network after attribute training to obtain the pre-trained neural network. The training process comprises: activating the public neural unit and the re-identification neural unit in the neural network; inputting image data with pedestrian re-identification labels from the pedestrian re-identification training data set at the input end of the neural network; calculating the loss of the neural network; adjusting the parameters of the neural network according to the loss; and inputting the image data with pedestrian re-identification labels from the input end again, repeating this process until the loss of the neural network reaches a minimum and becomes stable, at which point the training of pedestrian re-identification is complete and the pre-trained neural network is obtained.
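A minimal sketch of the two training stages S203 and S204 follows, reusing the BranchedReIDNet and dataset classes sketched above. The choice of a binary-cross-entropy attribute loss, a cross-entropy identity loss over an added temporary classifier, the Adam optimizer, and the practice of passing only the active branch's parameters to the optimizer (so the inactive branch is left untouched) are assumptions for illustration, not details taken from the patent.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_stage(model, loader, active_params, loss_fn, pick_logits, epochs=10, lr=1e-3):
    """One training stage: only the parameters handed to the optimizer are updated."""
    optimizer = torch.optim.Adam(active_params, lr=lr)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            attr_vec, reid_vec = model(images)
            loss = loss_fn(pick_logits(attr_vec, reid_vec), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

model = BranchedReIDNet(num_attributes=10, reid_dim=256)
num_identities = 751                                # assumed number of training identities
id_classifier = nn.Linear(256, num_identities)      # temporary classifier on top of the re-id feature

# attr_samples, reid_samples and transform are assumed to be prepared elsewhere.
attr_loader = DataLoader(AttributeDataset(attr_samples, transform), batch_size=32, shuffle=True)
reid_loader = DataLoader(ReIDDataset(reid_samples, transform), batch_size=32, shuffle=True)

# Stage 1 (S203): public neural unit + attribute neural unit, pedestrian attribute data set.
train_stage(
    model, attr_loader,
    active_params=list(model.common.parameters())
                  + list(model.attr_branch.parameters())
                  + list(model.attr_head.parameters()),
    loss_fn=nn.BCEWithLogitsLoss(),                 # multi-label attribute loss (assumed)
    pick_logits=lambda attr, reid: attr,
)

# Stage 2 (S204): public neural unit + re-identification neural unit, re-identification data set.
train_stage(
    model, reid_loader,
    active_params=list(model.common.parameters())
                  + list(model.reid_branch.parameters())
                  + list(model.reid_head.parameters())
                  + list(id_classifier.parameters()),
    loss_fn=nn.CrossEntropyLoss(),                  # identity classification loss (assumed)
    pick_logits=lambda attr, reid: id_classifier(reid),
)
```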
In one embodiment, the feature vector of the pedestrian in the image information is obtained by splicing the feature vector of the pedestrian attributes and the feature vector of pedestrian re-identification. For example, if the feature vector of the pedestrian attributes is L1 and the feature vector of pedestrian re-identification is L2, the feature vector of the pedestrian is the concatenation [L1, L2].
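Assuming the two vectors are the tensors returned by the network sketched above, the splicing step is a single concatenation:

```python
import torch

# images: a batch of pedestrian image tensors (assumed to be prepared elsewhere)
attr_vec, reid_vec = model(images)                        # L1: attribute feature, L2: re-id feature
pedestrian_vec = torch.cat([attr_vec, reid_vec], dim=1)   # spliced pedestrian feature [L1, L2]
```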
And step S30, comparing the characteristic vector of the pedestrian with the characteristic vector in the preset characteristic vector set, and re-identifying the pedestrian in the image information according to the comparison result.
In this embodiment, the feature vectors in the preset feature vector set are feature vectors of pedestrians extracted from image information acquired by different shooting devices. The feature vector of the pedestrian in the currently acquired image information is compared for similarity with the feature vectors in the preset feature vector set; if the comparison finds a similar feature vector, re-identification of the pedestrian is achieved.
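The comparison in step S30 could be implemented, for example, with cosine similarity against the preset gallery of spliced feature vectors; the similarity measure, the threshold value, and the gallery layout below are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def re_identify(query_vec, gallery_vecs, gallery_ids, threshold=0.6):
    """Return the identity of the most similar gallery vector, or None if nothing is close enough."""
    sims = F.cosine_similarity(query_vec.unsqueeze(0), gallery_vecs, dim=1)  # (N,) similarities
    best = torch.argmax(sims)
    if sims[best] >= threshold:
        return gallery_ids[best]   # comparison found a similar feature vector: pedestrian re-identified
    return None                    # no sufficiently similar pedestrian in the preset set

# gallery_vecs: (N, D) tensor of spliced feature vectors extracted from other capture devices
# gallery_ids:  list of N pedestrian identities associated with those feature vectors
```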
In the embodiment of the invention, pedestrian attribute image data are added when training the neural network for pedestrian re-identification, so that the trained neural network learns the fine-grained features of pedestrian re-identification and the coarse-grained features of pedestrian attributes at the same time.
Embodiment two:
referring to fig. 3, fig. 3 is a block diagram of a pedestrian re-identification apparatus according to an embodiment, the pedestrian re-identification apparatus includes: the image acquisition module 10, the feature extraction module 20 and the pedestrian re-identification module 30.
The image acquisition module 10 is used for acquiring image information containing a pedestrian. The image information in this embodiment may be a video or a picture of the pedestrian captured by a surveillance camera, and may be an RGB image, an infrared image, or an image in another form.
The feature extraction module 20 is configured to input image information including a pedestrian into a pre-trained neural network, so as to obtain a feature vector of the pedestrian in the image information, where the feature vector of the pedestrian includes a feature vector of a pedestrian attribute and a feature vector of pedestrian re-identification.
In this embodiment, the attributes of a pedestrian refer to characteristics such as gender, age, clothing, and whether a backpack is carried, and the feature vector of the pedestrian attributes refers to the feature vector corresponding to these attributes in the image information (picture or video).
The pedestrian re-identification module 30 is configured to compare the feature vector of the pedestrian with the feature vector in the preset feature vector set, and re-identify the pedestrian in the image information according to the comparison result.
The embodiment further comprises a training module 40, configured to train the neural network in the following manner:
and acquiring a pedestrian attribute training data set and a pedestrian re-identification training data set. The image data in the pedestrian attribute training data set has a pedestrian attribute label, the pedestrian attribute label comprises attributes such as gender, age and dressing, the image data in the pedestrian re-identification training data set has a pedestrian re-identification label, the pedestrian re-identification label is an ID of the image data, and different image data have different IDs.
Constructing a neural network. The neural network constructed in this embodiment may be a ResNet feature extraction network. The neural network comprises a public neural unit 101, an attribute neural unit 102 and a re-identification neural unit 103, wherein the input end of the public neural unit 101 is connected with the input end of the neural network, the output end of the public neural unit 101 is respectively connected with the input end of the attribute neural unit 102 and the input end of the re-identification neural unit 103, and the output end of the attribute neural unit 102 and the output end of the re-identification neural unit 103 are connected with the output end of the neural network; the output end of the attribute neural unit 102 is used for outputting a feature vector of the attribute of the pedestrian, and the output end of the re-identification neural unit 103 is used for outputting a feature vector of re-identification of the pedestrian;
wherein the common neural unit, the attribute neural unit, and the re-identified neural unit each include at least one pooling layer and/or at least one convolutional layer.
Inputting the pedestrian attribute training data set into the public neural unit and the attribute neural unit in the neural network for training to obtain the neural network after attribute training. The training process comprises: activating the public neural unit and the attribute neural unit in the neural network; inputting image data with pedestrian attribute labels from the pedestrian attribute training data set at the input end of the neural network; calculating the loss of the neural network; adjusting the parameters of the neural network according to the loss; and inputting the image data with pedestrian attribute labels from the input end again, repeating this process until the loss of the neural network reaches a minimum and becomes stable, at which point the training of the pedestrian attributes is complete.
Inputting the pedestrian re-identification training data set into the public neural unit and the re-identification neural unit in the neural network after attribute training to obtain the pre-trained neural network. The training process comprises: activating the public neural unit and the re-identification neural unit in the neural network; inputting image data with pedestrian re-identification labels from the pedestrian re-identification training data set at the input end of the neural network; calculating the loss of the neural network; adjusting the parameters of the neural network according to the loss; and inputting the image data with pedestrian re-identification labels from the input end again, repeating this process until the loss of the neural network reaches a minimum and becomes stable, at which point the training of pedestrian re-identification is complete and the pre-trained neural network is obtained.
In one embodiment, the feature vector of the pedestrian in the image information is obtained by splicing the feature vector of the attribute of the pedestrian and the feature vector of the re-identification of the pedestrian. For example, the feature vector of the pedestrian is L1, the feature vector of the pedestrian re-identification is L2, and the feature vector of the pedestrian is L1L 2.
Referring to fig. 4, an embodiment of the invention provides an electronic device. The electronic device comprises a memory 201, a processor 202, and an input/output interface 203. The memory 201 is used for storing programs. The processor 202 is configured to call the program stored in the memory 201 to execute the pedestrian re-identification method according to the embodiment of the present invention. The processor 202 is connected to the memory 201 and the input/output interface 203, respectively, for example via a bus system and/or another connection mechanism (not shown). The memory 201 may be used to store programs and data, including the pedestrian re-identification program involved in the embodiments of the present invention, and the processor 202 executes various functional applications and data processing of the electronic device by running the programs stored in the memory 201.
Those skilled in the art will appreciate that all or part of the functions of the various methods in the above embodiments may be implemented by hardware, or may be implemented by computer programs. When all or part of the functions of the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, and the storage medium may include: a read only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, etc., and the program is executed by a computer to realize the above functions. For example, the program may be stored in a memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above may be implemented. In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and may be downloaded or copied to a memory of a local device, or may be version-updated in a system of the local device, and when the program in the memory is executed by a processor, all or part of the functions in the above embodiments may be implemented.
The present invention has been described in terms of specific examples, which are provided to aid understanding of the invention and are not intended to be limiting. For a person skilled in the art to which the invention pertains, several simple deductions, modifications or substitutions may be made according to the idea of the invention.

Claims (10)

1. A pedestrian re-identification method is characterized by comprising the following steps:
acquiring image information containing pedestrians;
inputting the image information containing the pedestrian into a pre-trained neural network to obtain a feature vector of the pedestrian in the image information, wherein the feature vector of the pedestrian comprises a feature vector of the attribute of the pedestrian and a feature vector of re-identification of the pedestrian;
comparing the characteristic vector of the pedestrian with the characteristic vector in a preset characteristic vector set, and re-identifying the pedestrian in the image information according to a comparison result.
2. The pedestrian re-identification method according to claim 1, wherein the neural network includes a common neural unit, an attribute neural unit, and a re-identification neural unit, an input end of the common neural unit is connected with an input end of the neural network, an output end of the common neural unit is connected with an input end of the attribute neural unit and an input end of the re-identification neural unit, respectively, and an output end of the attribute neural unit and an output end of the re-identification neural unit are connected with an output end of the neural network; the output end of the attribute neural unit is used for outputting a feature vector of the attribute of the pedestrian, and the output end of the re-identification neural unit is used for outputting a feature vector of re-identification of the pedestrian;
the common neural unit, attribute neural unit, and re-identification neural unit each include at least one pooling layer and/or at least one convolutional layer.
3. The pedestrian re-identification method of claim 2, wherein the pre-trained neural network is trained by:
acquiring a pedestrian attribute training data set and a pedestrian re-identification training data set;
constructing a neural network;
inputting the pedestrian attribute training data set into the common neural unit and the attribute neural unit in the neural network for training to obtain an attribute-trained neural network;
and inputting the pedestrian re-identification training data set into the common neural unit and the re-identification neural unit in the attribute-trained neural network for training to obtain the pre-trained neural network.
4. The pedestrian re-identification method according to claim 1, wherein the feature vector of the pedestrian in the image information is obtained by stitching the feature vector of the pedestrian attribute and the feature vector of the pedestrian re-identification.
5. A pedestrian re-recognition apparatus, comprising:
the image acquisition module is used for acquiring image information containing pedestrians;
the feature extraction module is used for inputting the image information containing the pedestrian into a pre-trained neural network to obtain a feature vector of the pedestrian in the image information, wherein the feature vector of the pedestrian comprises a feature vector of pedestrian attributes and a feature vector of pedestrian re-identification;
and the pedestrian re-identification module is used for comparing the characteristic vector of the pedestrian with the characteristic vector in a preset characteristic vector set and re-identifying the pedestrian in the image information according to a comparison result.
6. The pedestrian re-identification apparatus according to claim 5, wherein the neural network includes a common neural unit, an attribute neural unit, and a re-identification neural unit, an input terminal of the common neural unit is connected to an input terminal of the neural network, an output terminal of the common neural unit is connected to an input terminal of the attribute neural unit and an input terminal of the re-identification neural unit, respectively, and an output terminal of the attribute neural unit and an output terminal of the re-identification neural unit are connected to an output terminal of the neural network; the output end of the attribute neural unit is used for outputting a feature vector of the attribute of the pedestrian, and the output end of the re-identification neural unit is used for outputting a feature vector of re-identification of the pedestrian;
the common neural unit, attribute neural unit, and re-identification neural unit each include at least one pooling layer and/or at least one convolutional layer.
7. The pedestrian re-identification apparatus of claim 6, further comprising a training module for training the neural network in the following manner:
acquiring a pedestrian attribute training data set and a pedestrian re-identification training data set;
constructing a neural network;
inputting the pedestrian attribute training data set into the common neural unit and the attribute neural unit in the neural network for training to obtain an attribute-trained neural network;
and inputting the pedestrian re-identification training data set into the common neural unit and the re-identification neural unit in the attribute-trained neural network for training to obtain the pre-trained neural network.
8. The pedestrian re-identification apparatus according to claim 5, wherein the feature vector of the pedestrian in the image information is obtained by stitching the feature vector of the pedestrian attribute and the feature vector of the pedestrian re-identification.
9. An electronic device, characterized by comprising:
a memory for storing a program;
a processor for implementing the method of any one of claims 1-4 by executing a program stored by the memory.
10. A computer-readable storage medium, comprising a program executable by a processor to implement the method of any one of claims 1-4.
CN202010749564.0A 2020-07-30 2020-07-30 Pedestrian re-identification method and device, electronic equipment and storage medium Pending CN111881842A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010749564.0A CN111881842A (en) 2020-07-30 2020-07-30 Pedestrian re-identification method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010749564.0A CN111881842A (en) 2020-07-30 2020-07-30 Pedestrian re-identification method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111881842A true CN111881842A (en) 2020-11-03

Family

ID=73205123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010749564.0A Pending CN111881842A (en) 2020-07-30 2020-07-30 Pedestrian re-identification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111881842A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109711281A (en) * 2018-12-10 2019-05-03 复旦大学 A kind of pedestrian based on deep learning identifies again identifies fusion method with feature
CN110163110A (en) * 2019-04-23 2019-08-23 中电科大数据研究院有限公司 A kind of pedestrian's recognition methods again merged based on transfer learning and depth characteristic
CN111178128A (en) * 2019-11-22 2020-05-19 北京迈格威科技有限公司 Image recognition method and device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吕超 (Lü Chao): "Face Intelligence: From Technical Concept to Commercial Deployment" (人脸智能：从技术概念到商用落地), 《中国安防》 (China Security & Protection), no. 3, pages 51-55 *

Similar Documents

Publication Publication Date Title
EP3598342B1 (en) Method and device for identifying object
US11222211B2 (en) Method and apparatus for segmenting video object, electronic device, and storage medium
CN108960114A (en) Human body recognition method and device, computer readable storage medium and electronic equipment
CN108875492B (en) Face detection and key point positioning method, device, system and storage medium
CN111027455B (en) Pedestrian feature extraction method and device, electronic equipment and storage medium
CN110807472B (en) Image recognition method and device, electronic equipment and storage medium
US9633272B2 (en) Real time object scanning using a mobile phone and cloud-based visual search engine
CN112381104A (en) Image identification method and device, computer equipment and storage medium
CN114283351A (en) Video scene segmentation method, device, equipment and computer readable storage medium
CN110347866B (en) Information processing method, information processing device, storage medium and electronic equipment
JP7089045B2 (en) Media processing methods, related equipment and computer programs
CN111881826A (en) Cross-modal pedestrian re-identification method and device, electronic equipment and storage medium
CN111067522A (en) Brain addiction structural map assessment method and device
CN114550070A (en) Video clip identification method, device, equipment and storage medium
CN116050496A (en) Determination method and device, medium and equipment of picture description information generation model
Baddar et al. A deep facial landmarks detection with facial contour and facial components constraint
CN114880513A (en) Target retrieval method and related device
CN111881842A (en) Pedestrian re-identification method and device, electronic equipment and storage medium
CN112364946B (en) Training method of image determination model, and method, device and equipment for image determination
US20230050371A1 (en) Method and device for personalized search of visual media
CN113705666B (en) Split network training method, use method, device, equipment and storage medium
CN115797948A (en) Character recognition method, device and equipment
CN113808157B (en) Image processing method and device and computer equipment
CN115687676A (en) Information retrieval method, terminal and computer-readable storage medium
CN110781345B (en) Video description generation model obtaining method, video description generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination