CN115116090A - Pedestrian re-identification method, system and storage medium - Google Patents


Info

Publication number
CN115116090A
CN115116090A
Authority
CN
China
Prior art keywords
pedestrian
feature extraction
extraction model
data set
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210726677.8A
Other languages
Chinese (zh)
Inventor
黄文丽
李艳生
杨活龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Lingtu Technology Co ltd
Original Assignee
Suzhou Lingtu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Lingtu Technology Co ltd filed Critical Suzhou Lingtu Technology Co ltd
Priority to CN202210726677.8A
Publication of CN115116090A
Withdrawn legal status (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a pedestrian re-identification method, a system and a storage medium. The method comprises the following steps: acquiring a whole-body detection block diagram of each pedestrian from the video; calling a feature extraction model to perform feature extraction on the whole-body detection block diagram of each pedestrian and acquiring a feature vector of each whole-body detection block diagram, wherein the feature vector comprises global features and local features; and clustering the feature vectors to obtain the different whole-body detection block diagrams belonging to the same pedestrian. The feature extraction model is obtained by training an initial feature extraction model and comprises a ResNet50 model and an attention mesh network model in series: the output of layer 4 of the ResNet50 model serves as the input of the attention mesh network model, and the stride of the bottleneck layer of layer 4 of the ResNet50 model is set to 1.

Description

Pedestrian re-identification method, system and storage medium
Technical Field
The invention relates to the technical field of computer software, in particular to a pedestrian re-identification method, a pedestrian re-identification system and a storage medium.
Background
Pedestrian re-identification is one of the more important and challenging tasks in the field of computer vision. The input data of a pedestrian re-identification system is mainly the whole-body detection block diagram of each pedestrian, obtained by a pedestrian target detection system from the videos of a plurality of monitoring cameras. From these whole-body detection block diagrams, the pedestrian re-identification system can determine under which cameras a pedestrian appears and reappears, helping to retrieve a given pedestrian from all images. Pedestrian re-identification systems are better suited to real-world conditions than typical face recognition systems, which usually acquire facial images in a constrained environment. However, owing to factors such as occlusion between different cameras, changes in pedestrian posture, and changes in ambient lighting, the accuracy of pedestrian re-identification is currently not very high.
In recent years, deep neural networks have proven effective at extracting discriminative features for image classification problems and are therefore widely used as basic models for pedestrian re-identification. The ResNet network serves as a standard baseline for extracting features from the whole-body image. However, the ResNet network can only obtain global features, such as the overall color of clothing, and the overall features of people with similar clothing lie close to each other in the feature space, which is not enough to distinguish different people. In general, we identify pedestrians not only by the overall clothing color but also by local details, and these details cannot be obtained from global features alone. A more accurate approach to pedestrian re-identification is therefore to locate the body parts of the pedestrian and extract the relevant local features based on a local feature representation. However, the definition of local region localization in the related papers is fairly arbitrary and may vary from data set to data set and from category to category. On this basis, the technical solution of the present application combines local features with global features to improve the accuracy of the model at the global feature level, and localizes local regions through local feature combination and an attention mechanism to obtain better features and further improve the accuracy of the model.
Disclosure of Invention
In view of the technical defects in the prior art, an embodiment of the present invention provides a method, a system and a storage medium for re-identifying a pedestrian, so as to solve the technical problems in the background art.
In order to achieve the above object, in a first aspect, an embodiment of the present invention provides a pedestrian re-identification method, including:
acquiring a whole-body detection block diagram of each pedestrian from the video;
calling a feature extraction model to perform feature extraction on the whole-body detection block diagram of each pedestrian, and acquiring a feature vector of each whole-body detection block diagram, wherein the feature vector comprises global features and local features;
clustering the feature vectors to obtain different whole body detection block diagrams belonging to the same pedestrian;
wherein the feature extraction model is obtained by training an initial feature extraction model;
the feature extraction model comprises a ResNet50 model and an attention mesh network model in series; the output of the ResNet50 model layer 4 serves as the input of the attention mesh network model, and the step size of the bottleneck layer of the ResNet50 model layer 4 is set to 1.
Optionally, training the initial feature extraction model to obtain the feature extraction model includes:
acquiring a training data set;
and training the initial feature extraction model by using the training data set to obtain the feature extraction model.
Optionally, the training the initial feature extraction model by using the training data set to obtain the feature extraction model includes:
s1: inputting the training data set into the initial feature extraction model, and performing feature extraction on each body detection block diagram to obtain feature labels;
s2: calculating the error between the characteristic label and the real label;
s3: updating parameters of the initial feature extraction model according to error back propagation;
s4: and repeating the steps S1-S3 until the initial feature extraction model converges or reaches the specified iteration times to obtain the feature extraction model.
Optionally, a training data set and a verification data set are obtained at the same time;
and further comprising, after step S4:
and S5, inputting the verification data set into the obtained feature extraction model, and testing the precision of the model.
Optionally, the simultaneously acquiring the training data set and the verification data set specifically includes:
acquiring a pedestrian video and storing a frame image;
extracting a pedestrian detection frame from the frame image, and cutting out a body detection frame diagram only containing a single pedestrian according to the detection frame;
and manually finding out body detection block diagrams belonging to the same pedestrian, marking corresponding pedestrian numbers, summarizing to form a data set, and dividing the data set into a training data set and a verification data set.
Optionally, the span between each pair of stored frame images is not less than a set number of frames;
the number of body detection block diagrams belonging to the same pedestrian is not less than a set number of images.
Optionally, the set number of frames is 10 frames; the set number of images is 3.
Optionally, the obtaining a whole-body detection block diagram of each pedestrian from the video includes:
and calling a pedestrian target detection algorithm to obtain a whole body detection block diagram of each pedestrian from the video.
In a second aspect, an embodiment of the present invention further provides a pedestrian re-identification system, which includes a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, where the memory is used to store a computer program, and the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method according to the first aspect.
In a third aspect, the present invention also provides a computer-readable storage medium, which stores a computer program, where the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to execute the method according to the first aspect.
By implementing the method provided by the embodiment of the invention, the local feature combination preserves the inherent relationships between local features and the adjacency between local scales, dividing the global features loses no information, and more discriminative features are generated from the local slice features, so that the accuracy of pedestrian re-identification is effectively improved. Experimental results on a plurality of widely used public data sets show that the method can accurately re-identify the pedestrians in a video.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below.
Fig. 1 is a schematic flow chart of a pedestrian re-identification method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a pedestrian re-identification system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
As shown in fig. 1, a pedestrian re-identification method according to an embodiment of the present invention is used for re-identifying a pedestrian from a video. The pedestrian re-identification method may include:
s100: and acquiring a whole-body detection block diagram of each pedestrian from the video.
In this embodiment, a pedestrian target detection algorithm is called to obtain the whole-body detection block diagram of each pedestrian from the video.
S200: and calling a feature extraction model to perform feature extraction on the whole-body detection block diagram of each pedestrian, and acquiring a feature vector of each whole-body detection block diagram, wherein the feature vector comprises global features and local features.
Global features refer to the overall properties of an image, and common global features include color features, texture features, and shape features. The local features are features extracted from local regions of the image, and include edges, corners, lines, curves, special attributes, and the like.
The feature extraction model is obtained by training an initial feature extraction model. The feature extraction model comprises a ResNet50 model and an attention mesh network model in series. Specifically, in this embodiment, layers 0 to 3 of the ResNet50 model are retained in their original form, and the stride of the bottleneck layer of layer 4 of the ResNet50 model is set to 1, so that the global feature map produced by layer 4 keeps a higher spatial resolution, can conveniently be cut into more segments, and yields 2048-dimensional feature vectors. The output of layer 4 of the ResNet50 model is then used as the input of the attention mesh network model.
Inspiration is drawn from the multi-granularity network (MGN): the local features are merged and divided into a plurality of branch structures, and finally the global features and the local features are combined together as the final feature vector representation. The attention mesh network model divides the global feature map output by layer 4 of the ResNet50 model into four branches; in the local-feature branches, the branch feature map is divided into 2 and 3 horizontal stripes, respectively. Through this diversity of granularity, different branches differ in their feature extraction emphasis. Combining the global features with the multi-granularity local features makes the feature extraction model more comprehensive.
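As an illustrative sketch only (the exact branch layout is an assumption modeled on MGN-style stripe pooling, not taken from the patent), the division of the layer-4 output into a global branch plus 2-stripe and 3-stripe local branches might look like:

```python
import torch
import torch.nn.functional as F

# Stand-in for the layer-4 output: (batch, channels, height, width).
feat_map = torch.randn(1, 2048, 24, 8)

# Global branch: pool the entire map into a single 2048-d vector.
global_feat = F.adaptive_avg_pool2d(feat_map, 1).flatten(1)

def stripe_features(fmap, n_stripes):
    """Split the map into horizontal stripes along the height axis and
    pool each stripe into its own 2048-d local feature."""
    stripes = torch.chunk(fmap, n_stripes, dim=2)
    return [F.adaptive_avg_pool2d(s, 1).flatten(1) for s in stripes]

local_2 = stripe_features(feat_map, 2)  # local branch with 2 stripes
local_3 = stripe_features(feat_map, 3)  # local branch with 3 stripes

# Final representation: global and multi-granularity local features together.
final_feat = torch.cat([global_feat] + local_2 + local_3, dim=1)
```

With one global vector and 2 + 3 stripe vectors, the concatenated representation here is 6 x 2048 dimensions; the patent itself does not state the final concatenated dimensionality.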
When the initial feature extraction model is constructed, it adopts a plurality of loss functions, and the model is trained with a combination of cross entropy loss, ternary (triplet) loss and center loss. The cross entropy loss mainly measures the classification effect of the global features, while the ternary (triplet) loss and the center loss mainly measure the distances between local features.
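A minimal sketch of this loss combination follows. It is illustrative only: the triplet margin of 0.3, the 751-identity class count, and the simple CenterLoss implementation are assumptions, while the weights (1, 1, 0.0005) are those given later in the description.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Minimal center loss: pulls each feature toward a learned class center."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels):
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

ce_loss = nn.CrossEntropyLoss()              # classification of global features
tri_loss = nn.TripletMarginLoss(margin=0.3)  # distances between features (margin assumed)
center_loss = CenterLoss(num_classes=751, feat_dim=2048)  # 751 IDs is illustrative

def total_loss(logits, labels, anchor, positive, negative, feats):
    # Weights as given in the description: cross entropy 1, triplet 1, center 0.0005.
    return (1.0 * ce_loss(logits, labels)
            + 1.0 * tri_loss(anchor, positive, negative)
            + 0.0005 * center_loss(feats, labels))
```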
In this embodiment, training the initial feature extraction model to obtain the feature extraction model includes:
s10: a training data set is obtained.
In this embodiment, when the training data set is obtained, the verification data set is obtained at the same time. The method specifically comprises the following steps:
collecting a pedestrian video and storing a frame image;
extracting a pedestrian detection frame from the frame image, and cutting out a body detection frame diagram only containing a single pedestrian according to the detection frame;
and manually finding out body detection block diagrams belonging to the same pedestrian, marking corresponding pedestrian numbers, summarizing to form a data set, and dividing the data set into a training data set and a verification data set.
In this embodiment, to ensure discrimination between the images, the span between stored frame images is not less than a set number of frames, for example not less than 10 frames. Meanwhile, to avoid a pedestrian having too few pictures, which would make the model prone to under-fitting after training, the number of body detection block diagrams belonging to the same pedestrian is required to be not less than a set number of images, for example not less than 3.
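These two data-set constraints can be sketched in plain Python (the function names are illustrative, not from the patent):

```python
def subsample_frames(frame_indices, min_span=10):
    """Keep only frames at least min_span frames apart, to ensure
    discrimination between the stored frame images."""
    kept = []
    for idx in sorted(frame_indices):
        if not kept or idx - kept[-1] >= min_span:
            kept.append(idx)
    return kept

def filter_ids(crops_by_id, min_images=3):
    """Drop pedestrian IDs with fewer than min_images detection crops,
    which would otherwise make the model prone to under-fitting."""
    return {pid: crops for pid, crops in crops_by_id.items()
            if len(crops) >= min_images}
```

For example, `subsample_frames([0, 3, 10, 19, 25])` keeps frames 0, 10 and 25, since frames 3 and 19 fall within 10 frames of an already-kept frame.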
S20: and training the initial feature extraction model by using the training data set to obtain the feature extraction model.
In this embodiment, the training the initial feature extraction model by using the training data set to obtain the feature extraction model specifically includes:
s1: inputting the training data set into the initial feature extraction model, and performing feature extraction on each body detection block diagram to obtain feature labels.
S2: an error between the feature tag and the genuine tag is calculated.
S3: and updating the parameters of the initial feature extraction model according to the error back propagation.
S4: and repeating the steps S1-S3 until the initial feature extraction model converges or reaches the specified iteration times to obtain the feature extraction model.
And S5, inputting the verification data set into the obtained feature extraction model, and testing the precision of the model.
During training, each batch contains 32 images: 8 different pedestrian IDs are randomly selected, with 4 bounding-box images per pedestrian ID. All input images are resized to 384 x 128 pixels (384 pixels high, 128 pixels wide). The loss function of the initial feature extraction model consists of a cross entropy loss function, a ternary (triplet) loss function and a center loss function, where the center loss weight is set to 0.0005 and the cross entropy and ternary (triplet) loss weights are each set to 1. The initial feature extraction model is optimized using an Adam optimizer with default parameters.
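The batch composition above (8 IDs x 4 images = 32) is the common "PK sampling" scheme; a sketch of it follows, with `sample_pk_batch` as an illustrative name not taken from the patent:

```python
import random

def sample_pk_batch(crops_by_id, p=8, k=4):
    """Compose one training batch: p randomly chosen pedestrian IDs with
    k bounding-box images each (p=8, k=4 gives a batch of 32 images)."""
    eligible = [pid for pid, crops in crops_by_id.items() if len(crops) >= k]
    batch = []
    for pid in random.sample(eligible, p):
        batch.extend((pid, img) for img in random.sample(crops_by_id[pid], k))
    return batch
```

Having several images per ID in every batch is what makes the triplet term computable: each anchor image has in-batch positives (same ID) and negatives (other IDs).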
S300: and clustering the feature vectors to obtain different whole body detection block diagrams belonging to the same pedestrian.
The feature vectors are clustered with the existing k-means clustering algorithm to obtain the different whole-body detection block diagrams belonging to the same pedestrian in the video. All whole-body detection block diagrams belonging to the same pedestrian are grouped under the same ID number, which yields the actual pedestrian ID numbers present in the video, and the pedestrian images are stored by ID number for viewing.
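A small self-contained sketch of this clustering step using scikit-learn's k-means is given below. The synthetic features stand in for the model's output vectors, and the cluster count of 2 is an assumption for the example; in practice it would correspond to the number of distinct pedestrians.

```python
import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans

# Two well-separated synthetic groups stand in for the feature vectors
# of two different pedestrians (5 whole-body crops each).
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, (5, 8)),
                   rng.normal(5.0, 0.1, (5, 8))])

# Cluster the feature vectors; each cluster corresponds to one pedestrian ID.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)

# Group crop indices under the same pedestrian ID number.
crops_by_pid = defaultdict(list)
for crop_idx, pid in enumerate(labels):
    crops_by_pid[int(pid)].append(crop_idx)
```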
By implementing the method provided by the embodiment of the invention, the local feature combination preserves the inherent relationships between local features and the adjacency between local scales, dividing the global features loses no information, and more discriminative features are generated from the local slice features, so that the accuracy of pedestrian re-identification is effectively improved. Experimental results on a plurality of widely used public data sets show that the method can accurately re-identify the pedestrians in a video.
Based on the same inventive concept, the embodiment of the invention provides a pedestrian re-identification system. As shown in fig. 2, the system may include: one or more processors 101, one or more input devices 102, one or more output devices 103, and memory 104, the processors 101, input devices 102, output devices 103, and memory 104 being interconnected via a bus 105. The memory 104 is used to store a computer program comprising program instructions, the processor 101 being configured to invoke the program instructions to perform the methods of the embodiments of the pedestrian re-identification method described above.
It should be understood that, in the embodiment of the present invention, the processor 101 may be a central processing unit (CPU); the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 102 may include a keyboard or the like, and the output device 103 may include a display (LCD or the like), a speaker, or the like.
The memory 104 may include read-only memory and random access memory, and provides instructions and data to the processor 101. A portion of the memory 104 may also include non-volatile random access memory. For example, the memory 104 may also store device type information.
In a specific implementation, the processor 101, the input device 102, and the output device 103 described in the embodiment of the present invention may execute the implementation manner described in the embodiment of the pedestrian re-identification method provided in the embodiment of the present invention, and details are not described herein again.
It should be noted that, with respect to the specific work flow of the pedestrian re-identification system, reference may be made to the foregoing method embodiment portion, and details are not repeated here.
Further, an embodiment of the present invention also provides a readable storage medium, in which a computer program is stored, where the computer program includes program instructions, and the program instructions, when executed by a processor, implement: the pedestrian re-identification method is provided.
The computer readable storage medium may be an internal storage unit of the background server described in the foregoing embodiment, for example, a hard disk or a memory of the system. The computer readable storage medium may also be an external storage device of the system, such as a plug-in hard drive, Smart Media Card (SMC), Secure Digital (SD) Card, Flash memory Card (Flash Card), etc. provided on the system. Further, the computer readable storage medium may also include both an internal storage unit and an external storage device of the system. The computer-readable storage medium is used for storing the computer program and other programs and data required by the system. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and that the components and steps of the examples have been described in a functional general in the foregoing description for the purpose of illustrating clearly the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partly contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A pedestrian re-identification method is characterized by comprising the following steps:
acquiring a whole-body detection block diagram of each pedestrian from the video;
calling a feature extraction model to perform feature extraction on the whole-body detection block diagram of each pedestrian, and acquiring a feature vector of each whole-body detection block diagram, wherein the feature vector comprises global features and local features;
clustering the feature vectors to obtain different whole body detection block diagrams belonging to the same pedestrian;
wherein the feature extraction model is obtained by training an initial feature extraction model;
the feature extraction model comprises a ResNet50 model and an attention mesh network model in series; the output of the ResNet50 model layer 4 serves as the input to the attention mesh network model, and the step size of the bottleneck layer of the ResNet50 model layer 4 is set to 1.
2. The method of claim 1, wherein training the initial feature extraction model to obtain the feature extraction model comprises:
acquiring a training data set;
and training the initial feature extraction model by using the training data set to obtain the feature extraction model.
3. The method of claim 2, wherein the training the initial feature extraction model with the training data set to obtain the feature extraction model comprises:
s1: inputting the training data set into the initial feature extraction model, and performing feature extraction on each body detection block diagram to obtain feature labels;
s2: calculating the error between the characteristic label and the real label;
s3: updating parameters of the initial feature extraction model according to error back propagation;
s4: and repeating the steps S1-S3 until the initial feature extraction model converges or reaches the specified iteration times to obtain the feature extraction model.
4. A pedestrian re-identification method as claimed in claim 3, characterized by simultaneously acquiring a training data set and a verification data set;
and further comprising, after step S4:
and S5, inputting the verification data set into the obtained feature extraction model, and testing the precision of the model.
5. The pedestrian re-identification method according to claim 4, wherein the simultaneously acquiring the training data set and the verification data set specifically comprises:
acquiring a pedestrian video and storing a frame image;
extracting a pedestrian detection frame from the frame image, and cutting out a body detection frame diagram only containing a single pedestrian according to the detection frame;
and manually finding out body detection block diagrams belonging to the same pedestrian, marking corresponding pedestrian numbers, summarizing to form a data set, and dividing the data set into a training data set and a verification data set.
6. The pedestrian re-identification method according to claim 5, wherein a span between each of said frame images is not less than a set number of frames;
the body detection block diagrams belonging to the same pedestrian are not less than the set number.
7. The pedestrian re-identification method according to claim 6, wherein the set number of frames is 10 frames; the set number of images is 3.
8. The pedestrian re-identification method according to claim 1, wherein obtaining the whole-body detection box image of each pedestrian from the video comprises:
calling a pedestrian target detection algorithm to obtain the whole-body detection box image of each pedestrian from the video.
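A hedged sketch of claim 8: run an off-the-shelf pedestrian detector on a frame and cut out one whole-body box image per detection. The detector interface here is hypothetical; any person detector returning (x, y, w, h) boxes in pixel coordinates would fit, and the claim does not name a specific algorithm.

```python
def crop_detections(frame, detect_pedestrians):
    """Sketch of claim 8: call a pedestrian target detection algorithm on a
    frame and crop one whole-body box image per detected pedestrian.
    `frame` is a 2-D array/list indexed [row][col]; `detect_pedestrians`
    is any callable returning a list of (x, y, w, h) boxes."""
    crops = []
    for x, y, w, h in detect_pedestrians(frame):
        # slice rows y..y+h and columns x..x+w: a single-pedestrian box image
        crop = [row[x:x + w] for row in frame[y:y + h]]
        crops.append(crop)
    return crops
```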
9. A pedestrian re-identification system comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-8.
CN202210726677.8A 2022-06-24 2022-06-24 Pedestrian re-identification method, system and storage medium Withdrawn CN115116090A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210726677.8A CN115116090A (en) 2022-06-24 2022-06-24 Pedestrian re-identification method, system and storage medium

Publications (1)

Publication Number Publication Date
CN115116090A true CN115116090A (en) 2022-09-27

Family

ID=83329082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210726677.8A Withdrawn CN115116090A (en) 2022-06-24 2022-06-24 Pedestrian re-identification method, system and storage medium

Country Status (1)

Country Link
CN (1) CN115116090A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115376054A (en) * 2022-10-26 2022-11-22 浪潮电子信息产业股份有限公司 Target detection method, device, equipment and storage medium
WO2024087358A1 (en) * 2022-10-26 2024-05-02 浪潮电子信息产业股份有限公司 Target detection method and apparatus, and device and non-volatile readable storage medium

Similar Documents

Publication Publication Date Title
CN109784186B (en) Pedestrian re-identification method and device, electronic equipment and computer-readable storage medium
US10726244B2 (en) Method and apparatus detecting a target
CN108038474B (en) Face detection method, convolutional neural network parameter training method, device and medium
CN109284733B (en) Shopping guide negative behavior monitoring method based on yolo and multitask convolutional neural network
US10891465B2 (en) Methods and apparatuses for searching for target person, devices, and media
US7376270B2 (en) Detecting human faces and detecting red eyes
CN109960742B (en) Local information searching method and device
CN106408037B (en) Image recognition method and device
US20120275701A1 (en) Identifying high saliency regions in digital images
US20120002868A1 (en) Method for fast scene matching
CN111814697B (en) Real-time face recognition method and system and electronic equipment
US20220165095A1 (en) Person verification device and method and non-transitory computer readable media
WO2021051547A1 (en) Violent behavior detection method and system
CN111415373A (en) Target tracking and segmenting method, system and medium based on twin convolutional network
CN115115825B (en) Method, device, computer equipment and storage medium for detecting object in image
CN112668462A (en) Vehicle loss detection model training method, vehicle loss detection device, vehicle loss detection equipment and vehicle loss detection medium
CN110738204B (en) Certificate area positioning method and device
CN112541394A (en) Black eye and rhinitis identification method, system and computer medium
CN115116090A (en) Pedestrian re-identification method, system and storage medium
CN111666976A (en) Feature fusion method and device based on attribute information and storage medium
CN114140663A (en) Multi-scale attention and learning network-based pest identification method and system
CN112329810B (en) Image recognition model training method and device based on significance detection
CN110956116B (en) Face image gender identification model and method based on convolutional neural network
CN110781866A (en) Panda face image gender identification method and device based on deep learning
CN111163332A (en) Video pornography detection method, terminal and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220927