CN115631509A - Pedestrian re-identification method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN115631509A
Authority
CN
China
Prior art keywords
pedestrian
features
image
head
shoulder
Prior art date
Legal status
Granted
Application number
CN202211302074.1A
Other languages
Chinese (zh)
Other versions
CN115631509B (en)
Inventor
谢喜林
Current Assignee
Athena Eyes Co Ltd
Original Assignee
Athena Eyes Co Ltd
Priority date
Filing date
Publication date
Application filed by Athena Eyes Co Ltd
Priority to CN202211302074.1A
Publication of CN115631509A
Application granted
Publication of CN115631509B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G06V 10/40 Extraction of image or video features
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a pedestrian re-identification method and apparatus, a computer device and a storage medium. The method comprises the following steps: acquiring an image to be identified; detecting head and shoulder parts of the image to be recognized to obtain a detection result; when the detection result shows that a head and shoulder part exists, cropping the image to be recognized to obtain a head and shoulder image; extracting features from the head and shoulder image and performing feature enhancement on the extracted features to obtain head and shoulder enhancement features; performing feature extraction on the image to be recognized to obtain image features and inputting the image features into at least two pedestrian re-recognition branches; for each pedestrian re-recognition branch, performing feature enhancement on the image features to obtain the pedestrian features corresponding to that branch; performing feature fusion on the head and shoulder enhancement features and all pedestrian features to obtain pedestrian re-identification features; and recognizing the image to be identified based on the pedestrian re-identification features to obtain a recognition result.

Description

Pedestrian re-identification method and device, computer equipment and storage medium
Technical Field
The invention relates to the field of computer vision, in particular to a pedestrian re-identification method, a pedestrian re-identification device, computer equipment and a storage medium.
Background
Pedestrian re-identification (Person Re-identification, abbreviated Re-ID) is a technology that uses computer vision to search for a specific pedestrian across a large-scale distributed monitoring system; the identity of a pedestrian is judged by recognizing images of that pedestrian. In practical applications, however, the camera installation position, the shooting angle of view, occlusion by objects, pedestrian crowding and other factors often prevent the capture of a complete pedestrian image that meets the requirements of pedestrian re-identification, so the pedestrian image suffers from information loss. In addition, changes in what a pedestrian is wearing, such as a change of clothes, also alter the appearance of the captured pedestrian images. Re-identifying pedestrians from such images therefore suffers from low accuracy caused by missing information or changed features.
Consequently, conventional methods have low accuracy when re-identifying pedestrians whose image information is missing or whose appearance features have changed.
Disclosure of Invention
The embodiments of the invention provide a pedestrian re-identification method and apparatus, a computer device and a storage medium, which are intended to improve pedestrian identification accuracy when pedestrian image information is missing or appearance features change.
In order to solve the above technical problem, an embodiment of the present application provides a pedestrian re-identification method, comprising:
acquiring an image to be identified;
detecting head and shoulder parts of the image to be identified based on a head and shoulder detection network to obtain a detection result;
when the detection result shows that the image to be recognized has a head and shoulder part, cropping the image to be recognized to obtain a head and shoulder image;
extracting features from the head and shoulder image, and performing feature enhancement on the extracted features based on an attention algorithm to obtain head and shoulder enhancement features;
performing feature extraction on the image to be recognized to obtain image features, and inputting the image features into at least two pedestrian re-recognition branches;
for each pedestrian re-recognition branch, performing feature enhancement on the image features to obtain the pedestrian features corresponding to that branch;
performing feature fusion on the head and shoulder enhancement features and all the pedestrian features to obtain pedestrian re-identification features; and
identifying the image to be identified based on the pedestrian re-identification features to obtain an identification result.
In order to solve the above technical problem, an embodiment of the present application further provides a pedestrian re-identification apparatus, comprising:
an image-to-be-recognized acquiring module, configured to acquire an image to be recognized;
a detection result acquiring module, configured to detect head and shoulder parts of the image to be identified based on a head and shoulder detection network to obtain a detection result;
a head and shoulder image acquiring module, configured to crop the image to be recognized to obtain a head and shoulder image when the detection result shows that the image to be recognized has a head and shoulder part;
a head and shoulder enhancement feature acquiring module, configured to extract features from the head and shoulder image and enhance the extracted features based on an attention algorithm to obtain head and shoulder enhancement features;
an image feature acquiring module, configured to perform feature extraction on the image to be recognized to obtain image features and input the image features into at least two pedestrian re-recognition branches;
a pedestrian feature acquiring module, configured to perform feature enhancement on the image features for each pedestrian re-recognition branch to obtain the pedestrian features corresponding to that branch;
a pedestrian re-identification feature acquiring module, configured to perform feature fusion on the head and shoulder enhancement features and all the pedestrian features to obtain pedestrian re-identification features; and
an identification module, configured to identify the image to be identified based on the pedestrian re-identification features to obtain an identification result.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the steps of the above pedestrian re-identification method when executing the computer program.
In order to solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the steps of the pedestrian re-identification method.
According to the pedestrian re-identification method and apparatus, the computer device and the storage medium, an image to be identified is acquired; head and shoulder parts of the image to be recognized are detected based on a head and shoulder detection network to obtain a detection result; when the detection result shows that the image to be recognized has a head and shoulder part, the image is cropped to obtain a head and shoulder image; features are extracted from the head and shoulder image and enhanced based on an attention algorithm to obtain head and shoulder enhancement features; feature extraction is performed on the image to be recognized to obtain image features, which are input into at least two pedestrian re-recognition branches; for each branch, the image features are enhanced to obtain the pedestrian features corresponding to that branch; the head and shoulder enhancement features and all pedestrian features are fused to obtain pedestrian re-identification features; and the image to be identified is recognized based on these features to obtain an identification result. By fusing the features of the head and shoulder parts, the accuracy of pedestrian re-identification is effectively improved when pedestrian image information is missing or appearance features change, for example when the pedestrian's torso is occluded or the pedestrian changes clothes.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is an exemplary system architecture diagram to which the present application may be applied.
FIG. 2 is a flow chart of one embodiment of a pedestrian re-identification method of the present application.
Fig. 3 is a schematic structural view of an embodiment of a pedestrian re-identification apparatus according to the present application.
FIG. 4 is a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the foregoing drawings are used for distinguishing between different objects and not for describing a particular sequential order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, as shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use terminal devices 101, 102, 103 to interact with a server 105 over a network 104 to receive or send messages or the like.
The terminal devices 101, 102, 103 may be various electronic devices having display screens and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III, the MPEG compression standard audio layer 3), MP4 players (Moving Picture Experts Group Audio Layer IV, the MPEG compression standard audio layer 4), laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
The pedestrian re-identification method provided by the embodiment of the application is executed by the server, and accordingly, the pedestrian re-identification device is arranged in the server.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. Any number of terminal devices, networks and servers may be provided according to implementation needs, and the terminal devices 101, 102 and 103 in this embodiment may specifically correspond to an application system in actual production.
Referring to fig. 2, fig. 2 shows a pedestrian re-identification method according to an embodiment of the present invention, which is described by taking the method applied to the server in fig. 1 as an example, and is described in detail as follows.
S201, obtaining an image to be recognized.
In step S201, the image to be recognized is a pedestrian image.
Ways of acquiring the image to be identified include, but are not limited to, capturing frames from surveillance video recorded by a monitoring camera and shooting with a mobile phone. The acquisition manner is adapted to the actual application scenario.
S202, detecting the head and shoulder parts of the image to be recognized based on the head and shoulder detection network to obtain a detection result.
In step S202, the head and shoulder detection network refers to a network model for performing head and shoulder part recognition on the image to be recognized.
It should be noted here that the training method of the head and shoulder detection network includes, but is not limited to, an object detection algorithm and a positive and negative sample training method.
A target detection algorithm distinguishes a target in an image or video from other regions of no interest, determines whether the target exists, locates its position and identifies its type. Such algorithms include, but are not limited to, YOLOv1, YOLOv3 and YOLOv5; the embodiment of the present application preferably uses the YOLOv5 target detection algorithm. Training samples are trained with YOLOv5 to construct the head and shoulder detection network, and the head and shoulder parts of the image to be recognized are detected based on this network to obtain a detection result.
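The present application does not disclose source code; the following is only an illustrative sketch of how a YOLOv5-style detector might be used for head and shoulder detection. The weight file head_shoulder.pt, the confidence threshold and the use of the ultralytics/yolov5 hub entry point are assumptions, not part of the original disclosure.

    import torch

    # Hypothetical weights: a YOLOv5 model assumed to be fine-tuned on head-shoulder samples.
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='head_shoulder.pt')
    model.conf = 0.5  # assumed confidence threshold

    def detect_head_shoulder(image_path):
        """Return (has_head_shoulder, bbox) for a single pedestrian image."""
        results = model(image_path)
        det = results.xyxy[0]          # tensor [n, 6]: x1, y1, x2, y2, confidence, class
        if det.shape[0] == 0:
            return False, None
        x1, y1, x2, y2, conf, cls = det[0].tolist()  # boxes come back sorted by confidence
        return True, (int(x1), int(y1), int(x2), int(y2))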
The positive and negative sample training method trains a neural network on positive and negative samples and uses the trained network as the head and shoulder detection network, where the positive samples are head and shoulder images and the negative samples are human body images containing head and shoulder parts.
Detecting the head and shoulder parts of the image to be recognized with the head and shoulder detection network makes it possible to quickly locate the head and shoulder parts in the image, extract their features and fuse them, thereby improving pedestrian identification accuracy when pedestrian image information is missing or appearance features change.
And S203, when the detection result shows that the head and shoulder parts exist in the image to be recognized, cutting the image to be recognized to obtain the head and shoulder image.
In step S203, the head-shoulder image is an image including only the head-shoulder portion and not including the torso portion.
Specifically, when the detection result shows that the head and shoulder parts exist in the image to be recognized, determining a boundary frame of the head and shoulder parts in the image to be recognized, and cutting the image to be recognized according to the boundary frame to obtain the head and shoulder image.
It should be noted here that the method for determining the bounding box of the head and shoulder region includes, but is not limited to, a target detection algorithm and a multi-scale target detection algorithm. Through target detection or multi-scale target detection, a target bounding box enclosing the head and shoulder part of the image to be recognized is identified; this bounding box is a rectangle and can be determined by the x and y coordinates of its upper-left and lower-right corners.
The bounding box of the head and shoulder region in the image to be recognized is determined, the image is cropped according to the bounding box to obtain the head and shoulder image, and the features of the head and shoulder image are extracted and fused, thereby improving pedestrian identification accuracy when pedestrian image information is missing or appearance features change.
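Cropping the image to be recognized according to such a bounding box can be expressed as a minimal sketch, assuming the box is given as pixel coordinates of the upper-left and lower-right corners:

    from PIL import Image

    def crop_head_shoulder(image_path, bbox):
        """Cut out the head-shoulder region given bbox = (x1, y1, x2, y2)."""
        img = Image.open(image_path)
        x1, y1, x2, y2 = bbox
        # PIL crop takes (left, upper, right, lower) in pixel coordinates.
        return img.crop((x1, y1, x2, y2))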
And S204, extracting the features of the head and shoulder images, and performing feature enhancement on the extracted features based on an attention algorithm to obtain head and shoulder enhancement features.
In step S204, the attention algorithm refers to an algorithm that computes attention from the feature map itself, i.e., self-attention over the extracted features.
The method for extracting the features of the head and shoulder images includes, but is not limited to, a resnet50 residual network, a resnet101 residual network, and a resnet152 residual network, and the resnet50 residual network is preferred in the present application.
Feature enhancement refers to a method of strengthening the expression of the extracted features. Feature enhancement includes, but is not limited to, fine-grained feature enhancement and weighted feature enhancement: fine-grained feature enhancement strengthens the expression of fine-grained features extracted from the head and shoulder image, while weighted feature enhancement strengthens the expression of the extracted features after assigning weights to them.
It should be understood that the feature enhancements herein may be specifically tailored to the actual situation. Fine grain feature enhancement is preferably employed in the present application.
The head and shoulder enhancement features refer to head and shoulder features obtained through feature enhancement.
By performing feature enhancement on the features extracted from the head and shoulder image with an attention algorithm to obtain the head and shoulder enhancement features, attention to the head and shoulder region is strengthened. This improves pedestrian identification accuracy in cases where the head and shoulder information remains unchanged, such as occlusion of the pedestrian's torso or a change of clothes.
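As a hedged illustration of this branch (not part of the original disclosure), the sketch below combines a ResNet-50 backbone with a simple SE-style channel attention block; the specific attention algorithm, feature dimension and pooling are assumptions chosen only to make the example self-contained.

    import torch
    import torch.nn as nn
    from torchvision import models

    class ChannelAttention(nn.Module):
        """SE-style self-attention: weights are computed from the feature map itself."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())

        def forward(self, x):                       # x: [N, C, H, W]
            w = x.mean(dim=(2, 3))                  # global average pooling -> [N, C]
            w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
            return x * w                            # re-weighted (enhanced) features

    class HeadShoulderBranch(nn.Module):
        def __init__(self, feat_dim=2048):
            super().__init__()
            backbone = models.resnet50(weights=None)
            self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # keep conv stages only
            self.attention = ChannelAttention(feat_dim)

        def forward(self, head_shoulder_img):       # [N, 3, H, W]
            feat_map = self.backbone(head_shoulder_img)
            enhanced = self.attention(feat_map)
            return enhanced.mean(dim=(2, 3))        # pooled head-shoulder enhancement feature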
S205, extracting the features of the image to be recognized to obtain image features, and inputting the image features into at least two pedestrian re-recognition branches.
In step S205, the pedestrian re-recognition branch refers to a model branch for re-recognizing the pedestrian of the image to be recognized.
The implementation method of the pedestrian re-identification branch includes, but is not limited to, a resnet50 residual network, a resnet101 residual network, and a resnet152 residual network, and the resnet50 residual network is preferred in the present application. It should be understood that the pedestrian re-identification branch and the head and shoulder detection network should perform feature extraction in the same manner, so as to ensure that the extracted feature maps have consistent dimensions.
Feature extraction is performed on the image to be recognized through a resnet50 residual network to obtain image features, and the image features are input into each of the pedestrian re-recognition branches.
It should be noted here that the number of pedestrian re-identification branches can be specifically adjusted according to actual conditions.
The embodiment of the present application preferably employs two pedestrian re-recognition branches, part1 and part2. Global pooling is applied in part1 and part2, followed by a fully connected layer and a multi-class softmax loss for training.
Feature extraction is performed on the image to be recognized to obtain image features, the image features are input into the at least two pedestrian re-recognition branches, and the image features are processed by the different branches, which improves the accuracy of pedestrian re-identification.
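A minimal sketch of the two branches follows, assuming a shared feature map as input, global pooling, a fully connected classifier and a multi-class softmax (cross-entropy) loss; the number of identities and channel sizes are illustrative assumptions.

    import torch
    import torch.nn as nn

    class ReIDBranch(nn.Module):
        """One pedestrian re-recognition branch: global pooling + fully connected classifier."""
        def __init__(self, in_channels=2048, num_identities=751):  # 751 is illustrative
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.classifier = nn.Linear(in_channels, num_identities)

        def forward(self, feat_map):                # feat_map: [N, C, H, W] shared image features
            feat = self.pool(feat_map).flatten(1)   # global pooling -> [N, C]
            logits = self.classifier(feat)          # fully connected layer
            return feat, logits

    part1, part2 = ReIDBranch(), ReIDBranch()
    criterion = nn.CrossEntropyLoss()               # multi-class softmax loss

    def branch_loss(feat_map, labels):
        loss = 0.0
        for branch in (part1, part2):
            _, logits = branch(feat_map)
            loss = loss + criterion(logits, labels)
        return loss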
And S206, aiming at each pedestrian re-identification branch, performing feature enhancement on the image features to obtain the pedestrian features corresponding to the pedestrian re-identification branches.
In step S206, the above-mentioned pedestrian feature refers to a feature of a pedestrian in the image to be recognized. The pedestrian features include, but are not limited to, a posture feature, a behavior feature, a skin tone feature. It should be understood that the pedestrian features can be adjusted according to actual conditions for guiding the pedestrian identity classification.
The image features are enhanced through the different pedestrian re-recognition branches to obtain a plurality of pedestrian features, and pedestrian re-identification is performed on the image to be recognized based on these different pedestrian features, which improves the accuracy of pedestrian re-identification.
And S207, performing feature fusion on the head and shoulder enhancement features and all pedestrian features to obtain pedestrian re-identification features.
In step S207, the dimensions of the head and shoulder enhancement features and the pedestrian features are consistent, and the head and shoulder enhancement features and all the pedestrian features are fused according to a preset order to obtain the pedestrian re-identification features.
It should be noted here that the preset order includes, but is not limited to, the head and shoulder enhancement features followed by the pedestrian features, or the pedestrian features followed by the head and shoulder enhancement features.
Because the pedestrian re-identification features obtained by fusing the head and shoulder enhancement features with all the pedestrian features contain the features of the head and shoulder parts, pedestrian identification accuracy is improved when pedestrian image information is missing or appearance features change.
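The fusion in the preset order reduces to a concatenation; a minimal sketch, assuming all features have already been brought to the same dimensionality, is given below.

    import torch

    def fuse_features(head_shoulder_feat, pedestrian_feats, head_shoulder_first=True):
        """Concatenate features in a preset order to form the pedestrian re-identification feature.

        head_shoulder_feat: [N, D]; pedestrian_feats: list of [N, D] tensors (one or more per branch).
        """
        order = ([head_shoulder_feat] + pedestrian_feats if head_shoulder_first
                 else pedestrian_feats + [head_shoulder_feat])
        return torch.cat(order, dim=1)          # [N, D * (1 + len(pedestrian_feats))]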
And S208, based on the re-identification characteristics of the pedestrians, carrying out identification processing on the image to be identified to obtain an identification result.
In step S208, the recognition result indicates whether the image to be recognized contains the target pedestrian.
Whether the image to be recognized contains the target pedestrian is judged from the pedestrian re-identification features. Because these features are fused with the head and shoulder enhancement features, pedestrian identification accuracy is improved in cases where the head and shoulder information remains unchanged, such as occlusion of the pedestrian's torso or a change of clothes.
In this embodiment, an image to be identified is acquired; head and shoulder parts of the image to be recognized are detected based on a head and shoulder detection network to obtain a detection result; when the detection result shows that the image to be recognized has a head and shoulder part, the image is cropped to obtain a head and shoulder image; features are extracted from the head and shoulder image and enhanced based on an attention algorithm to obtain head and shoulder enhancement features; feature extraction is performed on the image to be recognized to obtain image features, which are input into at least two pedestrian re-recognition branches; for each branch, the image features are enhanced to obtain the corresponding pedestrian features; the head and shoulder enhancement features and all pedestrian features are fused to obtain pedestrian re-identification features; and the image to be identified is recognized based on these features to obtain a recognition result. By fusing the features of the head and shoulder parts, the accuracy of pedestrian re-identification is effectively improved when pedestrian image information is missing or appearance features change, for example when the pedestrian's torso is occluded or the pedestrian changes clothes.
In some optional implementation manners of this embodiment, in step S202, based on the head and shoulder detection network, the step of performing head and shoulder part detection on the image to be recognized to obtain a detection result includes steps S2021 to S2022:
S2021, performing fine-grained feature extraction on the image to be recognized based on the head and shoulder detection network to obtain fine-grained features.
S2022, detecting the head and shoulder parts of the image to be recognized based on the fine-grained features to obtain a detection result.
The head and shoulder detection network is a resnet50 residual network.
The fine-grained feature refers to a feature having a higher similarity with the head and shoulder regions than the coarse-grained feature.
The fine-grained feature extraction method includes, but is not limited to, local feature extraction and attention network enhancement. Local feature extraction further extracts features of the head and shoulder parts and subdivides the extracted features to determine whether they belong to the head and shoulder parts. Attention network enhancement uses an attention network that attends to the feature map of the image to be recognized to extract attention features related to the head and shoulder parts. The present application preferably employs attention network enhancement.
Feature extraction is performed on the image to be recognized through a resnet50 residual network, global average pooling is applied to the extracted features, and fine-grained feature extraction is performed on the pooled features based on an attention mechanism network to obtain fine-grained features that enhance the head and shoulder parts.
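The following sketch is only an illustration of this pipeline (ResNet-50 features, global average pooling and an attention mechanism that highlights fine-grained head and shoulder cues); the binary present/absent classifier head and the layer sizes are assumptions.

    import torch
    import torch.nn as nn
    from torchvision import models

    class FineGrainedHeadShoulderDetector(nn.Module):
        def __init__(self):
            super().__init__()
            resnet = models.resnet50(weights=None)
            self.features = nn.Sequential(*list(resnet.children())[:-2])   # [N, 2048, h, w]
            self.spatial_attn = nn.Conv2d(2048, 1, kernel_size=1)          # attention over locations
            self.classifier = nn.Linear(2048, 2)    # head-shoulder present / absent (assumed head)

        def forward(self, image):
            fmap = self.features(image)
            attn = torch.softmax(self.spatial_attn(fmap).flatten(2), dim=-1)    # [N, 1, h*w]
            fine = torch.bmm(attn, fmap.flatten(2).transpose(1, 2)).squeeze(1)  # attention-weighted pooling
            pooled = fmap.mean(dim=(2, 3))           # global average pooling
            return self.classifier(fine + pooled)    # fine-grained cues drive the detection result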
The detection result is either that the image to be recognized contains a head and shoulder part or that it does not.
In this embodiment, feature extraction and fine-grained feature extraction are performed on the image to be recognized through a resnet50 residual network, and head and shoulder detection is performed on the image according to the fine-grained features to obtain the detection result. By enhancing the features of the head and shoulder parts, the accuracy of pedestrian re-identification is effectively improved when pedestrian image information is missing or appearance features change, for example when the pedestrian's torso is occluded or the pedestrian changes clothes.
In some optional implementations of this embodiment, in step S203, when the detection result indicates that the image to be recognized has a head and shoulder part, the step of cropping the image to be recognized to obtain the head and shoulder image includes:
performing head and shoulder bounding box prediction on the image to be recognized based on the fine-grained features to obtain a head and shoulder bounding box; and
cropping the image to be recognized based on the head and shoulder bounding box to obtain the head and shoulder image.
The head and shoulder bounding box refers to the target bounding box obtained by detecting the head and shoulder part, i.e., the target part, of the image to be recognized.
In this embodiment, the bounding box of the head and shoulder region in the image to be recognized is determined from the fine-grained features, the image is cropped according to the bounding box to obtain the head and shoulder image, and the features of the head and shoulder image are extracted and fused, thereby improving pedestrian identification accuracy when pedestrian image information is missing or appearance features change.
In some optional implementations of this embodiment, in step S206, for each pedestrian re-recognition branch, the step of performing feature enhancement on the image features to obtain the pedestrian features corresponding to that branch includes steps S2061 to S2064:
S2061, selecting one pedestrian re-recognition branch from all the pedestrian re-recognition branches as the current re-recognition branch.
S2062, performing feature enhancement on the image features based on the current re-recognition branch to obtain a first pedestrian feature and a second pedestrian feature.
S2063, performing layering processing on the second pedestrian feature to obtain a number of hierarchical features equal to a preset number of layers.
S2064, taking the first pedestrian feature and all the hierarchical features as the pedestrian features corresponding to the current re-recognition branch, and returning to the step of selecting one pedestrian re-recognition branch from all the pedestrian re-recognition branches as the current re-recognition branch until all the pedestrian re-recognition branches have been selected.
In step S2061, the current re-recognition branch is a pedestrian re-recognition branch in which the image feature is currently processed.
In step S2062, the first pedestrian feature and the second pedestrian feature are both pedestrian features obtained by globally pooling image features. It should be understood that the first pedestrian characteristic is substantially identical to the second pedestrian characteristic.
In step S2063, the layering process divides the second pedestrian feature into a plurality of layers along the H dimension of the NCHW feature map.
It should be noted that NCHW denotes the four dimensions of the feature data: N is the batch number, C the channel, H the height and W the width, and the NCHW layout stores data in W order first, then H, then C, then N. That is, the layering processing in the embodiment of the present application slices the feature map corresponding to the second pedestrian feature along its height. It should be understood that the data may also be layered along the W, C or N dimension according to actual needs, adjusted to the actual situation.
The preset number of layers refers to the number of layers for layering the second pedestrian feature.
Step S2063 is explained below with an embodiment. The image features output by the resnet50 network are input into the two pedestrian re-recognition branches part1 and part2 respectively. Global pooling is applied in part1 and part2, followed by a fully connected layer and a multi-class softmax loss for training. The first pedestrian feature of part1 is f1, and the feature map corresponding to the second pedestrian feature of part1 is divided evenly into upper and lower layers f2 and f3 along the H dimension of NCHW; similarly, the first pedestrian feature of part2 is f4, and the feature map corresponding to the second pedestrian feature of part2 is divided evenly into upper, middle and lower layers f5, f6 and f7 along the H dimension of NCHW.
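The worked example above corresponds to a height-wise split of the NCHW feature map; the sketch below is a minimal illustration, with the feature map shapes assumed and the branch backbones omitted.

    import torch

    def branch_features(feat_map, num_layers):
        """Global feature plus num_layers height-wise slices of an NCHW feature map."""
        first = feat_map.mean(dim=(2, 3))                       # globally pooled pedestrian feature
        slices = torch.chunk(feat_map, num_layers, dim=2)       # split along H in NCHW
        layered = [s.mean(dim=(2, 3)) for s in slices]          # pool each horizontal stripe
        return first, layered

    fmap1 = torch.randn(8, 2048, 24, 8)                         # part1 feature map (shapes assumed)
    fmap2 = torch.randn(8, 2048, 24, 8)                         # part2 feature map
    f1, (f2, f3) = branch_features(fmap1, 2)                    # part1: upper / lower layers
    f4, (f5, f6, f7) = branch_features(fmap2, 3)                # part2: upper / middle / lower layers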
In the embodiment, the image characteristics are processed through the multiple pedestrian re-identification branches, different layering operations are carried out on different pedestrian re-identification branches, and the accuracy rate of the pedestrian re-identification can be effectively improved.
In some optional implementations of this embodiment, in step S207, the step of performing feature fusion on the head and shoulder enhancement features and all the pedestrian features to obtain the pedestrian re-identification features includes steps S2071 to S2073:
S2071, performing full connection processing on the head and shoulder enhancement features and all the first pedestrian features to obtain a first fully connected feature.
S2072, performing full connection processing on the head and shoulder enhancement features and all the hierarchical features according to the arrangement order of the pedestrian re-recognition branches to obtain a second fully connected feature.
S2073, performing feature fusion on the first fully connected feature and the second fully connected feature to obtain the pedestrian re-identification features.
In step S2071, the head and shoulder enhancement feature and all the first pedestrian features are reduced in dimension by convolution and fused by concat connection. Fusing the head and shoulder part features increases the weight of the fine-grained head and shoulder features in the final pedestrian re-identification feature. The fusion branch is finally connected to a classification softmax loss to guide pedestrian identity classification and emphasize inter-class features.
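A hedged sketch of this first, classification-oriented fusion follows: the head and shoulder enhancement feature f8 and the first pedestrian features f1 and f4 are reduced in dimension and concatenated before an identity classifier. Linear layers stand in for the 1x1 convolution dimensionality reduction here, and all sizes are assumptions.

    import torch
    import torch.nn as nn

    class ClassificationFusion(nn.Module):
        """Fuse head-shoulder feature f8 with first pedestrian features f1, f4 for identity softmax."""
        def __init__(self, in_dim=2048, reduced_dim=256, num_identities=751):  # sizes assumed
            super().__init__()
            self.reduce = nn.ModuleList([nn.Linear(in_dim, reduced_dim) for _ in range(3)])
            self.classifier = nn.Linear(3 * reduced_dim, num_identities)

        def forward(self, f1, f4, f8):
            reduced = [r(f) for r, f in zip(self.reduce, (f1, f4, f8))]   # dimensionality reduction
            fused = torch.cat(reduced, dim=1)                             # concat connection fusion
            return fused, self.classifier(fused)                          # softmax loss applied outside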
Step S2072 is specifically implemented as follows.
For each pedestrian re-recognition branch, dimension-reduction connection is performed on all the hierarchical features in that branch to obtain the dimension-reduced features corresponding to that branch.
All the dimension-reduced features are fused by full connection according to the arrangement order of the pedestrian re-recognition branches to obtain a fusion feature.
Full connection processing is performed on the head and shoulder enhancement feature and the fusion feature to obtain the second fully connected feature.
Continuing the example from step S2063 to further explain S2072: the hierarchical features f2 and f3 in the second pedestrian feature of part1 are connected after 1x1 convolution dimensionality reduction to obtain the dimension-reduced feature corresponding to part1, and the hierarchical features f5, f6 and f7 in the second pedestrian feature of part2 are connected after 1x1 convolution dimensionality reduction to obtain the dimension-reduced feature corresponding to part2. Part1 and part2 are thus divided evenly into 5 layers in total, and features of the same dimensionality are obtained for f2, f3, f5, f6 and f7 through each part's independent convolutional dimension-reduction network. f2, f3, f5, f6 and f7 are then concat-connected and fused with the head and shoulder enhancement feature f8 to obtain the second fully connected feature, which is connected to a triplet loss to guide the learning of torso features.
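Continuing the same example, the second fully connected feature concatenates the dimension-reduced stripe features f2, f3, f5, f6 and f7 with the head and shoulder enhancement feature f8 and is supervised with a triplet loss; the sketch below uses PyTorch's built-in TripletMarginLoss, and the margin and feature dimensions are assumptions.

    import torch
    import torch.nn as nn

    class StripeFusion(nn.Module):
        """Reduce the five stripe features to a common dimension and fuse them with f8."""
        def __init__(self, in_dim=2048, reduced_dim=256):                  # dimensions assumed
            super().__init__()
            self.reduce = nn.ModuleList([nn.Linear(in_dim, reduced_dim) for _ in range(5)])

        def forward(self, stripes, f8):                                    # stripes = [f2, f3, f5, f6, f7]
            reduced = [r(f) for r, f in zip(self.reduce, stripes)]
            return torch.cat(reduced + [f8], dim=1)                        # second fully connected feature

    triplet = nn.TripletMarginLoss(margin=0.3)                             # guides learning of torso features

    def triplet_supervision(anchor_feat, positive_feat, negative_feat):
        return triplet(anchor_feat, positive_feat, negative_feat)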
In this embodiment, by fusing the features of the head and shoulder parts, the accuracy of pedestrian re-identification is effectively improved when pedestrian image information is missing or appearance features change, for example when the pedestrian's torso is occluded or the pedestrian changes clothes.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 3 shows a schematic block diagram of a pedestrian re-recognition apparatus in one-to-one correspondence with the pedestrian re-recognition method of the above-described embodiment. As shown in fig. 3, the pedestrian re-identification device includes an image to be identified acquisition module 31, a detection result acquisition module 32, a head and shoulder image acquisition module 33, a head and shoulder enhancement feature acquisition module 34, an image feature acquisition module 35, a pedestrian feature acquisition module 36, a pedestrian re-identification feature acquisition module 37 and an identification module 38. Each functional block is described in detail below.
The image to be recognized acquiring module 31 is configured to acquire an image to be recognized.
The detection result acquisition module 32 is configured to perform head and shoulder detection on the image to be recognized based on the head and shoulder detection network to obtain a detection result.
The head and shoulder image obtaining module 33 is configured to crop the image to be recognized to obtain a head and shoulder image when the detection result indicates that the image to be recognized has a head and shoulder part.
The head and shoulder enhancement feature acquisition module 34 is configured to perform feature extraction on the head and shoulder image and perform feature enhancement on the extracted features based on an attention algorithm to obtain head and shoulder enhancement features.
The image feature obtaining module 35 is configured to perform feature extraction on the image to be recognized to obtain image features and input the image features into at least two pedestrian re-recognition branches.
The pedestrian feature acquisition module 36 is configured to perform feature enhancement on the image features for each pedestrian re-recognition branch to obtain the pedestrian features corresponding to that branch.
The pedestrian re-identification feature acquisition module 37 is configured to perform feature fusion on the head and shoulder enhancement features and all the pedestrian features to obtain pedestrian re-identification features.
The identification module 38 is configured to identify the image to be recognized based on the pedestrian re-identification features to obtain an identification result.
Optionally, the detection result acquisition module 32 includes:
a fine-grained feature acquisition unit, configured to perform fine-grained feature extraction on the image to be recognized based on the head and shoulder detection network to obtain fine-grained features; and
a detection result acquisition unit, configured to detect the head and shoulder parts of the image to be recognized based on the fine-grained features to obtain a detection result.
Optionally, the head and shoulder image acquisition module 33 includes:
a head and shoulder bounding box obtaining unit, configured to perform head and shoulder bounding box prediction on the image to be recognized based on the fine-grained features to obtain a head and shoulder bounding box; and
a cropping unit, configured to crop the image to be recognized based on the head and shoulder bounding box to obtain a head and shoulder image.
Optionally, the pedestrian feature acquisition module 36 includes:
a current re-recognition branch acquisition unit, configured to select one pedestrian re-recognition branch from all the pedestrian re-recognition branches as the current re-recognition branch;
a feature enhancement unit, configured to perform feature enhancement on the image features based on the current re-recognition branch to obtain a first pedestrian feature and a second pedestrian feature;
a layering unit, configured to perform layering processing on the second pedestrian feature to obtain a number of hierarchical features equal to a preset number of layers; and
a circulating unit, configured to take the first pedestrian feature and all the hierarchical features as the pedestrian features corresponding to the current re-recognition branch and return to the step of selecting one pedestrian re-recognition branch from all the pedestrian re-recognition branches as the current re-recognition branch until all the pedestrian re-recognition branches have been selected.
Optionally, the pedestrian re-identification feature acquisition module 37 includes:
a first full-connection unit, configured to perform full connection processing on the head and shoulder enhancement features and all the first pedestrian features to obtain a first fully connected feature;
a second full-connection unit, configured to perform full connection processing on the head and shoulder enhancement features and all the hierarchical features according to the arrangement order of the pedestrian re-recognition branches to obtain a second fully connected feature; and
a feature fusion unit, configured to perform feature fusion on the first fully connected feature and the second fully connected feature to obtain the pedestrian re-identification features.
Optionally, the second full-connection unit includes:
a dimension-reduction feature acquisition unit, configured to perform dimension-reduction connection on all the hierarchical features in each pedestrian re-recognition branch to obtain the dimension-reduced features corresponding to that branch;
a fusion feature acquisition unit, configured to perform full-connection fusion on all the dimension-reduced features according to the arrangement order of the pedestrian re-recognition branches to obtain a fusion feature; and
a full-connection unit, configured to perform full connection processing on the head and shoulder enhancement feature and the fusion feature to obtain the second fully connected feature.
For specific definition of the pedestrian re-identification device, reference may be made to the above definition of the pedestrian re-identification method, which is not described herein again. The modules in the pedestrian re-identification device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 4, fig. 4 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42 and a network interface 43 that communicate with each other via a system bus. It is noted that only a computer device 4 having the components memory 41, processor 42 and network interface 43 is shown, but it should be understood that not all of the illustrated components need be implemented, and more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 41 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the computer device 4. Of course, the memory 41 may also include both an internal storage unit of the computer device 4 and an external storage device thereof. In this embodiment, the memory 41 is generally used for storing an operating system installed in the computer device 4 and various types of application software, such as the program code for controlling electronic files. Further, the memory 41 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute the program code stored in the memory 41 or process data, such as program code for executing control of an electronic file.
The network interface 43 may comprise a wireless network interface or a wired network interface, and the network interface 43 is generally used for establishing communication connection between the computer device 4 and other electronic devices.
The present application further provides another embodiment, which is to provide a computer readable storage medium storing an interface display program, which is executable by at least one processor to cause the at least one processor to execute the steps of the pedestrian re-identification method as described above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, and an optical disk), and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, but not restrictive, of the broad invention, and that the appended drawings illustrate preferred embodiments of the invention and do not limit the scope of the invention. This application is capable of embodiments in many different forms and the embodiments are provided so that this disclosure will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, it will be apparent to one skilled in the art that modifications can be made to the embodiments described in the foregoing detailed description, or equivalents can be substituted for some of the features described therein. All equivalent structures made by using the contents of the specification and the drawings of the present application are directly or indirectly applied to other related technical fields and are within the protection scope of the present application.

Claims (10)

1. A pedestrian re-recognition method, characterized by comprising:
acquiring an image to be identified;
based on a head and shoulder detection network, carrying out head and shoulder position detection on the image to be identified to obtain a detection result;
when the detection result shows that the image to be recognized has the head and shoulder parts, cutting the image to be recognized to obtain a head and shoulder image;
extracting the features of the head and shoulder images, and performing feature enhancement on the extracted features based on an attention algorithm to obtain head and shoulder enhancement features;
performing feature extraction on the image to be recognized to obtain image features, and inputting the image features into at least two pedestrian re-recognition branches;
for each pedestrian re-identification branch, performing feature enhancement on the image features to obtain pedestrian features corresponding to the pedestrian re-identification branches;
performing feature fusion on the head and shoulder enhancement features and all the pedestrian features to obtain pedestrian re-identification features;
and identifying the image to be identified based on the re-identification characteristic of the pedestrian to obtain an identification result.
2. The pedestrian re-identification method according to claim 1, wherein the step of performing the head and shoulder part detection on the image to be identified based on the head and shoulder detection network to obtain the detection result comprises:
performing fine-grained feature extraction on the image to be identified based on a head-shoulder detection network to obtain fine-grained features;
and detecting the head and shoulder parts of the image to be identified based on the fine-grained features to obtain a detection result.
3. The pedestrian re-identification method according to claim 2, wherein the step of cropping the image to be recognized to obtain the head-shoulder image when the detection result indicates that the image to be recognized contains a head-shoulder part comprises:
predicting a head-shoulder bounding box for the image to be recognized based on the fine-grained features;
and cropping the image to be recognized based on the head-shoulder bounding box to obtain the head-shoulder image.
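As a rough illustration of claims 2 and 3, the sketch below derives a confidence score and a normalised head-shoulder box from an early, high-resolution ("fine-grained") feature stage and crops the input only when the score clears a threshold. The detector layout, the (score, cx, cy, w, h) box parameterisation, the 0.5 threshold, and the names HeadShoulderDetector and crop_head_shoulder are assumptions for this example, not the network described in the specification.

import torch
import torch.nn as nn

class HeadShoulderDetector(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # keep an early, full-resolution stage as the "fine-grained" features
        self.fine_grained = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 5))                 # (score, cx, cy, w, h)

    def forward(self, image):
        out = torch.sigmoid(self.head(self.fine_grained(image)))
        return out[:, 0], out[:, 1:]                # confidence, normalised box

def crop_head_shoulder(image, box):
    # image: (C, H, W); box: normalised (cx, cy, w, h)
    _, h, w = image.shape
    cx, cy, bw, bh = box.tolist()
    x1 = max(int((cx - bw / 2) * w), 0)
    y1 = max(int((cy - bh / 2) * h), 0)
    x2 = max(int((cx + bw / 2) * w), x1 + 1)
    y2 = max(int((cy + bh / 2) * h), y1 + 1)
    return image[:, y1:y2, x1:x2]

detector = HeadShoulderDetector().eval()
img = torch.rand(1, 3, 256, 128)
with torch.no_grad():
    score, box = detector(img)
if score.item() > 0.5:                              # head-shoulder part present
    print("crop size:", tuple(crop_head_shoulder(img[0], box[0]).shape))
else:
    print("no head-shoulder part detected; skip the head-shoulder branch")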
4. The pedestrian re-identification method according to claim 1, wherein the step of performing feature enhancement on the image features for each pedestrian re-identification branch to obtain the pedestrian features corresponding to that pedestrian re-identification branch comprises:
selecting one pedestrian re-identification branch from all the pedestrian re-identification branches as a current re-identification branch;
performing feature enhancement on the image features based on the current re-identification branch to obtain a first pedestrian feature and a second pedestrian feature;
partitioning the second pedestrian feature into a preset number of layers to obtain the same number of hierarchical features;
and taking the first pedestrian feature and all the hierarchical features as the pedestrian features corresponding to the current re-identification branch, and returning to the step of selecting one pedestrian re-identification branch from all the pedestrian re-identification branches as the current re-identification branch until every pedestrian re-identification branch has been selected.
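One way to read claim 4 is that each branch yields one global vector (the first pedestrian feature) plus a preset number of part vectors obtained by slicing the branch-enhanced feature map (the hierarchical features). The short sketch below shows such a split using horizontal stripes, in the spirit of part-based re-identification models; the stripe count, the pooling choices, and the function name branch_features are assumptions.

import torch
import torch.nn.functional as F

def branch_features(enhanced_map, num_layers=4):
    # enhanced_map: (B, C, H, W) image features after branch-specific enhancement
    first = F.adaptive_avg_pool2d(enhanced_map, 1).flatten(1)       # first pedestrian feature
    strips = enhanced_map.chunk(num_layers, dim=2)                  # split along the height axis
    hierarchical = [F.adaptive_avg_pool2d(s, 1).flatten(1) for s in strips]
    return first, hierarchical

feat = torch.rand(2, 256, 24, 8)                 # toy enhanced feature map for one branch
first, parts = branch_features(feat)
print(first.shape, len(parts), parts[0].shape)   # torch.Size([2, 256]) 4 torch.Size([2, 256])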
5. The pedestrian re-identification method according to claim 4, wherein the step of performing feature fusion on the head-shoulder enhanced features and all the pedestrian features to obtain the pedestrian re-identification features comprises:
performing full connection processing on the head-shoulder enhanced features and all the first pedestrian features to obtain a first fully connected feature;
performing full connection processing on the head-shoulder enhanced features and all the hierarchical features according to the arrangement order of the pedestrian re-identification branches to obtain a second fully connected feature;
and performing feature fusion on the first fully connected feature and the second fully connected feature to obtain the pedestrian re-identification features.
6. The pedestrian re-identification method according to claim 4, wherein the step of performing full connection processing on the head-shoulder enhanced features and all the hierarchical features according to the arrangement order of the pedestrian re-identification branches to obtain the second fully connected feature comprises:
for each pedestrian re-identification branch, performing dimensionality-reduction connection on all the hierarchical features of that branch to obtain a dimensionality-reduced feature corresponding to the branch;
performing full-connection fusion on all the dimensionality-reduced features according to the arrangement order of the pedestrian re-identification branches to obtain a fusion feature;
and performing full connection processing on the head-shoulder enhanced features and the fusion feature to obtain the second fully connected feature.
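Claims 5 and 6 describe a two-stage fusion. The sketch below imitates it with plain linear layers standing in for the "full connection processing": the hierarchical features of each branch are first reduced to one vector, the reduced vectors are fused in branch order, the result is joined with the head-shoulder enhanced feature to form the second fully connected feature, and the final pedestrian re-identification feature fuses the first and second fully connected features. All dimensions, layer names, and the use of nn.Linear are illustrative assumptions.

import torch
import torch.nn as nn

feat_dim, num_branches, num_layers = 256, 2, 4
reduce_branch = nn.Linear(num_layers * feat_dim, feat_dim)      # per-branch dimensionality reduction
fuse_branches = nn.Linear(num_branches * feat_dim, feat_dim)    # fusion feature, in branch order
fc_first = nn.Linear((num_branches + 1) * feat_dim, feat_dim)   # head-shoulder + first features
fc_second = nn.Linear(2 * feat_dim, feat_dim)                   # head-shoulder + fusion feature

hs = torch.rand(1, feat_dim)                                    # head-shoulder enhanced feature
firsts = [torch.rand(1, feat_dim) for _ in range(num_branches)] # first pedestrian features
hier = [[torch.rand(1, feat_dim) for _ in range(num_layers)]    # hierarchical features per branch
        for _ in range(num_branches)]

# first fully connected feature (claim 5)
first_fc = fc_first(torch.cat([hs] + firsts, dim=1))

# second fully connected feature (claim 6): reduce each branch, fuse in branch
# order, then process jointly with the head-shoulder enhanced feature
reduced = [reduce_branch(torch.cat(h, dim=1)) for h in hier]
fusion = fuse_branches(torch.cat(reduced, dim=1))
second_fc = fc_second(torch.cat([hs, fusion], dim=1))

# pedestrian re-identification feature: fusion of the two fully connected features
reid_feature = torch.cat([first_fc, second_fc], dim=1)
print(reid_feature.shape)                                       # torch.Size([1, 512])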
7. A pedestrian re-identification apparatus, characterized by comprising:
an image acquisition module, configured to acquire an image to be recognized;
a detection result acquisition module, configured to perform head-shoulder detection on the image to be recognized based on a head-shoulder detection network to obtain a detection result;
a head-shoulder image acquisition module, configured to crop the image to be recognized to obtain a head-shoulder image when the detection result indicates that the image to be recognized contains a head-shoulder part;
a head-shoulder enhanced feature acquisition module, configured to extract features from the head-shoulder image and perform feature enhancement on the extracted features based on an attention algorithm to obtain head-shoulder enhanced features;
an image feature acquisition module, configured to perform feature extraction on the image to be recognized to obtain image features and input the image features into at least two pedestrian re-identification branches;
a pedestrian feature acquisition module, configured to perform feature enhancement on the image features for each pedestrian re-identification branch to obtain pedestrian features corresponding to that pedestrian re-identification branch;
a pedestrian re-identification feature acquisition module, configured to perform feature fusion on the head-shoulder enhanced features and all the pedestrian features to obtain pedestrian re-identification features;
and a recognition module, configured to recognize the image to be recognized based on the pedestrian re-identification features to obtain a recognition result.
8. The pedestrian re-identification apparatus according to claim 7, wherein the detection result acquisition module comprises:
a fine-grained feature acquisition unit, configured to perform fine-grained feature extraction on the image to be recognized based on the head-shoulder detection network to obtain fine-grained features;
and a detection result acquisition unit, configured to perform head-shoulder detection on the image to be recognized based on the fine-grained features to obtain the detection result.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the pedestrian re-identification method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements a pedestrian re-identification method according to any one of claims 1 to 6.
CN202211302074.1A 2022-10-24 2022-10-24 Pedestrian re-identification method and device, computer equipment and storage medium Active CN115631509B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211302074.1A CN115631509B (en) 2022-10-24 2022-10-24 Pedestrian re-identification method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211302074.1A CN115631509B (en) 2022-10-24 2022-10-24 Pedestrian re-identification method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115631509A (en) 2023-01-20
CN115631509B (en) 2023-05-26

Family

ID=84906622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211302074.1A Active CN115631509B (en) 2022-10-24 2022-10-24 Pedestrian re-identification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115631509B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106409A (en) * 2013-01-29 2013-05-15 北京交通大学 Composite character extraction method aiming at head shoulder detection
US20180181813A1 (en) * 2016-12-22 2018-06-28 TCL Research America Inc. Face detection, identification, and tracking system for robotic devices
EP3401908A1 (en) * 2017-05-12 2018-11-14 Thomson Licensing Device and method for walker identification
CN109871821A (en) * 2019-03-04 2019-06-11 中国科学院重庆绿色智能技术研究院 The pedestrian of adaptive network recognition methods, device, equipment and storage medium again
CN110543823A (en) * 2019-07-30 2019-12-06 平安科技(深圳)有限公司 Pedestrian re-identification method and device based on residual error network and computer equipment
CN110543841A (en) * 2019-08-21 2019-12-06 中科视语(北京)科技有限公司 Pedestrian re-identification method, system, electronic device and medium
WO2022041830A1 (en) * 2020-08-25 2022-03-03 北京京东尚科信息技术有限公司 Pedestrian re-identification method and device
CN112597943A (en) * 2020-12-28 2021-04-02 北京眼神智能科技有限公司 Feature extraction method and device for pedestrian re-identification, electronic equipment and storage medium
CN112801008A (en) * 2021-02-05 2021-05-14 电子科技大学中山学院 Pedestrian re-identification method and device, electronic equipment and readable storage medium
CN112818967A (en) * 2021-04-16 2021-05-18 杭州魔点科技有限公司 Child identity recognition method based on face recognition and head and shoulder recognition
CN114266946A (en) * 2021-12-31 2022-04-01 智慧眼科技股份有限公司 Feature identification method and device under shielding condition, computer equipment and medium
CN114821647A (en) * 2022-04-25 2022-07-29 济南博观智能科技有限公司 Sleeping post identification method, device, equipment and medium
CN114783037A (en) * 2022-06-17 2022-07-22 浙江大华技术股份有限公司 Object re-recognition method, object re-recognition apparatus, and computer-readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ERIC JUWEI CHENG et al.: "A fast fused part-based model with new deep feature for pedestrian detection and security monitoring" *
沈宇慧 et al.: "融合头肩部位特征的行人重识别" (Person re-identification fusing head-shoulder part features) *
谢洪霞: "基于头肩的遮挡场景行人检测算法研究" (Research on head-shoulder-based pedestrian detection algorithms for occluded scenes) *

Also Published As

Publication number Publication date
CN115631509B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
CN110852285B (en) Object detection method and device, computer equipment and storage medium
CN109657533B (en) Pedestrian re-identification method and related product
CN110197146B (en) Face image analysis method based on deep learning, electronic device and storage medium
CN111950329A (en) Target detection and model training method and device, computer equipment and storage medium
CN112685565A (en) Text classification method based on multi-mode information fusion and related equipment thereof
CN111191533B (en) Pedestrian re-recognition processing method, device, computer equipment and storage medium
JP7425147B2 (en) Image processing method, text recognition method and device
CN108021863B (en) Electronic device, age classification method based on image and storage medium
CN111414888A (en) Low-resolution face recognition method, system, device and storage medium
CN111667001A (en) Target re-identification method and device, computer equipment and storage medium
CN110795714A (en) Identity authentication method and device, computer equipment and storage medium
CN112016502B (en) Safety belt detection method, safety belt detection device, computer equipment and storage medium
CN111652181B (en) Target tracking method and device and electronic equipment
CN114937285B (en) Dynamic gesture recognition method, device, equipment and storage medium
CN110232381B (en) License plate segmentation method, license plate segmentation device, computer equipment and computer readable storage medium
CN112149570B (en) Multi-person living body detection method, device, electronic equipment and storage medium
CN116246287B (en) Target object recognition method, training device and storage medium
CN110222576B (en) Boxing action recognition method and device and electronic equipment
CN112381118A (en) Method and device for testing and evaluating dance test of university
CN115700845B (en) Face recognition model training method, face recognition device and related equipment
CN115631509B (en) Pedestrian re-identification method and device, computer equipment and storage medium
CN115424335A (en) Living body recognition model training method, living body recognition method and related equipment
CN113762027B (en) Abnormal behavior identification method, device, equipment and storage medium
CN115393755A (en) Visual target tracking method, device, equipment and storage medium
CN112036501A (en) Image similarity detection method based on convolutional neural network and related equipment thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: No. 205, Building B1, Huigu Science and Technology Industrial Park, No. 336 Bachelor Road, Bachelor Street, Yuelu District, Changsha City, Hunan Province, 410000
Patentee after: Wisdom Eye Technology Co.,Ltd.
Country or region after: China
Address before: 410205, Changsha high tech Zone, Hunan Province, China
Patentee before: Wisdom Eye Technology Co.,Ltd.
Country or region before: China