CN115631509B - Pedestrian re-identification method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN115631509B
CN115631509B (application CN202211302074.1A)
Authority
CN
China
Prior art keywords
pedestrian
image
features
head
shoulder
Prior art date
Legal status
Active
Application number
CN202211302074.1A
Other languages
Chinese (zh)
Other versions
CN115631509A (en)
Inventor
谢喜林 (Xie Xilin)
Current Assignee
Athena Eyes Co Ltd
Original Assignee
Athena Eyes Co Ltd
Priority date
Filing date
Publication date
Application filed by Athena Eyes Co Ltd
Priority to CN202211302074.1A
Publication of CN115631509A
Application granted
Publication of CN115631509B
Legal status: Active
Anticipated expiration

Classifications

    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/34: Smoothing or thinning of the pattern; morphological operations; skeletonisation
    • G06V10/40: Extraction of image or video features
    • G06V10/806: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
    • G06V10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • Y02T10/40: Engine management systems

Abstract

The invention discloses a pedestrian re-identification method and device, computer equipment and a storage medium. The method comprises: acquiring an image to be identified; detecting the head-shoulder part of the image to be identified to obtain a detection result; when the detection result indicates that a head-shoulder part is present, cropping the image to be identified to obtain a head-shoulder image; extracting features from the head-shoulder image and enhancing the extracted features to obtain head-shoulder enhancement features; extracting features from the image to be identified to obtain image features, and inputting the image features into at least two pedestrian re-identification branches; for each pedestrian re-identification branch, performing feature enhancement on the image features to obtain the pedestrian features corresponding to that branch; fusing the head-shoulder enhancement features with all the pedestrian features to obtain pedestrian re-identification features; and identifying the image to be identified based on the pedestrian re-identification features to obtain an identification result. The method and device improve pedestrian identification accuracy.

Description

Pedestrian re-identification method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer vision, and in particular, to a pedestrian re-recognition method, apparatus, computer device, and storage medium.
Background
Pedestrian re-identification (Re-ID) is a technology that uses computer vision to search for a specific pedestrian in a large-scale distributed monitoring system; the identity of a pedestrian is judged by recognizing pedestrian images. In practical applications, however, because of camera installation positions, shooting angles, object occlusion, pedestrian crowding and the like, a complete pedestrian image meeting the requirements of pedestrian re-identification often cannot be obtained, and information in the pedestrian image is lost. In addition, changes in what a pedestrian wears, such as a change of clothes, alter the external appearance captured in the pedestrian image. Re-identifying the acquired pedestrian image then suffers from low accuracy caused by the missing information or the changed appearance.
Conventional methods therefore suffer from low accuracy when pedestrian re-identification is performed on images with missing information or changed appearance.
Disclosure of Invention
The embodiment of the invention provides a pedestrian re-identification method, a device, computer equipment and a storage medium, which are used for improving the pedestrian identification accuracy under the condition of pedestrian image information missing or characteristic change.
In order to solve the above technical problems, an embodiment of the present application provides a pedestrian re-recognition method, comprising:
And acquiring an image to be identified.
And detecting the head and shoulder parts of the image to be identified based on the head and shoulder detection network to obtain a detection result.
And when the detection result is that the head and shoulder positions exist in the image to be identified, cutting the image to be identified to obtain the head and shoulder image.
And extracting the features of the head and shoulder images, and carrying out feature enhancement on the extracted features based on an attention algorithm to obtain head and shoulder enhancement features.
And extracting the characteristics of the image to be identified to obtain image characteristics, and inputting the image characteristics into at least two pedestrian re-identification branches.
And carrying out feature enhancement on the image features aiming at each pedestrian re-recognition branch to obtain pedestrian features corresponding to the pedestrian re-recognition branches.
And carrying out feature fusion on the head-shoulder enhancement features and all the pedestrian features to obtain pedestrian re-identification features.
And based on the pedestrian re-identification characteristics, carrying out identification processing on the image to be identified to obtain an identification result.
In order to solve the above technical problem, the embodiment of the present application further provides a pedestrian re-identification device, comprising:
And the image to be identified acquisition module is used for acquiring the image to be identified.
And the detection result acquisition module is used for detecting the head and shoulder parts of the image to be identified based on the head and shoulder detection network to obtain a detection result.
And the head-shoulder image acquisition module is used for cutting the image to be identified to obtain a head-shoulder image when the detection result shows that the head-shoulder position exists in the image to be identified.
The head-shoulder enhancement feature acquisition module is used for extracting features of the head-shoulder image, and carrying out feature enhancement on the extracted features based on an attention algorithm to obtain head-shoulder enhancement features.
The image feature acquisition module is used for carrying out feature extraction on the image to be identified to obtain image features, and inputting the image features into at least two pedestrian re-identification branches.
The pedestrian characteristic acquisition module is used for carrying out characteristic enhancement on the image characteristics aiming at each pedestrian re-identification branch to obtain pedestrian characteristics corresponding to the pedestrian re-identification branches.
And the pedestrian re-identification feature acquisition module is used for carrying out feature fusion on the head-shoulder enhancement features and all the pedestrian features to obtain pedestrian re-identification features.
And the identification module is used for carrying out identification processing on the image to be identified based on the pedestrian re-identification characteristic to obtain an identification result.
In order to solve the above technical problem, the embodiments of the present application further provide a computer device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the steps of the pedestrian re-recognition method are implemented when the processor executes the computer program.
In order to solve the above technical problem, embodiments of the present application further provide a computer readable storage medium storing a computer program, which when executed by a processor, implements the steps of the pedestrian re-recognition method described above.
According to the pedestrian re-identification method and device, computer equipment and storage medium provided by the embodiments of the invention, an image to be identified is acquired; head-shoulder part detection is performed on the image to be identified based on the head-shoulder detection network to obtain a detection result; when the detection result is that a head-shoulder part exists in the image to be identified, the image to be identified is cropped to obtain a head-shoulder image; features are extracted from the head-shoulder image and enhanced based on an attention algorithm to obtain head-shoulder enhancement features; features are extracted from the image to be identified to obtain image features, and the image features are input into at least two pedestrian re-identification branches; for each pedestrian re-identification branch, feature enhancement is performed on the image features to obtain the pedestrian features corresponding to that branch; the head-shoulder enhancement features and all the pedestrian features are fused to obtain pedestrian re-identification features; and the image to be identified is recognized based on the pedestrian re-identification features to obtain a recognition result. By fusing the head-shoulder features, the accuracy of pedestrian re-identification is effectively improved when pedestrian image information is missing or appearance has changed, for example under occlusion of the pedestrian's torso or a change of clothing.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied.
FIG. 2 is a flow chart of one embodiment of a pedestrian re-identification method of the present application.
Fig. 3 is a schematic structural view of an embodiment of a pedestrian re-recognition device according to the present application.
FIG. 4 is a schematic structural diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the applications herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description and claims of the present application and in the description of the figures above are intended to cover non-exclusive inclusions. The terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, as shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, electronic book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and so on.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that, the pedestrian re-recognition method provided by the embodiment of the present application is executed by a server, and accordingly, the pedestrian re-recognition device is disposed in the server.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. Any number of terminal devices, networks and servers may be provided according to implementation requirements, and the terminal devices 101, 102 and 103 in the embodiments of the present application may specifically correspond to application systems in actual production.
Referring to fig. 2, fig. 2 shows a pedestrian re-recognition method according to an embodiment of the present invention, and the application of the method to the server in fig. 1 is described as follows.
S201, acquiring an image to be identified.
In step S201, the image to be recognized refers to a pedestrian image.
The method for acquiring the image to be identified includes, but is not limited to, capturing an image from a monitoring video shot by a monitoring camera and shooting by a mobile phone. Specifically, the method for acquiring the image to be identified is adaptively adjusted according to the actual application scene.
S202, based on a head-shoulder detection network, head-shoulder position detection is carried out on the image to be identified, and a detection result is obtained.
In step S202, the head-shoulder detecting network refers to a network model for performing head-shoulder region recognition on the image to be recognized.
It should be noted that, the training method of the head-shoulder detection network includes, but is not limited to, a target detection algorithm and a positive and negative sample training method.
The target detection algorithm is an algorithm that distinguishes targets in an image or video from other non-interesting regions, judges whether a target exists, determines the target position and identifies the target type. Such target detection algorithms include, but are not limited to, YOLOV1, YOLOV3 and YOLOV5; the embodiment of the present application preferably employs the YOLOV5 target detection algorithm. The head-shoulder detection network is constructed by training on labeled samples with the YOLOV5 target detection algorithm, and head-shoulder part detection is then performed on the image to be identified based on this network to obtain the detection result.
The positive and negative sample training method is to train a neural network by acquiring positive and negative samples, and take the trained network as a head and shoulder detection network, wherein the positive sample is a head and shoulder image, and the negative sample is a human body image containing the head and shoulder parts.
The head and shoulder detection network is used for detecting the head and shoulder positions of the image to be identified, so that the head and shoulder positions in the image to be identified can be rapidly positioned, the feature extraction is carried out on the head and shoulder positions, and the features of the head and shoulder positions are fused to improve the pedestrian identification accuracy under the condition of pedestrian image information loss or feature change.
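For illustration only, a minimal sketch of head-shoulder detection with a YOLOV5-style detector is given below. It is not the implementation of this application; the weight file name head_shoulder_yolov5.pt and the confidence threshold are assumptions.

```python
import torch

# Illustrative sketch (assumed weight file): load a YOLOv5 detector fine-tuned on
# head-shoulder boxes through the public ultralytics/yolov5 torch.hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path="head_shoulder_yolov5.pt")
model.conf = 0.5  # assumed confidence threshold for accepting a head-shoulder detection

def detect_head_shoulder(image_path: str):
    """Return (has_head_shoulder, boxes); each box row is [x1, y1, x2, y2, conf, cls]."""
    results = model(image_path)   # run inference on a single image
    boxes = results.xyxy[0]       # detections for that image in pixel coordinates
    return boxes.shape[0] > 0, boxes
```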
And S203, when the detection result is that the head and shoulder parts exist in the image to be identified, cutting the image to be identified to obtain the head and shoulder image.
In step S203, the head-shoulder image is an image including only the head-shoulder region and not including the trunk region.
When the detection result is that the head and shoulder positions of the image to be identified exist, determining a boundary box of the head and shoulder positions in the image to be identified, and cutting the image to be identified according to the boundary box to obtain the head and shoulder image.
It should be noted that the method for determining the bounding box of the head-shoulder part includes, but is not limited to, a target detection algorithm and a multi-scale target detection algorithm. A target bounding box containing the head-shoulder part of the image to be identified is identified by the target detection algorithm or the multi-scale target detection algorithm; the bounding box is a rectangle and can be determined by the x- and y-coordinates of its upper-left corner and the x- and y-coordinates of its lower-right corner.
A bounding box of the head-shoulder part is determined in the image to be identified, the image is cropped according to the bounding box to obtain the head-shoulder image, features are then extracted from the head-shoulder image, and the head-shoulder features are fused to improve pedestrian identification accuracy when pedestrian image information is missing or appearance has changed.
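As a small illustrative sketch of the cropping step (assuming the bounding box is given as upper-left and lower-right pixel coordinates, consistent with the description above):

```python
from PIL import Image

def crop_head_shoulder(image_path: str, box) -> Image.Image:
    """Crop the head-shoulder region, given (x1, y1, x2, y2) corner coordinates."""
    x1, y1, x2, y2 = [int(v) for v in box[:4]]
    img = Image.open(image_path).convert("RGB")
    return img.crop((x1, y1, x2, y2))  # PIL expects a (left, upper, right, lower) tuple
```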
S204, extracting features of the head and shoulder images, and carrying out feature enhancement on the extracted features based on an attention algorithm to obtain head and shoulder enhancement features.
In step S204, the attention algorithm refers to attention extracted from the feature map itself.
The method for extracting the features of the head and shoulder image includes, but is not limited to, a resnet50 residual network, a resnet101 residual network and a resnet152 residual network, and the resnet50 residual network is preferred in the application.
The feature enhancement means a manner of enhancing the feature expression of the extracted feature. The feature enhancement includes, but is not limited to, fine-grained feature enhancement, and weight feature enhancement, wherein fine-grained feature enhancement refers to expression enhancement of fine-grained features extracted from a head-shoulder image, and weight feature enhancement refers to weight assignment to features extracted from the head-shoulder image and expression enhancement.
It should be appreciated that the feature enhancements herein may be specifically tailored to the actual situation. The present application preferably employs fine grain feature enhancement.
The head-shoulder enhancement features refer to head-shoulder features obtained through feature enhancement.
The features extracted from the head-shoulder image are enhanced by the attention algorithm to obtain the head-shoulder enhancement features, which strengthens attention to the head-shoulder part and improves pedestrian identification accuracy in cases where only the head-shoulder information remains reliable, such as occlusion of the pedestrian's torso or a change of clothing.
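A minimal sketch of one possible realization of step S204 is shown below, assuming a resnet50 backbone and an SE-style channel attention block. The patent only requires attention computed from the feature map itself, so this particular attention form and the feature dimensions are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ChannelAttention(nn.Module):
    """SE-style attention: weights are computed from the feature map itself."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        n, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * w                      # re-weight (enhance) the extracted features

class HeadShoulderBranch(nn.Module):
    """ResNet-50 features of the head-shoulder crop, enhanced by attention (f8)."""
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # N x 2048 x H x W
        self.attention = ChannelAttention(2048)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        f = self.attention(self.features(x))
        return self.pool(f).flatten(1)    # head-shoulder enhancement feature, N x 2048
```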
S205, extracting features of the image to be identified to obtain image features, and inputting the image features into at least two pedestrian re-identification branches.
In step S205, the above-mentioned pedestrian re-recognition branch refers to a model branch for pedestrian re-recognition of the image to be recognized.
The implementation method of the pedestrian re-identification branch includes, but is not limited to, a resnet50 residual network, a resnet101 residual network and a resnet152 residual network, and the resnet50 residual network is preferred in the application. It should be understood that the pedestrian re-identification branch and the head and shoulder detection network should perform feature extraction in the same manner, so as to ensure that the dimensions of the extracted feature images are consistent.
And extracting features of the image to be identified through a resnet50 residual error network to obtain image features, and respectively inputting the image features into all pedestrian re-identification branches.
It should be noted that the number of the branches for re-identifying the pedestrian may be specifically adjusted according to the actual situation.
The embodiment of the present application preferably adopts two pedestrian re-identification branches, part1 and part2. Both part1 and part2 use global pooling followed by a fully connected layer and are trained with a multi-class softmax loss.
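As an illustrative sketch only, a branch head of the kind described above (global pooling, a fully connected layer and a multi-class softmax loss) could be written as follows; the feature dimension of 2048 and the number of identities are assumptions, not values fixed by this application.

```python
import torch
import torch.nn as nn

class BranchHead(nn.Module):
    """One pedestrian re-identification branch head: global pooling plus a fully
    connected layer, trained with a multi-class softmax (cross-entropy) loss."""
    def __init__(self, feat_dim: int = 2048, num_ids: int = 751):  # 751 is illustrative
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(feat_dim, num_ids)

    def forward(self, feature_map: torch.Tensor):
        pooled = self.pool(feature_map).flatten(1)   # pedestrian feature vector
        return pooled, self.fc(pooled)               # feature + identity logits

part1, part2 = BranchHead(), BranchHead()            # the same image features feed both branches
softmax_loss = nn.CrossEntropyLoss()                  # multi-class softmax loss
```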
The image features are obtained by extracting the features of the image to be identified, the image features are input into at least two pedestrian re-identification branches, the image features are processed based on different pedestrian re-identification branches, and the pedestrian re-identification accuracy is improved.
S206, aiming at each pedestrian re-identification branch, carrying out feature enhancement on the image features to obtain pedestrian features corresponding to the pedestrian re-identification branches.
In step S206, the pedestrian feature refers to a feature of a pedestrian in the image to be recognized. The pedestrian features include, but are not limited to, gesture features, behavioral features, skin tone features. It should be appreciated that the pedestrian characteristics may be adjusted according to actual circumstances for guiding pedestrian identity classification.
And the image features are enhanced through different pedestrian re-recognition branches to obtain a plurality of pedestrian features, and the pedestrian re-recognition is performed on the image to be recognized based on the different pedestrian features, so that the pedestrian re-recognition accuracy is improved.
S207, carrying out feature fusion on the head-shoulder enhancement features and all pedestrian features to obtain pedestrian re-identification features.
In step S207, the head-shoulder enhancement features and the pedestrian features are consistent in dimension, and feature fusion is performed on the head-shoulder enhancement features and all the pedestrian features according to a preset sequence, so as to obtain pedestrian re-identification features.
It should be noted that the preset sequence includes, but is not limited to, concatenating the head-shoulder enhancement feature before the pedestrian features or concatenating the pedestrian features before the head-shoulder enhancement feature.
The pedestrian re-recognition feature is obtained by carrying out feature fusion on the head-shoulder enhancement feature and all the pedestrian features, and comprises the features of the head-shoulder part, so that the pedestrian recognition accuracy under the condition of pedestrian image information missing or feature change is improved.
S208, based on the pedestrian re-recognition characteristics, recognizing the image to be recognized to obtain a recognition result.
In step S208, the recognition result indicates whether the image to be recognized contains the target pedestrian.
Whether the image to be identified contains the target pedestrian is judged from the pedestrian re-identification feature. Because the pedestrian re-identification feature fuses the head-shoulder enhancement feature, identification accuracy is improved in cases where the head-shoulder information of the pedestrian remains unchanged, such as torso occlusion or a change of clothing.
In this embodiment, an image to be recognized is acquired; head-shoulder part detection is performed on the image based on the head-shoulder detection network to obtain a detection result; when the detection result is that a head-shoulder part exists, the image is cropped to obtain a head-shoulder image; features are extracted from the head-shoulder image and enhanced based on an attention algorithm to obtain head-shoulder enhancement features; features are extracted from the image to be identified to obtain image features, which are input into at least two pedestrian re-identification branches; for each branch, feature enhancement is performed on the image features to obtain the corresponding pedestrian features; the head-shoulder enhancement features and all the pedestrian features are fused to obtain pedestrian re-identification features; and the image is recognized based on the pedestrian re-identification features to obtain a recognition result. By fusing the head-shoulder features, the accuracy of pedestrian re-identification is effectively improved when pedestrian image information is missing or appearance has changed, for example under occlusion of the pedestrian's torso or a change of clothing.
In some optional implementations of the present embodiment, in step S202, based on the head-shoulder detection network, the step of performing head-shoulder part detection on the image to be identified to obtain a detection result includes steps S2021 to S2022:
s2021, carrying out fine-grained feature extraction on the image to be identified based on the head-shoulder detection network to obtain fine-grained features.
S2022, detecting the head and shoulder parts of the image to be identified based on the fine granularity characteristics, and obtaining a detection result.
The head and shoulder detection network is a resnet50 residual network.
The fine-grained feature refers to a feature that has a greater similarity to the head-shoulder position than the coarse-grained feature.
The fine-grained feature extraction methods described above include, but are not limited to, local feature extraction and attention network enhancement. Local feature extraction is a method that further extracts features of the head-shoulder part and subdivides the extracted features to determine whether they belong to the head-shoulder part. Attention network enhancement refers to using an attention network to attend to the feature map of the image to be identified itself and thereby extract attention features related to the head-shoulder part. The present application preferably employs attention network enhancement.
And extracting features of the image to be identified through a resnet50 residual network, carrying out global average pooling on the extracted features, and carrying out fine-grained feature extraction on the pooled features based on an attention mechanism network to obtain fine-grained features of the enhanced head and shoulder parts.
The detection result is either that the image to be identified contains a head-shoulder part or that it does not contain a head-shoulder part.
In this embodiment, feature extraction and fine-grained feature extraction are performed on the image to be identified through the resnet50 residual network, and head-shoulder part detection is performed on the image according to the fine-grained features to obtain a detection result. By enhancing the head-shoulder features, the accuracy of pedestrian re-identification is effectively improved when pedestrian image information is missing or appearance has changed, for example under occlusion of the pedestrian's torso or a change of clothing.
In some optional implementations of this embodiment, in step S203, when the detection result is that the image to be identified has a head-shoulder part, the step of cropping the image to be identified to obtain the head-shoulder image includes:
And based on the fine granularity characteristics, performing head-shoulder boundary frame prediction on the image to be identified to obtain a head-shoulder boundary frame.
Based on the head-shoulder boundary box, cutting the image to be identified to obtain the head-shoulder image.
The head-shoulder boundary box refers to a target boundary box for detecting the head-shoulder part of the image to be identified. Wherein the head and shoulder parts are target parts.
In this embodiment, a bounding box of a head-shoulder part in an image to be identified is determined through fine-grained features, the image to be identified is cut according to the bounding box, a head-shoulder image is obtained, feature extraction is performed on the head-shoulder image, and features of the head-shoulder part are fused to improve pedestrian identification accuracy under the condition that pedestrian image information is absent or features are changed.
In some optional implementations of this embodiment, in step S206, for each pedestrian re-recognition branch, the step of performing feature enhancement on the image feature to obtain the pedestrian feature corresponding to the pedestrian re-recognition branch includes steps S2061 to S2064:
S2061, selecting one pedestrian re-recognition branch from all the pedestrian re-recognition branches as the current re-recognition branch.
S2062, performing feature enhancement on the image features based on the current re-identification branch to obtain a first pedestrian feature and a second pedestrian feature.
S2063, layering the second pedestrian features to obtain layering features with the same number as the preset layering number.
S2064, taking the first pedestrian characteristic and all the layered characteristics as pedestrian characteristics corresponding to the current re-recognition branch, and returning to the step of selecting one pedestrian re-recognition branch from all the pedestrian re-recognition branches as the current re-recognition branch to continue until all the pedestrian re-recognition branches are selected.
In step S2061, the current re-recognition branch refers to a pedestrian re-recognition branch that is currently processing the image feature.
In step S2062, the first pedestrian feature and the second pedestrian feature are both pedestrian features obtained by global pooling of the image features. It should be appreciated that the first pedestrian feature is consistent in nature with the second pedestrian feature.
In step S2063, the above layering processing refers to processing in which the second pedestrian feature is divided into multiple layers in the H dimension in NCHW.
Here, NCHW denotes the layout [N, C, H, W], where N is the number (batch), C the channel, H the height and W the width; in memory, NCHW data are fetched in the order of the W direction first, then the H direction, then the C direction, then the N direction. That is, the layering processing in the embodiment of the present application layers the feature map corresponding to the second pedestrian feature along its height. It should be understood that the data could instead be layered along the W direction, the C direction or the N direction according to actual needs, adjusted to the actual situation.
The preset layering number refers to the number of layers layering the second pedestrian feature.
Step S2063 is explained below with an embodiment in which the image features output by the resnet50 network are fed into two pedestrian re-identification branches, part1 and part2, each using global pooling followed by a fully connected layer and trained with a multi-class softmax loss. The first pedestrian feature of part1 is f1, and the feature map corresponding to the second pedestrian feature of part1 is divided along the H dimension of NCHW into an upper layer f2 and a lower layer f3; likewise, the first pedestrian feature of part2 is f4, and the feature map corresponding to the second pedestrian feature of part2 is divided along the H dimension of NCHW into an upper layer f5, a middle layer f6 and a lower layer f7.
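To make the worked example concrete, the following sketch layers a feature map along the H dimension of NCHW as in S2063. The feature-map size is illustrative, and the names f1 to f7 follow the text above.

```python
import torch
import torch.nn as nn

def branch_features(feature_map: torch.Tensor, num_layers: int):
    """One re-identification branch: a globally pooled feature plus H-wise layered features.

    feature_map: backbone output in NCHW layout, e.g. N x 2048 x 16 x 8.
    """
    pool = nn.AdaptiveAvgPool2d(1)
    first = pool(feature_map).flatten(1)                # first pedestrian feature (f1 / f4)
    # Split the feature map into num_layers slices along H (dim=2), then pool each slice.
    slices = torch.chunk(feature_map, num_layers, dim=2)
    layered = [pool(s).flatten(1) for s in slices]      # layered features (f2, f3 / f5, f6, f7)
    return first, layered

fmap = torch.randn(4, 2048, 16, 8)                      # dummy backbone output
f1, (f2, f3) = branch_features(fmap, num_layers=2)      # part1: two layers
f4, (f5, f6, f7) = branch_features(fmap, num_layers=3)  # part2: three layers
```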
In the embodiment, the image features are processed through the pedestrian re-identification branches, and different layering operations are performed on different pedestrian re-identification branches, so that the accuracy of pedestrian re-identification can be effectively improved.
In some optional implementations of the present embodiment, in step S207, the step of performing feature fusion on the head-shoulder enhancement feature and all the pedestrian features to obtain the pedestrian re-recognition feature includes steps S2071 to S2073:
S2071, performing full connection processing on the head-shoulder enhancement features and all the first pedestrian features to obtain first full connection features.
S2072, performing full connection processing on the head-shoulder enhancement features and all layered features according to the arrangement sequence of the pedestrian re-identification branches to obtain second full connection features.
S2073, carrying out feature fusion on the first full-connection feature and the second full-connection feature to obtain the pedestrian re-identification feature.
In step S2071, the head-shoulder enhancement feature and all the first pedestrian features are reduced in dimension by convolution and fused by concat connection; fusing the head-shoulder feature raises the weight of the fine-grained head-shoulder feature in the final pedestrian re-identification feature. This fusion branch is finally followed by a multi-class softmax loss that guides pedestrian identity classification and focuses on inter-class features.
In step S2072, the processing is specifically as follows.
And aiming at each pedestrian re-identification branch, performing dimension reduction connection on all layered features in the pedestrian re-identification branch to obtain dimension reduction features corresponding to the pedestrian re-identification branch.
And carrying out full-connection fusion on all the dimension reduction features according to the arrangement sequence of the pedestrian re-identification branches to obtain fusion features.
And performing full connection processing on the head-shoulder enhancement feature and the fusion feature to obtain a second full connection feature.
Taking the example of step S2063 to further explain S2072: the layered features f2 and f3 in the second pedestrian feature of part1 are connected through 1*1 convolution dimension reduction to obtain the dimension-reduction feature corresponding to part1, and the layered features f5, f6 and f7 in the second pedestrian feature of part2 are connected through 1*1 convolution dimension reduction to obtain the dimension-reduction feature corresponding to part2. The five layers of part1 and part2 are thus evenly divided, and f2, f3, f5, f6 and f7 are reduced to the same dimension by their own independent convolution networks; they are then fused by concat connection with the head-shoulder enhancement feature f8 to obtain the second fully connected feature, to which a triplet loss is attached to guide the learning of body features.
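For illustration, a hedged sketch of the fusion in steps S2071 to S2073 is given below. The reduced dimension of 256, the number of identities, and the use of linear layers on the pooled vectors (a 1*1 convolution on a 1x1 feature map is the same operation) are assumptions, not details fixed by this application.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Fuse the head-shoulder enhancement feature f8 with the branch features f1..f7."""
    def __init__(self, feat_dim: int = 2048, reduced: int = 256, num_ids: int = 751):
        super().__init__()
        # Dimension reduction applied to pooled vectors with Linear layers.
        self.reduce_first = nn.Linear(3 * feat_dim, reduced)    # f8, f1, f4 -> first fully connected feature
        self.reduce_layers = nn.ModuleList(
            [nn.Linear(feat_dim, reduced) for _ in range(5)])   # independent reduction of f2, f3, f5, f6, f7
        self.reduce_hs = nn.Linear(feat_dim, reduced)           # head-shoulder feature for the second feature
        self.classifier = nn.Linear(reduced, num_ids)           # softmax (cross-entropy) identity head

    def forward(self, f8, f1, f4, layered):                     # layered = [f2, f3, f5, f6, f7]
        first_fc = self.reduce_first(torch.cat([f8, f1, f4], dim=1))
        second_fc = torch.cat(
            [r(f) for r, f in zip(self.reduce_layers, layered)] + [self.reduce_hs(f8)], dim=1)
        reid_feature = torch.cat([first_fc, second_fc], dim=1)  # pedestrian re-identification feature
        logits = self.classifier(first_fc)                      # softmax loss guides identity classification
        return reid_feature, second_fc, logits

id_loss = nn.CrossEntropyLoss()                                  # attached to the logits (first feature)
triplet_loss = nn.TripletMarginLoss(margin=0.3)                  # attached to second_fc (body features)
```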
In this embodiment, by fusing the head-shoulder features, the accuracy of pedestrian re-identification is effectively improved when pedestrian image information is missing or appearance has changed, for example under occlusion of the pedestrian's torso or a change of clothing.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their function and internal logic and does not limit the implementation of the embodiments of the present invention.
Fig. 3 shows a schematic block diagram of a pedestrian re-recognition apparatus in one-to-one correspondence with the pedestrian re-recognition method of the above embodiment. As shown in fig. 3, the pedestrian re-recognition device includes an image acquisition module 31 to be recognized, a detection result acquisition module 32, a head-shoulder image acquisition module 33, a head-shoulder enhancement feature acquisition module 34, an image feature acquisition module 35, a pedestrian feature acquisition module 36, a pedestrian re-recognition feature acquisition module 37, and a recognition module 38. The functional modules are described in detail below.
The image to be identified acquiring module 31 is configured to acquire an image to be identified.
The detection result obtaining module 32 is configured to perform head-shoulder position detection on the image to be identified based on the head-shoulder detection network, so as to obtain a detection result.
And the head-shoulder image acquisition module 33 is configured to cut the image to be identified to obtain a head-shoulder image when the detection result indicates that the image to be identified has a head-shoulder part.
The head-shoulder enhancement feature obtaining module 34 is configured to perform feature extraction on the head-shoulder image, and perform feature enhancement on the extracted features based on an attention algorithm, so as to obtain head-shoulder enhancement features.
The image feature obtaining module 35 is configured to perform feature extraction on an image to be identified, obtain an image feature, and input the image feature into at least two pedestrian re-identification branches.
The pedestrian feature obtaining module 36 is configured to perform feature enhancement on the image feature for each pedestrian re-recognition branch, so as to obtain a pedestrian feature corresponding to the pedestrian re-recognition branch.
The pedestrian re-recognition feature acquisition module 37 is configured to perform feature fusion on the head-shoulder enhancement feature and all the pedestrian features to obtain a pedestrian re-recognition feature.
The recognition module 38 is configured to perform recognition processing on the image to be recognized based on the pedestrian re-recognition feature, so as to obtain a recognition result.
Optionally, the detection result acquisition module 32 includes.
The fine-granularity feature acquisition unit is used for carrying out fine-granularity feature extraction on the image to be identified based on the head-shoulder detection network to obtain fine-granularity features.
The detection result acquisition unit is used for detecting the head and shoulder parts of the image to be identified based on the fine granularity characteristics to obtain a detection result.
Optionally, the head-shoulder image acquisition module 33 includes.
And the head-shoulder boundary frame acquisition unit is used for carrying out head-shoulder boundary frame prediction on the image to be identified based on the fine granularity characteristics to obtain a head-shoulder boundary frame.
And the cutting unit is used for cutting the image to be identified based on the head-shoulder boundary box to obtain the head-shoulder image.
Optionally, the pedestrian feature acquisition module 36 includes.
And the current re-identification branch acquisition unit is used for selecting one pedestrian re-identification branch from all the pedestrian re-identification branches as the current re-identification branch.
And the characteristic enhancement unit is used for carrying out characteristic enhancement on the image characteristics based on the current re-identification branch to obtain a first pedestrian characteristic and a second pedestrian characteristic.
And the layering unit is used for layering the second pedestrian characteristics to obtain layering characteristics with the same number as the preset layering number.
And the circulation unit is used for taking the first pedestrian characteristic and all the layered characteristics as pedestrian characteristics corresponding to the current re-identification branch, and returning to the step of selecting one pedestrian re-identification branch from all the pedestrian re-identification branches as the current re-identification branch to continue execution until all the pedestrian re-identification branches are selected.
Optionally, the pedestrian re-identification feature acquisition module 37 includes.
And the first full-connection unit is used for carrying out full-connection processing on the head-shoulder enhancement feature and all the first pedestrian features to obtain first full-connection features.
And the second full-connection unit is used for carrying out full-connection processing on the head-shoulder enhancement features and all the layered features according to the arrangement sequence of the pedestrian re-identification branches to obtain second full-connection features.
And the feature fusion unit is used for carrying out feature fusion on the first full-connection feature and the second full-connection feature to obtain the pedestrian re-identification feature.
Optionally, the second full connection unit comprises.
The dimension reduction feature acquisition unit is used for carrying out dimension reduction connection on all layered features in the pedestrian re-identification branches aiming at each pedestrian re-identification branch to obtain dimension reduction features corresponding to the pedestrian re-identification branches.
And the fusion feature acquisition unit is used for carrying out full-connection fusion on all the dimension reduction features according to the arrangement sequence of the pedestrian re-identification branches to obtain fusion features.
And the full-connection unit is used for carrying out full-connection processing on the head-shoulder enhancement feature and the fusion feature to obtain a second full-connection feature.
The specific limitation of the pedestrian re-recognition device can be referred to as the limitation of the pedestrian re-recognition method hereinabove, and will not be repeated here. The respective modules in the pedestrian re-recognition apparatus described above may be implemented in whole or in part by software, hardware, and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 4, fig. 4 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42 and a network interface 43 communicatively connected to each other via a system bus. It is noted that only a computer device 4 having the memory 41, the processor 42 and the network interface 43 is shown in the figure, but it should be understood that not all of the illustrated components are required and that more or fewer components may be implemented instead. It will be appreciated by those skilled in the art that the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, microprocessors, Application Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), Digital Signal Processors (DSPs), embedded devices, and the like.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The computer equipment can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 41 includes at least one type of readable storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the computer device 4. Of course, the memory 41 may also comprise both an internal storage unit of the computer device 4 and an external storage device. In this embodiment, the memory 41 is typically used for storing the operating system and the various application software installed on the computer device 4, such as program code for controlling electronic files. Further, the memory 41 may be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute a program code stored in the memory 41 or process data, such as a program code for executing control of an electronic file.
The network interface 43 may comprise a wireless network interface or a wired network interface, which network interface 43 is typically used for establishing a communication connection between the computer device 4 and other electronic devices.
The present application also provides another embodiment, namely, a computer-readable storage medium storing a computer program executable by at least one processor to cause the at least one processor to perform the steps of the pedestrian re-recognition method described above.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method described in the embodiments of the present application.
It is apparent that the embodiments described above are only some, but not all, embodiments of the present application; the preferred embodiments are given in the drawings, but they do not limit the patent scope of the present application. This application may be embodied in many different forms; these embodiments are provided so that the disclosure of the present application will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the technical solutions described in the foregoing embodiments, or equivalents may be substituted for some of their features. All equivalent structures made using the specification and the drawings of the application, applied directly or indirectly in other related technical fields, likewise fall within the protection scope of the application.

Claims (9)

1. A pedestrian re-recognition method, characterized in that the pedestrian re-recognition method comprises:
acquiring an image to be identified;
based on a head-shoulder detection network, detecting the head-shoulder part of the image to be identified to obtain a detection result;
when the detection result is that the head and shoulder positions of the image to be identified exist, cutting the image to be identified to obtain a head and shoulder image;
extracting features of the head-shoulder image, and carrying out feature enhancement on the extracted features based on an attention algorithm to obtain head-shoulder enhancement features;
extracting features of the image to be identified to obtain image features, and inputting the image features into at least two pedestrian re-identification branches;
performing feature enhancement on the image features aiming at each pedestrian re-recognition branch to obtain pedestrian features corresponding to the pedestrian re-recognition branches;
performing feature fusion on the head-shoulder enhancement features and all the pedestrian features to obtain pedestrian re-identification features;
based on the pedestrian re-recognition characteristics, recognizing the image to be recognized to obtain a recognition result;
the step of performing feature enhancement on the image features for each pedestrian re-recognition branch to obtain pedestrian features corresponding to the pedestrian re-recognition branches includes:
selecting one pedestrian re-identification branch from all the pedestrian re-identification branches as a current re-identification branch;
based on the current re-recognition branch, carrying out feature enhancement on the image features to obtain a first pedestrian feature and a second pedestrian feature;
layering the second pedestrian characteristic to obtain layering characteristics with the same number as the preset layering number;
and taking the first pedestrian characteristic and all the layered characteristics as pedestrian characteristics corresponding to the current re-recognition branches, and returning to the step of selecting one pedestrian re-recognition branch from all the pedestrian re-recognition branches as the current re-recognition branch to continue execution until all the pedestrian re-recognition branches are selected.
2. The pedestrian re-recognition method according to claim 1, wherein the step of performing head-shoulder position detection on the image to be recognized based on a head-shoulder detection network to obtain a detection result comprises:
based on a head-shoulder detection network, carrying out fine-granularity feature extraction on the image to be identified to obtain fine-granularity features;
and detecting the head and shoulder parts of the image to be identified based on the fine granularity characteristics to obtain a detection result.
3. The pedestrian re-recognition method according to claim 2, wherein when the detection result is that the head and shoulder positions of the image to be recognized exist, the step of clipping the image to be recognized to obtain the head and shoulder image includes:
performing head-shoulder boundary frame prediction on the image to be identified based on the fine granularity characteristics to obtain a head-shoulder boundary frame;
and cutting the image to be identified based on the head-shoulder boundary box to obtain a head-shoulder image.
4. The pedestrian re-identification method according to claim 1, wherein the step of performing feature fusion on the head-shoulder enhanced features and all the pedestrian features to obtain the pedestrian re-identification features comprises:
performing full-connection processing on the head-shoulder enhanced features and all the first pedestrian features to obtain a first fully connected feature;
performing full-connection processing on the head-shoulder enhanced features and all the layered features according to the arrangement order of the pedestrian re-identification branches to obtain a second fully connected feature;
and performing feature fusion on the first fully connected feature and the second fully connected feature to obtain the pedestrian re-identification features.
5. The pedestrian re-identification method according to claim 4, wherein the step of performing full-connection processing on the head-shoulder enhanced features and all the layered features according to the arrangement order of the pedestrian re-identification branches to obtain the second fully connected feature comprises:
for each pedestrian re-identification branch, performing dimension-reduction connection on all the layered features of that branch to obtain a dimension-reduced feature corresponding to that branch;
performing full-connection fusion on all the dimension-reduced features according to the arrangement order of the pedestrian re-identification branches to obtain a fused feature;
and performing full-connection processing on the head-shoulder enhanced features and the fused feature to obtain the second fully connected feature.
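The fusion in claims 4 and 5 can be sketched as follows. Feature dimensions, the use of linear layers for the full-connection and dimension-reduction steps, and the final concatenation are illustrative assumptions rather than the patented configuration; the branch outputs are in the format produced by the earlier per-branch sketch.

```python
# Minimal sketch of claims 4-5: per-branch dimension reduction of the layered features,
# order-preserving fusion across branches, and full connection with the head-shoulder
# enhanced feature. Dimensions and layer choices are assumptions.
import torch
import torch.nn as nn

class ReIdFusion(nn.Module):
    def __init__(self, feat_dim=512, num_branches=2, num_layers=3, reduced_dim=128, out_dim=256):
        super().__init__()
        # dimension-reduction connection for the layered features of each branch
        self.reduce = nn.ModuleList(
            nn.Linear(num_layers * feat_dim, reduced_dim) for _ in range(num_branches))
        self.fc_first = nn.Linear(feat_dim + num_branches * feat_dim, out_dim)      # -> first fully connected feature
        self.fc_second = nn.Linear(feat_dim + num_branches * reduced_dim, out_dim)  # -> second fully connected feature

    def forward(self, head_shoulder_feat, branch_outputs):
        # branch_outputs: list, in branch order, of (first_feature, [layered features])
        firsts = [first for first, _ in branch_outputs]
        first_fc = self.fc_first(torch.cat([head_shoulder_feat] + firsts, dim=1))
        reduced = [reduce(torch.cat(layered, dim=1))          # per-branch dimension reduction
                   for reduce, (_, layered) in zip(self.reduce, branch_outputs)]
        second_fc = self.fc_second(torch.cat([head_shoulder_feat] + reduced, dim=1))
        return torch.cat([first_fc, second_fc], dim=1)        # pedestrian re-identification feature

# example with dummy inputs shaped like the earlier branch outputs
fusion = ReIdFusion()
head_shoulder_feat = torch.randn(4, 512)
branch_outputs = [(torch.randn(4, 512), [torch.randn(4, 512) for _ in range(3)]) for _ in range(2)]
reid_feature = fusion(head_shoulder_feat, branch_outputs)     # shape (4, 512)
```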
6. A pedestrian re-identification device, characterized in that the pedestrian re-identification device comprises:
an image acquisition module, configured to acquire an image to be identified;
a detection result acquisition module, configured to perform head-shoulder detection on the image to be identified based on a head-shoulder detection network to obtain a detection result;
a head-shoulder image acquisition module, configured to crop the image to be identified to obtain a head-shoulder image when the detection result indicates that a head-shoulder part is present in the image to be identified;
a head-shoulder enhanced feature acquisition module, configured to extract features from the head-shoulder image and to perform feature enhancement on the extracted features based on an attention algorithm to obtain head-shoulder enhanced features;
an image feature acquisition module, configured to perform feature extraction on the image to be identified to obtain image features, and to input the image features into at least two pedestrian re-identification branches;
a pedestrian feature acquisition module, configured to perform feature enhancement on the image features for each pedestrian re-identification branch to obtain pedestrian features corresponding to that pedestrian re-identification branch;
a pedestrian re-identification feature acquisition module, configured to perform feature fusion on the head-shoulder enhanced features and all the pedestrian features to obtain pedestrian re-identification features;
an identification module, configured to identify the image to be identified based on the pedestrian re-identification features to obtain an identification result;
wherein the pedestrian feature acquisition module comprises:
a current re-identification branch acquisition unit, configured to select one pedestrian re-identification branch from all the pedestrian re-identification branches as a current re-identification branch;
a feature enhancement unit, configured to perform feature enhancement on the image features based on the current re-identification branch to obtain a first pedestrian feature and a second pedestrian feature;
a layering unit, configured to layer the second pedestrian feature to obtain a number of layered features equal to a preset number of layers;
and a circulation unit, configured to take the first pedestrian feature and all the layered features as the pedestrian features corresponding to the current re-identification branch, and to return to the step of selecting one pedestrian re-identification branch from all the pedestrian re-identification branches as the current re-identification branch until all the pedestrian re-identification branches have been selected.
7. The pedestrian re-identification device according to claim 6, wherein the detection result acquisition module comprises:
a fine-grained feature acquisition unit, configured to perform fine-grained feature extraction on the image to be identified based on the head-shoulder detection network to obtain fine-grained features;
and a detection result acquisition unit, configured to perform head-shoulder detection on the image to be identified based on the fine-grained features to obtain the detection result.
8. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the pedestrian re-identification method according to any one of claims 1 to 5 when executing the computer program.
9. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the pedestrian re-identification method according to any one of claims 1 to 5.
CN202211302074.1A 2022-10-24 2022-10-24 Pedestrian re-identification method and device, computer equipment and storage medium Active CN115631509B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211302074.1A CN115631509B (en) 2022-10-24 2022-10-24 Pedestrian re-identification method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211302074.1A CN115631509B (en) 2022-10-24 2022-10-24 Pedestrian re-identification method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115631509A CN115631509A (en) 2023-01-20
CN115631509B true CN115631509B (en) 2023-05-26

Family

ID=84906622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211302074.1A Active CN115631509B (en) 2022-10-24 2022-10-24 Pedestrian re-identification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115631509B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110543841A (en) * 2019-08-21 2019-12-06 中科视语(北京)科技有限公司 Pedestrian re-identification method, system, electronic device and medium
CN114266946A (en) * 2021-12-31 2022-04-01 智慧眼科技股份有限公司 Feature identification method and device under shielding condition, computer equipment and medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106409A (en) * 2013-01-29 2013-05-15 北京交通大学 Composite character extraction method aiming at head shoulder detection
US10068135B2 (en) * 2016-12-22 2018-09-04 TCL Research America Inc. Face detection, identification, and tracking system for robotic devices
EP3401908A1 (en) * 2017-05-12 2018-11-14 Thomson Licensing Device and method for walker identification
CN109871821B (en) * 2019-03-04 2020-10-09 中国科学院重庆绿色智能技术研究院 Pedestrian re-identification method, device, equipment and storage medium of self-adaptive network
CN110543823B (en) * 2019-07-30 2024-03-19 平安科技(深圳)有限公司 Pedestrian re-identification method and device based on residual error network and computer equipment
CN112307886A (en) * 2020-08-25 2021-02-02 北京京东尚科信息技术有限公司 Pedestrian re-identification method and device
CN112597943A (en) * 2020-12-28 2021-04-02 北京眼神智能科技有限公司 Feature extraction method and device for pedestrian re-identification, electronic equipment and storage medium
CN112801008A (en) * 2021-02-05 2021-05-14 电子科技大学中山学院 Pedestrian re-identification method and device, electronic equipment and readable storage medium
CN112818967B (en) * 2021-04-16 2021-07-09 杭州魔点科技有限公司 Child identity recognition method based on face recognition and head and shoulder recognition
CN114821647A (en) * 2022-04-25 2022-07-29 济南博观智能科技有限公司 Sleeping post identification method, device, equipment and medium
CN114783037B (en) * 2022-06-17 2022-11-22 浙江大华技术股份有限公司 Object re-recognition method, object re-recognition apparatus, and computer-readable storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110543841A (en) * 2019-08-21 2019-12-06 中科视语(北京)科技有限公司 Pedestrian re-identification method, system, electronic device and medium
CN114266946A (en) * 2021-12-31 2022-04-01 智慧眼科技股份有限公司 Feature identification method and device under shielding condition, computer equipment and medium

Also Published As

Publication number Publication date
CN115631509A (en) 2023-01-20

Similar Documents

Publication Publication Date Title
CN111950329A (en) Target detection and model training method and device, computer equipment and storage medium
CN110197146B (en) Face image analysis method based on deep learning, electronic device and storage medium
CN106682602B (en) Driver behavior identification method and terminal
CN107844794B (en) Image recognition method and device
CN109657533A (en) Pedestrian recognition methods and Related product again
CN110033018B (en) Graph similarity judging method and device and computer readable storage medium
CN112926654B (en) Pre-labeling model training and certificate pre-labeling method, device, equipment and medium
CN112052837A (en) Target detection method and device based on artificial intelligence
KR20190106853A (en) Apparatus and method for recognition of text information
CN111667001A (en) Target re-identification method and device, computer equipment and storage medium
CN112016502B (en) Safety belt detection method, safety belt detection device, computer equipment and storage medium
CN110795714A (en) Identity authentication method and device, computer equipment and storage medium
CN110232381B (en) License plate segmentation method, license plate segmentation device, computer equipment and computer readable storage medium
CN114937285A (en) Dynamic gesture recognition method, device, equipment and storage medium
CN111652181B (en) Target tracking method and device and electronic equipment
CN111160251B (en) Living body identification method and device
CN115424335B (en) Living body recognition model training method, living body recognition method and related equipment
CN112613496A (en) Pedestrian re-identification method and device, electronic equipment and storage medium
CN113705293A (en) Image scene recognition method, device, equipment and readable storage medium
CN115631509B (en) Pedestrian re-identification method and device, computer equipment and storage medium
CN116246287A (en) Target object recognition method, training device and storage medium
CN115700845A (en) Face recognition model training method, face recognition device and related equipment
Li et al. Detection of partially occluded pedestrians by an enhanced cascade detector
CN114842411A (en) Group behavior identification method based on complementary space-time information modeling
CN115631510B (en) Pedestrian re-identification method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: No. 205, Building B1, Huigu Science and Technology Industrial Park, No. 336 Bachelor Road, Bachelor Street, Yuelu District, Changsha City, Hunan Province, 410000

Patentee after: Wisdom Eye Technology Co.,Ltd.

Country or region after: China

Address before: 410205, Changsha high tech Zone, Hunan Province, China

Patentee before: Wisdom Eye Technology Co.,Ltd.

Country or region before: China