CN111582126B - Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion - Google Patents

Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion

Info

Publication number
CN111582126B
CN111582126B CN202010360873.9A
Authority
CN
China
Prior art keywords
pedestrian
scale
network
features
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010360873.9A
Other languages
Chinese (zh)
Other versions
CN111582126A (en)
Inventor
王慧燕
陈海英
陶家威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Gongshang University
Original Assignee
Zhejiang Gongshang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Gongshang University filed Critical Zhejiang Gongshang University
Priority to CN202010360873.9A priority Critical patent/CN111582126B/en
Publication of CN111582126A publication Critical patent/CN111582126A/en
Application granted granted Critical
Publication of CN111582126B publication Critical patent/CN111582126B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • G06V40/25Recognition of walking or running movements, e.g. gait recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a pedestrian re-identification method based on multi-scale pedestrian contour segmentation and fusion. First, the data are preprocessed. Second, global features of the image and contour features of the pedestrian are extracted and fused. The pedestrian re-identification network is then trained with a label-smoothing loss function to optimize the network parameters. Finally, for the query set and candidate set contained in the pedestrian re-identification dataset, the Euclidean distance between a specified object in the query set and each object in the candidate set is computed, and the distances are sorted in ascending order to obtain the re-identification ranking. The method discards clothing features and instead learns the pedestrian's body contour, combined with global features, to identify the person. The invention therefore re-identifies pedestrians more reliably whether or not their clothing has been changed.

Description

Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion
Technical Field
The invention relates to the technical field of computer vision, in particular to a pedestrian re-identification method based on multi-scale pedestrian contour segmentation and fusion.
Background
Pedestrian re-identification, also known as person re-identification (Re-ID), is a technique that uses computer vision to determine whether a particular pedestrian appears in an image or video sequence, i.e., to establish a pedestrian's identity across images captured by different cameras. Given an image containing a target pedestrian (the query), a Re-ID system searches a large collection of pedestrian images (the gallery) for images of the same person; it is widely regarded as a sub-problem of image retrieval. Given a pedestrian image from one surveillance device, the matching images captured by other devices are retrieved. Re-ID compensates for the limited field of view of fixed cameras, can be combined with pedestrian detection and tracking, and is widely applicable to video surveillance, security, and related fields. Re-ID has attracted great interest from academia and industry because of its broad application potential, such as video surveillance and cross-camera tracking.
Re-ID has advanced very rapidly in the last two years, yet compared with face recognition it has seen very few deployed applications. The reason is not that Re-ID models are weak or that accuracy on benchmark datasets is too low; rather, the Re-ID setting is more complex than the face task, and some fundamental problems remain unsolved. Re-ID is still a very challenging task because of the many uncontrolled sources of variation, such as large changes in pose and viewpoint, complex illumination changes, and poor image quality.
The most basic and pressing of these problems, namely occlusion, non-visible-light imaging, and pedestrians changing their clothing, degrade almost all existing Re-ID models severely, to the point of failure.
Disclosure of Invention
To address the above problems and the weakness of existing pedestrian re-identification techniques at recognizing clothing-changed pedestrians (that is, identifying a pedestrian after he or she has changed clothes), a pedestrian re-identification method based on multi-scale pedestrian contour segmentation and fusion is provided.
The technical scheme adopted for solving the technical problems is as follows:
step (1), data preprocessing
Acquire a sufficient number of sample images and normalize them to obtain a dataset.
Step (2), extracting global features of the image and outline features of pedestrians
Inputting the data set into a pedestrian global feature extraction network to obtain global features of the image;
inputting the data set into a multi-scale pedestrian contour segmentation network to obtain contour features of pedestrians;
the multi-scale pedestrian contour segmentation network adopts ResNet obtained by pre-training on ImageNet as a main feature extraction network thereof, and a new residual block is added for multi-scale feature learning on the basis of the network, and the new residual block replaces common convolution by using hole convolution;
the top of the new residual block adopts a hole space pyramid pool which can acquire contour scale information of different rows of human bodies.
Step (3), input the global features and the contour features into the pedestrian re-identification network for fusion.
Step (4), train the pedestrian re-identification network with a label-smoothing loss function to optimize the network parameters, specifically as follows:
pre-train InceptionResNetv2 on the ImageNet database to obtain a pre-trained network, feed the feature vector produced by fusing the global features and contour features into the label-smoothing loss function, and train the parameters of the pedestrian re-identification network with a back-propagation algorithm until the whole network converges.
Step (5), for the query set and the candidate set contained in the pedestrian re-identification dataset, compute the Euclidean distance between a specified object in the query set and each object in the candidate set, then sort the computed distances in ascending order to obtain the pedestrian re-identification ranking.
Further, the preprocessing in step (1) is specifically: set an input image size; if a sample image is larger than this size, obtain the sample by random cropping; if it is smaller, obtain the sample by proportional up-scaling followed by cropping.
Further, the hole convolution in the new residual block lets the deep convolutional neural network control which feature pixels are sampled, adjusting the receptive field of the convolution kernel to obtain multi-scale information; each hole convolution uses a different dilation rate to capture multi-scale context information.
Further, the hole space pyramid pooling uses hole convolutions with different dilation rates to classify regions of arbitrary scale.
Further, the hole space pyramid pooling comprises two parts: multi-scale hole convolution and image-level features;
the multi-scale hole convolution comprises a 1x1 ordinary convolution and three 3x3 hole convolutions with dilation rates 6, 12 and 18;
the image-level features are obtained by averaging the input over its spatial dimensions (dimensions [1,2]), applying an ordinary convolution, and resizing back to the input image size by linear interpolation; finally the four convolution outputs are concatenated with the image-level features, and a further convolution produces the output of the network.
Further, in step (3), the global features and the contour features are fused by point-by-point addition.
Further, in step (3), when the two features have different dimensions, they are converted into vectors of the same dimension through a linear transformation.
The invention has the beneficial effects that:
1. The influence of the background on the Re-ID process is removed, and the person is identified through the pedestrian's contour, which is the identification process closest to how humans recognize pedestrians.
2. Features of the pedestrian's clothing are removed. This remedies the weakness of existing pedestrian re-identification techniques at recognizing clothing-changed pedestrians, because the network does not depend on clothing features but instead learns the pedestrian's body contour for identification. The two branches of the pedestrian re-identification method based on multi-scale pedestrian contour segmentation can learn both global features and the pedestrian's body-contour features well, so the re-identification system performs better whether or not the pedestrian's clothing has been changed.
Drawings
FIG. 1 is a general block diagram according to the present invention;
FIG. 2 is a network architecture diagram of a multi-scale pedestrian profile segmentation network branch in accordance with the present invention;
fig. 3 is a block diagram of a dual-branch re-identification network according to the present invention.
Detailed Description
To describe the present invention more specifically, the technical solution is described in detail below with reference to the accompanying drawings and specific embodiments; a flow chart of an embodiment of the method is shown in Fig. 1. The disclosed pedestrian re-identification method based on pedestrian contour segmentation comprises the following steps:
step (1), acquiring a sufficient number of pedestrian sample images, wherein the images can be downloaded from a network (mark 1501, dukeMTMC-reID, CUHK 03) or can be photographed by themselves; the pedestrian sample image is normalized, taking an input image with the size of 512 multiplied by 512 as an example, if the sample image is larger than the size, the pedestrian sample image is obtained by random clipping, and if the size of the pedestrian sample image is smaller than the size, the pedestrian sample image is obtained by scaling up and clipping.
Step (2), extracting global features of the image and outline features of pedestrians
Inputting the data set into a pedestrian global feature extraction network to obtain global features of the image;
inputting the data set into a multi-scale pedestrian contour segmentation network to obtain contour features of pedestrians;
the two branches can learn the global features of the image and can learn the human body contour features of pedestrians well. These two branches are effective against the shortcomings of the existing pedestrian re-recognition technology for changing clothes pedestrian recognition, because the network does not depend on the clothing features on the clothing, and the outline of the human body of the pedestrian is learned for recognizing the pedestrian. For the pedestrian re-identification system, the pedestrian can be better re-identified no matter whether the pedestrian is replaced or not.
As shown in fig. 2, the multi-scale pedestrian contour segmentation network learns multi-scale contextual features. It uses a ResNet pre-trained on ImageNet as its main feature extraction network, and adds on top of it a new residual block for multi-scale feature learning, in which hole convolution 301 replaces the ordinary convolution. Hole convolution lets the deep convolutional neural network control which feature pixels are sampled, adjusting the receptive field of the convolution kernel to obtain multi-scale information.
In addition, each hole convolution within this residual block uses a different dilation rate to capture multi-scale context information, and hole space pyramid pooling 302 is applied on top of this residual block. The hole space pyramid pooling uses hole convolutions with different dilation rates to classify regions of arbitrary scale, so contour-scale information for different persons can be obtained through this structure.
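A quick way to see why the different dilation rates capture different scales is the standard effective-kernel formula k_eff = k + (k - 1)(r - 1) for kernel size k and dilation rate r; this arithmetic is textbook material, not quoted from the patent:

```python
# Effective receptive field of a dilated ("hole") convolution kernel:
# a 3x3 kernel with rate r covers 3 + 2*(r - 1) pixels per side.
def effective_kernel(k, rate):
    return k + (k - 1) * (rate - 1)

# Branch sizes for the rates used in the hole space pyramid pooling:
branch_fields = {rate: effective_kernel(3, rate) for rate in (1, 6, 12, 18)}
```

With rates 6, 12 and 18 the same 3x3 kernel spans 13, 25 and 37 pixels, which is how the pooling sees pedestrian contours at several scales at once.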
The hole space pyramid pooling consists of two parts: multi-scale hole convolution and image-level features. The multi-scale hole convolution comprises a 1x1 ordinary convolution and three 3x3 hole convolutions with dilation rates 6, 12 and 18; the image-level features are obtained by averaging the input over its spatial dimensions (dimensions [1,2]), applying an ordinary convolution, and resizing back to the input image size by linear interpolation; finally the four convolution outputs are concatenated with the image-level features, and a further convolution produces the output of the network. The network output is a pixel-wise softmax:

p_k(x) = exp(a_k(x)) / Σ_{k'=1}^{K} exp(a_{k'}(x))

where x is a pixel position on the two-dimensional plane, a_k(x) is the value of the k-th channel at pixel x in the last output layer of the network, and p_k(x) is the probability that pixel x belongs to class k.
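The pixel-wise softmax described above can be sketched in plain Python for a single pixel; the max-subtraction is a standard numerical-stability trick, not something stated in the patent:

```python
import math

# p_k(x) = exp(a_k(x)) / sum_k' exp(a_k'(x)) for one pixel x.
# `activations` holds a_k(x) across the K output channels.
def pixel_softmax(activations):
    m = max(activations)                       # subtract max for stability
    exps = [math.exp(a - m) for a in activations]
    s = sum(exps)
    return [e / s for e in exps]               # probabilities over K classes
```

Applying this at every spatial position turns the final feature map into a per-pixel class distribution, from which the pedestrian contour map is read off.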
Meanwhile, the multi-scale pedestrian contour segmentation branch is pre-trained for pedestrian contour segmentation using the large amount of segmentation annotation in the COCO dataset, so that in the pedestrian re-identification method based on multi-scale pedestrian contour segmentation and fusion, a pedestrian picture input to this branch yields a pedestrian contour map.
Step (3), the pedestrian global feature extraction network and the multi-scale pedestrian contour segmentation network are fused through the structure shown in fig. 3. The network structure of fig. 3 is prior art and is not described in detail. The two branches, the pedestrian global feature extraction branch and the multi-scale pedestrian contour segmentation branch, use InceptionResNetv2 as the backbone network, which is pre-trained on the ImageNet database. Because InceptionResNetv2 fuses features of different scales, the backbone can fuse features of different sizes from the multi-scale contour segmentation branch, correspond to that branch better, and improve accuracy.
InceptionResNetv2 replaces an nxn convolution with a 1xn convolution followed by an nx1 convolution, effectively reducing computation, and uses several 3x3 convolutions in place of the 5x5 and 7x7 convolutions, further reducing computation and increasing the speed of the fused multi-scale contour segmentation re-identification network. In addition, InceptionResNetv2 combines ResNet with the Inception network structure; since the multi-scale pedestrian contour segmentation branch also adopts ResNet, the two correspond, which further improves accuracy.
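The savings from these factorizations are easy to check by counting weights per input/output channel pair; this is standard Inception-style arithmetic, not text from the patent:

```python
# Weight count of a sequence of 2-D kernels, per channel pair.
def conv_weights(*kernels):
    return sum(kh * kw for kh, kw in kernels)

full_7x7   = conv_weights((7, 7))           # one 7x7 kernel
factored_7 = conv_weights((1, 7), (7, 1))   # 1xn followed by nx1
full_5x5   = conv_weights((5, 5))           # one 5x5 kernel
two_3x3    = conv_weights((3, 3), (3, 3))   # stacked 3x3 replacement
```

The 1x7 + 7x1 pair uses 14 weights instead of 49, and two 3x3 kernels use 18 instead of 25, which is the source of the speed-up claimed above.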
A pre-trained network is obtained by training InceptionResNetv2 on the ImageNet database; the global features and contour features are then fused by point-by-point addition to obtain a feature vector. This feature vector is fed into the cross-entropy loss function, and the parameters of the defined multi-scale-contour-segmentation-fusion re-identification network are trained with a back-propagation algorithm to optimize the network model.
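The fusion in step (3) is point-by-point addition, with a linear transformation when the two features have different dimensions; a minimal sketch with plain Python lists, where the projection matrix W stands in for a learned layer (an assumption, not the patent's weights):

```python
# Apply a linear map W (rows x len(v) nested lists) to vector v.
def linear_map(W, v):
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in W]

# Fuse a global feature and a contour feature by point-wise addition,
# projecting the contour feature first if its dimension differs.
def fuse(global_feat, contour_feat, W=None):
    if len(contour_feat) != len(global_feat):
        contour_feat = linear_map(W, contour_feat)
    return [g + c for g, c in zip(global_feat, contour_feat)]
```

In the real network both features are high-dimensional tensors and W is trained jointly with the rest of the model; the list version only shows the shape logic.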
Step (4), model training adopts a label-smoothing loss. Classification for pedestrian re-identification usually uses the cross-entropy loss function:

L = - Σ_{i=1}^{N} y_i log(p_i)

where N is the total number of pedestrian classes and i is the pedestrian label. When an image is input, y_i is the label indicator for the pedestrian in the image: y_i is 1 for the correct class i and 0 otherwise, and p_i is the probability the network predicts that the pedestrian belongs to label i. The label-smoothing loss function is introduced because the cross-entropy loss depends too heavily on the correct pedestrian label, which easily causes over-fitting during training, and smoothing avoids this. There may also be a small number of wrong labels in the pedestrian training samples that affect the prediction result, and the label-smoothing loss likewise prevents the model from over-relying on labels during training. Pedestrian label smoothing sets an error rate epsilon for the labels during training and trains with 1 - epsilon as the true-label probability.
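The smoothing scheme above can be sketched in plain Python; spreading the remaining epsilon uniformly over the other N - 1 classes and the value eps = 0.1 are common conventions assumed here, not values stated in the patent:

```python
import math

# Smoothed one-hot targets: the true class keeps 1 - eps, the rest share eps.
def smoothed_targets(true_idx, n_classes, eps=0.1):
    return [1.0 - eps if i == true_idx else eps / (n_classes - 1)
            for i in range(n_classes)]

# Cross-entropy between target distribution and predicted probabilities.
def cross_entropy(targets, probs):
    return -sum(t * math.log(p) for t, p in zip(targets, probs))
```

Because the targets are no longer exactly 0/1, the loss never pushes the predicted probability of the true class all the way to 1, which is what curbs over-fitting to possibly noisy labels.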
Step (5), test results
For the query set and the candidate set contained in the pedestrian re-identification dataset, compute the Euclidean distance between a specified object in the query set and each object in the candidate set, then sort the computed distances in ascending order to obtain the pedestrian re-identification ranking.
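Step (5) amounts to a nearest-neighbour ranking; a minimal sketch over feature vectors, with made-up example values (the patent does not specify feature dimensions):

```python
import math

# Euclidean distance between two feature vectors.
def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Rank gallery (candidate) indices by ascending distance to the query,
# so the most similar candidate comes first.
def rank_candidates(query, gallery):
    dists = [(euclidean(query, g), idx) for idx, g in enumerate(gallery)]
    return [idx for _, idx in sorted(dists)]
```

The first index in the returned ranking is the system's best match for the queried pedestrian.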
The above description of the embodiments of the invention has been presented with reference to the drawings, but these descriptions should not be construed as limiting the scope of the invention, which is defined by the appended claims; any changes based on the claims are intended to be covered by the invention.

Claims (4)

1. The pedestrian re-identification method based on multi-scale pedestrian contour segmentation and fusion is characterized by comprising the following steps of:
step (1), data preprocessing
Acquiring a sufficient number of sample images, and carrying out normalization processing on the sample images to obtain a data set;
step (2), extracting global features of the image and outline features of pedestrians
Inputting the data set into a pedestrian global feature extraction network to obtain global features of the image;
inputting the data set into a multi-scale pedestrian contour segmentation network to obtain contour features of pedestrians;
the multi-scale pedestrian contour segmentation network adopts ResNet obtained by pre-training on ImageNet as a main feature extraction network thereof, and a new residual block is added for multi-scale feature learning on the basis of the network, and the new residual block replaces common convolution by using hole convolution;
the top of the new residual block is subjected to pyramid pooling by adopting a cavity space capable of acquiring contour scale information of different rows of human bodies;
step (3), inputting the global features and the outline features into a pedestrian re-recognition network for fusion;
step (4), training the pedestrian re-identification network by adopting a label-smoothing loss function to optimize network parameters, specifically comprising:
pre-training InceptionResNetv2 on the ImageNet database to obtain a pre-trained network, inputting the feature vector generated by fusing the global features and contour features into the label-smoothing loss function, and training the parameters of the pedestrian re-identification network with a back-propagation algorithm until the whole network converges;
step (5), for the query set and the candidate set contained in the pedestrian re-identification dataset, calculating the Euclidean distance between a specified object in the query set and each object in the candidate set, and then sorting the calculated distances in ascending order to obtain the pedestrian re-identification ranking;
the hole convolution of the new residual block lets the deep convolutional neural network control which feature pixels are sampled, adjusting the receptive field of the convolution kernel to obtain multi-scale information, and each hole convolution uses a different dilation rate to capture multi-scale context information;
the hole space pyramid pooling uses hole convolutions with different dilation rates to classify regions of arbitrary scale;
the hole space pyramid pooling comprises two parts: multi-scale hole convolution and image-level features;
the multi-scale hole convolution comprises a 1x1 ordinary convolution and three 3x3 hole convolutions with dilation rates 6, 12 and 18;
the image-level features are obtained by averaging the input over its spatial dimensions (dimensions [1,2]), applying an ordinary convolution, and resizing back to the input image size by linear interpolation; finally the four convolution outputs are concatenated with the image-level features, and a further convolution produces the output of the network.
2. The pedestrian re-identification method based on multi-scale pedestrian contour segmentation and fusion according to claim 1, characterized in that the preprocessing in step (1) is specifically: setting an input image size; if a sample image is larger than this size, obtaining the sample by random cropping; if it is smaller, obtaining it by proportional up-scaling followed by cropping.
3. The pedestrian re-identification method based on multi-scale pedestrian contour segmentation and fusion according to claim 1, characterized in that in step (3) the global features and the contour features are fused by point-by-point addition.
4. The pedestrian re-identification method based on multi-scale pedestrian contour segmentation and fusion according to claim 1, characterized in that in step (3), when the two features have different dimensions, they are converted into vectors of the same dimension through a linear transformation.
CN202010360873.9A 2020-04-30 2020-04-30 Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion Active CN111582126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010360873.9A CN111582126B (en) 2020-04-30 2020-04-30 Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010360873.9A CN111582126B (en) 2020-04-30 2020-04-30 Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion

Publications (2)

Publication Number Publication Date
CN111582126A CN111582126A (en) 2020-08-25
CN111582126B true CN111582126B (en) 2024-02-27

Family

ID=72114476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010360873.9A Active CN111582126B (en) 2020-04-30 2020-04-30 Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion

Country Status (1)

Country Link
CN (1) CN111582126B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465834B (en) * 2020-11-26 2024-05-24 中科麦迪人工智能研究院(苏州)有限公司 Blood vessel segmentation method and device
CN114693862A (en) * 2020-12-29 2022-07-01 北京万集科技股份有限公司 Three-dimensional point cloud data model reconstruction method, target re-identification method and device
CN113255630B (en) * 2021-07-15 2021-10-15 浙江大华技术股份有限公司 Moving target recognition training method, moving target recognition method and device
CN114626470B (en) * 2022-03-18 2024-02-02 南京航空航天大学深圳研究院 Aircraft skin key feature detection method based on multi-type geometric feature operator
CN114758362B (en) * 2022-06-15 2022-10-11 山东省人工智能研究院 Clothing changing pedestrian re-identification method based on semantic perception attention and visual shielding
CN115738747B (en) * 2022-11-29 2024-01-23 浙江致远环境科技股份有限公司 Ceramic composite fiber catalytic filter tube for removing dioxin through desulfurization and denitrification and preparation method thereof

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109271895A (en) * 2018-08-31 2019-01-25 西安电子科技大学 Pedestrian's recognition methods again based on Analysis On Multi-scale Features study and Image Segmentation Methods Based on Features
CN109325534A (en) * 2018-09-22 2019-02-12 天津大学 A kind of semantic segmentation method based on two-way multi-Scale Pyramid
CN109784258A (en) * 2019-01-08 2019-05-21 华南理工大学 A kind of pedestrian's recognition methods again cut and merged based on Analysis On Multi-scale Features
CN110084156A (en) * 2019-04-12 2019-08-02 中南大学 A kind of gait feature abstracting method and pedestrian's personal identification method based on gait feature
CN110084108A (en) * 2019-03-19 2019-08-02 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Pedestrian re-identification system and method based on GAN neural network
CN110717411A (en) * 2019-09-23 2020-01-21 湖北工业大学 Pedestrian re-identification method based on deep layer feature fusion
CN110852168A (en) * 2019-10-11 2020-02-28 西北大学 Pedestrian re-recognition model construction method and device based on neural framework search
CN110969087A (en) * 2019-10-31 2020-04-07 浙江省北大信息技术高等研究院 Gait recognition method and system
CN111027372A (en) * 2019-10-10 2020-04-17 山东工业职业学院 Pedestrian target detection and identification method based on monocular vision and deep learning

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109271895A (en) * 2018-08-31 2019-01-25 西安电子科技大学 Pedestrian's recognition methods again based on Analysis On Multi-scale Features study and Image Segmentation Methods Based on Features
CN109325534A (en) * 2018-09-22 2019-02-12 天津大学 A kind of semantic segmentation method based on two-way multi-Scale Pyramid
CN109784258A (en) * 2019-01-08 2019-05-21 华南理工大学 A kind of pedestrian's recognition methods again cut and merged based on Analysis On Multi-scale Features
CN110084108A (en) * 2019-03-19 2019-08-02 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Pedestrian re-identification system and method based on GAN neural network
CN110084156A (en) * 2019-04-12 2019-08-02 中南大学 A kind of gait feature abstracting method and pedestrian's personal identification method based on gait feature
CN110717411A (en) * 2019-09-23 2020-01-21 湖北工业大学 Pedestrian re-identification method based on deep layer feature fusion
CN111027372A (en) * 2019-10-10 2020-04-17 山东工业职业学院 Pedestrian target detection and identification method based on monocular vision and deep learning
CN110852168A (en) * 2019-10-11 2020-02-28 西北大学 Pedestrian re-recognition model construction method and device based on neural framework search
CN110969087A (en) * 2019-10-31 2020-04-07 浙江省北大信息技术高等研究院 Gait recognition method and system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Wu, X., et al. Person Re-identification Based on Semantic Segmentation. Signal and Information Processing, Networking and Computers. 2020, Abstract p. 903, Section 2 p. 905 to Section 3 p. 907, Fig. 2. *
Xie, Y., et al. Cross-Camera Person Re-Identification With Body-Guided Attention Network. IEEE Sensors Journal. 2020, METHODOLOGY section p. 361 to EXPERIMENTS section p. 364, Fig. 2. *
罗晖; 芦春雨; 郑翔文. A semantic segmentation network based on multi-scale corner detection. Computer Knowledge and Technology. 2019, (Issue 33), full text. *
陈洪云 et al. Research on semantic image segmentation fusing deep neural networks and dilated convolution. Journal of Chinese Computer Systems. 2020, Section 2 p. 167 to Section 3 p. 168, Fig. 4. *

Also Published As

Publication number Publication date
CN111582126A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
CN111582126B (en) Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion
CN110414368B (en) Unsupervised pedestrian re-identification method based on knowledge distillation
CN111783576B (en) Pedestrian re-identification method based on improved YOLOv3 network and feature fusion
CN108509859B (en) Non-overlapping area pedestrian tracking method based on deep neural network
CN111709311B (en) Pedestrian re-identification method based on multi-scale convolution feature fusion
Kim et al. Multi-task convolutional neural network system for license plate recognition
CN110929593B (en) Real-time significance pedestrian detection method based on detail discrimination
CN111611880B (en) Efficient pedestrian re-recognition method based on neural network unsupervised contrast learning
CN111814661A (en) Human behavior identification method based on residual error-recurrent neural network
CN109002755B (en) Age estimation model construction method and estimation method based on face image
CN111814690B (en) Target re-identification method, device and computer readable storage medium
CN110858276A (en) Pedestrian re-identification method combining identification model and verification model
CN111460884A (en) Multi-face recognition method based on human body tracking
CN113221770B (en) Cross-domain pedestrian re-recognition method and system based on multi-feature hybrid learning
CN112507924B (en) 3D gesture recognition method, device and system
CN114821014A (en) Multi-mode and counterstudy-based multi-task target detection and identification method and device
CN113033523A (en) Method and system for constructing falling judgment model and falling judgment method and system
CN111582154A (en) Pedestrian re-identification method based on multitask skeleton posture division component
CN115620090A (en) Model training method, low-illumination target re-recognition method and device and terminal equipment
CN111950357A (en) Marine water surface garbage rapid identification method based on multi-feature YOLOV3
CN113326738B (en) Pedestrian target detection and re-identification method based on deep network and dictionary learning
CN107679467B (en) Pedestrian re-identification algorithm implementation method based on HSV and SDALF
CN115937492B (en) Feature recognition-based infrared image recognition method for power transformation equipment
CN110334703B (en) Ship detection and identification method in day and night image
CN116977859A (en) Weak supervision target detection method based on multi-scale image cutting and instance difficulty

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant