CN111582126A - Pedestrian re-identification method based on multi-scale pedestrian contour segmentation fusion - Google Patents

Pedestrian re-identification method based on multi-scale pedestrian contour segmentation fusion

Info

Publication number
CN111582126A
CN111582126A (application) · CN111582126B (grant)
Authority
CN
China
Prior art keywords
pedestrian
scale
features
network
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010360873.9A
Other languages
Chinese (zh)
Other versions
CN111582126B (en
Inventor
王慧燕
陈海英
陶家威
Current Assignee
Zhejiang Gongshang University
Original Assignee
Zhejiang Gongshang University
Priority date
Filing date
Publication date
Application filed by Zhejiang Gongshang University filed Critical Zhejiang Gongshang University
Priority to CN202010360873.9A priority Critical patent/CN111582126B/en
Publication of CN111582126A publication Critical patent/CN111582126A/en
Application granted granted Critical
Publication of CN111582126B publication Critical patent/CN111582126B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • G06V40/25Recognition of walking or running movements, e.g. gait recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention discloses a pedestrian re-identification method based on multi-scale pedestrian contour segmentation and fusion. First, the data are preprocessed. Second, the global features of the image and the contour features of the pedestrian are extracted and fused. Third, the pedestrian re-identification network is trained with a label-smoothing loss function to optimize the network parameters. Finally, for the query set and the candidate set contained in the pedestrian re-identification dataset, the Euclidean distance between a specified object in the query set and each object in the candidate set is computed, and the distances are sorted in ascending order to obtain the pedestrian re-identification ranking. The method discards clothing-specific features and instead learns the pedestrian's body contour, combining it with global features for re-identification. The invention can therefore re-identify pedestrians reliably whether or not their clothing has changed.

Description

Pedestrian re-identification method based on multi-scale pedestrian contour segmentation fusion
Technical Field
The invention relates to the technical field of computer vision, in particular to a pedestrian re-identification method based on multi-scale pedestrian contour segmentation and fusion.
Background
Pedestrian re-identification (Re-ID) is a computer-vision technique for determining whether a specific pedestrian is present in an image or video sequence; concretely, it identifies a pedestrian across images captured by different cameras. Given an image containing a target pedestrian (the query), a ReID system searches a large collection of pedestrian images (the gallery) for images of the same person, and is therefore widely regarded as a sub-problem of image retrieval: given one surveillance image of a pedestrian, retrieve images of that pedestrian taken by other devices. The technique aims to compensate for the visual limitations of fixed cameras, can be combined with pedestrian detection and pedestrian tracking, and is widely applicable to video surveillance, security, and related fields. ReID attracts great interest from both academia and industry because of this broad application potential, for example in video surveillance and cross-camera tracking.
ReID has developed very rapidly in recent years, yet it has reached far fewer deployed applications than face recognition. Existing ReID models are still not accurate enough on benchmark datasets, and compared with the face recognition task the ReID scenario is more complex, with several essential problems unsolved. ReID remains a very challenging task owing to many uncontrolled sources of variation, such as significant changes in pose and viewpoint, complex illumination, and poor image quality.
In particular, occlusion, poor lighting, and changes of pedestrian clothing can render almost all existing ReID models ineffective.
Disclosure of Invention
Aiming at the above problems and deficiencies of the prior art, and in particular at the weakness of existing pedestrian re-identification techniques in identifying pedestrians who have changed clothes, the invention provides a pedestrian re-identification method based on multi-scale pedestrian contour segmentation fusion.
The technical solution adopted by the invention to solve this problem is as follows:
step (1), data preprocessing
Acquire a sufficient number of sample images and normalize them to obtain a dataset.
Step (2), extracting the global features of the image and the contour features of the pedestrian
Inputting the data set into a pedestrian global feature extraction network to obtain global features of the image;
inputting the data set into a multi-scale pedestrian contour segmentation network to obtain the contour characteristics of the pedestrian;
the multi-scale pedestrian contour segmentation network adopts ResNet pre-trained on ImageNet as a main feature extraction network, and on the basis of the network, a new residual block is added for multi-scale feature learning, and the new residual block uses hole convolution to replace common convolution;
and the top of the new residual block is subjected to pyramid pooling by adopting a cavity space which can obtain the information of the human body outline dimensions of different pedestrians.
Step (3): input the global features and the contour features into a pedestrian re-identification network for fusion.
Step (4): train the pedestrian re-identification network with a label-smoothing loss function so that the network parameters are optimal, specifically:
A pre-trained network is obtained by training Inception-ResNet-v2 on the ImageNet database; the feature vector produced by fusing the global and contour features is fed into the label-smoothing loss function, and the parameters of the pedestrian re-identification network are trained by back-propagation until the whole network converges.
Step (5): for the query set and the candidate set contained in the pedestrian re-identification dataset, compute the Euclidean distance between the specified object in the query set and each object in the candidate set, then sort the distances in ascending order to obtain the pedestrian re-identification ranking.
Further, the preprocessing in step (1) is specifically: set an input image size; if a sample image is larger than this size, randomly crop it to the size; if it is smaller, enlarge it proportionally and then crop.
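The preprocessing rule above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the patent's implementation: the function name `preprocess` is assumed, and nearest-neighbour repetition stands in for proper proportional interpolation.

```python
import numpy as np

def preprocess(img: np.ndarray, size: int = 512) -> np.ndarray:
    """Normalize a sample image to size x size, as in step (1): random-crop
    when larger; proportionally enlarge (nearest-neighbour here) then crop
    when smaller."""
    h, w = img.shape[:2]
    if h < size or w < size:
        # equal-scale enlargement: one integer zoom factor covering both axes
        factor = int(np.ceil(max(size / h, size / w)))
        img = img.repeat(factor, axis=0).repeat(factor, axis=1)
        h, w = img.shape[:2]
    # random crop down to the target size
    top = np.random.randint(0, h - size + 1)
    left = np.random.randint(0, w - size + 1)
    return img[top:top + size, left:left + size]
```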
Further, the dilated convolution in the new residual block controls which feature pixels the deep convolutional neural network samples: the receptive field of the convolution kernel is adjusted to obtain multi-scale information, and each dilated convolution uses a different dilation rate to capture context at multiple sizes.
Further, the atrous spatial pyramid pooling uses dilated convolutions with different dilation rates to classify regions of arbitrary scale.
Further, the atrous spatial pyramid pooling comprises two parts: multi-scale dilated convolution and image-level features;
the multi-scale dilated convolution comprises a 1x1 ordinary convolution, a 3x3 dilated convolution with dilation rate 6, a 3x3 dilated convolution with dilation rate 12, and a 3x3 dilated convolution with dilation rate 18;
the image-level features are obtained by averaging the input over its two spatial dimensions, applying an ordinary convolution, and resizing back to the input size by bilinear interpolation; finally the four convolution branches are concatenated with the image-level features, and the output of the network is obtained through one more convolution.
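The two-part pooling described above can be sketched for a single channel in NumPy. This is an illustrative sketch under stated assumptions: kernels are fixed (all-ones) rather than learned, there is one input channel, and the final channel-mixing convolution is omitted; the function names `dilated_conv2d` and `aspp` are assumptions for illustration.

```python
import numpy as np

def dilated_conv2d(x, k, rate):
    """'Same'-padded single-channel 2D convolution whose kernel taps are
    spaced `rate` pixels apart (rate 1 is ordinary convolution)."""
    r = rate
    pad = r * (k.shape[0] // 2)
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * xp[i * r:i * r + x.shape[0],
                                j * r:j * r + x.shape[1]]
    return out

def aspp(x):
    """ASPP sketch: a 1x1 convolution, three 3x3 dilated convolutions with
    rates 6/12/18, and an image-level (global-average) branch broadcast back
    to the input size, stacked channel-wise."""
    k1 = np.ones((1, 1))
    k3 = np.ones((3, 3)) / 9.0
    branches = [dilated_conv2d(x, k1, 1)]
    for rate in (6, 12, 18):
        branches.append(dilated_conv2d(x, k3, rate))
    branches.append(np.full_like(x, x.mean()))  # image-level feature
    return np.stack(branches)  # (5, H, W); a final conv would mix these
```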
Further, step (3) fuses the global features and the contour features by element-wise addition.
Further, in step (3), when the two features have different dimensions, they are converted into vectors of the same dimension by a linear transformation.
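The fusion rule of step (3) can be sketched as follows. This is a hedged illustration: the projection matrix here is random for demonstration, whereas in the actual network the linear transformation would be learned; the helper name `fuse` is an assumption.

```python
import numpy as np

def fuse(global_feat, contour_feat, seed=0):
    """Fuse two branch features by element-wise addition; when dimensions
    differ, map the smaller vector up to the larger dimension with a linear
    transform first (random here, learned in practice)."""
    rng = np.random.default_rng(seed)
    d = max(global_feat.size, contour_feat.size)

    def project(v):
        if v.size == d:
            return v
        W = rng.standard_normal((d, v.size)) / np.sqrt(v.size)
        return W @ v

    return project(global_feat) + project(contour_feat)
```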
The invention has the beneficial effects that:
1. The influence of the background is removed during ReID, and the person is identified through the pedestrian's body contour, which is closest to how humans themselves recognize pedestrians.
2. Features tied to the pedestrian's clothing are removed, which addresses the weakness of existing pedestrian re-identification techniques on clothes-changing pedestrians: the network does not depend on clothing features, and it also learns the pedestrian's body contour to identify the person. The two branches of the proposed method learn both the global features and the body-contour features well, so the re-identification system performs reliably whether or not the pedestrian's clothing has been changed.
Drawings
FIG. 1 is a general block diagram according to the present invention;
FIG. 2 is a network architecture diagram of a multi-scale pedestrian contour segmentation network branch in accordance with the present invention;
fig. 3 is a block diagram of a dual-branch re-identification network according to the present invention.
Detailed Description
In order to describe the present invention more specifically, the technical solution is detailed below with reference to the accompanying drawings and specific embodiments; the flow of one embodiment is shown in fig. 1. The invention relates to a pedestrian re-identification method based on pedestrian contour segmentation, comprising the following steps:
Step (1): acquire a sufficient number of pedestrian sample images; the images can be downloaded from public datasets (Market1501, DukeMTMC-reID, CUHK03) or captured independently. Normalize the pedestrian sample images: taking a 512 x 512 input as an example, if a sample image is larger than this size it is randomly cropped; if it is smaller, it is enlarged proportionally and then cropped.
Step (2), extracting the global features of the image and the contour features of the pedestrian
Inputting the data set into a pedestrian global feature extraction network to obtain global features of the image;
inputting the data set into a multi-scale pedestrian contour segmentation network to obtain the contour characteristics of the pedestrian;
the two branches can learn the global features of the image and can also well learn the human body contour features of the pedestrians. The two branches are effective to the defect of the existing pedestrian re-identification technology in the identification of the clothes-changing pedestrians, because the network does not depend on the clothes characteristics on the clothes, and the contour of the human body of the pedestrian is learned to identify the pedestrian. For the pedestrian re-identification system, the pedestrian garment can be well re-identified no matter whether the garment is replaced or not.
As shown in fig. 2, the multi-scale pedestrian contour segmentation network learns multi-scale contextual features. It extracts backbone features with a ResNet pre-trained on ImageNet, on top of which a new residual block is added for multi-scale feature learning; in this block, dilated convolution 301 replaces ordinary convolution. Dilated convolution controls which feature pixels the deep convolutional network samples, adjusting the receptive field of the convolution kernel to obtain multi-scale information.
In addition, each dilated convolution inside this residual block uses a different dilation rate to capture context at multiple sizes, and atrous spatial pyramid pooling 302 is applied at the top of the block. It uses dilated convolutions with different dilation rates to classify regions of arbitrary scale, so the structure captures pedestrian body contours at different scales.
The atrous spatial pyramid pooling comprises two parts: multi-scale dilated convolution and image-level features. The multi-scale dilated convolution comprises a 1x1 ordinary convolution and 3x3 dilated convolutions with dilation rates 6, 12, and 18. The image-level features are obtained by averaging the input over its two spatial dimensions, applying an ordinary convolution, and resizing back to the input size by bilinear interpolation; the four convolution branches are then concatenated with the image-level features, and the output is obtained through one more convolution. The network outputs a pixel-wise softmax, namely:
$$p_k(x) = \frac{\exp(a_k(x))}{\sum_{k'=1}^{K} \exp(a_{k'}(x))}$$
where x is a pixel position on the two-dimensional plane, a_k(x) is the value of the k-th channel at pixel x in the last output layer of the network, K is the number of classes, and p_k(x) is the probability that pixel x belongs to class k.
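The pixel-wise softmax above can be sketched in a few lines of NumPy (an illustrative sketch; the function name `pixelwise_softmax` is an assumption, not part of the patent):

```python
import numpy as np

def pixelwise_softmax(a):
    """a: (K, H, W) activations of the last layer; returns p_k(x), the
    per-pixel probability distribution over the K classes."""
    e = np.exp(a - a.max(axis=0, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=0, keepdims=True)
```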
Meanwhile, the multi-scale pedestrian contour segmentation branch is pre-trained for pedestrian contour segmentation on the abundant segmentation labels of the COCO dataset, so that in the proposed method a pedestrian picture fed into this branch yields a pedestrian contour map.
Step (3): the pedestrian global feature extraction network and the multi-scale pedestrian contour segmentation network are finally fused through the structure shown in fig. 3. The network architecture in fig. 3 is prior art and is not described further. The global-feature branch is trained with Inception-ResNet-v2 as its backbone, the backbone being pre-trained on the ImageNet database. Because Inception-ResNet-v2 itself fuses features of different scales, this backbone matches the multi-scale pedestrian contour segmentation branch well, allowing features of different sizes to be fused with good front-to-back correspondence and improving accuracy.
Inception-ResNet-v2 replaces each nxn convolution with a 1xn convolution followed by an nx1 convolution, which effectively reduces computation, and replaces 5x5 and 7x7 convolutions with stacks of 3x3 convolutions, reducing computation further; this speeds up the fused re-identification network relative to the multi-scale pedestrian contour segmentation alone. Inception-ResNet-v2 also combines the ResNet and Inception structures, and since ResNet is likewise used in the contour segmentation branch, accuracy improves correspondingly.
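The computational saving from these factorizations can be verified with a quick parameter count (a worked example under assumptions: bias terms ignored, 256 input and output channels chosen purely for illustration):

```python
def conv_params(kh, kw, cin, cout):
    """Weight count of one kh x kw convolution layer, ignoring bias."""
    return kh * kw * cin * cout

cin = cout = 256
full_5x5 = conv_params(5, 5, cin, cout)                         # one 5x5
two_3x3 = conv_params(3, 3, cin, cout) + conv_params(3, 3, cout, cout)
asym_7 = conv_params(1, 7, cin, cout) + conv_params(7, 1, cout, cout)

print(full_5x5, two_3x3, asym_7)  # stacked/asymmetric variants are smaller
```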
A pre-trained network is obtained by training Inception-ResNet-v2 on the ImageNet database; the global features and the contour features are then fused by element-wise addition to obtain a feature vector. This feature vector is fed into the cross-entropy loss function, and back-propagation trains the parameters of the defined multi-scale pedestrian contour segmentation and pedestrian re-identification networks so that the model parameters become optimal.
Step (4): the model is trained with a label-smoothing loss. Classification for pedestrian re-identification usually uses the cross-entropy loss function:
$$L = -\sum_{i=1}^{N} q_i \log p_i$$
where N is the total number of pedestrian identities. When an image with pedestrian label y is input, q_i = 1 if i equals y and q_i = 0 otherwise, and p_i is the probability that the network predicts the pedestrian belongs to identity i. The label-smoothing loss function is introduced because cross entropy depends excessively on the correct pedestrian label, which easily causes overfitting during training. Moreover, a small number of wrong labels may exist in the pedestrian training samples and can influence the prediction to some extent; label smoothing also prevents the model from depending too heavily on the labels during training. Therefore, label smoothing sets an error rate ε for the labels during training and uses 1 − ε as the target for the true label:
$$q_i = \begin{cases} 1 - \varepsilon, & i = y \\ \dfrac{\varepsilon}{N-1}, & i \neq y \end{cases}$$
Step (5), testing results
For the query set and the candidate set contained in the pedestrian re-identification dataset, compute the Euclidean distance between the specified object in the query set and each object in the candidate set, then sort the distances in ascending order to obtain the pedestrian re-identification ranking result.
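The test step can be sketched as follows (a minimal sketch; the helper name `rank_gallery` is an assumption, and features stand in for the fused vectors produced by the trained network):

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats):
    """Step (5): Euclidean distance from the query feature to every
    candidate feature, then ascending sort; index 0 is the best match."""
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    order = np.argsort(dists)          # ascending: closest candidate first
    return order, dists[order]
```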
While the invention has been described in connection with specific embodiments thereof, it will be understood that these should not be construed as limiting the scope of the invention, which is defined in the following claims, and any variations which fall within the scope of the claims are intended to be embraced thereby.

Claims (7)

1. The pedestrian re-identification method based on multi-scale pedestrian contour segmentation fusion is characterized by comprising the following steps of:
step (1), data preprocessing
Acquiring a sufficient number of sample images, and carrying out normalization processing on the sample images to obtain a data set;
step (2), extracting the global features of the image and the contour features of the pedestrian
Inputting the data set into a pedestrian global feature extraction network to obtain global features of the image;
inputting the data set into a multi-scale pedestrian contour segmentation network to obtain the contour characteristics of the pedestrian;
the multi-scale pedestrian contour segmentation network adopts a ResNet pre-trained on ImageNet as the backbone feature extraction network; on top of it, a new residual block is added for multi-scale feature learning, in which dilated (atrous) convolution replaces ordinary convolution;
atrous spatial pyramid pooling, which can capture pedestrian body contours at different scales, is applied on top of the new residual block;
inputting the global features and the contour features into a pedestrian re-identification network for fusion;
step (4), training the pedestrian re-recognition network by adopting a label smooth loss function to enable the network parameters to be optimal, specifically:
a pre-trained network is obtained by training Inception-ResNet-v2 on the ImageNet database; the feature vector produced by fusing the global and contour features is fed into the label-smoothing loss function, and the parameters of the pedestrian re-identification network are trained by back-propagation until the whole network converges;
and (5) calculating Euclidean distances of specified objects in the query set and each object in the candidate set aiming at the query set and the candidate set contained in the pedestrian re-identification data set, and then performing ascending sorting on the calculated distances to obtain a sorting result of pedestrian re-identification.
2. The pedestrian re-identification method based on multi-scale pedestrian contour segmentation fusion according to claim 1, characterized in that: the pretreatment in the step (1) is specifically as follows: setting the size of an input image, and if the sample image is larger than the size, performing random cutting to obtain the sample image; and if the sample image is smaller than the size, performing equal-scale amplification and then cutting.
3. The pedestrian re-identification method based on multi-scale pedestrian contour segmentation fusion according to claim 1, characterized in that: the dilated convolution of the new residual block controls which feature pixels the deep convolutional neural network samples; the receptive field of the convolution kernel is adjusted to obtain multi-scale information, and each dilated convolution uses a different dilation rate to capture context at multiple sizes.
4. The pedestrian re-identification method based on multi-scale pedestrian contour segmentation fusion according to claim 1, characterized in that: the atrous spatial pyramid pooling uses dilated convolutions with different dilation rates to classify regions of arbitrary scale.
5. The pedestrian re-identification method based on multi-scale pedestrian contour segmentation fusion as claimed in claim 4, wherein: the atrous spatial pyramid pooling comprises two parts: multi-scale dilated convolution and image-level features;
the multi-scale dilated convolution comprises a 1x1 ordinary convolution, a 3x3 dilated convolution with dilation rate 6, a 3x3 dilated convolution with dilation rate 12, and a 3x3 dilated convolution with dilation rate 18;
the image-level features are obtained by averaging the input over its two spatial dimensions, applying an ordinary convolution, and resizing back to the input size by bilinear interpolation; finally the four convolution branches are concatenated with the image-level features, and the output of the network is obtained through one more convolution.
6. The pedestrian re-identification method based on multi-scale pedestrian contour segmentation fusion according to claim 1, characterized in that: step (3) fuses the global features and the contour features by element-wise addition.
7. The pedestrian re-identification method based on multi-scale pedestrian contour segmentation fusion as claimed in claim 6, wherein: and (4) when the two features have different dimensions in the step (3), converting the two features into vectors with the same dimension through linear transformation.
CN202010360873.9A 2020-04-30 2020-04-30 Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion Active CN111582126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010360873.9A CN111582126B (en) 2020-04-30 2020-04-30 Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion


Publications (2)

Publication Number Publication Date
CN111582126A true CN111582126A (en) 2020-08-25
CN111582126B CN111582126B (en) 2024-02-27

Family

ID=72114476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010360873.9A Active CN111582126B (en) 2020-04-30 2020-04-30 Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion

Country Status (1)

Country Link
CN (1) CN111582126B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109271895A (en) * 2018-08-31 2019-01-25 西安电子科技大学 Pedestrian's recognition methods again based on Analysis On Multi-scale Features study and Image Segmentation Methods Based on Features
CN109325534A (en) * 2018-09-22 2019-02-12 天津大学 A kind of semantic segmentation method based on two-way multi-Scale Pyramid
CN109784258A (en) * 2019-01-08 2019-05-21 华南理工大学 A kind of pedestrian's recognition methods again cut and merged based on Analysis On Multi-scale Features
CN110084156A (en) * 2019-04-12 2019-08-02 中南大学 A kind of gait feature abstracting method and pedestrian's personal identification method based on gait feature
CN110084108A (en) * 2019-03-19 2019-08-02 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Pedestrian re-identification system and method based on GAN neural network
CN110717411A (en) * 2019-09-23 2020-01-21 湖北工业大学 Pedestrian re-identification method based on deep layer feature fusion
CN110852168A (en) * 2019-10-11 2020-02-28 西北大学 Pedestrian re-recognition model construction method and device based on neural framework search
CN110969087A (en) * 2019-10-31 2020-04-07 浙江省北大信息技术高等研究院 Gait recognition method and system
CN111027372A (en) * 2019-10-10 2020-04-17 山东工业职业学院 Pedestrian target detection and identification method based on monocular vision and deep learning


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
WU, X. , ET AL.: "Person Re-identification Based on Semantic Segmentation", pages 903 *
XIE,Y. , ET AL.: "Cross-Camera Person Re-Identification With Body-Guided Attention Network", pages 361 *
LUO, H., LU, C., ZHENG, X.: "A Semantic Segmentation Network Based on Multi-Scale Corner Detection", no. 33 *
CHEN, H., ET AL.: "Research on Semantic Image Segmentation Combining Deep Neural Networks and Dilated Convolution", pages 167 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255630A (en) * 2021-07-15 2021-08-13 浙江大华技术股份有限公司 Moving target recognition training method, moving target recognition method and device
CN113255630B (en) * 2021-07-15 2021-10-15 浙江大华技术股份有限公司 Moving target recognition training method, moving target recognition method and device
CN114626470A (en) * 2022-03-18 2022-06-14 南京航空航天大学深圳研究院 Aircraft skin key feature detection method based on multi-type geometric feature operator
CN114626470B (en) * 2022-03-18 2024-02-02 南京航空航天大学深圳研究院 Aircraft skin key feature detection method based on multi-type geometric feature operator
CN114758362A (en) * 2022-06-15 2022-07-15 山东省人工智能研究院 Clothes-changing pedestrian re-identification method based on semantic-aware attention and visual masking
CN114758362B (en) * 2022-06-15 2022-10-11 山东省人工智能研究院 Clothes-changing pedestrian re-identification method based on semantic-aware attention and visual masking
CN115738747A (en) * 2022-11-29 2023-03-07 浙江致远环境科技股份有限公司 Ceramic composite fiber catalytic filter tube for desulfurization, denitrification and dioxin removal, and preparation method thereof
CN115738747B (en) * 2022-11-29 2024-01-23 浙江致远环境科技股份有限公司 Ceramic composite fiber catalytic filter tube for desulfurization, denitrification and dioxin removal, and preparation method thereof

Also Published As

Publication number Publication date
CN111582126B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
CN110414368B (en) Unsupervised pedestrian re-identification method based on knowledge distillation
CN111582126B (en) Pedestrian re-identification method based on multi-scale pedestrian contour segmentation fusion
CN108509859B (en) Non-overlapping area pedestrian tracking method based on deep neural network
CN111783576B (en) Pedestrian re-identification method based on improved YOLOv3 network and feature fusion
Kim et al. Multi-task convolutional neural network system for license plate recognition
CN111611874B (en) Face mask wearing detection method based on ResNet and Canny
KR101697161B1 (en) Device and method for tracking pedestrian in thermal image using an online random fern learning
CN107480585B (en) Target detection method based on DPM algorithm
CN111709311A (en) Pedestrian re-identification method based on multi-scale convolution feature fusion
CN111178251A (en) Pedestrian attribute identification method and system, storage medium and terminal
Dib et al. A review on negative road anomaly detection methods
CN103093198A (en) Crowd density monitoring method and device
Supreeth et al. An approach towards efficient detection and recognition of traffic signs in videos using neural networks
CN113221770A (en) Cross-domain pedestrian re-identification method and system based on multi-feature hybrid learning
CN115620090A (en) Model training method, low-illumination target re-recognition method and device and terminal equipment
CN111582154A (en) Pedestrian re-identification method based on multitask skeleton posture division component
CN113326738B (en) Pedestrian target detection and re-identification method based on deep network and dictionary learning
CN110334703B (en) Ship detection and identification method in day and night image
CN116912670A (en) Deep sea fish identification method based on improved YOLO model
Zhang et al. Reading various types of pointer meters under extreme motion blur
CN111046861B (en) Method for identifying infrared image, method for constructing identification model and application
Kim et al. Development of a real-time automatic passenger counting system using head detection based on deep learning
Vaidya et al. Comparative analysis of motion based and feature based algorithms for object detection and tracking
Cheng et al. Automatic Data Cleaning System for Large-Scale Location Image Databases Using a Multilevel Extractor and Multiresolution Dissimilarity Calculation
Pandya et al. A novel approach for vehicle detection and classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant