CN106611156B - Pedestrian identification method and system based on self-adaptive depth space characteristics - Google Patents

Pedestrian identification method and system based on self-adaptive depth space characteristics

Info

Publication number
CN106611156B
CN106611156B CN201610953664.9A CN201610953664A CN106611156B CN 106611156 B
Authority
CN
China
Prior art keywords
feature
groups
pedestrian
optimized
pedestrian image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610953664.9A
Other languages
Chinese (zh)
Other versions
CN106611156A (en)
Inventor
蔡晓东
宋宗涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN201610953664.9A priority Critical patent/CN106611156B/en
Publication of CN106611156A publication Critical patent/CN106611156A/en
Application granted granted Critical
Publication of CN106611156B publication Critical patent/CN106611156B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a pedestrian identification method and system based on adaptive depth space features. The method comprises the following steps: segmenting the pedestrian image according to a preset segmentation number n; establishing n groups of feature extraction models, equal in number to the pedestrian image blocks, which correspondingly extract feature information from the n pedestrian image blocks; establishing n groups of feature classifiers, equal in number to the feature information groups, which correspondingly classify the n groups of feature information; respectively calculating the loss value generated in each group's classification process according to a back-propagation algorithm; and returning the n groups of loss values to the corresponding feature extraction models and feature classifiers for optimization. The method can effectively extract more information about the pedestrian, particularly local information; by feeding back loss values it realizes adaptive pedestrian feature classification that finally approaches an optimal solution, and it can guide the pedestrian feature extraction models to extract pedestrian feature information from more depth spaces.

Description

Pedestrian identification method and system based on self-adaptive depth space characteristics
Technical Field
The invention mainly relates to the technical field of image processing, in particular to a pedestrian identification method and system with self-adaptive depth space characteristics.
Background
With the popularization of computers and the emergence of intelligent devices, science and technology exert an ever greater influence on people, and at a time of deepening globalization the demand for identity authentication grows ever stronger. Owing to inconvenient operation, difficulty of memorization and similar drawbacks, traditional identity authentication technologies such as personal identification numbers, passwords, IC cards and secret cards increasingly fall short of the requirements of the information era. Face recognition is a very popular subject in the field of biometric recognition and, after fingerprint recognition, is the biometric technology most likely to be widely applied in social and economic activities and in people's daily life. Pedestrian identification is a biometric identification technique that identifies individuals from images of the human body; in recent times, particularly with the development of computer technology, pedestrian recognition generally refers to recognition of pedestrian feature information by means of computer technology.
With the development of society and the progress of technology, the influence of science and technology on people grows ever larger and the demand for identity authentication ever stronger. However, different scenes present problems such as differing viewing angles, differing camera resolutions and inconsistent illumination, so the same pedestrian is easily misjudged as a different one. In the past, features were often extracted directly from the whole image, a practice that frequently fails to capture sufficiently detailed feature information.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a pedestrian recognition method and system based on adaptive depth space features, which can effectively extract more information about pedestrians, particularly local information, adopt adaptive pedestrian feature classification in a loss-value feedback mode so as to finally approach an optimal solution, and can guide the pedestrian feature extraction model to extract pedestrian feature information from more depth spaces.
The technical scheme for solving the technical problems is as follows: a pedestrian identification method of self-adaptive depth space features comprises the following steps:
step S1: acquiring a pedestrian image, and segmenting the pedestrian image according to a preset segmentation number n to obtain n pedestrian image blocks;
step S2: establishing n groups of feature extraction models with the same number as the pedestrian image blocks, wherein the n groups of feature extraction models extract feature information from the n pedestrian image blocks in a one-to-one correspondence manner to obtain n groups of feature information; one-to-one correspondence is that a feature extraction model corresponds to a pedestrian image block to extract feature information;
step S3: establishing n groups of feature classifiers with the same number as the feature information groups, wherein the n groups of feature classifiers are used for carrying out feature classification on the n groups of feature information in a one-to-one correspondence manner; one-to-one correspondence is that a feature classifier corresponds to a group of feature information for feature classification;
step S4: respectively calculating loss values generated in the classification process of each group of features according to a back propagation algorithm to obtain n groups of loss values;
step S5: and respectively returning the n groups of loss values to the corresponding feature extraction model and the feature classifier to obtain the optimized feature extraction model and the optimized feature classifier.
The invention has the beneficial effects that: pedestrian features are extracted by segmenting the pedestrian image, and each segmented pedestrian image block corresponds to its own feature extraction model and feature classifier, so more information about the pedestrian, particularly local information, can be effectively extracted; adaptive pedestrian feature classification in a loss-value feedback mode finally approaches an optimal solution and can guide the pedestrian feature extraction models to extract pedestrian feature information from more depth spaces.
On the basis of the technical scheme, the invention can be further improved as follows.
Further, the specific method for implementing step S5 is as follows: and respectively returning n groups of loss values to the corresponding feature extraction model and the feature classifier, adjusting the configuration parameters of the feature extraction model according to the returned loss values to obtain an optimized feature extraction model, and adjusting the configuration parameters of the feature classifier according to the returned loss values to obtain an optimized feature classifier.
The beneficial effect of adopting the further scheme is that: and finally, the feature extraction model and the feature classifier tend to be in an optimal solution through the loss value.
Further, the step S5 is followed by the step S6: and obtaining n groups of optimized feature classifications through n optimized feature extraction models and feature classifiers.
Further, the specific method for implementing step S6 is as follows: inputting the n pedestrian image blocks again into the n optimized feature extraction models to obtain n groups of optimized feature information, and inputting the n groups of feature information into the n optimized feature classifiers to obtain n groups of optimized feature classifications.
The beneficial effect of adopting the further scheme is that: and better feature extraction and feature classification training are obtained through the optimized feature extraction model and the feature classifier.
Further, the method also comprises the following steps after obtaining the optimized n groups of feature classification: and carrying out feature fusion on the optimized n groups of feature classifications to obtain feature fusion data of the pedestrian image.
The beneficial effect of adopting the further scheme is that: the pedestrian recognition can be carried out by acquiring the feature fusion data.
Further, the method also comprises the following steps after the feature fusion data of the pedestrian image are obtained: and setting a label for the feature fusion data of the pedestrian image.
The beneficial effect of adopting the further scheme is that: it is convenient to recognize and distinguish a plurality of recognized pedestrian images.
Further, the feature extraction model comprises a parameter configuration module, a plurality of convolution layers, a pooling layer and a plurality of full-connection layers,
the parameter configuration module is used for performing parameter configuration on the multilayer convolution layer, the pooling layer and the multilayer all-connected layer and updating each parameter configuration according to a loss value;
the multilayer convolution layer is used for carrying out multilayer convolution operation on the pedestrian image blocks and calculating the output value of the multilayer convolution according to the neuron activation function to extract characteristic information;
the pooling layer is used for performing maximum pooling processing on the extracted characteristic information to reduce the size of the pedestrian image blocks;
and the multilayer full-connection layer is used for performing multilayer neuron node connection operation on the pedestrian image blocks subjected to the maximum pooling processing.
The beneficial effect of adopting the further scheme is that: both local features and overall features can be taken into account, fewer network parameters are needed while the feature extraction effect is ensured, and optimal efficiency and accuracy can be achieved.
Further, the feature classifier is a Softmax classifier.
The beneficial effect of adopting the further scheme is that: the features can be learned autonomously, and the features of each pedestrian can be classified and identified with higher accuracy.
Another technical solution of the present invention for solving the above technical problems is as follows: an adaptive depth spatial feature pedestrian recognition system comprising:
the segmentation module is used for acquiring a pedestrian image and segmenting the pedestrian image according to a preset segmentation number n to obtain n pedestrian image blocks;
the characteristic information extraction module is used for establishing n groups of characteristic extraction models with the same number as the pedestrian image blocks, and the n groups of characteristic extraction models correspondingly extract characteristic information from the n pedestrian image blocks one by one to obtain n groups of characteristic information;
the characteristic information classification module is used for establishing n groups of characteristic classifiers with the same number as the characteristic information groups, and the n groups of characteristic classifiers are used for carrying out characteristic classification on the n groups of characteristic information in a one-to-one correspondence manner;
the loss value module is used for respectively calculating the loss values generated in the classification process of each group of features according to a back propagation algorithm to obtain n groups of loss values;
and the optimization module is used for respectively returning the n groups of loss values to the corresponding feature extraction model and the feature classifier to obtain the optimized feature extraction model and the optimized feature classifier.
In the optimization module, n groups of loss values are respectively returned to the corresponding feature extraction model and the feature classifier, the feature extraction model adjusts respective configuration parameters according to the returned loss values to obtain an optimized feature extraction model, and the feature classifier adjusts respective configuration parameters according to the returned loss values to obtain an optimized feature classifier.
Pedestrian features are extracted by segmenting the pedestrian image, and each segmented pedestrian image block corresponds to its own feature extraction model and feature classifier, so more information about the pedestrian, particularly local information, can be effectively extracted; adaptive pedestrian feature classification in a loss-value feedback mode finally approaches an optimal solution and can guide the pedestrian feature extraction models to extract pedestrian feature information from more depth spaces.
Drawings
FIG. 1 is a flow chart of a method of an embodiment of a method for pedestrian identification with adaptive depth spatial features according to the present invention;
FIG. 2 is a block diagram of an embodiment of the adaptive depth space feature pedestrian recognition system of the present invention;
FIG. 3 is a data structure diagram of feature extraction and feature classification in an embodiment of the present invention;
FIG. 4 is a data structure diagram of feature extraction and feature classification of header data according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Fig. 1 shows a pedestrian recognition method based on adaptive depth space features, which includes the following steps:
step S1: acquiring a pedestrian image, and segmenting the pedestrian image according to a preset segmentation number n to obtain n pedestrian image blocks;
step S2: establishing n groups of feature extraction models with the same number as the pedestrian image blocks, wherein the n groups of feature extraction models extract feature information from the n pedestrian image blocks in a one-to-one correspondence manner to obtain n groups of feature information; one-to-one correspondence is that a feature extraction model corresponds to a pedestrian image block to extract feature information;
step S3: establishing n groups of feature classifiers with the same number as the feature information groups, wherein the n groups of feature classifiers are used for carrying out feature classification on the n groups of feature information in a one-to-one correspondence manner; one-to-one correspondence is that a feature classifier corresponds to a group of feature information for feature classification;
step S4: respectively calculating loss values generated in the classification process of each group of features according to a back propagation algorithm to obtain n groups of loss values;
step S5: and respectively returning the n groups of loss values to the corresponding feature extraction model and the feature classifier to obtain the optimized feature extraction model and the optimized feature classifier.
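As a minimal sketch (not part of the patent text), the segmentation of step S1 into n pedestrian image blocks can be illustrated with NumPy. The function name and the equal-height horizontal split are assumptions, since the patent does not fix how the n blocks are cut:

```python
import numpy as np

def split_pedestrian_image(img: np.ndarray, n: int) -> list:
    """Step S1 (sketch): cut an H x W (x C) pedestrian image into
    n horizontal blocks along the height axis."""
    return np.array_split(img, n, axis=0)

# A toy 6x4 "image" split into n = 3 blocks of 2 rows each.
img = np.arange(24).reshape(6, 4)
blocks = split_pedestrian_image(img, 3)
```

Each block is then fed to its own feature extraction model in step S2, one model per block.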
Pedestrian features are extracted by segmenting the pedestrian image, and each segmented pedestrian image block corresponds to its own feature extraction model and feature classifier, so more information about the pedestrian, particularly local information, can be effectively extracted; adaptive pedestrian feature classification in a loss-value feedback mode finally approaches an optimal solution and can guide the pedestrian feature extraction models to extract pedestrian feature information from more depth spaces.
Specifically, in this embodiment, a specific method for implementing step S5 is as follows: and respectively returning n groups of loss values to the corresponding feature extraction model and the feature classifier, adjusting the configuration parameters of the feature extraction model according to the returned loss values to obtain an optimized feature extraction model, and adjusting the configuration parameters of the feature classifier according to the returned loss values to obtain an optimized feature classifier.
In the above embodiment, the feature extraction model and the feature classifier finally approach the optimal solution through the loss value.
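In a standard back-propagation setting, adjusting a branch's configuration parameters from its returned loss value amounts to a gradient-descent update applied per branch. A hypothetical sketch follows; the learning rate and the plain SGD rule are assumptions, since the patent does not specify an optimizer:

```python
def sgd_update(params, grads, lr=0.01):
    """Adjust one branch's parameters from the gradients of its own
    returned loss value (sketch of the per-branch optimization in step S5)."""
    return [p - lr * g for p, g in zip(params, grads)]

# Each of the n branches updates only from its own loss:
params = [1.0, -2.0]
grads = [0.5, -0.5]   # hypothetical gradients of that branch's loss
new_params = sgd_update(params, grads, lr=0.1)
```

Because every branch receives only its own loss value, the n feature extraction models and n classifiers are optimized independently, which is what makes the scheme adaptive per image block.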
Specifically, in this embodiment, the step S5 is followed by the step S6: and obtaining n groups of optimized feature classifications through n optimized feature extraction models and feature classifiers.
The specific method for implementing step S6 is as follows: inputting the n pedestrian image blocks again into the n optimized feature extraction models to obtain n groups of optimized feature information, and inputting the n groups of feature information into the n optimized feature classifiers to obtain n groups of optimized feature classifications.
In the above embodiment, better feature classification is obtained through the optimized feature extraction model and the feature classifier.
Specifically, in this embodiment, after obtaining the optimized n groups of feature classifications, the method further includes the steps of: and carrying out feature fusion on the optimized n groups of feature classifications to obtain feature fusion data of the pedestrian image.
In the above embodiment, the pedestrian recognition can be performed by acquiring the feature fusion data.
In this embodiment, after obtaining the feature fusion data of the pedestrian image, the method further includes the steps of: and setting a label for the feature fusion data of the pedestrian image.
In the above embodiment, it is convenient to identify and distinguish a plurality of recognized pedestrian images.
Specifically, in this embodiment, the feature extraction model includes a parameter configuration module, a plurality of convolutional layers, a pooling layer, and a plurality of fully-connected layers,
the parameter configuration module is used for performing parameter configuration on the multilayer convolution layer, the pooling layer and the multilayer all-connected layer and updating each parameter configuration according to a loss value;
the multilayer convolution layer is used for carrying out multilayer convolution operation on the pedestrian image blocks and calculating the output value of the multilayer convolution according to the neuron activation function to extract characteristic information;
the pooling layer is used for performing maximum pooling processing on the extracted characteristic information to reduce the size of the pedestrian image blocks;
and the multilayer full-connection layer is used for performing multilayer neuron node connection operation on the pedestrian image blocks subjected to the maximum pooling processing.
Specifically, in the convolutional layer, trainable convolution kernels perform convolution operations on the image, and the convolution output values are calculated with the neuron activation function. The convolution formula is:
y_j = f(∑_i x_i ⊗ k_i,j + b_j) (1)
wherein x_i is the i-th input image, y_j is the j-th output image, k_i,j is the convolution kernel connecting the i-th input image and the j-th output image, b_j is the bias of the j-th output image, ⊗ is the convolution operator, and f(x) is the neuron activation function. The present embodiment uses the ReLU nonlinear function as the activation function, i.e. f(x) = max(0, x), which can accelerate the convergence of deep networks. The convolution kernel k_i,j and the bias b_j in formula (1) are the training parameters of the convolutional network; better values are obtained through a large amount of iterative training.
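Formula (1) can be sketched in NumPy as follows. The explicit loops, the "valid" (no-padding, stride-1) window, and the cross-correlation convention common to CNN frameworks are illustrative assumptions; the patent does not state padding or stride:

```python
import numpy as np

def relu(x):
    # f(x) = max(0, x), the activation used in this embodiment
    return np.maximum(0.0, x)

def conv2d_valid(x, k):
    """Slide kernel k over image x with stride 1 and no padding."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv_layer(xs, kernels, biases):
    """Formula (1): y_j = f(sum_i x_i (*) k_ij + b_j) for each output map j.
    kernels[i][j] connects input map i to output map j."""
    ys = []
    for j, b in enumerate(biases):
        acc = sum(conv2d_valid(x, kernels[i][j]) for i, x in enumerate(xs))
        ys.append(relu(acc + b))
    return ys

# One 3x3 input map, one 2x2 all-ones kernel, zero bias:
xs = [np.ones((3, 3))]
kernels = [[np.ones((2, 2))]]
ys = conv_layer(xs, kernels, biases=[0.0])
```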
The pooling layer down-samples the output maps of the convolutional layer, reducing the size of the feature maps and enhancing the robustness of the features to rotation and deformation. Common pooling methods include average pooling and maximum pooling, wherein maximum pooling can be expressed as:
y^i_(j,k) = max_(0 ≤ p, q < m) x^i_(j·l+p, k·l+q) (2)
wherein y^i_(j,k) is the value of the i-th output map of the pooling layer at position (j,k), x^i is the corresponding input map, l is the pooling stride, and m is the pooling size.
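Maximum pooling as in formula (2) can be sketched as follows; the loop form and default 2x2 window with stride 2 are illustrative:

```python
import numpy as np

def max_pool(x, m=2, l=2):
    """Maximum pooling (formula (2)): y[j, k] is the maximum of the
    m x m window of x starting at (j*l, k*l), with stride l."""
    H, W = x.shape
    out_h = (H - m) // l + 1
    out_w = (W - m) // l + 1
    y = np.zeros((out_h, out_w))
    for j in range(out_h):
        for k in range(out_w):
            y[j, k] = x[j * l:j * l + m, k * l:k * l + m].max()
    return y

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
y = max_pool(x, m=2, l=2)
```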
In the multilayer fully-connected layers, every neuron node of one layer is connected to all neuron nodes of the next layer. The parameters of a fully-connected layer consist of a node weight matrix W and a bias b, and the operation of the fully-connected layer can be expressed as:
y=f(W·x+b) (3)
wherein x and y are input and output data respectively, and f is an activation function.
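Formula (3), y = f(W·x + b), translates directly to NumPy. The ReLU choice for f mirrors this embodiment; the shapes below are illustrative:

```python
import numpy as np

def fully_connected(x, W, b, f=lambda v: np.maximum(0.0, v)):
    """One fully-connected layer: y = f(W @ x + b), formula (3)."""
    return f(W @ x + b)

W = np.array([[1.0, 0.0],
              [0.0, -1.0]])   # node weight matrix
b = np.array([0.5, 0.5])      # bias
x = np.array([1.0, 2.0])      # input data
y = fully_connected(x, W, b)
```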
In the above embodiment, local features and overall features can be considered, fewer network parameters can be included on the basis of ensuring the feature extraction effect, and the efficiency and accuracy can be optimized.
Specifically, in this embodiment, the feature classifier is a Softmax classifier.
The Softmax classifier is connected to the last fully-connected layer and calculates the probability output of each class using formula (4):
y_i = exp(x_i) / ∑_(j=1..n) exp(x_j) (4)
wherein x_i is the i-th node value of the Softmax classifier, y_i is the probability output of the i-th node, and n is the number of nodes of the Softmax classifier.
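Formula (4) is the standard softmax. A short sketch follows; the subtraction of max(x) is a common numerical-stability convention, not something stated in the patent:

```python
import numpy as np

def softmax(x):
    """Formula (4): y_i = exp(x_i) / sum_j exp(x_j),
    shifted by max(x) for numerical stability (same result)."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

probs = softmax(np.array([1.0, 2.0, 3.0]))
```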
In this embodiment, the features can be learned automatically, and the features of each pedestrian can be classified and identified with higher accuracy.
As shown in fig. 2, an embodiment of the present invention further provides a pedestrian recognition system with adaptive depth space features, including:
the segmentation module is used for acquiring a pedestrian image and segmenting the pedestrian image according to a preset segmentation number n to obtain n pedestrian image blocks;
the characteristic information extraction module is used for establishing n groups of characteristic extraction models with the same number as the pedestrian image blocks, and the n groups of characteristic extraction models correspondingly extract characteristic information from the n pedestrian image blocks one by one to obtain n groups of characteristic information;
the characteristic information classification module is used for establishing n groups of characteristic classifiers with the same number as the characteristic information groups, and the n groups of characteristic classifiers are used for carrying out characteristic classification on the n groups of characteristic information in a one-to-one correspondence manner;
the loss value module is used for respectively calculating the loss values generated in the classification process of each group of features according to a back propagation algorithm to obtain n groups of loss values;
and the optimization module is used for respectively returning the n groups of loss values to the corresponding feature extraction model and the feature classifier to obtain the optimized feature extraction model and the optimized feature classifier.
In the optimization module, n groups of loss values are respectively returned to the corresponding feature extraction model and the feature classifier, the feature extraction model adjusts respective configuration parameters according to the returned loss values to obtain an optimized feature extraction model, and the feature classifier adjusts respective configuration parameters according to the returned loss values to obtain an optimized feature classifier.
Specifically, the data structure of feature extraction and feature classification is shown in fig. 3, and the whole pedestrian image is divided into three parts: the pedestrian image display method comprises head Data, body Data and leg Data, wherein the Data is a whole pedestrian image, the Data _ head is the head Data, the Data _ body is the body Data, and the Data _ leg is the leg Data; CNN _ head is head feature extraction data, CNN _ body is body feature extraction data, and CNN _ leg is leg feature extraction data; softmax is data of the Softmax classifier, Concat is feature fusion data, and PersonID is pedestrian tag data.
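The n = 3 arrangement of Fig. 3 (Data_head, Data_body and Data_leg fused by Concat) can be sketched as follows. The equal-thirds split and the stand-in per-block feature extractors are assumptions for illustration; in the patent each branch is a CNN:

```python
import numpy as np

def split_three(img):
    """Cut a pedestrian image into head, body and leg blocks (Fig. 3)."""
    head, body, leg = np.array_split(img, 3, axis=0)
    return head, body, leg

def concat_features(feature_groups):
    """Concat node of Fig. 3: fuse the per-branch feature vectors."""
    return np.concatenate([f.ravel() for f in feature_groups])

img = np.arange(36, dtype=float).reshape(9, 4)
head, body, leg = split_three(img)
# Stand-in for CNN_head / CNN_body / CNN_leg: a one-value mean feature per block.
features = [np.array([blk.mean()]) for blk in (head, body, leg)]
fused = concat_features(features)
```

The fused vector plays the role of the feature fusion data from which the PersonID label is assigned.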
Specifically, the data structure for extracting and classifying the head data features is shown in fig. 4, wherein,
Data_head is head data, Conv1 is convolutional layer 1, pool1 is pooling layer 1, relu1 is activation function 1, Conv2 is convolutional layer 2, relu2 is activation function 2, Conv3 is convolutional layer 3, relu3 is activation function 3, Conv4 is convolutional layer 4, pool4 is pooling layer 4, relu4 is activation function 4, ip1 is fully-connected layer 1, relu5 is activation function 5, dropout1 is dropout layer 1, ip2 is fully-connected layer 2, relu6 is activation function 6, dropout2 is dropout layer 2, and ip3 is fully-connected layer 3.
It should be understood that Dropout refers to randomly deactivating some hidden-layer nodes of the network during model training; the deactivated nodes can temporarily be regarded as not being part of the network structure, but their weights are preserved, because they may become active again the next time a sample is input.
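A sketch of dropout as described above; the "inverted dropout" rescaling by 1/(1-p) is a common implementation convention and an assumption here, not something stated in the patent:

```python
import numpy as np

def dropout(x, p, rng, training=True):
    """Randomly deactivate nodes with probability p during training;
    surviving activations are rescaled so their expected value is unchanged.
    Weights themselves are untouched, matching the description above."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return np.where(mask, x / (1.0 - p), 0.0)

rng = np.random.default_rng(0)
x = np.ones(8)
y = dropout(x, p=0.5, rng=rng)          # training: some nodes zeroed
z = dropout(x, p=0.5, rng=rng, training=False)  # inference: identity
```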
The ReLU activation function introduces nonlinearity: linear combinations of linear functions have only linear expressive power, which is far from sufficient, so the nonlinear ReLU is used to give the network stronger expressive capacity. The gradient of ReLU is constant in most cases, which helps address the convergence problem of deep networks; another advantage of ReLU is its biological plausibility, namely its one-sided response.
In this embodiment, pedestrian features are extracted by segmenting the pedestrian image, and each segmented pedestrian image block corresponds to one feature extraction model and one feature classifier, so more information about the pedestrian, particularly local information, can be effectively extracted; adaptive pedestrian feature classification in a loss-value feedback mode finally approaches an optimal solution and can guide the pedestrian feature extraction models to extract pedestrian feature information from more depth spaces.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (8)

1. A pedestrian recognition method based on self-adaptive depth space features is characterized by comprising the following steps:
step S1: acquiring a pedestrian image, and segmenting the pedestrian image according to a preset segmentation number n to obtain n pedestrian image blocks;
step S2: establishing n groups of feature extraction models with the same number as the pedestrian image blocks, wherein the n groups of feature extraction models extract feature information from the n pedestrian image blocks in a one-to-one correspondence manner to obtain n groups of feature information;
step S3: establishing n groups of feature classifiers with the same number as the feature information groups, wherein the n groups of feature classifiers are used for carrying out feature classification on the n groups of feature information in a one-to-one correspondence manner;
step S4: respectively calculating loss values generated in the classification process of each group of features according to a back propagation algorithm to obtain n groups of loss values;
step S5: respectively returning n groups of loss values to the corresponding feature extraction model and feature classifier to obtain an optimized feature extraction model and an optimized feature classifier; the specific method for implementing step S5 is as follows: and respectively returning n groups of loss values to the corresponding feature extraction model and the feature classifier, wherein the feature extraction model adjusts configuration parameters according to the returned loss values to obtain an optimized feature extraction model, and the feature classifier adjusts the configuration parameters according to the returned loss values to obtain an optimized feature classifier.
2. The pedestrian recognition method based on adaptive depth spatial features of claim 1, further comprising, after step S5, a step S6: obtaining n groups of optimized feature classifications through the n optimized feature extraction models and feature classifiers.
3. The pedestrian recognition method based on adaptive depth spatial features of claim 2, wherein step S6 is specifically implemented as follows: inputting the n pedestrian image blocks into the n optimized feature extraction models again to obtain n groups of optimized feature information, and inputting the n groups of optimized feature information into the n optimized feature classifiers to obtain n groups of optimized feature classifications.
4. The pedestrian recognition method based on adaptive depth spatial features of claim 3, further comprising, after obtaining the n groups of optimized feature classifications: performing feature fusion on the n groups of optimized feature classifications to obtain feature fusion data of the pedestrian image.
5. The pedestrian recognition method based on adaptive depth spatial features of claim 4, further comprising, after obtaining the feature fusion data of the pedestrian image: setting a label for the feature fusion data of the pedestrian image.
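Claims 4 and 5 (fusing the n optimized classifications and labeling the result) can be sketched as follows. The patent does not fix a fusion scheme; concatenation of the per-block score vectors is one common choice and is used here purely as an illustration, and the label string and shapes are hypothetical.

```python
import numpy as np

n, classes = 3, 4
rng = np.random.default_rng(1)

# Stand-ins for the n groups of optimized feature classifications
# (one per-block classification score vector each).
block_scores = [rng.random(classes) for _ in range(n)]

# Feature fusion by concatenation (assumed scheme): one fused
# descriptor for the whole pedestrian image.
fused = np.concatenate(block_scores)

# Setting a label for the fused data (claim 5); identifier is hypothetical.
record = {"label": "pedestrian_0001", "features": fused}
```

The fused vector has length n x classes, so downstream matching or retrieval sees one descriptor per pedestrian image rather than n separate block outputs.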
6. The pedestrian recognition method based on adaptive depth spatial features of claim 1, wherein the feature extraction model comprises a parameter configuration module, multiple convolutional layers, a pooling layer, and multiple fully-connected layers, wherein:
the parameter configuration module is used for configuring the parameters of the convolutional layers, the pooling layer, and the fully-connected layers, and for updating each parameter configuration according to a loss value;
the convolutional layers are used for performing multi-layer convolution operations on the pedestrian image blocks and calculating the output value of each convolution according to a neuron activation function, so as to extract feature information;
the pooling layer is used for performing max-pooling on the extracted feature information to reduce the size of the pedestrian image blocks;
the fully-connected layers are used for performing multi-layer neuron node connection operations on the max-pooled pedestrian image blocks.
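The components named in claim 6 — convolution with a neuron activation function, max-pooling that shrinks the block, and a fully-connected layer — can be sketched with plain NumPy. The block size, kernel size, layer widths, and the choice of ReLU as the activation are all illustrative assumptions; the patent does not specify them.

```python
import numpy as np

rng = np.random.default_rng(2)

def conv2d(img, kernel):
    """Valid 2-D convolution, single channel, stride 1."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):                      # assumed neuron activation function
    return np.maximum(x, 0)

def max_pool(x, size=2):          # max-pooling reduces the block size
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

block = rng.random((16, 16))      # one pedestrian image block (assumed 16x16)
kernel = rng.normal(size=(3, 3))
W_fc = rng.normal(0, 0.1, (7 * 7, 10))

feat = max_pool(relu(conv2d(block, kernel)))  # 16 -> 14 (conv) -> 7 (pool)
out = relu(feat.reshape(-1) @ W_fc)           # fully-connected layer
```

A real feature extraction model would stack several such convolutional and fully-connected layers, with the parameter configuration module updating all weights from the returned loss value.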
7. The pedestrian recognition method of the adaptive depth spatial feature of any one of claims 1 to 3 and 5 to 6, wherein the feature classifier is a Softmax classifier.
8. A pedestrian recognition system based on adaptive depth spatial features, comprising:
a segmentation module, used for acquiring a pedestrian image and segmenting the pedestrian image according to a preset segmentation number n to obtain n pedestrian image blocks;
a feature information extraction module, used for establishing n groups of feature extraction models, equal in number to the pedestrian image blocks, wherein the n groups of feature extraction models extract feature information from the n pedestrian image blocks in one-to-one correspondence to obtain n groups of feature information;
a feature information classification module, used for establishing n groups of feature classifiers, equal in number to the groups of feature information, wherein the n groups of feature classifiers perform feature classification on the n groups of feature information in one-to-one correspondence;
a loss value module, used for calculating, according to a back propagation algorithm, the loss value generated in the classification process of each group of features, to obtain n groups of loss values;
an optimization module, used for returning the n groups of loss values to the corresponding feature extraction models and feature classifiers, respectively, to obtain optimized feature extraction models and optimized feature classifiers;
wherein, in the optimization module, the n groups of loss values are returned to the corresponding feature extraction models and feature classifiers; each feature extraction model adjusts its configuration parameters according to the returned loss value to obtain an optimized feature extraction model, and each feature classifier adjusts its configuration parameters according to the returned loss value to obtain an optimized feature classifier.
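The system claim above — segmentation module, per-block feature extraction models, and per-block Softmax classifiers — can be tied together in a minimal end-to-end sketch. The image size, the horizontal-strip segmentation, and the single linear layer standing in for each extraction model are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(3)

def segment(image, n):
    """Segmentation module: split the pedestrian image into n
    horizontal strips (assumed segmentation scheme)."""
    return np.array_split(image, n, axis=0)

def extract(block, W):
    """Stand-in feature extraction model: one linear layer + ReLU."""
    return np.maximum(block.reshape(-1) @ W, 0)

def classify(feat, W):
    """Stand-in Softmax feature classifier."""
    z = feat @ W
    e = np.exp(z - z.max())
    return e / e.sum()

n, classes = 4, 5
image = rng.random((32, 16))                 # assumed 32x16 pedestrian image
blocks = segment(image, n)

# One extraction model and one classifier per block, in one-to-one
# correspondence, as the claims require.
extractors = [rng.normal(0, 0.1, (b.size, 8)) for b in blocks]
classifiers = [rng.normal(0, 0.1, (8, classes)) for _ in range(n)]

feats = [extract(b, W) for b, W in zip(blocks, extractors)]
probs = [classify(f, W) for f, W in zip(feats, classifiers)]
```

Each of the n classification outputs is a probability vector over the classes; in the full system these would feed the loss value module and optimization module, then be fused into one descriptor per pedestrian image.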
CN201610953664.9A 2016-11-03 2016-11-03 Pedestrian identification method and system based on self-adaptive depth space characteristics Active CN106611156B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610953664.9A CN106611156B (en) 2016-11-03 2016-11-03 Pedestrian identification method and system based on self-adaptive depth space characteristics


Publications (2)

Publication Number Publication Date
CN106611156A CN106611156A (en) 2017-05-03
CN106611156B true CN106611156B (en) 2019-12-20

Family

ID=58615343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610953664.9A Active CN106611156B (en) 2016-11-03 2016-11-03 Pedestrian identification method and system based on self-adaptive depth space characteristics

Country Status (1)

Country Link
CN (1) CN106611156B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107578091B (en) * 2017-08-30 2021-02-05 电子科技大学 Pedestrian and vehicle real-time detection method based on lightweight deep network
CN109934081A (en) * 2018-08-29 2019-06-25 厦门安胜网络科技有限公司 A kind of pedestrian's attribute recognition approach, device and storage medium based on deep neural network
CN109409297B (en) * 2018-10-30 2021-11-23 咪付(广西)网络技术有限公司 Identity recognition method based on dual-channel convolutional neural network
CN109635636B (en) * 2018-10-30 2023-05-09 国家新闻出版广电总局广播科学研究院 Pedestrian re-identification method based on fusion of attribute characteristics and weighted blocking characteristics
CN113139496A (en) * 2021-05-08 2021-07-20 青岛根尖智能科技有限公司 Pedestrian re-identification method and system based on time sequence multi-scale fusion

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103617426A (en) * 2013-12-04 2014-03-05 东北大学 Pedestrian target detection method under interference by natural environment and shelter
CN105590102A (en) * 2015-12-30 2016-05-18 中通服公众信息产业股份有限公司 Front car face identification method based on deep learning
CN105631415A (en) * 2015-12-25 2016-06-01 中通服公众信息产业股份有限公司 Video pedestrian recognition method based on convolution neural network
CN105975931A (en) * 2016-05-04 2016-09-28 浙江大学 Convolutional neural network face recognition method based on multi-scale pooling


Also Published As

Publication number Publication date
CN106611156A (en) 2017-05-03

Similar Documents

Publication Publication Date Title
Wang et al. Depth pooling based large-scale 3-d action recognition with convolutional neural networks
CN106611156B (en) Pedestrian identification method and system based on self-adaptive depth space characteristics
CN108520535B (en) Object classification method based on depth recovery information
CN108304788B (en) Face recognition method based on deep neural network
CN106096535B (en) Face verification method based on bilinear joint CNN
CN112784763B (en) Expression recognition method and system based on local and overall feature adaptive fusion
CN106919897B (en) Human face image age estimation method based on three-level residual error network
KR101254177B1 (en) A system for real-time recognizing a face using radial basis function neural network algorithms
CN108268859A (en) A kind of facial expression recognizing method based on deep learning
Taheri et al. Animal classification using facial images with score‐level fusion
CN108108677A (en) One kind is based on improved CNN facial expression recognizing methods
Moustafa et al. Age-invariant face recognition based on deep features analysis
CN109002755B (en) Age estimation model construction method and estimation method based on face image
CN107729872A (en) Facial expression recognition method and device based on deep learning
CN108921019A (en) A kind of gait recognition method based on GEI and TripletLoss-DenseNet
CN109726619A (en) A kind of convolutional neural networks face identification method and system based on parameter sharing
CN110956082A (en) Face key point detection method and detection system based on deep learning
CN112070010B (en) Pedestrian re-recognition method for enhancing local feature learning by combining multiple-loss dynamic training strategies
KR20210067815A (en) Method for measuring health condition of user and apparatus therefor
CN110633689B (en) Face recognition model based on semi-supervised attention network
CN110991554B (en) Improved PCA (principal component analysis) -based deep network image classification method
Kim et al. Facial dynamic modelling using long short-term memory network: Analysis and application to face authentication
CN109815887B (en) Multi-agent cooperation-based face image classification method under complex illumination
Nimbarte et al. Biased face patching approach for age invariant face recognition using convolutional neural network
Lin et al. A small sample face recognition method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant