CN111160226B - Pedestrian gender identification method based on visual angle adaptive feature learning - Google Patents
- Publication number
- CN111160226B (application CN201911370041.9A)
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- visual angle
- adaptive
- gender identification
- feature
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V40/103 — Static body considered as a whole, e.g. static pedestrian or occupant recognition
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N3/045 — Combinations of networks
- G06N3/084 — Backpropagation, e.g. using gradient descent
- Y02T10/40 — Engine management systems
Abstract
The invention relates to a pedestrian gender identification method based on visual angle adaptive feature learning, comprising a visual angle adaptive training process and a gender identification process. The method uses the visual angle information of the input pedestrian to guide the feature learning of a convolutional neural network, reducing the influence of pedestrian visual angle changes on gender identification, so that the trained network model identifies pedestrian gender more accurately. By incorporating pedestrian visual angle information, the invention overcomes a shortcoming of existing convolutional-neural-network-based pedestrian gender identification and effectively improves identification accuracy. The invention can be widely applied to intelligent video monitoring scenes such as superstores, airports and railway stations.
Description
Technical Field
The invention relates to computer vision and pattern recognition, and in particular to a pedestrian gender identification method based on visual angle adaptive feature learning.
Background
In recent years, with the active promotion of "smart cities" and the growing demand for traffic monitoring, video surveillance is gradually covering important public places such as superstores, airports and train stations, and tens of millions of cameras provide a basic guarantee for urban public safety. To meet the needs of intelligent security, intelligent traffic, smart homes and the like, intelligent video surveillance urgently requires technology for rapidly identifying people who are far away and non-cooperative, so that identities can be confirmed quickly under remote conditions and intelligent management can be realized. As an important aid to rapid pedestrian identification, pedestrian gender identification means identifying the gender of passing pedestrians in a surveillance video; this technology will play an important role in future intelligent video monitoring systems.
Pedestrian gender identification methods in the prior art mainly rely on hand-crafted features, such as histogram-of-oriented-gradients (HOG) features, which describe the outline and shape of a pedestrian, and LBP features, which describe texture details; however, the identification accuracy of a single hand-crafted feature extraction method is generally not high.
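To make the hand-crafted-feature baseline concrete, here is a minimal sketch (not from the patent) of the basic 8-neighbour Local Binary Pattern descriptor mentioned above; the 5×5 toy "image" is an assumption for illustration:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Pattern: for each interior pixel,
    the 8 neighbours are thresholded against the centre value and the
    resulting bits are packed into one byte (a texture code)."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbour offsets, clockwise starting from the top-left neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = gray[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if gray[i + di, j + dj] >= c:
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out

gray = np.arange(25, dtype=np.uint8).reshape(5, 5)   # toy 5x5 "image"
codes = lbp_image(gray)
# A 256-bin histogram of the codes is the LBP texture feature vector.
hist, _ = np.histogram(codes, bins=256, range=(0, 256))
```

A classifier (e.g. SVM) trained on such histograms is the kind of single hand-crafted pipeline whose accuracy the patent argues is limited.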
With the rapid development of deep learning, convolutional neural networks have been applied effectively to human-attribute identification tasks, achieving higher accuracy than hand-crafted features. However, deep-convolutional-neural-network-based methods are sensitive to changes in the pedestrian's visual angle; for example, when the visual angle of the pedestrian changes, the network may fail to identify the pedestrian's gender correctly at certain visual angles.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a pedestrian gender identification method based on visual angle adaptive feature learning, which effectively improves the accuracy of pedestrian gender identification.
The technical scheme of the invention is as follows:
a pedestrian gender identification method based on visual angle adaptive feature learning comprises a visual angle adaptive training process and a gender identification process;
the visual angle self-adaptive training process comprises the following steps:
1.1 Basic model training step: selecting N training images with gender label attributes, inputting the training images into a convolutional neural network for training until the convolutional neural network is converged to obtain a basic model M;
1.2 View influence score calculation step: dividing the training image into a forward visual angle, a backward visual angle and other visual angles, respectively inputting the training image into a basic model M for extracting characteristics, and calculating the average influence score of the corresponding visual angle according to the extracted characteristics;
1.3 View angle fine-tuning step: adjusting the view angle of the basic model M by using the average influence score of each view angle until the model converges to obtain a feature extraction model P;
the steps of the gender identification process are as follows:
2.1) Inputting the test image and the average influence scores obtained in step 1.2) into the feature extraction model P, and obtaining the visual angle adaptive features through forward propagation;
2.2) Calculating the gender probability of the visual angle adaptive features with the Softmax classification function, and outputting the gender identification result.
Preferably, the basic model training steps are as follows:
1.1.1) Randomly selecting N training images with gender label attributes;
1.1.2) Inputting the selected training images into a convolutional neural network for training;
1.1.3) Repeating step 1.1.1) and step 1.1.2) until the convolutional neural network converges, obtaining a basic model M;
the steps of calculating the visual angle influence score are as follows:
1.2.1) Dividing the training images into a forward visual angle, a backward visual angle and other visual angles;
1.2.2) Respectively inputting the training images of the three visual angles into the basic model M obtained in step 1.1.3), and obtaining the pedestrian depth features γ_frontal, γ_back and γ_other at the three visual angles through forward propagation;
1.2.3) According to the pedestrian depth features γ_frontal, γ_back and γ_other at the three visual angles, calculating the corresponding average influence scores Ī_frontal, Ī_back and Ī_other;
the visual angle fine-tuning steps are as follows:
1.3.1) Adjusting the visual angle of the basic model M by using the average influence score Ī of each visual angle obtained in step 1.2.3);
1.3.2) Repeating step 1.3.1) until the model converges, obtaining the feature extraction model P.
Preferably, the average influence score Ī_{j,frontal} of the forward visual angle is calculated as follows:
the impact score of the jth neuron of the network feature output layer is:
I_{j,frontal} = L(γ_{frontal\j}) − L(γ_{frontal});
and the average influence score is obtained by averaging over the forward-view feature set:
Ī_{j,frontal} = E_{γ_{frontal}∈F}[I_{j,frontal}];
wherein F represents the depth feature set of pedestrians at the forward visual angle, E(·) is the averaging operation, I_{j,frontal} denotes the impact score of the jth neuron, γ_{frontal} denotes the D-dimensional feature vector output by the basic model M, γ_{frontal\j} denotes the feature vector obtained when the jth neuron response of the network feature output layer is set to 0, and L(·) denotes the Softmax Loss function.
Preferably, the average influence score Ī_{j,back} of the backward visual angle is calculated as follows:
the impact score of the jth neuron of the network feature output layer is:
I_{j,back} = L(γ_{back\j}) − L(γ_{back});
and the average influence score is:
Ī_{j,back} = E_{γ_{back}∈F}[I_{j,back}];
wherein F represents the depth feature set of pedestrians at the backward visual angle, E(·) is the averaging operation, I_{j,back} denotes the impact score of the jth neuron, γ_{back} denotes the D-dimensional feature vector output by the basic model M, γ_{back\j} denotes the feature vector obtained when the jth neuron response of the network feature output layer is set to 0, and L(·) denotes the Softmax Loss function.
Preferably, the average influence score Ī_{j,other} of the other visual angles is calculated as follows:
the impact score of the jth neuron of the network feature output layer is:
I_{j,other} = L(γ_{other\j}) − L(γ_{other});
and the average influence score is:
Ī_{j,other} = E_{γ_{other}∈F}[I_{j,other}];
wherein F represents the depth feature set of pedestrians at the other visual angles, E(·) is the averaging operation, I_{j,other} denotes the impact score of the jth neuron, γ_{other} denotes the D-dimensional feature vector output by the basic model M, γ_{other\j} denotes the feature vector obtained when the jth neuron response of the network feature output layer is set to 0, and L(·) denotes the Softmax Loss function.
Preferably, in step 1.3.1), the visual angle adjustment of the basic model M includes a forward propagation process and a backward propagation process;
in the forward propagation process, the feature vector is multiplied element-wise (dot product) with a visual angle influence mask, suppressing neuron responses irrelevant to the corresponding visual angle, to obtain a visual angle adaptive feature vector;
in the backward propagation process, the visual angle adaptive feature vector is substituted into the Softmax Loss function and the error loss is calculated; the network parameters are optimized by back-propagating the error loss until the model converges, obtaining the feature extraction model P.
Preferably, the visual angle influence mask is specifically as follows (taking the forward visual angle as an example):
m_{j,frontal} = 1, if Ī_{j,frontal} > 0; m_{j,frontal} = 0, otherwise;
wherein m_{j,frontal} represents the visual angle influence mask of the jth neuron of the network feature output layer, and Ī_{j,frontal} represents the average influence score of the jth neuron.
Preferably, the gender identification process is specifically as follows:
the test image and the average influence scores Ī obtained in step 1.2.3) are input into the feature extraction model P; a feature vector is output through forward propagation and multiplied element-wise with the visual angle influence mask to obtain the visual angle adaptive features; the gender probability of the visual angle adaptive features is then calculated with the Softmax classification function, and the gender identification result is output.
The invention has the following beneficial effects:
according to the pedestrian gender identification method based on visual angle adaptive feature learning, the characteristic learning process of the convolutional neural network is guided by using the input visual angle information of the pedestrian, so that the influence of the visual angle change of the pedestrian on the gender identification of the neural network is reduced, and the trained network model has a more accurate pedestrian gender identification effect. The invention combines the visual angle information of the pedestrian, solves the defect of the prior convolutional neural network-based pedestrian gender identification problem, and effectively improves the accuracy of the pedestrian gender identification.
The invention can be widely applied to intelligent video monitoring scenes, such as superstores, airports, railway stations and the like.
Drawings
Fig. 1 is a schematic diagram of the principle of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
The invention provides a pedestrian gender identification method based on visual angle adaptive feature learning, which comprises a visual angle adaptive training process and a gender identification process as shown in figure 1.
The visual angle self-adaptive training process comprises the following steps:
1.1 Basic model training step: selecting N training images with gender label attributes, inputting the training images into a convolutional neural network for training until the convolutional neural network is converged to obtain a basic model M;
1.2 View influence score calculation step: dividing the training image into a forward visual angle, a backward visual angle and other visual angles, respectively inputting the training image into a basic model M for extracting characteristics, and calculating the average influence score of the corresponding visual angle according to the extracted characteristics;
1.3 View angle fine-tuning step: and adjusting the view angle of the basic model M by using the average influence score of each view angle until the model converges to obtain a feature extraction model P.
Specifically, in this embodiment, the basic model training steps are specifically as follows:
1.1.1) Randomly selecting N training images with gender label attributes;
1.1.2) Inputting the selected training images into a convolutional neural network for training;
1.1.3) Repeating step 1.1.1) and step 1.1.2) until the convolutional neural network converges, obtaining the basic model M.
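As an illustrative sketch (not the patent's implementation), the base-model training of steps 1.1.1)–1.1.3) can be mimicked with a minimal softmax classifier trained by gradient descent; the toy features, synthetic labels, learning rate and step count below are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the N training images with gender labels (assumption:
# each "image" is already encoded as a D-dimensional feature vector).
N, D = 200, 8
X = rng.normal(size=(N, D))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic gender labels

W = np.zeros((2, D))                             # softmax classifier weights

def softmax_loss(W, X, y):
    """Mean Softmax Loss over the batch, plus its gradient w.r.t. W."""
    logits = X @ W.T
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(len(y)), y]).mean()
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1
    return loss, grad.T @ X / len(y)

losses = []
for step in range(100):                          # "until convergence"
    loss, g = softmax_loss(W, X, y)
    losses.append(loss)
    W -= 0.5 * g                                 # gradient-descent update

# Training drives the loss down; the converged W plays the role of the
# "basic model M" in this toy setting.
```

In the patent, M is of course a full convolutional neural network; only the train-until-convergence loop is illustrated here.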
The steps of calculating the visual angle influence score are specifically as follows:
1.2.1) Dividing the training images into a forward visual angle, a backward visual angle and other visual angles;
1.2.2) Respectively inputting the training images of the three visual angles into the basic model M obtained in step 1.1.3), and obtaining the pedestrian depth features γ_frontal, γ_back and γ_other at the three visual angles through forward propagation;
1.2.3) According to the pedestrian depth features γ_frontal, γ_back and γ_other at the three visual angles, calculating the corresponding average influence scores Ī_frontal, Ī_back and Ī_other.
In this embodiment, the average influence score Ī_{j,frontal} of the forward visual angle is calculated as follows:
the impact score of the jth neuron of the network feature output layer is:
I_{j,frontal} = L(γ_{frontal\j}) − L(γ_{frontal});
and the average influence score is obtained by averaging over the forward-view feature set:
Ī_{j,frontal} = E_{γ_{frontal}∈F}[I_{j,frontal}];
wherein F represents the depth feature set of pedestrians at the forward visual angle, E(·) is the averaging operation, I_{j,frontal} denotes the impact score of the jth neuron, γ_{frontal} denotes the D-dimensional feature vector output by the basic model M, γ_{frontal\j} denotes the feature vector obtained when the jth neuron response of the network feature output layer is set to 0, and L(·) denotes the Softmax Loss function.
Similarly, the average influence score Ī_{j,back} of the backward visual angle is calculated as follows:
the impact score of the jth neuron of the network feature output layer is:
I_{j,back} = L(γ_{back\j}) − L(γ_{back});
and the average influence score is:
Ī_{j,back} = E_{γ_{back}∈F}[I_{j,back}];
wherein F represents the depth feature set of pedestrians at the backward visual angle, E(·) is the averaging operation, I_{j,back} denotes the impact score of the jth neuron, γ_{back} denotes the D-dimensional feature vector output by the basic model M, γ_{back\j} denotes the feature vector obtained when the jth neuron response of the network feature output layer is set to 0, and L(·) denotes the Softmax Loss function.
Similarly, the average influence score Ī_{j,other} of the other visual angles is calculated as follows:
the impact score of the jth neuron of the network feature output layer is:
I_{j,other} = L(γ_{other\j}) − L(γ_{other});
and the average influence score is:
Ī_{j,other} = E_{γ_{other}∈F}[I_{j,other}];
wherein F represents the depth feature set of pedestrians at the other visual angles, E(·) is the averaging operation, I_{j,other} denotes the impact score of the jth neuron, γ_{other} denotes the D-dimensional feature vector output by the basic model M, γ_{other\j} denotes the feature vector obtained when the jth neuron response of the network feature output layer is set to 0, and L(·) denotes the Softmax Loss function.
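The impact-score computation of steps 1.2.1)–1.2.3) can be sketched as follows, assuming a toy linear classifier head in place of the real base model M; the random weights, feature set and labels are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

D = 8
W = rng.normal(size=(2, D))      # stand-in for base model M's classifier head

def softmax_loss(gamma, label):
    """Softmax Loss L(·) of a single D-dimensional feature vector gamma."""
    logits = W @ gamma
    logits -= logits.max()
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[label])

def average_impact_scores(F, labels):
    """Average impact score of each neuron j over a view's feature set F:
    I_j = L(gamma with neuron j zeroed) - L(gamma), averaged over F
    (the E(·) operation in the text)."""
    scores = np.zeros(D)
    for gamma, label in zip(F, labels):
        base = softmax_loss(gamma, label)
        for j in range(D):
            gamma_j = gamma.copy()
            gamma_j[j] = 0.0                 # suppress jth neuron response
            scores[j] += softmax_loss(gamma_j, label) - base
    return scores / len(F)

# Toy forward-view feature set (assumption; real features come from model M).
F_frontal = rng.normal(size=(50, D))
labels = rng.integers(0, 2, size=50)
I_frontal = average_impact_scores(F_frontal, labels)
```

A positive average score means zeroing that neuron raises the loss for this view, i.e. the neuron carries view-relevant information.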
The visual angle fine-tuning steps are specifically as follows:
1.3.1) Adjusting the visual angle of the basic model M by using the average influence score Ī of each visual angle obtained in step 1.2.3). The visual angle adjustment of the basic model M includes a forward propagation process and a backward propagation process.
In the forward propagation process, the feature vector is multiplied element-wise (dot product) with a visual angle influence mask, suppressing neuron responses irrelevant to the corresponding visual angle, to obtain a visual angle adaptive feature vector. The visual angle influence mask is specifically as follows (taking the forward visual angle as an example):
m_{j,frontal} = 1, if Ī_{j,frontal} > 0; m_{j,frontal} = 0, otherwise;
wherein m_{j,frontal} represents the visual angle influence mask of the jth neuron of the network feature output layer, and Ī_{j,frontal} represents the average influence score of the jth neuron.
In the backward propagation process, the visual angle adaptive feature vector is substituted into the Softmax Loss function and the error loss is calculated; the network parameters are optimized by back-propagating the error loss until the model converges, obtaining the feature extraction model P.
1.3.2 Step 1.3.1) is repeated until the model converges, resulting in the feature extraction model P.
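The masking of step 1.3.1) can be sketched as below. Note the binary threshold-at-zero mask rule is an illustrative reconstruction (the patent's mask formula is given only as an image in this source), and the toy scores and feature vector are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 8

# Assumed average impact scores for the forward view (in practice these
# come from step 1.2.3; positive score = neuron matters for this view).
I_frontal = rng.normal(size=D)

# Visual angle influence mask: keep neurons with positive average impact
# score, suppress the rest (assumed binary thresholding rule).
m_frontal = (I_frontal > 0).astype(float)

gamma = rng.normal(size=D)            # feature vector from forward propagation
gamma_adaptive = gamma * m_frontal    # element-wise (dot) multiplication

# The adaptive feature then feeds the Softmax Loss in the backward pass;
# suppressed neurons contribute no gradient, steering feature learning
# toward view-relevant neurons.
```

This shows why masked neurons stop influencing both the prediction and the parameter updates during fine-tuning.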
The steps of the gender identification process are as follows:
2.1) Inputting the test image and the average influence scores obtained in step 1.2) into the feature extraction model P, and obtaining the visual angle adaptive features through forward propagation;
2.2) Calculating the gender probability of the visual angle adaptive features with the Softmax classification function, and outputting the gender identification result.
In this embodiment, the gender identification process is specifically as follows:
the test image and the average influence scores Ī obtained in step 1.2.3) are input into the feature extraction model P; a feature vector is output through forward propagation and multiplied element-wise with the visual angle influence mask to obtain the visual angle adaptive features; the gender probability of the visual angle adaptive features is then calculated with the Softmax classification function, and the gender identification result is output.
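A self-contained sketch of the inference path of steps 2.1)–2.2); the classifier weights, mask, feature vector and label ordering are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 8
W = rng.normal(size=(2, D))              # classifier head of model P (toy)
mask = np.array([1, 0, 1, 1, 0, 1, 0, 1], dtype=float)  # assumed view mask

gamma = rng.normal(size=D)               # feature from forward propagation
gamma_adaptive = gamma * mask            # visual angle adaptive feature

logits = W @ gamma_adaptive
logits -= logits.max()                   # numerical stability
probs = np.exp(logits) / np.exp(logits).sum()     # Softmax gender probabilities
gender = ("male", "female")[int(probs.argmax())]  # label order is an assumption
```

The two-way Softmax over the masked feature is the final step that yields the gender identification result.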
The above examples are provided only to illustrate the present invention and are not intended to limit it. Changes, modifications and the like to the above-described embodiments fall within the scope of the claims of the present invention as long as they accord with its technical spirit.
Claims (7)
1. A pedestrian gender identification method based on visual angle adaptive feature learning is characterized by comprising a visual angle adaptive training process and a gender identification process;
the visual angle self-adaptive training process comprises the following steps:
1.1 Basic model training step: selecting N training images with gender label attributes, inputting the training images into a convolutional neural network for training until the convolutional neural network is converged to obtain a basic model M;
1.2 View influence score calculation step: dividing the training image into a forward visual angle, a backward visual angle and other visual angles, respectively inputting the training image into a basic model M for extracting characteristics, and calculating the average influence score of the corresponding visual angle according to the extracted characteristics;
1.3 View angle fine-tuning step: adjusting the view angle of the basic model M by using the average influence score of each view angle until the model converges to obtain a feature extraction model P;
the steps of the gender identification process are as follows:
2.1) Inputting the test image and the average influence scores obtained in step 1.2) into the feature extraction model P, and obtaining the visual angle adaptive features through forward propagation;
2.2) Calculating the gender probability of the visual angle adaptive features with the Softmax classification function, and outputting the gender identification result;
the basic model training steps are as follows:
1.1.1) Randomly selecting N training images with gender label attributes;
1.1.2) Inputting the selected training images into a convolutional neural network for training;
1.1.3) Repeating step 1.1.1) and step 1.1.2) until the convolutional neural network converges, obtaining a basic model M;
the steps of calculating the visual angle influence score are as follows:
1.2.1) Dividing the training images into a forward visual angle, a backward visual angle and other visual angles;
1.2.2) Respectively inputting the training images of the three visual angles into the basic model M obtained in step 1.1.3), and obtaining the pedestrian depth features γ_frontal, γ_back and γ_other at the three visual angles through forward propagation;
1.2.3) According to the pedestrian depth features γ_frontal, γ_back and γ_other at the three visual angles, calculating the corresponding average influence scores Ī_frontal, Ī_back and Ī_other;
the visual angle fine-tuning steps are as follows:
1.3.1) Adjusting the visual angle of the basic model M by using the average influence score Ī of each visual angle obtained in step 1.2.3);
1.3.2) Repeating step 1.3.1) until the model converges, obtaining the feature extraction model P.
2. The pedestrian gender identification method based on visual angle adaptive feature learning of claim 1, wherein the average influence score Ī_{j,frontal} of the forward visual angle is calculated as follows:
the impact score of the jth neuron of the network feature output layer is:
I_{j,frontal} = L(γ_{frontal\j}) − L(γ_{frontal});
and the average influence score is:
Ī_{j,frontal} = E_{γ_{frontal}∈F}[I_{j,frontal}];
wherein F represents the depth feature set of pedestrians at the forward visual angle, E(·) is the averaging operation, I_{j,frontal} denotes the impact score of the jth neuron, γ_{frontal} denotes the D-dimensional feature vector output by the basic model M, γ_{frontal\j} denotes the feature vector obtained when the jth neuron response of the network feature output layer is set to 0, and L(·) denotes the Softmax Loss function.
3. The pedestrian gender identification method based on visual angle adaptive feature learning of claim 1, wherein the average influence score Ī_{j,back} of the backward visual angle is calculated as follows:
the impact score of the jth neuron of the network feature output layer is:
I_{j,back} = L(γ_{back\j}) − L(γ_{back});
and the average influence score is:
Ī_{j,back} = E_{γ_{back}∈F}[I_{j,back}];
wherein F represents the depth feature set of pedestrians at the backward visual angle, E(·) is the averaging operation, I_{j,back} denotes the impact score of the jth neuron, γ_{back} denotes the D-dimensional feature vector output by the basic model M, γ_{back\j} denotes the feature vector obtained when the jth neuron response of the network feature output layer is set to 0, and L(·) denotes the Softmax Loss function.
4. The pedestrian gender identification method based on visual angle adaptive feature learning of claim 1, wherein the average influence score Ī_{j,other} of the other visual angles is calculated as follows:
the impact score of the jth neuron of the network feature output layer is:
I_{j,other} = L(γ_{other\j}) − L(γ_{other});
and the average influence score is:
Ī_{j,other} = E_{γ_{other}∈F}[I_{j,other}];
wherein F represents the depth feature set of pedestrians at the other visual angles, E(·) is the averaging operation, I_{j,other} denotes the impact score of the jth neuron, γ_{other} denotes the D-dimensional feature vector output by the basic model M, γ_{other\j} denotes the feature vector obtained when the jth neuron response of the network feature output layer is set to 0, and L(·) denotes the Softmax Loss function.
5. The pedestrian gender identification method based on visual angle adaptive feature learning of claim 1, wherein in step 1.3.1), the visual angle adjustment of the basic model M includes a forward propagation process and a backward propagation process;
in the forward propagation process, the feature vector is multiplied element-wise (dot product) with a visual angle influence mask, suppressing neuron responses irrelevant to the corresponding visual angle, to obtain a visual angle adaptive feature vector;
in the backward propagation process, the visual angle adaptive feature vector is substituted into the Softmax Loss function and the error loss is calculated; the network parameters are optimized by back-propagating the error loss until the model converges, obtaining the feature extraction model P.
6. The pedestrian gender identification method based on visual angle adaptive feature learning of claim 5, wherein the visual angle influence mask is specifically as follows (taking the forward visual angle as an example): m_{j,frontal} = 1, if Ī_{j,frontal} > 0; m_{j,frontal} = 0, otherwise; wherein m_{j,frontal} represents the visual angle influence mask of the jth neuron of the network feature output layer, and Ī_{j,frontal} represents the average influence score of the jth neuron.
7. The pedestrian gender identification method based on visual angle adaptive feature learning of claim 6, wherein the gender identification process is specifically as follows:
the test image and the average influence scores Ī obtained in step 1.2.3) are input into the feature extraction model P; a feature vector is output through forward propagation and multiplied element-wise with the visual angle influence mask to obtain the visual angle adaptive features; the gender probability of the visual angle adaptive features is then calculated with the Softmax classification function, and the gender identification result is output.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911370041.9A (CN111160226B) | 2019-12-26 | 2019-12-26 | Pedestrian gender identification method based on visual angle adaptive feature learning |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111160226A | 2020-05-15 |
| CN111160226B | 2023-03-31 |
Family: ID=70556854
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911370041.9A | Pedestrian gender identification method based on visual angle adaptive feature learning | 2019-12-26 | 2019-12-26 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN111160226B (en) |
Citations (3)
| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| WO2018107760A1 * | 2016-12-16 | 2018-06-21 | Collaborative deep network model method for pedestrian detection |
| CN108009525A * | 2017-12-25 | 2018-05-08 | A convolutional-neural-network-based method for UAV recognition of specific ground targets |
| CN109711281A * | 2018-12-10 | 2019-05-03 | A deep-learning-based pedestrian re-identification and feature fusion method |
Legal events: 2019-12-26 — application CN201911370041.9A filed in CN; resulting patent CN111160226B is active.
Non-Patent Citations (1)
Title |
---|
Pedestrian Gender Detection under Video Surveillance; Su Ning et al.; Modern Computer (Professional Edition) (《现代计算机(专业版)》); 2018-10-15, Issue 29; full text *
Also Published As
Publication number | Publication date |
---|---|
CN111160226A (en) | 2020-05-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109145939B (en) | Semantic segmentation method for small-target sensitive dual-channel convolutional neural network | |
CN110163110B (en) | Pedestrian re-recognition method based on transfer learning and depth feature fusion | |
US11195051B2 (en) | Method for person re-identification based on deep model with multi-loss fusion training strategy | |
CN107563372B (en) | License plate positioning method based on deep learning SSD frame | |
CN110781838B (en) | Multi-mode track prediction method for pedestrians in complex scene | |
CN109543695B (en) | Population-density population counting method based on multi-scale deep learning | |
CN110163187B (en) | F-RCNN-based remote traffic sign detection and identification method | |
CN109815826B (en) | Method and device for generating face attribute model | |
CN110276264B (en) | Crowd density estimation method based on foreground segmentation graph | |
CN111191667B (en) | Crowd counting method based on multiscale generation countermeasure network | |
CN110135296A (en) | Airfield runway FOD detection method based on convolutional neural networks | |
CN107145889A (en) | Target identification method based on double CNN networks with RoI ponds | |
CN103605972A (en) | Non-restricted environment face verification method based on block depth neural network | |
CN106096535A (en) | A kind of face verification method based on bilinearity associating CNN | |
CN111178208A (en) | Pedestrian detection method, device and medium based on deep learning | |
CN104504362A (en) | Face detection method based on convolutional neural network | |
CN110390308B (en) | Video behavior identification method based on space-time confrontation generation network | |
CN103971091B (en) | Automatic plane number recognition method | |
CN107657625A (en) | Merge the unsupervised methods of video segmentation that space-time multiple features represent | |
CN109948593A (en) | Based on the MCNN people counting method for combining global density feature | |
CN113963032A (en) | Twin network structure target tracking method fusing target re-identification | |
CN106651915A (en) | Target tracking method of multi-scale expression based on convolutional neural network | |
CN112861970B (en) | Fine-grained image classification method based on feature fusion | |
CN103839033A (en) | Face identification method based on fuzzy rule | |
CN106682681A (en) | Recognition algorithm automatic improvement method based on relevance feedback |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
EE01 | Entry into force of recordation of patent licensing contract | ||
Application publication date: 20200515 Assignee: Xiamen yunzhixin Intelligent Technology Co.,Ltd. Assignor: HUAQIAO University Contract record no.: X2024990000310 Denomination of invention: A Pedestrian Gender Recognition Method Based on Perspective Adaptive Feature Learning Granted publication date: 20230331 License type: Common License Record date: 20240627 |