CN105631415A - Video pedestrian recognition method based on convolution neural network - Google Patents
Video pedestrian recognition method based on convolution neural network
- Publication number
- CN105631415A CN105631415A CN201510984354.9A CN201510984354A CN105631415A CN 105631415 A CN105631415 A CN 105631415A CN 201510984354 A CN201510984354 A CN 201510984354A CN 105631415 A CN105631415 A CN 105631415A
- Authority
- CN
- China
- Prior art keywords
- video
- layer
- neural networks
- convolutional neural
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The present invention discloses a video pedestrian recognition method based on a convolutional neural network. The method comprises: reading a video from a video database, intercepting video frames, and extracting the HOG features of the frames; constructing and training the convolutional neural network; selecting several person-feature attributes and designing and training a support vector machine classifier for each attribute; and inputting the HOG features into the trained convolutional neural network model and performing sifting classification on each person feature. The method has the following advantages: the convolutional neural network approach yields a good recognition rate; extracting HOG features reduces the amount of computation and increases speed; and the constructed convolutional neural network has a certain depth and, combined with support vector machines, performs classification in multiple stages, greatly improving recognition efficiency and accuracy.
Description
Technical field
The present invention relates to the technical field of pattern recognition, and in particular to a video pedestrian recognition method based on a convolutional neural network.
Background technology
With the development of multimedia and Internet technology, video pedestrian recognition has become a popular research topic in computer vision in recent years, with broad application prospects in intelligent transportation, missing-person search, and security. A traditional algorithm for video pedestrian recognition is the artificial neural network, which abstracts the neural network of the human brain from an information-processing perspective and builds a simple model of it. The training algorithm for artificial neural networks is back-propagation, which lets the network model learn statistical regularities from a large number of training samples and thereby predict unknown events. The advantages of artificial neural networks are strong nonlinear mapping ability, self-learning and adaptive ability, generalization ability, and a certain fault tolerance. However, they also have drawbacks: convergence is slow when training on pedestrian samples; training is a supervised process, and labeling the training samples is cumbersome and time-consuming; and video pedestrian recognition involves computing and analyzing massive data under the interference of many environmental factors, so traditional recognition algorithms cannot extract the preferred features of an image, which limits the recognition rate.
Summary of the invention
It is an object of the present invention to provide a video pedestrian recognition method based on a convolutional neural network that has a high recognition success rate, stable training, a low computational load, and high speed.
To achieve the above object, the invention provides a video pedestrian recognition method based on a convolutional neural network, comprising the following steps:
Step 101: read a video from the video library, intercept video frames, and extract the HOG features of the video frames with a HOG feature extraction algorithm;
Step 102: construct a convolutional neural network, train it with the HOG features obtained in step 101, and obtain a trained convolutional neural network model;
Step 103: select several representative person-feature attributes, including appearance-related attributes and body-related attributes, design one support vector machine classifier for each attribute, and train each classifier with the HOG features obtained in step 101, finally obtaining trained support vector machine classifiers;
Step 104: input the HOG features obtained in step 101 into the trained convolutional neural network model, which performs a first, weak classification through multiple convolution and down-sampling stages to obtain preferred features;
Step 105: input the obtained preferred features into each trained support vector machine classifier, which performs sifting classification on each attribute, finally obtaining several data sets corresponding to the person-feature attributes, where each data set contains the data of the video frames that share the same person attribute.
In step 101, the HOG features of a video frame are extracted with the HOG feature extraction algorithm as follows:
Step 201: convert the video frame to grayscale;
Step 202: normalize the color space of the input frame using the Gamma correction method;
Step 203: compute the gradient of each pixel of the frame, including its magnitude and orientation;
Step 204: divide the frame into small cells of 6 × 6 pixels each;
Step 205: accumulate a histogram of gradients over each cell to form the descriptor of that cell;
Step 206: group every nine cells into a block, and concatenate the descriptors of all cells in a block to obtain the descriptor of that block;
Step 207: concatenate the descriptors of all blocks in the same video frame to obtain the descriptor of that frame.
In step 102, the constructed convolutional neural network comprises nine hidden layers, specifically:
First convolutional layer C1: 2 feature planes; convolution kernel size 5 × 5;
First down-sampling layer S1: 2 feature planes; pooling window size 2 × 2;
Second convolutional layer C2: 4 feature planes; convolution kernel size 5 × 5;
Second down-sampling layer S2: 4 feature planes; pooling window size 2 × 2;
Third convolutional layer C3: 8 feature planes; convolution kernel size 5 × 5;
Third down-sampling layer S3: 8 feature planes; pooling window size 2 × 2;
Fourth convolutional layer C4: 16 feature planes; convolution kernel size 5 × 5;
Fourth down-sampling layer S4: 16 feature planes; pooling window size 2 × 2;
The last layer is the output layer.
Wherein the convolution kernels in the convolutional neural network are chosen as the Roberts operator and the Prewitt operator, and Newton's method is used to adjust the network weights.
In step 102, the convolutional neural network model is trained with the HOG features as follows:
Step 1: perform deconvolution on the output of each hidden layer, and compare the deconvolution result with the input features of that layer to obtain the error E;
Step 2: adjust the convolution kernel weights by Newton's method, with the formula W* = w - E'(w) / E''(w), where W* is the updated weight, and E'(w) and E''(w) are the first- and second-order partial derivatives of the error with respect to the weight, respectively.
Using steps 1 and 2, all training samples undergo 10 training iterations, i.e., the network weights are updated 10 times; training yields a convolutional neural network model that can extract the preferred features of an image.
Wherein, in step 103, the appearance-related attributes include: wearing a hat, wearing glasses, carrying a backpack, varied jacket color, single lower-garment color, and single shoe color. The body-related attributes include: male, female, height above 1.7 m, and height at or below 1.7 m.
In step 103, the support vector machine classifiers are trained with the HOG features as follows: using the sequential minimal optimization algorithm, the HOG features are input into each support vector machine classifier for training to obtain a model, and detection is then performed with the model; the kernel function of the support vector machine classifiers during training is a polynomial function.
In step 104, the convolution process is: convolve the input video frame with a trainable filter f_x, then add a bias b_x to obtain convolutional layer C_x. The down-sampling process is: sum each neighbourhood of four pixels into one pixel, weight it by a scalar W_{x+1}, add a bias b_{x+1}, and pass the result through a sigmoid activation function, producing a feature map S_{x+1} reduced by roughly a factor of four. Here x is the index of the current convolutional layer.
The beneficial effects of the invention are as follows: compared with traditional pedestrian recognition methods, the convolutional neural network approach achieves a better recognition rate; extracting HOG features reduces the amount of computation and increases speed; and the constructed convolutional neural network has a certain depth and, combined with support vector machines, performs classification in several stages, greatly improving recognition efficiency and accuracy.
Accompanying drawing explanation
Fig. 1 is a flow chart of the video pedestrian recognition method based on a convolutional neural network of the present invention.
Detailed description of the invention
The convolutional neural network is an efficient recognition algorithm that has been widely used in fields such as image processing in recent years; it is one structure of neural network. The optimization objective of a neural network is based on empirical risk minimization, which easily falls into local optima, gives less stable training results, and generally requires large samples. The support vector machine, in contrast, has a rigorous theoretical and mathematical foundation, is based on structural risk minimization, generalizes better, and its algorithm has global optimality; it is a theory for small-sample statistics. Therefore, first using a convolutional neural network to perform a first classification of the video frames and obtain preferred features, and then performing a second classification with support vector machines, improves the recognition success rate.
Referring to Fig. 1, an embodiment of the invention provides a video pedestrian recognition method based on a convolutional neural network, comprising the following steps:
Step 101: read a video from the video library, intercept video frames, and extract the HOG features of the video frames with a HOG feature extraction algorithm;
Step 102: construct a convolutional neural network, train it with the HOG features obtained in step 101, and obtain a trained convolutional neural network model;
Step 103: select several representative person-feature attributes, including appearance-related attributes and body-related attributes, design one support vector machine classifier for each attribute, and train each classifier with the HOG features obtained in step 101, finally obtaining trained support vector machine classifiers;
Step 104: input the HOG features obtained in step 101 into the trained convolutional neural network model, which performs a first, weak classification through multiple convolution and down-sampling stages to obtain preferred features;
Step 105: input the obtained preferred features into each trained support vector machine classifier, which performs sifting classification on each attribute, finally obtaining several data sets corresponding to the person-feature attributes, where each data set contains the data of the video frames that share the same person attribute.
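As an illustration of step 105, the Python sketch below shows how preferred features could be routed into one data set per person-feature attribute. The attribute names and the stub lambda classifiers are hypothetical stand-ins for the trained support vector machine classifiers.

```python
# Sketch of step 105: route video frames into one data set per attribute.
# The attribute names and the stub lambda classifiers are hypothetical
# stand-ins for the trained support vector machine classifiers.

def group_by_attribute(frames, classifiers):
    """Collect, for each attribute classifier, the frames it accepts."""
    datasets = {name: [] for name in classifiers}
    for frame_id, feature in frames:
        for name, clf in classifiers.items():
            if clf(feature):  # binary decision for this attribute
                datasets[name].append(frame_id)
    return datasets

# Toy stand-ins for trained classifiers (illustrative thresholds).
classifiers = {
    "wears_hat": lambda f: f[0] > 0.5,
    "backpack":  lambda f: f[1] > 0.5,
}
frames = [("frame1", [0.9, 0.1]), ("frame2", [0.2, 0.8])]
print(group_by_attribute(frames, classifiers))
# {'wears_hat': ['frame1'], 'backpack': ['frame2']}
```

A frame can land in several data sets at once, which matches the patent's goal of one data set per attribute rather than a single partition.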
Wherein, in step 101, the HOG features of a video frame are extracted with the HOG feature extraction algorithm as follows:
Step 301: convert the video frame to grayscale;
Step 302: normalize the color space of the input frame using the Gamma correction method;
Step 303: compute the gradient of each pixel of the frame, including its magnitude and orientation;
Step 304: divide the frame into small cells of 6 × 6 pixels each;
Step 305: accumulate a histogram of gradients over each cell to form the descriptor of that cell;
Step 306: group every nine cells into a block, and concatenate the descriptors of all cells in a block to obtain the descriptor of that block;
Step 307: concatenate the descriptors of all blocks in the same video frame to obtain the descriptor of that frame.
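The steps above can be sketched in NumPy as follows. This is a simplified illustration only: the square-root gamma correction, the unsigned gradient orientations, and the 9 orientation bins are assumptions (the patent specifies only the 6 × 6 cell size), and block grouping (steps 306-307) is reduced to a plain concatenation of all cell histograms.

```python
import numpy as np

def hog_descriptor(img, cell=6, bins=9):
    """Minimal HOG sketch for steps 301-307 (no block normalisation)."""
    img = img.astype(np.float64) ** 0.5             # step 302: gamma (sqrt) correction
    gy, gx = np.gradient(img)                       # step 303: per-pixel gradients
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180      # unsigned orientation in [0, 180)
    h, w = img.shape
    cells_y, cells_x = h // cell, w // cell
    hist = np.zeros((cells_y, cells_x, bins))
    for cy in range(cells_y):                       # steps 304-305: cell histograms
        for cx in range(cells_x):
            m = mag[cy*cell:(cy+1)*cell, cx*cell:(cx+1)*cell]
            a = ang[cy*cell:(cy+1)*cell, cx*cell:(cx+1)*cell]
            idx = (a / (180 / bins)).astype(int) % bins
            for b in range(bins):
                hist[cy, cx, b] = m[idx == b].sum()
    return hist.ravel()                             # steps 306-307: concatenate

feat = hog_descriptor(np.random.rand(36, 36))
print(feat.shape)   # (324,): 6x6 cells times 9 bins
```

A production implementation would interpolate votes between bins and normalize each nine-cell block, as in the standard HOG formulation.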
Wherein, in step 102, the constructed convolutional neural network comprises nine hidden layers, specifically:
First convolutional layer C1: 2 feature planes; convolution kernel size 5 × 5;
First down-sampling layer S1: 2 feature planes; pooling window size 2 × 2;
Second convolutional layer C2: 4 feature planes; convolution kernel size 5 × 5;
Second down-sampling layer S2: 4 feature planes; pooling window size 2 × 2;
Third convolutional layer C3: 8 feature planes; convolution kernel size 5 × 5;
Third down-sampling layer S3: 8 feature planes; pooling window size 2 × 2;
Fourth convolutional layer C4: 16 feature planes; convolution kernel size 5 × 5;
Fourth down-sampling layer S4: 16 feature planes; pooling window size 2 × 2;
The last layer is the output layer.
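The feature-map sizes through these layers can be traced with a short script. The 76 × 76 input size is an assumption chosen so that every 5 × 5 valid convolution and 2 × 2 pooling divides evenly; the patent does not state an input size.

```python
# Trace of feature-map sizes through the nine hidden layers listed above.
# The 76x76 input size is an assumption (the patent gives no input size).

layers = [
    ("C1", "conv", 5, 2),  ("S1", "pool", 2, 2),
    ("C2", "conv", 5, 4),  ("S2", "pool", 2, 4),
    ("C3", "conv", 5, 8),  ("S3", "pool", 2, 8),
    ("C4", "conv", 5, 16), ("S4", "pool", 2, 16),
]

size = 76
for name, kind, k, planes in layers:
    # valid convolution shrinks by k-1; pooling divides by k
    size = size - k + 1 if kind == "conv" else size // k
    print(f"{name}: {planes} feature planes, {size}x{size}")
# S4 ends at 1x1, so the output layer receives 16 scalar values.
```

Under this assumed input, the sizes run 72, 36, 32, 16, 12, 6, 2, 1, showing why four conv/pool pairs give the network "a certain depth" while keeping the final feature vector small.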
Wherein the convolution kernels in the convolutional neural network are chosen as the Roberts operator and the Prewitt operator, and Newton's method is used to adjust the network weights.
Wherein, in step 102, the convolutional neural network model is trained with the HOG features as follows:
Step 601: perform deconvolution on the output of each hidden layer, and compare the deconvolution result with the input features of that layer to obtain the error E;
Step 602: adjust the convolution kernel weights by Newton's method, with the formula W* = w - E'(w) / E''(w), where W* is the updated weight, and E'(w) and E''(w) are the first- and second-order partial derivatives of the error with respect to the weight, respectively.
Using steps 601 and 602, all training samples undergo 10 training iterations, i.e., the network weights are updated 10 times; training yields a convolutional neural network model that can extract the preferred features of an image.
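A one-dimensional sketch of the Newton update of step 602. The quadratic error E(w) = (w - 3)^2 is a toy illustration, not the patent's deconvolution-based error.

```python
# Newton weight update: W* = w - E'(w) / E''(w).
# Toy quadratic error E(w) = (w - 3)^2, chosen only for illustration.

def newton_step(w, first_deriv, second_deriv):
    """Return the updated weight W* given E'(w) and E''(w)."""
    return w - first_deriv / second_deriv

w = 10.0
for _ in range(10):          # ten passes, matching the ten updates above
    d1 = 2.0 * (w - 3.0)     # E'(w)
    d2 = 2.0                 # E''(w)
    w = newton_step(w, d1, d2)
print(w)   # 3.0 (a quadratic error converges in a single Newton step)
```

For a real network the derivatives are per-weight and the Hessian term is usually approximated, but the update rule has this same form.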
Wherein, in step 103, the appearance-related attributes include: wearing a hat, wearing glasses, carrying a backpack, varied jacket color, single lower-garment color, and single shoe color. The body-related attributes include: male, female, height above 1.7 m, and height at or below 1.7 m.
Wherein, in step 103, the support vector machine classifiers are trained with the HOG features as follows: using the sequential minimal optimization algorithm, the HOG features are input into each support vector machine classifier for training to obtain a model, and detection is then performed with the model; the kernel function of the support vector machine classifiers during training is a polynomial function.
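The patent states only that the kernel is a polynomial function; a common form is sketched below, where the degree and constant term coef0 are assumptions.

```python
# Polynomial kernel K(x, y) = (x . y + coef0) ** degree.
# degree=2 and coef0=1.0 are illustrative defaults; the patent does not
# specify the polynomial's parameters.

def poly_kernel(x, y, degree=2, coef0=1.0):
    """Evaluate the polynomial kernel on two feature vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    return (dot + coef0) ** degree

print(poly_kernel([1.0, 2.0], [3.0, 4.0]))   # (11 + 1) ** 2 = 144.0
```

In an SVM this kernel replaces the inner product in the decision function, letting the classifier separate attribute classes with polynomial decision surfaces.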
Wherein, in step 104, the convolution process is: convolve the input video frame with a trainable filter f_x, then add a bias b_x to obtain convolutional layer C_x. The down-sampling process is: sum each neighbourhood of four pixels into one pixel, weight it by a scalar W_{x+1}, add a bias b_{x+1}, and pass the result through a sigmoid activation function, producing a feature map S_{x+1} reduced by roughly a factor of four.
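The convolution and down-sampling steps can be sketched in NumPy as follows; the filter, biases, and scalar weight values are illustrative placeholders for trained parameters, and the 12 × 12 input is a toy size.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_layer(frame, f, b):
    """Valid 2-D convolution with trainable filter f plus bias b (layer Cx)."""
    kh, kw = f.shape
    h, w = frame.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * f) + b
    return out

def downsample_layer(c, weight, bias):
    """Sum each 2x2 four-pixel neighbourhood, weight, add bias, then sigmoid."""
    h, w = c.shape
    pooled = c[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
    return sigmoid(weight * pooled + bias)

frame = np.random.rand(12, 12)                         # toy input frame
c = conv_layer(frame, np.ones((5, 5)) / 25.0, 0.1)     # illustrative f_x and b_x
s = downsample_layer(c, 0.5, -0.2)                     # illustrative W and b
print(c.shape, s.shape)   # (8, 8) (4, 4)
```

Each down-sampling layer thus quarters the number of pixels, which is the "reduced by roughly a factor of four" behaviour described above.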
The foregoing is only a preferred embodiment of the present invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (6)
1. A video pedestrian recognition method based on a convolutional neural network, characterized in that it comprises the following steps:
Step 101: read a video from the video library, intercept video frames, and extract the HOG features of the video frames with a HOG feature extraction algorithm;
Step 102: construct a convolutional neural network, train it with the HOG features obtained in step 101, and obtain a trained convolutional neural network model;
Step 103: select several representative person-feature attributes, including appearance-related attributes and body-related attributes, design one support vector machine classifier for each attribute, and train each classifier with the HOG features obtained in step 101, finally obtaining trained support vector machine classifiers;
Step 104: input the HOG features obtained in step 101 into the trained convolutional neural network model, which performs a first, weak classification through multiple convolution and down-sampling stages to obtain preferred features;
Step 105: input the obtained preferred features into each trained support vector machine classifier, which performs sifting classification on each attribute, finally obtaining several data sets corresponding to the person-feature attributes, where each data set contains the data of the video frames that share the same person attribute.
2. The video pedestrian recognition method based on a convolutional neural network according to claim 1, characterized in that in step 101 the HOG features of the video frames are extracted with the HOG feature extraction algorithm as follows:
Step 201: convert the video frame to grayscale;
Step 202: normalize the color space of the input frame using the Gamma correction method;
Step 203: compute the gradient of each pixel of the frame, including its magnitude and orientation;
Step 204: divide the frame into small cells of 6 × 6 pixels each;
Step 205: accumulate a histogram of gradients over each cell to form the descriptor of that cell;
Step 206: group every nine cells into a block, and concatenate the descriptors of all cells in a block to obtain the descriptor of that block;
Step 207: concatenate the descriptors of all blocks in the same video frame to obtain the descriptor of that frame.
3. The video pedestrian recognition method based on a convolutional neural network according to claim 1, characterized in that in step 102 the constructed convolutional neural network comprises nine hidden layers, specifically:
First convolutional layer C1: 2 feature planes; convolution kernel size 5 × 5;
First down-sampling layer S1: 2 feature planes; pooling window size 2 × 2;
Second convolutional layer C2: 4 feature planes; convolution kernel size 5 × 5;
Second down-sampling layer S2: 4 feature planes; pooling window size 2 × 2;
Third convolutional layer C3: 8 feature planes; convolution kernel size 5 × 5;
Third down-sampling layer S3: 8 feature planes; pooling window size 2 × 2;
Fourth convolutional layer C4: 16 feature planes; convolution kernel size 5 × 5;
Fourth down-sampling layer S4: 16 feature planes; pooling window size 2 × 2;
The last layer is the output layer.
4. The video pedestrian recognition method based on a convolutional neural network according to claim 3, characterized in that the convolution kernels in the convolutional neural network are chosen as the Roberts operator and the Prewitt operator, and Newton's method is used to adjust the network weights.
5. The video pedestrian recognition method based on a convolutional neural network according to claim 1, characterized in that in step 103 the appearance-related attributes include: wearing a hat, wearing glasses, carrying a backpack, varied jacket color, single lower-garment color, and single shoe color; and the body-related attributes include: male, female, height above 1.7 m, and height at or below 1.7 m.
6. The video pedestrian recognition method based on a convolutional neural network according to claim 1, characterized in that in step 104 the convolution process is: convolve the input video frame with a trainable filter f_x, then add a bias b_x to obtain convolutional layer C_x; and the down-sampling process is: sum each neighbourhood of four pixels into one pixel, weight it by a scalar W_{x+1}, add a bias b_{x+1}, and pass the result through a sigmoid activation function, producing a feature map S_{x+1} reduced by roughly a factor of four, where x is the index of the current convolutional layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510984354.9A CN105631415A (en) | 2015-12-25 | 2015-12-25 | Video pedestrian recognition method based on convolution neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510984354.9A CN105631415A (en) | 2015-12-25 | 2015-12-25 | Video pedestrian recognition method based on convolution neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105631415A true CN105631415A (en) | 2016-06-01 |
Family
ID=56046328
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510984354.9A Pending CN105631415A (en) | 2015-12-25 | 2015-12-25 | Video pedestrian recognition method based on convolution neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105631415A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104537647A (en) * | 2014-12-12 | 2015-04-22 | 中安消技术有限公司 | Target detection method and device |
CN104636732A (en) * | 2015-02-12 | 2015-05-20 | 合肥工业大学 | Sequence deeply convinced network-based pedestrian identifying method |
CN104992142A (en) * | 2015-06-03 | 2015-10-21 | 江苏大学 | Pedestrian recognition method based on combination of depth learning and property learning |
CN105160317A (en) * | 2015-08-31 | 2015-12-16 | 电子科技大学 | Pedestrian gender identification method based on regional blocks |
- 2015-12-25: application CN201510984354.9A filed in China; published as CN105631415A; status: Pending
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106096568B (en) * | 2016-06-21 | 2019-06-11 | 同济大学 | A kind of pedestrian's recognition methods again based on CNN and convolution LSTM network |
CN106096568A (en) * | 2016-06-21 | 2016-11-09 | 同济大学 | A kind of pedestrian's recognition methods again based on CNN and convolution LSTM network |
CN106127164B (en) * | 2016-06-29 | 2019-04-16 | 北京智芯原动科技有限公司 | Pedestrian detection method and device based on conspicuousness detection and convolutional neural networks |
CN106127164A (en) * | 2016-06-29 | 2016-11-16 | 北京智芯原动科技有限公司 | The pedestrian detection method with convolutional neural networks and device is detected based on significance |
CN106203318A (en) * | 2016-06-29 | 2016-12-07 | 浙江工商大学 | The camera network pedestrian recognition method merged based on multi-level depth characteristic |
CN106203318B (en) * | 2016-06-29 | 2019-06-11 | 浙江工商大学 | Camera network pedestrian recognition method based on the fusion of multi-level depth characteristic |
CN106295507A (en) * | 2016-07-25 | 2017-01-04 | 华南理工大学 | A kind of gender identification method based on integrated convolutional neural networks |
CN106295507B (en) * | 2016-07-25 | 2019-10-18 | 华南理工大学 | A kind of gender identification method based on integrated convolutional neural networks |
CN106156765A (en) * | 2016-08-30 | 2016-11-23 | 南京邮电大学 | safety detection method based on computer vision |
CN106651973A (en) * | 2016-09-28 | 2017-05-10 | 北京旷视科技有限公司 | Image structuring method and device |
CN106651973B (en) * | 2016-09-28 | 2020-10-02 | 北京旷视科技有限公司 | Image structuring method and device |
CN106611156B (en) * | 2016-11-03 | 2019-12-20 | 桂林电子科技大学 | Pedestrian identification method and system based on self-adaptive depth space characteristics |
CN106611156A (en) * | 2016-11-03 | 2017-05-03 | 桂林电子科技大学 | Pedestrian recognition method and system capable of self-adapting to deep space features |
CN106529503B (en) * | 2016-11-30 | 2019-10-18 | 华南理工大学 | A kind of integrated convolutional neural networks face emotion identification method |
CN106529503A (en) * | 2016-11-30 | 2017-03-22 | 华南理工大学 | Method for recognizing face emotion by using integrated convolutional neural network |
CN106778576A (en) * | 2016-12-06 | 2017-05-31 | 中山大学 | A kind of action identification method based on SEHM feature graphic sequences |
CN106778576B (en) * | 2016-12-06 | 2020-05-26 | 中山大学 | Motion recognition method based on SEHM characteristic diagram sequence |
CN106529511B (en) * | 2016-12-13 | 2019-12-10 | 北京旷视科技有限公司 | image structuring method and device |
CN106529511A (en) * | 2016-12-13 | 2017-03-22 | 北京旷视科技有限公司 | Image structuring method and device |
CN106778902B (en) * | 2017-01-03 | 2020-01-21 | 河北工业大学 | Dairy cow individual identification method based on deep convolutional neural network |
CN106778902A (en) * | 2017-01-03 | 2017-05-31 | 河北工业大学 | Milk cow individual discrimination method based on depth convolutional neural networks |
CN106897673A (en) * | 2017-01-20 | 2017-06-27 | 南京邮电大学 | A kind of recognition methods again of the pedestrian based on retinex algorithms and convolutional neural networks |
CN106951872A (en) * | 2017-03-24 | 2017-07-14 | 江苏大学 | A kind of recognition methods again of the pedestrian based on unsupervised depth model and hierarchy attributes |
CN106951872B (en) * | 2017-03-24 | 2020-11-06 | 江苏大学 | Pedestrian re-identification method based on unsupervised depth model and hierarchical attributes |
CN108229288A (en) * | 2017-06-23 | 2018-06-29 | 北京市商汤科技开发有限公司 | Neural network training and clothing color detection method and device, storage medium, and electronic device |
CN108229288B (en) * | 2017-06-23 | 2020-08-11 | 北京市商汤科技开发有限公司 | Neural network training and clothes color detection method and device, storage medium and electronic equipment |
CN107590449A (en) * | 2017-08-31 | 2018-01-16 | 电子科技大学 | Gesture detection method based on weighted feature spectrum fusion |
CN107644213A (en) * | 2017-09-26 | 2018-01-30 | 司马大大(北京)智能系统有限公司 | Video person extraction method and device |
CN108052929A (en) * | 2017-12-29 | 2018-05-18 | 湖南乐泊科技有限公司 | Parking space state detection method, system, readable storage medium, and computer device |
CN108446649A (en) * | 2018-03-27 | 2018-08-24 | 百度在线网络技术(北京)有限公司 | Method and device for alarm |
CN108683877B (en) * | 2018-03-30 | 2020-04-28 | 中国科学院自动化研究所 | Spark-based distributed massive video analysis system |
CN108683877A (en) * | 2018-03-30 | 2018-10-19 | 中国科学院自动化研究所 | Spark-based distributed massive video analysis system |
CN110555341A (en) * | 2018-05-31 | 2019-12-10 | 北京深鉴智能科技有限公司 | Pooling method and apparatus, detection method and apparatus, electronic device, storage medium |
CN109522807A (en) * | 2018-10-22 | 2019-03-26 | 深圳先进技术研究院 | Satellite image recognition system and method based on self-generated features, and electronic device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105631415A (en) | Video pedestrian recognition method based on convolution neural network | |
Hao et al. | Two-stream deep architecture for hyperspectral image classification | |
CN110689086B (en) | Semi-supervised high-resolution remote sensing image scene classification method based on generating countermeasure network | |
CN110222215B (en) | Crop pest detection method based on F-SSD-IV3 | |
CN108319957A (en) | Large-scale point cloud semantic segmentation method based on supervoxel graphs | |
CN108614997B (en) | Remote sensing image identification method based on improved AlexNet | |
CN108830330A (en) | Multispectral image classification method based on adaptive feature fusion residual network | |
CN108734719A (en) | Automatic foreground-background segmentation method for lepidopteran insect images based on fully convolutional neural networks | |
CN109711401A (en) | Text detection method for natural scene images based on Faster R-CNN | |
CN112115967B (en) | Image increment learning method based on data protection | |
Li et al. | A shallow convolutional neural network for apple classification | |
Yan et al. | Monocular depth estimation with guidance of surface normal map | |
CN112416293B (en) | Neural network enhancement method, system and application thereof | |
CN113435254A (en) | Sentinel second image-based farmland deep learning extraction method | |
Li et al. | EMFNet: Enhanced multisource fusion network for land cover classification | |
CN107423771B (en) | Two-time-phase remote sensing image change detection method | |
CN115631462A (en) | AM-YOLOX-based strawberry disease and pest detection method and system | |
CN117079098A (en) | Space small target detection method based on position coding | |
Saraswat et al. | Plant Disease Identification Using Plant Images | |
CN117372881B (en) | Intelligent identification method, medium and system for tobacco plant diseases and insect pests | |
Dai et al. | DFN-PSAN: Multi-level deep information feature fusion extraction network for interpretable plant disease classification | |
CN110414560A (en) | Autonomous subspace clustering method for high-dimensional images | |
CN113221913A (en) | Agriculture and forestry disease and pest fine-grained identification method and device based on Gaussian probability decision-level fusion | |
Dai et al. | MDC-Net: A multi-directional constrained and prior assisted neural network for wood and leaf separation from terrestrial laser scanning | |
Liu et al. | “Is this blueberry ripe?”: a blueberry ripeness detection algorithm for use on picking robots |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20160601 |