CN106778650A - Scene adaptive pedestrian detection method and system based on polymorphic type information fusion - Google Patents

Scene adaptive pedestrian detection method and system based on polymorphic type information fusion Download PDF

Info

Publication number
CN106778650A
CN106778650A (application CN201611219029.4A)
Authority
CN
China
Prior art keywords
region
scene
area
pedestrian
foreground area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611219029.4A
Other languages
Chinese (zh)
Inventor
黄缨宁
彭超然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Polar View Technology Co Ltd
Original Assignee
Shenzhen Polar View Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Polar View Technology Co Ltd filed Critical Shenzhen Polar View Technology Co Ltd
Priority to CN201611219029.4A priority Critical patent/CN106778650A/en
Publication of CN106778650A publication Critical patent/CN106778650A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • G06V40/25Recognition of walking or running movements, e.g. gait recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses a scene-adaptive pedestrian detection method and system based on multi-type information fusion. The method comprises: S1: input an image; S2: extract the foreground region of the current frame; S3: extract the motion region of the current frame; S4: extract the skin-color and hair-color regions of the current frame; S5: extract the contour peak-point regions; S6: obtain the complementary foreground region, eliminate its noise by connected-region filtering, fill the holes in the connected regions, and obtain the final foreground region; S7: feed the final foreground region into an AdaBoost detector to quickly detect candidate pedestrian regions; S8: confirm the candidate pedestrian regions with a convolutional neural network; S9: output the pedestrian detection result. The invention has the advantages of fast detection speed, high accuracy, and good scene adaptability.

Description

Scene-adaptive pedestrian detection method and system based on multi-type information fusion
Technical field
The present invention relates to the field of computer technology, and more particularly to a scene-adaptive pedestrian detection method and system based on multi-type information fusion.
Background technology
The mainstream approaches in the prior art are as follows:
Method one: positive and negative samples are labeled in a data set, i.e., each sample image carries exactly one label, pedestrian or non-pedestrian; a classifier is then trained to judge whether an image is a pedestrian or a non-pedestrian. After the classifier is trained, a sliding window is run over every frame of the video, and sub-images at different positions and scales are fed to the classifier. If the classifier judges a sub-image to be a pedestrian, a pedestrian is considered detected and the position of that sub-image is output.
Method two: the positions of all pedestrians are labeled in the full images of the training set; a deep neural network extracts image features and proposes candidate pedestrian regions; finally, a discrimination layer in the network judges each candidate, deletes candidates that are not pedestrians, fine-tunes the pedestrian candidates with a regression algorithm, and outputs the final position of each region.
Drawback of method one: in a new scene the sliding window produces many sub-images that never appeared in the training set, so a pedestrian detection model trained offline often performs poorly when applied to the new scene. Moreover, a weak classifier yields many false detections, while a strong classifier computes too slowly.
Drawback of method two: running multiple convolution layers over the entire image consumes an enormous amount of computation; with the same computational resources it may take a hundred times longer than method one. Real-time operation is barely reachable with a high-performance GPU, but a GPU costs far more than a CPU, which is unfavorable for engineering. Method two also suffers from the same lack of adaptation in new scenes.
The prior art is therefore deficient and needs improvement.
The content of the invention
The technical problem to be solved by the invention is to provide a scene-adaptive pedestrian detection method based on multi-type information fusion that is fast, accurate, and adaptable to different scenes.
The technical solution is as follows. A fast scene-adaptive pedestrian detection method based on multi-type information fusion comprises the following steps. S1: input an image. S2: compute the difference between the current frame and a background template using the background subtraction algorithm of a mixed Gaussian model, and extract the foreground region of the current frame. S3: using the frame-difference method, extract the motion region of the current frame where its difference from the previous frame exceeds a threshold. S4: extract the skin-color and hair-color regions of the current frame with skin-color and hair-color extraction algorithms. S5: extract contour peak points, where the upper 1/5 region of each complete contour is extracted as a contour peak-point region. S6: obtain the complementary foreground region from steps S2-S5, eliminate its noise by connected-region filtering, fill the holes in the connected regions, and obtain the final foreground region. S7: feed the final foreground region into an AdaBoost detector to quickly detect candidate pedestrian regions. S8: confirm the candidate pedestrian regions with a convolutional neural network. S9: output the pedestrian detection result.
Applying the above technical solution, in the described scene-adaptive pedestrian detection method, steps S2-S5 may be executed in any order.
Applying each of the above technical solutions, in the described scene-adaptive pedestrian detection method, step S4 specifically comprises: converting the input RGB image into an HSV-space image, defining minimum and maximum values for the H, S, and V channels in HSV space, treating pixels that satisfy these values as skin-color or hair-color pixels, and extracting the regions corresponding to the skin-color or hair-color pixels as the skin-color and hair-color regions.
Applying each of the above technical solutions, in the described scene-adaptive pedestrian detection method, step S5 specifically comprises: extracting the edges of the image with the Canny operator, binarizing the image, and extracting the contour peak-point regions.
Applying each of the above technical solutions, in the described scene-adaptive pedestrian detection method, in step S6, after the contour peak-point regions are extracted, all contours whose perimeter or area falls outside the permitted range are deleted, and the valid contours are filled and redrawn to generate the complementary foreground region.
Applying each of the above technical solutions, in the described scene-adaptive pedestrian detection method, before step S7, scene background data on which the AdaBoost detector produced missed or false detections are also collected and stored in a scene database.
Applying each of the above technical solutions, in the described scene-adaptive pedestrian detection method, the data in the scene database are updated periodically.
Applying each of the above technical solutions, a fast scene-adaptive pedestrian detection system based on multi-type information fusion includes a universal database and a pedestrian detection subsystem, wherein the pedestrian detection subsystem includes: a foreground region extraction module that computes the difference between the current frame and a background template using the background subtraction algorithm of a mixed Gaussian model and extracts the foreground region of the current frame; a motion region extraction module that, using the frame-difference method, extracts the motion region of the current frame when its difference from the previous frame exceeds a threshold; a skin-color and hair-color region extraction module that extracts the skin-color and hair-color regions of the current frame with skin-color and hair-color extraction algorithms; a contour peak-point region extraction module that extracts the upper 1/5 region of each complete contour as a contour peak-point region; a connected-region filtering module, connected to the four modules above respectively, that eliminates the noise of the complementary foreground region by connected-region filtering, fills the holes in the connected regions, and obtains the final foreground region; an AdaBoost detector, connected to the connected-region filtering module, for quickly detecting candidate pedestrian regions in the final foreground region; and a convolutional neural network, connected to the AdaBoost detector, for confirming the candidate pedestrian regions.
Applying each of the above technical solutions, the described scene-adaptive pedestrian detection system further includes a data acquisition feedback subsystem for collecting scene background data and a scene database for storing the collected scene background data; the data acquisition feedback subsystem is connected to the pedestrian detection subsystem.
Applying each of the above technical solutions, the described scene-adaptive pedestrian detection system further includes an online updater connected to the scene database and the universal database respectively.
With the above solution, the present invention has the following advantageous effects:
1. High speed: compared with applying a convolutional neural network everywhere, the low-cost methods such as background subtraction eliminate a large amount of useless background beforehand. Detection and deep-neural-network confirmation are performed only in the foreground region, so an analysis speed of 50-100 frames per second can be reached on a single CPU, saving a large amount of computational resources.
2. High accuracy: compared with using a sliding window alone, the outstanding feature extraction and discrimination ability of the deep neural network raises accuracy by more than 50%, and the multi-source input based on multi-type information fusion ensures that as many pedestrian regions as possible are detected.
3. Scene adaptability: conventional methods usually train a model on a specific database and then deploy it directly to every scene, but different situations arise in different scenes, and unified data can hardly cover them all. After the data acquisition subsystem and the online updater are added, the present invention gains adaptability in each scene, ensuring the validity of pedestrian detection there.
Brief description of the drawings
Fig. 1 is a schematic diagram of the connection structure of the scene-adaptive pedestrian detection system of the present invention;
Fig. 2 is a flow chart of the pedestrian detection subsystem of the present invention;
Fig. 3 is a flow chart of Gaussian-mixture-model background modeling and detection in the present invention;
Fig. 4 is a flow chart of inter-frame-difference motion detection in the present invention;
Fig. 5 is a flow chart of skin-color and hair-color detection in the present invention;
Fig. 6 is a flow chart of connected-region filtering in the present invention.
Specific embodiment
The present invention is described in detail below with reference to the drawings and specific embodiments.
This embodiment provides a fast scene-adaptive pedestrian detection method based on multi-type information fusion, and a system that uses the method, as shown in Figs. 1 to 6. The system consists of four parts: a pedestrian database, a pedestrian detection subsystem, a data acquisition feedback subsystem, and an online updater.
The system greatly reduces the range of candidate regions to be examined by using traditional hand-crafted feature methods, and then judges the candidate regions one by one with a comparatively small convolutional neural network, which both improves precision and reduces computational cost.
In addition, a data acquisition feedback system is introduced, which collects scene information and the detector's missed and false judgements; these data continuously improve the performance of the convolutional neural network in the scene, achieving scene adaptivity.
First, a general convolutional neural network is trained with the data in the pedestrian database and embedded into the pedestrian detection subsystem. During operation, the data acquisition feedback subsystem continuously collects the data that the detector in the pedestrian detection subsystem misjudges or misses, and puts the collected data into the scene database of the pedestrian database; backgrounds in the scene are likewise stored in the scene database as negative samples. Finally, the online updater uses the newly collected information in the scene database to train a model better suited to the current scene and installs it in the pedestrian detection subsystem, replacing the original model. In this way the system continuously adapts to the current scene.
The pedestrian database is of two types: one is the internal database of the system, identical for all scenes, called the universal database; the other holds samples specific to a scene, called the scene database. The data in the databases are pictures with their corresponding labels. There are only two labels: pedestrian (positive sample) and non-pedestrian (negative sample). The universal database is preset by hand; the scene database is collected in the scene where the system is installed.
The pedestrian detection subsystem is a system based on information fusion that uses several machine learning algorithms:
1. Detection flow:
First, the background subtraction algorithm based on a mixed Gaussian model computes the difference between the current frame and the background template and extracts the foreground region of the current frame.
Then the frame-difference method is applied: where the current frame differs from the previous frame by more than a threshold, a change is considered to have occurred and a motion region is extracted. These motion regions make up for details missing from the foreground extracted in the previous step.
Because a person who stays still for a long time is absorbed into the background by the background subtraction algorithm, and all people need to be detected, the system uses skin-color and hair-color extraction algorithms; the extracted skin-color and hair-color regions are merged with the foreground, forming what is called the complementary foreground, i.e. the skin-color and hair-color regions.
Contour peak points are extracted: assuming a person's head is always near the top of the figure, the contour information of the image is extracted, and the upper 1/5 region of each complete contour is added to the candidate regions, giving the contour peak-point regions.
Combining the foreground region, motion region, skin-color and hair-color regions, and contour peak-point regions yields the complementary foreground region; connected-region filtering is used to eliminate the noise in the complementary foreground region, and the holes inside the connected regions are filled.
The foreground area is now smaller than the whole picture, and only the foreground region extracted in the previous step is fed into the AdaBoost (Adaptive Boosting) detector. An AdaBoost detector is a strong classifier produced by iterating weak classifiers; it quickly detects candidate pedestrian regions and reduces the search area.
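The search-area reduction described above can be sketched as follows: an expensive classifier is invoked only inside the bounding boxes of the final foreground, never over the whole frame. This is a minimal illustration under assumptions, not the patent's implementation; the window size, step, and the `classify` callback (standing in for the AdaBoost detector) are all hypothetical.

```python
def detect_in_foreground(frame, foreground_boxes, classify, win=4, step=2):
    """Run an expensive classifier only inside foreground bounding boxes.

    frame: 2-D list of pixel values; foreground_boxes: (x0, y0, x1, y1),
    inclusive-exclusive; classify: callable taking a win x win patch and
    returning True for 'pedestrian'. Returns top-left corners of hits.
    """
    hits = []
    for (x0, y0, x1, y1) in foreground_boxes:
        for y in range(y0, max(y0 + 1, y1 - win + 1), step):
            for x in range(x0, max(x0 + 1, x1 - win + 1), step):
                patch = [row[x:x + win] for row in frame[y:y + win]]
                if classify(patch):
                    hits.append((x, y))
    return hits
```

With a 10x10 frame whose only foreground box covers a fraction of the image, the classifier runs on a handful of patches instead of every sliding-window position.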
Because the AdaBoost detector is trained to catch as many objects as possible, the candidate regions contain some false detections. A convolutional neural network is therefore used to judge the candidate regions and obtain the candidate pedestrian regions. Since convolutional neural networks have outstanding feature extraction ability, both the precision and the recall of the judgement made after suitable pedestrian features are extracted are very high. Candidate pedestrian regions that pass the convolutional neural network classifier are considered pedestrian regions.
The pedestrian detection subsystem is thus a fusion subsystem that merges foreground, skin-color, and hair-color information and uses both an AdaBoost detector and a convolutional neural network.
2. Training flow:
Both the AdaBoost detector and the convolutional neural network classifier are trained. The reason for cascading the AdaBoost detector with the convolutional neural network classifier is that AdaBoost is a cascade of weak classifiers, each of which needs little computational resource and runs fast; extracting candidate regions with a convolutional neural network over the full image would consume at least two orders of magnitude more resources. The first stage therefore uses AdaBoost as a candidate-region extractor, trained with the goal of extracting as many of the pedestrian regions as possible. The features used are Haar features, and the weak classifiers are decision trees.
AdaBoost training
Compute the Haar features of all samples and represent each sample by its Haar features.
Initialize the weight distribution of the training data: with N samples, each sample is assigned a weight of 1/N.
Train a weak classifier, here a decision tree, on the weighted samples. If a sample is classified correctly, its weight is lowered in the training set used to construct the next weak classifier; if it is classified incorrectly, its weight is raised, and it is put into the training set of the next weak classifier.
The weak classifiers obtained in each round are combined into a strong classifier. After the training of each weak classifier finishes, the weight of a weak classifier with a small classification error rate is increased so that it plays a larger decisive role in the final classification function, and the weight of a weak classifier with a large classification error rate is decreased so that it plays a smaller decisive role.
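The weighting steps above can be sketched with one-dimensional threshold stumps standing in for the Haar-feature decision trees; the stump feature, the round count, and the toy data are assumptions for illustration only.

```python
import math

def train_adaboost(xs, ys, rounds=5):
    """Discrete AdaBoost with threshold stumps on 1-D features.

    xs: feature values, ys: labels in {-1, +1}. Each round fits the
    stump minimizing the weighted error, gives it a vote alpha that is
    large when its error is small, then re-weights: correctly classified
    samples are down-weighted, misclassified ones up-weighted.
    """
    n = len(xs)
    w = [1.0 / n] * n                       # initial weights 1/N
    ensemble = []                           # (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        for thr in sorted(set(xs)):
            for pol in (1, -1):
                pred = [pol if x >= thr else -pol for x in xs]
                err = sum(wi for wi, p, y in zip(w, pred, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol, pred)
        err, thr, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)   # big vote for small error
        ensemble.append((alpha, thr, pol))
        # raise weights of mistakes, lower weights of correct samples
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, pred)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1
```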
Convolutional neural network training
The convolutional neural network used by the system has four convolution layers, each followed by a pooling layer, and finally a softmax classifier. Two classification outputs indicate whether the input picture is a pedestrian, together with a probability value. The gap between the output value and the sample label is propagated with the back-propagation algorithm, and the network weights are updated continuously until convergence.
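Under the assumption of 3x3 'same' convolutions, 2x2 pooling, and a 64x64 input (none of which the patent specifies), the spatial size of the feature maps through the four conv + pool blocks described above can be checked as:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Standard convolution output-size formula."""
    return (size + 2 * pad - kernel) // stride + 1

def feature_map_sizes(input_size, kernel=3, pool=2, layers=4):
    """Spatial size after each conv (pad = kernel//2, i.e. 'same')
    plus 2x2 pooling block, for the assumed four-block stack."""
    sizes = []
    s = input_size
    for _ in range(layers):
        s = conv_out(s, kernel, pad=kernel // 2)   # 'same' convolution
        s = s // pool                              # pooling halves the size
        sizes.append(s)
    return sizes
```

A 64x64 input would shrink to 4x4 before the softmax classifier under these assumptions.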
Before training, the smaller class in the universal database is oversampled until the sample counts of the two classes are balanced. Then both positive and negative samples undergo random rotation, cropping, flipping, affine transformation, small random brightness changes, and noise superposition to produce more samples; this step is called data augmentation. Training with the augmented data gives the convolutional neural network robustness to scale, rotation, noise, and affine transformation.
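The class balancing and augmentation just described can be sketched as follows; only the oversampling duplication and the horizontal flip are shown, and the 2-D list sample representation is an assumption.

```python
import random

def balance_by_oversampling(pos, neg, rng=None):
    """Duplicate random samples of the smaller class until counts match."""
    rng = rng or random.Random(0)
    small, large = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    out = list(small)
    while len(out) < len(large):
        out.append(rng.choice(small))      # re-draw from the minority class
    return (out, large) if small is pos else (large, out)

def hflip(img):
    """Horizontal flip of a 2-D list image, one of the augmentations."""
    return [row[::-1] for row in img]
```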
3. Step details:
Mixed Gaussian background modeling
The online training and use of mixed Gaussian background modeling are shown in Fig. 3.
Mixed Gaussian background modeling is a background representation based on pixel-sample statistics: the background is represented by statistics, such as the probability density, of a large number of sample values of each pixel over a long period (e.g. the number of modes and the mean and standard deviation of each mode); target pixels are then judged using statistical differences, for example the 3-sigma principle. Complex dynamic backgrounds can be modeled in this way.
The background model consists of K Gaussian models, each with a corresponding weight.
Each newly input pixel is compared with the K Gaussian models. If a model is found whose threshold range T contains the pixel value, the pixel belongs to the background; the weight of that model is increased according to a preset learning rate, and its mean and standard deviation are updated with the newly input pixel.
If no match is found, the pixel belongs to the foreground; the Gaussian model with the smallest weight is discarded, and a new Gaussian model is generated, with the current pixel value as its mean, a larger value as its standard deviation, and a smaller value as its weight, and added to the background model group. After all pixels of the image have been processed, the final foreground representation is refined with erosion and dilation operations.
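The per-pixel match/replace logic of the two paragraphs above can be sketched for a single pixel. The learning rate, the 2.5-sigma matching rule, and the initial weight and deviation of a new model are illustrative assumptions; the patent names them only as a learning rate, a threshold range T, a "larger" deviation, and a "smaller" weight.

```python
def update_pixel(models, x, lr=0.05, match_sigmas=2.5,
                 init_sigma=30.0, init_w=0.05):
    """One mixed-Gaussian update for a single pixel value x.

    models: list of [weight, mean, sigma]. If x lies within
    match_sigmas standard deviations of some model, the pixel is
    background: that model's weight rises and its mean/sigma track x.
    Otherwise the pixel is foreground and the lowest-weight model is
    replaced by a wide, low-weight model centered on x.
    """
    for m in models:
        w, mu, sigma = m
        if abs(x - mu) <= match_sigmas * sigma:
            m[0] = w + lr * (1 - w)                      # weight rises
            m[1] = mu + lr * (x - mu)                    # mean tracks x
            m[2] = (sigma ** 2 + lr * ((x - mu) ** 2 - sigma ** 2)) ** 0.5
            return 'background'
    # no match: discard the weakest model, start a new one on x
    i = min(range(len(models)), key=lambda j: models[j][0])
    models[i] = [init_w, float(x), init_sigma]
    return 'foreground'
```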
Frame difference method
The frame-difference method contrasts two adjacent frames: when the difference between a pixel's values is below a threshold, the pixel is considered not to belong to a motion region; otherwise it is a motion pixel. This information can make up for some edge foreground information that the mixed Gaussian model above fails to extract, as shown in Fig. 4.
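A minimal sketch of the frame-difference mask and of merging it with the subtraction foreground; the threshold value is an assumption.

```python
def frame_difference(prev, curr, thresh=15):
    """Per-pixel motion mask: 1 where |curr - prev| > thresh."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def union(mask_a, mask_b):
    """OR of two binary masks: motion pixels supplement the foreground."""
    return [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(mask_a, mask_b)]
```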
Skin-color and hair-color extraction
The extraction of skin color and hair color is shown in Fig. 5. The input RGB image is first converted into an HSV-space image, because this removes the influence of illumination on skin and hair color. Minimum and maximum values of the H, S, and V channels are defined in HSV space, and pixels satisfying these values are treated as skin-color or hair-color pixels. This step is added because pedestrians standing still are treated as background and no motion information can be extracted for them; skin color and hair color make up for this loss of information.
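A sketch of the HSV thresholding described above, using Python's standard `colorsys` conversion; the channel ranges used in the test are illustrative, not the patent's values.

```python
import colorsys

def color_mask(rgb_img, h_rng, s_rng, v_rng):
    """Binary mask of pixels whose HSV values fall inside the given ranges.

    rgb_img: 2-D list of (r, g, b) tuples, each channel in 0..255.
    h_rng, s_rng, v_rng: (lo, hi) bounds in colorsys units, i.e. [0, 1].
    """
    mask = []
    for row in rgb_img:
        out = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            ok = (h_rng[0] <= h <= h_rng[1] and
                  s_rng[0] <= s <= s_rng[1] and
                  v_rng[0] <= v <= v_rng[1])
            out.append(1 if ok else 0)
        mask.append(out)
    return mask
```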
Contour peak-point extraction
In a typical surveillance video, the contour of a person is fairly distinct from the rest of the image, and the head is always in the upper 1/5 part of the body contour. After the image edges are extracted with the Canny operator, the image is binarized and the contours are extracted. The upper 1/5 region of each contour is considered a head candidate region.
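The upper-1/5 head-candidate rule can be sketched on a contour given as a list of points; formulating it over the contour's bounding box is an assumption, since the patent does not state how the "upper part" is delimited.

```python
def head_candidate(contour):
    """Top-1/5 strip of a contour's bounding box, as (x0, y0, x1, y1).

    contour: list of (x, y) points with y growing downwards, so the
    head strip is the smallest-y fifth of the bounding box.
    """
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    h = y1 - y0 + 1
    return (x0, y0, x1, y0 + max(1, h // 5) - 1)
```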
Connected-region filtering
After contour extraction on the binary image obtained above, all contours whose perimeter or area falls outside the permitted range are deleted, since regions that are too large or too small cannot possibly be pedestrians; the valid contours are filled and redrawn to generate the new optimized foreground region.
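The size-based filtering can be sketched with a BFS labeling of a binary mask; 4-connectivity and an area-only criterion (the patent also mentions perimeter) are simplifying assumptions.

```python
from collections import deque

def filter_regions(mask, min_area, max_area):
    """Keep only 4-connected regions with area in [min_area, max_area].

    mask: 2-D list of 0/1. Regions that are too small (noise) or too
    large to be a pedestrian are erased.
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                q, region = deque([(sx, sy)]), []
                seen[sy][sx] = True
                while q:                       # BFS over one component
                    x, y = q.popleft()
                    region.append((x, y))
                    for nx, ny in ((x + 1, y), (x - 1, y),
                                   (x, y + 1), (x, y - 1)):
                        if 0 <= nx < w and 0 <= ny < h and \
                           mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((nx, ny))
                if min_area <= len(region) <= max_area:
                    for x, y in region:        # keep valid region
                        out[y][x] = 1
    return out
```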
The data acquisition feedback subsystem collects scene-specific data for each scene. Because the samples in the universal data set cannot cover every scene, the pedestrian detector performs unsatisfactorily in some scenes; the data acquisition subsystem therefore collects scene data into the scene database.
Because the threshold of the AdaBoost detector is set very low, both the positive and the negative detection rates are high, so the main goal is to delete false detections, most of which are caused by the environment. Accordingly, whenever the background subtraction used above finds that the foreground proportion is below T and no pedestrian is detected, a background picture is considered to have been obtained. This background picture is used to generate negative samples: pictures are produced from it by random sliding windows, labeled non-pedestrian, and put into the scene database.
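The random-sliding-window negative-sample generation just described can be sketched as follows; the window size and crop count are assumptions.

```python
import random

def negative_windows(background, n, win, rng=None):
    """Cut n random win x win crops from a pedestrian-free background
    frame and label them non-pedestrian, for the scene database."""
    rng = rng or random.Random(0)
    h, w = len(background), len(background[0])
    samples = []
    for _ in range(n):
        y = rng.randrange(h - win + 1)
        x = rng.randrange(w - win + 1)
        crop = [row[x:x + win] for row in background[y:y + win]]
        samples.append((crop, 'non-pedestrian'))
    return samples
```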
After a pedestrian is detected, it is tracked until the pedestrian disappears. If a candidate pedestrian region produced by tracking has not been judged a positive sample by the convolutional neural network, this sample is put into the scene database and labeled pedestrian.
The online updater is a system that periodically checks the scene database and fine-tunes the convolutional neural network classifier with the data in the scene database.
First, the numbers of positive and negative samples in the scene database are counted, and the smaller class is oversampled so that the sample counts of the two classes are balanced. The samples then undergo horizontal flipping, random rotation, random cropping, and brightness and affine transformations, so that the robustness of the convolutional neural network is strengthened.
Finally, the augmented data are used to train the convolutional neural network: the weights are initialized from the model trained on the original universal database, and the augmented scene-database data are used for fine-tuning. The gap between the prediction and the sample label is adjusted with the back-propagation algorithm, yielding a dedicated model suited to the current scene.
When the online updater has trained a new convolutional neural network model, the new model replaces the original convolutional neural network classifier in the pedestrian detector.
The above are only preferred embodiments of the present invention and are not intended to limit the invention; any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A fast scene-adaptive pedestrian detection method based on multi-type information fusion, characterized by comprising the following steps:
S1: input an image;
S2: compute the difference between the current frame and a background template using the background subtraction algorithm of a mixed Gaussian model, and extract the foreground region of the current frame;
S3: using the frame-difference method, extract the motion region of the current frame when its difference from the previous frame exceeds a threshold;
S4: extract the skin-color and hair-color regions of the current frame with skin-color and hair-color extraction algorithms;
S5: extract contour peak points, wherein the upper 1/5 region of each complete contour is extracted as a contour peak-point region;
S6: obtain the complementary foreground region according to steps S2-S5, eliminate the noise of the complementary foreground region by connected-region filtering, fill the holes in the connected regions, and obtain the final foreground region;
S7: feed the final foreground region into an AdaBoost detector to quickly detect candidate pedestrian regions;
S8: confirm the candidate pedestrian regions with a convolutional neural network;
S9: output the pedestrian detection result.
2. The scene-adaptive pedestrian detection method according to claim 1, characterized in that steps S2-S5 may be executed in any order.
3. The scene-adaptive pedestrian detection method according to claim 2, characterized in that step S4 specifically comprises: converting the input RGB image into an HSV-space image, defining minimum and maximum values for the H, S, and V channels in HSV space, treating pixels that satisfy these values as skin-color or hair-color pixels, and extracting the regions corresponding to the skin-color or hair-color pixels as the skin-color and hair-color regions.
4. The scene-adaptive pedestrian detection method according to claim 2, characterized in that step S5 specifically comprises: extracting the edges of the image with the Canny operator, binarizing the image, and extracting the contour peak-point regions.
5. The scene-adaptive pedestrian detection method according to claim 4, characterized in that in step S6, after the contour peak-point regions are extracted, all contours whose perimeter or area falls outside the permitted range are deleted, and the valid contours are filled and redrawn to generate the complementary foreground region.
6. The scene-adaptive pedestrian detection method according to claim 1 or 2, characterized in that before step S7, scene background data on which the AdaBoost detector produced missed or false detections are also collected and stored in a scene database.
7. The scene-adaptive pedestrian detection method according to claim 6, characterized in that the data in the scene database are updated periodically.
8. A fast scene-adaptive pedestrian detection system based on polymorphic type information fusion, characterized by comprising a universal database and a pedestrian detection subsystem, wherein the pedestrian detection subsystem comprises:
a foreground region extraction module, which uses a mixed-Gaussian-model background subtraction algorithm to compute the difference between the current frame and a background template and extract the foreground region of the current frame;
a motion region extraction module, which uses the frame difference method to extract the motion regions of the current frame when the difference between the current frame and the previous frame exceeds a threshold;
a skin and hair region extraction module, which uses skin-color and hair-color extraction algorithms to extract the skin and hair regions of the current frame;
a contour peak region extraction module, which extracts the upper 1/5 of each complete contour as the contour peak region;
a connected-region filtering module, connected respectively to the foreground region extraction module, the motion region extraction module, the skin and hair region extraction module, and the contour peak region extraction module, which applies connected-region filtering to remove noise from the complementary foreground region and fills holes in the connected regions to obtain the final foreground region;
an AdaBoost detector, connected to the connected-region filtering module, for rapidly detecting candidate pedestrian regions in the final foreground region;
and a convolutional neural network, connected to the AdaBoost detector, for confirming the candidate pedestrian regions.
9. The scene-adaptive pedestrian detection system according to claim 8, characterized by further comprising a data acquisition and feedback subsystem for collecting scene background data and a scene database for storing the collected scene background data, the data acquisition and feedback subsystem being connected to the pedestrian detection subsystem.
10. The scene-adaptive pedestrian detection system according to claim 8, characterized by further comprising an online updater connected respectively to the scene database and the universal database.
CN201611219029.4A 2016-12-26 2016-12-26 Scene adaptive pedestrian detection method and system based on polymorphic type information fusion Pending CN106778650A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611219029.4A CN106778650A (en) 2016-12-26 2016-12-26 Scene adaptive pedestrian detection method and system based on polymorphic type information fusion

Publications (1)

Publication Number Publication Date
CN106778650A true CN106778650A (en) 2017-05-31

Family

ID=58926120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611219029.4A Pending CN106778650A (en) 2016-12-26 2016-12-26 Scene adaptive pedestrian detection method and system based on polymorphic type information fusion

Country Status (1)

Country Link
CN (1) CN106778650A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298976A (en) * 2014-10-16 2015-01-21 电子科技大学 License plate detection method based on convolutional neural network
CN105160297A (en) * 2015-07-27 2015-12-16 华南理工大学 Masked man event automatic detection method based on skin color characteristics
CN105303193A (en) * 2015-09-21 2016-02-03 重庆邮电大学 People counting system for processing single-frame image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Lin: "Research and Application of Pedestrian Detection Methods Based on Human Head-Shoulder Features", China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108334840A (en) * 2018-02-01 2018-07-27 福州大学 Pedestrian detection method based on deep neural network under traffic environment
CN108765373A (en) * 2018-04-26 2018-11-06 西安工程大学 A kind of insulator exception automatic testing method based on integrated classifier on-line study
CN108765373B (en) * 2018-04-26 2022-03-22 西安工程大学 Insulator abnormity automatic detection method based on integrated classifier online learning
CN109063630A (en) * 2018-07-27 2018-12-21 北京以萨技术股份有限公司 A kind of fast vehicle detection method based on separable convolution technique and frame difference compensation policy
CN109063630B (en) * 2018-07-27 2022-04-26 以萨技术股份有限公司 Rapid vehicle detection method based on separable convolution technology and frame difference compensation strategy
CN109558880B (en) * 2018-10-16 2021-06-04 杭州电子科技大学 Contour detection method based on visual integral and local feature fusion
CN109558880A (en) * 2018-10-16 2019-04-02 杭州电子科技大学 A kind of whole profile testing method with Local Feature Fusion of view-based access control model
CN109871788A (en) * 2019-01-30 2019-06-11 云南电网有限责任公司电力科学研究院 A kind of transmission of electricity corridor natural calamity image recognition method
CN110188592A (en) * 2019-04-10 2019-08-30 西安电子科技大学 A kind of urinary formed element cell image disaggregated model construction method and classification method
CN110188592B (en) * 2019-04-10 2021-06-29 西安电子科技大学 Urine formed component cell image classification model construction method and classification method
CN110209063A (en) * 2019-05-23 2019-09-06 成都世纪光合作用科技有限公司 A kind of smart machine control method and device
CN110245628A (en) * 2019-06-19 2019-09-17 成都世纪光合作用科技有限公司 A kind of method and apparatus that testing staff discusses scene
CN110688945A (en) * 2019-09-26 2020-01-14 成都睿云物联科技有限公司 Cleanliness detection method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN106778650A (en) Scene adaptive pedestrian detection method and system based on polymorphic type information fusion
CN107657279B (en) Remote sensing target detection method based on small amount of samples
CN108280397B (en) Human body image hair detection method based on deep convolutional neural network
CN110425005B (en) Safety monitoring and early warning method for man-machine interaction behavior of belt transport personnel under mine
CN108875624B (en) Face detection method based on multi-scale cascade dense connection neural network
CN108171112A (en) Vehicle identification and tracking based on convolutional neural networks
CN110232380A (en) Fire night scenes restored method based on Mask R-CNN neural network
Benyang et al. Safety helmet detection method based on YOLO v4
Han et al. Deep learning-based workers safety helmet wearing detection on construction sites using multi-scale features
CN103310194A (en) Method for detecting head and shoulders of pedestrian in video based on overhead pixel gradient direction
CN110852190B (en) Driving behavior recognition method and system integrating target detection and gesture recognition
Ren et al. A novel squeeze YOLO-based real-time people counting approach
CN109255350A (en) A kind of new energy detection method of license plate based on video monitoring
CN104978567A (en) Vehicle detection method based on scenario classification
CN110569843A (en) Intelligent detection and identification method for mine target
CN107895379A (en) The innovatory algorithm of foreground extraction in a kind of video monitoring
CN106529441B (en) Depth motion figure Human bodys' response method based on smeared out boundary fragment
CN112861917A (en) Weak supervision target detection method based on image attribute learning
CN105825233A (en) Pedestrian detection method based on random fern classifier of online learning
CN107230219A (en) A kind of target person in monocular robot is found and follower method
CN115393598A (en) Weakly supervised semantic segmentation method based on non-salient region object mining
CN106548195A (en) A kind of object detection method based on modified model HOG ULBP feature operators
CN105404682A (en) Digital image content based book retrieval method
CN111626197B (en) Recognition method based on human behavior recognition network model
CN110287970B (en) Weak supervision object positioning method based on CAM and covering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170531