CN106503615B - Indoor human body detecting and tracking and identification system based on multisensor - Google Patents
- Publication number: CN106503615B (application CN201610835988.2A)
- Authority: CN (China)
- Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06V40/103 — Static body considered as a whole, e.g. static pedestrian or occupant recognition
- G06F18/2411 — Classification based on the proximity to a decision surface, e.g. support vector machines
- G06F18/285 — Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
- G06V40/161 — Human faces: detection; localisation; normalisation
- G06T2207/10004 — Still image; photographic image
- G06T2207/20081 — Training; learning
- G06T2207/30196 — Human being; person
- G06T2207/30201 — Face
Abstract
An indoor human body detection, tracking and identification system based on multiple sensors. The system performs an initial localization of the human body with pyroelectric infrared sensors; a servo then turns the camera toward the region where the person appears, the camera captures the image information within that region and transfers it to a computer, and the computer performs the detection computation and drives the servo and mobile platform so that the camera keeps tracking the person. The computer matches the captured image information against stored data to determine the identity of the detected person. The system is mainly used to detect whether someone has intruded into an indoor environment and to help an indoor mobile service robot determine its target service person. A pyroelectric sensor triggers the camera's servo pan-tilt head to turn toward the region of human activity, visual information is used to detect and track the human body, and, once a person is detected, identification is carried out with Adaboost and principal component analysis.
Description
Technical field
The present invention relates to the technical field of indoor human body detection, and specifically to a system that uses pyroelectric infrared sensors and a monocular camera to detect and track a moving person indoors and to identify that person.
Background art
With the development of computer vision, human motion analysis has attracted wide attention, and human detection, tracking and identification are important components of vision-based human motion analysis. They have broad application background and economic value in human-computer interaction, video surveillance, intelligent vehicle systems, virtual reality and similar fields. Indoor human detection, tracking and identification is one of the important applications: in an indoor interactive environment the technology can help a mobile service robot determine its target service person, and it can also watch over children, the elderly and patients with disabilities. In the security and surveillance field, it likewise has great practical value: by detecting moving people, a monitoring system can watch an indoor environment, automatically identify persons and raise an alarm on dangerous targets.
Traditional human detection methods are mainly divided into stereo vision methods, contour detection, template matching, human body models, gait recognition, wavelet analysis and neural networks, or combinations of two or more of these. However, because a moving person is affected by changes of viewing angle and background, traditional methods do not reach the desired accuracy. In recent years, detection, tracking and identification based on machine learning, aiming at good real-time performance and robustness, have developed rapidly. Applying machine-learning theory and techniques to each stage has largely replaced the traditional methods: detection accuracy is high while part of the real-time requirements can still be met, so machine-learning-based human detection is now widely applied. The recognition stage mainly distinguishes the human body from the rest of the scene, either by matching specific human features or by using neural networks, SVMs (support vector machines), multilayer perceptrons and similar methods to judge whether a pedestrian appears in the target region.
On the other hand, to obtain information about a human target, a current research focus is detecting and tracking motion with a static monocular camera. In practice, however, the field of view of a monocular camera is limited: once the moving person leaves the camera's viewing angle, the system can no longer detect them. How to control the motion of a monocular camera so as to detect and track an indoor moving person in real time therefore remains an unsolved problem. Furthermore, once a human target has been detected, determining its identity is an important task. Traditional methods run face detection on the human image and identify the detected face region, but they do not address how the system should identify a person when no face can be detected in the human image.
Summary of the invention
The main object of the present invention is to propose an indoor human body detection and identification system that can detect whether someone has intruded into an indoor environment and help an indoor mobile service robot determine its target service person; it is therefore primarily intended for indoor environments containing one person or a few people. The invention solves three technical problems:
1. A static camera cannot determine the region of the room where a moving person is located; the camera can capture the person only when they move into its viewing angle.
2. When the person moves out of the camera's field of view, the image obtained by the camera can no longer be used to track them.
3. When the human image shows only the side of the body and the system detects no face region, the identity of the target must still be confirmed.
In the indoor human body detection, tracking and identification system based on multiple sensors, pyroelectric infrared sensors perform the initial localization of the person, a servo turns the camera toward the region where the person appears, the camera captures the image information within that region and transfers it to a computer, and the computer performs the detection computation and controls the servo so that the camera and the mobile platform track the person. The computer matches the captured image information against stored data to determine the identity of the detected person. The system structure is shown in Figure 1.
The workflow of the method of the present invention is shown in Figure 2; the specific steps are as follows:
S1 Preliminary human detection and initial positioning of the camera
The system detects whether someone is indoors using pyroelectric infrared sensors, which capture human infrared radiation and convert it into a weak voltage signal. The sensors used have a sensing range of 6 meters and a sensing angle of 100 degrees. Four sensors are placed in the four directions due west, due south, due east and due north of the camera, so that the detection angles cover the full 360 degrees of the room. When a pyroelectric sensor detects a human signal, the camera's servo pan-tilt head turns toward that region. The structure is shown in Figure 3.
S2 Human detection from images
The system detects human bodies in the video sequence with the HOG+SVM method; the steps are as follows:
S2.1 Building the sample database
Human samples are collected as positive samples: the system uses pictures from the INRIA static pedestrian test database, as shown in Figure 4, as training positives, and uses the negative samples of the INRIA database together with processed pictures of the unoccupied indoor environment, as shown in Figure 5, as training negatives.
S2.2 Parameter selection for HOG feature extraction
The method uses the HOG detection functions provided by OpenCV with the following parameters: detection window 64*64, cell size 16*16 pixels, block sliding stride 16, and 9 gradient orientation bins, so the HOG feature dimension of one image is 4*9*3*3 = 324. The block normalization method is L2-Hys with a clipping threshold of 0.2, and Gamma correction is applied.
S2.3 SVM training
The HOG features of all positive and negative samples are extracted and labeled, positives as 1 and negatives as 0; the features and labels are then fed into the SVM trainer, which yields a human body classifier.
S3 Controlling camera rotation to track the person
As shown in Figure 6, the system marks the person's position in the image with a box and runs detection once every 25 frames. The difference between the pixel coordinates of the box center and the center of the whole image is used to compute the angles by which the servo must rotate left-right and up-down, so that the person always stays within the camera's field of view. In addition, camera calibration yields the camera's intrinsic and distortion parameters, from which the person's actual physical coordinates are inferred from the pixel coordinates of the box center; the chassis motor is then adjusted so that camera and person remain within a given range.
S4 Identification
For identification, the system first crops the detected human bounding box into a separate image and runs face detection on it with a Viola-Jones classifier, roughly as follows:
S4.1 An integral image is used to accelerate computing the value of a rectangular image region, or of a rectangular region rotated by 45 degrees. This image structure speeds up the computation of the Haar-like input features, which form the complete input features of the Adaboost classifier; the Haar-like features are shown in Figure 7.
S4.2 Face and non-face classifier nodes are created with the Adaboost algorithm.
S4.3 The weak classifier nodes are assembled into a screening (rejection) cascade, as shown in Figure 8. Each node D_j in the figure contains a group of decision trees trained on Haar-like features to decide face versus non-face. The nodes are ordered from simple to complex, which minimizes the computation spent rejecting the simple regions of the image. The first classifier is optimal: it passes every image region containing a face while also letting some non-face regions through; the second classifier is suboptimal and has a lower rejection rate; and so on. As long as an image region passes the entire cascade, it is considered to contain a face, and the face is marked with a box.
Because the number of household members is limited, the face sample database is also limited, so identification uses principal component analysis (PCA): all training data are projected into the PCA subspace, the image to be identified is projected into the same space, and the projected training vector closest to the projected probe vector is selected. The training data are shown in Figure 9.
The human detection method used by the system cannot capture frontal face information every time. If the cropped human bounding box contains only a side view of the head, a back view, or no head at all, the Adaboost face detection step is skipped and identity is determined directly with the PCA method, only with a different data set; this data set is shown in Figure 10.
Brief description of the drawings
Fig. 1 system structure diagram.
Fig. 2 working-flow figure.
Fig. 3 pyroelectric sensor distributed architecture figure.
Fig. 4 human testing positive sample example.
Fig. 5 human testing negative sample example.
Fig. 6 Example of a detected human bounding box.
Fig. 7 Haar-like features.
Fig. 8 The screening cascade used in the Viola-Jones classifier.
Fig. 9 recognition of face PCA training dataset example.
Figure 10 non-face part PCA training dataset example.
Specific embodiment
The system is described in detail below in conjunction with Fig. 1 to Fig. 9.
Figure 1 shows the system structure. An Arduino reads the pyroelectric sensor signals and is responsible for the initial positioning, turning the camera toward the region of activity; the camera then captures all visual information within the current viewing angle and passes the video sequence to the system's computer. The computer detects the person in the video with the HOG+SVM algorithm, then adjusts the camera servo and the chassis motor so that the person always stays within the camera's field of view. The computer runs Adaboost face detection on the cropped human bounding-box picture; if a face is detected, identification is done with face PCA, otherwise directly with whole-body PCA. The reason PCA is not always applied directly to the whole body is that PCA applied to the face uses fewer feature dimensions, costs less computation, and achieves a higher recognition rate than comparing the entire body.
Figure 3 shows the distribution of the pyroelectric sensors on the camera servo platform. The sensors in the four directions 0°, 90°, 180° and 270° of the camera are numbered 1, 2, 3 and 4. If only one sensor fires, the camera servo turns toward the direction of that sensor. If two sensors capture a signal, the camera turns to the position midway between their two angles; for example, if sensors 1 and 2 fire, the camera turns to the 45° position. Once the initial camera position is fixed, pyroelectric signal capture is switched off until no person can be detected in the video obtained from the camera, at which point the pyroelectric initial positioning is restarted.
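The initial-positioning logic just described can be sketched as follows. This is an illustrative reconstruction, not code from the patent: the sensor numbering follows Figure 3 (sensors 1-4 at 0°, 90°, 180°, 270°), and the function name and wrap-around handling for the 1/4 pair are assumptions.

```python
# Hypothetical sketch of the pyroelectric initial-positioning logic:
# one triggered sensor aims the camera at that sensor's direction,
# two adjacent triggered sensors aim it at the bisecting angle.
SENSOR_ANGLES = {1: 0, 2: 90, 3: 180, 4: 270}

def initial_pan_angle(triggered):
    """Return the servo pan angle (degrees) for a set of triggered
    sensor ids, or None when no sensor has fired."""
    triggered = sorted(set(triggered))
    if not triggered:
        return None
    if len(triggered) == 1:
        return SENSOR_ANGLES[triggered[0]]
    # The 4/1 pair wraps around 360°, so its bisector is 315°, not 135°.
    if triggered == [1, 4]:
        return 315
    a, b = SENSOR_ANGLES[triggered[0]], SENSOR_ANGLES[triggered[1]]
    return (a + b) // 2
```

For example, sensors 1 and 2 firing together yields 45°, matching the example in the text.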
HOG+SVM human detection steps:
The system puts the extracted HOG features of the human body into the SVM trainer to train a classification model. HOG feature extraction proceeds as follows:
Positive and negative human samples are collected. The positives in the data set are 96*160 human pictures, as shown in Figure 4; before use, 16 pixels are removed on each side to crop the central 64*128 human region. Negatives are cut at random from indoor pictures containing no person, as shown in Figure 5, likewise at 64*128. The more positive and negative samples the better, preferably around 10000 of each. All samples are labeled, positives as 1 and negatives as 0, and the HOG descriptors of the positive and negative sample images are then extracted.
Gamma correction is used to normalize the color space of the input image, which both suppresses noise interference and reduces the influence of illumination changes.
The gradient at pixel (x, y) is computed as:
G_x(x, y) = H(x+1, y) − H(x−1, y)
G_y(x, y) = H(x, y+1) − H(x, y−1)
where G_x(x, y) is the gradient at pixel (x, y) in the horizontal direction, G_y(x, y) the gradient in the vertical direction, and H(x+1, y), H(x−1, y), H(x, y+1), H(x, y−1) are the pixel values at the corresponding points. The gradient magnitude G(x, y) and gradient direction α(x, y) at pixel (x, y) are then:
G(x, y) = sqrt( G_x(x, y)^2 + G_y(x, y)^2 )
α(x, y) = arctan( G_y(x, y) / G_x(x, y) )
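The per-pixel gradient above can be sketched in a few lines. This is a minimal pure-Python illustration of the central-difference formulas, assuming the image H is stored as a list of rows indexed H[y][x]; the function name is illustrative.

```python
# Sketch of the gradient computation: G_x = H(x+1,y) - H(x-1,y),
# G_y = H(x,y+1) - H(x,y-1), magnitude sqrt(Gx^2 + Gy^2),
# direction alpha = atan2(Gy, Gx) in radians.
import math

def gradient(H, x, y):
    gx = H[y][x + 1] - H[y][x - 1]      # horizontal central difference
    gy = H[y + 1][x] - H[y - 1][x]      # vertical central difference
    magnitude = math.hypot(gx, gy)
    direction = math.atan2(gy, gx)      # alpha(x, y)
    return magnitude, direction
```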
The gradient orientation histograms are then built. A 64*64-pixel detection window is divided into 4*4 cells of 16*16 pixels each, and the gradient direction is quantized into 9 bins. Every pixel in a cell casts a weighted vote for the histogram channel of its direction, the weight being computed from the pixel's gradient magnitude. Each group of 2*2 adjacent cells forms a block; sliding the block with a 16-pixel stride gives 3*3 = 9 blocks in total, so the dimension of the resulting HOG feature vector is 4*9*3*3 = 324.
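The 324-dimension arithmetic above can be checked with a small helper. This is a back-of-envelope sketch, not part of the patent: the function name and parameters are assumptions, and it handles only the square windows used here.

```python
# Descriptor layout: a (win x win) window with (cell x cell) cells,
# blocks of block_cells x block_cells cells slid with the given stride,
# each block contributing block_cells^2 cells x bins orientation bins.
def hog_dimension(win, cell, block_cells, stride, bins):
    blocks_per_side = (win - block_cells * cell) // stride + 1
    return blocks_per_side ** 2 * block_cells ** 2 * bins
```

With the parameters of S2.2 (64-pixel window, 16-pixel cells, 2x2-cell blocks, stride 16, 9 bins) this gives 3*3 blocks of 4 cells each, i.e. 3*3*4*9 = 324.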
Because illumination changes make the range of gradient magnitudes very large, the gradient magnitudes must be normalized. The following normalization is used:
ν ← ν / sqrt( ||ν||_2^2 + ε^2 )
where ν is the vector before normalization, ||ν||_k denotes the k-norm of ν, and ε is a small constant. (With L2-Hys, the normalized vector is additionally clipped at 0.2 and renormalized.)
The normalized feature matrix and the labels are then put into the SVM for training.
The generalized optimal separating hyperplane solved by the SVM is converted into a quadratic optimization problem under inequality constraints:
min_{ω, b, ξ}  (1/2) ||ω||^2 + C Σ_{i=1}^{m} ξ_i
subject to y_i(ω^T x_i − b) ≥ 1 − ξ_i and ξ_i ≥ 0, i = 1, 2, …, m, where m is a positive integer, C is the penalty coefficient, ξ_i are the introduced slack variables, x_i is an input sample, ω is the weight vector and b is the threshold. The optimal classification function correspondingly becomes:
f(x) = sgn( Σ_{i=1}^{m} a_i* y_i K(x_i, x) + b* )
where b* is the classification threshold, a_i* are the optimal coefficients, and K(x, x_i) = φ^T(x_i) φ(x) is the kernel function. A radial basis function is used as the kernel:
K(x, x_i) = exp( −||x − x_i||^2 / (2σ^2) )
where σ is the smoothness parameter. The trained classifier is then run for human detection over the original negative images, and the rectangles the SVM detects there by mistake are fed back as hard negatives for retraining, which yields the final classifier. A detected human bounding box is shown in Figure 6.
Camera tracking of the person and camera calibration:
The trained final classifier performs human detection on the video sequence once every 25 frames. After a person is detected, the center pixel coordinate G_1(x_1, y_1) of the human rectangle is computed and the center pixel coordinate G_2(x_2, y_2) of the full frame is found; the servo rotation angles then follow from
Δθ_x = (x_1 − x_2) · θ / X,  Δθ_y = (y_1 − y_2) · θ / Y
where X and Y are the maximum horizontal and vertical pixel values of the camera image and θ is the camera's wide angle.
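The re-centering step can be sketched as a small function: the angular correction is the pixel offset between box center and frame center, scaled by field-of-view over frame size. This is an illustrative sketch of the relation above; the function name, the per-axis FOV parameter and the sign convention are assumptions.

```python
def servo_correction(box_center, frame_center, frame_size, fov_deg):
    """Pan/tilt angles (degrees) needed to re-centre the detection box.
    box_center and frame_center are (x, y) pixel pairs, frame_size is
    (X, Y) in pixels, fov_deg is (horizontal, vertical) field of view."""
    dx = box_center[0] - frame_center[0]
    dy = box_center[1] - frame_center[1]
    pan = dx / frame_size[0] * fov_deg[0]
    tilt = dy / frame_size[1] * fov_deg[1]
    return pan, tilt
```

A box centred a quarter-frame to the right of a 640-pixel-wide, 100°-FOV image, for instance, calls for a 25° pan.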
Further, according to the pinhole imaging model,
x / f = X / Z,  i.e.  Z = f · X / x
where f is the focal length of the camera, Z the distance from the person to the camera, X the height of the person, and x the height of the person on the image plane. The distance between person and camera can thus be computed, and the chassis servo keeps that distance constant, which guarantees the sharpness of the captured video. In practice, because the center of the camera chip usually does not lie on the optical axis, and because a pixel on a low-cost imager is rectangular rather than square, the ideal pinhole model cannot be used for the calculation directly; the camera intrinsic parameters c_x, c_y, f_x, f_y are therefore introduced. Here c_x and c_y are the possible offsets of the optical axis, f_x is the product of the physical focal length F of the lens and the number of imager units per unit length s_x, and f_y the product of F and s_y. Since s_x and s_y cannot be measured directly in the camera calibration process, and F cannot be measured without dismantling the camera, only the combined quantities f_x = F·s_x and f_y = F·s_y can be computed directly. The projection of a world point onto the camera can then be written as
q = M Q,  with  q = [x, y, w]^T,  Q = [X, Y, Z]^T,  M = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]
where M is the intrinsic parameter matrix and q is the pixel coordinate of the projected point, a three-dimensional vector representing the homogeneous coordinates of the two-dimensional projective space. Expanding the formula shows that w = Z, so dividing by w restores the earlier definition; Q is the actual physical coordinate.
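The similar-triangles distance relation above reduces to one line of arithmetic. The sketch below is illustrative only: the function name and the sample numbers are made up, and all lengths must be in consistent units.

```python
# Pinhole relation x/f = X/Z rearranged to Z = f * X / x, where X is
# the person's real height, x the height of the detection box on the
# image plane, and f the focal length.
def distance_from_height(f_mm, real_height_m, image_height_mm):
    return f_mm * real_height_m / image_height_mm

# e.g. a 1.7 m person imaged 4 mm tall through an 8 mm lens stands
# 8 * 1.7 / 4 = 3.4 m from the camera.
```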
A camera with no lens would introduce no distortion; because lenses are used, the camera's radial and tangential distortion must be modeled so that accurate coordinate values can be obtained. For radial distortion, the distortion at the center of the imager is 0 and becomes increasingly severe toward the edges. In practice it can be described quantitatively by the first few terms of the Taylor series expansion around r = 0, and the radial position of a point on the imager is adjusted as follows:
x_co = x (1 + k_1 r^2 + k_2 r^4)
y_co = y (1 + k_1 r^2 + k_2 r^4)
where (x, y) is the original position of the distorted point on the imager, (x_co, y_co) is the new position after correction, and k_1 and k_2 are the coefficients of the first and second terms of the Taylor expansion. Tangential distortion is described with two additional parameters p_1 and p_2:
x_co = x + [2 p_1 y + p_2 (r^2 + 2 x^2)]
y_co = y + [p_1 (r^2 + 2 y^2) + 2 p_2 x]
There are therefore four distortion parameters in total: k_1, k_2, p_1, p_2. Camera calibration is carried out to find the 4 camera intrinsic parameters (f_x, f_y, c_x, c_y) and the 4 distortion parameters, two radial (k_1, k_2) and two tangential (p_1, p_2). Once all parameters are obtained, the person's actual physical coordinates can be computed from their pixel coordinates in the image, and the chassis motor is regulated to keep person and camera at an appropriate distance.
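The combined radial-plus-tangential adjustment above can be applied point by point. The sketch below implements exactly the two formulas as printed in this description (note they use a 2·p_1·y tangential term); the function name is an assumption.

```python
def correct_point(x, y, k1, k2, p1, p2):
    """Apply the radial and tangential terms above to an imager point
    (x, y); returns the adjusted point (x_co, y_co)."""
    r2 = x * x + y * y                      # r^2
    radial = 1 + k1 * r2 + k2 * r2 * r2     # 1 + k1*r^2 + k2*r^4
    x_co = x * radial + 2 * p1 * y + p2 * (r2 + 2 * x * x)
    y_co = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x
    return x_co, y_co
```

With all four parameters zero the point is unchanged, as expected of an undistorted lens.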
The system uses an 8*6 black-and-white chessboard as the calibration object, and the camera is assumed distortion-free while the calibration parameters are being solved. Each chessboard view yields a homography matrix H, which is written in column-vector form H = [h_1 h_2 h_3], each h being a 3*1 vector. H equals the intrinsic matrix M multiplied by the combination of the first two rotation columns r_1, r_2 and the translation vector t, together with a scale factor s:
H = [h_1 h_2 h_3] = s M [r_1 r_2 t]
Decomposing this equation gives:
r_1 = λ M^{-1} h_1
r_2 = λ M^{-1} h_2
t = λ M^{-1} h_3
with λ = 1/s. The rotation vectors are mutually orthogonal by construction, so r_1 and r_2 are orthogonal and
r_1^T r_2 = 0
For any vectors a and b, (ab)^T = b^T a^T, which gives the first constraint:
h_1^T (M^{-1})^T M^{-1} h_2 = 0
The rotation vectors are also of equal length, i.e. ||r_1|| = ||r_2||, which gives the second constraint:
h_1^T (M^{-1})^T M^{-1} h_1 = h_2^T (M^{-1})^T M^{-1} h_2
Let B = (M^{-1})^T M^{-1}; expanded,
B = [[B_11, B_12, B_13], [B_12, B_22, B_23], [B_13, B_23, B_33]]
and B has a closed-form solution in terms of the intrinsic parameters. Because B is symmetric it can be written as a dot product with a six-element vector; rearranging the elements of B into a new vector b = [B_11, B_12, B_22, B_13, B_23, B_33]^T gives
h_i^T B h_j = v_ij^T b
so the two constraints can then be written as
[ v_12^T ; (v_11 − v_22)^T ] b = 0
Using K chessboard images taken from different angles and stacking these equations gives
V b = 0
where V is a 2K*6 matrix. The camera intrinsics can be obtained directly from the closed-form solution of the B matrix:
f_x = sqrt( λ / B_11 )
f_y = sqrt( λ B_11 / (B_11 B_22 − B_12^2) )
c_x = −B_13 f_x^2 / λ
c_y = (B_12 B_13 − B_11 B_23) / (B_11 B_22 − B_12^2)
where λ = B_33 − [B_13^2 + c_y (B_12 B_13 − B_11 B_23)] / B_11. The extrinsic parameters are then computed from the homography condition:
r1=λ M-1h1
r2=λ M-1h2
r3=r1*r2
T=λ M-1h3
λ=1/ is determined by orthogonality condition in formula | | M-1h1| |, the position of the perception point obtained on the image due to distortion
It is false.If pin-hole model is perfectly, to enable (xp,yp) it is the position put, enable (xd,yd) it is the position distorted, then
Have:
By following replacement, the available calibration result not distorted:
In this way after reevaluating inside and outside parameter, these obtained a large amount of equations can find distortion parameter, take the photograph when finding out
As after all parameters, so that it may the position of human body physical coordinates is acquired according to its pixel coordinate, thus by adjusting chassis
Motor makes video camera and human body be maintained at certain distance.
Face detection with the Adaboost algorithm:
The system uses the Haar-like features shown in Figure 7 as the key features for judging faces. The feature templates used to compute the Haar features are combinations of simple rectangles, built from two or more congruent rectangles, each template containing both black and white rectangles. The computation uses the integral image method: for a point A(x, y) in the image with gray value I(x, y), and (x′, y′) ranging over the points with x′ ≤ x and y′ ≤ y, the integral image ii(x, y) is
ii(x, y) = Σ_{x′≤x, y′≤y} I(x′, y′)
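The integral image and the constant-time rectangle sums it enables can be sketched in pure Python. This is a minimal illustration of the formula above, assuming the image I is a list of rows; function names are illustrative.

```python
# ii(x, y) = sum of I(x', y') over x' <= x, y' <= y, built with a
# one-pass running-sum recurrence; any rectangle sum then costs at
# most four table lookups.
def integral_image(I):
    h, w = len(I), len(I[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += I[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of I over the inclusive rectangle [x0..x1] x [y0..y1]."""
    total = ii[y1][x1]
    if x0: total -= ii[y1][x0 - 1]
    if y0: total -= ii[y0 - 1][x1]
    if x0 and y0: total += ii[y0 - 1][x0 - 1]
    return total
```

A Haar feature value is then just a signed combination of such rectangle sums (white regions minus black regions).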
After the Haar features of the sample images are computed, the required features must be screened. Each Haar feature corresponds to one classifier, and the optimal Haar features are picked out of the large set of feature values to construct the face detection classifier. A weak classifier is constructed as follows:
Given a series of face samples (x_1, y_1), (x_2, y_2), …, (x_i, y_i), …, (x_n, y_n), where x_i is the i-th sample and y_i its label, y_i = 1 denoting a face sample and y_i = 0 a non-face sample, the weights are initialized as
D_1(i) = 1/(2m) for face samples,  D_1(i) = 1/(2n) for non-face samples
where D_t(i) denotes the weight of the i-th sample in the t-th round and m, n are the numbers of positive and negative samples; the system uses the ORL face database as its sample set. The weights are then normalized to obtain q_t(i):
q_t(i) = D_t(i) / Σ_j D_t(j)
After weight normalization, the Haar feature values of all samples are computed and sorted in ascending order, recording for each position i: the total weight S^+ of all face samples, the total weight S^- of all non-face samples, the weight T^+ of the face samples before the current element, and the weight T^- of the non-face samples before the current element. Two values J = T^+ + (S^- − T^-) and K = T^- + (S^+ − T^+) are computed; letting E = min(J, K), the element minimizing E is found, and the feature value f(x) at that element (located via i) is the required initial threshold f′_θ, with the direction indicator p_j equal to 1 when J = K and −1 otherwise. Each Haar feature f has a corresponding classifier h(x, f, p, θ), where p is the direction indicator and θ the threshold; with the initial threshold f′_θ the weak classifier is
h(x, f, p, θ) = 1 if p·f(x) < p·f_θ, and 0 otherwise
The samples are used to train each classifier and compute its weighted error rate:
ξ = Σ_i q_i |h(x_i, f, p, θ) − y_i|
The classifier with the smallest weighted error rate is the optimal classifier h_t for the current feature, with optimal threshold f_θ; q_i is the normalized weight. This completes the solution of all parameters of the weak classifier.
Using the trained weak classifier, the sample space is traversed with the corresponding Haar feature and threshold f_θ. The weights of misclassified samples stay unchanged while the weights of correctly classified samples shrink, which increases the relative share of the misclassified ones. The weight update rule is
D_{t+1}(i) = D_t(i) · β_t^{1−e_i},  β_t = ξ_t / (1 − ξ_t)
where e_i = 0 if sample i is classified correctly and e_i = 1 otherwise, ξ_t is the weighted error rate and D_{t+1}(i) is the updated weight.
The samples with updated weights are used to construct a new classifier. After T rounds of this loop there are T weak classifiers, each with the smallest weighted error rate of its round; cascading them yields the strong classifier:
H(x) = 1 if Σ_{t=1}^{T} a_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} a_t, and 0 otherwise
where a_t = log[(1 − ξ_t)/ξ_t] is the coefficient of h_t and ξ_t its weighted error rate.
The system uses a detection window of size 20*20 with a scaling factor of 1.2 for multi-scale detection, and performs multi-scale window merging under the conditions that the number of detection windows N > 4 and the circle-center radius R < 5.
Identity recognition is completed using principal component analysis (PCA):
For a random feature set χ = {χ_1, χ_2, ..., χ_n} with χ_i ∈ R^d, PCA first computes its mean vector μ, with the following formula:
where n is the number of features; then the covariance matrix S is computed, with the following formula:
The eigenvalues λ_i and the corresponding eigenvectors ν_i are computed, with the following formula:
Sν_i = λ_i ν_i, i = 1, 2, ..., n
The eigenvalues are sorted in decreasing order, with the eigenvectors kept in the same order; the K principal components are the eigenvectors corresponding to the K largest eigenvalues, and the K principal components of χ are:
y = ω^T(χ - μ)
where ω = (ν_1, ν_2, ..., ν_k)
All samples are labeled: the samples of member A are all labeled A, the samples of member B are all labeled B, and so on. Fifteen principal components are retained and a confidence threshold is initialized. The data set shown in Figure 9 is fed into the trainer to obtain and save the eigenvectors and mean vector of each member's face, and the data set shown in Figure 10 is fed into the trainer to obtain and save the eigenvectors and mean vector of each member's non-face images. When the Adaboost algorithm detects a face, the cropped face photo is compared with the saved face eigenvectors to verify identity; when no face is detected, the picture of the whole body is compared with the eigenvectors saved for the non-face parts to verify identity.
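The PCA identification flow above (mean vector μ, covariance eigenvectors ν_i, projection y = ω^T(χ - μ), then nearest stored projection) can be sketched in Python. This is an illustrative numpy version, not the patent's implementation: the SVD route to the eigenvectors and the toy member data are assumptions.

```python
import numpy as np

def pca_fit(X, k):
    """Mean vector and top-k eigenvectors of the covariance of X (n, d)."""
    mu = X.mean(axis=0)
    # right-singular vectors of the centered data = eigenvectors of S
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T                      # omega = (v_1, ..., v_k), shape (d, k)
    return mu, W

def pca_project(x, mu, W):
    return W.T @ (x - mu)             # y = omega^T (chi - mu)

def identify(x, mu, W, gallery, labels):
    """Label of the stored projection closest to the probe's projection."""
    y = pca_project(x, mu, W)
    dists = [np.linalg.norm(y - g) for g in gallery]
    return labels[int(np.argmin(dists))]
```

With face vectors for members A and B stored in `gallery`, a probe image receives the label of its nearest projection, matching the nearest-vector rule described in the text.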
Claims (6)
1. An indoor human body detection, tracking and identification system based on multiple sensors, characterized in that: the system completes the preliminary localization of the human body with pyroelectric infrared sensors; a steering gear moves the camera toward the range where the human body appears; the camera acquires the image information within this range and transfers it to a computer; the computer completes the computations relevant to human detection and controls the steering gear so that the camera and mobile platform track the human body; the computer matches the collected image information against background information so as to determine the identity of the detected person;
The specific workflow is as follows:
S1 Preliminary human detection and initial positioning of the camera
The system detects whether a human body is present indoors using pyroelectric infrared sensors, which work by capturing human infrared radiation and converting it into a weak voltage signal. The sensors used by the system have a sensing range of 6 meters and a sensing angle of 100 degrees. Four pyroelectric sensors are distributed due west, due south, due east and due north of the camera, so that their detection angles cover the full 360-degree indoor range; when a pyroelectric sensor detects a human body signal, the camera's steering-gear pan-tilt head turns toward that range;
S2 Human detection using images
The system detects human bodies in the video sequence using the HOG+SVM method, with the following steps:
S2.1 Construction of the sample database
Human samples are acquired as positive samples: the system uses the pictures in the INRIA static pedestrian test database as training positive samples, and uses the negative samples in the INRIA database together with processed pictures of the unoccupied indoor environment as training negative samples;
S2.2 Selection of the HOG feature extraction parameters
The method uses the HOG detection functions built into OpenCV, with the corresponding parameter settings: detection window 64*64, cell size 16*16 pixels, block sliding stride 16, and gradient direction quantized into 9 bins, so the HOG feature dimension of one image is 4*9*3*3 = 324; the block normalization method is L2-Hys with a threshold of 0.2, and Gamma correction is applied;
S2.3 SVM training
The HOG features of all positive and negative samples are extracted and labeled: positive samples are labeled 1 and negative samples 0; then the HOG features and labels are all input into the SVM trainer to be trained, yielding a human body classifier;
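The S2.3 step (features plus labels into an SVM trainer) can be illustrated with a minimal hinge-loss linear classifier. The patent uses OpenCV's SVM on 324-dimensional HOG features; this numpy sub-gradient sketch is only a stand-in, with assumed learning-rate/epoch settings and ±1 labels in place of the 1/0 labels above.

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=500):
    """Hinge-loss linear classifier by sub-gradient descent (labels +1/-1).
    A stand-in for the SVM trainer named in the text, not OpenCV's solver."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            if y[i] * (X[i] @ w - b) < 1:   # margin violated: y(w.x - b) < 1
                w += lr * C * y[i] * X[i]
                b -= lr * C * y[i]
        w *= (1 - lr)                        # shrinkage from the ||w||^2/2 term
    return w, b

def predict(w, b, x):
    return 1 if x @ w - b >= 0 else -1
```

In the real system, `X` would hold the 324-dimensional HOG descriptors of the cropped 64*128 samples.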
S3 Controlling camera rotation to track the human body
The system marks the location of the human body in the image with a box, detecting once every 25 frames; from the difference between the center pixel coordinates of the box and the center pixel coordinates of the whole image, it computes the left-right and up-down rotation angles of the steering gear, guaranteeing that the human body always appears within the camera's field of view. On the other hand, the intrinsic parameters and distortion parameters of the camera are obtained by camera calibration, so that the actual physical coordinates of the human body can be deduced from the pixel coordinates of the center of the human body box, and the chassis motor is adjusted to keep the camera and the human body within the given area;
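The S3 angle computation (the exact formula appears as an image in the original) can be sketched under the assumption that the rotation angle scales linearly with the pixel offset across the camera's wide angle θ:

```python
def servo_angles(box_center, img_center, img_size, wide_angle_deg):
    """Pan/tilt increments from the offset between the human-body box
    center G1 and the image center G2; linear scaling is an assumption."""
    (x1, y1), (x2, y2) = box_center, img_center
    X, Y = img_size                       # max horizontal / vertical pixel values
    pan = (x1 - x2) / X * wide_angle_deg
    tilt = (y1 - y2) / Y * wide_angle_deg
    return pan, tilt
```

A box centered at (480, 240) in a 640*480 frame with a 100-degree wide angle would yield a 25-degree pan and no tilt.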
S4 Identity recognition
The identification procedure of the system first crops the detected human body box into an independent image, then performs face detection on this image using a Viola-Jones classifier, as follows:
S4.1 An integral image is used to accelerate the computation of the values of rectangular image regions and of rectangular regions rotated by 45 degrees; this image structure is used to accelerate the computation of the Haar-like input features, which are all input to the Adaboost classifier;
S4.2 The Adaboost algorithm is used to create face and non-face classifier nodes;
S4.3 The weak classifier nodes are composed into a screening-type cascade; each node D_j contains a decision tree trained with a group of Haar-like features to decide face or non-face; the nodes are arranged from simple to complex, which minimizes the computation spent on rejecting simple image regions; the first classifier is optimal, passing the image regions that contain faces while also letting through some images that do not; the second is a sub-optimal classifier with a lower rejection rate; and so on. As long as an image region has passed through the entire cascade, it is considered to contain a face, and the face is marked out with a box.
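The screening-type cascade of S4.3 reduces to: run the nodes from simple to complex and reject on the first failure. A minimal sketch, with hypothetical stage functions standing in for the Haar-feature decision trees:

```python
def cascade_detect(region, stages):
    """Screening-type cascade: a region counts as a face only if it passes
    every node; nodes are ordered simple -> complex, so most non-face
    regions are rejected with minimal computation."""
    for stage in stages:
        if not stage(region):
            return False        # early rejection: later nodes never run
    return True                 # passed the whole cascade -> face
```

The stage functions below are toy placeholders (mean/max intensity checks), purely to show the early-exit control flow.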
2. The multi-sensor based indoor human body detection, tracking and identification system according to claim 1, characterized in that: since the number of household members is limited, the face sample database is likewise limited, so identity recognition uses principal component analysis (PCA): all training data are projected into the PCA subspace, the image to be recognized is projected into the same PCA space, and the post-projection training vector closest to the post-projection vector of the image to be recognized is found.
3. The multi-sensor based indoor human body detection, tracking and identification system according to claim 1, characterized in that: the human detection method used by the system cannot capture frontal face information every time; if the cropped human body box contains only a side view of the head, a back view, or only the body below the head, the Adaboost face detection step is skipped and identity discrimination is performed directly with the PCA method, with only the data set replaced.
4. The multi-sensor based indoor human body detection, tracking and identification system according to claim 1, characterized in that: an Arduino reads the pyroelectric sensor signals and is responsible for completing the initial positioning, moving the camera to the range of the person's activity; the camera then acquires all visual information within its current field of view and the video sequence is passed into the system's computer; the computer detects the human body in the video with the HOG+SVM algorithm, then adjusts the camera steering gear and the chassis motor so that the human body always appears in the camera's field of view; the computer performs face detection on the cropped human body box picture using Adaboost; if a face is detected, identity recognition is performed with PCA, and if no face is detected, identity recognition is performed directly with whole-body PCA. The reason identity recognition is not always performed directly with whole-body PCA is that, compared with the entire body, applying PCA to the face requires fewer feature dimensions, costs less computation, and achieves a higher recognition rate.
5. The multi-sensor based indoor human body detection, tracking and identification system according to claim 1, characterized in that: the pyroelectric sensors in the 0°, 90°, 180° and 270° directions around the camera are numbered 1, 2, 3 and 4 respectively; if only one pyroelectric sensor captures a signal, the camera steering gear moves toward the direction of that sensor; when two pyroelectric sensors capture signals, the camera moves to the position midway between the angles of the two sensors, so if sensors 1 and 2 capture signals, the camera moves to the 45° position; after the camera's initial position is fixed, signal capture from the pyroelectric sensors is turned off, until no human body can be detected in the video information obtained from the camera, whereupon the initial positioning by the pyroelectric sensors is restarted.
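The sensor-to-heading rule of this claim can be sketched as follows; the claim only gives the sensors-1-and-2 example, so the handling of the wrap-around pair (sensors 4 and 1) is an assumption:

```python
def camera_heading(active):
    """Initial camera heading from the numbered pyroelectric sensors
    (1: 0 deg, 2: 90 deg, 3: 180 deg, 4: 270 deg). One active sensor ->
    its direction; two active sensors -> midway between their angles
    (sensors 1 and 2 -> 45 deg). Pair (4, 1) mapped to 315 deg is an
    assumption for the wrap-around case."""
    angle = {1: 0, 2: 90, 3: 180, 4: 270}
    if len(active) == 1:
        return angle[active[0]]
    a, b = sorted(angle[s] for s in active)
    if b - a == 270:            # sensors 1 and 4 straddle the 0-degree axis
        return 315
    return (a + b) / 2
```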
6. The multi-sensor based indoor human body detection, tracking and identification system according to claim 1, characterized in that the HOG+SVM human detection steps are:
The system extracts the HOG features of the human body and feeds them into the SVM trainer to train a classification model; the HOG feature extraction process of the human body features is as follows:
Positive and negative human samples are collected; the positive samples in the data set are human body pictures of size 96*160, and when used, 16 pixels are removed on each of the four sides to crop out the 64*128 human body in the middle; the negative samples are cropped at random from indoor pictures containing no human body, likewise at size 64*128; the number of positive and negative samples can reach 10000. All positive and negative samples are given labels: positive samples are labeled 1 and negative samples 0; then the HOG descriptors of the positive and negative sample images are extracted;
Gamma correction is used to normalize the color space of the input image, which not only suppresses the interference of noise but also reduces the influence of illumination changes;
The gradient at pixel (x, y) is computed with the following formulas:
G_x(x, y) = H(x+1, y) - H(x-1, y)
G_y(x, y) = H(x, y+1) - H(x, y-1)
where G_x(x, y) denotes the horizontal gradient at pixel (x, y), G_y(x, y) denotes the vertical gradient at pixel (x, y), and H(x+1, y), H(x-1, y), H(x, y+1) and H(x, y-1) denote the pixel values at the points (x+1, y), (x-1, y), (x, y+1) and (x, y-1); the gradient magnitude G(x, y) and gradient direction α(x, y) at pixel (x, y) are respectively:
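The gradient formulas above can be sketched with numpy (the magnitude and direction formulas are images in the original; the unsigned 0-180 degree orientation shown here matches the 9-bin setup but is otherwise an assumption):

```python
import numpy as np

def hog_gradients(H):
    """Central differences per the patent's formulas:
    Gx = H(x+1,y) - H(x-1,y), Gy = H(x,y+1) - H(x,y-1),
    with magnitude sqrt(Gx^2 + Gy^2); borders are left at zero."""
    H = H.astype(float)
    Gx = np.zeros_like(H)
    Gy = np.zeros_like(H)
    Gx[:, 1:-1] = H[:, 2:] - H[:, :-2]
    Gy[1:-1, :] = H[2:, :] - H[:-2, :]
    mag = np.hypot(Gx, Gy)
    ang = np.degrees(np.arctan2(Gy, Gx)) % 180   # unsigned orientation
    return mag, ang
```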
A histogram of oriented gradients is constructed: the 64*64 pixel detection window is divided into 4*4 cells, each cell being 16*16 pixels, with the gradient direction quantized into 9 bins; every pixel in a cell casts a weighted vote for one of the histogram's orientation channels, the weight being computed from the gradient magnitude of that pixel; every 4 adjacent cells form one block, giving 9 blocks in total, so the dimension of the resulting HOG feature vector is 4*9*3*3 = 324;
Since changes in illumination make the range of gradient magnitudes very large, the gradient magnitudes need to be normalized; the normalization method of the following formula is used:
where ν denotes the vector that has not yet been normalized, ||ν||_k denotes the k-norm of ν, and ε denotes a constant with a very small value; afterwards the normalized feature vector matrix and the labels are put into the SVM to be trained;
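The L2-Hys normalization selected in S2.2 (L2-normalize, clip at the 0.2 threshold, renormalize) can be sketched as follows; the normalization formula itself is an image in the original, so the exact placement of the small constant ε follows common convention and is an assumption:

```python
import numpy as np

def l2_hys(v, eps=1e-5, clip=0.2):
    """L2-Hys block normalization: L2-normalize, clip components at the
    0.2 threshold, then L2-normalize again (HOG votes are nonnegative)."""
    v = v / np.sqrt(np.sum(v ** 2) + eps ** 2)
    v = np.minimum(v, clip)
    v = v / np.sqrt(np.sum(v ** 2) + eps ** 2)
    return v
```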
The generalized optimal separating hyperplane solved by the SVM is converted into a quadratic optimization problem under inequality constraints:
with the corresponding constraints y_i(ω^T x_i - b) ≥ 1 - ξ_i and ξ_i ≥ 0, i = 1, 2, ..., m, where m is a positive integer, C is the penalty factor, ξ_i are the introduced slack variables, x_i is an input sample, ω is its parameter vector, and b is the threshold; the optimal classification function correspondingly becomes:
where b* is the classification threshold, a_i* is the coefficient for y_i, and φ^T(x_i)φ(x) is the kernel function K(x, x_i) = φ^T(x_i)φ(x); the kernel function uses a radial basis function:
where σ is the smoothness parameter. Human detection is then performed on the original negative sample images with the trained classifier, and the falsely detected rectangles are put into the SVM again as hard examples and trained, obtaining the final classifier;
The process of tracking the human body with the camera and calibrating the camera:
Human detection is performed on the video sequence with the trained final classifier, once every 25 frames; when a human body is detected, the center pixel coordinates G_1(x_1, y_1) of the human body rectangle are computed, and then the center pixel coordinates G_2(x_2, y_2) of the full image are found; according to the formula:
where X and Y are the maximum horizontal and vertical pixel values of the photos the camera shoots, and θ is the camera's wide angle;
Further, according to the pinhole imaging model:
where f is the focal length of the camera, Z is the distance from the human body to the camera, X is the height of the human body, and x is the height of the human body on the image plane, the distance between the human body and the camera can be computed; the chassis steering gear is then adjusted to keep the distance between the human body and the camera constant, which guarantees the clarity of the video the camera shoots. In practice, because the center of the camera chip is usually not on the optical axis, and because a single pixel on a low-cost imager is rectangular rather than square, the ideal pinhole imaging model cannot be used for the computation, and the camera intrinsic parameters C_x, C_y, f_x, f_y are therefore introduced: C_x and C_y are the possible offsets of the optical axis; f_x is the product of the physical focal length F of the lens and s_x, the number of imager cells per unit length in x, and f_y is the product of F and the corresponding quantity s_y. Since s_x and s_y cannot be measured directly during camera calibration, and F cannot be measured directly without dismantling the camera, only the combined quantities f_x = Fs_x and f_y = Fs_y can be computed directly. The projection of a point in the world onto the camera can then be expressed with the following formula:
where M is the parameter matrix and q is the pixel coordinate of the point in projection space, a three-dimensional vector representing the homogeneous coordinates of the two-dimensional projection space; expanding the formula shows that w = Z, so dividing by w recovers the previous definition, Q being the actual physical coordinate;
A camera without a lens would introduce no distortion; since a lens is used, the radial and tangential distortion of the camera must be modeled so that accurate coordinate values can be obtained. For radial distortion, the distortion at the center of the imager is 0 and becomes increasingly severe toward the edges; in practice it can be described quantitatively by the first few terms of the Taylor series expansion around the position r = 0, and the radial position of a point on the imager is adjusted as follows:
x_co = x(1 + k_1r² + k_2r⁴)
y_co = y(1 + k_1r² + k_2r⁴)
where (x, y) is the original position of the distorted point on the imager, (x_co, y_co) is the new corrected position, and k_1 and k_2 are the parameters of the first and second terms of the Taylor series expansion. Tangential distortion is described with two additional parameters p_1 and p_2, with the following formulas:
x_co = x + [2p_1y + p_2(r² + 2x²)]
y_co = y + [p_1(r² + 2y²) + 2p_2x]
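The two corrections can be transcribed directly from the formulas above. This follows the patent's formulas as printed; note that the widely used distortion model has 2p_1xy in the tangential x term, so the 2p_1y here may be a translation loss:

```python
def radial_correct(x, y, k1, k2):
    """Radial distortion: Taylor terms k1*r^2 + k2*r^4 around r = 0."""
    r2 = x * x + y * y
    s = 1 + k1 * r2 + k2 * r2 * r2
    return x * s, y * s

def tangential_correct(x, y, p1, p2):
    """Tangential distortion, as printed in the patent (see lead-in note)."""
    r2 = x * x + y * y
    return (x + 2 * p1 * y + p2 * (r2 + 2 * x * x),
            y + p1 * (r2 + 2 * y * y) + 2 * p2 * x)
```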
There are therefore four distortion parameters in total: k_1, k_2, p_1, p_2. Camera calibration is performed for the camera intrinsic parameters and the distortion parameters, finding the 4 camera intrinsic parameters (f_x, f_y, c_x, c_y) and the 4 distortion parameters: two radial (k_1, k_2) and two tangential (p_1, p_2). After all the parameters are obtained, the actual physical coordinates of the human body can be obtained from the pixel coordinates of the human body in the image, and the system chassis motor is adjusted to keep the human body and the camera at an appropriate distance;
The system uses an 8*6 black-and-white chessboard as the calibration object, and assumes the camera has no distortion while solving for the calibration parameters. For each chessboard view a homography matrix H is obtained; writing H in column-vector form, H = [h_1 h_2 h_3], where each h is a 3*1 vector, H equals the camera intrinsic parameter matrix M multiplied by the combined matrix of the first two rotation columns r_1 and r_2 and the translation vector t, together with a scale factor s, that is:
H = [h_1 h_2 h_3] = sM[r_1 r_2 t]
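The pinhole relations of this passage, projection with intrinsics (q = MQ, then divide by w = Z) and the distance estimate Z = fX/x used to drive the chassis motor, can be sketched as:

```python
def project(Q, fx, fy, cx, cy):
    """q = M Q with M = [[fx,0,cx],[0,fy,cy],[0,0,1]]; dividing by w = Z
    gives the pixel coordinates."""
    X, Y, Z = Q
    return (fx * X / Z + cx, fy * Y / Z + cy)

def distance_from_height(f_pixels, body_height_m, body_height_px):
    """Pinhole relation x/f = X/Z rearranged to Z = f*X/x: distance of the
    human body from the camera given its real and image heights."""
    return f_pixels * body_height_m / body_height_px
```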
Decomposing the equation gives:
r_1 = λM^(-1)h_1
r_2 = λM^(-1)h_2
t = λM^(-1)h_3
where λ = 1/s; the rotation vectors are mutually orthogonal by construction, so r_1 and r_2 are orthogonal:
r_1^T r_2 = 0
For any vectors a and b, (ab)^T = b^T a^T, which gives the first constraint:
h_1^T (M^(-1))^T M^(-1) h_2 = 0
The rotation vectors are known to have equal length, i.e. ||r_1|| = ||r_2||, giving the second constraint:
h_1^T (M^(-1))^T M^(-1) h_1 = h_2^T (M^(-1))^T M^(-1) h_2
Let B = (M^(-1))^T M^(-1); expanding gives:
The closed-form solution for the general form of the matrix B is:
Because B is symmetric, it can be written as the dot product of a 6-element vector; rearranging the elements of B gives a new vector b, and then:
The two constraints can then be written as:
Using K chessboard images taken at different angles and stacking these equations gives:
Vb = 0
where V is a 2K*6 matrix; the camera intrinsic parameters can be obtained directly from the closed-form solution for B:
c_x = -B_13 f_x²/λ
where λ is as given in the accompanying formula; the extrinsic parameters can be computed from the homography condition:
r_1 = λM^(-1)h_1
r_2 = λM^(-1)h_2
r_3 = r_1 × r_2
t = λM^(-1)h_3
where λ = 1/||M^(-1)h_1|| is determined by the orthogonality condition. Because of distortion, the position of a point as perceived on the image is not correct; if the pinhole model were perfect, then letting (x_p, y_p) be the ideal position of the point and (x_d, y_d) the distorted position:
With the following substitution, the calibration result without distortion can be obtained:
After the intrinsic and extrinsic parameters have been re-estimated in this way, the large set of equations obtained yields the distortion parameters; once all the camera parameters have been found, the physical coordinates of the human body can be obtained from its pixel coordinates, so that by adjusting the chassis motor the camera and the human body are kept at a fixed distance;
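The Zhang-style calibration described above (two constraint rows per chessboard homography, stacked into Vb = 0) can be sketched as follows; the element order of b, taken as (B11, B12, B22, B13, B23, B33), is an assumed convention because the expanded matrix is an image in the original:

```python
import numpy as np

def v_row(H, i, j):
    """Row vector v_ij such that v_ij . b = h_i^T B h_j for symmetric B
    with b = (B11, B12, B22, B13, B23, B33)."""
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[1] * hj[1],
                     hi[2] * hj[0] + hi[0] * hj[2],
                     hi[2] * hj[1] + hi[1] * hj[2],
                     hi[2] * hj[2]])

def stack_constraints(Hs):
    """Two rows per chessboard view: h1^T B h2 = 0 and
    h1^T B h1 = h2^T B h2, stacked into the 2K x 6 matrix V with Vb = 0."""
    rows = []
    for H in Hs:
        rows.append(v_row(H, 0, 1))
        rows.append(v_row(H, 0, 0) - v_row(H, 1, 1))
    return np.array(rows)
```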
Face detection is completed using the Adaboost algorithm:
The system uses Haar features as the key features for judging faces. The feature templates used in the computation of Haar features are simple rectangle combinations made up of two or more congruent rectangles, the templates containing both black and white rectangles. The computation uses the integral image method: for a point A(x, y) in the image with gray value I(x, y), and (x', y') denoting the points with x' ≤ x and y' ≤ y, the integral image ii(x, y) is computed as:
ii(x, y) = Σ_{x'≤x, y'≤y} I(x', y')
After the Haar features of the sample images are computed, the required features must be screened out; each Haar feature corresponds to one classifier, and the optimal Haar features are picked out from the large set of feature values to construct the face detection classifier. The weak classifier is constructed as follows:
Given a series of face samples (x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n), where x_i is the i-th sample and y_i is the sample label, y_i = 1 indicating a face sample and y_i = 0 a non-face sample, the weights are initialized with the following formula:
where D_t(i) denotes the weight of the i-th sample in the t-th iteration and m, n denote the numbers of positive and negative samples; the system uses the ORL face database as the sample set. After updating, the weights are normalized to obtain q_t(i):
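The integral image above, and the constant-time rectangle sums that make Haar feature evaluation fast, can be sketched as:

```python
import numpy as np

def integral_image(I):
    """ii(x, y) = sum of I(x', y') for x' <= x and y' <= y, via cumsum."""
    return I.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum over any rectangle in O(1) from four integral-image lookups,
    which is what makes Haar feature evaluation fast."""
    s = ii[r1, c1]
    if r0 > 0:
        s -= ii[r0 - 1, c1]
    if c0 > 0:
        s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s
```

A two-rectangle Haar feature is then just the difference of two `box_sum` calls over the black and white rectangles.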
After weight normalization, the Haar feature values of all samples are computed and sorted in ascending order, recording the original index sequence I. The total weight S+ of all face samples, the total weight S- of all non-face samples, the face sample weight T+ before the current element, and the non-face sample weight T- before the current element are obtained; two parameters J and K are then computed, J = T+ + (S- - T-) and K = T- + (S+ - T+). Let E = min(J, K); the element minimizing E is found, and through the index sequence I the feature value f(x) corresponding to that element is the required initial threshold f'_θ, while the direction vector p_j takes the value 1 when J = K and -1 otherwise. Each Haar feature f has a classifier h(x, f, p, θ) corresponding to it, where θ is the angle of the direction vector; with the initial threshold f'_θ, the samples are trained according to the weak classifier formula:
and the weighted error rate of each classifier is computed:
ξ_i = Σ_i q_i |h(x_i, f, p, θ) - y_i|
The classifier with the smallest weighted error rate is the optimal classifier h_t obtained for the current feature, together with its optimal threshold f_θ; q_i is the weight after normalization. All the parameters of the weak classifier are thus solved;
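The threshold search just described (sort the feature values, keep running weight sums, minimize E = min(J, K)) can be sketched as follows; the polarity rule is an assumption, since the patent's "1 when J = K" condition reads garbled in translation and the usual convention takes the side attaining the minimum:

```python
import numpy as np

def best_threshold(f, y, q):
    """Optimal weak-classifier threshold: f feature values, y labels
    (1 face, 0 non-face), q normalized weights; returns
    (threshold, polarity, weighted error)."""
    order = np.argsort(f)                            # index sequence I
    f, y, q = f[order], y[order], q[order]
    S_pos = q[y == 1].sum()                          # S+ : total face weight
    S_neg = q[y == 0].sum()                          # S- : total non-face weight
    T_pos = np.cumsum(q * (y == 1)) - q * (y == 1)   # T+ before each element
    T_neg = np.cumsum(q * (y == 0)) - q * (y == 0)   # T- before each element
    J = T_pos + (S_neg - T_neg)
    K = T_neg + (S_pos - T_pos)
    E = np.minimum(J, K)
    i = int(np.argmin(E))
    polarity = 1 if K[i] <= J[i] else -1             # assumed convention
    return f[i], polarity, E[i]
```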
The trained weak classifier, with its corresponding Haar feature and threshold f_θ, traverses the positive samples of the sample space: the weights of misclassified samples remain unchanged while the weights of correctly classified samples become smaller, so that the relative proportion of the weights of misclassified samples increases; the weight update strategy is:
where ξ_i is the weighted error rate and D_{t+1}(i) is the updated weight;
New classifiers are constructed from the samples with updated weights; after T such iterations, the T weak classifiers with the smallest weighted error rates have been generated, and these classifiers are cascaded to obtain a strong classifier:
where a_t = log[(1 - ξ_i)/ξ_i] is the coefficient of h(t) and ξ_i is the weighted error rate;
The system uses a detection window of size 20*20 with a scaling factor of 1.2 for multi-scale detection, and performs multi-scale window merging under the conditions that the number of detection windows N > 4 and the circle-center radius R < 5;
Identity recognition is completed using principal component analysis (PCA):
For a random feature set χ = {χ_1, χ_2, ..., χ_n} with χ_i ∈ R^d, PCA first computes its mean vector μ, with the following formula:
where n is the number of features; then the covariance matrix S is computed, with the following formula:
The eigenvalues λ_i and the corresponding eigenvectors ν_i are computed, with the following formula:
Sν_i = λ_i ν_i, i = 1, 2, ..., n
The eigenvalues are sorted in decreasing order, with the eigenvectors kept in the same order; the K principal components are the eigenvectors corresponding to the K largest eigenvalues, and the K principal components of χ are:
y = ω^T(χ - μ)
where ω = (ν_1, ν_2, ..., ν_k)
All samples are labeled: the samples of member A are all labeled A, the samples of member B are all labeled B, and so on.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610835988.2A CN106503615B (en) | 2016-09-20 | 2016-09-20 | Indoor human body detecting and tracking and identification system based on multisensor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106503615A CN106503615A (en) | 2017-03-15 |
CN106503615B true CN106503615B (en) | 2019-10-08 |
Family
ID=58290726
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101236599A (en) * | 2007-12-29 | 2008-08-06 | 浙江工业大学 | Human face recognition detection device based on multi- video camera information integration |
CN102043966A (en) * | 2010-12-07 | 2011-05-04 | 浙江大学 | Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation |
2016-09-20: Application CN201610835988.2A filed in China; granted as patent CN106503615B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN106503615A (en) | 2017-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106503615B (en) | Indoor human body detecting and tracking and identification system based on multisensor | |
Seemanthini et al. | Human detection and tracking using HOG for action recognition | |
CN110110629B (en) | Personnel information detection method and system for indoor environment control | |
Lin et al. | Estimation of number of people in crowded scenes using perspective transformation | |
CN102089770B (en) | Apparatus and method of classifying movement of objects in a monitoring zone | |
CN108256459A (en) | Automatic face-library construction algorithm for security-gate face recognition based on multi-camera fusion |
CN108563999A (en) | Person identity recognition method and device for low-quality video images |
CN107438854A (en) | System and method for performing fingerprint-based user authentication using images captured by a mobile device |
US20140064571A1 (en) | Method for Using Information in Human Shadows and Their Dynamics | |
CN109800643A (en) | Multi-angle live-face identity recognition method |
CN102473238A (en) | Method and system for image analysis | |
CN101236599A (en) | Human face recognition detection device based on multi-video camera information integration |
CN109325393A (en) | Face detection, pose estimation, and camera-distance estimation using a single network |
JP2014010686A (en) | Face image authentication device | |
CN109255319A (en) | Anti-counterfeiting method for face-recognition payment against still photos |
Wang et al. | A new depth descriptor for pedestrian detection in RGB-D images | |
WO2021217764A1 (en) | Human face liveness detection method based on polarization imaging | |
CN110728252A (en) | Face detection method applied to regional personnel motion trail monitoring | |
CN108701211A (en) | Depth-sensing-based system for real-time occupancy detection, tracking, estimation and identification |
CN112541403A (en) | Indoor personnel falling detection method utilizing infrared camera | |
Galiyawala et al. | Person retrieval in surveillance videos using deep soft biometrics | |
Islam et al. | Correlating belongings with passengers in a simulated airport security checkpoint | |
CN112183287A (en) | People counting method of mobile robot under complex background | |
Park | Face Recognition: face in video, age invariance, and facial marks | |
KR102096324B1 (en) | System for detecting number of people using qr code and method for detecting number of people using the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||