CN106503615A - Indoor human body detecting and tracking and identification system based on multisensor - Google Patents

Indoor human body detecting and tracking and identification system based on multisensor

Info

Publication number
CN106503615A
CN106503615A (application CN201610835988.2A)
Authority
CN
China
Prior art keywords
human body
camera
sample
formula
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610835988.2A
Other languages
Chinese (zh)
Other versions
CN106503615B (en
Inventor
于乃功
王琛
蒋晓军
苑云鹤
刘庆瑞
蔡建羡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201610835988.2A priority Critical patent/CN106503615B/en
Publication of CN106503615A publication Critical patent/CN106503615A/en
Application granted granted Critical
Publication of CN106503615B publication Critical patent/CN106503615B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06V40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06F18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/285: Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
    • G06V40/161: Human faces; detection, localisation, normalisation
    • G06T2207/10004: Still image; photographic image
    • G06T2207/20081: Training; learning
    • G06T2207/30196: Human being; person
    • G06T2207/30201: Face

Abstract

An indoor human body detection, tracking and identification system based on multiple sensors. The system performs initial localization of the human body with pyroelectric infrared sensors; a servo turns the camera toward the region where the body appears; the camera collects the image information in that region and transfers it to a computer; the computer performs the human-detection computations and controls the servo so that the camera and the mobile platform track the body. The computer matches the collected image information against stored information to determine the identity of the detected person. The system is mainly used to detect whether someone has intruded into an indoor environment and to help an indoor mobile service robot determine the target service person. The pyroelectric sensors control the camera's servo pan-tilt head so that it turns toward the range of human activity, and visual information is then used to detect and track the body. After a body is detected, identification is carried out with AdaBoost and principal component analysis.

Description

Indoor human body detecting and tracking and identification system based on multisensor
Technical field
The present invention relates to the technical field of indoor human body detection, and specifically to a system that detects and tracks a moving human body indoors with a pyroelectric infrared sensor and a monocular camera, and completes identity recognition.
Background technology
With the development of computer vision, human motion analysis has attracted wide attention, and human detection, tracking and identification are important components of vision-based human motion analysis. They have broad application background and economic value in human-computer interaction, video surveillance, intelligent vehicle systems, virtual reality and other fields. Indoor human detection, tracking and identification is one of the important applications: in an indoor interactive environment, the technology can help a mobile service robot determine the target service person, and it can also be used to watch over children, the elderly and patients with disabilities. In the security and monitoring field, indoor human detection, tracking and identification has great practical value: through moving-body detection an indoor environment can be monitored, so that the monitoring system automatically recognizes people and raises an alarm on dangerous targets.
Traditional human detection methods mainly include stereo vision, contour detection, template matching, human body models, gait recognition, wavelet analysis and neural networks, often combining two or more of these. But because a moving body is affected by changes of viewing angle and background, traditional detection methods do not reach the expected accuracy. In recent years, human detection, tracking and identification based on machine learning, targeting good real-time performance and robustness, has developed rapidly. Applying the theory and techniques of machine learning to each stage has largely replaced traditional detection, tracking and identification: detection precision is high, and partial real-time requirements can also be met. Machine-learning-based human detection is therefore widely applied. The recognition stage mainly separates the human body from the target region, either by matching particular human features or by judging whether a pedestrian appears in the target region with methods such as neural networks, SVM (support vector machines) and multilayer perceptrons.
On the other hand, to obtain information about the human target, a current research focus is completing detection and tracking of a moving body with a static monocular camera. In practice, however, the field of view of a monocular camera is limited: if the moving body leaves the camera's angular range, the system can no longer detect it. How to detect and track a moving body indoors in real time by controlling the motion of a monocular camera is therefore still a problem demanding a solution. Further, after a human target is detected, distinguishing its identity is a very important task. The traditional approach runs face detection on the human image and performs identity recognition on the detected region, but it does not say how the system should recognize identity when no face is detected in the human image.
Content of the invention
The main object of the present invention is to propose an indoor human detection and identification system that can sense whether someone has intruded into an indoor environment and help an indoor mobile service robot determine the target service person; it is thus mainly applied to detection in indoor environments with one person or a few people. It mainly solves three technical problems:
1. A static camera cannot determine the room area where a moving body is located; only when the body moves into the camera's angular range can the camera capture the body's information.
2. When the body moves out of the camera's angular field of view, the image information obtained by the camera can no longer be used to track the moving body indoors.
3. When the human picture contains only the side of the body and the system detects no face region in the image information, how to confirm the identity of the target.
An indoor human body detection, tracking and identification system based on multiple sensors: the system performs initial localization of the human body with pyroelectric infrared sensors; a servo turns the camera toward the region where the body appears; the camera collects the image information in that region and transfers it to a computer; the computer performs the human-detection computations and controls the servo so that the camera and the mobile platform track the body. The computer matches the collected image information against stored information to determine the identity of the detected person. The system structure diagram is shown in Figure 1.
The workflow diagram of the inventive method is shown in Fig. 2; the specific workflow is as follows:
S1 Preliminary human detection and initial positioning of the camera
The system detects whether a person is indoors by using pyroelectric infrared sensors, which capture human infrared radiation and convert it into a weak voltage signal. The pyroelectric infrared sensors adopted by the system have a sensing range of 6 meters and a sensing angle of 100 degrees. Four pyroelectric sensors are placed due west, due south, due east and due north of the camera, so that their detection angles cover the full 360-degree indoor range. When a pyroelectric sensor detects a human signal, the camera's servo pan-tilt head turns toward that range. The structure is shown in Figure 3.
S2 Human detection using the image
The system detects the human body in the video sequence with the HOG+SVM method; the steps are as follows:
S2.1 Construction of the sample library
Human samples are collected as positive samples: the system uses pictures from the INRIA static pedestrian test database, as shown in Figure 4, as training positives, and uses the negative samples in the INRIA database, as shown in Figure 5, together with processed pictures of unoccupied indoor environments, as training negatives.
S2.2 Selection of the HOG feature extraction parameters
The method uses the HOG detection functions built into OpenCV, with the corresponding parameters set as follows: the detection window is 64*64, the cell size is 16*16 pixels, the block sliding stride is 16, and the gradient direction is quantized into 9 bins, so the HOG feature dimension of one image is 4*9*3*3=324. L2-Hys is chosen as the block normalization method, the threshold is 0.2, and Gamma correction is applied.
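The dimension arithmetic above can be checked with a minimal sketch (not the patent's code; the function name and the assumption of square windows, square cells and 2*2-cell blocks are ours, inferred from the stated parameters):

```python
def hog_feature_dim(win=64, cell=16, block_cells=2, stride=16, nbins=9):
    """Number of HOG features for a square detection window.

    Assumed layout matching the text: square window, square cells,
    blocks of block_cells x block_cells cells sliding by `stride` pixels,
    `nbins` orientation bins per cell histogram.
    """
    block = cell * block_cells                     # block side in pixels (32)
    blocks_per_axis = (win - block) // stride + 1  # 3 block positions per axis
    cells_per_block = block_cells * block_cells    # 4 cell histograms per block
    return blocks_per_axis * blocks_per_axis * cells_per_block * nbins

# 3*3 blocks * 4 cells * 9 bins = 324, matching 4*9*3*3 in the text
print(hog_feature_dim())  # -> 324
```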
S2.3 SVM training
The HOG features of all positive and negative samples are extracted and given labels: positive samples are labeled 1 and negative samples 0. The HOG features and labels of the positive and negative samples are then all fed into the SVM trainer for training, yielding a human classifier.
S3 Controlling camera rotation to track the human body
As shown in Fig. 6, the system marks the detected body region with a box in the image, detecting once every 25 frames. The difference between the center pixel coordinate of the box and the center pixel coordinate of the whole image is then used to compute the left-right and up-down rotation angles of the servo, ensuring that the body always appears in the camera's angular field of view. On the other hand, camera calibration yields the camera's intrinsic parameters and distortion parameters, so that the actual physical coordinates of the body can be deduced from the pixel coordinates of the box center; the chassis motor is then adjusted so that the camera stays within a given range of the body.
S4 Identity recognition
The identification procedure of the system first cuts the detected body box out of the image as an independent sub-image, and runs face detection on it with a Viola-Jones classifier. The process is approximately as follows:
S4.1 Integral images are used to accelerate the computation of rectangular image regions (and of rectangular regions rotated by 45 degrees). This image structure accelerates the computation of the Haar-like input features, which form the complete input of the AdaBoost classifier. The Haar-like features are shown in Figure 7.
S4.2 Face and non-face classifier nodes are created with the AdaBoost algorithm.
S4.3 The weak classifier nodes are composed into a screening (rejection) cascade, shown in Figure 8. Each node Dj in the figure contains a group of decision trees trained on Haar-like features for face versus non-face. The nodes are arranged from simple to complex, which minimizes the computation spent rejecting simple regions of the image. The first classifier is optimal in the sense that it passes all image regions containing faces while also letting through some regions without faces; the second is a suboptimal classifier with a relatively low rejection rate; and so on by analogy. As long as an image region passes the whole cascade, it is considered to contain a face, and the face box is output.
Because the number of household members is limited, the face sample library is also limited, so identification uses the principal component analysis method PCA: all training data are projected into the PCA subspace, the image to be recognized is projected into the same subspace, and the training projection closest to the projection of the image to be recognized is found. The training data are shown in Fig. 9.
The human-detection method used by the system cannot capture frontal face information every time. If the cut-out body box contains only the side of the head, the back of the head, or only the body below the head, the AdaBoost face-detection step is skipped and identity is distinguished directly with PCA; only the data set needs to be changed, as shown in Figure 10.
Description of the drawings
Fig. 1 system architecture diagrams.
Fig. 2 working-flow figures.
Fig. 3 pyroelectric sensor distributed architecture figures.
Fig. 4 human testing positive sample examples.
Fig. 5 human testing negative sample examples.
The human body block diagram example that Fig. 6 is detected.
Fig. 7 Haar-like features.
The screening type cascade that uses in Fig. 8 Viola-Jones graders.
Fig. 9 recognition of face PCA training dataset examples.
Figure 10 non-face part PCA training dataset examples.
Specific embodiment
In conjunction with Fig. 1 to Fig. 9, the system is described in detail.
Fig. 1 shows the system structure diagram. An Arduino reads the pyroelectric sensor signals and is responsible for initial positioning, turning the camera toward the range of human activity; the camera then collects all visual information in the current view and passes the video sequence to the system's computer. The computer detects the human body in the video with the HOG+SVM algorithm, then adjusts the camera servo and the chassis motor so that the body always appears in the camera's angular field of view. The computer runs AdaBoost face detection on the cut-out body-box picture: if a face is detected, identification is performed with face PCA; if no face is detected, identification is performed directly with whole-body PCA. The reason identification is not always performed directly on the whole body with PCA is that recognizing the face with PCA uses fewer feature dimensions than recognizing the whole body, has a lower computational load, and achieves a higher recognition rate.
Fig. 3 shows the distribution of the pyroelectric sensors on the camera's servo platform. The sensors in the four directions 0°, 90°, 180° and 270° of the camera are numbered 1, 2, 3 and 4 respectively. If only one pyroelectric sensor fires, the camera servo moves toward the direction of that sensor. When two pyroelectric sensors capture a signal, the camera moves to the position midway between their angles; for example, if sensors 1 and 2 capture a signal, the camera moves to the 45° position between them. After the camera's initial position is fixed, signal capture from the pyroelectric sensors is closed; if no human body can be detected in the video information obtained from the camera, initial positioning with the pyroelectric sensors is restarted.
HOG+SVM human detection steps:
The system extracts the HOG features of the human body; the HOG feature extraction process is as follows:
Positive and negative human samples are collected. Positive samples come from the 96*160 human pictures in the data set shown in Figure 4; in use, 16 pixels are removed on all four sides, keeping the central 64*128 body region. Negative samples are cut at random from the indoor pictures containing no human body shown in Fig. 5, also at size 64*128. The more positive and negative samples the better; around 10000 is preferable. All positive and negative samples are given labels: positives are labeled 1 and negatives 0. The HOG descriptors of the positive and negative sample images are then extracted.
Gamma correction is used to normalize the color space of the input image; this suppresses noise interference and also reduces the impact of illumination changes.
The gradient of pixel (x, y) is calculated as follows:
Gx(x, y) = H(x+1, y) − H(x−1, y)
Gy(x, y) = H(x, y+1) − H(x, y−1)
In these formulas Gx(x, y) is the horizontal gradient at pixel (x, y), Gy(x, y) is the vertical gradient, and H(x+1, y), H(x−1, y), H(x, y+1), H(x, y−1) are the pixel values at the corresponding points. The gradient magnitude G(x, y) and gradient direction α(x, y) at pixel (x, y) are respectively:
G(x, y) = sqrt(Gx(x, y)² + Gy(x, y)²)
α(x, y) = arctan(Gy(x, y) / Gx(x, y))
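The central-difference gradients above can be sketched with numpy (a minimal illustration, not the patent's implementation; the function name is ours):

```python
import numpy as np

def pixel_gradients(H):
    """Gx = H(x+1,y) - H(x-1,y), Gy = H(x,y+1) - H(x,y-1),
    plus gradient magnitude and direction, for a 2-D image H."""
    H = H.astype(float)
    Gx = np.zeros_like(H)
    Gy = np.zeros_like(H)
    Gx[:, 1:-1] = H[:, 2:] - H[:, :-2]   # horizontal central differences
    Gy[1:-1, :] = H[2:, :] - H[:-2, :]   # vertical central differences
    G = np.sqrt(Gx**2 + Gy**2)           # gradient magnitude
    alpha = np.arctan2(Gy, Gx)           # gradient direction
    return Gx, Gy, G, alpha

# On a horizontal ramp the horizontal gradient is constant, vertical is zero
img = np.tile(np.arange(5.0), (5, 1))    # each row: 0 1 2 3 4
Gx, Gy, G, alpha = pixel_gradients(img)
print(Gx[2, 2], Gy[2, 2], G[2, 2])       # -> 2.0 0.0 2.0
```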
The gradient orientation histogram is built. A 64*64-pixel detection window is divided into 4*4 cell units, each cell unit being 16*16 pixels, and the gradient direction is quantized into 9 bins. Each pixel in a cell casts a weighted vote for the histogram channel of its direction, the weight computed from the pixel's gradient magnitude. Every 4 adjacent cells (2*2) form a block, giving a total of 9 blocks, so the dimension of the resulting HOG feature vector is 4*9*3*3=324.
Because illumination changes make the range of gradient strength very large, the gradient strength must be normalized. The normalization uses the following formula:
ν ← ν / sqrt(||ν||k² + ε²)
In the formula ν is the not-yet-normalized feature vector, ||νk|| is the k-norm of ν (here k = 2), and ε is a constant with a very small value; for L2-Hys the normalized entries are additionally clipped at 0.2 and the vector is renormalized. The normalized feature vector matrix and the labels are then put into the SVM for training.
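A minimal sketch of the L2-Hys block normalization just described (function name ours; clip value 0.2 as stated earlier in the text):

```python
import numpy as np

def l2_hys(v, clip=0.2, eps=1e-7):
    """L2-Hys normalization: L2-normalize, clip entries at `clip`,
    then L2-normalize again."""
    v = v / np.sqrt(np.sum(v**2) + eps**2)   # v <- v / sqrt(||v||^2 + eps^2)
    v = np.minimum(v, clip)                  # hysteresis clipping at 0.2
    return v / np.sqrt(np.sum(v**2) + eps**2)

block = np.array([10.0, 1.0, 1.0, 1.0])      # one dominant gradient bin
out = l2_hys(block)
# result is unit-length and the dominant bin's relative weight is reduced
print(round(float(np.linalg.norm(out)), 3), round(float(out[0] / out[1]), 2))
# -> 1.0 2.03
```

Clipping caps how much any single strong gradient can dominate a block, which is what makes the descriptor robust to local illumination spikes.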
The generalized optimal separating surface solved by the SVM is turned into a quadratic optimization problem under inequality constraints:
min (1/2)||ω||² + C·Σi ξi
The corresponding constraints are yi(ωᵀxi − b) ≥ 1 − ξi and ξi ≥ 0, i = 1, 2, …, m, m a positive integer, where C is the penalty factor, ξi are the introduced slack variables, xi are the input samples, ω is the weight vector, and b is the threshold. The optimal classification function correspondingly becomes:
f(x) = sgn( Σi αi*·yi·K(xi, x) + b* )
where b* is the classification threshold, αi* is the optimal coefficient for yi, and K(x, xi) = φᵀ(xi)·φ(x) is the kernel function, for which a radial basis function is adopted:
K(x, xi) = exp( −||x − xi||² / (2σ²) )
where σ is the smoothness parameter. Human detection is then run on the original negative-sample images with the trained classifier; the falsely detected rectangles are put back into the SVM as hard examples and retrained, giving the final classifier. The finally detected human body box is shown in Figure 6.
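As a minimal sketch of the decision-function form above (not the system's trained detector; the toy support vectors, coefficients and function names are ours):

```python
import math

def rbf(x, xi, sigma=1.0):
    """RBF kernel K(x, xi) = exp(-||x - xi||^2 / (2 sigma^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, xi))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def svm_decision(x, support_vectors, labels, alphas, b, sigma=1.0):
    """Sign of sum_i alpha_i * y_i * K(x_i, x) + b, with y_i in {-1, +1}."""
    s = sum(a * y * rbf(x, sv, sigma)
            for a, y, sv in zip(alphas, labels, support_vectors)) + b
    return 1 if s >= 0 else -1

# Two toy support vectors: a positive at 0 and a negative at 2
sv, ys, al = [[0.0], [2.0]], [1, -1], [1.0, 1.0]
print(svm_decision([-0.5], sv, ys, al, b=0.0))  # -> 1  (closer to positive SV)
print(svm_decision([2.5], sv, ys, al, b=0.0))   # -> -1 (closer to negative SV)
```

In the hard-example loop the text describes, windows misclassified as human on negative images would be appended to the negative training set and the machine retrained.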
Locating the human body with the camera, and the camera calibration process:
Human detection is run on the video sequence with the final trained classifier once every 25 frames. After a body is detected, the center pixel coordinate G1(x1, y1) of the body rectangle is computed, and the center pixel coordinate G2(x2, y2) of the whole frame is obtained. The servo rotation angles follow:
Δθx = (x1 − x2)·θ / X
Δθy = (y1 − y2)·θ / Y
where X, Y are the maximum horizontal and vertical pixel values of the photo shot by the camera and θ is the camera's wide angle.
Further, according to the pinhole imaging model:
x / f = X / Z, i.e. Z = f·X / x
where f is the focal length of the camera, Z is the distance from the body to the camera, X is the height of the body, and x is the height of the body on the image plane. The distance between the body and the camera can thus be computed, and regulation through the system's chassis servo keeps the distance between the body and the camera at a constant value, which guarantees the clarity of the captured video. In fact, because the center of the camera chip is generally not on the optical axis, and because single pixels on a low-cost imager are rectangular rather than square, the ideal pinhole model cannot be used directly for the computation; the camera intrinsic parameters Cx, Cy, fx, fy are therefore introduced. Cx and Cy are the possible offsets of the optical axis; fx is the product of the physical focal length F of the lens and the imager cell size sx, and fy is the product of F and the cell size sy. Because sx and sy cannot be measured directly during camera calibration, and F cannot be measured independently, only the combined quantities fx = F·sx and fy = F·sy can be computed directly, without taking the camera apart to measure its parts.
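The similar-triangles distance estimate Z = f·X/x in one line (illustrative numbers, not from the patent):

```python
def pinhole_distance(f_px, real_height_m, image_height_px):
    """Z = f * X / x from the pinhole relation x / f = X / Z."""
    return f_px * real_height_m / image_height_px

# A 1.7 m tall person imaged 170 px tall by a 500 px focal-length camera
print(pinhole_distance(500.0, 1.7, 170.0))  # -> 5.0 (meters)
```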
The projection of a point in the world onto the camera can then be written as:
q = M·Q, with q = [x, y, w]ᵀ, M = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]], Q = [X, Y, Z]ᵀ
where M is the intrinsic parameter matrix and q is the pixel coordinate of the projected point, a three-dimensional vector representing the homogeneous coordinates of the two-dimensional projective space. Expanding the formula shows that w = Z, so dividing by w recovers the original definition; Q is the actual physical coordinate.
Because the camera uses a lens (only a lensless camera introduces no distortion), the camera's radial distortion and tangential distortion must be modeled to obtain accurate coordinate values. For radial distortion, the distortion at the center of the imager is 0, and it grows increasingly severe toward the edge. In practice it can be quantitatively described with the first few terms of a Taylor series expansion around r = 0; the radial position of a point on the imager is adjusted as follows:
xco = x·(1 + k1r² + k2r⁴)
yco = y·(1 + k1r² + k2r⁴)
In these formulas (x, y) is the original position of the distorted point on the imager, (xco, yco) is the new corrected position, and k1 and k2 are the parameters of the first and second terms of the Taylor expansion. Tangential distortion is described with two extra parameters p1 and p2, as follows:
xco = x + [2p1y + p2(r² + 2x²)]
yco = y + [p1(r² + 2y²) + 2p2x]
There are therefore four distortion parameters in total: k1, k2, p1, p2. Camera calibration is performed for the camera intrinsic parameters and the distortion parameters, obtaining the 4 camera intrinsics (fx, fy, cx, cy) and 4 distortion parameters: two radial (k1, k2) and two tangential (p1, p2). After all parameters are obtained, the actual physical coordinates of the body can be computed from its pixel coordinates in the image, and the chassis motor is adjusted to keep an appropriate distance between the body and the camera.
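The combined correction, written exactly as the formulas printed in this text (radial scaling plus the tangential terms above; function name ours):

```python
def correct_point(x, y, k1, k2, p1, p2):
    """Apply the radial + tangential correction as printed in the text:
    x_co = x(1 + k1 r^2 + k2 r^4) + 2 p1 y + p2 (r^2 + 2 x^2), etc."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_co = x * radial + (2 * p1 * y + p2 * (r2 + 2 * x * x))
    y_co = y * radial + (p1 * (r2 + 2 * y * y) + 2 * p2 * x)
    return x_co, y_co

# All four parameters zero: the point is unchanged
print(correct_point(0.3, 0.4, 0, 0, 0, 0))    # -> (0.3, 0.4)
# Pure radial term: r^2 = 0.25, so both coordinates scale by 1.025
print(correct_point(0.3, 0.4, 0.1, 0, 0, 0))  # -> (0.3075, 0.41)
```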
The system uses an 8*6 black-and-white chessboard as the calibration reference, and assumes the camera has no distortion while solving for the calibration parameters. For each chessboard view a homography matrix H is obtained; written in column-vector form, H = [h1 h2 h3], where each hi is a 3*1 vector. H equals the camera intrinsic matrix M multiplied by the combination of the first two rotation columns r1, r2 and the translation vector t, together with a scale factor s:
H = [h1 h2 h3] = s·M·[r1 r2 t]
Decomposing the equation gives:
r1 = λM⁻¹h1
r2 = λM⁻¹h2
t = λM⁻¹h3
where λ = 1/s. The rotation vectors are mutually orthogonal by construction, so r1 and r2 are orthogonal, and:
r1ᵀr2 = 0
For any matrix A and vector b, (Ab)ᵀ = bᵀAᵀ; substituting gives the first constraint:
h1ᵀ(M⁻¹)ᵀM⁻¹h2 = 0
The rotation vectors are known to have equal length, i.e. ||r1|| = ||r2||, which gives the second constraint:
h1ᵀ(M⁻¹)ᵀM⁻¹h1 = h2ᵀ(M⁻¹)ᵀM⁻¹h2
Let B = (M⁻¹)ᵀM⁻¹; expanding gives the general form of B in closed solution:
B = [[B11, B12, B13], [B12, B22, B23], [B13, B23, B33]]
  = [[1/fx², 0, −cx/fx²], [0, 1/fy², −cy/fy²], [−cx/fx², −cy/fy², cx²/fx² + cy²/fy² + 1]]
Because B is symmetric, it can be written as the dot product of a 6-element vector. Rearranging the elements of B gives a new vector b = [B11, B12, B22, B13, B23, B33]ᵀ, and then:
hiᵀBhj = vijᵀb
vij = [hi1hj1, hi1hj2 + hi2hj1, hi2hj2, hi3hj1 + hi1hj3, hi3hj2 + hi2hj3, hi3hj3]ᵀ
The two constraints can then be written as:
v12ᵀb = 0 and (v11 − v22)ᵀb = 0
Using K chessboard images at different angles and stacking these equations gives:
V·b = 0
where V is a 2K*6 matrix. The camera intrinsics can be obtained directly from the closed solution of the B matrix:
fx = sqrt(λ/B11)
fy = sqrt(λB11/(B11B22 − B12²))
cx = −B13fx²/λ
cy = (B12B13 − B11B23)/(B11B22 − B12²)
where λ = B33 − (B13² + cy(B12B13 − B11B23))/B11. The extrinsic parameters can then be computed from the homography condition:
r1 = λM⁻¹h1
r2 = λM⁻¹h2
r3 = r1 × r2
t = λM⁻¹h3
where λ = 1/||M⁻¹h1|| is determined by the orthogonality condition. The positions of the points perceived on the image are incorrect because of distortion. If the pinhole model were perfect, let (xp, yp) be the ideal point position and (xd, yd) the distorted position; then:
[xp ; yp] = (1 + k1r² + k2r⁴)·[xd ; yd] + [2p1yd + p2(r² + 2xd²) ; p1(r² + 2yd²) + 2p2xd]
With this substitution, the calibration results obtained without distortion can be corrected. After the intrinsic and extrinsic parameters are re-estimated, this large set of equations yields the distortion parameters. Once all camera parameters are obtained, the physical coordinates of the body's position can be computed from its pixel coordinates, and the chassis motor is adjusted to keep the camera at a certain distance from the body.
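The two homography constraints can be verified numerically with synthetic intrinsics and a synthetic pose (a sketch of the algebra only, with made-up parameter values; `v_ij` packs b = [B11, B12, B22, B13, B23, B33] as above):

```python
import numpy as np

def v_ij(H, i, j):
    """Row vector v_ij such that h_i^T B h_j = v_ij . b for symmetric B
    packed as b = [B11, B12, B22, B13, B23, B33]."""
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0]*hj[0],
                     hi[0]*hj[1] + hi[1]*hj[0],
                     hi[1]*hj[1],
                     hi[2]*hj[0] + hi[0]*hj[2],
                     hi[2]*hj[1] + hi[1]*hj[2],
                     hi[2]*hj[2]])

# Synthetic intrinsics and a pose with orthonormal r1, r2
fx, fy, cx, cy = 800.0, 820.0, 310.0, 245.0
M = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])
r1 = np.array([1.0, 0.0, 0.0])
r2 = np.array([0.0, np.cos(0.3), np.sin(0.3)])
t = np.array([0.2, -0.1, 2.0])
H = M @ np.column_stack([r1, r2, t])      # homography (up to scale)

Minv = np.linalg.inv(M)
B = Minv.T @ Minv
b = np.array([B[0, 0], B[0, 1], B[1, 1], B[0, 2], B[1, 2], B[2, 2]])

# Zhang's two constraints: v12 . b = 0 and (v11 - v22) . b = 0
print(abs(v_ij(H, 0, 1) @ b) < 1e-9)                       # -> True
print(abs((v_ij(H, 0, 0) - v_ij(H, 1, 1)) @ b) < 1e-9)     # -> True
```

Stacking such rows for K views gives the 2K*6 system V·b = 0 from which b, and then the intrinsics, are recovered.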
Face detection with the AdaBoost algorithm:
The system uses the Haar-like features shown in Figure 7 as the key features for judging faces. The feature templates used in computing Haar-like features are combinations of simple rectangles, made up of two or more congruent rectangles; the templates contain both black and white rectangles. The computation uses the integral image method: for a point A(x, y) in the image with gray value I(x, y), letting (x′, y′) range over the points not beyond (x, y), its integral image ii(x, y) is computed as:
ii(x, y) = Σx′≤x, y′≤y I(x′, y′)
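The integral image and the resulting O(1) rectangle sums can be sketched as follows (function names ours):

```python
import numpy as np

def integral_image(I):
    """ii(x, y) = sum of I(x', y') for x' <= x, y' <= y."""
    return I.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum over an h x w rectangle in O(1) using four corner lookups."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

I = np.arange(16).reshape(4, 4)
ii = integral_image(I)
print(rect_sum(ii, 1, 1, 2, 2), I[1:3, 1:3].sum())  # both 5+6+9+10 = 30
```

A Haar-like feature value is then just the difference of two or more such rectangle sums, so each feature costs a handful of lookups regardless of its size.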
After the Haar-like features of the sample images are computed, the required features must be screened. Each Haar-like feature corresponds to a classifier; the optimal Haar-like features are picked from the large set of feature values to construct the face-detection classifier. The weak classifiers are constructed as follows:
Given a series of face samples (x1, y1), (x2, y2), …, (xi, yi), …, (xn, yn), where xi is the i-th sample and yi is the sample label, with yi = 1 denoting a face sample and yi = 0 a non-face sample, the weights are initialized with the formula:
D1(i) = 1/(2m) for face samples and D1(i) = 1/(2n) for non-face samples
where Dt(i) is the weight of the i-th sample in the t-th round and m, n are the numbers of positive and negative samples. The system uses the ORL face database as samples. After the weights are obtained they are normalized:
qt(i) = Dt(i) / Σj Dt(j)
After weight normalization the Haar-like feature values of all samples are computed and sorted in ascending order, recording the order index I. The total weight of all face samples S+, the total weight of all non-face samples S−, the face sample weight before the current element T+, and the non-face sample weight before the current element T− are computed, together with the two values J = T+ + (S− − T−) and K = T− + (S+ − T+). Letting E = min(J, K), the element corresponding to min(E) is found through I, and the feature value f(x) of that element is the required initial threshold f′θ; the polarity pj is 1 when E = J and −1 otherwise. For each Haar-like feature f there is a classifier h(x, f, p, θ) corresponding to it, where θ is the threshold and p the polarity; with the initial threshold f′θ, the weak classifier formula is:
h(x, f, p, θ) = 1 if p·f(x) < p·θ, else 0
The samples are trained and the weighted error rate of each classifier is computed:
ξt = Σi qi·|h(xi, f, p, θ) − yi|
The classifier with the minimum weighted error is the optimal classifier ht obtained for the current feature, with optimal threshold fθ; qi are the normalized weights. All parameters of the weak classifier are thus solved.
The Haar-like feature and threshold fθ of the trained weak classifier are used to traverse the sample space: the weights of misclassified samples stay unchanged, while the weights of correctly classified samples shrink, so that the weights of misclassified samples gain relative proportion. The weight update strategy is:
Dt+1(i) = Dt(i)·βt^(1−ei), with βt = ξt/(1 − ξt)
where ei = 0 if sample i is classified correctly and ei = 1 otherwise, ξt is the weighted error rate, and Dt+1(i) is the updated weight.
New classifiers are built from the samples with updated weights; after T such iterations the T weak classifiers with the smallest weighted error rates have been generated, and they are cascaded into a strong classifier:
H(x) = 1 if Σ_{t=1}^{T} a_t h(t) ≥ (1/2) Σ_{t=1}^{T} a_t, and 0 otherwise
where a_t = log[(1 − ξ_t)/ξ_t] is the coefficient of h(t) and ξ_t is the weighted error rate.
The system detects at multiple scales with a 20*20 detection window and a scale factor of 1.2, and merges windows across scales under the condition that the number of detection windows N > 4 and the circle-center radius R < 5.
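The cascaded vote above can be sketched in a few lines; this is an illustrative stand-in (the function name is hypothetical), taking the weak-classifier outputs h(t) and their weighted error rates ξ_t and applying the a_t = log[(1 − ξ_t)/ξ_t] weighting with the half-sum threshold:

```python
import math

def strong_classify(weak_outputs, error_rates):
    """Boosted strong-classifier decision:
    H(x) = 1 iff sum_t a_t * h_t(x) >= 0.5 * sum_t a_t,
    with a_t = log((1 - xi_t) / xi_t)."""
    a = [math.log((1.0 - e) / e) for e in error_rates]
    score = sum(at * h for at, h in zip(a, weak_outputs))
    return 1 if score >= 0.5 * sum(a) else 0
```

Weak classifiers with lower error rates ξ_t receive larger coefficients a_t, so their votes dominate the decision.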
Identity recognition is completed using principal component analysis (PCA):
For random features χ = {χ_1, χ_2, ..., χ_n}, where χ_i ∈ R^d, first compute the mean vector μ:
μ = (1/n) Σ_{i=1}^{n} χ_i
where n is the number of features; then compute the covariance matrix S:
S = (1/n) Σ_{i=1}^{n} (χ_i − μ)(χ_i − μ)^T
Compute the eigenvalues λ_i and corresponding eigenvectors ν_i:
S ν_i = λ_i ν_i, i = 1, 2, ..., n
The eigenvalues are sorted in decreasing order, with the eigenvectors kept in the same order; the K principal components are the eigenvectors corresponding to the K largest eigenvalues. The K principal components of χ are:
y = ω^T (χ − μ)
where ω = (ν_1, ν_2, ..., ν_k).
All samples are given labels: member A's samples are all labeled A, member B's are all labeled B, and so on. Fifteen principal components are retained, and a confidence threshold is initialized. The data set shown in Figure 9 is put into the trainer to obtain, and save, the face eigenvectors and mean vector of each member; the data set shown in Figure 10 is put into the trainer to obtain, and save, the non-face eigenvectors and mean vector of each member. When the Adaboost algorithm detects a face, the cropped face photo is compared against the saved face eigenvectors to verify identity; when no face is detected, the picture of the whole human body is compared against the saved non-face eigenvectors to verify identity.
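The PCA train/project/match pipeline above can be sketched with numpy; this is a minimal illustration (function names hypothetical), assuming a nearest-neighbor match between the projected probe and saved per-member projections:

```python
import numpy as np

def pca_train(X, k):
    """Fit PCA on row-stacked samples X (n x d): mean vector mu and the k
    eigenvectors of the covariance matrix S with the largest eigenvalues,
    mirroring mu, S, and S v_i = lambda_i v_i above."""
    mu = X.mean(axis=0)
    Xc = X - mu
    S = Xc.T @ Xc / X.shape[0]        # covariance matrix
    vals, vecs = np.linalg.eigh(S)    # eigenvalues in ascending order
    omega = vecs[:, ::-1][:, :k]      # top-k eigenvectors, descending order
    return mu, omega

def pca_project(x, mu, omega):
    """y = omega^T (x - mu)."""
    return omega.T @ (x - mu)

def identify(x, mu, omega, gallery):
    """Nearest saved projection wins; gallery maps member label -> projection."""
    y = pca_project(x, mu, omega)
    return min(gallery, key=lambda lbl: np.linalg.norm(gallery[lbl] - y))
```

In the patent's setting the gallery would hold the saved eigenvector projections of each member's face (or non-face body) images, with k = 15 components retained.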

Claims (6)

1. An indoor human body detecting, tracking and identity recognition system based on multiple sensors, characterized in that: the system completes the preliminary localization of the human body with pyroelectric infrared sensors; a servo turns the camera toward the range where the human body appears; the camera collects the image information within that range and transfers it to a computer; the computer completes the computations for human detection and controls the servo and mobile platform so that the camera keeps tracking the human body; the computer matches the collected image information with background information so as to determine the identity of the detected person;
Specific workflow is as follows:
S1 Preliminary detection of the human body and initial positioning of the camera
The system detects whether a human body is indoors by using pyroelectric infrared sensors, which capture human infrared radiation and convert it into a weak voltage signal; the sensors adopted by the system have a sensing range of 6 meters and a sensing angle of 100 degrees. Four pyroelectric sensors are placed due west, due south, due east and due north of the camera, so the detection angle covers the full 360-degree indoor range; when a pyroelectric sensor detects a human body signal, the camera's servo pan-tilt head turns toward that range;
S2 Human detection using images
The system detects human bodies in the video sequence with the HOG+SVM method; the steps are as follows:
S2.1 Making the sample library
Human samples are collected as positive samples: the system uses the pictures in the INRIA static pedestrian test database as training positive samples, and uses the negative samples of the INRIA database together with processed pictures of the unoccupied indoor environment as training negative samples;
S2.2 Selection of the HOG feature extraction parameters
The HOG detection functions built into OpenCV are used, with the parameters set as follows: the detection window is 64*64, the cell size is 16*16 pixels, the block sliding step is 16 and the gradient directions are quantized into 9 bins, so the HOG feature dimension of one image is 4*9*3*3 = 324; the block normalization method is L2-Hys with a threshold of 0.2, and Gamma correction is applied;
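The 324-dimension figure can be checked arithmetically; a small helper (hypothetical name) under the stated parameters — a 64*64 window, 16*16 cells, 2*2-cell (32-pixel) blocks slid 16 pixels, and 9 bins:

```python
def hog_dims(win, block, stride, cell, nbins):
    """HOG descriptor length = (blocks per side)^2 * cells per block * bins.
    With win=64, block=32, stride=16, cell=16, nbins=9 this is
    3*3 blocks * 4 cells * 9 bins = 324, matching 4*9*3*3 in the text."""
    blocks_per_side = (win - block) // stride + 1
    cells_per_block = (block // cell) ** 2
    return blocks_per_side ** 2 * cells_per_block * nbins
```

The same arithmetic is what OpenCV's `HOGDescriptor` performs internally when it reports its descriptor size.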
S2.3 SVM training
The HOG features of all positive and negative samples are extracted and the samples are labeled, the positive samples as 1 and the negative samples as 0; the HOG features and labels are then all input into the SVM trainer for training, which yields a human body classifier;
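As a rough stand-in for the SVM trainer that the HOG features are fed into, here is a tiny subgradient-descent linear SVM (names hypothetical; labels are taken as +1/−1 rather than the 1/0 labels used in the text, and a production system would use a mature SVM library instead):

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, epochs=200, lr=0.01):
    """Minimise (1/2)||w||^2 + C * hinge loss by per-sample subgradient steps.
    X: (n, d) feature rows (e.g. HOG descriptors); y: labels in {+1, -1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                       # hinge-loss subgradient step
                w = (1 - lr) * w + lr * C * y[i] * X[i]
                b += lr * C * y[i]
            else:                                # only the regulariser acts
                w = (1 - lr) * w
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)
```

Training on the labeled HOG feature matrix and then sliding the resulting linear scorer over an image is the essence of the HOG+SVM detector described here.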
S3 Controlling camera rotation to track the human body
The system marks the detected human body's range with a bounding box in the image, detecting once every 25 frames; the difference between the box's center pixel coordinate and the center pixel coordinate of the whole image is then used to compute the left-right and up-down rotation angles of the servo, so that the human body always appears within the camera's field of view. On the other hand, camera calibration yields the camera's intrinsic and distortion parameters, so the actual physical coordinates of the human body can be deduced from the pixel coordinates of the bounding-box center, and the chassis motor is adjusted so that the camera stays within a given distance of the human body;
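The pan/tilt correction from the pixel offset between box center and frame center can be sketched as follows (function name hypothetical; this mirrors the θ = (x_1 − x_2)/X · θ_fov proportion used later in claim 6):

```python
def servo_angles(box_center, frame_center, frame_size, fov):
    """Pan and tilt offsets (degrees) proportional to the pixel offset of the
    detection-box centre from the frame centre, scaled by the field of view."""
    (x1, y1), (x2, y2) = box_center, frame_center
    X, Y = frame_size          # maximum horizontal / vertical pixel counts
    fov_x, fov_y = fov         # camera wide angles in the two directions
    return (x1 - x2) / X * fov_x, (y1 - y2) / Y * fov_y
```

A detection box centered right of and below the frame center yields positive pan and tilt offsets, which the servo controller would apply to re-center the person.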
S4 Identity recognition
The identity recognition procedure of the system first crops the detected human body box into an independent image, then performs face detection on that image with a Viola-Jones classifier, roughly as follows:
S4.1 An integral image is used to accelerate the computation of the values of rectangular image regions, or of rectangular regions rotated by 45 degrees; this image structure accelerates the computation of the Haar-like input features, which are fed in full to the Adaboost classifier;
S4.2 Face and non-face classifier nodes are created with the Adaboost algorithm;
S4.3 The weak classifier nodes are composed, node by node, into a screening-type cascade; each node D_j contains a group of decision trees trained on Haar-like features to decide face versus non-face; the nodes are arranged from simple to complex, which minimizes the computation spent rejecting simple regions of the image; the first classifier is optimal, letting the image regions containing faces pass while also letting some non-face regions through; the second classifier is a suboptimal classifier with a relatively low rejection rate; and so on by analogy: as long as an image region has passed the whole cascade, it is considered to contain a face, and the face box is cropped out.
2. The indoor human body detecting, tracking and identity recognition system based on multiple sensors according to claim 1, characterized in that: because the number of household residents is limited, the face sample library is also limited, so principal component analysis (PCA) is used for identity recognition: all training data are projected into the PCA subspace, the image to be recognized is projected into the PCA subspace, and the projected training vector closest to the projected vector of the image to be recognized is found.
3. The indoor human body detecting, tracking and identity recognition system based on multiple sensors according to claim 1, characterized in that: the human detection method used by the system cannot capture frontal face information every time; if the cropped human body box contains only the side or back of the head, or only the part of the body below the head, the Adaboost face-detection step is skipped and identity is discriminated directly with the PCA method, with only the data set changed.
4. The indoor human body detecting, tracking and identity recognition system based on multiple sensors according to claim 1, characterized in that: an Arduino reads the pyroelectric sensor signals and is responsible for completing the initial localization, turning the camera toward the range of human activity; the camera then collects all visual information within the current view and passes the video sequence to the system's computer; the computer detects the human body in the video with the HOG+SVM algorithm, then adjusts the camera servo and chassis motor so that the human body always appears within the camera's field of view; the computer runs Adaboost face detection on the cropped human body box, performing identity recognition with face PCA if a face is detected, and directly with whole-body PCA if no face is detected; face PCA is used rather than whole-body PCA whenever possible because identifying the face involves fewer feature dimensions, lower computation and a higher recognition rate than identifying the whole body.
5. The indoor human body detecting, tracking and identity recognition system based on multiple sensors according to claim 1, characterized in that: the pyroelectric sensors in the 0°, 90°, 180° and 270° directions of the camera are numbered 1, 2, 3 and 4 respectively; if only one pyroelectric sensor fires, the camera servo moves toward the direction of that sensor; when two pyroelectric sensors capture a signal, the camera moves to the position midway between their angles — for example, if sensors 1 and 2 capture a signal, the camera moves to the 45° position; after the camera's initial position is fixed, the signal capture of the pyroelectric sensors is closed, and the pyroelectric initial localization is restarted only when no human body can be detected in the video information obtained from the camera.
6. The indoor human body detecting, tracking and identity recognition system based on multiple sensors according to claim 1, characterized by the following HOG+SVM human detection steps:
The system extracts the HOG features of the human body; the HOG feature extraction process is as follows:
Positive and negative human samples are collected: positive samples come from the 96*160 human body pictures in the data set, from which 16 pixels are trimmed from each edge to intercept the central 64*128 human body; negative samples are cut at random from indoor pictures containing no human body, likewise 64*128 in size; the more positive and negative samples the better, preferably reaching about 10000 each. All positive and negative samples are given labels, 1 for positive and 0 for negative, and the HOG descriptors of the positive and negative sample images are then extracted;
Gamma correction is used to normalize the color space of the input image, which not only suppresses the interference of noise but also reduces the influence of illumination changes;
The gradient of pixel (x, y) is calculated, its formula is as follows:
Gx(x, y)=H (x+1, y)-H (x-1, y)
Gy(x, y)=H (x, y+1)-H (x, y-1)
where G_x(x, y) is the horizontal gradient at pixel (x, y) and G_y(x, y) the vertical gradient, and H(x+1, y), H(x−1, y), H(x, y+1) and H(x, y−1) are the pixel values at the corresponding points. The gradient magnitude G(x, y) and gradient direction α(x, y) at pixel (x, y) are:
G(x, y) = sqrt( G_x(x, y)^2 + G_y(x, y)^2 )
α(x, y) = arctan( G_y(x, y) / G_x(x, y) )
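The magnitude and direction formulas can be sketched with numpy (helper name hypothetical; here axis 0 of the array is taken as the x direction, and `arctan2` is used to keep the direction well defined when G_x = 0):

```python
import numpy as np

def pixel_gradients(H):
    """Central-difference gradients on the interior pixels, following
    Gx = H(x+1, y) - H(x-1, y) and Gy = H(x, y+1) - H(x, y-1),
    then the polar form G = sqrt(Gx^2 + Gy^2), alpha = atan2(Gy, Gx)."""
    Gx = H[2:, 1:-1] - H[:-2, 1:-1]   # difference along axis 0 (x here)
    Gy = H[1:-1, 2:] - H[1:-1, :-2]   # difference along axis 1 (y here)
    mag = np.sqrt(Gx ** 2 + Gy ** 2)
    ang = np.arctan2(Gy, Gx)          # gradient direction alpha(x, y)
    return mag, ang
```

These per-pixel magnitudes and directions are exactly what the orientation histogram in the next step accumulates.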
A gradient orientation histogram is built: the 64*64-pixel detection window is divided into 4*4 cell units, each cell being 16*16 pixels, with the gradient direction quantized into 9 bins; each pixel in a cell unit casts a weighted vote for a direction histogram channel, the weight computed from that pixel's gradient magnitude. Every 4 adjacent cells form one block, giving 9 blocks in total, so the dimension of the resulting HOG feature vector is 4*9*3*3 = 324;
Since illumination changes make the gradient intensity vary over a wide range, the gradient intensity must be normalized. The following normalization (L2-Hys) is used:
ν ← ν / sqrt( ||ν||_2^2 + ε^2 )
where ν is the vector before normalization, ||ν||_2 is its L2 norm and ε is a constant with a very small value. The normalized feature vector matrix and the labels are then put into the SVM for training;
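The L2-Hys scheme is the above L2 normalization followed by clipping at the 0.2 threshold quoted in claim 1 and a second L2 normalization; a small sketch (function name hypothetical):

```python
import numpy as np

def l2_hys(v, eps=1e-5, clip=0.2):
    """L2-Hys block normalisation: v <- v / sqrt(||v||_2^2 + eps^2),
    clip every entry at 0.2, then L2-normalise again."""
    v = v / np.sqrt(np.sum(v ** 2) + eps ** 2)
    v = np.minimum(v, clip)
    return v / np.sqrt(np.sum(v ** 2) + eps ** 2)
```

The clipping step limits the influence of any single dominant gradient before the final renormalization, which is what makes L2-Hys more robust than plain L2.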
The generalized optimal separating hyperplane solved by the SVM is converted into a quadratic optimization problem under inequality constraints:
min (1/2)||ω||^2 + C Σ_{i=1}^{m} ξ_i
with the corresponding constraints y_i(ω^T x_i − b) ≥ 1 − ξ_i and ξ_i ≥ 0, i = 1, 2, ..., m, where m is a positive integer, C is the penalty factor, ξ_i are the introduced slack variables, x_i are the input samples, ω is the weight vector and b is the threshold. The optimal classification function correspondingly becomes:
g(x) = sign{ Σ_{i=1}^{m} α_i* y_i [φ^T(x_i) φ(x)] + b* }
where b* is the classification threshold and α_i* are the optimal multipliers; φ^T(x_i)φ(x) is the kernel function K(x, x_i) = φ^T(x_i)φ(x), for which the radial basis function is adopted:
K(x, x_i) = exp( −||x − x_i||^2 / (2σ^2) )
where σ is the smoothness parameter. Human detection is then performed on the original negative-sample images with the trained classifier, and the falsely detected rectangles are put back into the SVM as hard examples and retrained to obtain the final classifier;
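The radial basis kernel named above, with its smoothness parameter σ, is a one-liner (helper name hypothetical):

```python
import numpy as np

def rbf_kernel(x, xi, sigma=1.0):
    """Gaussian RBF kernel K(x, xi) = exp(-||x - xi||^2 / (2 sigma^2));
    equals 1 when x == xi and decays with squared distance."""
    d = np.asarray(x, dtype=float) - np.asarray(xi, dtype=float)
    return float(np.exp(-(d @ d) / (2.0 * sigma * sigma)))
```

Smaller σ makes the kernel more local (faster decay), which in the SVM trades smoothness of the decision surface against sensitivity to individual samples.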
Camera localization of the human body, and camera calibration:
The final trained classifier performs human detection on the video sequence once every 25 frames. When a human body is detected, the center pixel coordinate G_1(x_1, y_1) of the human body rectangle is computed, then the center pixel coordinate G_2(x_2, y_2) of the whole frame is obtained, and according to the formulas:
θ_1 = (x_1 − x_2)/X · θ
θ_2 = (y_1 − y_2)/Y · θ
where X, Y are the maximum horizontal and vertical pixel counts of the photos shot by the camera and θ is the camera's wide angle;
Further, according to the pinhole imaging model:
x = f X / Z
where f is the focal length of the camera, Z is the distance from the human body to the camera, X is the height of the human body and x is the height of the human body on the image plane, the distance between the human body and the camera can be computed; the regulation of the system's chassis servo then keeps the distance between the human body and the camera at a fixed value, which guarantees the clarity of the video shot by the camera. In fact, because the center of the camera chip is generally not on the optical axis, and because a single pixel on a low-end imager is rectangular rather than square, the ideal pinhole model cannot be used directly, so the camera intrinsic parameters C_x, C_y, f_x, f_y are introduced: C_x and C_y account for the possible offset of the optical axis; f_x is the product of the physical focal length F of the lens and the per-unit size s_x of the imager, and f_y is the product of F and s_y. Since s_x and s_y cannot be measured directly during camera calibration without dismantling the camera, only the combined quantities f_x = F s_x and f_y = F s_y can be computed directly. The projection of a point in the world onto the camera can thus be written as:
q = M Q, with q = [x, y, w]^T, M = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]], Q = [X, Y, Z]^T
where M is the intrinsic parameter matrix, q is the pixel coordinate of the projected point — a three-dimensional vector holding the homogeneous coordinates of the two-dimensional projective space — and Q is the actual physical coordinate; expanding the formula shows that w = Z, and dividing by w recovers the original definition;
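The q = MQ projection and the division by w (= Z) can be sketched directly (function name hypothetical):

```python
import numpy as np

def project(Q, fx, fy, cx, cy):
    """Project a world point Q = [X, Y, Z] through the intrinsic matrix
    M = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] and divide by w = Z to
    recover the pixel coordinates (fx*X/Z + cx, fy*Y/Z + cy)."""
    M = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
    q = M @ np.asarray(Q, dtype=float)
    return q[:2] / q[2]               # homogeneous divide: w equals Z
```

Running this inverse — solving for Z given a known human height — is what lets the system estimate the person's distance from the bounding-box pixels.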
Because the video camera uses a lens (only a lensless camera would introduce no distortion), the radial and tangential distortions of the camera must be modeled to obtain accurate coordinate values. For radial distortion, the distortion at the center of the imager is 0 and grows increasingly severe toward the edge; in practice it can be quantitatively described by the first few terms of the Taylor series expansion around r = 0, and the radial position of a point on the imager is adjusted as follows:
x_co = x(1 + k_1 r^2 + k_2 r^4)
y_co = y(1 + k_1 r^2 + k_2 r^4)
where (x, y) is the original position of the distorted point on the imager, (x_co, y_co) is the new position after correction, and k_1 and k_2 are the parameters of the first and second terms of the Taylor series expansion. Tangential distortion is described with two extra parameters p_1 and p_2, whose formulas are:
x_co = x + [2 p_1 x y + p_2(r^2 + 2 x^2)]
y_co = y + [p_1(r^2 + 2 y^2) + 2 p_2 x y]
There are therefore four distortion parameters in total: k_1, k_2, p_1, p_2. Camera calibration is carried out for the camera intrinsic parameters and the distortion parameters, yielding 4 camera intrinsic parameters (f_x, f_y, c_x, c_y) and 4 distortion parameters — two radial (k_1, k_2) and two tangential (p_1, p_2). After all parameters are obtained, the actual physical coordinates of the human body can be computed from its pixel coordinates in the image, and the system chassis motor is adjusted to keep the human body at an appropriate distance from the camera;
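The combined four-parameter model above can be sketched as one function (name hypothetical; it operates on normalized image coordinates, and with all four parameters zero it reduces to the identity):

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) lens distortion to a
    normalised image point (x, y), per the two formula groups above."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

Calibration fits k_1, k_2, p_1, p_2 so that applying this model to ideal projections reproduces the observed (distorted) chessboard corners.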
The system uses an 8*6 black-and-white chessboard as the calibration reference, and assumes the camera is distortion-free while the calibration parameters are solved. For each chessboard view a homography matrix H is obtained; written in column-vector form, H = [h_1 h_2 h_3], where each h is a 3*1 vector. H equals the camera intrinsic matrix M multiplied by the combined matrix of the first two rotation columns r_1, r_2 and the translation vector t, up to a scale factor s, i.e.:
H = [h_1 h_2 h_3] = sM[r_1 r_2 t]
Decomposing the equation gives:
r_1 = λ M^{-1} h_1
r_2 = λ M^{-1} h_2
t = λ M^{-1} h_3
where λ = 1/s; the rotation vectors are mutually orthogonal by construction, so r_1 and r_2 are orthogonal, and hence:
r_1^T r_2 = 0
For any vectors a and b, (ab)^T = b^T a^T, which gives the first constraint:
h_1^T (M^{-1})^T M^{-1} h_2 = 0
The rotation vectors are known to have equal length, i.e. ||r_1|| = ||r_2||, which gives the second constraint:
h_1^T (M^{-1})^T M^{-1} h_1 = h_2^T (M^{-1})^T M^{-1} h_2
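Both constraints can be verified numerically on a synthetic homography built as H = sM[r_1 r_2 t] (all values here are illustrative, not from the patent):

```python
import numpy as np

# Synthetic intrinsics, an in-plane rotation, a translation, and a scale s.
M = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
th = 0.3
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.5, -0.2, 2.0])
H = 1.7 * M @ np.column_stack([R[:, 0], R[:, 1], t])   # H = s M [r1 r2 t]

# B = (M^-1)^T M^-1; the two constraints should vanish for any s, R, t.
B = np.linalg.inv(M).T @ np.linalg.inv(M)
h1, h2 = H[:, 0], H[:, 1]
c1 = h1 @ B @ h2                 # orthogonality constraint h1^T B h2 = 0
c2 = h1 @ B @ h1 - h2 @ B @ h2   # equal-length constraint
```

Since h_1 = sMr_1 and h_2 = sMr_2, both expressions reduce to s^2 r_1·r_2 and s^2(||r_1||^2 − ||r_2||^2), which are exactly zero for a valid rotation — this is why stacking these constraints over many chessboard views determines B, and hence M.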
Let B = (M^{-1})^T M^{-1}; expanded:
B = [[B_11, B_12, B_13], [B_12, B_22, B_23], [B_13, B_23, B_33]]
The general closed form of matrix B is:
B = [[1/f_x^2, 0, −c_x/f_x^2], [0, 1/f_y^2, −c_y/f_y^2], [−c_x/f_x^2, −c_y/f_y^2, c_x^2/f_x^2 + c_y^2/f_y^2 + 1]]
Because B is symmetric, it can be written as the dot product of a 6-element vector; rearranging the elements of B into a new vector b gives:
h_i^T B h_j = ν_ij^T b, with
ν_ij = [h_i1 h_j1, h_i1 h_j2 + h_i2 h_j1, h_i2 h_j2, h_i3 h_j1 + h_i1 h_j3, h_i3 h_j2 + h_i2 h_j3, h_i3 h_j3]^T and
b = [B_11, B_12, B_22, B_13, B_23, B_33]^T
The two constraints can then be written as:
[ν_12^T ; (ν_11 − ν_22)^T] b = 0
Using K chessboard images at different angles and stacking these equations gives:
V b = 0
where V is a 2K*6 matrix, and the camera intrinsic parameters can be obtained directly from the closed-form solution of the B matrix:
f_x = sqrt( λ / B_11 )
f_y = sqrt( λ B_11 / (B_11 B_22 − B_12^2) )
c_x = −B_13 f_x^2 / λ
c_y = (B_12 B_13 − B_11 B_23) / (B_11 B_22 − B_12^2)
where λ is the scale factor; the extrinsic parameters can then be computed from the homography condition:
r_1 = λ M^{-1} h_1
r_2 = λ M^{-1} h_2
r_3 = r_1 × r_2
t = λ M^{-1} h_3
where λ = 1/||M^{-1} h_1|| is determined by the orthogonality condition. Because of distortion, the positions of the points perceived on the image are inaccurate; if the pinhole model were perfect, letting (x_p, y_p) be the undistorted point position and (x_d, y_d) the distorted position, then:
[x_p ; y_p] = [f_x X_W / Z_W + c_x ; f_y Y_W / Z_W + c_y]
and the distortion-free calibration result is obtained through the following substitution:
[x_p ; y_p] = (1 + k_1 r^2 + k_2 r^4)[x_d ; y_d] + [2 p_1 x_d y_d + p_2(r^2 + 2 x_d^2) ; p_1(r^2 + 2 y_d^2) + 2 p_2 x_d y_d]
With the intrinsic and extrinsic parameters re-estimated, this large set of equations yields the distortion parameters; once all camera parameters are obtained, the physical coordinates of the human body can be computed from its pixel coordinates, and the chassis motor is adjusted to keep the camera at a fixed distance from the human body;
Face detection is completed using the Adaboost algorithm:
The system uses Haar features as the key features for judging faces. The feature templates used in computing Haar features are combinations of simple rectangles, composed of two or more congruent rectangles, with black and white rectangles in each template. The computation uses the integral image method: for a point A(x, y) in the image with gray value I(x, y), with (x′, y′) ranging over the points no greater than (x, y), the integral image ii(x, y) is computed as:
ii(x, y) = Σ_{x′ ≤ x, y′ ≤ y} I(x′, y′)
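The integral image, and the constant-time rectangle sums that make Haar feature evaluation fast, can be sketched with cumulative sums (helper names hypothetical):

```python
import numpy as np

def integral_image(I):
    """ii(x, y) = sum of I(x', y') over all x' <= x and y' <= y."""
    return I.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of the inclusive rectangle [x0..x1] x [y0..y1] from at most four
    integral-image lookups -- constant time regardless of rectangle size."""
    s = ii[x1, y1]
    if x0 > 0:
        s -= ii[x0 - 1, y1]
    if y0 > 0:
        s -= ii[x1, y0 - 1]
    if x0 > 0 and y0 > 0:
        s += ii[x0 - 1, y0 - 1]
    return s
```

A two-rectangle Haar feature is then just the difference of two such `rect_sum` calls over the white and black regions of the template.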
After the Haar features of the sample images are computed, the required features must be screened; each Haar feature corresponds to one classifier, and the optimal Haar features are picked out of the large set of feature values to construct the face-detection classifier. The weak classifiers are constructed as follows:
Given a series of face samples (x_1,y_1),(x_2,y_2),...,(x_i,y_i),...,(x_n,y_n), where x_i is the i-th sample and y_i is its label: y_i = 1 marks a face sample and y_i = 0 a non-face sample. The weights are then initialized with the following formula:
D_t(i) = 1/(2m) if y_i = 1, and D_t(i) = 1/(2n) if y_i = 0
where D_t(i) is the weight of the i-th sample in the t-th iteration and m, n are the numbers of positive and negative samples; the system uses the ORL face database as its sample set. The weights are normalized to obtain q_t(i):
q_t(i) = D_t(i) / Σ_{i=1}^{n} D_t(i)
After weight normalization the Haar feature values of all samples are computed and sorted in ascending order, recording the original index order I. The total weight S+ of all face samples, the total weight S− of all non-face samples, the weight T+ of the face samples before the current element and the weight T− of the non-face samples before the current element are accumulated, and two parameters are computed: J = T+ + (S− − T−) and K = T− + (S+ − T+). Letting E = min(J, K), the element achieving min(E) is located, and the feature value f(x) recovered through I for that element is the required initial threshold f′_θ; the direction vector p_j takes the value 1 when the minimum is attained by J, and −1 otherwise. Each Haar feature f has a corresponding classifier h(x, f, p, θ), where θ is the angle of the direction vector. With the initial threshold f′_θ, the weak classifier formula is:
h_j(x) = 1 if p_j f_j(x) ≤ p_j f_θ, and 0 otherwise
Each classifier is trained on the samples and its weighted error rate is computed:
ξ_j = Σ_i q_i |h(x_i, f, p, θ) − y_i|
The classifier with the minimum weighted error rate is the optimal classifier h_t for the current feature, together with its optimal threshold f_θ; q_i are the normalized weights. All parameters of the weak classifier are thus solved;
The trained weak classifier, with its corresponding Haar feature and threshold f_θ, traverses the sample space: the weights of misclassified samples are kept unchanged while those of correctly classified samples are decreased, so the relative proportion of the misclassified weights grows. The weight-update strategy is:
D_{t+1}(i) = D_t(i) · ξ_t/(1 − ξ_t) if sample i is classified correctly, and D_{t+1}(i) = D_t(i) if it is misclassified
where ξ_t is the weighted error rate and D_{t+1}(i) is the updated weight;
New classifiers are built from the samples with updated weights; after T such iterations the T weak classifiers with the smallest weighted error rates have been generated, and they are cascaded into a strong classifier:
H(x) = 1 if Σ_{t=1}^{T} a_t h(t) ≥ (1/2) Σ_{t=1}^{T} a_t, and 0 otherwise
where a_t = log[(1 − ξ_t)/ξ_t] is the coefficient of h(t) and ξ_t is the weighted error rate;
The system detects at multiple scales with a 20*20 detection window and a scale factor of 1.2, and merges windows across scales under the condition that the number of detection windows N > 4 and the circle-center radius R < 5;
Identity recognition is completed using principal component analysis (PCA):
For random features χ = {χ_1, χ_2, ..., χ_n}, where χ_i ∈ R^d, first compute the mean vector μ:
μ = (1/n) Σ_{i=1}^{n} χ_i
where n is the number of features; then compute the covariance matrix S:
S = (1/n) Σ_{i=1}^{n} (χ_i − μ)(χ_i − μ)^T
Compute the eigenvalues λ_i and corresponding eigenvectors ν_i:
S ν_i = λ_i ν_i, i = 1, 2, ..., n
The eigenvalues are sorted in decreasing order, with the eigenvectors kept in the same order; the K principal components are the eigenvectors corresponding to the K largest eigenvalues. The K principal components of χ are:
y = ω^T (χ − μ)
where ω = (ν_1, ν_2, ..., ν_k).
All samples are given labels: member A's samples are all labeled A, member B's are all labeled B, and so on by analogy.
CN201610835988.2A 2016-09-20 2016-09-20 Indoor human body detecting and tracking and identification system based on multisensor Active CN106503615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610835988.2A CN106503615B (en) 2016-09-20 2016-09-20 Indoor human body detecting and tracking and identification system based on multisensor


Publications (2)

Publication Number Publication Date
CN106503615A true CN106503615A (en) 2017-03-15
CN106503615B CN106503615B (en) 2019-10-08

Family

ID=58290726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610835988.2A Active CN106503615B (en) 2016-09-20 2016-09-20 Indoor human body detecting and tracking and identification system based on multisensor

Country Status (1)

Country Link
CN (1) CN106503615B (en)

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951930A (en) * 2017-04-13 2017-07-14 杭州申昊科技股份有限公司 A kind of instrument localization method suitable for Intelligent Mobile Robot
CN107222337A (en) * 2017-05-27 2017-09-29 南京泛和电力自动化有限公司 Encryption communication method and system for photovoltaic generating system
CN107316036A (en) * 2017-06-09 2017-11-03 广州大学 A kind of insect recognition methods based on cascade classifier
CN107506023A (en) * 2017-07-20 2017-12-22 武汉秀宝软件有限公司 A kind of method for tracing and system of metope image infrared ray hot spot
CN107609475A (en) * 2017-08-08 2018-01-19 天津理工大学 Pedestrian detection flase drop based on light-field camera proposes method
CN107644204A (en) * 2017-09-12 2018-01-30 南京凌深信息科技有限公司 A kind of human bioequivalence and tracking for safety-protection system
CN107666597A (en) * 2017-10-12 2018-02-06 安徽特旺网络科技有限公司 A kind of building video monitoring system
CN107666596A (en) * 2017-10-12 2018-02-06 安徽特旺网络科技有限公司 A kind of tracing and monitoring method
CN107679528A (en) * 2017-11-24 2018-02-09 广西师范大学 A kind of pedestrian detection method based on AdaBoost SVM Ensemble Learning Algorithms
CN107703553A (en) * 2017-08-17 2018-02-16 广州隽智智能科技有限公司 A kind of behavior monitoring method and system in place
CN107818339A (en) * 2017-10-18 2018-03-20 桂林电子科技大学 Method for distinguishing is known in a kind of mankind's activity
CN107862713A (en) * 2017-09-22 2018-03-30 贵州电网有限责任公司 Video camera deflection for poll meeting-place detects method for early warning and module in real time
WO2018076786A1 (en) * 2016-10-24 2018-05-03 深圳米乔科技有限公司 Timing reminding method and reminding apparatus for height-adjustable table
CN108460811A (en) * 2018-03-09 2018-08-28 珠海方图智能科技有限公司 Facial image processing method, device and computer equipment
CN108647662A (en) * 2018-05-17 2018-10-12 四川斐讯信息技术有限公司 A kind of method and system of automatic detection face
CN108734083A (en) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 Control method, device, equipment and the storage medium of smart machine
CN108830187A (en) * 2018-05-29 2018-11-16 厦门瑞为信息技术有限公司 A kind of device and method of the quick Identification of Images of indoor scene
CN108885698A (en) * 2018-07-05 2018-11-23 深圳前海达闼云端智能科技有限公司 Face identification method, device and server
CN109064595A (en) * 2018-07-24 2018-12-21 上海闻泰信息技术有限公司 Facial tripper
CN109199398A (en) * 2018-10-09 2019-01-15 芜湖博高光电科技股份有限公司 A kind of recognition of face detection Gernral Check-up device
CN109618096A (en) * 2018-12-19 2019-04-12 浙江工业大学 A kind of automatic follower method of video record
Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236599A (en) * 2007-12-29 2008-08-06 浙江工业大学 Human face recognition detection device based on multi-video camera information integration
CN102043966A (en) * 2010-12-07 2011-05-04 浙江大学 Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation

Cited By (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018076786A1 (en) * 2016-10-24 2018-05-03 深圳米乔科技有限公司 Timing reminding method and reminding apparatus for height-adjustable table
US10872516B2 (en) 2016-10-24 2020-12-22 Shenzhen MiniCute Technology Co. Ltd. Timing reminding method and reminding apparatus for height-adjustable table
CN106951930A (en) * 2017-04-13 2017-07-14 杭州申昊科技股份有限公司 A kind of instrument localization method suitable for Intelligent Mobile Robot
CN107222337A (en) * 2017-05-27 2017-09-29 南京泛和电力自动化有限公司 Encryption communication method and system for photovoltaic generating system
CN107316036A (en) * 2017-06-09 2017-11-03 广州大学 A kind of insect recognition methods based on cascade classifier
CN107316036B (en) * 2017-06-09 2020-10-27 广州大学 Insect pest identification method based on cascade classifier
CN107506023A (en) * 2017-07-20 2017-12-22 武汉秀宝软件有限公司 A kind of method for tracing and system of metope image infrared ray hot spot
CN107609475A (en) * 2017-08-08 2018-01-19 天津理工大学 Pedestrian detection flase drop based on light-field camera proposes method
CN107609475B (en) * 2017-08-08 2020-04-10 天津理工大学 Pedestrian detection false detection extraction method based on light field camera
CN107703553A (en) * 2017-08-17 2018-02-16 广州隽智智能科技有限公司 A kind of behavior monitoring method and system in place
CN107644204A (en) * 2017-09-12 2018-01-30 南京凌深信息科技有限公司 A kind of human bioequivalence and tracking for safety-protection system
CN107644204B (en) * 2017-09-12 2020-11-10 南京凌深信息科技有限公司 Human body identification and tracking method for security system
CN107862713A (en) * 2017-09-22 2018-03-30 贵州电网有限责任公司 Video camera deflection for poll meeting-place detects method for early warning and module in real time
CN107862713B (en) * 2017-09-22 2021-04-06 贵州电网有限责任公司 Camera deflection real-time detection early warning method and module for polling meeting place
CN111034171A (en) * 2017-09-26 2020-04-17 索尼半导体解决方案公司 Information processing system
CN111034171B (en) * 2017-09-26 2022-05-17 索尼半导体解决方案公司 Information processing system
CN107666596A (en) * 2017-10-12 2018-02-06 安徽特旺网络科技有限公司 A kind of tracing and monitoring method
CN107666597A (en) * 2017-10-12 2018-02-06 安徽特旺网络科技有限公司 A kind of building video monitoring system
CN107818339A (en) * 2017-10-18 2018-03-20 桂林电子科技大学 Method for distinguishing is known in a kind of mankind's activity
CN107679528A (en) * 2017-11-24 2018-02-09 广西师范大学 A kind of pedestrian detection method based on AdaBoost SVM Ensemble Learning Algorithms
CN108460811B (en) * 2018-03-09 2022-05-06 珠海方图智能科技有限公司 Face image processing method and device and computer equipment
CN108460811A (en) * 2018-03-09 2018-08-28 珠海方图智能科技有限公司 Facial image processing method, device and computer equipment
CN108734083A (en) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 Control method, device, equipment and the storage medium of smart machine
CN108647662A (en) * 2018-05-17 2018-10-12 四川斐讯信息技术有限公司 A kind of method and system of automatic detection face
CN108830187B (en) * 2018-05-29 2021-07-06 厦门瑞为信息技术有限公司 Device and method for rapidly recognizing portrait of indoor scene
CN108830187A (en) * 2018-05-29 2018-11-16 厦门瑞为信息技术有限公司 A kind of device and method of the quick Identification of Images of indoor scene
CN110653812A (en) * 2018-06-29 2020-01-07 深圳市优必选科技有限公司 Interaction method of robot, robot and device with storage function
CN110653812B (en) * 2018-06-29 2021-06-04 深圳市优必选科技有限公司 Interaction method of robot, robot and device with storage function
CN108885698A (en) * 2018-07-05 2018-11-23 深圳前海达闼云端智能科技有限公司 Face identification method, device and server
CN109064595A (en) * 2018-07-24 2018-12-21 上海闻泰信息技术有限公司 Facial tripper
CN109199398A (en) * 2018-10-09 2019-01-15 芜湖博高光电科技股份有限公司 A kind of recognition of face detection Gernral Check-up device
CN109635698A (en) * 2018-12-04 2019-04-16 杭州中房信息科技有限公司 A kind of crowd's personal safety detection method of renting a house based on SVM algorithm
CN109813434A (en) * 2018-12-19 2019-05-28 厦门赢科光电有限公司 A kind of human body recognition method based on temperature detection, device and terminal device
CN109618096A (en) * 2018-12-19 2019-04-12 浙江工业大学 A kind of automatic follower method of video record
WO2020233000A1 (en) * 2019-05-20 2020-11-26 平安科技(深圳)有限公司 Facial recognition method and apparatus, and computer-readable storage medium
CN110096013A (en) * 2019-05-24 2019-08-06 广东工业大学 A kind of intrusion detection method and device of industrial control system
CN110246169A (en) * 2019-05-30 2019-09-17 华中科技大学 A kind of window adaptive three-dimensional matching process and system based on gradient
CN110414314A (en) * 2019-06-11 2019-11-05 汉腾汽车有限公司 A kind of camera structure carrying out Face tracking and recognition and system
CN110378292A (en) * 2019-07-22 2019-10-25 广州络维建筑信息技术咨询有限公司 Three dimension location system and method
CN110807361B (en) * 2019-09-19 2023-08-08 腾讯科技(深圳)有限公司 Human body identification method, device, computer equipment and storage medium
CN110807361A (en) * 2019-09-19 2020-02-18 腾讯科技(深圳)有限公司 Human body recognition method and device, computer equipment and storage medium
CN111144207A (en) * 2019-11-21 2020-05-12 东南大学 Human body detection and tracking method based on multi-mode information perception
CN111144207B (en) * 2019-11-21 2023-07-07 东南大学 Human body detection and tracking method based on multi-mode information perception
CN110942481B (en) * 2019-12-13 2022-05-20 西南石油大学 Image processing-based vertical jump detection method
CN110942481A (en) * 2019-12-13 2020-03-31 西南石油大学 Image processing-based vertical jump detection method
CN111489803A (en) * 2020-03-31 2020-08-04 重庆金域医学检验所有限公司 Report coding model generation method, system and equipment based on autoregressive model
CN113221606A (en) * 2020-04-27 2021-08-06 南京南瑞信息通信科技有限公司 Face recognition method based on IMS video conference login
CN113221606B (en) * 2020-04-27 2022-08-23 南京南瑞信息通信科技有限公司 Face recognition method based on IMS video conference login
CN111721420A (en) * 2020-04-27 2020-09-29 浙江智物慧云技术有限公司 Semi-supervised artificial intelligence human body detection embedded algorithm based on infrared array time sequence
CN111832542B (en) * 2020-08-15 2024-04-16 武汉易思达科技有限公司 Tri-vision identifying and positioning device
CN111832542A (en) * 2020-08-15 2020-10-27 武汉易思达科技有限公司 Three-eye visual identification and positioning method and device
CN112861607A (en) * 2020-12-29 2021-05-28 湖北航天飞行器研究所 Long-distance laser living body identification method
CN113312953A (en) * 2021-01-05 2021-08-27 武汉大学 Humanoid robot identity identification method and system based on gait recognition
CN113312953B (en) * 2021-01-05 2022-10-04 武汉大学 Humanoid robot identity identification method and system based on gait recognition
CN112784828A (en) * 2021-01-21 2021-05-11 珠海市杰理科技股份有限公司 Image detection method and device based on direction gradient histogram and computer equipment
CN112784828B (en) * 2021-01-21 2022-05-17 珠海市杰理科技股份有限公司 Image detection method and device based on direction gradient histogram and computer equipment
CN112784771A (en) * 2021-01-27 2021-05-11 浙江芯昇电子技术有限公司 Human shape detection method, system and monitoring equipment
CN113111728A (en) * 2021-03-22 2021-07-13 广西电网有限责任公司电力科学研究院 Intelligent identification method and system for power production operation risk in transformer substation
CN113327286A (en) * 2021-05-10 2021-08-31 中国地质大学(武汉) 360-degree omnibearing speaker visual space positioning method
CN113312985A (en) * 2021-05-10 2021-08-27 中国地质大学(武汉) Audio-visual dual-mode 360-degree omnibearing speaker positioning method
CN113192109B (en) * 2021-06-01 2022-01-11 北京海天瑞声科技股份有限公司 Method and device for identifying motion state of object in continuous frames
CN113192109A (en) * 2021-06-01 2021-07-30 北京海天瑞声科技股份有限公司 Method and device for identifying motion state of object in continuous frames

Also Published As

Publication number Publication date
CN106503615B (en) 2019-10-08

Similar Documents

Publication Publication Date Title
CN106503615A (en) Indoor human body detecting and tracking and identification system based on multisensor
CN110956094B (en) RGB-D multi-mode fusion personnel detection method based on asymmetric double-flow network
Davis et al. A two-stage template approach to person detection in thermal imagery
Siagian et al. Biologically inspired mobile robot vision localization
US7587064B2 (en) Active learning system for object fingerprinting
TWI506565B (en) Dynamic object classification
CN109819208A (en) A kind of dense population security monitoring management method based on artificial intelligence dynamic monitoring
CN101661554B (en) Front face human body automatic identity recognition method under long-distance video
CN106295496A (en) Recognition algorithms and equipment
Zin et al. Fusion of infrared and visible images for robust person detection
CN106485735A (en) Human body target recognition and tracking method based on stereovision technique
CN105809088A (en) Vehicle identification method and system
CN103679677B (en) A kind of bimodulus image decision level fusion tracking updating mutually based on model
CN103093274B (en) Method based on the people counting of video
Wang et al. A new depth descriptor for pedestrian detection in RGB-D images
CN101551852A (en) Training system, training method and detection method
US20230008297A1 (en) Bio-security system based on multi-spectral sensing
CN112989889B (en) Gait recognition method based on gesture guidance
CN108520208A (en) Localize face recognition method
Huang et al. Real-time multi-modal people detection and tracking of mobile robots with a RGB-D sensor
De Cubber et al. Human victim detection
Orrite-Uruñuela et al. Counting people by infrared depth sensors
CN107273804A (en) Pedestrian recognition method based on SVMs and depth characteristic
Zou et al. Deep learning-based pavement cracks detection via wireless visible light camera-based network
Medasani et al. Active learning system for object fingerprinting

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant