CN101216885A - Passerby face detection and tracing algorithm based on video - Google Patents


Info

Publication number
CN101216885A
CN101216885A · CNA2008100256116A · CN200810025611A
Authority
CN
China
Prior art keywords
face
pedestrian
algorithm
people
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2008100256116A
Other languages
Chinese (zh)
Inventor
马争鸣 (Ma Zhengming)
丁晓宇 (Ding Xiaoyu)
袁红梅 (Yuan Hongmei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CNA2008100256116A priority Critical patent/CN101216885A/en
Publication of CN101216885A publication Critical patent/CN101216885A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention proposes a video-based pedestrian face detection and tracking algorithm built on moving-object detection and moving-object tracking, and belongs to the technical field of pattern recognition. It comprises two main parts: a pedestrian face detection algorithm based on moving-object detection, and a pedestrian face tracking algorithm based on moving-object tracking. The detection algorithm first locates the pedestrian with motion analysis, then computes the center of gravity of the human body and determines the face region from that center of gravity, and finally detects the face within that region with a skin-color model and template matching. The tracking algorithm tracks the pedestrian's face by tracking the pedestrian, which effectively avoids the disturbances that head swing, rotation, facial expression, occlusion and similar factors cause in direct face tracking.

Description

A video-based pedestrian face detection and tracking algorithm
Technical field
The invention belongs to the technical field of pattern recognition, and specifically relates to a method that uses moving-object detection and tracking to detect and track pedestrian faces in video.
Technical background
With the widespread deployment of video surveillance, the intelligent processing of the massive information in surveillance video has been put on the agenda in large and medium-sized cities. Video-based pedestrian face detection and tracking is one kind of intelligent surveillance-video processing, with wide and important applications in public security, criminal investigation and similar fields.
At present, most face detection algorithms are image-based rather than video-based. These algorithms traverse the entire image with a window and test whether each window contains a face. To accommodate faces of different sizes, they must traverse the image repeatedly with windows of different sizes, so image-based face detection has very high time complexity. Applying an image-based face detector, unmodified, to every frame of a video cannot meet the real-time requirements of video processing. Moreover, in practice face recognition usually follows face detection, and recognition is itself computationally expensive; when the combined cost of detection and recognition is considered, the real-time requirement on the detector becomes even more demanding.
Video-based face tracking is therefore significant. First, tracking reduces the repeated detection work across adjacent frames and thus greatly reduces the total computation. Second, tracking can support image enhancement and provide clearer face images. Finally, tracking supplies views of the face from multiple angles, enriching the evidence available for face recognition. Tracking a face in video is, however, very difficult: the pose, shape and size of the face change continuously, the face occupies only a small fraction of the image, and occlusion, expression and similar factors make it hard to find invariant face features that persist across successive frames.
The present invention detects and tracks pedestrian faces by way of moving-object detection and tracking, and thereby largely resolves the difficulties that pedestrian face detection and tracking currently encounter.
Summary of the invention
The present invention proposes an algorithm that detects and tracks pedestrian faces by means of moving-object detection and tracking. The algorithm is built on the motion information of human bodies in the video and exploits the spatial and temporal correlation of human motion. The particulars are as follows:
1. Applying moving-object detection to pedestrian face detection (see Fig. 1 and Fig. 2)
(1) Moving-region detection: for a given video image sequence, the current frame is differenced against a fixed background; after thresholding, the image is segmented into moving regions and background.
(2) Moving-object (pedestrian) discrimination: a two-dimensional human-body template built from anthropometric data is applied to each moving region to decide whether it contains a pedestrian.
(3) Face-region determination: once a moving human body is identified, the region lying above the body's center of gravity (its upper three-quarters) is taken as the face region and marked. Conventional, image-based face-region determination searches the whole image; the motion-analysis-based determination of the present invention searches only within the moving region, which greatly reduces the search range and shortens the search time.
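The face-region step above can be sketched as a small helper. This is a minimal sketch under stated assumptions: the patent does not give an exact formula for the "three-quarters" region, so the function, its parameter `keep`, and the image-coordinate convention (y increasing downward) are illustrative choices, not the patent's definition.

```python
def face_search_region(bbox, centroid, keep=0.75):
    """Return a candidate face region inside a detected body bounding box.

    bbox     -- (x_min, y_min, x_max, y_max) of the body, with y
                increasing downward (image coordinates).
    centroid -- (cx, cy) body center of gravity.
    keep     -- fraction of the strip above the centroid to keep;
                0.75 mirrors the patent's "three-quarters" figure
                (a hypothetical reading of that phrase).
    """
    x_min, y_min, x_max, y_max = bbox
    cx, cy = centroid
    # Take the strip above the centroid row, then keep its top `keep` share.
    height_above = cy - y_min
    y_cut = y_min + keep * height_above
    return (x_min, y_min, x_max, y_cut)
```

For a 40x100 body box with centroid at row 50, the search region is the top 37.5 rows — a fraction of the full frame, which is the point of the step.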
(4) Face detection within the face region: the face region is located again on the original image, skin-color segmentation and mathematical-morphology processing are applied, and skin-colored candidate regions that are too wide, too long, or of excessive or insufficient aspect ratio are discarded. Finally, average-face template matching is applied to obtain the final face region.
The candidate face regions given by this coarse detection are extracted in turn; after some gray-level processing, each candidate window is matched against the face template, and a window that satisfies the conditions and reaches the matching-score threshold is accepted as a face.
2. Applying moving-object tracking to pedestrian face tracking
The present invention applies moving-object tracking to the tracking of pedestrian faces, using mature existing pedestrian body-tracking algorithms to achieve real-time face tracking. It mainly exploits the Kalman filter, a recursive linear minimum-variance estimator with the advantages of fast and accurate estimation. There are two major steps:
(1) Tracking the face by tracking the body: from the body position detected in the current frame, a Kalman filter estimates the position, velocity and acceleration of the body in the current frame and simultaneously predicts the body position in the next frame. Once the body has been tracked, the pedestrian face detection method of part 1 is applied again on the tracked body to detect the face, thereby tracking the pedestrian's face. This body-tracking-based face tracking not only overcomes the unfavorable factors of direct face tracking — the face target is small and its pose changes frequently — but also shrinks the search range at every localization, greatly improving tracking speed.
(2) Analysis of the remaining regions: the moving-object-based face detection method above is applied to the remaining regions to decide whether a new face has appeared; if so, it is likewise tracked with the moving-object-based face tracking method. The region in which pedestrian detection must be performed is clearly reduced, saving much computation.
The flowchart of the whole algorithm is shown in Fig. 3.
Characteristics of the present invention
The present invention constructs an algorithmic framework for pedestrian face detection and tracking; under this framework, most moving-object detection and tracking algorithms can be assembled into pedestrian face detection and tracking algorithms. The proposed algorithm has two outstanding features:
(1) It finds the pedestrian first and then the face, using the pedestrian to locate the face region. The pedestrian is a larger target than the face and, in addition, carries motion information that can be exploited; finding a pedestrian in video is therefore easier than finding a face directly.
(2) It tracks the face by tracking the pedestrian. As a pedestrian walks, the position relative to the camera changes and the head swings or rotates, so the shape, size and orientation of the face change in every successive frame; with expression, occlusion and similar factors added, it is hard to extract stable features from the face itself as a basis for tracking. By contrast, the pedestrian is a larger target and offers more stable features across successive frames to serve as the basis for tracking.
Description of drawings
Fig. 1: Flowchart of moving-human detection
Fig. 2: Flowchart of face detection
Fig. 3: Flowchart of the whole algorithm of the present invention
Specific embodiments
Step 1: Face detection using moving-object detection
Face detection using the motion information of moving objects comprises the following steps: moving-region detection, moving-human identification, skin-color segmentation, mathematical-morphology processing, filtering of skin-colored candidate regions, and average-face template matching, each explained below.
1. Moving-region detection
First, for the image sequence of the given video, this embodiment detects moving regions with the method of orthogonal Gaussian-Hermite moments (OGHMs) and separates the moving regions from the background.
The OGHM template window size is 3 and the corresponding standard deviation is 0.2. With window size 3, the range of the basis functions is [-0.9972, 0.9972]. The templates are:
First-order template:  1.2598   0   -1.2598
Third-order template:  4.0908   0   -4.0908
Fifth-order template:  7.9121   0   -7.9121
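The three templates above are antisymmetric difference kernels, so applying one to a temporally constant signal yields zero in the interior while changes produce large responses. A minimal sketch of applying them with NumPy — the helper name and the 1-D framing are illustrative; the patent applies the templates to image sequences:

```python
import numpy as np

# The three antisymmetric OGHM templates quoted above (window size 3).
TEMPLATES = {
    1: np.array([1.2598, 0.0, -1.2598]),   # first order
    3: np.array([4.0908, 0.0, -4.0908]),   # third order
    5: np.array([7.9121, 0.0, -7.9121]),   # fifth order
}

def oghm_response(signal, order=1):
    """Correlate a 1-D intensity signal (e.g. one pixel's value over time)
    with an OGHM template; large magnitudes flag change."""
    t = TEMPLATES[order]
    # Reverse the kernel so np.convolve performs correlation;
    # 'same' keeps the output aligned with the input samples.
    return np.convolve(signal, t[::-1], mode="same")
```

A constant signal gives zero response away from the borders, while a step produces a strong response around the transition.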
After the OGHM images (OGHMIs) are obtained, moving objects cannot be extracted from them directly; the resulting image must be segmented. Pixels belonging to the same object in an image have strong spatial correlation, and segmenting by a simple threshold alone destroys this correlation and reduces the robustness of the segmentation. Because invariant-moment segmentation does not consider the spatial correlation of objects, this embodiment follows J. Shen et al. (reference [1]: J. Shen, W. Shen, H. J. Sun, J. Y. Yang. Fuzzy Neural Nets with Non-Symmetric Membership Function and Application in Signal Processing and Image Analysis. Signal Processing, 2000, 80:965-983), who proposed a non-symmetric membership function, derives a membership function over the OGHMIs (reference [2]: Youfu Wu, Jun Shen, Mo Dai. Traffic Object Detections and its Action Analysis. Pattern Recognition Letters, 2005), and on this basis segments with a fuzzy relaxation method (FRM), so that the pixels of each segmented object retain strong spatial correlation.
The expression of the membership function is given by formula (1) (rendered as an image in the original document and not reproduced here).
From the non-symmetric membership function, the membership function u over the OGHMIs can be normalized, as in formula (2) (likewise an image in the original).
u(M(x, y); T, M_min(x, y)) is abbreviated as u(x, y).
Once the membership function u(x, y) is obtained, moving objects are extracted with the FRM algorithm. This is a region-growing algorithm with several key steps: choice of starting point, region-growing method, termination condition, and interference filtering.
A. Choice of starting point
The image is scanned from top-left to bottom-right, and a point with u(x, y) = 1 is taken as a starting point.
B. Growing method
With the starting point as the current point, the values u(x', y') in its 4-neighbourhood are examined. If u(x', y') > 0.7, then u(x', y') is set to 1 and (x', y') becomes a seed point from which the 4-neighbourhood search continues; otherwise u(x', y') is set to 0, the point not belonging to the moving block.
C. Termination condition
When all seed points have been searched, the growth of the current moving block is finished; scanning continues to find a new starting point, and the steps above are repeated until the whole image has been processed.
D. Interference filtering
During region growing, the number of points belonging to the current moving region is recorded; when the size of the moving block is below a threshold T, the block is filtered out as interference, and the corresponding u(x, y) is set to 0.
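Steps A–D can be sketched as follows. This is a minimal reading of the FRM growth procedure under stated assumptions: the function name, the breadth-first traversal, and the `min_size` parameter (standing in for the threshold T of step D) are illustrative; the seed threshold u = 1 and growth threshold 0.7 come from the text.

```python
import numpy as np
from collections import deque

def frm_segment(u, grow_thresh=0.7, min_size=5):
    """Grow motion regions over a membership map u with values in [0, 1].

    Scans top-left to bottom-right for seeds with u == 1, grows each
    region through 4-neighbours whose membership exceeds grow_thresh,
    and discards regions smaller than min_size (interference filtering).
    Returns a binary mask of the surviving motion regions.
    """
    h, w = u.shape
    visited = np.zeros((h, w), dtype=bool)
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            if visited[y, x] or u[y, x] < 1.0:
                continue
            # Breadth-first growth from the starting point.
            region, queue = [], deque([(y, x)])
            visited[y, x] = True
            while queue:
                cy, cx = queue.popleft()
                region.append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx] \
                            and u[ny, nx] > grow_thresh:
                        visited[ny, nx] = True
                        queue.append((ny, nx))
            if len(region) >= min_size:     # keep the block; else it is noise
                for ry, rx in region:
                    mask[ry, rx] = 1
    return mask
```

A connected blob of high-membership pixels survives as one moving block, while an isolated seed smaller than `min_size` is filtered out as interference.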
After this FRM search, the image is binarized again and the moving objects can be extracted.
After extraction, some optimization can make the obtained targets more accurate. This embodiment computes the sizes of the moving blocks and static blocks in the image and sets corresponding thresholds.
In this embodiment the moving regions are white and the static regions black. When the number of pixels of a black block inside a white block is below a threshold, that part is also considered a moving region and is set to white, which solves the hole problem; when the number of pixels of a white block inside a black block is below a threshold, that part is considered a static region and is set to black, which eliminates background interference.
2. Moving-human identification
This embodiment assumes that the human bodies in the scene are all upright. Compared with other moving objects in nature, an upright human body has a distinctive feature: its height-to-width ratio is relatively large. Quadrupeds have smaller height-to-width ratios, and moving objects such as vehicles keep a low center of gravity for stability, so their height-to-width ratios are also below one. For a specific scene, further rules can be derived from prior knowledge to help recognize human presence; in a known scene, for example, the human body area falls within a rough range, and checking the area of each connected region helps remove noise regions.
The area feature is extracted by counting the pixels of each connected region in the binary image; a threshold is set from the empirically obtained human-body area parameters of the particular scene, and the pixels of foreground connected regions below this area threshold are set to the background value.
The height-to-width ratio of a connected region is obtained as follows:
A. Search for the first pixel of a connected region and record its abscissa and ordinate; search forward through the subsequent pixels, recording their coordinates, and find by sorting the maximum X_max and minimum X_min of the abscissa and the maximum Y_max and minimum Y_min of the ordinate.
B. Compute the region height H_n = Y_max - Y_min and width W_n = X_max - X_min (n is the number of the connected region in the forward search, with the first region numbered n = 1); store them and increment n.
C. Search for the next connected region and return to step A; exit when no further connected region is found.
From tests on a large number of human height-to-width ratios, this embodiment finds that once a body has fully entered the scene — affected by the swing of the arms and legs in walking and by the camera angle — its height-to-width ratio lies roughly between 1 and 5. In practice the range can be suitably relaxed so that no human body is missed.
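Steps A–C and the 1-to-5 acceptance range can be sketched as follows. A minimal sketch: the function names and the representation of a region as a list of (x, y) pixels are illustrative, while the H = Ymax-Ymin, W = Xmax-Xmin convention and the [1, 5] bounds come from the text.

```python
def bbox_ratio(points):
    """Height-to-width ratio of one connected region.

    points -- iterable of (x, y) pixel coordinates of the region.
    Returns (H, W, H/W) following the H = Ymax-Ymin, W = Xmax-Xmin
    convention of the text.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    H = max(ys) - min(ys)
    W = max(xs) - min(xs)
    return H, W, H / W

def looks_like_upright_human(points, lo=1.0, hi=5.0):
    """Accept regions whose height-to-width ratio falls in [lo, hi];
    the bounds mirror the 1-to-5 range reported in the text and would
    be relaxed somewhat in practice."""
    _, _, r = bbox_ratio(points)
    return lo <= r <= hi
```

A tall, narrow region (ratio 3) passes; a wide, flat one (ratio well below 1) is rejected as non-human.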
3. Skin-color segmentation
In the HSV space, the distance between two points is consistent with human vision, and the H component reflects the hue of the object, which makes color-image segmentation easier. Differences of skin color in an image are mainly caused by illumination; considering only chrominance in the detection reduces the influence of illumination, so the skin color can be extracted with H (hue) alone.
In this embodiment, the region determined by motion detection is located again on the original image, and the pixels in this region are converted from the RGB to the HSV color space by formula (3). Whether a pixel belongs to the skin color is judged from its H value; this embodiment regards a pixel as skin when 0.02 < H < 0.08.
V = max(R, G, B); the saturation S is given by an expression rendered as an image in the original (only H is used here); and

H = undefined,   if V = 0
H = (1 - g)/6,   if V = R and min(R, G, B) = B
H = (5 + b)/6,   if V = R and min(R, G, B) = G
H = (3 - b)/6,   if V = G and min(R, G, B) = R
H = (1 + r)/6,   if V = G and min(R, G, B) = B
H = (5 - r)/6,   if V = B and min(R, G, B) = G
H = (3 + g)/6,   if V = B and min(R, G, B) = R        (3)

where r = (V - R)/Delta, g = (V - G)/Delta, b = (V - B)/Delta, and Delta = V - min(R, G, B).
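The piecewise hue formula (3) is the same one implemented by the standard-library `colorsys.rgb_to_hsv` (with H normalized to [0, 1)), so the per-pixel skin test can be sketched directly on top of it. The function name is illustrative; the (0.02, 0.08) window is the threshold from the embodiment.

```python
import colorsys

def is_skin(r, g, b, lo=0.02, hi=0.08):
    """Classify one RGB pixel (components in [0, 1]) as skin by hue.

    colorsys.rgb_to_hsv computes the same piecewise hue as equation (3);
    the (lo, hi) window mirrors the embodiment's 0.02 < H < 0.08 rule.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if v == 0:                 # hue undefined for black pixels
        return False
    return lo < h < hi
```

A warm, skin-like pixel such as (0.8, 0.6, 0.5) has H about 0.056 and passes, while a blue pixel falls far outside the window.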
4. Mathematical-morphology processing
This embodiment applies the morphological operations of erosion and dilation to the regions obtained in the previous step.
Erosion removes all boundary points of an object, leaving an object that is one pixel smaller along its whole perimeter. If the object is less than three pixels wide at any point, it becomes disconnected there (splitting into two objects), and objects no more than two pixels wide in any direction are removed entirely. Erosion thus removes small, insignificant objects from a segmented image.
Dilation merges into an object all background points that touch it, enlarging the object by a corresponding number of points. If two objects are separated by fewer than three pixels at some point, they become connected there (merging into one object). Dilation can fill holes in the segmented objects.
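The two operations can be sketched with plain NumPy shifts; this is a minimal 3x3 (8-neighbour) version for illustration — the patent does not specify the structuring element, so the 3x3 square is an assumption.

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion: a pixel survives only if it and its full
    8-neighbourhood are foreground, so objects thinner than 3 pixels
    vanish (mask is a 0/1 uint8 array)."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1+dy:1+dy+mask.shape[0], 1+dx:1+dx+mask.shape[1]]
    return out

def dilate(mask):
    """3x3 binary dilation: any pixel touching foreground becomes
    foreground, filling small holes and gaps."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1+dy:1+dy+mask.shape[0], 1+dx:1+dx+mask.shape[1]]
    return out
```

Eroding a 3x3 block leaves only its center pixel; dilating that pixel restores the 3x3 block — the classic opening behaviour used here to suppress small noise while preserving larger regions.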
5. Filtering of skin-colored candidate regions
After the morphology-based filtering, most of the small noise blocks in the image are eliminated, but some small skin-colored regions remain in the background. To delete false face regions, these regions must be analyzed. The skin-colored candidate regions are first labelled, and then filtered using the fact that the aspect ratio of a face falls within a certain range, discarding regions that are too wide, too long, or of excessive or insufficient aspect ratio.
To determine the aspect ratio of a region, its length L and width W must be obtained. Because a face may be rotated or tilted, the left, right, top and bottom vertices cannot be read directly from the region's coordinate values (the coordinate system here takes the lower-left corner of the image as the origin, with the x axis positive to the right and the Y axis positive upward). The procedure is: collect the coordinates of all points on the region boundary; find the coordinates with the minimum and maximum x components (X_min, X_max) and the minimum and maximum Y components (Y_min, Y_max); then L = X_max - X_min and W = Y_max - Y_min are the length and width parameters of the face region. The ratio r = L/W is the region's aspect ratio. For an upright frontal face this ratio should be close to 1.2, but because the face may be rotated, and because skin-color similarity may segment the face and neck as one region, the upper bound is suitably enlarged to avoid misjudging correct regions. In this embodiment the range of r is (0.5, 2); candidate regions outside this range are deleted directly.
6. Average-face template matching
Face regions are marked by hand in selected sample images as face samples and scaled to 24 x 24; all samples are averaged in gray level to give the average face image, and the average face image, after gray-level distribution normalization, serves as the face template.
To accommodate differently shaped faces, the primary template is stretched to width-to-length ratios of 1:0.9, 1:1, 1:1.1 and 1:1.2.
Gray-level distribution normalization transforms the gray mean and variance of an image to mu_0 = 128 and sigma_0 = 80. Let the gray-value matrix of the image be D[W][H] (W and H are the width and height of the image); its mean and variance are computed and the following transform is applied:
mu = (1/(WH)) * sum_{i=0..W-1} sum_{j=0..H-1} D[i][j]                (4)
sigma^2 = (1/(WH)) * sum_{i=0..W-1} sum_{j=0..H-1} (D[i][j] - mu)^2  (5)
D'[i][j] = (sigma_0 / sigma) * (D[i][j] - mu) + mu_0                 (6)
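Equations (4)-(6) amount to standardizing the image and rescaling to the target statistics; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def normalize_gray(D, mu0=128.0, sigma0=80.0):
    """Map an image's gray levels to mean mu0 and standard deviation
    sigma0, as in equations (4)-(6)."""
    D = D.astype(np.float64)
    mu = D.mean()                            # equation (4)
    sigma = D.std()                          # square root of equation (5)
    return sigma0 / sigma * (D - mu) + mu0   # equation (6)
```

After the transform, any non-constant image has mean 128 and standard deviation 80, so template and candidate windows are compared on a common gray-level footing.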
The candidate face regions given by the coarse detection are extracted in turn, converted to gray images, and gray-level-normalized. Each candidate window is then matched against the face template, and a window that satisfies the conditions and reaches the matching-score threshold is accepted as a face.
The following matching criterion is used. Let the gray matrix of the face template be T[M][N], with gray mean mu_T and variance sigma_T^2, and the gray matrix of the candidate face region be R[M][N], with gray mean mu_R and variance sigma_R^2. The correlation coefficient r(T, R) between them is:

r(T, R) = sum_{i=0..M-1} sum_{j=0..N-1} (T[i][j] - mu_T)(R[i][j] - mu_R) / (M * N * sigma_T * sigma_R)     (7)

During template matching, if the correlation coefficient r(T, R) exceeds the threshold t (t = 0.6), the window is considered to have passed the average-face matching screen and is accepted as a face.
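Equation (7) and the t = 0.6 acceptance rule can be sketched directly (the function names are illustrative):

```python
import numpy as np

def match_score(T, R):
    """Correlation coefficient of equation (7) between a template T and
    a candidate window R of identical shape."""
    T = T.astype(np.float64)
    R = R.astype(np.float64)
    M, N = T.shape
    num = ((T - T.mean()) * (R - R.mean())).sum()
    return num / (M * N * T.std() * R.std())

def is_face(T, R, t=0.6):
    """Accept the window as a face when the score exceeds the
    threshold t = 0.6 used in the embodiment."""
    return match_score(T, R) > t
```

The score is invariant to the window's brightness offset — a window equal to the template shifted by a constant still scores 1 — which is why the gray-level normalization above and this criterion work well together.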
Step 2: Face tracking using moving-object tracking
Many moving-object tracking algorithms exist; this embodiment adopts the Kalman filter, but is not limited to it.
The Kalman filter is a recursive linear minimum-variance estimator with the advantages of fast and accurate estimation, and it has mature applications in video pedestrian tracking. This embodiment uses Kalman filters to estimate the motion state of the pedestrian's body, tracks the body according to the estimated motion state, and detects the face on the tracked body, thereby tracking the pedestrian's face. The details are as follows:
1. Tracking the pedestrian's body
A. The discrete Kalman filter
The state equation of a discrete dynamic system with deterministic control, driven by system noise, can be written as:

X[k] = A X[k-1] + B U[k] + W[k]

The measurement equation of the system is:

Z[k] = H X[k] + V[k]

where X[k] is the state vector of the system at time k, U[k] the input vector at time k, Z[k] the output vector at time k, and W[k] and V[k] the noise vectors at time k, Gaussian-distributed and mutually independent. A, B and H are coefficient matrices. The corresponding basic Kalman-filter equations are:

One-step state prediction:            X[k|k-1] = A X[k-1|k-1] + B U[k]
State estimate:                       X[k|k] = X[k|k-1] + K_g[k] (Z[k] - H X[k|k-1])
Filter gain matrix:                   K_g[k] = P[k|k-1] H^T (H P[k|k-1] H^T + R)^(-1)
One-step prediction error covariance: P[k|k-1] = A P[k-1|k-1] A^T + Q
Estimation error covariance:          P[k|k] = (I - K_g[k] H) P[k|k-1]

With the filter initial values chosen according to the particular application, Kalman estimation of the system state then proceeds through the above recursion as each new measurement is obtained.
From the discrete Kalman filtering flow of Fig. 3 it can be seen that Kalman filtering has two computation loops: a gain-computation loop and a filtering-computation loop. The gain loop is independent, while the filtering loop depends on the gain loop. Each cycle contains two update processes, a time update and a measurement update. Given the state estimate predicted from time k-1 to time k, the measurement at time k, and the one-step prediction error covariance at time k-1, the optimal estimate of the state vector at time k is obtained, and the state estimate and measurement of the system at time k+1 can then be predicted.
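One cycle of the recursion above can be sketched in NumPy; the variable names follow the equations (A, H, Q, R, K_g), the function name is illustrative, and the control term B U[k] is left optional since the constant-acceleration model used below has none.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R, B=None, u=None):
    """One cycle of the discrete Kalman-filter equations quoted above.

    x, P -- previous estimate X[k-1|k-1] and its error covariance
    z    -- new measurement Z[k]
    Returns the updated estimate X[k|k] and covariance P[k|k].
    """
    # Time update (one-step prediction).
    x_pred = A @ x if B is None else A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Filter gain.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Measurement update.
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With a large initial covariance and a small measurement noise R, the first update pulls the estimate essentially onto the measurement, which matches the initialization strategy described below.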
B. Building the pedestrian body model
The body's center of gravity found in Step 1 is chosen as the feature point, because the shape of the human body is symmetrical: the center of gravity translates stably along the direction of motion, is not constrained by the body's own movements, and avoids the influence of the cyclic changes of body shape. To reduce computational complexity, each body is given two Kalman filters, estimating the motion state of the center of gravity in the X direction and in the Y direction respectively.
The motion state of the body can be expressed by the vector X = (s_x, v_x, a_x, s_y, v_y, a_y)^T, where s_x, v_x, a_x are the displacement, velocity and acceleration of the center of gravity in the X direction, and s_y, v_y, a_y those in the Y direction. Since each body uses two Kalman filters to estimate the X- and Y-direction motion of its center of gravity, the vector X can be decomposed into X_x = (s_x, v_x, a_x)^T and X_y = (s_y, v_y, a_y)^T, representing the motion state of the center of gravity in the X and Y directions respectively. The motion in these two directions is independent and is handled identically, so only the X direction is elaborated here.
The system equations of the motion state of the center of gravity are:

s_x[k] = s_x[k-1] + v_x[k-1] * dT + (1/2) a_x[k-1] * dT^2
v_x[k] = v_x[k-1] + a_x[k-1] * dT
a_x[k] = a_x[k-1]

where s_x[k], v_x[k] and a_x[k] are the displacement, velocity and acceleration of the center of gravity in the X direction at frame k, and dT is the time interval. Written in matrix form:

X_x[k] = (s_x[k], v_x[k], a_x[k])^T = A X_x[k-1] + W[k]

with

A = | 1   dT   dT^2/2 |
    | 0   1    dT     |
    | 0   0    1      |

and W[k] the noise.
In practical applications only the displacement of the center of gravity can be observed in the image; velocity and acceleration cannot be observed directly. The measurement equation is therefore:

Z_x[k] = (1 0 0) (s_x[k], v_x[k], a_x[k])^T + V_x[k] = s_x[k] + V_x[k] = H X_x[k] + V_x[k]

where H = (1 0 0), and the measurement noise V_x[k] is regarded as white noise.
The state and measurement equations of the system thus have the same form as those of the standard Kalman filter, so the state of the system can be estimated with the basic discrete Kalman-filter equations, and the pedestrian body in the video can be tracked.
C. pedestrian's human body is followed the tracks of
Be two Kalman filter of pedestrian's human body initialization of utilizing the moving object detection method to obtain in the step 1, be used for respectively this pedestrian gravity center of human body's the directions X and the motion state of Y direction are estimated.Because what Kalman filter was used is the method for estimation of recursion, so as long as the original state and the initial estimation square error battle array of given filtering equations, just can utilize current measuring value to obtain the estimated value of system state, the measuring value that the use of this specific embodiments obtains for the first time carries out initialization to the system state vector of pedestrian's human body:
$$X_x[0] = \begin{pmatrix} s_x[0] \\ v_x[0] \\ a_x[0] \end{pmatrix} = \begin{pmatrix} z_x[0] \\ 0 \\ 0 \end{pmatrix}, \qquad X_y[0] = \begin{pmatrix} s_y[0] \\ v_y[0] \\ a_y[0] \end{pmatrix} = \begin{pmatrix} z_y[0] \\ 0 \\ 0 \end{pmatrix}$$
where $z_x[0]$ and $z_y[0]$ are the measured X and Y positions of the center of gravity. The initial estimation error covariance of both Kalman filters is set to:
$$P_0 = \begin{pmatrix} 100 & 0 & 0 \\ 0 & 100 & 0 \\ 0 & 0 & 100 \end{pmatrix}$$
From the current measurement, each Kalman filter estimates the displacement, velocity, and acceleration of the center of gravity in its direction and, via the filtering equations, predicts their values in the next frame. If the predicted center of gravity falls inside the tracking window of a pedestrian body detected in the next frame, that body is considered to match the body in the previous frame, and the system state is then updated with the newly obtained measurement.
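The initialize–predict–match–update cycle above can be sketched as a one-axis filter (one instance each for X and Y). The initial state and $P_0 = 100I$ follow the text; the process-noise covariance $Q$, measurement-noise variance $R$, and tracking-window size are assumed values not specified in the source.

```python
import numpy as np

class CentroidKalman:
    """1-D constant-acceleration Kalman filter for one axis (X or Y) of the
    body centroid. Q and R are assumed; the source only fixes the initial
    state (first measurement, zero velocity/acceleration) and P0 = 100*I."""

    def __init__(self, z0, dt=0.04, q=1.0, r=4.0):
        self.A = np.array([[1.0, dt, dt ** 2 / 2],
                           [0.0, 1.0, dt],
                           [0.0, 0.0, 1.0]])
        self.H = np.array([[1.0, 0.0, 0.0]])
        self.Q = q * np.eye(3)              # process-noise covariance (assumed)
        self.r = r                          # measurement-noise variance (assumed)
        self.x = np.array([z0, 0.0, 0.0])   # state initialized from first measurement
        self.P = 100.0 * np.eye(3)          # initial estimation error covariance P0

    def predict(self):
        """Propagate the state one frame; return the predicted displacement."""
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x[0]

    def update(self, z):
        """Correct the state with a new centroid measurement z."""
        y = z - self.x[0]                   # innovation (H x is just s)
        S = self.P[0, 0] + self.r           # innovation variance H P H^T + R
        K = self.P[:, 0] / S                # Kalman gain P H^T / S, shape (3,)
        self.x = self.x + K * y
        self.P = (np.eye(3) - np.outer(K, self.H.ravel())) @ self.P

def in_tracking_window(predicted, measured, half_width=20.0):
    """Data association: the predicted centroid must fall inside the tracking
    window of a body detected in the next frame (half_width is assumed)."""
    return abs(predicted - measured) <= half_width
```

A matched detection is fed back through `update`; an unmatched prediction means the track has lost its body for that frame.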
Two, locating the face on the tracked pedestrian's body
On the tracked moving body, the face detection method of step 1 is used to detect the face, which is marked with a red rectangle. Because this embodiment's motion-analysis-based face detection locates the face only in the region above the moving body's center of gravity (roughly the upper three-quarters area), the search range is greatly reduced and the running time of the algorithm is shortened; face detection thus runs almost in real time, synchronously with body tracking, achieving accurate tracking of the pedestrian's face.
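The restriction of the face search to the area above the body centroid can be sketched as follows. The `(x, y, w, h)` bounding-box representation is an assumption for illustration; the source describes the region only in words.

```python
def face_search_region(body_box, centroid):
    """Sub-rectangle of the body bounding box lying above the centroid.

    body_box: (x, y, w, h) in image coordinates, y increasing downward.
    centroid: (cx, cy) center of gravity of the detected body silhouette.
    Face detection (skin-color model + template matching) is then run
    only inside the returned region, shrinking the search range.
    """
    x, y, w, h = body_box
    cx, cy = centroid
    # Keep the full width, but clip the height at the centroid row.
    return (x, y, w, max(0, int(cy) - y))
```

For a 100-pixel-tall body box whose centroid sits 40 pixels below the box top, the face is searched only in that upper 40-pixel band.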
Three, analyzing the remaining regions
With the method for detecting human face that detects based on moving object in the step 1 all the other zones are analyzed, judged to have or not new people's face to occur.If have, then continue to use face tracking method that it is followed the tracks of based on moving body track.Obviously, the zone that at this moment needs to carry out the detection of pedestrian people's face has obviously reduced, and has saved operand.

Claims (3)

1. A video-based pedestrian face detection and tracking algorithm, characterized in that the detection and tracking of the pedestrian's face are carried out by means of a moving-object detection algorithm and a moving-object tracking algorithm.
2. The algorithm according to claim 1, wherein the moving-object detection algorithm is applied to pedestrian face detection: for an image sequence in a given video, motion detection is first used to obtain the moving regions and separate them from the background; human-body recognition is performed on the moving regions obtained by motion detection; since the face region must lie above the center of gravity of the moving body, this sub-region is marked, located on the original image, and face detection is then performed within it.
3. The algorithm according to claim 1, wherein the moving-object tracking algorithm is used to track the pedestrian's face: since the moving human body is a larger and more easily tracked target than the face, the moving body obtained by the detection of claim 2 is tracked with the moving-object tracking algorithm, and the face is then detected on the tracked body with the face detection method of claim 1; this overcomes the adverse effect of facial pose changes on tracking and thereby achieves face tracking.
CNA2008100256116A 2008-01-04 2008-01-04 Passerby face detection and tracing algorithm based on video Pending CN101216885A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2008100256116A CN101216885A (en) 2008-01-04 2008-01-04 Passerby face detection and tracing algorithm based on video


Publications (1)

Publication Number Publication Date
CN101216885A true CN101216885A (en) 2008-07-09

Family

ID=39623316

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2008100256116A Pending CN101216885A (en) 2008-01-04 2008-01-04 Passerby face detection and tracing algorithm based on video

Country Status (1)

Country Link
CN (1) CN101216885A (en)

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101626493A (en) * 2009-08-06 2010-01-13 北京北大千方科技有限公司 Method for judging forward motion direction of pedestrian by combining laser scanning and videos
CN101908150A (en) * 2010-06-25 2010-12-08 北京交通大学 Human body detection method
CN101980245A (en) * 2010-10-11 2011-02-23 北京航空航天大学 Adaptive template matching-based passenger flow statistical method
CN102105904A (en) * 2008-08-11 2011-06-22 欧姆龙株式会社 Detection information registration device, object detection device, electronic device, method for controlling detection information registration device, method for controlling object detection device, program for controlling detection information registration device
CN101599177B (en) * 2009-07-01 2011-07-27 北京邮电大学 Video-based method for tracking human body limb movement
CN102750707A (en) * 2011-08-29 2012-10-24 新奥特(北京)视频技术有限公司 Image processing method and image processing device based on regions of interest
CN101706721B (en) * 2009-12-21 2012-11-28 汉王科技股份有限公司 Face detection method simulating radar scanning
CN102142146B (en) * 2010-01-28 2013-04-17 北京中科大洋科技发展股份有限公司 Method for tracking video target area
CN101783019B (en) * 2008-12-26 2013-04-24 佳能株式会社 Subject tracking apparatus and control method therefor, image capturing apparatus, and display apparatus
CN103745486A (en) * 2014-01-15 2014-04-23 重庆邮电大学 Method for eliminating noise interference by using moving track of object
CN104021394A (en) * 2014-06-05 2014-09-03 华北电力大学(保定) Insulator image recognition method based on Adaboost algorithm
WO2014169441A1 (en) * 2013-04-16 2014-10-23 Thomson Licensing Method and system for eye tracking using combination of detection and motion estimation
CN104573640A (en) * 2013-10-23 2015-04-29 想象技术有限公司 Face detection
CN104956400A (en) * 2012-11-19 2015-09-30 株式会社理光 Moving object recognizer
CN105469379A (en) * 2014-09-04 2016-04-06 广东中星电子有限公司 Video target area shielding method and device
CN106203379A (en) * 2016-07-20 2016-12-07 安徽建筑大学 Human body recognition system for security
CN106650682A (en) * 2016-12-29 2017-05-10 Tcl集团股份有限公司 Method and device for face tracking
CN106682620A (en) * 2016-12-28 2017-05-17 北京旷视科技有限公司 Human face image acquisition method and device
CN106897678A (en) * 2017-02-08 2017-06-27 中国人民解放军军事医学科学院卫生装备研究所 A kind of remote human face recognition methods of combination heartbeat signal, device and system
CN107644204A (en) * 2017-09-12 2018-01-30 南京凌深信息科技有限公司 A kind of human bioequivalence and tracking for safety-protection system
CN108090428A (en) * 2017-12-08 2018-05-29 广西师范大学 A kind of face identification method and its system
CN108269331A (en) * 2017-12-12 2018-07-10 国政通科技股份有限公司 A kind of intelligent video big data processing system
CN108304790A (en) * 2018-01-19 2018-07-20 腾讯科技(深圳)有限公司 Skeleton motion prediction processing method, device and limb motion prediction processing method
CN108573230A (en) * 2018-04-10 2018-09-25 京东方科技集团股份有限公司 Face tracking method and face tracking device
CN108664853A (en) * 2017-03-30 2018-10-16 北京君正集成电路股份有限公司 Method for detecting human face and device
CN109074473A (en) * 2016-04-11 2018-12-21 北京市商汤科技开发有限公司 For the method and system to image tracing
CN109086830A (en) * 2018-08-14 2018-12-25 江苏大学 Typical association analysis based on sample punishment closely repeats video detecting method
CN109446977A (en) * 2018-10-25 2019-03-08 平安科技(深圳)有限公司 Image processing method, device, storage medium and terminal based on recognition of face
CN109741282A (en) * 2019-01-16 2019-05-10 清华大学 A kind of multiframe bubble stream image processing method based on Predictor Corrector
CN109829436A (en) * 2019-02-02 2019-05-31 福州大学 Multi-face tracking method based on depth appearance characteristics and self-adaptive aggregation network
CN109948494A (en) * 2019-03-11 2019-06-28 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and storage medium
CN110099254A (en) * 2019-05-21 2019-08-06 浙江师范大学 A kind of driver's face tracking device and method
CN110300946A (en) * 2017-02-14 2019-10-01 微软技术许可有限责任公司 Intelligent assistant
CN110351268A (en) * 2019-07-03 2019-10-18 福建睿思特科技股份有限公司 A kind of digital law enforcement system of smart city
CN110490904A (en) * 2019-08-12 2019-11-22 中国科学院光电技术研究所 A kind of Dim targets detection and tracking
CN110580427A (en) * 2018-06-08 2019-12-17 杭州海康威视数字技术股份有限公司 face detection method, device and equipment
CN110781769A (en) * 2019-10-01 2020-02-11 浙江大学宁波理工学院 Method for rapidly detecting and tracking pedestrians
CN110826390A (en) * 2019-09-09 2020-02-21 博云视觉(北京)科技有限公司 Video data processing method based on face vector characteristics
US10692217B2 (en) 2016-03-14 2020-06-23 Sercomm Corporation Image processing method and image processing system
CN111382694A (en) * 2020-03-06 2020-07-07 杭州宇泛智能科技有限公司 Face recognition method and device and electronic equipment
CN111815662A (en) * 2019-04-11 2020-10-23 上海集森电器有限公司 Behavior recognition implementation method based on face detection
CN111821645A (en) * 2020-06-14 2020-10-27 于刚 Trampoline safety protection platform and method
CN111967422A (en) * 2020-08-27 2020-11-20 福建医联康护信息技术有限公司 Self-service face recognition service method
CN112906600A (en) * 2021-03-04 2021-06-04 联想(北京)有限公司 Object information monitoring method and device and electronic equipment
CN113051978A (en) * 2019-12-27 2021-06-29 广州慧睿思通科技股份有限公司 Face recognition method, electronic device and readable medium
CN113468998A (en) * 2021-06-23 2021-10-01 武汉虹信技术服务有限责任公司 Portrait detection method, system and storage medium based on video stream
CN113673381A (en) * 2021-08-05 2021-11-19 合肥永信科翔智能技术有限公司 A access control system for wisdom campus


Similar Documents

Publication Publication Date Title
CN101216885A (en) Passerby face detection and tracing algorithm based on video
Wang et al. Automatic laser profile recognition and fast tracking for structured light measurement using deep learning and template matching
Hsieh et al. Shadow elimination for effective moving object detection by Gaussian shadow modeling
CN109146921B (en) Pedestrian target tracking method based on deep learning
CN106875424B (en) A kind of urban environment driving vehicle Activity recognition method based on machine vision
CN103310194B (en) Pedestrian based on crown pixel gradient direction in a video shoulder detection method
CN102592112B (en) Method for determining gesture moving direction based on hidden Markov model
CN102214309B (en) Special human body recognition method based on head and shoulder model
CN103310444B (en) A kind of method of the monitoring people counting based on overhead camera head
CN106127812B (en) A kind of passenger flow statistical method of the non-gate area in passenger station based on video monitoring
CN109902565B (en) Multi-feature fusion human behavior recognition method
CN103942577A (en) Identity identification method based on self-established sample library and composite characters in video monitoring
CN105512618B (en) Video tracing method
CN102289672A (en) Infrared gait identification method adopting double-channel feature fusion
CN103903278A (en) Moving target detection and tracking system
Havasi et al. Detection of gait characteristics for scene registration in video surveillance system
CN106204594A (en) A kind of direction detection method of dispersivity moving object based on video image
CN109359549A (en) A kind of pedestrian detection method based on mixed Gaussian and HOG_LBP
CN104794449A (en) Gait energy image acquisition method based on human body HOG (histogram of oriented gradient) features and identity identification method
CN106056078B (en) Crowd density estimation method based on multi-feature regression type ensemble learning
CN104036528A (en) Real-time distribution field target tracking method based on global search
CN104217211B (en) Multi-visual-angle gait recognition method based on optimal discrimination coupling projection
CN106127798B (en) Dense space-time contextual target tracking based on adaptive model
CN104063682A (en) Pedestrian detection method based on edge grading and CENTRIST characteristic
Dessauer et al. Optical flow object detection, motion estimation, and tracking on moving vehicles using wavelet decompositions

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Open date: 20080709