CN104200483A - Human body central line based target detection method under multi-camera environment - Google Patents


Info

Publication number
CN104200483A
CN104200483A (application CN201410268650.4A)
Authority
CN
China
Prior art keywords
target
center line
human body
people
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410268650.4A
Other languages
Chinese (zh)
Other versions
CN104200483B (en)
Inventor
梁志伟
徐小根
刘洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing huaruizhiguang Information Technology Research Institute Co., Ltd
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University filed Critical Nanjing Post and Telecommunication University
Priority to CN201410268650.4A priority Critical patent/CN104200483B/en
Publication of CN104200483A publication Critical patent/CN104200483A/en
Application granted granted Critical
Publication of CN104200483B publication Critical patent/CN104200483B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a target detection method based on the human body center line in a multi-camera environment. The method comprises: obtaining a binary image of each target person by background subtraction; obtaining the principal-axis (center line) information of the target person by the least-median-of-squares method; and computing the homography matrices between the cameras and, through these matrices, obtaining the correspondence of targets across the cameras, thereby detecting moving targets in multiple cameras. The method has low sensitivity to errors in motion detection and strong robustness: the center line of a target person can be reliably detected in a single camera view both with and without occlusion, and the method is effective and robust. The accurate detection of the target person's center line provides a solid foundation for subsequent target matching and tracking across multiple views.

Description

Target detection method based on the human body center line in a multi-camera environment
Technical field
The present invention relates to a target detection method based on the human body center line in a multi-camera environment.
Background technology
Target detection, also called target extraction, is an image segmentation method based on the geometric and statistical features of targets; it unifies target segmentation and recognition, and its accuracy and real-time performance are significant capabilities of a whole target tracking system. Especially in complex scenes where multiple targets must be processed in real time, the automatic extraction and recognition of targets is particularly important.
In multi-camera tracking, it is first necessary to detect which targets appear in each camera picture; only when the target information under each view is known can the information of these targets be fused in subsequent steps for collaborative tracking.
In practical detection, however, the pixels that the algorithm assigns to a target person often contain errors due to noise in the actual scene; illumination changes and shadows also affect detection, introducing artifacts into the results. Meanwhile, the feature points extracted during target detection inevitably contain some errors, which degrades robustness. In particular, occlusion frequently occurs in multi-target tracking scenes; it remains a hard problem in computer vision and must be considered in target detection.
Summary of the invention
The object of the invention is to provide a target detection method based on the human body center line in a multi-camera environment, such that the center line of a target person can be reliably detected both with and without occlusion, thereby solving the problems in the prior art caused by noise, illumination changes, shadows and similar factors in real scenes, as well as the detection problem under target occlusion.
Technical solution of the present invention is:
A target detection method based on the human body center line in a multi-camera environment comprises:
obtaining a binary image of the target person by background subtraction;
obtaining the principal-axis information of the target person by the least-median-of-squares method;
computing the homography matrix between the cameras and, through this matrix, obtaining the corresponding position of a target across multiple cameras, thereby detecting moving targets in the multi-camera system.
Further, background-subtraction target detection is specifically:
modeling the background: the input image sequence is median-filtered to obtain a single-Gaussian background model, in which the Gaussian model parameters of each point are (u_i, σ_i), where u_i and σ_i (i = r, g, b) denote the mean and variance of the corresponding point in the background model;
after background modeling, the foreground region is obtained by differencing the current input image and the background image; the difference image is then thresholded to obtain the binary image B_xy, in which foreground pixels are set to 1 and background pixels to 0,
where I_i(x, y) (i = r, g, b) is the current observation of pixel (x, y) and Γ_i is the threshold parameter.
Further, the binary image is processed with morphological operators, using erosion and dilation to filter remaining noise; simply connected foreground regions are then extracted by binary connected-component analysis.
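The erosion and dilation operations mentioned above can be sketched in pure Python as a morphological opening with a 3×3 structuring element; this is an illustrative sketch under those assumptions, not the patent's implementation.

```python
def erode(img, r=1):
    """Binary erosion: a pixel stays 1 only if its whole (2r+1)x(2r+1) neighborhood is 1."""
    h, w = len(img), len(img[0])
    return [[1 if all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)) else 0
             for x in range(w)] for y in range(h)]

def dilate(img, r=1):
    """Binary dilation: a pixel becomes 1 if any pixel in its neighborhood is 1."""
    h, w = len(img), len(img[0])
    return [[1 if any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)) else 0
             for x in range(w)] for y in range(h)]

def open_binary(img, r=1):
    """Opening (erosion followed by dilation) removes isolated noise pixels."""
    return dilate(erode(img, r), r)
```

Applying `open_binary` to a binary mask containing a solid 3×3 block and one isolated pixel removes the isolated pixel while restoring the block.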
Further, the algorithm based on least-median-of-squares estimation distinguishes the center lines of target persons in three situations by the following rules:
when there is only one "tracked target" (i.e., only one target person in the previous frame) corresponding to one "detected target" in the current frame, and the vertical histogram of the "detected target" has only one significant peak region, the "detected target" is classified as an independent target person;
when more than one "tracked target" in the previous frame corresponds to a "detected target" in the current frame, the "detected target" is very likely composed of several target persons; the "detected target" is first segmented with the vertical projection histogram, and if this fails, the case is classified as "multiple targets under occlusion" and the color-model-based method is used to detect the center lines under occlusion;
if more than one "tracked target" corresponds to the same "detected target", the case is classified as "occluded".
Further, the least-median-of-squares method is used to determine the center line of an independent target person; the center line L is determined by formula (6):
L = arg min_l median{ E(I_i, l)² }    (6)
where E(I_i, l) is the perpendicular distance between the i-th foreground pixel I_i and the center line l to be determined.
Further, the detection of target center lines without occlusion is specifically:
dividing the whole target region into several subregions, each corresponding to a single independent target person, and determining the center line of the independent target person in each subregion.
Further, for the occluded case, the foreground pixels of each target person are separated within the whole foreground region, and the center line of each target person is then detected from the separated foreground pixels by the least-median-of-squares method.
Further, a color-model-based method is adopted to segment the target persons under occlusion;
the color model is updated by mixing the current image pixels with the appearance model over all foreground pixels, and all probability-mask values are updated accordingly (χ = η = 0.95).
Further, given the color model of a moving target, the color distribution of each of the target's pixels is approximated by a spherical Gaussian model:
r_rgb(X) = (2πσ²)^(−3/2) exp{ −‖I(X) − T(X)‖² / (2σ²) }    (12)
A foreground pixel X belongs to the k-th moving object if it satisfies formula (13):
k = arg max_i P_i(X), i ∈ [1, N]    (13).
The beneficial effects of the invention are: the method has very low sensitivity to errors in motion detection and strong robustness. With and without occlusion, the center line of a target person can be reliably detected in a single camera view, and experiments show that the method is effective and robust. The accurate detection of the target person's center line provides a good basis for subsequent target matching and tracking across multiple views.
Brief description of the drawings
Fig. 1 is a flow diagram of the background subtraction method in the embodiment;
Fig. 2 is a foreground detection diagram of target persons in the embodiment;
Fig. 3 illustrates center line detection for a single target person;
Fig. 4 is the vertical projection histogram of target persons;
Fig. 5 illustrates detection of a target group without occlusion;
Fig. 6 illustrates center line detection under occlusion;
Fig. 7 shows detection of target persons under different monitoring views and changing conditions;
Fig. 8 shows detection of target person center lines with three cameras in a laboratory environment.
Embodiment
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The center-line-based algorithm detects the center line of each target person in each camera image plane, approximating the target person's axis of symmetry, and defines a center-line matching likelihood function; this function uses the homography relation between the different views as a geometric constraint, thereby detecting targets across views.
In the multi-target detection method based on the human body center line under a multi-camera environment of the embodiment, the center line of the target person serves as the matching feature between the cameras; the center line is chosen as the target feature because it is an axis of symmetry of the target person. Owing to noise in the actual scene, the pixels that the algorithm assigns to a target person often contain errors; exploiting the symmetry of the center line allows the errors produced by pixels on the two sides of the line to cancel each other, thereby reducing detection error.
The center-line-based target detection algorithm exploits the nearly complete symmetry of the human body about its center line: the least-median-of-squares method detects the principal axis of a single target person, and the homography matrix provides the correspondence between the cameras, giving the corresponding position of a detected target in each camera and thereby realizing target detection across multiple cameras.
The embodiment builds on single-camera tracking: a binary image of the target person is first obtained by background subtraction; on this basis the least-median-of-squares method obtains the principal-axis information of the target person; finally, the homography matrix between the cameras is computed and used to obtain the corresponding position of a target across the cameras, realizing the detection of moving targets in the multi-camera system.
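As a sketch of the last step, mapping a target's ground point from one view into another through a known homography can be illustrated as follows. The 3×3 matrix here is a hypothetical translation between two views, not a calibrated result; in practice H would be estimated from ground-plane correspondences.

```python
def apply_homography(H, pt):
    """Map an image point through a 3x3 homography in homogeneous coordinates."""
    x, y = pt
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    wh = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / wh, yh / wh)

# Hypothetical homography: a pure translation by (10, 5) between the two views.
H = [[1.0, 0.0, 10.0],
     [0.0, 1.0, 5.0],
     [0.0, 0.0, 1.0]]
```

For example, `apply_homography(H, (2.0, 3.0))` returns the corresponding point (12.0, 8.0) in the second view.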
Background modeling and foreground detection
The center line detection method of the target person is introduced first. Before center line detection, the input target image must be background-subtracted; center line detection is then performed on the background-subtracted image.
Detection in a multi-camera environment is built up by combining single cameras, so its detection and tracking rest on the single camera as an important foundation; the center-line detection algorithm under a single camera is therefore discussed first. The center-line-based method determines the target person's center line with the least-median-of-squares method on the basis of background subtraction, so the principle of background subtraction is described first.
Background subtraction is a method of identifying and segmenting moving targets under a static background. Ignoring the influence of noise n(x, y, t), the input video frame image I(x, y, t) can be regarded as composed of the background image B(x, y, t) and the moving target X(x, y, t):
I(x,y,t)=B(x,y,t)+X(x,y,t) (1)
The moving target X(x, y, t) can then be obtained by formula (2):
X(x,y,t)=I(x,y,t)-B(x,y,t) (2)
Under actual conditions, however, because of noise, formula (2) does not yield the true moving target but an image S(x, y, t) composed of the moving-target region and noise:
S(x,y,t)=I(x,y,t)-B(x,y,t)+n(x,y,t) (3)
Therefore, to obtain the detected target, further processing according to some decision rule is needed; the most common method is thresholding:
X(x, y, t) = { I(x, y, t), if S(x, y, t) ≥ T; 0, if S(x, y, t) < T }    (4)
In formula (4), T is a threshold; Fig. 1 is the flow diagram of the background subtraction method.
For center-line-based target detection, background subtraction is first applied to the input image to extract the foreground target. The background subtraction method used in the embodiment is A. Pentland's method for extracting foreground regions of moving targets, i.e., a single-Gaussian-model background subtraction method. The embodiment improves this method to reduce the influence of illumination changes and shadows on detection by adopting the normalized RGB color model, where r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B).
In the embodiment, the background-subtraction target detection algorithm first models the background: the input image sequence is median-filtered to obtain a single-Gaussian background model, in which the Gaussian model parameters of each point are (u_i, σ_i), where u_i and σ_i (i = r, g, b) denote the mean and variance of the corresponding point. After background modeling, the foreground region is obtained by differencing the current input image and the background image; the difference image is thresholded to obtain the binary image B_xy, in which foreground pixels are set to 1 and background pixels to 0.
Here I_i(x, y) (i = r, g, b) is the current observation of pixel (x, y), and Γ_i is a threshold parameter that can be determined empirically. The r, g, b color components of each pixel are differenced independently; as soon as one of them indicates that the current pixel has changed, the pixel is regarded as foreground. Because of noise in the detection region, the binary image is processed with morphological operators, using erosion and dilation to filter remaining noise; simply connected foreground regions are then extracted by binary connected-component analysis. The foreground detection results are shown in Fig. 2. In Fig. 2, the first and second rows are images from the view of camera 1; the first and third rows use the improved method of the invention, while the second row uses A. Pentland's method. As the figure shows, in camera 1 the influence of illumination (a strongly reflective floor) makes the results of the two methods differ: the unimproved results in the second row contain noticeable artifacts, while the first row, using the improved method, does not. The third row shows the result of the improved method under the good lighting of camera 2, where the foreground image is clearly detected.
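The per-pixel foreground test described above can be sketched as follows, assuming a single-Gaussian model per normalized color channel; the threshold factor and the data in the usage are illustrative, not the patent's tuned parameters.

```python
def is_foreground(obs, mean, std, gamma=2.5):
    """obs, mean, std are per-channel (r, g, b) values; the pixel is foreground
    as soon as ANY channel deviates from its background mean by more than
    gamma standard deviations (an illustrative choice of threshold)."""
    return any(abs(o - m) > gamma * s for o, m, s in zip(obs, mean, std))

def binary_mask(frame, bg_mean, bg_std, gamma=2.5):
    """frame: 2D grid of (r, g, b) tuples; returns B_xy with foreground = 1, background = 0."""
    return [[1 if is_foreground(px, bg_mean[y][x], bg_std[y][x], gamma) else 0
             for x, px in enumerate(row)] for y, row in enumerate(frame)]
```

A pixel whose normalized color matches the background model maps to 0; a pixel that deviates strongly in any channel maps to 1.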
Center line detection of targets
A person's center line is the target person's axis of symmetry. Compared with feature points, the center line is a more robust feature: the feature points extracted during target detection inevitably contain some errors, but by the symmetry of the center line, the errors of points symmetrically distributed on its two sides cancel each other, yielding better robustness.
The embodiment assumes that target persons walk upright, so every target person's center line exists; this assumption is reasonable and widely adopted. Based on it, the embodiment discusses center line detection in three situations: center line detection of an individual target person, detection without occlusion, and detection under occlusion.
Center line detection is based on these three situations. When a single person is detected, the single-person center line detection algorithm is applied directly. When multiple unoccluded targets appear in the camera view, they are first divided into individual single-target forms and the center line of each is detected. When mutually occluding targets appear in the camera view, the occluded targets are first segmented into independent single-target forms, and the center line detection algorithm is then applied to each.
Center line detection of an individual target person
Based on the global characteristic of the detected target person, that the human body is very nearly symmetric about its center line, the least-median-of-squares method is used to determine the center line of an individual target person. Least median of squares is a method for estimating model parameters whose objective is to minimize the median of the squared errors. The resulting estimator is robust: the parameter estimate is not easily affected by a minority of outlier data. In the invention, the method embodies the constraint that the target person is symmetric about the center line.
The center line is determined by minimizing the median of the squared perpendicular distances between the foreground pixels and the body center line. The principle is shown in Fig. 3, where E(I_i, l) is the perpendicular distance between the i-th foreground pixel I_i and the center line l to be determined. The center line L is determined by formula (6):
L = arg min_l median{ E(I_i, l)² }    (6)
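Formula (6) can be illustrated with a brute-force sketch that assumes a vertical center line x = c and searches an explicit candidate list; this is an illustrative version, not the patent's estimator implementation.

```python
import statistics

def lmeds_center_line(xs, candidates):
    """Pick the vertical line x = c minimizing the median squared horizontal
    distance to the foreground pixels' x-coordinates (formula (6))."""
    return min(candidates, key=lambda c: statistics.median((x - c) ** 2 for x in xs))
```

Because the median ignores the tails of the error distribution, a few outlier pixels (e.g., noise far from the body) barely move the estimate, whereas a mean-based fit would be dragged toward them.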
Center line detection without occlusion
Two or more target persons produce a set of foreground targets in the image motion region. When these targets do not occlude each other, determining the center line of each target mainly comprises the following two steps:
Step 1: divide the whole target region into several subregions, each corresponding to a single independent target person.
Step 2: determine the center line of the target person in each subregion.
In each divided subregion, the center line of the single target person can be determined by the individual-person center line method. Because the vertical projection histogram can represent the 2D binary profile of targets, the embodiment segments target persons with the vertical projection histogram, following the method of I. Haritaoglu: an obvious peak region between two main valleys in the histogram corresponds to a target person.
The vertical projection histogram is obtained by projecting the target persons' foreground pixels onto the horizontal coordinate of the image. Let B(x, y) be the binary image, with vertical and horizontal coordinates y and x, and let the height and width of the detected foreground region be h and w; the vertical projection histogram H(x) of this foreground region is:
H(x) = Σ_{y=1}^{h} B(x, y),  x ∈ [1, w]    (7)
Fig. 4 shows the vertical projection histogram of target persons.
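Formula (7) amounts to taking column sums of the binary image; a minimal sketch:

```python
def vertical_projection(B):
    """H(x) = sum over y of B(x, y): the column sums of the binary image (formula (7))."""
    h, w = len(B), len(B[0])
    return [sum(B[y][x] for y in range(h)) for x in range(w)]
```

For example, a 2×3 mask with foreground in the first and third columns yields the histogram [2, 0, 1]: tall columns mark body positions, zero columns mark the valleys between persons.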
A significant peak region must satisfy two conditions:
Condition 1: the maximum of the peak region is above the crest threshold P_T.
Condition 2: the two minima bounding the peak region are below the trough threshold G_T.
Suppose M_1, M_2, ..., M_n are the local extrema within a peak region, and G_l, G_r are the values of the left and right troughs of the region; the two conditions can be expressed as:
max(M_1, M_2, ..., M_n) > P_T    (8)
G_l < G_T,  G_r < G_T    (9)
As the formulas show, identifying significant peak regions depends on the crest and trough thresholds P_T and G_T, so the choice of these two values is very important. In the embodiment, the crest threshold P_T is chosen as 80 percent of the target person's height in image coordinates, and the trough threshold G_T as the mean of the whole histogram. The height of a target person in image coordinates is assumed proportional to the y coordinate of the person's position in the image, i.e., h = βy, where β is a scale factor trained from the image sequence.
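Conditions (8) and (9) can be checked with a small helper; the threshold values used below are illustrative, not the trained values of the embodiment.

```python
def is_significant_peak(peaks, g_left, g_right, p_t, g_t):
    """Conditions (8) and (9): the region's maximum exceeds the crest threshold P_T
    and both bounding troughs fall below the trough threshold G_T."""
    return max(peaks) > p_t and g_left < g_t and g_right < g_t
```

A histogram region whose local maxima reach above P_T and whose flanking valleys dip below G_T is accepted as one target person; a region that fails either test is not segmented out.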
Fig. 5 shows center line detection of target persons in a group. In Fig. 5, panel (a) is the input image, panel (b) the background-subtracted image, and panel (c) the corresponding vertical histogram, where "+" marks peaks, "o" marks valleys, the solid line marks the crest threshold P_T, and the dotted line marks the trough threshold G_T. The histogram has two obvious peak regions, from which the group can be segmented into two independent target persons, as shown in panel (d); their center lines are correctly detected, as shown in panel (e).
Center line detection of target persons under occlusion
Occlusion frequently occurs in multi-target tracking scenes and is a hard problem in computer vision. The embodiment does not consider the case of a target person occluded by a static object.
To accurately detect an occluded target person's center line, it is necessary to separate each target person's foreground pixels within the whole foreground region; the center line is then detected from the separated foreground pixels by the least-median-of-squares method. The problem thus becomes separating target persons under occlusion, which is a difficult and active topic of computer vision research worldwide. The general approach of these methods is model-classify-track: build a model for each target person, such as a color model, possibly with spatial information; classify pixels with the model; and finally track each assigned region. Common methods include target segmentation by classifying colored pixels with a color model, methods based on color histograms, and methods based on kernel density. The color-model-based method records each target person's spatial color information; although the template contains a large amount of redundant information, the method segments foreground pixels with higher reliability. For reliability, the embodiment adopts the color-model-based method to segment target persons under occlusion. The method derives from A. Senior's method and comprises a color model and an additional probability mask.
Here an appearance model is built for the target in each frame, representing how the target appears in the input image. The color model used is an RGB color model with an independent probability mask: T_RGB(X) denotes the appearance model of each pixel in the target, and its probability mask P_C(X) records the likelihood that the target appears at that pixel. For convenience, image coordinates are denoted by X; in practice the appearance model represents a local region of the image, translated to the centroid of the current target normalized to image coordinates. Whenever the calibration is known, the appearance model T_RGB(X) and probability mask P_C(X) of any point X in the image can be computed; when P_C(X) is 1, the target is outside the model region. When a tracked target is established, a rectangular appearance model is created with the same size as the bounding box of the foreground region. The color model is initialized by copying the pixels of the foreground elements; the corresponding mask probabilities are initialized to 0.4, or to zero for pixels that do not belong to the target to be tracked.
In subsequent frames, the appearance model is updated with the changes of the current foreground region. The color model is updated by mixing the current image pixels with the appearance model over all foreground pixels, and all probability-mask values are updated accordingly (χ = η = 0.95).
This yields a continuously updated appearance model over the foreground region, together with its continuously updated observation probability. The observation probability can be thresholded to obtain a binary mask of the target; it also provides information about non-rigid changes of the target, such as the observed information of regions within the whole area that do not contain the target person's legs.
Color model (the T of target i i) by color variance C i(X) composition, the rgb colouring information of each pixel of its record object i, and relevant probability mask P[I i(X) possibility that], its record object i observes at pixel X place.In the time that a new target is tracked, by its color model of initialization, and keep continuous updating in follow-up successive frame.The coordinate of pixel X will be normalized into the position of current goal in image coordinate system.The color distribution of each pixel of moving target is defaulted as Gaussian distribution.
The color that makes I (X) observe for pixel X, needs to know the probability density p[I (X) of I (X) under color model i | T i(X)].Block under environment, foreground pixel cut apart the problem that has formed a classification, this problem has determined which color model each foreground pixel belongs to.Utilize bayes rule, and the probability P of model i [I (X) | T i(X)], known I (X) in the situation that, draw.In the time providing pixel X in the probability of model m and reach maximal value, pixel X is classified in model m.
Given the color model of a moving target, the color distribution of each of the target's pixels can be approximated by a spherical Gaussian model:
r_rgb(X) = (2πσ²)^(−3/2) exp{ −‖I(X) − T(X)‖² / (2σ²) }    (12)
A foreground pixel X belongs to the k-th moving object (model) if it satisfies formula (13):
k = arg max_i P_i(X), i ∈ [1, N]    (13)
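Formulas (12) and (13) can be sketched as follows; for simplicity this sketch assumes equal priors, so posterior maximization reduces to likelihood maximization, and the model colors and σ are illustrative.

```python
import math

def gaussian_likelihood(obs, model, sigma):
    """Spherical Gaussian density of formula (12): squared distance between the
    observed and modeled RGB values, with one variance sigma^2 for all channels."""
    d2 = sum((o - t) ** 2 for o, t in zip(obs, model))
    return (2 * math.pi * sigma ** 2) ** -1.5 * math.exp(-d2 / (2 * sigma ** 2))

def classify_pixel(obs, models, sigma=10.0):
    """Formula (13) under equal priors: assign the pixel to the model k of
    maximal likelihood."""
    return max(range(len(models)), key=lambda k: gaussian_likelihood(obs, models[k], sigma))
```

For example, with a reddish model and a bluish model, a reddish foreground pixel is assigned to index 0 and a bluish one to index 1.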
After the occluded targets are separated, the center line of each target is detected in the same way as for an individual person. Fig. 6 gives an example of detecting target persons' center lines under occlusion: a target person is occluded, the color-template-based method separates the targets, and the center line of each target person is detected from the pixels of the visible part.
Distinguishing target states in the three situations
The center lines of target persons in the three situations are distinguished by the algorithm based on least-median-of-squares estimation.
The three situations can be judged from the correspondence of targets between successive frames during tracking. A target in the tracked state in the previous frame is defined as a "tracked target", and a target in the detected state in the current frame as a "detected target"; the three situations are judged by the following rules:
Situation 1: when there is only one "tracked target" (i.e., only one target person in the previous frame) corresponding to one "detected target" in the current frame, and the vertical histogram of the "detected target" has only one significant peak region, the "detected target" is classified as an independent target person.
Situation 2: when more than one "tracked target" in the previous frame corresponds to a "detected target" in the current frame, the "detected target" is very likely composed of several target persons. In this case the "detected target" is first segmented with the vertical projection histogram; if this fails, the case is classified as "multiple targets under occlusion" and the color-model-based method is used to detect the center lines under occlusion.
Situation 3: if more than one "tracked target" corresponds to the same "detected target", the case is classified as "occluded".
Experimental verification
In order to verify the accuracy of center line detecting target people under multi-cam, under laboratory environment, utilize three cameras to test two target people, and carried out some relatively.
In experiment, target people to be tracked indicates with coloured bounding box, the color of different target people's indicia framings is different, vertical curve in frame is this target people's center line, the intersection point of center line and bounding box be target people's " ground point ", this ground point is realized instant renewal by corresponding intersection region, and the hemline of bounding box is determined by target people " ground point ", and upper, the left and right edge line of bounding box is determined by target people's foreground pixel.
In the experiments, the valley threshold G_t is set to the mean of the whole histogram, and the peak threshold P_t is set to 80% of the foreground region height. Based on experience, the distance threshold D_t is set to 5.
Two target people were tracked with three cameras in a laboratory environment over a video sequence of 800 frames. Under most conditions the targets were detected and matched accurately, with the following two exceptions:
Situation one: the target appears too small in the image to be detected;
Situation two: the target is severely occluded, or the target person's appearance model cannot be formed in time during the first few frames after the target appears.
An upright-human detection experiment was carried out on an indoor human body image library. In the following experiments, a detection is counted as correct when the detected center line overlaps the ground-truth center line of the human target by more than 80%.
The target center line was modeled, and its detection performance was tested for target people under different indoor conditions (occlusion, no occlusion, illumination variation, etc.). As the results in Fig. 7 show, the method adapts well to complete human bodies under different lighting and unoccluded conditions, and detects their center lines accurately. However, when a target is only partially within the camera's field of view, missed detections occur, and under severe occlusion false detections occur, for example two severely occluded targets being mistaken for one.
Fig. 8 shows some corresponding detection results from frame 100 to frame 600. In this video sequence, two target people appear in each camera's field of view. In frame 326, the two targets in the view of camera 3 are close together; without the method for segmenting targets under occlusion, detection fails and the two targets are merged into one, but in subsequent frames the method again detects the two targets' center lines accurately, as in the view of camera 3 in frame 576. In that frame occlusion also occurs in the view of camera 1, and segmenting the targets under occlusion effectively detects the center lines of both target people. In frame 128, the target person in the view of camera 2 is incomplete and affected by illumination, so the center line is not detected. Some missed detections occur when the human body is partially occluded, and false detections occur in background regions with criss-crossing lines.
The method of the embodiment has low sensitivity to errors in motion detection and strong robustness. In the single-camera view, the target person's center line is detected well both with and without occlusion, and experiments show that the method is effective and robust. Accurate detection of the target person's center line provides a solid basis for subsequent target confirmation and tracking across multiple views.

Claims (9)

1. A target detection method based on the human body center line in a multi-camera environment, characterized in that:
a binary image of the target person is obtained by the background difference technique;
the target person's principal axis information is obtained by the least median of squares method;
the homography matrices between the cameras are computed, and the corresponding positions of a target across the multiple cameras are obtained through these homography matrices, realizing detection of moving targets across the multiple cameras.
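As a sketch of the homography step in this claim, a point can be transferred between camera views by multiplying with the 3×3 inter-camera homography and dividing out the homogeneous scale (the matrix below is illustrative, not a calibrated one):

```python
import numpy as np

def map_point(H, p):
    """Map an image point p = (x, y) from one camera view into another
    through the 3x3 inter-camera homography H."""
    q = H @ np.array([p[0], p[1], 1.0])   # lift to homogeneous coordinates
    return q[:2] / q[2]                   # perspective division

# a pure-translation homography shifts the point by (5, -3)
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
print(map_point(H, (2.0, 4.0)))           # [7. 1.]
```

A real system would estimate H from ground-plane point correspondences between the two views.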
2. The target detection method based on the human body center line in a multi-camera environment as claimed in claim 1, characterized in that the background-difference target detection specifically comprises:
Background modeling: the input image sequence is median-filtered to obtain a single-Gaussian background model, in which each pixel of the background image has Gaussian model parameters (u_i, σ_i), where u_i and σ_i (i = r, g, b) denote the mean and variance of the corresponding pixel in the background model;
After background modeling, the foreground region is obtained as the difference between the current input image and the background image; thresholding the difference image then yields the binary image B_xy, in which foreground pixels are represented as 1 and background pixels as 0,
where I_i(x, y) (i = r, g, b) is the current observation of pixel (x, y) and Γ_i is the threshold parameter.
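A minimal sketch of this background-difference step; the per-channel test (a pixel is foreground when any channel deviates from the background mean by more than its threshold Γ_i) is our reading of the partly garbled claim text:

```python
import numpy as np

def binary_foreground(frame, bg_mean, gamma):
    """Background difference and thresholding: returns the binary image
    B_xy with foreground pixels as 1 and background pixels as 0.

    frame, bg_mean - H x W x 3 arrays of (r, g, b) values
    gamma          - length-3 vector of per-channel thresholds Gamma_i
    """
    diff = np.abs(frame.astype(float) - bg_mean.astype(float))
    return (diff > gamma).any(axis=2).astype(np.uint8)

frame = np.array([[[100, 100, 100], [200, 100, 100]]], dtype=np.uint8)
bg    = np.array([[[100, 100, 100], [100, 100, 100]]], dtype=np.uint8)
mask  = binary_foreground(frame, bg, gamma=np.array([30, 30, 30]))
print(mask)   # [[0 1]]
```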
3. The target detection method based on the human body center line in a multi-camera environment as claimed in claim 2, characterized in that: the binary image is processed with morphological operators, using erosion, dilation, and similar operations to filter out remaining noise; simply connected foreground regions are then extracted by binary connected-component analysis.
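The connected-component extraction named in this claim can be sketched with a simple 4-connected flood fill (a stand-in for the binary connected-component analysis; a real system would use a library routine):

```python
import numpy as np
from collections import deque

def connected_regions(mask):
    """Label 4-connected foreground regions of a binary mask.
    Returns the label image and the number of regions found."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not labels[sy, sx]:
                count += 1                      # start a new region
                labels[sy, sx] = count
                queue = deque([(sy, sx)])
                while queue:                    # flood fill the region
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not labels[ny, nx]:
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count
```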
4. The target detection method based on the human body center line in a multi-camera environment as claimed in any one of claims 1-3, characterized in that the algorithm based on least-median-of-squares estimation distinguishes the center lines of target people in three situations by the following rules:
when there is only one "tracking target", i.e. only one target person in the previous frame, corresponding to one "detection target" in the current frame, and the vertical histogram of the "detection target" has only one significant peak region, the "detection target" is classified as an independent target person;
when there is more than one "tracking target" in the previous frame corresponding to the "detection targets" in the current frame, a "detection target" is very likely made up of several target people; the "detection target" is first segmented with the vertical projection histogram; if this fails, the situation is classified as "multiple targets under occlusion" and the color-model-based method is used to detect the center lines of the occluded targets;
if more than one "tracking target" corresponds to the same "detection target", the situation is classified as "occluded".
5. The target detection method based on the human body center line in a multi-camera environment as claimed in claim 4, characterized in that the least median of squares method is used to determine an independent target person's center line, the center line L being determined by formula (6):
L = arg min_l median{ E(I_i, l)² }    (6)
where E(I_i, l) is the perpendicular distance between the i-th foreground pixel I_i and the candidate center line l.
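For a vertical center line x = c, formula (6) reduces to minimizing the median squared horizontal deviation of the foreground pixels. A brute-force sketch over candidate positions (the candidate grid is our simplification):

```python
import numpy as np

def lmeds_center_line(xs):
    """Least-median-of-squares estimate of a vertical center line x = c
    from foreground pixel x-coordinates: c minimizes median((x_i - c)^2),
    the vertical-line form of formula (6). Robust against up to half of
    the pixels being outliers (e.g. an outstretched arm)."""
    candidates = np.unique(xs)                       # brute-force candidate grid
    costs = [np.median((xs - c) ** 2) for c in candidates]
    return candidates[int(np.argmin(costs))]
```

Note how an outlier pixel far from the body barely moves the estimate, unlike a least-squares mean.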
6. The target detection method based on the human body center line in a multi-camera environment as claimed in claim 4, characterized in that target center line detection under unoccluded conditions specifically comprises:
dividing the whole target region into several sub-regions, each corresponding to a single independent target person; and determining the center line of the independent target person within each sub-region.
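A sketch of this sub-region division using the vertical projection histogram, with the valley and peak thresholds G_t and P_t chosen as in the experiments (histogram mean and 80% of the region height); the run-based splitting itself is our simplification:

```python
import numpy as np

def split_by_vertical_histogram(mask, peak_frac=0.8):
    """Split a binary foreground mask into per-person column ranges.
    Columns above the valley threshold form candidate runs; a run is
    kept as a sub-region only if it contains a significant peak."""
    hist = mask.sum(axis=0)                  # foreground count per column
    valley_t = hist.mean()                   # valley threshold G_t
    peak_t = peak_frac * mask.shape[0]       # peak threshold P_t
    runs, start = [], None
    for x, h in enumerate(hist):
        if h > valley_t and start is None:
            start = x
        if h <= valley_t and start is not None:
            runs.append((start, x - 1))
            start = None
    if start is not None:
        runs.append((start, len(hist) - 1))
    # keep only sub-regions containing a significant peak
    return [(a, b) for a, b in runs if hist[a:b + 1].max() >= peak_t]
```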
7. The target detection method based on the human body center line in a multi-camera environment as claimed in claim 4, characterized in that: for the occluded situation, the foreground pixels of each target person are separated within the whole foreground region, and the target person's center line is then detected from the separated foreground pixels by the least median of squares method.
8. The target detection method based on the human body center line in a multi-camera environment as claimed in claim 7, characterized in that: the method based on a color model is adopted to segment the target people under occlusion;
the color model is updated by mixing the current image pixels with the appearance model of all foreground pixels, and all mask probabilities are likewise updated by the following formula (χ = η = 0.95):
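The mixing formula itself is not reproduced in the source text; a plausible reading, hedged as our assumption, is an exponential convex combination of the old model and the current observation with mixing weight χ = η = 0.95:

```python
def update_color_model(model, pixel, chi=0.95):
    """Exponential-mixing update of the appearance/color model (assumed
    form: the claim gives chi = eta = 0.95 but not the formula itself).
    The old model keeps weight chi; the current pixel contributes the
    remaining 1 - chi."""
    return chi * model + (1.0 - chi) * pixel
```

With χ = 0.95 the model adapts slowly, so a brief occlusion does not erase a person's learned appearance.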
9. The target detection method based on the human body center line in a multi-camera environment as claimed in claim 8, characterized in that: given the color model of a moving target, the color distribution of the target's pixels is approximated by a spherical Gaussian model:
r_rgb(X) = (2πσ²)^(−3/2) · exp{ −‖I(X) − T(X)‖² / (2σ²) }    (12)
A foreground pixel X satisfying formula (13) belongs to the k-th moving object:
k = arg max_i P_i(X), i ∈ [1, N]    (13).
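Formulas (12) and (13) together can be sketched as follows, with σ and the model colors chosen purely for illustration:

```python
import math

def likelihood(I_X, T_X, sigma):
    """Spherical Gaussian of formula (12): likelihood of observing
    colour I(X) given the model colour T(X)."""
    d2 = sum((a - b) ** 2 for a, b in zip(I_X, T_X))
    return (2 * math.pi * sigma ** 2) ** -1.5 * math.exp(-d2 / (2 * sigma ** 2))

def assign_pixel(I_X, models, sigma=10.0):
    """Formula (13): assign foreground pixel X to the object k whose
    model colour T_k(X) gives the highest likelihood."""
    scores = [likelihood(I_X, T, sigma) for T in models]
    return scores.index(max(scores))
```

For example, a near-white pixel is assigned to a light-colored model rather than a dark one, which is how the occluded targets' pixels get separated before center line fitting.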
CN201410268650.4A 2014-06-16 2014-06-16 Object detection method based on human body center line in multi-cam environment Active CN104200483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410268650.4A CN104200483B (en) 2014-06-16 2014-06-16 Object detection method based on human body center line in multi-cam environment


Publications (2)

Publication Number Publication Date
CN104200483A true CN104200483A (en) 2014-12-10
CN104200483B CN104200483B (en) 2018-05-18

Family

ID=52085769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410268650.4A Active CN104200483B (en) 2014-06-16 2014-06-16 Object detection method based on human body center line in multi-cam environment

Country Status (1)

Country Link
CN (1) CN104200483B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763512A (en) * 2009-12-11 2010-06-30 西安电子科技大学 Method for semi-automatically detecting road target in high-resolution remote sensing images
CN102831446A (en) * 2012-08-20 2012-12-19 南京邮电大学 Image appearance based loop closure detecting method in monocular vision SLAM (simultaneous localization and mapping)
CN103065323A (en) * 2013-01-14 2013-04-24 北京理工大学 Subsection space aligning method based on homography transformational matrix
CN103778436A (en) * 2014-01-20 2014-05-07 电子科技大学 Pedestrian gesture inspecting method based on image processing


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ISMAIL HARITAOGLU 等: "W4: Real-Time Surveillance of People and Their Activities", 《PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
ROBERTO VEZZANI 等: "Probabilistic people tracking with appearance models and occlusion classification: The AD-HOC system", 《PATTERN RECOGNITION LETTERS》 *
SAAD M. KHAN 等: "A Multiview Approach to Tracking People in Crowded Scenes using a Planar Homography Constraint", 《COMPUTER VISION》 *
YEE-HONG YANG 等: "The Background Primal Sketch: An Approach for Tracking Moving Objects", 《MACHINE VISION AND APPLICATIONS》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3258214A4 (en) * 2015-02-12 2019-03-13 Hitachi Automotive Systems, Ltd. Object detection device
US10627228B2 (en) 2015-02-12 2020-04-21 Hitachi Automotive Systems, Ltd. Object detection device
CN106327493A (en) * 2016-08-23 2017-01-11 电子科技大学 Multi-visual-angle image object detecting method based on visual saliency
CN106327493B (en) * 2016-08-23 2018-12-18 电子科技大学 A kind of multi-view image object detection method of view-based access control model conspicuousness
CN110135382A (en) * 2019-05-22 2019-08-16 北京华捷艾米科技有限公司 A kind of human body detecting method and device
CN110135382B (en) * 2019-05-22 2021-07-27 北京华捷艾米科技有限公司 Human body detection method and device
CN110307791A (en) * 2019-06-13 2019-10-08 东南大学 Vehicle length and speed calculation method based on three-dimensional vehicle bounding box
CN110307791B (en) * 2019-06-13 2020-12-29 东南大学 Vehicle length and speed calculation method based on three-dimensional vehicle boundary frame
CN110349206A (en) * 2019-07-18 2019-10-18 科大讯飞(苏州)科技有限公司 A kind of method and relevant apparatus of human body symmetrical detection
CN110349206B (en) * 2019-07-18 2023-05-30 科大讯飞(苏州)科技有限公司 Method and related device for detecting human body symmetry

Also Published As

Publication number Publication date
CN104200483B (en) 2018-05-18

Similar Documents

Publication Publication Date Title
CN106203274B (en) Real-time pedestrian detection system and method in video monitoring
JP6549797B2 (en) Method and system for identifying head of passerby
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN105608479B (en) In conjunction with the anomaly detection method and system of depth data
US20100202657A1 (en) System and method for object detection from a moving platform
CN101854467B (en) Method for adaptively detecting and eliminating shadow in video segmentation
CN106373143A (en) Adaptive method and system
CN103164858A (en) Adhered crowd segmenting and tracking methods based on superpixel and graph model
CN105893946A (en) Front face image detection method
CN104200483A (en) Human body central line based target detection method under multi-camera environment
CN102243765A (en) Multi-camera-based multi-objective positioning tracking method and system
CN103035013A (en) Accurate moving shadow detection method based on multi-feature fusion
CN104123529A (en) Human hand detection method and system thereof
CN106778633B (en) Pedestrian identification method based on region segmentation
CN106503170B (en) It is a kind of based on the image base construction method for blocking dimension
CN106384345A (en) RCNN based image detecting and flow calculating method
Kim et al. Autonomous vehicle detection system using visible and infrared camera
Fradi et al. Spatio-temporal crowd density model in a human detection and tracking framework
Yang et al. Vehicle detection methods from an unmanned aerial vehicle platform
CN106296708B (en) Car tracing method and apparatus
CN105893957A (en) Method for recognizing and tracking ships on lake surface on the basis of vision
CN107862262A (en) A kind of quick visible images Ship Detection suitable for high altitude surveillance
Chen et al. A precise information extraction algorithm for lane lines
Yang et al. Vehicle detection from low quality aerial LIDAR data
Sajid et al. Crowd counting using adaptive segmentation in a congregation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20201030

Address after: Room 2, No.2, No.2, Kechuang Road, NO.201, Qixia District, Nanjing, Jiangsu Province

Patentee after: Nanjing huaruizhiguang Information Technology Research Institute Co., Ltd

Address before: Yuen Road Qixia District of Nanjing City, Jiangsu Province, No. 9 210023

Patentee before: NANJING University OF POSTS AND TELECOMMUNICATIONS