CN104200483B - Object detection method based on human body center line in multi-cam environment - Google Patents
- Publication number
- CN104200483B (application CN201410268650.4A)
- Authority
- CN
- China
- Prior art keywords
- target
- center line
- target person
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The present invention provides an object detection method based on the human-body center line in a multi-camera environment. A binary image of the target person is obtained by background subtraction; the main-axis information of the target person is obtained by the least-median-of-squares method; and the corresponding positions of a target across multiple cameras are obtained by computing the homography matrix between the cameras and applying that matrix, realizing detection of a moving target across multiple cameras. The method has very low sensitivity to errors in motion detection and strong robustness. Both with and without occlusion, the center line of the target person can be detected well in a single camera's field of view, and experiments show that the method is effective and robust. Accurate detection of the target person's center line provides a sound basis for the subsequent identification and tracking of targets across multiple views.
Description
Technical field
The present invention relates to an object detection method based on the human-body center line in a multi-camera environment.
Background technology
Target detection, also called target extraction, is an image segmentation method based on the geometric and statistical features of the target. It unifies the segmentation and recognition of the target, and its accuracy and real-time performance are key capabilities of an entire target tracking system. Automatic extraction and recognition of targets become especially important when multiple targets in a complex scene must be processed in real time.
In multi-camera tracking, it is first necessary to detect which targets appear in each camera's picture; only when the target information under each view is known can that information be fused in subsequent steps for cooperative tracking. In practice, however, noise in the scene means the pixels an algorithm detects for a target person usually contain errors; illumination changes and shadows also affect detection, introducing artifacts into the result; and the feature points extracted during target detection inevitably carry errors that hinder robustness. In particular, occlusion frequently arises in multi-target tracking scenes; it remains a hard problem in computer vision and must be considered in target detection.
Summary of the invention
The object of the present invention is to provide an object detection method based on the human-body center line in a multi-camera environment that detects the center line of a target person well both with and without occlusion, overcoming the artifacts in detection results caused by noise, illumination changes, and shadows in the actual scene, and solving the detection problem when targets occlude one another.
The technical solution of the present invention is:
An object detection method based on the human-body center line in a multi-camera environment:
a binary image of the target person is obtained by background subtraction;
the main-axis information of the target person is obtained by the least-median-of-squares method;
the corresponding positions of a target across the multiple cameras are obtained by computing the homography matrix between the cameras and applying that matrix, realizing detection of the moving target across the multiple cameras.
Further, the background-subtraction target detection is specifically:
the background is modeled, i.e., the input image sequence is median-filtered to obtain a single-Gaussian background model; the Gaussian parameters of each point in the background image are (u_i, σ_i), i = r, g, b, where u_i and σ_i denote the mean and variance of the corresponding point in the background model;
after background modeling, the foreground region is obtained from the difference between the input current image and the background image; thresholding the difference image then yields the binary image B_xy, in which foreground pixels are represented as 1 and background pixels as 0,
where I_i(x, y), i = r, g, b, is the current observation of pixel (x, y) and Γ_i is a threshold parameter.
Further, morphological operations are applied to the binary image, using erosion and dilation to further filter noise; simply connected foreground regions are then extracted by binary connected-component analysis.
Further, the algorithm based on least-median-of-squares estimation distinguishes the target-person center line in three situations by the following rules:
when there is only one "tracking target", i.e., only one target person in the previous frame, the corresponding "detection target" in the current frame has only one significant peak region in its vertical histogram, and that "detection target" is classified as an independent target person;
when the previous frame contains more than one "tracking target" corresponding to "detection targets" in the current frame, a "detection target" is very likely composed of several target persons; the vertical projection histogram is first used to split that "detection target", and if this fails, the case is classified as "multiple targets under occlusion" and the method based on the color model is used to detect the center lines of the targets under occlusion;
if one "tracking target" corresponds to more than one "detection target", the case is classified as "being occluded".
Further, the center line of an individual target person is determined by the least-median-of-squares method; the center line L is determined by formula (6):
L = arg min_l med_i E(I_i, l)^2 (6)
where E(I_i, l) is the perpendicular distance between the i-th foreground pixel I_i and the candidate center line l.
Further, center-line detection without occlusion is specifically: the entire target region is divided into several sub-regions, each corresponding to a single independent target person; the center line of the independent target person in each sub-region is then determined.
Further, for the occluded case, the foreground pixels of each target person are separated within the entire foreground region, and the center line of the target person is then detected from the separated foreground pixels by the least-median-of-squares method.
Further, target persons under occlusion are segmented by the method based on the color model;
the color model is updated by blending the appearance model of the current picture with that of all foreground pixels, and all mask-probability values are likewise updated by the corresponding update formula (χ = η = 0.95).
Further, given the color model of a moving target, the color distribution of the target's pixels is approximated by a spherical Gaussian model;
a foreground pixel X that satisfies formula (13) belongs to the k-th moving object.
The beneficial effects of the invention are: the method has very low sensitivity to errors in motion detection and strong robustness. Both with and without occlusion, the center line of the target person can be detected well in a single camera's field of view, and experiments show that the method is effective and robust. Accurate detection of the target person's center line provides a sound basis for the subsequent identification and tracking of targets across multiple views.
Description of the drawings
Fig. 1 is a flow diagram of background subtraction in the embodiment;
Fig. 2 is a schematic diagram of foreground detection of target persons in the embodiment;
Fig. 3 is a schematic diagram of center-line detection for a single target person;
Fig. 4 is the vertical projection histogram of target persons;
Fig. 5 is a schematic diagram of group-target detection without occlusion;
Fig. 6 is a schematic diagram of center-line detection under occlusion;
Fig. 7 shows detection of target persons under different monitoring views and changing conditions;
Fig. 8 shows center-line detection of target persons with three cameras in the laboratory environment.
Specific embodiment
The preferred embodiments of the invention will now be described in detail with reference to the accompanying drawings.
The center-line-based algorithm detects the center line of each target person in each camera's image plane, which approximates the target person's axis of symmetry, and defines a center-line matching likelihood function; the homography between different views serves as a geometric constraint, so that targets can be detected from different viewpoints.
In the embodiment's multi-target detection method based on the human-body center line under a multi-camera environment, the target person's center line serves as the matching feature between cameras. The center line is chosen as the target feature because it is an axis of symmetry: although scene noise means the detected pixels of a target person usually contain errors, the symmetry of the human body lets the errors from pixels on the two sides of the center line cancel each other, reducing the error in detection.
Exploiting the near symmetry of the human body about its center line, the center-line-based target detection algorithm detects the main axis of a single target person by the least-median-of-squares method, then uses the homography correspondence between cameras to obtain each detected target's corresponding position in every camera, realizing target detection across multiple cameras.
On the basis of single-camera tracking, the embodiment first obtains the binary image of the target person by background subtraction, then obtains the target person's main-axis information by the least-median-of-squares method, and finally computes the homography matrix between the cameras and uses it to obtain the target's corresponding positions across cameras, realizing detection of the moving target in multiple cameras.
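The homography-correspondence step can be sketched as follows. This is a minimal illustration, not the patent's implementation: the matrix values and the image point are hypothetical, and in practice the 3x3 homography would be estimated from ground-plane correspondences between the two camera views.

```python
import numpy as np

def transfer_point(H, pt):
    """Map an image point from camera A's plane to camera B's via the
    3x3 homography H, using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])  # back to inhomogeneous coordinates

# Hypothetical ground-plane homography: a pure translation by (10, -5).
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, -5.0],
              [0.0, 0.0,  1.0]])
p = transfer_point(H, (100.0, 200.0))
print(p)  # -> [110. 195.]
```

Mapping each target's ground point (the intersection of its center line with the ground plane) through such a matrix is what links detections of the same person across cameras.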
Background modeling and foreground detection
The center-line detection method for the target person is introduced first. Before center-line detection, background subtraction must be applied to the input target image; the center line of the target person is then detected on the subtracted image. Detection in a multi-camera environment is built up from single cameras, and detection and tracking likewise rest on the single camera, so the algorithm for detecting the human-body center line under a single camera is discussed first. The center-line method determines the target person's center line by least median of squares on the basis of background subtraction, so the principle of background subtraction is explained first.
Background subtraction is a method for identifying and segmenting a moving target against a static background. If the influence of noise n(x, y, t) is ignored, an input video frame I(x, y, t) can be regarded as composed of a background image B(x, y, t) and a moving target X(x, y, t):
I(x, y, t) = B(x, y, t) + X(x, y, t) (1)
so the moving target X(x, y, t) can be obtained from formula (2):
X(x, y, t) = I(x, y, t) - B(x, y, t) (2)
In practice, however, noise prevents formula (2) from recovering the true moving target; what is obtained is a composite image S(x, y, t) of the moving-target region and the noise:
S(x, y, t) = I(x, y, t) - B(x, y, t) + n(x, y, t) (3)
The detection target must therefore be obtained by further processing according to some decision rule, the most common being thresholding:
X(x, y, t) = 1 if |S(x, y, t)| > T, and 0 otherwise (4)
where T is a threshold. Fig. 1 is the flow chart of background subtraction.
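The thresholding of formulas (3)-(4) can be sketched as follows; the image size, noise level, and threshold are hypothetical values chosen only to make the example self-contained.

```python
import numpy as np

def threshold_difference(I, B, T):
    # Formula (4): binarize the noisy difference S = I - B + n of formula (3);
    # a pixel belongs to the moving target when |S| exceeds the threshold T.
    S = I.astype(float) - B.astype(float)
    return (np.abs(S) > T).astype(np.uint8)

rng = np.random.default_rng(0)
B = np.zeros((4, 4))                    # static background B(x, y, t)
I = B + rng.normal(0.0, 1.0, B.shape)   # observation with noise n(x, y, t)
I[1:3, 1:3] += 50.0                     # a 2x2 moving-target region X(x, y, t)
mask = threshold_difference(I, B, T=10.0)
print(int(mask.sum()))  # -> 4: only the true target region survives
```

The threshold T must sit well above the noise amplitude but below the target-background contrast, which is why the embodiment later determines its per-channel thresholds empirically.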
Center-line-based target detection first applies background subtraction to the input image to extract the foreground target. The embodiment uses A. Pentland's method for extracting foreground regions of moving targets, i.e., the single-Gaussian background-subtraction model, improved here to reduce the influence of illumination changes and shadows on detection by using a normalized color model, where r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B).
In the embodiment, the background-subtraction detection algorithm first models the background: the input image sequence is median-filtered to obtain a single-Gaussian background model, in which the Gaussian parameters of each point in the background image are (u_i, σ_i), i = r, g, b, with u_i and σ_i the mean and variance of the corresponding point in the background model. After background modeling, the foreground region is obtained from the difference between the current image and the background image; thresholding the difference image yields the binary image B_xy, with foreground pixels represented as 1 and background pixels as 0.
Here I_i(x, y), i = r, g, b, is the current observation of pixel (x, y), and Γ_i is a threshold parameter that can be determined empirically. The difference test runs independently on the r, g, b color components of each pixel; as soon as any one component judges the current pixel to have changed, the pixel is treated as foreground. Because noise affects the detection region, morphological operations are applied to the binary image, using erosion and dilation to further filter the noise; simply connected foreground regions are then extracted by binary connected-component analysis. Foreground detection results are shown in Fig. 2. In Fig. 2, the first and second rows show the view of camera 1; the first and third rows use the improved method of the present invention, while the second row uses A. Pentland's method. As the figure shows, strong reflection from the ground under camera 1's illumination makes the methods differ noticeably: the unimproved result in the second row contains visible artifacts, while the improved result in the first row does not. The third row shows the improved method under the good lighting conditions of camera 2's view, where the foreground image is detected cleanly.
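A minimal sketch of the pipeline described above, under hypothetical image values and thresholds: a normalized-rgb per-channel Gaussian test followed by binary connected-component analysis (the erosion/dilation step is omitted for brevity, and the 3-sigma factor is an assumption rather than the patent's Γ_i).

```python
import numpy as np
from collections import deque

def normalized_rgb(img):
    # r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B): damps illumination/shadows
    s = img.sum(axis=2, keepdims=True)
    return img / np.clip(s, 1e-9, None)

def foreground_mask(frame, mean, std, gamma=3.0):
    # Per-channel test run independently on r, g, b: the pixel is foreground
    # as soon as ANY channel deviates from the background mean by > gamma*std
    dev = np.abs(normalized_rgb(frame) - mean) > gamma * std
    return dev.any(axis=2).astype(np.uint8)

def connected_components(mask):
    # 4-neighbour binary connected-component labeling via BFS flood fill
    labels = np.zeros(mask.shape, dtype=int)
    h, w = mask.shape
    n = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not labels[sy, sx]:
                n += 1
                labels[sy, sx] = n
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] \
                                and not labels[ny, nx]:
                            labels[ny, nx] = n
                            q.append((ny, nx))
    return labels, n

bg = np.zeros((6, 6, 3)); bg[..., 0] = 200.0             # uniformly red background
frame = bg.copy(); frame[1:3, 1:3] = (0.0, 200.0, 0.0)   # one green "target" patch
mean = normalized_rgb(bg)                                # background model mean
std = np.full(mean.shape, 0.02)                          # small per-channel spread
mask = foreground_mask(frame, mean, std)
labels, n = connected_components(mask)
print(n, int(mask.sum()))  # -> 1 4: one simply connected region of 4 pixels
```

Because the test works on illumination-normalized r, g, b rather than raw RGB, a brightening of the whole scene leaves the channel ratios (and hence the mask) largely unchanged, which is the point of the improvement.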
The center line detecting of target
The center line of a person is the target person's axis of symmetry. As a feature it is more robust than feature points: the feature points extracted during target detection inevitably carry errors, but by the symmetry of the center line, the errors of points distributed symmetrically about it cancel each other, yielding better robustness. The embodiment assumes every target person walks upright, so the center line always exists; this assumption is reasonable in practice and widely adopted. Under this assumption, the embodiment discusses center-line detection of target persons in three situations: an independent target person, multiple targets without occlusion, and targets under occlusion.
Center-line detection is organized around these three situations. For a single person, the single-person center-line algorithm is applied directly; when multiple unoccluded targets appear in the camera view, they are first divided into separate single target persons and the center line of each is detected in turn; when mutually occluding targets appear in the camera view, the occluded targets are first segmented into independent single-target forms and the center-line algorithm is then applied to each.
The center line detecting of independent target person
Based on the global constraint that the two sides of the detected target person are nearly symmetric about the center line, the center line of an individual target person is determined by the least-median-of-squares method. Least median of squares is a model-parameter estimation method whose objective is to minimize the median of the squared errors; its estimator is robust, so the parameter estimate is not easily affected by a few outliers. In the present invention the method embodies the target person's symmetry constraint about the center line.
The center line is the line that minimizes the median of the squared perpendicular distances from the foreground pixels to the human-body center line. The principle is shown in Fig. 3, where E(I_i, l) is the perpendicular distance between the i-th foreground pixel I_i and the candidate center line l; the center line L is determined by formula (6):
L = arg min_l med_i E(I_i, l)^2 (6)
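The least-median-of-squares estimate of formula (6) can be sketched for the simplified case of a vertical center line, where E(I_i, l) reduces to the horizontal distance |x_i - l|; the pixel x-coordinates below are hypothetical.

```python
import numpy as np

def lmeds_center_line(xs):
    # Formula (6): choose l minimizing the median of the squared horizontal
    # distances E(I_i, l)^2 from the foreground pixels to the line x = l.
    # The median ignores up to half the data, so stray pixels barely move it.
    candidates = np.unique(xs).astype(float)
    medians = [np.median((xs - l) ** 2) for l in candidates]
    return candidates[int(np.argmin(medians))]

# Body pixels clustered symmetrically around x = 10, plus gross outliers at 30.
xs = np.array([9, 10, 10, 10, 10, 10, 11, 30, 30])
print(lmeds_center_line(xs))   # -> 10.0
print(round(xs.mean(), 2))     # -> 14.44 (a least-squares mean is dragged off)
```

The contrast with the ordinary mean illustrates why the patent prefers this estimator: noisy foreground pixels far from the body barely influence the median of the squared residuals.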
Target's center's line detection under the conditions of unobstructed
When two or more target persons produce a single foreground set in the image motion region and the targets do not occlude each other, determining the center line of each target mainly involves two steps:
Step 1: divide the entire target region into several sub-regions, each corresponding to a single independent target person.
Step 2: determine the center line of the target person in each sub-region.
Within each divided sub-region, the center line of the single target person can be determined by the single-person method described above. Since the vertical projection histogram represents the 2D binary silhouette of the target, the embodiment follows I. Haritaoglu's method and segments target persons with the vertical projection histogram: an obvious peak region between two main valleys in the histogram corresponds to one target person.
The vertical projection histogram is the series of bars obtained by projecting the target person's foreground pixels onto the horizontal image coordinate. Let B(x, y) be the binary image, with y and x the vertical and horizontal coordinates, and h and w the height and width of the detected foreground region; the vertical projection histogram of the foreground region is:
H(x) = Σ_{y=1..h} B(x, y) (7)
Fig. 4 gives the vertical projection histogram of target persons.
An obvious peak region must satisfy two conditions:
Condition 1: the maximum of the obvious peak region lies above the crest threshold (P_T).
Condition 2: the two minima bounding the obvious peak region lie below the valley threshold (G_T).
Suppose a peak region has local extrema M_1, M_2, ..., M_n, and G_l, G_r are the values of its left and right troughs; the two conditions can then be expressed as:
max(M_1, M_2, ..., M_n) > P_T (8)
G_l < G_T, G_r < G_T (9)
These formulas show that the obvious peak regions depend on the crest and trough thresholds (P_T) and (G_T), so the choice of these two values is important. In the embodiment, the crest threshold (P_T) is chosen as 80 percent of the target person's height in image coordinates, and the trough threshold (G_T) as the mean of the entire histogram. The target person's height in image coordinates is assumed proportional to the y coordinate of the person's position in the image, i.e., h = βy, where β is a scale factor trained from the image sequence.
Fig. 5 shows center-line detection of target persons in a group. In Fig. 5, panel (a) is the input image, panel (b) the image after background subtraction, and panel (c) the corresponding vertical histogram, where the symbol "+" marks peaks, "o" marks valleys, the solid line is the crest threshold (P_T), and the dashed line the valley threshold (G_T). The histogram contains two obvious peak regions, from which the region can be split into two independent target persons, as shown in panel (d); their center lines are then correctly detected, as shown in panel (e).
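Under the stated thresholds (crest threshold at 80 percent of the person's height, valley threshold at the histogram mean), the histogram split can be sketched as follows; the silhouette sizes are hypothetical.

```python
import numpy as np

def vertical_projection(B):
    # Formula (7): H(x) = sum over y of B(x, y) -- per-column foreground count
    return B.sum(axis=0)

def significant_peaks(H, peak_thr, valley_thr):
    # Formulas (8)-(9): a peak region is significant when its maximum exceeds
    # P_T and it is bounded by values below G_T; returns (start, end) spans.
    regions, start = [], None
    for x, v in enumerate(H):
        if v > valley_thr and start is None:
            start = x
        elif v <= valley_thr and start is not None:
            if H[start:x].max() > peak_thr:
                regions.append((start, x))
            start = None
    if start is not None and H[start:].max() > peak_thr:
        regions.append((start, len(H)))
    return regions

# Two upright silhouettes in a hypothetical 10x12 binary image.
B = np.zeros((10, 12), dtype=int)
B[0:10, 2:4] = 1   # person 1 occupies columns 2-3, full height 10
B[1:10, 7:9] = 1   # person 2 occupies columns 7-8, height 9
H = vertical_projection(B)
regions = significant_peaks(H, peak_thr=0.8 * 10, valley_thr=H.mean())
print(regions)  # -> [(2, 4), (7, 9)]: two independent target persons
```

Each returned column span becomes one sub-region, and the single-person center-line estimator is then run on the pixels of that span alone.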
The center line detecting of target person under circumstance of occlusion
Occlusion arises frequently in multi-target tracking scenes and is a hard problem in computer vision. The embodiment does not consider the case of a target person occluded by a static object.
To detect the center line of an occluded target person accurately, it is necessary to separate the target person's foreground pixels out of the entire foreground region; the center line of the target person is then detected from the separated pixels by the least-median-of-squares method. This becomes the problem of separating target persons under occlusion, itself a difficult and actively researched topic in computer vision. The general approach of these methods is model-classify-track: build a model of each target person, such as a color model with spatial information, classify pixels with the model, and finally track each region. Common methods include target segmentation by color-pixel classification based on a color model, methods based on color histograms, and methods based on kernel density. The color-model approach records the spatial color information of each target person; despite considerable redundancy in the template, it offers high reliability in segmenting foreground pixels. For that reliability, the embodiment segments target persons under occlusion with the color-model method, which comes from A. Senior and comprises a color model and an additional probability mask.
Here, an appearance model is built for the target in each frame, describing how the target appears in the input image. The color model is an RGB color model with an independent probability mask: T_RGB(x) denotes the appearance model of each pixel of the target, and the probability mask P_c(x) records the likelihood that the target occupies that pixel. For convenience of expression, x denotes an image coordinate, but in practice the model covers only a local region of the image, normalized to the centroid of the current target in image coordinates. Whenever this calibration is known, the appearance model T_RGB(x) and probability mask P_c(x) of any point x in the image can be computed; P_c(x) = 0 means the point lies outside the model region. When a tracked target is confirmed, a rectangular appearance model is created with the same size as the bounding box of the foreground region, initialized by copying the foreground pixels into the color model. The corresponding mask probability is initialized to 0.4, or to zero where a pixel does not belong to the tracked target.
In subsequent frames, the appearance model is updated with the changing current foreground region: the color model is updated by blending the appearance model of the current picture with that of all foreground pixels, and all mask-probability values are likewise updated by the corresponding update formula (χ = η = 0.95). This yields a continuously updated appearance model over the foreground region together with its continuously updated observation probability. Thresholding the observation probability gives a binary mask for the target; it also conveys the non-rigid variation of the target, for example when the observation over the whole region excludes the region of the target person's legs.
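Since only the blending constants (χ = η = 0.95) survive in the text and not the update formulas themselves, the following is a hedged sketch of an A. Senior-style update, not the patent's exact equations: the model blends toward the observed colors where the target is seen, and the mask probability decays toward the current binary observation.

```python
import numpy as np

def update_color_model(T, P, frame, mask, chi=0.95, eta=0.95):
    # Assumed update form: where the target is observed this frame, blend the
    # stored RGB model toward the current pixels with weight chi; everywhere,
    # the probability mask decays toward the binary observation with weight eta.
    T, P = T.copy(), P.copy()
    m = mask.astype(bool)
    T[m] = chi * T[m] + (1.0 - chi) * frame[m]
    P = eta * P + (1.0 - eta) * mask
    return T, P

T = np.zeros((2, 2, 3))            # stored RGB appearance model
P = np.full((2, 2), 0.4)           # mask probability, initialized to 0.4
frame = np.full((2, 2, 3), 100.0)  # current picture
mask = np.ones((2, 2))             # target observed at every model pixel
T, P = update_color_model(T, P, frame, mask)
print(np.allclose(T, 5.0), np.allclose(P, 0.43))  # -> True True
```

The large χ and η make the model slow-moving, so a few frames of partial occlusion do not erase the stored appearance of the hidden target.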
The color model T_i of target i consists of a color variance C_i(X), which records the RGB color information of each pixel of target i, and an associated probability mask P[I_i(X)], which records the likelihood that target i is observed at pixel X. When a new target begins to be tracked, its color model is initialized and then continuously updated over the subsequent frames. The coordinate of pixel X is normalized to the current target's position in the image coordinate system. The color distribution of each pixel of a moving target is assumed Gaussian by default.
Let I(X) be the color observed at pixel X; what is needed is the probability density p[I(X) | T_i(X)] of I(X) under color model i. Under occlusion, segmenting the foreground pixels becomes a classification problem: deciding which color model each foreground pixel belongs to. Using Bayes' rule, the probability of model i given the known observation I(X) is derived from P[I(X) | T_i(X)], and pixel X is classified into the model m for which this probability is maximal.
Given the color model of a moving target, the color distribution of the target's pixels can be approximated by a spherical Gaussian model (formula (12)); a foreground pixel X satisfying formula (13) belongs to the k-th moving object (model).
After the occluded targets are segmented, the center-line detection of each target is the same as for an independent person. Fig. 6 gives an example of detecting the center line of a target person under occlusion: the occluded targets are separated with the color-template-based method, and the center line of each target person is detected from the pixels of its visible part.
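The classification step can be sketched as follows. With a shared spherical variance and equal priors (both assumptions of this sketch), Bayes' rule reduces to assigning each foreground pixel to the nearest model mean; the color means and pixels below are hypothetical.

```python
import numpy as np

def classify_pixels(pixels, means, sigma=20.0):
    # p[I(X) | T_k] ∝ exp(-||I(X) - mu_k||^2 / (2 sigma^2)); with a shared
    # sigma and equal priors, the argmax over k is the nearest model mean.
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)

means = np.array([[200.0, 30.0, 30.0],   # hypothetical model: reddish target 0
                  [30.0, 30.0, 200.0]])  # hypothetical model: bluish target 1
pixels = np.array([[190.0, 40.0, 35.0],  # three foreground pixels to assign
                   [25.0, 20.0, 210.0],
                   [180.0, 60.0, 50.0]])
print(classify_pixels(pixels, means))  # -> [0 1 0]
```

Once every visible foreground pixel carries a target label, the least-median-of-squares estimator is run per label to recover each occluded person's center line.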
Distinguishing the target state in the three situations
The algorithm based on least-median-of-squares estimation distinguishes the target-person center line in the three situations. The conditions for the three situations are judged from the correspondence of each target across consecutive frames during tracking: an object in the tracked state in the previous frame is defined as a "tracking target", and an object in the detected state in the current frame as a "detection target". The three situations are then judged by the following rules:
Situation one: when there is only one "tracking target", i.e., only one target person in the previous frame, the corresponding "detection target" in the current frame has only one significant peak region in its vertical histogram, and that "detection target" is classified as an independent target person.
Situation two: when the previous frame contains more than one "tracking target" corresponding to "detection targets" in the current frame, a "detection target" is very likely composed of several target persons. In this case, the vertical projection histogram is first used to split the "detection target"; if this fails, the case is classified as "multiple targets under occlusion" and the method based on the color model is used to detect the center lines of the targets under occlusion.
Situation three: if one "tracking target" corresponds to more than one "detection target", the case is classified as "being occluded".
Experimental verification
To verify the accuracy of center-line detection of target persons under multiple cameras, two target persons were tested with three cameras in a laboratory environment, and several comparisons were made.
In the experiments, each target person to be tracked is marked with a colored bounding box, with a different color for each person; the vertical line in the box is that person's center line, and the intersection of the center line with the bounding box is the person's "ground point". The ground point is updated instantly from the corresponding intersection region: the bottom edge of the bounding box is determined by the target person's ground point, while the top, left, and right edges are determined by the target person's foreground pixels.
In the experiments, the valley threshold (G_T) is chosen as the mean of the entire histogram and the peak threshold (P_T) as 80 percent of the foreground-region height. Empirically, the corresponding distance threshold D_T is set to 5.
In the laboratory environment, 2 target persons were tracked by 3 cameras over a sequence of 800 frames. Under most conditions, the targets were detected and matched exactly, except in the following two situations:
Situation one: the target in the picture is too small to be detected;
Situation two: occlusion is very serious, or the target appears for only a few frames so that its appearance model cannot form in time.
Upright-human detection experiments were carried out on an indoor human-image library. In the following experiments, the criterion for a correct detection is that the detected center line overlaps the ground-truth center line of the human target by more than 80%.
Target center lines were first established, and the detection performance on target persons was tested indoors under different conditions (occlusion, no occlusion, illumination change, etc.). The results in Fig. 7 show that for complete human bodies the method adapts well to different illumination and to the presence or absence of occlusion, and detects their center lines accurately. However, because targets are sometimes incompletely visible in the camera view or seriously occluded, misses occur when the view is incomplete, and serious occlusion can cause false detections, for example two heavily occluded targets being mistaken for one.
Fig. 8 illustrates some of the detection results from frame 100 to frame 600. In this video sequence, two target persons appear in every camera's field of view. In frame 326, the two targets are close together in the view of camera 3, and the segmentation method for the non-occluded case fails, treating the two targets as one; nevertheless, in subsequent frames the method again accurately detects the center lines of the two separated targets, as in the view of camera 3 in frame 576. In that frame, occlusion occurs in the view of camera 1, and the center lines of both target persons are detected effectively using target segmentation under occlusion. In frame 128, the target person is incomplete in the view of camera 2 and is affected by illumination, so its center line is not detected. In addition, some missed detections occur when the human body is partially occluded, and some false detections occur in background regions with criss-crossing lines.
The method of this embodiment also has very low sensitivity to errors in motion detection and is highly robust. Both without occlusion and under occlusion, the center line of the target person can be detected well in a single camera's field of view, and the experiments show that the method has strong validity and robustness. Accurate detection of the target person's center line provides a good basis for the subsequent identification of targets across multiple views and for target tracking.
Claims (8)
1. An object detection method based on the human-body center line in a multi-camera environment, characterized in that:
a binary image of the target person is obtained by background subtraction;
the principal-axis information of the target person is obtained using the least-median-of-squares method, specifically:
when there is only one "tracking target" in the previous frame, i.e. only one corresponding "detection target" in the current frame, and that "detection target" has only one significant peak region in its vertical histogram, it is classified as a single independent target person;
when there is more than one "tracking target" in the previous frame, corresponding to multiple "detection targets" in the current frame, a "detection target" is very likely composed of several target persons; the "detection target" is first split using the vertical projection histogram; if this method fails, it is classified as "multiple targets under occlusion" and the center line of the occluded target is detected with a method based on a color model;
if more than one "tracking target" corresponds to the same "detection target", it is classified as an "occluded situation";
the corresponding positions of targets between multiple cameras are obtained by computing the homography matrix between the cameras and applying that homography matrix, realizing detection of the moving target across multiple cameras.
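The homography step in the last clause of claim 1 amounts to mapping a target's ground point through a 3x3 matrix between camera views. A minimal numpy sketch (illustrative names, not from the patent):

```python
import numpy as np

def map_ground_point(H, point):
    """Map a ground point (x, y) from one camera view to another through
    the inter-camera ground-plane homography H (a 3x3 matrix)."""
    p = np.array([point[0], point[1], 1.0])  # homogeneous coordinates
    q = H @ p
    return q[:2] / q[2]                      # back to pixel coordinates
```

With the identity matrix the point is unchanged; a real H would be estimated from ground-plane correspondences between the two views.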
2. The object detection method based on the human-body center line in a multi-camera environment as described in claim 1, characterized in that background-subtraction-based target detection comprises:
modeling the background, i.e. applying median filtering to the input image sequence to obtain a single-Gaussian background model; the Gaussian model parameters of each point in the background image are u_i and σ_i, i = r, g, b, the mean and variance of the corresponding point in the background model;
after background modeling, the foreground region is obtained from the difference between the input current image and the background image; the current image and the background image are differenced, and the difference image is then thresholded to obtain the binary image B_xy, with foreground pixels expressed as 1 and background pixels expressed as 0,
where I_i(x, y), i = r, g, b, is the current observed value of pixel (x, y), and Γ_i is the threshold parameter.
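A minimal sketch of the per-channel thresholding in claim 2. The specific threshold form Γ_i = γ·σ_i is an assumption, since the claim only names Γ_i as a threshold parameter:

```python
import numpy as np

def background_difference(frame, bg_mean, bg_std, gamma=2.5):
    """Mark pixel (x, y) as foreground (1) when |I_i - u_i| exceeds the
    threshold Gamma_i in any of the r, g, b channels; background is 0.
    Gamma_i = gamma * sigma_i is an illustrative choice of threshold."""
    diff = np.abs(frame.astype(float) - bg_mean)
    return (diff > gamma * bg_std).any(axis=-1).astype(np.uint8)
```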
3. The object detection method based on the human-body center line in a multi-camera environment as claimed in claim 2, characterized in that:
morphological-operator processing is applied to the binary image, using erosion and dilation operations to further filter noise; simply connected foreground regions are then extracted by binary connected-component analysis.
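The filtering and component extraction of claim 3 can be sketched in pure numpy; a 3x3 cross neighborhood and 4-connectivity are implementation choices not specified in the claim:

```python
import numpy as np
from collections import deque

def erode(m):
    """Binary erosion with a 3x3 cross-shaped structuring element."""
    p = np.pad(m, 1)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def dilate(m):
    """Binary dilation with the same cross-shaped element."""
    p = np.pad(m, 1)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:])

def connected_components(m):
    """4-connected labeling of a binary mask by breadth-first search.
    Returns (label image, number of components)."""
    labels = np.zeros_like(m, dtype=int)
    count = 0
    for i, j in zip(*np.nonzero(m)):
        if labels[i, j]:
            continue
        count += 1
        labels[i, j] = count
        q = deque([(i, j)])
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                v, u = y + dy, x + dx
                if (0 <= v < m.shape[0] and 0 <= u < m.shape[1]
                        and m[v, u] and not labels[v, u]):
                    labels[v, u] = count
                    q.append((v, u))
    return labels, count
```

The noise filtering described in the claim is then `dilate(erode(mask))` (a morphological opening), followed by `connected_components` on the result.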
4. The object detection method based on the human-body center line in a multi-camera environment as described in claim 1, characterized in that the center line of an individual target person is determined using the least-median-of-squares method, the center line L being determined by formula (6):
L = arg min_l median{ E(I_i, l)² }    (6)
where E(I_i, l) is the vertical distance between the i-th foreground pixel I_i and the center line l to be determined.
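Formula (6) is a least-median-of-squares fit. The sketch below uses the usual random-pair sampling scheme for LMedS, an implementation choice the patent does not spell out, and measures the residual E horizontally, which for a near-vertical body axis is one natural reading of the distance to the line:

```python
import numpy as np

def lmeds_centerline(points, trials=200, seed=0):
    """Least-median-of-squares line fit per formula (6):
    L = argmin_l median{ E(I_i, l)^2 }.  The line is parameterized as
    x = a + b*y; candidate lines come from random point pairs."""
    pts = np.asarray(points, float)          # rows of (x, y)
    rng = np.random.default_rng(seed)
    best, best_med = None, np.inf
    for _ in range(trials):
        i, j = rng.choice(len(pts), 2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if y1 == y2:
            continue                          # degenerate pair
        b = (x2 - x1) / (y2 - y1)
        a = x1 - b * y1
        res = pts[:, 0] - (a + b * pts[:, 1])  # residual E for each pixel
        med = np.median(res ** 2)
        if med < best_med:
            best, best_med = (a, b), med
    return best
```

Because the criterion is the median of squared residuals, up to half the foreground pixels can be outliers (limbs, noise) without pulling the axis away from the torso.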
5. The object detection method based on the human-body center line in a multi-camera environment as described in claim 1, characterized in that target center-line detection under non-occluded conditions is specifically:
the entire target region is divided into several subregions, each corresponding to a single independent target person; the center line of the independent target person in each subregion is then determined.
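One plausible realization of the subregion split in claim 5 is to cut the region at valleys of the vertical projection histogram, reusing the valley threshold G_T from the experiments; how exactly these pieces fit together is my assumption:

```python
import numpy as np

def split_by_projection(mask, g_t=None):
    """Split a merged foreground region into per-person column spans at
    valleys of the vertical projection histogram.  Columns whose
    projection falls below the valley threshold G_T separate the peaks."""
    hist = mask.sum(axis=0).astype(float)
    if g_t is None:
        g_t = hist.mean()            # valley threshold G_T from the text
    above = hist > g_t
    spans, start = [], None
    for c, on in enumerate(above):
        if on and start is None:
            start = c
        elif not on and start is not None:
            spans.append((start, c - 1))
            start = None
    if start is not None:
        spans.append((start, len(hist) - 1))
    return spans
```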
6. The object detection method based on the human-body center line in a multi-camera environment as described in claim 1, characterized in that:
for occluded situations, the foreground pixels of each target person are separated within the entire foreground region, and the center line of the target person is then detected from the separated foreground pixels using the least-median-of-squares method.
7. The object detection method based on the human-body center line in a multi-camera environment as claimed in claim 6, characterized in that:
the target persons under occlusion are segmented using the method based on a color model;
the color model is updated by mixing the current picture with the appearance model of all foreground pixels, and the values of all mask probabilities are likewise updated by the following formula:
where χ = η = 0.95, T_RGB represents the appearance model of each pixel in the target, and X represents a pixel of the image.
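The update formula itself is not reproduced in this text, so the sketch below is only a guess at its shape: an exponential blend of the appearance model T_RGB with the current picture at foreground pixels, and a matching decay/reinforcement of the mask probability, using the stated χ = η = 0.95:

```python
import numpy as np

def update_color_model(T_rgb, frame, mask_prob, fg, eta=0.95, chi=0.95):
    """Assumed update: T_RGB is blended toward the current picture at
    foreground pixels; the mask probability decays by chi everywhere and
    is reinforced by (1 - chi) where the pixel is foreground."""
    fg = fg.astype(bool)
    T_new = T_rgb.astype(float).copy()
    T_new[fg] = eta * T_rgb[fg] + (1 - eta) * frame[fg]
    P_new = np.where(fg, chi * mask_prob + (1 - chi), chi * mask_prob)
    return T_new, P_new
```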
8. The object detection method based on the human-body center line in a multi-camera environment as claimed in claim 7, characterized in that:
given the color model of a moving target, the color distribution of the target's pixels approximately follows a spherical Gaussian model:
r_rgb(X) = (2πσ²)^(-3/2) exp{ -‖I(X) - T(X)‖² / (2σ²) }    (12)
A foreground pixel X belongs to the k-th moving object if it satisfies formula (13):
k = arg max_i P_i(X), i ∈ [1, N]    (13)
In formulas (12) and (13), I(X) is the observed color of pixel X.
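Formulas (12) and (13) together assign a foreground pixel to the target whose appearance color makes it most probable. A direct transcription, where σ and the per-target appearance colors are illustrative inputs:

```python
import numpy as np

def assign_pixel(I_X, models, sigma=10.0):
    """Formula (12): spherical-Gaussian likelihood of observed color
    I(X) under each target's appearance T_i(X); formula (13): assign X
    to the target k with the maximum probability.  Targets are indexed
    1..N as in (13)."""
    I_X = np.asarray(I_X, float)
    probs = []
    for T in models:                      # one appearance color per target
        d2 = np.sum((I_X - np.asarray(T, float)) ** 2)
        p = (2 * np.pi * sigma ** 2) ** -1.5 * np.exp(-d2 / (2 * sigma ** 2))
        probs.append(p)
    return int(np.argmax(probs)) + 1
```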
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410268650.4A CN104200483B (en) | 2014-06-16 | 2014-06-16 | Object detection method based on human body center line in multi-cam environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104200483A CN104200483A (en) | 2014-12-10 |
CN104200483B true CN104200483B (en) | 2018-05-18 |
Family
ID=52085769
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6511283B2 (en) * | 2015-02-12 | 2019-05-15 | 日立オートモティブシステムズ株式会社 | Object detection device |
CN106327493B (en) * | 2016-08-23 | 2018-12-18 | 电子科技大学 | A kind of multi-view image object detection method of view-based access control model conspicuousness |
CN110135382B (en) * | 2019-05-22 | 2021-07-27 | 北京华捷艾米科技有限公司 | Human body detection method and device |
CN110307791B (en) * | 2019-06-13 | 2020-12-29 | 东南大学 | Vehicle length and speed calculation method based on three-dimensional vehicle boundary frame |
CN110349206B (en) * | 2019-07-18 | 2023-05-30 | 科大讯飞(苏州)科技有限公司 | Method and related device for detecting human body symmetry |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101763512A (en) * | 2009-12-11 | 2010-06-30 | 西安电子科技大学 | Method for semi-automatically detecting road target in high-resolution remote sensing images |
CN102831446A (en) * | 2012-08-20 | 2012-12-19 | 南京邮电大学 | Image appearance based loop closure detecting method in monocular vision SLAM (simultaneous localization and mapping) |
CN103065323A (en) * | 2013-01-14 | 2013-04-24 | 北京理工大学 | Subsection space aligning method based on homography transformational matrix |
CN103778436A (en) * | 2014-01-20 | 2014-05-07 | 电子科技大学 | Pedestrian gesture inspecting method based on image processing |
Non-Patent Citations (4)
Title |
---|
A Multiview Approach to Tracking People in Crowded Scenes using a Planar Homography Constraint; Saad M. Khan et al.; Computer Vision; 2006-05-31; abstract, Fig. 2 * |
Probabilistic people tracking with appearance models and occlusion classification: The AD-HOC system; Roberto Vezzani et al.; Pattern Recognition Letters; 2010-11-10; pp. 867-877 * |
The Background Primal Sketch: An Approach for Tracking Moving Objects; Yee-Hong Yang et al.; Machine Vision and Applications; 1992-12-31; p. 20 left column last paragraph and right column Sec. 4.1 para. 4; p. 21 Sec. 4.2 para. 1; p. 22 right column Step 1; p. 23 left column Step 4 * |
W4: Real-Time Surveillance of People and Their Activities; Ismail Haritaoglu et al.; Pattern Analysis and Machine Intelligence; 2000-08-31; pp. 809-830 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20201030 Address after: Room 2, No.2, No.2, Kechuang Road, NO.201, Qixia District, Nanjing, Jiangsu Province Patentee after: Nanjing huaruizhiguang Information Technology Research Institute Co., Ltd Address before: Yuen Road Qixia District of Nanjing City, Jiangsu Province, No. 9 210023 Patentee before: NANJING University OF POSTS AND TELECOMMUNICATIONS |