CN102236902A - Method and device for detecting targets - Google Patents

Method and device for detecting targets

Info

Publication number
CN102236902A
CN102236902A
Authority
CN
China
Prior art keywords
submodel
pixel
point
var
new
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011101902146A
Other languages
Chinese (zh)
Other versions
CN102236902B (en)
Inventor
蔡巍伟
浦世亮
贾永华
任烨
全晓臣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Software Co Ltd filed Critical Hangzhou Hikvision Software Co Ltd
Priority to CN 201110190214 priority Critical patent/CN102236902B/en
Publication of CN102236902A publication Critical patent/CN102236902A/en
Application granted granted Critical
Publication of CN102236902B publication Critical patent/CN102236902B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method and device for detecting targets. The method comprises the following steps: when the first frame image is input, constructing a background model for each pixel point in the image and initializing every background model; for each pixel point in each subsequently input frame image, updating the corresponding background model according to the gray value of the pixel point and the gray value of the pixel point at the same coordinate position in the previous frame image, and determining whether the pixel point is a foreground point or a background point according to the update result; and obtaining one or more foreground blobs from the determined foreground points and background points, and taking the foreground blobs as the detected targets. Applying the method and device disclosed by the invention improves the accuracy of detection results.

Description

Object detection method and device
Technical field
The present invention relates to intelligent video analysis technology, and particularly to an object detection method and device.
Background art
Intelligent video analysis refers to using techniques such as image processing, computer vision and pattern recognition to detect and track targets, such as vehicles or people, in input video images, and to understand and describe their behavior, for example by analyzing a target's behavior according to its motion trajectory and outputting alarm information according to configured behavior rules. Target detection is an important step in intelligent video analysis, and whether the detection result is accurate directly affects whether subsequent processing can proceed smoothly.
At present, a commonly used object detection method is detection based on background subtraction, i.e. detecting targets by analyzing the difference between the current image and a background image. However, this method is easily disturbed: when there are swinging leaves, rain, snow or water ripples in the background, the detection result becomes inaccurate, for example false targets are detected.
Summary of the invention
In view of this, the main purpose of the present invention is to provide an object detection method that can improve the accuracy of detection results.
Another purpose of the present invention is to provide an object detection device that can improve the accuracy of detection results.
To achieve the above purposes, the technical scheme of the present invention is realized as follows:
An object detection method comprises:
A. when the first frame image is input, creating a background model for each pixel point in the image and initializing each background model;
B. for each pixel point in every subsequently input frame image, updating its corresponding background model according to its gray value and the gray value of the pixel point at the same coordinate position in the previous frame image, and determining whether the pixel point is a foreground point or a background point according to the update result;
C. obtaining one or more foreground blobs from the determined foreground points and background points, and taking the foreground blobs as the detected targets.
An object detection device comprises:
a first processing module, configured to, when the first frame image is input, create a background model for each pixel point in the image and initialize each background model;
a second processing module, configured to, for each pixel point in every subsequently input frame image, update its corresponding background model according to its gray value and the gray value of the pixel point at the same coordinate position in the previous frame image, and determine whether the pixel point is a foreground point or a background point according to the update result;
a third processing module, configured to obtain one or more foreground blobs from the determined foreground points and background points, and take the foreground blobs as the detected targets.
As can be seen, with the scheme of the present invention a background model is established for each pixel point and dynamically updated according to actual conditions, so that changes in the background are reflected well; this overcomes problems such as false targets being detected because of disturbances in the background, and improves the accuracy of the detection results. In addition, the scheme of the present invention only needs to store information such as the background models, so it does not occupy much memory; the detection process is simple and convenient to implement, and its processing speed is fast enough to satisfy real-time requirements.
Description of the drawings
Fig. 1 is a flowchart of an embodiment of the object detection method of the present invention.
Fig. 2 is a schematic diagram of the data structure of the background model in the present invention.
Fig. 3 is a schematic diagram of the foreground points and background points determined in the present invention.
Fig. 4 is a schematic diagram of the foreground blobs obtained in the present invention.
Fig. 5 is a schematic diagram of the structure of an embodiment of the object detection device of the present invention.
Detailed description
Aiming at the problems in the prior art, the present invention proposes an improved target detection scheme that can improve the accuracy of detection results.
To make the technical scheme of the present invention clearer and easier to understand, the scheme of the present invention is described in detail below with reference to the accompanying drawings and embodiments.
Fig. 1 is a flowchart of an embodiment of the object detection method of the present invention. As shown in Fig. 1, it comprises the following steps:
Step 11: when the first frame image is input, create a background model for each pixel point in the image and initialize each background model.
In a practical intelligent video analysis system, every input frame image has the same size, for example 640 × 480; accordingly, a background model is created for each of the 640 × 480 pixel points. Suppose the gray value of the pixel point at coordinate (x, y) is Y; its corresponding background model can then be expressed as bg_model(x, y).
Each background model includes three submodels sub_model[3], referred to for convenience as submodel 1, submodel 2 and submodel 3. Each submodel in turn includes four variables: mean, variance, occurrence frequency (freq) and appearance duration (time). The mean represents the position of the submodel's distribution; the variance represents the distribution range of the submodel; the occurrence frequency represents how frequently the submodel appears over time; and the appearance duration represents the time span (in frames) from the submodel's first appearance to the current frame. Fig. 2 is a schematic diagram of the data structure of the background model in the present invention.
Afterwards, each background model is initialized according to the input image: the mean of submodel 1 is set to the gray value Y of its corresponding pixel point, its variance is set to a predetermined initial value Init_Var, its occurrence frequency is set to another predetermined initial value Init_Freq, and its appearance duration is set to 1; all variables in submodel 2 and submodel 3 are set to 0. The value range of Init_Var is generally [4, 144], and the value range of Init_Freq is generally [0.3, 1.0].
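For illustration, this per-pixel model and its initialization can be sketched in Python roughly as follows. This is a minimal sketch, not the patented implementation: the array layout and the concrete picks Init_Var = 36 and Init_Freq = 0.5 from the ranges above are assumptions.

import numpy as np

NUM_SUBMODELS = 3
MEAN, VAR, FREQ, TIME = 0, 1, 2, 3   # indices of the four variables in a submodel

def init_background_models(first_frame, init_var=36.0, init_freq=0.5):
    """Create and initialize bg_model(x, y) for every pixel of the first frame."""
    h, w = first_frame.shape
    models = np.zeros((h, w, NUM_SUBMODELS, 4), dtype=np.float64)
    # Submodel 1 takes the pixel's gray value; submodels 2 and 3 stay all-zero.
    models[:, :, 0, MEAN] = first_frame
    models[:, :, 0, VAR] = init_var
    models[:, :, 0, FREQ] = init_freq
    models[:, :, 0, TIME] = 1.0
    return models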
Step 12: for each pixel point in every subsequently input frame image, update its corresponding background model according to its gray value and the gray value of the pixel point at the same coordinate position in the previous frame image, and determine whether the pixel point is a foreground point or a background point according to the update result.
The specific implementation of this step comprises:
A. for each pixel point X (for convenience, pixel point X denotes an arbitrary pixel point), calculate the global variance global_var, which reflects the change trend of the image, from its gray value and the gray value of the pixel point at the same coordinate position in the previous frame image:
global_var = Σ_{x=0}^{width} Σ_{y=0}^{height} max(min(T_var_max, (p(x,y)_t − p(x,y)_{t−1})²), T_var_min) / (width * height);
where width is the width of the image, height is the height of the image, p(x,y)_t is the gray value of pixel point X at coordinate (x, y) in the current frame, p(x,y)_{t−1} is the gray value of the pixel point at coordinate (x, y) in the previous frame image, and T_var_min = 4 and T_var_max = 144 are the lower and upper limits of the variance, respectively.
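A minimal Python sketch of this formula, assuming the current and previous frames are given as 2-D numpy arrays of gray values:

import numpy as np

def global_variance(frame, prev_frame, t_var_min=4.0, t_var_max=144.0):
    """Clamped mean squared frame difference, per the formula above."""
    diff_sq = (frame.astype(np.float64) - prev_frame.astype(np.float64)) ** 2
    clamped = np.clip(diff_sq, t_var_min, t_var_max)  # max(min(., T_var_max), T_var_min)
    return clamped.mean()                             # divide by width * height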
B. determine, according to the calculated global_var, whether the background model corresponding to pixel point X contains a submodel matching pixel point X; if so, execute step C, otherwise execute step D.
In this step, whether the background model corresponding to pixel point X contains a matching submodel can be determined in the following manner:
B1. sort the submodels in the background model corresponding to pixel point X in descending order of occurrence frequency, mark the submodels whose occurrence frequency is less than a predetermined threshold T1 as invalid submodels and the remaining submodels as valid submodels; subsequent matching is performed only against the valid submodels. The value range of T1 is generally [0.001, 0.3]; there are at most 3 valid submodels and usually at least 1.
Afterwards, calculate the difference Dif between the gray value of pixel point X and the mean of the first-ranked valid submodel, and determine whether Dif satisfies the following condition:
Dif*Dif < T2*max_var;
where T2 is a predetermined threshold with a value range generally of [1, 16], through which the matching range of the submodels can be controlled; max_var = max(global_var, variance), where variance is the variance of the corresponding submodel (here, the first-ranked valid submodel) and global_var is the global variance calculated in step A.
If the condition is satisfied, pixel point X is considered to match the first-ranked valid submodel and step C is executed; otherwise, step B2 is executed.
B2. determine whether a next valid submodel exists; if not, execute step D; if so, calculate the difference Dif between the gray value of pixel point X and the mean of the second-ranked valid submodel, and determine whether Dif satisfies: Dif*Dif < T2*max_var; if satisfied, execute step C, otherwise execute step B3.
B3. determine whether a next valid submodel exists; if not, execute step D; if so, calculate the difference Dif between the gray value of pixel point X and the mean of the third-ranked valid submodel, and determine whether Dif satisfies: Dif*Dif < T2*max_var; if satisfied, execute step C, otherwise execute step D.
In steps B1~B3, if a submodel matching pixel point X exists, step C is executed; otherwise, step D is executed.
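Steps B1~B3 amount to scanning the valid submodels in descending frequency order and testing the Dif condition against each one. A hedged Python sketch, continuing the array layout of the earlier sketch (models_xy is one pixel's 3 × 4 submodel array; the picks T1 = 0.1 and T2 = 4.0 from the stated ranges are assumptions):

import numpy as np

def find_matching_submodel(models_xy, gray, global_var, t1=0.1, t2=4.0):
    """Return the index of the first valid submodel matching the gray value, or None."""
    order = np.argsort(-models_xy[:, FREQ])      # descending occurrence frequency
    for idx in order:
        if models_xy[idx, FREQ] < t1:            # invalid submodel: skip it
            continue
        dif = gray - models_xy[idx, MEAN]
        max_var = max(global_var, models_xy[idx, VAR])
        if dif * dif < t2 * max_var:             # matching condition Dif*Dif < T2*max_var
            return int(idx)
    return None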
C. update all variables in the matched submodel, assign the updated appearance duration to the model matching variable match_value, update the occurrence frequency and appearance duration in the two submodels other than the matched one, and then execute step E.
In this step, all variables in the matched submodel are updated as follows:
variance_new = (1-α)*variance_old + Dif*Dif*α;
mean_new = (1-α)*mean_old + Dif*α;
freq_new = (1+α)*freq_old;
time_new = time_old + 1;
where variance_new, mean_new, freq_new and time_new are the variance, mean, occurrence frequency and appearance duration after the update; variance_old, mean_old, freq_old and time_old are the corresponding values before the update; α is a parameter controlling the model update rate, with a value range usually of [0.0001, 0.1]; and Dif is the difference between the gray value of pixel point X and the mean of the matched submodel before the update.
Then set match_value = time_new.
Afterwards, the occurrence frequency and appearance duration in the two submodels other than the matched one are updated as follows:
freq_new = (1-α)*freq_old;
time_new = time_old + 1;
Supposing the matched submodel is submodel 1, the other two submodels are submodel 2 and submodel 3, and the other variables in submodel 2 and submodel 3 remain unchanged.
D. update all variables in the submodel with the lowest occurrence frequency in the background model corresponding to pixel point X (if several submodels tie for the lowest frequency, any one of them can be chosen), assign 0 to the model matching variable match_value, and then execute step E.
In this step, set
mean_new = Y;
variance_new = 144;
freq_new = 0.3;
time_new = 1;
and set match_value = 0;
where Y is the gray value of pixel point X.
All variables in the two submodels other than the one with the lowest occurrence frequency remain unchanged.
E. if match_value is less than a predetermined threshold T_absorb, i.e.
match_value < T_absorb, where the value range of T_absorb is generally [100, 3000],
then pixel point X is determined to be a foreground point; otherwise, it is a background point.
In addition, if pixel point X is determined to be a foreground point, its gray value is set to 1; if it is a background point, its gray value is set to 0.
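Steps C~E can be sketched together as one per-pixel update-and-classify routine, reusing find_matching_submodel and the index constants from the sketches above (α = 0.01 and T_absorb = 500 are assumed picks from the stated ranges):

import numpy as np

def update_and_classify(models_xy, gray, global_var, alpha=0.01, t_absorb=500.0):
    """Update one pixel's submodels and return 1 for foreground, 0 for background."""
    m = find_matching_submodel(models_xy, gray, global_var)
    if m is not None:
        dif = gray - models_xy[m, MEAN]          # Dif uses the mean before the update
        # Step C: update all four variables of the matched submodel.
        models_xy[m, VAR] = (1 - alpha) * models_xy[m, VAR] + dif * dif * alpha
        models_xy[m, MEAN] = (1 - alpha) * models_xy[m, MEAN] + dif * alpha
        models_xy[m, FREQ] = (1 + alpha) * models_xy[m, FREQ]
        models_xy[m, TIME] += 1
        match_value = models_xy[m, TIME]
        # Decay the frequency and advance the duration of the two other submodels.
        for k in range(NUM_SUBMODELS):
            if k != m:
                models_xy[k, FREQ] *= (1 - alpha)
                models_xy[k, TIME] += 1
    else:
        # Step D: replace the least frequent submodel with a fresh one.
        m = int(np.argmin(models_xy[:, FREQ]))
        models_xy[m] = (gray, 144.0, 0.3, 1.0)   # (mean, variance, freq, time)
        match_value = 0.0
    # Step E: a short-lived model marks the pixel as foreground.
    return 1 if match_value < t_absorb else 0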
Fig. 3 is a schematic diagram of the foreground points and background points determined in the present invention. As shown in Fig. 3, the white points represent foreground points and the black points represent background points.
Step 13: obtain one or more foreground blobs from the determined foreground points and background points, and take the foreground blobs as the detected targets.
Although step 12 determines which pixel points in the image are foreground points, it does not determine which target each foreground point belongs to. Usually, the foreground points corresponding to one target are spatially continuous and appear in the image as one foreground blob, and the contour of such a blob is normally closed. Blob labeling can therefore be regarded as a process of contour search and contour tracking: each foreground blob corresponds to a unique contour, so each foreground blob can be labeled by finding these contours.
Since, after the processing described in step 12, the gray value of every pixel point in the image is either 0 or 1, this step can be realized in the following manner:
A. traverse, from top to bottom and from left to right, the pixel points in the image that have not been marked as tracking end points. If a pixel point's gray value is 1 and at least one of the four adjacent pixel points above, below, to the left and to the right of it has gray value 0, determine that this pixel point is an edge point and execute step B; if no edge point can be found, execute step E.
B. create a contour linked list with blob label N, add the determined edge point to this contour linked list, and mark the determined edge point as a tracking end point (specifically, it can be marked with the tracking end value 0xFF); at the same time, take the determined edge point as the contour search reference point and execute step C. The initial value of N is 1.
C. search for a new edge point in the 3 × 3 region centered on the contour search reference point. Whenever a new edge point is found, add it to the contour linked list with blob label N, mark it as a tracking end point and take it as the new contour search reference point. Repeat step C until no new edge point can be found, then execute step D.
D. set N = N + 1 and repeat step A.
E. take the pixel points recorded in each contour linked list as one foreground blob, obtaining N foreground blobs, and take each foreground blob as a detected target.
In practical applications, to facilitate subsequent processing and remove disturbances such as noise, the foreground blobs obtained in step E can be further screened, i.e. the foreground blobs that do not satisfy a predetermined condition are deleted, such as blobs whose contour linked list contains fewer than 10 points, or blobs all of whose contour points lie within the range of another foreground blob.
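A simplified Python sketch of steps A~E above (assumptions: the 0xFF tracking-end mark is replaced by a boolean visited array, image-border pixels with value 1 are treated as edge points, and the screening keeps only contours with at least 10 points):

import numpy as np

def label_blobs(mask, min_points=10):
    """Trace closed contours in a 0/1 mask; return one point list per foreground blob."""
    h, w = mask.shape
    visited = np.zeros((h, w), dtype=bool)

    def is_edge(y, x):
        # An edge point is a 1-pixel with at least one 0-valued 4-neighbour.
        if mask[y, x] != 1:
            return False
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w) or mask[ny, nx] == 0:
                return True
        return False

    blobs = []
    for y in range(h):                            # step A: raster-order traversal
        for x in range(w):
            if visited[y, x] or not is_edge(y, x):
                continue
            contour = [(y, x)]                    # step B: new contour linked list
            visited[y, x] = True
            ref = (y, x)                          # contour search reference point
            while True:                           # step C: grow through 3x3 regions
                found = None
                ry, rx = ref
                for ny in range(max(0, ry - 1), min(h, ry + 2)):
                    for nx in range(max(0, rx - 1), min(w, rx + 2)):
                        if not visited[ny, nx] and is_edge(ny, nx):
                            found = (ny, nx)
                            break
                    if found:
                        break
                if found is None:
                    break                         # contour closed: step D, next blob
                contour.append(found)
                visited[found] = True
                ref = found
            if len(contour) >= min_points:        # screening rule from the text
                blobs.append(contour)
    return blobs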
Fig. 4 is a schematic diagram of the foreground blobs obtained in the present invention.
This concludes the description of the method embodiment of the present invention.
Based on the above description, Fig. 5 is a schematic diagram of the structure of an embodiment of the object detection device of the present invention. As shown in Fig. 5, it comprises:
a first processing module 51, configured to, when the first frame image is input, create a background model for each pixel point in the image and initialize each background model;
a second processing module 52, configured to, for each pixel point in every subsequently input frame image, update its corresponding background model according to its gray value and the gray value of the pixel point at the same coordinate position in the previous frame image, and determine whether the pixel point is a foreground point or a background point according to the update result;
a third processing module 53, configured to obtain one or more foreground blobs from the determined foreground points and background points, and take the foreground blobs as the detected targets.
The first processing module 51 may specifically comprise (not shown, to simplify the drawing):
a first processing unit, configured to create a background model for each pixel point, the background model comprising three submodels, namely submodel 1, submodel 2 and submodel 3, each submodel including four variables: mean, variance, occurrence frequency and appearance duration;
a second processing unit, configured to set the mean of each submodel 1 to the gray value of its corresponding pixel point, set its variance to a predetermined initial value, set its occurrence frequency to another predetermined initial value, and set its appearance duration to 1; all variables in each submodel 2 and submodel 3 are set to 0.
The second processing module 52 may specifically comprise (not shown, to simplify the drawing):
a third processing unit, configured to, for each pixel point X in every subsequently input frame image, calculate the global variance global_var, which reflects the change trend of the image, from its gray value and the gray value of the pixel point at the same coordinate position in the previous frame image;
a fourth processing unit, configured to determine, according to the calculated global_var, whether the background model corresponding to pixel point X contains a submodel matching pixel point X; if so, update all variables in the matched submodel, assign the updated appearance duration to the model matching variable match_value, update the occurrence frequency and appearance duration in the two submodels other than the matched one, and then notify the fifth processing unit to perform its own function; otherwise, update all variables in the submodel with the lowest occurrence frequency in the background model corresponding to pixel point X, assign 0 to the model matching variable match_value, and then notify the fifth processing unit to perform its own function;
a fifth processing unit, configured to determine that pixel point X is a foreground point when match_value is less than the predetermined threshold T_absorb, and a background point otherwise; further, the gray value of a pixel point determined to be a foreground point is set to 1, and the gray value of a pixel point determined to be a background point is set to 0.
The global variance is
global_var = Σ_{x=0}^{width} Σ_{y=0}^{height} max(min(T_var_max, (p(x,y)_t − p(x,y)_{t−1})²), T_var_min) / (width * height);
where width is the width of the image, height is the height of the image, p(x,y)_t is the gray value of pixel point X at coordinate (x, y) in the current frame, p(x,y)_{t−1} is the gray value of the pixel point at coordinate (x, y) in the previous frame image, T_var_min = 4 and T_var_max = 144.
In addition, the fourth processing unit may specifically comprise:
a first processing subunit, configured to sort the submodels in the background model corresponding to pixel point X in descending order of occurrence frequency, mark the submodels whose occurrence frequency is less than a predetermined threshold T1 as invalid submodels and the remaining submodels as valid submodels, calculate the difference Dif between the gray value of pixel point X and the mean of the first-ranked valid submodel, and determine whether Dif satisfies: Dif*Dif < T2*max_var, where T2 is a predetermined threshold, max_var = max(global_var, variance), and variance is the variance of the corresponding submodel; if satisfied, notify the second processing subunit to perform its own function; otherwise, determine whether a next valid submodel exists; if not, notify the third processing subunit to perform its own function; if so, calculate the difference Dif between the gray value of pixel point X and the mean of the second-ranked valid submodel and determine whether Dif satisfies: Dif*Dif < T2*max_var; if satisfied, notify the second processing subunit to perform its own function; otherwise, determine whether a next valid submodel exists; if not, notify the third processing subunit to perform its own function; if so, calculate the difference Dif between the gray value of pixel point X and the mean of the third-ranked valid submodel and determine whether Dif satisfies: Dif*Dif < T2*max_var; if satisfied, notify the second processing subunit to perform its own function; otherwise, notify the third processing subunit to perform its own function;
a second processing subunit, configured to update all variables in the matched submodel, i.e. set variance_new = (1-α)*variance_old + Dif*Dif*α, mean_new = (1-α)*mean_old + Dif*α, freq_new = (1+α)*freq_old and time_new = time_old + 1, where variance_new, mean_new, freq_new and time_new are the variance, mean, occurrence frequency and appearance duration after the update, variance_old, mean_old, freq_old and time_old are the corresponding values before the update, α is a parameter controlling the model update rate, and Dif is the difference between the gray value of pixel point X and the mean of the matched submodel before the update; assign the updated appearance duration to the model matching variable match_value; update the occurrence frequency and appearance duration in the two submodels other than the matched one, i.e. set freq_new = (1-α)*freq_old and time_new = time_old + 1; and then notify the fifth processing unit to perform its own function;
a third processing subunit, configured to update all variables in the submodel with the lowest occurrence frequency in the background model corresponding to pixel point X, i.e. set mean_new = Y, variance_new = 144, freq_new = 0.3 and time_new = 1, where Y is the gray value of pixel point X; assign 0 to the model matching variable match_value; and then notify the fifth processing unit to perform its own function.
The third processing module 53 may specifically comprise (not shown, to simplify the drawing):
a sixth processing unit, configured to traverse, from top to bottom and from left to right, the pixel points in the image that have not been marked as tracking end points; if a pixel point's gray value is 1 and at least one of the four adjacent pixel points above, below, to the left and to the right of it has gray value 0, determine that this pixel point is an edge point, create a contour linked list with blob label N, add the determined edge point to this contour linked list, mark it as a tracking end point and take it as the contour search reference point, and notify the seventh processing unit to perform its own function, the initial value of N being 1; if no edge point can be found, notify the eighth processing unit to perform its own function;
a seventh processing unit, configured to search for a new edge point in the 3 × 3 region centered on the contour search reference point; whenever a new edge point is found, add it to the contour linked list with blob label N, mark it as a tracking end point and take it as the new contour search reference point, repeating its own function until no new edge point can be found; then set N = N + 1, send the new N to the sixth processing unit and notify it to repeat its own function;
an eighth processing unit, configured to take the pixel points recorded in each contour linked list as one foreground blob, obtaining N foreground blobs, and take each foreground blob as a detected target; further, after the N foreground blobs are obtained, the foreground blobs that do not satisfy a predetermined condition may be deleted.
For the specific workflow of the device embodiment shown in Fig. 5, please refer to the corresponding description of the method embodiment shown in Fig. 1; it is not repeated here.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (15)

1. An object detection method, characterized by comprising:
A. when the first frame image is input, creating a background model for each pixel point in the image and initializing each background model;
B. for each pixel point in every subsequently input frame image, updating its corresponding background model according to its gray value and the gray value of the pixel point at the same coordinate position in the previous frame image, and determining whether the pixel point is a foreground point or a background point according to the update result;
C. obtaining one or more foreground blobs from the determined foreground points and background points, and taking the foreground blobs as the detected targets.
2. The method according to claim 1, characterized in that step A comprises:
A1. for each pixel point, creating a background model comprising three submodels, namely submodel 1, submodel 2 and submodel 3, each submodel including four variables: mean, variance, occurrence frequency and appearance duration;
A2. setting the mean of submodel 1 to the gray value of its corresponding pixel point, setting its variance to a predetermined initial value, setting its occurrence frequency to another predetermined initial value, and setting its appearance duration to 1; setting all variables in submodel 2 and submodel 3 to 0.
3. The method according to claim 2, characterized in that step B comprises:
B1. for each pixel point X, calculating the global variance global_var, which reflects the change trend of the image, from its gray value and the gray value of the pixel point at the same coordinate position in the previous frame image;
B2. determining, according to the calculated global_var, whether the background model corresponding to pixel point X contains a submodel matching pixel point X; if so, executing step B3, otherwise executing step B4;
B3. updating all variables in the matched submodel, assigning the updated appearance duration to the model matching variable match_value, updating the occurrence frequency and appearance duration in the two submodels other than the matched one, and then executing step B5;
B4. updating all variables in the submodel with the lowest occurrence frequency in the background model corresponding to pixel point X, assigning 0 to the model matching variable match_value, and then executing step B5;
B5. if match_value is less than a predetermined threshold T_absorb, determining that pixel point X is a foreground point; otherwise, a background point.
4. The method according to claim 3, characterized in that calculating the global variance global_var in step B1 comprises calculating:
global_var = Σ_{x=0}^{width} Σ_{y=0}^{height} max(min(T_var_max, (p(x,y)_t − p(x,y)_{t−1})²), T_var_min) / (width * height);
where width is the width of the image, height is the height of the image, p(x,y)_t is the gray value of pixel point X at coordinate (x, y), p(x,y)_{t−1} is the gray value of the pixel point at coordinate (x, y) in the previous frame image, T_var_min = 4 and T_var_max = 144.
5. The method according to claim 3, characterized in that step B2 comprises:
B21. sorting the submodels in the background model corresponding to pixel point X in descending order of occurrence frequency, and marking the submodels whose occurrence frequency is less than a predetermined threshold T1 as invalid submodels and the remaining submodels as valid submodels; calculating the difference Dif between the gray value of pixel point X and the mean of the first-ranked valid submodel, and determining whether Dif satisfies the following condition:
Dif*Dif < T2*max_var;
where T2 is a predetermined threshold, max_var = max(global_var, variance), and variance is the variance of the corresponding submodel;
if satisfied, executing step B3, otherwise executing step B22;
B22. determining whether a next valid submodel exists; if not, executing step B4; if so, calculating the difference Dif between the gray value of pixel point X and the mean of the second-ranked valid submodel, and determining whether Dif satisfies: Dif*Dif < T2*max_var; if satisfied, executing step B3, otherwise executing step B23;
B23. determining whether a next valid submodel exists; if not, executing step B4; if so, calculating the difference Dif between the gray value of pixel point X and the mean of the third-ranked valid submodel, and determining whether Dif satisfies: Dif*Dif < T2*max_var; if satisfied, executing step B3, otherwise executing step B4.
6. The method according to claim 3, characterized in that updating all variables in the matched submodel in step B3 comprises setting:
variance_new = (1-α)*variance_old + Dif*Dif*α;
mean_new = (1-α)*mean_old + Dif*α;
freq_new = (1+α)*freq_old;
time_new = time_old + 1;
where variance_new, mean_new, freq_new and time_new are the variance, mean, occurrence frequency and appearance duration after the update; variance_old, mean_old, freq_old and time_old are the corresponding values before the update; α is a parameter controlling the model update rate; and Dif is the difference between the gray value of pixel point X and the mean of the matched submodel before the update;
updating the occurrence frequency and appearance duration in the two submodels other than the matched one in step B3 comprises setting:
freq_new = (1-α)*freq_old;
time_new = time_old + 1;
and updating all variables in the submodel with the lowest occurrence frequency in the background model corresponding to pixel point X in step B4 comprises setting:
mean_new = Y;
variance_new = 144;
freq_new = 0.3;
time_new = 1;
where Y is the gray value of pixel point X.
7. The method according to any one of claims 1 to 6, characterized in that step B further comprises: setting the gray value of a pixel point determined to be a foreground point to 1, and setting the gray value of a pixel point determined to be a background point to 0; and step C comprises:
C1. traversing, from top to bottom and from left to right, the pixel points in the image that have not been marked as tracking end points; if a pixel point's gray value is 1 and at least one of the four adjacent pixel points above, below, to the left and to the right of it has gray value 0, determining that this pixel point is an edge point and executing step C2; if no edge point can be found, executing step C5;
C2. creating a contour linked list with blob label N, adding the determined edge point to this contour linked list, marking the determined edge point as a tracking end point and taking it as the contour search reference point, then executing step C3; the initial value of N is 1;
C3. searching for a new edge point in the 3 × 3 region centered on the contour search reference point; whenever a new edge point is found, adding it to the contour linked list with blob label N, marking it as a tracking end point and taking it as the new contour search reference point; repeating step C3 until no new edge point can be found, then executing step C4;
C4. setting N = N + 1 and repeating step C1;
C5. taking the pixel points recorded in each contour linked list as one foreground blob, obtaining N foreground blobs, and taking each foreground blob as a detected target.
8. The method according to claim 7, characterized in that, after the N foreground blobs are obtained, the method further comprises: deleting the foreground blobs that do not satisfy a predetermined condition.
9. An object detection device, characterized by comprising:
a first processing module, configured to, when the first frame image is input, create a background model for each pixel point in the image and initialize each background model;
a second processing module, configured to, for each pixel point in every subsequently input frame image, update its corresponding background model according to its gray value and the gray value of the pixel point at the same coordinate position in the previous frame image, and determine whether the pixel point is a foreground point or a background point according to the update result;
a third processing module, configured to obtain one or more foreground blobs from the determined foreground points and background points, and take the foreground blobs as the detected targets.
10. The device according to claim 9, characterized in that the first processing module comprises:
a first processing unit, configured to create a background model for each pixel point, the background model comprising three submodels, namely submodel 1, submodel 2 and submodel 3, each submodel including four variables: mean, variance, occurrence frequency and appearance duration;
a second processing unit, configured to set the mean of each submodel 1 to the gray value of its corresponding pixel point, set its variance to a predetermined initial value, set its occurrence frequency to another predetermined initial value, and set its appearance duration to 1; all variables in each submodel 2 and submodel 3 are set to 0.
11. The device according to claim 10, characterized in that the second processing module comprises:
a third processing unit, configured to, for each pixel point X in every subsequently input frame image, calculate the global variance global_var, which reflects the change trend of the image, from its gray value and the gray value of the pixel point at the same coordinate position in the previous frame image;
a fourth processing unit, configured to determine, according to the calculated global_var, whether the background model corresponding to pixel point X contains a submodel matching pixel point X; if so, update all variables in the matched submodel, assign the updated appearance duration to the model matching variable match_value, update the occurrence frequency and appearance duration in the two submodels other than the matched one, and then notify the fifth processing unit to perform its own function; otherwise, update all variables in the submodel with the lowest occurrence frequency in the background model corresponding to pixel point X, assign 0 to the model matching variable match_value, and then notify the fifth processing unit to perform its own function;
a fifth processing unit, configured to determine that pixel point X is a foreground point when match_value is less than the predetermined threshold T_absorb, and a background point otherwise.
12. The device according to claim 11, characterized in that the global variance is
global_var = Σ_{x=0}^{width} Σ_{y=0}^{height} max(min(T_var_max, (p(x,y)_t − p(x,y)_{t−1})²), T_var_min) / (width * height);
where width is the width of the image, height is the height of the image, p(x,y)_t is the gray value of pixel point X at coordinate (x, y), p(x,y)_{t−1} is the gray value of the pixel point at coordinate (x, y) in the previous frame image, T_var_min = 4 and T_var_max = 144.
13. The device according to claim 11, characterized in that the fourth processing unit comprises:
a first processing subunit, configured to sort the submodels in the background model corresponding to pixel point X in descending order of occurrence frequency, mark the submodels whose occurrence frequency is less than a predetermined threshold T1 as invalid submodels and the remaining submodels as valid submodels, calculate the difference Dif between the gray value of pixel point X and the mean of the first-ranked valid submodel, and determine whether Dif satisfies: Dif*Dif < T2*max_var, where T2 is a predetermined threshold, max_var = max(global_var, variance), and variance is the variance of the corresponding submodel; if satisfied, notify the second processing subunit to perform its own function; otherwise, determine whether a next valid submodel exists; if not, notify the third processing subunit to perform its own function; if so, calculate the difference Dif between the gray value of pixel point X and the mean of the second-ranked valid submodel and determine whether Dif satisfies: Dif*Dif < T2*max_var; if satisfied, notify the second processing subunit to perform its own function; otherwise, determine whether a next valid submodel exists; if not, notify the third processing subunit to perform its own function; if so, calculate the difference Dif between the gray value of pixel point X and the mean of the third-ranked valid submodel and determine whether Dif satisfies: Dif*Dif < T2*max_var; if satisfied, notify the second processing subunit to perform its own function; otherwise, notify the third processing subunit to perform its own function;
a second processing subunit, configured to update all variables in the matched submodel, i.e. set variance_new = (1-α)*variance_old + Dif*Dif*α, mean_new = (1-α)*mean_old + Dif*α, freq_new = (1+α)*freq_old and time_new = time_old + 1, where variance_new, mean_new, freq_new and time_new are the variance, mean, occurrence frequency and appearance duration after the update, variance_old, mean_old, freq_old and time_old are the corresponding values before the update, α is a parameter controlling the model update rate, and Dif is the difference between the gray value of pixel point X and the mean of the matched submodel before the update; assign the updated appearance duration to the model matching variable match_value; update the occurrence frequency and appearance duration in the two submodels other than the matched one, i.e. set freq_new = (1-α)*freq_old and time_new = time_old + 1; and then notify the fifth processing unit to perform its own function;
a third processing subunit, configured to update all variables in the submodel with the lowest occurrence frequency in the background model corresponding to pixel point X, i.e. set mean_new = Y, variance_new = 144, freq_new = 0.3 and time_new = 1, where Y is the gray value of pixel point X; assign 0 to the model matching variable match_value; and then notify the fifth processing unit to perform its own function.
14. The device according to any one of claims 11 to 13, characterized in that the fifth processing unit is further configured to set the gray value of a pixel point determined to be a foreground point to 1 and the gray value of a pixel point determined to be a background point to 0; and the third processing module comprises:
a sixth processing unit, configured to traverse, from top to bottom and from left to right, the pixel points in the image that have not been marked as tracking end points; if a pixel point's gray value is 1 and at least one of the four adjacent pixel points above, below, to the left and to the right of it has gray value 0, determine that this pixel point is an edge point, create a contour linked list with blob label N, add the determined edge point to this contour linked list, mark it as a tracking end point and take it as the contour search reference point, and notify the seventh processing unit to perform its own function, the initial value of N being 1; if no edge point can be found, notify the eighth processing unit to perform its own function;
a seventh processing unit, configured to search for a new edge point in the 3 × 3 region centered on the contour search reference point; whenever a new edge point is found, add it to the contour linked list with blob label N, mark it as a tracking end point and take it as the new contour search reference point, repeating its own function until no new edge point can be found; then set N = N + 1, send the new N to the sixth processing unit and notify it to repeat its own function;
an eighth processing unit, configured to take the pixel points recorded in each contour linked list as one foreground blob, obtaining N foreground blobs, and take each foreground blob as a detected target.
15. The device according to claim 14, characterized in that the eighth processing unit is further configured to, after the N foreground blobs are obtained, delete the foreground blobs among them that do not satisfy a predetermined condition.
CN 201110190214 2011-06-21 2011-06-21 Method and device for detecting targets Active CN102236902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110190214 CN102236902B (en) 2011-06-21 2011-06-21 Method and device for detecting targets

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110190214 CN102236902B (en) 2011-06-21 2011-06-21 Method and device for detecting targets

Publications (2)

Publication Number Publication Date
CN102236902A true CN102236902A (en) 2011-11-09
CN102236902B CN102236902B (en) 2013-01-09

Family

ID=44887526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110190214 Active CN102236902B (en) 2011-06-21 2011-06-21 Method and device for detecting targets

Country Status (1)

Country Link
CN (1) CN102236902B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101094413A (en) * 2007-07-06 2007-12-26 浙江大学 Real time movement detection method in use for video monitoring
US20100098331A1 (en) * 2008-09-26 2010-04-22 Sony Corporation System and method for segmenting foreground and background in a video
CN102054277A (en) * 2009-11-09 2011-05-11 深圳市朗驰欣创科技有限公司 Method and system for detecting moving target, and video analysis system
CN101751669A (en) * 2009-12-17 2010-06-23 北京中星微电子有限公司 Static object detection method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yunhai Liu et al.: "Surveillance Video Denoising Based on Background Modeling", Communications and Networking in China, 2008. Third International Conference on *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102496272A (en) * 2011-11-16 2012-06-13 杭州海康威视数字技术股份有限公司 Method and system for detecting of traffic parking incidents
WO2014047856A1 (en) * 2012-09-27 2014-04-03 华为技术有限公司 Method and device for determining video foreground main image area
CN103810691A (en) * 2012-11-08 2014-05-21 杭州海康威视数字技术股份有限公司 Video-based automatic teller machine monitoring scene detection method and apparatus
CN106846297A (en) * 2016-12-21 2017-06-13 深圳市镭神智能系统有限公司 Pedestrian's flow quantity detecting system and method based on laser radar
CN108229256A (en) * 2016-12-21 2018-06-29 杭州海康威视数字技术股份有限公司 A kind of road construction detection method and device
CN108629230A (en) * 2017-03-16 2018-10-09 杭州海康威视数字技术股份有限公司 A kind of demographic method and device and elevator scheduling method and system
CN108629254A (en) * 2017-03-24 2018-10-09 杭州海康威视数字技术股份有限公司 A kind of detection method and device of moving target
CN107481249A (en) * 2017-08-11 2017-12-15 上海博超联石智能科技有限公司 A kind of data processing method of computer supervisory control system
CN111325701A (en) * 2018-12-14 2020-06-23 杭州海康威视数字技术股份有限公司 Image processing method, device and storage medium
CN111325701B (en) * 2018-12-14 2023-05-09 杭州海康微影传感科技有限公司 Image processing method, device and storage medium

Also Published As

Publication number Publication date
CN102236902B (en) 2013-01-09

Similar Documents

Publication Publication Date Title
CN102236902B (en) Method and device for detecting targets
CN101470809B (en) Moving object detection method based on expansion mixed gauss model
EP2858008A2 (en) Target detecting method and system
CN112052787A (en) Target detection method and device based on artificial intelligence and electronic equipment
CN104063885A (en) Improved movement target detecting and tracking method
EP3121791A1 (en) Method and system for tracking objects
CN103140876B (en) Information processing device, information processing method, program for information processing device, and recording medium
CN105354791A (en) Improved adaptive Gaussian mixture foreground detection method
CN103886617A (en) Method and device for detecting moving object
CN105184763A (en) Image processing method and device
CN105493147A (en) Systems, devices and methods for tracking objects on a display
Johansson et al. Combining shadow detection and simulation for estimation of vehicle size and position
WO2016021411A1 (en) Image processing device, image processing method, and program
CN105574891A (en) Method and system for detecting moving object in image
CN114647248B (en) Robot obstacle avoidance method, electronic device and readable storage medium
CN102270298B (en) Method and device for detecting laser point/area
CN108133491A (en) A kind of method for realizing dynamic target tracking
Zhang et al. New mixed adaptive detection algorithm for moving target with big data
CN103810718A (en) Method and device for detection of violently moving target
CN102930559A (en) Image processing method and device
WO2014084622A1 (en) Motion recognizing method through motion prediction
CN101833760A (en) Background modeling method and device based on image blocks
CN104766100A (en) Infrared small target image background predicting method and device based on machine learning
CN101901486A (en) Method for detecting moving target and device thereof
CN104063878A (en) Motion object detection device, motion object detection method and electronic device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: HANGZHOU HIKVISION DIGITAL TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: HANGZHOU HAIKANG WEISHI SOFTWARE CO., LTD.

Effective date: 20120904

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20120904

Address after: Hangzhou City, Zhejiang province 310051 Binjiang District East Road Haikang Science Park No. 700, No. 1

Applicant after: Hangzhou Hikvision Digital Technology Co., Ltd.

Address before: Hangzhou City, Zhejiang province 310051 Binjiang District East Road Haikang Science Park No. 700, No. 1

Applicant before: Hangzhou Haikang Weishi Software Co., Ltd.

C14 Grant of patent or utility model
GR01 Patent grant