CN105261037A - Moving object detection method capable of automatically adapting to complex scenes - Google Patents


Info

Publication number
CN105261037A
Authority
CN
China
Prior art keywords
image
background
model
sigma
video
Prior art date
Legal status
Granted
Application number
CN201510645189.4A
Other languages
Chinese (zh)
Other versions
CN105261037B (en)
Inventor
闫河
杨德红
刘婕
王朴
陈伟栋
Current Assignee
Chongqing University of Technology
Original Assignee
Chongqing University of Technology
Priority date
Filing date
Publication date
Application filed by Chongqing University of Technology
Priority to CN201510645189.4A
Publication of CN105261037A
Application granted
Publication of CN105261037B
Expired - Fee Related

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a moving-object detection method that adapts automatically to complex scenes, comprising the steps of: 1) applying illumination compensation to the video image; 2) obtaining the background image of each frame of the video using mixture-of-Gaussians background modelling; 3) obtaining the absolute difference image of each frame by the background-subtraction principle; 4) obtaining the optimal segmentation threshold of the grey-level probability model of each absolute difference image using the maximum-entropy segmentation principle; 5) binarizing each absolute difference image with its optimal segmentation threshold to obtain the foreground image; 6) performing morphological processing with structuring elements of different shapes; and 7) labelling the regions of each foreground image with a connected-component labelling algorithm and locking each labelled moving object with a bounding rectangle. The method achieves good adaptive detection accuracy and robustness for moving objects under different complex scenes such as drastic global illumination change, background disturbance and relative motion, and improves object-detection performance.

Description

A moving-object detection method that adapts automatically to complex scenes
Technical field
The present invention relates to intelligent video surveillance technology, and in particular to a moving-object detection method that adapts automatically to complex scenes; it belongs to the technical field of image processing.
Background technology
Moving-object detection is one of the key technologies in intelligent video surveillance and is the basis of follow-up studies such as target identification, tracking and behaviour analysis. The common moving-object detection methods are optical flow, frame differencing and background subtraction.
Optical flow estimates the motion of pixels between successive frames of an image sequence. Because the method considers only the pixels of the image, without associating them with a moving object, it is difficult to locate irregularly shaped targets accurately, and its computation is complex.
Frame differencing adapts well to scene changes, especially changes of illumination, but it is sensitive to ambient noise: the extracted target region is the union of the target's positions in the two neighbouring frames, and therefore larger than the real target region. If a tracked target in the scene shows no pronounced motion, the overlapping part of the target between the two frames is not detected, or the detected target region contains large holes, so the moving target cannot be extracted completely.
The key to background subtraction is background modelling and threshold selection; its basic principle is to subtract the background image from the current frame and apply a threshold to obtain the moving-target region. Traditional Gaussian background modelling, mean background modelling, median background modelling and the like are vulnerable to weather changes, sudden illumination changes, background disturbance, and relative motion between camera and target. In addition, a fixed threshold is not adaptive: if the threshold is chosen too low, it cannot suppress the noise in the image; if it is chosen too high, useful changes in the image are ignored, and for a large, uniformly coloured moving target, holes may appear inside the target so that it cannot be extracted completely. Although background subtraction detects targets well against a stationary background in ideal scenes, real scenes are complex, and weather changes, sudden global illumination changes, background disturbance and camera-target relative motion easily make moving-object detection inaccurate.
Summary of the invention
In view of the above shortcomings of the prior art, the object of the present invention is to provide a moving-object detection method that adapts automatically to complex scenes. The method achieves good adaptive detection accuracy and robustness for moving objects under different complex scenes such as drastic global illumination change, background disturbance and relative motion; it improves detection performance in complex scenes and provides a more robust basis for subsequent processing.
To achieve the object of the invention, the following technical scheme is adopted:
A moving-object detection method that adapts automatically to complex scenes, with the following steps:
1) acquire the video image and apply illumination compensation to it, to overcome the effect of sudden global illumination changes;
2) obtain the background image corresponding to each video frame using mixture-of-Gaussians background modelling;
3) from the extracted background image, obtain the absolute difference image of each frame by the background-subtraction principle, and apply median filtering to weaken the noise;
4) obtain the optimal segmentation threshold corresponding to the grey-level probability model of each filtered absolute difference image using the maximum-entropy segmentation principle;
5) binarize each filtered absolute difference image with its own optimal segmentation threshold to obtain the foreground image;
6) on the foreground image obtained in step 5), perform morphological processing with structuring elements of different shapes to eliminate the effect of small noise and to fill holes in parts of the moving-target regions: first erode once with a 3×3 cross-shaped template to remove small noise, then dilate twice with a 5×3 kernel, then erode once more;
7) label the regions of the foreground image processed in step 6) with a connected-component labelling algorithm, and lock each labelled moving target with a bounding rectangle.
The illumination compensation of step 1) is carried out as follows.
Let I(t) denote the input video frame and δ the maximum global illumination change allowed between two frames. First compute the average pixel value $\bar V(t)$ of each frame of the video; then apply the following compensation rule:

$$|\Delta V| = |\bar V(t) - \bar V(t-1)| > \delta$$

$$\bar I(t) = I(t) - \operatorname{sgn}(\Delta V)\,(|\Delta V| - \delta)$$

where sgn(·) is the sign function and $\bar I(t)$ is the compensated image.
The optimal segmentation threshold of step 4) is obtained as follows.
Let I(x, y) be an image of size M×N, where I(x, y) is the grey value at coordinate (x, y) and the grey values range over 0 to L−1, and let DF(x, y) be the filtered absolute difference image of step 3). Let $n_i$ be the number of pixels of the absolute difference image with grey value i, so that the total number of pixels is $N = \sum_{i=0}^{L-1} n_i = M \times N$, and let $p_i$ denote the probability of grey value i:

$$p_i = n_i / N,\quad i = 0, 1, 2, \ldots, L-1;$$

A candidate segmentation threshold T divides the pixel values into two classes by grey level: C0 = {0, 1, …, T}, representing the target object, and C1 = {T+1, T+2, …, L−1}, representing the background. With $P_D = \sum_{i=0}^{T} p_i$, the grey-value probability distributions of C0 and C1 are:

$$C_0: \frac{p_0}{P_D}, \frac{p_1}{P_D}, \frac{p_2}{P_D}, \ldots, \frac{p_T}{P_D};$$

$$C_1: \frac{p_{T+1}}{1-P_D}, \frac{p_{T+2}}{1-P_D}, \ldots, \frac{p_{L-1}}{1-P_D};$$

where L is the number of grey levels. The entropies of C0 and C1 are then:

$$C_0: H_0 = -\sum_{i=0}^{T} \frac{p_i}{P_D} \log\frac{p_i}{P_D};$$

$$C_1: H_1 = -\sum_{i=T+1}^{L-1} \frac{p_i}{1-P_D} \log\frac{p_i}{1-P_D};$$

From the entropies of C0 and C1, the total entropy H is:

$$H = H_0 + H_1$$

The grey level at which the entropy criterion reaches its maximum is the optimal segmentation threshold THR of the maximum-entropy algorithm:

$$THR = \arg\max_{0 < t < L} H(t);$$

Using the obtained optimal segmentation threshold THR, the filtered absolute difference image DF(x, y) is binarized to obtain the foreground image FI(x, y) of the video:

$$FI(x, y) = \begin{cases} 255, & DF(x, y) \ge THR \\ 0, & \text{otherwise.} \end{cases}$$
The concrete method of extracting the background image with mixture-of-Gaussians background modelling in step 2) is as follows.
The Gaussian mixture model of a pixel X is built from K single Gaussian probability models, see formula (3):

$$p(X_t) = \sum_{i=1}^{K} w_{i,t}\,\eta(X_t, \mu_{i,t}, \Sigma_{i,t}) \qquad (3)$$

where $p(X_t)$ is the probability that pixel value $X_t$ appears at time t; $w_{i,t}$ is the weight of the i-th Gaussian model at time t, the weights summing to 1; K is the total number of Gaussian models, typically 3 to 5; and $\eta(X_t, \mu_{i,t}, \Sigma_{i,t})$ is the i-th Gaussian model at time t with mean $\mu_{i,t}$, covariance matrix $\Sigma_{i,t}$ and dimension n, see formula (4):

$$\eta(X_t, \mu_{i,t}, \Sigma_{i,t}) = \frac{1}{(2\pi)^{n/2}\,|\Sigma_{i,t}|^{1/2}}\, e^{-\frac{1}{2}(X_t-\mu_{i,t})^T \Sigma_{i,t}^{-1} (X_t-\mu_{i,t})} \qquad (4)$$

The matching and update process of the mixture-of-Gaussians background model is as follows.
Model matching: the pixel value X of the current video frame is compared with the K existing Gaussian models; the current pixel value matches the i-th Gaussian model if formula (5) is satisfied, otherwise it does not match:

$$|X_t - \mu_{i,t-1}| < 2.5\,\sigma_{i,t-1} \qquad (5)$$

If no model matches, a new Gaussian distribution model is created with the mean of the current frame pixel and a relatively large variance.
The models are updated according to formula (6) based on the matching result:

$$\mu_t = (1-\alpha)\,\mu_{t-1} + \alpha X_t,\qquad \sigma_t^2 = (1-\alpha)\,\sigma_{t-1}^2 + \alpha(\mu_t - X_t)^2,\qquad w_{i,t} = (1-\alpha)\,w_{i,t-1} + \alpha M_{i,t} \qquad (6)$$

where α, called the learning rate, is the rate at which the current frame is absorbed into the background model; $M_{i,t} = 1$ if the i-th model matches and 0 otherwise, and for unmatched models μ and σ² remain unchanged.
Because a Gaussian model with smaller $\Sigma_{i,t}$ and larger weight is more likely to represent the distribution of a background pixel, the K Gaussian models of each pixel in each frame are sorted in decreasing order of w/σ, and the first B Gaussian distributions are taken as the background, forming the background image BI, see formula (7):

$$B = \arg\min_B \left( \sum_{k=1}^{B} w_k > T \right) \qquad (7)$$

where T is the threshold set for the background model, with value range [0.7, 0.8].
Compared with conventional methods, the present invention has the following beneficial effects:
1) The video background image does not need to be preset.
2) Building the background model with illumination compensation and a Gaussian mixture model effectively overcomes the effects of sudden illumination change, relative camera motion and background disturbance, yielding a more robust background image.
3) The invention introduces a maximum-entropy segmentation threshold computed separately for each absolute difference image (each absolute difference image may differ, whereas existing methods use one fixed threshold for all of them). This resolves the lack of adaptivity of a fixed threshold across the different complex-scene videos encountered in practice.
4) Good accuracy and robustness under different complex scenes such as sudden illumination change, background disturbance and relative motion.
Brief description of the drawings
Fig. 1: overall framework of the moving-object detection method of the present invention for adapting to complex scenes.
Fig. 2: flow chart of the mixture-of-Gaussians background modelling of the present invention.
Fig. 3: schematic diagram of step 2 of the present invention.
Embodiment
The general idea of the present invention is as follows. First, considering the degree of illumination variation, an illumination-compensation method is introduced to reduce the effect of illumination changes on subsequent detection. Second, since the key to background subtraction is background modelling and threshold selection, mixture-of-Gaussians background modelling is used to extract the background image and overcome the effect of a dynamic background on subsequent detection; on this basis, the absolute difference image is obtained by the background-subtraction principle, median filtering is introduced to first filter the absolute difference image and weaken the noise, and, to remedy the fixed-threshold defect of the original background-subtraction method, maximum-entropy segmentation is introduced to extract the threshold so as to adapt to different complex-scene videos. Third, because the obtained foreground image contains small noise and a single region may be disconnected, morphological processing with structuring elements of different shapes is applied to the foreground image to eliminate the effect of small noise and to fill holes in parts of the moving-target regions. Finally, the foreground objects are labelled with a connected-component labelling algorithm, and moving targets are locked according to connected-component size.
The concrete technical scheme of the present invention is as follows; its principle is shown in Fig. 1.
Step 1: acquire the video to be processed and build the background model with illumination compensation and a Gaussian mixture model, to obtain a more robust background image. The detailed process of obtaining the background image is as follows.
(1) Acquire the video sequence and first apply illumination compensation to the video image, to overcome the interference caused by sudden global illumination changes.
Let I(t) denote the input video frame and δ the maximum global illumination change allowed between two frames. First compute the average pixel value $\bar V(t)$ of each frame of the video; then apply the following rule:

$$|\Delta V| = |\bar V(t) - \bar V(t-1)| > \delta \qquad (1)$$

$$\bar I(t) = I(t) - \operatorname{sgn}(\Delta V)\,(|\Delta V| - \delta) \qquad (2)$$

where sgn(·) is the sign function and $\bar I(t)$ is the compensated image.
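As an illustration of equations (1) and (2), the rule can be sketched in NumPy for grayscale float frames. This is a minimal sketch, assuming the previous frame's mean is tracked externally; the function and parameter names are this sketch's own, not from the patent.

```python
import numpy as np

def illumination_compensate(frame, prev_mean, delta=10.0):
    """Suppress a sudden global illumination change between frames.

    frame:     current frame as a float grayscale ndarray
    prev_mean: average pixel value of the previous frame
    delta:     maximum allowed inter-frame global brightness change
    """
    cur_mean = float(frame.mean())
    dv = cur_mean - prev_mean                    # Delta V of equation (1)
    if abs(dv) > delta:
        # Equation (2): pull the frame's brightness back toward the
        # previous frame, leaving at most `delta` of genuine change.
        frame = frame - np.sign(dv) * (abs(dv) - delta)
    return np.clip(frame, 0, 255)
```

In a real pipeline, `prev_mean` would be the mean of the previous compensated frame, and δ would be tuned to the scene.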
(2) On this basis, the background image is extracted by mixture-of-Gaussians background modelling, which is applicable to dynamic scenes such as relative camera motion, background disturbance and weather changes; the flow of the modelling is shown in Fig. 2.
The mixture-of-Gaussians background model is an extension of the single Gaussian model and can approximate a probability distribution of any shape. In this model, the variation of a pixel value in the video sequence is treated as a random process obeying a Gaussian distribution, and the Gaussian mixture model of a pixel X is built from K single Gaussian probability models, see formula (3).

$$p(X_t) = \sum_{i=1}^{K} w_{i,t}\,\eta(X_t, \mu_{i,t}, \Sigma_{i,t}) \qquad (3)$$

where $p(X_t)$ is the probability that pixel value $X_t$ appears at time t; $w_{i,t}$ is the weight of the i-th Gaussian model at time t, the weights summing to 1; K is the total number of Gaussian models, generally 3 to 5; and $\eta(X_t, \mu_{i,t}, \Sigma_{i,t})$ is the i-th Gaussian model at time t with mean $\mu_{i,t}$, covariance matrix $\Sigma_{i,t}$ and dimension n, see formula (4).

$$\eta(X_t, \mu_{i,t}, \Sigma_{i,t}) = \frac{1}{(2\pi)^{n/2}\,|\Sigma_{i,t}|^{1/2}}\, e^{-\frac{1}{2}(X_t-\mu_{i,t})^T \Sigma_{i,t}^{-1} (X_t-\mu_{i,t})} \qquad (4)$$
The mixture-of-Gaussians background model mainly concerns matching and replacement; its matching and update process is as follows.
Model matching: the pixel value X of the current video frame is compared with the K existing Gaussian models; the current pixel value matches the i-th Gaussian model if formula (5) is satisfied, otherwise it does not match.

$$|X_t - \mu_{i,t-1}| < 2.5\,\sigma_{i,t-1} \qquad (5)$$

If no model matches, a new Gaussian distribution model is created with the mean of the current frame pixel and a relatively large variance.
The models are updated according to formula (6) based on the matching result.

$$\mu_t = (1-\alpha)\,\mu_{t-1} + \alpha X_t,\qquad \sigma_t^2 = (1-\alpha)\,\sigma_{t-1}^2 + \alpha(\mu_t - X_t)^2,\qquad w_{i,t} = (1-\alpha)\,w_{i,t-1} + \alpha M_{i,t} \qquad (6)$$

where α, called the learning rate, is the rate at which the current frame is absorbed into the background model; $M_{i,t} = 1$ if the i-th model matches and 0 otherwise, and for unmatched models μ and σ² remain unchanged.
Because a Gaussian model with smaller $\Sigma_{i,t}$ and larger weight is more likely to represent the distribution of a background pixel, the K Gaussian models of each pixel in each frame are sorted in decreasing order of w/σ, and the first B Gaussian distributions are taken as the background, forming the background image BI, see formula (7).

$$B = \arg\min_B \left( \sum_{k=1}^{B} w_k > T \right) \qquad (7)$$

where T is the threshold set for the background model: if T is small, the model degenerates to a single Gaussian distribution; if T is large, a more complex background can be represented. Extensive experiments show the optimal range of T is [0.7, 0.8].
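The matching, update and background-selection rules of formulas (5) to (7) can be sketched for a single grayscale pixel as follows. This is an illustrative simplification with scalar variances; the class name, parameter names and defaults are this sketch's own choices, not taken from the patent.

```python
import numpy as np

class PixelGMM:
    """Minimal scalar mixture-of-Gaussians model for one pixel (K modes)."""

    def __init__(self, k=3, alpha=0.01, var0=36.0, T=0.75):
        self.k, self.alpha, self.var0, self.T = k, alpha, var0, T
        self.mu = np.zeros(k)            # per-mode means
        self.var = np.full(k, var0)      # per-mode variances
        self.w = np.full(k, 1.0 / k)     # per-mode weights, sum to 1

    def update(self, x):
        d = np.abs(x - self.mu)
        match = d < 2.5 * np.sqrt(self.var)          # formula (5)
        if match.any():
            i = int(np.argmax(match))
            a = self.alpha
            self.mu[i] = (1 - a) * self.mu[i] + a * x          # formula (6)
            self.var[i] = (1 - a) * self.var[i] + a * (self.mu[i] - x) ** 2
            self.w = (1 - a) * self.w
            self.w[i] += a
        else:
            # No match: replace the least probable mode with a new
            # Gaussian centred on x with a large variance.
            j = int(np.argmin(self.w / np.sqrt(self.var)))
            self.mu[j], self.var[j], self.w[j] = x, self.var0, self.alpha
        self.w /= self.w.sum()

    def background_mean(self):
        # Sort modes by w/sigma and keep the first B whose cumulative
        # weight exceeds T (formula (7)); return their weighted mean.
        order = np.argsort(-self.w / np.sqrt(self.var))
        cum = np.cumsum(self.w[order])
        b = int(np.searchsorted(cum, self.T)) + 1
        idx = order[:b]
        return float(np.average(self.mu[idx], weights=self.w[idx]))
```

A full background model would hold one such mixture per pixel; production code would instead use a vectorised or library implementation such as OpenCV's MOG2 background subtractor.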
Step 2: obtain the absolute difference image D(x, y) by the background-subtraction principle, and apply median filtering to it. The basic principle of background subtraction is to take the absolute difference between the current frame and the background image, as in formula (8).

$$D(x, y) = |I(x, y) - BI(x, y)| \qquad (8)$$

where I(x, y) is the current video frame and BI(x, y) is the background image obtained in Step 1; a schematic diagram of Step 2 is shown in Fig. 3.
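A sketch of Step 2, assuming grayscale frames. In practice OpenCV's `cv2.absdiff` and `cv2.medianBlur` would do this in two calls; a pure-NumPy version makes the two operations explicit.

```python
import numpy as np

def abs_difference(frame, background, k=3):
    """Absolute difference image D(x,y) = |I(x,y) - BI(x,y)| (formula (8)),
    followed by a k-by-k median filter to weaken impulsive noise."""
    d = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    pad = k // 2
    dp = np.pad(d, pad, mode='edge')
    h, w = d.shape
    # Stack every k*k shifted view and take the per-pixel median.
    windows = np.stack([dp[i:i + h, j:j + w]
                        for i in range(k) for j in range(k)])
    return np.median(windows, axis=0).astype(np.uint8)
```

The int16 cast avoids uint8 wrap-around before the absolute value is taken.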
Step 3: adaptively obtain the optimal threshold of each absolute difference image by the maximum-entropy segmentation method, and binarize to obtain the foreground image. The detailed process of obtaining the foreground image is as follows.
Let I(x, y) be an image of size M×N, where I(x, y) is the grey value at coordinate (x, y) and the grey values range over 0 to L−1; Step 2 yields the filtered absolute difference image DF(x, y). Let $n_i$ be the number of pixels of the absolute difference image with grey value i, so that the total number of pixels is $N = \sum_{i=0}^{L-1} n_i = M \times N$, and let $p_i$ be the probability of grey value i:

$$p_i = n_i / N,\quad i = 0, 1, 2, \ldots, L-1 \qquad (9)$$

A candidate segmentation threshold T divides the pixel values into two classes by grey level: C0 = {0, 1, …, T}, representing the target object, and C1 = {T+1, T+2, …, L−1}, representing the background. With $P_D = \sum_{i=0}^{T} p_i$, the grey-value probability distributions of C0 and C1 are:

$$C_0: \frac{p_0}{P_D}, \frac{p_1}{P_D}, \frac{p_2}{P_D}, \ldots, \frac{p_T}{P_D} \qquad (10)$$

$$C_1: \frac{p_{T+1}}{1-P_D}, \frac{p_{T+2}}{1-P_D}, \ldots, \frac{p_{L-1}}{1-P_D} \qquad (11)$$

where L is the number of grey levels. The entropies of C0 and C1 are given by formulas (12) and (13) respectively.

$$C_0: H_0 = -\sum_{i=0}^{T} \frac{p_i}{P_D} \log\frac{p_i}{P_D} \qquad (12)$$

$$C_1: H_1 = -\sum_{i=T+1}^{L-1} \frac{p_i}{1-P_D} \log\frac{p_i}{1-P_D} \qquad (13)$$

From the entropies of C0 and C1, the total entropy H is:

$$H = H_0 + H_1 \qquad (14)$$

The grey level at which the entropy criterion reaches its maximum is the optimal threshold THR of the maximum-entropy algorithm, see formula (15).

$$THR = \arg\max_{0 < t < L} H(t) \qquad (15)$$

The difference image DF(x, y) from Step 2 is binarized with the obtained optimal threshold THR to yield the foreground image FI(x, y) of the video, see formula (16).

$$FI(x, y) = \begin{cases} 255, & DF(x, y) \ge THR \\ 0, & \text{otherwise} \end{cases} \qquad (16)$$
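Formulas (9) to (16) can be sketched as follows, as an illustrative pure-NumPy implementation of the maximum-entropy (Kapur-style) threshold and the binarization; function names are this sketch's own.

```python
import numpy as np

def max_entropy_threshold(img, levels=256):
    """Maximum-entropy threshold over the grey histogram: pick the t
    maximising H0(t) + H1(t), per formulas (9)-(15)."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                        # formula (9)
    cum = np.cumsum(p)                           # P_D as a function of t
    best_t, best_h = 0, -np.inf
    for t in range(levels - 1):
        pd = cum[t]
        if pd <= 0 or pd >= 1:                   # one class empty: skip
            continue
        p0 = p[:t + 1] / pd                      # class C0 distribution
        p1 = p[t + 1:] / (1 - pd)                # class C1 distribution
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))   # formula (12)
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))   # formula (13)
        if h0 + h1 > best_h:                     # formulas (14)-(15)
            best_h, best_t = h0 + h1, t
    return best_t

def binarize(df, thr):
    """FI(x,y) = 255 where DF(x,y) >= THR, else 0 (formula (16))."""
    return np.where(df >= thr, 255, 0).astype(np.uint8)
```

The per-threshold loop is O(L²) in the worst case but cheap for L = 256; incremental entropy updates would remove the inner sums if needed.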
Step 4: apply morphological operations to the foreground image obtained in Step 3, to eliminate the effect of small noise and to fill holes in parts of the moving-target regions.
The morphological processing removes the effect of small noise and fills holes in parts of the moving-target regions: first erode once with a 3×3 cross-shaped template to remove small noise, then dilate twice with a 5×3 kernel, then erode once more. The larger vertical extent of the kernel is chosen because a common pedestrian, as the moving-target object, may appear with the head and torso disconnected; the tall kernel provides a little compensation.
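A sketch of the Step 4 sequence with hand-rolled binary morphology. In practice `cv2.erode`/`cv2.dilate` with `cv2.getStructuringElement` would be used; the kernel of the final erosion is not specified in the text, so the 3×3 cross is assumed here.

```python
import numpy as np

def erode(img, kernel):
    """Binary erosion: a pixel survives only if every 1-entry of the
    kernel lands on foreground."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), constant_values=0)
    h, w = img.shape
    out = np.ones((h, w), np.uint8)
    for i in range(kh):
        for j in range(kw):
            if kernel[i, j]:
                out &= (padded[i:i + h, j:j + w] > 0).astype(np.uint8)
    return out * 255

def dilate(img, kernel):
    """Binary dilation: a pixel turns on if any 1-entry of the kernel
    lands on foreground."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), constant_values=0)
    h, w = img.shape
    out = np.zeros((h, w), np.uint8)
    for i in range(kh):
        for j in range(kw):
            if kernel[i, j]:
                out |= (padded[i:i + h, j:j + w] > 0).astype(np.uint8)
    return out * 255

# Step 4's structuring elements: a 3x3 cross, then a tall 5x3 rectangle
# that helps reconnect a pedestrian's head and torso.
CROSS3 = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], np.uint8)
RECT5x3 = np.ones((5, 3), np.uint8)

def clean_foreground(fi):
    """Erode once with the 3x3 cross to remove specks, dilate twice with
    the 5x3 kernel to fill holes, then erode once more (cross assumed)."""
    out = erode(fi, CROSS3)
    out = dilate(out, RECT5x3)
    out = dilate(out, RECT5x3)
    return erode(out, CROSS3)
```

Zero padding at the borders means structures touching the image edge are eroded slightly; OpenCV's border-replication default behaves differently there.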
Step 5: label the regions of the foreground image with a connected-component labelling algorithm, and lock each labelled moving target with a bounding rectangle.
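Step 5 can be sketched with a breadth-first connected-component labelling that returns one bounding rectangle per sufficiently large region. The `min_area` filter is this sketch's assumption (the patent locks targets by connected-component size without giving a value); `cv2.connectedComponentsWithStats` provides the same result directly.

```python
import numpy as np
from collections import deque

def label_regions(fi, min_area=20):
    """4-connected region labelling by BFS; returns one bounding box
    (x, y, w, h) per region of at least `min_area` pixels, so that
    small residual blobs are ignored."""
    h, w = fi.shape
    seen = np.zeros((h, w), bool)
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if fi[sy, sx] and not seen[sy, sx]:
                # Flood the component starting at (sy, sx).
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                pts = []
                while q:
                    y, x = q.popleft()
                    pts.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and fi[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(pts) >= min_area:
                    ys = [p[0] for p in pts]
                    xs = [p[1] for p in pts]
                    boxes.append((min(xs), min(ys),
                                  max(xs) - min(xs) + 1,
                                  max(ys) - min(ys) + 1))
    return boxes
```

Each returned box is the rectangle that "locks" one detected moving target and can be drawn on the frame for display.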
As can be seen from the foregoing description, the present invention mainly solves the following problems.
1. The background-modelling problem
The invention builds the background model with illumination compensation and a Gaussian mixture model, effectively overcoming the effects of sudden global illumination change, relative camera motion and background disturbance.
2. The fixed-threshold problem
The invention introduces a maximum-entropy segmentation threshold computed separately for each absolute difference image, which resolves the lack of adaptivity of a fixed threshold across the different complex-scene videos encountered in practice. For example, a threshold chosen too low cannot suppress the noise in the image; one chosen too high ignores useful changes in the image, and for a large, uniformly coloured moving target, holes may appear inside the target so that it cannot be extracted completely.
3. Good moving-object detection accuracy and robustness
The invention first builds the background model with illumination compensation and a Gaussian mixture model; then obtains the absolute difference image by the background-subtraction principle; then adaptively obtains the optimal threshold of the absolute difference image by the maximum-entropy segmentation method and binarizes it to obtain the foreground image; then applies morphological operations to the foreground image to eliminate the effect of small noise and fill holes in parts of the moving-target regions; and finally labels the regions of the foreground image with a connected-component labelling algorithm and locks each labelled moving target with a bounding rectangle. The method achieves good moving-object detection accuracy and robustness under different complex scenes such as global illumination change, background disturbance and relative motion.
Finally, it should be noted that the above embodiment only illustrates the technical scheme of the present invention and does not restrict it. Although the invention has been described in detail with reference to a preferred embodiment, those of ordinary skill in the art should understand that modifications or equivalent substitutions may be made to the technical scheme of the present invention without departing from its purpose and scope, and all such changes are intended to be encompassed within the claims of the present invention.

Claims (4)

1. A moving-object detection method that adapts automatically to complex scenes, characterized in that the steps are as follows:
1) acquiring the video image and applying illumination compensation to it, to overcome the effect of sudden global illumination changes;
2) obtaining the background image corresponding to each video frame using mixture-of-Gaussians background modelling;
3) from the extracted background image, obtaining the absolute difference image of each frame by the background-subtraction principle, and applying median filtering to weaken the noise;
4) obtaining the optimal segmentation threshold corresponding to the grey-level probability model of each filtered absolute difference image using the maximum-entropy segmentation principle;
5) binarizing each filtered absolute difference image with its own optimal segmentation threshold to obtain the foreground image;
6) on the foreground image obtained in step 5), performing morphological processing with structuring elements of different shapes to eliminate the effect of small noise and to fill holes in parts of the moving-target regions: first eroding once with a 3×3 cross-shaped template to remove small noise, then dilating twice with a 5×3 kernel, then eroding once more;
7) labelling the regions of the foreground image processed in step 6) with a connected-component labelling algorithm, and locking each labelled moving target with a bounding rectangle.
2. The moving-object detection method that adapts automatically to complex scenes according to claim 1, characterized in that the illumination compensation of step 1) is carried out as follows:
let I(t) denote the input video frame and δ the maximum global illumination change allowed between two frames; first compute the average pixel value $\bar V(t)$ of each frame of the video, then apply the following rule:

$$|\Delta V| = |\bar V(t) - \bar V(t-1)| > \delta$$

$$\bar I(t) = I(t) - \operatorname{sgn}(\Delta V)\,(|\Delta V| - \delta)$$

where sgn(·) is the sign function and $\bar I(t)$ is the compensated image.
3. The moving-object detection method that adapts automatically to complex scenes according to claim 1, characterized in that the optimal segmentation threshold of step 4) is obtained as follows:
let I(x, y) be an image of size M×N, where I(x, y) is the grey value at coordinate (x, y) and the grey values range over 0 to L−1, and let DF(x, y) be the filtered absolute difference image of step 3); let $n_i$ be the number of pixels of the absolute difference image with grey value i, so that the total number of pixels is $N = \sum_{i=0}^{L-1} n_i = M \times N$, and let $p_i$ denote the probability of grey value i:

$$p_i = n_i / N,\quad i = 0, 1, 2, \ldots, L-1;$$

a candidate segmentation threshold T divides the pixel values into two classes by grey level, C0 = {0, 1, …, T} representing the target object and C1 = {T+1, T+2, …, L−1} representing the background; with $P_D = \sum_{i=0}^{T} p_i$, the grey-value probability distributions of C0 and C1 are:

$$C_0: \frac{p_0}{P_D}, \frac{p_1}{P_D}, \frac{p_2}{P_D}, \ldots, \frac{p_T}{P_D};$$

$$C_1: \frac{p_{T+1}}{1-P_D}, \frac{p_{T+2}}{1-P_D}, \ldots, \frac{p_{L-1}}{1-P_D};$$

where L is the number of grey levels; the entropies of C0 and C1 are then:

$$C_0: H_0 = -\sum_{i=0}^{T} \frac{p_i}{P_D} \log\frac{p_i}{P_D};$$

$$C_1: H_1 = -\sum_{i=T+1}^{L-1} \frac{p_i}{1-P_D} \log\frac{p_i}{1-P_D};$$

from the entropies of C0 and C1, the total entropy H is:

$$H = H_0 + H_1$$

the grey level at which the entropy criterion reaches its maximum is the optimal segmentation threshold THR of the maximum-entropy algorithm:

$$THR = \arg\max_{0 < t < L} H(t);$$

using the obtained optimal segmentation threshold THR, the filtered absolute difference image DF(x, y) is binarized to obtain the foreground image FI(x, y) of the video:

$$FI(x, y) = \begin{cases} 255, & DF(x, y) \ge THR \\ 0, & \text{otherwise.} \end{cases}$$
4. the moving target detecting method of self-adaptation complex scene according to claim 1, is characterized in that: step 2) utilize mixed Gaussian background modeling method to extract the concrete grammar of background image to be,
The Gaussian mixture model of a given pixel X is constructed from K single Gaussian probability models, as shown in formula (3):

p(X_t) = \sum_{i=1}^{K} w_{i,t} \cdot \eta(X_t, \mu_{i,t}, \Sigma_{i,t})   (3)

where p(X_t) is the probability of observing pixel value X_t at time t; w_{i,t} is the weight of the i-th Gaussian model at time t, the weights summing to 1; K is the total number of Gaussian models, taken as 3 to 5; and \eta(X_t, \mu_{i,t}, \Sigma_{i,t}) is the i-th Gaussian model at time t, with mean \mu_{i,t}, covariance matrix \Sigma_{i,t}, and dimension n, as shown in formula (4):

\eta(X_t, \mu_{i,t}, \Sigma_{i,t}) = \frac{1}{(2\pi)^{n/2} |\Sigma_{i,t}|^{1/2}} e^{-\frac{1}{2}(X_t - \mu_{i,t})^{T} \Sigma_{i,t}^{-1} (X_t - \mu_{i,t})}   (4)
The matching and updating process of the mixture-of-Gaussians background model is as follows:
Model matching: the pixel value X_t of the current video frame is compared against the K existing Gaussian models; if the i-th Gaussian model satisfies formula (5), the current pixel value is considered matched, otherwise unmatched:

|X_t - \mu_{i,t-1}| < 2.5 \cdot \sigma_{i,t-1}   (5)

If no model matches, a new Gaussian distribution model is established, with its mean taken from the current video frame and a relatively large initial variance;
The models are updated according to the matching result using formula (6):

\mu_t = (1-\alpha) \cdot \mu_{t-1} + \alpha \cdot X_t
\sigma_t^2 = (1-\alpha) \cdot \sigma_{t-1}^2 + \alpha \cdot (\mu_t - X_t)^2
w_{i,t} = (1-\alpha) \cdot w_{i,t-1} + \alpha \cdot M_{i,t}   (6)

where \alpha, called the learning rate, controls the speed at which the current video frame is absorbed into the background model; M_{i,t} = 1 if the i-th model matches, and 0 otherwise, in which case its \mu and \sigma^2 remain unchanged;
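The matching rule of formula (5) and the update rule of formula (6) can be sketched for a single gray-scale pixel tracked over time. In the code below, K, the learning rate ALPHA, the initial variance INIT_VAR, and the weakest-model replacement policy are illustrative assumptions, not values fixed by the claim.

```python
import numpy as np

K = 3             # number of Gaussian models per pixel (3-5 in the claim)
ALPHA = 0.05      # learning rate alpha (illustrative value)
INIT_VAR = 900.0  # the "larger variance" given to a newly created model

def update_pixel(x, mu, var, w):
    """One time step: match x against the K models (formula 5); update the
    matched model's mean/variance and all weights (formula 6); if nothing
    matches, replace the weakest model with a new Gaussian centered on x."""
    mu, var, w = mu.copy(), var.copy(), w.copy()
    matched = np.abs(x - mu) < 2.5 * np.sqrt(var)    # formula (5)
    m = np.zeros(K)                                  # M_{i,t} indicator
    if matched.any():
        i = int(np.argmax(matched))                  # first matching model
        mu[i] = (1 - ALPHA) * mu[i] + ALPHA * x      # formula (6), mean
        var[i] = (1 - ALPHA) * var[i] + ALPHA * (mu[i] - x) ** 2
        m[i] = 1.0
    else:
        i = int(np.argmin(w))                        # replace weakest model
        mu[i], var[i] = x, INIT_VAR                  # mean from current frame
    w = (1 - ALPHA) * w + ALPHA * m                  # formula (6), weights
    return mu, var, w / w.sum()                      # keep weights summing to 1
```

Repeated observations near a model's mean pull that model toward the data and raise its weight, while a sudden unmatched value spawns a new, low-confidence model instead of corrupting the existing ones.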
Since a Gaussian probability distribution model with a smaller \Sigma_{i,t} and a larger weight is more likely to approximate the background pixel distribution, the K Gaussian probability distribution models of each pixel in every video frame are sorted in decreasing order of w/\sigma, and the first B distributions are taken as the background, forming the background image BI, as in formula (7):

B = \arg\min_{b} \left( \sum_{k=1}^{b} w_k > T \right)   (7)

where T is the threshold set for the background model, with T in the range [0.7, 0.8].
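The background selection of formula (7) reduces to a sort by w/\sigma and a cumulative sum over the sorted weights. In this sketch the default threshold 0.75 is one choice within the claimed range [0.7, 0.8], and the function name is an assumption.

```python
import numpy as np

def background_models(mu, var, w, t=0.75):
    """Sort the K models by decreasing w/sigma and return the means of the
    first B models whose cumulative weight first exceeds t (formula 7)."""
    order = np.argsort(-(w / np.sqrt(var)))   # decreasing w/sigma
    csum = np.cumsum(w[order])
    b = int(np.searchsorted(csum, t)) + 1     # smallest B with sum > t
    return mu[order][:b]                      # means of the background models
```

A pixel of the current frame that matches one of these B distributions is then classified as background; the remaining, low-ranked distributions represent transient foreground.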
CN201510645189.4A 2015-10-08 2015-10-08 A kind of moving target detecting method of adaptive complex scene Expired - Fee Related CN105261037B (en)


Publications (2)

Publication Number Publication Date
CN105261037A true CN105261037A (en) 2016-01-20
CN105261037B CN105261037B (en) 2018-11-02

Family

ID=55100708


Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106157303A (en) * 2016-06-24 2016-11-23 浙江工商大学 A kind of method based on machine vision to Surface testing
CN106205217A (en) * 2016-06-24 2016-12-07 华中科技大学 Unmanned plane automatic testing method based on machine vision and unmanned plane method of control
CN106651923A (en) * 2016-12-13 2017-05-10 中山大学 Method and system for video image target detection and segmentation
CN107507221A (en) * 2017-07-28 2017-12-22 天津大学 With reference to frame difference method and the moving object detection and tracking method of mixed Gauss model
CN107563272A (en) * 2017-06-14 2018-01-09 南京理工大学 Target matching method in a kind of non-overlapping visual field monitoring system
CN107909608A (en) * 2017-10-30 2018-04-13 北京航天福道高技术股份有限公司 The moving target localization method and device suppressed based on mutual information and local spectrum
CN108376406A (en) * 2018-01-09 2018-08-07 公安部上海消防研究所 A kind of Dynamic Recurrent modeling and fusion tracking method for channel blockage differentiation
CN108550163A (en) * 2018-04-19 2018-09-18 湖南理工学院 Moving target detecting method in a kind of complex background scene
CN108960253A (en) * 2018-06-27 2018-12-07 魏巧萍 A kind of object detection system
CN109145805A (en) * 2018-08-15 2019-01-04 深圳市豪恩汽车电子装备股份有限公司 Moving target detection method and system under vehicle-mounted environment
CN109166137A (en) * 2018-08-01 2019-01-08 上海电力学院 For shake Moving Object in Video Sequences detection algorithm
CN109345472A (en) * 2018-09-11 2019-02-15 重庆大学 A kind of infrared moving small target detection method of complex scene
CN109727266A (en) * 2019-01-08 2019-05-07 青岛一舍科技有限公司 A method of the target person photo based on the pure view background of video acquisition
CN109784164A (en) * 2018-12-12 2019-05-21 北京达佳互联信息技术有限公司 Prospect recognition methods, device, electronic equipment and storage medium
CN109978917A (en) * 2019-03-12 2019-07-05 黑河学院 A kind of Dynamic Object Monitoring System monitoring device and its monitoring method
CN110348305A (en) * 2019-06-06 2019-10-18 西北大学 A kind of Extracting of Moving Object based on monitor video
CN110472569A (en) * 2019-08-14 2019-11-19 旭辉卓越健康信息科技有限公司 A kind of method for parallel processing of personnel detection and identification based on video flowing
CN111311644A (en) * 2020-01-15 2020-06-19 电子科技大学 Moving target detection method based on video SAR
CN111583279A (en) * 2020-05-12 2020-08-25 重庆理工大学 Super-pixel image segmentation method based on PCBA
CN111652910A (en) * 2020-05-22 2020-09-11 重庆理工大学 Target tracking algorithm based on object space relationship
CN111723634A (en) * 2019-12-17 2020-09-29 中国科学院上海微系统与信息技术研究所 Image detection method and device, electronic equipment and storage medium
CN112446889A (en) * 2020-07-01 2021-03-05 龚循安 Medical video reading method based on ultrasound
CN113052940A (en) * 2021-03-14 2021-06-29 西北工业大学 Space-time associated map real-time construction method based on sonar
CN114463389A (en) * 2022-04-14 2022-05-10 广州联客信息科技有限公司 Moving target detection method and detection system
CN116182871A (en) * 2023-04-26 2023-05-30 河海大学 Sea cable detection robot attitude estimation method based on second-order hybrid filtering

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
孟介成: "Research on Moving Object Detection and Its DSP Implementation", China Masters' Theses Full-text Database, Information Science and Technology (Monthly) *
杨帆 et al. (eds.): "Digital Image Processing and Analysis", Beijing: Beihang University Press, 31 October 2007 *
杨霖: "A Moving Object Detection Method Adapting to Sudden Illumination Changes", Technology Innovation and Application *
闫河 et al.: "A New Particle Filter Target Tracking Method Based on Feature Fusion", Journal of Optoelectronics · Laser *
陈鹏: "A New Maximum Entropy Threshold Image Segmentation Method", Computer Science *
高凯亮 et al.: "A Pixel-Classification Moving Object Detection Method under a Gaussian Mixture Background Model", Journal of Nanjing University (Natural Science) *



Similar Documents

Publication Publication Date Title
CN105261037A (en) Moving object detection method capable of automatically adapting to complex scenes
CN103258332B (en) A kind of detection method of the moving target of resisting illumination variation
CN103971386B (en) A kind of foreground detection method under dynamic background scene
CN112036254B (en) Moving vehicle foreground detection method based on video image
CN106846359A (en) Moving target method for quick based on video sequence
CN103077539A (en) Moving object tracking method under complicated background and sheltering condition
CN101420536B (en) Background image modeling method for video stream
CN105631898B (en) The infrared motion target detection method that conspicuousness merges when based on sky
CN103871076A (en) Moving object extraction method based on optical flow method and superpixel division
CN109345472A (en) A kind of infrared moving small target detection method of complex scene
CN105488811A (en) Depth gradient-based target tracking method and system
CN102663362B (en) Moving target detection method based on gray features
CN105427626A (en) Vehicle flow statistics method based on video analysis
CN102915544A (en) Video image motion target extracting method based on pattern detection and color segmentation
CN104318211A (en) Anti-shielding face tracking method
CN106204594A (en) A kind of direction detection method of dispersivity moving object based on video image
CN107895379A (en) The innovatory algorithm of foreground extraction in a kind of video monitoring
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing
CN108022249A (en) A kind of remote sensing video satellite moving vehicle target region of interest extraction method
CN104537688A (en) Moving object detecting method based on background subtraction and HOG features
Chen et al. Research on moving object detection based on improved mixture Gaussian model
Xu et al. Moving object detection based on improved three frame difference and background subtraction
CN112766056B (en) Method and device for detecting lane lines in low-light environment based on deep neural network
CN104933728A (en) Mixed motion target detection method
CN113077494A (en) Road surface obstacle intelligent recognition equipment based on vehicle orbit

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181102

Termination date: 20191008
