CN103106633B - Video foreground object screenshot method and system based on a Gaussian mixture model - Google Patents

Video foreground object screenshot method and system based on a Gaussian mixture model

Info

Publication number
CN103106633B
CN103106633B CN201210511000.9A
Authority
CN
China
Prior art keywords
pixel
video
frame
shadow
foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210511000.9A
Other languages
Chinese (zh)
Other versions
CN103106633A (en)
Inventor
Zheng Liansong (郑连松)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taizhou hazens Mdt InfoTech Ltd
Original Assignee
Taizhou Hazens Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taizhou Hazens Mdt Infotech Ltd filed Critical Taizhou Hazens Mdt Infotech Ltd
Priority to CN201210511000.9A priority Critical patent/CN103106633B/en
Publication of CN103106633A publication Critical patent/CN103106633A/en
Application granted granted Critical
Publication of CN103106633B publication Critical patent/CN103106633B/en


Abstract

The invention belongs to the field of video analysis and processing technology and discloses a video foreground object screenshot method based on a Gaussian mixture model. The method first decodes the video to be captured; then, from the frames obtained by decoding, it establishes an initial Gaussian mixture model for each pixel of the video and establishes an initial background model based on this Gaussian mixture model. Next, each frame is compared with the background model pixel by pixel to judge whether a foreground object is present in the frame, while the background model is updated at the same time. The frames that contain a foreground object are saved as the screenshot result. Aimed at the problem of analyzing and applying moving objects in surveillance video, the invention designs and implements a complete analysis framework and, combined with an operation plan that uses human resources efficiently, provides a complete video dynamic screenshot method and system that saves manpower and time while meeting accuracy requirements.

Description

Video foreground object screenshot method and system based on a Gaussian mixture model
Technical field
The invention belongs to the field of video analysis and processing technology and relates mainly to the processing of surveillance video, specifically to a method for extracting, from a surveillance video, the frames or short clips that contain moving objects.
Background art
With the development of digital imaging technology, video surveillance has become a widespread safety-precaution measure both in daily life and in professional fields such as criminal investigation. While video surveillance brings an irreplaceable safeguard in areas such as safety precaution, existing surveillance systems are rather rudimentary, and shortcomings such as the inconvenience of reviewing the recorded video have gradually become apparent.
With the most common surveillance systems, whenever a recording needs to be checked the entire video has to be reviewed manually. In practice, the area covered by a typical installation's cameras is unoccupied most of the time, and the intervals in which incidents occur are extremely short compared with the round-the-clock, 24-hour recording. Most of the time spent reviewing surveillance video is therefore spent watching a static background. Existing systems do provide fast-forward playback, but it still consumes a great deal of the reviewer's time, and because fast-forward is a skipping playback mode, important frames may be missed. Extracting from the round-the-clock recording only the intervals in which moving objects such as people appear, for later review, is clearly worthwhile; clipping the surveillance video manually, although feasible and convenient for later review, is itself a very lengthy process. At present there is no concrete method or application example that uses a computer to extract such clips from surveillance video automatically.
In video processing, foreground/background separation based on Gaussian mixture models has become quite mature. The technique assumes that the value of each pixel in the video picture follows a Gaussian distribution; a Gaussian mixture model of the video is built by combining several Gaussian distributions, and on this basis one or several distributions that can express the background are selected to establish the background model of the video. The video picture is then compared with the background model to determine the foreground pixels, which are finally separated from the background. Using this technique to judge the dynamic elements when taking screenshots from surveillance video makes it possible to automatically capture the pictures or clips that contain moving objects.
However, current GMM-based foreground/background separation still handles shadows poorly, which considerably affects both the accuracy of the separation and the quality of the separated foreground image. Improving it so as to raise the recognition accuracy of moving objects is therefore quite necessary.
Summary of the invention
The object of the invention is to address the above problems in the prior art by providing a simple, feasible and highly accurate high-speed screenshot method for moving objects in video, so as to shorten surveillance video and make it convenient to review.
To achieve the above object, the invention first provides a video foreground object screenshot method based on a Gaussian mixture model, comprising the following steps:
(a) decoding the video to be captured according to its video coding rules to obtain successive frames arranged in playback order;
(b) according to a group of consecutive frames that includes the first frame, establishing an initial Gaussian mixture model for each pixel of the video to be captured, and establishing an initial background model based on this Gaussian mixture model;
(c) starting from the first frame, comparing each frame with the background model pixel by pixel to judge whether a foreground object is present in the frame, while updating the background model;
(d) saving the frames that contain a foreground object.
In the above method, the modelling process of step (b) comprises the following steps:
(b-1) At time t, the probability that the current pixel has value X_t is expressed as:

$$P(X_t) = \sum_{i=1}^{K} \omega_{i,t} \, \eta(X_t; \mu_{i,t}, \Sigma_{i,t}), \qquad (1)$$

where

$$\eta(X_t; \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \, e^{-\frac{1}{2}(X_t-\mu)^T \Sigma^{-1} (X_t-\mu)},$$

K is the number of Gaussian distributions, μ_{i,t} and Σ_{i,t} are the mean and covariance of the i-th Gaussian distribution at time t, η is the standard Gaussian probability density function, and ω_{i,t} is the weight of the i-th Gaussian distribution at time t;
As formula (1) shows, the above Gaussian mixture model is composed of K Gaussian distributions; the value of K can be chosen according to the complexity of the monitored scene and is generally 3, 4 or 5.
(b-2) Counting, over a group of consecutive frames that includes the first frame, the distribution of the values taken by the current pixel, calculating from these statistics the value of each parameter in formula (1), and initializing formula (1) with the calculated values to form the initial Gaussian mixture model;
(b-3) Calculating, for each Gaussian distribution in the initial Gaussian mixture model, the ratio of its weight to its standard deviation (ω/σ), which is used to characterize the importance of that distribution;
(b-4) Ranking the K Gaussian distributions in descending order of the ratio obtained in step (b-3) and selecting the first B distributions as the background distributions, where

$$B = \arg\min_b \left( \sum_{k=1}^{b} \omega_k > T \right);$$

(b-5) Taking the average of the means of the first B Gaussian distributions as the background pixel value of the current pixel;
(b-6) Combining the background pixel values of all pixels in the video picture to form the initial background model.
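As an illustration of formula (1) and of steps (b-3) to (b-6), the following sketch (Python with NumPy; the function names, the per-channel diagonal-covariance simplification and the default threshold T are assumptions of this sketch, not part of the patent text) evaluates the per-pixel mixture density and selects the background distributions:

```python
import numpy as np

def gmm_pixel_probability(x, weights, means, variances):
    """Formula (1): P(x) = sum_i w_i * eta(x; mu_i, Sigma_i), diagonal covariance assumed."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    n = x.shape[0]
    p = 0.0
    for w, mu, var in zip(weights, means, variances):
        norm = 1.0 / np.sqrt((2.0 * np.pi) ** n * np.prod(var))
        p += w * norm * np.exp(-0.5 * np.sum((x - mu) ** 2 / var))
    return p

def background_value(weights, means, sigmas, T=0.7):
    """Steps (b-3) to (b-5): rank Gaussians by w/sigma, keep the first B whose
    cumulative weight exceeds T, and average their means."""
    weights = np.asarray(weights, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    order = np.argsort(weights / sigmas)[::-1]   # descending importance ratio
    cum = np.cumsum(weights[order])
    B = int(np.searchsorted(cum, T) + 1)         # smallest b with cumulative weight above T
    bg = order[:B]
    return np.asarray(means, dtype=float)[bg].mean(axis=0), bg
```

Repeating background_value for every pixel and assembling the returned values corresponds to step (b-6), the initial background model.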
There are several possible judgment methods for step (c). The simplest is to compare the current frame directly with the background model and to classify the pixels with large value differences as foreground pixels. This method, however, is only suitable when the background is simple and either static or nearly unchanging. If the background is easily changed by factors such as sunlight, lighting or swaying leaves, the result obtained with this direct judgment will be quite inaccurate. Moreover, a moving foreground object usually casts a shadow, which this direct method cannot recognize either.
To improve the accuracy of the judgment, the direct judgment can be followed by some common shadow-filtering methods in order to obtain more accurate foreground information, but the effect is still unsatisfactory.
For this reason, the invention proposes a three-step method for judging foreground objects. First, the current frame is compared with the background model pixel by pixel; the matching pixel values are used to update the background model, and the unmatched pixels are preliminarily judged to be foreground pixels. Then, according to the chromaticity-coordinate differences and the brightness gain between each pixel and the background model, the pixels that were preliminarily judged as foreground but actually represent background are excluded. Finally, the remaining pixels are partitioned into regions according to their brightness gain, neighbouring pixels with similar gain values being grouped into one region; the regions whose average brightness gain indicates shadow are excluded, and the pixels of the other regions are judged to be foreground pixels.
Based on the above judgment method, the comparison and judgment process of step (c) can be divided into the following steps:
(c-1) Comparing the current frame with the background model pixel by pixel, excluding the pixels that match the background model and using their values to update the background model; if unmatched pixels exist in the frame, preliminarily regarding them as foreground pixels, recording these pixels and proceeding to step (c-2); otherwise judging that the current frame contains no moving object;
(c-2) For each pixel recorded in step (c-1), calculating its chromaticity coordinates r_o, g_o, b_o, the differences d_r = |r_o - r_b|, d_g = |g_o - g_b|, d_b = |b_o - b_b| from the chromaticity coordinates r_b, g_b, b_b of the same pixel in the background model, and the gain of the brightness I, gain = (I_o - I_b)/I_b; excluding the pixels that satisfy the condition of formula (2); if pixels that do not satisfy formula (2) exist, recording them and proceeding to step (c-3); otherwise judging that the current frame contains no moving object;

d_r < y_r,  d_g < y_g,  d_b < y_b,  |gain| < y_gain    (2)

where y_r, y_g, y_b and y_gain are thresholds whose values can be determined by experiment;
(c-3) Partitioning the pixels recorded in step (c-2) into regions according to their brightness gain, grouping pixels with similar gain values into one region; for each region, calculating the averages $\bar{r}_o$ and $\bar{g}_o$ of the r and g chromaticity components of its pixels, the corresponding averages $\bar{r}_b$ and $\bar{g}_b$ of the background model in that region, and the average gain; excluding the pixels of the regions that satisfy the condition of formula (3); if pixels that do not satisfy formula (3) exist, recording them and judging that the current frame contains a foreground object;

$$\bar{r}_o \approx \bar{r}_b, \quad \bar{g}_o \approx \bar{g}_b, \quad \overline{gain} < T_{gain} \qquad (3)$$

where T_gain is a threshold whose value can be determined by experiment and can also be adjusted to the characteristics of different monitored scenes.
The matching referred to in step (c-1) means that the pixel value falls within 2.5 standard deviations of the mean of the corresponding Gaussian distribution. Specifically, the current pixel value is compared with the value of the corresponding point in the background model (the average of the means of the Gaussian distributions that represent the background); if the difference is less than 2.5 times the average of the standard deviations of those Gaussian distributions, the pixel matches; otherwise it does not.
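A minimal sketch of this matching test (an illustrative helper with assumed names, not code from the patent):

```python
import numpy as np

def matches_background(pixel_value, bg_value, bg_sigmas, k=2.5):
    """True if the pixel value lies within k (= 2.5) average standard deviations
    of the background value, as required for step (c-1)."""
    return abs(float(pixel_value) - float(bg_value)) < k * float(np.mean(bg_sigmas))
```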
The background model update referred to in step (c) means that the matching pixel values are used to update the Gaussian mixture model according to the following formulas; the first B Gaussian distributions representing the background are then re-determined by the background modelling method, the average of the means of these Gaussian distributions is computed, and the corresponding pixel value in the background model is updated accordingly:

$$\omega_{k,t} = (1-\alpha)\,\omega_{k,t-1} + \alpha\, M_{k,t}, \qquad \mu_t = (1-\rho)\,\mu_{t-1} + \rho\, X_t,$$

$$\sigma_t^2 = (1-\rho)\,\sigma_{t-1}^2 + \rho\,(X_t-\mu_t)^T (X_t-\mu_t),$$

$$\rho = \alpha\,\eta(X_t \mid \mu_k, \sigma_k),$$

where 0 < α < 1 is the learning rate, which determines how quickly a foreground object that stays stationary for a period of time is absorbed into the background model.
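A sketch of this update for a single pixel (illustrative names and the default learning rate α are assumptions; following the formulas as written, only the matched component k receives M_{k,t} = 1 while the other weights decay):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def update_gmm(x, weights, means, sigmas, matched_k, alpha=0.01):
    """Update one pixel's GMM (float NumPy arrays) with the matched observation x."""
    # w_{k,t} = (1 - alpha) w_{k,t-1} + alpha M_{k,t}, with M = 1 only for the matched component
    M = np.zeros_like(weights)
    M[matched_k] = 1.0
    weights[:] = (1.0 - alpha) * weights + alpha * M
    weights /= weights.sum()                     # renormalize (practical step beyond the patent formulas)

    # rho = alpha * eta(x | mu_k, sigma_k)
    rho = alpha * gaussian_pdf(x, means[matched_k], sigmas[matched_k])

    # mu_t = (1 - rho) mu_{t-1} + rho x ;  sigma_t^2 = (1 - rho) sigma_{t-1}^2 + rho (x - mu_t)^2
    means[matched_k] = (1.0 - rho) * means[matched_k] + rho * x
    sigmas[matched_k] = np.sqrt((1.0 - rho) * sigmas[matched_k] ** 2
                                + rho * (x - means[matched_k]) ** 2)
    return weights, means, sigmas
```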
In the method for the invention, in the process for sectional drawing result, can there is various ways, include except directly preserving The shadow of prospect object is especially, it is also possible to according to including the shadow lattice of prospect object time sequence information in video, from original video Middle intercepting includes the video time interval corresponding to shadow lattice of prospect object.
Based on the above method, the invention also proposes a video dynamic high-speed screenshot system comprising the following components:
a video decoding unit, which decodes the video to be processed to obtain its sequential frames and transmits them in order to the background modelling and updating unit;
a background modelling and updating unit, which establishes an instantaneous Gaussian mixture model of the static background of the video from the sequential frames obtained by the video decoding unit, stores it in memory, and updates the Gaussian mixture model over time;
a foreground separation unit, which removes the background pixels of the frame under examination according to the Gaussian mixture model of the corresponding moment and transfers the frames containing foreground pixels to the shadow filtering unit;
a shadow filtering unit, which removes from the frames transmitted by the foreground separation unit the pixels misidentified as foreground because of shadow changes, and saves the frames containing foreground pixels;
a screenshot execution unit, which, according to the time-sequence information of the frames with foreground pixels obtained by the shadow filtering unit, extracts the corresponding video segments from the original video and stores them in memory or sends them to a screen for display.
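Purely as an illustration of how these units could be wired together (every class and method name below is an assumption of this sketch, not part of the patent):

```python
class VideoScreenshotPipeline:
    """Illustrative composition of the units described above."""

    def __init__(self, decoder, background_model, shadow_filter, clipper):
        self.decoder = decoder                    # video decoding unit
        self.background_model = background_model  # background modelling/updating + foreground separation
        self.shadow_filter = shadow_filter        # shadow filtering unit
        self.clipper = clipper                    # screenshot execution unit

    def run(self, video_path):
        kept_times = []
        for t, frame in self.decoder.frames(video_path):
            candidates = self.background_model.separate_and_update(frame)
            foreground = self.shadow_filter.remove_shadow_pixels(frame, candidates)
            if foreground:                        # frame contains a foreground object
                kept_times.append(t)
        return self.clipper.extract_segments(video_path, kept_times)
```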
The invention addresses the problem of analyzing and applying moving objects in surveillance video: it designs and implements a complete analysis framework and, combined with an operation plan that uses human resources efficiently, provides a complete video dynamic screenshot system that can meet the four requirements of saving manpower, saving time, accuracy and location tracking.
The analysis method of the system uses the Gaussian model obtained above as its basis to complete the foreground/background separation of the video frames, continuously updates the background to cope with the continuously arriving video, then removes shadows with a two-stage algorithm based on the obtained colour values and brightness gain so that the foreground targets are extracted accurately, and finally uses the algorithm to filter out unnecessary foreground objects, obtaining an accurate analysis result.
When the method of the invention is used to take screenshots, if only the truly valuable frames that contain a foreground object are saved, storage resources can be greatly reduced.
The method is equally effective in saving manpower and time. When applied to current digital investigation systems, the analysis and screenshot extraction can be performed on the video immediately, or performed once on all of the video after it has been gathered; no one needs to stand by at all times, and only the screenshot result has to be viewed afterwards. The pictures of moving people and objects that need to be judged can be captured, so that a ten-minute video is shortened into a clip of tens of seconds and then played back picture by picture, saving more than ninety percent of the time. Under a traditional manual identification system, searching a video with the naked eye requires watching the whole video to the end, and in order to find a suspect it may be necessary to replay and search it again and again; with the present system, the video only needs to go through the identification operation once to screen out the pictures of moving people and objects, and a ten-minute video then requires only a few minutes of human identification time.
From the above it can be seen that, in the work of surveillance-video identification, the system eliminates a great deal of wasted manpower and time and thus redistributes resources effectively; in the identification result, the pictures in which moving people or objects appear are extracted accurately, allowing the identification personnel to perform accurate identification more easily, and linking to Google Maps to track a suspect's movements can give additional help to operations such as crime raids.
Brief description of the drawings
Fig. 1 is a flow block diagram of taking screenshots with the screenshot method of the invention;
Fig. 2 is a graphical schematic diagram of the Gaussian distributions of a typical Gaussian mixture model;
Fig. 3 shows experimental results of video capture with the screenshot method of the invention: the first picture is an unprocessed frame containing a foreground object; the second picture is the frame after the pixel-based classification, which still contains some shadow pixels among the foreground-object pixels; the third picture is the frame after the region-based classification, which contains only the foreground pixels.
Detailed description of the invention
The method and system of the invention are further explained below with reference to the accompanying drawings. The detailed processes of background modelling, foreground/background separation, shadow filtering and video extraction are described by way of example.
The video foreground object screenshot method of the invention is a video analysis and screenshot method that combines several video analysis techniques; its main processing flow is shown in Fig. 1.
The foreground/background separation in this example is based on an extraction algorithm that adapts automatically to the foreground target. The algorithm improves on the earlier Gaussian mixture model (GMM) background model and uses a two-stage foreground/background classification to eliminate the erroneous separation caused by shadows and sudden illumination changes. Traditional background separation techniques mostly cannot respond to lighting changes in the scene. The two-stage foreground/background classification program first adjusts the foreground candidates according to the colour and brightness information obtained for each pixel, and then compares the remaining foreground blocks with the corresponding background blocks to judge whether they are foreground. The method of this example is described step by step below.
0. Video decomposition.
Microsoft DirectShow, together with the codec components of each video format (e.g. wmv, asf, mp4, ...), is used to decode the video and decompose it into frames, one per second or even up to 30 per second, for subsequent analysis. The number of frames obtained depends on the format of the surveillance-video file.
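The patent performs this decoding with Microsoft DirectShow; purely as a stand-in sketch (OpenCV is an assumption of this illustration, not what the patent uses), the decomposition into frames could look like:

```python
import cv2

def decode_frames(video_path, sample_every=1):
    """Decode a video file into a list of frames for later analysis."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % sample_every == 0:
            frames.append(frame)   # one BGR image per analysed frame
        index += 1
    cap.release()
    return frames
```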
1. Background modelling / updating.
1.1 Establishing the Gaussian mixture model (GMM).
1.1.1 With a GMM, a model of how each background pixel's intensity varies over time can be established.
1.1.2 At time point t, the probability that a pixel has value X is expressed by formula (1) given above.
A group of consecutive frames is read starting from the first frame, the distribution characteristics of each pixel's values over this group of frames are counted, and the parameters of the formula are calculated from these statistics to initialize the Gaussian mixture model. From the equation we obtain the several distribution curves of the Gaussian mixture model, as shown in Fig. 2.
1.1.3 Once the intensity-variation model of each pixel has been obtained with the GMM, background modelling can be carried out; this process is elaborated in step 1.2.
1.2 Establishing the background model.
1.2.1 The ratio of weight to standard deviation (ω/σ) is calculated and used as the importance of each Gaussian distribution, in order to determine which Gaussian distributions can represent the background.
1.2.2 The K Gaussian distributions of a pixel are re-ordered in descending order of this ratio (ω/σ), and the first B distributions are taken to represent the background.
The value of B is calculated as:

$$B = \arg\min_b \left( \sum_{k=1}^{b} \omega_k > T \right)$$

The average of the means of the first B distributions is used as the background pixel value of the corresponding pixel.
1.3 Foreground object detection.
It is judged whether the pixel value lies within 2.5 standard deviations of the mean of the Gaussian distributions representing the background of that pixel. If it does, the pixel is judged to belong to the background; if not, it is probably foreground and is preliminarily recorded as a foreground pixel.
1.4 Background model update.
While foreground is being detected, the parameters of the Gaussian mixture model are continuously updated as new video arrives, so that moving-object detection can continue on the subsequently transmitted video data. Specifically, when a pixel is judged to be background, the relevant parameters are updated according to the update formulas given above.
1.5 After the foreground/background separation, in order to exclude the pixels mistaken for foreground because of shadow changes, the pixels classified as foreground must be screened further; this is elaborated in step 2.
2. Shadow filtering.
In this step the system applies a two-phase algorithm to each frame in turn to find the moving objects in the frame.
2.1 Pixel-wise classification is used to filter the pixels of each frame one by one.
2.1.1 The chromaticity coordinates of the current pixel and of the corresponding pixel in the background model are calculated.
With the R/G/B components of a pixel denoted R, G and B, the corresponding chromaticity coordinates are:

$$r = \frac{R}{R+G+B}, \quad g = \frac{G}{R+G+B}, \quad b = \frac{B}{R+G+B}$$

Let the chromaticity coordinates of the current pixel be r_o, g_o, b_o and those of the corresponding pixel in the background model be r_b, g_b, b_b. For a background pixel these three pairs of coordinates remain very close to each other, which can be written r_b ≈ r_o, g_b ≈ g_o, b_b ≈ b_o.
2.1.2 The relation between the three colour components and the background-model intensity when the light changes is calculated.
The R/G/B chromaticity coordinates of the current pixel are subtracted from those of the corresponding pixel in the background model to give three differences: d_r = |r_o - r_b|, d_g = |g_o - g_b|, d_b = |b_o - b_b|.
2.1.3 A gain value is added to the judgment: gain is defined as the ratio of the change in a pixel's gray value caused by the lighting to the corresponding background pixel value, i.e. gain = (I_o - I_b)/I_b, where I_o and I_b are the gray values of the observed video and of the background video respectively.
2.1.4 From the analysis of d_r, d_g, d_b and gain it follows that if these values are large, the current pixel value differs greatly from the value in the background model, which indicates that a foreground object has moved in. Accordingly, the rule for judging that a pixel belongs to the background is defined as:

d_r < y_r,  d_g < y_g,  d_b < y_b,  |gain| < y_gain,

where y_r, y_g, y_b and y_gain are thresholds.
2.1.5 The foreground-object pixels preliminarily recorded in step 1.3 are judged one by one according to the above rule; most of the pixels that merely represent shadow changes can thereby be separated from the foreground pixels, giving more accurate foreground data. At this point the pixels that make up the foreground object are essentially determined.
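A sketch of this pixel-wise classification (the helper and parameter names, the default thresholds and the use of the channel sum as the gray value are all assumptions of this sketch, to be tuned experimentally as the text indicates):

```python
def chromaticity(pixel):
    """r, g, b chromaticity coordinates of an (R, G, B) pixel."""
    r, g, b = (float(v) for v in pixel[:3])
    total = r + g + b + 1e-6
    return r / total, g / total, b / total

def is_background_like(pix, bg_pix, y_r=0.02, y_g=0.02, y_b=0.02, y_gain=0.2):
    """Background rule: d_r < y_r, d_g < y_g, d_b < y_b and |gain| < y_gain."""
    r_o, g_o, b_o = chromaticity(pix)
    r_b, g_b, b_b = chromaticity(bg_pix)
    I_o = sum(float(v) for v in pix[:3])          # gray value approximated by the channel sum
    I_b = sum(float(v) for v in bg_pix[:3]) + 1e-6
    gain = (I_o - I_b) / I_b
    return (abs(r_o - r_b) < y_r and abs(g_o - g_b) < y_g and
            abs(b_o - b_b) < y_b and abs(gain) < y_gain)

def refine_foreground(frame, background, candidates):
    """Keep only the preliminary foreground pixels that fail the background rule."""
    return [(y, x) for (y, x) in candidates
            if not is_background_like(frame[y, x], background[y, x])]
```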
2.3 Because the judgment based on single pixels still has limitations, a small number of pixels that represent shadow are still judged to be foreground. As an improvement, region-based classification can therefore also be used for shadow filtering, further filtering out the parts whose brightness changes little and which represent shadow, so as to obtain foreground data (the full set of pixels of the foreground object) closer to the true foreground.
2.3.1 For the pixels in the preliminary foreground data obtained by the pixel-wise classification, the gain value is calculated: gain equals the difference between the gray value of a pixel of the frame and the gray value of the corresponding pixel of the background video data, divided by the gray value of that background pixel;
2.3.2 Blocks with similar gain values are merged into one region, and the averages of the r and g chromaticity coordinates of all pixels in the region and the average gain are computed; these averages are then used to judge further whether the region represents shadow, according to the rule

$$\bar{r}_o \approx \bar{r}_b, \quad \bar{g}_o \approx \bar{g}_b, \quad \overline{gain} < T_{gain},$$

where T_gain is a threshold.
2.3.3 After the neighbouring blocks with similar gain values have been merged into one region, if the average gain of the region is below the preset threshold T_gain, the light intensity has not changed much compared with the background and the region belongs to shadow; deleting such regions gives foreground data still closer to the true foreground.
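A sketch of this region-based stage (reusing the chromaticity helper from the pixel-wise sketch above; grouping candidate pixels by quantized gain value instead of spatial connectivity, as well as the default bin width, tolerance and T_gain, are assumptions of this illustration):

```python
from collections import defaultdict

def region_filter(frame, background, candidates, gains,
                  bin_width=0.05, chroma_tol=0.02, T_gain=-0.1):
    """Group candidate foreground pixels by similar gain and drop the groups
    whose averages satisfy the shadow rule (formula (3))."""
    groups = defaultdict(list)
    for (y, x), gain in zip(candidates, gains):
        groups[round(gain / bin_width)].append(((y, x), gain))

    kept = []
    for members in groups.values():
        n = len(members)
        ro = sum(chromaticity(frame[y, x])[0] for (y, x), _ in members) / n
        go = sum(chromaticity(frame[y, x])[1] for (y, x), _ in members) / n
        rb = sum(chromaticity(background[y, x])[0] for (y, x), _ in members) / n
        gb = sum(chromaticity(background[y, x])[1] for (y, x), _ in members) / n
        mean_gain = sum(g for _, g in members) / n
        is_shadow = (abs(ro - rb) < chroma_tol and abs(go - gb) < chroma_tol
                     and mean_gain < T_gain)
        if not is_shadow:
            kept.extend(pt for pt, _ in members)
    return kept
```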
2.4 Test on a test video.
In this experiment, to show the judgment result more intuitively, the system directly deletes the pixels judged to be background and presents to the user the frames in which only the foreground pixels remain. After the two-stage processing of step 2, the pixels retained in a video frame contain only moving objects such as people, bicycles, motorcycles, cars or other objects; purely static shadow changes, such as the flicker of street lamps or car lights, are not moving objects and are therefore not retained. The experimental results on the test video are shown in Fig. 3.
Steps 0 through 2.4 above give, by way of example, the detailed process of background separation and shadow screening in the screenshot method of the invention. With these means the foreground-object pixels in a frame can be judged and extracted fairly accurately, and on this basis the subsequent processing such as screenshot extraction can be carried out.

Claims (6)

1. A video foreground object screenshot method based on a Gaussian mixture model, comprising:
(a) decoding the video to be captured according to its video coding rules to obtain successive frames arranged in playback order;
(b) according to a group of consecutive frames that includes the first frame, establishing an initial Gaussian mixture model for each pixel of the video to be captured, and establishing an initial background model based on this Gaussian mixture model;
(c) starting from the first frame, comparing each frame with the background model pixel by pixel to judge whether a foreground object is present in the frame, while updating the background model;
(d) saving the frames that contain a foreground object;
characterized in that the comparison and judgment method of step (c) is: first, comparing the current frame with the background model pixel by pixel, using the matching pixel values to update the background model, and preliminarily judging the unmatched pixels to be foreground pixels; then, according to the chromaticity-coordinate differences and the brightness gain between each pixel and the background model, further excluding, from the pixels preliminarily judged as foreground, the pixels that represent background; and subsequently, partitioning the remaining pixels into regions according to their brightness gain, grouping neighbouring pixels with similar gain values into one region, excluding, according to the average brightness gain of each region, the regions that represent shadow, and judging the pixels of the other regions to be foreground pixels.
2. The screenshot method according to claim 1, characterized in that the comparison and judgment process of step (c) specifically comprises the following steps:
(c-1) comparing the current frame with the background model pixel by pixel, excluding the pixels that match the background model and using their values to update the background model; if unmatched pixels exist in the frame, preliminarily regarding them as foreground pixels, recording the unmatched pixels and proceeding to step (c-2); otherwise judging that the current frame contains no moving object;
(c-2) for each pixel recorded in step (c-1), calculating its chromaticity coordinates r_o, g_o, b_o, the differences d_r = |r_o - r_b|, d_g = |g_o - g_b|, d_b = |b_o - b_b| from the chromaticity coordinates r_b, g_b, b_b of the same pixel in the background model, and the gain of the brightness I, gain = (I_o - I_b)/I_b, where I_o and I_b are the current gray value of the pixel and its gray value in the background model respectively; excluding the pixels that satisfy the condition of formula (2); if pixels that do not satisfy formula (2) exist, recording these pixels and proceeding to step (c-3); otherwise judging that the current frame contains no moving object;

d_r < y_r,  d_g < y_g,  d_b < y_b,  |gain| < y_gain    (2)

where y_r, y_g, y_b and y_gain are thresholds;
(c-3) partitioning the pixels recorded in step (c-2) into regions according to their brightness gain, grouping pixels with similar gain values into one region; for each region, calculating the averages $\bar{r}_o$ and $\bar{g}_o$ of the r and g chromaticity components of its pixels, the corresponding averages $\bar{r}_b$ and $\bar{g}_b$ of the background model in that region, and the average gain; excluding the regions that satisfy the condition of formula (3); if regions that do not satisfy formula (3) exist, recording them and judging that the current frame contains a foreground object;

$$\bar{r}_o \approx \bar{r}_b, \quad \bar{g}_o \approx \bar{g}_b, \quad \overline{gain} < T_{gain} \qquad (3)$$

where T_gain is a threshold.
3. The screenshot method according to claim 1, characterized in that the method further comprises step (e): extracting from the original video the video intervals corresponding to the frames that contain a foreground object.
4. The screenshot method according to claim 1, characterized in that the Gaussian mixture model is composed of K Gaussian distributions, K being 3, 4 or 5.
5. The screenshot method according to claim 1 or 2, characterized in that the matching means that the pixel value falls within 2.5 standard deviations of the mean of the corresponding Gaussian distribution.
6. A video dynamic high-speed screenshot system based on the method of claim 1, characterized by comprising:
a video decoding unit, which decodes the video to be processed to obtain its sequential frames and transmits them in order to the background modelling and updating unit;
a background modelling and updating unit, which establishes an instantaneous Gaussian mixture model of the static background of the video from the sequential frames obtained by the video decoding unit, stores it in memory, and updates the Gaussian mixture model over time;
a foreground separation unit, which removes the background pixels of the frame under examination according to the Gaussian mixture model of the corresponding moment and transfers the frames containing foreground pixels to the shadow filtering unit;
a shadow filtering unit, which removes from the frames transmitted by the foreground separation unit the pixels misidentified as foreground because of shadow changes, and saves the frames containing foreground pixels;
a screenshot execution unit, which, according to the time-sequence information of the frames containing a foreground object obtained by the shadow filtering unit, extracts the corresponding video segments from the original video and stores them in memory or sends them to a screen for display.
CN201210511000.9A 2012-11-30 2012-11-30 Video foreground object screenshot method and system based on a Gaussian mixture model Expired - Fee Related CN103106633B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210511000.9A CN103106633B (en) 2012-11-30 2012-11-30 Video foreground object screenshot method and system based on a Gaussian mixture model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210511000.9A CN103106633B (en) 2012-11-30 2012-11-30 Video foreground object screenshot method and system based on a Gaussian mixture model

Publications (2)

Publication Number Publication Date
CN103106633A CN103106633A (en) 2013-05-15
CN103106633B true CN103106633B (en) 2016-12-21

Family

ID=48314467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210511000.9A Expired - Fee Related CN103106633B (en) 2012-11-30 2012-11-30 Video foreground object screenshot method and system based on a Gaussian mixture model

Country Status (1)

Country Link
CN (1) CN103106633B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392119A (en) * 2017-07-04 2017-11-24 西北农林科技大学 Real-time filtering method and system for cattle farm surveillance video based on Spark Streaming
TWI668669B (en) * 2018-05-31 2019-08-11 國立中央大學 Object tracking system and method thereof
CN110111518A (en) * 2019-06-06 2019-08-09 厦门钛尚人工智能科技有限公司 A kind of dedicated destruction recognizer of venue
CN112560655A (en) * 2020-12-10 2021-03-26 瓴盛科技有限公司 Method and system for detecting unattended objects

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101094413A (en) * 2007-07-06 2007-12-26 浙江大学 Real-time motion detection method for video surveillance
CN101998063A (en) * 2009-08-20 2011-03-30 财团法人工业技术研究院 Foreground image separation method
CN102087707A (en) * 2009-12-03 2011-06-08 索尼株式会社 Image processing equipment and image processing method


Also Published As

Publication number Publication date
CN103106633A (en) 2013-05-15


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20160909

Address after: Taizhou City, Zhejiang province 317700 New Oriental Commercial No. 4006-3

Applicant after: Taizhou hazens Mdt InfoTech Ltd

Address before: Hangzhou City, Zhejiang province 310052 Binjiang District Jiang Hui Road No. 1772 SUPOR Building Room 903

Applicant before: Hangzhou Enginex Digital Technology Co., Ltd.

Applicant before: Zheng Liansong

C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20161221

Termination date: 20181130

CF01 Termination of patent right due to non-payment of annual fee