CN101216943A - A method for video moving object segmentation - Google Patents

A method for video moving object segmentation

Info

Publication number
CN101216943A
CN101216943A (publication) · CN200810046696A / CNA2008100466966A (application)
Authority
CN
China
Prior art keywords
background image
background
image
edge region
long edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008100466966A
Other languages
Chinese (zh)
Other versions
CN101216943B (en)
Inventor
朱松纯
龚海峰
胡文泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HUBEI LOTUS HILL INSTITUTE FOR COMPUTER VISION AND INFORMATION SCIENCE
Original Assignee
HUBEI LOTUS HILL INSTITUTE FOR COMPUTER VISION AND INFORMATION SCIENCE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HUBEI LOTUS HILL INSTITUTE FOR COMPUTER VISION AND INFORMATION SCIENCE filed Critical HUBEI LOTUS HILL INSTITUTE FOR COMPUTER VISION AND INFORMATION SCIENCE
Priority to CN2008100466966A priority Critical patent/CN101216943B/en
Publication of CN101216943A publication Critical patent/CN101216943A/en
Application granted granted Critical
Publication of CN101216943B publication Critical patent/CN101216943B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a video moving object segmentation method comprising the following steps: (1) establishing sub-region information for a background image, including dividing the background image into a long edge region, a texture region, and a flat region, and storing the result; (2) inputting video image frames; (3) dividing the current image frame into sub-regions according to the sub-region information of the background image; (4) judging whether the current image frame matches the background information, including judging the matching of the long edge region, of the texture region, and of the flat region; (5) outputting the sub-region image when step (4) judges a mismatch. By dividing the background image into a long edge region, a texture region, and a flat region, and partitioning the current image frame according to these background sub-region classes, the invention can match each region with a different method, improving the accuracy of video moving object segmentation.

Description

A method for video moving object segmentation
Technical field
The present invention relates to the field of video technology, specifically to computer video content analysis and detection, and in particular to a method for automatically segmenting the image of a moving object from the background in a consecutive image sequence.
Background technology
A video moving object segmentation method is a method that uses mathematical models and algorithms to make a computer automatically segment the image of a moving object in a consecutive image sequence from the background. Such methods can be used in many applications, including intelligent video analysis, video coding, and human-computer interaction.
Existing methods for video moving object segmentation mainly include the following:
1. Segmentation assuming a static background: under the assumption that the background is completely stationary, the simplest background modeling algorithm distinguishes moving objects from the background by computing the difference between two or more frames. Pixels whose inter-frame difference is small belong to the background; pixels whose difference is large belong to the foreground. This method has the advantage of simplicity, and its experimental results are acceptable under ideal conditions. In practice, however, surveillance scenes often have complex backgrounds, and many scenes cannot yield an ideal, pure background image at all. Together with the noise produced during camera imaging and the global changes in image gray values caused by automatic gain control and white balance, this segmentation method often performs poorly and is difficult to put into practice.
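For illustration only, the following minimal sketch shows frame-difference segmentation under this static-background assumption; it assumes OpenCV, the threshold value is an arbitrary choice, and it is not the method of the invention.

```python
# Minimal sketch of frame-difference segmentation (not the method of the
# invention). Assumes OpenCV; the difference threshold is illustrative.
import cv2

def frame_difference_mask(prev_gray, cur_gray, diff_th=25):
    """Foreground mask: pixels whose inter-frame difference is large."""
    diff = cv2.absdiff(cur_gray, prev_gray)          # |I_t - I_{t-1}|
    _, mask = cv2.threshold(diff, diff_th, 255, cv2.THRESH_BINARY)
    return mask                                      # 255 = foreground
```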
Therefore, the method for present main flow moving Object Segmentation mainly adopts following two kinds of methods:
2. Segmentation by color features: methods using color features mainly consider the statistical distribution of background and foreground pixel values, describe this distribution by building several Gaussian models, and then apply a decision rule to judge whether the current pixel belongs to the background or to a moving object, i.e. to perform foreground segmentation. After background and foreground are separated, the statistical distribution parameters of the background pixel values are updated according to certain rules; the foreground distribution is usually updated rather slowly, while the background distribution is updated at the normal rate. The method introduced in the 1998 Computer Vision and Pattern Recognition conference paper "Adaptive background mixture models for real-time tracking" is a typical representative. This approach effectively overcomes the noise sensitivity of the static-background method, and can absorb motions that are subjectively considered unimportant, such as swaying vegetation, into the background. At the same time, these methods do not require an ideal background model in advance, which makes them practical. However, because in many situations a Gaussian distribution describes the gray-value distribution of non-moving pixels inaccurately, such methods may mis-segment under illumination changes, shadows, and similar conditions.
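As a point of reference, OpenCV ships a ready-made background subtractor from this mixture-of-Gaussians family; a minimal usage sketch follows, with illustrative parameter values. It stands in for the cited approach, not for the method of the invention.

```python
# Minimal usage sketch of OpenCV's mixture-of-Gaussians background
# subtractor (the MOG2 variant of this family of methods); history and
# variance-threshold values are illustrative.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)

def gmm_foreground_mask(frame):
    """Per-pixel mask: 255 = foreground, 127 = shadow, 0 = background."""
    return subtractor.apply(frame)
```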
3. Segmentation by gradient features: the modeling idea of the gradient-feature segmentation method is similar to the color-feature modeling method above, with the color feature replaced by a gradient feature. Gradient information is less affected by the camera's automatic gain control and by local illumination changes, so this method is more stable than color-feature segmentation, but it easily misjudges flat regions of the image (such as tiled surfaces and walls) and regions with high noise.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the defects of the above methods and to provide a video moving object segmentation method that improves the accuracy of video moving object segmentation.
The technical scheme adopted by the present invention to solve the above technical problem is as follows:
A video moving object segmentation method, comprising the steps of:
(1) establishing sub-region information for a background image, including dividing the background image into a long edge region, a texture region, and a flat region, and storing the result;
(2) inputting a video image frame;
(3) dividing the current image frame into sub-regions according to the background image sub-region information;
(4) judging whether the current image frame matches the background information, comprising:
judging whether the long edge region in the current image frame matches the long edge region in the background image;
judging whether the texture region in the current image frame matches the texture region in the background image;
judging whether the flat region in the current image frame matches the flat region in the background image;
(5) outputting the region image judged as unmatched in step (4).
In step (4) of the above scheme, the step of judging whether the long edge region in the current image frame matches the long edge region in the background image uses a Haar-filter segmentation method.
In step (4) of the above scheme, the step of judging whether the texture region in the current image frame matches the texture region in the background image uses a gradient-feature segmentation method.
In step (4) of the above scheme, the step of judging whether the flat region in the current image frame matches the flat region in the background image uses a color-feature segmentation method.
Step (1) of the above scheme comprises:
(1.1) dividing the background image into long edge regions;
(1.2) dividing the area outside the long edge region into texture regions and flat regions.
In the above scheme, step (1.1) specifically comprises:
(1.1.1) performing long edge detection on the background image;
(1.1.2) vectorizing the detected edges into an edge image and reconnecting broken edges;
(1.1.3) computing the length of every edge, taking the edges whose length exceeds a threshold, and dilating them by a set amount to obtain the long edge region;
(1.1.4) labeling the long edge region and storing it.
In the above scheme, the video moving object segmentation method further comprises:
(6) updating the background image information.
Step (6) of the above scheme specifically comprises:
(6.1) updating the background model according to the result of step (4);
(6.2) updating the background image according to the result of step (6.1).
In the above scheme, the gradient-feature segmentation method is the LBP filter neighborhood histogram method.
Step (1.2) of the above scheme specifically is:
the area outside the long edge region is divided by comparing the local sum of LBP responses with a set value; regions above the set value are texture regions, and regions at or below the set value are flat regions.
Compared with the prior art, the present invention has the following advantages:
1. The background image is divided into a long edge region, a texture region, and a flat region, and the current image frame is partitioned according to these background region classes, so that each region can be matched with a different method, improving the accuracy of video moving object segmentation.
2. The long edge region, texture region, and flat region are matched respectively by the Haar-filter segmentation method, the gradient-feature segmentation method, and the color-feature segmentation method, giving high accuracy of video moving object segmentation.
Haar-filter segmentation refers to discriminating background from foreground in an image region by extracting the image's Haar filter responses. This feature first appeared in image and video processing in the 1980s, mainly for video compression and coding. At the 2001 international conference on Computer Vision and Pattern Recognition, Viola and Jones successfully applied it to face recognition and detection, again drawing wide attention from researchers. However, the Haar feature had not been applied to background modeling and moving object segmentation.
Its basic principle is: by observing the Haar feature responses in the long edge region, a parametric estimate of the distribution of the background image's Haar response values is built; the estimated parameters are then used to judge whether the current Haar response was produced by the background image.
3. The background image is divided into a long edge region, a texture region, and a flat region according to the image texture; the division method is simple and easy to operate.
4. The method further comprises a step of updating the background image information; when the background changes, the background image information can be updated automatically.
Description of drawings
Fig. 1 is a flow diagram of an embodiment of the method of the invention.
Fig. 2 is a flow diagram of the background information update method.
Fig. 3 is a background image.
Fig. 4 is a schematic diagram of the region division in the background image information; the white areas in the figure are the long edge region, the line-filled areas are the flat region, and the black areas are the texture region.
Fig. 5 is the input current image.
Fig. 6 is the output image; the white areas in the figure are the segmented moving object.
Embodiment
The method of the invention is specifically implemented as follows:
(1) Establishing sub-region information for the background image; this includes dividing the background image into a long edge region, a texture region, and a flat region, and storing the result.
(1.1) Dividing the background image into long edge regions.
(1.1.1) Performing long edge detection on the background image: the Canny operator is used for long edge detection on the background image.
(1.1.2) Vectorizing the detected edges into an edge image and reconnecting broken edges: the detected edges are matched with an Euler spiral model to vectorize the edge image and reconnect broken edges.
(1.1.3) Computing the length of every edge, taking the edges whose length exceeds a threshold, and dilating them by a set amount to obtain the long edge region:
a. compute the length of every edge, take the edges whose length is greater than the threshold $E_{th}$ as long edges, and denote them $Edge_i$;
b. for each pixel $(x, y)$ on every long edge, define its N-neighborhood as the point set $\{(m, n) : |x-m| < N,\ |y-n| < N\}$, and record the union of these N-neighborhoods as the long edge region.
(1.1.4) Labeling the long edge region and storing it.
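A minimal sketch of steps (1.1.1)-(1.1.4) follows, assuming OpenCV and NumPy. Contour tracing stands in for the patent's Euler-spiral vectorization, and the function name and threshold values are illustrative assumptions.

```python
# Hypothetical sketch of step (1.1): extracting the long edge region.
# Contour tracing approximates the patent's Euler-spiral vectorization;
# the E_th and N values are illustrative.
import cv2
import numpy as np

def long_edge_region(background, canny_lo=50, canny_hi=150, e_th=80, n=5):
    """Return a binary mask of the long edge region of `background`."""
    gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi)        # step (1.1.1)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    mask = np.zeros_like(edges)
    for c in contours:
        if cv2.arcLength(c, False) > e_th:             # keep edges longer than E_th
            cv2.drawContours(mask, [c], -1, 255, 1)
    # Dilation by N approximates the union of N-neighborhoods (step (1.1.3)b).
    kernel = np.ones((2 * n + 1, 2 * n + 1), np.uint8)
    return cv2.dilate(mask, kernel)                    # labeled region (step (1.1.4))
```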
(1.2) Dividing the area outside the long edge region into texture regions and flat regions.
The area outside the long edge region is divided by comparing the local sum of LBP responses with a set value: regions above the set value are texture regions, and regions at or below the set value are flat regions.
The LBP filtering described in step (4.2) is applied to the background image, and the local response sum is computed. The local response sum at pixel $(x, y)$ of the background image $I_B$ is defined as:

$$\mathrm{Sum}(x, y) = \sum_{|x-m|<N,\ |y-n|<N} \mathrm{LBP}(m, n),$$

where $\mathrm{LBP}(m, n)$ is the LBP response at $(m, n)$ of the background image. Given a threshold $L_{th}$, the texture region is the set of non-long-edge points $\{(x, y) : \mathrm{Sum}(x, y) > L_{th}\}$, and the flat region is the point set $\{(x, y) : \mathrm{Sum}(x, y) \le L_{th}\}$.
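A minimal sketch of this division, assuming a precomputed per-pixel LBP response image and a long-edge mask; the window size and the threshold $L_{th}$ are illustrative values.

```python
# Hypothetical sketch of step (1.2): splitting the non-edge area into
# texture and flat regions by the local sum of LBP responses. Assumes
# `lbp_image` follows the LBP formula of step (4.2.1.1); L_th is illustrative.
import cv2
import numpy as np

def texture_flat_split(lbp_image, edge_mask, n=5, l_th=300.0):
    """Return (texture_mask, flat_mask) for pixels outside `edge_mask`."""
    win = 2 * n + 1
    # Local response sum Sum(x, y) over the N-neighborhood.
    local_sum = cv2.boxFilter(lbp_image.astype(np.float32), -1,
                              (win, win), normalize=False)
    non_edge = edge_mask == 0
    texture_mask = non_edge & (local_sum > l_th)       # Sum(x, y) > L_th
    flat_mask = non_edge & (local_sum <= l_th)         # Sum(x, y) <= L_th
    return texture_mask, flat_mask
```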
For points newly assigned to the flat region or texture region, the method of step (1.3) is used to initialize their background parameter model.
(1.3) Initializing the background model:
A background model is initialized for each pixel according to the extracted features. Denote a feature extracted from the image by $f$; its distribution over the background is assumed to be

$$p(f) = \sum_i \omega_i\, N_i(f, \mu_i, \sigma_i),$$

i.e. a mixture of Gaussian distributions with means $\mu_i$, variances $\sigma_i$, and weights $\omega_i$. For a three-dimensional feature such as the RGB color feature, $f$ is the vector of the three channel gray values. The number of Gaussian distributions in the model is chosen according to the complexity of the background, usually 3-7. Initializing the model means initializing the parameters $\mu_i$ and $\sigma_i$, which are set to $f$ and a predefined value respectively; the weight of the first distribution is set to 1 and the rest to 0. On any pixel, if a feature's model parameters have already been initialized, the existing model parameters of that feature continue to be used.
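A minimal sketch of this initialization follows; the number of Gaussians K, the predefined initial variance, and the array layout are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of step (1.3): per-pixel mixture-of-Gaussians
# initialization. K and the initial variance are illustrative.
import numpy as np

def init_background_model(feature, k=5, init_sigma=15.0):
    """feature: (H, W, D) array of per-pixel feature vectors f.
    Returns per-pixel mixture parameters, K Gaussians per pixel."""
    h, w, d = feature.shape
    mu = np.zeros((h, w, k, d))
    mu[:, :, 0, :] = feature                 # mu_1 is set to f
    sigma = np.full((h, w, k), init_sigma)   # sigma_i set to a predefined value
    omega = np.zeros((h, w, k))
    omega[:, :, 0] = 1.0                     # first distribution gets weight 1
    return {"mu": mu, "sigma": sigma, "omega": omega}
```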
(2) Inputting a video image frame.
(3) Classifying the regions of the current image frame according to the background region classification information.
(4) Judging whether the current image frame matches the background information, comprising:
(4.1) Judging whether the long edge region in the current image frame matches the long edge region in the background image; this step uses the Haar-filter segmentation method.
(4.1.1) Computing, along each long edge, the Haar feature response of the current frame image in the tangential direction:
(4.1.1.1) Let the current frame image be $I_t$. The Haar feature response of the image at $(x, y)$ along direction $\theta$ is:

$$\mathrm{Haar}(x, y \mid \theta) = \sum_{|x-m|<N,\ |y-n|<N} \mathrm{sign}(\arg(m, n) - \theta)\, I_t(m, n), \qquad \mathrm{sign}(x) = \begin{cases} 1, & x > 0 \\ -1, & x \le 0. \end{cases}$$
(4.1.1.2) For each point on the long edges computed in step (1.1), the above Haar feature is computed, and its response is denoted $f(x, y)$.
(4.1.2) Within the long edge region, a pixel not on a long edge takes the feature response of the long-edge point nearest to it in Euclidean distance.
(4.1.3) The Gaussian distributions of each pixel in this region are sorted by weight, and the first $b$ distributions satisfying

$$B = \arg\min_b \left( \sum_{i=1}^{b} \omega_i > T \right)$$

are taken as background distributions; the remaining distributions are treated as foreground.
(4.1.4) Denote by $\mu_{i,x,y}$ and $\sigma_{i,x,y}$ the mean and variance of the $i$-th Gaussian distribution at background image point $(x, y)$. Going through the distributions in descending order of weight, if an $i$ is found such that $|f(x, y) - \mu_i| < \eta\, \sigma_i$, then $f(x, y)$ is considered to match the $i$-th Gaussian distribution, where $\eta$ is a constant.
(4.1.5) For any point $(x, y)$ in this region, if the distribution it matches is a background distribution, the corresponding pixel of the current image frame is considered a background pixel; otherwise it is considered a moving object pixel.
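A minimal per-pixel sketch of steps (4.1.3)-(4.1.5) follows; the values of $T$ and $\eta$ are illustrative, and the parameter layout matches the initialization sketch above.

```python
# Hypothetical sketch of steps (4.1.3)-(4.1.5): background/foreground
# decision for one pixel from its Gaussian mixture; T and eta illustrative.
import numpy as np

def is_background_pixel(f, mu, sigma, omega, t=0.7, eta=2.5):
    """f: feature at one pixel; mu: (K, D), sigma: (K,), omega: (K,)."""
    order = np.argsort(omega)[::-1]                     # descending weight
    cum = np.cumsum(omega[order])
    b = int(np.searchsorted(cum, t, side="right")) + 1  # first b are background
    for rank, i in enumerate(order):
        if np.all(np.abs(f - mu[i]) < eta * sigma[i]):  # |f - mu_i| < eta*sigma_i
            return rank < b                             # matched a background Gaussian?
    return False                                        # no match: moving object pixel
```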
(4.2) Judging whether the texture region in the current image frame matches the texture region in the background image; this step uses the gradient-feature segmentation method, namely the LBP filter neighborhood histogram method.
(4.2.1) Extracting the Local Binary Pattern (LBP) histogram feature from the texture region of the current frame image:
(4.2.1.1) Denote the current image frame by $I_t$. The local binary pattern at $(x, y)$ is computed as:

$$\mathrm{LBP}(x, y) = \sum_{i=0}^{N} \mathrm{sign}\!\left( I_t(x, y) - I_t\!\left( x + r \cos\tfrac{2\pi i}{N+1},\; y + r \sin\tfrac{2\pi i}{N+1} \right) - b_{th} \right) \cdot 2^i, \qquad \mathrm{sign}(x) = \begin{cases} 1, & x > 0 \\ 0, & x \le 0. \end{cases}$$

Compared with other gradient extraction methods, this local binary pattern operator extracts the gradient of pixel $(x, y)$ in all directions, is simple to compute, and is easy to implement in real time.
(4.2.1.2) After the LBP response of the whole current image is obtained, for each texture region point $(x, y)$ the histogram of LBP responses in its N-neighborhood is computed, denoted $\mathrm{LBPHist}(x, y)$.
(4.2.2) The pixels in this region are matched and the moving object segmented according to steps (4.1.3)-(4.1.5), with the matching formula replaced by

$$\frac{\sum_{j=1}^{M} \min\!\left(\mathrm{LBPHist}_j(x, y),\, \mu_{ij}\right)}{\sum_{j=1}^{M} \mu_{ij}} < Th_1,$$

where $Th_1$ is a parameter between 0 and 1.
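A minimal sketch of the LBP response and the histogram-intersection test follows; the sampling count, radius, $b_{th}$, and $Th_1$ values are illustrative, and image borders are ignored for brevity.

```python
# Hypothetical sketch of step (4.2): LBP code at one pixel and the
# histogram-intersection mismatch test; all parameter values illustrative.
import numpy as np

def lbp_code(img, x, y, n_samples=8, r=2, b_th=3):
    """LBP code at (x, y) per the formula of step (4.2.1.1); borders ignored."""
    code = 0
    for i in range(n_samples):
        angle = 2.0 * np.pi * i / n_samples
        m = int(round(x + r * np.cos(angle)))
        n = int(round(y + r * np.sin(angle)))
        if int(img[y, x]) - int(img[n, m]) - b_th > 0:  # sign(...) adds 2^i
            code += 1 << i
    return code

def lbp_hist_mismatch(hist, mu, th1=0.6):
    """True if sum_j min(hist_j, mu_j) / sum_j mu_j < Th1 (no match)."""
    return np.minimum(hist, mu).sum() / mu.sum() < th1
```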
(4.3) Judging whether the flat region in the current image frame matches the flat region in the background image; this step uses the color-feature segmentation method.
(4.3.1) Extracting the RGB color feature from the flat region of the current frame image: the feature is the gray value of the image's RGB channels, i.e. $f = (R, G, B)^T$.
(4.3.2) The pixels in this region are matched and the moving object segmented according to steps (4.1.3)-(4.1.5).
(5) Outputting the region image judged as unmatched in step (4).
(6) Updating the background model.
(6.1) Updating the background model according to the result of step (4):
(6.1.1) For $f(x, y)$, if no distribution satisfies the matching condition of (4.1.4), the distribution with the smallest weight is replaced by a new Gaussian distribution whose mean is $f(x, y)$, whose variance is large, and whose weight is very small; the weights are then normalized.
(6.1.2) The weights are updated according to

$$\omega_{t+1,i} = (1 - \alpha)\, \omega_{t,i} + \alpha\, S_{t,i},$$

where $S_{t,i}$ is a constant coefficient that is 1 for the matched Gaussian distribution and 0 otherwise, and $\alpha$ is a parameter adjusting the background update speed.
(6.1.3) For the matched Gaussian distribution, the mean and variance parameters are updated according to

$$\mu_{t+1} = (1 - \rho)\, \mu_t + \rho\, f(x, y)$$

$$\sigma_{t+1}^2 = (1 - \rho)\, \sigma_t^2 + \rho\, [f(x, y) - \mu_{t+1}]^T\, [f(x, y) - \mu_{t+1}]$$

For the LBPHist feature, only the mean is updated, not the variance.
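A minimal sketch of steps (6.1.1)-(6.1.3) for one pixel follows; $\alpha$, $\rho$, and the replacement variance and weight values are illustrative assumptions.

```python
# Hypothetical sketch of step (6.1): updating one pixel's mixture after
# the matching step; alpha, rho, and replacement values are illustrative.
import numpy as np

def update_mixture(f, mu, sigma2, omega, matched, alpha=0.01, rho=0.05):
    """matched: index of the matched Gaussian, or None. Updates in place."""
    if matched is None:                        # step (6.1.1): nothing matched
        worst = int(np.argmin(omega))          # replace lowest-weight Gaussian
        mu[worst] = f
        sigma2[worst] = 900.0                  # large variance
        omega[worst] = 1e-3                    # very small weight
        omega /= omega.sum()                   # weight normalization
        return
    s = np.zeros_like(omega)
    s[matched] = 1.0                           # S_{t,i}: 1 only for the match
    omega[:] = (1 - alpha) * omega + alpha * s # step (6.1.2)
    mu_next = (1 - rho) * mu[matched] + rho * f
    diff = f - mu_next
    sigma2[matched] = (1 - rho) * sigma2[matched] + rho * float(diff @ diff)
    mu[matched] = mu_next                      # step (6.1.3)
```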
(6.2) Updating the background image according to the result of step (6.1):

Denote the background image by $I_B$, the updated background image by $I'_B$, and the current image by $I_t$. If step (4) judges the point $(x, y)$ of the current image to be a background image point, then

$$I'_B(x, y) = (1 - \alpha) \cdot I_B(x, y) + \alpha \cdot I_t(x, y);$$

otherwise

$$I'_B(x, y) = I_B(x, y),$$

where $\alpha$ is the same as in step (6.1).
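A minimal sketch of this selective blending; $\alpha$ is illustrative and the mask is assumed to come from the matching of step (4).

```python
# Hypothetical sketch of step (6.2): blending the current frame into the
# background image only where step (4) judged the pixel to be background.
import numpy as np

def update_background_image(bg, frame, background_mask, alpha=0.01):
    """bg, frame: float arrays of equal shape; background_mask: bool (H, W)."""
    out = bg.copy()
    out[background_mask] = ((1 - alpha) * bg[background_mask]
                            + alpha * frame[background_mask])   # I'_B
    return out                                 # unmatched pixels keep I_B
```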
In the method of the invention, the division of the background image into a long edge region, a texture region, and a flat region in step (1) can also be set by manual division.
The background image sub-region information established in step (1) can also be kept constant; that is, the method of the invention may omit step (6), the updating of the background image information.

Claims (10)

1. A video moving object segmentation method, comprising the steps of:
(1) establishing sub-region information for a background image, including dividing the background image into a long edge region, a texture region, and a flat region, and storing the result;
(2) inputting a video image frame;
(3) dividing the current image frame into sub-regions according to the background image sub-region information;
(4) judging whether the current image frame matches the background information, comprising:
judging whether the long edge region in the current image frame matches the long edge region in the background image;
judging whether the texture region in the current image frame matches the texture region in the background image;
judging whether the flat region in the current image frame matches the flat region in the background image;
(5) outputting the region image judged as unmatched in step (4).
2. the method for claim 1 is characterized in that: in the step (4),
Judge the method that step that whether long fringe region in the current image frame and the long fringe region in the background image mate adopts Haar filtering to cut apart.
3. the method for claim 1 is characterized in that: in the step (4),
Judge the method that step that whether texture region in the current image frame and the texture region in the background image mate adopts the gradient feature to cut apart.
4. the method for claim 1 is characterized in that: in the step (4),
Judge the method that step that whether flat site in the current image frame and the flat site in the background image mate adopts color characteristic to cut apart.
5. the method for claim 1, it is characterized in that: step (1) comprising:
(1.1) be divided into the step of long fringe region at background image;
(1.2) the area dividing texture region beyond long fringe region and the step of flat site.
6. The method of claim 5, characterized in that step (1.1) specifically comprises:
(1.1.1) performing long edge detection on the background image;
(1.1.2) vectorizing the detected edges into an edge image and reconnecting broken edges;
(1.1.3) computing the length of every edge, taking the edges whose length exceeds a threshold, and dilating them by a set amount to obtain the long edge region;
(1.1.4) labeling the long edge region and storing it.
7. the method for claim 1, it is characterized in that: method also comprises:
(6) step of background image updating information.
8. The method of claim 7, characterized in that step (6) specifically comprises:
(6.1) updating the background model according to the result of step (4);
(6.2) updating the background image according to the result of step (6.1).
9. The method of claim 3, characterized in that the gradient-feature segmentation method is an LBP filter neighborhood histogram method.
10. The method of claim 5, characterized in that step (1.2) specifically is:
the area outside the long edge region is divided by comparing the local sum of LBP responses with a set value; regions above the set value are texture regions, and regions at or below the set value are flat regions.
CN2008100466966A 2008-01-16 2008-01-16 A method for video moving object segmentation Expired - Fee Related CN101216943B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008100466966A CN101216943B (en) 2008-01-16 2008-01-16 A method for video moving object segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008100466966A CN101216943B (en) 2008-01-16 2008-01-16 A method for video moving object segmentation

Publications (2)

Publication Number Publication Date
CN101216943A true CN101216943A (en) 2008-07-09
CN101216943B CN101216943B (en) 2010-07-14

Family

ID=39623371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008100466966A Expired - Fee Related CN101216943B (en) 2008-01-16 2008-01-16 A method for video moving object segmentation

Country Status (1)

Country Link
CN (1) CN101216943B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440668B (en) * 2013-08-30 2017-01-25 中国科学院信息工程研究所 Method and device for tracing online video target

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101645171A (en) * 2009-09-15 2010-02-10 湖北莲花山计算机视觉和信息科学研究院 Background modeling method (method of segmenting video moving object) based on space-time video block and online sub-space learning
CN102307274A (en) * 2011-08-31 2012-01-04 南京南自信息技术有限公司 Motion detection method based on edge detection and frame difference
CN102307274B (en) * 2011-08-31 2013-01-02 南京南自信息技术有限公司 Motion detection method based on edge detection and frame difference
CN102568002B (en) * 2011-12-20 2014-07-09 福建省华大数码科技有限公司 Moving object detection algorithm based on fusion of texture pattern and movement pattern
CN102568002A (en) * 2011-12-20 2012-07-11 福建省华大数码科技有限公司 Moving object detection algorithm based on fusion of texture pattern and movement pattern
CN102592271A (en) * 2012-01-16 2012-07-18 龙翔 Boundary connecting method of motion segmentation based on boundary extraction
CN102592271B (en) * 2012-01-16 2014-02-26 龙翔 Boundary connecting method of motion segmentation based on boundary extraction
CN102915544A (en) * 2012-09-20 2013-02-06 武汉大学 Video image motion target extracting method based on pattern detection and color segmentation
CN102915544B (en) * 2012-09-20 2015-04-15 武汉大学 Video image motion target extracting method based on pattern detection and color segmentation
CN102999921A (en) * 2012-11-09 2013-03-27 山东大学 Pixel label propagation method based on directional tracing windows
CN102999921B (en) * 2012-11-09 2015-01-21 山东大学 Pixel label propagation method based on directional tracing windows
CN103618846A (en) * 2013-11-22 2014-03-05 上海安奎拉信息技术有限公司 Background removing method for restricting influence of sudden changes of light in video analysis
CN104202559A (en) * 2014-08-11 2014-12-10 广州中大数字家庭工程技术研究中心有限公司 Intelligent monitoring system and intelligent monitoring method based on rotation invariant feature
CN109963118A (en) * 2018-07-24 2019-07-02 永康市异造科技有限公司 Scene monitoring system based on air-conditioning platform

Also Published As

Publication number Publication date
CN101216943B (en) 2010-07-14

Similar Documents

Publication Publication Date Title
CN101216943B (en) A method for video moving object segmentation
US20230289979A1 A method for video moving object detection based on relative statistical characteristics of image pixels
CN102136059B Video-analysis-based smoke detection method
CN106845415B Pedestrian fine-grained identification method and device based on deep learning
CN105354791B An improved adaptive mixture-of-Gaussians foreground detection method
CN104933710A Intelligent analysis method for in-store customer flow tracks based on surveillance video
CN101971190A Real-time body segmentation system
CN105513053B A background modeling method for video analysis
CN107945200A Image binarization segmentation method
CN101751679A Classification method, detection method and device for moving objects
CN103118220B A key-frame extraction algorithm based on multidimensional feature vectors
CN102147861A Moving object detection method using Bayesian decision based on color-texture dual feature vectors
CN103198479A SAR image segmentation method based on semantic information classification
CN101527043B Video picture segmentation method based on moving object outline information
CN103886589A Goal-oriented automatic high-precision edge extraction method
CN102496001A Method and system for automatic detection of video surveillance objects
CN107909081A Method for rapid acquisition and rapid calibration of image datasets in deep learning
CN105869174B A sky scene image segmentation method
CN102915544A Video image moving object extraction method based on pattern detection and color segmentation
CN101908214B Moving object detection method with background reconstruction based on neighborhood correlation
CN107330027A A weakly supervised deep station-caption detection method
CN106570885A Background modeling method based on fused brightness and texture thresholds
CN104599291B Infrared moving object detection method based on structural similarity and saliency analysis
CN110084201A A human action recognition method using convolutional neural networks based on specific object tracking in surveillance scenes
CN102663777A Target tracking method and system based on multi-view video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100714

Termination date: 20150116

EXPY Termination of patent right or utility model