CN101315701A - Moving target image segmentation method - Google Patents

Moving target image segmentation method

Info

Publication number
CN101315701A
CN101315701A CNA2008100538305A CN200810053830A CN101315701A CN 101315701 A CN101315701 A CN 101315701A CN A2008100538305 A CNA2008100538305 A CN A2008100538305A CN 200810053830 A CN200810053830 A CN 200810053830A CN 101315701 A CN101315701 A CN 101315701A
Authority
CN
China
Prior art keywords
image
background
moving target
difference
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008100538305A
Other languages
Chinese (zh)
Other versions
CN101315701B (en
Inventor
明东
刘双迟
张希
程龙龙
万柏坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN2008100538305A priority Critical patent/CN101315701B/en
Publication of CN101315701A publication Critical patent/CN101315701A/en
Application granted granted Critical
Publication of CN101315701B publication Critical patent/CN101315701B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention provides a moving target image segmentation method, belonging to the technical field of computer image processing. The method comprises the following steps: step 1, obtaining a background image using the least median of squares method; step 2, obtaining a difference image using an indirect difference function; step 3, selecting a segmentation threshold T and binarizing the difference image to obtain a binarized foreground image; step 4, dynamically updating the background image according to the current frame, the current background image and the binarized foreground image; step 5, using morphological filtering to eliminate noise in the binary image and fill gaps in the moving target image. The invention supports the effective use of surveillance systems and the reliable evaluation of surveillance performance, and can be widely applied in public places such as police, fire fighting, customs, ports and stations, yielding considerable social benefit and improving public safety services.

Description

Moving target image segmentation method
Technical field
The invention belongs to the technical field of computer image processing and relates to a moving target image segmentation method.
Background art
Segmentation of moving human images against a complex background is the most fundamental step in surveillance video processing; it aims to extract the moving human region from the background of the video sequence acquired by the surveillance camera. Effective segmentation of the moving region is essential for subsequent processing of the monitored target, such as classification, tracking and recognition. However, because of dynamic changes in the background image, such as changes in weather and illumination, background clutter, shadows of the moving target, occlusion between objects and the environment or between objects, and even camera motion, detecting moving human images is a rather difficult task. It has therefore attracted great interest from many researchers at home and abroad, and has become a prominent research focus in the field of video image information detection in recent years.
According to algorithm characteristics, methods for detecting moving humans in video images with complex backgrounds can be roughly divided into two classes: motion-based segmentation and static segmentation. Motion-based segmentation algorithms exploit the motion of the target to distinguish it from the background of the image sequence. Static segmentation algorithms first segment each single frame statically according to information such as the gray level, texture or contour gradient of each region, then perform motion estimation between consecutive frames using an approach similar to block matching, and finally merge the segmented regions to extract the moving target. For video with a stationary background and only simple global scene motion, frame-difference motion detection or background-modeling motion detection is usually adopted. Frame-difference motion detection thresholds the difference image of two or three adjacent frames to extract the moving region. Background-modeling motion detection is one of the most commonly used motion segmentation methods: usually, such an algorithm first performs statistical modeling on the complete video sequence to distinguish the pixel classes (background/moving region) and generate a background image, and then subtracts the background from each frame to obtain the moving target. In short, there are many motion segmentation algorithms; the key is to find an appropriate method, that is, to achieve the expected segmentation at minimum cost.
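For illustration, the following is a minimal numpy sketch of the three-frame differencing mentioned above; the uint8 grayscale input format, the frame names and the threshold value of 25 are assumptions made for this example and are not part of the patent.

import numpy as np

def three_frame_difference(prev_frame, curr_frame, next_frame, thresh=25):
    """Return a binary motion mask from three consecutive grayscale frames."""
    d1 = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    d2 = np.abs(next_frame.astype(np.int16) - curr_frame.astype(np.int16))
    # A pixel is flagged as moving only if it changed in both adjacent differences.
    return ((d1 > thresh) & (d2 > thresh)).astype(np.uint8)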
Summary of the invention
Aiming at the characteristics of video images acquired by fixed-point surveillance in public places, namely a stationary background and panoramic motion, the present invention proposes a moving target image segmentation method that can be used in video surveillance with complex backgrounds. The invention supports the effective use of surveillance systems and the reliable evaluation of surveillance performance, and can be widely applied in public places such as police, fire fighting, customs, ports and stations, yielding considerable social benefit and improving public safety services.
The present invention adopts the following technical scheme:
A moving target image segmentation method, in which each image is segmented by the following steps:
Step 1: obtain the background image using the least median of squares (LMedS) method;
Step 2: obtain the difference image using the indirect difference function f(a,b) = 1 − [2(a+1)(b+1)/((a+1)² + (b+1)²)]·[2(256−a)(256−b)/((256−a)² + (256−b)²)], where a and b denote the gray (intensity) values of the current image and the background image at the same pixel position, 0 ≤ a, b ≤ 255 and 0 ≤ f(a,b) ≤ 1;
Step 3: select a segmentation threshold T and binarize the difference image according to M(x,y) = 1 if f(a(x,y), b(x,y)) ≥ T and M(x,y) = 0 otherwise, to obtain the binarized foreground image;
Step 4: let I(x,y)_t be the sequence of N acquired frames, let B(x,y)_{n+1} and B(x,y)_n be respectively the updated and the current background image obtained according to step 1, and let M(x,y)_n be the binarized foreground image obtained according to step 3; dynamically update the background image according to the context-update formula (given as an image in the original document);
Step 5: use morphological filtering to eliminate noise in the binary image and fill gaps in the moving target image.
In the above moving target image segmentation method, in step 1 the background images of the R, G and B components can be obtained respectively according to the formula B(x,y) = arg{min_p med_t (I(x,y)_t − p)²}, where p is the color value to be determined at pixel position (x, y) and t is the frame index; in step 3 the threshold T can be determined by maximizing the between-class variance; in step 4, α can be set to 0.8.
The video data used by the present invention has the characteristics of a stationary background and panoramic motion, and the invention combines dynamic background modeling with differential motion detection to segment the moving target image. As can be seen from the stage-by-stage results in Fig. 3, Fig. 4 and Fig. 5, the overall scheme adopted by the present invention for segmenting moving human contour images in complex-background video surveillance is appropriate; the processing steps of static background modeling, dynamic background updating, differential motion segmentation and morphological post-processing are all indispensable, the processing flow is reasonable, and the final segmentation result is clear and well-defined.
In modern surveillance systems, methods for automatically acquiring data about monitored objects can be roughly divided into two classes. One class uses sensors such as piezoelectric sensors, infrared sensors and inductive loop coils to obtain parameters of the monitored object itself; these methods achieve a relatively high tracking and recognition rate, but the sensors are easily damaged and inconvenient to install. The other class is based on image processing and pattern recognition (the present invention belongs to this class) and overcomes the limitations of the first class; progress in image processing and recognition technology and the marked improvement in the cost-performance of hardware make the method proposed by the invention practical to implement. Compared with the first class of methods, the present invention produces obvious technical effects: it adapts well to the environment, can run stably for long periods, and can monitor without the knowledge of the monitored object, which greatly improves the effectiveness of security monitoring and avoids the unnecessary friction and conflict that traditional (first-class) monitoring devices may cause with the monitored object.
Description of drawings
Fig. 1 Flow of moving human contour image segmentation.
Fig. 2 The difference function for different background gray levels.
Fig. 3(a) Motion segmentation result of the R component.
Fig. 3(b) Motion segmentation result of the G component.
Fig. 3(c) Motion segmentation result of the B component.
Fig. 3(d) Result of the logical OR of the three images (a), (b) and (c).
Fig. 4 Result after morphological filtering.
Fig. 5 Final moving target segmentation result.
Fig. 6 Dilation example for a binary image: (a) a binary image; (b) the structuring element B, with "+" marking its reference point; (c) the dilation result.
Fig. 7 Erosion example for a binary image: (a) a binary image; (b) the structuring element B, with "+" marking its reference point; (c) the erosion result.
Embodiments
The present invention is described in detail below with reference to the drawings and embodiments.
As an embodiment, the moving human contour image segmentation method for complex-background video surveillance according to the present invention comprises the steps of static background modeling, dynamic background updating, differential motion segmentation and morphological post-processing, as shown in Fig. 1. Each step is described in more detail below in conjunction with the embodiment.
1. Static background modeling
The data used in the experiments of the present invention come from the gait database released by the Institute of Automation, Chinese Academy of Sciences (Beijing). It is a small database containing 20 subjects; all data were captured with a single camera (Panasonic NV-DX100EN), and video sequences of each subject walking outdoors in three directions (0°, 45°, 90°) were recorded. Only the 0° data, i.e. the side-view walking sequences, are used in the present invention.
The present invention models the background image with the least median of squares method (LMedS). LMedS is an algorithm founded on robust statistics. Robustness refers to the degree to which individual outliers influence a statistic, and robust statistics provides parameter estimation methods suited to samples that contain outliers. Robust statistics has received wide attention in the field of computer vision, mainly because the input data of computer vision problems are often contaminated by outliers.
In robust statistics, to measure the ability of an algorithm to resist outlier disturbances, Hampel proposed the concept of the breakdown point (BP). Because the original definition is an asymptotic result and inconvenient to compute, Donoho and Huber defined the BP for a finite sample as:
BP(T, X) = \min\{\, m/n : \beta(m; T, X) = \infty \,\}   (1)
where T is the parameter estimator, n is the size of the sample X, and β(m; T, X) denotes the supremum of the difference between the estimates obtained before and after arbitrarily replacing m points in X. The finite-sample BP thus expresses the minimum proportion of outliers in the data at which the estimate fails when the parameter is estimated with this method; when the proportion of outliers in the data exceeds the BP, the estimate becomes very unstable.
There are basically two ways to deal with outliers. One is the accommodation-based approach, in which the method itself tolerates the interference of outliers; the other is to reject the outliers first and then process the data with classical methods. The least median of squares algorithm (LMedS) combines the two: under the accommodation premise it applies least-squares estimation (least mean square, LMS), so it can suppress outlier interference while still achieving a good estimate. Rousseeuw and Leroy defined the least median of squares method as follows:
Given a sample X = {x_1, x_2, ..., x_i, ..., x_N}, where x_i is the i-th of N observations of x, the estimate \hat{\theta} of x obtained from the x_i is
\hat{\theta} = \arg\{\min_{\theta} \operatorname{med}_i (x_i - \theta)^2\}   (2)
where i = 1, 2, ..., N.
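As a small illustration of formula (2), the numpy sketch below evaluates the median of squared residuals over a grid of candidate values of θ and keeps the minimizer; the candidate grid, the function name lmeds_estimate and the sample values are assumptions made for this example only.

import numpy as np

def lmeds_estimate(x, candidates):
    """Return the candidate theta minimizing the median of (x_i - theta)^2."""
    x = np.asarray(x, dtype=float)
    medians = [np.median((x - theta) ** 2) for theta in candidates]
    return candidates[int(np.argmin(medians))]

# Illustrative sample: mostly values near 100 plus a few outliers.
sample = [98, 101, 100, 99, 250, 102, 0, 100]
print(lmeds_estimate(sample, candidates=np.arange(256)))  # -> a value near 100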
The video sequences in the database used here were captured with a fixed camera; in theory, if there were no disturbances or other influences at all, the background would be static. Analyzing the video sequences frame by frame and tracking the gray-level curve of a given pixel over time, the curve remains essentially steady. However, for pixels affected by moving objects or other environmental disturbances, the gray-level curve changes over time. The present invention therefore tracks and observes the gray levels of pixels at different positions in the gait sequences.
The static background modeling process is as follows.
Let I(x,y)_t denote the sequence of N acquired frames, where t is the frame index (t = 1, 2, ..., N) and (x, y) ∈ I_t. The background B(x,y) is then
B(x,y) = \arg\{\min_{p} \operatorname{med}_t (I(x,y)_t - p)^2\}   (3)
where p is the color value (R, G, B) to be determined at pixel position (x, y); if each component is an 8-bit image, p ranges from 0 to 255. t is the frame index and varies from 1 to N (arg denotes the value of the unknown that satisfies the condition in braces). The specific flow of the algorithm is (taking the R component as an example):
(i) select a pixel position (x, y);
(ii) set p = 0;
(iii) compute (I(x,y)_1 − p)², (I(x,y)_2 − p)², ..., (I(x,y)_N − p)² in turn;
(iv) sort the results and take the median: if N is odd, take the ((N+1)/2)-th sorted value; if N is even, take the mean of the (N/2)-th and (N/2+1)-th sorted values; save the result in the array med as med_p (med_0 for p = 0);
(v) set p = p + 1; if p ≤ 255, return to (iii) and repeat (iii)–(v), saving the result as med_p; otherwise go to (vi);
(vi) find the minimum of med_0, med_1, ..., med_255; the corresponding value of p is the background gray level at this pixel position;
(vii) select the next pixel position and return to (ii); repeat until all pixels in the image have been processed.
Since the database images are in RGB format, the R, G and B components are modeled separately here; combining them yields a color background image in R, G, B form.
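A possible vectorized sketch of steps (i)–(vii) follows, assuming the N frames are stacked in a numpy array of shape (N, H, W, C) with 8-bit values so that all color components are processed at once; the function name lmeds_background and the array layout are assumptions for this example, and the loop over candidate gray levels p mirrors the per-pixel search described above.

import numpy as np

def lmeds_background(frames):
    """Estimate a static background from a frame stack of shape (N, H, W, C), dtype uint8.

    For every pixel and channel, choose the gray level p in 0..255 that minimizes
    the median over time of (I_t - p)^2, as in formula (3)."""
    frames = frames.astype(np.int32)
    n, h, w, c = frames.shape
    best_med = np.full((h, w, c), np.inf)
    background = np.zeros((h, w, c), dtype=np.uint8)
    for p in range(256):
        med = np.median((frames - p) ** 2, axis=0)   # median of squared residuals over the N frames
        better = med < best_med
        best_med[better] = med[better]
        background[better] = p                       # keep the best candidate gray level so far
    return background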
2. Differential motion segmentation
To determine the moving target, the most common approach is to subtract the background model from the current image and threshold the resulting difference image. A major drawback of this method is that for low-contrast images the gray-level changes are too small for a segmentation threshold to be determined reliably, i.e. it is difficult to extract the moving target completely and clearly from the background. For this reason, the present invention instead uses an indirect difference function to perform the difference operation. The difference function is expressed as:
f(a,b) = 1 - \frac{2(a+1)(b+1)}{(a+1)^2 + (b+1)^2} \cdot \frac{2(256-a)(256-b)}{(256-a)^2 + (256-b)^2}   (5)
where a and b denote the gray (intensity) values of the current image and the background image at the same pixel position, 0 ≤ a, b ≤ 255 and 0 ≤ f(a,b) ≤ 1. When a = b, f(a,b) = 0; when a ≠ b, the larger the difference between a and b, the larger f(a,b). At the same time, the sensitivity of the difference function adapts automatically to the background gray level. Taking b = 5 and b = 100 as examples, the difference function is shown in Fig. 2. As can be seen from the figure, f(a,b) differs for different b: when b is small (b = 5), the difference function grows rapidly with a, which shows that the sensitivity of the difference function increases automatically under low-contrast conditions; this adaptivity improves the accuracy of the image segmentation.
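The following is a direct numpy sketch of the difference function as reconstructed in formula (5), assuming 8-bit inputs applied per color component; the function name indirect_difference is an assumption.

import numpy as np

def indirect_difference(current, background):
    """Indirect difference f(a, b) of formula (5); returns values in [0, 1]."""
    a = current.astype(np.float64)
    b = background.astype(np.float64)
    low = 2.0 * (a + 1) * (b + 1) / ((a + 1) ** 2 + (b + 1) ** 2)       # sensitive near gray level 0
    high = 2.0 * (256 - a) * (256 - b) / ((256 - a) ** 2 + (256 - b) ** 2)  # sensitive near gray level 255
    return 1.0 - low * high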
After the difference between the background image and the current image has been determined, a segmentation threshold T (0 ≤ T ≤ 1) must be selected. The present invention uses Otsu's method (Otsu N. A threshold selection method from gray-level histograms. IEEE Trans. Systems, Man and Cybernetics, 1979, SMC-9(1), 62–66), i.e. the threshold T is determined by maximizing the between-class variance. The binarization process can be expressed as:
M(x,y) = \begin{cases} 1, & f(a(x,y), b(x,y)) \ge T \\ 0, & \text{otherwise} \end{cases}   (6)
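A minimal sketch of this thresholding step follows: Otsu's threshold is found by maximizing the between-class variance over a histogram of the difference values, and the mask of formula (6) is then produced. The 256-bin histogram and the function names are implementation assumptions rather than details given in the patent.

import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold maximizing the between-class variance of `values` in [0, 1]."""
    hist, edges = np.histogram(values.ravel(), bins=bins, range=(0.0, 1.0))
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = hist.sum()
    best_t, best_var = 0.0, -1.0
    for k in range(1, bins):
        w0, w1 = hist[:k].sum(), hist[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:k] * centers[:k]).sum() / w0     # mean of the lower class
        mu1 = (hist[k:] * centers[k:]).sum() / w1     # mean of the upper class
        var_between = (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t

def binarize(diff):
    """Formula (6): foreground where f(a, b) >= T."""
    t = otsu_threshold(diff)
    return (diff >= t).astype(np.uint8)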
3. Dynamic background updating
Although a static background model has been established above by the least median of squares method, because of noise and illumination changes the background of an actual video sequence does not remain static. To obtain a more accurate differential motion segmentation (i.e. background subtraction), the background must be updated dynamically. The present invention adopts the Kalman-filter method of Karmann and Brandt for dynamic background updating (Karmann K, Brandt A. Moving object recognition using an adaptive background memory. In: Cappellini V, ed. Time-varying Image Processing and Moving Object Recognition 2. Elsevier, Amsterdam, The Netherlands, 1990). Let B(x,y)_{n+1} and B(x,y)_n be the updated and current background respectively, and let M(x,y)_n be the binarized foreground image obtained by motion detection on the current frame I(x,y)_n. The background update is then:
(background-update formula, given as an image in the original document)
where α is a weighting coefficient; experimental analysis in the present invention shows that α = 0.8 is suitable. The update stops when the updated background and the previous background are sufficiently close.
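Since the exact update formula appears only as an image in the original document, the sketch below assumes a common selective form of the Karmann–Brandt update: background pixels are blended toward the current frame with weight α = 0.8, while pixels flagged as foreground keep their previous background value. This is an assumed reading, not the patent's literal formula.

import numpy as np

def update_background(background, frame, fg_mask, alpha=0.8):
    """Selective background update (assumed form; the patent's exact formula is an image).

    Background pixels are blended toward the current frame; pixels flagged as
    foreground keep the previous background value."""
    bg = background.astype(np.float64)
    fr = frame.astype(np.float64)
    m = fg_mask.astype(bool)
    updated = alpha * bg + (1.0 - alpha) * fr   # blend where the scene is background
    updated[m] = bg[m]                          # freeze the update under the moving target
    return updated.astype(np.uint8)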
The result of motion segmentation with the automatically updated background is shown in Fig. 3.
4. Morphological post-processing
The image after motion segmentation inevitably contains noise, and a few points inside the moving target may be misclassified as background; the image therefore needs post-processing to obtain the best segmentation result. The present invention uses morphological filtering to eliminate noise in the binary image and to fill gaps in the moving target.
In morphology, dilation and erosion are the most basic morphological transformations.
1. Dilation
Dilation, also called the expansion operation, is denoted by the symbol ⊕. The dilation of X by B is written X ⊕ B and is defined as
X \oplus B = \{\, x \mid (\hat{B})_x \cap X \neq \varnothing \,\}
The dilation process can be described as follows: the set B is first reflected about the origin to obtain B̂, which is then translated by x to form (B̂)_x; X ⊕ B is the set of reference-point positions x of the structuring element for which (B̂)_x and X have a non-empty intersection. In other words, the set obtained by dilating X with B is the set of reference-point positions of B̂ at which at least one non-zero element of B̂ overlaps with X.
Example 1. Dilation example
Fig. 6(a) is a binary image in which the shaded area represents the region with high gray value (usually 1) and the white area the region with low gray value (usually 0); the coordinate of its upper-left corner is (0, 0). Fig. 6(b) is the structuring element B, with "+" marking its reference point. The dilation result is shown in Fig. 6(c), where black marks the part added by the dilation. Comparing the result X ⊕ B with X shows that X has been expanded to an extent determined by the shape of B, which is why the operation is called dilation.
2. Erosion
Erosion, also called the corrosion operation, is denoted by the symbol ⊖. The erosion of X by B is written X ⊖ B and is defined as
X \ominus B = \{\, x \mid (B)_x \subseteq X \,\}
The erosion process can be described as follows: X ⊖ B is the set of reference-point positions x of the structuring element for which B, translated by x, is still entirely contained in X. In other words, the set obtained by eroding X with B is the set of reference-point positions of B at which B is completely contained in X.
Example 2. Erosion example
Fig. 7(a) is a binary image, and Fig. 7(b) is the structuring element B, with "+" marking its reference point. The erosion result is shown in Fig. 7(c), where black marks the part that remains after erosion. Comparing the result X ⊖ B with X shows that the extent of X has shrunk; as can be seen, every part that cannot contain the structuring element has been eroded away.
In morphology, the opening A ∘ B is the result of first eroding A by B and then dilating the result by B, that is:
A \circ B = (A \ominus B) \oplus B
where ⊖ denotes erosion and ⊕ denotes dilation. Opening removes objects that cannot completely contain the structuring element, smooths the convex contours of objects, breaks narrow connections and removes small protrusions. Closing is the opposite of opening: the closing A • B is defined as the result of first dilating A by B and then eroding the result by B, that is:
A \bullet B = (A \oplus B) \ominus B
Closing fills holes smaller than the structuring element, smooths the concave contours of objects and bridges narrow gaps and long, thin breaks. These properties of opening and closing can be used to implement filtering and hole filling. Fig. 4 shows the result of the morphological filtering.
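A possible sketch of the morphological filtering described above: an opening removes noise blobs smaller than the structuring element, and a closing then fills small holes in the foreground mask. The 3×3 square structuring element and the function name morphological_cleanup are assumptions.

import numpy as np
from scipy import ndimage

def morphological_cleanup(mask, structure=None):
    """Opening (erosion then dilation) removes small noise blobs; closing
    (dilation then erosion) fills holes smaller than the structuring element."""
    if structure is None:
        structure = np.ones((3, 3), dtype=bool)
    opened = ndimage.binary_opening(mask.astype(bool), structure=structure)
    closed = ndimage.binary_closing(opened, structure=structure)
    return closed.astype(np.uint8)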
After morphological filtering, the noise may not be eliminated completely; the remaining clutter may form blobs of various sizes, among which the moving target is usually the largest. A connected-component analysis can therefore be applied so that only the moving target is retained in the image. The steps of the connected-component analysis are:
(i) label the connected regions;
(ii) count the pixels in each region;
(iii) find the region with the largest pixel count;
(iv) take that region as the moving target.
The final moving target segmentation result is shown in Fig. 5.
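The following is a minimal scipy.ndimage sketch of steps (i)–(iv) above, keeping only the largest connected component as the moving target; the function name keep_largest_component is an assumption.

import numpy as np
from scipy import ndimage

def keep_largest_component(mask):
    """Label the connected components of a binary mask and keep the largest one,
    which is assumed to be the moving target."""
    labels, num = ndimage.label(mask)            # (i) label connected regions
    if num == 0:
        return np.zeros_like(mask)
    sizes = np.bincount(labels.ravel())          # (ii) pixel count per region
    sizes[0] = 0                                 # ignore the background label
    largest = sizes.argmax()                     # (iii) largest region
    return (labels == largest).astype(np.uint8)  # (iv) retain it as the moving target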

Claims (1)

1. A moving target image segmentation method, in which each image is segmented by the following steps:
Step 1: obtain the background image using the least median of squares method;
Step 2: obtain the difference image using the indirect difference function f(a,b) = 1 − [2(a+1)(b+1)/((a+1)² + (b+1)²)]·[2(256−a)(256−b)/((256−a)² + (256−b)²)], where a and b denote the gray (intensity) values of the current image and the background image at the same pixel position, 0 ≤ a, b ≤ 255 and 0 ≤ f(a,b) ≤ 1;
Step 3: select a segmentation threshold T and binarize the difference image according to M(x,y) = 1 if f(a(x,y), b(x,y)) ≥ T and M(x,y) = 0 otherwise, to obtain the binarized foreground image;
Step 4: let I(x,y)_t be the sequence of N acquired frames, let B(x,y)_{n+1} and B(x,y)_n be respectively the updated and the current background image obtained according to step 1, and let M(x,y)_n be the binarized foreground image obtained according to step 3; dynamically update the background image according to the context-update formula (given as an image in the original document);
Step 5: use morphological filtering to eliminate noise in the binary image and fill gaps in the moving target image.
2. The moving target image segmentation method according to claim 1, characterized in that in step 1 the background images of the R, G and B components are obtained respectively according to the formula B(x,y) = arg{min_p med_t (I(x,y)_t − p)²}, where p is the color value to be determined at pixel position (x, y) and t is the frame index.
3. The moving target image segmentation method according to claim 1, characterized in that in step 3 the threshold T is determined by the method of maximizing the between-class variance.
4. The moving target image segmentation method according to claim 1, characterized in that in step 4, α is set to 0.8.
CN2008100538305A 2008-07-11 2008-07-11 Moving target image segmentation method Active CN101315701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008100538305A CN101315701B (en) 2008-07-11 2008-07-11 Moving target image segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008100538305A CN101315701B (en) 2008-07-11 2008-07-11 Moving target image segmentation method

Publications (2)

Publication Number Publication Date
CN101315701A true CN101315701A (en) 2008-12-03
CN101315701B CN101315701B (en) 2010-06-30

Family

ID=40106701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008100538305A Active CN101315701B (en) 2008-07-11 2008-07-11 Movement destination image partition method

Country Status (1)

Country Link
CN (1) CN101315701B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510304A (en) * 2009-03-30 2009-08-19 北京中星微电子有限公司 Method, device and pick-up head for dividing and obtaining foreground image
CN101789128A (en) * 2010-03-09 2010-07-28 成都三泰电子实业股份有限公司 Target detection and tracking method based on DSP and digital image processing system
CN102096931A (en) * 2011-03-04 2011-06-15 中南大学 Moving target real-time detection method based on layering background modeling
CN101930610B (en) * 2009-06-26 2012-05-02 思创影像科技股份有限公司 Method for detecting moving object by using adaptable background model
CN103366569A (en) * 2013-06-26 2013-10-23 东南大学 Method and system for snapshotting traffic violation vehicle in real time
CN103745216A (en) * 2014-01-02 2014-04-23 中国民航科学技术研究院 Radar image clutter suppression method based on airspace feature
WO2016011641A1 (en) * 2014-07-24 2016-01-28 徐勇 Adaptive sobs improvement method and video surveillance system based on the method
CN105335942A (en) * 2015-09-22 2016-02-17 成都融创智谷科技有限公司 Local enhancement image acquisition method of moving object on the basis of Canny operator
CN105657317A (en) * 2014-11-14 2016-06-08 澜起科技(上海)有限公司 Interlaced video motion detection method and system in video de-interlacing
CN112074040A (en) * 2020-08-19 2020-12-11 福建众益太阳能科技股份公司 Solar intelligent monitoring street lamp and monitoring control method thereof
CN112418105A (en) * 2020-11-25 2021-02-26 湖北工业大学 High maneuvering satellite time sequence remote sensing image moving ship target detection method based on difference method
CN113160109A (en) * 2020-12-15 2021-07-23 宁波大学 Cell image segmentation method for preventing background difference
CN113411509A (en) * 2021-06-15 2021-09-17 西安微电子技术研究所 Satellite-borne autonomous vision processing system

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510304B (en) * 2009-03-30 2014-05-21 北京中星微电子有限公司 Method, device and pick-up head for dividing and obtaining foreground image
CN101510304A (en) * 2009-03-30 2009-08-19 北京中星微电子有限公司 Method, device and pick-up head for dividing and obtaining foreground image
CN101930610B (en) * 2009-06-26 2012-05-02 思创影像科技股份有限公司 Method for detecting moving object by using adaptable background model
CN101789128A (en) * 2010-03-09 2010-07-28 成都三泰电子实业股份有限公司 Target detection and tracking method based on DSP and digital image processing system
CN101789128B (en) * 2010-03-09 2012-01-18 成都三泰电子实业股份有限公司 Target detection and tracking method based on DSP and digital image processing system
CN102096931A (en) * 2011-03-04 2011-06-15 中南大学 Moving target real-time detection method based on layering background modeling
CN102096931B (en) * 2011-03-04 2013-01-09 中南大学 Moving target real-time detection method based on layering background modeling
CN103366569A (en) * 2013-06-26 2013-10-23 东南大学 Method and system for snapshotting traffic violation vehicle in real time
CN103745216A (en) * 2014-01-02 2014-04-23 中国民航科学技术研究院 Radar image clutter suppression method based on airspace feature
CN103745216B (en) * 2014-01-02 2016-10-26 中国民航科学技术研究院 A kind of radar image clutter suppression method based on Spatial characteristic
WO2016011641A1 (en) * 2014-07-24 2016-01-28 徐勇 Adaptive sobs improvement method and video surveillance system based on the method
CN105657317B (en) * 2014-11-14 2018-10-16 澜至电子科技(成都)有限公司 A kind of interlaced video method for testing motion in video release of an interleave and its system
CN105657317A (en) * 2014-11-14 2016-06-08 澜起科技(上海)有限公司 Interlaced video motion detection method and system in video de-interlacing
CN105335942A (en) * 2015-09-22 2016-02-17 成都融创智谷科技有限公司 Local enhancement image acquisition method of moving object on the basis of Canny operator
CN112074040A (en) * 2020-08-19 2020-12-11 福建众益太阳能科技股份公司 Solar intelligent monitoring street lamp and monitoring control method thereof
CN112418105A (en) * 2020-11-25 2021-02-26 湖北工业大学 High maneuvering satellite time sequence remote sensing image moving ship target detection method based on difference method
CN112418105B (en) * 2020-11-25 2022-09-27 湖北工业大学 High maneuvering satellite time sequence remote sensing image moving ship target detection method based on difference method
CN113160109A (en) * 2020-12-15 2021-07-23 宁波大学 Cell image segmentation method for preventing background difference
CN113160109B (en) * 2020-12-15 2023-11-07 宁波大学 Cell image segmentation method based on anti-background difference
CN113411509A (en) * 2021-06-15 2021-09-17 西安微电子技术研究所 Satellite-borne autonomous vision processing system
CN113411509B (en) * 2021-06-15 2023-09-26 西安微电子技术研究所 Satellite-borne autonomous vision processing system

Also Published As

Publication number Publication date
CN101315701B (en) 2010-06-30

Similar Documents

Publication Publication Date Title
CN101315701B (en) Moving target image segmentation method
CN104303193B (en) Target classification based on cluster
CN103077539B (en) Motion target tracking method under a kind of complex background and obstruction conditions
CN102663743B (en) Personage's method for tracing that in a kind of complex scene, many Kameras are collaborative
US8243987B2 (en) Object tracking using color histogram and object size
CN103400120B (en) Video analysis-based bank self-service area push behavior detection method
CN103986906B (en) Door opening and closing detection method based on monitoring videos
CN101976504B (en) Multi-vehicle video tracking method based on color space information
CN101286239A (en) Aerial shooting traffic video frequency vehicle rapid checking method
CN109919053A (en) A kind of deep learning vehicle parking detection method based on monitor video
CN102663362B (en) Moving target detection method based on gray features
CN108765453B (en) Expressway agglomerate fog identification method based on video stream data
CN103793715B (en) The personnel in the pit's method for tracking target excavated based on scene information
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing
CN109086682A (en) A kind of intelligent video black smoke vehicle detection method based on multi-feature fusion
CN111460917B (en) Airport abnormal behavior detection system and method based on multi-mode information fusion
KR101690050B1 (en) Intelligent video security system
CN107729811B (en) Night flame detection method based on scene modeling
CN107832732B (en) Lane line detection method based on treble traversal
CN111723757B (en) Method and system for monitoring refuse landfill
Tai et al. Background segmentation and its application to traffic monitoring using modified histogram
Raikar et al. Automatic building detection from satellite images using internal gray variance and digital surface model
Liu et al. A real-time vision-based vehicle tracking and traffic surveillance
CN108241837B (en) Method and device for detecting remnants
Iwasaki et al. Real-time vehicle detection using information of shadows underneath vehicles

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant