CN106446790A - Method for tracking and analyzing traffic video flow of fixed camera - Google Patents


Info

Publication number
CN106446790A
CN106446790A (Application CN201610782749.5A)
Authority
CN
China
Prior art keywords
tracking
detection
video flow
traffic video
trail
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610782749.5A
Other languages
Chinese (zh)
Inventor
盛斌
赵超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201610782749.5A
Publication of CN106446790A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Abstract

The invention relates to a method for tracking and analyzing traffic video flow from a fixed camera. The method comprises the following steps: 1) selecting an input video; 2) obtaining image frames; 3) performing foreground extraction and preprocessing, then region growing and segmentation of adhered objects, to obtain detection targets; 4) assigning detections to tracks; and 5) updating the tracks. Compared with the prior art, the method effectively reduces the error of the common shadow-removal technique based on fixed-threshold segmentation in HSV space, is superior to obtaining the vehicle body by morphological operations with a fixed structuring element, and has other advantages.

Description

A traffic video flow tracking and analysis method for a fixed camera
Technical field
The present invention relates to the technical fields of image analysis and video processing, and in particular to a traffic video flow tracking and analysis method for a fixed camera.
Background technology
Tracking and analysis of traffic flow has wide application in residential-area gateways, and in extracting and recording flow information on highways and arterial roads.
Moving-object tracking comprises two parts: detection of moving targets and tracking of them. The main detection approaches are the frame-difference method, background subtraction, and the optical-flow method. The frame-difference method computes a per-pixel difference between two or three adjacent frames and thresholds the result to obtain the moving regions of the image; it adapts well to dynamic environments, but cannot segment moving objects cleanly. The basic idea of background subtraction is to model the background of the image and then subtract the background image from the current frame to obtain the moving target region; its key issues are the construction and updating of the background model. Background subtraction extracts target points more completely, but is overly sensitive to dynamic scene changes caused by illumination and other external conditions. Optical-flow-based detection exploits the time-varying optical-flow characteristics of moving targets, studying the relationship between temporal changes in image intensity and the structure and motion of objects in the scene. Its advantage is that it can detect independently moving objects without any prior knowledge of the scene; its drawbacks are strong dependence on lighting, ease of losing the target, implementation complexity, and the need for dedicated hardware support to achieve acceptable results.
After moving-object detection is complete, the next step is to locate the target's state in the current frame via a tracking algorithm, based on the previous frame's position, velocity, and state model, thereby obtaining the target's trajectory and realizing so-called target tracking. Currently mature tracking algorithms include model-based tracking, feature-based tracking, active-contour-based tracking, blob-based tracking, and tracking based on the Bayesian framework.
Content of the invention
The purpose of the present invention is to overcome the above-described defects of the prior art by providing a traffic video flow tracking and analysis method for a fixed camera.
The purpose of the present invention can be achieved through the following technical solution:
A traffic video flow tracking and analysis method for a fixed camera, comprising the following steps:
1) selecting an input video;
2) obtaining image frames;
3) performing foreground extraction and preprocessing, then region growing and segmentation of adhered objects, to obtain detection targets;
4) assigning detections to tracks;
5) updating the tracks.
Said step 3) is specifically:
First, shadows are removed using a double HSV-space conversion and the OTSU algorithm, road boundaries are segmented using the Sobel operator, and the complete vehicle body is obtained using a region-growing algorithm;
Then, a foreground extractor based on a Gaussian-mixture-model background subtraction algorithm extracts the binary image after shadow removal;
Finally, morphological operations fill connected regions and eliminate small speckle noise.
Said HSV-space conversion is specifically: using the HSV luminance component, the image is first converted to HSV space and the following operation is applied to the V component (scaled to [0, 1]):

V' = 1 − (V − 1)²

The image is then converted back to RGB space, completing the luminance equalization.
Said Sobel-operator road-boundary segmentation is specifically: the operator comprises two 3×3 matrices, one horizontal and one vertical; convolving each with the image yields approximations of the horizontal and vertical brightness differences, respectively. Morphological opening and closing then remove the speckle detected along tree and building boundaries.
Said assignment of detections to tracks comprises prediction for the tracks, and association of detections with tracks.
Said prediction for the tracks is specifically: a Kalman filter predicts the position of each tracked object in the current frame, and these predicted tracks are updated.
Said association of detections with tracks is specifically: the targets detected in each frame are associated with the existing tracks at minimum motion cost, and the tracking result is updated; the Kalman filter continues to be used to assign detections to the individual tracks.
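The minimum-motion-cost association described above can be sketched as a greedy pairing of Kalman-predicted track positions with detection centroids. This is a minimal illustration, not the patent's exact procedure: the function name, the Euclidean-distance cost, and the `max_cost` gate are assumptions, and a greedy pass is used where an optimal assignment algorithm could be substituted.

```python
import numpy as np

def assign_detections(predicted, detections, max_cost=50.0):
    """Greedily pair predicted track positions with the nearest detections.

    predicted, detections: sequences of (x, y) centroids.
    Returns (matches, unmatched_tracks, unmatched_detections); matches is a
    list of (track_index, detection_index) pairs with cost below max_cost.
    """
    predicted = np.asarray(predicted, float)
    detections = np.asarray(detections, float)
    if len(predicted) == 0 or len(detections) == 0:
        return [], list(range(len(predicted))), list(range(len(detections)))
    # Pairwise Euclidean cost matrix: cost[i, j] = ||predicted[i] - detections[j]||
    cost = np.linalg.norm(predicted[:, None, :] - detections[None, :, :], axis=2)
    matches, used_t, used_d = [], set(), set()
    # Visit candidate pairs from cheapest to most expensive.
    for i, j in sorted(np.ndindex(cost.shape), key=lambda ij: cost[ij]):
        if i in used_t or j in used_d or cost[i, j] > max_cost:
            continue
        matches.append((i, j))
        used_t.add(i)
        used_d.add(j)
    unmatched_t = [i for i in range(len(predicted)) if i not in used_t]
    unmatched_d = [j for j in range(len(detections)) if j not in used_d]
    return matches, unmatched_t, unmatched_d
```

Unmatched tracks and unmatched detections are exactly the two cases the track-update step must handle (possible deletion and new-track creation, respectively).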
Said track update is specifically:
First, relevant statistics are updated for all tracks that were assigned a new detection, and for unassigned tracks;
Tracks with an insufficient ratio of visible frames, and tracks that have gone unassigned for a long time, are deleted; the relevant thresholds are set in advance;
For detections not yet assigned to any track, a new object is deemed to have entered the camera view, and a new track is created and assigned to it.
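The track bookkeeping above (prune rarely seen or long-unmatched tracks, open a track per unassigned detection) can be sketched as follows. The dictionary fields and the threshold values are illustrative assumptions; the patent only states that the thresholds are preset.

```python
def update_tracks(tracks, unmatched_detections, next_id,
                  min_visible_ratio=0.6, max_invisible=20, min_age=8):
    """Prune stale tracks and open new ones.

    Each track is a dict with 'age' (frames since creation), 'visible'
    (frames with an assigned detection) and 'invisible_streak' (consecutive
    frames without one). Threshold values here are illustrative presets.
    """
    kept = []
    for t in tracks:
        ratio = t['visible'] / max(t['age'], 1)
        # Delete: a track seen in too few of its frames, or unmatched too long.
        if (t['age'] >= min_age and ratio < min_visible_ratio) or \
           t['invisible_streak'] > max_invisible:
            continue
        kept.append(t)
    # A detection with no track means a new object entered the camera view.
    for det in unmatched_detections:
        kept.append({'id': next_id, 'centroid': det,
                     'age': 1, 'visible': 1, 'invisible_streak': 0})
        next_id += 1
    return kept, next_id
```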
Compared with the prior art, the present invention proposes a shadow-removal technique based on a double HSV-space conversion, and an algorithm that obtains the complete vehicle body by region growing. The algorithm effectively reduces the error of the common shadow-removal technique based on fixed-threshold segmentation in HSV space, and is superior to obtaining the vehicle body by morphological operations with a fixed structuring element.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention.
Fig. 2 is a schematic diagram of the shadows of two nearby targets.
Fig. 3 is a schematic diagram of Sobel-operator convolution.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment. This embodiment is implemented on the premise of the technical solution of the present invention, and gives a detailed implementation and a specific operating process, but the protection scope of the present invention is not limited to the following embodiment.
Addressing the above shortcomings of existing methods, the present invention proposes a shadow-removal technique based on a double HSV-space conversion, and an algorithm that obtains the complete vehicle body by region growing. This algorithm effectively reduces the error of the common fixed-threshold HSV-space shadow-removal technique and is superior to obtaining the vehicle body via morphological operations with a fixed structuring element.
Because a traffic camera usually looks down at the ground, and considering poorly lit conditions such as overcast daytime driving and night driving, brightness correction is first applied to each image frame: using the HSV luminance component, the image is first converted to HSV space and the following operation is applied to the V component (scaled to [0, 1]):

V' = 1 − (V − 1)²

The image is then converted back to RGB space, completing the luminance equalization.
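As a minimal sketch, the V-channel operation (reconstructed from the formula in claim 3 as V' = 1 − (V − 1)², with V scaled to [0, 1]; for V in [0, 1] this equals 1 − (1 − V)², which lifts dark pixels more than bright ones) might look like this. The function name is an assumption.

```python
import numpy as np

def equalize_luminance(v):
    """Apply V' = 1 - (1 - V)^2 to the HSV V channel, V scaled to [0, 1].

    Dark pixels (small V) are brightened more than bright ones,
    compensating for low-light frames; V = 1 is left unchanged.
    """
    v = np.clip(np.asarray(v, float), 0.0, 1.0)
    return 1.0 - (1.0 - v) ** 2
```

In a full pipeline this would run between an RGB-to-HSV conversion and the conversion back to RGB.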
Because an image exhibits different spectral characteristics in different color spaces, each space suits different applications. In HSV color space, shadow and non-shadow regions differ markedly in the hue and saturation channels. Exploiting this property of shadows, a double HSV color-space conversion effectively distinguishes shadow from non-shadow regions in the image. Furthermore, since images differ in size, gray-level distribution, brightness, and texture, the OTSU algorithm is used here for automatic threshold selection, reducing the calibration burden that a fixed threshold imposes on the user. In the OTSU algorithm, let T be the threshold separating foreground and background in the image; the foreground occupies a proportion w0 of the image with mean gray level u0, and the background a proportion w1 with mean gray level u1. The overall mean gray level of the image is u = w0·u0 + w1·u1. Traversing t from the minimum to the maximum gray level, the t that maximizes g = w0·(u0 − u)² + w1·(u1 − u)² is the optimal threshold. This expression is exactly the between-class variance: since variance measures the uniformity of the gray-level distribution, a larger variance means the two parts of the image differ more; misclassifying part of the target as background, or part of the background as target, shrinks the difference between the two parts, so maximizing the between-class variance minimizes the probability of misclassification.
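The OTSU criterion described above, g = w0·(u0 − u)² + w1·(u1 − u)², can be implemented directly from the gray-level histogram. A straightforward sketch for 8-bit images:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold t maximizing the between-class variance
    g = w0*(u0-u)^2 + w1*(u1-u)^2 over an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                       # gray-level probabilities
    levels = np.arange(256)
    u = (p * levels).sum()                      # overall mean gray level
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        w0 = p[:t].sum()                        # foreground proportion
        w1 = 1.0 - w0                           # background proportion
        if w0 == 0.0 or w1 == 0.0:
            continue
        u0 = (p[:t] * levels[:t]).sum() / w0    # foreground mean
        u1 = (p[t:] * levels[t:]).sum() / w1    # background mean
        g = w0 * (u0 - u) ** 2 + w1 * (u1 - u) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t
```

A production version would vectorize the loop with cumulative sums, but the loop form mirrors the traversal described in the text.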
For roads without excessive occlusion, exploiting the roughly vertical and horizontal structure of the road, the Sobel operator is used here to extract boundaries in the image frame. As shown in Fig. 3, the operator comprises two 3×3 matrices, one horizontal and one vertical; convolving each with the image yields approximations of the horizontal and vertical brightness differences, respectively. Morphological opening and closing then remove the speckle along tree and building boundaries. For a camera primarily filming the road ahead, the road area can be normalized, reducing the influence of roadside pedestrians and foliage on foreground extraction. From the classified results, the class or classes of motion vectors representing irregular motion are selected, and the corresponding features are extracted.
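The two 3×3 Sobel matrices and their application can be sketched as follows. This is a plain numpy illustration (zero padding, explicit loops over the kernel); the gradient magnitude is unaffected by the correlation-versus-convolution distinction because flipping a Sobel kernel only negates it.

```python
import numpy as np

# Standard 3x3 Sobel kernels: horizontal (GX) and vertical (GY) gradients.
GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
GY = GX.T

def sobel_magnitude(img):
    """Apply both Sobel kernels (zero padding) and return the gradient
    magnitude, approximating the brightness difference along each axis."""
    img = np.asarray(img, float)
    pad = np.pad(img, 1)
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            win = pad[i:i + h, j:j + w]   # shifted view of the padded image
            gx += GX[i, j] * win
            gy += GY[i, j] * win
    return np.hypot(gx, gy)
```

Thresholding the magnitude and applying morphological opening and closing, as the text describes, would then yield clean boundary candidates.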
After the image foreground is extracted by Gaussian-mixture-model modeling, the centroid of each foreground blob is chosen as a seed point and the complete vehicle body is obtained by region growing. Region growing is an iterative process in which each seed pixel grows step by step: initially a seed point is chosen and placed in a queue; pixels are then repeatedly taken from the queue and their (up to eight) neighbouring pixels examined; a neighbour whose difference from the current point exceeds the threshold T is labelled as a different class from it, while the rest join the queue. This process repeats until every pixel of the image has been processed.
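The queue-based growth above can be sketched as follows. As a simplifying assumption, this version compares each neighbour against the seed value rather than the current centre point, and returns a mask of the grown region; the function name and threshold semantics are illustrative.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, thresh):
    """Grow a region from `seed` over the 8-connected neighbourhood,
    accepting pixels whose value differs from the seed pixel by at most
    `thresh`. Returns a boolean mask of the grown region."""
    img = np.asarray(img, float)
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    seed_val = img[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):           # examine up to 8 neighbours
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                        and abs(img[ny, nx] - seed_val) <= thresh):
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask
```

Seeded at a foreground centroid, the returned mask would stand in for the complete vehicle body described in the text.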
The invention is application software for multi-object tracking in traffic-flow analysis; its basic algorithm is the Kalman filtering algorithm. The software algorithm has two main purposes:
detecting the moving objects in each frame of the video;
associating the detected targets with the tracks of the corresponding objects.
Detection of moving targets uses background subtraction based on a Gaussian mixture model; morphological processing is then applied to the foreground mask to filter out noise; finally the resulting connected blobs are analyzed, each of which is very likely a detected target.
Detected targets are associated with existing tracks purely on the basis of motion. The motion of each target is obtained by the Kalman filtering algorithm: the Kalman filter predicts the position of each already-tracked object in the current frame, and then estimates whether a newly obtained detection belongs to that object; if so, the detection is added to that object's track.
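A per-track Kalman filter of the kind described can be sketched with a constant-velocity model. The state layout, noise magnitudes, and class name are assumptions for illustration; the patent does not specify its filter parameters.

```python
import numpy as np

class KalmanCV:
    """Constant-velocity Kalman filter for one tracked centroid.

    State x = [px, py, vx, vy]; only the position is measured.
    Noise magnitudes are illustrative, not values from the patent.
    """
    def __init__(self, pos, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([pos[0], pos[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0                  # large initial uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt           # position += velocity * dt
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0          # measure position only
        self.Q = np.eye(4) * q                     # process noise
        self.R = np.eye(2) * r                     # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                          # predicted position

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

Each frame, `predict()` supplies the position used for association, and `update()` folds in the matched detection, as the surrounding text describes.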
The algorithm used by this software processes and analyzes the video frame by frame; the processing of each frame divides into the following concrete steps.
Foreground extraction and target detection
Shade, as regions of non-interest, can followed by moving object and move and be considered as prospect, and the depositing of shade Increase between vehicle in meeting, unnecessary connectedness between pedestrian and cause error, thus the removal of shade in actual applications Can not ignore.With processing procedure in, we remove shade using the conversion of HSV space twice and OTSU algorithm, are calculated using Sobel Son segmentation road boundary, and complete vehicle body is obtained using algorithm of region growing.
A foreground extractor based on Gaussian-mixture-model background subtraction then extracts the binary image after shadow removal.
The single-Gaussian background model assumes that, in a background image, the brightness of a given pixel follows a Gaussian distribution; that is, for the background image B, the brightness at point (x, y) satisfies:

I_B(x, y) ~ N(u, d)

Each pixel of the background model is therefore described by two parameters: the mean u and the variance d.
For a given image G, if the brightness at (x, y) deviates from the background mean by no more than a threshold, then (x, y) is deemed a background point; otherwise it is a foreground point.
Meanwhile, the background image also changes slowly over time, so the parameters of each pixel must be updated continually:

u(t+1, x, y) = a × u(t, x, y) + (1 − a) × I(x, y)

Here a is the update parameter, representing the rate of background change; normally, d is not updated.
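The running-average update u(t+1) = a·u(t) + (1 − a)·I and the background/foreground test can be sketched per pixel as follows. The classification rule here (deviation beyond k standard deviations) is an illustrative assumption, since the text leaves the exact test unspecified.

```python
import numpy as np

def update_background(u, frame, a=0.95):
    """Running-average background update u(t+1) = a*u(t) + (1-a)*I(t),
    applied per pixel; a close to 1 makes the background adapt slowly."""
    return a * u + (1.0 - a) * frame

def foreground_mask(u, d, frame, k=2.5):
    """Classify pixels as foreground when they deviate from the Gaussian
    background mean u by more than k standard deviations (k is an
    illustrative choice; the text leaves the threshold unspecified)."""
    return np.abs(frame - u) > k * np.sqrt(d)
```

The variance map d is held fixed, matching the text's note that d is normally not updated.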
The second step uses combined morphological operations to fill connected regions (which will serve as detections) and to eliminate small speckle noise.
When examining the software's behaviour in practice, it was found that when two detected objects are close together, their shadows form one large connected component, which the algorithm would treat as a single detected object.
Therefore, beyond simply filling connected components and eliminating noise, a comparatively novel morphological treatment is added: a large connected shadow formed by two nearby targets is processed as two separate targets, as shown in Fig. 2.
Implementation results
Following the above steps, surveillance videos of two traffic intersections were obtained and the algorithm in this software was used to track the targets in the videos. The tracking rate exceeds 85%, while the false-detection rate is comparatively high, at about 30%.
Because no suitable training set was available, and owing to the authors' own limitations, this software does not train its models by machine learning; all parameters rely on characteristics of the video. In future work, machine learning will be adopted to train the models and achieve a better tracking effect.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any person familiar with the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the invention, and such modifications or substitutions shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the protection scope of the claims.

Claims (8)

1. A traffic video flow tracking and analysis method for a fixed camera, characterized in that it comprises the following steps:
1) selecting an input video;
2) obtaining image frames;
3) performing foreground extraction and preprocessing, then region growing and segmentation of adhered objects, to obtain detection targets;
4) assigning detections to tracks;
5) updating the tracks.
2. The traffic video flow tracking and analysis method for a fixed camera according to claim 1, characterized in that said step 3) is specifically:
first removing shadows using a double HSV-space conversion and the OTSU algorithm, segmenting road boundaries using the Sobel operator, and obtaining the complete vehicle body using a region-growing algorithm;
then extracting the binary image after shadow removal using a foreground extractor based on a Gaussian-mixture-model background subtraction algorithm;
finally filling connected regions and eliminating small speckle noise using morphological operations.
3. The traffic video flow tracking and analysis method for a fixed camera according to claim 2, characterized in that said HSV-space conversion is specifically:
using the HSV luminance component, the image is first converted to HSV space and the following operation is applied to the V component (scaled to [0, 1]):

V' = 1 − (V − 1)²

after which the image is converted back to RGB space, completing the luminance equalization.
4. The traffic video flow tracking and analysis method for a fixed camera according to claim 2, characterized in that said Sobel-operator road-boundary segmentation is specifically:
the operator comprises two 3×3 matrices, one horizontal and one vertical; convolving each with the image yields approximations of the horizontal and vertical brightness differences, respectively; morphological opening and closing then remove the speckle along tree and building boundaries.
5. The traffic video flow tracking and analysis method for a fixed camera according to claim 1, characterized in that said assignment of detections to tracks comprises prediction for the tracks, and association of detections with tracks.
6. The traffic video flow tracking and analysis method for a fixed camera according to claim 5, characterized in that said prediction for the tracks is specifically: a Kalman filter is used to predict the position of each tracked object in the current frame, and these predicted tracks are updated.
7. The traffic video flow tracking and analysis method for a fixed camera according to claim 5, characterized in that said association of detections with tracks is specifically: the targets detected in each frame are associated with the existing tracks at minimum motion cost, and the tracking result is updated; the Kalman filter continues to be used to assign detections to the individual tracks.
8. The traffic video flow tracking and analysis method for a fixed camera according to claim 1, characterized in that said track update is specifically:
first, relevant statistics are updated for all tracks that were assigned a new detection, and for unassigned tracks;
tracks with an insufficient ratio of visible frames, and tracks that have gone unassigned for a long time, are deleted, the relevant thresholds being set in advance;
for detections not yet assigned to any track, a new object is deemed to have entered the camera view, and a new track is created and assigned to it.
CN201610782749.5A 2016-08-30 2016-08-30 Method for tracking and analyzing traffic video flow of fixed camera Pending CN106446790A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610782749.5A CN106446790A (en) 2016-08-30 2016-08-30 Method for tracking and analyzing traffic video flow of fixed camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610782749.5A CN106446790A (en) 2016-08-30 2016-08-30 Method for tracking and analyzing traffic video flow of fixed camera

Publications (1)

Publication Number Publication Date
CN106446790A true CN106446790A (en) 2017-02-22

Family

ID=58091092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610782749.5A Pending CN106446790A (en) 2016-08-30 2016-08-30 Method for tracking and analyzing traffic video flow of fixed camera

Country Status (1)

Country Link
CN (1) CN106446790A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538500A (en) * 2021-09-10 2021-10-22 科大讯飞(苏州)科技有限公司 Image segmentation method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102385803A (en) * 2011-10-28 2012-03-21 南京邮电大学 All-weather urban vehicle tracking and counting method based on video monitoring
CN102426785A (en) * 2011-11-18 2012-04-25 东南大学 Traffic flow information perception method based on contour and local characteristic point and system thereof
CN102509101A (en) * 2011-11-30 2012-06-20 昆山市工业技术研究院有限责任公司 Background updating method and vehicle target extracting method in traffic video monitoring
CN102592454A (en) * 2012-02-29 2012-07-18 北京航空航天大学 Intersection vehicle movement parameter measuring method based on detection of vehicle side face and road intersection line
CN102810250A (en) * 2012-07-31 2012-12-05 长安大学 Video based multi-vehicle traffic information detection method
CN102842039A (en) * 2012-07-11 2012-12-26 河海大学 Road image detection method based on Sobel operator
CN103413046A (en) * 2013-08-14 2013-11-27 深圳市智美达科技有限公司 Statistical method of traffic flow
CN103455820A (en) * 2013-07-09 2013-12-18 河海大学 Method and system for detecting and tracking vehicle based on machine vision technology

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102385803A (en) * 2011-10-28 2012-03-21 南京邮电大学 All-weather urban vehicle tracking and counting method based on video monitoring
CN102426785A (en) * 2011-11-18 2012-04-25 东南大学 Traffic flow information perception method based on contour and local characteristic point and system thereof
CN102509101A (en) * 2011-11-30 2012-06-20 昆山市工业技术研究院有限责任公司 Background updating method and vehicle target extracting method in traffic video monitoring
CN102592454A (en) * 2012-02-29 2012-07-18 北京航空航天大学 Intersection vehicle movement parameter measuring method based on detection of vehicle side face and road intersection line
CN102842039A (en) * 2012-07-11 2012-12-26 河海大学 Road image detection method based on Sobel operator
CN102810250A (en) * 2012-07-31 2012-12-05 长安大学 Video based multi-vehicle traffic information detection method
CN103455820A (en) * 2013-07-09 2013-12-18 河海大学 Method and system for detecting and tracking vehicle based on machine vision technology
CN103413046A (en) * 2013-08-14 2013-11-27 深圳市智美达科技有限公司 Statistical method of traffic flow

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JOEKK: "MATLAB example program — Motion-Based Multiple Object Tracking — annotated Kalman multi-object tracking program", HTTPS://WWW.CNBLOGS.COM/YANGQIAOBLOG/P/5462453.HTML *
张先鹏 et al.: "Shadow detection in high-resolution remote-sensing images combining multiple features", Acta Automatica Sinica (《自动化学报》) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538500A (en) * 2021-09-10 2021-10-22 科大讯飞(苏州)科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN113538500B (en) * 2021-09-10 2022-03-15 科大讯飞(苏州)科技有限公司 Image segmentation method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
WO2021208275A1 (en) Traffic video background modelling method and system
CN108596129B (en) Vehicle line-crossing detection method based on intelligent video analysis technology
CN100544446C (en) The real time movement detection method that is used for video monitoring
CN101872546B (en) Video-based method for rapidly detecting transit vehicles
CN105427626B (en) A kind of statistical method of traffic flow based on video analysis
CN104992447B (en) A kind of image automatic testing method of sewage motion microorganism
CN102280030B (en) Method and system for detecting vehicle at night
US8019157B2 (en) Method of vehicle segmentation and counting for nighttime video frames
CN104574440A (en) Video movement target tracking method and device
CN104952256A (en) Video information based method for detecting vehicles at intersection
CN108022249B (en) Automatic extraction method for target region of interest of remote sensing video satellite moving vehicle
CN112036254B (en) Moving vehicle foreground detection method based on video image
CN104899881B (en) Moving vehicle shadow detection method in a kind of video image
CN103136537A (en) Vehicle type identification method based on support vector machine
CN101739827A (en) Vehicle detecting and tracking method and device
CN109919053A (en) A kind of deep learning vehicle parking detection method based on monitor video
CN112364865B (en) Method for detecting small moving target in complex scene
CN105740835A (en) Preceding vehicle detection method based on vehicle-mounted camera under night-vision environment
CN105321189A (en) Complex environment target tracking method based on continuous adaptive mean shift multi-feature fusion
CN112560717B (en) Lane line detection method based on deep learning
CN113763427B (en) Multi-target tracking method based on coarse-to-fine shielding processing
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing
CN103500451B (en) A kind of independent floating ice extracting method for satellite data
CN113077494A (en) Road surface obstacle intelligent recognition equipment based on vehicle orbit
CN106934819A (en) A kind of method of moving object segmentation precision in raising image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170222

RJ01 Rejection of invention patent application after publication