CN106910200B - A moving-object segmentation method based on phase information - Google Patents
Publication: CN106910200B (application CN201510980610.7A)
Authority: CN (China)
Prior art keywords: image block; phase; image; difference; moving object
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20116—Active contour; Active surface; Snakes
Abstract
The present invention provides a moving-object segmentation method based on phase information, comprising the following steps. Step 1: divide the moving-target image frame into image blocks of size 8 × 8. Step 2: examine each image block with a phase-information method to detect whether it contains a moving boundary, and find all blocks that do. Step 3: connect the centres of the blocks that contain moving boundaries to form the initial contour curve of an active contour model. Step 4: iterate the initial contour curve to convergence to find the exact boundary of the moving target.
Description
Technical field
This invention patent relates to a moving-object segmentation method based on phase information, which has important application value in both defence and civilian fields.
Background art
Image segmentation is a key technology in image processing. It has received sustained attention since the 1970s, and thousands of segmentation algorithms have been proposed. Because there is no general theory of segmentation, however, most of these algorithms target specific problems, and no single algorithm suits all images. Moreover, no accepted standard exists for choosing an applicable segmentation algorithm, which creates many practical difficulties for applying image-segmentation technology. In recent years many new ideas, new methods, and improved algorithms have appeared alongside the classical approaches. Segmentation methods can be divided into threshold-based methods, edge-detection methods, region-extraction methods, and methods that combine specific theoretical tools. In early research, segmentation methods fell mainly into two classes. Boundary methods assume that the sub-regions of a segmentation result have corresponding edges in the original image; region methods assume that each sub-region of a segmentation result has homogeneous properties that the pixels of other regions do not share. Both classes have advantages and disadvantages, and some researchers have tried to combine the two for image segmentation. With the growth of computing power, many new methods continue to emerge, such as segmentation based on chrominance components and texture-image segmentation. The mathematical and experimental tools involved also keep expanding, from time-domain to frequency-domain processing; recently the wavelet transform has also been applied to image segmentation.
Moving-object segmentation technology has developed on the basis of image segmentation: it combines still-image segmentation techniques with the characteristics of video to segment the moving targets in a video. Research on moving-object segmentation algorithms has long been a hot topic in the image-processing community, and it is also a vital task in practical video-processing systems and a key precondition for reliable object tracking and recognition; it affects the completion of tasks across computer vision and image processing. Moving-object segmentation has promoted the development of related applications in image processing, pattern recognition, artificial intelligence, and computer vision, and it finds wide and important application in video surveillance, video compression, robot vision, military guidance, medical diagnosis, and related fields.
Target segmentation technology underlies many other video-image-processing techniques. Conventional segmentation algorithms usually exploit the spatial consistency and temporal consistency of an object to segment it; that is, they assume the object is spatially and temporally consistent. In natural scenes, however, many objects consist of different colours and textures, and a moving object may move inconsistently between consecutive frames, so traditional methods have difficulty separating the moving object accurately from the video. The active contour model is widely used in computer video and image processing, but its biggest problem, which limits its application, is the setting of the initial contour. Some improved algorithms can solve this problem, but they increase the computational cost.
This patent introduces a method that detects regions containing moving boundaries based on phase information, uses these approximate boundary regions to obtain the initial contour of an active contour model, and then iterates the contour curve to convergence. Some target-segmentation algorithms based on phase information already exist, for example: target segmentation and tracking based on phase-congruency detection (Infrared and Laser Engineering, 2007, vol. 36, no. 6, pp. 984-987); a target-segmentation algorithm based on phase information (Electronics Optics & Control, 2014, vol. 21, no. 3, pp. 15-17); and texture-image segmentation with a contourlet-domain dual-tree framework and phase properties (Acta Photonica Sinica, 2010, vol. 39, no. 8, pp. 1400-1404). The phase information used in those methods differs from the phase information used in this patent, and those methods apply only to the segmentation of static-image targets.
Summary of the invention
The purpose of this invention patent is to provide a moving-object segmentation method based on phase information, comprising the following steps:
Step 1: divide the moving-target image frame into image blocks of size 8 × 8.
Step 2: examine each image block using a phase-information method, to detect whether the block contains a moving boundary, and find all image blocks that contain moving boundaries.
Step 3: connect the centres of the image blocks that contain moving boundaries, forming the initial contour curve of an active contour model.
Step 4: starting from the initial contour curve, iterate it to convergence to find the exact boundary of the moving-target image.
In the above moving-object segmentation method, step 2 further includes distinguishing the image blocks that contain a moving-object boundary from those that follow a single translational motion.
In the above method, the phase-information detection of step 2 registers the images by changing the phase of the moving-target image in the frequency domain. It further includes the following steps:
Step 21: compute the phase-matched difference between the matched image block in the current frame and the corresponding image block in the reference frame.
Step 22: use the magnitude of the phase-matched difference to identify whether an image block contains a moving object, thereby distinguishing single-motion image blocks from image blocks containing a moving-object boundary.
In the above method, computing the phase-matched difference further includes:
Step 211: create an intermediate image block.
Step 212: compute the phase of the intermediate image block, and then obtain the phase-matched difference between the current-frame image block and the intermediate image block.
In the above method, step 22 further includes:
Step 221: using the phase-matched difference, compute its energy distribution, i.e. the fraction of the total phase-matched-difference energy contained in its low-frequency part.
In the above method, the intermediate image block satisfies two conditions: its phase is identical to that of the corresponding current-frame block, and its magnitude is identical to that of the corresponding reference-frame block.
In the above method, the phase-matched difference $E_{pm}$ is expressed as

$$E_{pm} = i_k - i_{kX}$$

where $i_k$ is the current image block and $i_{kX}$ is the intermediate image block.

In the above method, the fraction of the total phase-matched-difference energy contained in the low-frequency part is denoted $R_L(E_{pm})$:

$$R_L(E_{pm}) = \frac{\lVert E_{pm}^{L}\rVert^2}{\lVert E_{pm}\rVert^2}$$

where $E_{pm}^{L}$ is the low-pass-filtered part of the phase-matched difference. With $E_{pm}$ an $N \times N$ matrix, the denominator may be calculated as

$$\lVert E_{pm}\rVert^2 = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} E_{pm}(x,y)^2.$$

The numerator can be obtained through the discrete cosine transform. Let $C_{pm}$ denote the matrix of the phase-matched difference after the DCT, with $(u,v)$ indexing the DCT coefficients; the small-index coefficients correspond to the low-frequency energy, so

$$\lVert E_{pm}^{L}\rVert^2 = \sum_{u+v<N_0} C_{pm}(u,v)^2$$

where $N_0 \le N$ determines which coefficients are counted as low-frequency energy. When $N_0 = 3N/4$, the difference between $R_L(E_{pm})$ for blocks containing a moving boundary and $R_L(E_{pm})$ for blocks following a single motion is largest.
Compared with the prior art, the present invention has the following advantage: the proposed moving-object segmentation method uses phase information to detect the regions containing moving boundaries, uses these approximate boundary regions to find the initial contour of an active contour model, and then iterates the contour curve to convergence.
Description of the drawings
Fig. 1 is a flow chart of the moving-object segmentation method based on phase information.
Fig. 2 is the response curve of the low-frequency energy ratio $R_L(E_{pm})$ as the inconsistently moving part varies.
Specific embodiment
This invention patent segments moving targets based on phase information. The moving-target image frame is first divided into image blocks of size 8 × 8, and each block is examined with a phase-information method to detect whether it contains a moving-object boundary. This step serves as the first stage of precisely locating the moving target: it separates the blocks that contain a moving-target boundary from the blocks that contain only a single translational motion. After the blocks containing moving boundaries have been found, the precise edge of the moving object is located: the centres of the blocks containing moving boundaries are connected to form the initial contour curve of an active contour model, and the exact boundary of the target is found by iterating the initial contour curve to convergence. Because the initial contour is already very close to the object boundary, only a few iterations are needed to obtain the final boundary.
First, the regions containing moving boundaries are detected with the phase-information method. This is a way of distinguishing, within a video frame, the image blocks that contain a moving-object boundary from the image blocks that follow a consistent translational motion.
Suppose a translational motion occurs between video frame $k$ and video frame $k+1$; the motion can be expressed in the following form:

$$i_k(x,y) = i_{k+1}(x+\Delta x,\, y+\Delta y) \qquad (1)$$

where $i_k$ and $i_{k+1}$ are the corresponding image blocks in the two frames, $(x,y)$ are the pixel coordinates within a block, and $(\Delta x, \Delta y)$ is the motion vector of the translation.

A linear displacement in the spatial domain produces a phase change in the frequency domain:

$$\mathcal{F}\{i(x+\Delta x,\, y+\Delta y)\}(u,v) = I(u,v)\, e^{\,j2\pi(u\Delta x + v\Delta y)/N}. \qquad (2)$$

Taking the Fourier transform of both sides of (1) gives

$$I_k(u,v) = I_{k+1}(u,v)\, e^{\,j2\pi(u\Delta x + v\Delta y)/N}. \qquad (3)$$

From $I_k$ and $I_{k+1}$, the normalized cross-power spectrum provides a correlation measure between them:

$$C_{k,k+1}(u,v) = \frac{I_k(u,v)\, I_{k+1}^{*}(u,v)}{\lvert I_k(u,v)\, I_{k+1}^{*}(u,v)\rvert}. \qquad (4)$$

Substituting (3) into (4) yields

$$C_{k,k+1}(u,v) = e^{\,j2\pi(u\Delta x + v\Delta y)/N}. \qquad (5)$$

Taking the inverse Fourier transform of the normalized cross-power spectrum gives

$$c_{k,k+1}(x,y) = \delta(x-\Delta x,\, y-\Delta y) \qquad (6)$$

where $\delta$ is the unit impulse function. The phase-correlation surface of (6) is a unit impulse at the point $(\Delta x, \Delta y)$ and zero at every other point of the surface. By locating this impulse, the translation vector of the corresponding image blocks can be obtained.
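The derivation above can be sketched in code; the function below is an illustrative implementation of standard phase correlation (the function name and the 16 × 16 test block are mine, not the patent's):

```python
import numpy as np

def phase_correlation(block_a, block_b):
    """Estimate the translation taking block_b to block_a via the
    normalized cross-power spectrum of Eqs. (4)-(6)."""
    fa = np.fft.fft2(block_a)
    fb = np.fft.fft2(block_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12           # keep only the phase term, Eq. (4)
    surface = np.real(np.fft.ifft2(cross))   # ideally a single unit impulse, Eq. (6)
    dy, dx = np.unravel_index(np.argmax(surface), surface.shape)
    n_rows, n_cols = surface.shape
    if dy > n_rows // 2:                     # wrap to signed displacements
        dy -= n_rows
    if dx > n_cols // 2:
        dx -= n_cols
    return dy, dx, surface.max()

# A pure circular shift recovers the exact motion vector with a peak near 1;
# as the text notes, multiple motions inside one block lower the peak.
rng = np.random.default_rng(0)
ref = rng.random((16, 16))
cur = np.roll(ref, (2, 3), axis=(0, 1))      # cur is ref shifted by (2, 3)
dy, dx, peak = phase_correlation(cur, ref)
```

In real 8 × 8 blocks the shift is not circular and noise is present, which is exactly why the patent goes on to test the impulse quality rather than trusting it blindly.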
The amplitude of the impulse is influenced by several factors, such as noise and the aperiodicity of the image. If the surface has a single, clearly dominant impulse, the image block can be taken to contain a single translation. If the block contains multiple motions, the amplitude of this impulse is reduced. In practice, even a block containing only a single translation is affected by noise, which influences the result; and if two motions in a block differ only slightly, the block can also be misjudged.
The uniqueness of the image motion can instead be determined from the difference between the current frame and the reference frame after matching. That is, by assessing the accuracy of the motion estimate, one can identify whether a block contains a moving object, i.e. a moving-object boundary, because if the block contains only a single translational motion, the frame difference after matching is bound to be very small. Under the Fourier transform a translation changes only the phase, not the magnitude, so the images can be registered by changing their phase in the frequency domain: the image blocks are first matched through the phase change, and then the difference between the matched block and the corresponding block in the reference frame is computed (blocks that match each other are simply the corresponding image regions in different frames). The phase-matched difference of two matched blocks is first computed in the frequency domain; its magnitude is the difference of the transformed magnitudes of the corresponding blocks, while their phases are regarded as exactly matched. For each image block in the frame, the motion vector is computed using phase correlation.
Assuming pure translation, the position of the correlation-surface impulse is the approximate motion vector $(\Delta x, \Delta y)$; this motion vector is used to match the image block $i_k$ in the current frame with the image block $i_{k-1}$ in the reference frame.

An intermediate image block $i_{kX}$ is created whose phase is identical to that of the corresponding current-frame block and whose magnitude is identical to that of the corresponding reference-frame block:

$$i_{kX} = \mathcal{F}^{-1}\{\, \lvert F(i_{k-1})\rvert\, e^{\,j\theta_k} \,\} \qquad (7)$$

where $\theta_k = \angle F(i_k)$ is the phase angle of the current-frame block.

The phase-matched difference can then be expressed as

$$E_{pm} = i_k - i_{kX} \qquad (8)$$

where $i_k$ is the current-frame image block, $i_{kX}$ the intermediate block, and $i_{k-1}$ the reference-frame block. Its phase angle is that of the current-frame block:

$$\angle F(E_{pm}) = \angle F(i_k) \qquad (9)$$

and its magnitude is the difference between the current-frame and reference-frame block magnitudes:

$$\lvert F(E_{pm})\rvert = \lvert F(i_k)\rvert - \lvert F(i_{k-1})\rvert. \qquad (10)$$

If the block contains only a translational motion, the result of (10) should be 0; because of noise and the unmatched parts of the block, in general it is not exactly 0.
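A minimal sketch of the intermediate block of Eq. (7) and the phase-matched difference of Eq. (8); the function and variable names are illustrative, not from the patent:

```python
import numpy as np

def phase_matched_difference(cur_block, ref_block):
    """E_pm = i_k - i_kX, where the intermediate block i_kX takes its
    phase from the current block and its magnitude from the reference
    block (Eqs. (7)-(8))."""
    f_cur = np.fft.fft2(cur_block)
    f_ref = np.fft.fft2(ref_block)
    theta_k = np.angle(f_cur)                      # phase angle of the current block
    f_mid = np.abs(f_ref) * np.exp(1j * theta_k)   # |F(i_{k-1})| e^{j theta_k}
    i_kx = np.real(np.fft.ifft2(f_mid))            # f_mid is Hermitian, so this is real
    return cur_block - i_kx

rng = np.random.default_rng(1)
a = rng.random((8, 8))
# A pure (circular) translation leaves the magnitude spectrum unchanged,
# so the phase-matched difference vanishes, as Eq. (10) predicts.
e_same = phase_matched_difference(a, np.roll(a, (1, 2), axis=(0, 1)))
# Genuinely different content leaves a non-zero residue.
e_diff = phase_matched_difference(a, rng.random((8, 8)))
```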
The remaining problem is how to distinguish single-motion regions from regions containing a moving-object boundary. Using the phase-matched difference as the criterion, the method computes the fraction of the total phase-matched-difference energy contained in its low-frequency part, $R_L(E_{pm})$:

$$R_L(E_{pm}) = \frac{\lVert E_{pm}^{L}\rVert^2}{\lVert E_{pm}\rVert^2} \qquad (11)$$

where $E_{pm}^{L}$ is the low-pass-filtered part of the phase-matched difference.

Assuming the phase-matched difference is an $N \times N$ matrix, the denominator of equation (11) may be calculated as

$$\lVert E_{pm}\rVert^2 = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} E_{pm}(x,y)^2. \qquad (12)$$

The numerator can be obtained through the discrete cosine transform. Let $C_{pm}$ denote the matrix of the phase-matched difference after the DCT [10]:

$$C_{pm} = \mathrm{DCT}(E_{pm}) \qquad (13)$$

where $(u,v)$ indexes the DCT coefficients; the small-index coefficients correspond to the low-frequency energy, so

$$\lVert E_{pm}^{L}\rVert^2 = \sum_{u+v<N_0} C_{pm}(u,v)^2 \qquad (14)$$

with $N_0 \le N$ determining which coefficients are counted as low-frequency energy. It can be shown that when $N_0 = 3N/4$, the difference between $R_L(E_{pm})$ for regions containing a moving-object boundary and $R_L(E_{pm})$ for single-motion regions is largest, so the two kinds of region are best distinguished.
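Eqs. (11)-(14) might be computed as below. The triangular mask $u+v < N_0$ is one reading of "which coefficients count as low-frequency", and `scipy.fft.dctn` stands in for the DCT the text cites; both are assumptions of this sketch:

```python
import numpy as np
from scipy.fft import dctn

def low_freq_energy_ratio(e_pm, n0=None):
    """R_L(E_pm): fraction of the phase-matched-difference energy held
    by the low-frequency DCT coefficients (Eqs. (11)-(14))."""
    n = e_pm.shape[0]
    if n0 is None:
        n0 = 3 * n // 4                     # the N0 = 3N/4 the text recommends
    c = dctn(e_pm, norm='ortho')            # orthonormal DCT preserves total energy
    u, v = np.indices(c.shape)
    low = (u + v) < n0                      # low-frequency region (an assumption)
    total = np.sum(c ** 2)                  # equals sum(e_pm**2) by Parseval
    return float(np.sum(c[low] ** 2) / total) if total > 0 else 0.0

# A smooth residue (structured mismatch) concentrates its energy at low
# frequencies; a checkerboard residue does not.
smooth = np.ones((8, 8))
checker = np.indices((8, 8)).sum(axis=0) % 2 * 2.0 - 1.0   # +1/-1 pattern
r_smooth = low_freq_energy_ratio(smooth)
r_checker = low_freq_energy_ratio(checker)
```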
An advantage of the matched-difference method is that no interpolation is needed during detection. If a region is judged to contain only a single motion, other methods can then be used to estimate the motion more precisely. The quality of the detection method is judged by its sensitivity: if an image block contains part of a moving-object boundary, the content whose motion is inconsistent with the dominant motion of the block is regarded as the separated part; the larger the separated part, the easier the moving-object boundary is to find. Even when the separated part is required to be very small, the moving-object boundary should still be found.
To demonstrate the sensitivity of the algorithm, the motion can be detected through a moving observation window while observing how $R_L(E_{pm})$ changes. To obtain the response of two objects moving inconsistently, two consecutive frames are taken from a video of a walking person; the moving person and the background undergo different motions, and within the window both are assumed to be translational. Fig. 2 shows that $R_L(E_{pm})$ is very low in regions undergoing a single translation; as soon as the observation window contains a moving boundary, its value rises rapidly, even when the separated part occupies only a very small fraction of the window. This shows the algorithm has good sensitivity.
After the image blocks containing the moving-object boundary have been found, the precise edge of the moving object is located next.
The active contour model, also known as the Snake model, merges the stages of the segmentation process so that the detected object boundary is a smoothly connected curve. Its main idea is to define an energy function; as the Snake moves from its initial position towards the actual contour, it seeks a local minimum of this energy function, i.e. it approaches the actual contour of the target by dynamically optimizing the energy function. The energy function consists mainly of an internal energy function and an external energy function. The internal energy function accounts for the continuity of the curve itself and the curvature at each point; the external energy function relates to specific properties of the image, such as the gradient of the grey-level variation.
In the Snake model, the contour line is parameterized as $\upsilon(s) = (x(s), y(s))$, where $s$ is the arc length along the contour, and the energy function is defined as

$$E_{snake} = \int \left[ E_{int}(\upsilon(s)) + E_{image}(\upsilon(s)) + E_{con}(\upsilon(s)) \right] ds \qquad (15)$$

where $E_{int}$ is the internal energy of the active contour (the internal force); $E_{image}$ is the energy generated by the image force; and $E_{con}$ is the energy generated by external constraint forces. The latter two together are called the external energy. The internal force smooths the contour and keeps it continuous; the image force expresses how well the contour points fit local image features; the constraint forces are artificially defined constraint conditions. In the Snake model, the internal energy can be expressed as a combination of the first-derivative term $\upsilon_s(s)$ and the second-derivative term $\upsilon_{ss}(s)$ of the contour with respect to arc length:

$$E_{int} = \left( \alpha(s)\,\lvert\upsilon_s(s)\rvert^2 + \beta(s)\,\lvert\upsilon_{ss}(s)\rvert^2 \right)/2 \qquad (16)$$
In this formula, the first-order coefficient $\alpha$ controls the continuity constraint: if $\alpha$ is small, the internal force is insensitive to breaks in the continuity of the contour, and the term takes large values where the contour line has gaps. The second-order coefficient $\beta$ controls the smoothness constraint: if $\beta$ is small, the internal force is insensitive to the smoothness of the contour, and the term takes larger values where the curvature of the contour line grows.

The image force is a linear combination of line, edge, and termination energies:

$$E_{image} = \omega_{line}E_{line} + \omega_{edge}E_{edge} + \omega_{term}E_{term} \qquad (17)$$

Every term in this formula is computed from the image $I(x,y)$; $\omega_{line}$, $\omega_{edge}$ and $\omega_{term}$ are feature coefficients. $E_{line}$ is the line energy, equal to the image brightness, $E_{line} = I(x,y)$; through the sign of $\omega_{line}$, the Snake can be attracted to bright lines or dark lines. $E_{edge}$ is the edge energy, which can be expressed by a very simple function, $E_{edge} = -\lvert\nabla I(x,y)\rvert$; the edge coefficient $\omega_{edge}$ controls the constraint on the intensity gradient near the contour. $E_{term}$ is the curvature of the contour lines in an image smoothed by a Gaussian function; its influence is determined by the coefficient $\omega_{term}$.
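A discrete sketch of the internal energy of Eq. (16) for a closed polygonal contour: first differences approximate $\upsilon_s$ and second differences approximate $\upsilon_{ss}$. Using constant $\alpha$ and $\beta$ in place of the $\alpha(s)$, $\beta(s)$ of the text is a simplification:

```python
import numpy as np

def internal_energy(contour, alpha=1.0, beta=1.0):
    """E_int of Eq. (16) summed over a closed polygonal contour:
    d1 approximates v_s (continuity term), d2 approximates v_ss
    (smoothness/curvature term)."""
    d1 = np.roll(contour, -1, axis=0) - contour
    d2 = np.roll(contour, -1, axis=0) - 2.0 * contour + np.roll(contour, 1, axis=0)
    return 0.5 * float(np.sum(alpha * d1 ** 2 + beta * d2 ** 2))

# A smooth circle costs less internal energy than a jagged star with the
# same number of control points, which is what the alpha and beta terms penalize.
t = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
circle = np.stack([10.0 * np.cos(t), 10.0 * np.sin(t)], axis=1)
radii = np.where(np.arange(32) % 2 == 0, 10.0, 5.0)   # alternating radius -> jagged
star = np.stack([radii * np.cos(t), radii * np.sin(t)], axis=1)
```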
The original algorithm has drawbacks: it requires the external force to be differentiable, it is unstable, its control parameters are hard to determine, and it is computationally expensive in both time and work. Some researchers have improved the algorithm, for example by introducing additional external forces, greatly increasing the running speed. However, some problems remain: the iteration result depends on the choice of initial contour points; control points pile up at high-curvature edges during iteration; the number of control points is fixed; and the contour cannot adapt to changes in target size. Many researchers have refined the model or improved the algorithm with respect to the shortcomings of the original Snake, e.g. improving corner detection and threshold selection, adjusting the control-point spacing according to rules, or using different image-feature energy models, but the methods remain either sensitive to the initial contour points or computationally complex.
This invention patent first divides the image frame into image blocks of size 8 × 8 and then applies the boundary-detection method to every block to see whether it contains a moving-object boundary. This step is the first stage of precisely locating the moving target: it separates the blocks that contain a moving-object boundary from the blocks that contain only a single translational motion. For example, for a pedestrian striding in a video frame, the pedestrian's body does not perform a strictly translational motion, and the grey level of each frame changes somewhat, which affects the detection result; most of the person's arms and legs are judged to contain moving boundaries. After the blocks containing moving boundaries are obtained, to guarantee a correct final result, the centres of the outermost blocks are connected as the initial contour curve of the active contour model, and the exact boundary of the target is finally found with the active contour model. Because the initial contour is very close to the object boundary, only a few iterations are needed to obtain the final boundary.
Claims (6)
1. A moving-object segmentation method based on phase information, characterized by comprising the following steps:
Step 1: dividing the moving-target image frame into image blocks of size 8 × 8;
Step 2: examining each image block using a phase-information method, to detect whether the block contains a moving boundary, and finding all image blocks that contain moving boundaries; step 2 includes distinguishing the image blocks that contain a moving-object boundary from the image blocks that follow a translational motion; the phase-information detection of step 2 registers the images by changing the phase of the moving-target image in the frequency domain, and further includes the following steps:
Step 21: computing the phase-matched difference between the matched image block in the current frame and the corresponding image block in the reference frame;
Step 22: using the magnitude of the phase-matched difference to identify whether an image block contains a moving object, thereby distinguishing single-motion image blocks from image blocks containing a moving-object boundary;
Step 3: connecting the centres of the image blocks containing moving boundaries as the initial contour curve of an active contour model;
Step 4: starting from the initial contour curve, iterating it to convergence to find the exact boundary of the moving-target image.
2. The moving-object segmentation method based on phase information of claim 1, characterized in that computing the phase-matched difference further includes:
Step 211: creating an intermediate image block;
Step 212: computing the phase of the intermediate image block, and then obtaining the phase-matched difference between the current-frame image block and the intermediate image block.
3. The moving-object segmentation method based on phase information of claim 1, characterized in that step 22 further includes:
Step 221: using the phase-matched difference, computing its energy distribution, i.e. the fraction of the total phase-matched-difference energy contained in its low-frequency part.
4. The moving-object segmentation method based on phase information of claim 2, characterized in that the intermediate image block satisfies the conditions that its phase is identical to that of the corresponding current-frame block and its magnitude is identical to that of the corresponding reference-frame block.
5. The moving-object segmentation method based on phase information of claim 2, characterized in that the phase-matched difference $E_{pm}$ is expressed as

$$E_{pm} = i_k - i_{kX}$$

where $i_k$ is the current image block and $i_{kX}$ is the intermediate image block.
6. The moving-object segmentation method based on phase information of claim 3, characterized in that the fraction of the total phase-matched-difference energy contained in the low-frequency part is denoted $R_L(E_{pm})$ and expressed as

$$R_L(E_{pm}) = \frac{\lVert E_{pm}^{L}\rVert^2}{\lVert E_{pm}\rVert^2}$$

where $E_{pm}^{L}$ is the low-pass-filtered part of the phase-matched difference; the phase-matched difference is an $N \times N$ matrix, and the denominator of the above formula is calculated as

$$\lVert E_{pm}\rVert^2 = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} E_{pm}(x,y)^2;$$

the numerator is obtained through the discrete cosine transform, with $C_{pm}$ denoting the matrix of the phase-matched difference after the DCT:

$$C_{pm}(u,v) = \alpha(u)\,\alpha(v)\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} E_{pm}(x,y)\,\cos\frac{(2x+1)u\pi}{2N}\,\cos\frac{(2y+1)v\pi}{2N}$$

where $\alpha(u)$ and $\alpha(v)$ are the DCT normalization coefficients, $u = 0, 1, \ldots, N-1$, $v = 0, 1, \ldots, N-1$; the small-index coefficients correspond to the low-frequency energy, so

$$\lVert E_{pm}^{L}\rVert^2 = \sum_{u+v<N_0} C_{pm}(u,v)^2$$

where $N_0 \le N$ is used to determine which coefficients are counted as low-frequency energy; when $N_0 = 3N/4$, the difference between $R_L(E_{pm})$ for blocks containing a moving boundary and $R_L(E_{pm})$ for single-motion regions is largest.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510980610.7A CN106910200B (en) | 2015-12-23 | 2015-12-23 | A kind of moving Object Segmentation method based on phase information |
Publications (2)

Publication Number | Publication Date
---|---
CN106910200A (en) | 2017-06-30
CN106910200B (en) | 2019-11-08
Family
ID=59200146
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101964113A (en) * | 2010-10-02 | 2011-02-02 | 上海交通大学 | Method for detecting moving target in illuminance abrupt variation scene |
CN102622602A (en) * | 2012-02-28 | 2012-08-01 | 中国农业大学 | Cotton foreign fiber image online dividing method and cotton foreign fiber image online dividing system |
CN102799883A (en) * | 2012-06-29 | 2012-11-28 | 广州中国科学院先进技术研究所 | Method and device for extracting movement target from video image |
CN103226834A (en) * | 2013-03-26 | 2013-07-31 | 长安大学 | Quick search method for target character points of image motion |
CN104125430A (en) * | 2013-04-28 | 2014-10-29 | 华为技术有限公司 | Method and device for detecting video moving objects as well as video monitoring system |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||