CN104156932A - Moving object segmentation method based on optical flow field clustering - Google Patents

Moving object segmentation method based on optical flow field clustering

Info

Publication number
CN104156932A
CN104156932A CN201310174529.0A
Authority
CN
China
Prior art keywords
pixel
edge
sample
motion
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310174529.0A
Other languages
Chinese (zh)
Inventor
张泽旭
王纲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HARBIN DIANSHI SIMULATION TECHNOLOGY Co Ltd
Original Assignee
HARBIN DIANSHI SIMULATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HARBIN DIANSHI SIMULATION TECHNOLOGY Co Ltd filed Critical HARBIN DIANSHI SIMULATION TECHNOLOGY Co Ltd
Priority to CN201310174529.0A priority Critical patent/CN104156932A/en
Publication of CN104156932A publication Critical patent/CN104156932A/en
Pending legal-status Critical Current


Abstract

The invention discloses a moving object segmentation method based on optical flow field clustering. The optical flow fields of an image sequence are clustered to detect and segment one or more moving objects against a complex image background. First, the target region is segmented using the motion epipolar constraint and the C-means clustering algorithm, yielding a segmentation map. Second, a refined target region is extracted from the segmentation map with the Canny edge operator, yielding an edge map. Finally, the segmentation map and the edge map are fused according to the flow velocities in the optical flow field, and the complete moving object or objects are detected. The method can therefore segment and detect moving objects reliably even while the camera is moving.

Description

A moving object segmentation method based on optical flow field clustering
Technical field
The present invention relates to computer graphics and image understanding. When the camera is moving, the background of an image sequence can be very complex, which makes target detection and segmentation challenging. The invention provides a segmentation method for moving targets under such complex background conditions, using optical flow field clustering to achieve reliable detection of single and multiple targets.
Background technology
Moving object detection has long been an important research topic in machine vision, image understanding, and computer graphics. Under camera motion, and especially in complex scenes, a single detection algorithm is rarely sufficient to detect a complete moving target. With multiple moving targets, detection becomes even more difficult.
Prior method [1] (see Thompson W. B., Pong T. C. Detecting moving objects. Int. J. Comp. Vision, 1990, 4:39-57) detects moving targets from the difference between the optical flow direction of a moving target and the background flow direction determined by the motion epipolar constraint; in a relatively complex natural background, however, the epipolar constraint alone rarely yields a complete moving target.
Prior method [2] (see Sasa G., Loncaric S. Spatio-temporal image segmentation using optical flow and clustering algorithm. First Int'l Workshop on Image and Signal Processing and Analysis, Pula, Croatia, 2000, 63-68) segments targets using the motion information in the optical flow field, but applies only to simple backgrounds and a static camera.
Prior method [3] (see Adiv G. Determining three-dimensional motion and structure from optical flow generated by several moving objects. IEEE Trans. PAMI, 1985, PAMI-7(4):384-401) segments the optical flow field of multiple moving targets using the six parameters of an affine transformation, but the computational cost of this segmentation is considerable.
Unlike prior methods [1, 2, 3], the present invention proposes, for the complex image backgrounds that arise under camera motion, a fusion algorithm that combines optical-flow-based segmentation with the Canny edge detection operator. The method comprises three steps: optical flow field segmentation, Canny edge extraction, and fusion of the segmentation map with the edge map, finally achieving complete detection of single and multiple moving objects.
Summary of the invention
The present invention establishes a moving object segmentation method based on optical flow field clustering. The method consists of three steps: first, the motion epipolar constraint and the C-means clustering algorithm are used to segment the target region, yielding a segmentation map; second, the Canny edge operator is applied within the segmentation map to obtain a refined edge map of the target region; third, the segmentation map and the edge map are fused according to the flow velocity values in the optical flow field, detecting the complete moving targets.
The basic principles of the present invention are as follows:
1. A motion epipolar constraint of the optical flow field. Consider a camera moving relative to a fixed scene, with the scene imaged onto the image plane by perspective projection. If the coordinate system is fixed to the camera, the scene can be regarded as moving relative to the camera, and the motion of scene points can be described by flow velocities on the image plane. The velocity at a pixel is a function of the pixel coordinate, of the camera's motion relative to the object surface, and of the distance between the camera and the surface, as described by formula (1):
(1)
(2)
(3)
In these formulas, the flow velocity at an image pixel coordinate is normalized by the focal length; the translational component depends on the third (depth) coordinate of the corresponding scene point, and the remaining term is the rotational component; the parameters are the three-dimensional translational velocity and the angular velocity of the camera.
If the camera undergoes pure translation relative to the scene, with no rotation, the scene produces a distinctive optical flow pattern: the motion of scene points projected onto the image plane appears to radiate along straight lines from a fixed point in the image plane, the focus of expansion (FOE). This flow pattern determined by the FOE is called the motion epipolar constraint. From equation (1), the position of the FOE is
(4)
The position of the focus of expansion therefore depends on the direction of translation, not on the velocity magnitude. Consequently, the optical flow direction at image pixels that are static relative to the scene is determined by the motion epipolar constraint from the direction of camera motion:
(5)
When the translation along the optical axis vanishes, the focus of expansion lies at infinity in image coordinates and the corresponding optical flow field becomes parallel. Pixels whose optical flow direction differs strongly from this constrained direction correspond to moving target regions; in this way, moving targets can be detected.
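The bodies of equations (1)-(5) did not survive extraction. Under the standard perspective motion-field model that the surrounding text describes (focal-length-normalized image coordinates (x, y), depth Z, camera translation T = (T_x, T_y, T_z) and rotation ω = (ω_x, ω_y, ω_z) — notation ours, not necessarily the patent's), they plausibly read:

```latex
\mathbf{v}(x,y) = \mathbf{v}_t(x,y) + \mathbf{v}_r(x,y) \qquad (1)

\mathbf{v}_t = \frac{1}{Z}\begin{pmatrix} xT_z - T_x \\ yT_z - T_y \end{pmatrix} \qquad (2)

\mathbf{v}_r = \begin{pmatrix} xy\,\omega_x - (1+x^2)\,\omega_y + y\,\omega_z \\ (1+y^2)\,\omega_x - xy\,\omega_y - x\,\omega_z \end{pmatrix} \qquad (3)

(x_f,\, y_f) = \left(\frac{T_x}{T_z},\, \frac{T_y}{T_z}\right) \qquad (4)

\tan\theta(x,y) = \frac{y - y_f}{x - x_f} \qquad (5)
```

Equation (5) expresses the epipolar constraint: the flow direction θ at a static pixel points along the line joining that pixel to the FOE (x_f, y_f).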
When the camera also rotates, the situation becomes more complicated. Since the rotational flow component depends only on the rotation parameters, not on the scene structure, it can be predicted from an estimate of the camera rotation and subtracted from the observed flow velocity at each pixel of the image plane, leaving the translational component of the flow. The resulting flow field satisfies the motion epipolar constraint, so the constraint can again determine the moving target region. However, because of the computation error of the optical flow and the estimation error of the camera rotation parameters, the epipolar constraint alone rarely determines the moving target region completely; the optical flow field obtained with the epipolar constraint must additionally be segmented by dynamic clustering.
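The pure-translation detection idea above can be sketched as follows: measure, at each pixel, the angle between the observed flow vector and the direction predicted by the epipolar constraint through the FOE, and flag pixels where the deviation is large. This is a minimal illustration with synthetic data; the function name, FOE location, and deviation threshold are ours, not the patent's.

```python
import numpy as np

def epipolar_deviation(flow_u, flow_v, foe):
    """Angle (radians) between each flow vector and the direction from the
    FOE through that pixel; large deviations suggest independent motion."""
    h, w = flow_u.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    # Direction predicted by the motion epipolar constraint (cf. eq. 5)
    ex, ey = x - foe[0], y - foe[1]
    dot = flow_u * ex + flow_v * ey
    norm = np.hypot(flow_u, flow_v) * np.hypot(ex, ey) + 1e-9
    return np.arccos(np.clip(dot / norm, -1.0, 1.0))

# Toy field: background flow radiates from an FOE at (10, 10);
# one small block moves independently and violates the constraint.
h = w = 32
y, x = np.mgrid[0:h, 0:w].astype(float)
u, v = x - 10.0, y - 10.0                      # radial background flow
u[20:25, 20:25], v[20:25, 20:25] = -5.0, 0.0   # independently moving region
dev = epipolar_deviation(u, v, foe=(10.0, 10.0))
moving = dev > 0.5                              # illustrative threshold
```

Background pixels score near zero deviation, while the injected block is flagged; in practice the flagged mask is what the clustering stage then segments.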
2. Vector field segmentation based on the C-means clustering algorithm
The C-means clustering algorithm is a dynamic clustering algorithm based on the sum-of-squared-errors criterion. The present invention defines the sum-of-squared-errors clustering criterion function as
(6)
(7)
where the samples of the mixed sample set are partitioned into a number of disjoint subsets, each containing its own number of samples, and each cluster center is the mean of the samples in its subset.
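The criterion bodies (6)-(7) were lost in extraction; the standard C-means sum-of-squared-errors criterion that the text describes (c disjoint subsets Γ_i of N_i samples each, with means m_i — notation ours, not necessarily the patent's) is:

```latex
J \;=\; \sum_{i=1}^{c} \;\sum_{\mathbf{x}\in\Gamma_i} \lVert \mathbf{x} - \mathbf{m}_i \rVert^{2} \qquad (6)

\mathbf{m}_i \;=\; \frac{1}{N_i} \sum_{\mathbf{x}\in\Gamma_i} \mathbf{x} \qquad (7)
```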
In the present invention, the samples are the coordinates of optical flow field pixels and the sum-of-squared-errors criterion is the Euclidean distance criterion. Starting from an initial partition, the C-means clustering algorithm iteratively refines the clustering so that the criterion function reaches a minimum, producing a set of clusters. The number of samples in each cluster is then compared: a cluster with too few samples is regarded as a false alarm and eliminated, while multiple remaining clusters indicate multiple moving targets. In our experiments this completed the segmentation of single and multiple moving target regions and produced good segmentation maps.
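A minimal sketch of this clustering step, assuming pixel coordinates as samples and the Euclidean criterion as described: plain C-means (k-means) iteration of assign-to-nearest-center followed by recomputing each center as its cluster mean. The seeding scheme and blob data are ours, for illustration only.

```python
import numpy as np

def c_means(points, centers, iters=50):
    """C-means (k-means) clustering of 2-D pixel coordinates under the
    Euclidean sum-of-squared-errors criterion (cf. eqs. 6-7)."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        # Distance of every sample to every center, then nearest-center labels.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Each center becomes the mean of the samples assigned to it.
        for i in range(len(centers)):
            if np.any(labels == i):
                centers[i] = points[labels == i].mean(axis=0)
    return labels, centers

# Two synthetic blobs of "moving" pixel coordinates standing in for two
# moving targets; one seed center is taken from each blob.
rng = np.random.default_rng(1)
blob_a = rng.normal((10.0, 10.0), 1.0, (40, 2))
blob_b = rng.normal((40.0, 40.0), 1.0, (40, 2))
pts = np.vstack([blob_a, blob_b])
labels, centers = c_means(pts, centers=pts[[0, -1]])
```

With well-separated blobs each cluster recovers one target region; the false-alarm rule in the text would then discard any cluster with too few members.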
3. A method of refining the target region with the Canny edge operator
For a two-dimensional image, the Canny operator holds that the shape of the optimal edge detector at a step edge is similar to the first derivative of a Gaussian function. Exploiting the circular symmetry and separability of the two-dimensional Gaussian, the convolution of the image with the directional derivative of the Gaussian in any direction can be computed. Let the two-dimensional Gaussian function be
(8)
Its first derivative in a given direction is
(9)
where the first factor is a unit direction vector and the second is the gradient vector of the Gaussian. The present invention convolves the image with this directional derivative while varying the direction; the maximum response is attained when
(10)
This direction is evidently orthogonal to the detected edge; along it, the output response is maximal:
(11)
In practical applications, the Gaussian template of formula (8) is truncated to a window of finite size, with corresponding weights.
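The bodies of (8)-(11) were likewise lost; the standard Canny derivation that the text paraphrases (2-D Gaussian G with scale σ, unit direction n, image f — notation ours, not necessarily the patent's) is:

```latex
G(x,y) = \exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right) \qquad (8)

G_{\mathbf{n}} = \frac{\partial G}{\partial \mathbf{n}} = \mathbf{n}\cdot\nabla G \qquad (9)

\mathbf{n} = \frac{\nabla (G * f)}{\lVert \nabla (G * f) \rVert} \qquad (10)

\lvert G_{\mathbf{n}} * f \rvert = \lVert \nabla (G * f) \rVert \qquad (11)
```

Equation (10) says the optimal direction is the gradient direction of the smoothed image, and (11) that the maximal directional response equals the gradient magnitude there.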
After the segmentation algorithm yields the optical flow segmentation, all moving targets are contained in the segmented regions. The present invention extracts edges with the Canny operator only inside these regions, which both greatly limits background interference and effectively speeds up computation. On the basis of the segmentation map, a reliable edge map of the moving target region is thus obtained; its edge pixels, n in number, form an edge pixel set whose samples are pixel coordinates.
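The core of this step — gradient magnitude of a Gaussian-smoothed image, evaluated only inside a segmentation mask — can be sketched with NumPy alone. This is a simplification of Canny (no non-maximum suppression or hysteresis, as a full implementation would have); function name, σ, and threshold are illustrative.

```python
import numpy as np

def gaussian_deriv_edges(img, mask, sigma=1.0, thresh=0.2):
    """Edge pixels inside `mask`, taken where the gradient magnitude of a
    Gaussian-smoothed image exceeds a threshold (cf. eqs. 8-11).
    A sketch: real Canny adds non-maximum suppression and hysteresis."""
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    # Separable Gaussian smoothing: along rows, then along columns.
    sm = np.apply_along_axis(lambda row: np.convolve(row, g, 'same'), 1, img)
    sm = np.apply_along_axis(lambda col: np.convolve(col, g, 'same'), 0, sm)
    gy, gx = np.gradient(sm)
    mag = np.hypot(gx, gy)
    # Restricting to the mask is what limits background interference.
    return (mag > thresh) & mask

# Toy image: bright square on a dark background; the mask plays the role
# of the optical-flow segmentation region around the target.
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
mask = np.zeros((32, 32), bool); mask[4:28, 4:28] = True
edges = gaussian_deriv_edges(img, mask)
```

Edges appear along the square's boundary inside the mask, while strong gradients outside the mask would be suppressed automatically.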
4. A pixel-level fusion algorithm for merging the segmentation map and the edge map. Suppose the moving region in the segmentation map obtained from the optical flow segmentation contains a number of pixels, whose coordinates form a set. Each point in the segmented region has a flow velocity, giving an equal number of velocity vectors. Let
(12)
which forms the mixed sample set of flow velocity magnitudes.
Within the optical flow segmentation map, the Canny edge detection operator yields an edge pixel set. Because a relatively high threshold is chosen, interference is strongly suppressed, but some edge pixels are lost at the same time. On the basis of this edge pixel set, the C-means clustering algorithm divides it into two classes: an object edge class with its sample set and sample count, and a background edge class with its sample set and sample count. Obviously,
(13)
Because the edge pixels contained in the object edge set are not sufficiently complete, we wish to extract additional edge pixels from the background edge set as a supplement. To this end, within the mixed sample set, the subset of flow velocity magnitudes corresponding to the object edges can be determined; this set reflects the magnitude of the flow velocity at object edge pixels, from which a threshold on the edge flow velocity can be obtained. One selectable threshold is
(14)
Another selection method is to take the minimum of the samples in this subset as the threshold,
(15)
where a fine-tuning parameter is included. The samples of the background edge velocity set can accordingly be divided into two classes,
(16)
Obviously, each sample of the first class is the flow velocity magnitude of an edge pixel, with its own sample count, while the other class consists of the flow velocity magnitudes of the remaining background pixels. From the first class it is straightforward to obtain the corresponding pixel coordinate set. How complete these recovered object edges are depends strongly on the accuracy of the optical flow field: the supplemented set contains most strong object edges and also recovers weak object edges well, whereas the original object edge class contains only the stronger object edges. Thus, by fusing the relevant pixels of the two sets, the edge set of the complete target region is obtained; the main operation of the fusion algorithm is
(17)
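The fusion step above can be sketched as follows, under stated assumptions: the threshold is the scaled minimum of the object-edge flow speeds (an eq.-(15)-style choice, with the fine-tuning parameter as a scale factor), and the final edge set is the pixel-level union of the object edges with the recovered fast-moving background edges. All names and the toy data are illustrative, since the patent's symbols were lost in extraction.

```python
import numpy as np

def fuse_edges(obj_edges, bg_edges, speed, alpha=0.9):
    """Supplement the object-edge set with background-edge pixels whose
    flow-speed magnitude exceeds a threshold derived from the object
    edges (cf. eqs. 14-17, sketched). `speed` maps pixel -> |flow|."""
    obj_speeds = np.array([speed[p] for p in obj_edges])
    # Eq.-(15)-style threshold: scaled minimum of the object-edge speeds,
    # with alpha as the fine-tuning parameter.
    t = alpha * obj_speeds.min()
    recovered = {p for p in bg_edges if speed[p] >= t}
    # Eq.-(17)-style operation: pixel-level union of the two edge sets.
    return set(obj_edges) | recovered

# Toy data: two strong object-edge pixels, three candidates from the
# background-edge class, one of which moves fast enough to be recovered.
speed = {(1, 1): 5.0, (1, 2): 4.0, (7, 7): 4.5, (0, 9): 0.3, (9, 0): 0.2}
fused = fuse_edges([(1, 1), (1, 2)], [(7, 7), (0, 9), (9, 0)], speed)
```

The slow background-edge pixels stay excluded, while the fast one is merged into the complete target edge set.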
Technical effects of the present invention:
The present invention differs from prior techniques [1, 2, 3] in that it is essentially a pixel-level fusion algorithm. The invention proposes a target region segmentation method using the motion epipolar constraint and the C-means clustering algorithm to obtain a segmentation map, where the clustering samples are the coordinates of optical flow field pixels, the sum-of-squared-errors criterion is the Euclidean distance criterion, and an iterative algorithm progressively refines the clustering. It further proposes a method of refining the target region with the Canny edge operator; because this refinement runs only inside the already segmented target region, it both greatly limits background interference and effectively speeds up computation. Finally, a fusion algorithm merges the segmentation map and the edge map effectively, yielding a more complete target region.
Brief description of the drawings:
Fig. 1: The 15th frame of the CAR image sequence, size 256 × 256
Fig. 2: The optical flow field computed from the 15th and 16th frames
Fig. 3: The optical flow segmentation map obtained with the C-means clustering algorithm
Fig. 4: The enclosed regions formed by white curves on the original image are the segmented regions
Fig. 5: The edge map computed by the Canny operator within the optical flow segmentation regions
Fig. 6: The final segmentation result after the fusion algorithm
Fig. 7: The 15th frame of the TAXI image sequence
Fig. 8: The optical flow field computed from the 15th and 17th frames
Fig. 9: The optical flow segmentation map obtained with the segmentation algorithm
Fig. 10: The enclosed regions formed by white curves on the original image are the target segmentation regions
Fig. 11: The edge map computed by the Canny operator within the segmented regions
Fig. 12: The final result of the fusion algorithm.
Embodiment:
This embodiment describes the present invention in detail with reference to Figs. 1-12:
1. Optical flow field segmentation, forming the segmentation map. Consider a camera moving relative to a fixed scene, with the scene imaged onto the image plane by perspective projection. If the coordinate system is fixed to the camera, the scene can be regarded as moving relative to the camera, and the motion of scene points can be described by flow velocities on the image plane. The velocity at a pixel is a function of the pixel coordinate, of the camera's motion relative to the object surface, and of the distance between the camera and the surface, as described by formula (1):
(1)
(2)
(3)
In these formulas, the flow velocity at an image pixel coordinate is normalized by the focal length; the translational component depends on the third (depth) coordinate of the corresponding scene point, and the remaining term is the rotational component; the parameters are the three-dimensional translational velocity and the angular velocity of the camera.
The C-means clustering algorithm is applied to the computed optical flow field for dynamic cluster analysis. The present invention defines the sum-of-squared-errors clustering criterion function as
(4)
(5)
where the samples of the mixed sample set are partitioned into a number of disjoint subsets, each containing its own number of samples, and each cluster center is the mean of the samples in its subset.
In implementation, the samples are the coordinates of optical flow field pixels and the sum-of-squared-errors criterion is the Euclidean distance criterion. Starting from an initial partition, the C-means clustering algorithm iteratively refines the clustering so that the criterion function reaches a minimum, producing a set of clusters. The number of samples in each cluster is then compared: a cluster with too few samples is regarded as a false alarm and eliminated, while multiple remaining clusters indicate multiple moving targets. A good segmentation map is finally obtained, and these segmentation maps contain all the moving targets.
2. The target region is refined with the Canny edge operator to obtain the edge map. For a two-dimensional image, the Canny operator holds that the shape of the optimal edge detector at a step edge is similar to the first derivative of a Gaussian function. Exploiting the circular symmetry and separability of the two-dimensional Gaussian, the convolution of the image with the directional derivative of the Gaussian in any direction can be computed. Let the two-dimensional Gaussian function be
(6)
Its first derivative in a given direction is
(7)
where the first factor is a unit direction vector and the second is the gradient vector of the Gaussian. The present invention convolves the image with this directional derivative while varying the direction; the maximum response is attained when
(8)
This direction is evidently orthogonal to the detected edge; along it, the output response is maximal:
(9)
In practical applications, the Gaussian template is truncated to a window of finite size, with corresponding weights. On the basis of the segmentation map, a reliable edge map of the moving target region is obtained; its edge pixels, n in number, form an edge pixel set whose samples are pixel coordinates.
3. The segmentation map and the edge map are fused at the pixel level. Suppose the moving region in the segmentation map obtained from the optical flow segmentation contains a number of pixels, whose coordinates form a set. Each point in the segmented region has a flow velocity, giving an equal number of velocity vectors. Let
(10)
which forms the mixed sample set of flow velocity magnitudes.
Within the optical flow segmentation map, the Canny edge detection operator yields an edge pixel set. Because a relatively high threshold is chosen, interference is strongly suppressed, but some edge pixels are lost at the same time. On the basis of this edge pixel set, the C-means clustering algorithm divides it into two classes: an object edge class with its sample set and sample count, and a background edge class with its sample set and sample count. Obviously,
(11)
Because the edge pixels contained in the object edge set are not sufficiently complete, we wish to extract additional edge pixels from the background edge set as a supplement. To this end, within the mixed sample set, the subset of flow velocity magnitudes corresponding to the object edges can be determined; this set reflects the magnitude of the flow velocity at object edge pixels, from which a threshold on the edge flow velocity can be obtained. One selectable threshold is
(12)
Another selection method is to take the minimum of the samples in this subset as the threshold,
(13)
where a fine-tuning parameter is included. The samples of the background edge velocity set can accordingly be divided into two classes,
(14)
Obviously, each sample of the first class is the flow velocity magnitude of an edge pixel, with its own sample count, while the other class consists of the flow velocity magnitudes of the remaining background pixels. From the first class it is straightforward to obtain the corresponding pixel coordinate set. How complete these recovered object edges are depends strongly on the accuracy of the optical flow field: the supplemented set contains most strong object edges and also recovers weak object edges well, whereas the original object edge class contains only the stronger object edges. Thus, by fusing the relevant pixels of the two sets, the edge set of the complete target region is obtained; the main operation of the fusion algorithm is
(15)
4. The present invention is illustrated by segmentation experiments with a single moving target and with multiple moving targets. Fig. 1 is a 256 × 256 gray-level image (the 15th frame) extracted from the natural CAR image sequence; the background is a mottled playground, a small car moves from the upper-left corner of the image toward the lower right, and the camera also moves slowly to track the target. Fig. 2 shows the optical flow field computed from the 15th and 16th frames; Fig. 3 the optical flow segmentation map obtained with the C-means clustering algorithm; Fig. 4 the optical-flow-based segmented regions drawn on the original image, where the enclosed regions formed by white curves are the segmented regions; Fig. 5 the edge map computed by the Canny operator within the optical flow segmentation regions; and Fig. 6 the final segmentation result after the fusion algorithm, in which the target is detected completely. Fig. 7 is the 15th frame of the TAXI image sequence, containing three moving targets marked in the figure, of which target 3 is partially occluded; Fig. 8 shows the optical flow field computed from the 15th and 17th frames; Fig. 9 the optical flow segmentation map obtained with the segmentation algorithm; Fig. 10 the optical-flow-based segmented regions drawn on the original image, where the enclosed regions formed by white curves are the segmented regions; Fig. 11 the edge map computed by the Canny operator within the segmented regions; and Fig. 12 the final result of the fusion algorithm, in which the three complete moving targets are detected.

Claims (11)

1. A motion epipolar constraint of an optical flow field: consider a camera moving relative to a fixed scene, with the scene imaged onto the image plane by perspective projection; if the coordinate system is fixed to the camera, the scene can be regarded as moving relative to the camera, and the motion of scene points can be described by flow velocities on the image plane; the velocity at a pixel is a function of the pixel coordinate, of the camera's motion relative to the object surface, and of the distance between the camera and the surface, as described by formula (1):
(1)
(2)
(3)
In formula, at image pixel coordinate the flow velocity at place, by focal length normalization, translational component, wherein be the third dimension coordinate of the point in corresponding scene, and for rotational component; with be respectively space three-dimensional point-to-point speed and the angular velocity of rotation of camera.
2. If the camera undergoes pure translation relative to the scene, with no rotation, the scene produces a distinctive optical flow pattern: the motion of scene points projected onto the image plane appears to radiate along straight lines from a fixed point in the image plane, the focus of expansion (FOE); this flow pattern determined by the FOE is called the motion epipolar constraint, and from equation (1) the position of the FOE is
(4)
The position of the focus of expansion therefore depends on the direction of translation, not on the velocity magnitude. Consequently, the optical flow direction at image pixels that are static relative to the scene is determined by the motion epipolar constraint from the direction of camera motion:
(5)
When the translation along the optical axis vanishes, the focus of expansion lies at infinity in image coordinates and the corresponding optical flow field becomes parallel; pixels whose optical flow direction differs strongly from this constrained direction correspond to moving target regions, whereby moving targets can be detected.
3. When the camera also rotates, the situation becomes more complicated. Since the rotational flow component depends only on the rotation parameters, not on the scene structure, it can be predicted from an estimate of the camera rotation and subtracted from the observed flow velocity at each pixel of the image plane, leaving the translational component of the flow. The resulting flow field satisfies the motion epipolar constraint, so the constraint can again determine the moving target region. However, because of the computation error of the optical flow and the estimation error of the camera rotation parameters, the epipolar constraint alone rarely determines the moving target region completely; the optical flow field obtained with the epipolar constraint must additionally be segmented by dynamic clustering.
4. Vector field segmentation based on the C-means clustering algorithm: the C-means clustering algorithm is a dynamic clustering algorithm based on the sum-of-squared-errors criterion, and the present invention defines the sum-of-squared-errors clustering criterion function as
(6)
(7)
where the samples of the mixed sample set are partitioned into a number of disjoint subsets, each containing its own number of samples, and each cluster center is the mean of the samples in its subset.
5. The samples defined by the present invention are the coordinates of optical flow field pixels, and the sum-of-squared-errors criterion is the Euclidean distance criterion; starting from an initial partition, the C-means clustering algorithm iteratively refines the clustering so that the criterion function reaches a minimum, producing a set of clusters; the number of samples in each cluster is then compared, a cluster with too few samples being regarded as a false alarm and eliminated, while multiple remaining clusters indicate multiple moving targets; in our experiments this completed the segmentation of single and multiple moving target regions and produced good segmentation maps.
6. A method of refining the target region with the Canny edge operator: for a two-dimensional image, the Canny operator holds that the shape of the optimal edge detector at a step edge is similar to the first derivative of a Gaussian function; exploiting the circular symmetry and separability of the two-dimensional Gaussian, the convolution of the image with the directional derivative of the Gaussian in any direction can be computed; let the two-dimensional Gaussian function be
(8)
Its first derivative in a given direction is
(9)
where the first factor is a unit direction vector and the second is the gradient vector of the Gaussian.
7. The present invention convolves the image with this directional derivative while varying the direction; the maximum response is attained when
(10)
This direction is evidently orthogonal to the detected edge; along it, the output response is maximal:
(11)
In practical applications, the Gaussian template of formula (8) is truncated to a window of finite size, with corresponding weights; after the segmentation algorithm yields the optical flow segmentation, the segmented regions contain all moving targets, and the present invention extracts edges with the Canny operator only inside these regions, which both greatly limits background interference and effectively speeds up computation; on the basis of the segmentation map, a reliable edge map of the moving target region is obtained, whose edge pixels, n in number, form an edge pixel set whose samples are pixel coordinates.
8. A pixel-level fusion algorithm for merging the segmentation map and the edge map: suppose the moving region in the segmentation map obtained from the optical flow segmentation contains a number of pixels, whose coordinates form a set; each point in the segmented region has a flow velocity, giving an equal number of velocity vectors; let
(12)
which forms the mixed sample set of flow velocity magnitudes.
9. Within the optical flow segmentation map, the Canny edge detection operator yields an edge pixel set; because a relatively high threshold is chosen, interference is strongly suppressed, but some edge pixels are lost at the same time; on the basis of this edge pixel set, the C-means clustering algorithm divides it into two classes, an object edge class with its sample set and sample count and a background edge class with its sample set and sample count; obviously,
(13)
Because the edge pixels contained in the object edge set are not sufficiently complete, we wish to extract additional edge pixels from the background edge set as a supplement; to this end, within the mixed sample set, the subset of flow velocity magnitudes corresponding to the object edges can be determined; this set reflects the magnitude of the flow velocity at object edge pixels, from which a threshold on the edge flow velocity can be obtained; one selectable threshold is
(14)
Another selection method is to take the minimum of the samples in this subset as the threshold,
(15)
where a fine-tuning parameter is included; the samples of the background edge velocity set can accordingly be divided into two classes,
(16)
10. Each sample of the first class is the flow velocity magnitude of an edge pixel, with its own sample count, and the other class consists of the flow velocity magnitudes of the remaining background pixels; from the first class it is straightforward to obtain the corresponding pixel coordinate set; how complete these recovered object edges are depends strongly on the accuracy of the optical flow field, the supplemented set containing most strong object edges and also recovering weak object edges well, whereas the original object edge class contains only the stronger object edges.
11. Thus, by fusing the relevant pixels of the two sets, the edge set of the complete target region is obtained; the main operation of the fusion algorithm is
(17)
The final complete object edges are obtained according to formula (17).
CN201310174529.0A 2013-05-13 2013-05-13 Moving object segmentation method based on optical flow field clustering Pending CN104156932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310174529.0A CN104156932A (en) 2013-05-13 2013-05-13 Moving object segmentation method based on optical flow field clustering

Publications (1)

Publication Number Publication Date
CN104156932A true CN104156932A (en) 2014-11-19

Family

ID=51882423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310174529.0A Pending CN104156932A (en) 2013-05-13 2013-05-13 Moving object segmentation method based on optical flow field clustering

Country Status (1)

Country Link
CN (1) CN104156932A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1582460A (en) * 2001-11-05 2005-02-16 皇家飞利浦电子股份有限公司 A method for computing optical flow under the epipolar constraint
US20120293658A1 (en) * 2004-12-23 2012-11-22 Donnelly Corporation Imaging system for vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG ZEXU et al.: "Moving target detection based on a fusion algorithm of optical flow field segmentation and Canny edge extraction", ACTA ELECTRONICA SINICA *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017020182A1 (en) * 2015-07-31 2017-02-09 SZ DJI Technology Co., Ltd. System and method for constructing optical flow fields
US10904562B2 (en) 2015-07-31 2021-01-26 SZ DJI Technology Co., Ltd. System and method for constructing optical flow fields
US10321153B2 (en) 2015-07-31 2019-06-11 SZ DJI Technology Co., Ltd. System and method for constructing optical flow fields
CN107025658A (en) * 2015-11-13 2017-08-08 Honda Motor Co., Ltd. Method and system for detecting a moving object using a single camera
CN107025658B (en) * 2015-11-13 2022-06-28 Honda Motor Co., Ltd. Method and system for detecting a moving object using a single camera
CN105913002A (en) * 2016-04-07 2016-08-31 Hangzhou Dianzi University Online adaptive abnormal event detection method for video scenes
CN105913002B (en) * 2016-04-07 2019-04-23 Hangzhou Dianzi University Online adaptive abnormal event detection method for video scenes
CN105957060A (en) * 2016-04-22 2016-09-21 Tianjin Normal University Method for dividing TVS events into clusters based on optical flow analysis
CN106204659A (en) * 2016-07-26 2016-12-07 Zhejiang Jieshang Vision Technology Co., Ltd. Elevator door opening/closing detection method based on optical flow
CN106204659B (en) * 2016-07-26 2018-11-02 Zhejiang Jieshang Vision Technology Co., Ltd. Elevator door opening/closing detection method based on optical flow
CN106340032B (en) * 2016-08-27 2019-03-15 Zhejiang Jieshang Vision Technology Co., Ltd. Moving target detection method based on optical flow field clustering
CN106340032A (en) * 2016-08-27 2017-01-18 Zhejiang Jieshang Vision Technology Co., Ltd. Moving target detection method based on optical flow field clustering
CN107507224B (en) * 2017-08-22 2020-04-24 Mingjian (Xiamen) Technology Co., Ltd. Moving object detection method, device, medium and computing device
CN107507224A (en) * 2017-08-22 2017-12-22 Mingjian (Xiamen) Technology Co., Ltd. Moving object detection method, device, medium and computing device
CN112204614A (en) * 2018-05-28 2021-01-08 Ghent University Motion segmentation in videos from non-stationary cameras
CN112204614B (en) * 2018-05-28 2024-01-05 Ghent University Motion segmentation in videos from non-stationary cameras
CN110047093A (en) * 2019-04-23 2019-07-23 Nanchang Hangkong University High-precision edge-preserving RGBD scene flow estimation method
CN110047093B (en) * 2019-04-23 2021-04-27 Nanchang Hangkong University High-precision edge-preserving RGBD scene flow estimation method
CN110147837A (en) * 2019-05-14 2019-08-20 The 28th Research Institute of China Electronics Technology Group Corporation Feature-focus-based dense object detection method, system and device for arbitrary orientations
CN111028263A (en) * 2019-10-29 2020-04-17 Fujian Normal University Moving object segmentation method and system based on optical flow color clustering
CN111028263B (en) * 2019-10-29 2023-05-05 Fujian Normal University Moving object segmentation method and system based on optical flow color clustering
CN114419073A (en) * 2022-03-09 2022-04-29 Honor Device Co., Ltd. Motion blur generation method and device and terminal equipment
CN114419073B (en) * 2022-03-09 2022-08-12 Honor Device Co., Ltd. Motion blur generation method and device and terminal equipment

Similar Documents

Publication Publication Date Title
CN104156932A (en) Moving object segmentation method based on optical flow field clustering
Geiger et al. Are we ready for autonomous driving? the kitti vision benchmark suite
CN109059895B (en) Multi-mode indoor distance measurement and positioning method based on mobile phone camera and sensor
Herbst et al. Toward online 3-d object segmentation and mapping
WO2013029675A1 (en) Method for estimating a camera motion and for determining a three-dimensional model of a real environment
JP6985897B2 (en) Information processing equipment and its control method, program
Wu et al. [poster] a benchmark dataset for 6dof object pose tracking
Usenko et al. Reconstructing street-scenes in real-time from a driving car
CN110021029B (en) Real-time dynamic registration method and storage medium suitable for RGBD-SLAM
CN110516639B (en) Real-time figure three-dimensional position calculation method based on video stream natural scene
CN113888639B (en) Visual odometer positioning method and system based on event camera and depth camera
Dragon et al. Ground plane estimation using a hidden markov model
CN111709982B (en) Three-dimensional reconstruction method for dynamic environment
KR100574227B1 (en) Apparatus and method for separating object motion from camera motion
CN103077536B (en) Space-time mutative scale moving target detecting method
Petrovai et al. Obstacle detection using stereovision for Android-based mobile devices
TW201516965A (en) Method of detecting multiple moving objects
CN114199205B (en) Binocular Ranging Method Based on Improved Quadtree ORB Algorithm
JP2007156897A (en) Speed-measuring apparatus, method, and program
Ratajczak et al. Vehicle size estimation from stereoscopic video
Zhang et al. Kinect-based universal range sensor for laboratory experiments
Kniaz et al. An algorithm for pedestrian detection in multispectral image sequences
CN103295220A (en) Application method of binocular vision technology in recovery physiotherapy system
Mahabalagiri et al. Camera motion detection for mobile smart cameras using segmented edge-based optical flow
Huang et al. Real-Time 6-DOF Monocular Visual SLAM based on ORB-SLAM2

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20141119