CN110473229B - Moving object detection method based on independent motion characteristic clustering - Google Patents
Moving object detection method based on independent motion characteristic clustering
- Publication number
- CN110473229B CN201910773313.3A
- Authority
- CN
- China
- Prior art keywords
- image
- independent
- feature
- motion
- tau
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/207—Analysis of motion for motion estimation over a hierarchy of resolutions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
The invention discloses a moving object detection method based on independent motion feature clustering, which comprises the following steps: S1, performing local feature detection, description and matching on the video stream output by an image sensor to obtain matched feature point pairs at adjacent times, and performing global motion estimation to obtain global motion parameters; S2, performing global motion compensation on the feature point coordinates to obtain independent motion feature point pairs; and S3, clustering the independent motion feature point pairs to obtain independent motion feature clusters, and post-processing the clusters to obtain the final detection result.
Description
Technical Field
The invention relates to the field of image processing, and in particular to a moving object detection method based on independent motion feature clustering.
Background
Computer-vision-based moving target detection has important theoretical research significance and practical application value in fields such as intelligent video surveillance and autonomous navigation.
The difficulty of moving object detection lies mainly in the complexity of background motion and the diversity of targets. On the one hand, the movement of the photodetector and its mounting platform causes image translation, rotation and even nonlinear deformation, which appear in the image as complex motion of the background. On the other hand, the kind, number and moving speed of the targets in the scene, as well as their distance and angle relative to the detector, are arbitrary, and this appears in the image as targets of varied kinds, numbers, scales, shapes and moving speeds.
Traditional moving object detection algorithms fall into three types: the frame difference method, the optical flow method and background subtraction. The frame difference method (frame differencing) performs pixel-wise temporal differencing and thresholding on adjacent frames of a time-series image to extract the motion regions in the image. The method is simple to implement, has a small computational load and good real-time performance, but it cannot overcome well the interference of background noise on the target, and the detection effect depends on the target speed, the inter-frame interval and so on. In addition, for target detection on a moving platform, background motion compensation must be performed before the frame difference: the background motion is estimated with some image transformation model, and a background-motion-compensated image is computed with some interpolation method. This inevitably increases the memory and time overhead of the algorithm and is unsuitable for embedded implementation. The method is therefore suited to application scenarios with a static platform, little environmental interference and high real-time requirements. The optical flow method (optical flow) first estimates the motion field of the image to obtain an optical flow field, and then clusters it according to the distribution characteristics of the image's optical flow vectors to obtain the motion regions. Its advantage is that, whether the camera is static or moving, as long as there is relative motion between the target and the background, the moving target can be extracted from the background even without any scene prior; its disadvantages are high computational complexity, sensitivity to illumination changes and noise, and an inability to cope well with moving target detection against a complex background.
Background subtraction (background subtraction) differences the current frame image against a background model to detect the moving target. The algorithm has good anti-interference performance, can model a complex scene finely, and can detect all pixels belonging to a moving target; however, it has a large computational load, is very sensitive to changes in the monitored environment, requires a relatively stable background, and cannot cope well with camera motion.
Disclosure of Invention
The invention aims to provide a moving object detection method based on independent motion feature clustering, which improves the accuracy of moving target detection on a moving platform.
In order to achieve the purpose, the invention is realized by the following technical scheme:
a moving object detection method based on independent motion characteristic clustering is characterized by comprising the following steps:
S1, carrying out local feature detection, description and matching on the video stream output by the image sensor to obtain adjacent-time matched feature point pairs, and carrying out global motion estimation to obtain global motion parameters;
S2, carrying out global motion compensation on the feature point coordinates to obtain independent motion feature point pairs;
and S3, clustering the independent motion feature point pairs to obtain independent motion feature clusters, and post-processing the independent motion feature clusters to obtain a final detection result.
The global motion compensation for the feature point coordinates in step S2 includes the following steps:
s2.1, calculating the three-axis rotation angle of the coordinate system of the detector body of the adjacent frame according to the three-axis rotation angle rate output by the gyroscope;
and S2.2, establishing a mapping relation between the three-axis rotation of the detector body coordinate system and image transformation, and calculating the image transformation caused by the three-axis rotation of the detector body coordinate system, thereby realizing global motion compensation.
The step S2 of obtaining the independent motion feature point pair includes the following steps:
s2.3, carrying out corner point detection on the image at the previous moment;
s2.4, extracting the matching feature points on the image at the current moment by adopting a pyramid optical flow method to obtain a matching feature point set;
s2.5, carrying out global motion compensation on the feature points of the image at the previous moment to obtain compensation feature points;
and S2.6, obtaining the independent motion characteristic point pairs through threshold processing.
The step S3 includes the steps of:
s3.1, describing the independent motion of the feature points by adopting the position and speed combined feature to obtain an independent motion feature;
S3.2, clustering the independent motion features, using the position feature as the main feature for similarity measurement while adding a velocity amplitude weighting factor and a velocity direction weighting factor to increase the discrimination between independent motion features with similar positions;
and S3.3, performing post-processing on the clustering result to obtain a final detection result.
The step S2.1 includes:
establishing a detector body coordinate system with the centroid of the photodetector as the origin: the Ox axis points outward along the optical axis, with outward positive; the Oy axis lies in the longitudinal symmetry plane of the detector and is perpendicular to the Ox axis, with upward positive; the Oz axis is perpendicular to the Oxy plane, its direction determined according to the right-hand rectangular coordinate system. The detector body coordinate system is fixed to the detector and moves or rotates in space together with it;
the gyroscope is coaxially and fixedly connected with the optical center of the photodetector and provides the three-axis rotation angular rates ω_x, ω_y and ω_z. Let the data update frequency of the gyroscope be f_G and the acquisition frequency of the image sensor be f_C, where f_G ≥ f_C; the gyroscope data timestamps and image data timestamps are used to synchronize the gyroscope data with the image data. From time (τ-1) to time τ, the images acquired by the image sensor are I(τ-1) and I(τ), and the three-axis rotation angular rates output by the gyroscope are ω_xk, ω_yk and ω_zk, k = 1, …, N, N = round(f_G/f_C). Then, from (τ-1) to time τ, the angles θ_x, θ_y and θ_z through which the detector has rotated about the Ox, Oy and Oz axes can be calculated according to the following formula:

θ_x = (1/f_G) Σ_{k=1}^{N} ω_xk,  θ_y = (1/f_G) Σ_{k=1}^{N} ω_yk,  θ_z = (1/f_G) Σ_{k=1}^{N} ω_zk.  (1)
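As an illustration, the accumulation of the gyro angular rates into the inter-frame rotation angles θ_x, θ_y, θ_z described above can be sketched in Python (a minimal sketch, not part of the patent; the function name and the list-of-tuples sample format are assumptions):

```python
def rotation_angles(omega_samples, f_G):
    """Integrate the gyroscope's three-axis angular rates (rad/s),
    sampled at f_G Hz between two frames, into the three rotation
    angles theta_x, theta_y, theta_z (rad)."""
    # omega_samples: N = round(f_G / f_C) gyro samples, each (w_x, w_y, w_z)
    dt = 1.0 / f_G  # time step between successive gyro samples
    theta_x = sum(w[0] for w in omega_samples) * dt
    theta_y = sum(w[1] for w in omega_samples) * dt
    theta_z = sum(w[2] for w in omega_samples) * dt
    return theta_x, theta_y, theta_z
```

For example, with f_G = 200 Hz and f_C = 25 Hz, each inter-frame interval contributes N = 8 gyro samples to the sums.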
the step S2.2 includes:
calculating image transformation caused by three-axis rotation of a detector body coordinate system:
if the detector body coordinate system rotates theta around the Ox axisxAngle, then the imaging window rotates clockwise around its center by thetaxCorner, then the position of the same pixel on the background in the image coordinate system is rotated counterclockwise by θ with the image center as the rotation centerxThe angle, rotation transformation matrix is
Conversion of rotation of any pixel (x, y) in an image from (τ -1) to time τ
Wherein, W and H are the width and height of the image, respectively;
the image is rotated by theta around Oy and Oz axes respectivelyyAnd thetazThe angle and translation transformation matrix can be solved by the angular resolution of the imaging pixel, and the field angle of the camera is set to be thetaH×θVWherein, thetaHFor horizontal field angle, θVA vertical field of view angle; the imaged pixel resolution is W × H, then the angular resolution (°/pix) of the pixel is:
(τ -1) to time τ, the translation transformation matrix for the image is:
the image of any pixel (x, y) on the image from (tau-1) to time tau is transformed into:
namely, it is
The step S2.3 is specifically:
Shi-Tomasi corner detection is performed on the previous frame image I(τ-1), and the detected corner coordinates are recorded as {(x_i, y_i)}, i = 1, …, N;

the matching feature points on the current frame image I(τ) are extracted by the pyramid optical flow method, giving the matched feature point pair set G2 = {((x_i, y_i), (x′_i, y′_i))}, i = 1, …, N;

using equation (7), global motion compensation is performed on the feature points {(x_i, y_i)}, i = 1, …, N on image I(τ-1) to obtain the compensated feature points {(x̃_i, ỹ_i)}, i = 1, …, N;

the independent motion vector of a feature point is obtained as v = (x′ - x̃, y′ - ỹ). If v_min < ||v|| < v_max, the feature point is retained, otherwise it is removed, where v_min and v_max are rate thresholds.
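After detection, matching and compensation, the last step above reduces to a residual-rate filter; a sketch (the function name and data layout are assumptions, and in practice the matched pairs would come from routines such as OpenCV's goodFeaturesToTrack and calcOpticalFlowPyrLK):

```python
def filter_independent_points(matches, compensated, v_min, v_max):
    """Keep only feature pairs whose residual (independent) motion rate
    lies strictly between v_min and v_max.

    matches:     list of ((x, y), (x2, y2)) matched feature pairs
    compensated: list of (xc, yc), the globally compensated positions
                 of the (x, y) points
    """
    kept = []
    for ((x, y), (x2, y2)), (xc, yc) in zip(matches, compensated):
        vx, vy = x2 - xc, y2 - yc            # independent motion vector v
        rate = (vx * vx + vy * vy) ** 0.5    # ||v||
        if v_min < rate < v_max:
            kept.append(((x, y), (x2, y2), (vx, vy)))
    return kept
```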
Step S3.1 includes:
given an independent motion feature point pair {(x, y), (x′, y′)}, where (x, y) and (x′, y′) are the position coordinates of a feature point on images I(τ-1) and I(τ) respectively, the independent motion of the feature point is described by the joint position-velocity feature s:

s = (x′, y′, v_x, v_y),  (8)

wherein v = (v_x, v_y) = (x′ - x̃, y′ - ỹ) is the independent motion vector of the feature point from image I(τ-1) to I(τ), i.e. the residual motion after global motion compensation;

the independent motion of a cluster is described by the joint position-velocity feature μ:

μ = (x̄′, ȳ′, v̄_x, v̄_y),  (9)

wherein (x̄′, ȳ′) and (v̄_x, v̄_y) are respectively the centroid and the average motion vector of all the feature points of the class;
the similarity measure in step S3.2 includes:
let s1=(x′1,y′1,vx1,vy1) And s2=(x'2,y'2,vx2,vy2) For independent movement characteristics of different characteristic points, recordingIndependent motion characteristic s1And s2Can be determined by the distance function d(s) of equation (10)1,s2) To measure:
wherein the content of the first and second substances,is the Euclidean distance of the location feature; (v)1·v2)/(||v1||||v2| | l) +2) is equal to or more than 1 and is a speed direction weighting factor; | v | (V)2||/||v1| | | > 1 is a velocity amplitude weighting factor;
is provided withAndfor independent movement characteristics of different clusters, noteThe similarity of the independent motion features of the clusters can be determined by the distance function d (mu) of equation (11)1,μ2) To measure:
let s1=(x′1,y′1,vx1,vy1) Andfeatures of independent motion, respectively, of feature points and clustersThe similarity between the feature points and the cluster-like features can be determined by the distance function d(s) of equation (12)1,μ1) To measure:
the step S3.3 is specifically:
in the clustering result, each class corresponds to one candidate region, whose boundary is the boundary of all the sample position features of that class, namely

x_min = min_j x′_j,  x_max = max_j x′_j,  y_min = min_j y′_j,  y_max = max_j y′_j,  j = 1, 2, …, count,  (13)

wherein count is the number of samples of the class; {(x′_j, y′_j)}, j = 1, 2, …, count are the position features of all samples in the class; and (x_min, y_min) and (x_max, y_max) are the coordinates of the upper-left and lower-right corners of the rectangular candidate region. The candidate region corresponding to the class is then (x_min, y_min, width, height), wherein width = x_max - x_min + 1 and height = y_max - y_min + 1;

thresholding the pixel aspect ratio of the candidate regions yields the final detection result: the pixel aspect ratio of a candidate region is the aspect ratio of the rectangular candidate region, γ = width/height. If the pixel aspect ratio of a candidate region exceeds the threshold τ_γ (τ_γ > 1), i.e. γ > τ_γ or γ < 1/τ_γ, the candidate region is deleted, and the final detection result is thus obtained.
Compared with the prior art, the invention has the following advantages:
(1) the method requires no prior knowledge of the number or size of targets, is applicable to most application scenarios, has a small computational load and small memory overhead, is well suited to embedded implementation, and has important practical application value;
(2) a gyroscope is used for global motion compensation of the feature point coordinates: on the one hand, operating on two-dimensional image coordinates avoids complex image registration and global motion estimation, saving the time cost of the algorithm; on the other hand, only the feature point coordinates need to be stored rather than complete images, saving the space cost of the algorithm;
(3) joint position-velocity features are used to describe the independent motion, and a velocity amplitude weighting factor and a velocity direction weighting factor are added on top of the Euclidean distance of the position features as the similarity measure of the joint features, increasing the discrimination between independent motion features with similar positions and hence between adjacent targets, which improves the recall and precision of moving target detection;
(4) in the independent motion feature clustering, only the local mean and variance of the joint position-velocity features need to be computed recursively, so the memory and time constraints of data stream processing are easily met.
Drawings
FIG. 1 is a general flow chart of a moving object detection method based on independent moving feature clustering according to the present invention;
FIG. 2 is a flow chart of target detection based on IELM clustering according to the present invention;
FIG. 3 is a coordinate system of the probe body;
FIGS. 4a to 4c show the mapping relationships between the three-axis rotation of the body coordinate system and the image transformation (the dotted line indicates time τ_1, and the solid line indicates time τ_2), where FIG. 4a shows the rotation of the body coordinate system about the x-axis, FIG. 4b the rotation about the y-axis, and FIG. 4c the rotation about the z-axis;
fig. 5 is a flow chart of the IELM clustering algorithm.
Detailed Description
The present invention will now be further described by way of the following detailed description of a preferred embodiment thereof, taken in conjunction with the accompanying drawings.
As shown in fig. 1, a method for detecting a moving object based on independent motion feature clustering includes the following steps:
S1, performing local feature detection, description and matching on the video stream output by the image sensor to obtain matched feature point pairs at adjacent times, and performing global motion estimation to obtain global motion parameters; if the system is not equipped with a gyroscope, global motion estimation can be carried out using the matched feature point pairs;
S2, performing global motion compensation on the feature point coordinates to obtain independent motion feature point pairs;
and S3, clustering the independent motion feature point pairs to obtain independent motion feature clusters, and post-processing the independent motion feature clusters to obtain the final detection result.
As shown in fig. 2, the present invention specifically includes:
the global motion compensation for the feature point coordinates in step S2 includes the following steps:
s2.1, calculating the three-axis rotation angle of the coordinate system of the detector body of the adjacent frame according to the three-axis rotation angle rate output by the gyroscope;
and S2.2, establishing a mapping relation between the three-axis rotation of the detector body coordinate system and image transformation, and calculating the image transformation caused by the three-axis rotation of the detector body coordinate system, thereby realizing global motion compensation.
Specifically,
and (2.1) constructing the detector body coordinate system. The detector body coordinate system (see FIG. 3) is established with the centroid of the photodetector as the origin: the Ox axis points outward along the optical axis, with outward positive; the Oy axis lies in the longitudinal symmetry plane of the detector and is perpendicular to the Ox axis, with upward positive; the Oz axis is perpendicular to the Oxy plane, its direction determined according to the right-hand rectangular coordinate system. The detector body coordinate system is fixed to the detector and moves or rotates in space together with it. Suppose the detector rotates about the Ox, Oy and Oz axes at the angular rates ω_x, ω_y and ω_z respectively. Let the body coordinate systems at times (τ-1) and τ be Oxyz and Ox′y′z′ respectively; from (τ-1) to time τ, the detector body has rotated about the Ox, Oy and Oz axes by the angles θ_x, θ_y and θ_z. In the figure, the dotted and solid rectangular boxes represent the images at times (τ-1) and τ respectively, and O_{τ-1} and O_τ represent the image centers at times (τ-1) and τ. Note that the motion of the image center (i.e. from point O_{τ-1} to point O_τ) reflects only the translation transformation (t_x, t_y) of the image coordinate system caused by the rotation of the body coordinate system about the Oy and Oz axes;
And calculating the three-axis rotation angles of the detector body coordinate system between adjacent times. The gyroscope is coaxially and fixedly connected with the optical center of the photodetector and provides the three-axis rotation angular rates ω_x, ω_y and ω_z. Let the data update frequency of the gyroscope be f_G and the acquisition frequency of the image sensor be f_C (f_G ≥ f_C). The gyroscope data timestamps and image data timestamps are used to synchronize the gyroscope data with the image data. From time (τ-1) to time τ, the images acquired by the image sensor are I(τ-1) and I(τ), and the three-axis rotation angular rates output by the gyroscope are ω_xk, ω_yk and ω_zk, k = 1, …, N, N = round(f_G/f_C). Then, from (τ-1) to time τ, the angles θ_x, θ_y and θ_z through which the detector has rotated about the Ox, Oy and Oz axes can be calculated according to the following formula:

θ_x = (1/f_G) Σ_{k=1}^{N} ω_xk,  θ_y = (1/f_G) Σ_{k=1}^{N} ω_yk,  θ_z = (1/f_G) Σ_{k=1}^{N} ω_zk.  (1.1)
And (2.2) calculating image transformation caused by three-axis rotation of the detector body coordinate system. And establishing an image coordinate system by taking the upper left corner of the image as an origin, the horizontal direction as an x axis, the right direction as a positive direction, the vertical direction as a y axis and the downward direction as a positive direction. The detector rotates around the Ox axis to cause image rotation transformation; rotation about the Oy axis causes a lateral translation transformation of the image; rotation about the Oz axis causes a longitudinal translational transformation of the image. The mapping relation between the three-axis rotation of the detector body coordinate system and the image transformation is shown in FIGS. 4 a-4 c.
If the detector body coordinate system rotates about the Ox axis by the angle θ_x, the imaging window rotates clockwise about its center by θ_x. The position of a given background pixel in the image coordinate system therefore rotates counterclockwise by θ_x about the image center, and the rotation transformation matrix is

R = [cos θ_x  -sin θ_x; sin θ_x  cos θ_x].  (1.2)

Note that the imaging window coordinate system does not coincide with the image coordinate system: its origin is the image center, its x-axis is horizontal with right positive, and its y-axis is vertical with down positive, as in FIG. 4a. Thus, the rotation transformation of any pixel (x, y) in the image from (τ-1) to time τ is

[x_1; y_1] = R·[x - W/2; y - H/2] + [W/2; H/2],  (1.3)

where W and H are the width and height of the image, respectively.

When the detector rotates about the Oy and Oz axes by the angles θ_y and θ_z respectively, the translation transformation matrix can be solved from the angular resolution of the imaging pixels. Let the field of view of the camera be θ_H × θ_V, where θ_H is the horizontal field angle and θ_V the vertical field angle, and let the imaging pixel resolution be W × H. Then the angular resolution (°/pix) of a pixel is

α_H = θ_H / W,  α_V = θ_V / H.  (1.4)

From (τ-1) to time τ, the translation transformation matrix of the image is

T = [t_x; t_y] = [θ_y / α_H; θ_z / α_V].  (1.5)

In summary, the transformation of any pixel (x, y) on the image from (τ-1) to time τ is

[x̃; ỹ] = R·[x - W/2; y - H/2] + [W/2; H/2] + T,  (1.6)

namely

x̃ = (x - W/2)·cos θ_x - (y - H/2)·sin θ_x + W/2 + θ_y/α_H,
ỹ = (x - W/2)·sin θ_x + (y - H/2)·cos θ_x + H/2 + θ_z/α_V.  (1.7)
The step S2 of obtaining the independent motion feature point pair includes the following steps:
s2.3, carrying out corner point detection on the image at the previous moment;
s2.4, extracting the matching feature points on the image at the current moment by adopting a pyramid optical flow method to obtain a matching feature point set:
s2.5, carrying out global motion compensation on the feature points of the image at the previous moment to obtain compensation feature points;
and S2.6, obtaining the independent motion characteristic point pairs through threshold processing.
Specifically,
(2.3) Shi-Tomasi corner detection is performed on the previous frame image I(τ-1), and the detected corner coordinates are recorded as {(x_i, y_i)}, i = 1, …, N.
(2.4) The PyrLK optical flow method is used to extract the matching feature points on the current frame image I(τ), giving the matched feature point pair set G2 = {((x_i, y_i), (x′_i, y′_i))}, i = 1, …, N.
(2.5) Using equation (1.7), global motion compensation is performed on the feature points {(x_i, y_i)}, i = 1, …, N on image I(τ-1) to obtain the compensated feature points {(x̃_i, ỹ_i)}, i = 1, …, N.
(2.6) The independent motion vector of each feature point is determined as v = (x′ - x̃, y′ - ỹ). If v_min < ||v|| < v_max, the feature point is retained; otherwise it is removed, where v_min and v_max are rate thresholds.
The step S3 includes the steps of:
s3.1, describing the independent motion of the feature points by adopting the position and speed combined feature to obtain an independent motion feature;
S3.2, clustering the independent motion features with an improved evolving local means (IELM) algorithm, using the position feature as the main feature for similarity measurement while adding a velocity amplitude weighting factor and a velocity direction weighting factor to increase the discrimination between independent motion features with similar positions;
and S3.3, performing post-processing on the clustering result to obtain a final detection result.
The specific method for describing the independent motion characteristics comprises the following steps:
and describing the independent motion features of the feature points. Given an independent motion feature point pair {(x, y), (x′, y′)}, where (x, y) and (x′, y′) are the position coordinates of a feature point on images I(τ-1) and I(τ) respectively, the independent motion of the feature point is described by the joint position-velocity feature s:

s = (x′, y′, v_x, v_y),  (1.8)

wherein v = (v_x, v_y) = (x′ - x̃, y′ - ỹ) is the independent motion vector of the feature point from image I(τ-1) to I(τ), i.e. the residual motion after global motion compensation.

Describing the independent motion features of the clusters. The independent motion of a cluster is described by the joint position-velocity feature μ:

μ = (x̄′, ȳ′, v̄_x, v̄_y),  (1.9)

wherein (x̄′, ȳ′) and (v̄_x, v̄_y) are respectively the centroid and the average motion vector of all the feature points of the class.
The specific method for measuring the similarity comprises the following steps:
similarity of the independent motion features of feature points. Let s_1 = (x′_1, y′_1, v_x1, v_y1) and s_2 = (x′_2, y′_2, v_x2, v_y2) be the independent motion features of two different feature points, and write v_1 = (v_x1, v_y1) and v_2 = (v_x2, v_y2). The similarity of the independent motion features s_1 and s_2 can be measured by the distance function d(s_1, s_2) of equation (1.10):

d(s_1, s_2) = sqrt((x′_1 - x′_2)² + (y′_1 - y′_2)²) · (2 - (v_1·v_2)/(||v_1|| ||v_2||)) · (max(||v_1||, ||v_2||)/min(||v_1||, ||v_2||)),  (1.10)

wherein the first term sqrt((x′_1 - x′_2)² + (y′_1 - y′_2)²) is the Euclidean distance of the position features; the second term 2 - (v_1·v_2)/(||v_1|| ||v_2||) ≥ 1 is the velocity direction weighting factor, with equality when the motion directions of the two feature points are the same, the value of the term increasing as the deviation between the two motion directions increases; the third term max(||v_1||, ||v_2||)/min(||v_1||, ||v_2||) ≥ 1 is the velocity amplitude weighting factor, with equality when the motion rates of the two feature points are the same, the value of the term increasing as the difference between the two motion rates increases. Note that the distance function defined by equation (1.10) has a clear physical meaning: when the motion directions and motion rates of the two feature points are identical, the distance between s_1 and s_2 reduces to the distance between the points (x′_1, y′_1) and (x′_2, y′_2) in the image plane.

Similarity of the independent motion features of clusters. Let μ_1 = (x̄′_1, ȳ′_1, v̄_x1, v̄_y1) and μ_2 = (x̄′_2, ȳ′_2, v̄_x2, v̄_y2) be the independent motion features of two different clusters, and write v̄_1 = (v̄_x1, v̄_y1) and v̄_2 = (v̄_x2, v̄_y2). The similarity of the independent motion features of the clusters can be measured by the distance function d(μ_1, μ_2) of equation (1.11):

d(μ_1, μ_2) = sqrt((x̄′_1 - x̄′_2)² + (ȳ′_1 - ȳ′_2)²) · (2 - (v̄_1·v̄_2)/(||v̄_1|| ||v̄_2||)) · (max(||v̄_1||, ||v̄_2||)/min(||v̄_1||, ||v̄_2||)).  (1.11)

Similarity of the independent motion features of a feature point and a cluster. Let s_1 = (x′_1, y′_1, v_x1, v_y1) and μ_1 = (x̄′_1, ȳ′_1, v̄_x1, v̄_y1) be the independent motion features of a feature point and of a cluster respectively. The similarity between the feature point and the cluster can be measured by the distance function d(s_1, μ_1) of equation (1.12):

d(s_1, μ_1) = sqrt((x′_1 - x̄′_1)² + (y′_1 - ȳ′_1)²) · (2 - (v_1·v̄_1)/(||v_1|| ||v̄_1||)) · (max(||v_1||, ||v̄_1||)/min(||v_1||, ||v̄_1||)).  (1.12)
The specific method for IELM clustering comprises the following steps:
representation of samples and clusters of classes. Representing a sample by adopting (s, r), wherein s represents the independent motion characteristic of the characteristic point, and r represents the neighborhood radius of the sample; and (mu, sigma) is used for representing a class cluster, wherein mu represents the independent motion characteristic of the class cluster, and sigma represents the radius of the class area.
The clustering procedure is as follows.
(a) Input a new sample s_i.
(b) Calculate the distance from the new sample to all class centers: d_j = ||s_i - μ_j||.
(c) If there exists j such that d_j < max(σ_j, r) + r, then some class region overlaps the neighborhood of sample s_i; go to step (e). Otherwise, sample s_i does not overlap any existing class region; go to step (d).
(d) Create a new class with sample s_i as its center, and go to step (a).
(e) Assign sample s_i to the nearest class (μ_k, σ_k) and update the parameters μ_k and σ_k, where the value of k is obtained by k = argmin_j d_j.
(f) Check whether class (μ_k, σ_k) overlaps any other class region: if so, fuse the two classes and update the class parameters.
(g) Go to step (a).
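Steps (a) to (g) can be sketched as follows (a simplified illustration, not the patented algorithm: it uses a plain Euclidean distance in place of the weighted distance functions defined earlier, keeps each class radius σ fixed at r, and fuses overlapping classes by a count-weighted mean):

```python
import math

def dist(a, b):
    """Euclidean distance between two joint features."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

class Cluster:
    def __init__(self, s, r):
        self.mu = list(s)   # local mean of the cluster's joint features
        self.sigma = r      # class region radius (kept fixed in this sketch)
        self.count = 1

    def absorb(self, s):
        # step (e): recursive update of the local mean
        self.count += 1
        for i, v in enumerate(s):
            self.mu[i] += (v - self.mu[i]) / self.count

def fuse(target, other):
    # step (f): merge an overlapping class into the target class
    n = target.count + other.count
    target.mu = [(m1 * target.count + m2 * other.count) / n
                 for m1, m2 in zip(target.mu, other.mu)]
    target.sigma = max(target.sigma, other.sigma)
    target.count = n

def ielm(samples, r):
    clusters = []
    for s in samples:                                   # (a) new sample
        if clusters:
            d = [dist(s, c.mu) for c in clusters]       # (b) distances
            k = min(range(len(clusters)), key=d.__getitem__)
        if not clusters or d[k] >= max(clusters[k].sigma, r) + r:
            clusters.append(Cluster(s, r))              # (d) new class
            continue
        target = clusters[k]
        target.absorb(s)                                # (c) -> (e) assign
        keep = []
        for c in clusters:                              # (f) fuse overlaps
            if c is not target and dist(c.mu, target.mu) < c.sigma + target.sigma:
                fuse(target, c)
            else:
                keep.append(c)
        clusters = keep
    return clusters
```

Because each sample only updates a running mean and a sample count, the loop processes the feature stream in a single pass with constant memory per class, matching the data-stream constraints noted above.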
The flow of IELM clustering is shown in FIG. 5, where the parameters are as follows:
r: the neighborhood radius of a sample;
σ_i: the region radius of class i, a function of the distances from the position features (x′_j, y′_j), j = 1, 2, …, count_i, of all samples of the class to the class centroid (x̄′_i, ȳ′_i);
c: the number of classes;
count_i: the number of samples of class i.
the specific method for post-processing the clustering result comprises the following steps:
Calculate the position and size of each candidate region. In the clustering result, each cluster corresponds to one candidate region, whose boundary is the bounding box of all sample position features, i.e. xmin = min_j x'j, ymin = min_j y'j, xmax = max_j x'j, ymax = max_j y'j,
where count denotes the number of samples in the cluster; {(x'j, y'j)}, j = 1, 2, …, count, denotes the position features of all samples in the cluster; and (xmin, ymin) and (xmax, ymax) denote the upper-left and lower-right corner coordinates of the rectangular candidate region. The candidate region corresponding to this cluster is then (xmin, ymin, width, height), where width = xmax − xmin + 1 and height = ymax − ymin + 1.
Threshold the pixel aspect ratio of each candidate region to obtain the final detection result. The pixel aspect ratio of a candidate region is the aspect ratio of its rectangle, i.e. γ = width/height. If γ exceeds the threshold τγ (τγ > 1), i.e. γ > τγ or γ < 1/τγ, the candidate region is deleted. The remaining regions constitute the final detection result.
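The post-processing step can be sketched as follows; the value of τγ used here (3.0) is only an illustrative assumption, and the function name is not from the patent.

```python
def postprocess(clusters, tau_gamma=3.0):
    """Turn clusters of (x', y') position features into candidate boxes
    and drop boxes whose pixel aspect ratio gamma exceeds the threshold.

    clusters: list of lists of (x, y) sample positions.
    tau_gamma: aspect-ratio threshold, tau_gamma > 1 (assumed value).
    Returns the kept boxes as (x_min, y_min, width, height).
    """
    boxes = []
    for pts in clusters:
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        x_min, x_max = min(xs), max(xs)
        y_min, y_max = min(ys), max(ys)
        width = x_max - x_min + 1
        height = y_max - y_min + 1
        gamma = width / height
        # delete regions that are too elongated in either direction
        if gamma > tau_gamma or gamma < 1.0 / tau_gamma:
            continue
        boxes.append((x_min, y_min, width, height))
    return boxes
```

A square cluster survives the filter while a 50:1 strip is deleted.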
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.
Claims (6)
1. A moving object detection method based on independent moving feature clustering is characterized by comprising the following steps:
s1, carrying out local feature detection, description and matching on the video stream output by the image sensor to obtain adjacent time matching feature point pairs, and carrying out global motion estimation to obtain global motion parameters;
s2, carrying out global motion compensation on the feature point coordinates to obtain independent motion feature point pairs;
s3, clustering the independent motion characteristic point pairs to obtain independent motion characteristic clusters, and post-processing the independent motion characteristic clusters to obtain a final detection result;
the global motion compensation for the feature point coordinates in step S2 includes the following steps:
s2.1, calculating the three-axis rotation angle of the coordinate system of the detector body of the adjacent frame according to the three-axis rotation angle rate output by the gyroscope;
s2.2, establishing a mapping relation between the three-axis rotation of the detector body coordinate system and image transformation, and calculating the image transformation caused by the three-axis rotation of the detector body coordinate system so as to realize global motion compensation;
the step S2.1 includes:
establishing a detector body coordinate system by taking the mass center of the photoelectric detector as an origin, wherein the Ox axis is positive towards the outside along the direction of an optical center, the Oy axis is positioned in a longitudinal symmetrical plane of the detector and is vertical to the Ox axis, the upward direction is positive, the Oz axis is vertical to the Oxy plane, the direction is determined according to a right-hand rectangular coordinate system, and the detector body coordinate system is fixedly connected with the detector and moves or rotates in space along with the motion of the detector;
the gyroscope is coaxially and fixedly connected with the optical center of the photoelectric detector and provides the three-axis rotation angular rates ωx, ωy and ωz; let the data update frequency of the gyroscope be fG and the acquisition frequency of the image sensor be fC, where fG ≥ fC; the gyroscope data and the image data are synchronized by means of their time stamps; from time (τ−1) to time τ, the images collected by the image sensor are I(τ−1) and I(τ), and the three-axis angular rates output by the gyroscope are ωxk, ωyk and ωzk, k = 1, …, N, N = round(fG/fC); then, from (τ−1) to τ, the rotation angles of the detector about the Ox, Oy and Oz axes, θx, θy and θz, are obtained by integrating the angular rates over the N samples, i.e. θx = (1/fG)·Σk ωxk, θy = (1/fG)·Σk ωyk, θz = (1/fG)·Σk ωzk;
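This integration of angular rates between two frames can be sketched directly (a minimal version assuming rectangular integration with time step 1/fG; the function name is illustrative):

```python
def frame_rotation_angles(omega_x, omega_y, omega_z, f_g):
    """Integrate per-axis angular rates (rad/s), sampled at f_G Hz,
    over the N gyroscope samples between two frames:
    theta = sum_k(omega_k) / f_G."""
    dt = 1.0 / f_g  # time step of one gyroscope sample
    theta_x = sum(omega_x) * dt
    theta_y = sum(omega_y) * dt
    theta_z = sum(omega_z) * dt
    return theta_x, theta_y, theta_z
```

For example, with fG = 100 Hz and fC = 10 Hz there are N = 10 samples per frame interval, so a constant rate of 0.1 rad/s integrates to 0.01 rad.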
the step S2.2 includes:
calculating image transformation caused by three-axis rotation of a detector body coordinate system:
if the detector body coordinate system rotates by angle θx about the Ox axis, the imaging window rotates clockwise about its center by θx; consequently, the position of the same background pixel in the image coordinate system rotates counterclockwise by θx about the image center, and the rotation transformation matrix is
the rotation transformation of any pixel (x, y) in the image from time (τ−1) to time τ is
Wherein, W and H are the width and height of the image, respectively;
the rotations of the image about the Oy and Oz axes by angles θy and θz correspond to image translations, and the translation transformation matrix can be solved from the angular resolution of the imaging pixels; let the field of view of the camera be θH × θV, where θH is the horizontal field angle and θV is the vertical field angle; with an imaging resolution of W × H pixels, the angular resolution of a pixel is:
from time (τ−1) to τ, the translation transformation matrix of the image is:
the transformation of any pixel (x, y) on the image from time (τ−1) to time τ is then obtained by combining the rotation and translation transformations above.
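The combined compensation of a pixel can be sketched as below. The transformation matrices in the patent are given only as images, so this is a sketch under stated assumptions: rotation about Ox rotates the pixel about the image center by θx, and rotations about Oy/Oz translate the pixel by the angle divided by the per-pixel angular resolution; the function name, parameter names and signs are illustrative.

```python
import math

def compensate_pixel(x, y, theta_x, theta_y, theta_z,
                     W, H, theta_H, theta_V):
    """Map pixel (x, y) at time (tau-1) to its globally compensated
    position at time tau (small-rotation model, assumed form)."""
    # rotation about the image center by theta_x (counterclockwise)
    cx, cy = W / 2.0, H / 2.0
    c, s = math.cos(theta_x), math.sin(theta_x)
    xr = c * (x - cx) - s * (y - cy) + cx
    yr = s * (x - cx) + c * (y - cy) + cy
    # per-pixel angular resolution from the field of view
    alpha_h = theta_H / W
    alpha_v = theta_V / H
    # translations induced by rotations about Oy and Oz
    xt = xr + theta_y / alpha_h
    yt = yr + theta_z / alpha_v
    return xt, yt
```

With all angles zero the mapping is the identity; a yaw of 0.1 rad with a 1.0 rad horizontal field of view over 640 pixels shifts the image by 64 pixels.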
2. The method for detecting moving objects based on independent moving feature clustering according to claim 1, wherein the step S2 of obtaining the independent moving feature point pairs comprises the following steps:
s2.3, carrying out corner point detection on the image at the previous moment;
s2.4, extracting the matching feature points on the image at the current moment by adopting a pyramid optical flow method to obtain a matching feature point set;
s2.5, carrying out global motion compensation on the feature points of the image at the previous moment to obtain compensation feature points;
and S2.6, obtaining the independent motion characteristic point pairs through threshold processing.
3. The method for detecting moving objects based on independent moving feature clustering according to claim 1, wherein the step S3 comprises the steps of:
s3.1, describing the independent motion of the feature points by adopting the position and speed combined feature to obtain an independent motion feature;
s3.2, clustering independent motion features, measuring similarity by taking position features as main features, adding a speed amplitude weighting factor and a speed direction weighting factor at the same time, and increasing the discrimination of independent motion features with similar positions;
and S3.3, performing post-processing on the clustering result to obtain a final detection result.
4. The method for detecting a moving object based on independent moving feature clustering according to claim 2, wherein the step S2.3 specifically comprises:
Shi-Tomasi corner detection is carried out on the previous frame image I(τ−1), and the detected corner coordinates are recorded as {(xi, yi)}, i = 1, …, N;
a pyramid optical flow method is used to extract the matching feature points on the current frame image I(τ), yielding the matching feature point pair set G2 = {(xi, yi), (x'i, y'i)}, i = 1, …, N;
equation (7) is used to carry out global motion compensation on the feature points {(xi, yi)}, i = 1, …, N, of image I(τ−1), yielding the compensated feature points.
5. The method of claim 3, wherein the step S3.1 comprises:
for an independent motion feature point pair {(x, y), (x', y')}, where (x, y) and (x', y') are the position coordinates of a feature point on images I(τ−1) and I(τ), respectively, the independent motion of the feature point is described by the joint position-velocity feature s:
s=(x',y',vx,vy), (8)
where (vx, vy) is the motion vector of the feature point from image I(τ−1) to I(τ);
the independent motion of a cluster is described by the joint position-velocity feature μ = (x̄', ȳ', v̄x, v̄y), (9)
where (x̄', ȳ') and (v̄x, v̄y) denote the centroid and the average motion vector of all feature points in the cluster, respectively;
the similarity measure in step S3.2 includes:
let s1=(x′1,y′1,vx1,vy1) And s2=(x′2,y′2,vx2,vy2) For independent movement characteristics of different characteristic points, recordingIndependent motion characteristic s1And s2Can be determined by the distance function d(s) of equation (10)1,s2) To measure:
where ||(x'1 − x'2, y'1 − y'2)|| is the Euclidean distance of the position features; ((v1·v2)/(||v1|| ||v2||) + 2) ≥ 1 is the velocity direction weighting factor; and ||v2||/||v1|| ≥ 1 is the velocity magnitude weighting factor;
let μ1 and μ2 be the independent motion features of two different clusters; the similarity of the cluster independent motion features can be measured by the distance function d(μ1, μ2) of equation (11):
let s1=(x′1,y′1,vx1,vy1) Andfeatures of independent motion, respectively, of feature points and clustersThe similarity between the feature points and the cluster-like features can be determined by the distance function d(s) of equation (12)1,μ1) To measure:
6. the method for detecting a moving object based on independent moving feature clustering according to claim 5, wherein the step S3.3 specifically comprises:
in the clustering result, each cluster corresponds to one candidate region, whose boundary is the bounding box of all sample position features, i.e. xmin = min_j x'j, ymin = min_j y'j, xmax = max_j x'j, ymax = max_j y'j,
where count denotes the number of samples in the cluster; {(x'j, y'j)}, j = 1, 2, …, count, denotes the position features of all samples in the cluster; (xmin, ymin) and (xmax, ymax) denote the upper-left and lower-right corner coordinates of the rectangular candidate region; the candidate region corresponding to the cluster is (xmin, ymin, width, height), where width = xmax − xmin + 1 and height = ymax − ymin + 1;
performing threshold processing on the pixel aspect ratio of the candidate region to obtain the final detection result:
the pixel aspect ratio of a candidate region is the aspect ratio of its rectangle, γ = width/height; τγ is the pixel aspect ratio threshold, with τγ > 1; when γ > τγ or γ < 1/τγ, the candidate region is deleted, and the final detection result is obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910773313.3A CN110473229B (en) | 2019-08-21 | 2019-08-21 | Moving object detection method based on independent motion characteristic clustering |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110473229A CN110473229A (en) | 2019-11-19 |
CN110473229B true CN110473229B (en) | 2022-03-29 |
Family
ID=68512063
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910773313.3A Active CN110473229B (en) | 2019-08-21 | 2019-08-21 | Moving object detection method based on independent motion characteristic clustering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110473229B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116205914B (en) * | 2023-04-28 | 2023-07-21 | 山东中胜涂料有限公司 | Waterproof coating production intelligent monitoring system |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102231792A (en) * | 2011-06-29 | 2011-11-02 | 南京大学 | Electronic image stabilization method based on characteristic coupling |
CN103426182A (en) * | 2013-07-09 | 2013-12-04 | 西安电子科技大学 | Electronic image stabilization method based on visual attention mechanism |
CN105045841A (en) * | 2015-07-01 | 2015-11-11 | 北京理工大学 | Image feature query method in combination with gravity sensor and image feature point angles |
CN105138982A (en) * | 2015-08-21 | 2015-12-09 | 中南大学 | Crowd abnormity detection and evaluation method based on multi-characteristic cluster and classification |
CN105913459A (en) * | 2016-05-10 | 2016-08-31 | 中国科学院自动化研究所 | Moving object detection method based on high resolution continuous shooting images |
CN106295568A (en) * | 2016-08-11 | 2017-01-04 | 上海电力学院 | The mankind's naturalness emotion identification method combined based on expression and behavior bimodal |
CN106981073A (en) * | 2017-03-31 | 2017-07-25 | 中南大学 | A kind of ground moving object method for real time tracking and system based on unmanned plane |
CN109146972A (en) * | 2018-08-21 | 2019-01-04 | 南京师范大学镇江创新发展研究院 | Vision navigation method based on rapid characteristic points extraction and gridding triangle restriction |
CN109727273A (en) * | 2018-12-29 | 2019-05-07 | 北京茵沃汽车科技有限公司 | A kind of Detection of Moving Objects based on vehicle-mounted fisheye camera |
CN110046555A (en) * | 2019-03-26 | 2019-07-23 | 合肥工业大学 | Endoscopic system video image stabilization method and device |
Non-Patent Citations (1)
Title |
---|
Analysis of patent applications for UAV aerial photography gimbals; Zhang Maoyu; "Industry Patent Analysis Report: Unmanned Aerial Vehicles"; 2017-06-30; p. 85 *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |