CN110910417A - Weak and small moving target detection method based on super-pixel adjacent frame feature comparison - Google Patents
- Publication number
- CN110910417A (application CN201911038717.4A)
- Authority
- CN
- China
- Prior art keywords
- superpixel
- pixel
- super
- center
- adjacent
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a weak and small moving-target detection method based on superpixel adjacent-frame feature comparison, comprising the following steps. Step 1: apply the simple linear iterative clustering algorithm (SLIC) to obtain a superpixel segmentation of the target frame image. Step 2: generate a graph model from the adjacency of the superpixels, design superpixel features, compute the feature differences between superpixels as the edge values of the graph model, and extract potential targets with a graph-segmentation algorithm. Step 3: compare the color features of each potential target region across adjacent frames, and mark the region as a moving target if the color-feature difference exceeds a set threshold. The invention avoids inter-frame search and improves the efficiency of detecting weak and small moving targets.
Description
Technical Field
The invention relates to a weak and small moving target detection method based on super-pixel adjacent frame feature comparison, and belongs to the technical field of image processing.
Background
Detection of weak and small moving targets is an important research topic in image processing and machine vision. Its applications in the military and civilian fields have drawn wide attention from researchers, and it is widely used in areas such as security surveillance and remote-sensing imagery. Moving-target detection distinguishes and extracts the moving targets of interest from the background in an image sequence or video. In recent years, with the development of unmanned aerial vehicles, the demand for UAV security surveillance has kept growing, so realizing weak and small moving-target detection has become a research hotspot.
A prior work (moving object detection based on subtraction of a continuity-constrained background model, Computer Science, 2019, 06, 311-315) proposes a temporal-continuity-constrained low-rank-decomposition background-update model applied to video moving-object detection by background-model subtraction. The method decomposes the video into a low-rank component and a sparse component, updates and constructs the background under the continuity constraint, and finally detects moving targets against complex backgrounds through background subtraction. However, the low-rank decomposition is computationally inefficient and holes remain in the result. When the image pixels are noisy, the false-alarm rate is high and the detection accuracy for weak and small moving targets drops severely.
Disclosure of Invention
Technical problem to be solved
Aiming at the high false-alarm rate and low detection efficiency of most existing methods for detecting weak and small moving targets in visible-light sequence images, the method combines superpixel graph-theoretic segmentation with adjacent-frame superpixel feature comparison to detect weak and small moving targets in visible-light images.
Technical scheme
A weak and small moving object detection method based on super pixel adjacent frame feature contrast is characterized by comprising the following steps:
step 1: for each frame image f_v in the sequence F of length n, apply the simple linear iterative clustering algorithm SLIC to obtain a superpixel segmentation, decomposing each frame into m superpixel blocks SP_k(x_z, y_z); if a superpixel SP_k contains Z pixel points, then (x_z, y_z) are the horizontal and vertical coordinates of the pixel point z contained in SP_k; the origin of the whole image is the upper-left corner, the X axis is the horizontal axis pointing right, and the Y axis is the vertical axis pointing down; here v = 1, 2, ..., n, k = 1, 2, ..., m, and z = 1, 2, ..., Z;
the SLIC algorithm is specifically as follows:
initializing the cluster centers: distribute the cluster centers uniformly over the single-frame image according to the set superpixel number m; if the picture has Q pixel points, each superpixel covers about Q/m pixels, so the spacing between adjacent initial cluster centers is approximately Cdist = floor(sqrt(Q/m)), where the floor() function rounds down; within the 3 × 3 neighborhood of each cluster center, compute the position of minimum gradient and move the center there, completing the initialization of the cluster centers;
search strategy and distance metric: within the 2Cdist × 2Cdist window around each initial cluster center center_i, the cluster category of each pixel point is determined by its distance to the center; the distance D combines the color distance d_c and the spatial distance d_s, computed as:

d_c = sqrt((l_j - l_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2)
d_s = sqrt((x_j - x_i)^2 + (y_j - y_i)^2)
D = sqrt((d_c / N_c)^2 + (d_s / N_s)^2)

where j = 1, 2, ..., 4Cdist^2 indexes the pixels in the 2Cdist × 2Cdist neighborhood belonging to the cluster center center_i; d_c is the distance in Lab color space, with l, a, b the Lab color channel values; d_s is the spatial Euclidean distance; N_c is taken as a constant and N_s is taken as Cdist;
Dividing each pixel point to a corresponding clustering center with the minimum D value to form a super pixel;
iterative operation: calculating the Lab color mean value and the (x, y) coordinate mean value of all pixels in each current new superpixel, moving the clustering center to the (x, y) mean value, and repeatedly searching until convergence;
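The distance metric above can be sketched as follows (a minimal illustration; the constant N_c = 10 is an assumed value, since the text only states that N_c is a constant and N_s = Cdist):

```python
import math

def slic_distance(pixel_lab, pixel_xy, center_lab, center_xy, Nc=10.0, Ns=20.0):
    """Weighted SLIC distance D between a pixel j and a cluster center i.

    d_c is the Euclidean distance in Lab color space, d_s the spatial
    Euclidean distance, and D = sqrt((d_c/N_c)^2 + (d_s/N_s)^2).
    Nc=10 is an assumed constant; Ns should be set to Cdist, the
    initial grid spacing of the cluster centers.
    """
    d_c = math.sqrt(sum((p - c) ** 2 for p, c in zip(pixel_lab, center_lab)))
    d_s = math.sqrt(sum((p - c) ** 2 for p, c in zip(pixel_xy, center_xy)))
    return math.sqrt((d_c / Nc) ** 2 + (d_s / Ns) ** 2)
```

Each pixel is then assigned to the candidate center with the smallest D, as the next line describes.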
step 2: establishing a graph-theoretic representation of the superpixels; the graph model is denoted G(V, E), where V represents the nodes and E the edges; each node is a superpixel, and an edge represents the adjacency of one superpixel to another: if two superpixels are adjacent an edge exists, otherwise it does not; the edge value is the feature difference between the adjacent superpixels; mathematically the graph model is usually represented by an adjacency matrix, denoted Adj; if the whole image is divided into m superpixels, Adj is an m × m symmetric matrix whose diagonal elements represent the relation of a node with itself, so Adj(p, p) = 0; if superpixels SP_p and SP_q are not adjacent, Adj(p, q) = ∞; after the adjacency matrix is constructed, graph cutting removes, between each node and its surrounding neighbors, every edge whose value exceeds the threshold, i.e. the edges whose feature difference exceeds the set threshold are cut; this comprises three processes:
(1) designing the superpixel features: the superpixel feature comprises the three-channel color mean of the pixel block [R_mean, G_mean, B_mean]^T, the per-channel variance features [R_variance, G_variance, B_variance]^T, and the gradient features of the four outer edges [Up_grad, Right_grad, Down_grad, Left_grad]^T; for the gradient features, the contour of a superpixel SP_k in the numbering map I(x, y) locates in the original frame f_v its uppermost (Up), rightmost (Right), lowermost (Down) and leftmost (Left) edge positions, at which the gradients are computed; the numbering map I(x, y) is defined as follows: in the v-th frame f_v, every pixel inside a superpixel is labeled with that superpixel's number index_k, and the picture so obtained is the numbering map I(x, y), i.e. I(x_z, y_z) = index_k for (x_z, y_z) ∈ SP_k; the features are specifically defined as follows:
|Up| denotes the number of uppermost pixels belonging to SP_k, and the other directions are defined analogously; the superpixel feature is then:
Feature(SP_k) = [R_mean, G_mean, B_mean, R_variance, G_variance, B_variance, Up_grad, Right_grad, Down_grad, Left_grad]^T;
from these superpixel features, the feature difference between every pair of adjacent superpixels is computed next;
(2) establishing the graph: step 1 yields each superpixel SP_k(x_z, y_z); the adjacency between superpixels is determined from the contour of each number in the numbering map I(x, y);
the adjacency is defined as 4-adjacency, i.e. the coordinates of any two pixels z, h satisfy |x_z − x_h| + |y_z − y_h| = 1 (z ≠ h); to obtain the numbers index_r of the superpixels adjacent to SP_k, it suffices to traverse the contour of SP_k in the numbering map I(x, y) and record the adjacent numbers index_r under 4-adjacency; let SP_p and SP_q be adjacent, with p, q = 1, 2, ..., m and p ≠ q; the coordinate center of each superpixel is computed from the numbering map I(x, y) and denoted (x_center_p, y_center_p), (x_center_q, y_center_q); taking SP_p as the center and assuming SP_q lies to its upper right, the line joining the two centers makes an angle θ with the horizontal; from the superpixel features designed above, the difference between features consists of three parts: the color difference Colordist, the variance difference Vardist, and the gradient-enhancement difference Graddist;
to ensure symmetry of the adjacency matrix, the gradient-enhancement difference Graddist between SP_p and SP_q must be the same from either side; according to the angle θ, the gradient-enhancement difference computed at the center SP_p combines the two gradient features of SP_p facing SP_q with the two opposing gradient features of SP_q;
note: the first two terms are the gradient features of SP_p, the last two terms are the gradient features of SP_q;
the final edge value between SP_p and SP_q is then Adj(p, q) = α·Colordist + β·Vardist + γ·Graddist, where α = 1 and γ = 0.1, and the variance term weighted by β is divided by the number of pixels contained in each superpixel because the variance difference is generally large; this completes the construction of the graph model G(V, E), i.e. the computation of all elements of the adjacency matrix Adj;
(3) cutting the graph: according to the graph model, a graph-cut algorithm extracts the potential-target superpixels; given an edge-value threshold EdgeThreshold, set the flag vector IsVisited to an all-zero m-dimensional vector, whose total length equals the number of superpixels; first, starting from the superpixel numbered 1, run a depth-first search on the adjacency matrix Adj: the numbers of all superpixels reached through edges with Adj(p, q) < EdgeThreshold are changed to 1, edges with Adj(p, q) > EdgeThreshold are cut, and every superpixel SP_k traversed before the depth-first search terminates has its flag set, i.e. IsVisited(k) = 1; second, find a superpixel whose entry in the IsVisited flag vector is still 0 and, in number order, repeat the depth-first search following the steps used for the superpixel numbered 1; third, loop the second step until every entry of IsVisited is 1; at this point the whole numbering image I(x, y) consists of several large same-numbered regions and some small same-numbered regions, so the graph segmentation, and correspondingly the segmentation of the current frame, is complete; in I(x, y), each small same-numbered region is a potential target region SP^A(x_z, y_z), where the superscript A indexes the A-th potential region and (x_z, y_z) are the coordinates of the pixels in that region; each potential region forms a new superpixel block;
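The depth-first graph cut described above can be sketched as follows (a minimal illustration; `graph_cut_labels` is a hypothetical helper that returns a group id per superpixel instead of rewriting the superpixel numbers in place):

```python
import math

def graph_cut_labels(adj, edge_threshold):
    """Group superpixels via depth-first search on the adjacency matrix.

    adj[p][q] holds the edge value between superpixels p and q, with
    math.inf for non-adjacent pairs. Edges with value < edge_threshold
    are kept, so their endpoints merge into one group; larger edges are
    cut. After the cut, the small groups are the potential target regions.
    """
    m = len(adj)
    visited = [False] * m          # the IsVisited flag vector
    labels = [-1] * m
    group = 0
    for start in range(m):
        if visited[start]:
            continue
        stack = [start]
        visited[start] = True
        while stack:               # iterative depth-first search
            k = stack.pop()
            labels[k] = group
            for r in range(m):
                if not visited[r] and r != k and adj[k][r] < edge_threshold:
                    visited[r] = True
                    stack.append(r)
        group += 1
    return labels
```

For example, on a three-node chain whose second edge exceeds the threshold, nodes 1 and 2 merge while node 3 is split off as its own group.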
step 3: according to the potential target regions of step 2, weak and small moving-target detection is achieved through superpixel feature comparison between adjacent frames; the specific process is as follows:
adjacent-frame superpixel feature comparison: let the current frame be f_v and the previous frame be f_{v-1}; step 2 has produced each potential target region SP^A of the current frame f_v; project the positions of these potential regions one-to-one into frame f_{v-1}; for each projected region, compute the three-channel color mean according to the color-mean method of step 2; compute the color difference Coldiff between the two regions at the same position; if Coldiff exceeds the threshold ColThreshold, the region SP^A in the current frame f_v is judged to be a moving target.
Advantageous effects
In the weak and small moving-target detection method based on superpixel adjacent-frame feature comparison, each single-frame image is segmented by a superpixel algorithm and an adjacency matrix is constructed; each superpixel is represented by manually designed features, which improves its representation power. The graph-segmentation algorithm speeds up superpixel clustering and extracts the superpixels whose features differ markedly from their neighbors. The superpixel features of the same region in adjacent video frames are then compared, and a superpixel is judged a moving target when the feature difference exceeds a threshold. The whole algorithm accurately characterizes weak and small moving targets in sequence images. Compared with the prior art, the invention has the following beneficial effects:
(1) Step 2 performs graph-theoretic modeling of the superpixels, builds their adjacency relations and designs superpixel features, which improves the representation power of the superpixels; graph cutting via depth-first search accelerates the extraction of potential moving targets.
(2) Step 3 compares superpixel features between adjacent frames, which avoids inter-frame search and improves the efficiency of detecting weak and small moving targets.
Drawings
FIG. 1 is an algorithm flow chart
FIG. 2 is a graph of algorithm test results
Detailed Description
The invention will now be further described with reference to the following examples and drawings:
The invention provides a weak and small moving-target detection method based on superpixel adjacent-frame feature comparison, comprising the following steps. Step 1: apply the simple linear iterative clustering algorithm (SLIC) to obtain a superpixel segmentation of the target frame image. Step 2: generate a graph model from the adjacency of the superpixels, design superpixel features, compute the feature differences between superpixels as the edge values of the graph model, and extract potential targets with a graph-segmentation algorithm. Step 3: compare the color features of each potential target region across adjacent frames, and mark the region as a moving target if the color-feature difference exceeds a set threshold.
As shown in fig. 1, the specific embodiment comprises the following steps:
Step 1: obtain a sequence F of images with identical frame width and height; for each frame image f_v (v = 1, 2, ..., n) in the sequence of length n, SLIC (simple linear iterative clustering) is applied to obtain a superpixel segmentation with a manually set segmentation number m, decomposing each frame into m superpixel blocks SP_k(x_z, y_z) (k = 1, 2, ..., m); if a superpixel SP_k contains Z pixels, then (x_z, y_z) (z = 1, 2, ..., Z) are the horizontal and vertical coordinates of the pixel point z contained in SP_k. The origin of the whole image is the upper-left corner; the X axis is the horizontal axis pointing right, and the Y axis is the vertical axis pointing down. The SLIC algorithm is implemented as follows:
Initializing the cluster centers: in the current frame image, the required superpixel number m is estimated from the frame size and the size of the moving target, and the cluster centers are distributed uniformly over the single-frame image. If the picture has Q pixel points, each superpixel covers about Q/m pixels, so the spacing between adjacent initial cluster centers is approximately Cdist = floor(sqrt(Q/m)), where the floor() function rounds down. To prevent an initialized cluster center from falling on an edge position with large image gradient, the position of minimum gradient within the 3 × 3 neighborhood of each center is computed and the center is moved there, completing the initialization of the cluster centers.
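The grid initialization can be sketched as follows (`init_centers` is a hypothetical helper; the move to the 3 × 3 gradient-minimum position is omitted in this sketch):

```python
import math

def init_grid_spacing(Q, m):
    """Cdist = floor(sqrt(Q/m)): spacing of the initial cluster centers
    for a frame of Q pixels split into m superpixels of about Q/m pixels."""
    return math.floor(math.sqrt(Q / m))

def init_centers(height, width, m):
    """Place cluster centers on a uniform Cdist grid, offset by Cdist//2
    so the centers sit inside their cells rather than on the border."""
    cdist = init_grid_spacing(height * width, m)
    return [(y, x)
            for y in range(cdist // 2, height, cdist)
            for x in range(cdist // 2, width, cdist)]
```

For a 20 × 20 frame with m = 16, Cdist = floor(sqrt(400/16)) = 5, giving a 4 × 4 grid of 16 centers.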
Search strategy and distance metric: around each initial cluster center (limited to a 2Cdist × 2Cdist window), the cluster category to which each pixel belongs is determined. The classification depends on the distance D between the pixel and the cluster center, which combines the color distance d_c and the spatial distance d_s:

d_c = sqrt((l_j - l_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2)
d_s = sqrt((x_j - x_i)^2 + (y_j - y_i)^2)
D = sqrt((d_c / N_c)^2 + (d_s / N_s)^2)

where j = 1, ..., 4Cdist^2 indexes the pixels in the 2Cdist × 2Cdist neighborhood belonging to the cluster center center_i; d_c is the distance in Lab color space, with l, a, b the Lab color channel values, and d_s is the spatial Euclidean distance. D is the weighted quadratic mean of the two distances, where N_c is taken as a constant and N_s as Cdist, the two acting as weights. Since each pixel point is covered by several surrounding cluster centers, each pixel is assigned to the cluster center with the smallest D, forming the superpixels.
Iterative operation: calculating the Lab color mean value and the (x, y) coordinate mean value of all pixels in each current new superpixel, moving the clustering center to the (x, y) mean value, and repeating the search until convergence (generally taking 10 iterations).
Step 2: build a graph model G(V, E) on the superpixels obtained from the current frame in step 1; the graph model accelerates clustering and segmentation among the superpixels. In the graph model, V represents the nodes and E the edges, each node being a superpixel. The edge value is the feature difference between adjacent superpixels. Mathematically the graph model is usually represented by an adjacency matrix, denoted Adj; if the whole image is divided into m superpixels, Adj is an m × m symmetric matrix whose diagonal elements represent the relation of a node with itself, so Adj(p, p) = 0; if superpixels SP_p and SP_q are not adjacent, Adj(p, q) = ∞. Once the graph representation is complete, a graph-segmentation algorithm cuts the edges whose value exceeds a certain threshold; the nodes that become isolated are the ones to extract, i.e. the superpixels whose feature difference from the surrounding adjacent superpixels exceeds the threshold. This comprises three processes:
(1) Designing the superpixel features: constructing the graph model requires the feature differences between superpixels, so the superpixel features must be designed first to characterize the intrinsic attributes of each superpixel. Intuitively, the feature includes the three-channel color mean of the pixel block [R_mean, G_mean, B_mean]^T. To raise the degree to which the information of every pixel in the superpixel is used, the per-channel variance features [R_variance, G_variance, B_variance]^T are added; the variance reflects how far each channel's colors deviate within the superpixel block and, to some degree, the texture inside the block. To strengthen the effect of the graph-theoretic segmentation, the gradient features of the four outer edges of the superpixel [Up_grad, Right_grad, Down_grad, Left_grad]^T are added: the contour of a superpixel SP_k in the numbering map I(x, y) locates in the original frame f_v the uppermost (Up), rightmost (Right), lowermost (Down) and leftmost (Left) edge positions, at which the gradients are computed. The numbering map I(x, y) is defined as follows: in the v-th frame f_v, every pixel inside a superpixel is labeled with that superpixel's number index_k; the picture so obtained is the numbering map I(x, y), i.e. I(x_z, y_z) = index_k for (x_z, y_z) ∈ SP_k. Each symbol is specifically defined as follows:
|Up| denotes the number of uppermost pixels belonging to SP_k, and the other directions are defined analogously. The superpixel feature is then:
Feature(SP_k) = [R_mean, G_mean, B_mean, R_variance, G_variance, B_variance, Up_grad, Right_grad, Down_grad, Left_grad]^T.
From these superpixel features, the feature differences between adjacent superpixels can subsequently be computed.
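A sketch of the 10-dimensional feature vector (`superpixel_feature` is a hypothetical helper; the four boundary-gradient terms are left as zero placeholders because computing them requires the contour positions from the numbering map and the frame gradients):

```python
import numpy as np

def superpixel_feature(img, numbering_map, k):
    """10-d feature of superpixel k: per-channel color mean (3 values),
    per-channel variance (3 values), and four boundary-gradient terms.
    The gradient terms Up/Right/Down/Left are zero placeholders here."""
    pix = img[numbering_map == k].astype(float)   # (Z, 3) color values
    mean = pix.mean(axis=0)                       # R_mean, G_mean, B_mean
    var = pix.var(axis=0)                         # per-channel variance
    grads = np.zeros(4)                           # Up, Right, Down, Left
    return np.concatenate([mean, var, grads])
```

A uniformly colored superpixel thus has its mean equal to that color and all variances zero.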
(2) Establishing the graph: the superpixels are characterized as a graph model according to their adjacency. From each superpixel SP_k(x_z, y_z) obtained in step 1 and the contour of each number in the numbering map I(x, y), the adjacency among all nodes is obtained under the 4-adjacency relation and encoded in the graph model, the value of each edge being the feature difference between the adjacent nodes.
The adjacency is defined as 4-adjacency, i.e. the coordinates of any two pixels z, h satisfy |x_z − x_h| + |y_z − y_h| = 1 (z ≠ h). To obtain the numbers index_r of the superpixels adjacent to SP_k, it suffices to traverse the contour of SP_k in the numbering map I(x, y) and record the adjacent numbers index_r under 4-adjacency. Let SP_p and SP_q be adjacent (p, q = 1, 2, ..., m, p ≠ q); from the numbering map I(x, y) the coordinate center of each superpixel is computed, denoted (x_center_p, y_center_p), (x_center_q, y_center_q). For convenience of description, take SP_p as the center and assume SP_q lies to its upper right; the line joining the two centers makes an angle θ with the horizontal. From the superpixel features designed above, the difference between features consists of three parts: the color difference Colordist, the variance difference Vardist, and the gradient-enhancement difference Graddist.
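Under 4-adjacency, the adjacent superpixel pairs can be read directly off the numbering map I(x, y) by comparing each pixel with its right and lower neighbor, as in this sketch (`adjacency_pairs` is a hypothetical helper):

```python
import numpy as np

def adjacency_pairs(numbering_map):
    """Adjacent superpixel pairs under 4-adjacency: pixels z, h are
    neighbors when |x_z - x_h| + |y_z - y_h| = 1, so comparing every
    pixel of the numbering map with its right and lower neighbor
    finds each adjacent pair exactly once (stored as sorted tuples)."""
    pairs = set()
    right = numbering_map[:, :-1] != numbering_map[:, 1:]
    down = numbering_map[:-1, :] != numbering_map[1:, :]
    for a, b in zip(numbering_map[:, :-1][right], numbering_map[:, 1:][right]):
        pairs.add((int(min(a, b)), int(max(a, b))))
    for a, b in zip(numbering_map[:-1, :][down], numbering_map[1:, :][down]):
        pairs.add((int(min(a, b)), int(max(a, b))))
    return pairs
```

These pairs mark exactly the positions (p, q) of the adjacency matrix Adj that receive a finite edge value.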
To ensure symmetry of the adjacency matrix, the gradient-enhancement difference Graddist between SP_p and SP_q must be the same from either side; according to the angle θ, the gradient-enhancement difference computed at the center SP_p combines the two gradient features of SP_p facing SP_q with the two opposing gradient features of SP_q.
Note: the first two terms are the gradient features of SP_p, the last two terms are the gradient features of SP_q.
Then the final edge value between SP_p and SP_q is Adj(p, q) = α·Colordist + β·Vardist + γ·Graddist, where the coefficients take α = 1 and γ = 0.1; since the variance difference is generally large, the variance term is divided by the number of pixels contained in each superpixel. This completes the construction of the graph model G(V, E), i.e. the computation of all elements of the adjacency matrix Adj.
(3) Cutting the graph: according to the graph model, a graph-cut algorithm extracts the potential-target superpixels. Set an edge-value threshold EdgeThreshold and a flag vector IsVisited as an all-zero m-dimensional vector, whose total length equals the number of superpixels. First, starting from the superpixel numbered 1, run a depth-first search on the adjacency matrix Adj: the numbers of all superpixels reached through edges with Adj(p, q) < EdgeThreshold are changed to 1, edges with Adj(p, q) > EdgeThreshold are cut, and every superpixel SP_k traversed before the depth-first search terminates has its flag set, i.e. IsVisited(k) = 1. Second, find a superpixel whose entry in the IsVisited flag vector is still 0 and, in number order, repeat the depth-first search following the steps used for the superpixel numbered 1. Third, loop the second step until every entry of IsVisited is 1. At this point the whole numbering image I(x, y) consists of several large same-numbered regions and some small same-numbered regions (generally, a numbered region containing at most 5 times the number of pixels of one superpixel can be considered a small area), so the graph segmentation, and correspondingly the segmentation of the current frame, is complete. In I(x, y), each small same-numbered region is a potential target region SP^A(x_z, y_z), where the superscript A indexes the A-th potential region and (x_z, y_z) are the coordinates of the pixels in that region; each potential region forms a new superpixel block.
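Selecting the small same-numbered regions can be sketched as follows, using the 5× superpixel-size rule suggested in the text (`small_region_ids` is a hypothetical helper):

```python
import numpy as np

def small_region_ids(region_map, superpixel_pixels, factor=5):
    """After graph segmentation, pick the small same-numbered regions:
    a region whose area does not exceed `factor` times the nominal
    superpixel size (the text suggests factor = 5) is treated as a
    potential target region."""
    ids, counts = np.unique(region_map, return_counts=True)
    return [int(i) for i, c in zip(ids, counts)
            if c <= factor * superpixel_pixels]
```

With a nominal superpixel size of 100 pixels, a 600-pixel region is treated as background while a 20-pixel region becomes a potential target.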
Step 3: according to the potential target regions of step 2, weak and small moving-target detection is achieved through superpixel feature comparison between adjacent frames; the specific implementation is as follows:
Adjacent-frame superpixel feature comparison: let the current frame be f_v and the previous frame be f_{v-1}; step 2 has produced each potential target region SP^A of the current frame f_v. Project the positions of these potential regions one-to-one into frame f_{v-1}, and compute for each projected region the three-channel color mean according to the color-mean method of step 2.
Set the color difference of the two regions at the same position as Coldiff; if Coldiff exceeds the set threshold ColThreshold, the corresponding region in the current frame f_v is judged to be a moving target.
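As a concrete form of Coldiff, the sketch below takes the Euclidean distance between the two three-channel color means (an assumption, since the exact formula is not given in the source; the function names are illustrative):

```python
import math

def coldiff(mean_rgb_v, mean_rgb_prev):
    """Coldiff between a potential region in frame f_v and the same
    pixel positions projected into f_{v-1}: taken here as the Euclidean
    distance between the two three-channel color means (assumed form)."""
    return math.sqrt(sum((a - b) ** 2
                         for a, b in zip(mean_rgb_v, mean_rgb_prev)))

def is_moving_target(mean_rgb_v, mean_rgb_prev, col_threshold):
    """Mark the region as a moving target when Coldiff > ColThreshold."""
    return coldiff(mean_rgb_v, mean_rgb_prev) > col_threshold
```

A region whose color mean is unchanged between the two frames yields Coldiff = 0 and is kept as background.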
Claims (1)
1. A weak and small moving object detection method based on super pixel adjacent frame feature contrast is characterized by comprising the following steps:
step 1: for each frame image f_v in the sequence F of length n, apply the simple linear iterative clustering algorithm SLIC to obtain a superpixel segmentation, decomposing each frame into m superpixel blocks SP_k(x_z, y_z); if a superpixel SP_k contains Z pixel points, then (x_z, y_z) are the horizontal and vertical coordinates of the pixel point z contained in SP_k; the origin of the whole image is the upper-left corner, the X axis is the horizontal axis pointing right, and the Y axis is the vertical axis pointing down; here v = 1, 2, ..., n, k = 1, 2, ..., m, and z = 1, 2, ..., Z;
the SLIC algorithm is specifically as follows:
initializing the cluster centers: distribute the cluster centers uniformly over the single-frame image according to the set superpixel number m; if the picture has Q pixel points, each superpixel covers about Q/m pixels, so the spacing between adjacent initial cluster centers is approximately Cdist = floor(sqrt(Q/m)), where the floor() function rounds down; within the 3 × 3 neighborhood of each cluster center, compute the position of minimum gradient and move the center there, completing the initialization of the cluster centers;
search strategy and distance metric: within the 2Cdist × 2Cdist window around each initial cluster center center_i, the cluster category of each pixel point is determined by its distance to the center; the distance D combines the color distance d_c and the spatial distance d_s, computed as:

d_c = sqrt((l_j - l_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2)
d_s = sqrt((x_j - x_i)^2 + (y_j - y_i)^2)
D = sqrt((d_c / N_c)^2 + (d_s / N_s)^2)

where j = 1, 2, ..., 4Cdist^2 indexes the pixels in the 2Cdist × 2Cdist neighborhood belonging to the cluster center center_i; d_c is the distance in Lab color space, with l, a, b the Lab color channel values; d_s is the spatial Euclidean distance; N_c is taken as a constant and N_s is taken as Cdist;
Dividing each pixel point to a corresponding clustering center with the minimum D value to form a super pixel;
iterative operation: calculating the Lab color mean value and the (x, y) coordinate mean value of all pixels in each current new superpixel, moving the clustering center to the (x, y) mean value, and repeatedly searching until convergence;
step 2: establishing a graph-theoretic representation of the superpixels; the graph model is denoted G(V, E), where V represents the nodes and E the edges; each node is a superpixel, and an edge represents the adjacency of one superpixel to another: if two superpixels are adjacent an edge exists, otherwise it does not; the edge value is the feature difference between the adjacent superpixels; mathematically the graph model is usually represented by an adjacency matrix, denoted Adj; if the whole image is divided into m superpixels, Adj is an m × m symmetric matrix whose diagonal elements represent the relation of a node with itself, so Adj(p, p) = 0; if superpixels SP_p and SP_q are not adjacent, Adj(p, q) = ∞; after the adjacency matrix is constructed, graph cutting removes, between each node and its surrounding neighbors, every edge whose value exceeds the threshold, i.e. the edges whose feature difference exceeds the set threshold are cut; this comprises three processes:
(1) designing super pixel characteristics: the superpixel feature comprises three-channel color mean value R of the pixel blockmean,Gmean,Bmean]TThe variance of each channel is characterized [ R ]variance,Gvariance,Bvariance]TAnd the gradient feature of the peripheral edge [ Upgrad,Rightgrad,Downgrad,Leftgrad]TAdding superpixel features, a superpixel SPkCan be represented by the original image f corresponding to the contour of the numbered graph I (x, y)vCalculating gradients of the uppermost Up, the rightmost Right, the lowermost Down and the leftmost Left at each edge position; where the definition of numbering scheme I (x, y) is: the v frame picture fvIn (2), the pixel identification in each superpixel is the number index of the superpixelkThe pictures thus obtained are denoted by the numbering diagrams I (x, y), I (x)z,yz)=indexk,(xz,yz)∈SPk(ii) a The features are specifically defined as follows:
|Up| denotes the number of uppermost pixels belonging to SP_k; the remaining directions are defined analogously. The superpixel feature vector is then feature(SP_k) = [R_mean, G_mean, B_mean, R_variance, G_variance, B_variance, Up_grad, Right_grad, Down_grad, Left_grad]^T. From these features, the feature difference between every pair of adjacent superpixels is computed in the following process;
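The color-mean and variance parts of the feature vector above can be sketched as follows (this code is not part of the patent text; the gradient terms are omitted, and the function name and array layout are illustrative assumptions):

```python
import numpy as np

def superpixel_features(frame, labels, k):
    """Compute the first six entries of feature(SP_k): the three-channel
    color mean and the per-channel variance of the pixels of superpixel k.

    frame  : H x W x 3 float array (the frame f_v)
    labels : H x W int array, the number map I(x, y)
    k      : superpixel index (index_k)
    """
    mask = labels == k                 # pixels belonging to SP_k
    pixels = frame[mask]               # shape (n, 3): all RGB triples of SP_k
    mean = pixels.mean(axis=0)         # [R_mean, G_mean, B_mean]
    var = pixels.var(axis=0)           # [R_variance, G_variance, B_variance]
    return np.concatenate([mean, var])
```

A uniform-color superpixel yields zero variance, so distinct flat regions are separated purely by their color means.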
(2) establishing the graph: step 1 yields the pixel coordinates (x_z, y_z) of each superpixel SP_k; to obtain the adjacency relations between superpixels, the judgment relies on the contour of each index in the number map I(x, y);
the adjacency relation is defined as 4-adjacency, i.e. the coordinates of any two pixels z and h satisfy |x_z − x_h| + |y_z − y_h| = 1 (z ≠ h). To obtain the indices index_r of the superpixels adjacent to SP_k, it suffices to traverse the contour of SP_k in the number map I(x, y) and record the neighbouring indices index_r found under 4-adjacency. Let SP_p and SP_q be adjacent, with p, q = 1, …, m and p ≠ q. The coordinate center of each superpixel is computed from the number map I(x, y) and denoted (x_center_p, y_center_p) and (x_center_q, y_center_q). Taking SP_p as the center and supposing SP_q lies to its upper right, the line joining the two centers forms an included angle with the horizontal. From the superpixel features designed above, the feature difference is made up of three parts: the color difference Colordist, the variance difference Vardist, and the gradient-enhanced difference Graddist;
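The 4-adjacency search described above can be sketched as follows (not part of the patent text; scanning every pixel of SP_k rather than only its contour is a simplification):

```python
import numpy as np

def adjacent_superpixels(labels, k):
    """Return the set of superpixel indices that are 4-adjacent to
    superpixel k in the number map I(x, y) = labels."""
    neighbours = set()
    h, w = labels.shape
    ys, xs = np.nonzero(labels == k)
    for y, x in zip(ys, xs):
        # 4-adjacency: |x_z - x_h| + |y_z - y_h| = 1
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != k:
                neighbours.add(int(labels[ny, nx]))
    return neighbours
```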
to ensure symmetry of the adjacency matrix, the gradient-enhanced difference Graddist between SP_p and SP_q must be guaranteed to be the same whichever superpixel is taken as the center; with SP_p as the center, the gradient-enhanced difference is:
note: the first two terms above are gradient features of SP_p, and the last two terms are gradient features of SP_q;
finally, the edge weight between SP_p and SP_q is Adj(p, q) = α·Colordist + β·Vardist + γ·Graddist, with α = 1 and γ = 0.1. This completes the construction of the graph model G(V, E), i.e. the computation of all elements of the adjacency matrix Adj;
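The construction of the adjacency matrix can be sketched as follows (not part of the patent text; Euclidean distance is assumed for each of the three difference terms, and β = 1 is an assumed value since the patent text above only states α = 1 and γ = 0.1):

```python
import numpy as np

def build_adjacency(features, neighbours, alpha=1.0, beta=1.0, gamma=0.1):
    """Fill the m x m symmetric adjacency matrix Adj.

    features   : list of m 10-dimensional feature vectors feature(SP_k)
    neighbours : list of m sets, neighbours[p] = indices adjacent to SP_p
    """
    m = len(features)
    adj = np.full((m, m), np.inf)   # non-adjacent pairs: Adj(p, q) = infinity
    np.fill_diagonal(adj, 0.0)      # Adj(p, p) = 0
    for p in range(m):
        for q in neighbours[p]:
            f_p, f_q = features[p], features[q]
            colordist = np.linalg.norm(f_p[0:3] - f_q[0:3])   # color means
            vardist = np.linalg.norm(f_p[3:6] - f_q[3:6])     # variances
            graddist = np.linalg.norm(f_p[6:10] - f_q[6:10])  # gradients
            w = alpha * colordist + beta * vardist + gamma * graddist
            adj[p, q] = adj[q, p] = w   # enforce symmetry
    return adj
```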
(3) graph segmentation: given the graph model, a graph-cut algorithm extracts the potential-target superpixels. Set a threshold EdgeThreshold on the edge weight, and initialize a marker vector IsVisited as an all-zero m-dimensional vector whose length equals the number of superpixels. First, starting from the superpixel numbered 1, perform a depth-first search over the adjacency matrix Adj: every superpixel reached through an edge with Adj(p, q) < EdgeThreshold is renumbered 1, edges with Adj(p, q) > EdgeThreshold are cut, and each traversed superpixel SP_k has its marker set, i.e. IsVisited(k) = 1, until the depth-first search terminates. Second, find a superpixel whose entry in IsVisited is still 0 and, in index order, perform a depth-first search following the same operation as for the superpixel numbered 1. Third, loop the second step until every entry of IsVisited is 1. At this point the whole number image I(x, y) consists of several large same-numbered regions and several small same-numbered regions, so the graph segmentation is completed, and the segmentation of the current frame is completed accordingly. In I(x, y), each small same-numbered region is a potential target region, where the superscript A denotes the A-th potential region and (x_z, y_z) are the coordinates of the pixels in each potential region; each potential region forms a new superpixel block.
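The depth-first grouping in the three steps above can be sketched as follows (not part of the patent text; instead of renumbering regions in-place, this sketch returns a component label per superpixel, which is equivalent):

```python
import numpy as np

def graph_cut_components(adj, edge_threshold):
    """Group superpixels by depth-first search over the adjacency matrix:
    edges with weight below edge_threshold are kept, heavier (or infinite)
    edges are cut. Returns one component label per superpixel; IsVisited
    guarantees every superpixel is processed exactly once."""
    m = adj.shape[0]
    is_visited = np.zeros(m, dtype=int)   # the IsVisited marker vector
    component = np.full(m, -1, dtype=int)
    current = 0
    for start in range(m):                # steps 2-3: next unvisited node
        if is_visited[start]:
            continue
        stack = [start]                   # step 1: depth-first search
        while stack:
            k = stack.pop()
            if is_visited[k]:
                continue
            is_visited[k] = 1
            component[k] = current
            for q in range(m):
                if not is_visited[q] and adj[k, q] < edge_threshold:
                    stack.append(q)       # keep edge: same region
        current += 1
    return component
```

After this pass, small components (few member superpixels) are the potential target regions, while large components form the background.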
Step 3: from the potential target regions of step 2, weak and small moving targets are detected by comparing superpixel features across adjacent frames. The specific process is as follows:
adjacent-frame superpixel feature comparison: let the current frame be f_v and the previous frame be f_{v-1}. Each potential target region of the current frame f_v has been obtained in step 2. The positions of these potential regions are projected one-to-one onto frame f_{v-1}, and the three-channel color mean of each projected region is obtained by the color-mean computation of step 2. The color difference between the two regions at the same position is then calculated; if it exceeds the threshold ColThreshold, the corresponding region in the current frame f_v is judged to be a moving target.
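The adjacent-frame comparison can be sketched as follows (not part of the patent text; Euclidean distance over the three color channels is an assumed difference measure):

```python
import numpy as np

def detect_moving_regions(frame_v, frame_prev, labels, potential_ids,
                          col_threshold):
    """For each potential target region, compare the three-channel color
    mean at the SAME pixel positions in the current frame f_v and the
    previous frame f_{v-1}; a difference above col_threshold marks the
    region as a moving target.

    frame_v, frame_prev : H x W x 3 float arrays
    labels              : H x W int array of region indices
    potential_ids       : indices of the potential target regions
    """
    moving = []
    for a in potential_ids:
        mask = labels == a                        # project positions one-to-one
        mean_v = frame_v[mask].mean(axis=0)       # color mean in f_v
        mean_prev = frame_prev[mask].mean(axis=0) # color mean in f_{v-1}
        if np.linalg.norm(mean_v - mean_prev) > col_threshold:
            moving.append(a)
    return moving
```

Because the comparison reuses the current frame's region positions directly, no inter-frame search is needed, which is the efficiency gain claimed in the abstract.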
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911038717.4A CN110910417B (en) | 2019-10-29 | 2019-10-29 | Weak and small moving target detection method based on super-pixel adjacent frame feature comparison |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911038717.4A CN110910417B (en) | 2019-10-29 | 2019-10-29 | Weak and small moving target detection method based on super-pixel adjacent frame feature comparison |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110910417A true CN110910417A (en) | 2020-03-24 |
CN110910417B CN110910417B (en) | 2022-03-29 |
Family
ID=69815750
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911038717.4A Active CN110910417B (en) | 2019-10-29 | 2019-10-29 | Weak and small moving target detection method based on super-pixel adjacent frame feature comparison |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110910417B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113112527A (en) * | 2021-03-26 | 2021-07-13 | 西北工业大学 | Moving small target detection method based on H264 video code stream |
CN115311276A (en) * | 2022-10-11 | 2022-11-08 | 江苏华维光电科技有限公司 | Intelligent segmentation method for ferrographic image based on machine vision |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120229790A1 (en) * | 2008-04-11 | 2012-09-13 | Microsoft Corporation | Method and system to reduce stray light reflection error in time-of-flight sensor arrays |
CN104966286A (en) * | 2015-06-04 | 2015-10-07 | 电子科技大学 | 3D video saliency detection method |
CN105760886A (en) * | 2016-02-23 | 2016-07-13 | 北京联合大学 | Image scene multi-object segmentation method based on target identification and saliency detection |
CN105930868A (en) * | 2016-04-20 | 2016-09-07 | 北京航空航天大学 | Low-resolution airport target detection method based on hierarchical reinforcement learning |
CN105976378A (en) * | 2016-05-10 | 2016-09-28 | 西北工业大学 | Graph model based saliency target detection method |
CN106447679A (en) * | 2016-10-17 | 2017-02-22 | 大连理工大学 | Obviousness detection method based on grabcut and adaptive cluster clustering |
CN106780430A (en) * | 2016-11-17 | 2017-05-31 | 大连理工大学 | A kind of image significance detection method based on surroundedness and Markov model |
CN106997597A (en) * | 2017-03-22 | 2017-08-01 | 南京大学 | It is a kind of based on have supervision conspicuousness detection method for tracking target |
CN107016691A (en) * | 2017-04-14 | 2017-08-04 | 南京信息工程大学 | Moving target detecting method based on super-pixel feature |
CN108717539A (en) * | 2018-06-11 | 2018-10-30 | 北京航空航天大学 | A kind of small size Ship Detection |
CN109559316A (en) * | 2018-10-09 | 2019-04-02 | 浙江工业大学 | A kind of improved graph theory dividing method based on super-pixel |
CN110163822A (en) * | 2019-05-14 | 2019-08-23 | 武汉大学 | The netted analyte detection and minimizing technology and system cut based on super-pixel segmentation and figure |
Non-Patent Citations (7)
Title |
---|
AMANDA K. ZIEMANN 等: "Hyperspectral target detection using graph theory models and manifold geometry via an adaptive implementation of locally linear embedding", 《PROCEEDINGS OF SPIE - THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING》 * |
WENWEN PAN 等: "Image Saliency Detection Algorithm Based on Super Pixels Partition", 《4TH INTERNATIONAL CONFERENCE ON ADVANCED MATERIALS AND INFORMATION TECHNOLOGY PROCESSING (AMITP 2016)》 * |
XIAOFEI SUN 等: "Salient Region Detection Based on SLIC and Graph-based Segmentation", 《4TH INTERNATIONAL CONFERENCE ON ADVANCED MATERIALS AND INFORMATION TECHNOLOGY PROCESSING (AMITP 2016)》 * |
云红全 等: "基于超像素时空显著性的运动目标检测算法", 《红外技术》 * |
苏帅 等: "基于图论的复杂交通环境下车辆检测方法", 《北京交通大学学报》 * |
陈佳 等: "一种基于帧差分法与快速图分割相结合的运动目标检测方法", 《现代电子技术》 * |
魏伟一 等: "基于邻域优化机制的图像显著性目标检测", 《计算机工程与科学》 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113112527A (en) * | 2021-03-26 | 2021-07-13 | 西北工业大学 | Moving small target detection method based on H264 video code stream |
CN113112527B (en) * | 2021-03-26 | 2024-01-09 | 西北工业大学 | H264 video code stream-based small moving object detection method |
CN115311276A (en) * | 2022-10-11 | 2022-11-08 | 江苏华维光电科技有限公司 | Intelligent segmentation method for ferrographic image based on machine vision |
CN115311276B (en) * | 2022-10-11 | 2023-01-17 | 江苏华维光电科技有限公司 | Intelligent segmentation method for ferrographic image based on machine vision |
Also Published As
Publication number | Publication date |
---|---|
CN110910417B (en) | 2022-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108537239B (en) | Method for detecting image saliency target | |
CN107862702B (en) | Significance detection method combining boundary connectivity and local contrast | |
CN104820990A (en) | Interactive-type image-cutting system | |
CN109974743B (en) | Visual odometer based on GMS feature matching and sliding window pose graph optimization | |
CN102542571B (en) | Moving target detecting method and device | |
CN106952294B (en) | A kind of video tracing method based on RGB-D data | |
CN110910421B (en) | Weak and small moving object detection method based on block characterization and variable neighborhood clustering | |
CN106611427A (en) | A video saliency detection method based on candidate area merging | |
CN112184759A (en) | Moving target detection and tracking method and system based on video | |
CN105809716B (en) | Foreground extraction method integrating superpixel and three-dimensional self-organizing background subtraction method | |
CN109064522A (en) | The Chinese character style generation method of confrontation network is generated based on condition | |
CN110006444B (en) | Anti-interference visual odometer construction method based on optimized Gaussian mixture model | |
CN110910417B (en) | Weak and small moving target detection method based on super-pixel adjacent frame feature comparison | |
CN108154158B (en) | Building image segmentation method for augmented reality application | |
CN111310768B (en) | Saliency target detection method based on robustness background prior and global information | |
CN103051915A (en) | Manufacture method and manufacture device for interactive three-dimensional video key frame | |
CN112465021B (en) | Pose track estimation method based on image frame interpolation method | |
CN109754440A (en) | A kind of shadow region detection method based on full convolutional network and average drifting | |
CN109559328A (en) | A kind of Fast image segmentation method and device based on Bayesian Estimation and level set | |
CN110853064A (en) | Image collaborative segmentation method based on minimum fuzzy divergence | |
CN109034258A (en) | Weakly supervised object detection method based on certain objects pixel gradient figure | |
CN107392211B (en) | Salient target detection method based on visual sparse cognition | |
CN111414938B (en) | Target detection method for bubbles in plate heat exchanger | |
Xu et al. | Crosspatch-based rolling label expansion for dense stereo matching | |
CN113409332B (en) | Building plane segmentation method based on three-dimensional point cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||