CN109087330A - A moving-target detection method based on coarse-to-fine image segmentation - Google Patents


Info

Publication number
CN109087330A
CN109087330A (application CN201810589151.3A)
Authority
CN
China
Prior art keywords
pixel
super
segmentation
image
labeled
Prior art date
Legal status
Pending
Application number
CN201810589151.3A
Other languages
Chinese (zh)
Inventor
朱效洲
曹璐
姚雯
陈小前
赵勇
白玉铸
王祎
Current Assignee
National Defense Technology Innovation Institute PLA Academy of Military Science
Original Assignee
National Defense Technology Innovation Institute PLA Academy of Military Science
Priority date
Filing date
Publication date
Application filed by National Defense Technology Innovation Institute, PLA Academy of Military Science
Priority to CN201810589151.3A
Publication of CN109087330A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/215 Motion-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A moving-target detection method based on coarse-to-fine image segmentation, comprising: acquiring n frames of detection images; extracting feature points from the first frame and tracking them through the subsequent n-1 frames to generate motion cues, then performing motion segmentation on the motion cues to determine the class of each feature point in the n-th frame; performing super-pixel segmentation on the n-th frame to reduce its dimensionality; labeling the super-pixels according to the classes of the marked points in the n-th frame, clustering the super-pixels with a similarity measure, and completing a coarse segmentation of the image on the basis of the clustering; and constructing a four-value map from the coarse-segmentation image, then performing a fine segmentation on the four-value map, so as to detect the moving target accurately. Thanks to the coarse-to-fine segmentation, the invention detects moving objects in an image not only with high accuracy but also at high speed, making it better suited to detecting rapidly moving objects.

Description

A moving-target detection method based on coarse-to-fine image segmentation
Technical field
The present invention relates to the field of target detection in spacecraft close-range perception, and in particular to a moving-target detection method based on coarse-to-fine image segmentation, for detecting moving objects in images quickly and accurately.
Background art
Existing vision-based moving-target detection methods fall broadly into two classes: background subtraction (Background Subtraction) methods and methods based on motion cues (Motion Cue).
Background subtraction methods first model the background appearance with pixel features such as Gaussian mixture models (Gaussian Mixture Model, GMM) and codebooks (Codebook), or with texture features such as local binary patterns (Local Binary Patterns, LBP) and scale-invariant local ternary patterns (Scale Invariant Local Ternary Patterns, SILTP). At detection time the image and the model are subtracted, and regions whose difference exceeds a given threshold are considered foreground. Although background subtraction has made significant progress over the past decade, such methods remain best suited to scenes with a static camera and a static or slowly changing background.
Motion-cue methods, as the name suggests, detect moving targets from their differing motion patterns. The basic process is to first compute the optical flow (Optical Flow) between two adjacent frames and use it as a motion cue to initialize the moving target's boundary, labeling the pixels inside and outside the boundary as foreground and background respectively; the foreground and background labels are then refined iteratively to separate the target from the background. Optical flow, however, exploits only the target's motion between two consecutive frames, and tends to fail when the displacement between frames is insufficient, when occlusion occurs, or in low-texture regions. For this reason, point trajectories (Point Trajectory) with class labels have become popular motion cues in recent years. Unlike optical flow, a point trajectory encodes a target's motion over many consecutive frames. To generate trajectories, feature points are first extracted in an image and then tracked through the subsequent images; connecting a feature point's pixel coordinates across consecutive images yields one trajectory. Since feature points from the same target move identically while feature points from different targets move differently, the generated trajectories can be classified and labeled, for example as belonging to the background or the foreground, using motion-segmentation methods such as factorization, random sample consensus (RANdom SAmple Consensus, RANSAC) or spectral clustering (Spectral Clustering). Trajectories with the same label share the same motion and their feature points belong to the same target, so the labels can be used to detect the moving target in subsequent processing.
Sheikh et al. and Petit use point trajectories labeled as foreground or background as cues: the corresponding feature-point pixels serve as sparse samples from which separate appearance models of the foreground and the background are built, and every pixel in the image is then labeled point by point via maximum a posteriori (Maximum a Posteriori, MAP) estimation. Operating directly at the pixel level, however, means processing hundreds of thousands of pixels, so the computational cost is high.
A common way to address this problem is to extract superpixels (Superpixel) in a preprocessing step. Compared with processing tens of thousands of pixels, operating on a few hundred superpixels greatly reduces the complexity of the subsequent processing. The concept of a superpixel, a group of pixels with similar color or other low-level features, was introduced by Ren et al. in 2003.
Ochs et al. detect moving targets using semi-dense labeled point trajectories as motion cues; "semi-dense" because the density of the labeled points lies between sparse (the extracted feature points) and dense (point-by-point labels). Their algorithm first generates multi-layer superpixels with hierarchical image segmentation (Hierarchical Image Segmentation, HIS), then merges the superpixels with a multi-layer variational method, extending the semi-dense labels to dense ones with good results. Each component of the method, however, is computationally expensive, and the HIS step in particular consumes considerable memory, so the method is better suited to offline scenarios. Ellis et al. proposed an online learning method for moving-object segmentation: the sparse labeled trajectories provide samples for learning appearance cues on the one hand, and spatial coordinates for shape and location cues on the other; using these cues, an online random forest (Online Random Forest, ORF) learns and classifies at the superpixel level over multiple scales. Yet even with a multi-scale strategy, superpixel contours do not necessarily agree well with object contours.
Summary of the invention
In view of the above, the invention discloses a moving-target detection method based on coarse-to-fine image segmentation, comprising: acquiring n frames of detection images; extracting feature points from the first frame and tracking them through the subsequent n-1 frames to generate motion cues, then performing motion segmentation on the motion cues to determine the class of each feature point in the n-th frame; performing super-pixel segmentation on the n-th frame to reduce its dimensionality; labeling the super-pixels according to the classes of the marked points in the n-th frame, clustering the super-pixels with a similarity measure, and completing the coarse segmentation of the image on the basis of the clustering; and constructing a four-value map from the coarse-segmentation image, then performing fine segmentation on the four-value map, so as to detect the moving target accurately.
Further, feature points are extracted from the first frame by corner detection, which comprises: computing, from the neighborhood information of each pixel in the first frame, the quadratic autocorrelation function of the pixel after a small translation, yielding multiple autocorrelation quadratic functions; taking the smaller of the two eigenvalues corresponding to each autocorrelation quadratic function as the judgment criterion; and selecting, among all the smaller eigenvalues, the pixels with the larger values as feature points.
Further, before feature points are extracted from the first frame, the first frame is divided into a grid.
Further, the feature points comprise foreground feature points and background feature points.
Further, when super-pixel segmentation is performed on the n-th frame, the number of super-pixels is set by a desired quantity, which is set equal to the number of grid cells used when gridding the first frame, so that each super-pixel contains a class-labeled feature point.
Further, labeling the super-pixels comprises: if a super-pixel contains only feature points labeled background, the super-pixel is labeled background; if it contains only feature points labeled foreground, it is labeled foreground; if it contains feature points of both labels, or neither, it is labeled uncertain.
Further, clustering the super-pixels comprises: extracting features from the super-pixels, the features including color features and texture features; and clustering the super-pixels by spectral clustering according to their color and texture features to obtain cluster regions.
Further, the coarse segmentation is performed at the image level and comprises: if the number of super-pixels labeled foreground in a cluster region exceeds the number labeled background, all uncertain super-pixels in the region are labeled foreground; if the number labeled background exceeds the number labeled foreground, all uncertain super-pixels in the region are labeled background; if the number labeled foreground equals the number labeled background, all uncertain super-pixels in the region are labeled background.
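The cluster-level voting rule above can be sketched in Python; the function name and the list representation of a cluster region are illustrative assumptions, not from the patent:

```python
def resolve_cluster(superpixel_labels):
    """Relabel every 'uncertain' super-pixel in one cluster region by
    majority vote over the definite labels; a tie goes to 'background',
    matching the rule above."""
    fg = superpixel_labels.count("foreground")
    bg = superpixel_labels.count("background")
    fill = "foreground" if fg > bg else "background"
    return [fill if s == "uncertain" else s for s in superpixel_labels]

region = ["foreground", "foreground", "background", "uncertain"]
resolved = resolve_cluster(region)
```

Note that the tie case deliberately falls back to background, since the background is assumed to occupy the larger part of the image.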
Further, constructing the four-value map comprises: for a super-pixel labeled foreground, if all of its neighboring super-pixels are labeled foreground, the super-pixel is labeled definite foreground, otherwise possible foreground; for a super-pixel labeled background, if all of its neighboring super-pixels are labeled background, the super-pixel is labeled definite background, otherwise possible background.
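The four-value labeling rule can likewise be sketched; the dictionary representation of the super-pixels and their adjacency is an illustrative assumption:

```python
def quad_map(labels, neighbours):
    """Refine coarse foreground/background super-pixel labels into the
    four-value map: a label becomes 'definite' when every neighbouring
    super-pixel carries the same label, and 'possible' otherwise."""
    out = {}
    for sp, lab in labels.items():
        if all(labels[n] == lab for n in neighbours[sp]):
            out[sp] = "definite " + lab
        else:
            out[sp] = "possible " + lab
    return out

labels = {0: "foreground", 1: "foreground", 2: "background"}
neighbours = {0: [1], 1: [0, 2], 2: [1]}     # adjacency of super-pixels
refined = quad_map(labels, neighbours)
```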
Further, the fine segmentation is performed at the pixel level and comprises: running a GrabCut processor on the basis of the four-value map, keeping the definite foreground pixels and definite background pixels fixed during its iterative analysis, and iterating only over the possible foreground pixels and possible background pixels until a satisfactory result is reached.
An advantage of the invention is that, by combining super-pixel segmentation with a labeled motion-cue generation method, the segmentation of the image is divided into a coarse segmentation at the image level and a fine segmentation at the pixel level. Because the super-pixel step reduces the dimensionality of the image before any pixel-level processing, and because the four-value map provides a better initial value for the iterative update, the method detects moving objects in an image faster. Labeling the super-pixels with the labeled motion cues yields the coarse-segmentation image; introducing labeled super-pixels on top of the traditional trimap to build a four-value map, which is then analyzed iteratively, gives the method higher detection accuracy and a more precise detection result.
Detailed description of the invention
Various other advantages and benefits will become clear to those of ordinary skill in the art upon reading the following detailed description. The drawings serve only to illustrate specific embodiments and are not to be taken as limiting the invention; throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 is the flow chart of the moving-object detection method of the invention.
Fig. 2 is the processing pipeline of the moving-object detection method of the invention.
Fig. 3 is a schematic of the feature-point distribution obtained by a prior-art method.
Fig. 4 is a schematic of the marked-point distribution obtained by the invention.
Fig. 5 is a schematic of the feature-point trajectories obtained with the motion-cue generation method of the invention.
Fig. 6 is a schematic of the feature-point classification obtained by the invention.
Fig. 7 is a schematic of the super-pixel segmentation of the invention.
Fig. 8 is a schematic of the initial super-pixel labels of the invention.
Fig. 9 is a schematic of the super-pixel clustering obtained by spectral clustering from the super-pixel segmentation result (Fig. 7) of the invention.
Fig. 10 is a schematic of the coarse segmentation of the invention.
Fig. 11 is a schematic of the four-value map constructed by the invention from the coarse-segmentation image.
Fig. 12 is a schematic of the four-value map of the invention after updating.
Fig. 13 is a schematic of the fine segmentation of the invention, obtained by multiple iterations of the GrabCut processor.
Specific embodiment
Illustrative embodiments of the present disclosure are described more fully below with reference to the accompanying drawings. Although the drawings show illustrative embodiments of the disclosure, it should be understood that the disclosure may be embodied in various forms and is not limited to the embodiments set forth here; rather, these embodiments are provided so that the disclosure will be understood thoroughly and its scope fully conveyed to those skilled in the art.
As shown in Fig. 1, the flow of the moving-object detection method of the invention comprises: acquiring multiple frames of detection images; extracting feature points from the first frame and tracking them through the subsequent n-1 frames to generate motion cues, then performing motion segmentation on the motion cues to determine the class of each feature point in the n-th frame; performing super-pixel segmentation on the n-th frame to reduce its dimensionality; labeling the super-pixels according to the classes of the marked points in the n-th frame, clustering the super-pixels with a similarity measure, and completing the coarse segmentation of the image on the basis of the clustering; and constructing a four-value map from the coarse-segmentation image, then performing fine segmentation on it, so as to detect the moving target accurately. To detect the moving target in the image quickly, the invention first coarsely segments the image at the image level to obtain a coarse-segmentation image, and then finely segments the coarse-segmentation image at the pixel level; because the coarse segmentation greatly reduces the dimensionality of the image, the difficulty and cost of the fine segmentation are greatly reduced, enabling fast detection of moving objects in the image. Moreover, by incorporating class-labeled motion cues during the coarse segmentation, the invention guarantees the relative accuracy of the coarse segmentation, and hence the accuracy of the fine segmentation performed on the coarse result, so that the detection of moving objects is not only fast but also highly accurate. The method of the invention is described in detail below with reference to the remaining drawings:
As shown in Fig. 2, the processing pipeline of the moving-object detection method of the invention comprises: motion-cue generation, coarse segmentation at the super-pixel image level, and fine segmentation at the pixel level. The invention labels the super-pixels with class-labeled feature points (hereafter "marked points") to complete the coarse segmentation, ensuring its accuracy, and obtains the marked points with grid-based processing so that their distribution is more uniform, which in turn lays the foundation for a more accurate coarse segmentation. The steps of the detection process are described below:
Acquisition of feature points
The invention uses the Shi-Tomasi corner detection method to extract the marked points in the first frame, where a marked point is a labeled feature point, and the feature points are the subset of pixels in the first frame with the highest corner response. The extraction proceeds as follows: first, from the neighborhood information of each pixel in the image, the quadratic autocorrelation function of the pixel after a small translation of itself is computed in a local region, yielding one or more autocorrelation quadratic functions; then, since each quadratic function corresponds to two eigenvalues, the smaller of the two is taken as the judgment criterion, and among all the smaller eigenvalues the pixels with the largest values are chosen as feature points, completing the extraction for the first frame. Using this extraction alone, however, the feature points may be unevenly distributed.
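As an aside, the smaller-eigenvalue criterion just described can be sketched under stated assumptions (pure NumPy, a wrap-around box window, a synthetic test image; this is not the patent's implementation):

```python
import numpy as np

def shi_tomasi_response(img, win=3):
    """Shi-Tomasi corner response: the smaller eigenvalue of the local
    autocorrelation (structure tensor) matrix at every pixel."""
    img = img.astype(float)
    iy, ix = np.gradient(img)                  # derivatives along y and x
    r = win // 2

    def box(a):                                # sum over a win x win window
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    sxx, syy, sxy = box(ix * ix), box(iy * iy), box(ix * iy)
    # smaller eigenvalue of [[sxx, sxy], [sxy, syy]] in closed form
    return 0.5 * (sxx + syy - np.sqrt((sxx - syy) ** 2 + 4 * sxy ** 2))

img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0          # bright square: corners, edges, flat regions
resp = shi_tomasi_response(img)
```

A corner pixel such as (5, 5) produces a large response, while an edge midpoint such as (10, 5) and a flat region produce a response near zero, which is exactly why the smaller eigenvalue serves as the judgment criterion.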
Fig. 3 shows the distribution of feature points obtained by a prior-art method, where the light grey "+" marks are the extracted feature points. In Fig. 3, a large number of feature points are concentrated in the foreground, while in the background, which occupies the larger part of the image, the feature points are few in number and densely clustered; this phenomenon hinders the subsequent image analysis. To prevent it, the method of the invention first grids the image and then extracts feature points from the gridded image: the image is divided evenly into multiple grid cells, and in each cell a fixed proportion of the strongest-responding pixels is chosen as feature points. This ensures that the extracted feature points are roughly uniformly distributed over the whole image, completing the extraction of feature points from the first frame; the resulting distribution of marked points is shown in Fig. 4, where the light grey and grey "+" marks denote the two kinds of feature points. In addition, through the gridding, the number of grid cells provides a direct reference for setting the desired number of super-pixels in the subsequent super-pixel segmentation, so that the two methods combine for a better coarse segmentation.
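The per-cell selection described above can be sketched as follows; the grid size and the per-cell count are illustrative parameters:

```python
import numpy as np

def grid_features(response, grid=(4, 4), per_cell=2):
    """Pick the strongest corner responses separately in each grid cell,
    so the chosen feature points cover the whole image roughly evenly."""
    h, w = response.shape
    gy, gx = grid
    pts = []
    for i in range(gy):
        for j in range(gx):
            y0, y1 = i * h // gy, (i + 1) * h // gy
            x0, x1 = j * w // gx, (j + 1) * w // gx
            cell = response[y0:y1, x0:x1]
            # take the per_cell largest responses inside this cell
            for f in np.argsort(cell.ravel())[::-1][:per_cell]:
                pts.append((y0 + int(f) // (x1 - x0), x0 + int(f) % (x1 - x0)))
    return pts

rng = np.random.default_rng(0)
response = rng.random((16, 16))        # stand-in for a corner-response map
points = grid_features(response)       # 4 * 4 cells, 2 points each
```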
Tracking of feature points
The invention tracks the marked points with the Kanade-Lucas-Tomasi (KLT) tracker: after the first frame, the feature points extracted in the first frame are tracked through the following n-1 frames, and a bidirectional-error constraint is applied to make the tracker more robust to noise under complex background conditions, increasing the success rate of feature-point tracking.
Labeling the feature points to obtain marked points
Fig. 5 shows the feature-point trajectories obtained with the motion-cue generation method of the invention, where short line segments denote the motion trajectories of different feature points. The invention uses the labeled motion-cue generation method to obtain the trajectories of the continuously and successfully tracked feature points; since feature points belonging to the same rigid object share the same motion trajectory while feature points belonging to different moving objects do not, the feature points can be classified, as shown in Fig. 6:
Fig. 6 shows the feature-point classification obtained by the invention, where the "+" marks are marked points, and the white and black marked points denote points from two different depths of field (foreground or background); the gridding of the image described above makes the marked points more evenly distributed over the image. Optionally, the different classes of feature points are marked in the n-th frame in different colors. The classification proceeds as follows:
Let the trajectory of the i-th feature point be written as T_i = [x_1^i, y_1^i, x_2^i, y_2^i, ..., x_F^i, y_F^i]^T, where (x_f^i, y_f^i) are the pixel coordinates of the feature point in the f-th frame and F is the number of frames over which the point has been tracked. The trajectories of all N feature points can then be represented by the matrix M:

M = [T_1 T_2 ... T_N] ∈ R^(2F×N) (1)
Since the trajectory of each point can be viewed as a 2F-dimensional vector, each class of trajectories must lie in a low-dimensional linear subspace. If the foreground and the background can each be regarded as a rigid body, then by the rank theorem the measurement matrix is low-rank under the affine projection model, and the trajectories of each class lie in a subspace of dimension at most 4. Classifying the trajectories generated by the different feature points is thereby converted into the problem of clustering data points lying in a union of subspaces. The invention uses sparse subspace clustering (Sparse Subspace Clustering, SSC) to classify the data points: exploiting the self-expressiveness property of the data, each data point of the matrix M is represented in terms of the other data points in the set, as shown in formula (2):
min ||C||_1  s.t.  M = MC, diag(C) = 0 (2)
where M is the feature-point trajectory matrix and C is a sparse coefficient matrix. The classification proceeds by first solving, under the constraints, for the sparse matrix C that minimizes its norm, and then splitting the trajectories into two classes from C by spectral clustering. Once the classification of the trajectories is obtained, the corresponding feature points can be marked in each frame. To distinguish without supervision which class of feature points belongs to the foreground and which to the background, the image is assumed to contain a single foreground of a certain proportion surrounded by background; optionally, the more concentrated class of feature points in the image is labeled as foreground marked points and the more dispersed class as background marked points. The final classification result is shown in Fig. 6.
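The low-rank property that underlies this motion segmentation can be checked numerically: for feature points undergoing a single rigid 2-D motion, the 2F × N trajectory matrix spans a subspace of dimension at most 3 (at most 4 under a full affine camera model). A sketch with assumed synthetic motion:

```python
import numpy as np

rng = np.random.default_rng(1)
F, N = 10, 30                          # frames and feature points
pts = rng.random((2, N))               # points on a single rigid object

rows = []
for f in range(F):
    th = 0.1 * f                       # rotate and translate the object
    A = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    t = np.array([[0.2 * f], [0.05 * f]])
    rows.append(A @ pts + t)
M = np.vstack(rows)                    # 2F x N trajectory matrix, as in (1)

rank = np.linalg.matrix_rank(M)
# although M is 20 x 30, all trajectories lie in a 3-dimensional subspace
```

Trajectories from a second, differently moving object would lie in a different low-dimensional subspace, which is what makes subspace clustering applicable.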
Super-pixel segmentation of the image
Image segmentation (Segmentation) refers to subdividing a digital image into multiple image subregions (sets of pixels, also called super-pixels). A super-pixel is a small region composed of adjacent pixels with similar color, brightness and texture; such regions mostly retain the information needed for further image segmentation and generally do not destroy the boundaries of the objects in the image.
To improve detection speed, the invention carries out the super-pixel-level coarse segmentation of the n-th frame in parallel with the acquisition of the marked points. Specifically, the super-pixels of the n-th frame are extracted with the Preemptive SLIC method, whose principle is to search for similar pixels around given seed points so as to realize the super-pixel segmentation. The segmentation measures the similarity between pixel i and pixel j by the distance d between them, defined as:
d = sqrt(d_c^2 + m^2 (d_s / S)^2)

where d_c is the Euclidean distance between the two pixels in [l, a, b]^T space, [l, a, b]^T denoting the CIELAB color space; d_s is the Euclidean distance between the two pixels in [u, v]^T space, [u, v]^T denoting the coordinate system of the pixels in the image; m is a factor balancing the relative importance of d_c and d_s, such that the larger m, the larger the combined distance between the two pixels, i.e. the greater the possibility that they belong to two different objects; and the expected size S is defined as the square root of the ratio of the number of pixels to the number of super-pixels.
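A sketch of this distance measure, assuming the standard SLIC form for the combination of d_c and d_s (the parameter values are illustrative):

```python
import numpy as np

def slic_distance(p, q, m=10.0, S=16.0):
    """Combined colour + spatial distance between two pixels given as
    (l, a, b, u, v), using the standard SLIC form assumed above:
    d = sqrt(d_c^2 + m^2 * (d_s / S)^2)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    dc = np.linalg.norm(p[:3] - q[:3])   # distance in CIELAB space
    ds = np.linalg.norm(p[3:] - q[3:])   # distance in the image plane
    return float(np.sqrt(dc ** 2 + m ** 2 * (ds / S) ** 2))
```

Dividing d_s by the expected super-pixel size S makes the spatial term scale-free, so m alone controls how strongly spatial proximity weighs against color similarity.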
In the invention, the Preemptive SLIC method searches for similar pixels in a 2S × 2S region around each seed point to speed up the search. On this basis, a local stopping criterion is applied to the super-pixel update, so that super-pixels and image regions that showed no significant change in the last iteration are not visited repeatedly; this avoids growth in computation and greatly improves the speed of super-pixel extraction. The super-pixels produced with the Preemptive SLIC method are shown in Fig. 7.
Fig. 7 shows the super-pixel segmentation of the invention. Note that when extracting the super-pixels, the desired number of super-pixels is set equal to the number of grid cells used for the gridded feature-point extraction, which largely ensures that each super-pixel contains a marked point.
Initial super-pixel labeling
Fig. 8 shows the initial super-pixel labels of the invention, where different grey levels denote different labels. The invention labels each super-pixel by the marked points it contains: for each super-pixel, the initial label is determined from the class-labeled feature points it contains, by the following rules:
(1) if the super-pixel contains only feature points labeled background, it is labeled background;
(2) if the super-pixel contains only feature points labeled foreground, it is labeled foreground;
(3) if the super-pixel contains feature points of both labels, or neither, it is labeled uncertain.
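Rules (1)-(3) translate directly into code; the string labels below are illustrative:

```python
def initial_label(point_labels):
    """Initial label of one super-pixel from the class-labeled feature
    points it contains, following rules (1)-(3) above."""
    kinds = set(point_labels)
    if kinds == {"background"}:
        return "background"
    if kinds == {"foreground"}:
        return "foreground"
    return "uncertain"   # both kinds present, or no labeled point at all
```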
As pointed out above, the extracted feature points are preferably distributed uniformly over the entire image; this strategy ensures that as many super-pixels as possible contain class-labeled feature points for the initial labeling. Uncertain super-pixels nevertheless cannot be eliminated entirely, especially where large low-texture regions cause feature-point tracking to fail. To solve this problem, the invention labels the uncertain regions by clustering the super-pixels.
Feature extraction and superpixel clustering
After the superpixels of the n-th frame image have been extracted, features must be extracted from them for clustering. The present invention selects complementary color and texture features. For the color feature, the color histogram feature Hc is obtained by counting the distribution, in HSV color space, of the pixels contained in each superpixel. The HSV color space is chosen because, compared with the traditional RGB color space, it is more robust to changing illumination conditions. The hue (H), saturation (S), and value (V) channels are discretized into 9, 8, and 6 intervals respectively, so the dimension of the color histogram is the product of the three channel dimensions, i.e. 432. For the texture feature, the Weber's Law Descriptor (WLD) is first used to compute the response of each pixel. After the WLD values are normalized to [0, 255], the distribution of the WLD values of the pixels contained in each superpixel is counted, giving the texture histogram feature Ht, whose dimension is 256.
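The 9 × 8 × 6 quantization of Hc can be sketched as below. The input layout (per-pixel HSV triples with H in [0, 360) and S, V in [0, 1]) is an assumption of the sketch; the 256-bin WLD histogram Ht would be built analogously from the normalized WLD responses.

```python
import numpy as np

def hsv_histogram(hsv_pixels):
    """Hc: quantise H, S, V into 9, 8 and 6 intervals and count the pixels of
    one superpixel, giving a 9 * 8 * 6 = 432-dimensional normalised histogram."""
    h = np.minimum((hsv_pixels[:, 0] / 360.0 * 9).astype(int), 8)
    s = np.minimum((hsv_pixels[:, 1] * 8).astype(int), 7)
    v = np.minimum((hsv_pixels[:, 2] * 6).astype(int), 5)
    idx = (h * 8 + s) * 6 + v                    # flatten the 9x8x6 bin grid
    hist = np.bincount(idx, minlength=432).astype(float)
    return hist / max(hist.sum(), 1.0)
```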
After feature extraction, the superpixels are clustered by spectral clustering. In spectral clustering, the data are represented as an undirected weighted similarity graph G = (V, E, W), where V denotes the set of vertices, each element v_i of which represents one superpixel; E denotes the set of edges connecting the vertices; and W denotes the weighting matrix, whose element ω_ij expresses the degree of similarity between vertices v_i and v_j. Given a graph G with N vertices, spectral clustering divides the superpixel data into the desired number of classes by maximizing intra-class similarity and minimizing inter-class similarity. The detailed procedure is as follows:
(1) Compute the diagonal degree matrix D, whose i-th diagonal element d_i is the degree of vertex v_i, defined as d_i = Σ_j ω_ij;
(2) Compute the normalized Laplacian matrix L = D^(-1/2)(D − W)D^(-1/2);
(3) Compute the eigenvectors e_1, …, e_K of the normalized Laplacian matrix L corresponding to its K smallest non-trivial eigenvalues, where K equals the number of superpixel classes;
(4) Construct the matrix U with e_1, …, e_K as its columns, and normalize each row u_i of U;
(5) Cluster the rows of U into K classes with the k-means algorithm.
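Steps (1) to (5) can be sketched directly in NumPy. The tiny farthest-point-initialized k-means used for step (5) is an illustrative stand-in for a production k-means, and for simplicity the sketch takes the K smallest eigenvectors including the trivial one, as is common practice.

```python
import numpy as np

def spectral_cluster(W, K, n_iter=50):
    """Steps (1)-(5): degree matrix, normalised Laplacian, K smallest
    eigenvectors, row normalisation, then k-means on the rows of U."""
    d = W.sum(axis=1)                                        # (1) degrees d_i
    Dis = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = Dis @ (np.diag(d) - W) @ Dis                         # (2) L = D^-1/2 (D-W) D^-1/2
    vals, vecs = np.linalg.eigh(L)                           # (3) ascending eigenvalues
    U = vecs[:, :K]                                          # (4) K smallest eigenvectors
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    centers = [U[0]]                                         # (5) k-means on rows of U,
    for _ in range(1, K):                                    #     farthest-point init
        dist = np.min([np.linalg.norm(U - c, axis=1) for c in centers], axis=0)
        centers.append(U[int(np.argmax(dist))])
    centers = np.array(centers)
    for _ in range(n_iter):
        lab = np.argmin(((U[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if (lab == k).any():
                centers[k] = U[lab == k].mean(axis=0)
    return lab
```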
The features extracted from the N superpixels serve as the vertex set V. To compute the weighting matrix W, and thereby construct the undirected weighted similarity graph G, a distance metric measuring the similarity between two superpixels is designed.
The distance d between two superpixels i and j consists of three parts, dc, dt, and ds, which are defined as follows:
(1) dc expresses the distance between the two superpixels in color space. It is obtained by computing the correlation distance between the color histogram features Hc extracted from the two superpixels, and its value range is [0, 1].
The correlation distance between the histograms h_i and h_j of two superpixels is defined as
d(h_i, h_j) = 1 − Σ_b (h_i(b) − m_i)(h_j(b) − m_j) / sqrt( Σ_b (h_i(b) − m_i)² · Σ_b (h_j(b) − m_j)² )
where b runs over the histogram bins and m_i, m_j are the mean bin values of h_i and h_j.
(2) dt expresses the distance between the two superpixels in texture space. It is computed analogously to dc, and its value range is also [0, 1].
(3) ds expresses the Manhattan distance between the image pixel coordinates (u, v) of the two superpixel centers:
ds = |u_i − u_j| + |v_i − v_j|   (7)
After all values of ds have been computed, each is divided by the maximum over all values, normalizing ds to [0, 1].
With the three distances defined, the three parts are combined with adaptive weights to form the final distance d:
d = dc/μc + dt/μt + ds/μs   (8)
where μc, μt, and μs are the means of dc, dt, and ds respectively.
After the distance metric has been defined, the weighting matrix W can be constructed from it. Its elements are defined as
ω_ij = exp(−d(i, j)² / (2σ²))   (9)
where d(i, j) is the distance between superpixels i and j, and σ is a scale parameter.
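A sketch of the whole affinity construction follows: the Pearson-correlation distance for dc and dt, the Manhattan distance for ds, the mean-normalized (adaptive) combination, and a Gaussian kernel mapping distances to affinities. The Gaussian form and the `sigma` default are assumptions of the sketch.

```python
import numpy as np

def correlation_distance(h1, h2):
    """dc / dt: one minus the Pearson correlation of two histograms,
    clipped to [0, 1]."""
    a, b = h1 - h1.mean(), h2 - h2.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    corr = (a * b).sum() / denom if denom > 0 else 0.0
    return float(np.clip(1.0 - corr, 0.0, 1.0))

def build_weight_matrix(Hc, Ht, centers, sigma=1.0):
    """Combine dc, dt and the Manhattan distance ds with mean-normalised
    (adaptive) weights, then map distances to affinities with a Gaussian kernel."""
    N = len(Hc)
    dc = np.array([[correlation_distance(Hc[i], Hc[j]) for j in range(N)] for i in range(N)])
    dt = np.array([[correlation_distance(Ht[i], Ht[j]) for j in range(N)] for i in range(N)])
    ds = np.abs(centers[:, None] - centers[None]).sum(-1)   # Manhattan distance
    ds = ds / max(ds.max(), 1e-12)                          # normalise to [0, 1]
    d = np.zeros((N, N))
    for part in (dc, dt, ds):
        mu = part[~np.eye(N, dtype=bool)].mean()            # mean over i != j
        if mu > 0:
            d = d + part / mu                               # adaptive weighting
    W = np.exp(-d ** 2 / (2 * sigma ** 2))                  # Gaussian affinity
    np.fill_diagonal(W, 0.0)
    return W
```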
The result of clustering the superpixels of Fig. 7 is shown in Fig. 9, which is a schematic diagram of the superpixel clusters obtained by applying spectral clustering to the superpixel segmentation result of Fig. 7; each grayscale color represents one cluster region of superpixels.
Note that Fig. 9 is not obtained from Fig. 8; Fig. 8 only shows the situation after the initial marking.
Coarse segmentation
Figure 10 is a schematic diagram of the coarse segmentation of the present invention. Since only two classes of superpixels need to be kept in the image, namely foreground superpixels and background superpixels (shown with different grayscale colors in Figure 10), a winner-take-all strategy, identical to the marked-point classification, is adopted for each cluster region: the superpixels with a determined mark vote to decide the marks of the uncertain superpixels in the same region. The specific rules are as follows:
(1) If the number of superpixels marked as foreground in a region exceeds the number marked as background, all uncertain superpixels in that region are marked as foreground.
(2) If the number of superpixels marked as background in a region exceeds the number marked as foreground, all uncertain superpixels in that region are marked as background.
(3) In case of a tie, i.e. the number of superpixels marked as foreground in a region equals the number marked as background, all uncertain superpixels in that region are marked as background.
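The three voting rules can be sketched as a one-region helper; the list-of-strings representation is illustrative.

```python
def resolve_region(marks):
    """marks: the marks of all superpixels inside one cluster region
    ('fg', 'bg' or 'uncertain'). Winner-take-all vote, rules (1)-(3):
    a tie goes to background."""
    fg = marks.count('fg')
    bg = marks.count('bg')
    winner = 'fg' if fg > bg else 'bg'
    return [winner if m == 'uncertain' else m for m in marks]
```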
As can be seen from Figure 10, the marks of the uncertain superpixels have been updated to either background or foreground. Note, however, that the coarse segmentation result is not yet satisfactory, because the superpixel contours do not exactly fit the actual contour of the moving target. Therefore, the coarsely segmented image must still be finely segmented at the pixel level, so as to obtain a point-by-point labeling of the pixels and thereby refine the result. The detailed procedure of the fine segmentation is as follows:
Constructing the quadmap
Figure 11 is a schematic diagram of the quadmap constructed by the present invention from the coarsely segmented image, in which four different grayscale colors represent the four different superpixel regions. Specifically, in applications such as image segmentation, a trimap divides the input image into three regions, namely foreground, background, and unknown, to facilitate subsequent processing. A trimap is usually produced by the user interactively, for example by manually scribbling marks on the foreground and the background. By analyzing the 8-connected adjacency of the marked superpixels, the present invention observes that unknown regions can only occur where foreground and background are adjacent. Based on this observation, the present invention automatically constructs the quadmap (Quadmap) required for the fine segmentation. Similar to a trimap, the quadmap divides the input image into four regions: determined foreground Df, determined background Db, possible foreground Pf, and possible background Pb. The specific rules are as follows:
(1) For a superpixel marked as foreground in the coarse segmentation stage (the regions with smaller gray values in Figure 10): if all of its neighboring superpixels are also marked as foreground, it is determined foreground Df, as shown by region 1 in Figure 11; otherwise it is possible foreground Pf, as shown by region 2 in Figure 11;
(2) For a superpixel marked as background in the coarse segmentation stage (the regions with larger gray values in Figure 10): if all of its neighboring superpixels are also marked as background, it is determined background Db, as shown by region 4 in Figure 11; otherwise it is possible background Pb, as shown by region 3 in Figure 11.
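Rules (1) and (2) above can be sketched over an adjacency structure. The dictionary layout and region names ('Df', 'Pf', 'Db', 'Pb') mirror the text; how the 8-connected neighbour sets are computed is left out of the sketch.

```python
def build_quadmap(marks, neighbours):
    """marks: superpixel id -> 'fg' or 'bg' after coarse segmentation.
    neighbours: superpixel id -> set of 8-connected adjacent superpixels.
    Returns superpixel id -> 'Df' / 'Pf' / 'Db' / 'Pb'."""
    quad = {}
    for sp, m in marks.items():
        same = all(marks[n] == m for n in neighbours[sp])
        if m == 'fg':
            quad[sp] = 'Df' if same else 'Pf'   # rule (1)
        else:
            quad[sp] = 'Db' if same else 'Pb'   # rule (2)
    return quad
```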
Figure 12 is a schematic diagram of the quadmap after it has been updated. As shown in the figure, after one iteration of analysis the superpixels are merged and the image becomes considerably clearer.
Fine segmentation
When image segmentation is performed with GrabCut, an initial labeling of the pixels must be provided, after which the labeling is optimized iteratively. In a traditional trimap, the foreground pixels and background pixels can be marked accurately, whereas the uncertain pixels are labeled arbitrarily. The present invention describes the uncertain region further by constructing a quadmap, i.e. the uncertain region is divided into possible foreground and possible background. In this way, a more accurate initial labeling is provided, which improves the accuracy and at the same time accelerates the convergence of the iterative optimization. The detailed procedure of the fine segmentation is as follows:
Figure 13 is a schematic diagram of the fine segmentation process of the present invention, obtained through several iterations of analysis by the GrabCut processor. After a few (typically 2 to 3) iterations of analysis, the foreground image (i.e. the image of the moving object) is completely separated from the background image, and the moving object has a clean contour.
Specifically, pixel-level fine segmentation is performed with GrabCut. Given a trimap, GrabCut divides the pixels p = (p_1, …, p_P) of the image into background and foreground by alternately optimizing the segmentation result α = (α_1, …, α_P), where α_i is the segmentation result of the i-th pixel: α_i = 0 means the pixel is classified as background, and α_i = 1 means it is classified as foreground. GrabCut finds the optimal segmentation by minimizing the energy function in the following formula:
E(α, θ, p) = U(α, θ, p) + V(α, p)   (10)
Here, U(α, θ, p) evaluates how well the labeling α fits the pixel data p, and V(α, p) is a regularization term whose role is to resolve, by introducing a smoothness penalty between neighboring pixels, cases that the data term alone cannot discriminate. In the above, θ = (θ_0, θ_1), where θ_0 and θ_1 are appearance models describing the background and the foreground respectively; they are computed from the background pixels and the foreground pixels using Gaussian mixture models. The minimization of the energy function is realized by alternately optimizing the segmentation result α and the appearance models θ. During the iterations, the foreground and background regions of the trimap remain unchanged, while the unknown region is updated in each iteration by classifying its pixels as foreground or background.
The present invention classifies the unknown region after each analysis, which on the one hand refines the segmentation and on the other hand provides a better initial value for the next solution step. During the iterations, the pixels determined as foreground Df and those determined as background Db remain unchanged, while the possible-foreground pixels Pf and possible-background pixels Pb are updated in every iteration. Since the quadmap provides a good initial value (the labels of foreground, background, possible-foreground, and possible-background pixels are explicit), a good result is obtained after only 2 to 3 iterations, as shown in Figure 13.
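The quadmap maps naturally onto the four mask values of OpenCV's GrabCut implementation, which is one way the scheme above could be realized; the helper below is an illustrative sketch, not the patent's code.

```python
import numpy as np

# OpenCV's grabCut mask encoding matches the four quadmap regions:
# 0 = GC_BGD (Db), 1 = GC_FGD (Df), 2 = GC_PR_BGD (Pb), 3 = GC_PR_FGD (Pf)
GC = {'Db': 0, 'Df': 1, 'Pb': 2, 'Pf': 3}

def quadmap_to_mask(sp_labels, quad):
    """sp_labels: (H, W) array of superpixel ids; quad: id -> region name.
    Builds the initial mask for GrabCut's mask-initialised mode."""
    mask = np.zeros(sp_labels.shape, dtype=np.uint8)
    for sp, region in quad.items():
        mask[sp_labels == sp] = GC[region]
    return mask
```

Such a mask would then be passed to `cv2.grabCut(img, mask, None, np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64), iterCount, cv2.GC_INIT_WITH_MASK)`, with `iterCount` set to 2 or 3 as the text suggests; only the determined regions (values 0 and 1) stay fixed across iterations.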
The above are merely illustrative specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be determined by the appended claims.

Claims (10)

1. A moving target detection method based on coarse-to-fine image segmentation, characterized by comprising:
acquiring n frames of detection images;
extracting feature points from the first frame image, tracking the feature points in the subsequent n-1 frame images to generate motion cues, and determining the classes of the feature points in the n-th frame image by performing motion segmentation on the motion cues;
performing superpixel segmentation on the n-th frame image, so as to reduce the dimensionality of the n-th frame image;
marking the superpixels according to the classes of the marked points in the n-th frame image, clustering the superpixels with a similarity measurement method, and completing the coarse segmentation of the image on the basis of the clustering;
constructing a quadmap from the coarsely segmented image, and then finely segmenting the quadmap, thereby realizing accurate detection of the moving target.
2. The moving target detection method according to claim 1, characterized in that the feature points are extracted from the first frame image by a corner detection method, the method comprising:
computing, from the neighborhood information of each pixel in the first frame image, the quadratic autocorrelation function of the pixel after a small translation, thereby obtaining a plurality of autocorrelation functions;
taking the smaller of the two eigenvalues corresponding to each autocorrelation function as the criterion, and selecting as feature points those pixels whose smaller eigenvalue is comparatively large among all the smaller eigenvalues.
3. The moving target detection method according to claim 2, characterized in that, before the feature points are extracted from the first frame image, the first frame image is divided into a grid.
4. The moving target detection method according to claim 1, characterized in that the feature points comprise foreground feature points and background feature points.
5. The moving target detection method according to claim 1, characterized in that, when superpixel segmentation is performed on the n-th frame image, the number of superpixels is set to a desired quantity, and the desired quantity is set equal to the number of grid cells obtained by gridding the first frame image, so that each superpixel contains, as far as possible, a feature point with a class mark.
6. The moving target detection method according to claim 1, characterized in that marking the superpixels comprises:
if a superpixel contains only feature points marked as background, marking the superpixel as background;
if a superpixel contains only feature points marked as foreground, marking the superpixel as foreground;
if a superpixel contains feature points of both kinds of marks, or contains neither, marking the superpixel as uncertain.
7. The moving target detection method according to claim 1, characterized in that clustering the superpixels comprises:
extracting features from the superpixels, wherein the features include a color feature and a texture feature;
clustering the superpixels by spectral clustering according to their color and texture features, thereby obtaining cluster regions.
8. The moving target detection method according to claim 1, characterized in that the coarse segmentation is performed at the image level, comprising:
if the number of superpixels marked as foreground in a cluster region exceeds the number marked as background, marking all uncertain superpixels in that cluster region as foreground;
if the number of superpixels marked as background in a cluster region exceeds the number marked as foreground, marking all uncertain superpixels in that cluster region as background;
if the number of superpixels marked as foreground in a cluster region equals the number marked as background, marking all uncertain superpixels in that cluster region as background.
9. The moving target detection method according to claim 1, characterized in that constructing the quadmap comprises:
for a superpixel marked as foreground, if all of its neighboring superpixels are also marked as foreground, marking it as a determined foreground superpixel, otherwise as a possible foreground superpixel;
for a superpixel marked as background, if all of its neighboring superpixels are also marked as background, marking it as a determined background superpixel, otherwise as a possible background superpixel.
10. The moving target detection method according to claim 1, characterized in that the fine segmentation is performed at the pixel level, comprising:
performing fine segmentation with the GrabCut processor on the basis of the quadmap, keeping the determined foreground pixels and determined background pixels unchanged during its iterative analysis, and iteratively re-analyzing the possible foreground pixels and possible background pixels until a satisfactory result is reached.
CN201810589151.3A 2018-06-08 2018-06-08 A moving target detection method based on coarse-to-fine image segmentation (pending)

Publications (1)

Publication Number Publication Date
CN109087330A 2018-12-25




Legal Events

Date Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20181225)