CN107229917A - Co-salient target detection method for multiple remote sensing images based on iterative clustering - Google Patents
Co-salient target detection method for multiple remote sensing images based on iterative clustering
- Publication number
- CN107229917A (application CN201710395719.3A)
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- image
- super
- class
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
- G06V10/507—Summing image-intensity values; Histogram projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The present invention discloses a co-salient target detection method for multiple remote sensing images based on iterative clustering, belonging to the field of remote sensing image processing. The method proceeds as follows: 1) compute the gray-level co-occurrence matrix of each remote sensing image, obtain its four parameters (contrast, energy, entropy, and correlation), and, combined with the length and width of the image, compute the number of superpixels; 2) perform superpixel segmentation on each image according to this superpixel number, apply K-means clustering to the segmentation result, and compute inter-class saliency, obtaining the initial saliency map of each image; 3) apply target segmentation to all initial saliency maps, run superpixel-based K-means clustering on the segmentation results again, and recompute inter-class saliency, obtaining the final saliency maps; 4) extract the co-salient targets of the images by thresholding. The invention accurately detects the co-salient targets of multiple remote sensing images while effectively suppressing background interference, and is applicable to fields such as environmental monitoring and land redistribution.
Description
Technical field
The invention belongs to the field of remote sensing image processing technology, and in particular relates to a co-salient target detection method for multiple remote sensing images based on iterative clustering.
Background technology
In recent years, satellite and remote sensing technologies have developed continuously, and mankind has achieved comprehensive, all-weather, multi-angle Earth observation. With the rapid development of high-resolution remote sensing satellites, the number of remote sensing images keeps growing. Target detection in remote sensing images helps allocate the computing resources of subsequent processing reasonably and reduces its complexity, and has therefore become a central research problem in remote sensing image processing technology.
Existing remote sensing target detection methods fall into two major classes: top-down and bottom-up. Top-down methods first apply machine learning to features of known target objects such as color, texture, and brightness, and then perform target detection with the learned features. They require a large amount of prior knowledge, so their computational complexity is high and their adaptability to different targets is poor. Bottom-up methods are based on visual saliency analysis of the image and can effectively improve target detection efficiency. Saliency analysis is inspired by the attention mechanism of the human visual system; existing saliency analysis methods can be divided into three classes: those based on biological models, those based on computational models, and those based on hybrid models. The ITTI method is the most classical algorithm based on a biological model and is the foundation of many subsequent saliency analysis methods. It imitates the human visual receptive field by computing linear center-surround differences, performs multi-scale color, intensity, and orientation feature extraction, obtains a single-scale feature saliency map through multi-scale feature fusion, and finally selects feature points with a neural network. Among computational-model methods, the frequency-tuned method (FT: Frequency Tuned) first applies difference-of-Gaussians filtering to the image to obtain its low-frequency information, and then computes the final saliency map from the color difference between this low-frequency information and the original image; the salient regions obtained by FT have well-defined boundaries. Among hybrid-model methods, the graph-based method (GBVS: Graph Based Visual Saliency) measures the similarity between foreground and background elements of the image and computes the saliency of each element from its similarity to default seeds or a ranking.
Saliency analysis methods based on a single image have achieved good results in target detection for natural scene images and remote sensing images. However, because such methods cannot effectively exploit the information shared across images, the resulting saliency map only indicates the regions with the highest saliency values within a single image, and for some images these regions are not necessarily the required target regions. For remote sensing images with complex ground features, a single image may contain background regions with features similar to the target region, or background regions with even higher saliency values than the target region, and single-image saliency methods cannot suppress such similar or higher-saliency backgrounds effectively.
An important feature of the present invention is that it performs accurate and efficient co-salient target detection on multiple remote sensing images with similar ground features. When most images in such a set contain the same class of target region with high visual saliency, this class of targets is called a co-salient target. Introducing co-salient target detection into remote sensing image processing exploits the salient features common to the images and lets them provide mutually reinforcing information, effectively suppressing high-saliency background interference in these images, so that the co-salient targets of multiple remote sensing images can be detected accurately and efficiently.

This work was supported by the National Natural Science Foundation of China project "Research on key techniques for region-of-interest extraction from remote sensing images based on joint saliency analysis" (grant No. 61571050).
Summary of the invention

In view of the problems in the above techniques, the invention provides a co-salient target detection method for multiple remote sensing images based on iterative clustering. The method first computes the gray-level co-occurrence matrix of each image, obtains its four parameters (contrast, energy, entropy, and correlation), and, combined with the length and width of the image, computes the superpixel number. It then performs superpixel segmentation according to this number, applies K-means clustering to the result, and computes inter-class saliency, obtaining the initial saliency map of each image. Next, it applies target segmentation to all initial saliency maps, runs superpixel-based K-means clustering on the segmented results again, and recomputes inter-class saliency, obtaining the final saliency maps. Finally, the co-salient targets of the images are extracted by thresholding. The method can accurately extract the co-salient targets of multiple remote sensing images while effectively suppressing background interference, and can be applied in fields such as environmental monitoring and land redistribution. The invention is mainly concerned with two aspects:

1) accurately extracting the co-salient targets in multiple remote sensing images, improving the precision of remote sensing target detection;

2) effectively suppressing background information with high saliency values in the images.
The technical solution adopted by the invention is as follows. First, the gray-level co-occurrence matrix of each image in the set of remote sensing images is computed, and the superpixel number required for each image is calculated from the four matrix parameters (contrast, energy, entropy, and correlation) combined with the length and width of the image. Next, each image is superpixel-segmented according to the obtained superpixel number, K-means clustering is applied to the segmentation result to obtain the classes corresponding to different ground features, and inter-class saliency is computed, yielding the initial saliency map of each image. Then, target segmentation is applied to all initial saliency maps, superpixel-based K-means clustering is applied to the segmented results again, and inter-class saliency is recomputed, yielding the final saliency maps. Finally, the automatic detection of the co-salient targets of the images is completed by thresholding. The method specifically comprises the following steps:
Step 1: compute the gray-level co-occurrence matrix of each image, then use its four parameters (contrast, energy, entropy, and correlation), together with the length and width of the image, to compute the superpixel number K required for each image;

Step 2: perform superpixel segmentation on each image according to the superpixel number obtained in Step 1, obtaining the superpixel-segmented images;

Step 3: compute the mean color of each superpixel in each segmented image, take it as the superpixel's color value, and apply K-means clustering to all segmented images based on these superpixel color means;

Step 4: count the color histogram of each class from the K-means result, compute inter-class color distances from the histograms, and compute inter-class saliency from the color distances and spatial weighting information, finally obtaining the initial saliency map of each image;

Step 5: threshold each initial saliency map with the maximum between-class variance method, dividing the maps into two classes, target and background, and obtain the initial target segmentation image of each image;

Step 6: halve the superpixel number K, perform superpixel segmentation on each initial target segmentation image, apply K-means clustering to all the segmented initial target images again, count the color histogram of each class in the clustering result, compute inter-class color distances from the histograms, and compute inter-class saliency from the color distances and spatial weighting information, obtaining the final saliency map of each image;

Step 7: threshold each final saliency map with the maximum between-class variance method, thereby extracting the co-salient targets of the remote sensing images.
The method performs co-salient target detection with the superpixel as the basic unit, which preserves the completeness of regions to the greatest extent and avoids fragmented detections; at the same time, smaller superpixels are chosen for the superpixel-based iterative clustering, further suppressing background regions near the target that share similar features.
Brief description of the drawings

Fig. 1 is the flowchart of the invention.
Fig. 2 is one example image from the set of remote sensing images used herein.
Fig. 3 shows the final saliency map and the target detection result of the example image: (a) final saliency map; (b) target detection result.
Fig. 4 compares the final saliency maps of the example image produced by the proposed method and the FT, ITTI, and GBVS methods: (a) FT; (b) ITTI; (c) GBVS; (d) proposed method.
Fig. 5 compares the final target detection results of the example image: (a) FT; (b) ITTI; (c) GBVS; (d) proposed method.
Fig. 6 is the ground-truth annotation of the example image.
Fig. 7 shows the receiver operating characteristic (ROC: Receiver Operating Characteristic) curves of the proposed method and the FT, ITTI, and GBVS methods.
Embodiments

The invention is described in further detail below with reference to the drawings. The overall framework of the invention is shown in Fig. 1; the implementation details of each step are now described.
Step 1: compute the gray-level co-occurrence matrix GLCM of each image, then use its four parameter values — contrast Con, energy Asm, entropy Ent, and correlation Corr — together with the image length M and width N, to compute the superpixel number K. The process is as follows:

Let the gray range of remote sensing image P be [0, G-1], and let P(i, j), i ∈ {1, ..., M}, j ∈ {1, ..., N}, be the gray value of the pixel at coordinates (i, j). For each pixel with gray value x, count the frequency with which a pixel (i+a, j+b) at distance d = 1 (with a² + b² = d²) has gray value y; this count is denoted GLCM(x, y). For an image with gray range [0, G-1], GLCM(x, y) is a G × G matrix, computed as:

GLCM(x, y) = #{((i, j), (i+a, j+b)) ∈ M × N | P(i, j) = x, P(i+a, j+b) = y},
x ∈ {0, ..., G-1}, y ∈ {0, ..., G-1}

With p(x, y) denoting the normalized co-occurrence matrix, the four parameter values contrast Con, energy Asm, entropy Ent, and correlation Corr are computed with the standard formulas:

Con = Σx Σy (x − y)² p(x, y)
Asm = Σx Σy p(x, y)²
Ent = −Σx Σy p(x, y) ln p(x, y)
Corr = Σx Σy (x − μx)(y − μy) p(x, y) / (σx σy)

where μx and σx are respectively the mean and standard deviation of the gray distribution, with μx = μy and σx = σy.

From the four parameter values Con, Asm, Ent, and Corr, the texture feature weight w is computed; the superpixel number K is then obtained from the image length M, width N, and texture feature weight w.
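The co-occurrence matrix and its four parameters described above can be sketched in numpy as follows. This is a minimal illustration for a single offset (a, b); the function names, the quantization level G, and the single-offset choice are assumptions for the example, and the texture-weight formula for w (given only as an image in the filing) is not reproduced.

```python
import numpy as np

def glcm(image, G=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one offset (a, b) with a²+b² = d².
    `image` must hold integer gray levels in [0, G)."""
    a, b = offset
    M, N = image.shape
    mat = np.zeros((G, G), dtype=np.float64)
    for i in range(M):
        for j in range(N):
            i2, j2 = i + a, j + b
            if 0 <= i2 < M and 0 <= j2 < N:
                mat[image[i, j], image[i2, j2]] += 1   # count pair (x, y)
    return mat

def glcm_params(mat):
    """Contrast Con, energy Asm, entropy Ent, correlation Corr of a GLCM."""
    p = mat / mat.sum()                    # normalize to a joint distribution
    G = p.shape[0]
    x = np.arange(G)
    px, py = p.sum(axis=1), p.sum(axis=0)  # marginal gray distributions
    mu_x, mu_y = (x * px).sum(), (x * py).sum()
    sd_x = np.sqrt(((x - mu_x) ** 2 * px).sum())
    sd_y = np.sqrt(((x - mu_y) ** 2 * py).sum())
    xx, yy = np.meshgrid(x, x, indexing="ij")
    con = ((xx - yy) ** 2 * p).sum()       # contrast
    asm = (p ** 2).sum()                   # energy (angular second moment)
    nz = p > 0
    ent = -(p[nz] * np.log(p[nz])).sum()   # entropy
    corr = ((xx - mu_x) * (yy - mu_y) * p).sum() / (sd_x * sd_y)  # correlation
    return con, asm, ent, corr
```

For a binary checkerboard, for instance, every horizontal neighbor differs, so the contrast is 1, the energy 0.5, the entropy ln 2, and the correlation −1.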
Step 2: perform superpixel segmentation on each remote sensing image according to the superpixel number obtained in Step 1. The invention uses the SLIC (Simple Linear Iterative Clustering) superpixel segmentation method, labeling each pixel of the image with its superpixel: SP(i, j) = SLIC_K(P(i, j)), where K is the superpixel number. This yields the superpixel-segmented remote sensing images.

SLIC first places K seed points uniformly in the image; each superpixel is centered on a seed point, with initial size M × N / K. For each remaining pixel, SLIC computes its distance to the K seed points, assigns it to the superpixel of the nearest seed, and finally updates the seed locations. This process is repeated until the distance between the new and previous seed points is below a set threshold; the algorithm then converges and the superpixel segmentation result is obtained.
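The seed-and-assign loop above can be sketched for a grayscale image as follows. This is a simplified SLIC-like procedure, not the full algorithm: it searches all seeds globally rather than within a 2S × 2S window, uses a fixed iteration count instead of a convergence threshold, and the function name, compactness m, and iteration count are assumptions for the example.

```python
import numpy as np

def slic_gray(image, K=16, m=10.0, iters=5):
    """SLIC-like superpixels on a grayscale image: K seeds on a uniform grid,
    pixels assigned to the nearest seed in combined color+space distance,
    seeds updated to cluster means."""
    M, N = image.shape
    S = max(int(np.sqrt(M * N / K)), 1)          # grid step, superpixel ~ S x S
    ys, xs = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    seeds = np.array([(i, j, float(image[i, j]))  # (row, col, gray) per seed
                      for i in range(S // 2, M, S)
                      for j in range(S // 2, N, S)])
    labels = np.zeros((M, N), dtype=int)
    for _ in range(iters):
        dist = np.full((M, N), np.inf)
        for k, (ci, cj, cg) in enumerate(seeds):
            # color distance plus spatially-normalized coordinate distance
            d = (image - cg) ** 2 + ((ys - ci) ** 2 + (xs - cj) ** 2) * (m / S) ** 2
            closer = d < dist
            dist[closer] = d[closer]
            labels[closer] = k
        for k in range(len(seeds)):              # move seeds to cluster means
            sel = labels == k
            if sel.any():
                seeds[k] = (ys[sel].mean(), xs[sel].mean(), image[sel].mean())
    return labels
```

On an image split into a dark and a bright half, pixels on opposite sides end up in different superpixels because the color term dominates.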
Step 3: compute the mean color of each superpixel in each segmented remote sensing image and take it as the superpixel's color value; then, based on these superpixel color means, apply K-means clustering to all segmented images, obtaining the classes corresponding to different ground features.

K-means first selects C centroids in the data set; for each remaining data point it computes the distance to the C centroids and assigns the point to the class of the nearest centroid, then recomputes the centroids of the resulting C classes. This process is repeated until the distance between the new and previous centroids is below a set threshold; the algorithm then converges and the clustering result is obtained. The method uses C = 3.
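The K-means loop described above can be sketched over superpixel color means as follows; the function name, the optional explicit initialization, and the fixed iteration cap are assumptions for the example (the patent iterates until a distance threshold instead).

```python
import numpy as np

def kmeans(points, C=3, iters=20, init=None, seed=0):
    """Plain K-means: choose C centroids, assign each point to the nearest
    centroid, recompute centroids, repeat until stable."""
    rng = np.random.default_rng(seed)
    if init is not None:
        centroids = np.asarray(init, dtype=float)
    else:
        centroids = points[rng.choice(len(points), C, replace=False)].astype(float)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # distance of every point to every centroid
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([points[labels == c].mean(axis=0) if (labels == c).any()
                        else centroids[c] for c in range(C)])
        if np.allclose(new, centroids):          # converged
            break
        centroids = new
    return labels, centroids
```

Applied to the mean colors of all superpixels across the image set, the three resulting classes play the role of the different ground-feature classes in Step 3.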
Step 4: count the color histogram of each class from the K-means clustering result, then compute the inter-class color distance from the histograms, and compute inter-class saliency from the inter-class color distance and spatial weighting information, finally obtaining the initial saliency map of each image. The process is as follows:

First compute the color histogram of each class in the clustering result of Step 3, then compute the inter-class color distance d(ci, cj) from the histograms, where L is the total number of distinct colors in the image, fi,l is the frequency of occurrence of the l-th of the L colors in class ci, and fj,l is the corresponding frequency in class cj.

Then compute the spatial weighting information and obtain the saliency value S(ci) of each class, where D(ci, cj) is the Euclidean distance between the centroids of classes ci and cj, σ² = 0.4, and r(cj) is the ratio of the pixel count of class cj to the total number of pixels in the image. Finally each pixel is assigned the saliency value of its class in the original remote sensing image, yielding the initial saliency map of each image.
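A sketch of the inter-class saliency computation, under stated assumptions: the filing's exact formulas for d(ci, cj) and S(ci) are given only as images, so an L1 distance between normalized 8-bin gray histograms stands in for the color distance, and the saliency is assembled from the named ingredients — the spatial term exp(−D(ci, cj)/σ²), the size ratio r(cj), and the color distance. The function name and bin count are illustrative.

```python
import numpy as np

def class_saliency(labels, gray, coords, C=3, sigma2=0.4):
    """Saliency of each class: color distance to the other classes, weighted
    by the other class's pixel ratio r(cj) and the spatial term
    exp(-D(ci, cj)/sigma2). `coords` are normalized (row, col) per pixel."""
    hists, cents, ratios = [], [], []
    n = labels.size
    for c in range(C):
        sel = labels == c
        h, _ = np.histogram(gray[sel], bins=8, range=(0, 256))
        hists.append(h / max(h.sum(), 1))        # normalized color histogram
        cents.append(coords[sel].mean(axis=0))   # class centroid
        ratios.append(sel.sum() / n)             # r(cj): pixel ratio
    sal = np.zeros(C)
    for i in range(C):
        for j in range(C):
            if i != j:
                d_col = np.abs(hists[i] - hists[j]).sum()  # stand-in d(ci, cj)
                D = np.linalg.norm(cents[i] - cents[j])    # centroid distance
                sal[i] += np.exp(-D / sigma2) * ratios[j] * d_col
    return sal
```

Mapping each pixel to its class's value in `sal` then produces the per-pixel initial saliency map. A small class that differs in color from a large background class receives the higher saliency, since it is contrasted against a large r(cj).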
Step 5: threshold each initial saliency map with the maximum between-class variance (Otsu) method to obtain its optimal segmentation threshold, dividing the map into two classes, target and background, represented by a binary image Bw(i, j). The binary image is multiplied with the original remote sensing image, finally yielding the initial target segmentation image ROI(i, j) of each image.
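The thresholding and masking of Step 5 can be sketched as follows; the function names are illustrative, but the threshold selection is the standard maximum between-class variance criterion the patent names.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximizing the between-class
    variance of the two resulting classes."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()        # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0    # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2         # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def segment(saliency_map, image):
    """Binarize the saliency map at the Otsu threshold and mask the image:
    returns Bw(i, j) and ROI(i, j) = Bw * image."""
    t = otsu_threshold(saliency_map)
    bw = (saliency_map >= t).astype(image.dtype)
    return bw, bw * image
```

On a bimodal saliency map the returned threshold falls between the two modes, so the binary image separates target from background exactly.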
Step 6: halve the superpixel number K, then perform superpixel segmentation on the initial target segmentation image of each remote sensing image and apply K-means clustering to all the segmented initial target images again. Count the color histogram of each class in the clustering result, compute the inter-class color distances from the histograms, and compute inter-class saliency from the color distances and spatial weighting information as before, obtaining the final saliency map of each image.

Step 7: threshold each final saliency map with the maximum between-class variance method to obtain its optimal segmentation threshold, dividing it into two classes, target and background, represented by a binary image. The binary image is multiplied with the original remote sensing image to obtain the co-salient targets of the remote sensing images.
The effect of the invention is further illustrated by the following experimental results and analysis:

1. Experimental data

The experiments use remote sensing images of the Beijing suburbs acquired by the SPOT5 satellite; patches of size 512 × 512 are cropped from the images and used as experimental data. An example is shown in Fig. 2.
2. Comparison experiments and evaluation metrics

Fig. 3 shows the final saliency map and the target detection result of the proposed method on the example image. The proposed method is compared with the traditional FT, ITTI, and GBVS methods. The saliency maps and target detection results produced by the different methods are first compared subjectively, as shown in Fig. 4 and Fig. 5. In Fig. 4, (a) is the saliency map generated by FT, (b) by ITTI, (c) by GBVS, and (d) by the proposed method. In Fig. 5, (a) is the target detection result of FT, (b) of ITTI, (c) of GBVS, and (d) of the proposed method.

The above methods are also evaluated objectively with ROC (Receiver Operating Characteristic) curves. An ROC curve is a two-dimensional curve depicting the performance of a binary classifier; its abscissa is the false positive rate (False Positive Rate, FPR) and its ordinate is the true positive rate (True Positive Rate, TPR).
FPR is the proportion of the total non-target area in the image that is erroneously marked as target; TPR is the proportion of the total target area that is correctly marked. By varying the binarization threshold T applied to the saliency map over the gray range [0, 255], a series of binary images Bw_T is obtained, from which a series of FPR and TPR values is computed and the ROC curve is drawn.

With gt(i, j) denoting the ground-truth target region of the image, FPR and TPR are computed as:

FPR = Σi,j Bw_T(i, j) · (1 − gt(i, j)) / Σi,j (1 − gt(i, j))
TPR = Σi,j Bw_T(i, j) · gt(i, j) / Σi,j gt(i, j)
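The threshold sweep and the FPR/TPR formulas above can be sketched as follows; the function name is illustrative, and `gt` is assumed to be a 0/1 array.

```python
import numpy as np

def roc_points(saliency_map, gt):
    """Sweep the binarization threshold T over [0, 255] and compute
    (FPR, TPR) for each resulting binary map Bw_T."""
    pos = gt.sum()              # total target pixels
    neg = gt.size - pos         # total non-target pixels
    pts = []
    for T in range(256):
        bw = saliency_map >= T
        fpr = (bw & (gt == 0)).sum() / neg   # wrongly marked / non-target
        tpr = (bw & (gt == 1)).sum() / pos   # correctly marked / target
        pts.append((fpr, tpr))
    return pts
```

A saliency map that equals 255 on the target and 0 elsewhere yields (FPR, TPR) = (1, 1) at T = 0 (everything marked) and (0, 1) at any higher threshold, tracing the ideal ROC corner.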
Fig. 6 shows the ground-truth (Ground-Truth) annotation, and Fig. 7 the ROC curves. In an ROC plot, a higher TPR at the same FPR means that a method correctly detects more regions. The figure shows that the performance of the proposed method is clearly better than that of the FT, ITTI, and GBVS methods.
Claims (2)
1. A co-salient target detection method for multiple remote sensing images based on iterative clustering, in which, first, the gray-level co-occurrence matrix of each image in the set of remote sensing images is computed, and the superpixel number required for each image is calculated from the four matrix parameters (contrast, energy, entropy, and correlation) combined with the length and width of the image; next, each image is superpixel-segmented according to the obtained superpixel number, K-means clustering is applied to the segmentation result to obtain the classes corresponding to different ground features, and inter-class saliency is computed, yielding the initial saliency map of each image; then, target segmentation is applied to all initial saliency maps, superpixel-based K-means clustering is applied to the segmentation results again, and inter-class saliency is recomputed, yielding the final saliency maps; finally, the automatic detection of the co-salient targets of the images is completed by thresholding; characterized in that the method comprises the following steps:

Step 1: compute the gray-level co-occurrence matrix of each image, then use its four parameters (contrast, energy, entropy, and correlation), together with the length and width of the image, to compute the superpixel number K required for each image;

Step 2: perform superpixel segmentation on each image according to the superpixel number obtained in Step 1, obtaining the superpixel-segmented images;

Step 3: compute the mean color of each superpixel in each segmented image, take it as the superpixel's color value, and apply K-means clustering to all segmented images based on these superpixel color means;

Step 4: count the color histogram of each class from the K-means result, compute inter-class color distances from the histograms, and compute inter-class saliency from the color distances and spatial weighting information, finally obtaining the initial saliency map of each image;

Step 5: threshold each initial saliency map with the maximum between-class variance method, dividing the maps into two classes, target and background, and obtain the initial target segmentation image of each image;

Step 6: halve the superpixel number K, perform superpixel segmentation on each initial target segmentation image, apply K-means clustering to all the segmented initial target images again, count the color histogram of each class in the clustering result, compute inter-class color distances from the histograms, and compute inter-class saliency from the color distances and spatial weighting information, obtaining the final saliency map of each image;

Step 7: threshold each final saliency map with the maximum between-class variance method, thereby extracting the co-salient targets of the remote sensing images.
2. The co-salient target detection method for multiple remote sensing images based on iterative clustering according to claim 1, characterized in that Step 1 proceeds as follows:

1) compute the gray-level co-occurrence matrix of the remote sensing image, obtain its four parameter values contrast Con, energy Asm, entropy Ent, and correlation Corr, and compute the texture feature weight w from them;

2) substitute the image length M, width N, and texture feature weight w into the formula to obtain the superpixel number K.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710395719.3A CN107229917B (en) | 2017-05-31 | 2017-05-31 | A kind of several remote sensing image general character well-marked target detection methods based on iteration cluster |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107229917A true CN107229917A (en) | 2017-10-03 |
CN107229917B CN107229917B (en) | 2019-10-15 |
Family
ID=59933930
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710395719.3A Active CN107229917B (en) | 2017-05-31 | 2017-05-31 | A kind of several remote sensing image general character well-marked target detection methods based on iteration cluster |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107229917B (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107992875A (en) * | 2017-12-25 | 2018-05-04 | 北京航空航天大学 | A kind of well-marked target detection method based on image bandpass filtering |
CN107992874A (en) * | 2017-12-20 | 2018-05-04 | 武汉大学 | Image well-marked target method for extracting region and system based on iteration rarefaction representation |
CN108052559A (en) * | 2017-12-01 | 2018-05-18 | 国电南瑞科技股份有限公司 | Distribution terminal defect mining analysis method based on big data processing |
CN108596832A (en) * | 2018-04-18 | 2018-09-28 | 中国计量大学 | The super-pixel parameter adaptive selection method of visual perception saturation strategy |
CN108871342A (en) * | 2018-07-06 | 2018-11-23 | 北京理工大学 | Subaqueous gravity aided inertial navigation based on textural characteristics is adapted to area's choosing method |
CN109086776A (en) * | 2018-07-06 | 2018-12-25 | 航天星图科技(北京)有限公司 | Typical earthquake disaster information extraction algorithm based on the detection of super-pixel region similitude |
CN110070545A (en) * | 2019-03-20 | 2019-07-30 | 重庆邮电大学 | A kind of method that textural characteristics density in cities and towns automatically extracts cities and towns built-up areas |
CN110570352A (en) * | 2019-08-26 | 2019-12-13 | 腾讯科技(深圳)有限公司 | image labeling method, device and system and cell labeling method |
CN110827298A (en) * | 2019-11-06 | 2020-02-21 | 齐鲁工业大学 | Method for automatically identifying retina area from eye image |
CN111553222A (en) * | 2020-04-21 | 2020-08-18 | 中国电子科技集团公司第五十四研究所 | Remote sensing ground feature classification post-processing method based on iteration superpixel segmentation |
CN111583279A (en) * | 2020-05-12 | 2020-08-25 | 重庆理工大学 | Super-pixel image segmentation method based on PCBA |
CN112017159A (en) * | 2020-07-28 | 2020-12-01 | 中国科学院西安光学精密机械研究所 | Ground target reality simulation method in remote sensing scene |
CN112347823A (en) * | 2019-08-09 | 2021-02-09 | 中国石油天然气股份有限公司 | Sedimentary facies boundary identification method and device |
CN113658129A (en) * | 2021-08-16 | 2021-11-16 | 中国电子科技集团公司第五十四研究所 | Position extraction method combining visual saliency and line segment strength |
CN114663682A (en) * | 2022-03-18 | 2022-06-24 | 北京理工大学 | Target significance detection method for improving anti-interference performance |
CN115147733A (en) * | 2022-09-05 | 2022-10-04 | 山东东盛澜渔业有限公司 | Artificial intelligence-based marine garbage recognition and recovery method |
CN112347823B (en) * | 2019-08-09 | 2024-05-03 | 中国石油天然气股份有限公司 | Deposition phase boundary identification method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103020993A (en) * | 2012-11-28 | 2013-04-03 | 杭州电子科技大学 | Visual saliency detection method by fusing dual-channel color contrasts |
CN103208001A (en) * | 2013-02-06 | 2013-07-17 | 华南师范大学 | Remote sensing image processing method combined with shape self-adaption neighborhood and texture feature extraction |
CN103413120A (en) * | 2013-07-25 | 2013-11-27 | 华南农业大学 | Tracking method based on integral and partial recognition of object |
CN103955913A (en) * | 2014-02-18 | 2014-07-30 | 西安电子科技大学 | SAR image segmentation method based on line segment co-occurrence matrix characteristics and regional maps |
US20140267583A1 (en) * | 2013-03-13 | 2014-09-18 | Futurewei Technologies, Inc. | Augmented Video Calls on Mobile Devices |
Also Published As
Publication number | Publication date |
---|---|
CN107229917B (en) | 2019-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107229917B (en) | Method for detecting common salient targets in multiple remote sensing images based on iterative clustering | |
CN107944370B (en) | Classification of Polarimetric SAR Image method based on DCCGAN model | |
CN105184309B (en) | Classification of Polarimetric SAR Image based on CNN and SVM | |
CN104951799B (en) | A kind of SAR remote sensing image oil spilling detection recognition method | |
CN103413151B (en) | Hyperspectral image classification method based on figure canonical low-rank representation Dimensionality Reduction | |
CN102982338B (en) | Classification of Polarimetric SAR Image method based on spectral clustering | |
CN107832797B (en) | Multispectral image classification method based on depth fusion residual error network | |
CN106296695A (en) | Adaptive threshold natural target image based on significance segmentation extraction algorithm | |
CN105718942B (en) | High spectrum image imbalance classification method based on average drifting and over-sampling | |
CN106611423B (en) | SAR image segmentation method based on ridge ripple filter and deconvolution structural model | |
CN107330875A (en) | Based on the forward and reverse heterogeneous water body surrounding enviroment change detecting method of remote sensing images | |
CN104123555A (en) | Super-pixel polarimetric SAR land feature classification method based on sparse representation | |
CN107123150A (en) | The method of global color Contrast Detection and segmentation notable figure | |
CN105138970A (en) | Spatial information-based polarization SAR image classification method | |
CN104102928B (en) | A kind of Classifying Method in Remote Sensing Image based on texture primitive | |
CN107292336A (en) | A kind of Classification of Polarimetric SAR Image method based on DCGAN | |
CN104217436B (en) | SAR image segmentation method based on multiple features combining sparse graph | |
CN107341813A (en) | SAR image segmentation method based on structure learning and sketch characteristic inference network | |
Deng et al. | Cloud detection in satellite images based on natural scene statistics and gabor features | |
CN106683102A (en) | SAR image segmentation method based on ridgelet filters and convolution structure model | |
CN109447111A (en) | A kind of remote sensing supervised classification method based on subclass training sample | |
CN102289671A (en) | Method and device for extracting texture feature of image | |
CN104408472B (en) | Classification of Polarimetric SAR Image method based on Wishart and SVM | |
CN113657326A (en) | Weed detection method based on multi-scale fusion module and feature enhancement | |
CN106611422A (en) | Stochastic gradient Bayesian SAR image segmentation method based on sketch structure |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||