Summary of the Invention
To solve the above problems of the prior art, the present invention proposes an effective method for remote sensing image feature recognition.
The present invention adopts the following technical scheme: an online image processing method for online recognition of features in an image, characterized by comprising:
Step one: performing a visual scan of a grayscale remote sensing image, searching for the central point of each scan line, and obtaining the saliency map and fixation points of the remote sensing image through a center-surround difference operation;
Step two: according to an adaptive gradient threshold calculation method, determining key feature regions using the adaptive center-surround difference threshold and the saliency rank of each fixation point;
Step three: constructing a progressive training network based on a convolutional neural network, feeding the pixel grayscale signal of the key region into the trained convolutional neural network hierarchical model through the progressive training network, and obtaining the progressively trained essential features of the key feature region, thereby obtaining the remote sensing image features.
Preferably, step one further comprises:
(1) scanning the remote sensing image line by line,
(2) searching for each local minimum point h(x_i, y_j) of the gray-level curve of the j-th row as a central point, i = 1, 2, …, n, where n is the number of central points; the pixel coordinate corresponding to each central point is (x_i, y_j). From each central point, the nearest peak on either side is searched for, to the left and to the right, denoted h(x_iL, y_j) and h(x_iR, y_j), i = 1, 2, …, n; the pixels corresponding to these peaks are the peripheral points, located at (x_iL, y_j) and (x_iR, y_j) respectively, i = 1, 2, …, n,
(3) defining the target region N(x_i, y_j) = {(x_im, y_j) | m ∈ Z, L ≤ m ≤ R}, made up of the pixels between the left and right peripheral points, which is a candidate fixation region; here j = 1, 2, …, k, k is the image height, Z denotes the set of integers, L denotes the left boundary of the target region, and R denotes its right boundary. The center-surround differences are computed as Δh(x_iL, y_j) = h(x_iL, y_j) − h(x_i, y_j) and Δh(x_iR, y_j) = h(x_iR, y_j) − h(x_i, y_j), where i = 1, 2, …, n; the smaller of Δh(x_iL, y_j) and Δh(x_iR, y_j) is taken as the center-surround difference Δh,
(4) computing the adaptive center-surround difference threshold T_j of the j-th horizontal scan line: T_j = μ_Δf + k·σ_Δf, where μ_Δf is the mean of the center-surround differences of the scan line, σ_Δf is their standard deviation, and k is a constant coefficient,
(5) computing the saliency rank of each point in the scan line: Δh(x_iL, y_j) and Δh(x_iR, y_j) are compared with the adaptive center-surround difference threshold T_j of the scan line; assuming Δh(x_iL, y_j) ≥ Δh(x_iR, y_j), if Δh(x_iL, y_j) ≥ T_j, the target region N(x_i, y_j) is a salient focus and the saliency rank of each of its points in the saliency map is set to S(x_i, y_j) = Δh(x_iL, y_j), while the saliency rank of every remaining point on the scan line is set to 0,
(6) after the whole image has been scanned, merging connected salient regions, taking the maximum saliency rank as the saliency rank of the merged region, to obtain the saliency map of the remote sensing image.
Preferably, in determining the key feature regions in step two, the salient regions are processed in order of the saliency rank of each fixation point; if the saliency rank of a fixation point satisfies S(x_i, y_j) > T, the fixation point belongs to a key feature region, where T is a preset decision threshold.
Preferably, step four further comprises: taking the key regions obtained in step two above, sorted by saliency rank, as the analysis objects; feeding the gray-level image signal into the network as input; obtaining the progressively trained essential features of the key feature regions through the trained convolutional neural network hierarchical model; and performing recognition through the radial basis function network of the output layer, the output being the feature type.
Compared with the prior art, the technical scheme of the present invention has the following advantages:
(1) the attention mechanism is used to focus rapidly on the key feature regions of the remote sensing image, which greatly reduces the amount of image data to be processed, ensures that the algorithm is efficient, and at the same time reduces interference from extraneous data and improves the accuracy of the algorithm;
(2) progressive training is adopted and applied to image recognition, which improves accuracy and avoids the time cost of manual feature extraction, thereby improving computational efficiency; it can provide a new line of thought for research in the field of visual feature detection.
Detailed Description of the Embodiments
The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer-readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of the disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time, or as a specific component that is manufactured to perform the task.
A detailed description of one or more embodiments of the invention is provided below, together with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims, and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example, and the invention may be practiced according to the claims without some or all of these specific details.
The object of the present invention is to address the difficulties of existing remote sensing image detection and the deficiencies of the prior art by introducing an attention mechanism into remote sensing image analysis, proposing an effective method for remote sensing image features, and overcoming the problems present in the prior art.
Conventional methods treat all regions of a remote sensing image equally, performing the same detection and analysis everywhere; in practice, however, the key feature regions of interest usually occupy only a very small part of the whole remote sensing image, possibly less than 1% or even less. This not only wastes the computing resources and time of the feature detection and recognition system, but also aggravates the difficulty of detection and analysis and reduces recognition accuracy. The present invention incorporates the selective attention mechanism of the human eye into remote sensing image feature and target detection: by simulating human visual scanning and performing an adaptive center-surround difference calculation, the saliency map is obtained directly, which quickly and effectively reduces the amount of image data to be processed and improves detection speed; more importantly, it effectively eliminates the interference of invalid data with the feature detection region, which helps to improve detection precision and accuracy. Furthermore, progressive training is adopted: the pixel grayscale signal of the key region is passed directly through the trained hierarchical network model, and the essential features obtained for the key feature region are recognized directly.
Fig. 1 is a flow chart of a method for automatic recognition of remote sensing image features according to an embodiment of the present invention. As shown in Fig. 1, the specific steps for implementing the present invention are as follows:
Step one: obtain the saliency map of the remote sensing image. By performing a visual scan directly on the grayscale remote sensing image, the central point of each scan line is searched for, and the saliency map and fixation points of the remote sensing image are obtained through a center-surround difference operation.
Human visual neurons are most sensitive to stimuli directed at the center of their receptive field, while a stimulus in the broader, weaker-contrast surround suppresses the response of the neuron. This sensitivity to local spatial discontinuities makes the structure particularly suitable for detecting regions of strong local saliency; it is the linear center-surround operation of the biological receptive field. For the calculation of the intensity feature, the present invention obtains the difference between the center and the surround of a viewpoint by line scanning and by computing an adaptive center-surround difference threshold, thereby realizing a linear center-surround operation similar to that of the biological receptive field.
In one embodiment of the present invention, the method for obtaining the saliency map of the image is as follows:
(1) Scan the image line by line, by rows (or columns). Fig. 1b shows the gray-level distribution curve along a scan line.
(2) Search for each local minimum point h(x_i, y_j) of the gray-level curve of the j-th row as a central point, i = 1, 2, …, n, where n is the number of central points; the pixel coordinate corresponding to each central point is (x_i, y_j), i = 1, 2, …, n.
From each central point, the nearest peak on either side is searched for, to the left and to the right, denoted h(x_iL, y_j) and h(x_iR, y_j), i = 1, 2, …, n; the pixels corresponding to these peaks are the peripheral points, located at (x_iL, y_j) and (x_iR, y_j) respectively, i = 1, 2, …, n.
(3) Define the target region N(x_i, y_j): N(x_i, y_j) = {(x_im, y_j) | m ∈ Z, L ≤ m ≤ R}, made up of the pixels between the left and right peripheral points; it is a candidate fixation region. Here j = 1, 2, …, k, k is the image height, Z denotes the set of integers, L denotes the left boundary of the target region, and R denotes its right boundary; for example, x_iL denotes the column coordinate of the left boundary of the target region, and x_iR the column coordinate of the right boundary.
The center-surround difference Δh: Δh(x_iL, y_j) = h(x_iL, y_j) − h(x_i, y_j), Δh(x_iR, y_j) = h(x_iR, y_j) − h(x_i, y_j), where i = 1, 2, …, n; the smaller of Δh(x_iL, y_j) and Δh(x_iR, y_j) is taken as the center-surround difference Δh.
(4) Compute the adaptive center-surround difference threshold T_j of the j-th horizontal scan line: T_j = μ_Δf + k·σ_Δf, where μ_Δf is the mean of the center-surround differences of the scan line, σ_Δf is their standard deviation, and k is a constant coefficient, typically 3 to 5.
(5) Compute the saliency rank of each point in the scan line: Δh(x_iL, y_j) and Δh(x_iR, y_j) are compared with the adaptive center-surround difference threshold T_j of the scan line; assume Δh(x_iL, y_j) ≥ Δh(x_iR, y_j). If Δh(x_iL, y_j) ≥ T_j, the target region N(x_i, y_j) is a salient focus, and the saliency rank of each of its points in the saliency map is set to S(x_i, y_j) = Δh(x_iL, y_j); the saliency rank of every remaining point on the scan line is set to 0.
(6) After the whole image has been scanned, merge connected salient regions, taking the maximum saliency rank as the saliency rank of the merged region; once merging is complete, the saliency map S of the remote sensing image is obtained.
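Sub-steps (4)–(6) — the adaptive threshold, the per-line saliency ranks, and the merge of connected salient regions by maximum rank — can be sketched as follows. This is a hedged illustration: the function names, the run-based region representation, and the flood-fill merge are implementation choices not prescribed by the patent.

```python
from statistics import mean, pstdev

def adaptive_threshold(diffs, k=3.0):
    """T_j = mu_Δf + k * sigma_Δf over one scan line's center-surround differences."""
    return mean(diffs) + k * pstdev(diffs)

def line_saliency(width, regions, t_j):
    """regions: (L, R, dh) runs from the center-surround search. A run whose
    difference dh reaches T_j is marked salient with rank dh; all other
    points on the line get rank 0."""
    ranks = [0.0] * width
    for L, R, dh in regions:
        if dh >= t_j:
            for x in range(L, R + 1):
                ranks[x] = dh
    return ranks

def merge_regions(saliency):
    """Step (6): merge 4-connected salient pixels, assigning each merged
    region the maximum rank found inside it (simple flood fill)."""
    h, w = len(saliency), len(saliency[0])
    out = [r[:] for r in saliency]
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if saliency[y][x] > 0 and not seen[y][x]:
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and saliency[ny][nx] > 0 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                best = max(saliency[cy][cx] for cy, cx in comp)
                for cy, cx in comp:
                    out[cy][cx] = best
    return out

merged = merge_regions([[0, 5, 5, 0],
                        [0, 7, 0, 0],
                        [0, 0, 0, 0]])
print(merged[0])  # the connected block takes the maximum rank, 7
```

Connected runs on adjacent scan lines thus collapse into a single salient region whose rank is the largest rank of any of its pixels.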
Step two: determine the key regions of the remote sensing image. According to the adaptive gradient threshold calculation method, the key feature regions are determined using the adaptive center-surround difference threshold and the saliency rank of each fixation point, so that these key feature regions receive prioritized, careful processing, improving the efficiency and accuracy of remote sensing image feature detection.
Specifically, the regions are sorted by the saliency rank of each fixation point, and salient regions of higher rank are processed first. If the saliency rank of a fixation point satisfies S(x_i, y_j) > T, where T is the decision threshold, the fixation point belongs to a key feature region and requires further careful analysis through the progressive training network of step three.
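The rank-ordered dispatch described here amounts to a cut-off at the decision threshold T followed by a sort. One possible reading, with illustrative names only:

```python
def select_key_regions(fixations, decision_t):
    """fixations: list of (region_id, saliency_rank) pairs. Keep those whose
    rank exceeds the preset decision threshold T, highest rank first, so the
    most salient regions are analysed first by the progressive network."""
    keep = [f for f in fixations if f[1] > decision_t]
    return sorted(keep, key=lambda f: f[1], reverse=True)

points = [("a", 12.0), ("b", 3.0), ("c", 25.5)]
print(select_key_regions(points, 10.0))  # [('c', 25.5), ('a', 12.0)]
```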
Step three: construct the progressive training network based on a convolutional neural network, using a growing method for the convolutional network: starting from an initial network, the network grows automatically according to a growing strategy until both recognition capability and detection efficiency reach the expected thresholds.
Similar to the biological visual neural network, a convolutional neural network extracts features hierarchically from local receptive regions. Appropriately increasing the number of perceptrons in each layer can increase the number of features each layer of the network extracts, improve the recognition capability of the network, and also improve robustness to noise, translation and disturbance, but only on the premise that the sample size is sufficient. If the sample size is relatively insufficient, a complex convolutional network is likely to be under-trained, reducing recognition capability; and even if the sample size is sufficient, as the scale of the convolutional network increases the amount of computation multiplies, so recognition capability may improve only slightly while detection efficiency drops considerably. A suitable convolutional neural network should therefore balance recognition capability and detection speed.
In view of this, the present invention improves the growing method of the convolutional network: starting from an initial network, the network grows automatically according to a growing strategy until both recognition capability and detection efficiency reach the expected thresholds. The structure of the initial network is shown in Fig. 2.
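The growing strategy is only stated abstractly here; a skeleton of the loop it implies might look like this. The `grow` and `evaluate` callbacks are placeholders standing in for the strategy and the measurement of recognition capability and detection efficiency; they are not part of the patent.

```python
def grow_network(initial, grow, evaluate,
                 target_accuracy, target_speed, max_steps=10):
    """Grow from an initial network until both recognition capability and
    detection efficiency reach their expected thresholds (or give up after
    max_steps growth rounds)."""
    net = initial
    for _ in range(max_steps):
        accuracy, speed = evaluate(net)
        if accuracy >= target_accuracy and speed >= target_speed:
            return net
        net = grow(net)  # e.g. widen a layer, per the growing strategy
    return net

# toy stand-ins: a "network" is just its width; accuracy rises, speed falls
net = grow_network(
    initial=2,
    grow=lambda n: n + 1,
    evaluate=lambda n: (0.5 + 0.1 * n, 100 - 5 * n),
    target_accuracy=0.9,
    target_speed=50,
)
print(net)  # 4
```

The termination condition mirrors the text: growth stops as soon as both measures clear their thresholds, so the network stays as small as the targets allow.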
The basic structure of the convolutional neural network has 7 layers, not counting the input, and every layer contains trainable parameters (connection weights). The input image is the obtained region of interest, scaled proportionally to a size of 32 × 32. Potential salient features, such as oil spills, can thus appear at the center of the receptive field of the top-level feature detectors.
Layer C1 is a convolutional layer, obtained by convolving the input image with two 5 × 5 convolution kernels, and consists of 2 feature maps. Each neuron in a feature map is connected to a 5 × 5 neighborhood of the input. The size of each feature map is 28 × 28. C1 has 52 trainable parameters (25 unit parameters per 5 × 5 filter plus one bias parameter, for 2 filters: (5 × 5 + 1) × 2 = 52 parameters in total) and 52 × (28 × 28) connections.
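Because the filter weights are shared across all positions of a feature map, the parameter and connection counts quoted for C1 can be checked with direct arithmetic:

```python
kernel = 5 * 5                       # weights per 5x5 filter
maps = 2                             # feature maps in C1
out = 32 - 5 + 1                     # valid convolution of a 32x32 input: 28

params = (kernel + 1) * maps         # +1 bias per filter -> 52
connections = params * out * out     # shared weights, applied at every position

print(params, connections)  # 52 40768
```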
Layer S2 is a down-sampling layer which exploits the principle of local image correlation to sub-sample the image; it has 2 feature maps of size 14 × 14. Each unit in a feature map is connected to a 2 × 2 neighborhood of the corresponding feature map in C1. The 4 inputs of each S2 unit are summed, multiplied by a trainable coefficient, and a trainable bias is added; the result is then passed through a Gaussian function. The trainable coefficient and bias control the degree of nonlinearity of the Gaussian function. If the coefficient is small, the operation approximates a linear operation, and the sub-sampling is equivalent to blurring the image. If the coefficient is large, the sub-sampling can be regarded, depending on the magnitude of the bias, as a noisy OR operation or a noisy AND operation. The 2 × 2 receptive fields of the units do not overlap, so each feature map in S2 is 1/4 the size of the corresponding feature map in C1 (1/2 in each of the row and column dimensions). S2 has 4 trainable parameters and 4 × (14 × 14) connections.
As shown in Fig. 3, the convolution process comprises: convolving an input image (at the first stage this is the input image itself; at later stages it is a convolutional feature map) with a trainable filter f_x, then adding a bias b_x, to obtain the convolutional layer C_x. The sub-sampling process comprises: summing each neighborhood of four pixels into one pixel, weighting by a scalar W_{x+1}, adding a bias b_{x+1}, and passing the result through a Gaussian activation function, producing a feature map S_{x+1} reduced roughly by a factor of four.
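The C_x / S_x pipeline of Fig. 3 can be illustrated with plain loops. For readability the Gaussian activation is replaced by the identity, and the filter, scalar weight, and bias values are arbitrary examples; all names are illustrative, not from the patent.

```python
def convolve_valid(img, kernel, bias=0.0):
    """'Valid' 2-D convolution with a trainable filter plus bias: one C_x map."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[bias + sum(kernel[u][v] * img[y + u][x + v]
                        for u in range(kh) for v in range(kw))
             for x in range(w)] for y in range(h)]

def subsample_2x2(fmap, weight=1.0, bias=0.0):
    """Sum each non-overlapping 2x2 neighborhood of four pixels into one,
    scale by W and shift by b: one S_x map, ~4x smaller."""
    return [[weight * (fmap[2*y][2*x] + fmap[2*y][2*x+1]
                       + fmap[2*y+1][2*x] + fmap[2*y+1][2*x+1]) + bias
             for x in range(len(fmap[0]) // 2)]
            for y in range(len(fmap) // 2)]

img = [[float(x + y) for x in range(6)] for y in range(6)]
c = convolve_valid(img, [[0.0, 0.0], [0.0, 1.0]])  # 2x2 kernel -> 5x5 map
s = subsample_2x2(c)                               # -> 2x2 map
print(len(c), len(s))  # 5 2
```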
The convolution operation can thus be regarded as a mapping from one plane to the next, and the S-layers can be regarded as blurring filters that perform further feature extraction. From one hidden layer to the next the spatial resolution decreases while the number of planes per layer increases, which allows more feature information to be detected.
Layer C3 is also a convolutional layer: layer S2 is convolved with 3 different 5 × 5 convolution kernels, and C3 consists of 3 feature maps of size 10 × 10, i.e. each map contains 10 × 10 neurons. Each feature map in C3 is connected to 1 or 2 of the feature maps in S2, indicating that each feature map of this layer is a different combination of the feature maps extracted by the previous layer, as shown in Fig. 4. Because the different feature maps have different inputs, they extract different features. As in the human visual system, lower-level structures compose more abstract upper-level structures; for example, edges compose parts of shapes or targets.
Layer S4 is a down-sampling layer consisting of 3 feature maps of size 5 × 5. Each unit in a feature map is connected to a 2 × 2 neighborhood of the corresponding feature map in C3, in the same way as the connection between C1 and S2. S4 has trainable parameters in the same fashion as S2 (1 coefficient and 1 bias per feature map).
Layer C5 is a convolutional layer with 100 feature maps (a number determined by the design of the output layer and layer F6). Each unit is connected to a 5 × 5 neighborhood of all the units of layer S4. Because the feature maps of S4 are also of size 5 × 5 (the same as the convolution kernel), the C5 feature maps are of size 1 × 1: this constitutes a full connection between S4 and C5.
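The feature-map sizes quoted for C1 through C5 follow from alternating 5 × 5 valid convolutions and non-overlapping 2 × 2 sub-sampling; the chain can be replayed numerically:

```python
def conv5(size):
    return size - 5 + 1   # valid 5x5 convolution

def pool2(size):
    return size // 2      # non-overlapping 2x2 sub-sampling

size = 32                 # input region of interest
size = conv5(size)        # C1: 28
size = pool2(size)        # S2: 14
size = conv5(size)        # C3: 10
size = pool2(size)        # S4: 5
size = conv5(size)        # C5: 1 -> effectively a full connection to S4
print(size)  # 1
```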
Layer F6 has 50 units (a number determined by the design of the output layer) and is fully connected to layer C5. As in a classical neural network, F6 computes the dot product between its input vector and its weight vector and adds a bias; the result is then passed through a Gaussian function to produce the state of each unit.
The output layer consists of Euclidean radial basis function (Euclidean RBF) units, one unit per class, each with 50 inputs. That is, each output RBF unit computes the Euclidean distance between its input vector and its parameter vector: the further the input is from the parameter vector, the larger the RBF output. The RBF parameter vectors play the role of target vectors for layer F6. Their components are +1 or −1, which lies within the range of the F6 Gaussian and thus prevents the Gaussian function from saturating. In fact, +1 and −1 are the points of maximum curvature of the Gaussian function, which forces the F6 units to operate in their maximally nonlinear range. Saturation of the Gaussian function must be avoided, because it leads to slower convergence of the loss function and to ill-conditioning.
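The decision rule of this output layer reduces to picking the class whose ±1 parameter vector is nearest in Euclidean distance. A minimal sketch, using a 4-dimensional feature vector and two made-up class prototypes for brevity (the real layer would use 50 inputs and one unit per feature type):

```python
def rbf_outputs(features, prototypes):
    """Each output unit's value is the squared Euclidean distance between
    the F6 feature vector and that class's +/-1 parameter vector."""
    return [sum((f - p) ** 2 for f, p in zip(features, proto))
            for proto in prototypes]

def classify(features, prototypes):
    # smaller RBF output = closer to the class's target vector
    outs = rbf_outputs(features, prototypes)
    return min(range(len(outs)), key=outs.__getitem__)

protos = [[+1, +1, -1, -1], [-1, -1, +1, +1]]  # 2 classes, 4 dims for brevity
print(classify([0.9, 1.0, -0.8, -1.1], protos))  # 0
```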
Step four: remote sensing image feature recognition based on the progressive training network. The pixel grayscale signal of the key region is passed directly through the trained convolutional neural network hierarchical model, and the progressively trained essential features obtained for the key feature region are recognized directly.
The key regions obtained in step two above, sorted by saliency rank, serve as the objects of selective analysis. The gray-level image of each key region is scaled proportionally to 32 × 32, ensuring that potential salient features, such as oil spill traces, appear at the center of the receptive field of the top-level feature detectors; this serves as the input image of the convolutional neural network. The grayscale signal of the 32 × 32 pixels is taken as the network input and passed directly through the trained convolutional neural network hierarchical model to obtain the 50 progressively trained essential features of the key feature region, which are recognized directly by the radial basis function network of the output layer; the output is the feature type.
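Preparing a key region for the network amounts to cropping a window around the fixation point and rescaling it to 32 × 32. A simplified sketch using nearest-neighbor resampling (the patent does not specify the window size, clamping behavior, or interpolation; all of those are assumptions here):

```python
def crop_and_resize(image, cx, cy, half, out=32):
    """Crop a (2*half)x(2*half) window centered on the fixation point
    (cx, cy), clamped to the image bounds, and resample it to out x out
    with nearest-neighbor interpolation. Assumes the image is at least
    2*half pixels in each dimension."""
    h, w = len(image), len(image[0])
    x0 = min(max(cx - half, 0), w - 2 * half)
    y0 = min(max(cy - half, 0), h - 2 * half)
    side = 2 * half
    return [[image[y0 + (j * side) // out][x0 + (i * side) // out]
             for i in range(out)] for j in range(out)]

img = [[(x + y) % 256 for x in range(100)] for y in range(100)]
patch = crop_and_resize(img, 50, 50, 20)
print(len(patch), len(patch[0]))  # 32 32
```

The resulting 32 × 32 gray patch is what the text describes as the network input, with the fixation point mapped to the patch center.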
In summary, the present invention overcomes the shortcomings of traditional approaches and better solves the problems of low efficiency, unreliability and poor consistency caused by manual work. It requires no additional image preprocessing, and overcomes the problems of traditional remote sensing feature image detection methods, namely poor adaptability, poor versatility, low efficiency and difficulty in detecting weak features.
Obviously, those skilled in the art should understand that each of the above modules or steps of the present invention can be implemented by a general-purpose computing system; they can be concentrated in a single computing system or distributed over a network formed by multiple computing systems; alternatively, they can be implemented with program code executable by a computing system, so that they can be stored in a storage system and executed by a computing system. Thus, the present invention is not restricted to any specific combination of hardware and software.
It should be understood that the above-described specific embodiments of the present invention are used only for exemplary illustration or explanation of the principles of the present invention, and are not to be construed as limiting the present invention. Therefore, any modification, equivalent substitution, improvement, etc. made without departing from the spirit and scope of the present invention should be included within the protection scope of the present invention. Furthermore, the appended claims of the present invention are intended to cover all changes and modifications that fall within the scope and boundary of the appended claims, or the equivalents of such scope and boundary.