CN104143102B - Online image processing method - Google Patents

Online image processing method

Info

Publication number
CN104143102B
CN104143102B (application CN201410381571.4A)
Authority
CN
China
Prior art keywords
layers
characteristic
characteristic pattern
remote sensing
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410381571.4A
Other languages
Chinese (zh)
Other versions
CN104143102A (en)
Inventor
夏正新 (Xia Zhengxin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Diyou Software Development Company Limited
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University filed Critical Nanjing Post and Telecommunication University
Priority to CN201410381571.4A priority Critical patent/CN104143102B/en
Publication of CN104143102A publication Critical patent/CN104143102A/en
Application granted granted Critical
Publication of CN104143102B publication Critical patent/CN104143102B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides an online image processing method for recognizing features in an image, comprising visual scanning, center-surround difference computation, grayscale saliency map acquisition, fixation point search, and key feature region determination. With progressive training, the pixel grayscale signal of the key region is passed directly through the trained layered network model, and the essential features of the key feature regions obtained are recognized directly. The invention selects and recognizes image targets serially in order of visual saliency, improving the efficiency and accuracy of image analysis and recognition, with strong adaptability and good generality.

Description

Online image processing method
Technical field
The present invention relates to image processing, and more particularly to an online image feature recognition method and apparatus for environmental monitoring.
Background technology
As computer technology, automatic control technology, and information and software engineering are rapidly introduced into the field of environmental protection, automated and intelligent remote sensing has become an important direction in the development of environmental monitoring. Computer vision technology, with its large information content, high precision, and wide detection range, is widely used in remote-sensing-based water quality detection, and the detection results from remote sensing images serve as important evidence for water analysis and pollutant evaluation. There are two methods for evaluating remote sensing detection results: manual evaluation and computer image recognition.
Traditional feature detection is mainly performed manually by monitoring personnel. In current practice, the main approach is manual analysis of remote sensing images, empirically determining whether pollution is present and, if so, its type and location, and thereby evaluating water quality, for example in marine oil spill accidents. Manual evaluation is affected by human factors and external conditions; it is inefficient, unreliable, and inconsistent. Using computer image processing technology to analyze, detect, and recognize remote sensing image features can better solve these problems of manual evaluation and make water quality evaluation more scientific and objective.
To date, scholars at home and abroad have carried out extensive research on detection based on remote sensing image features and have made some progress in defect recognition, algorithm analysis, and experimental results, but the work has not yet reached the level required for practical application in production. Remote sensing detection images have uneven backgrounds, large grayscale fluctuations, blurred edge features, low contrast, and considerable noise. Taking the marine environment as an example, complex conditions affect the spectral characteristics of oil films and reduce oil film recognition accuracy. At present, recognition of oil film spectral response features is mainly based on spectral data (point data) acquired in sea trials: a field spectroradiometer is used to obtain the spectral characteristics of oil films in the visible to near-infrared band, and spectral feature changes and oil-water contrast rules are analyzed. Point-based spectral data cannot provide oil film distribution information, which hinders substantive improvement in emergency prevention and control of marine pollution.
Therefore, no effective solution has yet been proposed for the above problems in the related art.
Summary of the invention
To solve the above problems of the prior art, the present invention proposes an effective method for remote sensing image feature recognition.
The present invention adopts the following technical scheme: an online image processing method for online recognition of features in an image, characterized by comprising:
Step 1: performing visual scanning on a grayscale remote sensing image, searching for the center points of each scan line, and obtaining the saliency map and fixation points of the remote sensing image by center-surround difference computation;
Step 2: according to an adaptive gradient threshold calculation method, determining the key feature regions using the adaptive center-surround difference threshold and the saliency rank of each fixation point;
Step 3: constructing a progressively trained network based on convolutional neural networks, feeding the pixel grayscale signal of the key region into the trained convolutional neural network layered model based on the progressively trained network, and obtaining the progressively trained essential features of the key feature regions, thereby obtaining the remote sensing image features.
Preferably, step 1 further comprises:
(1) scanning the remote sensing image line by line,
(2) searching for each local minimum point h(x_i, y_j) of the grayscale curve of row j as a center point, i = 1, 2, ..., n, where n is the number of center points; the pixel coordinate corresponding to each center point is (x_i, y_j),
searching leftward and rightward from each center point for its nearest peaks, h(x_iL, y_j) and h(x_iR, y_j), i = 1, 2, ..., n; the pixels corresponding to these peaks are the surround points, (x_iL, y_j) and (x_iR, y_j), i = 1, 2, ..., n,
(3) defining the target region N(x_i, y_j) = {(x_im, y_j) | m ∈ Z, L ≤ m ≤ R}, composed of the pixels between the left and right surround points, as a candidate fixation region; where j = 1, 2, ..., k, k is the image height, Z is the set of integers, L is the left boundary of the target region, and R is the right boundary,
computing the center-surround differences Δh(x_iL, y_j) = h(x_iL, y_j) − h(x_i, y_j) and Δh(x_iR, y_j) = h(x_iR, y_j) − h(x_i, y_j), where i = 1, 2, ..., n; taking the smaller of Δh(x_iL, y_j) and Δh(x_iR, y_j) as the center-surround difference Δh,
(4) computing the adaptive center-surround difference threshold T_j of the j-th scan line: T_j = μ_Δf + k·σ_Δf, where μ_Δf is the mean of the center-surround differences of the scan line, σ_Δf is their standard deviation, and k is a constant coefficient,
(5) computing the saliency rank of each point on the scan line: comparing Δh(x_iL, y_j) and Δh(x_iR, y_j) with the adaptive center-surround difference threshold T_j of the scan line; assuming Δh(x_iL, y_j) ≥ Δh(x_iR, y_j), if Δh(x_iL, y_j) ≥ T_j, then the target region N(x_i, y_j) is a salient focus, and the saliency rank of each of its points in the saliency map is set to S(x_i, y_j) = Δh(x_iL, y_j), while the saliency rank of all remaining points on the scan line is set to 0,
(6) after the whole image has been scanned, merging connected salient regions, taking the maximum saliency rank as the saliency rank of the merged region, to obtain the saliency map of the remote sensing image.
Preferably, in the determination of the key feature regions in step 2, salient regions are processed sequentially based on the ordering of the saliency ranks of the fixation points; if the saliency rank of a fixation point satisfies S(x_i, y_j) > T, the fixation point belongs to a key feature region, where T is a preset decision threshold.
Preferably, step 3 further comprises:
taking the key regions obtained in step 2, sorted by saliency rank, as analysis objects, using the grayscale image signal as the input of the network, and passing it through the trained convolutional neural network layered model to obtain the progressively trained essential features of the key feature regions, which are recognized by an output-layer radial basis function network whose output is the feature type.
Compared with the prior art, the technical scheme of the present invention has the following advantages:
(1) the attention mechanism is used to focus quickly on the key feature regions of the remote sensing image, greatly reducing the amount of image data to be processed, ensuring high algorithm efficiency, and at the same time reducing interference from extraneous data and improving algorithm accuracy;
(2) progressive training is applied to image recognition, improving accuracy and avoiding the time cost of manual feature extraction, thereby improving computational efficiency and offering a new line of thought for research in the field of visual feature detection.
Brief description of the drawings
Fig. 1a is a flowchart of the method for automatic recognition of remote sensing image features according to an embodiment of the present invention.
Fig. 1b is a grayscale distribution curve of a remote sensing image scan line according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of the layer structure of the initial network CN1 according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the convolution and sub-sampling processes according to an embodiment of the present invention.
Fig. 4 is a table of the connection modes between C3-layer and S2-layer neurons according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of the training process of hierarchical network CN1 according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of the training process of hierarchical network CN2 according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of the training process of hierarchical network CN3 according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of the training process of hierarchical network CN4 according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of the layer structure of hierarchical network CN3 according to an embodiment of the present invention.
Fig. 10 is a table of the connection modes between C3-layer and S2-layer neurons according to an embodiment of the present invention.
Fig. 11 is a schematic diagram of the experimental results of hierarchical network CN3 according to an embodiment of the present invention.
Detailed description of the embodiments
The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer-readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or as a specific component that is manufactured to perform the task.
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims, and the invention encompasses numerous alternatives, modifications, and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example, and the invention may be practiced according to the claims without some or all of these specific details.
The object of the present invention is to address the difficulties of existing remote sensing image detection and the deficiencies of the prior art by introducing an attention mechanism into remote sensing image analysis, proposing an effective method for remote sensing image feature recognition that overcomes the problems of the prior art.
Conventional methods treat all regions of a remote sensing image equally, performing the same detection and analysis everywhere, but in practice the key feature regions of interest usually occupy only a very small portion of the whole remote sensing image, possibly less than 1%, or even less. This not only wastes the computing resources and time of the feature detection and recognition system, but also aggravates the difficulty of detection and analysis and reduces recognition accuracy. The present invention incorporates the selective attention mechanism of the human eye into remote sensing image target detection: by simulating human visual scanning and performing adaptive center-surround difference computation, the saliency map is obtained directly, which quickly and effectively reduces the amount of image data to be processed and improves detection speed; more importantly, it effectively eliminates the interference of invalid data with the feature detection regions, helping to improve detection precision and accuracy. In addition, progressive training is adopted: the pixel grayscale signal of the key region is passed directly through the trained layered network model, and the essential features of the key feature regions obtained are recognized directly.
Fig. 1a is a flowchart of the method for automatic recognition of remote sensing image features according to an embodiment of the present invention. As shown in Fig. 1a, the specific steps for implementing the present invention are as follows:
Step 1: obtain the saliency map of the remote sensing image. Visual scanning is performed directly on the grayscale remote sensing image, the center points of each scan line are searched, and the saliency map and fixation points of the remote sensing image are obtained by center-surround difference computation.
Human visual neurons are most sensitive to stimuli in the central region of their receptive field, while stimuli in the broader, weakly contrasting surround suppress the neuron's response. This structure, sensitive to local spatial discontinuities, is particularly suited to detecting regions of strong local saliency; it is the linear center-surround operation of the biological receptive field. In computing the intensity feature, the present invention obtains the difference between the center and the surround of a fixation point by line scanning and by computing an adaptive center-surround difference threshold, thereby realizing a linear center-surround operation similar to that of the biological receptive field.
In one embodiment of the invention, the saliency map of the image is obtained as follows:
(1) scan the image line by line (by row or by column). Fig. 1b shows the grayscale distribution curve along a scan line.
(2) search for each local minimum point h(x_i, y_j) of the grayscale curve of row j as a center point, i = 1, 2, ..., n, where n is the number of center points; the pixel coordinate corresponding to each center point is (x_i, y_j), i = 1, 2, ..., n.
Search leftward and rightward from each center point for its nearest peaks, h(x_iL, y_j) and h(x_iR, y_j), i = 1, 2, ..., n; the pixels corresponding to these peaks are the surround points, (x_iL, y_j) and (x_iR, y_j), i = 1, 2, ..., n.
(3) define the target region N(x_i, y_j): N(x_i, y_j) = {(x_im, y_j) | m ∈ Z, L ≤ m ≤ R}, composed of the pixels between the left and right surround points, as a candidate fixation region, where j = 1, 2, ..., k, k is the image height, Z is the set of integers, L is the left boundary of the target region, and R is the right boundary; for example, x_iL is the column coordinate of the left boundary of the target region and x_iR is the column coordinate of the right boundary.
Center-surround difference Δh: Δh(x_iL, y_j) = h(x_iL, y_j) − h(x_i, y_j), Δh(x_iR, y_j) = h(x_iR, y_j) − h(x_i, y_j), where i = 1, 2, ..., n; the smaller of Δh(x_iL, y_j) and Δh(x_iR, y_j) is taken as the center-surround difference Δh.
(4) compute the adaptive center-surround difference threshold T_j of the j-th scan line: T_j = μ_Δf + k·σ_Δf, where μ_Δf is the mean of the center-surround differences of the scan line, σ_Δf is their standard deviation, and k is a constant coefficient, typically 3 to 5.
(5) saliency rank of each point on the scan line: compare Δh(x_iL, y_j) and Δh(x_iR, y_j) with the adaptive center-surround difference threshold T_j of the scan line, assuming Δh(x_iL, y_j) ≥ Δh(x_iR, y_j). If Δh(x_iL, y_j) ≥ T_j, then the target region N(x_i, y_j) is a salient focus, and the saliency rank of each of its points in the saliency map is set to S(x_i, y_j) = Δh(x_iL, y_j); the saliency rank of all remaining points on the scan line is set to 0.
(6) after the whole image has been scanned, connected salient regions are merged, with the maximum saliency rank taken as the saliency rank of the merged region; after merging, the saliency map S of the remote sensing image is obtained.
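For illustration, the per-scanline computation of steps (2) to (5) may be sketched in Python as follows. This is a simplified sketch, not the patented implementation: the function name, the peak-search details, and the handling of scan-line endpoints are assumptions, and the cross-row region merging of step (6) is omitted.

```python
import numpy as np

def scanline_saliency(row, k=3.0):
    """Center-surround saliency for one grayscale scan line (illustrative sketch)."""
    n = len(row)
    # step (2): interior local minima are center points; local maxima are candidate peaks
    minima = [i for i in range(1, n - 1) if row[i] <= row[i - 1] and row[i] <= row[i + 1]]
    maxima = [i for i in range(1, n - 1) if row[i] >= row[i - 1] and row[i] >= row[i + 1]]

    saliency = np.zeros(n)
    regions = []
    for c in minima:
        left = max((m for m in maxima if m < c), default=None)   # nearest peak to the left
        right = min((m for m in maxima if m > c), default=None)  # nearest peak to the right
        if left is None or right is None:
            continue  # endpoint handling is unspecified in the text; simply skip here
        # center-surround difference: the smaller of the two peak-minus-center gaps
        dh = min(row[left] - row[c], row[right] - row[c])
        regions.append((left, right, dh))

    if not regions:
        return saliency
    diffs = [dh for _, _, dh in regions]
    # step (4): adaptive threshold T_j = mean + k * std of this line's differences
    t_j = np.mean(diffs) + k * np.std(diffs)
    # step (5): regions reaching the threshold become salient foci with rank dh
    for left, right, dh in regions:
        if dh >= t_j:
            saliency[left:right + 1] = np.maximum(saliency[left:right + 1], dh)
    return saliency

row = np.array([2, 6, 3, 7, 2, 8, 1, 9, 3], dtype=float)
print(scanline_saliency(row, k=1.0))
```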
Step 2: determine the key regions of the remote sensing image. According to the adaptive gradient threshold calculation method, the key feature regions are determined using the adaptive center-surround difference threshold and the saliency rank of each fixation point, so that these key feature regions receive prioritized, careful processing, improving the efficiency and accuracy of remote sensing image feature detection.
Specifically, the fixation points are sorted by saliency rank, and salient regions with higher ranks are processed first.
If the saliency rank of a fixation point satisfies S(x_i, y_j) > T, where T is the decision threshold, the fixation point belongs to a key feature region and is subjected to further careful analysis by the progressively trained network of step 3.
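As a non-limiting sketch, selecting and ordering the key regions from the saliency map could look like the following (the function name and the point-wise treatment of regions are illustrative assumptions):

```python
import numpy as np

def select_key_regions(saliency_map, T):
    """Return (x, y, rank) of fixation points with rank > T, highest rank first."""
    ys, xs = np.nonzero(saliency_map > T)
    ranks = saliency_map[ys, xs]
    order = np.argsort(-ranks)  # higher saliency rank is processed first
    return [(int(x), int(y), float(r))
            for x, y, r in zip(xs[order], ys[order], ranks[order])]

S = np.zeros((4, 4))
S[1, 2], S[3, 0] = 9.0, 5.0
print(select_key_regions(S, T=4.0))  # [(2, 1, 9.0), (0, 3, 5.0)]
```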
Step 3: construct the progressively trained network based on convolutional neural networks. Using the growth method for convolutional networks, starting from an initial network and following a growth strategy, the network grows automatically until both recognition capability and detection efficiency reach expected thresholds.
Like biological visual neural networks, convolutional neural networks extract features hierarchically from local receptive regions. Appropriately increasing the number of perceptrons in each layer can increase the number of features each layer of the network can extract and improve the network's recognition capability, as well as its robustness to noise, translation, and distortion, but only provided the sample size is sufficient. If the sample size is relatively insufficient, a complex convolutional network is likely to be under-trained, reducing recognition capability; even with sufficient samples, as the scale of the convolutional network grows, the computational load multiplies, which may yield only a slight improvement in recognition capability at a large cost in detection efficiency. A suitable convolutional neural network should balance recognition capability and detection speed.
In view of this, the present invention improves the growth method for convolutional networks: starting from an initial network and following a growth strategy, the network grows automatically until both recognition capability and detection efficiency reach expected thresholds. The initial network structure is shown in Fig. 2.
The basic convolutional neural network structure has 7 layers, not counting the input, and every layer contains trainable parameters (connection weights). The input image is the obtained region of interest, scaled to a size of 32 × 32, so that potential salient features such as oil spills can appear at the center of the receptive field of the top-level feature detectors.
Layer C1 is a convolutional layer obtained by convolving the input image with two 5 × 5 convolution kernels; it consists of 2 feature maps. Each neuron in a feature map is connected to a 5 × 5 neighborhood of the input. The size of each feature map is 28 × 28. C1 has 52 trainable parameters (each filter has 5 × 5 = 25 unit parameters and one bias parameter; with 2 filters, that is (5 × 5 + 1) × 2 = 52 parameters in total) and 52 × (28 × 28) connections.
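The C1 parameter and connection counts follow directly from these figures (a quick arithmetic check):

```python
filters, k, out = 2, 5, 28
params = (k * k + 1) * filters    # (25 weights + 1 bias) per filter -> 52
connections = params * out * out  # 52 x (28 x 28) -> 40768
print(params, connections)        # 52 40768
```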
Layer S2 is a down-sampling layer that uses the principle of local image correlation to sub-sample the image; it has 2 feature maps of size 14 × 14. Each unit in a feature map is connected to a 2 × 2 neighborhood of the corresponding feature map in C1. The 4 inputs of each S2 unit are summed, multiplied by a trainable coefficient, and added to a trainable bias; the result is passed through a Gaussian function. The trainable coefficient and bias control the degree of nonlinearity of the Gaussian function. If the coefficient is small, the operation approximates a linear operation, and the sub-sampling amounts to blurring the image; if the coefficient is large, the sub-sampling can be regarded, depending on the bias, as a noisy OR operation or a noisy AND operation. The 2 × 2 receptive fields of the units do not overlap, so each feature map in S2 is 1/4 the size of the feature maps in C1 (1/2 in each of rows and columns). Layer S2 has 4 trainable parameters and 4 × (14 × 14) connections.
As shown in Fig. 3, the convolution process comprises: convolving an input image (at the first stage this is the input image itself; at later stages it is a convolutional feature map) with a trainable filter f_x and adding a bias b_x to obtain the convolutional layer C_x. The sub-sampling process comprises: summing each four-pixel neighborhood into one pixel, weighting by a scalar W_{x+1}, adding a bias b_{x+1}, and passing the result through a Gaussian activation function to produce a feature map S_{x+1} reduced roughly by a factor of four.
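A rough NumPy rendering of this convolution/sub-sampling pair, under stated assumptions ('valid' convolution, non-overlapping 2 × 2 pooling, and exp(−z²) as the Gaussian activation, none of which the text pins down exactly):

```python
import numpy as np

def conv_layer(img, f, b):
    """C_x: 'valid' convolution of img with trainable filter f, plus bias b."""
    H, W = img.shape
    kh, kw = f.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * f) + b
    return out

def subsample_layer(fmap, w, b):
    """S_{x+1}: sum each 2x2 block, scale by w, add bias b, Gaussian activation."""
    H, W = fmap.shape
    blocks = fmap[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).sum(axis=(1, 3))
    return np.exp(-(w * blocks + b) ** 2)  # assumed form of the Gaussian nonlinearity

img = np.random.rand(32, 32)
c1 = conv_layer(img, np.random.randn(5, 5), 0.1)  # 32x32 -> 28x28
s2 = subsample_layer(c1, 0.25, 0.0)               # 28x28 -> 14x14
print(c1.shape, s2.shape)
```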
Each such mapping from one plane to the next can thus be regarded as a convolution operation, with the S-layers acting as blurring filters that perform secondary feature extraction. The spatial resolution decreases from hidden layer to hidden layer while the number of planes per layer increases, so more feature information can be detected.
Layer C3 is also a convolutional layer; it convolves layer S2 with 3 different 5 × 5 convolution kernels and consists of 3 feature maps of size 10 × 10, i.e., each map contains 10 × 10 neurons. Each feature map in C3 is connected to 1 or 2 of the feature maps in S2, meaning that each feature map of this layer is a different combination of the feature maps extracted by the previous layer, as shown in Fig. 4. Because different feature maps have different inputs, they can extract different features. As in the human visual system, lower-level structures compose more abstract higher-level structures; for example, edges compose parts of shapes or targets.
Layer S4 is a down-sampling layer composed of 16 feature maps of size 5 × 5. Each unit in a feature map is connected to a 2 × 2 neighborhood of the corresponding feature map in C3, in the same way as between C1 and S2. Layer S4 has 4 trainable parameters (one coefficient and one bias per feature map).
Layer C5 is a convolutional layer with 100 feature maps (a number determined by the design of the output layer and layer F6). Each unit is connected to a 5 × 5 neighborhood of all of the S4 units. Because the feature maps of layer S4 are also of size 5 × 5 (the same as the convolution kernel), the C5 feature maps are of size 1 × 1: this constitutes a full connection between S4 and C5.
Layer F6 has 50 units (determined by the design of the output layer) and is fully connected to layer C5. As in a classical neural network, layer F6 computes the dot product between its input vector and its weight vector, plus a bias; the result is passed through a Gaussian function to produce the state of the unit.
The output layer is composed of Euclidean radial basis function (RBF) units, one unit per class, each with 50 inputs. That is, each output RBF unit computes the Euclidean distance between its input vector and its parameter vector: the farther the input is from the parameter vector, the larger the RBF output. The RBF parameter vectors play the role of target vectors for layer F6. Their components are +1 or −1, which lies within the range of the F6 Gaussian and thus prevents the Gaussian function from saturating; in fact, +1 and −1 are the points of maximum curvature of the Gaussian function, which makes the F6 units operate in the maximally nonlinear region. Saturation of the Gaussian function must be avoided, because it would slow the convergence of the loss function and cause ill-conditioning.
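A minimal sketch of such a Euclidean RBF output layer (an illustration only; the number of classes and the parameter vectors are placeholders):

```python
import numpy as np

def rbf_output(f6, params):
    """Squared Euclidean distance from the F6 vector to each class's parameter vector.

    f6     -- output of layer F6, shape (50,)
    params -- one +/-1 parameter vector per class, shape (num_classes, 50)
    """
    return np.sum((params - f6) ** 2, axis=1)  # smaller distance = closer match

params = np.sign(np.random.randn(4, 50))       # 4 hypothetical feature classes
f6 = np.tanh(np.random.randn(50))
print(int(np.argmin(rbf_output(f6, params))))  # index of the predicted feature type
```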
Step 4: remote sensing image feature recognition based on the progressively trained network. The pixel grayscale signal of the key region is passed directly through the trained convolutional neural network layered model, and the progressively trained essential features of the key feature regions obtained are recognized directly.
The key regions obtained in step 2, sorted by saliency rank, are taken as the objects of selective analysis. Each key region's grayscale image is scaled to 32 × 32, ensuring that potential salient features such as oil spill traces can appear at the center of the receptive field of the top-level feature detectors, and serves as the input image of the convolutional neural network. The 32 × 32 pixel grayscale signal is fed as the network input directly through the trained convolutional neural network layered model to obtain the 50 progressively trained essential features of the key feature region, which are recognized directly by the output-layer radial basis function network, whose output is the feature type.
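For example, scaling a detected key region to the 32 × 32 network input might be done as follows (a nearest-neighbor sketch; the text specifies only the target size, not the resampling method):

```python
import numpy as np

def to_32x32(patch):
    """Nearest-neighbor rescale of a grayscale key region to the 32x32 network input."""
    H, W = patch.shape
    ys = np.arange(32) * H // 32  # source row for each output row
    xs = np.arange(32) * W // 32  # source column for each output column
    return patch[np.ix_(ys, xs)]

patch = np.random.rand(57, 43)   # a hypothetical key region
print(to_32x32(patch).shape)     # (32, 32), ready for layer C1
```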
In summary, the present invention overcomes the shortcomings of traditional methods and can better solve the problems of low efficiency, unreliability, and poor consistency caused by manual work. It requires no other image preprocessing and overcomes the problems of traditional remote sensing feature image detection methods: weak adaptability, poor generality, low efficiency, and difficulty in detecting weak features.
Obviously, those skilled in the art should understand that each of the above modules or steps of the present invention can be implemented with a general-purpose computing system; they can be concentrated in a single computing system or distributed over a network formed by multiple computing systems; alternatively, they can be implemented with program code executable by a computing system, so that they can be stored in a storage system and executed by a computing system. Thus, the present invention is not restricted to any specific combination of hardware and software.
It should be understood that the above-described embodiments of the present invention are only used for exemplary illustration or explanation of the principles of the present invention and are not to be construed as limiting the invention. Therefore, any modification, equivalent substitution, improvement, etc. made without departing from the spirit and scope of the present invention shall be included within the scope of protection of the present invention. In addition, the appended claims of the present invention are intended to cover all changes and modifications that fall within the scope and boundaries of the claims, or the equivalents of such scope and boundaries.

Claims (1)

1. An online image processing method for online recognition of features in an image, characterized by comprising:
Step 1: performing visual scanning on a grayscale remote sensing image, searching for the center points of each scan line, and obtaining the saliency map and fixation points of the remote sensing image by center-surround difference computation;
Step 2: according to an adaptive gradient threshold calculation method, determining the key feature regions using the adaptive center-surround difference threshold and the saliency rank of each fixation point;
Step 3: constructing a progressively trained network based on convolutional neural networks, feeding the pixel grayscale signal of the key region into the trained convolutional neural network layered model based on the progressively trained network, and obtaining the progressively trained essential features of the key feature regions, thereby obtaining the remote sensing image features;
wherein step 1 further comprises:
(1) scanning the remote sensing image line by line,
(2) searching for each local minimum point h(x_i, y_j) of the grayscale curve of row j as a center point, i = 1, 2, ..., n, where n is the number of center points; the pixel coordinate corresponding to each center point is (x_i, y_j),
searching leftward and rightward from each center point for its nearest peaks, h(x_iL, y_j) and h(x_iR, y_j), i = 1, 2, ..., n; the pixels corresponding to these peaks are the surround points, (x_iL, y_j) and (x_iR, y_j), i = 1, 2, ..., n,
(3) defining the target region N(x_i, y_j) = {(x_im, y_j) | m ∈ Z, L ≤ m ≤ R}, composed of the pixels between the left and right surround points, as a candidate fixation region; where j = 1, 2, ..., k, k is the image height, Z is the set of integers, L is the left boundary of the target region, and R is the right boundary,
computing the center-surround differences Δh(x_iL, y_j) = h(x_iL, y_j) − h(x_i, y_j) and Δh(x_iR, y_j) = h(x_iR, y_j) − h(x_i, y_j), where i = 1, 2, ..., n; taking the smaller of Δh(x_iL, y_j) and Δh(x_iR, y_j) as the center-surround difference Δh,
(4) computing the adaptive center-surround difference threshold T_j of the j-th scan line: T_j = μ_Δf + k·σ_Δf, where μ_Δf is the mean of the center-surround differences of the scan line, σ_Δf is their standard deviation, and k is a constant coefficient,
(5) computing the saliency rank of each point on the scan line: comparing Δh(x_iL, y_j) and Δh(x_iR, y_j) with the adaptive center-surround difference threshold T_j of the scan line; assuming Δh(x_iL, y_j) ≥ Δh(x_iR, y_j), if Δh(x_iL, y_j) ≥ T_j, then the target region N(x_i, y_j) is a salient focus, and the saliency rank of each of its points in the saliency map is set to S(x_i, y_j) = Δh(x_iL, y_j), while the saliency rank of all remaining points on the scan line is set to 0,
(6) after the whole image has been scanned, merging connected salient regions, taking the maximum saliency rank as the saliency rank of the merged region, to obtain the saliency map of the remote sensing image;
wherein in the determination of the key feature regions in step 2, salient regions are processed sequentially based on the ordering of the saliency ranks of the fixation points; if the saliency rank of a fixation point satisfies S(x_i, y_j) > T, the fixation point belongs to a key feature region, where T is a preset decision threshold;
wherein step 3 further comprises:
taking the key regions obtained in step 2, sorted by saliency rank, as analysis objects, using the grayscale image signal as the input of the network, and passing it through the trained convolutional neural network layered model to obtain the progressively trained essential features of the key feature regions, which are recognized by an output-layer radial basis function network whose output is the feature type;
the convolutional neural network starts from an initial network and, following a growth strategy, grows automatically until both recognition capability and detection efficiency reach expected thresholds; the basic convolutional neural network structure, not counting the input, consists of layer C1, layer S2, layer C3, layer S4, layer C5, layer F6, and an output layer; every layer contains trainable parameters; the input image is the obtained region of interest, scaled to a size of 32 × 32;
layer C1 is a convolutional layer obtained by convolving the input image with two 5 × 5 convolution kernels and consists of 2 feature maps; each neuron in a feature map is connected to a 5 × 5 neighborhood of the input; the size of each feature map is 28 × 28; C1 has 52 trainable parameters (each filter has 5 × 5 = 25 unit parameters and one bias parameter; with 2 filters, 52 parameters in total) and 52 × (28 × 28) connections;
layer S2 is a down-sampling layer that sub-samples the image and has 2 feature maps of size 14 × 14; each unit in a feature map is connected to a 2 × 2 neighborhood of the corresponding feature map in C1; the 4 inputs of each S2 unit are summed, multiplied by a trainable coefficient, and added to a trainable bias; the result is passed through a Gaussian function; the trainable coefficient and bias control the degree of nonlinearity of the Gaussian function; if the coefficient is small, the operation approximates a linear operation and the sub-sampling amounts to blurring the image; if the coefficient is large, the sub-sampling can be regarded, depending on the bias, as a noisy OR operation or a noisy AND operation; the 2 × 2 receptive fields of the units do not overlap, and each feature map in S2 is 1/4 the size of the feature maps in C1; layer S2 has 4 trainable parameters and 4 × (14 × 14) connections;
the convolution process comprises: convolving an input image with a trainable filter f_x and adding a bias b_x to obtain the convolutional layer C_x; the sub-sampling process comprises: summing each four-pixel neighborhood into one pixel, weighting by a scalar W_{x+1}, adding a bias b_{x+1}, and passing the result through a Gaussian activation function to produce a feature map S_{x+1} reduced roughly by a factor of four;
layer C3 is also a convolutional layer that convolves layer S2 with 3 different 5 × 5 convolution kernels and consists of 3 feature maps of size 10 × 10, i.e., each map contains 10 × 10 neurons; each feature map in C3 is connected to 1 or 2 of the feature maps in S2, meaning that each feature map of this layer is a different combination of the feature maps extracted by the previous layer;
layer S4 is a down-sampling layer composed of 16 feature maps of size 5 × 5; each unit in a feature map is connected to a 2 × 2 neighborhood of the corresponding feature map in C3, in the same way as between C1 and S2; layer S4 has 4 trainable parameters, one coefficient and one bias per feature map;
layer C5 is a convolutional layer with 100 feature maps; each unit is connected to a 5 × 5 neighborhood of all of the S4 units; the size of the S4 feature maps is also 5 × 5, so the size of the C5 feature maps is 1 × 1: this constitutes a full connection between S4 and C5;
layer F6 has 50 units, determined by the design of the output layer, and is fully connected to layer C5; layer F6 computes the dot product between its input vector and its weight vector, plus a bias; the result is passed through a Gaussian function to produce the state of the unit;
the output layer is composed of Euclidean radial basis function (RBF) units, one unit per class, each with 50 inputs; that is, each output RBF unit computes the Euclidean distance between its input vector and its parameter vector.
CN201410381571.4A 2014-08-05 2014-08-05 Online image processing method Expired - Fee Related CN104143102B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410381571.4A CN104143102B (en) 2014-08-05 2014-08-05 Online image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410381571.4A CN104143102B (en) 2014-08-05 2014-08-05 Online image processing method

Publications (2)

Publication Number Publication Date
CN104143102A CN104143102A (en) 2014-11-12
CN104143102B true CN104143102B (en) 2017-08-11

Family

ID=51852272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410381571.4A Expired - Fee Related CN104143102B (en) 2014-08-05 2014-08-05 Online image processing method

Country Status (1)

Country Link
CN (1) CN104143102B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105989336B (en) * 2015-02-13 2020-11-17 中国科学院西安光学精密机械研究所 Scene recognition method based on deconvolution deep network learning with weight
WO2017015649A1 (en) * 2015-07-23 2017-01-26 Mireplica Technology, Llc Performance enhancement for two-dimensional array processor
CN105139385B (en) * 2015-08-12 2018-04-17 西安电子科技大学 Image vision salient region detection method based on the reconstruct of deep layer autocoder
CN105760442B (en) * 2016-02-01 2019-04-26 中国科学技术大学 Characteristics of image Enhancement Method based on database neighborhood relationships
CN106127783A (en) * 2016-07-01 2016-11-16 武汉泰迪智慧科技有限公司 A kind of medical imaging identification system based on degree of depth study
CN106778687B (en) * 2017-01-16 2019-12-17 大连理工大学 Fixation point detection method based on local evaluation and global optimization
CN109410187B (en) * 2017-10-13 2021-02-12 北京昆仑医云科技有限公司 Systems, methods, and media for detecting cancer metastasis in a full image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101470806A (en) * 2007-12-27 2009-07-01 东软集团股份有限公司 Vehicle lamp detection method and apparatus, interested region splitting method and apparatus
CN101521753A (en) * 2007-12-31 2009-09-02 财团法人工业技术研究院 Image processing method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7590310B2 (en) * 2004-05-05 2009-09-15 Facet Technology Corp. Methods and apparatus for automated true object-based image analysis and retrieval
US8275170B2 (en) * 2006-12-08 2012-09-25 Electronics And Telecommunications Research Institute Apparatus and method for detecting horizon in sea image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101470806A (en) * 2007-12-27 2009-07-01 东软集团股份有限公司 Vehicle lamp detection method and apparatus, interested region splitting method and apparatus
CN101521753A (en) * 2007-12-31 2009-09-02 财团法人工业技术研究院 Image processing method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Growing convolutional neural networks and their application in face detection; Gu Jialing, Peng Hongjing; Journal of System Simulation; 2009-04-30; Vol. 21, No. 8; pages 2-3, chapter 2 *
Research on key technologies for typical target recognition in large-format visible-light remote sensing images; Han Xianwei; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2014-01-15; pages 46-47, section 3.2 *
Adaptive morphological filtering defect extraction method for magnetic tile surface images; Yu Yongwei et al.; Journal of Computer-Aided Design & Computer Graphics; 2012-03-30; Vol. 24, No. 3; page 4, section 2.2 *

Also Published As

Publication number Publication date
CN104143102A (en) 2014-11-12

Similar Documents

Publication Publication Date Title
CN104143102B (en) Online image processing method
CN104103033B (en) View synthesis method
CN104299006B (en) A kind of licence plate recognition method based on deep neural network
Zhou et al. Fusion PSPnet image segmentation based method for multi-focus image fusion
CN109446992A (en) Remote sensing image building extracting method and system, storage medium, electronic equipment based on deep learning
CN108399628A (en) Method and system for tracking object
CN101013469B (en) Image processing method and image processor
CN106326874A (en) Method and device for recognizing iris in human eye images
CN109800824A (en) A kind of defect of pipeline recognition methods based on computer vision and machine learning
CN107705288A (en) Hazardous gas spillage infrared video detection method under pseudo- target fast-moving strong interferers
CN101615292B (en) Accurate positioning method for human eye on the basis of gray gradation information
CN109978882A (en) A kind of medical imaging object detection method based on multi-modal fusion
CN109034184A (en) A kind of grading ring detection recognition method based on deep learning
CN105608429A (en) Differential excitation-based robust lane line detection method
CN108154147A (en) The region of interest area detecting method of view-based access control model attention model
CN110135446A (en) Method for text detection and computer storage medium
CN110428389A (en) Low-light-level image enhancement method based on MSR theory and exposure fusion
CN103149163A (en) Multispectral image textural feature-based beef tenderness detection device and method thereof
CN104361571B (en) Infrared and low-light image fusion method based on marginal information and support degree transformation
Niu et al. Automatic localization of optic disc based on deep learning in fundus images
CN105930793A (en) Human body detection method based on SAE characteristic visual learning
CN101694385B (en) Small target detection instrument based on Fourier optics and detection method thereof
Hirsch et al. Color visual illusions: A statistics-based computational model
Wang et al. MeDERT: A metal surface defect detection model
Niu et al. Underwater Waste Recognition and Localization Based on Improved YOLOv5.

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Xia Zhengxin

Inventor before: Mao Li

CB03 Change of inventor or designer information
TA01 Transfer of patent application right

Effective date of registration: 20170713

Address after: No. 9 Yuen Road, Nanjing, Jiangsu Province 210046

Applicant after: Nanjing Post & Telecommunication Univ.

Address before: Room 103B, Building A, No. 2 Science Park, High-tech Zone, Chengdu, Sichuan 610000, China

Applicant before: Sichuan Jiucheng Information Technology Co., Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20171206

Address after: Room 101, room 1, Baima Mountain Village, Xuanwu District, Nanjing, Jiangsu Province, 210042

Patentee after: Nanjing Diyou Software Development Company Limited

Address before: No. 9 Yuen Road, Nanjing, Jiangsu Province 210046

Patentee before: Nanjing Post & Telecommunication Univ.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170811

Termination date: 20180805

CF01 Termination of patent right due to non-payment of annual fee