CN104143102A - Online image data processing method - Google Patents

Online image data processing method

Info

Publication number
CN104143102A
CN104143102A (application CN201410381571.4A)
Authority
CN
China
Prior art keywords
remote sensing
area
point
sensing images
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410381571.4A
Other languages
Chinese (zh)
Other versions
CN104143102B (en)
Inventor
Mao Li (毛力)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Diyou Software Development Company Limited
Original Assignee
SICHUAN JIUCHENG INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SICHUAN JIUCHENG INFORMATION TECHNOLOGY Co Ltd
Priority to CN201410381571.4A
Publication of CN104143102A
Application granted
Publication of CN104143102B
Legal status: Expired - Fee Related


Abstract

The invention provides an online image data processing method for recognizing features in an image. The method comprises the steps of visual scanning, center-surround difference computation, gray-level saliency map generation, fixation point search, and key region determination. Using progressive training, the pixel gray-level signals of a key region are passed directly through a trained network layer model, and the essential features of the key feature region are obtained for direct recognition. By serially selecting regions in order of visual saliency intensity and performing image target recognition on them, the method improves the efficiency and accuracy of image analysis and recognition, with strong adaptability and good universality.

Description

Online image processing method
Technical field
The present invention relates to image processing, and in particular to an online feature recognition method and device for images used in environmental monitoring.
Background art
Along with the rapid introduction of computer technology, automatic control technology, and information and software engineering into the field of environmental protection, automation and intelligence have become important directions in the development of environmental monitoring by remote sensing. Computer vision technology, with its large information content, high precision, and wide sensing range, is widely used in remote-sensing-based water quality detection, and remote sensing image detection results serve as important evidence for water analysis and pollutant evaluation. There are two methods for evaluating remote sensing image detection results: manual evaluation and computer image recognition.
Traditional feature detection is completed mainly by hand by monitoring personnel. In current practice, the prevailing approach is to analyze remote sensing images manually and to determine by experience the type and position of polluted and unpolluted areas, and thereby to evaluate water quality, for example after a marine oil spill accident. Manual evaluation is affected by human factors and external conditions, and is inefficient, unreliable, and inconsistent. Analyzing, detecting, and recognizing remote sensing image features with computer image processing technology can largely solve the above problems of manual evaluation and make water grade estimation more scientific and objective.
To date, scholars at home and abroad have carried out a large amount of research on detection based on remote sensing image features, and definite progress has been made, but in aspects such as defect recognition, algorithm analysis, and experimental effect, the level of practical application in production has not yet been reached. Remote sensing detection images have non-uniform backgrounds, large gray-level fluctuations, fuzzy edge features, low contrast, and heavy noise. Taking the marine environment as an example, complicated sea conditions affect the spectral signature of an oil film and reduce oil film recognition accuracy. At present, recognition of the spectral response characteristics of oil films is based mainly on spectral data (point data) obtained in sea trials: a field spectroradiometer is used to acquire the spectral signature of oil in the visible and near-infrared bands, and the changes in the oil film spectrum and the oil-water contrast law are analyzed. Point-based spectral data, however, cannot provide oil film distribution information, which substantially limits improvement of the emergency prevention and control of marine pollution.
Therefore, for the above problems in the related art, no effective solution has yet been proposed.
Summary of the invention
To solve the above problems of the prior art, the present invention proposes an effective method for remote sensing image feature recognition.
The present invention adopts the following technical scheme: an online image processing method for online recognition of features in an image, characterized in that it comprises:
Step 1: visually scanning the grayscale remote sensing image, searching for the center points of each scan line, and obtaining the saliency map and the fixation points of the remote sensing image by a center-surround difference operation;
Step 2: according to an adaptive gradient threshold calculation method, determining the key feature regions using the adaptive center-surround difference threshold and the saliency level of each fixation point;
Step 3: constructing a progressive training network based on a convolutional neural network, inputting the pixel gray-level signals of the key regions into the trained convolutional neural network hierarchical model based on this progressive training network, and obtaining the progressively trained essential features of the key feature regions, thereby obtaining the remote sensing image features.
Preferably, said step 1 further comprises:
(1) scanning the remote sensing image line by line;
(2) searching each local minimum point h(x_i, y_j) of the gray-level curve of row j as a center point, i = 1, 2, …, n, where n is the number of center points; the pixel coordinates corresponding to each center point are (x_i, y_j);
starting from each center point and searching left and right for the nearest peak on each side, h(x_iL, y_j) and h(x_iR, y_j), i = 1, 2, …, n; the pixels corresponding to these peaks are the peripheral points, (x_iL, y_j) and (x_iR, y_j), i = 1, 2, …, n;
(3) defining the target region N(x_i, y_j) = {(x_im, y_j) | m ∈ Z, L ≤ m ≤ R}, composed of the pixels between the left and right peripheral points, as a candidate fixation region; where j = 1, 2, …, k, k is the image height, Z denotes the set of integers, L denotes the left boundary of the target region, and R denotes its right boundary;
computing the center-surround differences Δh(x_iL, y_j) = h(x_iL, y_j) − h(x_i, y_j) and Δh(x_iR, y_j) = h(x_iR, y_j) − h(x_i, y_j), where i = 1, 2, …, n, and taking the smaller of Δh(x_iL, y_j) and Δh(x_iR, y_j) as the center-surround difference Δh;
(4) calculating the adaptive center-surround difference threshold T_j of the j-th horizontal scan line: T_j = μ_Δf + kσ_Δf, where μ_Δf is the mean of the center-surround differences on the scan line, σ_Δf is their standard deviation, and k is a constant coefficient;
(5) calculating the saliency level of each point on the scan line: comparing Δh(x_iL, y_j) and Δh(x_iR, y_j) with the adaptive center-surround difference threshold T_j of each scan line; assuming Δh(x_iL, y_j) ≥ Δh(x_iR, y_j), if Δh(x_iL, y_j) ≥ T_j, the target region N(x_i, y_j) is a salient focus, and the saliency level of each of its points in the saliency map is set to S(x_i, y_j) = Δh(x_iL, y_j), while the saliency levels of the remaining points on the scan line are set to 0;
(6) after the whole image has been scanned, merging connected salient regions, taking the maximum saliency level as the saliency level of the merged region, and obtaining the saliency map of the remote sensing image.
Preferably, in determining the key feature regions in step 2, the salient regions are processed sequentially based on the ranking of the saliency levels of the fixation points; if the saliency level of a fixation point satisfies S(x_i, y_j) > T, the fixation point belongs to a key feature region, where T is a preset decision threshold.
Preferably, said step 4 further comprises:
taking the key regions obtained in step 2 above, ordered by saliency level, as the objects of analysis; using the gray-level image signals as the input of the network; obtaining the progressively trained essential features of the key feature regions through the trained convolutional neural network hierarchical model; and performing recognition through the radial basis function network of the output layer, the output being the feature type.
Compared with the prior art, the technical scheme of the present invention has the following advantages:
(1) the attention mechanism is used to focus quickly on the key feature regions of the remote sensing image, which greatly reduces the amount of image data to be processed, ensures that the algorithm has high efficiency, and at the same time reduces interference from extraneous data and improves the accuracy of the algorithm;
(2) progressive training is adopted and applied to image recognition, which improves accuracy and avoids the time cost of manual feature extraction, thereby improving computational efficiency; this can provide a new line of thought for research in the field of visual feature detection.
Brief description of the drawings
Fig. 1a is a flowchart of the method for automatic recognition of remote sensing image features according to an embodiment of the present invention.
Fig. 1b is a gray-level distribution curve of a remote sensing image scan line according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of the hierarchy of the initial network CN1 structure according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the convolution and subsampling process according to an embodiment of the present invention.
Fig. 4 is a table of the connection mode between C3 layer and S2 layer neurons according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of the training process of hierarchical network CN1 according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of the training process of hierarchical network CN2 according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of the training process of hierarchical network CN3 according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of the training process of hierarchical network CN4 according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of the layer structure of hierarchical network CN3 according to an embodiment of the present invention.
Fig. 10 is a table of the connection mode between C3 layer and S2 layer neurons according to an embodiment of the present invention.
Fig. 11 is a schematic diagram of the experimental results of hierarchical network CN3 according to an embodiment of the present invention.
Embodiment
The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer-readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or as a specific component that is manufactured to perform the task.
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims, and the invention encompasses numerous alternatives, modifications, and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example, and the invention may be practiced according to the claims without some or all of these specific details.
The object of the present invention is to address the difficulties of existing remote sensing image detection and the deficiencies of the prior art: an attention mechanism is introduced into remote sensing image analysis, and an effective method for remote sensing image feature recognition is proposed that overcomes the problems of the prior art.
Conventional methods treat all regions of a remote sensing image equally and subject them to the same detection and analysis, yet the key feature regions of actual interest usually occupy only a very small part of the whole remote sensing image, often less than 1%, or even less. This not only wastes the computational resources and computing time of the feature detection and recognition system, but also increases the difficulty of analysis and reduces the accuracy of detection and recognition. The present invention introduces the selective attention mechanism of the human eye into the detection of remote sensing image feature targets: by simulating human visual scanning, performing the adaptive center-surround difference calculation, and obtaining the saliency map directly, the amount of image data to be processed is reduced quickly and effectively and the detection speed is improved; more importantly, the interference of invalid data with the feature detection region is effectively eliminated, which helps to improve detection precision and accuracy. Furthermore, progressive training is adopted so that the pixel gray-level signals of the key regions pass directly through the trained network hierarchical model, and the essential features of the key feature regions are obtained and recognized directly.
Fig. 1a is a flowchart of the method for automatic recognition of remote sensing image features according to an embodiment of the present invention. As shown in Fig. 1a, the concrete steps for implementing the present invention are as follows:
Step 1: obtain the saliency map of the remote sensing image. The grayscale remote sensing image is visually scanned directly, the center points of each scan line are searched, and the saliency map and fixation points of the remote sensing image are obtained by the center-surround difference operation.
A human visual neuron is most sensitive to stimuli located in the small central area of its receptive field, while stimuli in the broader and weaker surrounding area inhibit the neuron's response. This sensitivity to local spatial discontinuities makes such a structure particularly suited to detecting regions of strong local saliency, and it constitutes the linear center-surround operation of the biological receptive field. In the present invention, the intensity feature is calculated by obtaining the difference between the center and the surround of a viewpoint through line scanning and calculation of the adaptive center-surround difference threshold, thereby realizing a linear center-surround operation similar to that of the biological receptive field.
In one embodiment of the present invention, the saliency map of the image is obtained as follows:
(1) The image is scanned line by line (by row or by column). Fig. 1b shows the gray-level distribution curve on a scan line.
(2) Each local minimum point h(x_i, y_j) of the gray-level curve of row j is searched as a center point, i = 1, 2, …, n, where n is the number of center points; the pixel coordinates corresponding to each center point are (x_i, y_j), i = 1, 2, …, n.
Starting from each center point, the nearest peaks on the left and right sides are searched, h(x_iL, y_j) and h(x_iR, y_j), i = 1, 2, …, n; the pixels corresponding to these peaks are the peripheral points, (x_iL, y_j) and (x_iR, y_j), i = 1, 2, …, n.
(3) The target region N(x_i, y_j) = {(x_im, y_j) | m ∈ Z, L ≤ m ≤ R} is defined, composed of the pixels between the left and right peripheral points, as a candidate fixation region. Here j = 1, 2, …, k, k is the image height, Z denotes the set of integers, L denotes the left boundary of the target region, and R denotes its right boundary; for example, x_iL is the column coordinate of the left boundary of the target region and x_iR is the column coordinate of its right boundary.
The center-surround differences are Δh(x_iL, y_j) = h(x_iL, y_j) − h(x_i, y_j) and Δh(x_iR, y_j) = h(x_iR, y_j) − h(x_i, y_j), where i = 1, 2, …, n; the smaller of Δh(x_iL, y_j) and Δh(x_iR, y_j) is taken as the center-surround difference Δh.
(4) The adaptive center-surround difference threshold of the j-th horizontal scan line is calculated as T_j = μ_Δf + kσ_Δf, where μ_Δf is the mean of the center-surround differences on the scan line, σ_Δf is their standard deviation, and k is a constant coefficient, generally 3 to 5.
(5) Saliency level of each point on the scan line: Δh(x_iL, y_j) and Δh(x_iR, y_j) are compared with the adaptive center-surround difference threshold T_j of each scan line, assuming Δh(x_iL, y_j) ≥ Δh(x_iR, y_j). If Δh(x_iL, y_j) ≥ T_j, the target region N(x_i, y_j) is a salient focus, and the saliency level of each of its points in the saliency map is set to S(x_i, y_j) = Δh(x_iL, y_j); the saliency levels of the remaining points on the scan line are set to 0.
(6) After the whole image has been scanned, connected salient regions are merged, the maximum saliency level is taken as the saliency level of each merged region, and after merging the saliency map S of the remote sensing image is obtained.
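The following is a minimal sketch, in Python with NumPy, of steps (1) to (6) on a single-channel image. All names are illustrative; per step (3) the smaller side difference defines Δh and enters the scan-line statistics of step (4), while, following the wording of step (5), the larger side difference is compared with T_j and used as the saliency level. The merging of regions connected across scan lines in step (6) is omitted for brevity.

```python
import numpy as np

def scanline_saliency(img, k=3.0):
    """Sketch of the scan-line center-surround saliency of step 1 (illustrative)."""
    height, width = img.shape
    sal = np.zeros(img.shape, dtype=float)
    for j in range(height):                                   # (1) scan row by row
        row = img[j].astype(float)
        regions = []
        for i in range(1, width - 1):                         # (2) local minima as center points
            if row[i] < row[i - 1] and row[i] <= row[i + 1]:
                iL = i                                        # climb left to the nearest peak
                while iL > 0 and row[iL - 1] >= row[iL]:
                    iL -= 1
                iR = i                                        # climb right to the nearest peak
                while iR < width - 1 and row[iR + 1] >= row[iR]:
                    iR += 1
                dhL = row[iL] - row[i]                        # (3) center-surround differences
                dhR = row[iR] - row[i]
                regions.append((iL, iR, dhL, dhR))
        if not regions:
            continue
        diffs = np.array([min(dhL, dhR) for (_, _, dhL, dhR) in regions])
        T_j = diffs.mean() + k * diffs.std()                  # (4) adaptive threshold of line j
        for (iL, iR, dhL, dhR) in regions:                    # (5) mark salient target regions
            if max(dhL, dhR) >= T_j:
                sal[j, iL:iR + 1] = max(dhL, dhR)             # saliency level of the region
    return sal                                                # (6) cross-line merging omitted
```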
Step 2: judge the key regions of the remote sensing image. According to the adaptive gradient threshold calculation method, the key feature regions are determined using the adaptive center-surround difference threshold and the saliency level of each fixation point, so that these key feature regions receive preferential and careful processing, improving the efficiency and accuracy of remote sensing image feature detection.
Specifically, the salient regions are sorted according to the saliency level of each fixation point, and the salient regions of higher rank are processed first.
If the saliency level of a fixation point satisfies S(x_i, y_j) > T, where T is the decision threshold, the fixation point belongs to a key feature region and requires further careful analysis by the progressive training network of step 3, as in the sketch below.
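A minimal sketch of this selection, reusing the saliency map `sal` from the sketch above; the decision threshold `T` is a tuning parameter whose value the patent does not fix.

```python
import numpy as np

def key_fixation_points(sal, T):
    """Fixation points with saliency level S > T, ranked for sequential processing."""
    ys, xs = np.nonzero(sal > T)                  # fixation points with S(x_i, y_j) > T
    order = np.argsort(sal[ys, xs])[::-1]         # highest saliency level first
    return list(zip(ys[order], xs[order]))
```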
Step 3: construct the progressive training network based on a convolutional neural network. Using a growth method for convolutional networks, starting from an initial network and following a growth rule, the network grows automatically until both its recognition capability and its detection efficiency reach the expected thresholds.
Similar to a biological visual neural network, a convolutional neural network extracts features hierarchically from local receptive regions. Appropriately increasing the number of perceptrons in each layer increases the number of features each layer of the network can extract and improves the recognition capability of the network, as well as its robustness to noise, translation, and perturbation, but only on the premise that the sample size is sufficient. If the sample size is relatively insufficient, a complicated convolutional network may be trained insufficiently and its recognition capability reduced; and even with sufficient samples, as the size of the convolutional network increases the computational load multiplies, so recognition capability may improve only slightly while detection efficiency drops considerably. A suitable convolutional neural network must therefore balance recognition capability and detection speed.
In view of this, the present invention improves the growth method of the convolutional network: starting from the initial network and following the growth rule, the network grows automatically until both recognition capability and detection efficiency reach the expected thresholds. The initial network structure is shown in Fig. 2.
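The outer growth loop can be sketched as follows. Every name here is a placeholder: `train`, `evaluate`, and `grow_once` stand in for the training procedure, the capability/efficiency measurement, and the growth rule; the concrete CN1 to CN4 progression of Figs. 5 to 8 is not reproduced.

```python
def grow_until_adequate(net, train, evaluate, grow_once,
                        capability_goal, efficiency_goal, max_rounds=10):
    """Grow from the initial network until recognition capability and
    detection efficiency both reach their expected thresholds (a sketch)."""
    for _ in range(max_rounds):
        train(net)
        capability, efficiency = evaluate(net)
        if capability >= capability_goal and efficiency >= efficiency_goal:
            break
        net = grow_once(net)   # apply the growth rule to obtain the next network
    return net
```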
The basic structure of this convolutional neural network has 7 layers, not counting the input, and every layer contains trainable parameters (connection weights). The input image is the obtained region of interest, scaled to a standard size of 32 × 32, so that potential salient features to be monitored, such as oil spill traces, can appear at the center of the receptive field of the highest-level feature detectors.
Layer C1 is a convolutional layer, obtained by convolving the input image with two 5 × 5 convolution kernels, and consists of 2 feature maps. Each neuron in a feature map is connected to a 5 × 5 neighborhood of the input. The size of each feature map is 28 × 28. C1 has 52 trainable parameters (each filter has 5 × 5 = 25 unit parameters and one bias parameter; with 2 filters in total, this gives (5 × 5 + 1) × 2 = 52 parameters) and 52 × (28 × 28) connections.
Layer S2 is a down-sampling layer. Using the principle of local image correlation, the image is subsampled, yielding 2 feature maps of size 14 × 14. Each unit in a feature map is connected to a 2 × 2 neighborhood of the corresponding feature map in C1. The 4 inputs of each S2 unit are added, multiplied by a trainable parameter, and a trainable bias is added; the result is computed through a Gaussian function. The trainable coefficient and bias control the degree of nonlinearity of the Gaussian function: if the coefficient is small, the operation approximates a linear operation and the subsampling is equivalent to blurring the image; if the coefficient is large, the subsampling can be regarded, depending on the magnitude of the bias, as a noisy OR or a noisy AND operation. The 2 × 2 receptive fields of the units do not overlap, so each feature map in S2 is 1/4 the size of a feature map in C1 (1/2 in each of the row and column directions). S2 has 4 trainable parameters and 4 × (14 × 14) connections.
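The parameter counts quoted above can be checked with a few lines of arithmetic (an illustrative calculation, not library code):

```python
c1_params = (5 * 5 + 1) * 2            # 25 weights + 1 bias per filter, 2 filters -> 52
c1_connections = c1_params * 28 * 28   # 52 x (28 x 28) connections
s2_params = 2 * 2                      # one coefficient and one bias per map, 2 maps -> 4
assert (c1_params, s2_params) == (52, 4)
```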
As shown in Fig. 3, the convolution process comprises: convolving an input image with a trainable filter f_x (in the first stage the input is the raw image; in later stages it is a convolutional feature map) and adding a bias b_x to obtain the convolutional layer C_x. The subsampling process comprises: summing the four pixels of each neighborhood into one pixel, weighting by a scalar W_{x+1}, adding a bias b_{x+1}, and then passing the result through a Gaussian activation function to produce a feature map S_{x+1} reduced roughly four-fold.
The mapping from one plane to the next can thus be regarded as a convolution operation, and an S-layer can be regarded as a blurring filter that performs secondary feature extraction. From hidden layer to hidden layer the spatial resolution decreases while the number of planes per layer increases, which allows more feature information to be detected.
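A minimal sketch of one convolution/subsampling pair in Python with NumPy and SciPy. The patent does not give the expression of the Gaussian activation, so exp(−u²) is assumed here; f_x, b_x, W, and b are placeholders for the trainable quantities.

```python
import numpy as np
from scipy.signal import convolve2d

def conv_layer(x, f_x, b_x):
    """Plane-to-plane mapping of Fig. 3: convolve the trainable filter f_x
    over the input plane x and add the bias b_x to obtain C_x."""
    return convolve2d(x, f_x, mode="valid") + b_x

def subsample_layer(x, W, b):
    """Sum each non-overlapping 2x2 neighborhood into one pixel, weight by the
    scalar W, add the bias b, and apply the Gaussian activation, shrinking
    the feature map roughly four-fold (half per axis)."""
    pooled = x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]
    return np.exp(-(W * pooled + b) ** 2)   # assumed Gaussian form exp(-u^2)
```

For a 32 × 32 input and a 5 × 5 kernel, `conv_layer` yields the 28 × 28 map of C1 and `subsample_layer` then yields the 14 × 14 map of S2, matching the sizes given above.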
Layer C3 is also a convolutional layer. It convolves layer S2 with 3 different 5 × 5 convolution kernels and consists of 3 feature maps of size 10 × 10, each containing 10 × 10 neurons. Each feature map in C3 is connected to 1 or 2 of the feature maps in S2, signifying that the feature maps of this layer are different combinations of the feature maps extracted by the previous layer, as shown in Fig. 4. Because different feature maps have different inputs, they extract different features. As in the human visual system, the structures of the lower layers form the more abstract structures of the upper layers; for example, edges form parts of shapes or targets.
Layer S4 is a down-sampling layer consisting of 16 feature maps of size 5 × 5. Each unit in a feature map is connected to a 2 × 2 neighborhood of the corresponding feature map in C3, in the same way as between C1 and S2. S4 has 4 trainable parameters (one coefficient and one bias per feature map).
Layer C5 is a convolutional layer with 100 feature maps (determined by the output layer and the F6 layer). Each unit is connected to 5 × 5 neighborhoods of all the units of layer S4. Because the size of the S4 feature maps is also 5 × 5 (the same as the convolution kernel), the size of the C5 feature maps is 1 × 1: this constitutes a full connection between S4 and C5.
Layer F6 has 50 units (determined by the design of the output layer) and is fully connected to C5. As in a classical neural network, F6 computes the dot product between its input vector and its weight vector and adds a bias; the result is then passed through a Gaussian function to produce the state of the unit.
The output layer is composed of Euclidean radial basis function (RBF) units, one unit per class, each with 50 inputs. Each output RBF unit computes the Euclidean distance between its input vector and its parameter vector: the farther the input is from the parameter vector, the larger the RBF output. The RBF parameter vectors play the role of target vectors for layer F6. Their components are +1 or −1, which lies within the range of the F6 Gaussian function and therefore prevents the Gaussian function from saturating; in fact, +1 and −1 are the points of maximum curvature of the Gaussian function, which keeps the F6 units operating in the maximally nonlinear region. Saturation of the Gaussian function must be avoided, because it leads to slower convergence of the loss function and to ill-conditioning.
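A minimal sketch of the Euclidean RBF output layer; `targets` (one row of 50 ±1 components per class) is a placeholder name:

```python
import numpy as np

def rbf_output(f6_state, targets):
    """One Euclidean RBF unit per class: squared distance between the
    50-dimensional F6 state and each class's +/-1 parameter vector."""
    return np.sum((f6_state[None, :] - targets) ** 2, axis=1)

# Since the output grows with distance, the recognized class is the unit
# with the smallest output:
# predicted = int(np.argmin(rbf_output(v, targets)))
```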
Step 4: remote sensing image feature recognition based on the progressive training network. The pixel gray-level signals of a key region are passed directly through the trained convolutional neural network hierarchical model, and the progressively trained essential features of the key feature region are obtained and recognized directly.
The key regions obtained in step 2 above, sorted by saliency level, serve as the objects of selective analysis. Each key region gray-level image is scaled to a standard 32 × 32, ensuring that potential salient features to be monitored, such as oil spill traces, can appear at the center of the receptive field of the highest-level feature detectors; this serves as the input image of the convolutional neural network. The gray-level signals of the 32 × 32 pixels are taken as the input of the network and passed directly through the trained convolutional neural network hierarchical model, the 50 progressively trained essential features of the key feature region are obtained, and recognition is performed directly by the radial basis function network of the output layer, the output being the feature type.
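Putting the pieces together, the recognition step can be sketched as below; `trained_model` (a callable mapping a 32 × 32 grayscale array to the 50 features) and `rbf_targets` are placeholders, not components defined by the patent.

```python
import numpy as np

def recognize_key_region(gray_roi, trained_model, rbf_targets):
    """Sketch of step 4: pass a key region through the trained hierarchical
    model and classify it with the Euclidean RBF output layer."""
    x = np.asarray(gray_roi, dtype=float)
    assert x.shape == (32, 32)                        # region pre-scaled to 32 x 32
    features = trained_model(x)                       # 50 essential features
    distances = np.sum((features - rbf_targets) ** 2, axis=1)
    return int(np.argmin(distances))                  # recognized feature type index
```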
In summary, the present invention overcomes the shortcomings of traditional methods and can largely solve the problems of low efficiency, unreliability, and poor consistency caused by human factors. Without any other image preprocessing, it overcomes the weak adaptability, poor universality, low efficiency, and difficulty in detecting weak features of traditional remote sensing feature image detection methods.
Obviously, those skilled in the art should appreciate that the modules and steps of the present invention described above may be implemented on a general-purpose computing system; they may be concentrated on a single computing system or distributed over a network formed by multiple computing systems; optionally, they may be implemented in program code executable by a computing system, so that they may be stored in a storage system and executed by a computing system. Thus, the present invention is not restricted to any specific combination of hardware and software.
It should be understood that the above embodiments of the present invention are only exemplary illustrations or explanations of the principle of the present invention and do not limit the present invention. Therefore, any modification, equivalent replacement, improvement, and the like made without departing from the spirit and scope of the present invention shall be included within the protection scope of the present invention. Furthermore, the appended claims are intended to cover all variations and modifications falling within the scope and boundaries of the appended claims or the equivalents of such scope and boundaries.

Claims (4)

1. An online image processing method for online recognition of features in an image, characterized in that it comprises:
Step 1: visually scanning the grayscale remote sensing image, searching for the center points of each scan line, and obtaining the saliency map and the fixation points of the remote sensing image by a center-surround difference operation;
Step 2: according to an adaptive gradient threshold calculation method, determining the key feature regions using the adaptive center-surround difference threshold and the saliency level of each fixation point;
Step 3: constructing a progressive training network based on a convolutional neural network, inputting the pixel gray-level signals of the key regions into the trained convolutional neural network hierarchical model based on this progressive training network, and obtaining the progressively trained essential features of the key feature regions, thereby obtaining the remote sensing image features.
2. The method according to claim 1, characterized in that said step 1 further comprises:
(1) scanning the remote sensing image line by line;
(2) searching each local minimum point h(x_i, y_j) of the gray-level curve of row j as a center point, i = 1, 2, …, n, where n is the number of center points; the pixel coordinates corresponding to each center point are (x_i, y_j);
starting from each center point and searching left and right for the nearest peak on each side, h(x_iL, y_j) and h(x_iR, y_j), i = 1, 2, …, n; the pixels corresponding to these peaks are the peripheral points, (x_iL, y_j) and (x_iR, y_j), i = 1, 2, …, n;
(3) defining the target region N(x_i, y_j) = {(x_im, y_j) | m ∈ Z, L ≤ m ≤ R}, composed of the pixels between the left and right peripheral points, as a candidate fixation region; where j = 1, 2, …, k, k is the image height, Z denotes the set of integers, L denotes the left boundary of the target region, and R denotes its right boundary;
computing the center-surround differences Δh(x_iL, y_j) = h(x_iL, y_j) − h(x_i, y_j) and Δh(x_iR, y_j) = h(x_iR, y_j) − h(x_i, y_j), where i = 1, 2, …, n, and taking the smaller of Δh(x_iL, y_j) and Δh(x_iR, y_j) as the center-surround difference Δh;
(4) calculating the adaptive center-surround difference threshold T_j of the j-th horizontal scan line: T_j = μ_Δf + kσ_Δf, where μ_Δf is the mean of the center-surround differences on the scan line, σ_Δf is their standard deviation, and k is a constant coefficient;
(5) calculating the saliency level of each point on the scan line: comparing Δh(x_iL, y_j) and Δh(x_iR, y_j) with the adaptive center-surround difference threshold T_j of each scan line; assuming Δh(x_iL, y_j) ≥ Δh(x_iR, y_j), if Δh(x_iL, y_j) ≥ T_j, the target region N(x_i, y_j) is a salient focus, and the saliency level of each of its points in the saliency map is set to S(x_i, y_j) = Δh(x_iL, y_j), while the saliency levels of the remaining points on the scan line are set to 0;
(6) after the whole image has been scanned, merging connected salient regions, taking the maximum saliency level as the saliency level of the merged region, and obtaining the saliency map of the remote sensing image.
3. The method according to claim 2, characterized in that, in determining the key feature regions in step 2, the salient regions are processed sequentially based on the ranking of the saliency levels of the fixation points; if the saliency level of a fixation point satisfies S(x_i, y_j) > T, the fixation point belongs to a key feature region, where T is a preset decision threshold.
4. The method according to claim 3, characterized in that said step 3 further comprises:
taking the key regions obtained in step 2 above, ordered by saliency level, as the objects of analysis; using the gray-level image signals as the input of the network; obtaining the progressively trained essential features of the key feature regions through the trained convolutional neural network hierarchical model; and performing recognition through the radial basis function network of the output layer, the output being the feature type.
CN201410381571.4A 2014-08-05 2014-08-05 Online image processing method Expired - Fee Related CN104143102B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410381571.4A CN104143102B (en) 2014-08-05 2014-08-05 Online image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410381571.4A CN104143102B (en) 2014-08-05 2014-08-05 Online image processing method

Publications (2)

Publication Number Publication Date
CN104143102A true CN104143102A (en) 2014-11-12
CN104143102B CN104143102B (en) 2017-08-11

Family

ID=51852272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410381571.4A Expired - Fee Related CN104143102B (en) 2014-08-05 2014-08-05 Online image processing method

Country Status (1)

Country Link
CN (1) CN104143102B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050271304A1 (en) * 2004-05-05 2005-12-08 Retterath Jamie E Methods and apparatus for automated true object-based image analysis and retrieval
US20080137960A1 * 2006-12-08 2008-06-12 Electronics And Telecommunications Research Institute Apparatus and method for detecting horizon in sea image
CN101470806A (en) * 2007-12-27 2009-07-01 东软集团股份有限公司 Vehicle lamp detection method and apparatus, interested region splitting method and apparatus
CN101521753A (en) * 2007-12-31 2009-09-02 财团法人工业技术研究院 Image processing method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
余永维 (Yu Yongwei) et al., "Adaptive morphological filtering method for extracting defects from magnetic tile surface images," Journal of Computer-Aided Design & Computer Graphics (《计算机辅助设计与图形学学报》) *
韩现伟 (Han Xianwei), "Research on key technologies of typical target recognition in large-format visible-light remote sensing images," China Doctoral Dissertations Full-text Database, Information Science and Technology (《中国博士学位论文全文数据库 信息科技辑》) *
顾佳玲 (Gu Jialing), 彭宏京 (Peng Hongjing), "Growing convolutional neural network and its application in face detection," Journal of System Simulation (《系统仿真学报》) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105989336A (en) * 2015-02-13 2016-10-05 中国科学院西安光学精密机械研究所 Scene identification method based on deconvolution deep network learning with weight
CN105989336B (en) * 2015-02-13 2020-11-17 中国科学院西安光学精密机械研究所 Scene recognition method based on deconvolution deep network learning with weight
CN107851214A (en) * 2015-07-23 2018-03-27 米雷普里卡技术有限责任公司 For the performance enhancement of two-dimensional array processor
CN105139385A (en) * 2015-08-12 2015-12-09 西安电子科技大学 Image visual saliency region detection method based on deep automatic encoder reconfiguration
CN105139385B (en) * 2015-08-12 2018-04-17 西安电子科技大学 Image vision salient region detection method based on the reconstruct of deep layer autocoder
CN105760442A (en) * 2016-02-01 2016-07-13 中国科学技术大学 Image feature enhancing method based on database neighborhood relation
CN105760442B (en) * 2016-02-01 2019-04-26 中国科学技术大学 Characteristics of image Enhancement Method based on database neighborhood relationships
CN106127783A (en) * 2016-07-01 2016-11-16 武汉泰迪智慧科技有限公司 A kind of medical imaging identification system based on degree of depth study
CN106778687A (en) * 2017-01-16 2017-05-31 大连理工大学 Method for viewing points detecting based on local evaluation and global optimization
CN106778687B (en) * 2017-01-16 2019-12-17 大连理工大学 Fixation point detection method based on local evaluation and global optimization
CN109410187A (en) * 2017-10-13 2019-03-01 北京昆仑医云科技有限公司 For detecting system, method and the medium of cancer metastasis in full sheet image
CN109410187B (en) * 2017-10-13 2021-02-12 北京昆仑医云科技有限公司 Systems, methods, and media for detecting cancer metastasis in a full image

Also Published As

Publication number Publication date
CN104143102B (en) 2017-08-11

Similar Documents

Publication Publication Date Title
CN104103033B (en) View synthesis method
CN104143102A (en) Online image data processing method
CN104299006B (en) A kind of licence plate recognition method based on deep neural network
Liu et al. An adaptive and robust edge detection method based on edge proportion statistics
CN104063719B (en) Pedestrian detection method and device based on depth convolutional network
CN108399628A (en) Method and system for tracking object
Yang et al. A dual attention network based on efficientNet-B2 for short-term fish school feeding behavior analysis in aquaculture
Guo et al. BARNet: Boundary aware refinement network for crack detection
CN102096824B (en) Multi-spectral image ship detection method based on selective visual attention mechanism
CN111259827B (en) Automatic detection method and device for water surface floating objects for urban river supervision
CN103295016A (en) Behavior recognition method based on depth and RGB information and multi-scale and multidirectional rank and level characteristics
CN109034184A (en) A kind of grading ring detection recognition method based on deep learning
He et al. Automatic recognition of traffic signs based on visual inspection
Fei et al. A novel visual attention method for target detection from SAR images
CN107704833A (en) A kind of front vehicles detection and tracking based on machine learning
Han et al. KCPNet: Knowledge-driven context perception networks for ship detection in infrared imagery
Jbene et al. Fusion of convolutional neural network and statistical features for texture classification
CN108765439A (en) A kind of sea horizon detection method based on unmanned water surface ship
Wang et al. An enhanced YOLOv4 model with self-dependent attentive fusion and component randomized mosaic augmentation for metal surface defect detection
Shustanov et al. A Method for Traffic Sign Recognition with CNN using GPU.
CN109284752A (en) A kind of rapid detection method of vehicle
CN101694385A (en) Small target detection instrument based on Fourier optics and detection method thereof
Zhang et al. A precise apple leaf diseases detection using BCTNet under unconstrained environments
Teutsch et al. Noise resistant gradient calculation and edge detection using local binary patterns
Deng et al. Research on pedestrian detection algorithms based on video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Xia Zhengxin

Inventor before: Mao Li

TA01 Transfer of patent application right

Effective date of registration: 20170713

Address after: Nanjing City, Jiangsu province 210046 Yuen Road No. 9

Applicant after: Nanjing Post & Telecommunication Univ.

Address before: 610000 A, building, No. two, Science Park, high tech Zone, Sichuan, Chengdu, China 103B

Applicant before: Sichuan Jiucheng Information Technology Co., Ltd.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20171206

Address after: Room 101, room 1, Baima Mountain Village, Xuanwu District, Nanjing, Jiangsu Province, 210042

Patentee after: Nanjing Diyou Software Development Company Limited

Address before: Nanjing City, Jiangsu province 210046 Yuen Road No. 9

Patentee before: Nanjing Post & Telecommunication Univ.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170811

Termination date: 20180805
