CN106991397A - Remote sensing image detection method based on a visual-saliency-constrained deep belief network - Google Patents
- Publication number
- CN106991397A CN106991397A CN201710211411.9A CN201710211411A CN106991397A CN 106991397 A CN106991397 A CN 106991397A CN 201710211411 A CN201710211411 A CN 201710211411A CN 106991397 A CN106991397 A CN 106991397A
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- detection method
- sensing images
- image
- deep belief
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/255—Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Biomedical Technology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a remote sensing image detection method based on a visual-saliency-constrained deep belief network, comprising the following steps: S1: locating targets in the image under test using a visual-saliency-based coarse target localization method and generating candidate windows to be detected; S2: obtaining a deep belief network model through model training; S3: classifying the images within the candidate windows using the deep belief network model obtained in step S2 and producing the final detection result. The remote sensing image detection method of the present invention improves both the efficiency and the accuracy of target detection.
Description
Technical field
The present invention relates to the technical field of remote sensing image processing, and in particular to a fast remote sensing image target detection method based on a visual-saliency-constrained deep belief network.
Background art
Target detection is an important basic application in remote sensing image interpretation and analysis, with important application value in both military and civilian fields. With the development of remote sensing imaging technology, the ground-object information reflected in remote sensing images has become increasingly rich, which provides more usable target information for image target detection tasks; nevertheless, target detection under complex backgrounds remains extremely challenging. The conventional approach to the target detection problem is to first train a target detector using image slices containing the target as training data, and then scan every position of the whole image under test with this detector using an exhaustive search. Although such methods have achieved some success in practice, many problems and challenges remain.
One key problem in automatic target detection for remote sensing images is how to select and extract highly discriminative features that accurately separate targets from background; such features must remain sufficiently robust in the face of the complex and changeable backgrounds of remote sensing images. To bridge the "semantic gap" between low-level image features and high-level semantic understanding, researchers have carried out a great deal of work on designing stable high-level semantic features. Compared with low-level features, high-level semantic features better reflect the prior knowledge and semantic information of stable target characteristics, but conventional semantic feature extraction algorithms rely heavily on manual feature design and selection; in complex environments or with large data volumes, manually selecting stable features remains a difficult task.
Another problem in automatic target detection for remote sensing images is how to search the image for possible targets. The exhaustive search strategy based on the sliding-window method commonly used in target search requires an enormous amount of computation and is very time-consuming, which makes conventional target detection algorithms generally slow and of limited practicality. How to perform fast target search in remote sensing images therefore remains a challenge.
Summary of the invention
(1) Technical problem to be solved
To solve the technical problems of the prior art, namely difficult manual feature selection in large-scene remote sensing images, computationally complex localization algorithms, and slow speed, the present invention proposes a fast remote sensing image target detection method based on a visual-saliency-constrained deep belief network.
(2) Technical scheme
According to an aspect of the invention, there is provided a remote sensing image detection method based on a visual-saliency-constrained deep belief network, comprising the following steps: S1: locating targets in the image under test using a visual-saliency-based coarse target localization method and generating candidate windows to be detected; S2: obtaining a deep belief network model through model training; S3: classifying the images within the candidate windows using the deep belief network model obtained in step S2 and producing the final detection result.
(3) Beneficial effects
It can be seen from the above technical scheme that the remote sensing image detection method based on a visual-saliency-constrained deep belief network of the present invention has at least one of the following advantages:
(1) Saliency annotation yields relatively stable target segmentation results even when image colors vary markedly, ensuring target localization accuracy during fast search;
(2) The present invention adopts a non-overlapping window initialization search method that uses the saliency annotation results and performs target localization through iterative optimization, which significantly improves both the speed and the accuracy of target localization during detection;
(3) The present invention uses a deep belief network model for target feature extraction and classification. Conventional unsupervised training of restricted Boltzmann machines generally takes the whole training image as input and cannot fully encode the local structural features of the target. The present invention uses an unsupervised training method based on an image-block strategy, i.e., taking the saliency image and the original image simultaneously as training data while applying local-region constraints, so that the restricted Boltzmann machine represents local structural features more fully, improving the generalization ability of the model and the detection accuracy.
Brief description of the drawings
Fig. 1 is a schematic framework diagram of the fast remote sensing image target detection method based on a visual-saliency-constrained deep belief network according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of the visual-saliency-based coarse target localization method in the detection method of the embodiment of the present invention.
Fig. 3 is a layout diagram of the initial search windows of the coarse target localization method in the detection method of the embodiment of the present invention.
Fig. 4 is a schematic diagram of image slice partitioning in the detection method of the embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical schemes and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
The present invention provides a remote sensing image detection method based on a visual-saliency-constrained deep belief network. The detection method comprises the following steps: S1: locating targets in the image under test using a visual-saliency-based coarse target localization method and generating a number of candidate windows to be detected; S2: obtaining a deep belief network model through model training; S3: classifying the images within the candidate windows using the trained deep belief network model and producing the final detection result.
An exemplary embodiment of the present invention provides a fast remote sensing image target detection method based on a visual-saliency-constrained deep belief network. Fig. 1 is a schematic framework diagram of this method. As shown in Fig. 1, the method divides the whole detection process into two main stages: model training and image-under-test detection. The model training stage comprises unsupervised pre-training of restricted Boltzmann machines and fine-tuning of the multilayer neural network. In unsupervised pre-training, N restricted Boltzmann machines (N ≥ 2) are first trained separately using a training method based on a partitioning strategy, with the original image and the saliency image (obtained by saliency computation) as training data, as shown in Fig. 1. The N restricted Boltzmann machines are then fused using a merging method, and the fused restricted Boltzmann machine serves as the first layer of the deep belief network for training the subsequent restricted Boltzmann machines. After pre-training is complete, a supervised layer is first added on top of the deep belief network, and the network parameters are fine-tuned using the back-propagation algorithm, yielding a deep belief network model for classification and detection. In the target detection stage, targets in the image under test are first located using the visual-saliency-based coarse target localization method, producing a number of candidate windows to be detected; the deep belief network model obtained by the training above then classifies these windows and produces the final detection result.
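The two-stage detection flow described above can be sketched as the following pipeline. This is an illustrative skeleton only: `coarse_localize` and `classify_window` are hypothetical stand-ins for the saliency-based localizer (S1) and the trained deep belief network classifier (S2/S3), which the patent does not express as code.

```python
import numpy as np

def detect(image, coarse_localize, classify_window):
    """S1: generate candidate windows; S3: keep the windows that the
    trained model (stood in for by classify_window) accepts as targets."""
    candidates = coarse_localize(image)                          # S1
    return [w for w in candidates if classify_window(image, w)]  # S3

# Toy stand-ins: windows are (x, y, w, h); "target" = any saliency inside.
img = np.zeros((20, 20))
img[5:10, 5:10] = 1.0
wins = [(0, 0, 10, 10), (10, 10, 10, 10)]
is_target = lambda im, w: im[w[1]:w[1]+w[3], w[0]:w[0]+w[2]].sum() > 0
print(detect(img, lambda im: wins, is_target))  # prints [(0, 0, 10, 10)]
```

The key property is that the expensive classifier runs only on the few candidate windows produced in S1, not on every sliding-window position.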
Fig. 2 is a schematic diagram of the visual-saliency-based coarse target localization method in the detection method of the embodiment of the present invention. The basic principle of the method is briefly introduced first. A biological visual system can easily identify the regions of interest in an image and notice its important information. This visual saliency arises from image attributes such as color, gradient, edges and boundaries; it is closely related to how the biological visual system perceives and processes visual stimuli and has been widely studied in many scientific fields. Based on this visual processing mechanism, computing salient regions allows limited computational resources to be allocated preferentially to the parts of the image that contain information of interest. Detecting and extracting salient regions in an image by computer can therefore greatly improve the efficiency of image analysis and understanding. As shown in Fig. 2, the visual-saliency-based coarse target localization method comprises the following steps: S11: obtaining, by saliency computation, a saliency image of the same size as the original image; S12: setting initial search windows of a given size on the saliency image; S13: optimizing the positions of the search windows using an iterative optimization algorithm; S14: merging overlapping search windows using a non-maxima suppression algorithm, thereby obtaining a number of candidate windows to be detected.
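The window-merging step S14 can be sketched as a standard non-maxima suppression over the optimized windows. The scoring of each window (here simply a supplied score list, in practice something like the total saliency inside the window) and the overlap threshold of 0.5 are illustrative assumptions; the patent only names the non-maxima suppression algorithm.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def nms_merge(windows, scores, thresh=0.5):
    """Keep the highest-scoring window in each overlapping group."""
    order = sorted(range(len(windows)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(windows[i], windows[j]) < thresh for j in kept):
            kept.append(i)
    return [windows[i] for i in kept]

# Two heavily overlapping windows and one isolated window:
wins = [(0, 0, 10, 10), (1, 1, 10, 10), (50, 50, 10, 10)]
print(nms_merge(wins, [0.9, 0.5, 0.7]))  # prints [(0, 0, 10, 10), (50, 50, 10, 10)]
```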
As a specific embodiment, the saliency computation uses a normed-gradients algorithm as the saliency image computation method.
Fig. 3 is a layout diagram of the initial search windows of the coarse target localization method in the detection method of the embodiment of the present invention. As shown in Fig. 3, the initial search windows do not overlap and are densely tiled so as to cover the whole image. Unlike natural-scene images, remote sensing images are acquired by vertical imaging, so typical ground-object targets (such as aircraft, vehicles and buildings) generally do not occlude one another. The initial search windows can therefore be arranged in the manner shown in Fig. 3: if the window size is set appropriately, then for any target in the image at least one search window will cover the majority of that target's area.
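The tiled layout of Fig. 3 can be generated as below. The image and window sizes are illustrative, and the handling of the image border (shifting the last row/column inward, which introduces slight overlap there) is an assumption the patent does not specify.

```python
def init_windows(img_h, img_w, win):
    """Tile an img_h x img_w image with win x win search windows.
    Windows are laid out without overlap; if the image size is not a
    multiple of win, the final row/column is shifted inward so every
    window stays inside the image and coverage remains complete."""
    xs = list(range(0, img_w - win + 1, win))
    ys = list(range(0, img_h - win + 1, win))
    if xs[-1] + win < img_w:
        xs.append(img_w - win)  # shifted final column (slight overlap)
    if ys[-1] + win < img_h:
        ys.append(img_h - win)  # shifted final row
    return [(x, y, win, win) for y in ys for x in xs]

grid = init_windows(100, 100, 36)
print(len(grid))  # prints 9 (a 3 x 3 grid for a 100 x 100 image)
```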
Specifically, the iterative optimization algorithm in this embodiment is summarized as follows:
A. Input the saliency image M of the image under test, the initial search window W_p and its center position p_c, and the iteration stopping step δ; let p_c = (x_c, y_c), δ = 2, x_ij = i, y_ij = j.
B. Compute the centroid of the pixel values of the saliency image M within the search window W_p.
C. Compute p_c' = (x_c', y_c') and its Euclidean distance d from p_c.
D. Compare d with δ and proceed as follows: if d > δ, take p_c' = (x_c', y_c') as the new center of the search window and return to step B to continue the iteration; if d < δ, terminate the iteration.
E. Output the final search window W_o.
As a specific embodiment, the pixel-value centroid coordinates in step B are computed as follows (the equation was lost in extraction and is reconstructed here from the surrounding definitions, with $S_{ij}$ taken as 0 outside the search window $W_p$):

$$x_c' = \frac{\sum_{i=1}^{h}\sum_{j=1}^{w} S_{ij}\,x_{ij}}{\sum_{i=1}^{h}\sum_{j=1}^{w} S_{ij}}, \qquad y_c' = \frac{\sum_{i=1}^{h}\sum_{j=1}^{w} S_{ij}\,y_{ij}}{\sum_{i=1}^{h}\sum_{j=1}^{w} S_{ij}}$$

where $S_{ij}$ is the pixel value at coordinate $(i, j)$ of the saliency image, $x_{ij}$ and $y_{ij}$ are the weight coefficients of the abscissa and ordinate respectively, $h$ is the height of the saliency image, and $w$ is its width.
Through the iterative optimization above, an initial search window containing most of a target is very likely to be moved to the exact position of the target, while false windows containing no target will be removed during classification and detection. Conventional image target detection methods are mostly based on sliding windows and their extensions. Their principle is to scan the image with a detection window of fixed size, in order or according to some rule, with a fixed step, with the aim of capturing possible targets inside some detection window. Such sliding-window methods do not rely on any prior knowledge or parameter learning and therefore have many shortcomings. First, sliding-window scanning produces a huge number of windows to be detected, resulting in low detection efficiency. Second, because the search uses a fixed step and no prior knowledge, the produced windows often cannot be positioned accurately on the target, causing many false alarms and missed detections and reducing detection precision and recall. Third, conventional sliding-window search cannot automatically adjust and optimize window positions. By comparison, the coarse target localization method of the present invention not only greatly reduces the number of candidate windows to be detected, improving the detection efficiency of the system, but also significantly improves target localization accuracy through search-window position adjustment. This brings great improvement to the subsequent feature extraction and classification-detection process, improving target detection accuracy while greatly strengthening robustness across different classification models.
The deep belief network model used in this embodiment is a 6-layer deep belief network comprising a visible layer, a supervised layer and four hidden layers, with 2592, 300, 100, 100, 300 and 2 nodes per layer respectively. The visible layer has 2592 input nodes, used to input the original image and the saliency image (during training the images are first scaled to 36 × 36 pixels, and 36 × 36 × 2 = 2592). In this embodiment, the visible-layer data of each restricted Boltzmann machine are real numbers between 0 and 1, and the activation probabilities of the previous restricted Boltzmann machine serve as the visible-layer input of the next one.
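The 2592-node visible-layer input described above is simply the 36 × 36 original patch and the 36 × 36 saliency patch flattened and concatenated. A sketch follows; the layer sizes come from the embodiment, while the min-max normalization used to keep values in [0, 1] is an assumption:

```python
import numpy as np

LAYER_SIZES = [2592, 300, 100, 100, 300, 2]  # visible, 4 hidden, supervised

def make_visible_input(patch, saliency_patch):
    """Flatten and concatenate a 36x36 image patch and its 36x36 saliency
    patch into one visible-layer vector with values in [0, 1]."""
    assert patch.shape == (36, 36) and saliency_patch.shape == (36, 36)
    v = np.concatenate([patch.ravel(), saliency_patch.ravel()]).astype(float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)  # scale into [0, 1]
    return v

v = make_visible_input(np.random.rand(36, 36), np.random.rand(36, 36))
print(v.shape[0] == LAYER_SIZES[0])  # prints True
```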
Spatial-structure semantic features are extremely important for remote sensing image targets. To better extract local structural features and provide a basis for semantic feature extraction by the higher-level restricted Boltzmann machines, this embodiment employs a block-based restricted Boltzmann machine pre-training method. Specifically, before pre-training, the 36 × 36 pixel training images are first partitioned into blocks, as shown in Fig. 4. Then 50 mutually independent restricted Boltzmann machines are randomly initialized, and each is trained on one of the sub-image sets. After these restricted Boltzmann machines have been trained, their parameters are merged to initialize one larger restricted Boltzmann machine. Each of the 50 small restricted Boltzmann machines (denoted SRBM) has 6 hidden-layer nodes, so the merged restricted Boltzmann machine (denoted BRBM) has 300 hidden-layer nodes. When merging parameters, the weights of the first SRBM initialize the weights connected to the first 6 hidden nodes of the BRBM, the weights of the second SRBM initialize the weights connected to the next 6 hidden nodes, and so on. However, since the two layers of a restricted Boltzmann machine are fully connected to each other, many connection weights in the BRBM remain uninitialized; these weights are initialized to 0. Similarly, the bias terms of the BRBM are initialized by merging the bias terms of the SRBMs. After the block training above is complete, the BRBM is trained further on the complete training images until the network parameters finally converge.
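The parameter merge can be pictured as placing each SRBM's weight matrix into its own diagonal block of the BRBM's weight matrix and zeroing the cross-block connections. A sketch with small illustrative sizes follows; the assumption that the visible units are partitioned disjointly among the SRBMs (one block per SRBM) matches the block training described above but is not spelled out by the patent:

```python
import numpy as np

def merge_rbms(srbm_weights, srbm_vbias, srbm_hbias):
    """Build the big RBM's parameters from N small RBMs trained on
    disjoint visible blocks: each small weight matrix becomes one
    diagonal block of the big matrix, all cross-block weights start
    at 0, and the bias vectors are concatenated."""
    n_vis = sum(W.shape[0] for W in srbm_weights)
    n_hid = sum(W.shape[1] for W in srbm_weights)
    W_big = np.zeros((n_vis, n_hid))
    r = c = 0
    for W in srbm_weights:
        W_big[r:r + W.shape[0], c:c + W.shape[1]] = W
        r += W.shape[0]
        c += W.shape[1]
    return W_big, np.concatenate(srbm_vbias), np.concatenate(srbm_hbias)

# Illustrative sizes: 3 SRBMs, each with 4 visible and 2 hidden units
# (the embodiment uses 50 SRBMs with 6 hidden units each).
Ws = [np.full((4, 2), i + 1.0) for i in range(3)]
W, vb, hb = merge_rbms(Ws, [np.zeros(4)] * 3, [np.zeros(2)] * 3)
print(W.shape, int((W == 0).sum()))  # prints (12, 6) 48
```

After this initialization the merged matrix is block-diagonal; the subsequent whole-image training fills in the zeroed cross-block weights.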
This concludes the introduction of the fast remote sensing image target detection method based on a visual-saliency-constrained deep belief network according to the exemplary embodiment of the present invention.
In summary, in the present invention, the saliency annotation used can obtain relatively stable target segmentation results even when image colors vary markedly, ensuring target localization accuracy during fast search. The present invention adopts a non-overlapping window initialization search method that uses the saliency annotation results and performs target localization through iterative optimization, significantly improving both the speed and the accuracy of target localization during detection. The present invention uses a deep belief network model for target feature extraction and classification. Conventional unsupervised training of restricted Boltzmann machines generally takes the whole training image as input and cannot fully encode the local structural features of the target; the present invention uses an unsupervised training method based on an image-block strategy, i.e., taking the saliency image and the original image simultaneously as training data while applying local-region constraints, so that the restricted Boltzmann machine represents local structural features more fully, improving the generalization ability of the model and the detection accuracy.
So far, this embodiment has been described in detail with reference to the accompanying drawings. From the above description, those skilled in the art should have a clear understanding of the visual-saliency-based fast remote sensing image target detection method of the present invention.
It should be noted that implementations not shown or described in the accompanying drawings or the text of the specification are in forms known to those of ordinary skill in the relevant art and are not described in detail. In addition, the above definitions of the elements and methods are not limited to the specific structures, shapes or manners mentioned in the embodiments; those of ordinary skill in the art may simply modify or replace them.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system or other apparatus. Various general-purpose systems may also be used with the teachings herein. As described above, the structure required to construct such systems is obvious. Moreover, the present invention is not directed to any particular programming language. It should be understood that various programming languages may be used to implement the content of the invention described herein, and the above description of specific languages is intended to disclose the best mode of carrying out the invention.
It should be noted that the above embodiments illustrate rather than limit the invention, and those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a device claim enumerating several units, several of these units may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.
The specific embodiments described above further explain the objects, technical schemes and beneficial effects of the present invention in detail. It should be understood that the foregoing is only specific embodiments of the present invention and is not intended to limit the invention; any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (10)
1. A remote sensing image detection method based on a visual-saliency-constrained deep belief network, characterized by comprising the following steps:
S1: locating targets in the image under test using a visual-saliency-based coarse target localization method and generating candidate windows to be detected;
S2: obtaining a deep belief network model through model training;
S3: classifying the images within the candidate windows using the deep belief network model obtained in step S2 and producing the final detection result.
2. The remote sensing image detection method according to claim 1, characterized in that in step S1, the visual-saliency-based coarse target localization method specifically comprises the following steps:
S11: obtaining, by saliency computation, a saliency image of the same size as the original image;
S12: setting initial search windows on the saliency image;
S13: optimizing the positions of the search windows using an iterative optimization algorithm;
S14: merging overlapping search windows, thereby obtaining candidate windows to be detected.
3. The remote sensing image detection method according to claim 2, characterized in that in step S11, the saliency computation uses a normed-gradients algorithm.
4. The remote sensing image detection method according to claim 2, characterized in that in step S13, the iterative optimization algorithm comprises the following steps:
A. inputting the saliency image M of the image under test, the initial search window W_p and its center position p_c, and the iteration stopping step δ;
B. computing the pixel-value centroid coordinates (x_c', y_c') of the saliency image M within the search window W_p;
C. computing the Euclidean distance d between p_c' = (x_c', y_c') and p_c;
D. comparing d with δ and proceeding as follows: if d > δ, taking p_c' = (x_c', y_c') as the new center of the search window and returning to step B to continue the iteration; if d < δ, terminating the iteration;
E. outputting the final search window W_o.
5. The remote sensing image detection method according to claim 4, characterized in that the pixel-value centroid coordinates are computed as follows (the formula was lost in extraction and is reconstructed here from the surrounding definitions, with $S_{ij}$ taken as 0 outside the search window $W_p$):

$$x_c' = \frac{\sum_{i=1}^{h}\sum_{j=1}^{w} S_{ij}\,x_{ij}}{\sum_{i=1}^{h}\sum_{j=1}^{w} S_{ij}}, \qquad y_c' = \frac{\sum_{i=1}^{h}\sum_{j=1}^{w} S_{ij}\,y_{ij}}{\sum_{i=1}^{h}\sum_{j=1}^{w} S_{ij}}$$

where $S_{ij}$ is the pixel value at coordinate $(i, j)$ of the saliency image, $x_{ij}$ and $y_{ij}$ are the weight coefficients of the abscissa and ordinate respectively, $h$ is the height of the saliency image, and $w$ is its width.
6. The remote sensing image detection method according to claim 5, characterized in that δ = 2, x_ij = i, y_ij = j.
7. The remote sensing image detection method according to claim 1, characterized in that in step S2, the model training comprises unsupervised pre-training of restricted Boltzmann machines and fine-tuning of a multilayer neural network.
8. The remote sensing image detection method according to claim 7, characterized in that in the unsupervised pre-training of the restricted Boltzmann machines,
N restricted Boltzmann machines are first trained separately using a training method based on a partitioning strategy, with the original image and the saliency image as training data;
the N restricted Boltzmann machines are then fused, and the fused restricted Boltzmann machine serves as the first layer of the deep belief network for training the subsequent restricted Boltzmann machines; wherein N ≥ 2.
9. The remote sensing image detection method according to claim 7, characterized in that in the fine-tuning of the multilayer neural network,
a supervised layer is first added on top of the deep belief network;
the parameters of the deep belief network are then fine-tuned using a back-propagation algorithm.
10. The remote sensing image detection method according to claim 1, characterized in that the deep belief network model is a 6-layer deep belief network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710211411.9A CN106991397A (en) | 2017-03-31 | 2017-03-31 | Remote sensing image detection method based on a visual-saliency-constrained deep belief network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710211411.9A CN106991397A (en) | 2017-03-31 | 2017-03-31 | Remote sensing image detection method based on a visual-saliency-constrained deep belief network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106991397A true CN106991397A (en) | 2017-07-28 |
Family
ID=59414679
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710211411.9A Pending CN106991397A (en) | 2017-03-31 | 2017-03-31 | View-based access control model conspicuousness constrains the remote sensing images detection method of depth confidence network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106991397A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108399420A (en) * | 2018-01-30 | 2018-08-14 | Visible-light ship false-alarm rejection method based on a deep convolutional network |
CN108509898A (en) * | 2018-03-29 | 2018-09-07 | Near-real-time online object detection method for remote sensing images based on image streams |
CN109377479A (en) * | 2018-09-27 | 2019-02-22 | 中国电子科技集团公司第五十四研究所 | Satellite dish object detection method based on remote sensing image |
CN111028255A (en) * | 2018-10-10 | 2020-04-17 | 千寻位置网络有限公司 | Farmland area pre-screening method and device based on prior information and deep learning |
CN111310835A (en) * | 2018-05-24 | 2020-06-19 | 北京嘀嘀无限科技发展有限公司 | Target object detection method and device |
CN111582475A (en) * | 2020-04-28 | 2020-08-25 | 中国科学院空天信息创新研究院 | Data processing method and device based on automatic lightweight neural network |
CN111626176A (en) * | 2020-05-22 | 2020-09-04 | Ground-object target detection method and system for remote sensing images |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102810158A (en) * | 2011-05-31 | 2012-12-05 | 中国科学院电子学研究所 | High-resolution remote sensing target extraction method based on multi-scale semantic model |
CN103955702A (en) * | 2014-04-18 | 2014-07-30 | 西安电子科技大学 | SAR image terrain classification method based on depth RBF network |
CN104851099A (en) * | 2015-05-21 | 2015-08-19 | 周口师范学院 | Method for image fusion based on representation learning |
CN105809198A (en) * | 2016-03-10 | 2016-07-27 | 西安电子科技大学 | SAR image target recognition method based on deep belief network |
2017-03-31 | CN | Application CN201710211411.9A filed, published as CN106991397A | Status: Pending |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102810158A (en) * | 2011-05-31 | 2012-12-05 | 中国科学院电子学研究所 | High-resolution remote sensing target extraction method based on multi-scale semantic model |
CN103955702A (en) * | 2014-04-18 | 2014-07-30 | 西安电子科技大学 | SAR image terrain classification method based on deep RBF network |
CN104851099A (en) * | 2015-05-21 | 2015-08-19 | 周口师范学院 | Method for image fusion based on representation learning |
CN105809198A (en) * | 2016-03-10 | 2016-07-27 | 西安电子科技大学 | SAR image target recognition method based on deep belief network |
Non-Patent Citations (1)
Title |
---|
WENHUI DIAO et al.: "Efficient Saliency-Based Object Detection in Remote Sensing Images Using Deep Belief Networks", IEEE Geoscience and Remote Sensing Letters *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108399420A (en) * | 2018-01-30 | 2018-08-14 | 北京理工雷科电子信息技术有限公司 | Visible light ship false alarm rejection method based on deep convolutional network |
CN108399420B (en) * | 2018-01-30 | 2021-07-06 | 北京理工雷科电子信息技术有限公司 | Visible light ship false alarm rejection method based on deep convolutional network |
CN108509898A (en) * | 2018-03-29 | 2018-09-07 | 中国电子科技集团公司第五十四研究所 | Near real-time online object detection method for remote sensing images based on image streams |
CN111310835A (en) * | 2018-05-24 | 2020-06-19 | 北京嘀嘀无限科技发展有限公司 | Target object detection method and device |
CN111310835B (en) * | 2018-05-24 | 2023-07-21 | 北京嘀嘀无限科技发展有限公司 | Target object detection method and device |
CN109377479A (en) * | 2018-09-27 | 2019-02-22 | 中国电子科技集团公司第五十四研究所 | Satellite dish object detection method based on remote sensing image |
CN109377479B (en) * | 2018-09-27 | 2021-10-22 | 中国电子科技集团公司第五十四研究所 | Butterfly satellite antenna target detection method based on remote sensing image |
CN111028255A (en) * | 2018-10-10 | 2020-04-17 | 千寻位置网络有限公司 | Farmland area pre-screening method and device based on prior information and deep learning |
CN111028255B (en) * | 2018-10-10 | 2023-07-21 | 千寻位置网络有限公司 | Farmland area pre-screening method and device based on prior information and deep learning |
CN111582475A (en) * | 2020-04-28 | 2020-08-25 | 中国科学院空天信息创新研究院 | Data processing method and device based on automatic lightweight neural network |
CN111626176A (en) * | 2020-05-22 | 2020-09-04 | 中国科学院空天信息创新研究院 | Ground object target detection method and system of remote sensing image |
CN111626176B (en) * | 2020-05-22 | 2021-08-06 | 中国科学院空天信息创新研究院 | Remote sensing target rapid detection method and system based on dynamic attention mechanism |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106991397A (en) | Remote sensing image detection method based on visual saliency constrained deep belief network | |
CN109948425B (en) | Pedestrian searching method and device for structure-aware self-attention and online instance aggregation matching | |
CN110472627A (en) | End-to-end SAR image recognition method, device and storage medium |
BR112020001110A2 (en) | automated seismic interpretation using fully convolutional neural networks | |
CN110069972A (en) | Automatic detection of real-world objects |
CN106709568A (en) | RGB-D image object detection and semantic segmentation method based on deep convolutional network |
CN107886120A (en) | Method and apparatus for target detection and tracking |
CN107871124A (en) | Remote sensing target detection method based on deep neural network |
Zhang et al. | Unsupervised difference representation learning for detecting multiple types of changes in multitemporal remote sensing images | |
CN113177559B (en) | Image recognition method, system, equipment and medium combining breadth and dense convolutional neural network | |
CN110084093B (en) | Method and device for detecting and identifying target in remote sensing image based on deep learning | |
CN106295506A (en) | Age recognition method based on integrated convolutional neural networks |
CN112560675B (en) | Bird visual target detection method combining YOLO and rotation-fusion strategy | |
CN106408030A (en) | SAR image classification method based on mid-level semantic attributes and convolutional neural network |
Demir et al. | Detecting visual design principles in art and architecture through deep convolutional neural networks | |
Tatzgern | Situated visualization in augmented reality | |
CN114519819B (en) | Remote sensing image target detection method based on global context awareness | |
Han et al. | A context-scale-aware detector and a new benchmark for remote sensing small weak object detection in unmanned aerial vehicle images | |
CN115115672B (en) | Dynamic vision SLAM method based on target detection and feature point speed constraint | |
CN108596952A (en) | Fast deep learning remote sensing target detection method based on candidate region screening |
Previtali et al. | Towards automatic reconstruction of indoor scenes from incomplete point clouds: Door and window detection and regularization | |
CN116824335A (en) | Fire early warning method and system based on an improved YOLOv5 algorithm |
CN109948527A (en) | Small-sample terahertz image foreign matter detection method based on integrated deep learning |
Yin et al. | G2Grad-CAMRL: an object detection and interpretation model based on gradient-weighted class activation mapping and reinforcement learning in remote sensing images | |
CN118097358A (en) | Target detection method, device, equipment and medium for multi-level information remote sensing image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170728 |