CN106056609B - Method for automatic annotation of remote sensing images based on a DBNMI model - Google Patents
Method for automatic annotation of remote sensing images based on a DBNMI model
- Publication number
- CN106056609B (application CN201610383452.1A)
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- sensing image
- dbnmi
- label
- semantic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20004—Adaptive image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a method for automatic annotation of remote sensing images based on a DBNMI (deep belief network, multi-instance) model. The method comprises: performing adaptive segmentation on the initial input remote sensing image and coarsely dividing it into background regions and object regions; representing the object regions of the segmented remote sensing image with a multi-instance multi-label technique; automatically modeling the relationship between the low-level visual features and the high-level semantic concepts of the object regions on the basis of the DBNMI model; and quantitatively computing the co-occurrence and opposition semantic relations between annotation words. Because the regions of interest in an ocean remote sensing image semantic annotation task occupy only a very small part of the image, the proposed DBNMI-based model for automatic semantic annotation of remote sensing images effectively improves the precision of automatic image annotation and has a wide range of applications.
Description
Technical field
The present invention relates to the technical field of image processing, in particular to the field of ocean remote sensing image annotation, and more particularly to a method for automatic annotation of remote sensing images based on a DBNMI model.
Background technology
Automatic semantic annotation of remote sensing images lets a machine learn from image examples with known semantic labels and then describe the content of an unlabeled remote sensing image with semantic concepts. With the launch of more ocean remote sensing satellites (such as SeaStar, GEOSAT, ERS-n and HY-1 through HY-3), the number of ocean remote sensing images has grown explosively. Semantic annotation of remote sensing images enables tasks such as data analysis, indexing and retrieval, so that remote sensing images can be used efficiently. Manual annotation is subjective, imprecise, and time-consuming, and can no longer meet the demands of the current "big data" era. Advances in machine learning and related theory have made automatic semantic annotation of remote sensing images a major research direction.
Existing semantic annotation methods for remote sensing images mainly approach the problem from feature extraction, classification models, and the level at which high-level semantics are divided. Bratasanu et al. proposed a mapping scheme that groups pixels of the same semantic concept together, completing the mapping from semantics-free low-level image features to a spectral semantic index and from there to the high-level semantics that humans understand. Liénou et al. assigned a training set to each concept, assigned probabilities to unknown images with the maximum-likelihood method, and learned a latent Dirichlet allocation (LDA) model. Chen et al. used a multi-instance multi-label learning (MIML) framework in the first layer for high-resolution remote sensing image annotation, modeled the spatial relationships of the MIML instances with multi-scale hierarchical semantic information in the second layer, and merged possible classes with a nonparametric probabilistic model in the third layer. Yang Yang et al. proposed an improved deep learning model, but it treats the annotation information of an image as class information and pays insufficient attention to image features. Because of the semantic gap, the performance of automatic semantic image annotation still needs to be improved.
Remote sensing images are rich in spatial structure and geographic feature information, and the regions of interest in an ocean remote sensing image semantic annotation task occupy only a very small part of the image. Different regions of a remote sensing image contribute differently to the measurement of semantic similarity: in an ocean remote sensing image the "object regions" contribute far more than the "background regions", yet existing work rarely considers the structural complexity of ocean remote sensing images. The annotation vocabulary of remote sensing images also exhibits diverse semantic hierarchies and relations, including synonymy, opposition and entailment. For example, once a remote sensing image has been annotated with words such as "ship" and "seawater", the probability that "harbor" is also an annotation word for that image rises accordingly.
Summary of the invention
The purpose of the present invention is to overcome the drawbacks of the prior art described above and to provide a method for automatic annotation of remote sensing images based on a DBNMI model that accounts for the fact that the regions of interest in an ocean remote sensing image semantic annotation task occupy only a very small part of the image, thereby improving the precision of automatic annotation of ocean remote sensing images.
To achieve the above object, the present invention is constituted as follows:
The method for automatic annotation of remote sensing images based on a DBNMI model is mainly characterized in that it comprises the following steps:
(1) performing adaptive segmentation on the initial input remote sensing image and coarsely dividing the image into background regions and object regions;
(2) representing the object regions of the segmented remote sensing image with a multi-instance multi-label technique;
(3) automatically modeling the relationship between the low-level visual features and the high-level semantic concepts of the object regions of the remote sensing image on the basis of the DBNMI model;
(4) quantitatively computing the co-occurrence and opposition semantic relations between annotation words.
Preferably, the remote sensing image is an ocean remote sensing image.
Preferably, step (1) comprises the following steps:
(1-1) generating a multiresolution image pyramid with a wavelet transform;
(1-2) adaptively building a hierarchical organization of image information at different scales;
(1-3) selecting a suitable segmentation scale according to the complexity of the image scene and segmenting the remote sensing image.
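Steps (1-1) to (1-3) can be illustrated with a minimal Python sketch. The patent does not fix a wavelet basis, so a Haar-style averaging decomposition and NumPy are assumptions made purely for illustration:

```python
import numpy as np

def haar_level(img):
    # One level of a 2-D Haar-style decomposition: average each
    # 2x2 block to obtain the next (coarser) approximation image.
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    return (a + b + c + d) / 4.0

def build_pyramid(img, levels=3):
    # Multiresolution pyramid; index 0 is the full-resolution input.
    pyr = [img]
    for _ in range(levels):
        pyr.append(haar_level(pyr[-1]))
    return pyr

img = np.arange(64, dtype=float).reshape(8, 8)
pyr = build_pyramid(img, levels=3)
print([p.shape for p in pyr])  # [(8, 8), (4, 4), (2, 2), (1, 1)]
```

A coarser pyramid level would then be selected as the segmentation scale for simple scenes and a finer one for complex scenes.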
Preferably, step (2) comprises the following steps:
(2-1) representing a remote sensing image as (X_i, L_i), where X_i = {x_i^1, x_i^2, …, x_i^m} is the instance space, m is the number of instances, L_i = {l_i^1, l_i^2, …, l_i^n} is the label set, n is the number of labels, i denotes the i-th layer of the remote sensing image, x_i^m denotes the m-th instance of the i-th layer, and l_i^n denotes the n-th label of the i-th layer;
(2-2) treating each remote sensing image at each spatial scale as a bag and its segmented regions as instances, the i-th layer being represented by the bag B_i = {x_i1, x_i2, …, x_im}, where i denotes the i-th layer of the remote sensing image, m is the number of segmented regions, and x_im denotes the m-th segmented region of the i-th layer;
(2-3) setting a threshold t, computing the similarity s between the background region and the object region, and comparing s with t: if s exceeds the preset threshold t, the bag of the i-th layer must be divided further; otherwise it need not be divided further.
Preferably, step (3) comprises the following steps:
(3-1) extracting features from the object regions of the remote sensing image and taking the extracted features as the first layer, i.e. the input layer, of the DBN model;
(3-2) performing a layer-by-layer feature transformation on the input data with the DBN model, transforming the feature representation of the samples from the original space into a new feature space;
(3-3) obtaining the weights that best express the mapping from low-level features to high-level semantics with restricted Boltzmann machines, and fine-tuning the deep belief network with a back-propagation network.
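The layer-by-layer feature transformation of step (3-2) amounts to propagating a feature vector through stacked sigmoid layers. In this sketch the weights are random placeholders and the layer widths are assumed; a real DBN would learn the weights by layer-wise RBM pre-training followed by back-propagation fine-tuning:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dbn_transform(x, weights, biases):
    # Step (3-2): map a low-level feature vector through the stacked
    # layers into successively higher-level feature spaces.
    h = x
    for W, b in zip(weights, biases):
        h = sigmoid(h @ W + b)
    return h

rng = np.random.default_rng(0)
sizes = [16, 8, 4]  # assumed layer widths
Ws = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(n) for n in sizes[1:]]
x = rng.random(16)  # stands in for one region's extracted features
print(dbn_transform(x, Ws, bs).shape)  # (4,)
```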
Preferably, step (4) comprises the following steps:
(4-1) when F(B_i, L) = 1, a co-occurrence relation holds between annotation words, where B_i is the i-th bag and L is a semantic label set; if the bag B_i contains a label l of L, the bag may also select the other labels of the semantic label set L to which that label belongs;
(4-2) when F(B_i, L1, L2) = 1, with Count(B_i, l1) ≥ 1 and Δ(B_i, l2) = 0, an opposition relation holds between annotation words, where the semantic label sets L1 and L2 are mutually antagonistic; if the bag B_i contains a label l1, the bag may select the other labels of the semantic label set L1 to which l1 belongs, but may not select any label of the semantic label set L2, l2 denoting an arbitrary label of the label set L2.
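The relation rules of steps (4-1) and (4-2) reduce to simple set logic over label sets. The label names below come from the patent's own example ("ship", "seawater", "harbor"); the helper name, the extra labels, and the set-based encoding are illustrative assumptions:

```python
def allowed_labels(bag_labels, cooccur_sets, antagonist_pairs):
    # cooccur_sets lists label sets whose members tend to occur
    # together; antagonist_pairs lists (L1, L2) set pairs that
    # exclude each other.
    allowed = set(bag_labels)
    for L in cooccur_sets:           # co-occurrence: F(B, L) = 1
        if allowed & L:
            allowed |= L
    for L1, L2 in antagonist_pairs:  # opposition: F(B, L1, L2) = 1
        if set(bag_labels) & L1:
            allowed -= L2
        if set(bag_labels) & L2:
            allowed -= L1
    return allowed

sea = {"ship", "seawater", "harbor"}
land = {"forest", "farmland"}
print(sorted(allowed_labels({"ship"}, [sea, land], [(sea, land)])))
# ['harbor', 'seawater', 'ship']
```

Annotating a bag with "ship" thus raises "seawater" and "harbor" as candidates while ruling out the antagonistic land labels, mirroring the example in the background section.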
Preferably, the method is carried out on a Matlab experimental platform.
With the method for automatic annotation of remote sensing images based on a DBNMI model of the present invention, which addresses the fact that the regions of interest in an ocean remote sensing image semantic annotation task occupy only a very small part of the image, a DBNMI (deep belief network, multi-instance) model for automatic semantic annotation of remote sensing images is proposed: adaptive segmentation is performed on the initial input remote sensing image, the ocean remote sensing image is coarsely divided into background regions and object regions, the relationship between the low-level visual features and the high-level semantic concepts of the object regions is modeled automatically with a deep belief network model based on restricted Boltzmann machines, and the co-occurrence and opposition semantic relations between annotation words are computed quantitatively, effectively improving the precision of automatic image annotation and giving the method a wide range of applications.
Description of the drawings
Fig. 1 is a schematic diagram of the segmented regions of the present invention.
Fig. 2 is a flow chart of fine-tuning the deep belief network with a back-propagation network according to the present invention.
Fig. 3 is a flow chart of completing the semantic annotation of a new image with the learned annotation model according to the present invention.
Fig. 4 is a flow chart of the method for automatic annotation of remote sensing images of the present invention.
Detailed description of the embodiments
In order to describe the technical content of the present invention more clearly, it is further described below with reference to specific embodiments.
The technical scheme adopted by the present invention to solve the above problems is as follows:
Step 1: adaptive segmentation is performed on the initial input ocean remote sensing image, and the image is coarsely divided into background regions and object regions;
Step 2: the object regions of the segmented ocean remote sensing image are represented with multiple instances;
Step 3: the relationship between the low-level visual features and the high-level semantic concepts of the object regions of the remote sensing image is modeled automatically on the basis of the DBNMI model;
Step 4: the co-occurrence and opposition semantic relations between annotation words are computed quantitatively.
In a preferred embodiment, in step 1 a wavelet transform is used to generate a multiresolution image pyramid, a hierarchical organization of image information at different scales is built adaptively, and remote sensing image information is propagated between the different scale layers. This is implemented as follows:
a suitable segmentation scale is selected according to the complexity of the image scene, a large scale for simple scenes and a small scale for complex scenes;
the wavelet transform provides representations of the image at different resolutions and decomposes the image according to scale and displacement.
In a preferred embodiment, in step 2 the object regions of the ocean remote sensing image are represented with the multi-instance multi-label (MIML) technique, implemented as follows:
a remote sensing image is represented as (X_i, L_i), where X_i = {x_i^1, x_i^2, …, x_i^m} is the instance space, m is the number of instances, L_i = {l_i^1, l_i^2, …, l_i^n} is the label set, n is the number of labels, i denotes the i-th layer of the remote sensing image, x_i^m denotes the m-th instance of the i-th layer, and l_i^n denotes the n-th label of the i-th layer;
each image at each spatial scale is treated as a bag and its segmented regions as instances, the i-th layer being represented by the bag B_i = {x_i1, x_i2, …, x_im}. A threshold t is set adaptively and the similarity s between the background region and the object region is computed; if s exceeds the threshold t, the bag of the i-th layer must be divided further, and if s is below t it need not be; here i denotes the i-th layer of the remote sensing image, m the number of segmented regions, and x_im the m-th segmented region of the i-th layer.
Furthermore, in step 3 the relationship between the low-level visual features and the high-level semantic concepts of the object regions is modeled automatically on the basis of the DBNMI model, implemented as follows:
features are extracted from the remote sensing image object regions generated by the wavelet transform, and the extracted features are taken as the first layer, i.e. the input layer, of the DBN;
the DBN model performs a layer-by-layer feature transformation on the input data, transforming the feature representation of the samples from the original space into a new feature space;
the weights that best express the mapping from low-level features to high-level semantics are obtained with restricted Boltzmann machines, and the deep belief network is fine-tuned with a back-propagation network, realizing automatic annotation with a deep belief network model based on restricted Boltzmann machines.
In a preferred embodiment, in step 4 the co-occurrence and opposition semantic relations between annotation words are computed quantitatively, implemented as follows:
when F(B_i, L) = 1, a co-occurrence relation holds between annotation words, where B_i is the i-th bag and L a semantic label set; if the bag B_i contains a label l of L, the bag may also select the other labels of the semantic label set L to which that label belongs;
when F(B_i, L1, L2) = 1, with Count(B_i, l1) ≥ 1 and Δ(B_i, l2) = 0, an opposition relation holds between annotation words, where the semantic label sets L1 and L2 are mutually antagonistic; if the bag B_i contains a label l1, the bag may select the other labels of the semantic label set L1 to which l1 belongs, but may not select any label of the semantic label set L2, l2 denoting an arbitrary label of the label set L2.
The present invention is further described below with a specific embodiment:
The present invention is implemented on a Matlab 2013a experimental platform and mainly comprises three steps, as follows:
Each remote sensing image is pre-processed into 3 resolution levels with the wavelet pyramid method, yielding images at different resolution scales.
Coarse-grained segmentation is then performed on the remote sensing image at the different spatial scales: the image is treated as a bag and its segmented regions as instances, each bag consisting of 2 to 10 instances, as shown in Fig. 1, the i-th layer being represented by the bag B_i = {x_i1, x_i2, …, x_im}. A threshold t is set and the similarity s between the background region and the object region is computed; if s exceeds the preset threshold t, the bag of the i-th layer must be divided further, and if s is below the threshold t it need not be.
Each remote sensing image of the training set is represented as (X_i, L_i), where X_i = {x_i^1, x_i^2, …, x_i^m} is the instance space, m the number of instances, L_i = {l_i^1, l_i^2, …, l_i^n} the label set, and n the number of labels. The aim is to learn a function f_MIML: 2^X → 2^L from the training set whose input is the instance space X_i generated by a remote sensing image and whose output is the corresponding label set L_i of that image.
The features extracted from each remote sensing image are combined into a one-dimensional vector, and the feature data of all training set images are combined into a matrix in which each row represents the extracted features of one remote sensing image. All values of the matrix are normalized row by row to the range [0, 1] and used as the first layer, i.e. the input layer, of the DBN model.
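The row-wise normalization to [0, 1] described above can be sketched as follows (the helper name is an illustrative assumption):

```python
import numpy as np

def minmax_rows(F):
    # Scale each row (one image's concatenated feature vector) to
    # [0, 1]; constant rows are left at 0 to avoid division by zero.
    lo = F.min(axis=1, keepdims=True)
    hi = F.max(axis=1, keepdims=True)
    return (F - lo) / np.where(hi > lo, hi - lo, 1.0)

F = np.array([[2.0, 4.0, 6.0],
              [0.0, 5.0, 10.0]])
print(minmax_rows(F).tolist())  # [[0.0, 0.5, 1.0], [0.0, 0.5, 1.0]]
```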
Given the states of the visible units, the activation states of the hidden units are mutually independent, i.e. P(h|v) = ∏_j P(h_j|v); likewise, given the states of the hidden units, the activation states of the visible units are mutually independent, i.e. P(v|h) = ∏_i P(v_i|h). The visible-layer input of the restricted Boltzmann machine is a binary variable v ∈ {0, 1}; assuming each neuron follows a Bernoulli distribution, the energy function of the model is defined as
E(v, h; θ) = −∑_i a_i·v_i − ∑_j b_j·h_j − ∑_i∑_j v_i·W_ij·h_j,
where θ = {a, b, W} are the parameters of the model. The joint probability distribution of (v, h) is then P(v, h) = e^(−E(v,h))/Z, where Z is the normalization constant.
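The energy function and joint distribution can be checked numerically on a toy RBM small enough to enumerate the partition function Z exactly; all parameter values below are arbitrary illustrations:

```python
import itertools
import numpy as np

def energy(v, h, a, b, W):
    # E(v, h; theta) = -a.v - b.h - v.W.h for binary units
    return -(a @ v + b @ h + v @ W @ h)

# toy RBM: 2 visible and 2 hidden units, arbitrary parameters
a = np.array([0.1, -0.2])
b = np.array([0.0, 0.3])
W = np.array([[0.5, -0.1],
              [0.2, 0.4]])

states = [np.array(s, dtype=float) for s in itertools.product([0, 1], repeat=2)]
Z = sum(np.exp(-energy(v, h, a, b, W)) for v in states for h in states)
total = sum(np.exp(-energy(v, h, a, b, W)) / Z for v in states for h in states)
print(round(float(total), 10))  # 1.0 -- P(v, h) normalizes over all states
```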
A layer-by-layer feature transformation is performed on the input data, transforming the feature representation of the samples from the original space into a new feature space; the weights that best express the mapping from low-level features to high-level semantics are obtained with restricted Boltzmann machines, and the deep belief network is fine-tuned with a back-propagation network. Training proceeds iteratively, its aim being to learn the values of the parameters θ = {a, b, W}, as shown in Fig. 2.
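The iterative learning of θ = {a, b, W} for a single RBM layer is commonly done with contrastive divergence (CD-1). The patent does not name the training algorithm, so this single-sample CD-1 sketch, including its layer sizes and learning rate, is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, a, b, W, lr=0.1):
    # One CD-1 step for a binary RBM, adjusting theta = {a, b, W}
    # toward making the training vector v0 more probable.
    ph0 = sigmoid(v0 @ W + b)                 # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0  # sampled hidden state
    pv1 = sigmoid(h0 @ W.T + a)               # reconstruction of v
    ph1 = sigmoid(pv1 @ W + b)
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    a += lr * (v0 - pv1)
    b += lr * (ph0 - ph1)
    return a, b, W

v = np.array([1.0, 0.0, 1.0, 1.0])  # toy binary visible vector
a = np.zeros(4)
b = np.zeros(2)
W = rng.normal(0.0, 0.01, (4, 2))
for _ in range(100):
    a, b, W = cd1_update(v, a, b, W)
print(np.round(sigmoid(sigmoid(v @ W + b) @ W.T + a), 2))  # mean-field reconstruction
```

After a few updates the visible biases of the "on" units grow and that of the "off" unit shrinks, pulling the reconstruction toward the training vector.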
The probability that any two annotation words occur together is computed quantitatively. B_i denotes the i-th bag and L a semantic label set; the marking F(B_i, L) = 1 indicates a co-occurrence relation between annotation words: if the bag B_i contains a label l of L, the bag may also select the other labels of the semantic label set L to which that label belongs.
The probability that any two annotation words never occur together is likewise computed quantitatively. B_i denotes the i-th bag, and the semantic label sets L1 and L2 are mutually antagonistic; the marking F(B_i, L1, L2) = 1, with Count(B_i, l1) ≥ 1 and Δ(B_i, l2) = 0, indicates an opposition relation between annotation words: if the bag B_i contains a label l1, the bag may select the other labels of the semantic label set L1 to which l1 belongs, but may not select any label of the semantic label set L2. The present invention proposes a deep belief network multi-instance (DBNMI) model for automatic semantic annotation of remote sensing images with the aim of improving annotation precision. Its realization is shown in Fig. 3: the relationship between low-level visual features and high-level semantic concepts is modeled automatically with the training set images, and the learned annotation model completes the semantic annotation of new images.
With the method for automatic annotation of remote sensing images based on a DBNMI model of the present invention, which addresses the fact that the regions of interest in an ocean remote sensing image semantic annotation task occupy only a very small part of the image, a deep belief network multi-instance model for automatic semantic annotation of remote sensing images is proposed: adaptive segmentation is performed on the initial input remote sensing image, the ocean remote sensing image is coarsely divided into background regions and object regions, the relationship between the low-level visual features and the high-level semantic concepts of the object regions is modeled automatically with a deep belief network model based on restricted Boltzmann machines, and the co-occurrence and opposition semantic relations between annotation words are computed quantitatively, effectively improving the precision of automatic image annotation and giving the method a wide range of applications.
In this description, the present invention has been described with reference to specific embodiments thereof. It is evident, however, that various modifications and changes may be made without departing from the spirit and scope of the invention. The description and drawings are therefore to be regarded as illustrative rather than restrictive.
Claims (6)
1. A method for automatic annotation of remote sensing images based on a DBNMI model, characterized in that the DBNMI is a deep belief network multi-instance model and that the method comprises the following steps:
(1) performing adaptive segmentation on the initial input remote sensing image and coarsely dividing the image into background regions and object regions;
(2) representing the object regions of the segmented remote sensing image with a multi-instance multi-label technique;
(3) automatically modeling the relationship between the low-level visual features and the high-level semantic concepts of the object regions of the remote sensing image on the basis of the DBNMI model;
(4) quantitatively computing the co-occurrence and opposition semantic relations between annotation words, which specifically comprises the following steps:
(4-1) when F(B_i, L) = 1, a co-occurrence relation holds between annotation words, where B_i is the i-th bag and L is a semantic label set; if the bag B_i contains a label l of L, the bag may also select the other labels of the semantic label set L to which that label belongs;
(4-2) when F(B_i, L1, L2) = 1, with Count(B_i, l1) ≥ 1 and Δ(B_i, l2) = 0, an opposition relation holds between annotation words, where the semantic label sets L1 and L2 are mutually antagonistic; if the bag B_i contains a label l1, the bag may select the other labels of the semantic label set L1 to which l1 belongs, but may not select any label of the semantic label set L2, l2 denoting an arbitrary label of the label set L2.
2. The method for automatic annotation of remote sensing images based on a DBNMI model according to claim 1, characterized in that the remote sensing image is an ocean remote sensing image.
3. The method for automatic annotation of remote sensing images based on a DBNMI model according to claim 1, characterized in that step (1) comprises the following steps:
(1-1) generating a multiresolution image pyramid with a wavelet transform;
(1-2) adaptively building a hierarchical organization of image information at different scales;
(1-3) selecting a suitable segmentation scale according to the complexity of the image scene and segmenting the remote sensing image.
4. The method for automatic annotation of remote sensing images based on a DBNMI model according to claim 1, characterized in that step (2) comprises the following steps:
(2-1) representing a remote sensing image as (X_i, L_i), where X_i = {x_i^1, x_i^2, …, x_i^m} is the instance space, m is the number of instances, L_i = {l_i^1, l_i^2, …, l_i^n} is the label set, n is the number of labels, i denotes the i-th layer of the remote sensing image, x_i^m denotes the m-th instance of the i-th layer, and l_i^n denotes the n-th label of the i-th layer;
(2-2) treating each remote sensing image at each spatial scale as a bag and its segmented regions as instances, the i-th layer being represented by the bag B_i = {x_i1, x_i2, …, x_im}, where i denotes the i-th layer of the remote sensing image, m is the number of segmented regions, and x_im denotes the m-th segmented region of the i-th layer;
(2-3) setting a threshold t, computing the similarity s between the background region and the object region, and comparing s with t: if s exceeds the preset threshold t, the bag of the i-th layer must be divided further; otherwise it need not be divided further.
5. The method for automatic annotation of remote sensing images based on a DBNMI model according to claim 1, characterized in that step (3) comprises the following steps:
(3-1) extracting features from the object regions of the remote sensing image and taking the extracted features as the first layer, i.e. the input layer, of the DBN model;
(3-2) performing a layer-by-layer feature transformation on the input data with the DBN model, transforming the feature representation of the samples from the original space into a new feature space;
(3-3) obtaining the weights that best express the mapping from low-level features to high-level semantics with restricted Boltzmann machines, and fine-tuning the deep belief network with a back-propagation network.
6. The method for automatic annotation of remote sensing images based on a DBNMI model according to claim 1, characterized in that the method is carried out on a Matlab experimental platform.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610383452.1A CN106056609B (en) | 2016-06-02 | 2016-06-02 | Method for automatic annotation of remote sensing images based on a DBNMI model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610383452.1A CN106056609B (en) | 2016-06-02 | 2016-06-02 | Method for automatic annotation of remote sensing images based on a DBNMI model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106056609A CN106056609A (en) | 2016-10-26 |
CN106056609B true CN106056609B (en) | 2018-11-06 |
Family
ID=57172795
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610383452.1A Active CN106056609B (en) | Method for automatic annotation of remote sensing images based on a DBNMI model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106056609B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107169051B (en) * | 2017-04-26 | 2019-09-24 | 山东师范大学 | Based on relevant method for searching three-dimension model semantic between ontology and system |
CN108319972B (en) * | 2018-01-18 | 2021-11-02 | 南京师范大学 | End-to-end difference network learning method for image semantic segmentation |
CN110704549B (en) * | 2019-10-09 | 2020-09-04 | 中国石油大学(华东) | Method, system, medium and device for selecting and constructing marine environment data service granularity |
CN111080625B (en) * | 2019-12-18 | 2020-12-29 | 推想医疗科技股份有限公司 | Training method and training device for lung image strip and rope detection model |
CN113761971B (en) * | 2020-06-02 | 2023-06-20 | 中国人民解放军战略支援部队信息工程大学 | Remote sensing image target knowledge graph construction method and device |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103823845A (en) * | 2014-01-28 | 2014-05-28 | 浙江大学 | Method for automatically annotating remote sensing images on basis of deep learning |
- 2016-06-02: application CN201610383452.1A filed in China; granted as patent CN106056609B, status Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103823845A (en) * | 2014-01-28 | 2014-05-28 | 浙江大学 | Method for automatically annotating remote sensing images on basis of deep learning |
Non-Patent Citations (7)
Title |
---|
Minh N. Do et al., "Rotation Invariant Texture Characterization and Retrieval Using Steerable Wavelet-Domain Hidden Markov Models", IEEE Transactions on Multimedia, vol. 4, no. 4, Dec. 2002, pp. 1-11. *
Xiaochun Cao et al., "SLED: Semantic Label Embedding Dictionary Representation for Multilabel Image Annotation", IEEE Transactions on Image Processing, vol. 24, no. 9, Sep. 2015, pp. 1-14. *
Lü Qi et al., "Remote sensing image classification based on a DBN model" (in Chinese), Journal of Computer Research and Development, vol. 51, no. 9, Sep. 2014, pp. 1-8. *
Ge Yong et al., "Remote sensing image retrieval based on multi-instance learning" (in Chinese), Journal of University of Science and Technology of China, vol. 39, no. 2, Feb. 2009, pp. 1-5. *
Yang Yang et al., "Automatic image annotation algorithm based on deep learning" (in Chinese), Journal of Data Acquisition and Processing, vol. 30, no. 1, Feb. 2015, pp. 1-10. *
Zhang Ying et al., "Image region annotation based on feature extraction and multi-instance learning" (in Chinese), Journal of Electronic Measurement and Instrumentation, vol. 28, no. 8, Aug. 2014, pp. 1-6. *
Huang Zhijian et al., "Adaptive-scale remote sensing image segmentation based on visual complexity" (in Chinese), Journal of Electronics & Information Technology, vol. 35, no. 8, Aug. 2013, pp. 1-6. *
Also Published As
Publication number | Publication date |
---|---|
CN106056609A (en) | 2016-10-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106056609B (en) | Method for automatic annotation of remote sensing images based on a DBNMI model | |
Wang et al. | Crossformer++: A versatile vision transformer hinging on cross-scale attention | |
CN108416384B (en) | Image label labeling method, system, equipment and readable storage medium | |
CN102902821B (en) | Network hot-topic-based image high-level semantic annotation and retrieval method and device | |
CN107506793B (en) | Garment identification method and system based on weakly labeled image | |
CN110580500A (en) | Character interaction-oriented network weight generation few-sample image classification method | |
CN109934293A (en) | Image-recognizing method, device, medium and obscure perception convolutional neural networks | |
CN101763440B (en) | Method for filtering searched images | |
CN106951422A (en) | The method and apparatus of webpage training, the method and apparatus of search intention identification | |
Kim et al. | CityCraft: 3D virtual city creation from a single image | |
CN105117429A (en) | Scenario image annotation method based on active learning and multi-label multi-instance learning | |
CN110390363A (en) | Image description method | |
CN104142995B (en) | Social event recognition method based on visual attributes | |
CN106529503A (en) | Method for recognizing face emotion by using integrated convolutional neural network | |
CN105574063A (en) | Image retrieval method based on visual saliency | |
CN109948593A (en) | Based on the MCNN people counting method for combining global density feature | |
CN107657008A (en) | Cross-media training and retrieval method based on deep discriminative ranking learning | |
Ontrup et al. | Hyperbolic self-organizing maps for semantic navigation | |
CN104751175B (en) | SAR image multiclass mark scene classification method based on Incremental support vector machine | |
CN107169090A (en) | A kind of special object search method of utilization content rings around information extraction characterization image | |
CN109740752A (en) | Depth model training method and device, electronic equipment and storage medium | |
CN110298348A (en) | Remote sensing image building sample areas extracting method and system, storage medium, equipment | |
CN106844518A (en) | Incomplete cross-modal retrieval method based on subspace learning | |
CN108984555A (en) | User status mining and information recommendation method, device and equipment | |
CN115131698B (en) | Video attribute determining method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |