CN108805029B - Foundation cloud picture identification method based on significant dual activation coding - Google Patents
- Publication number: CN108805029B
- Application number: CN201810433104.XA
- Authority: CN (China)
- Prior art keywords: activation, significant, convolution, foundation cloud, feature vector
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Abstract
The embodiment of the invention discloses a foundation cloud picture identification method based on significant dual activation coding, which comprises the following steps: inputting a training foundation cloud picture into a convolutional neural network to obtain convolution activation maps; obtaining significant image local areas by using the shallow convolution activation maps, and extracting features of the local areas to obtain significant feature vectors; acquiring the corresponding image areas on the deep convolution activation maps and learning their weights; obtaining a weight significant feature vector set of the training foundation cloud picture based on the significant feature vectors and the weights, and performing significant dual activation coding on the set to obtain a weight significant feature vector; and acquiring the weight significant feature vector of a test foundation cloud picture and classifying it to obtain the identification result. The invention extracts features from both the shallow and the deep convolution activation maps of a convolutional neural network, mining features that carry significant structure and texture information as well as high-level semantic information, and further obtains the most representative feature representation of the foundation cloud picture through significant dual activation coding, thereby improving the classification accuracy of the foundation cloud picture.
Description
Technical Field
The invention belongs to the technical field of pattern recognition and artificial intelligence, and particularly relates to a foundation cloud picture identification method based on significant dual activation coding.
Background
In the field of atmospheric science, the formation, appearance and amount of clouds reflect the movement of the atmosphere and are among the important signs for predicting future weather changes, playing a crucial role in weather forecasting and early warning. Ground-based cloud observation is an important mode of cloud observation, and automatic classification of foundation cloud pictures is of great significance for climate and weather research. At present, experts at home and abroad have carried out research work in related fields. Isosalo et al. used local texture information, such as Local Binary Patterns (LBP) and Local Edge Patterns (LEP), to distinguish 5 sky types: altocumulus, cirrus, stratus, cumulus and clear sky. Calbo et al. extracted Fourier transform information and statistical information of the images to describe the foundation cloud picture, and likewise classified 5 sky types such as altocumulus, cirrus, stratus, cumulus and clear sky. Heinle et al. used spectral, texture and color information to describe the foundation cloud pictures for classification. Xiao et al. further extracted texture, structure and color information in a densely sampled manner for classification of different sky types. Wang et al. proposed the Stable LBP, built on the rotation-invariant LBP, to classify different cloud types. With the great success of Convolutional Neural Networks (CNNs) in fields such as pattern recognition and image processing, CNNs have also begun to be applied to foundation cloud picture classification, and their classification performance is superior to that of traditional methods based on hand-crafted features. Ye et al. were the first to use CNNs for foundation cloud picture classification, with a significant improvement in classification accuracy. Zhang et al. improved the performance of cross-domain ground-based cloud classification by encoding local features on the convolution activation maps. Furthermore, Shi et al. showed that features from the deep convolution activation maps outperform traditional hand-crafted features in representing the foundation cloud picture. The above CNN-based foundation cloud picture classification methods all extract features from a single convolution layer and therefore cannot obtain relatively complete foundation cloud picture information. For feature representation, these methods generally use max pooling, average pooling, etc. to aggregate the extracted features into a single feature vector, and such a feature vector usually lacks discriminability. Therefore, further innovative methods for feature representation are needed to improve the accuracy of foundation cloud picture classification.
Disclosure of Invention
The invention aims to solve the problem of classification of foundation cloud pictures, and provides a foundation cloud picture identification method based on significant dual activation codes.
In order to achieve the purpose, the invention provides a foundation cloud picture identification method based on significant dual activation coding, which comprises the following steps:
step S1, preprocessing the multiple input foundation cloud pictures to obtain training foundation cloud pictures;
step S2, inputting the training foundation cloud picture into a convolutional neural network to obtain a convolutional activation picture;
step S3, obtaining a saliency image local area of the training foundation cloud picture by utilizing the shallow convolution activation picture;
step S4, extracting the characteristics of each local area of the saliency image to obtain corresponding saliency characteristic vectors;
step S5, acquiring the image area corresponding to the deep convolution activation map by using the shallow-layer saliency image local area, and learning the weight corresponding to the image area;
step S6, based on the significant feature vector and the weight, obtaining a weight significant feature vector set corresponding to the training foundation cloud picture;
step S7, carrying out significant dual activation coding on the weight significant feature vector set to obtain a weight significant feature vector corresponding to the training foundation cloud picture;
and step S8, acquiring the weight significant feature vector of the test foundation cloud picture, and classifying the test foundation cloud picture based on the weight significant feature vector to obtain a foundation cloud picture identification result.
Optionally, the step S1 includes the following steps:
step S11, normalizing the size of the input foundation cloud picture into H multiplied by W to obtain a training foundation cloud picture, wherein H and W respectively represent the height and width of the training foundation cloud picture;
and step S12, obtaining the category label of each training foundation cloud picture.
Optionally, the step S2 includes the following steps:
step S21, determining a convolutional neural network, initializing the convolutional neural network, and modifying the output number of the tail end of the convolutional neural network into the class number D of the foundation cloud picture;
and step S22, inputting the training foundation cloud picture into the initialized convolutional neural network to obtain a convolutional activation picture.
Optionally, the step S3 includes the following steps:
step S31, obtaining a set of shallow convolution activation maps corresponding to a preset shallow convolution layer, where the set of shallow convolution activation maps can be expressed as a tensor of size H_s × W_s × N_s, wherein the subscript s denotes the index of the shallow layer, H_s and W_s respectively represent the height and width of the convolution activation maps of that layer, and N_s represents the number of convolution activation maps of that layer;
step S32, sequentially connecting the activation responses at each same position on all convolution activation maps corresponding to the shallow convolution layer to obtain N_s-dimensional local feature vectors;
step S33, performing dense sampling on all convolution activation maps corresponding to the shallow convolution layer by using sliding windows, and acquiring the activation response significant value S_k of each sliding window based on the local feature vectors, wherein the subscript k denotes the kth sliding window;
step S34, sorting the activation response significant values S_k in descending order, and selecting the sliding windows corresponding to the first K activation response significant values as significant image local areas, so as to obtain K significant image local areas of the training foundation cloud picture.
Optionally, the size of the sliding window is a × a, and the step size of the dense samples is a/2.
Optionally, the activation response significant value S_k of the kth sliding window on the shallow convolution activation maps is expressed as:
S_k = Σ_{i=1}^{a²} ||x_i^k − m_k||₂,
wherein ||·||₂ represents the two-norm of a vector, x_i^k represents the local feature vector at the ith position within the kth sliding window, a² = a × a denotes the number of local feature vectors within the kth sliding window, and m_k denotes the mean feature vector of the kth sliding window, i.e. the mean of all local feature vectors within the sliding window:
m_k = (1/a²) Σ_{i=1}^{a²} x_i^k.
Optionally, in step S4, each significant image local area is represented by the significant feature vector m_k.
Optionally, the step S5 includes the following steps:
step S51, obtaining a set of deep convolution activation maps corresponding to a preset deep convolution layer, where the set of deep convolution activation maps can be expressed as a tensor of size H_d × W_d × N_d, wherein the subscript d denotes the index of the deep layer, H_d and W_d respectively represent the height and width of the convolution activation maps of that layer, and N_d represents the number of convolution activation maps of that layer;
wherein the deep convolutional layer is selected from convolutional layers of a latter half of the convolutional neural network.
step S52, sequentially connecting the activation responses at each same position on all convolution activation maps corresponding to the deep convolution layer to obtain N_d-dimensional local feature vectors;
step S53, acquiring K corresponding image areas with b × b size in the deep convolution activation map according to the local area of the saliency image corresponding to the shallow convolution layer;
step S54, calculating the weight corresponding to each image area, expressed as:
w_k = (1/b²) Σ_{j=1}^{b²} ||y_j^k||₂,
wherein w_k represents the weight of the kth image area, y_j^k represents the local feature vector at the jth position within that area, and b² = b × b denotes the number of local feature vectors of the kth image area.
Optionally, in step S6, the salient feature vectors m of the local regions of the K salient images according to the shallow layer convolution activation mapkAnd weights w of K image regions of the deep convolution activation mapkAnd obtaining a weight significant feature vector set χ of each training foundation cloud picture:
χ={w1m1,w2m2,...,wKmK}。
Optionally, in step S7, the weight significant feature vector is expressed as:
h = (u ⊙ m)((u ⊙ m)^T (u ⊙ m))^{-1} C,
wherein u and m collect the K weights w_k and the K significant feature vectors m_k respectively, ⊙ denotes the corresponding (element-wise) multiplication of matrix elements, and C is a constant vector whose elements are all c.
The invention has the following beneficial effects: the invention extracts features from both the shallow and the deep convolution activation maps of a convolutional neural network, can mine features carrying significant structure and texture information as well as high-level semantic information, and further obtains the most representative feature representation of the foundation cloud picture through significant dual activation coding, thereby improving the classification accuracy of the foundation cloud picture.
It should be noted that the invention was supported by the National Natural Science Foundation of China under grants No.61501327 and No.61711530240, the Natural Science Foundation of Tianjin key project No.17JCZDJC30600, the Tianjin Application Foundation and Frontier Technology Research Program youth project No.15JCQNJC01700, the young top-notch research talent cultivation plan of Tianjin Normal University No.135202RC1703, the open project funds of the National Laboratory of Pattern Recognition Nos.201700001 and 201800002, the China Scholarship Council grants Nos.201708120040 and 201708120039, and the Tianjin Higher Education Innovation Team fund project.
Drawings
Fig. 1 is a flowchart of a ground-based cloud picture identification method based on significant dual activation coding according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
Fig. 1 is a flowchart of a ground-based cloud picture identification method based on significant dual activation coding according to an embodiment of the present invention, and some implementation flows of the present invention are described below by taking fig. 1 as an example. The method of the invention is a foundation cloud picture identification method based on significant dual activation coding, and the method comprises the following specific steps:
step S1, preprocessing the multiple input foundation cloud pictures to obtain training foundation cloud pictures;
the method for preprocessing the multiple input foundation cloud pictures comprises the following steps:
step S11, normalizing the size of the input foundation cloud picture into H multiplied by W to obtain a training foundation cloud picture, wherein H and W respectively represent the height and width of the training foundation cloud picture;
in one embodiment of the present invention, hxw is 224 × 224.
And step S12, obtaining the category label of each training foundation cloud picture.
Step S2, inputting the training foundation cloud picture into a convolutional neural network to obtain a convolutional activation picture;
further, the step S2 includes the following steps:
step S21, selecting a typical convolutional neural network from deep learning and initializing it, and modifying the number of outputs at the end of the convolutional neural network to the number of foundation cloud picture classes D;
in an embodiment of the present invention, the convolutional neural network is VGG19, and 7 types of ground clouds are classified, so D is 7.
And step S22, inputting the training foundation cloud picture into the initialized convolutional neural network to obtain a convolutional activation picture.
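Purely as an illustration of steps S21-S22, the following PyTorch sketch shows how shallow and deep convolution activation maps might be extracted from VGG19 and how the final output might be set to D = 7 classes; the torchvision API, the chosen layer indices and all variable names are assumptions of this sketch and are not prescribed by the patent.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical sketch (assumes a recent torchvision): VGG19 with its last
# fully connected layer changed to D = 7 foundation cloud classes, plus two
# truncated copies of the feature extractor used to read out activation maps.
vgg = models.vgg19(weights=None)            # pretrained weights could be loaded instead
vgg.classifier[6] = nn.Linear(4096, 7)      # output number modified to D = 7

shallow = vgg.features[:4]                  # through conv1_2 + ReLU -> 64 maps of 224 x 224
deep = vgg.features[:18]                    # through conv3_4 + ReLU -> 256 maps of 56 x 56

x = torch.randn(1, 3, 224, 224)             # one preprocessed 224 x 224 training cloud image
with torch.no_grad():
    shallow_maps = shallow(x)               # tensor of shape (1, N_s = 64, 224, 224)
    deep_maps = deep(x)                     # tensor of shape (1, N_d = 256, 56, 56)
print(shallow_maps.shape, deep_maps.shape)
```

Any CNN with accessible intermediate layers would serve equally well; the specific layers above are only chosen to match the 224 × 224 × 64 and 56 × 56 × 256 tensor sizes mentioned in this embodiment.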
Step S3, obtaining a saliency image local area of the training foundation cloud picture by utilizing the shallow convolution activation picture;
further, the step S3 includes the following steps:
step S31, obtaining a set of shallow convolution activation maps corresponding to a preset shallow convolution layer, where the set of shallow convolution activation maps can be expressed as a tensor of size H_s × W_s × N_s, wherein the subscript s denotes the index of the shallow layer, H_s and W_s respectively represent the height and width of the convolution activation maps of that layer, and N_s represents the number of convolution activation maps of that layer;
wherein the shallow convolutional layer is selected from convolutional layers of the first half of the convolutional neural network.
In one embodiment of the present invention, H_s × W_s × N_s = 224 × 224 × 64.
step S32, sequentially connecting the activation responses at each same position on all convolution activation maps corresponding to the shallow convolution layer to obtain N_s-dimensional local feature vectors;
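As a minimal sketch of step S32 (assuming NumPy and channel-first activation maps, neither of which is mandated by the patent), the activation maps can simply be rearranged so that every spatial position holds one N_s-dimensional local feature vector:

```python
import numpy as np

# Channel-first shallow activation maps (N_s, H_s, W_s) -> channels-last (H_s, W_s, N_s),
# so that act[y, x] is the N_s-dimensional local feature vector at position (y, x).
act_maps = np.random.rand(64, 224, 224).astype(np.float32)   # placeholder activations
act = np.transpose(act_maps, (1, 2, 0))
print(act[0, 0].shape)   # (64,)
```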
step S33, performing dense sampling on all convolution activation maps corresponding to the shallow convolution layer by using sliding windows, and acquiring the activation response significant value S_k of each sliding window based on the local feature vectors, wherein the subscript k denotes the kth sliding window;
the size of the sliding window is a multiplied by a, and the step size of dense sampling is a/2.
The activation response significant value S_k of the kth sliding window on the shallow convolution activation maps is expressed as:
S_k = Σ_{i=1}^{a²} ||x_i^k − m_k||₂,
wherein ||·||₂ represents the two-norm of a vector, x_i^k represents the local feature vector at the ith position within the kth sliding window, a² = a × a denotes the number of local feature vectors within the kth sliding window, and m_k denotes the mean feature vector of the kth sliding window, i.e. the mean of all local feature vectors within the sliding window:
m_k = (1/a²) Σ_{i=1}^{a²} x_i^k.
Note that m_k is also referred to as the significant feature vector.
In an embodiment of the present invention, a × a is 12 × 12, and the step size is 6.
step S34, sorting the activation response significant values S_k in descending order, and selecting the sliding windows corresponding to the first K activation response significant values as significant image local areas, so as to obtain K significant image local areas of the training foundation cloud picture;
in one embodiment of the present invention, K is taken to be 200.
Step S4, extracting the characteristics of each local area of the saliency image to obtain corresponding saliency characteristic vectors;
Each significant image local area is represented by its significant feature vector m_k.
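The following NumPy sketch illustrates one possible realization of steps S33, S34 and S4; the sliding-window loop, the array layout and the helper name salient_windows are assumptions made only for illustration and do not define the claimed method.

```python
import numpy as np

def salient_windows(act, a=12, stride=6, K=200):
    """Dense a x a sliding windows over a channels-last shallow activation
    tensor act of shape (H_s, W_s, N_s); the saliency S_k of each window is
    the summed 2-norm deviation of its local feature vectors from their mean
    m_k, and the windows with the K largest S_k are kept (steps S33-S34).
    The mean vectors m_k of the kept windows serve as the significant
    feature vectors of step S4."""
    H, W, N = act.shape
    windows = []
    for y in range(0, H - a + 1, stride):
        for x in range(0, W - a + 1, stride):
            patch = act[y:y + a, x:x + a, :].reshape(-1, N)   # a*a local feature vectors
            m_k = patch.mean(axis=0)                          # mean feature vector of the window
            S_k = np.linalg.norm(patch - m_k, axis=1).sum()   # activation response significant value
            windows.append((S_k, (y, x), m_k))
    windows.sort(key=lambda t: t[0], reverse=True)            # descending order of S_k
    top = windows[:K]
    positions = [pos for _, pos, _ in top]
    feats = np.stack([m for _, _, m in top])                  # K significant feature vectors m_k
    return positions, feats

# usage on placeholder shallow activations of size H_s x W_s x N_s = 224 x 224 x 64
act = np.random.rand(224, 224, 64).astype(np.float32)
positions, m = salient_windows(act)
print(m.shape)   # (200, 64)
```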
Step S5, acquiring the image area corresponding to the deep convolution activation map by using the shallow-layer saliency image local area, and learning the weight corresponding to the image area;
further, the step S5 includes the following steps:
step S51, obtaining a set of deep convolution activation maps corresponding to a preset deep convolution layer, where the set of deep convolution activation maps can be expressed as a tensor of size H_d × W_d × N_d, wherein the subscript d denotes the index of the deep layer, H_d and W_d respectively represent the height and width of the convolution activation maps of that layer, and N_d represents the number of convolution activation maps of that layer;
wherein the deep convolutional layer is selected from convolutional layers of a latter half of the convolutional neural network.
In one embodiment of the present invention, H_d × W_d × N_d = 56 × 56 × 256.
step S52, sequentially connecting the activation responses at each same position on all convolution activation maps corresponding to the deep convolution layer to obtain N_d-dimensional local feature vectors;
step S53, acquiring K corresponding image areas with b × b size in the deep convolution activation map according to the local area of the saliency image corresponding to the shallow convolution layer;
in an embodiment of the present invention, b × b is 3 × 3.
step S54, calculating the weight corresponding to each image area, expressed as:
w_k = (1/b²) Σ_{j=1}^{b²} ||y_j^k||₂,
wherein w_k represents the weight of the kth image area, y_j^k represents the local feature vector at the jth position within that area, and b² = b × b denotes the number of local feature vectors of the kth image area.
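A possible realization of steps S53-S54 is sketched below. Mapping each shallow window onto a b × b region of the deep maps by rescaling its coordinates, and taking the mean 2-norm of that region's local feature vectors as the weight, are both assumptions of this sketch rather than requirements stated by the patent.

```python
import numpy as np

def region_weights(deep_act, positions, shallow_size=224, b=3):
    """For each shallow salient-window position, take the corresponding
    b x b region of the channels-last deep activation tensor deep_act
    (H_d, W_d, N_d) and use the mean 2-norm of its local feature vectors
    as the region weight w_k (sketch of steps S53-S54)."""
    H_d, W_d, N_d = deep_act.shape
    scale = H_d / shallow_size                      # assumed coordinate mapping between layers
    weights = []
    for (y, x) in positions:
        yd = min(int(round(y * scale)), H_d - b)
        xd = min(int(round(x * scale)), W_d - b)
        region = deep_act[yd:yd + b, xd:xd + b, :].reshape(-1, N_d)
        weights.append(np.linalg.norm(region, axis=1).mean())
    return np.asarray(weights)                      # one weight w_k per salient image local area

# usage with placeholder deep activations and example shallow-window positions
deep_act = np.random.rand(56, 56, 256).astype(np.float32)
positions = [(0, 0), (60, 120), (126, 66)]          # e.g. output of the previous sketch
w = region_weights(deep_act, positions)
print(w.shape)   # (3,)
```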
Step S6, based on the significant feature vector and the weight, obtaining a weight significant feature vector set corresponding to the training foundation cloud picture;
According to the significant feature vectors m_k of the K significant image local areas from the shallow convolution activation maps and the weights w_k of the K image areas from the deep convolution activation maps, the weight significant feature vector set χ of each training foundation cloud picture is obtained as:
χ = {w_1 m_1, w_2 m_2, ..., w_K m_K}.
step S7, carrying out significant dual activation coding on the weight significant feature vector set to obtain a weight significant feature vector corresponding to the training foundation cloud picture;
further, the step S7 includes the following steps:
In step S71, a feature vector h is learned through an objective function and used as the final representation of the image, i.e. the weight significant feature vector, based on the constraints:
(w_k m_k)^T h = c, k = 1, 2, ..., K,
wherein c represents a constant.
These constraints can then be expressed jointly as:
(u ⊙ m)^T h = C,
wherein ⊙ represents the corresponding (element-wise) multiplication of matrix elements, u and m collect the K weights w_k and the K significant feature vectors m_k respectively, and C is a constant vector whose elements are all c.
In an embodiment of the present invention, c is 1.
The objective function is accordingly formulated as the least-squares problem:
min_h ||(u ⊙ m)^T h − C||₂,
Solving it with the pseudo-inverse yields the minimum-norm solution, i.e. the optimal h:
h = (u ⊙ m)((u ⊙ m)^T (u ⊙ m))^{-1} C.
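For illustration only, the sketch below solves this least-squares problem with NumPy on placeholder data; the shapes, the use of np.linalg.lstsq and the function name are assumptions of the sketch, not part of the patented method.

```python
import numpy as np

def dual_activation_code(m, w, c=1.0):
    """Sketch of step S7: given K significant feature vectors m (K x N_s)
    and K weights w, solve (u ⊙ m)^T h = C in the least-squares sense,
    i.e. h = A (A^T A)^{-1} C with A holding the columns w_k * m_k."""
    A = (m * w[:, None]).T                         # N_s x K matrix, k-th column is w_k * m_k
    C = np.full(A.shape[1], c)                     # constant vector with all elements equal to c
    h, *_ = np.linalg.lstsq(A.T, C, rcond=None)    # least-squares solution of A^T h = C
    return h

# placeholder inputs standing in for the outputs of the previous sketches
m = np.random.rand(200, 64).astype(np.float32)     # K = 200 significant feature vectors
w = np.random.rand(200).astype(np.float32)         # K = 200 region weights
h = dual_activation_code(m, w)
print(h.shape)   # (64,): one weight significant feature vector per cloud image
```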
and S8, acquiring the weight salient feature vector of the test foundation cloud picture according to the steps S1-S7, and classifying the test foundation cloud picture based on the weight salient feature vector of the test foundation cloud picture to obtain a foundation cloud picture identification result.
In an embodiment of the invention, a nearest neighbor classifier is used to classify the test ground cloud image based on the weighted significant feature vector of the test ground cloud image.
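As a hedged illustration of this classification step, a 1-nearest-neighbour classifier could be fitted on the weight significant feature vectors of the training images, for instance with scikit-learn (the library choice and the synthetic data below are assumptions of the sketch):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder representations: one weight significant feature vector h per
# foundation cloud picture, with D = 7 cloud classes.
rng = np.random.default_rng(0)
h_train = rng.normal(size=(350, 64))
y_train = rng.integers(0, 7, size=350)
h_test = rng.normal(size=(70, 64))

clf = KNeighborsClassifier(n_neighbors=1).fit(h_train, y_train)
pred = clf.predict(h_test)   # predicted cloud types for the test foundation cloud pictures
```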
Taking the foundation cloud picture database collected by the Chinese Academy of Meteorological Sciences as an example, the foundation cloud picture identification accuracy of the method is 91.24%, which demonstrates its effectiveness.
It is to be understood that the above-described embodiments of the present invention are merely illustrative of or explaining the principles of the invention and are not to be construed as limiting the invention. Therefore, any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the present invention should be included in the protection scope of the present invention. Further, it is intended that the appended claims cover all such variations and modifications as fall within the scope and boundaries of the appended claims or the equivalents of such scope and boundaries.
Claims (10)
1. A foundation cloud picture identification method based on significant dual activation coding is characterized by comprising the following steps:
step S1, preprocessing the multiple input foundation cloud pictures to obtain training foundation cloud pictures;
step S2, inputting the training foundation cloud picture into a convolutional neural network to obtain a convolutional activation picture;
step S3, obtaining a saliency image local area of the training foundation cloud picture by utilizing the shallow convolution activation picture;
step S4, extracting the characteristics of each local area of the saliency image to obtain corresponding saliency characteristic vectors;
step S5, acquiring the image area corresponding to the deep convolution activation map by using the shallow-layer saliency image local area, and learning the weight corresponding to the image area;
step S6, based on the significant feature vector and the weight, obtaining a weight significant feature vector set corresponding to the training foundation cloud picture;
step S7, carrying out significant dual activation coding on the weight significant feature vector set to obtain the weight significant feature vector corresponding to the training foundation cloud picture, namely learning a feature vector as the weight significant feature vector by using an objective function;
and step S8, acquiring the weight significant feature vector of the test foundation cloud picture, and classifying the test foundation cloud picture based on the weight significant feature vector to obtain a foundation cloud picture identification result.
2. The method according to claim 1, wherein the step S1 comprises the steps of:
step S11, normalizing the size of the input foundation cloud picture into H multiplied by W to obtain a training foundation cloud picture, wherein H and W respectively represent the height and width of the training foundation cloud picture;
and step S12, obtaining the category label of each training foundation cloud picture.
3. The method according to claim 1, wherein the step S2 comprises the steps of:
step S21, determining a convolutional neural network, initializing the convolutional neural network, and modifying the output number of the tail end of the convolutional neural network into the class number D of the foundation cloud picture;
and step S22, inputting the training foundation cloud picture into the initialized convolutional neural network to obtain a convolutional activation picture.
4. The method according to claim 1, wherein the step S3 comprises the steps of:
step S31, obtaining a set of shallow convolution activation maps corresponding to a preset shallow convolution layer, where the set of shallow convolution activation maps can be expressed as a tensor of size H_s × W_s × N_s, wherein the subscript s denotes the index of the shallow layer, H_s and W_s respectively represent the height and width of the convolution activation maps of that layer, and N_s represents the number of convolution activation maps of that layer;
step S32, sequentially connecting the activation responses at each same position on all convolution activation maps corresponding to the shallow convolution layer to obtain N_s-dimensional local feature vectors;
step S33, performing dense sampling on all convolution activation maps corresponding to the shallow convolution layer by using sliding windows, and acquiring the activation response significant value S_k of each sliding window based on the local feature vectors, wherein the subscript k denotes the kth sliding window;
step S34, sorting the activation response significant values S_k in descending order, and selecting the sliding windows corresponding to the first K activation response significant values as significant image local areas, so as to obtain K significant image local areas of the training foundation cloud picture.
5. The method of claim 4, wherein the sliding window has a size of a x a and the step size of the dense samples is a/2.
6. The method of claim 5, wherein the activation response significant value S_k of the kth sliding window on the shallow convolution activation maps is expressed as:
S_k = Σ_{i=1}^{a²} ||x_i^k − m_k||₂,
wherein ||·||₂ represents the two-norm of a vector, x_i^k represents the local feature vector at the ith position within the kth sliding window, a² = a × a denotes the number of local feature vectors within the kth sliding window, and m_k denotes the mean feature vector of the kth sliding window, i.e. the mean of all local feature vectors within the sliding window:
m_k = (1/a²) Σ_{i=1}^{a²} x_i^k.
7. The method according to claim 1, wherein in step S4, each significant image local area is represented by the significant feature vector m_k.
8. The method according to claim 1, wherein the step S5 comprises the steps of:
step S51, obtaining a set of deep convolution activation maps corresponding to a preset deep convolution layer, where the set of deep convolution activation maps can be expressed as a tensor of size H_d × W_d × N_d, wherein the subscript d denotes the index of the deep layer, H_d and W_d respectively represent the height and width of the convolution activation maps of that layer, and N_d represents the number of convolution activation maps of that layer;
wherein the deep convolutional layer can be selected from convolutional layers of the latter half of the convolutional neural network;
step S52, sequentially connecting the activation responses at each same position on all convolution activation maps corresponding to the deep convolution layer to obtain N_d-dimensional local feature vectors;
step S53, acquiring K corresponding image areas with b × b size in the deep convolution activation map according to the local area of the saliency image corresponding to the shallow convolution layer;
step S54, calculating the weight corresponding to each image area, expressed as:
w_k = (1/b²) Σ_{j=1}^{b²} ||y_j^k||₂,
wherein w_k represents the weight of the kth image area, y_j^k represents the local feature vector at the jth position within that area, and b² = b × b denotes the number of local feature vectors of the kth image area.
9. The method according to claim 4, wherein in step S6, according to the significant feature vectors m_k of the K significant image local areas from the shallow convolution activation maps and the weights w_k of the K image areas from the deep convolution activation maps, the weight significant feature vector set χ of each training foundation cloud picture is obtained as:
χ = {w_1 m_1, w_2 m_2, ..., w_K m_K}.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810433104.XA CN108805029B (en) | 2018-05-08 | 2018-05-08 | Foundation cloud picture identification method based on significant dual activation coding |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810433104.XA CN108805029B (en) | 2018-05-08 | 2018-05-08 | Foundation cloud picture identification method based on significant dual activation coding |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108805029A CN108805029A (en) | 2018-11-13 |
CN108805029B true CN108805029B (en) | 2021-08-24 |
Family
ID=64091994
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810433104.XA Active CN108805029B (en) | 2018-05-08 | 2018-05-08 | Foundation cloud picture identification method based on significant dual activation coding |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108805029B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111310820A (en) * | 2020-02-11 | 2020-06-19 | 山西大学 | Foundation meteorological cloud chart classification method based on cross validation depth CNN feature integration |
CN112232297B (en) * | 2020-11-09 | 2023-08-22 | 北京理工大学 | Remote sensing image scene classification method based on depth joint convolution activation |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106295542A (en) * | 2016-08-03 | 2017-01-04 | 江苏大学 | A kind of road target extracting method of based on significance in night vision infrared image |
CN107274419A (en) * | 2017-07-10 | 2017-10-20 | 北京工业大学 | A kind of deep learning conspicuousness detection method based on global priori and local context |
CN107784308A (en) * | 2017-10-09 | 2018-03-09 | 哈尔滨工业大学 | Conspicuousness object detection method based on the multiple dimensioned full convolutional network of chain type |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017177188A1 (en) * | 2016-04-08 | 2017-10-12 | Vizzario, Inc. | Methods and systems for obtaining, aggregating, and analyzing vision data to assess a person's vision performance |
- 2018-05-08: Application CN201810433104.XA filed in China (CN); granted as patent CN108805029B, status Active.
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106295542A (en) * | 2016-08-03 | 2017-01-04 | 江苏大学 | A kind of road target extracting method of based on significance in night vision infrared image |
CN107274419A (en) * | 2017-07-10 | 2017-10-20 | 北京工业大学 | A kind of deep learning conspicuousness detection method based on global priori and local context |
CN107784308A (en) * | 2017-10-09 | 2018-03-09 | 哈尔滨工业大学 | Conspicuousness object detection method based on the multiple dimensioned full convolutional network of chain type |
Non-Patent Citations (4)
Title |
---|
Amulet: Aggregating Multi-level Convolutional Features for Salient Object Detection; Pingping Zhang; 2017 IEEE International Conference on Computer Vision; 2017-12-25; pp. 1-10 *
Deep Convolutional Activations-Based Features for Ground-Based Cloud Classification; Cunzhao Shi et al.; IEEE Geoscience and Remote Sensing Letters; 2017-03-31; pp. 816-820 *
Multi-region cross-weighted aggregation of deep convolutional features for image retrieval; Dong Rongsheng et al.; Journal of Computer-Aided Design & Computer Graphics; 2018-04-30; pp. 658-665 *
Scene classification of high-resolution remote sensing images combining saliency and multi-layer convolutional neural networks; He Xiaofei et al.; Acta Geodaetica et Cartographica Sinica; 2016-09-30; pp. 1073-1080 *
Also Published As
Publication number | Publication date |
---|---|
CN108805029A (en) | 2018-11-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109977918B (en) | Target detection positioning optimization method based on unsupervised domain adaptation | |
CN110443143B (en) | Multi-branch convolutional neural network fused remote sensing image scene classification method | |
Wang et al. | Tropical cyclone intensity estimation from geostationary satellite imagery using deep convolutional neural networks | |
CN109034044B (en) | Pedestrian re-identification method based on fusion convolutional neural network | |
Sun et al. | Rural building detection in high-resolution imagery based on a two-stage CNN model | |
CN108229589B (en) | Foundation cloud picture classification method based on transfer learning | |
CN105069481B (en) | Natural scene multiple labeling sorting technique based on spatial pyramid sparse coding | |
CN106096655B (en) | A kind of remote sensing image airplane detection method based on convolutional neural networks | |
CN106203318A (en) | The camera network pedestrian recognition method merged based on multi-level depth characteristic | |
CN108629368B (en) | Multi-modal foundation cloud classification method based on joint depth fusion | |
CN107392237B (en) | Cross-domain foundation cloud picture classification method based on migration visual information | |
CN107832797B (en) | Multispectral image classification method based on depth fusion residual error network | |
Lan et al. | Defect detection from UAV images based on region-based CNNs | |
CN106909902A (en) | A kind of remote sensing target detection method based on the notable model of improved stratification | |
CN109508756B (en) | Foundation cloud classification method based on multi-cue multi-mode fusion depth network | |
CN105931241A (en) | Automatic marking method for natural scene image | |
CN111242227A (en) | Multi-modal foundation cloud identification method based on heterogeneous depth features | |
CN108805029B (en) | Foundation cloud picture identification method based on significant dual activation coding | |
Liu et al. | Multimodal ground-based remote sensing cloud classification via learning heterogeneous deep features | |
CN108108720A (en) | A kind of ground cloud image classification method based on depth multi-modal fusion | |
Arya et al. | Object detection using deep learning: A review | |
CN111368843A (en) | Method for extracting lake on ice based on semantic segmentation | |
CN108256557B (en) | Hyperspectral image classification method combining deep learning and neighborhood integration | |
Li et al. | Airplane detection using convolutional neural networks in a coarse-to-fine manner | |
CN108985378B (en) | Domain self-adaption method based on hybrid cross-depth network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||