CN116740578B - Remote sensing image recommendation method based on user selection - Google Patents


Info

Publication number
CN116740578B
CN116740578B (application CN202311014229.6A)
Authority
CN
China
Prior art keywords
feature
features
image
layer
pyramid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311014229.6A
Other languages
Chinese (zh)
Other versions
CN116740578A (en)
Inventor
李群
李洁
张丽
邹圣兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shuhui Spatiotemporal Information Technology Co ltd
Original Assignee
Beijing Shuhui Spatiotemporal Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shuhui Spatiotemporal Information Technology Co ltd filed Critical Beijing Shuhui Spatiotemporal Information Technology Co ltd
Priority to CN202311014229.6A priority Critical patent/CN116740578B/en
Publication of CN116740578A publication Critical patent/CN116740578A/en
Application granted granted Critical
Publication of CN116740578B publication Critical patent/CN116740578B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a remote sensing image recommendation method based on user selection, relating to the field of data processing, and comprising the following steps: acquiring user information, and obtaining a first candidate set and a second candidate set according to the user information; extracting scene features using a first network model; extracting salient features using a second network model; obtaining the regional feature of each image in the first candidate set from its scene features and salient features, and the query feature of each image in the second candidate set from its scene features and salient features; performing prediction calculation on the regional features and the query features respectively to obtain prediction probabilities; obtaining a first recommended feature set according to the prediction probabilities of the regional features, and a second recommended feature set according to the prediction probabilities of the query features; and screening the first recommended feature set and the second recommended feature set according to a screening strategy to obtain the recommendation result. The recommendation result obtained by the provided method meets the requirements of users and is of better quality.

Description

Remote sensing image recommendation method based on user selection
Technical Field
The invention relates to the field of data processing, and in particular to a remote sensing image recommendation method based on user selection.
Background
As an important basic resource, remote sensing satellite data is widely applied in fields such as national defense, economy, transportation, energy, and environmental protection. It is characterized by massive volume, multiple sources, and heterogeneity, specifically as follows. First, satellite data coverage is wide and the time span is large: massive historical remote sensing satellite data has accumulated, thousands of satellites fly above the earth carrying payloads of various modes, new remote sensing satellite data is generated at every moment, and as more satellites are launched the data volume grows explosively. Second, the data sources are diverse, including sensor types such as visible light, infrared, microwave, and hyperspectral; data of different sensor types, resolutions, and band ranges suit corresponding application needs. Third, satellite observation is periodic, allowing long time-series observation of the same area from different angles, while remote sensing data from different satellite sources differ in storage format, organization mode, metadata standards, and other aspects; conventional data management is therefore hard-pressed to achieve overall management of multi-source satellite data, and personalized data customization requirements are difficult to meet.
With the gradual improvement in the number of in-orbit satellites and in data quality, the product types and quantities of remote sensing satellite data keep increasing, demand for satellite data grows, and the application fields of the data keep widening, posing great challenges to the storage management and service modes of remote sensing satellite data. Existing remote sensing satellite data storage is simple, relying mainly on basic attributes, and rarely considers or reflects the association relations among heterogeneous data or the high-level characteristics of the data, so the application requirements of high timeliness are difficult to meet. At present, users acquire remote sensing satellite data mainly by retrieval based on simple metadata and human experience. Traditional remote sensing satellite data services require users to have certain professional domain knowledge, which limits the sharing range of remote sensing data to some extent; and as the data volume increases, such a passive retrieval mode can hardly guarantee the accuracy and timeliness of the data.
In addition, when a user inputs a query requirement, current recommendation systems cannot understand well the semantic information of the query information the user inputs, so the accuracy of search results is low, the user experience is poor, and the user's search requirements cannot be met.
Disclosure of Invention
The invention provides a remote sensing image recommendation method based on user selection, which comprehensively recommends images by combining a region of interest with query information, so that the recommendation result better meets the requirements of users and the quality of the images is better.
To achieve this technical purpose, the invention provides a remote sensing image recommendation method based on user selection, comprising the following steps:
s1, acquiring user information, wherein the user information comprises query information and an interested region;
s2, acquiring a first candidate set according to the region of interest, and acquiring a second candidate set according to keywords in query information;
s3, respectively extracting scene characteristics of each image in the first candidate set and the second candidate set by using the first network model;
s4, respectively extracting the salient features of each image in the first candidate set and the second candidate set by using a second network model;
s5, obtaining the regional characteristics of each image in the first candidate set according to the scene characteristics and the salient characteristics of each image in the first candidate set, and obtaining the query characteristics of each image in the second candidate set according to the scene characteristics and the salient characteristics of each image in the second candidate set;
s6, respectively carrying out prediction probability calculation on the regional characteristics of each image in the first candidate set and the query characteristics of each image in the second candidate set to obtain the prediction probability of each regional characteristic and the prediction probability of each query characteristic;
S7, according to the sequencing result of the prediction probability of each regional feature, M regional features are determined to be used as a first recommended feature set, N query features are determined to be used as a second recommended feature set according to the sequencing result of the prediction probability of each query feature, N is a positive integer, and M is an integer which is more than 0 and less than N;
and S8, screening the first recommendation feature set and the second recommendation feature set according to a screening strategy to obtain a recommendation result.
According to an embodiment of the present invention, the first recommended feature set includes M first recommended features, and the second recommended feature set includes N second recommended features; step S8 includes:
s81, for each first recommended feature in the first recommended feature set, calculating the similarity between that first recommended feature and each second recommended feature in the second recommended feature set, and taking the q second recommended features with the highest-ranked similarity as a group of candidate features, q being a positive integer;
s82, repeating the step S81 to obtain M groups of candidate features;
s83, performing deduplication on the M groups of candidate features, and merging them with the first recommended feature set to obtain a final recommended feature set;
s84, obtaining a corresponding recommended image according to the final recommended feature set, and taking the corresponding recommended image as a recommended result.
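The screening strategy of steps S81 to S84 can be sketched as follows. This is a minimal illustration in which the similarity measure is assumed to be cosine similarity (the patent does not fix a particular measure) and features are plain vectors:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def screen(first_set, second_set, q):
    """For each first recommended feature, keep the q most similar
    second recommended features (S81/S82); deduplicate the kept
    candidates and merge them with first_set (S83)."""
    groups = []
    for f in first_set:
        ranked = sorted(range(len(second_set)),
                        key=lambda j: cosine(f, second_set[j]),
                        reverse=True)
        groups.append(ranked[:q])                    # one group per first feature
    kept = sorted({j for g in groups for j in g})    # deduplication
    return first_set + [second_set[j] for j in kept]
```

Step S84 would then map the returned features back to their images to produce the recommendation result.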
According to an embodiment of the present invention, the first network model includes a first feature extraction network, a feature screening network, and a first feature fusion network, and step S3 includes:
s31, respectively extracting pyramid features of each image in the first candidate set and pyramid features of each image in the second candidate set by using a first feature extraction network;
s32, aiming at pyramid features of each image, removing redundant information in each layer of pyramid features by utilizing a feature screening function in a feature screening network to obtain pyramid refining features;
s33, using a first feature fusion network to respectively perform layer-by-layer feature fusion from top to bottom and from bottom to top on pyramid refining features corresponding to each image to obtain two fusion features, and performing vector inner product calculation on the two fusion features to obtain scene features corresponding to each image.
According to an embodiment of the present invention, in step S31, the first feature extraction network includes a 7×7 convolution layer, a max pooling layer, and 4 residual blocks, where each residual block includes 2 residual units, each residual unit includes 2 3×3 convolution layers, and shortcut connections are arranged between the residual units. The first candidate set or the second candidate set is input into the first feature extraction network, and feature extraction is performed on each image with the 7×7 convolution layer to obtain a first feature map, denoted X1. The first feature map is pooled with the max pooling layer, and the pooled feature map is then input into the 4 residual blocks in turn to obtain 4 corresponding feature maps, denoted the second feature map X2, the third feature map X3, the fourth feature map X4, and the fifth feature map X5. The scale of each feature map is different, and the scales increase in turn along the data flow direction. The fifth feature map X5 is taken as the first-layer pyramid feature; X5 is upsampled and stitched with the fourth feature map X4 to obtain the second-layer pyramid feature; the second-layer pyramid feature is upsampled and stitched with the third feature map X3 to obtain the third-layer pyramid feature; the third-layer pyramid feature is upsampled and stitched with the second feature map X2 to obtain the fourth-layer pyramid feature; the fourth-layer pyramid feature is upsampled and stitched with the first feature map X1 to obtain the fifth-layer pyramid feature. The first-layer to fifth-layer pyramid features are arranged in sequence to form a pyramid, thereby obtaining the pyramid features.
According to an embodiment of the present invention, step S32 includes:
Using 1×1 convolution layers, X1, X2, X3, X4, X5 are convolved to unify the channel number to C1, obtaining C1-channel feature maps Y1, Y2, Y3, Y4, Y5.
Redundant information is removed from Y1, Y2, Y3, Y4, Y5 according to the feature screening function, followed by L2 normalization, to obtain the pyramid refined features Y1', Y2', Y3', Y4', Y5'.
The feature screening function takes the form:
Yi' = relu(fc(GeM(F(D)))), i = 1, 2, 3, 4, 5
where relu is the activation function, fc is the fully-connected layer, GeM is the average pooling layer, F is the principal component analysis calculation, and D is the normalized matrix obtained after vectorizing the five-layer pyramid features Y1, Y2, Y3, Y4, Y5.
According to an embodiment of the present invention, step S33 includes:
The pyramid refined features are fused layer by layer from top to bottom in the order Y5', Y4', Y3', Y2', Y1' to obtain a first fusion feature Z of dimension C1 × H1 × W1, where C1 is the channel number of the feature map, H1 is the height of the feature map, and W1 is the width of the feature map.
The pyramid refined features are then fused layer by layer from bottom to top in the order Y1', Y2', Y3', Y4', Y5' to obtain a second fusion feature V of the same dimensions.
Finally, the vector inner product of Z and V is computed to obtain the scene feature.
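The bidirectional fusion of step S33 can be illustrated as follows. The patent does not disclose the concrete per-layer fusion operator, so this sketch assumes an order-sensitive running pairwise average (so that the top-down pass Z and the bottom-up pass V genuinely differ) and flattens each refined feature to a vector:

```python
def fuse(layers):
    """Layer-by-layer fusion by a running pairwise average (assumed
    operator; it is order-sensitive, which a plain running sum is not)."""
    acc = list(layers[0])
    for layer in layers[1:]:
        acc = [(a + b) / 2 for a, b in zip(acc, layer)]
    return acc

def scene_feature(refined):
    # refined: [Y1', Y2', ..., Y5'] flattened to equal-length vectors.
    Z = fuse(refined[::-1])   # top-down: Y5', ..., Y1'
    V = fuse(refined)         # bottom-up: Y1', ..., Y5'
    return sum(z * v for z, v in zip(Z, V))  # vector inner product
```

With a scalar inner product the "scene feature" collapses to a single number; whether the patent keeps a vector-valued result is not specified, so this is only a shape-level illustration.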
According to an embodiment of the present invention, the second network model includes a second feature extraction network, an attention network, and a second feature fusion network, and step S4 includes:
S41, respectively extracting the high-level features of each image in the first candidate set and the high-level features of each image in the second candidate set by using a second feature extraction network;
s42, carrying out coding processing of spatial attention and channel attention on high-level features of each image by using an attention network to obtain a spatial feature matrix and a channel feature matrix;
s43, using the second feature fusion network, adding the spatial feature matrix and the channel feature matrix while keeping the number of channels unchanged, so as to obtain the salient features.
According to an embodiment of the present invention, in step S42, the attention network includes a spatial attention network and a channel attention network;
The high-level feature E is input into the spatial attention network. E is first normalized into I, and I is passed through 3×3 convolution layers to obtain three intermediate features J, K, L, which are reshaped into two-dimensional matrices. J is transposed and multiplied with K, and the resulting matrix is input into a softmax layer to obtain the spatial intermediate matrix S:
s_ji = exp(J_i · K_j) / Σ_{i=1}^{H2×W2} exp(J_i · K_j)
where s_ji represents the effect of the i-th position on the j-th position, H2 is the height of the feature map, and W2 is the width of the feature map.
L is multiplied with S, and the result is added to I to obtain the spatial feature matrix P:
P = α(L · S) + I
where α is a proportionality coefficient, initialized to 0, whose weight gradually increases with learning.
The high-level feature E is likewise input into the channel attention network. E is first normalized into I, I is multiplied with its transpose, and the resulting matrix is input into a softmax layer to obtain the channel intermediate matrix T:
t_ji = exp(I_i · I_j) / Σ_{i=1}^{C2} exp(I_i · I_j)
where t_ji represents the effect of the i-th channel on the j-th channel.
T is multiplied with I, and the result is added to I to obtain the channel feature matrix Q:
Q = β(T · I) + I
where C2 is the channel number, β is a proportionality coefficient, initialized to 0, whose weight gradually increases with learning.
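The channel-attention branch (the matrices T and Q above) can be sketched in plain Python for a small C2 × (H2·W2) matrix I. Note that with β initialized to 0 the output Q coincides with the input I; the attention term is blended in only as β grows during learning:

```python
import math

def matmul(A, B):
    # Plain-Python matrix product.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax_rows(M):
    # Row-wise softmax, shifted by the row maximum for stability.
    out = []
    for row in M:
        m = max(row)
        e = [math.exp(x - m) for x in row]
        s = sum(e)
        out.append([x / s for x in e])
    return out

def channel_attention(I, beta=0.0):
    """I: C2 x (H2*W2) matrix, one row per channel (flattened spatially).
    T = softmax(I @ I^T) captures channel-to-channel affinities;
    Q = beta * (T @ I) + I, with beta learned from an initial value of 0."""
    It = [list(col) for col in zip(*I)]
    T = softmax_rows(matmul(I, It))
    TI = matmul(T, I)
    return [[beta * t + i for t, i in zip(tr, ir)] for tr, ir in zip(TI, I)]
```

The spatial branch is analogous, with J, K, L produced by convolutions and the softmax taken over positions rather than channels.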
According to an embodiment of the present invention, step S6 includes:
s61, fitting all the regional features in the first candidate set to obtain a first fitting feature, calculating the first feature distance between each regional feature in the first candidate set and the first fitting feature, and normalizing each first feature distance into [0,1] to obtain the prediction probability of each regional feature in the first candidate set;
and S62, fitting all the query features in the second candidate set to obtain a second fitting feature, calculating the second feature distance between each query feature in the second candidate set and the second fitting feature, and normalizing each second feature distance into [0,1] to obtain the prediction probability of each query feature in the second candidate set.
According to an embodiment of the present invention, the first fitting feature is calculated as:
x1 = (1/m) Σ_{i=1}^{m} A_i
where m is the number of regional features, A_i is the i-th (i = 1, 2, …, m) regional feature, and x1 is the first fitting feature;
the second fitting feature is calculated as:
x2 = (1/n) Σ_{j=1}^{n} B_j
where n is the number of query features, B_j is the j-th (j = 1, 2, …, n) query feature, and x2 is the second fitting feature.
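A minimal sketch of step S6, assuming the fitting feature is the mean vector given by the formulas above, and that the normalized distance is inverted so that features closer to the fitting feature receive a higher prediction probability (the inversion is an assumption; the patent only states that distances are normalized into [0,1]):

```python
import math

def fit_feature(features):
    # Mean vector over all features: x = (1/m) * sum(A_i).
    m = len(features)
    return [sum(f[k] for f in features) / m for k in range(len(features[0]))]

def prediction_probabilities(features):
    """Euclidean distance of each feature to the fitted (mean) feature,
    min-max normalized into [0, 1] and inverted so that closer features
    score higher (the inversion is an illustrative assumption)."""
    x = fit_feature(features)
    d = [math.dist(f, x) for f in features]
    lo, hi = min(d), max(d)
    if hi == lo:
        return [1.0] * len(d)
    return [1 - (v - lo) / (hi - lo) for v in d]
```

The same routine applies to both the regional features (S61) and the query features (S62).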
The beneficial effects of the invention at least comprise:
(1) The candidate image set is expanded by combining the region of interest selected by the user with the query information; the set is then primarily screened according to the prediction probabilities of the regional features and the query features, and finely screened based on a screening strategy, so that the recommendation result better matches the user's expectations, with a moderate number of recommended images of better quality.
(2) The user's region of interest serves as the core of the screening strategy: with the first recommended feature set as the reference target, the second recommended feature set is finely screened to obtain second recommended features of good quality, and the final recommendation result is obtained on this basis, improving the accuracy and precision of the recommended images so that they better fit the user's needs.
(3) A feature screening function is introduced into the first network model, the pyramid features are fused layer by layer both top-down and bottom-up, and the vector inner product of the two fusion features yields the scene feature; on this basis, redundant information such as noise interference can be removed from the features, improving feature quality and better describing the scene information.
(4) The second network model adopts an attention network and extracts salient features against complex backgrounds by capturing feature dependencies between space and channels; in this way, context information and long-range information can be exploited, improving the semantic description capability of the features and thus their discriminability.
(5) Corresponding fitting features are obtained by fitting the regional features and the query features. Because the fitting features are high-quality features, determining the prediction probability of each regional feature (or query feature) from the feature distance to the corresponding fitting feature helps screen features of higher recommendation value from each candidate set, improving the accuracy of the recommended feature sets and hence the quality and accuracy of the final recommendation result.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a remote sensing image recommendation method based on user selection according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a remote sensing image recommendation method based on user selection according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a process using a first feature extraction network in accordance with an embodiment of the invention;
FIG. 4 is a schematic diagram of a pyramid feature construction process according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a process using a second feature extraction network in accordance with an embodiment of the invention;
FIG. 6 is a schematic diagram of a process using an attention network in accordance with an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. It should be noted that, as long as no conflict is formed, each embodiment of the present invention and each feature of each embodiment may be combined with each other, and the formed technical solutions are all within the protection scope of the present invention.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present invention. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
Referring to fig. 1 and 2, the present invention provides a remote sensing image recommendation method based on user selection, which includes steps S1 to S8.
In step S1, user information is acquired, the user information including query information and a region of interest.
In the embodiment of the invention, the query information of the user can comprise text information, voice information and the like. For non-textual query information, such as speech information, the query information may be converted to text information by a format conversion tool to facilitate subsequent continued processing.
The region of interest may be, for example, a target area determined by a user on an electronic map by a rectangular frame, a circular frame, or an irregular shape. In some embodiments, the region of interest may also be acquired in other ways, for example, a user may enter the region of interest through a user interface.
In step S2, a first candidate set is obtained from the region of interest, and a second candidate set is obtained from keywords in the query information.
After the region of interest is obtained, it is compared with the regions corresponding to all images in the database; if the coverage rate of an image's corresponding region with respect to the region of interest reaches a threshold, that image is added to the first candidate set. In this embodiment, the threshold may be set to 60%, for example. In some embodiments, the threshold may be set to other values according to actual needs, which is not limited herein.
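For rectangular footprints, the coverage test for the first candidate set can be sketched as follows (axis-aligned rectangles and the 60% threshold of this embodiment are assumed):

```python
def coverage(roi, img):
    """Fraction of the rectangular region of interest covered by an image
    footprint; rectangles are (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(roi[2], img[2]) - max(roi[0], img[0]))
    iy = max(0.0, min(roi[3], img[3]) - max(roi[1], img[1]))
    roi_area = (roi[2] - roi[0]) * (roi[3] - roi[1])
    return (ix * iy) / roi_area

def first_candidate_set(roi, footprints, threshold=0.6):
    # Keep images whose footprint covers at least `threshold` of the ROI.
    return [i for i, fp in enumerate(footprints)
            if coverage(roi, fp) >= threshold]
```

Irregular or circular regions of interest would need a polygon-intersection routine instead, but the thresholding logic is the same.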
In addition, keyword extraction can be performed on the query information, and image retrieval can be performed from the database based on the extracted keywords, so that a second candidate set is obtained.
In the embodiment of the invention, the candidate image set to be recommended is determined by combining the region of interest and the query information, so that the number of the images in the candidate set can be expanded, and the quality of image recommendation can be improved subsequently.
In one example, keywords in query information are extracted, for example, by a pre-trained topic model. In another example, query information may be, for example, word-segmented to obtain keywords. Of course, the method for acquiring the keywords in the query information in the present invention is not limited to the above examples, and may be specifically selected according to actual needs, which is not limited herein.
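A minimal illustration of keyword extraction by word segmentation plus stopword filtering (the stopword list here is purely illustrative; a pre-trained topic model or a proper tokenizer would be used in practice):

```python
# Illustrative stopword list, not taken from the patent.
STOPWORDS = {"of", "the", "in", "images", "image"}

def keywords(query):
    # Naive segmentation + stopword filtering of a text query.
    return [w for w in query.lower().split() if w not in STOPWORDS]
```

The resulting keywords are then fed to the index-based retrieval described below.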
In the embodiment of the invention, the search can be performed based on an index mechanism according to the keywords so as to obtain the images matched with the keywords, and the second candidate set is obtained according to the images. The indexing mechanism may be an index generated by performing hybrid coding according to the image space hierarchy and the time span.
In one embodiment of the present invention, the construction of the index for the remote sensing image data includes the following steps:
First, a large amount of remote sensing data is organized into three temporal levels, where each level indexes the entire dataset at a corresponding temporal granularity. For example, the coarsest level maintains one index structure per year, with an entire year of data contained in one index, and the finest level maintains one index structure per day. The monthly index structure is built once all the day-granularity indexes in the month are complete, and the yearly index structure is built at the end of the year.
Secondly, a quadtree index is established for all regions of the world, and the images are searched through a time-space mixed index mechanism.
In step S3, the scene feature of each image in the first candidate set and the scene feature of each image in the second candidate set are extracted using the first network model, respectively.
In an embodiment of the present invention, the first network model includes, for example, a first feature extraction network, a feature screening network, and a first feature fusion network. The step S3 includes, for example, steps S31 to S33.
In step S31, the pyramid feature of each image in the first candidate set and the pyramid feature of each image in the second candidate set are extracted using the first feature extraction network, respectively.
Referring to fig. 3, the first feature extraction network includes a 7×7 convolution layer, a max pooling layer, and 4 residual blocks. Each residual block comprises 2 residual units, each residual unit comprises 2 3×3 convolution layers, and shortcut connections are arranged between the residual units.
In the embodiment of the present invention, a process of extracting pyramid features of each image in the first candidate set using the first feature extraction network is the same as a process of extracting pyramid features of each image in the second candidate set using the first feature extraction network, and an example of this process will be described below using the first feature extraction network to extract pyramid features of each image in the first candidate set.
For example, for each image in the first candidate set, the image is input into the first feature extraction network, and feature extraction is performed with the 7×7 convolution layer to obtain a first feature map, recorded as X1. Thereafter, the max pooling layer is applied to X1, and the pooled feature map is input into the 4 residual blocks in turn to obtain 4 corresponding feature maps, recorded as the second feature map X2, the third feature map X3, the fourth feature map X4, and the fifth feature map X5. From X1 to X5, the scale of each feature map is different, and the scales increase in turn along the data flow direction.
Referring to fig. 4, after the feature map of each scale is obtained, pyramid features corresponding to each image in the first candidate set may be obtained according to the feature map of each scale.
For example, the fifth feature map X5 is taken as the first-layer pyramid feature. Next, X5 is upsampled, and the upsampled feature map is stitched with the fourth feature map X4 to obtain the second-layer pyramid feature.
Next, the second-layer pyramid feature is upsampled, and the upsampled feature map is stitched with the third feature map X3 to obtain the third-layer pyramid feature.
Next, the third-layer pyramid feature is upsampled, and the upsampled feature map is stitched with the second feature map X2 to obtain the fourth-layer pyramid feature.
Next, the fourth-layer pyramid feature is upsampled, and the upsampled feature map is stitched with the first feature map X1 to obtain the fifth-layer pyramid feature.
The first-layer, second-layer, third-layer, fourth-layer, and fifth-layer pyramid features are then arranged in order to form a pyramid, giving the pyramid features.
Similarly, pyramid features corresponding to each image in the second candidate set may be obtained based on the foregoing.
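The top-down construction above can be sketched as follows. This is an illustrative sketch only, not part of the claimed method: NumPy arrays stand in for the feature maps, nearest-neighbor upsampling is assumed (the embodiment does not specify the interpolation method), and each feature map is assumed to be half the spatial size of the previous one.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling over the spatial axes (H, W, C layout).
    return x.repeat(2, axis=0).repeat(2, axis=1)

def build_pyramid(maps):
    # maps: [X1, ..., X5], spatial size halving from X1 to X5.
    # Layer 1 of the pyramid is X5; each subsequent layer is the upsampled
    # previous layer stitched (channel-wise concat) with the next-larger map.
    pyramid = [maps[-1]]
    for fmap in reversed(maps[:-1]):
        up = upsample2x(pyramid[-1])
        pyramid.append(np.concatenate([up, fmap], axis=-1))
    return pyramid

# Toy feature maps: X1 is 32x32, X5 is 2x2, 8 channels each.
maps = [np.ones((32 // 2**k, 32 // 2**k, 8)) for k in range(5)]
pyramid = build_pyramid(maps)
print([p.shape for p in pyramid])
```

Note how the channel count grows at each stitched layer, since stitching is concatenation rather than addition.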
In step S32, for the pyramid feature of each image, redundant information in each layer of pyramid feature is removed by using a feature screening function in the feature screening network, so as to obtain pyramid refined features.
In an embodiment of the present invention, step S32 includes the steps of:
for the pyramid features corresponding to each image, convolving X1, X2, X3, X4, X5 with a 1×1 convolution layer to unify the channel number of each feature map to C1, obtaining feature maps with channel number C1;
removing redundant information from these feature maps according to the feature screening function, and performing l2 normalization, to obtain the pyramid refined features.
The feature screening function is as follows:
wherein i = 1, 2, 3, 4, 5, relu is the activation function, fc is the fully-connected layer, GeM is the average pooling layer, F is the principal component analysis computation, and D is the matrix obtained by vectorizing and normalizing the five-layer pyramid features.
In the embodiment of the invention, the principal component analysis computation removes noise interference from the features, and the overall feature screening function removes most of the redundant information, extracting the effective information; the quality of the features is thus greatly improved after the feature screening function is applied.
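The two stated ingredients of the feature screening function, principal component analysis to suppress noise and l2 normalization, can be sketched numerically as follows. This is an approximation for illustration only: the composition with relu, fc, and GeM from the embodiment's formula is omitted, and keeping the top-k principal components is an assumed form of the PCA step.

```python
import numpy as np

def pca_denoise(x, k):
    # Keep only the top-k principal components of the rows of x,
    # discarding the low-variance directions that carry noise.
    mu = x.mean(axis=0)
    u, s, vt = np.linalg.svd(x - mu, full_matrices=False)
    s[k:] = 0.0
    return u @ np.diag(s) @ vt + mu

def l2_normalize(x, eps=1e-12):
    # Row-wise l2 normalization.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

# Toy "vectorized five-layer pyramid feature" matrix: 5 layers x 16 dims.
rng = np.random.default_rng(0)
d = rng.normal(size=(5, 16))
refined = l2_normalize(pca_denoise(d, k=2))
print(refined.shape)
```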
In step S33, the pyramid refining features corresponding to each image are respectively fused layer by layer from top to bottom and from bottom to top by using the first feature fusion network to obtain two fusion features, and the vector inner product of the two fusion features is calculated to obtain the scene features corresponding to each image.
Specifically, step S33 includes the steps of:
the pyramid refined features corresponding to each image are fused layer by layer from top to bottom, in pyramid-layer order, to obtain a first fusion feature Z; the fusion formula is as follows:
wherein C1 is the channel number of the feature map, H1 is the height of the feature map, and W1 is the width of the feature map.
The pyramid refined features corresponding to each image are fused layer by layer from bottom to top, in the reverse pyramid-layer order, to obtain a second fusion feature V; the fusion formula is as follows:
next, the vector inner product is calculated on Z, V, so as to obtain the scene feature corresponding to each image.
In this embodiment, the calculation formula of the vector inner product is:
wherein φ(·) is a kernel function, and K(Z, V) is the scene feature corresponding to each image.
According to this method, the pyramid refined features undergo layer-by-layer feature fusion in two different directions, so the two resulting fusion features contain the information of all scale features to the greatest extent and can express the scene from different scale directions. The vector inner product of the two fusion features then combines all of the feature information, so the resulting scene features better represent the scene information.
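A toy sketch of the two-direction fusion and inner-product step follows. The running weighted sum used here is only a hypothetical stand-in for the embodiment's fusion formula, chosen so that the top-down and bottom-up passes produce different results, and a linear kernel φ(x) = x is assumed so that K(Z, V) reduces to a plain inner product.

```python
import numpy as np

def fuse(layers):
    # Hypothetical order-dependent fusion: a running weighted sum, so the
    # top-down and bottom-up passes yield different fusion features.
    out = layers[0].copy()
    for layer in layers[1:]:
        out = 0.5 * out + layer
    return out

# Toy refined features: 5 pyramid layers, each an 8-dim vector.
rng = np.random.default_rng(1)
refined = [rng.normal(size=(8,)) for _ in range(5)]

z = fuse(refined)                   # top-down order
v = fuse(list(reversed(refined)))   # bottom-up order
# With a linear kernel phi(x) = x, K(Z, V) is the plain inner product.
scene = float(np.dot(z, v))
print(scene)
```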
In step S4, the salient features of each image in the first candidate set and the salient features of each image in the second candidate set are extracted using the second network model, respectively.
In an embodiment of the present invention, the second network model may include, for example, a second feature extraction network, an attention network, and a second feature fusion network. The step S4 includes steps S41 to S43.
In step S41, the high-level features of each image in the first candidate set and the high-level features of each image in the second candidate set are extracted using the second feature extraction network, respectively.
Referring to fig. 5, in this embodiment, for example, a ConvNeXt-S network may be used as the second feature extraction network, where the ConvNeXt-S network includes 1 4*4 convolution layer and 4 residual blocks. Wherein the first residual block comprises 3 residual units, the second residual block comprises 3 residual units, the third residual block comprises 27 residual units, and the fourth residual block comprises 3 residual units. Each residual unit comprises a 7*7 depth separation convolution layer, an LN, a 1*1 convolution layer, a GELU and a 1*1 convolution layer according to the data flow direction, and shortcut is arranged between the residual units for connection.
When the ConvNeXt-S network is used to extract features from each image, the feature map output by the last residual block is l2-normalized to obtain the high-level features of the image.
In step S42, the attention network is used to perform coding processing of spatial attention and channel attention on the high-level features of each image, so as to obtain a spatial feature matrix and a channel feature matrix.
Referring to fig. 6, the attention network includes a spatial attention network and a channel attention network. The high-level feature E is input into the spatial attention network and normalized to I, which is then input into a 3×3 convolution layer to obtain three intermediate features J, K, L; the three intermediate features are converted into two-dimensional matrices. J is transposed and multiplied by K, and the resulting matrix is input into a softmax layer to obtain the spatial intermediate matrix S.
wherein s_ji represents the effect of the i-th position on the j-th position, H2 is the height of the feature map, and W2 is the width of the feature map.
Then, multiplying L and S, multiplying the result by a proportionality coefficient alpha, converting the result, and adding the result and I to obtain a space feature matrix P.
Where α is a scaling factor, α is initialized to 0, and the weight is gradually increased as learning proceeds.
Because the spatial feature matrix makes use of the information at every position, the relations between contexts are fully considered; contexts are selectively aggregated according to spatial attention, which improves intra-class compactness and semantic consistency.
Next, the high-level feature E is input into the channel attention network, where it is normalized to I and converted into a two-dimensional matrix. I is multiplied by its transpose, and the resulting matrix is input into a softmax layer to obtain the channel intermediate matrix T.
wherein t_ji represents the effect of the i-th channel on the j-th channel, H2 is the height of the feature map, and W2 is the width of the feature map.
Then, multiplying T and I, multiplying the result by a proportionality coefficient beta, converting the result, and adding the result and I to obtain a channel characteristic matrix Q.
Wherein C is 2 For the number of channels, β is a scaling factor, β is initialized to 0, and the weight is gradually increased as learning proceeds.
Because the channel feature matrix models long-range dependencies among the channels, the discriminability of the features is improved.
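The channel attention branch can be sketched as follows. This is an illustrative sketch under assumptions: the normalized feature I is laid out as a C × (H2·W2) matrix, the softmax is taken row-wise, and β = 0 reproduces the initialization described above, under which the output Q equals the input.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(i_mat, beta=0.0):
    # i_mat: C x (H*W) matrix of normalized high-level features.
    t = softmax(i_mat @ i_mat.T, axis=-1)   # C x C channel affinity matrix T
    q = beta * (t @ i_mat) + i_mat          # residual add; beta starts at 0
    return t, q

rng = np.random.default_rng(2)
i_mat = rng.normal(size=(4, 9))  # 4 channels, 3x3 spatial positions
t, q = channel_attention(i_mat, beta=0.0)
print(t.shape, np.allclose(q, i_mat))
```

Starting β at 0 lets the network fall back to the identity mapping early in training and gradually learn how much channel attention to mix in.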
In step S43, the second feature fusion network is used to add the spatial feature matrix and the channel feature matrix, keeping the number of channels unchanged, to obtain the salient features.
In this embodiment, the spatial feature matrix and the channel feature matrix are fused together by adding operation, and then a 1*1 convolution layer is input, and the kernel dimension of the convolution layer is consistent with the number of channels, so as to keep the number of channels unchanged, and obtain the salient feature. The salient features are closer to the salient region and can represent typical information in the image.
In step S5, the regional feature of each image in the first candidate set is obtained according to the scene feature and the salient feature of each image in the first candidate set, and the query feature of each image in the second candidate set is obtained according to the scene feature and the salient feature of each image in the second candidate set.
In this embodiment, feature stitching is sequentially performed on the scene features and the salient features of each image in the first candidate set, so as to obtain the regional features of the image. Based on the above manner, the regional characteristics of all the images in the first candidate set can be obtained. Similarly, feature stitching is performed on the scene features and the salient features of each image in the second candidate set in sequence to obtain query features of the image. Based on the above manner, the query features of all images in the second candidate set can be obtained.
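The feature stitching described above can be sketched in one line; "stitching" is read here as vector concatenation, which is an assumption consistent with the rest of the embodiment.

```python
import numpy as np

def stitched_feature(scene, salient):
    # Regional feature (first candidate set) or query feature (second
    # candidate set) = concatenation of scene and salient features.
    return np.concatenate([scene, salient])

scene = np.ones(4)
salient = np.zeros(3)
feat = stitched_feature(scene, salient)
print(feat.shape)
```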
In step S6, the prediction probability calculation is performed on the regional features of each image in the first candidate set and the query features of each image in the second candidate set, so as to obtain the prediction probability of each regional feature and the prediction probability of each query feature.
Specifically, the step S6 includes steps S61 to S62.
In step S61, fitting all the region features in the first candidate set to obtain first fitting features, calculating a first feature distance between each region feature in the first candidate set and the first fitting features, and normalizing each first feature distance to be a prediction probability of [0,1] to obtain a prediction probability of each region feature in the first candidate set.
In this embodiment, the calculation formula of the first fitting feature is:
wherein m is the number of regional features in the first candidate set, A_i is the i-th (i = 1, 2, …, m) regional feature, and x_1 is the first fitting feature.
In step S62, all query features in the second candidate set are fitted to obtain second fitting features, a second feature distance between each query feature in the second candidate set and the second fitting feature is calculated, and each second feature distance is normalized into a prediction probability in [0,1], giving the prediction probability of each query feature in the second candidate set.
In this embodiment, the calculation formula of the second fitting feature is:
wherein n is the number of query features in the second candidate set, B_j is the j-th (j = 1, 2, …, n) query feature, and x_2 is the second fitting feature.
In this embodiment, the method for calculating the feature distance may include, but is not limited to, euclidean distance, cosine distance, manhattan distance, and the like.
In the embodiment of the invention, the corresponding fitting characteristic can be obtained by fitting the regional characteristic and the query characteristic. Because the fitting features are high-quality features, the prediction probability of each region feature (or query feature) is determined based on the feature distance between each region feature and the corresponding fitting feature, and the method is beneficial to screening features with higher recommendation value from each candidate set, so that the accuracy of a recommendation feature set is improved, and the quality and the accuracy of a final recommendation result are improved.
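Steps S61/S62 can be sketched as follows. The sketch makes three assumptions not fixed by the embodiment: the fitting feature is the mean of all features (the least-squares fit), the feature distance is Euclidean, and min-max scaling maps distances into [0,1] with a smaller distance giving a higher prediction probability.

```python
import numpy as np

def prediction_probabilities(features):
    # Fitting feature: the mean of all features (least-squares fit).
    fit = features.mean(axis=0)
    # Euclidean distance from each feature to the fitting feature.
    d = np.linalg.norm(features - fit, axis=1)
    # Min-max scale distances into [0, 1]; closer -> higher probability.
    d_scaled = (d - d.min()) / (d.max() - d.min() + 1e-12)
    return 1.0 - d_scaled

rng = np.random.default_rng(3)
region_feats = rng.normal(size=(6, 8))  # 6 regional features, 8 dims
probs = prediction_probabilities(region_feats)
print(probs.round(3))
```

The same routine would be applied independently to the query features of the second candidate set.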
In step S7, M regional features are determined as a first recommended feature set according to the ranking result of the prediction probabilities of the regional features, and N query features are determined as a second recommended feature set according to the ranking result of the prediction probabilities of the query features, where N is a positive integer, and M is an integer greater than 0 and less than N.
In the embodiment of the invention, according to the sequencing result of the prediction probabilities of the regional features in the first candidate set, the regional features corresponding to the first M prediction probabilities can be selected as the first recommended feature set. Similarly, according to the sequencing result of the prediction probabilities of the query features in the second candidate set, selecting the query features corresponding to the first N prediction probabilities as the second recommended feature set.
In this embodiment, in general, the number of images in the second candidate set is greater than the number of images in the first candidate set, and in order to ensure the quality of the recommended result, the number of features in the second recommended feature set obtained after feature screening according to the prediction probability is correspondingly greater than the number of features in the first recommended feature set.
In the step S7, when feature screening is performed based on the prediction probability, a corresponding recommended feature set may be obtained by screening according to a preset ratio. For example, in one example, to ensure that the first recommended feature set fits the scene demand as much as possible, M may be set to, for example, the first 20%, and N may be set to, for example, the first 60%. That is, the region feature corresponding to the top 20% of the prediction probabilities is used as the first recommended feature set, and the query feature corresponding to the top 60% of the prediction probabilities is used as the second recommended feature set.
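The preset-ratio selection in step S7 amounts to a top-fraction cut over the sorted prediction probabilities; a sketch with hypothetical toy features follows.

```python
import numpy as np

def top_fraction(features, probs, frac):
    # Keep the features whose prediction probability is in the top `frac`.
    k = max(1, int(round(frac * len(features))))
    order = np.argsort(probs)[::-1]   # indices by descending probability
    return [features[i] for i in order[:k]]

feats = ["a", "b", "c", "d", "e"]
probs = np.array([0.9, 0.1, 0.7, 0.3, 0.5])
first_set = top_fraction(feats, probs, 0.20)   # M = top 20%
second_set = top_fraction(feats, probs, 0.60)  # N = top 60%
print(first_set, second_set)
```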
For convenience of description, the region features included in the first recommended feature set are hereinafter referred to as first recommended features, and the query features included in the second recommended feature set are hereinafter referred to as second recommended features. That is, the first recommended feature set includes M first recommended features, and the second recommended feature set includes N second recommended features.
In step S8, the first recommendation feature set and the second recommendation feature set are filtered according to a filtering policy, so as to obtain a recommendation result.
In the embodiment of the invention, the step S8 includes steps S81 to S84.
In step S81, for each first recommended feature in the first recommended feature set, a similarity between the first recommended feature and each second recommended feature in the second recommended feature set is calculated, and the second recommended features with the similarity ranked in the first q are used as a set of candidate features, where q is a positive integer.
In step S82, step S81 is repeated, resulting in M sets of candidate features.
In step S83, the M sets of candidate features are subjected to deduplication processing, and combined with the first recommended feature set, to obtain a final recommended feature set.
In step S84, a corresponding recommended image is obtained according to the final recommended feature set, and is used as a recommendation result.
In this embodiment, the method for calculating the similarity may be cosine similarity, euclidean distance, or the like.
In this embodiment, by taking the region of interest as the core of the screening policy and the first recommended feature set as the reference target, the second recommended feature set is finely screened to obtain a plurality of candidate features with better quality, and a final recommended result is obtained based on the plurality of candidate features. Therefore, the accuracy and the precision of the recommended image can be improved, and the recommended result can be more fit with the requirements of users.
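Steps S81 to S83 can be sketched as below, using cosine similarity (one of the options named in this embodiment): each first recommended feature selects its q most similar second recommended features, the selections are deduplicated, and the result is merged with the first recommended feature set.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def screen(first_set, second_set, q):
    # S81-S83: for each first recommended feature, take the indices of its
    # q most similar second recommended features; deduplicate via a set and
    # merge the surviving candidates with the first recommended feature set.
    selected = set()
    for f in first_set:
        order = sorted(range(len(second_set)),
                       key=lambda j: cosine(f, second_set[j]), reverse=True)
        selected.update(order[:q])
    return list(first_set) + [second_set[j] for j in sorted(selected)]

first = [np.array([1.0, 0.0])]
second = [np.array([0.9, 0.1]), np.array([0.0, 1.0]), np.array([0.8, 0.2])]
final = screen(first, second, q=2)
print(len(final))
```

The orthogonal candidate [0.0, 1.0] is screened out, illustrating how the region of interest anchors the fine screening.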
In the technical scheme of the invention, the set of candidate images for recommendation is expanded by combining the user-selected region of interest with the query information, increasing the number of images in the candidate set and thereby improving the quality of subsequent image recommendation. Primary screening is then performed according to the prediction probabilities of the regional features and query features, followed by fine screening based on the screening strategy, so that the recommendation result better matches user expectations, with a moderate number of recommended images of better quality.
The above embodiments are only for illustrating the present invention, not for limiting the present invention, and various changes and modifications may be made by one of ordinary skill in the relevant art without departing from the spirit and scope of the present invention, and therefore, all equivalent technical solutions are also within the scope of the present invention, and the scope of the present invention is defined by the claims.

Claims (8)

1. The remote sensing image recommending method based on the user selection is characterized by comprising the following steps of:
s1, acquiring user information, wherein the user information comprises query information and an interested region;
s2, acquiring a first candidate set according to the region of interest, and acquiring a second candidate set according to keywords in query information;
s3, respectively extracting scene characteristics of each image in the first candidate set and the second candidate set by using the first network model;
s4, respectively extracting the salient features of each image in the first candidate set and the second candidate set by using a second network model;
s5, obtaining the regional characteristics of each image in the first candidate set according to the scene characteristics and the salient characteristics of each image in the first candidate set, and obtaining the query characteristics of each image in the second candidate set according to the scene characteristics and the salient characteristics of each image in the second candidate set;
s6, respectively carrying out prediction probability calculation on the regional characteristics of each image in the first candidate set and the query characteristics of each image in the second candidate set to obtain the prediction probability of each regional characteristic and the prediction probability of each query characteristic;
s7, according to the sequencing result of the prediction probability of each regional feature, M regional features are determined to be used as a first recommended feature set, N query features are determined to be used as a second recommended feature set according to the sequencing result of the prediction probability of each query feature, N is a positive integer, and M is an integer which is more than 0 and less than N;
S8, screening the first recommendation feature set and the second recommendation feature set according to a screening strategy to obtain a recommendation result;
the first network model includes a first feature extraction network, a feature screening network and a first feature fusion network, and step S3 includes:
s31, respectively extracting pyramid features of each image in the first candidate set and pyramid features of each image in the second candidate set by using a first feature extraction network;
s32, aiming at pyramid features of each image, removing redundant information in each layer of pyramid features by utilizing a feature screening function in a feature screening network to obtain pyramid refining features;
s33, carrying out layer-by-layer feature fusion on pyramid refining features corresponding to each image from top to bottom and from bottom to top by using a first feature fusion network to obtain two fusion features, and carrying out vector inner product calculation on the two fusion features to obtain scene features corresponding to each image;
the second network model includes a second feature extraction network, an attention network, and a second feature fusion network, and step S4 includes:
s41, respectively extracting the high-level features of each image in the first candidate set and the high-level features of each image in the second candidate set by using a second feature extraction network;
S42, carrying out coding processing of spatial attention and channel attention on high-level features of each image by using an attention network to obtain a spatial feature matrix and a channel feature matrix;
S43, adding the spatial feature matrix and the channel feature matrix using the second feature fusion network, keeping the number of channels unchanged, to obtain the salient features.
2. The method of claim 1, wherein the first set of recommended features comprises M first recommended features and the second set of recommended features comprises N second recommended features; step S8 includes:
s81, calculating the similarity between each first recommended feature in the first recommended feature set and each second recommended feature in the second recommended feature set according to each first recommended feature in the first recommended feature set, wherein the second recommended features with the similarity being ranked in the first q are used as a group of candidate features, and q is a positive integer;
s82, repeating the step S81 to obtain M groups of candidate features;
s83, performing de-duplication treatment on the M groups of candidate features, and combining the M groups of candidate features with the first recommended feature set to obtain a final recommended feature set;
s84, obtaining a corresponding recommended image according to the final recommended feature set, and taking the corresponding recommended image as a recommended result.
3. The method according to claim 1, characterized in that in step S31, the first feature extraction network comprises 7*7 convolutional layers, a max pooling layer, 4 residual blocks, wherein each residual block comprises 2 residual units, each residual unit comprises 2 3*3 convolutional layers, and a shortcut is set between the residual units for connection;
inputting the first candidate set or the second candidate set into the first feature extraction network, performing feature extraction on the image with the 7*7 convolution layer to obtain a first feature map, recorded as X1; pooling the first feature map with the max pooling layer and passing the pooled feature map sequentially through the 4 residual blocks to obtain 4 corresponding feature maps, recorded as a second feature map X2, a third feature map X3, a fourth feature map X4, and a fifth feature map X5, wherein the feature maps all differ in scale and the scales increase sequentially along the data flow direction;
taking the fifth feature map X5 as the first-layer pyramid feature;
upsampling the fifth feature map X5 and stitching it with the fourth feature map X4 to obtain the second-layer pyramid feature;
upsampling the second-layer pyramid feature and stitching it with the third feature map X3 to obtain the third-layer pyramid feature;
upsampling the third-layer pyramid feature and stitching it with the second feature map X2 to obtain the fourth-layer pyramid feature;
upsampling the fourth-layer pyramid feature and stitching it with the first feature map X1 to obtain the fifth-layer pyramid feature;
and arranging the first layer pyramid features, the second layer pyramid features, the third layer pyramid features, the fourth layer pyramid features and the fifth layer pyramid features in sequence to form a pyramid, and thus obtaining pyramid features.
4. A method according to claim 3, wherein step S32 comprises:
convolving X1, X2, X3, X4, X5 with a 1×1 convolution layer to unify the channel number to C1, obtaining feature maps with channel number C1;
removing redundant information according to the feature screening function and performing l2 normalization to obtain the pyramid refined features;
The feature screening function is as follows:
wherein i = 1, 2, 3, 4, 5, relu is the activation function, fc is the fully-connected layer, GeM is the average pooling layer, F is the principal component analysis computation, and D is the matrix obtained by vectorizing and normalizing the five-layer pyramid features.
5. The method according to claim 4, wherein step S33 includes:
fusing the pyramid refined features layer by layer from top to bottom, in pyramid-layer order, to obtain a first fusion feature Z, the fusion formula being as follows:
wherein C1 is the channel number of the feature map, H1 is the height of the feature map, and W1 is the width of the feature map;
fusing the pyramid refined features layer by layer from bottom to top, in the reverse pyramid-layer order, to obtain a second fusion feature V, the fusion formula being as follows:
and (5) carrying out vector inner product calculation on Z, V to obtain scene characteristics.
6. The method according to claim 1, wherein in step S42, the attention network comprises a spatial attention network and a channel attention network;
the method comprises the steps of inputting a high-level characteristic E into a spatial attention network, firstly, carrying out standardized conversion on the E into I, inputting the I into a 3X 3 convolution layer to obtain three intermediate characteristics J, K, L, converting the three intermediate characteristics J, K, L into a two-dimensional matrix, carrying out transposition on J, carrying out multiplication operation on the J and K, and inputting the obtained matrix into a softmax layer to obtain a spatial intermediate matrix S:
wherein s_ji represents the effect of the i-th position on the j-th position, H2 is the height of the feature map, and W2 is the width of the feature map;
multiplying L and S, and adding with I to obtain a space feature matrix P:
wherein, alpha is a proportionality coefficient, alpha is initialized to 0, and weight is gradually increased along with learning;
inputting a high-level characteristic E into a channel attention network, firstly, carrying out standardized conversion on E into I, carrying out multiplication operation on transposed matrixes of I and I, inputting the obtained matrix into a softmax layer, and obtaining a channel intermediate matrix T:
wherein t_ji represents the effect of the i-th channel on the j-th channel, H2 is the height of the feature map, and W2 is the width of the feature map;
multiplying T and I, and adding with I to obtain a channel characteristic matrix Q:
wherein C is 2 For the number of channels, β is a scaling factor, β is initialized to 0, and the weight is gradually increased as learning proceeds.
7. The method according to claim 1, wherein step S6 comprises:
s61, fitting all the regional features in the first candidate set to obtain first fitting features, calculating a first feature distance between each regional feature in the first candidate set and the first fitting features, and normalizing each first feature distance to be the prediction probability of [0,1] to obtain the prediction probability of each regional feature in the first candidate set;
and S62, fitting all query features in the second candidate set to obtain second fitting features, calculating a second feature distance between each query feature in the second candidate set and the second fitting feature, and normalizing each second feature distance into a prediction probability in [0,1] to obtain the prediction probability of each query feature in the second candidate set.
8. The method of claim 7, wherein the first fitting feature is calculated by the formula:
wherein m is the number of regional features, A_i is the i-th (i = 1, 2, …, m) regional feature, and x_1 is the first fitting feature;
the calculation formula of the second fitting feature is:
wherein n is the number of query features, B_j is the j-th (j = 1, 2, …, n) query feature, and x_2 is the second fitting feature.
CN202311014229.6A 2023-08-14 2023-08-14 Remote sensing image recommendation method based on user selection Active CN116740578B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311014229.6A CN116740578B (en) 2023-08-14 2023-08-14 Remote sensing image recommendation method based on user selection

Publications (2)

Publication Number Publication Date
CN116740578A CN116740578A (en) 2023-09-12
CN116740578B true CN116740578B (en) 2023-10-27

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580299A (en) * 2018-06-08 2019-12-17 北京京东尚科信息技术有限公司 Method, system, device and storage medium for generating matching of recommendation language of object
CN111126482A (en) * 2019-12-23 2020-05-08 自然资源部国土卫星遥感应用中心 Remote sensing image automatic classification method based on multi-classifier cascade model
CN112182131A (en) * 2020-09-28 2021-01-05 中国电子科技集团公司第五十四研究所 Remote sensing image recommendation method based on multi-attribute fusion
WO2021184891A1 (en) * 2020-03-20 2021-09-23 中国科学院深圳先进技术研究院 Remotely-sensed image-based terrain classification method, and system
CN114896437A (en) * 2022-07-14 2022-08-12 北京数慧时空信息技术有限公司 Remote sensing image recommendation method based on available domain
CN115017418A (en) * 2022-08-10 2022-09-06 北京数慧时空信息技术有限公司 Remote sensing image recommendation system and method based on reinforcement learning
CN115248876A (en) * 2022-08-18 2022-10-28 北京数慧时空信息技术有限公司 Remote sensing image overall planning recommendation method based on content understanding
CN115269899A (en) * 2022-08-26 2022-11-01 北京数慧时空信息技术有限公司 Remote sensing image overall planning system based on remote sensing knowledge map
CN115374303A (en) * 2022-10-26 2022-11-22 北京数慧时空信息技术有限公司 Satellite image recommendation method based on user demand understanding
CN115471739A (en) * 2022-08-03 2022-12-13 中南大学 Cross-domain remote sensing scene classification and retrieval method based on self-supervision contrast learning
CN115934990A (en) * 2022-10-24 2023-04-07 北京数慧时空信息技术有限公司 Remote sensing image recommendation method based on content understanding
CN116433940A (en) * 2023-04-21 2023-07-14 北京数慧时空信息技术有限公司 Remote sensing image change detection method based on twin mirror network

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101916798B1 (en) * 2016-10-21 2018-11-09 네이버 주식회사 Method and system for providing recommendation query using search context
CN113536097B (en) * 2020-04-14 2024-03-29 华为技术有限公司 Recommendation method and device based on automatic feature grouping
CN113688304A (en) * 2020-05-19 2021-11-23 华为技术有限公司 Training method for search recommendation model, and method and device for ranking search results


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A New Method for Defining Local Salient Features of Interest and Its Application in Remote Sensing Image Retrieval; Zhu Xianqiang et al.; Geomatics and Information Science of Wuhan University; 26-29 *
Object Detection in High-Resolution Remote Sensing Images Based on Fully Convolutional Networks; Xu Yizhi et al.; Bulletin of Surveying and Mapping; 80-85 *

Also Published As

Publication number Publication date
CN116740578A (en) 2023-09-12

Similar Documents

Publication Publication Date Title
CN104318340B (en) Information visualization methods and intelligent visible analysis system based on text resume information
CN111694965B (en) Image scene retrieval system and method based on multi-mode knowledge graph
CN110647632B (en) Image and text mapping technology based on machine learning
CN110188979A (en) Water industry Emergency decision generation method and device
CN115934990B (en) Remote sensing image recommendation method based on content understanding
WO2004013775A2 (en) Data search system and method using mutual subsethood measures
Vijayarani et al. Multimedia mining research-an overview
CN117290489A (en) Method and system for quickly constructing industry question-answer knowledge base
CN113190593A (en) Search recommendation method based on digital human knowledge graph
Ma et al. FENet: Feature enhancement network for land cover classification
CN114663164A (en) E-commerce site popularization and configuration method and device, equipment, medium and product thereof
Blier-Wong et al. Rethinking representations in P&C actuarial science with deep neural networks
CN117436724A (en) Multi-source data visual analysis method and system based on smart city
CN112632406B (en) Query method, query device, electronic equipment and storage medium
CN116740578B (en) Remote sensing image recommendation method based on user selection
CN110705279A (en) Vocabulary selection method and device and computer readable storage medium
CN117370650A (en) Cloud computing data recommendation method based on service combination hypergraph convolutional network
Datcu et al. The digital Earth Observation Librarian: a data mining approach for large satellite images archives
US20120131026A1 (en) Visual information retrieval system
Fisher et al. Artificial intelligence and expert systems in geodata processing
CN112860838B (en) Multi-scale map generation method, system and terminal based on generation type countermeasure network
CN115203234A (en) Remote sensing data query system
CN112016004B (en) Multi-granularity information fusion-based job crime screening system and method
CN105808715B (en) Method for establishing map per location
Yao Clustering in ratemaking: Applications in territories clustering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant