CN115878823B - Deep hash method and traffic data retrieval method based on graph convolution network - Google Patents
- Publication number
CN115878823B (application CN202310195620.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- graph
- hash
- module
- method based
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a deep hash method based on a graph convolutional network, which comprises the steps of: obtaining training images; performing data enhancement on the image data; constructing a vision transformer module; inputting the output data of the vision transformer module into a graph convolutional network for correlation optimization; mapping the output of the graph convolutional network through a fully connected layer and an activation function to obtain hash codes; constructing a comprehensive loss function to optimize the hashing process; and completing the actual deep hashing process according to the final optimization result. The invention also discloses a traffic data retrieval method comprising the deep hash method based on the graph convolutional network. The invention keeps the correlation relations of the low-dimensional Hamming space consistent with those of the high-dimensional space of the original images, generates more efficient and compact binary hash codes, and improves the effectiveness of large-scale image retrieval; it has high reliability, good effectiveness, and is simple and convenient.
Description
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a deep hash method and a traffic data retrieval method based on a graph convolutional network.
Background
With the development of economic technology and the improvement of people's living standards, compression mapping technologies have been widely applied in production and daily life, bringing great convenience. Deep hashing is one such compression mapping technology: its core idea is to map high-dimensional image information into low-dimensional binary codes by learning a hash function, while maintaining the semantic information and similarity relations of the images. Deep hashing is widely used in large-scale image retrieval tasks in fields such as intelligent transportation and educational big data.
Currently, mainstream deep hash methods extract image features with a vision transformer (ViT, Vision Transformer), map the features to a low-dimensional Hamming space through a fully connected layer, and construct a loss function to optimize the hash model, so that the generated hash codes maintain the semantic information and similarity relations of the original images.
However, current deep hash methods only enhance the correlation of images through correlation-related loss functions, so that the generated hash codes maintain the similarity relations. The effect of this approach depends heavily on the effectiveness of the loss function, and designing a highly effective loss function is very difficult; if the loss function is not designed carefully, the hash method performs poorly.
In addition, traffic data retrieval based on existing hash methods suffers from poor reliability, low efficiency, and extremely complex retrieval algorithms.
Disclosure of Invention
The invention aims to provide a deep hash method based on a graph convolutional network, which has high reliability, good effectiveness, and is simple and convenient.
Another object of the present invention is to provide a traffic data retrieval method comprising the deep hash method based on the graph convolutional network.
The deep hash method based on the graph convolutional network provided by the invention comprises the following steps:
s1, acquiring a training image;
s2, randomly cropping the images obtained in step S1 to complete data enhancement of the image data;
s3, constructing a vision transformer module based on block embedding, position embedding and an encoder;
s4, inputting the output data of the vision transformer module constructed in step S3 into a graph convolutional network for correlation optimization;
s5, mapping the output of the graph convolutional network obtained in step S4 through a fully connected layer and an activation function to obtain hash codes;
s6, constructing a comprehensive loss function based on similarity loss and semantic loss, and optimizing the hashing process of steps S3-S5;
s7, completing the actual deep hashing process according to the final optimization result.
The step S2 specifically comprises the following steps:
unifying the images acquired in step S1 into 256×256 square images;
and randomly cropping the unified images with a 224×224 cropping frame, thereby completing the data enhancement of the image data.
The step S3 specifically comprises the following steps:
the vision transformer comprises a block embedding module, a position embedding module and an encoder module connected in series;
the block embedding module is used for dividing an input image into a plurality of blocks and adding a class token to be learned, obtaining image blocks and an embedding vector, which are input into the position embedding module together;
the position embedding module is used for adding sequence information to the input image blocks, thereby generating a vector for classification;
the encoder module is used for extracting image features from the vector output by the position embedding module.
The block embedding module cuts the input image into p blocks, p = HW/P², where H is the length of the input image, W is the width of the input image, and P is the side length of each segmented block; then a class token x_cls to be learned is added, obtaining the embedding vector X_emd = [x_cls; x_i^1; x_i^2; …; x_i^p], where x_i^p is the p-th block of the i-th image.
The position embedding module adds sequence information PE to the input image blocks, thereby generating the vector for classification z_0 = X_emd + PE.
The encoder module comprises m blocks; each block comprises a first layer-normalization sub-module, a multi-head self-attention sub-module, a second layer-normalization sub-module and a multi-layer perceptron sub-module; the calculation process of each block is represented by the following formulas: z'_m = MSA(LN_1(z_{m-1})) + z_{m-1}, z_m = MLP(LN_2(z'_m)) + z'_m, where z_m is the output feature of the m-th block; MLP() is the processing function of the multi-layer perceptron sub-module; LN_2() is the processing function of the second layer-normalization sub-module; z'_m is an intermediate variable; MSA() is the processing function of the multi-head self-attention sub-module; LN_1() is the processing function of the first layer-normalization sub-module.
The step S4 specifically comprises the following steps:
based on the class token x_cls, the cosine similarity between image pairs is calculated to obtain the similarity relation matrix V, with V_ij = (x_cls^i · x_cls^j) / (‖x_cls^i‖‖x_cls^j‖), where x_cls^i is the class token of the i-th image and ‖x_cls^i‖ is the modulus of the vector x_cls^i;
the image data features output by the vision transformer module are taken as nodes, and the similarity relation matrix V as the edge relations, and these are input into the graph convolutional network for correlation optimization; the following formula is used as the propagation rule of the graph convolutional network: H^(l+1) = σ(D̃^(-1/2) Ã D̃^(-1/2) H^(l) W^(l)), where H^(l+1) is the output feature matrix of the l-th layer; H^(l) is the input feature matrix of the l-th layer; σ() is the activation function; D̃ is the degree matrix of Ã, with D̃_ii = Σ_j Ã_ij, where D̃_ii is the element in row i, column i of D̃ and Ã_ij is the element in row i, column j of the matrix Ã; Ã is the adjacency matrix of the undirected graph composed of the images with added self-connections, Ã = A + I_n, where I_n is the identity matrix; W^(l) is the weight parameter of the l-th layer.
Step S5 maps the output of the graph convolutional network obtained in step S4 through a fully connected layer and a sign activation function to obtain the K-bit hash codes H = sign(FC(Z)), where each hash bit takes a value in {-1, +1}.
Step S6 constructs the comprehensive loss function based on similarity loss and semantic loss, and specifically comprises the following steps:
the following equation is used as the similarity loss function L_sim: L_sim = -Σ_{i,j} w_ij [s_ij log q_ij + (1 - s_ij) log(1 - q_ij)], where w_ij is the weight of the training pair x_i and x_j, with w_ij = S/S_1 when s_ij = 1 and w_ij = S/S_0 otherwise; S_1 is the number of similar pairs in the dataset, S_0 is the number of dissimilar pairs in the dataset, S is the total number of pairs in the dataset, and s_ij is the similarity label of the i-th and j-th images; h_i is the K-bit hash code obtained by mapping x_i; q_ij is an intermediate variable calculated from the cosine similarity between the binary hash codes h_i and h_j, q_ij = (1 + cos(h_i, h_j))/2;
the following expression is used as the semantic loss function L_sem: L_sem = -Σ_i w_i Σ_j y_ij log(l_ij), where w_i is a weight parameter computed from c_t and c_tp; c_t is the number of pictures of the category to which the i-th picture belongs, and c_tp is the number of correctly classified pictures of that category; y_ij is the value of the j-th bit of the true category label of the i-th picture; l_ij is the value of the j-th bit of the predicted category label of the i-th picture;
combining the similarity loss and the semantic loss, the comprehensive loss function is constructed as L_total = L_sim + η L_sem + μ‖Θ‖_2, where η and μ are set hyperparameters and ‖Θ‖_2 is the L2 norm of the model parameters, used to prevent over-fitting during model training.
The invention also provides a traffic data retrieval method comprising the deep hash method based on the graph convolutional network, which comprises the following steps:
A. acquiring the traffic raw data to be retrieved and the traffic raw data in the database;
B. adopting the deep hash method based on the graph convolutional network to generate the hash codes of the picture to be retrieved and the hash codes of the pictures in the database respectively, comparing and sorting the two sets of hash codes according to Hamming distance, and returning the retrieval result;
C. taking the processing result obtained in step B as the final traffic data retrieval result.
According to the deep hash method and the traffic data retrieval method based on the graph convolutional network, introducing the graph convolutional network strengthens the preservation of similarity relations between images: the correlation relations between images in the original image space are used to promote feature flow between image features, so that similar images are drawn closer together. This keeps the correlation relations of the low-dimensional Hamming space consistent with those of the high-dimensional space of the original images, generates more efficient and compact binary hash codes, and improves the effectiveness of large-scale image retrieval. The invention has high reliability, good effectiveness, and is simple and convenient.
Drawings
Fig. 1 is a flow chart of the hash method of the present invention.
Fig. 2 is a flow chart of the traffic data retrieval method of the present invention.
Detailed Description
Fig. 1 is a schematic flow chart of the hash method of the present invention. The deep hash method based on the graph convolutional network provided by the invention comprises the following steps:
s1, acquiring a training image;
in specific implementation, assume there is a training set X = {x_1, x_2, …, x_n} comprising n training images and a corresponding label set Y = {y_1, y_2, …, y_n}, where each label y_i is a c-dimensional vector and c is the number of picture categories in the dataset;
for any two pictures in the dataset, a similarity label s_ij can be generated: if x_i and x_j are similar, then s_ij = 1; otherwise s_ij = 0. Deep hashing learns a nonlinear hash function f: x → h ∈ {-1, 1}^K that maps an image from the high-dimensional space to the low-dimensional space, i.e. x_i is mapped to a K-bit hash code h_i, while maintaining the similarity information of the pictures according to the similarity matrix S; that is, if s_ij = 1, the Hamming distance between h_i and h_j should be smaller, and vice versa;
s2, randomly cropping the images obtained in step S1 to complete data enhancement of the image data; the method specifically comprises the following steps:
unifying the images acquired in step S1 into 256×256 square images;
randomly cropping the unified images with a 224×224 cropping frame, thereby completing the data enhancement of the image data;
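The cropping of step S2 can be sketched as follows. This is an illustrative NumPy sketch, not code from the patent; the function name and the use of a random generator are the editor's assumptions.

```python
# Illustrative sketch of step S2: resize is assumed already done (256x256),
# then a random 224x224 window is cropped for data augmentation.
import numpy as np

def random_crop(image: np.ndarray, size: int = 224, rng=None) -> np.ndarray:
    """Randomly crop a square `size` x `size` window from an H x W x C image."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return image[top:top + size, left:left + size]

# Example: a 256x256 RGB image cropped to 224x224.
img = np.zeros((256, 256, 3), dtype=np.uint8)
crop = random_crop(img, 224)
```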
s3, constructing a vision transformer module based on block embedding, position embedding and an encoder; the method specifically comprises the following steps:
the vision transformer comprises a block embedding module, a position embedding module and an encoder module connected in series;
the block embedding module is used for dividing the input image into a plurality of blocks and adding a class token to be learned, obtaining image blocks and an embedding vector, which are input into the position embedding module together; in specific implementation, the block embedding module divides the input image into p blocks, p = HW/P², where H is the length of the input image, W is the width of the input image, and P is the side length of each segmented block; then a class token x_cls to be learned is added, obtaining the embedding vector X_emd = [x_cls; x_i^1; x_i^2; …; x_i^p], where x_i^p is the p-th block of the i-th image;
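The block embedding above can be sketched as follows, under the common ViT convention that an H×W image is cut into p = HW/P² non-overlapping P×P patches which are flattened, with a learnable class token prepended. The zero-initialized token is an illustrative stand-in for a learned parameter.

```python
# Sketch of the block (patch) embedding of step S3: cut an image into
# p = (H/P) * (W/P) flattened P x P patches and prepend a class token.
import numpy as np

def patch_embed(image: np.ndarray, P: int, x_cls: np.ndarray) -> np.ndarray:
    H, W, C = image.shape
    p = (H // P) * (W // P)                       # number of patches
    patches = (image.reshape(H // P, P, W // P, P, C)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(p, P * P * C))       # each row: one flattened patch
    return np.vstack([x_cls, patches])            # prepend the class token

img = np.ones((224, 224, 3))
x_cls = np.zeros((1, 16 * 16 * 3))                # learnable token (random in practice)
emb = patch_embed(img, 16, x_cls)                 # 1 + 196 rows of dimension 768
```

With P = 16 and a 224×224 input, this yields the familiar 196 patches plus one class token.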
the position embedding module is used for adding sequence information to the input image blocks, thereby generating a vector for classification; in specific implementation, the position embedding module adds sequence information PE to the input image blocks, thereby generating the vector for classification z_0 = X_emd + PE;
the encoder module is used for extracting image features from the vector output by the position embedding module; in specific implementation, the encoder module comprises m blocks; each block comprises a first layer-normalization sub-module, a multi-head self-attention sub-module, a second layer-normalization sub-module and a multi-layer perceptron sub-module; the calculation process of each block is represented by the following formulas: z'_m = MSA(LN_1(z_{m-1})) + z_{m-1}, z_m = MLP(LN_2(z'_m)) + z'_m, where z_m is the output feature of the m-th block; MLP() is the processing function of the multi-layer perceptron sub-module; LN_2() is the processing function of the second layer-normalization sub-module; z'_m is an intermediate variable; MSA() is the processing function of the multi-head self-attention sub-module; LN_1() is the processing function of the first layer-normalization sub-module;
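One encoder block, z'_m = MSA(LN_1(z_{m-1})) + z_{m-1} followed by z_m = MLP(LN_2(z'_m)) + z'_m, can be sketched as follows. For brevity this uses single-head attention rather than multi-head, and the weight shapes and random initialization are the editor's illustrative assumptions.

```python
# Minimal single-head sketch of one encoder block from step S3:
# pre-norm self-attention and MLP, each with a residual connection.
import numpy as np

def layer_norm(z):
    return (z - z.mean(-1, keepdims=True)) / (z.std(-1, keepdims=True) + 1e-6)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def encoder_block(z, Wq, Wk, Wv, W1, W2):
    x = layer_norm(z)                                      # LN_1
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v     # self-attention
    z1 = attn + z                                          # residual: z'_m
    x = layer_norm(z1)                                     # LN_2
    mlp = np.maximum(x @ W1, 0) @ W2                       # two-layer MLP, ReLU
    return mlp + z1                                        # residual: z_m

d = 8
rng = np.random.default_rng(0)
W = [rng.normal(scale=0.1, size=s) for s in [(d, d)] * 3 + [(d, 4 * d), (4 * d, d)]]
z = rng.normal(size=(5, d))                                # 5 tokens of dimension 8
out = encoder_block(z, *W)
```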
s4, inputting the output data of the vision transformer module constructed in step S3 into the graph convolutional network for correlation optimization; the method specifically comprises the following steps:
based on the class token x_cls, the cosine similarity between image pairs is calculated to obtain the similarity relation matrix V, with V_ij = (x_cls^i · x_cls^j) / (‖x_cls^i‖‖x_cls^j‖), where x_cls^i is the class token of the i-th image and ‖x_cls^i‖ is the modulus of the vector x_cls^i;
the image data features output by the vision transformer module are taken as nodes, and the similarity relation matrix V as the edge relations, and these are input into the graph convolutional network for correlation optimization; the following formula is used as the propagation rule of the graph convolutional network:
H^(l+1) = σ(D̃^(-1/2) Ã D̃^(-1/2) H^(l) W^(l)), where H^(l+1) is the output feature matrix of the l-th layer; H^(l) is the input feature matrix of the l-th layer; σ() is the activation function; D̃ is the degree matrix of Ã, with D̃_ii = Σ_j Ã_ij, where D̃_ii is the element in row i, column i of D̃ and Ã_ij is the element in row i, column j of the matrix Ã; Ã is the adjacency matrix of the undirected graph composed of the images with added self-connections, Ã = A + I_n, where I_n is the identity matrix; W^(l) is the weight parameter of the l-th layer.
s5, mapping the output of the graph convolutional network obtained in step S4 through a fully connected layer and an activation function to obtain hash codes; specifically, the output of the graph convolutional network obtained in step S4 is mapped through a fully connected layer and a sign activation function to obtain the K-bit hash codes H = sign(FC(Z)), where each hash bit takes a value in {-1, +1};
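Steps S4-S5 can be sketched as one GCN propagation step H^(l+1) = σ(D̃^(-1/2)(A + I)D̃^(-1/2) H^(l) W^(l)) followed by a fully connected layer and a sign function. The random weights and the tiny 3-node graph are illustrative stand-ins for learned parameters and the real similarity matrix V.

```python
# Sketch of steps S4-S5: symmetric-normalized graph convolution with ReLU,
# then a fully connected projection and sign() to produce K-bit codes.
import numpy as np

def gcn_layer(A, H, W):
    """One propagation step: ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_t = A + np.eye(A.shape[0])                  # add self-connections (A + I_n)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_t.sum(1)))
    return np.maximum(D_inv_sqrt @ A_t @ D_inv_sqrt @ H @ W, 0)

rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # similarity edges
H0 = rng.normal(size=(3, 8))                      # node features from the ViT module
Z = gcn_layer(A, H0, rng.normal(size=(8, 8)))
codes = np.where(Z @ rng.normal(size=(8, 4)) >= 0, 1.0, -1.0)  # K = 4 bit codes
```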
s6, constructing the comprehensive loss function based on similarity loss and semantic loss, and optimizing the hashing process of steps S3-S5; the method specifically comprises the following steps:
assume P represents the joint probability distribution of the images in the original high-dimensional space, and Q represents the joint probability distribution of the hash codes in the low-dimensional Hamming space; in order for the learned hash codes to retain the similarity information of the original pictures, the distribution Q should be made as similar as possible to the distribution P; here JS divergence is adopted as the measurement, where the P distribution is fixed by using s_ij in place of p_ij; this means that when p_ij = 1, q_ij should approach 1 as closely as possible, and otherwise q_ij should approach 0; q_ij is calculated from the cosine similarity between the binary hash codes h_i and h_j as q_ij = (1 + cos(h_i, h_j))/2; therefore the following formula is finally adopted as the similarity loss function L_sim: L_sim = -Σ_{i,j} w_ij [s_ij log q_ij + (1 - s_ij) log(1 - q_ij)], where w_ij is the weight of the training pair x_i and x_j, with w_ij = S/S_1 when s_ij = 1 and w_ij = S/S_0 otherwise; S_1 is the number of similar pairs in the dataset, S_0 is the number of dissimilar pairs in the dataset, S is the total number of pairs in the dataset, and s_ij is the similarity label of the i-th and j-th images; h_i is the K-bit hash code obtained by mapping x_i;
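The similarity loss can be sketched as follows. The mapping q_ij = (1 + cos(h_i, h_j))/2 and the class-balance weights w_ij = S/S_1 (similar pairs) and S/S_0 (dissimilar pairs) are one plausible reading of the weighting described above, not a verbatim transcription of the patent's formula.

```python
# Sketch of the similarity loss L_sim of step S6: a weighted pairwise
# cross-entropy between the similarity labels s_ij and q_ij derived from
# the cosine similarity of the hash codes.
import numpy as np

def similarity_loss(H, S_mat):
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    q = (1 + Hn @ Hn.T) / 2                       # q_ij in (0, 1)
    q = np.clip(q, 1e-7, 1 - 1e-7)
    S1 = S_mat.sum()                              # number of similar pairs
    S0 = S_mat.size - S1                          # number of dissimilar pairs
    w = np.where(S_mat == 1, S_mat.size / max(S1, 1), S_mat.size / max(S0, 1))
    return -(w * (S_mat * np.log(q) + (1 - S_mat) * np.log(1 - q))).mean()

H = np.array([[1., 1., 1., 1.], [1., 1., 1., -1.], [-1., -1., -1., -1.]])
S_mat = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]], float)  # similarity labels
loss = similarity_loss(H, S_mat)
```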
in order to ensure that the hash codes finally generated by the model still maintain the semantic information of the original images, the generated hash codes are classified to obtain classification vectors L = {l_1, l_2, …, l_n}, where each l_i is a c-dimensional vector; therefore, the following expression is adopted as the semantic loss function L_sem: L_sem = -Σ_i w_i Σ_j y_ij log(l_ij), where w_i is a weight parameter computed from c_t and c_tp; c_t is the number of pictures of the category to which the i-th picture belongs, and c_tp is the number of correctly classified pictures of that category; y_ij is the value of the j-th bit of the true category label of the i-th picture; l_ij is the value of the j-th bit of the predicted category label of the i-th picture;
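The semantic loss can be sketched as a weighted cross-entropy over the classification vectors. The exact form of the per-image weight w_i built from c_t and c_tp is not recoverable from the text, so an inverse-category-frequency weight w_i = n/(c·c_t) is assumed here purely for illustration.

```python
# Sketch of the semantic loss L_sem of step S6: cross-entropy between
# one-hot labels y and softmax class predictions, weighted per image.
import numpy as np

def semantic_loss(y, logits, w):
    e = np.exp(logits - logits.max(1, keepdims=True))
    probs = e / e.sum(1, keepdims=True)           # softmax over the c classes
    ce = -(y * np.log(np.clip(probs, 1e-7, 1))).sum(1)
    return (w * ce).mean()

y = np.array([[1, 0], [0, 1], [1, 0]], float)     # one-hot labels, c = 2
logits = np.array([[2.0, 0.1], [0.2, 1.5], [1.0, 1.0]])
counts = y.sum(0)                                 # pictures per category (c_t)
w = len(y) / (y.shape[1] * (y @ counts))          # assumed inverse-frequency weight
loss = semantic_loss(y, logits, w)
```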
combining the similarity loss and the semantic loss, the comprehensive loss function is constructed as L_total = L_sim + η L_sem + μ‖Θ‖_2, where η and μ are set hyperparameters and ‖Θ‖_2 is the L2 norm of the model parameters, used to prevent over-fitting during model training; by constantly iterating to optimize L_total, an efficient hash model can be learned;
s7, according to a final optimization result, the actual deep hash process is completed.
In the hash method of the invention, feature correlation optimization is carried out through the graph convolutional network. The graph convolutional network can update the features of similar nodes according to the adjacency matrix of the nodes in the graph, i.e., it iterates over node information using the edge relations and thereby preserves similarity information between data. By learning the similarity between data points through the graph convolutional network and integrating this similarity information into the image features, hash codes that better preserve the similarity relations of the original images are generated, further improving hash retrieval performance. The hash method draws on the JS divergence and the cross-entropy loss function from probability theory, designing corresponding similarity and semantic loss functions for large-scale image retrieval to optimize the overall hash model and strengthen correlation optimization. Meanwhile, to address the uneven distribution of images, corresponding weights are designed to strengthen the training of under-represented and mis-classified images so as to meet actual retrieval requirements.
The effect of the hash method of the present invention is further described below in conjunction with one embodiment:
The dataset used in the experiment was CIFAR-10. To conduct the comparison experiment, the dataset was further divided into a training set, a query set, and a retrieval set. The detailed settings of the datasets are shown in Table 1:
Table 1. Detailed settings of the dataset
The comparison methods adopted in the experiment are DSH, HashNet, DCH, IDHN and QSMIH. The average precision results obtained with different hash bit lengths are shown in Table 2:
Table 2. Average precision results obtained with different hash bit lengths
As can be seen from Table 2, the hash method of the invention achieves better retrieval performance than the other methods under different experimental settings.
Fig. 2 is a schematic flow chart of the traffic data retrieval method of the present invention. The traffic data retrieval method comprising the deep hash method based on the graph convolutional network provided by the invention comprises the following steps:
A. acquiring the traffic raw data to be retrieved and the traffic raw data in the database;
B. adopting the deep hash method based on the graph convolutional network to generate the hash codes of the picture to be retrieved and the hash codes of the pictures in the database respectively, comparing and sorting the two sets of hash codes according to Hamming distance, and returning the retrieval result;
C. taking the processing result obtained in step B as the final traffic data retrieval result.
Through the traffic data retrieval result obtained in step C, vehicle-owner information can be obtained, thereby realizing the recording and penalizing of illegal vehicles; or, combined with a traffic flow model, vehicles can be accurately matched across cameras at multiple intersections and vehicle tracks drawn, effectively relieving traffic congestion.
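Step B above — comparing the query's hash code with the database codes by Hamming distance and sorting — can be sketched as follows. For codes in {-1, +1}^K, the Hamming distance equals (K - <h_q, h_i>)/2, which allows ranking via a single matrix product; the function name is the editor's own.

```python
# Sketch of step B of the retrieval method: rank database hash codes by
# Hamming distance to the query code, nearest first.
import numpy as np

def hamming_rank(query_code: np.ndarray, db_codes: np.ndarray) -> np.ndarray:
    K = query_code.shape[0]
    dists = (K - db_codes @ query_code) / 2       # Hamming distances for +/-1 codes
    return np.argsort(dists, kind="stable")       # indices, nearest first

db = np.array([[1., 1., -1., -1.],
               [1., 1., 1., -1.],
               [-1., -1., -1., -1.]])
q = np.array([1., 1., 1., -1.])
order = hamming_rank(q, db)                       # db[1] is an exact match
```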
Claims (10)
1. A deep hash method based on a graph convolutional network, comprising the following steps:
s1, acquiring a training image;
s2, randomly cropping the images obtained in step S1 to complete data enhancement of the image data;
s3, constructing a vision transformer module based on block embedding, position embedding and an encoder;
s4, inputting the output data of the vision transformer module constructed in step S3 into a graph convolutional network for correlation optimization;
s5, mapping the output of the graph convolutional network obtained in step S4 through a fully connected layer and an activation function to obtain hash codes;
s6, constructing a comprehensive loss function based on similarity loss and semantic loss, and optimizing the hashing process of steps S3-S5;
s7, completing the actual deep hashing process according to the final optimization result.
2. The deep hash method based on a graph convolutional network according to claim 1, wherein step S2 specifically comprises the following steps:
unifying the images acquired in step S1 into 256×256 square images;
and randomly cropping the unified images with a 224×224 cropping frame, thereby completing the data enhancement of the image data.
3. The deep hash method based on a graph convolutional network according to claim 2, wherein step S3 specifically comprises the following steps:
the vision transformer comprises a block embedding module, a position embedding module and an encoder module connected in series;
the block embedding module is used for dividing an input image into a plurality of blocks and adding a class token to be learned, obtaining image blocks and an embedding vector, which are input into the position embedding module together;
the position embedding module is used for adding sequence information to the input image blocks, thereby generating a vector for classification;
the encoder module is used for extracting image features from the vector output by the position embedding module.
4. The deep hash method based on a graph convolutional network according to claim 3, wherein the block embedding module segments the input image into p blocks, p = HW/P², where H is the length of the input image, W is the width of the input image, and P is the side length of each segmented block; then a class token x_cls to be learned is added, obtaining the embedding vector X_emd = [x_cls; x_i^1; x_i^2; …; x_i^p], where x_i^p is the p-th block of the i-th image.
6. The deep hash method based on a graph convolutional network according to claim 5, wherein the encoder module comprises m blocks; each block comprises a first layer-normalization sub-module, a multi-head self-attention sub-module, a second layer-normalization sub-module and a multi-layer perceptron sub-module; the calculation process of each block is represented by the following formulas:
z'_m = MSA(LN_1(z_{m-1})) + z_{m-1}, z_m = MLP(LN_2(z'_m)) + z'_m, where z_m is the output feature of the m-th block; MLP() is the processing function of the multi-layer perceptron sub-module; LN_2() is the processing function of the second layer-normalization sub-module; z'_m is an intermediate variable; MSA() is the processing function of the multi-head self-attention sub-module; LN_1() is the processing function of the first layer-normalization sub-module.
7. The deep hash method based on a graph convolutional network according to claim 6, wherein step S4 specifically comprises the following steps:
based on the class token x_cls, the cosine similarity between image pairs is calculated to obtain the similarity relation matrix V, with V_ij = (x_cls^i · x_cls^j) / (‖x_cls^i‖‖x_cls^j‖), where x_cls^i is the class token of the i-th image and ‖x_cls^i‖ is the modulus of the vector x_cls^i;
the image data features output by the vision transformer module are taken as nodes, and the similarity relation matrix V as the edge relations, and these are input into the graph convolutional network for correlation optimization; the following formula is used as the propagation rule of the graph convolutional network:
H^(l+1) = σ(D̃^(-1/2) Ã D̃^(-1/2) H^(l) W^(l)), where H^(l+1) is the output feature matrix of the l-th layer; H^(l) is the input feature matrix of the l-th layer; σ() is the activation function; D̃ is the degree matrix of Ã, with D̃_ii = Σ_j Ã_ij, where D̃_ii is the element in row i, column i of D̃ and Ã_ij is the element in row i, column j of the matrix Ã; Ã is the adjacency matrix of the undirected graph composed of the images with added self-connections, Ã = A + I_n, where I_n is the identity matrix; W^(l) is the weight parameter of the l-th layer.
9. The deep hash method based on a graph convolutional network according to claim 8, wherein constructing the comprehensive loss function based on similarity loss and semantic loss in step S6 specifically comprises the following steps:
the following equation is used as the similarity loss function L_sim:
L_sim = -Σ_{i,j} w_ij [s_ij log q_ij + (1 - s_ij) log(1 - q_ij)], where w_ij is the weight of the training pair x_i and x_j, with w_ij = S/S_1 when s_ij = 1 and w_ij = S/S_0 otherwise; S_1 is the number of similar pairs in the dataset, S_0 is the number of dissimilar pairs in the dataset, S is the total number of pairs in the dataset, and s_ij is the similarity label of the i-th and j-th images; h_i is the K-bit hash code obtained by mapping x_i; q_ij is an intermediate variable calculated from the cosine similarity between the binary hash codes h_i and h_j, q_ij = (1 + cos(h_i, h_j))/2;
the following formula is adopted as the semantic loss function L_sem:
L_sem = -Σ_i w_i Σ_j y_ij log(l_ij), where w_i is a weight parameter computed from c_t and c_tp; c_t is the number of pictures of the category to which the i-th picture belongs, and c_tp is the number of correctly classified pictures of that category; y_ij is the value of the j-th bit of the true category label of the i-th picture; l_ij is the value of the j-th bit of the predicted category label of the i-th picture;
combining the similarity loss and the semantic loss, the comprehensive loss function L_total is constructed as L_total = L_sim + η L_sem + μ‖Θ‖_2, where η and μ are set hyperparameters and ‖Θ‖_2 is the L2 norm of the model parameters.
10. A traffic data retrieval method comprising the deep hash method based on a graph convolutional network according to any one of claims 1 to 9, characterized by comprising the following steps:
A. acquiring the traffic raw data to be retrieved and the traffic raw data in the database;
B. adopting the deep hash method based on a graph convolutional network according to any one of claims 1 to 9 to generate the hash codes of the picture to be retrieved and the hash codes of the pictures in the database respectively, comparing and sorting the two sets of hash codes according to Hamming distance, and returning the retrieval result;
C. taking the processing result obtained in step B as the final traffic data retrieval result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310195620.4A CN115878823B (en) | 2023-03-03 | 2023-03-03 | Deep hash method and traffic data retrieval method based on graph convolutional network
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310195620.4A CN115878823B (en) | 2023-03-03 | 2023-03-03 | Deep hash method and traffic data retrieval method based on graph convolutional network
Publications (2)
Publication Number | Publication Date |
---|---|
CN115878823A CN115878823A (en) | 2023-03-31 |
CN115878823B true CN115878823B (en) | 2023-04-28 |
Family
ID=85761875
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310195620.4A Active CN115878823B (en) | 2023-03-03 | Deep hash method and traffic data retrieval method based on graph convolutional network
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115878823B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106980641A (en) * | 2017-02-09 | 2017-07-25 | 上海交通大学 | Unsupervised-hashing fast image retrieval system and method based on convolutional neural networks
CN109918528A (en) * | 2019-01-14 | 2019-06-21 | 北京工商大学 | Compact hash code learning method based on semantic preservation
CN109977250A (en) * | 2019-03-20 | 2019-07-05 | 重庆大学 | Deep hashing image retrieval method fusing semantic information and multi-level similarity
CN110555121A (en) * | 2019-08-27 | 2019-12-10 | 清华大学 | Image hash generation method and device based on graph neural network
CN111611413A (en) * | 2020-05-26 | 2020-09-01 | 北京邮电大学 | Deep hashing method based on metric learning
CN111738058A (en) * | 2020-05-07 | 2020-10-02 | 华南理工大学 | Reconstruction attack method against biometric template protection based on generative adversarial networks
AU2020103715A4 (en) * | 2020-11-27 | 2021-02-11 | Beijing University Of Posts And Telecommunications | Method of monocular depth estimation based on joint self-attention mechanism
CN115187610A (en) * | 2022-09-08 | 2022-10-14 | 中国科学技术大学 | Neuron morphological analysis method and device based on graph neural network and storage medium
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200104721A1 (en) * | 2018-09-27 | 2020-04-02 | Scopemedia Inc. | Neural network image search |
- 2023-03-03 CN CN202310195620.4A patent/CN115878823B/en active Active
Non-Patent Citations (1)
Title |
---|
J. Gao, X. Shen, P. Fu, Z. Ji and T. Wang, "Multiview Graph Convolutional Hashing for Multisource Remote Sensing Image Retrieval," IEEE Geoscience and Remote Sensing Letters, vol. 19 (full text) *
Also Published As
Publication number | Publication date |
---|---|
CN115878823A (en) | 2023-03-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109711463B (en) | Attention-based important object detection method | |
Chen et al. | Saliency detection via the improved hierarchical principal component analysis method | |
CN110929080B (en) | Optical remote sensing image retrieval method based on attention and generation countermeasure network | |
Devaraj et al. | An efficient framework for secure image archival and retrieval system using multiple secret share creation scheme | |
CN110928961A (en) | Multi-mode entity linking method, equipment and computer readable storage medium | |
CN115019039B (en) | Instance segmentation method and system combining self-supervision and global information enhancement | |
CN113377981A (en) | Large-scale logistics commodity image retrieval method based on multitask deep hash learning | |
CN115965789A (en) | Scene perception attention-based remote sensing image semantic segmentation method | |
CN108805280B (en) | Image retrieval method and device | |
Sreeja et al. | A unified model for egocentric video summarization: an instance-based approach | |
Peng et al. | Swin transformer-based supervised hashing | |
CN115878823B (en) | Deep hash method and traffic data retrieval method based on graph convolution network | |
Yang et al. | IF-MCA: Importance factor-based multiple correspondence analysis for multimedia data analytics | |
Huynh et al. | An efficient model for copy-move image forgery detection | |
Hao | Deep learning review and discussion of its future development | |
Nguyen et al. | Fusion schemes for image-to-video person re-identification | |
CN116541592A (en) | Vector generation method, information recommendation method, device, equipment and medium | |
CN116204673A (en) | Large-scale image retrieval hash method focusing on relationship among image blocks | |
CN114398980A (en) | Cross-modal Hash model training method, encoding method, device and electronic equipment | |
CN113988052A (en) | Event detection method and device based on graph disturbance strategy | |
Yang et al. | RUW-Net: A Dual Codec Network for Road Extraction From Remote Sensing Images | |
Naik et al. | Image segmentation using encoder-decoder architecture and region consistency activation | |
Wang et al. | Research on grey relational clustering model of multiobjective human resources based on time constraint | |
Yang et al. | Generative face inpainting hashing for occluded face retrieval | |
CN115439688B (en) | Weak supervision object detection method based on surrounding area sensing and association |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||