CN110110116B - Trademark image retrieval method integrating deep convolutional network and semantic analysis - Google Patents

Trademark image retrieval method integrating deep convolutional network and semantic analysis

Info

Publication number
CN110110116B
CN110110116B (application CN201910259374.8A; published as CN110110116A)
Authority
CN
China
Prior art keywords
trademark
similarity
image
calculating
trademark image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910259374.8A
Other languages
Chinese (zh)
Other versions
CN110110116A (en)
Inventor
高楠
祝建明
李利娟
李伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201910259374.8A priority Critical patent/CN110110116B/en
Publication of CN110110116A publication Critical patent/CN110110116A/en
Application granted granted Critical
Publication of CN110110116B publication Critical patent/CN110110116B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Abstract

The trademark image retrieval method integrating the deep convolutional neural network and semantic analysis comprises the following steps: step 1, preprocessing a picture; step 2, training a deep convolutional neural network model; step 3, inputting the picture into the trained model for image matching; step 4, calculating the similarity of two keyword groups; step 5, calculating the similarity of two concepts; step 6, making a decision with a feature fusion algorithm based on Bayesian theory; step 7, measuring the distance between the feature vectors of two images using the Euclidean distance; step 8, calculating the similarity between trademark images; and step 9, constructing a trademark image retrieval tree. The method reduces the influence of subjective factors on retrieval, addresses inaccurate image retrieval results, and achieves efficient, accurate trademark image retrieval.

Description

Trademark image retrieval method integrating deep convolutional network and semantic analysis
Technical Field
The invention relates to deep learning and image retrieval. A method is provided for retrieving trademark images using a deep convolutional neural network, combined with semantic matching of keyword groups.
Background
Trademarks identify goods or services, signal the reputation and reliability of businesses, and are increasingly indispensable in intense marketing competition. A new trademark must be sufficiently distinctive to avoid confusion or conflict with registered trademarks. Image retrieval based on computer vision, aided by related techniques such as pattern recognition, offers a good way to address the current trademark registration problem. However, such methods suffer from drawbacks such as slow speed and sensitivity to image complexity. Moreover, traditional methods are heavily affected by human subjectivity when handling abstract and more complex images. In particular, for purely graphic trademarks and trademark images with incomplete descriptions, traditional trademark retrieval is difficult and inefficient, and cannot meet trademark registration demands under China's rapid economic development. The number of registered trademarks in China grows year by year, and the problems of traditional trademark retrieval (subjective manual classification, difficulty in defining specific categories, and difficulty in describing trademark image similarity) are increasingly prominent and severely constrain the development of trademark registration in China. Research on automatic, efficient trademark retrieval technology is therefore both important and urgent; the present work was carried out in this context.
Disclosure of Invention
To overcome the defects in the prior art, the invention provides a trademark image retrieval method integrating a deep convolutional neural network and semantic analysis. Retrieving trademark images with a deep convolutional neural network improves the success rate of trademark retrieval and avoids a large amount of manual feature engineering. On this basis, semantic matching of keyword groups is performed so that semantic similarity is considered during retrieval; this reduces trademark image retrieval time while maintaining accuracy, improves performance, and alleviates the semantic gap problem to some extent.
The invention discloses a trademark image retrieval method integrating a deep convolutional neural network and semantic analysis, which comprises the following steps of:
The first part extracts trademark image features by a deep learning method and calculates their similarity, mainly comprising steps 1-3.
Step 1, preprocessing pictures.
Read the trademark image input by the user for retrieval, detect the position of the trademark in the image, detect the graphic and text parts of the trademark, align the trademark image, and finally normalize the size of the trademark image and package it into the LMDB file format, laying the foundation for deep learning;
Step 2, training a deep convolutional neural network model.
A deep convolutional neural network model with a 10-layer structure is constructed. The first layer is an input layer that receives the preprocessed trademark image. The input layer is followed by 5 convolutional layers, each containing an activation function (i.e. an activation layer); ReLU is chosen as the activation function to introduce non-linearity into the data. These are followed by 3 fully connected layers. The 1st, 2nd and 5th convolutional layers each include a pooling layer, with max pooling as the down-sampling method to reduce the data dimensionality. The last layer, the output layer, outputs the feature information of the trademark image, giving the trained deep convolutional network model.
After construction, import the files of the prepared trademark pictures, configure the corresponding prototxt file, and determine the model structure and training parameters to obtain the trained model. The output of the model's last layer is extracted to serve as the feature library of the trademark images;
Step 3, inputting the picture into the trained model for image matching.
Input the test picture into the deep convolutional neural network, extract the feature vector of the target picture, and calculate similarity using the feature vectors;
The second part extracts semantic features of the trademark image and calculates their similarity, mainly comprising steps 4-5.
Step 4, calculating the similarity of two keyword groups.
Suppose two preprocessed trademark pictures I1 and I2 have been prepared. Segment both pictures and extract a corresponding keyword from each segmented region, forming a keyword group for each picture. By extending the word-similarity calculation of traditional semantic analysis to keyword groups, the semantic features of the trademark images can be analyzed.
The similarity of the two keyword groups is calculated as:
Sim(W1, W2) = max_{1≤i≤n, 1≤j≤m} Sim(S1i, S2j)
W1 and W2 are two words; in this step they refer to the keyword groups of images I1 and I2. {S11, S12, …, S1n} and {S21, S22, …, S2m} are their respective concept sets; in this step they are the concrete representations of the two images' keyword groups, where S1n, the n-th meaning item of word 1, refers to the keyword for the n-th segmented region of the first picture. Sim(W1, W2), the maximum similarity over all pairs of meaning items (concepts) of the two words, represents the semantic-level similarity of the two trademark pictures in this step;
Step 5, calculating the similarity of two keywords.
By the previous step, the problem is reduced from the similarity of trademark images to the similarity of individual keywords (concepts) between two keyword groups; this step addresses that problem.
The similarity of two keywords is calculated as:
Sim(S1, S2) = Σ_{q=1}^{4} βq · Π_{p=1}^{q} Simp(S1, S2)
S1 is one of the n meaning items owned by a word; in this step it refers to the keyword for a given segmented region of the picture. βq (1 ≤ q ≤ 4) are adjustable parameters weighting 4 features: the first basic semantic description, the other basic semantic descriptions, the relational semantic descriptions, and the relational symbol descriptions, satisfying β1 + β2 + β3 + β4 = 1 and β1 ≥ β2 ≥ β3 ≥ β4.
The third major part analyzes and discusses the image similarities obtained from the previous two parts, mainly comprising steps 6-9.
Step 6, fusing the similarities of the trademark images.
The two major parts above yield the similarity between the two trademark images, analyzed from different aspects.
In this step, a feature fusion algorithm based on Bayesian theory makes the decision and fuses the two similarities. The process can be expressed as:
x → ωj, if P(ωj|x) = max_{k=1,…,c} P(ωk|x)
where Ω = {ω1, …, ωc} is the pattern space containing c pattern classes, and x = [x1, x2, …, xN] is the N-dimensional real-valued feature representation of the unknown sample x. P(ωk|x) denotes the posterior probability of the k-th class, k ∈ {1, 2, …, c}. According to minimum-error-rate Bayesian decision theory, the sample is assigned to the j-th class, the pattern class with the maximum posterior probability given the sample x;
Step 7, measuring the distance between the feature vectors of two images using the Euclidean distance.
Define a similarity measure for trademark images: after obtaining the feature vector of the input trademark and the category it belongs to, compute the similarity between that feature vector and the feature vectors in the corresponding feature library. Whether two images are similar is judged mainly by the distance between their feature vectors. The calculation formula is:
d = √( Σ_{i=1}^{m} (xi − yi)² )
where m is the dimension of the feature vectors, d is the distance between the feature vectors of the two images, xi is the i-th component of the first picture's feature vector, and yi is the corresponding component of the second picture's feature vector.
Step 8, calculating the similarity between trademark images.
Compute the similarity value between each trademark in the library and the input trademark, and return the trademarks with high similarity. The calculation formula is:
sim = 1 / (1 + d)
where d is the distance between the two pictures obtained in step 7.
Step 9, constructing a trademark image retrieval tree.
Using the similarities between trademark images, match each trademark image into a trademark image retrieval tree, simplifying the overall search process and establishing a rapid retrieval system. This also has positive effects when trademark images are incomplete or blurred, and for optimizing trademark image retrieval results based on user feedback.
The invention provides a trademark image retrieval method integrating a deep convolutional neural network and semantic analysis, relating to deep learning and image retrieval. Retrieving trademark images with a deep convolutional neural network improves the success rate of image retrieval and avoids a large amount of manual feature engineering. On this basis, semantic matching of keyword groups is performed so that semantic similarity is considered during retrieval; this reduces image retrieval time while maintaining accuracy, improves performance, and alleviates the semantic gap problem to some extent.
The invention has the advantages that it reduces the influence of subjective factors on the retrieval result, addresses inaccurate image retrieval, and achieves efficient, accurate trademark image retrieval.
Drawings
FIG. 1 is a schematic view of the technical process of the present invention.
FIG. 2 is a diagram of a convolution network model.
Detailed Description
To make the flow of the present invention easier to understand, a detailed description follows in conjunction with the flow chart of FIG. 1:
The first part extracts trademark image features by a deep learning method and calculates their similarity, mainly comprising steps 1-3.
Step 1, preprocessing pictures.
Read the trademark image input by the user for retrieval, detect the position of the trademark in the image, detect the graphic and text parts of the trademark, align the trademark image, and finally normalize the size of the trademark image and package it into the LMDB file format, laying the foundation for deep learning;
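The size-normalization portion of step 1 can be sketched minimally as below; the nearest-neighbor resize, the [0, 1] scaling, the 227-pixel target size, and the function name `normalize_trademark` are illustrative assumptions (the detection, alignment, and LMDB packaging stages are only noted in comments):

```python
import numpy as np

def normalize_trademark(img: np.ndarray, size: int = 227) -> np.ndarray:
    """Resize a grayscale trademark image to size x size with nearest-neighbor
    sampling and scale pixels to [0, 1], standing in for step 1's size
    normalization. Detection, alignment, and LMDB packaging are omitted here."""
    h, w = img.shape
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 255.0

# toy 4x6 "image" with pixel values 0..23
img = np.arange(24, dtype=np.uint8).reshape(4, 6)
out = normalize_trademark(img, size=8)
print(out.shape)  # (8, 8)
```

In a full pipeline the normalized arrays would then be serialized into an LMDB database for training, as the text describes.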
Step 2, training a deep convolutional neural network model.
A deep convolutional neural network model with a 10-layer structure is constructed. The first layer is an input layer that receives the preprocessed trademark image. The input layer is followed by 5 convolutional layers, each containing an activation function (i.e. an activation layer); ReLU is chosen as the activation function to introduce non-linearity into the data. These are followed by 3 fully connected layers. The 1st, 2nd and 5th convolutional layers each include a pooling layer, with max pooling as the down-sampling method to reduce the data dimensionality. The last layer, the output layer, outputs the feature information of the trademark image, giving the trained deep convolutional network model.
After construction, import the files of the prepared trademark pictures, configure the corresponding prototxt file, and determine the model structure and training parameters to obtain the trained model. The output of the model's last layer is extracted to serve as the feature library of the trademark images;
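The described 10-layer structure (input, 5 convolutional layers with ReLU, pooling after conv 1/2/5, 3 fully connected layers, output) matches an AlexNet-style design; since the patent does not state kernel sizes, strides, or padding, the hyperparameters below are assumptions borrowed from AlexNet. This sketch only traces feature-map shapes through such a network:

```python
# Assumed AlexNet-style hyperparameters for the 5 conv + 3 FC layers of step 2
# (kernel, stride, padding are NOT specified by the patent).
CONV = [  # (out_channels, kernel, stride, padding, pool_after)
    (96, 11, 4, 0, True),
    (256, 5, 1, 2, True),
    (384, 3, 1, 1, False),
    (384, 3, 1, 1, False),
    (256, 3, 1, 1, True),
]
FC = [4096, 4096, 1000]  # illustrative fully connected layer widths

def trace_shapes(size: int = 227):
    """Trace the spatial size through the conv/pool stack (3x3 max pool, stride 2)."""
    shapes = []
    for ch, k, s, p, pool in CONV:
        size = (size - k + 2 * p) // s + 1   # convolution output size
        if pool:
            size = (size - 3) // 2 + 1       # max-pooling output size
        shapes.append((ch, size))
    flat = shapes[-1][0] * shapes[-1][1] ** 2  # input dimension of the first FC layer
    return shapes, flat

shapes, flat = trace_shapes()
print(shapes)  # [(96, 27), (256, 13), (384, 13), (384, 13), (256, 6)]
print(flat)    # 9216
```

With these assumed settings, a 227 x 227 input yields a 256 x 6 x 6 tensor (9216 values) entering the fully connected layers, whose final output would serve as the feature vector stored in the feature library.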
Step 3, inputting the picture into the trained model for image matching.
Input the test picture into the deep convolutional neural network, extract the feature vector of the target picture, and calculate similarity using the feature vectors;
The second part extracts semantic features of the trademark image and calculates their similarity, mainly comprising steps 4-5.
Step 4, calculating the similarity of two keyword groups.
Suppose two preprocessed trademark pictures I1 and I2 have been prepared. Segment both pictures and extract a corresponding keyword from each segmented region, forming a keyword group for each picture. By extending the word-similarity calculation of traditional semantic analysis to keyword groups, the semantic features of the trademark images can be analyzed.
The similarity of the two keyword groups is calculated as:
Sim(W1, W2) = max_{1≤i≤n, 1≤j≤m} Sim(S1i, S2j)
W1 and W2 are two words; in this step they refer to the keyword groups of images I1 and I2. {S11, S12, …, S1n} and {S21, S22, …, S2m} are their respective concept sets; in this step they are the concrete representations of the two images' keyword groups, where S1n, the n-th meaning item of word 1, refers to the keyword for the n-th segmented region of the first picture. Sim(W1, W2), the maximum similarity over all pairs of meaning items (concepts) of the two words, represents the semantic-level similarity of the two trademark pictures in this step;
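The step-4 rule, taking the similarity of two keyword groups to be the maximum similarity over all pairs of their meaning items, can be sketched as below; the character-set Jaccard measure standing in for the inner Sim is purely illustrative and is not the patent's concept similarity:

```python
def word_similarity(concepts1, concepts2, sim):
    """Sim(W1, W2) = max over all pairs (S1i, S2j) of Sim(S1i, S2j)."""
    return max(sim(s1, s2) for s1 in concepts1 for s2 in concepts2)

def jaccard(a: str, b: str) -> float:
    """Toy stand-in for concept similarity: character-set Jaccard overlap."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

# keyword groups extracted from two segmented trademark images (illustrative)
g1 = ["star", "sun"]
g2 = ["stars", "moon"]
print(word_similarity(g1, g2, jaccard))  # 1.0 ("star" and "stars" share all letters)
```

In the patent, the inner Sim would instead be the concept similarity of step 5.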
Step 5, calculating the similarity of two keywords.
By the previous step, the problem is reduced from the similarity of trademark images to the similarity of individual keywords (concepts) between two keyword groups; this step addresses that problem.
The similarity of two keywords is calculated as:
Sim(S1, S2) = Σ_{q=1}^{4} βq · Π_{p=1}^{q} Simp(S1, S2)
S1 is one of the n meaning items owned by a word; in this step it refers to the keyword for a given segmented region of the picture. βq (1 ≤ q ≤ 4) are adjustable parameters weighting 4 features: the first basic semantic description, the other basic semantic descriptions, the relational semantic descriptions, and the relational symbol descriptions, satisfying β1 + β2 + β3 + β4 = 1 and β1 ≥ β2 ≥ β3 ≥ β4.
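Assuming the usual HowNet-style combination behind the step-5 formula (the original formula appears only as an image placeholder, so this form is an assumption), where feature q contributes βq times the product of the first q part similarities:

```python
def concept_similarity(part_sims, betas=(0.5, 0.2, 0.17, 0.13)):
    """Sim(S1, S2) = sum_q beta_q * prod_{p<=q} Sim_p(S1, S2), with
    beta1 + beta2 + beta3 + beta4 = 1 and beta1 >= beta2 >= beta3 >= beta4.
    The specific beta values used here are illustrative, not the patent's."""
    assert abs(sum(betas) - 1.0) < 1e-9
    total, prod = 0.0, 1.0
    for beta, s in zip(betas, part_sims):
        prod *= s                 # running product of the part similarities
        total += beta * prod
    return total

# identical concepts on all four features give similarity 1
print(concept_similarity([1.0, 1.0, 1.0, 1.0]))
```

The multiplicative chaining makes later, less important features matter only when the earlier ones already agree, which is consistent with the β1 ≥ β2 ≥ β3 ≥ β4 ordering in the text.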
The third major part analyzes and discusses the image similarities obtained from the previous two parts, mainly comprising steps 6-9.
Step 6, fusing the similarities of the trademark images.
The two major parts above yield the similarity between the two trademark images, analyzed from different aspects. In this step, a feature fusion algorithm based on Bayesian theory makes the decision and fuses the two similarities. The process can be expressed as:
x → ωj, if P(ωj|x) = max_{k=1,…,c} P(ωk|x)
where Ω = {ω1, …, ωc} is the pattern space containing c pattern classes, and x = [x1, x2, …, xN] is the N-dimensional real-valued feature representation of the unknown sample x. P(ωk|x) denotes the posterior probability of the k-th class, k ∈ {1, 2, …, c}. According to minimum-error-rate Bayesian decision theory, the sample is assigned to the j-th class, the pattern class with the maximum posterior probability given the sample x;
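The step-6 decision rule assigns x to the class with the maximum posterior. The product-rule fusion of the visual and semantic per-class scores shown below is one plausible instantiation; the patent names Bayesian feature fusion but the exact combination rule is an assumption here:

```python
def fuse(p_visual, p_semantic):
    """Combine per-class scores from the CNN branch and the semantic branch
    by the product rule, renormalized to sum to 1 (assumed fusion rule)."""
    p = [a * b for a, b in zip(p_visual, p_semantic)]
    total = sum(p)
    return [v / total for v in p]

def bayes_decide(posteriors):
    """x -> w_j where P(w_j | x) = max_k P(w_k | x), i.e. the
    minimum-error-rate Bayes decision rule."""
    return max(range(len(posteriors)), key=lambda k: posteriors[k])

post = fuse([0.6, 0.3, 0.1], [0.2, 0.5, 0.3])
print(bayes_decide(post))  # 1: class 1 maximizes the fused posterior
```

Here the visual branch favors class 0 and the semantic branch favors class 1; the fused posterior resolves the disagreement in favor of class 1.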
Step 7, measuring the distance between the feature vectors of two images using the Euclidean distance.
Define a similarity measure for trademark images: after obtaining the feature vector of the input trademark and the category it belongs to, compute the similarity between that feature vector and the feature vectors in the corresponding feature library. Whether two images are similar is judged mainly by the distance between their feature vectors. The calculation formula is:
d = √( Σ_{i=1}^{m} (xi − yi)² )
where m is the dimension of the feature vectors, d is the distance between the feature vectors of the two images, xi is the i-th component of the first picture's feature vector, and yi is the corresponding component of the second picture's feature vector.
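Step 7's Euclidean distance between two m-dimensional feature vectors, as a direct transcription:

```python
import math

def euclidean(x, y):
    """d = sqrt(sum_{i=1}^{m} (x_i - y_i)^2), where x_i and y_i are the i-th
    components of the two images' feature vectors."""
    assert len(x) == len(y)  # both vectors must have the same dimension m
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

print(euclidean([1.0, 2.0, 3.0], [1.0, 2.0, 7.0]))  # 4.0
```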
Step 8, calculating the similarity between trademark images.
Compute the similarity value between each trademark in the library and the input trademark, and return the trademarks with high similarity. The calculation formula is:
sim = 1 / (1 + d)
where d is the distance between the two pictures obtained in step 7.
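The step-8 formula survives only as an image placeholder in this copy; a common monotone mapping from distance to similarity, 1/(1 + d), is assumed below purely for illustration:

```python
def distance_to_similarity(d: float) -> float:
    """Map a non-negative distance d to a similarity in (0, 1]; identical
    images (d = 0) get similarity 1. An assumed form, not necessarily the
    patent's exact formula."""
    return 1.0 / (1.0 + d)

print(distance_to_similarity(0.0))  # 1.0
print(distance_to_similarity(4.0))  # 0.2
```

Any strictly decreasing map of d would preserve the ranking used to return the most similar trademarks.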
Step 9, constructing a trademark image retrieval tree.
Using the similarities between trademark images, match each trademark image into a trademark image retrieval tree, simplifying the overall search process and establishing a rapid retrieval system. This also has positive effects when trademark images are incomplete or blurred, and for optimizing trademark image retrieval results based on user feedback.
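One way such a retrieval tree could be grown from pairwise similarities is sketched below: greedily descend toward the most similar child while its similarity to the new image clears a threshold, otherwise attach the image at the current node. The construction details, the threshold, and the similarity mapping are assumptions; the patent does not specify them:

```python
import math

class Node:
    def __init__(self, vec):
        self.vec = vec
        self.children = []

def similarity(a, b):
    """1 / (1 + Euclidean distance), matching the assumed step-8 mapping."""
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + d)

def insert(root, vec, threshold=0.5):
    """Greedy insertion: follow the most similar child while its similarity
    to the new image exceeds the threshold, then attach the image there."""
    node = root
    while node.children:
        best = max(node.children, key=lambda c: similarity(vec, c.vec))
        if similarity(vec, best.vec) < threshold:
            break
        node = best
    node.children.append(Node(vec))

root = Node((0.0, 0.0))
insert(root, (0.1, 0.0))   # close to the stored root image: becomes its first child
insert(root, (5.0, 5.0))   # far from every stored image: opens a new branch at the root
print(len(root.children))  # 2
```

At query time, the same greedy descent would restrict comparisons to one branch per level instead of scanning the whole library, which is the speed-up the text attributes to the tree.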
The invention provides a trademark image retrieval method integrating a deep convolutional neural network and semantic analysis, relating to deep learning and image retrieval. Retrieving trademark images with a deep convolutional neural network improves the success rate of image retrieval and avoids a large amount of manual feature engineering. On this basis, semantic matching of keyword groups is performed so that semantic similarity is considered during retrieval; this reduces image retrieval time while maintaining accuracy, improves performance, and alleviates the semantic gap problem to some extent.
The invention has the advantages that it reduces the influence of subjective factors on the retrieval result, addresses inaccurate image retrieval, and achieves efficient, accurate trademark image retrieval.
The embodiments described in this specification are merely illustrative of implementations of the inventive concept and the scope of the present invention should not be considered limited to the specific forms set forth in the embodiments but rather by the equivalents thereof as may occur to those skilled in the art upon consideration of the present inventive concept.

Claims (1)

1. The trademark image retrieval method integrating the deep convolutional neural network and semantic analysis comprises the following steps:
step 1, preprocessing a picture; read the trademark image input by the user for retrieval, detect the position of the trademark in the image, detect the graphic and text parts of the trademark, align the trademark image, and finally normalize the size of the trademark image and package it into the LMDB file format, laying the foundation for deep learning;
step 2, training a deep convolutional neural network model; construct a deep convolutional neural network model with a 10-layer structure; the first layer is an input layer that receives the preprocessed trademark image; the input layer is followed by 5 convolutional layers, each containing an activation function (activation layer), with ReLU chosen as the activation function to introduce non-linearity into the data; these are followed by 3 fully connected layers; the 1st, 2nd and 5th convolutional layers each include a pooling layer, with max pooling chosen as the down-sampling method to reduce data dimensionality; the last layer, the output layer, outputs the feature information of the trademark image, giving the trained deep convolutional network model;
after construction, import the files of the previously prepared trademark pictures, configure the corresponding prototxt file, and determine the model structure and training parameters to obtain the trained model; the output of the model's last layer is extracted to serve as the feature library of the trademark images;
step 3, inputting the picture into the trained model for image matching; input the test picture into the deep convolutional neural network, extract the feature vector of the target picture, and calculate similarity using the feature vectors;
step 4, calculating the similarity of two keyword groups, with the calculation formula:
Sim(W1, W2) = max_{1≤i≤n, 1≤j≤m} Sim(S1i, S2j)
W1 and W2 are two words, {S11, S12, …, S1n} and {S21, S22, …, S2m} are their respective concept sets, and S1n is the n-th of the concepts that word 1 has; Sim(W1, W2), the maximum similarity over all pairs of concepts of the two words, is the similarity of the two words;
step 5, calculating the similarity of two concepts, with the calculation formula:
Sim(S1, S2) = Σ_{q=1}^{4} βq · Π_{p=1}^{q} Simp(S1, S2)
wherein βq (1 ≤ q ≤ 4) are adjustable parameters weighting 4 features: the first basic semantic description, the other basic semantic descriptions, the relational semantic descriptions, and the relational symbol descriptions, satisfying β1 + β2 + β3 + β4 = 1 and β1 ≥ β2 ≥ β3 ≥ β4;
step 6, making the decision with a feature fusion algorithm based on Bayesian theory, a process that can be expressed as:
x → ωj, if P(ωj|x) = max_{k=1,…,c} P(ωk|x)
where Ω = {ω1, …, ωc} is the pattern space containing c pattern classes, and x = [x1, x2, …, xN] is the N-dimensional real-valued feature representation of the unknown sample x; P(ωk|x) denotes the posterior probability of the k-th class, k ∈ {1, 2, …, c}; according to minimum-error-rate Bayesian decision theory, the sample is assigned to the j-th class, the pattern class with the maximum posterior probability given the sample x;
step 7, measuring the distance between the feature vectors of two images using the Euclidean distance; define a similarity measure for trademark images: after obtaining the feature vector of the input trademark and the category it belongs to, compute the similarity between that feature vector and the feature vectors in the corresponding feature library; whether two images are similar is judged mainly by the distance between their feature vectors; the calculation formula is:
d = √( Σ_{i=1}^{m} (xi − yi)² )
wherein m represents the dimension of the feature vectors;
step 8, calculating the similarity between trademark images; compute the similarity value between each trademark in the library and the input trademark, and return the trademarks with high similarity; the calculation formula is:
sim = 1 / (1 + d)
step 9, constructing a trademark image retrieval tree; using the similarities between trademark images, match each trademark image into a trademark image retrieval tree, simplifying the overall search process and establishing a rapid retrieval system; this also has positive effects when trademark images are incomplete or blurred, and for optimizing trademark image retrieval results based on user feedback.
CN201910259374.8A 2019-04-02 2019-04-02 Trademark image retrieval method integrating deep convolutional network and semantic analysis Active CN110110116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910259374.8A CN110110116B (en) 2019-04-02 2019-04-02 Trademark image retrieval method integrating deep convolutional network and semantic analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910259374.8A CN110110116B (en) 2019-04-02 2019-04-02 Trademark image retrieval method integrating deep convolutional network and semantic analysis

Publications (2)

Publication Number Publication Date
CN110110116A CN110110116A (en) 2019-08-09
CN110110116B true CN110110116B (en) 2021-04-06

Family

ID=67484759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910259374.8A Active CN110110116B (en) 2019-04-02 2019-04-02 Trademark image retrieval method integrating deep convolutional network and semantic analysis

Country Status (1)

Country Link
CN (1) CN110110116B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110765305A (en) * 2019-10-23 2020-02-07 深圳报业集团 Medium information pushing system and visual feature-based image-text retrieval method thereof
CN112507160A (en) * 2020-12-03 2021-03-16 平安科技(深圳)有限公司 Automatic judgment method and device for trademark infringement, electronic equipment and storage medium
CN113744831A (en) * 2021-08-20 2021-12-03 中国联合网络通信有限公司成都市分公司 Online medical application purchasing system
CN116244458B (en) * 2022-12-16 2023-08-25 北京理工大学 Method for generating training, generating sample pair, searching model training and trademark searching
CN116542818A (en) * 2023-07-06 2023-08-04 图林科技(深圳)有限公司 Trademark monitoring and analyzing method based on big data technology

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101388022A (en) * 2008-08-12 2009-03-18 北京交通大学 Web portrait search method for fusing text semantic and vision content
CN101706964A (en) * 2009-08-27 2010-05-12 北京交通大学 Color constancy calculating method and system based on derivative structure of image
CN106228142A (en) * 2016-07-29 2016-12-14 西安电子科技大学 Face verification method based on convolutional neural networks and Bayesian decision
CN108038122A (en) * 2017-11-03 2018-05-15 福建师范大学 A kind of method of trademark image retrieval
CN109408600A (en) * 2018-09-25 2019-03-01 浙江工业大学 A kind of books based on data mining recommend purchaser's method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8744180B2 (en) * 2011-01-24 2014-06-03 Alon Atsmon System and process for automatically finding objects of a specific color


Non-Patent Citations (1)

Title
Trademark Image Retrieval Based on Multi-Feature Fusion; Shao Yan; China Masters' Theses Full-text Database, Information Science and Technology; 2013-06-15; full text *

Also Published As

Publication number Publication date
CN110110116A (en) 2019-08-09

Similar Documents

Publication Publication Date Title
CN110110116B (en) Trademark image retrieval method integrating deep convolutional network and semantic analysis
CN111488474B (en) Fine-grained freehand sketch image retrieval method based on attention enhancement
CN110135459B (en) Zero sample classification method based on double-triple depth measurement learning network
CN111626362B (en) Image processing method, device, computer equipment and storage medium
CN106295796A (en) Entity link method based on degree of depth study
Akrim et al. Classification of Tajweed Al-Qur'an on Images Applied Varying Normalized Distance Formulas
CN110929498B (en) Method and device for calculating similarity of short text and readable storage medium
CN112966091B (en) Knowledge map recommendation system fusing entity information and heat
CN111324765A (en) Fine-grained sketch image retrieval method based on depth cascade cross-modal correlation
CN111666766A (en) Data processing method, device and equipment
An et al. Hypergraph propagation and community selection for objects retrieval
CN110717090A (en) Network public praise evaluation method and system for scenic spots and electronic equipment
CN111222847A (en) Open-source community developer recommendation method based on deep learning and unsupervised clustering
CN113065409A (en) Unsupervised pedestrian re-identification method based on camera distribution difference alignment constraint
CN110705384B (en) Vehicle re-identification method based on cross-domain migration enhanced representation
CN111368066B (en) Method, apparatus and computer readable storage medium for obtaining dialogue abstract
CN113535949B (en) Multi-modal combined event detection method based on pictures and sentences
CN108717436B (en) Commodity target rapid retrieval method based on significance detection
CN114332519A (en) Image description generation method based on external triple and abstract relation
CN110347812A (en) A kind of search ordering method and system towards judicial style
JPH11250106A (en) Method for automatically retrieving registered trademark through the use of video information of content substrate
CN113516118B (en) Multi-mode cultural resource processing method for joint embedding of images and texts
CN114780862A (en) User interest vector extraction method, extraction model and computer system
CN113535928A (en) Service discovery method and system of long-term and short-term memory network based on attention mechanism
CN112650869A (en) Image retrieval reordering method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant