CN108764245B - Method for improving similarity judgment accuracy of trademark graphs - Google Patents
- Publication number
- CN108764245B (application CN201810291376.0A)
- Authority
- CN
- China
- Prior art keywords
- trademark
- similarity
- result picture
- accuracy
- picture sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
- G06V10/507—Summing image-intensity values; Histogram projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/754—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries involving a deformation of the sample pattern or of the reference pattern; Elastic matching
Abstract
The invention discloses a method for improving the accuracy of trademark graph similarity judgment, in which the primary ranking result produced by any trademark retrieval method can be retrieved a second or even further times, and the final ranking is made according to the set produced by the final retrieval. To further improve accuracy, an extraction and comparison scheme is also provided that makes the primary ranking result more stable and accurate, giving a sound basis for the subsequent secondary and further retrievals.
Description
Technical Field
The invention belongs to the technical field of trademark retrieval, and particularly relates to a method for improving judgment accuracy of trademark graph similarity.
Background
Trademarks are an indispensable element of a commercial economy: the number of trademarks applied for each year reaches the million level, and the accumulated trademark data likewise runs into the millions. For such a huge population, if humans judge or examine whether two trademarks are similar, and to what degree, by eye and subjective judgment alone, both the turnaround time and the objective stability of the results leave great room for improvement.
At present, how to retrieve and match trademarks with computers and other electronic equipment has become a hot research problem in the field. Various computer algorithms and multimedia technologies are continually being tried to realize automatic retrieval of trademark graphs, and a great deal of experimentation and improvement has been devoted to the feature extraction and comparison stages.
Disclosure of Invention
The invention aims to provide a method capable of improving the similarity judgment accuracy of trademark graphs.
The technical scheme provided by the invention is as follows: a method for improving the accuracy of trademark graph similarity judgment comprises the following steps:
S101, sorting the result pictures of a trademark similarity retrieval by their similarity to the trademark to be detected, obtaining a primary result picture sequence S0 = [I01, I02, …, I0k, …];
S102, taking each of the first k pictures of the primary result picture sequence of S101 as a new trademark to be detected and performing similarity sorting again, m times in total, where m and k are integers greater than or equal to 1, obtaining secondary result picture sequences Sm = [Im1, Im2, …, Imk, …];
S103, merging the primary result picture sequence of S101 and the secondary result picture sequences of S102 into a combined result picture set S = S0 ∪ S1 ∪ … ∪ Sm = {st};
S104, taking each distinct picture in the combined result picture set S as a unit, and sorting the units to obtain the final result picture sequence.
As an improvement of the present invention, in S104 a rank-related weight matrix w0 = [wij] is defined, where wij = max(2 − (i + j)/L, 1) and L ≥ 2k is a weight parameter. The weights of each unit st in the set S are superposed, W(st) = Σ wij over all occurrences Iij = st, and S is sorted by W(st) from large to small to obtain the final result picture sequence.
As an improvement of the present invention, the obtaining of the primary result picture sequence includes the following four steps:
S201, establishing a trademark graphic database;
S202, extracting features of the trademark to be detected and of the trademarks in the trademark graphic database;
S203, comparing the similarity of the features extracted in S202 with the features extracted from the trademark graphic database;
and S204, sequencing the comparison results in the S203 according to the sequence of the similarity from high to low.
As a modification of the present invention, the trademark graphic database of step S201 at least includes all registered trademarks for commodities of the same and/or similar kind as the commodity to which the trademark to be detected belongs.
As an improvement of the present invention, the feature extraction in step S202 is based on at least one of HOG (Histogram of Oriented Gradients), LBP (Local Binary Patterns), Haar (Haar-like features), SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), Hash (hash algorithms), CNN (Convolutional Neural Networks), LSD (Local Statistical Distribution) and GSD (Global Statistical Distribution).
As a modification of the present invention, a step S301 of preprocessing the to-be-detected trademark and/or the graphics of the trademark in the trademark graphics database is further provided before the feature extraction in step S202.
As an improvement of the invention, the preprocessing step comprises at least one of translation, stretching, compression, enlargement, reduction, segmentation and rotation.
As an improvement of the present invention, the similarity comparison in step S203 is performed by performing feature matching in a manner corresponding to the feature extraction.
As a refinement of the present invention, a step S302 of excluding the mismatching pairs is further provided after the feature matching.
As a refinement of the present invention, the step S302 employs a RANSAC-based algorithm to exclude mismatching pairs.
Advantageous effects:
the method is novel and practical in function, secondary and even multiple times of retrieval can be carried out on the primary ranking result generated by any trademark method, and final ranking is carried out according to the set generated by final retrieval;
meanwhile, to further improve accuracy, an extraction and comparison scheme is provided that makes the primary ranking result more stable and accurate, giving a sound basis for the subsequent secondary and further retrievals;
setting the trademark graphic database of step S201 to include at least all registered trademarks for commodities of the same and/or similar kind as the commodity to which the trademark to be detected belongs both satisfies the needs of trademark similarity retrieval and effectively reduces the database size and the amount of computation, saving cost and improving efficiency;
a step S301 of preprocessing the trademark to be detected and/or the trademark graphics in the database is arranged before the feature extraction of step S202; translating, stretching, compressing, enlarging, reducing, segmenting or rotating the trademark to be detected as appropriate improves comparison accuracy (for example, segmenting characters and graphics so that each is compared with its counterpart is an effective preprocessing method);
and a step S302 of eliminating wrong matching pairs is arranged after the feature matching, which helps reduce the redundant computation caused by wrong matching pairs, improving both efficiency and accuracy.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a table of the specification definitions of the multi-scale sliding windows in embodiment 5 of the present invention;
FIG. 3 is a schematic diagram of the weighted superposition of multi-scale similarity windows in embodiment 5 of the present invention;
FIG. 4 is a definition diagram of the initial threshold matrix T0 in embodiment 5 of the present invention;
FIG. 5 is a schematic diagram of the region similarity calculation in embodiment 5 of the present invention.
Detailed Description
Embodiments of the present invention are further described below with reference to the accompanying drawings.
Example 1
As shown in fig. 1, the method for improving the accuracy of determining similarity of trademark graphics in this embodiment includes:
S101, sorting the result pictures of a trademark similarity retrieval by their similarity to the trademark to be detected from high to low, obtaining a primary result picture sequence S0 = [I01, I02, …, I0k, …];
S102, taking each of the first k pictures of the primary result picture sequence of S101 as a new trademark to be detected and performing similarity sorting again, m times in total, where m and k are integers greater than or equal to 1, obtaining secondary result picture sequences Sm = [Im1, Im2, …, Imk, …];
S103, merging the primary result picture sequence of S101 and the secondary result picture sequences of S102 into a combined result picture set S = S0 ∪ S1 ∪ … ∪ Sm = {st};
S104, taking each distinct picture in the combined result picture set S as a unit, and sorting the units to obtain the final result picture sequence.
In this embodiment, an existing trademark retrieval software is used in step S101 to obtain the primary result picture sequence. In step S104, a rank-related weight matrix w0 = [wij] is defined, where wij = max(2 − (i + j)/L, 1) and L ≥ 2k is a weight parameter. The weights of each unit st in the set S are superposed, W(st) = Σ wij over all occurrences Iij = st, and S is sorted by W(st) from large to small to obtain the final result picture sequence.
The method integrates the ranking results of multiple retrievals of related images: pictures that occur more often carry more weight, and pictures ranked nearer the front carry more weight. The correlation between images is thus fully exploited, and the accuracy of trademark graph similarity judgment is improved.
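As one concrete reading of steps S101 to S104, the aggregation above can be sketched in a few lines of Python. Here `search` is a hypothetical stand-in for the existing trademark retrieval software mentioned above, and running one secondary retrieval per top-k primary picture is an assumption about how the m retrievals of S102 are scheduled:

```python
def aggregate_rankings(query, search, k=5, L=None):
    """Re-retrieve with the top-k primary results, then rank every
    picture seen by its accumulated weight w_ij (S101-S104)."""
    if L is None:
        L = 2 * k                         # the text requires L >= 2k
    primary = search(query)               # S101: primary sequence S0
    sequences = [primary]
    for pic in primary[:k]:               # S102: secondary retrievals
        sequences.append(search(pic))
    weights = {}                          # S103/S104: union + superposition
    for i, seq in enumerate(sequences):
        for j, pic in enumerate(seq):
            # w_ij = max(2 - (i + j) / L, 1): earlier retrievals and
            # higher ranks contribute more weight per occurrence
            weights[pic] = weights.get(pic, 0.0) + max(2 - (i + j) / L, 1.0)
    return sorted(weights, key=weights.get, reverse=True)
```

A picture that appears in several result sequences accumulates one wij term per occurrence, so frequent and highly ranked pictures rise to the front of the final sequence.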
Example 2
The difference between this embodiment 2 and embodiment 1 is that the obtaining of the primary result picture sequence includes the following four steps:
S201, establishing a trademark graphic database;
S202, extracting features of the trademark to be detected and of the trademarks in the trademark graphic database;
S203, comparing the similarity of the features extracted in S202 with the features extracted from the trademark graphic database;
and S204, sequencing the comparison results in the S203 according to the sequence of the similarity from high to low.
In this embodiment, the feature extraction of step S202 is based on existing corner detection together with LSD (Local Statistical Distribution) and GSD (Global Statistical Distribution) features; the similarity comparison of step S203 is likewise implemented with an existing corner matching method corresponding to the feature extraction. After corner matching, candidate similar regions are segmented, and feature extraction and corresponding feature matching based on HOG (Histogram of Oriented Gradients) and Hash (hash algorithms) are performed on the segmented regions, yielding a more accurate primary result picture sequence.
Example 3
In this embodiment 3, on the basis of embodiment 2, a step S301 of preprocessing the trademark to be detected and the trademark graphics in the trademark graphic database is provided before the feature extraction of step S202; the preprocessing comprises translation, stretching, compression, enlargement, reduction, segmentation, rotation and the like as required. A step S302 of eliminating wrong matching pairs is provided after the feature matching; step S302 adopts a RANSAC-based algorithm to exclude mismatching pairs.
Example 4
Embodiment 4 is based on embodiment 3: the trademark graphic database of step S201 at least includes all registered trademarks for commodities of the same and similar kind as the commodity to which the trademark to be detected belongs.
Example 5
Embodiment 5 is a modification of embodiment 4, in which the primary result picture sequence is obtained in the following manner:
First, in S201, a database D = {D1, D2, …, DN} containing N trademark images is created, and a trademark Q to be retrieved is input.
Then, multi-scale feature extraction is performed in S202:
1. Define the specifications and sliding steps of the multi-scale sliding windows. For an input image Iw×h, the scales of the sliding windows are defined as shown in FIG. 2 (in this embodiment σ1 = 0.8, σ2 = 0.6, σ3 = 0.4), with sliding step parameter μ (in this embodiment μ = 0.1); the horizontal sliding step is stepx = wμ and the vertical step is stepy = hμ.
2. According to the multi-scale sliding window sizes defined above, each sliding window is slid over the image Iw×h, starting from the upper-left corner and moving from left to right and top to bottom with steps stepx and stepy, yielding a set of partial window images (t in total) R = {Ri}, i = 1, 2, …, t.
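A minimal sketch of the window enumeration in steps 1 and 2. The exact specification table of FIG. 2 is not reproduced above, so applying each scale factor σ to both image dimensions is an assumption:

```python
def sliding_windows(w, h, scales=(0.8, 0.6, 0.4), mu=0.1):
    """Enumerate (x, y, win_w, win_h) windows over a w x h image,
    scanning left-to-right, top-to-bottom with steps w*mu and h*mu."""
    step_x, step_y = max(1, int(w * mu)), max(1, int(h * mu))
    windows = []
    for s in scales:
        win_w, win_h = int(w * s), int(h * s)
        for y in range(0, h - win_h + 1, step_y):
            for x in range(0, w - win_w + 1, step_x):
                windows.append((x, y, win_w, win_h))
    return windows
```

For a 100×100 image with the defaults this yields 9 + 25 + 49 = 83 windows, each fully inside the image.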
3. For each partial window image Ri obtained in step 2, extract the regional image feature fi:
a. For any image window Ri, compute the gradients in the horizontal and vertical directions:
the calculation is [Gh, Gv] = gradient(Ri); using the directional template [−1, 0, 1], compute the horizontal gradient Gh(x, y) and the vertical gradient Gv(x, y) of every pixel (x, y) of Ri.
The direction angle of the point (x, y) is θ = arctan(Gv/Gh), taking values in 0-360°.
b, quantizing the gradient direction to obtain a gradient direction histogram:
The gradient direction is quantized into its two adjacent bins, that is, one direction is represented by its components projected onto the two adjacent bin directions. For example, if the gradient direction of a pixel (x, y) is θ(x, y) and the two adjacent bin directions are θk and θk+1, the component quantized to θk is proportional to (θk+1 − θ)/(θk+1 − θk) and the component quantized to θk+1 is proportional to (θ − θk)/(θk+1 − θk). All gradient directions obtained in step a are quantized by this fuzzy quantization method, and the fuzzy gradient directions of all pixels are accumulated to obtain the gradient direction histogram.
c. Calculate the normalized gradient direction histogram: each bin value is divided by the sum over all bins.
d, histogram feature coding:
obtaining R through step ciNormalized histogram ofWherein 0 < hujAnd (3) < 1, j ═ 0, 1.. 7, and the floating point data are coded in order to save computer computing resources.
After histogram normalization, the quantization intervals (0, 0.098), (0.098, 0.134), (0.134, 0.18), (0.18, 0.24), (0.24, 1) are calculated according to the principle that the gradient points are uniformly distributed over the intervals; the intervals were obtained by statistical experiments on the current sample set. Data falling in these 5 intervals are encoded as 0000, 0001, 0011, 0111 and 1111 respectively. After encoding, the codewords of the bins are concatenated into a binary string of length 4 × 8 = 32 bits, which is fi.
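Steps a-d can be sketched as follows on a nested-list grayscale image (pure Python for clarity; a real implementation would use NumPy). Weighting each pixel's fuzzy-quantized direction by its gradient magnitude is an assumption, since the text does not state the accumulation weight:

```python
import math

BOUNDS = (0.098, 0.134, 0.18, 0.24)        # interval edges from the text
CODES = ("0000", "0001", "0011", "0111", "1111")

def encode_window(img):
    """Gradient-orientation histogram (8 bins, fuzzy quantization),
    normalized and encoded into a 32-bit binary string f_i."""
    h, w = len(img), len(img[0])
    hist = [0.0] * 8
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gh = img[y][x + 1] - img[y][x - 1]     # template [-1, 0, 1]
            gv = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gh, gv)
            if mag == 0:
                continue
            theta = math.degrees(math.atan2(gv, gh)) % 360
            k = theta / 45.0                       # position among 8 bins
            lo, frac = int(k) % 8, k - int(k)
            hist[lo] += mag * (1 - frac)           # fuzzy split between
            hist[(lo + 1) % 8] += mag * frac       # the two adjacent bins
    total = sum(hist) or 1.0
    hist = [v / total for v in hist]               # step c: normalization
    # step d: 4 bits per bin according to the quantization intervals
    return "".join(CODES[sum(v >= b for b in BOUNDS)] for v in hist)
```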
Then, similar-region detection and segmentation are performed in S203:
e, global inter-scale feature window matching
Taking the retrieval image A and any image B in the database as an example: for an arbitrary sliding window Ai in A, traverse all windows Bj, j = k1, k2, …, of B satisfying the similarity-possibility conditions and calculate the similarity distances dij; find the most similar window, with distance dmin-i. If the similarity distance is within the similarity threshold, i.e. dmin-i < Tsim, the pair of similar windows is marked; Tsim is an empirical value, about 0.4 to 0.6 in this embodiment.
Here the similarity distance is calculated as follows: let the encoded binary feature string of sliding window Ai be fi and that of sliding window Bj be gj; the similarity distance dij between Ai and Bj is then the normalized Hamming distance dij = α Σk fi(k) ⊕ gj(k), where fi(k) denotes the k-th bit of fi, gj(k) the k-th bit of gj, ⊕ the exclusive-or operation, and α the inverse of the common length of fi and gj.
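On the 32-bit strings produced above, this distance is simply a normalized Hamming distance (a sketch, with α = 1/32 for 32-bit codes):

```python
def similarity_distance(f, g):
    """Normalized Hamming distance between two equal-length bit strings."""
    assert len(f) == len(g)
    # alpha = 1/len; the XOR of bit characters is realized as an inequality test
    return sum(a != b for a, b in zip(f, g)) / len(f)
```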
The similar possibility conditions here are as follows:
(1) The center of window Bj must lie within a certain range of the center of Ai. The allowable offset range is u = 0.5 (window center positions and offsets are measured as fractions of the image width and height; here the allowable offset is one half of the width or height, with a suggested range of 0.4 to 0.6), i.e. |x̄Bj − x̄Ai| ≤ u and |ȳBj − ȳAi| ≤ u, where x̄ and ȳ are the center coordinates normalized by the image width and height.
(2) Let rAi be the aspect ratio of Ai and rBj the aspect ratio of Bj; rAi and rBj must be close to each other, i.e. similar windows must have similar aspect ratios.
The above operation yields the matching set {Ai : Bj} of similar windows between A and B. Because the windows are looked up globally across scales, there may be matching pairs that do not conform to spatial consistency; all these results are subsequently screened for correct matches.
f, dividing the candidate similar region:
1. Eliminate wrong matches with a RANSAC algorithm based on a scale-space consistency model.
The search and matching across scales in the global range finds some correct matching windows, but it also includes some wrong matches, either scale matching errors or position matching errors; these are eliminated by the scale-space consistency method.
An improved RANSAC (random sample consensus) algorithm is adopted to eliminate wrong matching pairs and retain the pairs that are consistent in scale and spatial position, as follows:
(1) For the set of matching data {Ai : Bj}, compute a transformation matrix L from any one pair of matching windows and denote it the model M, where the model is defined as follows:
Transformation model: let a pair of matching windows be {(x1, y1), (x1′, y1′)} : {(x2, y2), (x2′, y2′)}, where (x1, y1), (x1′, y1′) are the upper-left and lower-right corner coordinates of window Ai and (x2, y2), (x2′, y2′) those of window Bj; the two corner correspondences define a spatial transformation model (a scaling plus translation) from which L can be solved.
(2) Calculate the projection error between every pair in the data set and the model M; if the error is smaller than a threshold, add the pair to the inlier set I;
(3) if the number of elements in the current inlier set I is greater than that of the best inlier set I_best, update I_best = I;
(4) traverse all data in the data set, repeating the above steps;
(5) the samples in the best inlier set I_best are the correct matching samples, finally yielding the correct matching sample set I_best = {Ai : Bj}.
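A simplified illustration of the screening in steps (1)-(5). Because the exact form of the matrix L is not fully given above, a per-axis scale-plus-translation model fitted from one window pair's corner points is assumed here:

```python
def ransac_windows(matches, err_thresh=5.0):
    """matches: list of ((x1, y1, x1b, y1b), (x2, y2, x2b, y2b)) pairs of
    window corner tuples. Returns the largest consistent subset I_best."""
    best = []
    for a, b in matches:                   # (4): try every pair as the seed
        if a[2] == a[0] or a[3] == a[1]:
            continue                       # degenerate window, skip
        # (1): fit per-axis scale and translation from the seed's corners
        sx = (b[2] - b[0]) / (a[2] - a[0])
        sy = (b[3] - b[1]) / (a[3] - a[1])
        tx, ty = b[0] - sx * a[0], b[1] - sy * a[1]
        inliers = []
        for c, d in matches:               # (2): projection error vs. model
            err = max(abs(sx * c[0] + tx - d[0]), abs(sy * c[1] + ty - d[1]),
                      abs(sx * c[2] + tx - d[2]), abs(sy * c[3] + ty - d[3]))
            if err < err_thresh:
                inliers.append((c, d))
        if len(inliers) > len(best):       # (3): keep the best inlier set
            best = inliers
    return best                            # (5): correct matching samples
```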
2. Segment the similar regions according to an adaptive threshold.
The screening above essentially eliminates the wrong matches. The correct matching windows are then quantitatively weighted and superposed, counting for each anchor point (the center point of a grid cell) the number of similar windows covering it: the more similar a region, the more similar windows cover its anchor points.
The weight of each pair of windows is determined by its similarity distance: the smaller the similarity distance, the larger the weight, and vice versa, with the weight centered around 1. The significance of the weighting is that matching windows with higher similarity are more likely to be correct and are superposed with a weight greater than 1, while matching windows with lower similarity are less likely to be correct and are superposed with a weight smaller than 1.
After the similar-window superposition result shown in FIG. 3 is obtained (a deeper color in the diagram indicates a smaller superposed value), the similar regions are segmented according to an adaptive threshold matrix; the adaptive threshold depends on the initial threshold matrix at each position and on the sizes and number of the similar windows obtained by the screening above.
(1) For any pair of matching windows {(x1, y1), (x1′, y1′)} : {(x2, y2), (x2′, y2′)} in I_best = {Ai : Bj} (where (x1, y1), (x1′, y1′) are the upper-left and lower-right corner coordinates of window Ai and (x2, y2), (x2′, y2′) those of window Bj), with similarity distance dij, define the weighting factor ωij = min(2, 2.67 − 3.33 dij); the superposed value at each anchor point is then the sum of ωij over all marked windows covering it.
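The weighting factor is a clamped linear map of the similarity distance; note that it crosses 1 near d ≈ 0.5, matching the Tsim range above:

```python
def overlap_weight(d):
    """omega_ij = min(2, 2.67 - 3.33 * d): greater than 1 for close matches
    (small similarity distance), less than 1 for borderline ones."""
    return min(2.0, 2.67 - 3.33 * d)
```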
(2) As shown in FIG. 4, define the initial threshold matrix T0.
The value of T0 depends on the specifications of the sliding windows. Let sA be the total area of all windows Ai belonging to I_best = {Ai : Bj}; the adaptive threshold matrix for A is obtained by scaling T0 according to sA. Similarly, let sB be the total area of all windows Bj in I_best; the adaptive threshold matrix for B is obtained by scaling T0 according to sB. Here κ = 0.2 and α = 0.7 are empirical values that should be adjusted adaptively with the sliding-window specification parameters.
The similar-region partition matrix is then obtained by comparing the superposition result against the adaptive threshold matrix; the non-zero part of the matrix represents the candidate similar region in the image.
Finally, in S204, the primary results are sorted according to the similarity measure of the local similar regions:
for the CA obtained above10×10And CB10×10The similar region shown in (1) is divided into the similar region ROI of the A pictureAAnd similar region ROI of B pictureBMatching of similar windows in the region is performed according to the method in fig. 5, and the search method is local neighborhood search. The method comprises the following steps:
for ROIAArbitrary sliding window a in (1)iTraversing the ROI of the image in the databaseBAll windows B meeting the similar possibility conditionj,j=k1,k2,., the calculated similarity distance isFind the most similar windowIf the similarity distance is within the similarity threshold, then the pair of similarity windows is marked, i.e. dmin-i<Tsim,TsimThe empirical value is about 0.4 to 0.6 in this example.
Here the similarity distance is calculated as before: let the encoded binary feature string of sliding window Ai be fi and that of sliding window Bj be gj; the similarity distance dij between Ai and Bj is the normalized Hamming distance dij = α Σk fi(k) ⊕ gj(k), where fi(k) denotes the k-th bit of fi, gj(k) the k-th bit of gj, ⊕ the exclusive-or operation, and α the inverse of the common length of fi and gj.
The similar possibility conditions here are as follows:
(1) The center of window Bj must lie within a certain range of the center of Ai; the allowable offset range is u = 0.2 (suggested range 0.1 to 0.3), i.e. |x̄Bj − x̄Ai| ≤ u and |ȳBj − ȳAi| ≤ u, where the positions of Ai and Bj are relative positions within the ROI.
(2) Let rAi be the aspect ratio of Ai and rBj that of Bj; rAi and rBj must be close to each other, i.e. similar windows must have similar aspect ratios.
The above operation yields the matching set {Ai : Bj} of similar windows between ROIA and ROIB.
The similarity of a sliding window within the ROI is attributed to the window's center point. As shown in FIG. 5, pA(u, v) is the center point of one of the windows contained in region A; the similarity of the point is the mean of the similarity distances of all windows centered on it.
the similar distance of the two ROI areas in AB is then:
λ=max(0.2(90/(nA+nB))+0.8,1)
wherein n isA、nBAre respectively ROIA、ROIBIncluding the number of window center points, λ is a similar area parameter, and nA、nBIn inverse proportion, the larger the total area of similar regions, the smaller λ.
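The similar-area parameter λ can be computed directly; it shrinks toward its floor of 1 as the similar regions grow:

```python
def area_parameter(n_a, n_b):
    """lambda = max(0.2 * (90 / (nA + nB)) + 0.8, 1)."""
    return max(0.2 * (90 / (n_a + n_b)) + 0.8, 1.0)
```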
For the retrieval image Q and any image Di (i = 1, 2, …, N) in the database D = {D1, D2, …, DN}, the similarity distance di is calculated; the results are sorted by similarity distance from small to large and returned.
The present invention is not limited to the above-described embodiments, and various changes may be made by those skilled in the art, and any changes equivalent or similar to the present invention are intended to be included within the scope of the claims.
Claims (9)
1. A method for improving the accuracy of judging similarity of trademark graphs is characterized by comprising the following steps:
S101, sorting the result pictures of a trademark similarity retrieval by their similarity to the trademark to be detected, obtaining a primary result picture sequence S0 = [I01, I02, …, I0k, …];
S102, taking each of the first k pictures of the primary result picture sequence of S101 as a new trademark to be detected and performing similarity sorting again, m times in total, where m and k are integers greater than or equal to 1, obtaining secondary result picture sequences Sm = [Im1, Im2, …, Imk, …];
S103, merging the primary result picture sequence of S101 and the secondary result picture sequences of S102 into a combined result picture set S = S0 ∪ S1 ∪ … ∪ Sm = {st};
S104, taking each distinct picture in the combined result picture set S as a unit, and sorting the units to obtain the final result picture sequence; wherein in S104 a rank-related weight matrix w0 = [wij] is defined, wij = max(2 − (i + j)/L, 1), L ≥ 2k being a weight parameter, the weights of each unit st in S are superposed, and S is sorted by the superposed weight from large to small to obtain the final result picture sequence.
2. The method for improving trademark graph similarity judgment accuracy, as claimed in claim 1, wherein the obtaining of the primary result picture sequence comprises the following four steps:
s201, establishing a trademark graphic database;
s202, extracting features of the trademark to be detected and the trademark in the trademark graphic database;
s203, respectively carrying out similarity comparison on the features extracted in the S202 and the features extracted in the trademark graphic database;
and S204, sequencing the comparison results in the S203 according to the sequence of the similarity from high to low.
3. The method for improving the accuracy of determining the similarity of trademark graphs as claimed in claim 2, wherein the database of trademark graphs of step S201 includes at least all registered trademarks of commodities that are the same and/or similar to the kind of commodity to which the trademark to be detected belongs.
4. The method as claimed in claim 3, wherein the feature extraction in step S202 is based on at least one of HOG, LBP, Haar, SIFT, SURF, ORB, Hash, CNN, LSD and GSD.
5. The method for improving the accuracy of determining the similarity of trademark graphs as claimed in claim 4, wherein a step S301 of preprocessing the graphs of the trademarks to be detected and/or the trademarks in the trademark graph database is further provided before the feature extraction in the step S202.
6. The method of claim 5, wherein the preprocessing step includes at least one of translation, stretching, compression, enlargement, reduction, segmentation, and rotation.
7. The method for improving the accuracy of determining the similarity of trademark graphics of claim 6, wherein the similarity comparison in step S203 is performed by feature matching in a manner corresponding to the feature extraction.
8. The method for improving the accuracy of trademark graph similarity determination of claim 7, wherein a step S302 of eliminating the mismatching pairs is further provided after the feature matching.
9. The method as claimed in claim 8, wherein the step S302 is performed by using RANSAC-based algorithm to exclude mismatching pairs.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810291376.0A CN108764245B (en) | 2018-04-03 | 2018-04-03 | Method for improving similarity judgment accuracy of trademark graphs |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810291376.0A CN108764245B (en) | 2018-04-03 | 2018-04-03 | Method for improving similarity judgment accuracy of trademark graphs |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108764245A CN108764245A (en) | 2018-11-06 |
CN108764245B true CN108764245B (en) | 2022-04-29 |
Family
ID=63981056
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810291376.0A Active CN108764245B (en) | 2018-04-03 | 2018-04-03 | Method for improving similarity judgment accuracy of trademark graphs |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108764245B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110196917B (en) * | 2019-05-30 | 2021-04-06 | 厦门一品威客网络科技股份有限公司 | Personalized LOGO format customization method, system and storage medium |
CN111898434B (en) * | 2020-06-28 | 2021-03-19 | 江苏柏勋科技发展有限公司 | Video detection and analysis system |
CN112861656B (en) * | 2021-01-21 | 2024-05-14 | 平安科技(深圳)有限公司 | Trademark similarity detection method and device, electronic equipment and storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1581159A (en) * | 2003-08-04 | 2005-02-16 | 中国科学院自动化研究所 | Trade-mark searching method |
CN102622420A (en) * | 2012-02-22 | 2012-08-01 | 哈尔滨工程大学 | Trademark image retrieval method based on color features and shape contexts |
CN103258037A (en) * | 2013-05-16 | 2013-08-21 | 西安工业大学 | Trademark identification searching method for multiple combined contents |
CN104021229A (en) * | 2014-06-25 | 2014-09-03 | 厦门大学 | Shape representing and matching method for trademark image retrieval |
CN104156413A (en) * | 2014-07-30 | 2014-11-19 | 中国科学院自动化研究所 | Trademark density based personalized trademark matching recognition method |
CN104462382A (en) * | 2014-12-11 | 2015-03-25 | 北京中细软移动互联科技有限公司 | Trademark image inquiry method |
CN106776896A (en) * | 2016-11-30 | 2017-05-31 | 董强 | Fast graphic-fusion image retrieval method
CN107247752A (en) * | 2017-05-27 | 2017-10-13 | 西安电子科技大学 | Image retrieval method based on corner description
CN107506429A (en) * | 2017-08-22 | 2017-12-22 | 北京联合大学 | Image re-ranking method based on salient region and similarity fusion
- 2018-04-03: Application CN201810291376.0A filed in China (CN); patent CN108764245B, legal status Active
Also Published As
Publication number | Publication date |
---|---|
CN108764245A (en) | 2018-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107256262B (en) | Image retrieval method based on object detection | |
CN108763262B (en) | Trademark graph retrieval method | |
CN104778242B (en) | Cartographical sketching image search method and system based on image dynamic partition | |
CN111126482B (en) | Remote sensing image automatic classification method based on multi-classifier cascade model | |
CN108764245B (en) | Method for improving similarity judgment accuracy of trademark graphs | |
CN104850822B (en) | Leaf identification method under simple background based on multi-feature fusion | |
Zagoris et al. | Image retrieval systems based on compact shape descriptor and relevance feedback information | |
CN108932518A (en) | Shoe-print image feature extraction and retrieval method based on a visual bag-of-words model | |
Zhu et al. | Deep residual text detection network for scene text | |
CN103995864B (en) | Image retrieval method and device | |
CN112163114B (en) | Image retrieval method based on feature fusion | |
Lan et al. | An edge-located uniform pattern recovery mechanism using statistical feature-based optimal center pixel selection strategy for local binary pattern | |
CN108694411B (en) | Method for identifying similar images | |
Chen et al. | Image retrieval based on quadtree classified vector quantization | |
CN108845999B (en) | Trademark image retrieval method based on multi-scale regional feature comparison | |
CN108897747A (en) | Brand logo similarity comparison method | |
CN108763261B (en) | Graph retrieval method | |
CN110674334B (en) | Near-repetitive image retrieval method based on consistency region deep learning features | |
JP6017277B2 (en) | Program, apparatus and method for calculating similarity between contents represented by set of feature vectors | |
CN108763265B (en) | Image identification method based on block retrieval | |
Bakheet et al. | Adaptive multimodal feature fusion for content-based image classification and retrieval | |
Mao et al. | Detection of artificial pornographic pictures based on multiple features and tree mode | |
CN108804499B (en) | Trademark image retrieval method | |
Altintakan et al. | An improved BOW approach using fuzzy feature encoding and visual-word weighting | |
CN108897746B (en) | Image retrieval method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||