CN117788850A - Trademark similarity evaluation method and device - Google Patents


Info

Publication number: CN117788850A
Application number: CN202410192592.5A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN117788850B (granted)
Inventor: 罗时民
Current and original assignee: Shenzhen Oushuitong Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Prior art keywords: feature, characteristic, trademark, trademark image, matrix
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application filed by Shenzhen Oushuitong Technology Co., Ltd.; priority to CN202410192592.5A

Landscapes

  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to a trademark similarity evaluation method and device in the technical field of image recognition. In the method, a server extracts a first feature vector from a first trademark image and a second feature vector from a second trademark image based on a NetVLAD model; acquires an image matching degree between the first trademark image and the second trademark image based on the first feature vector and the second feature vector; and then acquires the similarity between the first trademark image and the second trademark image based on the image matching degree and the text similarity between keywords extracted from the two images. By fusing the image matching degree and the text similarity in this way, the similarity between the first trademark image and the second trademark image can be evaluated more accurately.

Description

Trademark similarity evaluation method and device
Technical Field
The application relates to the technical field of image recognition, in particular to a trademark similarity evaluation method and device.
Background
There are significant differences between trademark images and ordinary images. The most common type of trademark consists of both text and a graphic pattern; trademarks of this type account for the largest share of applications on the market and are the type that trademark agencies process most often. Retrieving the group of trademark images with the highest similarity from a trademark image database can be attempted with a conventional image retrieval algorithm, but the retrieval results are unsatisfactory because trademark images have the following characteristics: (1) the pattern is simple, and the pixels are sparsely connected; (2) trademark images containing the same text may use fonts whose shapes differ greatly between images, yet still constitute similar trademarks; (3) different trademark images may contain different words that are homophones of each other, yet still constitute similar trademarks.
Therefore, there is an urgent need for a method capable of accurately judging the similarity between a trademark to be applied and an existing trademark.
Disclosure of Invention
The application provides a trademark similarity evaluation method and device, i.e., a method capable of accurately judging the similarity between a trademark to be applied for and an existing trademark. To achieve the above object, the present application provides the following solutions:
In a first aspect, the present application provides a trademark similarity evaluation method, the method including the following steps: based on a NetVLAD model, respectively extracting a first feature vector from the first trademark image and a second feature vector from the second trademark image; acquiring an image matching degree between the first trademark image and the second trademark image based on the first feature vector and the second feature vector; based on a CRNN algorithm, respectively extracting a first keyword from the first trademark image and a second keyword from the second trademark image; acquiring the text similarity between the first keyword and the second keyword by using an improved edit distance algorithm;
acquiring the similarity between the first trademark image and the second trademark image based on the image matching degree and the text similarity. The extracting of the first feature vector from the first trademark image based on the NetVLAD model includes the following steps: extracting a first feature matrix from the first trademark image based on the NetVLAD model, the first feature matrix comprising N layers of feature planes composed of feature plane elements; weighting the elements of each feature plane based on the feature values of each feature plane of the first feature matrix to obtain a weighted second feature matrix; and aggregating the feature plane elements of each layer in the second feature matrix to obtain a first feature vector capable of describing the second feature matrix.
Further, the extracting of the second feature vector from the second trademark image based on the NetVLAD model includes the following steps: extracting a third feature matrix from the second trademark image based on the NetVLAD model, the third feature matrix comprising N layers of feature planes composed of feature plane elements; weighting the elements of each feature plane based on the feature values of each feature plane of the third feature matrix to obtain a weighted fourth feature matrix; and aggregating the feature plane elements of each layer in the fourth feature matrix to obtain a second feature vector capable of describing the fourth feature matrix.
Further, the extracting of the first feature vector from the first trademark image and the second feature vector from the second trademark image based on the NetVLAD model includes the following step: preprocessing the first trademark image to obtain a first trademark image of a preset size.
Further, the aggregating each layer of feature plane elements in the second feature matrix to obtain a first feature vector capable of describing the second feature matrix includes the following steps: aggregating each layer of feature plane elements in the second feature matrix to obtain a plurality of one-dimensional feature vectors corresponding to the second feature matrix, wherein the one-dimensional feature vectors comprise all elements in each layer of feature plane; and clustering the plurality of one-dimensional feature vectors to obtain a first feature vector.
Further, the acquiring of the image matching degree between the first trademark image and the second trademark image based on the first feature vector and the second feature vector includes the following steps: acquiring the cosine distance between the first feature vector and the second feature vector using a cosine distance formula; and acquiring the image matching degree between the first trademark image and the second trademark image based on the cosine distance.
Further, the acquiring of the text similarity between the first keyword and the second keyword using an edit distance algorithm includes the following steps: acquiring the edit distance between the first keyword and the second keyword using the edit distance algorithm; and acquiring the text similarity between the first keyword and the second keyword based on the edit distance.
In a second aspect, the present application provides a trademark similarity evaluation device, the device comprising:
a feature information acquisition module, configured to respectively extract a first feature vector from the first trademark image and a second feature vector from the second trademark image based on the NetVLAD model, wherein the extraction of the first feature vector includes: extracting a first feature matrix from the first trademark image based on the NetVLAD model, the first feature matrix comprising N layers of feature planes composed of feature plane elements; weighting the elements of each feature plane based on the feature values of each feature plane of the first feature matrix to obtain a weighted second feature matrix; and aggregating the feature plane elements of each layer in the second feature matrix to obtain a first feature vector capable of describing the second feature matrix.
An image matching degree obtaining module, configured to obtain an image matching degree between the first trademark image and the second trademark image based on the first feature vector and the second feature vector;
the keyword acquisition module is used for respectively extracting a first keyword in the first trademark image and a second keyword in the second trademark image based on a CRNN algorithm;
the text similarity acquisition module is used for acquiring the text similarity between the first keyword and the second keyword by using an edit distance algorithm;
and the evaluation module is used for acquiring the similarity between the first trademark image and the second trademark image based on the image matching degree and the text similarity.
The beneficial effects brought by the technical solution provided by the application include:
The server extracts, based on a NetVLAD model, first feature information from a first trademark image and second feature information from a second trademark image; performs feature clustering on the first feature information and the second feature information respectively to obtain a first feature vector corresponding to the first feature information and a second feature vector corresponding to the second feature information; acquires the image matching degree between the first trademark image and the second trademark image based on the first feature vector and the second feature vector; extracts, based on a CRNN algorithm, a first keyword from the first trademark image and a second keyword from the second trademark image; acquires the text similarity between the first keyword and the second keyword using an edit distance algorithm; and acquires the similarity between the first trademark image and the second trademark image based on the image matching degree and the text similarity. By fusing the image matching degree and the text similarity, the similarity between the first trademark image and the second trademark image can be acquired more accurately.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart illustrating steps of a trademark similarity evaluation method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a first feature vector obtaining step according to an embodiment of the present application;
fig. 3 is a flowchart illustrating the text similarity acquisition steps in the embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
Embodiments of the present application are described in further detail below with reference to the accompanying drawings.
The embodiment of the application provides a trademark similarity evaluation method, which aims to solve the problem of accurately judging the similarity between a trademark to be applied for and an existing trademark.
In order to achieve the technical effects, the general idea of the application is as follows:
referring to fig. 1, a trademark similarity evaluation method includes the steps of:
s1, respectively extracting a first feature vector in a first trademark image and a second feature vector in a second trademark image based on a NetVLAD model;
Training the NetVLAD model on an existing trademark image training set yields a trained NetVLAD model. The extraction of the first feature vector from the first trademark image based on the NetVLAD model includes the following steps: extracting a first feature matrix from the first trademark image based on the NetVLAD model, the first feature matrix comprising N layers of feature planes composed of feature plane elements; weighting the elements of each feature plane based on the feature values of each feature plane of the first feature matrix to obtain a weighted second feature matrix; and aggregating the feature plane elements of each layer in the second feature matrix to obtain a first feature vector capable of describing the second feature matrix.
S2, acquiring the image matching degree between the first trademark image and the second trademark image based on the first feature vector and the second feature vector;
specifically, the cosine distance between the first feature vector and the second feature vector is acquired using a cosine distance formula, and the image matching degree between the first trademark image and the second trademark image is acquired based on the cosine distance.
S3, based on a CRNN algorithm, respectively extracting a first keyword from the first trademark image and a second keyword from the second trademark image;
S4, acquiring the text similarity between the first keyword and the second keyword using an edit distance algorithm;
S5, acquiring the similarity between the first trademark image and the second trademark image based on the image matching degree and the text similarity.
In the embodiment of the application, the server extracts, based on the NetVLAD model, first feature information from the first trademark image and second feature information from the second trademark image; performs feature clustering on each to obtain the corresponding first feature vector and second feature vector; acquires the image matching degree between the first trademark image and the second trademark image based on the two feature vectors; extracts, based on a CRNN algorithm, a first keyword from the first trademark image and a second keyword from the second trademark image; acquires the text similarity between the first keyword and the second keyword using an edit distance algorithm; and acquires the similarity between the first trademark image and the second trademark image based on the image matching degree and the text similarity. Because the image matching degree and the text similarity are fused, the similarity between the first trademark image and the second trademark image can be obtained more accurately.
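The patent states that the image matching degree and the text similarity are fused but does not give the fusion rule. As an illustration only, a weighted-sum fusion can be sketched as follows; the function name and the weight `alpha` are assumptions, not from the original:

```python
def fuse_similarity(image_match: float, text_sim: float, alpha: float = 0.5) -> float:
    """Fuse image matching degree and text similarity into one score.

    The weighted-sum rule and the weight ``alpha`` are illustrative
    assumptions; the patent only states that the two scores are fused.
    Both inputs are expected in [0, 1].
    """
    return alpha * image_match + (1.0 - alpha) * text_sim


# Example: strong image match, moderate text similarity
score = fuse_similarity(0.9, 0.6, alpha=0.5)
print(round(score, 2))  # 0.75
```

Other monotone fusions (e.g., taking the maximum, or learning the weight from labeled trademark pairs) would fit the description equally well.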
In one embodiment, as shown in fig. 2, step S1 includes:
s101, extracting a first feature matrix in a first trademark image based on a NetVLAD model;
s102, weighting elements of each layer of characteristic plane based on the characteristic value of each layer of characteristic plane of the first characteristic matrix to obtain a weighted second characteristic matrix;
First, the input image is preprocessed so that the input to the convolutional neural network has size 224×224×3, and the image is fed into the NetVLAD model to obtain X ∈ R^(N×W×H), the three-dimensional matrix extracted by the convolution layers, where N is the number of channels and W and H are respectively the length and width of each feature plane.
Assume x_kij ∈ X, let s be a three-dimensional weight matrix, and let x'_kij represent the weighted feature. Then b_k represents the weight of channel k, a_ij represents the weight assigned to each element (i, j) of the feature plane, and therefore s_kij = b_k · a_ij; that is, the weight parameter at point (i, j) of a plane is the product of the channel weight and the plane weight at that point, where i runs along the length of each feature plane and j runs along its width.
The weighted second feature matrix is given elementwise by:
x'_kij = s_kij · x_kij = b_k · a_ij · x_kij
wherein a_ij represents the weight assigned to each element of the feature plane and b_k represents the weight of each channel.
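The weighting s_kij = b_k · a_ij applied elementwise to X can be sketched with NumPy broadcasting; the tensor sizes and the random weights below are illustrative assumptions (in the patented method the weights come from the feature values of each plane):

```python
import numpy as np

N, W, H = 512, 7, 7          # channel count and feature-plane size (illustrative)
rng = np.random.default_rng(0)

X = rng.standard_normal((N, W, H))   # first feature matrix X ∈ R^(N×W×H)
b = rng.random(N)                    # b_k: per-channel weights
a = rng.random((W, H))               # a_ij: per-position plane weights

# s_kij = b_k * a_ij via broadcasting: (N,1,1) * (1,W,H) -> (N,W,H)
s = b[:, None, None] * a[None, :, :]

# Weighted second feature matrix: x'_kij = s_kij * x_kij
X_weighted = s * X

print(X_weighted.shape)  # (512, 7, 7)
```

Broadcasting keeps the weighting a single vectorized multiply instead of a triple loop over k, i, j.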
S103, aggregating the characteristic plane elements of each layer in the second characteristic matrix to obtain a first characteristic vector capable of describing the second characteristic matrix.
Aggregating each layer of feature plane elements in the second feature matrix to obtain a plurality of one-dimensional feature vectors corresponding to the second feature matrix, wherein the one-dimensional feature vectors comprise all elements in each layer of feature plane; clustering the plurality of one-dimensional feature vectors to obtain a first feature vector.
In the embodiment of the present application, weighting the elements of each feature plane based on the feature values of each feature plane of the first feature matrix gives a larger weight, during the subsequent local feature aggregation, to the region of the trademark image containing the logo, so that a feature vector that represents the trademark image as faithfully as possible is obtained.
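The aggregation step — flattening each feature plane into a one-dimensional vector of all its elements, then clustering those vectors into a single descriptor — can be sketched in a VLAD-like way. The fixed cluster centers, hard assignment, and residual-sum aggregation below are illustrative assumptions (NetVLAD itself learns soft-assignment weights end to end):

```python
import numpy as np

def aggregate(X_weighted: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """VLAD-style aggregation of an (N, W, H) weighted feature matrix.

    Each of the N feature planes is flattened into a 1-D vector of
    length W*H; each vector is assigned to its nearest cluster center
    and the residuals are summed per cluster.  The concatenated,
    L2-normalized residuals form the output feature vector.
    """
    N, W, H = X_weighted.shape
    vecs = X_weighted.reshape(N, W * H)                  # N one-dimensional vectors
    # Distance of every vector to every center: shape (N, K)
    d = np.linalg.norm(vecs[:, None, :] - centers[None, :, :], axis=2)
    assign = d.argmin(axis=1)                            # hard assignment
    vlad = np.zeros_like(centers)
    for k in range(centers.shape[0]):
        members = vecs[assign == k]
        if len(members):
            vlad[k] = (members - centers[k]).sum(axis=0)  # residual sum
    out = vlad.ravel()
    return out / (np.linalg.norm(out) + 1e-12)           # L2 normalization

rng = np.random.default_rng(1)
Xw = rng.standard_normal((64, 7, 7))       # weighted second feature matrix (assumed size)
centers = rng.standard_normal((8, 49))     # K = 8 cluster centers (assumed)
feat = aggregate(Xw, centers)
print(feat.shape)  # (392,)
```

The resulting fixed-length vector can be compared across images regardless of how many feature planes contributed to it.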
In one embodiment, step S2 includes:
s301, acquiring cosine distances between a first feature vector and a second feature vector by utilizing a cosine distance formula;
wherein the cosine distance formula is:
cos(X, Y) = (X · Y) / (‖X‖ · ‖Y‖)
wherein X is the first feature vector and Y is the second feature vector.
S302, acquiring the image matching degree between the first trademark image and the second trademark image based on the cosine distance.
In the embodiment of the application, the vector distance between the first feature vector and the second feature vector is calculated using the cosine distance formula, so that the image matching degree between the first trademark image and the second trademark image can be quantified.
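Steps S301–S302 can be sketched as follows. Treating the cosine similarity itself as the image matching degree is an assumption, since the patent does not spell out the mapping from cosine distance to matching degree:

```python
import math

def cosine_similarity(x: list[float], y: list[float]) -> float:
    """cos(X, Y) = (X · Y) / (||X|| * ||Y||)."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

def image_matching_degree(x: list[float], y: list[float]) -> float:
    # Assumed mapping: the matching degree is taken directly as the
    # cosine similarity of the two feature vectors.
    return cosine_similarity(x, y)

print(round(cosine_similarity([1.0, 0.0], [1.0, 0.0]), 3))  # 1.0
print(round(cosine_similarity([1.0, 0.0], [0.0, 1.0]), 3))  # 0.0
```

Identical vectors score 1, orthogonal vectors score 0, so the matching degree is directly comparable across trademark pairs.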
In an embodiment of the application, as shown in fig. 3, step S4 includes the following steps:
s501, acquiring an editing distance between a first keyword and a second keyword by using an editing distance algorithm;
the edit distance algorithm measures the similarity of two character strings according to the minimum operation required for converting one character string into another, namely, inserting, deleting or replacing the character strings. The smaller the edit distance, the more similar the two strings.
And calculating the similarity of the two character strings a and b, wherein the editing distance is ED (a and b), and the standardized editing distance is NED (a and b).
S502, based on the editing distance, acquiring the text similarity between the first keyword and the second keyword.
In the embodiment of the application, an edit distance algorithm is utilized to obtain the edit distance between the first keyword and the second keyword, and then the text similarity between the first keyword and the second keyword is quantified by using the edit distance.
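Steps S501–S502 can be sketched with the standard dynamic-programming Levenshtein distance. The normalization NED = ED / max(|a|, |b|) and the similarity 1 − NED are common conventions, assumed here because the patent does not give the formulas:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum insertions/deletions/substitutions."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))          # distances from "" to prefixes of b
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i       # prev = dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                        # deletion
                        dp[j - 1] + 1,                    # insertion
                        prev + (a[i - 1] != b[j - 1]))    # substitution
            prev = cur
    return dp[n]

def text_similarity(a: str, b: str) -> float:
    """1 - NED(a, b), with NED = ED / max(len) (assumed normalization)."""
    if not a and not b:
        return 1.0
    ned = edit_distance(a, b) / max(len(a), len(b))
    return 1.0 - ned

print(edit_distance("kitten", "sitting"))  # 3
```

Keeping only a single DP row reduces memory from O(mn) to O(n) without changing the result.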
It should be noted that, step numbers of each step in the embodiments of the present application do not limit the order of each operation in the technical solution of the present application.
In a second aspect, based on the same inventive concept as the embodiments of the trademark similarity evaluation method, an embodiment of the present application provides a trademark similarity evaluation device, comprising:
a feature information acquisition module, configured to respectively extract a first feature vector from the first trademark image and a second feature vector from the second trademark image based on the NetVLAD model; the feature information acquisition module further includes: a first feature matrix acquisition sub-module, configured to extract a first feature matrix from the first trademark image based on the NetVLAD model, the first feature matrix comprising N layers of feature planes composed of feature plane elements; a second feature matrix acquisition sub-module, configured to weight the elements of each feature plane based on the feature values of each feature plane of the first feature matrix to obtain a weighted second feature matrix; and a vector generation sub-module, configured to aggregate the feature plane elements of each layer in the second feature matrix to obtain a first feature vector capable of describing the second feature matrix.
An image matching degree obtaining module, configured to obtain an image matching degree between the first trademark image and the second trademark image based on the first feature vector and the second feature vector;
the keyword acquisition module is used for respectively extracting a first keyword in the first trademark image and a second keyword in the second trademark image based on a CRNN algorithm;
the text similarity acquisition module is used for acquiring the text similarity between the first keyword and the second keyword by using an edit distance algorithm;
and the evaluation module is used for acquiring the similarity between the first trademark image and the second trademark image based on the image matching degree and the text similarity.
In this device, the server extracts, based on the NetVLAD model, first feature information from the first trademark image and second feature information from the second trademark image; performs feature clustering on the first feature information and the second feature information respectively to obtain the corresponding first feature vector and second feature vector; acquires the image matching degree between the first trademark image and the second trademark image based on the first feature vector and the second feature vector; extracts, based on a CRNN algorithm, a first keyword from the first trademark image and a second keyword from the second trademark image; acquires the text similarity between the first keyword and the second keyword using an edit distance algorithm; and acquires the similarity between the first trademark image and the second trademark image based on the image matching degree and the text similarity. By fusing the image matching degree and the text similarity, the similarity between the first trademark image and the second trademark image can be acquired more accurately.
It should be noted that the trademark similarity evaluation device provided in the embodiment of the present application addresses the same technical problem, employs corresponding technical means, and achieves the same technical effects as the trademark similarity evaluation method, and operates on the same principle.
In a third aspect, embodiments of the present application provide a storage medium having stored thereon a computer program which, when executed by a processor, implements the trademark similarity assessment method mentioned in the first aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program running on the processor, and the processor implements the trademark similarity assessment method mentioned in the first aspect when executing the computer program.
It should be noted that in this application, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises an element.
The foregoing is merely a specific embodiment of the application to enable one skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A trademark similarity evaluation method, comprising the following steps:
based on a NetVLAD model, respectively extracting a first feature vector from a first trademark image and a second feature vector from a second trademark image;
acquiring an image matching degree between the first trademark image and the second trademark image based on the first feature vector and the second feature vector;
based on a CRNN algorithm, respectively extracting a first keyword from the first trademark image and a second keyword from the second trademark image;
acquiring the text similarity between the first keyword and the second keyword using an edit distance algorithm;
acquiring the similarity between the first trademark image and the second trademark image based on the image matching degree and the text similarity;
wherein the extracting of the first feature vector from the first trademark image based on the NetVLAD model comprises the following steps:
extracting a first feature matrix from the first trademark image based on the NetVLAD model, the first feature matrix comprising N layers of feature planes composed of feature plane elements;
weighting the elements of each feature plane based on the feature values of each feature plane of the first feature matrix to obtain a weighted second feature matrix;
and aggregating the feature plane elements of each layer in the second feature matrix to obtain a first feature vector capable of describing the second feature matrix.
2. The trademark similarity evaluation method of claim 1, wherein the extracting of the second feature vector from the second trademark image based on the NetVLAD model comprises the following steps:
extracting a third feature matrix in the second trademark image based on the NetVLAD model; the third feature matrix comprises N layers of feature planes which are composed of feature plane elements;
weighting each layer of characteristic plane elements based on the characteristic value of each layer of characteristic plane of the third characteristic matrix to obtain a weighted fourth characteristic matrix;
and aggregating each layer of characteristic plane elements in the fourth characteristic matrix to obtain a second characteristic vector capable of describing the fourth characteristic matrix.
3. The trademark similarity evaluation method of claim 1, wherein the extracting the first feature vector in the first trademark image and the second feature vector in the second trademark image based on the NetVLAD model includes the steps of:
and preprocessing the first trademark image to obtain a first trademark image with a preset size.
4. The trademark similarity evaluation method of claim 1, wherein the aggregating each layer of feature plane elements in the second feature matrix to obtain a first feature vector capable of describing the second feature matrix comprises the following steps:
aggregating each layer of feature plane elements in the second feature matrix to obtain a plurality of one-dimensional feature vectors corresponding to the second feature matrix, wherein the one-dimensional feature vectors comprise all elements in each layer of feature plane;
and clustering the plurality of one-dimensional feature vectors to obtain a first feature vector.
5. The trademark similarity evaluation method of claim 1, wherein the acquiring of the image matching degree between the first trademark image and the second trademark image based on the first feature vector and the second feature vector comprises the following steps:
acquiring the cosine distance between the first feature vector and the second feature vector using a cosine distance formula;
and acquiring the image matching degree between the first trademark image and the second trademark image based on the cosine distance.
6. The trademark similarity evaluation method of claim 1, wherein the acquiring of the text similarity between the first keyword and the second keyword using an edit distance algorithm comprises the following steps:
acquiring the editing distance between the first keyword and the second keyword by using an editing distance algorithm;
and based on the editing distance, acquiring the text similarity between the first keyword and the second keyword.
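The two steps of claim 6 can be sketched directly: a standard dynamic-programming Levenshtein edit distance, followed by normalization to a similarity in [0, 1]. The normalization 1 - d / max(len) is one common choice, not a formula stated in the claim.

```python
def edit_distance(s1, s2):
    """Levenshtein distance via the classic dynamic-programming table."""
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # delete all of s1's prefix
    for j in range(n + 1):
        dp[0][j] = j                      # insert all of s2's prefix
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

def text_similarity(s1, s2):
    """Normalize the edit distance to a similarity in [0, 1]
    (one common mapping; the patent does not fix the formula)."""
    if not s1 and not s2:
        return 1.0
    return 1.0 - edit_distance(s1, s2) / max(len(s1), len(s2))

print(edit_distance("kitten", "sitting"))  # 3
print(text_similarity("ACME", "ACMEX"))    # 0.8
```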
7. A trademark similarity evaluation device, the device comprising:
a feature information acquisition module, configured to extract a first feature vector from the first trademark image and a second feature vector from the second trademark image based on the NetVLAD model, the feature information acquisition module further comprising:
a first feature matrix acquisition sub-module, configured to extract a first feature matrix from the first trademark image based on the NetVLAD model, the first feature matrix comprising N layers of feature planes composed of feature-plane elements;
a second feature matrix acquisition sub-module, configured to weight the elements of each layer of feature plane based on the feature value of that layer of the first feature matrix, to obtain a weighted second feature matrix; and
a vector generation sub-module, configured to aggregate the feature-plane elements of each layer in the second feature matrix, to obtain a first feature vector that describes the second feature matrix;
an image matching degree acquisition module, configured to acquire the image matching degree between the first trademark image and the second trademark image based on the first feature vector and the second feature vector;
a keyword acquisition module, configured to extract a first keyword from the first trademark image and a second keyword from the second trademark image based on a CRNN algorithm;
a text similarity acquisition module, configured to acquire the text similarity between the first keyword and the second keyword using an edit distance algorithm; and
an evaluation module, configured to acquire the similarity between the first trademark image and the second trademark image based on the image matching degree and the text similarity.
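The evaluation module's fusion of image matching degree and text similarity is not given a formula in the claims; a weighted sum with a tunable weight `alpha` is one minimal, hypothetical realization:

```python
def overall_similarity(image_match, text_sim, alpha=0.5):
    """Fuse image matching degree and text similarity into one score.
    The weighted sum and the weight alpha are assumptions; the patent
    only states that the two scores are combined."""
    return alpha * image_match + (1.0 - alpha) * text_sim

# e.g. favoring the visual score slightly over the textual one
print(overall_similarity(0.9, 0.7, alpha=0.6))  # about 0.82
```

In practice `alpha` would be chosen on a validation set of trademark pairs labeled similar or dissimilar.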
8. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 6 when executing the computer program.
9. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 6.
CN202410192592.5A 2024-02-21 2024-02-21 Trademark similarity evaluation method and device Active CN117788850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410192592.5A CN117788850B (en) 2024-02-21 2024-02-21 Trademark similarity evaluation method and device

Publications (2)

Publication Number Publication Date
CN117788850A true CN117788850A (en) 2024-03-29
CN117788850B CN117788850B (en) 2024-05-10

Family

ID=90389173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410192592.5A Active CN117788850B (en) 2024-02-21 2024-02-21 Trademark similarity evaluation method and device

Country Status (1)

Country Link
CN (1) CN117788850B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108804499A (en) * 2018-04-03 2018-11-13 南昌奇眸科技有限公司 A kind of trademark image retrieval method
US20190114505A1 (en) * 2016-04-14 2019-04-18 Ader Bilgisayar Hizmetleri Ve Ticaret A.S. Content based search and retrieval of trademark images
CN109934258A (en) * 2019-01-30 2019-06-25 西安理工大学 The image search method of characteristic weighing and Regional Integration
CN110033003A (en) * 2019-03-01 2019-07-19 华为技术有限公司 Image partition method and image processing apparatus
WO2023091131A1 (en) * 2021-11-17 2023-05-25 Innopeak Technology, Inc. Methods and systems for retrieving images based on semantic plane features

Similar Documents

Publication Publication Date Title
CN109885692B (en) Knowledge data storage method, apparatus, computer device and storage medium
Chaudhuri et al. Multilabel remote sensing image retrieval using a semisupervised graph-theoretic method
CN110866140A (en) Image feature extraction model training method, image searching method and computer equipment
Huang et al. Object-location-aware hashing for multi-label image retrieval via automatic mask learning
CN110837846A (en) Image recognition model construction method, image recognition method and device
CN111182364B (en) Short video copyright detection method and system
CN110968725B (en) Image content description information generation method, electronic device and storage medium
CN109213886B (en) Image retrieval method and system based on image segmentation and fuzzy pattern recognition
CN112434533A (en) Entity disambiguation method, apparatus, electronic device, and computer-readable storage medium
CN114995903A (en) Class label identification method and device based on pre-training language model
CN113642602A (en) Multi-label image classification method based on global and local label relation
JP5971722B2 (en) Method for determining transformation matrix of hash function, hash type approximate nearest neighbor search method using the hash function, apparatus and computer program thereof
CN117788850B (en) Trademark similarity evaluation method and device
JP5197492B2 (en) Semi-teacher image recognition / retrieval device, semi-teacher image recognition / retrieval method, and program
CN114329016B (en) Picture label generating method and text mapping method
EP4089568A1 (en) Cascade pooling for natural language document processing
CN112884053B (en) Website classification method, system, equipment and medium based on image-text mixed characteristics
CN110826488B (en) Image identification method and device for electronic document and storage equipment
CN113849669A (en) Similar picture searching method, device, equipment and medium based on deep learning
CN112650870A (en) Method for training picture ordering model, and method and device for picture ordering
Rahul et al. Deep reader: Information extraction from document images via relation extraction and natural language
CN112528066B (en) Trademark retrieval method, system, computer device and storage medium based on attention mechanism
CN111460088A (en) Similar text retrieval method, device and system
CN116303909B (en) Matching method, equipment and medium for electronic bidding documents and clauses
CN114385831B (en) Knowledge-graph relation prediction method based on feature extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant