CN113298146A - Image matching method, device, equipment and medium based on feature detection - Google Patents
Image matching method, device, equipment and medium based on feature detection
- Publication number
- CN113298146A (application number CN202110568405.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- feature
- matching
- sift
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Abstract
The invention provides an image matching method, device, equipment and medium based on feature detection, wherein the method comprises the following steps: after an original image and a template image are input, feature points are detected by the SIFT method, N feature points are extracted, and a feature descriptor is generated from the coordinates, scale and direction of each feature point; a target feature point set is generated for each image; after the target feature point set is generated, the top several images with the highest matching scores are extracted with the SIFT and BoW algorithms according to the degree of matching between the feature points of each template image and the original image; local feature matching is then carried out with the SIFT and AdaLAM algorithms, and the best matching image is selected from the images with the highest matching scores; finally, the matching result is output. The image matching method based on feature detection has a good matching effect and can filter out outliers efficiently and quickly.
Description
Technical Field
The invention relates to the technical field of image matching models and algorithms in computer vision, and in particular to an image matching method, device, equipment and medium based on feature detection.
Background
Image matching is a fundamental problem in computer vision, and an accurate, efficient image matching algorithm provides a solid foundation for solving other problems. Image matching based on SIFT features is widely applied in industries such as industrial inspection, indoor navigation, and security monitoring, and SIFT-based methods mostly depend on the BoW model. The BoW technique was originally applied in document retrieval systems and is characterized by needing a large number of words to generate a codebook. One characteristic of SIFT feature extraction is its abundance: even an image containing only a few targets can generate many SIFT features. Visual vocabulary vectors are extracted from images of different categories with the SIFT algorithm; these vectors represent locally invariant feature points in the image. All feature vectors are then pooled together, visual words are obtained with the K-Means clustering algorithm to construct a visual word dictionary, the number of times each word in the vocabulary appears in an image is counted, and the image is thereby expressed as a K-dimensional numeric vector.
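The codebook construction described above can be sketched as follows. This is an illustrative sketch only, not the implementation of the invention: the descriptors are random stand-ins for real 128-dimensional SIFT descriptors, and the vocabulary size K is assumed.

```python
import numpy as np

# Sketch of the BoW pipeline: cluster local descriptors into K visual
# words with K-Means, then represent an image as a K-dimensional
# word-frequency vector.
rng = np.random.default_rng(0)
descriptors = rng.random((200, 128))   # stand-ins for corpus SIFT descriptors
K = 8                                  # vocabulary size (assumed)

# Plain K-Means (a few Lloyd iterations suffice for the sketch).
centers = descriptors[rng.choice(len(descriptors), K, replace=False)]
for _ in range(10):
    d = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    for k in range(K):
        pts = descriptors[labels == k]
        if len(pts):
            centers[k] = pts.mean(axis=0)

# An image with its own descriptors becomes a normalized K-dim histogram.
img_desc = rng.random((30, 128))
d = np.linalg.norm(img_desc[:, None, :] - centers[None, :, :], axis=2)
hist = np.bincount(d.argmin(axis=1), minlength=K).astype(float)
hist /= hist.sum()                     # the K-dimensional numeric vector
```

In practice a library implementation of K-Means and real SIFT descriptors would be used; the structure of the pipeline is the same.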
However, the conventional SIFT algorithm combined with the BoW model has the following problems: global feature matching approaches based on deep learning require creating a data set and training a model, while the traditional SIFT algorithm is strongly affected by illumination when extracting image features; in addition, in the image matching task the initial matching contains many outliers, which are currently difficult to filter out efficiently and quickly.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide an image matching method, device, equipment and medium based on feature detection that achieve high precision and a good matching effect and can filter out outliers efficiently and quickly.
In order to solve the problems, the technical scheme of the invention is as follows:
an image matching method based on feature detection, the method comprising the steps of:
after an original image and a template image are input, detecting feature points by an SIFT method, extracting N feature points, and generating a feature descriptor according to the coordinate, the scale and the direction of each feature point;
generating a target characteristic point set for each image;
after a target feature point set is generated, extracting a plurality of previous images with the highest matching scores by utilizing SIFT and BoW algorithms according to the matching degree of the feature points of each template image and the original image;
local feature matching is carried out by utilizing SIFT and AdaLAM algorithms, and the best matching image is selected from the plurality of images with the highest matching scores; and
outputting a matching result.
Optionally, after the original image and the template image are input, detecting feature points by an SIFT method, extracting N feature points, and generating a feature descriptor according to the coordinate, the scale, and the direction of each feature point specifically includes: respectively solving feature points and descriptors related to gradients and directions in the original image and the template image to obtain features, wherein the features comprise scale space extreme value detection, key point interpolation positioning, direction determination and key point description.
Optionally, after the original image and the template image are input, detecting feature points by the SIFT method, extracting N feature points, and generating a feature descriptor according to the coordinates, scale, and direction of each feature point further includes: performing gradient calculation on the key points and generating a histogram of pixel gradients to determine the direction, taking 4×4 region blocks around the feature point, counting an 8-bin gradient direction histogram in each small block, and using the resulting vectors as the descriptor of the SIFT features.
Optionally, after the target feature point set is generated, the step of extracting a plurality of previous images with the highest matching scores by using the SIFT and BoW algorithms according to the matching degree of the feature points of each template image and the original image specifically includes:
preprocessing image data, including enhancing, rotating, filtering and segmenting;
carrying out efficient, stable and repeatable feature extraction on image data;
establishing an image characteristic database for the image data;
extracting retrieval image features and constructing feature vectors;
the design retrieval module comprises similarity measurement criteria, sorting and searching;
and outputting a result with higher similarity.
Optionally, the step of performing local feature matching by using SIFT and AdaLAM algorithms, and selecting the best matching image from the plurality of images with the highest matching scores specifically includes:
finding an initial match;
finding points with high confidence and good spatial distribution as seed points;
selecting a matching point in the same area with the seed point in the initial matching;
the best locally consistent match is retained.
Further, the present invention also provides an image matching apparatus based on feature detection, the apparatus comprising:
a feature point extraction module: the method comprises the steps of detecting feature points through an SIFT method after an original image and a template image are input, and extracting N feature points;
the characteristic point description module: the characteristic descriptor is generated according to the coordinate, the scale and the direction of each characteristic point;
a target feature point set generation module: the method comprises the steps of acquiring corresponding characteristic points between an original image and a template image, and taking the corresponding characteristic points as a target characteristic point set;
a feature matching module: after the target feature point set is generated, according to the feature point matching degree of each template image and the original image, extracting a plurality of previous images with the highest matching scores by utilizing SIFT and BoW algorithms, then performing local feature matching by utilizing SIFT and AdaLAM algorithms, and selecting the best matching image from the plurality of images with the highest matching scores; and
an output module: for outputting the matching result.
Further, the present invention also provides a computer device characterized in that the device comprises at least one processor and at least one memory storing at least one program, which when executed by the at least one processor causes the at least one processor to implement the feature detection based image matching method as described above.
Further, the present invention provides a storage medium in which a program executable by a processor is stored, characterized in that: the processor-executable program is for implementing the feature detection-based image matching method as described above when executed by a processor.
Compared with the prior art, the image matching method and device based on feature detection have the advantages that: according to the invention, the SIFT algorithm and the AdaLAM algorithm are adopted for image matching, so that the image matching method is not greatly influenced by illumination, and the matching effect is good in the environment of uneven illumination. In addition, the AdaLAM algorithm is combined with the SIFT feature matching algorithm, so that the pose resolving precision can be improved.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a block diagram of a flow chart of an image matching method based on feature detection according to an embodiment of the present invention;
FIG. 2 is another block flow diagram of an image matching method based on feature detection according to an embodiment of the present invention;
FIG. 3 is a block diagram of another flowchart of an image matching method based on feature detection according to an embodiment of the present invention;
FIG. 4 is a block diagram of an image matching apparatus based on feature detection according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of image matching based on feature detection according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific embodiments. The following examples will assist those skilled in the art in further understanding the invention, but do not limit the invention in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Fig. 1 is a flowchart of an image matching method based on feature detection according to an embodiment of the present invention, and as shown in fig. 1, the method includes the following steps:
s1: after an original image and a template image are input, detecting feature points by an SIFT method, extracting N feature points, and generating a feature descriptor according to the coordinate, the scale and the direction of each feature point;
specifically, the feature points and descriptors related to gradients and directions in the original image and the template image are respectively obtained to obtain features, including extreme value detection in scale space, interpolation positioning of key points, direction determination and description of key points.
Detection of extrema in scale space: first a difference-of-Gaussians (DoG) scale space is built, and extrema are determined by comparing each pixel point in the DoG scale space with its 26 adjacent points, ensuring that extreme points are detected in both scale space and two-dimensional image space.
Key point interpolation and positioning: interpolation expresses the discrete image as a continuous function, and the positions and scales of the key points are accurately determined by fitting this function.
Gradient calculation is performed on the key points, and a histogram of pixel gradients is generated to determine the direction.
4×4 area blocks are taken around the feature point, an 8-bin gradient direction histogram is counted in each small block, and the resulting 128-dimensional vector is used as the descriptor of the SIFT feature.
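The descriptor layout described above can be sketched as follows. This is a simplification for illustration: real SIFT additionally applies Gaussian weighting, trilinear interpolation, and normalization clipping, all omitted here, and the patch is a random stand-in for an image neighborhood.

```python
import numpy as np

# A 16x16 patch around the keypoint is split into 4x4 blocks; each block
# contributes an 8-bin gradient-orientation histogram, giving a
# 4*4*8 = 128-dimensional descriptor vector.
patch = np.random.default_rng(1).random((16, 16))
gy, gx = np.gradient(patch)
mag = np.hypot(gx, gy)                        # gradient magnitude
ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)   # orientation in [0, 2*pi)

desc = []
for bi in range(4):
    for bj in range(4):
        sl = (slice(bi * 4, bi * 4 + 4), slice(bj * 4, bj * 4 + 4))
        bins = (ang[sl] / (2 * np.pi) * 8).astype(int) % 8
        hist = np.bincount(bins.ravel(), weights=mag[sl].ravel(), minlength=8)
        desc.extend(hist)
desc = np.asarray(desc)
desc /= np.linalg.norm(desc) + 1e-12          # unit-normalized, as in SIFT
```

OpenCV's `cv2.SIFT_create()` provides a full implementation of this detector and descriptor.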
S2: generating a target characteristic point set for each image;
specifically, corresponding feature points between the original image and the template image are acquired, and the corresponding feature points are used as a target feature point set.
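Forming corresponding feature point pairs can be sketched as follows. The descriptors are random stand-ins, and the 0.75 ratio threshold is a common choice (Lowe's ratio test), not a value mandated by the text; with random data the test may keep few or no pairs.

```python
import numpy as np

# Match each descriptor in the original image to its nearest neighbor in
# the template image and keep pairs that pass the ratio test.
rng = np.random.default_rng(2)
desc_a = rng.random((50, 128))   # original-image descriptors (stand-ins)
desc_b = rng.random((80, 128))   # template-image descriptors (stand-ins)

dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
order = np.argsort(dists, axis=1)
nn1, nn2 = order[:, 0], order[:, 1]           # two nearest neighbors
d1 = dists[np.arange(len(desc_a)), nn1]
d2 = dists[np.arange(len(desc_a)), nn2]

keep = d1 < 0.75 * d2                         # Lowe's ratio test
matches = list(zip(np.nonzero(keep)[0], nn1[keep]))  # (idx_a, idx_b) pairs
```

The retained index pairs constitute the target feature point set between the two images.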
S3: after a target feature point set is generated, extracting a plurality of previous images with the highest matching scores by utilizing SIFT and BoW algorithms according to the matching degree of the feature points of each template image and the original image;
specifically, as shown in fig. 2, the method comprises the following steps:
s31: preprocessing image data, including enhancing, rotating, filtering and segmenting;
s32: carrying out efficient, stable and repeatable feature extraction on image data;
s33: establishing an image characteristic database for the image data;
s34: extracting retrieval image features and constructing feature vectors;
s35: the design retrieval module comprises similarity measurement criteria, sorting and searching;
s36: and outputting a result with higher similarity.
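Steps S34 to S36 can be sketched as follows. The BoW histograms are random stand-ins for real database and query vectors, and cosine similarity is one reasonable similarity measure; the invention's actual measurement criterion is not specified in detail here.

```python
import numpy as np

# Rank database images by similarity of their BoW histograms to the
# query histogram and return the top-k matches.
rng = np.random.default_rng(3)
db = rng.random((100, 64))       # 100 database images, 64-word vocabulary
query = rng.random(64)           # query-image histogram

db_n = db / np.linalg.norm(db, axis=1, keepdims=True)
q_n = query / np.linalg.norm(query)
scores = db_n @ q_n              # cosine similarity per database image

top_k = 5
ranked = np.argsort(scores)[::-1][:top_k]   # indices of best-scoring images
```

The top-ranked indices are the "plurality of previous images with the highest matching scores" passed on to the local matching stage.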
S4: local feature matching is carried out by utilizing SIFT and AdaLAM algorithms, and the best matching image is selected from the plurality of images with the highest matching scores;
specifically, as shown in fig. 3, the method includes the following steps:
s41: finding an initial match;
s42: finding points with high confidence and good spatial distribution as seed points;
s43: selecting a matching point in the same area with the seed point in the initial matching;
s44: the best locally consistent match is retained.
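Steps S41 to S44 can be sketched as follows. This is a deliberate simplification of AdaLAM: real AdaLAM fits local affine models with adaptively chosen inlier thresholds, whereas this sketch only checks displacement consistency around one seed; the point data, confidence scores, radius, and threshold are all illustrative stand-ins.

```python
import numpy as np

# AdaLAM-style filtering: pick a high-confidence seed match, gather the
# matches in its neighborhood, and keep those whose displacement agrees
# with the seed's.
rng = np.random.default_rng(4)
pts_a = rng.random((60, 2)) * 100          # match endpoints in image A
shift = np.array([5.0, -3.0])
pts_b = pts_a + shift                      # consistent matches ...
pts_b[:10] += rng.normal(0, 20, (10, 2))   # ... except 10 outliers
conf = rng.random(60)
conf[:10] *= 0.3                           # outliers get lower confidence

seed = int(np.argmax(conf))                # S42: highest-confidence seed
radius = 30.0                              # S43: "same region" radius
near = np.linalg.norm(pts_a - pts_a[seed], axis=1) < radius
disp = pts_b - pts_a
consistent = np.linalg.norm(disp - disp[seed], axis=1) < 2.0  # S44
kept = np.nonzero(near & consistent)[0]    # retained locally consistent matches
```

In the full algorithm this is repeated over many well-distributed seeds in parallel, which is what makes the outlier filtering both fast and effective.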
S5: and outputting a matching result.
Fig. 4 is a block diagram of an image matching apparatus based on feature detection according to an embodiment of the present invention, and as shown in fig. 4, the apparatus includes:
the feature point extraction module 21: the method comprises the steps of detecting feature points through an SIFT method after an original image and a template image are input, and extracting N feature points;
the feature point description module 22: the characteristic descriptor is generated according to the coordinate, the scale and the direction of each characteristic point;
the target feature point set generation module 23: the method comprises the steps of acquiring corresponding characteristic points between an original image and a template image, and taking the corresponding characteristic points as a target characteristic point set;
the feature matching module 24: after a target feature point set is generated, extracting a plurality of images with the highest matching scores by utilizing an SIFT algorithm and a BoW algorithm, performing local feature matching by utilizing the SIFT algorithm and an AdaLAM algorithm, and selecting the best matching image from the plurality of images with the highest matching scores; and
the output module 25: for outputting the matching result.
With the SIFT algorithm combined with the BoW and AdaLAM models, the recall is about 60%, while the mean average precision (mAP) can reach 100%. Here recall refers to the ratio of the number of correct matches returned to the total number of correct matches in the database; precision refers to the fraction of the returned results that are truly matching images; and mean average precision corresponds to the area under the precision curve. The matching results show that the matched points are accurate.
As shown in fig. 5, the computer performs image matching using the present invention and connects the matched feature point pairs; the lines represent correctly matched feature point pairs. It can be seen from fig. 5 that with the SIFT and AdaLAM algorithms the matching results at some special positions are more accurate, and outliers can be filtered out quickly and efficiently. The original image, i.e. the query image, is on the left, the template image from the simulation template library is on the right, and fig. 5 shows the final result of feature matching.
As shown in fig. 6, the present invention also provides a computer device comprising at least one processor and at least one memory storing at least one program which, when executed by the at least one processor, causes the at least one processor to implement the feature detection based image matching method as described above.
A storage medium having a program stored therein, the program being executed by a processor for performing the image matching method based on feature detection as described above.
Compared with the prior art, the image matching method has the advantages that the SIFT algorithm and the AdaLAM algorithm are adopted for image matching, so that the influence of illumination is small, and the matching effect is good in the environment with uneven illumination. In addition, the AdaLAM algorithm is combined with the SIFT feature matching algorithm, so that the pose resolving precision can be improved.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.
Claims (8)
1. An image matching method based on feature detection, characterized in that the method comprises the following steps:
after an original image and a template image are input, detecting feature points by an SIFT method, extracting N feature points, and generating a feature descriptor according to the coordinate, the scale and the direction of each feature point;
generating a target characteristic point set for each image;
after a target feature point set is generated, extracting a plurality of previous images with the highest matching scores by utilizing SIFT and BoW algorithms according to the matching degree of the feature points of each template image and the original image;
local feature matching is carried out by utilizing SIFT and AdaLAM algorithms, and the best matching image is selected from the plurality of images with the highest matching scores; and
outputting a matching result.
2. The image matching method based on feature detection according to claim 1, wherein the step of detecting feature points by a SIFT method after the original image and the template image are input, extracting N feature points, and generating a feature descriptor according to coordinates, scale, and direction of each feature point specifically comprises: respectively solving feature points and descriptors related to gradients and directions in the original image and the template image to obtain features, wherein the features comprise scale space extreme value detection, key point interpolation positioning, direction determination and key point description.
3. The method of claim 2, wherein the step of detecting feature points by the SIFT method after the original image and the template image are input, extracting N feature points, and generating a feature descriptor according to the coordinates, scale, and direction of each feature point further comprises: performing gradient calculation on the key points and generating a histogram of pixel gradients to determine the direction, taking 4×4 region blocks around the feature point, counting an 8-bin gradient direction histogram in each small block, and using the resulting vectors as the descriptor of the SIFT features.
4. The image matching method based on feature detection according to claim 1, wherein the step of extracting a plurality of previous images with the highest matching scores by using SIFT and BoW algorithms according to the degree of matching between the feature points of each template image and the feature points of the original image after the target feature point set is generated specifically comprises:
preprocessing image data, including enhancing, rotating, filtering and segmenting;
carrying out efficient, stable and repeatable feature extraction on image data;
establishing an image characteristic database for the image data;
extracting retrieval image features and constructing feature vectors;
the design retrieval module comprises similarity measurement criteria, sorting and searching;
and outputting a result with higher similarity.
5. The image matching method based on feature detection according to claim 1, wherein the step of performing local feature matching by using SIFT and AdaLAM algorithms and selecting the best matching image from the plurality of images with the highest matching scores specifically comprises:
finding an initial match;
finding points with high confidence and good spatial distribution as seed points;
selecting a matching point in the same area with the seed point in the initial matching;
the best locally consistent match is retained.
6. An image matching apparatus based on feature detection, the apparatus comprising:
a feature point extraction module: the method comprises the steps of detecting feature points through an SIFT method after an original image and a template image are input, and extracting N feature points;
the characteristic point description module: the characteristic descriptor is generated according to the coordinate, the scale and the direction of each characteristic point;
a target feature point set generation module: the method comprises the steps of acquiring corresponding characteristic points between an original image and a template image, and taking the corresponding characteristic points as a target characteristic point set;
a feature matching module: after the target feature point set is generated, according to the feature point matching degree of each template image and the original image, extracting a plurality of previous images with the highest matching scores by utilizing SIFT and BoW algorithms, then performing local feature matching by utilizing SIFT and AdaLAM algorithms, and selecting the best matching image from the plurality of images with the highest matching scores; and
an output module: for outputting the matching result.
7. A computer device, characterized in that the device comprises at least one processor and at least one memory storing at least one program which, when executed by the at least one processor, causes the at least one processor to carry out the feature detection based image matching method according to any one of claims 1-5.
8. A storage medium having stored therein a program executable by a processor, characterized in that: the processor-executable program is for implementing the feature detection based image matching method as claimed in any one of claims 1-5 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110568405.5A CN113298146A (en) | 2021-05-25 | 2021-05-25 | Image matching method, device, equipment and medium based on feature detection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113298146A true CN113298146A (en) | 2021-08-24 |
Family
ID=77324485
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110568405.5A Pending CN113298146A (en) | 2021-05-25 | 2021-05-25 | Image matching method, device, equipment and medium based on feature detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113298146A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105550381A (en) * | 2016-03-17 | 2016-05-04 | 北京工业大学 | Efficient image retrieval method based on improved SIFT (scale invariant feature transform) feature |
CN108255858A (en) * | 2016-12-29 | 2018-07-06 | 北京优朋普乐科技有限公司 | A kind of image search method and system |
CN111914117A (en) * | 2020-07-03 | 2020-11-10 | 武汉邦拓信息科技有限公司 | Retrieval-oriented monitoring video big data recording method and system |
CN111930985A (en) * | 2020-07-08 | 2020-11-13 | 泰康保险集团股份有限公司 | Image retrieval method and device, electronic equipment and readable storage medium |
CN112652020A (en) * | 2020-12-23 | 2021-04-13 | 上海应用技术大学 | Visual SLAM method based on AdaLAM algorithm |
Non-Patent Citations (3)
Title |
---|
SHANE ZHAO: "SIFT+BOW 实现图像检索", pages 1 - 8, Retrieved from the Internet <URL:https://blog.csdn.net/silence2015/article/details/77374910> * |
VINCENT QIN: "笔记:AdaLAM: Revisiting Handcrafted Outlier Detection 超强外点滤除算法", pages 1 - 7, Retrieved from the Internet <URL:https://vincentqin.tech/posts/adalam/> * |
龚涛编著: "《摄影测量学》", 30 April 2014, 成都:西南交通大学出版社, pages: 185 - 190 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113674260A (en) * | 2021-08-26 | 2021-11-19 | 万安裕高电子科技有限公司 | SMT welding spot defect detection method |
CN116386060A (en) * | 2023-03-23 | 2023-07-04 | 浪潮智慧科技有限公司 | Automatic water gauge data labeling method, device, equipment and medium |
CN116386060B (en) * | 2023-03-23 | 2023-11-14 | 浪潮智慧科技有限公司 | Automatic water gauge data labeling method, device, equipment and medium |
CN117058432B (en) * | 2023-10-11 | 2024-01-30 | 北京万方数据股份有限公司 | Image duplicate checking method and device, electronic equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||