CN112633081B - Specific object identification method in complex scene - Google Patents
Specific object identification method in complex scene
- Publication number
- CN112633081B (application CN202011406594.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- scene
- feature
- reference image
- pyramid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/35—Categorising the entire scene, e.g. birthday party or wedding scene
- G06V20/36—Indoor scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method for identifying a specific object in a complex scene, belonging to the field of digital image processing. The method comprises the following steps: acquiring a reference image of the specific object and a scene image containing the object; establishing an image pyramid of the reference image; applying Gaussian filtering to the scene image, extracting feature points and matching them against each layer of the reference-image pyramid, and determining from the number of matched feature points which pyramid layer is closest in scale to the scene image; based on that layer number, applying extended interpolation to the scene image to raise its resolution; applying Gaussian filtering to the reference image and matching its feature points against the extended scene image, which reduces the mismatches produced by cross-layer matching; and finally obtaining the position information of the specific object in the scene. The method is not only fast in recognition speed but also markedly increases the number of correctly matched feature-point pairs, thereby achieving correct identification.
Description
Technical Field
The invention relates to the field of digital image processing, in particular to a specific object identification method in a complex scene.
Background
Specific object recognition is widely applied in the fields of industrial automation and intelligent robotics. Specific object recognition means recognizing a particular object (e.g., my cup) in a scene image, whereas the corresponding general object recognition means recognizing a class of objects (e.g., cups) in a scene image. At present, in the field of digital image processing, local invariant features are generally used to identify a specific object in a complex scene, while deep learning methods are used to identify general objects in a complex scene.
The design idea of local invariant features is that an image is composed of different types of target regions whose parameters, such as color, brightness and distribution, differ, and each target region has a specific range of influence, i.e., it affects only a local part of the image. These local structures are highly representative because of the rich image information they contain. On one hand, such features are not easily affected by external changes such as translation, rotation, scaling, scale, viewpoint, illumination, blurring and compression; on the other hand, they largely avoid the weakness of traditional global features, which are easily disturbed by complex backgrounds or noise. Moreover, compared with the blindness of traditional global features, localized feature processing better matches the actual nature of image data and human vision, searching for useful information more purposefully from local regions. Local invariant features therefore have great advantages in stability, repeatability and distinctiveness.
Local invariant features are divided into feature corners and feature blobs. For recognizing a specific object in a complex scene, blob-based methods are robust but computationally heavy and cannot meet the real-time requirements of industrial automation, intelligent robotics and similar fields; corner-based methods are computationally efficient but less robust, yield few correctly matched feature-point pairs, and localize the object inaccurately.
The first cause of inaccurate positioning is the scale-space problem: corner-based methods simulate scale change between images by building an image pyramid, so many mismatches arise during cross-scale matching. The second cause is the image-resolution problem: the resolution of the image pyramid decreases layer by layer, which strongly affects the number of detected keypoints, especially for the neighborhood-based corner detectors used in corner recognition. The third cause stems from the nature of specific object identification itself: unlike image-feature-matching applications (e.g., wide-baseline matching), the reference image of a specific object corresponds to only a part of the scene image, so feature points of other objects in the scene interfere with the matching result.
Disclosure of Invention
The invention aims to provide a method for identifying a specific object with high robustness and high calculation efficiency in a complex scene, so as to solve the problems in the background technology.
In order to achieve the purpose, the invention provides the following technical scheme:
a specific object identification method under a complex scene comprises the following specific steps:
(1) inputting a reference image of a specific object and a scene image containing the object;
(2) establishing the image pyramid P_r(x, y, n) of the reference image I_r(x, y):
P_r(x, y, n) = F_s(n)[I_r(x, y)], n = 0, 1, 2, ..., N (formula 1)
where F_s(n) is bilinear interpolation with s(n) as the scale factor, and N is the total number of layers of the image pyramid;
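As a rough illustration of step (2), the pyramid can be built with plain NumPy bilinear resampling. The assumption that layer n shrinks the reference image by s(n) = 1/S_init^n is ours (the patent's formula 2 is reproduced only as an image), so treat this as a sketch, not the patented construction:

```python
import numpy as np

def bilinear_resize(img, scale):
    """Resize a 2-D grayscale image by `scale` using bilinear interpolation."""
    h, w = img.shape
    nh = max(1, int(round(h * scale)))
    nw = max(1, int(round(w * scale)))
    ys = np.linspace(0, h - 1, nh)
    xs = np.linspace(0, w - 1, nw)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def build_pyramid(ref, s_init=1.2, n_levels=8):
    # level n is scaled by 1/s_init**n (assumed form of s(n), not from the patent)
    return [bilinear_resize(ref, 1.0 / s_init ** n) for n in range(n_levels)]
```

With N = 7 this gives the 8 levels n = 0, ..., 7 used in the embodiment.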
(3) extracting the local invariant feature corners C_r(N_r,n) of each layer of the image pyramid P_r(x, y, n) and generating the corresponding feature descriptors D_r(N_r,n):
where C_r(N_r,n) denotes the feature corners of the nth-layer reference-image pyramid, D_r(N_r,n) denotes its feature descriptors, and N_r,n is the number of features on the nth layer;
(4) according to N_r,n, calculating the number of scene-image features to be matched against the reference-image pyramid, using the following formula:
where N_os,n is the number of scene-image features matched against the nth-layer reference-image pyramid, R_r,n is the image resolution of the nth pyramid layer, and R_s is the resolution of the scene image;
(5) filtering the scene image I_s(x, y) with a Gaussian filter G(x, y, σ), then extracting the local invariant feature corners C_s(N_s) of the filtered scene image and generating the corresponding feature descriptors D_s(N_s):
where σ is the filter kernel parameter and N_s is the number of features in the filtered scene image;
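Step (5)'s pre-filtering can be sketched with a small separable Gaussian. In practice one would likely use cv2.GaussianBlur followed by an ORB detector (the embodiment below names ORB), but a NumPy-only version shows the filtering itself; the 3σ kernel radius is a common convention, not stated in the patent:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D normalized Gaussian kernel; radius defaults to ~3 sigma."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma=0.3):
    """Separable Gaussian blur of a 2-D image with edge padding."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    padded = np.pad(img, pad, mode='edge')
    # convolve rows, then columns (separability of the Gaussian)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, tmp)
```

Feature extraction on the blurred image would then follow, e.g. with OpenCV's ORB detector.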
(6) according to the N_os,n obtained in step (4), limiting the number of scene-image feature descriptors D_s(N_s) to obtain D_s(N_os,n), then matching these against the feature descriptors of the corresponding layer of the reference-image pyramid to obtain the matched feature-point pairs k(n) for each layer of the scale space:
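Step (6) matches binary descriptors by Hamming distance (as the embodiment makes explicit). A minimal brute-force matcher over 32-byte ORB-style descriptors might look like this; the distance threshold is our placeholder, not a value from the patent:

```python
import numpy as np

def hamming_match(desc_ref, desc_scene, max_dist=64):
    """Brute-force nearest-neighbor matching of binary descriptors.

    desc_ref, desc_scene: uint8 arrays of shape (n, 32) (256-bit descriptors).
    Returns a list of (ref_index, scene_index) pairs within max_dist.
    """
    # popcount lookup table for all uint8 values
    pop = np.array([bin(v).count('1') for v in range(256)], dtype=np.uint16)
    # pairwise Hamming distances: XOR bytes, then sum bit counts
    d = pop[np.bitwise_xor(desc_ref[:, None, :], desc_scene[None, :, :])].sum(-1)
    nearest = d.argmin(axis=1)
    best = d.min(axis=1)
    return [(i, int(j)) for i, (j, dist) in enumerate(zip(nearest, best))
            if dist <= max_dist]
```

Running this once per pyramid layer yields the per-layer match counts k(n).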
(7) assigning different weights to k(n) based on the initial scale factor S_init of the reference-image pyramid, the maximum weighted value giving the corresponding matching layer number c:
(8) calculating the bilinear-interpolation scale factor from the matching layer number c, then using this scale factor to apply extended interpolation to the scene image, obtaining the extended scene image E_s(x, y):
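Steps (7) and (8) select the pyramid layer with the largest weighted match count and then enlarge the scene image by the corresponding factor. Both the weight w(n) = 1/S_init^n and the extension factor S_init^c below are our assumptions, since the patent's formulas appear only as images:

```python
def best_matching_layer(k, s_init=1.2):
    # k[n]: matched feature-point pairs at pyramid layer n;
    # assumed weighting down-weights coarser layers by the scale factor
    scores = [k_n / s_init ** n for n, k_n in enumerate(k)]
    return scores.index(max(scores))

def extension_scale(c, s_init=1.2):
    # assumed factor that enlarges the scene image to the scale of layer c
    return s_init ** c
```

The extended scene image E_s(x, y) is then obtained by bilinear interpolation with this factor.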
(9) extracting the local invariant feature corners C'_s(N'_s) of E_s(x, y) and generating the corresponding feature descriptors D'_s(N'_s):
where N'_s is the number of features in the extended scene image;
(10) filtering the reference image I_r(x, y) with a Gaussian filter, then extracting the local invariant feature corners C'_r(N'_r) of the filtered reference image and generating the corresponding feature descriptors D'_r(N'_r):
where N'_r is the number of features in the filtered reference image;
(11) according to N'_r, calculating the number of scene-image features to be matched against the filtered reference image, using the following formula:
where N'_os is the number of scene-image features matched against the filtered reference image, α is a coefficient factor, β_r and β_s are the entropies of the reference image and the scene image respectively, and R_r is the resolution of the reference image;
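The entropies β_r and β_s in step (11) are the standard Shannon entropies of the intensity histograms. How they combine with α and the resolution ratio is given only as an image in the patent, so the budget function below merely guesses at the shape of that formula:

```python
import numpy as np

def image_entropy(img_u8):
    """Shannon entropy (bits) of an 8-bit image's intensity histogram."""
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def scene_feature_budget(n_ref, beta_r, beta_s, r_ref, r_scene, alpha=2.0):
    # assumed combination of the resolution ratio and the entropy ratio;
    # the exact formula in the patent is not reproduced here
    return int(round(alpha * n_ref * (r_scene / r_ref) * (beta_s / beta_r)))
```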
(12) according to the N'_os obtained in step (11), limiting the number of feature descriptors D'_s(N'_s) to obtain D'_s(N'_os), then matching these against the filtered reference-image feature descriptors D'_r(N'_r) to obtain the matched feature-point pairs k':
(13) calculating the geometric transformation of the reference target in the scene image from the matched feature-point pairs k', thereby obtaining the position information of the target in the scene and completing the identification.
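For step (13), production code would typically call cv2.findHomography(src_pts, dst_pts, cv2.RANSAC) to estimate the geometric transformation while rejecting residual mismatches. A minimal direct-linear-transform (DLT) estimator, without the RANSAC loop, is sketched here for reference:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography from >= 4 point correspondences (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null vector of A (last right-singular vector)
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to an (n, 2) array of points."""
    pts_h = np.c_[pts, np.ones(len(pts))]
    q = pts_h @ H.T
    return q[:, :2] / q[:, 2:3]
```

Projecting the reference image's corners through H gives the object's position in the scene (the box shown in fig. 4).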
As a further scheme of the invention: the scale factor s(n) in (formula 1) is calculated as follows:
where S_init is the initialization constant of the scale factor.
As a further scheme of the invention: the initialization constant S_init in (formula 2) takes the value 1.2.
As a further scheme of the invention: N in (formula 1) takes the value 7.
As a further scheme of the invention: the filter kernel parameter σ in (formula 5) takes the value 0.3.
As a further scheme of the invention: the coefficient factor α in (formula 11) takes the value 2.
Compared with the prior art, the invention has the beneficial effects that:
the specific object identification method has high calculation efficiency, has high robustness in natural and complex scenes, and can accurately and quickly identify the specific object.
Drawings
FIG. 1 is a reference image of a specific object to be identified in an embodiment of the present invention.
FIG. 2 is an image of a scene containing an object to be identified in accordance with an embodiment of the present invention.
Fig. 3 is an effect diagram of marking matched pairs of feature points after object identification by using the method of the present invention in the embodiment of the present invention.
FIG. 4 is a diagram illustrating the effect of marking the position of an object after the object is identified by the method of the present invention according to an embodiment of the present invention.
Fig. 5 is a flow chart of object recognition using the method of the present invention in an embodiment of the present invention.
Detailed Description
The technical solution of this patent is described in detail below with reference to the accompanying drawings and specific embodiments.
A specific object identification method in a complex scene comprises the following specific steps:
(1) inputting a reference image of a specific object and a scene image containing the object;
(2) establishing the image pyramid P_r(x, y, n) of the reference image I_r(x, y):
P_r(x, y, n) = F_s(n)[I_r(x, y)], n = 0, 1, 2, ..., N (formula 1)
where F_s(n) is bilinear interpolation with s(n) as the scale factor, S_init = 1.2 is the initialization constant of the scale factor, and N = 7 is the total number of layers of the image pyramid;
(3) extracting the local invariant feature corners C_r(N_r,n) of each layer of the image pyramid P_r(x, y, n) and generating the corresponding feature descriptors D_r(N_r,n):
where C_r(N_r,n) denotes the feature corners of the nth-layer reference-image pyramid, D_r(N_r,n) denotes its feature descriptors, and N_r,n is the number of features on the nth layer;
(4) according to N_r,n, calculating the number of scene-image features to be matched against the reference-image pyramid, using the following formula:
where N_os,n is the number of scene-image features matched against the nth-layer reference-image pyramid, R_r,n is the image resolution of the nth pyramid layer, and R_s is the resolution of the scene image;
(5) filtering the scene image I_s(x, y) with a Gaussian filter G(x, y, σ) whose kernel parameter σ = 0.3, then extracting the local invariant feature corners C_s(N_s) of the filtered scene image and generating the corresponding feature descriptors D_s(N_s):
where N_s is the number of features in the filtered scene image;
(6) according to the N_os,n obtained in step (4), limiting the number of scene-image feature descriptors D_s(N_s) to obtain D_s(N_os,n), then matching these against the feature descriptors of the corresponding layer of the reference-image pyramid to obtain the matched feature-point pairs k(n) for each layer of the scale space:
(7) assigning different weights to k(n) based on the initial scale factor S_init of the reference-image pyramid, the maximum weighted value giving the corresponding matching layer number c:
(8) calculating the bilinear-interpolation scale factor from the matching layer number c, then using this scale factor to apply extended interpolation to the scene image, obtaining the extended scene image E_s(x, y):
(9) extracting the local invariant feature corners C'_s(N'_s) of E_s(x, y) and generating the corresponding feature descriptors D'_s(N'_s):
where N'_s is the number of features in the extended scene image;
(10) filtering the reference image I_r(x, y) with a Gaussian filter whose kernel parameter σ = 0.3, then extracting the local invariant feature corners C'_r(N'_r) of the filtered reference image and generating the corresponding feature descriptors D'_r(N'_r):
where N'_r is the number of features in the filtered reference image;
(11) according to N'_r, calculating the number of scene-image features to be matched against the filtered reference image, using the following formula:
where N'_os is the number of scene-image features matched against the filtered reference image, α = 2 is a coefficient factor, β_r and β_s are the entropies of the reference image and the scene image respectively, and R_r is the resolution of the reference image;
(12) according to the N'_os obtained in step (11), limiting the number of feature descriptors D'_s(N'_s) to obtain D'_s(N'_os), then matching these against the filtered reference-image feature descriptors D'_r(N'_r) to obtain the matched feature-point pairs k':
(13) calculating the geometric transformation of the reference target in the scene image from the matched feature-point pairs k', thereby obtaining the position information of the target in the scene and completing the identification.
In an embodiment, the method of the invention is used to identify a specific object in the scene image shown in fig. 2, as follows:
(1) The captured reference image of the specific object (shown in fig. 1) and a scene image containing the object (shown in fig. 2) are input.
(2) The image pyramid P_r(x, y, n) of the reference image I_r(x, y) is established:
P_r(x, y, n) = F_s(n)[I_r(x, y)], n = 0, 1, 2, ..., N (formula 1)
where F_s(n) is bilinear interpolation with s(n) as the scale factor, S_init = 1.2 is the initialization constant of the scale factor, and N = 7 is the total number of layers of the image pyramid.
(3) The ORB feature corners C_r(N_r,n) of each layer of the image pyramid P_r(x, y, n) are extracted and the corresponding ORB binary feature descriptors D_r(N_r,n) are generated:
where C_r(N_r,n) denotes the feature corners of the nth-layer reference-image pyramid, D_r(N_r,n) denotes its feature descriptors, and N_r,n is the number of features on the nth layer.
(4) Since the scene image contains more objects than the reference image, the number of feature points of the scene image is much greater than that of the reference image, so the number of scene-image feature points to be matched against the reference-image pyramid must be limited. According to N_r,n, the number of scene-image features to be matched against the reference-image pyramid is calculated using the following formula:
where N_os,n is the number of scene-image features matched against the nth-layer reference-image pyramid, R_r,n is the image resolution of the nth pyramid layer, and R_s is the resolution of the scene image.
(5) The scene image I_s(x, y) is filtered with a Gaussian filter G(x, y, σ) whose kernel parameter σ = 0.3, to reduce the interference of other parts of the scene image with the matching result; the ORB feature corners C_s(N_s) of the filtered scene image are then extracted and the corresponding ORB binary feature descriptors D_s(N_s) are generated:
where N_s is the number of features in the filtered scene image.
(6) According to the N_os,n obtained in step (4), the number of scene-image feature descriptors D_s(N_s) is limited to obtain D_s(N_os,n); the Hamming distances to the feature descriptors on the corresponding layer of the reference-image pyramid are then computed for feature matching, yielding the matched feature-point pairs k(n) for each layer of the scale space:
(7) Different weights are assigned to k(n) based on the initial scale factor S_init of the reference-image pyramid, and the maximum weighted value gives the corresponding matching layer number c:
(8) The bilinear-interpolation scale factor is calculated from the matching layer number c, and this scale factor is then used to apply extended interpolation to the scene image, obtaining the extended scene image E_s(x, y):
(9) The ORB feature corners C'_s(N'_s) of E_s(x, y) are extracted and the corresponding ORB binary feature descriptors D'_s(N'_s) are generated:
where N'_s is the number of features in the extended scene image.
(10) The extended interpolation of the scene image in step (8) causes a loss of image detail information; this loss is therefore simulated on the reference image I_r(x, y) by filtering it with a Gaussian filter whose kernel parameter σ = 0.3, after which the ORB feature corners C'_r(N'_r) of the filtered reference image are extracted and the corresponding ORB binary feature descriptors D'_r(N'_r) are generated:
where N'_r is the number of features in the filtered reference image.
(11) Limiting the number of feature points of the scene image depends not only on the image resolution but also on the texture information of the object to be identified: if the object carries little texture, it is easily disturbed by other objects in the scene image. Therefore, to increase the number of correctly matched feature-point pairs, the information entropy of the images is taken into account when calculating, according to N'_r, the number of scene-image features to be matched against the filtered reference image, using the following formula:
where N'_os is the number of scene-image features matched against the filtered reference image, α = 2 is a coefficient factor, β_r and β_s are the entropies of the reference image and the scene image respectively, and R_r is the resolution of the reference image.
(12) According to the N'_os obtained in step (11), the number of feature descriptors D'_s(N'_s) is limited to obtain D'_s(N'_os); the Hamming distances to the filtered reference-image feature descriptors D'_r(N'_r) are then computed for feature matching, yielding the matched feature-point pairs k':
as shown in fig. 3, matching feature points are respectively marked in the reference image and the scene image, and matching pairs of feature points are marked with straight lines.
(13) The geometric transformation of the reference object in the scene image is calculated from the matched feature-point pairs k', giving the position information of the object in the scene and completing the identification, as shown in fig. 4.
In practical application, the reference image of the object to be recognized is stored in advance: its image pyramid is built and its features are extracted and saved beforehand, so steps (2), (3) and (4) consume no computing resources at run time. Although the method adds a step, finding the matching layer number, compared with conventional local invariant corner-feature methods, it still has an advantage in computational efficiency. A conventional method must extract and match feature points on every layer of the image pyramids of both the reference image and the scene image (for example, a 7-layer reference pyramid matched against a 7-layer scene pyramid requires 49 matching passes), whereas the proposed method obtains the result with a single matching pass at the corresponding matching layer. More importantly, it reduces the mismatched points produced during cross-layer matching and markedly increases the number of correctly matched feature-point pairs.
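The pass-count comparison in the paragraph above can be tallied explicitly; the accounting below follows our reading of the text (one pass per reference layer to find c, plus one refined pass) and is not a formula from the patent:

```python
def matching_passes(n_layers):
    # conventional: every layer of the reference pyramid is matched
    # against every layer of a scene pyramid
    conventional = n_layers * n_layers
    # proposed: the single-scale scene image is matched once per reference
    # layer to find the matching layer c, plus one refined match at layer c
    proposed = n_layers + 1
    return conventional, proposed
```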
The specific object identification method provided by the invention has high calculation efficiency, has high robustness in a natural and complex scene, and can correctly and quickly identify the specific object.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications may be made or equivalents may be substituted for some of the features thereof without departing from the scope of the present invention, and such modifications and substitutions should also be considered as the protection scope of the present invention.
Claims (6)
1. A specific object identification method in a complex scene, characterized by comprising the following specific steps:
(1) inputting a reference image of a specific object and a scene image containing the object;
(2) establishing the image pyramid P_r(x, y, n) of the reference image I_r(x, y):
P_r(x, y, n) = F_s(n)[I_r(x, y)], n = 0, 1, 2, ..., N (formula 1)
where F_s(n) is bilinear interpolation with s(n) as the scale factor, and N is the total number of layers of the image pyramid;
(3) extracting the local invariant feature corners C_r(N_r,n) of each layer of the image pyramid P_r(x, y, n) and generating the corresponding feature descriptors D_r(N_r,n):
where C_r(N_r,n) denotes the feature corners of the nth-layer reference-image pyramid, D_r(N_r,n) denotes its feature descriptors, and N_r,n is the number of features on the nth layer;
(4) according to N_r,n, calculating the number of scene-image features to be matched against the reference-image pyramid, using the following formula:
where N_os,n is the number of scene-image features matched against the nth-layer reference-image pyramid, R_r,n is the image resolution of the nth pyramid layer, and R_s is the resolution of the scene image;
(5) filtering the scene image I_s(x, y) with a Gaussian filter G(x, y, σ), then extracting the local invariant feature corners C_s(N_s) of the filtered scene image and generating the corresponding feature descriptors D_s(N_s):
where σ is the filter kernel parameter and N_s is the number of features in the filtered scene image;
(6) according to the N_os,n obtained in step (4), limiting the number of scene-image feature descriptors D_s(N_s) to obtain D_s(N_os,n), then matching these against the feature descriptors of the corresponding layer of the reference-image pyramid to obtain the matched feature-point pairs k(n) for each layer of the scale space:
(7) assigning different weights to k(n) based on the initial scale factor S_init of the reference-image pyramid, the maximum weighted value giving the corresponding matching layer number c:
(8) calculating the bilinear-interpolation scale factor from the matching layer number c, then using this scale factor to apply extended interpolation to the scene image, obtaining the extended scene image E_s(x, y):
(9) extracting the local invariant feature corners C'_s(N'_s) of E_s(x, y) and generating the corresponding feature descriptors D'_s(N'_s):
where N'_s is the number of features in the extended scene image;
(10) filtering the reference image I_r(x, y) with a Gaussian filter, then extracting the local invariant feature corners C'_r(N'_r) of the filtered reference image and generating the corresponding feature descriptors D'_r(N'_r):
where N'_r is the number of features in the filtered reference image;
(11) according to N'_r, calculating the number of scene-image features to be matched against the filtered reference image, using the following formula:
where N'_os is the number of scene-image features matched against the filtered reference image, α is a coefficient factor, β_r and β_s are the entropies of the reference image and the scene image respectively, and R_r is the resolution of the reference image;
(12) according to the N'_os obtained in step (11), limiting the number of feature descriptors D'_s(N'_s) to obtain D'_s(N'_os), then matching these against the filtered reference-image feature descriptors D'_r(N'_r) to obtain the matched feature-point pairs k':
(13) calculating the geometric transformation of the reference target in the scene image from the matched feature-point pairs k', thereby obtaining the position information of the target in the scene and completing the identification.
3. The method for identifying a specific object in a complex scene according to claim 2, wherein the initialization constant S_init in (formula 2) takes the value 1.2.
4. The method for identifying a specific object in a complex scene according to claim 1, wherein N in (formula 1) takes the value 7.
5. The method for identifying a specific object in a complex scene according to claim 1, wherein the filter kernel parameter σ in (formula 5) takes the value 0.3.
6. The method for identifying a specific object in a complex scene according to claim 1, wherein the coefficient factor α in (formula 11) takes the value 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011406594.8A CN112633081B (en) | 2020-12-07 | 2020-12-07 | Specific object identification method in complex scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011406594.8A CN112633081B (en) | 2020-12-07 | 2020-12-07 | Specific object identification method in complex scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112633081A CN112633081A (en) | 2021-04-09 |
CN112633081B true CN112633081B (en) | 2022-07-01 |
Family
ID=75308045
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011406594.8A Active CN112633081B (en) | 2020-12-07 | 2020-12-07 | Specific object identification method in complex scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112633081B (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7382897B2 (en) * | 2004-04-27 | 2008-06-03 | Microsoft Corporation | Multi-image feature matching using multi-scale oriented patches |
US8406507B2 (en) * | 2009-01-14 | 2013-03-26 | A9.Com, Inc. | Method and system for representing image patches |
CN111144360A (en) * | 2019-12-31 | 2020-05-12 | 新疆联海创智信息科技有限公司 | Multimode information identification method and device, storage medium and electronic equipment |
CN111898428A (en) * | 2020-06-23 | 2020-11-06 | 东南大学 | Unmanned aerial vehicle feature point matching method based on ORB |
Also Published As
Publication number | Publication date |
---|---|
CN112633081A (en) | 2021-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110097093B (en) | Method for accurately matching heterogeneous images | |
CN110334762B (en) | Feature matching method based on quad tree combined with ORB and SIFT | |
He et al. | Sparse template-based 6-D pose estimation of metal parts using a monocular camera | |
US8994723B2 (en) | Recognition and pose determination of 3D objects in multimodal scenes | |
CN107145829B (en) | Palm vein identification method integrating textural features and scale invariant features | |
CN106981077B (en) | Infrared image and visible light image registration method based on DCE and LSS | |
CN111767960A (en) | Image matching method and system applied to image three-dimensional reconstruction | |
CN110706293B (en) | SURF feature matching-based electronic component positioning and detecting method | |
Hagara et al. | About Edge Detection in Digital Images. | |
CN109919971B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN106340010B (en) | A kind of angular-point detection method based on second order profile difference | |
CN108229500A (en) | A kind of SIFT Mismatching point scalping methods based on Function Fitting | |
CN108550165A (en) | A kind of image matching method based on local invariant feature | |
CN108447092B (en) | Method and device for visually positioning marker | |
CN110991501B (en) | Improved ORB feature point matching method based on Hessian matrix | |
CN112633081B (en) | Specific object identification method in complex scene | |
Feng et al. | A feature detection and matching algorithm based on Harris Algorithm | |
CN111340134A (en) | Rapid template matching method based on local dynamic warping | |
CN112907662B (en) | Feature extraction method and device, electronic equipment and storage medium | |
Koutaki et al. | Fast and high accuracy pattern matching using multi-stage refining eigen template | |
CN108364013B (en) | Image key point feature descriptor extraction method and system based on neighborhood Gaussian differential distribution | |
CN114255398A (en) | Method and device for extracting and matching features of satellite video image | |
Wu et al. | An algorithm for extracting spray trajectory based on laser vision | |
CN113436251A (en) | Pose estimation system and method based on improved YOLO6D algorithm | |
Chen et al. | Method of item recognition based on SIFT and SURF |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||