CN110163894B - Sub-pixel level target tracking method based on feature matching - Google Patents

Sub-pixel level target tracking method based on feature matching

Info

Publication number
CN110163894B
CN110163894B (application CN201910397719.6A)
Authority
CN
China
Prior art keywords
frame image
point
tracking
feature vector
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910397719.6A
Other languages
Chinese (zh)
Other versions
CN110163894A (en)
Inventor
窦润江
刘力源
刘剑
吴南健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Semiconductors of CAS
Original Assignee
Institute of Semiconductors of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Semiconductors of CAS filed Critical Institute of Semiconductors of CAS
Priority to CN201910397719.6A
Publication of CN110163894A
Application granted
Publication of CN110163894B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G06T2207/20: Special algorithmic details
    • G06T2207/20112: Image segmentation details
    • G06T2207/20164: Salient point detection; Corner detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A sub-pixel level target tracking method based on feature matching comprises the following steps: selecting a tracking point of the first frame image from continuously transmitted images as a reference tracking point; processing the first frame image and the Nth frame image respectively to obtain a feature vector of each, where N is a natural number greater than 1; matching the feature vectors of the first frame image and the Nth frame image to obtain feature point pairs; and estimating a transformation matrix from the feature point pairs and multiplying the transformation matrix with the reference tracking point to obtain a new tracking point, completing the update of the tracking point. The method can track a single point on a target with high precision and remains robust when the appearance of the image around the tracking point changes significantly. It is also computationally simple and highly parallel, which facilitates accelerated computation and makes it widely applicable to high-speed, high-precision real-time tracking in optoelectronic countermeasure systems.

Description

Sub-pixel level target tracking method based on feature matching
Technical Field
The invention relates to the technical field of image processing and target tracking, and in particular to a high-precision, feature-based target tracking method.
Background
Target tracking has been a popular research direction in both academia and practical applications over the past few decades. Existing methods fall broadly into grayscale-based and feature-based tracking. Grayscale-based tracking algorithms are mainly divided into template matching and clustering; both are computationally simple and suited to applications requiring real-time tracking, but they suffer from large matching errors and poor robustness. Among feature-based methods, target tracking based on online learning is prone to drift and degradation and has poor real-time performance. Target tracking based on deep learning is a current research hotspot and achieves high tracking accuracy, but it requires large-scale training for each application scenario and generally cannot meet real-time requirements. Among traditional feature-based methods, tracking by feature matching describes the target with features and tracks it by matching the description vectors. Because the computation contains a large amount of parallelism, such methods can achieve high real-time performance. However, they generally track the target as a whole; when tracking a single point on the target, drift often occurs and the target may even be lost.
Disclosure of Invention
Technical problem to be solved
In view of the above, an object of the present invention is to provide a sub-pixel level target tracking method based on feature matching, so as to at least partially overcome the above-mentioned shortcomings of the prior art.
(II) technical scheme
The invention provides a sub-pixel level target tracking method based on feature matching, which comprises the following steps:
selecting a tracking point of a first frame image from continuously transmitted images as a reference tracking point;
respectively processing the first frame image and the Nth frame image to obtain a feature vector of the first frame image and a feature vector of the Nth frame image, wherein N is a natural number greater than 1;
matching the feature vector of the first frame image with the feature vector of the Nth frame image to obtain a feature point pair;
and estimating a transformation matrix from the feature point pairs, and multiplying the transformation matrix with the reference tracking point to obtain a new tracking point, completing the update of the tracking point.
Wherein the step of selecting the tracking point of the first frame image from the continuously transmitted images as the reference tracking point comprises:
determining the starting-point ranges of the target contour in the X direction and the Y direction;
in the direction of X and Y with the larger starting-point range, selecting the midpoint between two contour points as a candidate tracking point;
in the direction of X and Y with the smaller starting-point range, selecting a candidate tracking point every 2n pixels, where 8 ≤ 2n ≤ 10 and n is a natural number;
judging whether a corner point exists in the neighborhood of radius n pixels around each candidate tracking point, and if no corner point exists, taking the candidate tracking point as the reference tracking point; if multiple candidate tracking points qualify, one is randomly selected as the reference tracking point.
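The selection rule above can be made concrete with a short sketch. The following Python/NumPy fragment is one possible reading of the rule (the original wording is ambiguous about which axis is stepped along); the function and parameter names are illustrative, and the contour and corner arrays are assumed to come from the contour-extraction and corner-detection steps described later:

```python
import numpy as np

def select_reference_point(contour, corners, n=5):
    """Pick a reference tracking point: sample candidates along the contour,
    then keep only those with no corner within a radius-n neighborhood.

    contour: (K, 2) array of (x, y) contour points; corners: (M, 2) array.
    n: half-spacing (2n = 10 pixels here) and the neighborhood radius.
    """
    xs, ys = contour[:, 0].astype(float), contour[:, 1].astype(float)
    # Identify the direction with the larger starting-point range.
    if xs.ptp() >= ys.ptp():
        span, step_axis = xs, ys   # midpoints taken along X, stepping along Y
    else:
        span, step_axis = ys, xs
    candidates = []
    # Every 2n pixels along the smaller-range direction, take the midpoint of
    # the two contour points in the larger-range direction.
    for v in np.arange(step_axis.min(), step_axis.max() + 1, 2 * n):
        hits = span[np.abs(step_axis - v) < 1]
        if len(hits) >= 2:
            mid = (hits.min() + hits.max()) / 2.0
            candidates.append((mid, v) if span is xs else (v, mid))
    # A candidate qualifies only if no corner lies inside its neighborhood.
    good = [c for c in candidates
            if np.all(np.linalg.norm(corners - np.asarray(c), axis=1) > n)]
    return good[np.random.randint(len(good))] if good else None
```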
Wherein the step of obtaining the feature vector of the first frame image and the feature vector of the Nth frame image comprises:
respectively extracting outlines of the first frame image and the Nth frame image to obtain an image block of the first frame image and an image block of the Nth frame image, and determining the sizes of the image block of the first frame image and the image block of the Nth frame image;
respectively carrying out sub-pixel corner detection on an image block of the first frame image and an image block of the Nth frame image to obtain corner coordinates of the first frame image and corner coordinates of the Nth frame image;
and according to the coordinates of the corner point of the first frame image and the coordinates of the corner point of the Nth frame image, performing binary feature vector extraction on the corner point of the first frame image and the corner point of the Nth frame image to obtain the feature vector of the corner point of the first frame image and the feature vector of the corner point of the Nth frame image.
Wherein the contour extraction step comprises: thresholding, erosion, dilation, and contour extraction of the image; the sub-pixel corner detection applies the Shi-Tomasi corner detection method to the image blocks; and the binary feature vector extraction adopts the FREAK feature extraction method, with the feature vector set to 256 or 512 bits.
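As a sketch of how the corner detection and descriptor extraction could be realized with standard tools, the fragment below uses OpenCV's Shi-Tomasi detector (`goodFeaturesToTrack`) and the FREAK descriptor from opencv-contrib. The maxCorners, minDistance, and keypoint-size values are assumptions, not values from the patent, and `cornerSubPix` stands in here for the quadratic-fitting refinement detailed in the embodiment (a quadratic-fit sketch appears later):

```python
import cv2
import numpy as np

def extract_features(block):
    """Shi-Tomasi corners refined to sub-pixel accuracy, then FREAK binary
    descriptors (512 bits = 64 bytes each) computed at the corner locations.

    block: 8-bit grayscale image block cropped around the target contour.
    """
    corners = cv2.goodFeaturesToTrack(block, maxCorners=200,
                                      qualityLevel=0.01, minDistance=3,
                                      blockSize=5)
    if corners is None:
        return np.empty((0, 2), np.float32), None
    # Iterative sub-pixel refinement of the integer corner locations.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    corners = cv2.cornerSubPix(block, corners, (5, 5), (-1, -1), criteria)
    kps = [cv2.KeyPoint(float(x), float(y), 7.0)
           for x, y in corners.reshape(-1, 2)]
    freak = cv2.xfeatures2d.FREAK_create()      # requires opencv-contrib
    kps, desc = freak.compute(block, kps)       # drops border keypoints
    pts = np.float32([kp.pt for kp in kps])
    return pts, desc
```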
In the step of matching the feature vector of the first frame image with the feature vector of the Nth frame image, the Hamming distance is used as the matching criterion, and the distance threshold is set to 8-12.
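A matching step along these lines could look as follows, using OpenCV's brute-force Hamming matcher; the cross-check option and the default threshold of 10 (within the stated 8-12 range) are choices made for this sketch:

```python
import cv2

def match_features(desc1, descN, dist_threshold=10):
    """Brute-force Hamming matching between the descriptors of frame 1 and
    frame N. Cross-checking keeps only mutually best pairs; a pair is then
    accepted only if fewer than dist_threshold of its bits differ.
    """
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = bf.match(desc1, descN)
    return [m for m in matches if m.distance < dist_threshold]
```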
In the step of estimating the feature point pairs to obtain the transformation matrix, the RANSAC algorithm is used to estimate the transformation matrix from the feature point pairs.
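The estimation and point-update step might be sketched as below. The patent does not name the transform model, so a four-parameter similarity/affine model (`estimateAffinePartial2D`) is assumed here; the reprojection threshold is likewise an assumption:

```python
import cv2
import numpy as np

def update_tracking_point(pts1, ptsN, ref_point):
    """RANSAC-estimate a 2x3 transformation matrix from the matched corner
    pairs and map the reference tracking point through it.

    pts1, ptsN: (K, 2) float arrays of matched coordinates in frames 1 and N.
    ref_point: (x, y) reference tracking point selected in frame 1.
    """
    M, inliers = cv2.estimateAffinePartial2D(pts1, ptsN, method=cv2.RANSAC,
                                             ransacReprojThreshold=1.0)
    if M is None:
        return None
    # The 'multiplication' step: M times the homogeneous point [x, y, 1].
    return M @ np.array([ref_point[0], ref_point[1], 1.0])
```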
After the tracking point is updated, the method further comprises updating the reference tracking point, which comprises the following steps:
setting a threshold value P, and when N equals P, judging whether the tracking points of the (P-1)th, Pth and (P+1)th frame images are consistent;
if the tracking points of the (P-1)th, Pth and (P+1)th frame images are consistent, setting the tracking point of the Pth frame image as the new reference tracking point;
if the tracking points of the (P-1)th, Pth and (P+1)th frame images are not consistent, setting P = P+1, returning to the judging step, and judging whether the tracking points of the new (P-1)th, Pth and (P+1)th frame images are consistent, repeating cyclically.
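The reference-point update could be sketched as the sliding three-frame consistency check below; the patent does not define "consistent" numerically, so a small coordinate tolerance `tol` is assumed:

```python
def update_reference(track_history, P, tol=0.5):
    """Slide a three-frame window from index P until the tracking points of
    frames P-1, P, and P+1 agree to within tol pixels; then promote frame P's
    tracking point to the new reference. Returns (new_reference, P) or
    (None, P) if the end of the history is reached without agreement.
    """
    while P + 1 < len(track_history):
        a, b, c = track_history[P - 1], track_history[P], track_history[P + 1]
        if (max(abs(a[0] - b[0]), abs(a[1] - b[1]),
                abs(b[0] - c[0]), abs(b[1] - c[1])) <= tol):
            return track_history[P], P
        P += 1
    return None, P
```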
(III) advantageous effects
According to the technical scheme, the sub-pixel level target tracking method based on feature matching has the following beneficial effects:
(1) In the sub-pixel level target tracking method based on feature matching provided by the invention, the new tracking point is obtained by applying the transformation matrix estimated from the feature point pairs to the reference tracking point, so that stable tracking is maintained even when the target's appearance changes significantly near the tracking point, without affecting the robustness of the algorithm.
(2) The method improves tracking precision through sub-pixel level feature point detection and tracking point matching.
(3) The method adopts sub-pixel corner detection and binary feature vector extraction, which reduces the amount of computation, allows a hardware implementation, and enables faster tracking.
Drawings
Fig. 1 is a schematic diagram of a sub-pixel level target tracking method based on feature matching according to the present invention.
Fig. 2 is a schematic diagram of a sub-pixel level target tracking method based on feature matching according to an embodiment of the invention.
FIG. 3 is a diagram illustrating a candidate tracking point selection method according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Fig. 1 illustrates the sub-pixel level target tracking method based on feature matching according to the present invention. As shown in fig. 1, the tracking method includes:
s101: selecting a tracking point of a first frame image from continuously transmitted images as a reference tracking point;
s102: respectively processing the first frame image and the Nth frame image to obtain a feature vector of the first frame image and a feature vector of the Nth frame image, wherein N is a natural number greater than 1;
s103: matching the feature vector of the first frame image with the feature vector of the Nth frame image to obtain a feature point pair;
s104: estimating a transformation matrix from the feature point pairs, and multiplying the transformation matrix with the reference tracking point to obtain a new tracking point, completing the update of the tracking point.
The method for selecting the tracking point of the first frame image from the continuously transmitted images as the reference tracking point comprises the following steps: determining the starting-point ranges of the target contour in the X direction and the Y direction; in the direction of X and Y with the larger starting-point range, selecting the midpoint between two contour points as a candidate tracking point; in the direction with the smaller starting-point range, selecting a candidate tracking point every 2n pixels, where 8 ≤ 2n ≤ 10 and n is a natural number; judging whether a corner point exists in the neighborhood of radius n pixels around each candidate tracking point, and if no corner point exists, taking the candidate tracking point as the reference tracking point; if multiple candidate tracking points qualify, one is randomly selected as the reference tracking point.
The step of obtaining the feature vector of the first frame image and the feature vector of the Nth frame image comprises: extracting the contours of the first frame image and the Nth frame image respectively to obtain an image block of each, and determining the sizes of the two image blocks; performing sub-pixel corner detection on each image block to obtain the corner coordinates of the first frame image and of the Nth frame image; and, according to these corner coordinates, extracting binary feature vectors at the corner points of both images to obtain the feature vectors of the corner points of the first frame image and of the Nth frame image. The contour extraction step comprises thresholding, erosion, dilation, and contour extraction of the image; the sub-pixel corner detection applies the Shi-Tomasi corner detection method to the image blocks; and the binary feature vector extraction adopts the FREAK feature extraction method, with the feature vector set to 256 or 512 bits. Because sub-pixel level feature point detection and tracking point matching are adopted, the tracking precision is high; and combining a fast feature point detection method with a binary feature description vector makes the algorithm well suited to hardware implementation.
In the step of matching the feature vector of the first frame image with the feature vector of the Nth frame image, the Hamming distance is used as the matching criterion, with the distance threshold set to 8-12; in the step of estimating the feature point pairs to obtain the transformation matrix, the RANSAC algorithm is used.
After the method completes the updating of the tracking point, the method also comprises the step of updating the reference tracking point, which comprises the following steps:
setting a threshold value P, and when N equals P, judging whether the tracking points of the (P-1)th, Pth and (P+1)th frame images are consistent;
if the tracking points of the (P-1)th, Pth and (P+1)th frame images are consistent, setting the tracking point of the Pth frame image as the new reference tracking point;
if the tracking points of the (P-1)th, Pth and (P+1)th frame images are not consistent, setting P = P+1, returning to the judging step, and judging whether the tracking points of the new (P-1)th, Pth and (P+1)th frame images are consistent, repeating cyclically.
Because the tracking point is updated in this way, stable tracking is maintained even when the target's appearance changes significantly near the tracking point, without affecting the robustness of the algorithm.
To explain the contents of the present invention in detail, embodiments are described with reference to the accompanying drawings. Fig. 2 is a schematic diagram of a sub-pixel level target tracking method based on feature matching according to an embodiment of the invention. FIG. 3 is a diagram illustrating a candidate tracking point selection method according to an embodiment of the invention. With reference to fig. 2 and fig. 3, the tracking method of the present embodiment includes 9 steps:
(1) extracting the target contour of the first frame image, namely the current frame input image 1, and determining the size of the image block 3 to be subjected to feature extraction;
(2) performing sub-pixel level Shi-Tomasi corner detection on the image block 3;
(3) automatically selecting the tracking point from the candidate tracking points 15 according to the distribution of the corner points;
(4) extracting FREAK binary feature vectors at the corner points;
(5) extracting the target contour of the 2nd frame image, namely the next frame input image 7, and determining the size of the image block 9 to be subjected to feature extraction;
(6) performing sub-pixel level Shi-Tomasi corner detection on the image block 9;
(7) extracting FREAK binary feature vectors at the corner points;
(8) matching the feature vectors of the corner points in the current frame image block and the next frame image block;
(9) estimating the transformation matrix from the matched feature point pairs and updating the tracking point.
As shown in fig. 2 and 3, the target contour extraction in steps 1 and 5 consists of image thresholding, erosion, dilation, and contour extraction. Specifically: thresholding sets a threshold value, such as 20, to convert the image into a binary image; a square structuring element with a radius of 2 is then selected and an erosion operation is applied to the binary image to remove noise, followed by dilation; finally, edge extraction is performed on the processed binary image, for which Canny, Sobel, or other edge detection operators may be used.
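A hedged OpenCV sketch of this preprocessing chain is shown below; the Canny thresholds are assumptions (the text only names Canny and Sobel as options), while the binarization threshold of 20 and the 5 × 5 square structuring element (radius 2) come from the text:

```python
import cv2
import numpy as np

def extract_target_contour(gray, thresh=20):
    """Thresholding, erosion, dilation, then edge/contour extraction, as in
    steps 1 and 5. Returns the largest contour found, or None.
    """
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)        # square element of radius 2
    binary = cv2.erode(binary, kernel)        # suppress isolated noise
    binary = cv2.dilate(binary, kernel)       # restore the target's extent
    edges = cv2.Canny(binary, 50, 150)        # Sobel would also work here
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    return max(contours, key=cv2.contourArea) if contours else None
```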
In addition, the size of the image block subjected to feature point detection in each frame is determined by the extracted target contour: the extent of the target contour's starting coordinates in the x and y directions, plus a margin of 10 pixels (which may be a range, such as 10-20), gives the size of the image block.
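In code, this sizing rule reduces to a bounding rectangle plus margin; a minimal sketch, assuming OpenCV contours:

```python
import cv2

def crop_block(gray, contour, margin=10):
    """Crop the feature-extraction image block: the contour's bounding
    rectangle expanded by a margin of 10 (the text allows 10-20) pixels.
    """
    x, y, w, h = cv2.boundingRect(contour)
    return gray[max(y - margin, 0):y + h + margin,
                max(x - margin, 0):x + w + margin]
```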
In the Shi-Tomasi corner detection of steps 2 and 6, the correlation matrix is filtered over a 5 × 5 pixel window, the corner quality threshold is set to 0.01, and the sub-pixel level feature point coordinates are obtained by quadratic polynomial fitting.
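The quadratic-fit refinement can be illustrated as below: a quadratic surface is least-squares fitted to the 3 × 3 neighborhood of each integer corner in the Shi-Tomasi response map, and its stationary point gives the sub-pixel coordinates. The patent does not spell out the fit, so this standard formulation is an assumption:

```python
import numpy as np

def subpixel_peak(response, x, y):
    """Refine an integer corner (x, y) on the corner-response map by fitting
    r(u, v) = a*u^2 + b*v^2 + c*u*v + d*u + e*v + f over offsets -1..1 and
    solving grad r = 0 for the sub-pixel peak.
    """
    patch = response[y - 1:y + 2, x - 1:x + 2].astype(np.float64)
    u, v = np.meshgrid([-1, 0, 1], [-1, 0, 1])   # u: x-offset, v: y-offset
    A = np.stack([u**2, v**2, u * v, u, v, np.ones_like(u)],
                 axis=-1).reshape(9, 6)
    a, b, c, d, e, _ = np.linalg.lstsq(A, patch.reshape(9), rcond=None)[0]
    du, dv = np.linalg.solve([[2 * a, c], [c, 2 * b]], [-d, -e])
    return x + du, y + dv
```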
In step 3, when selecting the candidate tracking points, the starting-point ranges of the target contour in the X and Y directions are determined first; in the direction with the larger starting-point range, the midpoint between two contour points is selected as a candidate tracking point; in the direction with the smaller starting-point range, one candidate tracking point is selected every 10 pixels; then it is judged whether a corner point exists within a neighborhood of radius 5 pixels around each candidate tracking point, and if none exists, the candidate tracking point is taken as the reference tracking point; if multiple candidate tracking points qualify, one is randomly selected as the reference tracking point. In steps 4 and 7, the FREAK feature vector is 256 or 512 bits; with a 256-bit feature vector, faster tracking can be achieved.
In step 8, feature matching uses the Hamming distance as the matching criterion, with the threshold set to 8. Here, the Hamming distance is the number of differing bits between two vectors; for example, the Hamming distance between vector 1011 and vector 1001 is 1. Specifically, the Hamming distance between each feature vector extracted from the current frame image and each feature vector extracted from the next frame image is computed: when the Hamming distance is smaller than the threshold, the pair is recorded as a matching point pair; when the Hamming distance is greater than the threshold, the pair is discarded. If the same feature point matches several feature points below the threshold, the pair with the minimum Hamming distance is kept.
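The distance itself is a bit count over the XOR of the two packed descriptors, which is what `BFMatcher` computes internally for `NORM_HAMMING`; a tiny sketch reproducing the 1011-vs-1001 example:

```python
import numpy as np

def hamming(d1, d2):
    """Hamming distance between two packed binary descriptors (uint8 arrays):
    the number of differing bits."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

# The example from the text: 0b1011 vs 0b1001 differ in exactly one bit.
assert hamming(np.array([0b1011], np.uint8), np.array([0b1001], np.uint8)) == 1
```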
In step 9, the transformation matrix is estimated with the RANSAC algorithm. When updating the tracking point of the current frame, the new point is not computed from the matched tracking point of the previous frame but from the reference tracking point. Meanwhile, once the update threshold of the reference tracking point is reached and the first 10 matched point pairs remain unchanged between the matches before and after it, the reference tracking point is updated.
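Putting steps (1)-(9) together, a compact driver loop could look as follows. It composes the sketches above; the offset between block and full-frame coordinates is omitted for brevity, and the reference-point update threshold is left out, so this is an outline under stated assumptions rather than the patented procedure itself:

```python
import numpy as np

def track(frames, n=5):
    """Track one point across a grayscale sequence: extract features of
    frame 1 once, then match every later frame against it and map the
    reference point through the RANSAC-estimated transformation matrix.
    """
    contour = extract_target_contour(frames[0])
    block1 = crop_block(frames[0], contour)
    pts1, desc1 = extract_features(block1)
    ref = select_reference_point(contour.reshape(-1, 2), pts1, n=n)
    history = [ref]
    for frame in frames[1:]:
        blockN = crop_block(frame, extract_target_contour(frame))
        ptsN, descN = extract_features(blockN)
        pairs = match_features(desc1, descN)
        src = np.float32([pts1[m.queryIdx] for m in pairs])
        dst = np.float32([ptsN[m.trainIdx] for m in pairs])
        history.append(update_tracking_point(src, dst, ref))
    return history
```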
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A sub-pixel level target tracking method based on feature matching comprises the following steps:
selecting a tracking point of a first frame image from continuously transmitted images as a reference tracking point;
respectively processing the first frame image and the Nth frame image to obtain a feature vector of the first frame image and a feature vector of the Nth frame image, wherein N is a natural number greater than 1;
matching the feature vector of the first frame image with the feature vector of the Nth frame image to obtain a feature point pair;
estimating a transformation matrix from the feature point pairs, and multiplying the transformation matrix with the reference tracking point to obtain a new tracking point, completing the update of the tracking point;
wherein the step of selecting the tracking point of the first frame image as the reference tracking point in the continuously transmitted images comprises:
determining the starting point ranges of the target contour in the X direction and the Y direction;
in the direction of X and Y with the larger starting-point range, selecting the midpoint between two contour points as a candidate tracking point;
in the direction of X and Y with the smaller starting-point range, selecting a candidate tracking point every 2n pixels, where 8 ≤ 2n ≤ 10 and n is a natural number;
judging whether a corner point exists in the neighborhood of radius n pixels around each candidate tracking point, and if no corner point exists, taking the candidate tracking point as the reference tracking point; if multiple candidate tracking points qualify, one is randomly selected as the reference tracking point.
2. The tracking method according to claim 1, wherein the step of obtaining the feature vector of the first frame image and the feature vector of the nth frame image comprises:
respectively extracting outlines of the first frame image and the Nth frame image to obtain an image block of the first frame image and an image block of the Nth frame image, and determining the sizes of the image block of the first frame image and the image block of the Nth frame image;
respectively carrying out sub-pixel corner detection on an image block of the first frame image and an image block of the Nth frame image to obtain corner coordinates of the first frame image and corner coordinates of the Nth frame image;
and according to the coordinates of the corner point of the first frame image and the coordinates of the corner point of the Nth frame image, performing binary feature vector extraction on the corner point of the first frame image and the corner point of the Nth frame image to obtain the feature vector of the corner point of the first frame image and the feature vector of the corner point of the Nth frame image.
3. The tracking method according to claim 2, wherein the contour extraction step comprises: image thresholding, erosion, dilation, and contour extraction.
4. The tracking method according to claim 2, wherein the sub-pixel corner detection is performed on the image block by a Shi-Tomasi corner detection method.
5. The tracking method according to claim 2, wherein the binary feature vector extraction is performed by the FREAK feature extraction method, with the feature vector set to 256 or 512 bits.
6. The tracking method according to claim 1, wherein the step of matching the feature vector of the first frame image with the feature vector of the Nth frame image uses the Hamming distance as the matching criterion, with the distance threshold set to 8-12.
7. The tracking method according to claim 1, wherein in the step of estimating the feature point pairs to obtain the transformation matrix, the RANSAC algorithm is used to estimate the transformation matrix from the feature point pairs.
8. The tracking method according to claim 1, further comprising updating the reference tracking point after the updating of the tracking point is completed.
9. The tracking method according to claim 8, wherein the updating the reference tracking point comprises:
setting a threshold value P, and when N equals P, judging whether the tracking points of the (P-1)th, Pth and (P+1)th frame images are consistent;
if the tracking points of the (P-1)th, Pth and (P+1)th frame images are consistent, setting the tracking point of the Pth frame image as the new reference tracking point;
if the tracking points of the (P-1)th, Pth and (P+1)th frame images are not consistent, setting P = P+1, returning to the judging step, and judging whether the tracking points of the new (P-1)th, Pth and (P+1)th frame images are consistent, repeating cyclically.
Application CN201910397719.6A (priority and filing date 2019-05-14): Sub-pixel level target tracking method based on feature matching; granted as CN110163894B (Active)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910397719.6A CN110163894B (en) 2019-05-14 2019-05-14 Sub-pixel level target tracking method based on feature matching


Publications (2)

Publication Number Publication Date
CN110163894A CN110163894A (en) 2019-08-23
CN110163894B true CN110163894B (en) 2021-04-06

Family

ID=67634424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910397719.6A Active CN110163894B (en) 2019-05-14 2019-05-14 Sub-pixel level target tracking method based on feature matching

Country Status (1)

Country Link
CN (1) CN110163894B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111010558B (en) * 2019-12-17 2021-11-09 浙江农林大学 Stumpage depth map generation method based on short video image
CN111179315A (en) * 2019-12-31 2020-05-19 湖南快乐阳光互动娱乐传媒有限公司 Video target area tracking method and video plane advertisement implanting method
CN112819889A (en) * 2020-12-30 2021-05-18 浙江大华技术股份有限公司 Method and device for determining position information, storage medium and electronic device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10027952B2 (en) * 2011-08-04 2018-07-17 Trx Systems, Inc. Mapping and tracking system with features in three-dimensional space
CN103400388B (en) * 2013-08-06 2016-12-28 中国科学院光电技术研究所 A kind of method utilizing RANSAC to eliminate Brisk key point error matching points pair
CN108257155B (en) * 2018-01-17 2022-03-25 中国科学院光电技术研究所 Extended target stable tracking point extraction method based on local and global coupling
WO2020014901A1 (en) * 2018-07-18 2020-01-23 深圳前海达闼云端智能科技有限公司 Target tracking method and apparatus, and electronic device and readable storage medium

Also Published As

Publication number Publication date
CN110163894A (en) 2019-08-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant