CN113689466B - Feature point-based plane tracking method and system


Publication number
CN113689466B
CN113689466B (application CN202110869838.4A)
Authority
CN
China
Prior art keywords
corner
frame
image
effective area
point
Prior art date
Legal status: Active (an assumption, not a legal conclusion)
Application number
CN202110869838.4A
Other languages
Chinese (zh)
Other versions
CN113689466A (en)
Inventor
曾锐
林汉权
林杰兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Draft Xiamen Information Service Co ltd
Gaoding Xiamen Technology Co Ltd
Original Assignee
Gaoding Xiamen Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gaoding Xiamen Technology Co Ltd filed Critical Gaoding Xiamen Technology Co Ltd
Priority to CN202110869838.4A
Publication of CN113689466A
Application granted
Publication of CN113689466B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/215: Motion-based segmentation
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a feature point-based plane tracking method and system, comprising the following steps. A video acquisition step: acquiring an initial video file. A corner point acquisition step: acquiring a target area to be tracked in an initial frame, cropping out an effective area, and extracting the corner points of the effective area. A screening step: associating the corner points of the effective area in the (i-1)-th frame image with the corresponding corner points in the i-th frame as corner point pairs, screening out qualified corner point pairs as final matching point pairs, and calculating a final homography matrix from the final matching point pairs. An optimization step: calculating, by an image algorithm, the cross-correlation matching values of the corresponding extended image blocks in the i-th frame image, and defining the coordinates of the pixel with the maximum cross-correlation matching value as the plane-tracked coordinates.

Description

Feature point-based plane tracking method and system
Technical Field
The invention relates to the field of plane tracking, in particular to a plane tracking method and system based on feature points.
Background
Visual tracking based on a planar pattern is referred to as plane tracking for short. When plane tracking is performed in a video, tracking may fail because the target area moves too fast between video frames, and the tracking target then needs to be recovered promptly.
However, while recovering the tracking target, conditions such as object shake, deformation, and color change within the plane are often encountered, making it difficult to obtain an accurate homography matrix; a stable tracking effect therefore cannot be achieved, and the robustness of tracking suffers.
To address these problems in the prior art, the invention provides a feature point-based plane tracking method and system.
Disclosure of Invention
In view of the problems in the prior art, the present invention provides a method and a system for plane tracking based on feature points, which can effectively solve the problems in the prior art.
The technical scheme of the invention is as follows:
a plane tracking method based on feature points comprises the following steps:
a video acquisition step: acquiring an initial video file, and designating a specific key frame image of the initial video file as an initial frame;
a corner point acquisition step: acquiring a target area to be tracked in the initial frame, cropping out an effective area, and extracting the corner points of the effective area;
a screening step: associating the corner points of the effective area in the (i-1)-th frame image with the corresponding corner points in the i-th frame as corner point pairs, screening out qualified corner point pairs as final matching point pairs, and calculating a final homography matrix from the final matching point pairs;
an optimization step: performing an alignment operation on the final matching point pairs using the final homography matrix, obtaining in the (i-1)-th frame an image block of a first preset size centered on each final matching point, mapping each pixel in the image block to the i-th frame image through the final homography matrix, calculating the cross-correlation matching values of the corresponding extended image block in the i-th frame image, and defining the coordinates of the pixel with the maximum cross-correlation matching value as the plane-tracked coordinates.
The screening step specifically comprises:
a first screening step: predicting, in the i-th frame, the corresponding corner points of the effective area of the (i-1)-th frame image, associating the corner points of the effective area in the (i-1)-th frame image with the corresponding corner points in the i-th frame as corner point pairs, screening out the qualified corner point pairs, and calculating a first homography matrix from them;
a second screening step: applying the forward transform of the first homography matrix to the corner points of the effective area of the (i-1)-th frame image and the inverse transform to the result, screening the qualified corner point pairs to obtain first matching point pairs, and calculating a second homography matrix from the first matching point pairs;
a third screening step: applying the forward transform of the second homography matrix to the corner points of the effective area of the (i-1)-th frame image, screening the first matching point pairs to obtain the final matching point pairs, and calculating the final homography matrix from the final matching point pairs.
Further, the first screening step specifically comprises:
predicting, in the i-th frame, the corresponding corner points of the effective area of the (i-1)-th frame image; associating the corner points of the effective area in the (i-1)-th frame image with the corresponding corner points in the i-th frame as corner point pairs; obtaining the motion track of each corner point between the (i-1)-th and i-th frame images; calculating the Euclidean distance of each motion track; defining the corner point pairs whose Euclidean distance is smaller than or equal to a first preset threshold as qualified corner point pairs; and counting the number of such pairs;
if the number of corner point pairs whose motion-track Euclidean distance is smaller than or equal to the first preset threshold is larger than a second preset threshold, the tracking is judged to be successful, and the first homography matrix is calculated from the qualified corner point pairs.
Further, before the corner points of the effective area of the (i-1)-th frame image are predicted at their corresponding positions in the i-th frame,
the (i-1)-th and i-th frame images are acquired and the corner points of the effective area in each are extracted; the corresponding corner points in the i-th frame are then predicted from the (i-1)-th frame image, the i-th frame image, and the corner points of the effective area of the (i-1)-th frame image.
Further, the first preset threshold is 3-5, and the second preset threshold is 8-12.
Further, the corner points of the effective area of the (i-1)-th frame image are predicted at their corresponding positions in the i-th frame by the KLT sparse optical flow method.
Further, in the second screening step, the screening of qualified corner point pairs is specifically:
screening the corner point pairs remaining in the (i-1)-th frame image.
Further, in the third screening step, screening the first matching point pairs to obtain the final matching point pairs is specifically:
acquiring, in the (i-1)-th frame, an image block to be calculated of a first preset size centered on each first matching point, calculating the variance of the image block, and screening out the first matching point pairs whose variance is smaller than a threshold to obtain the final matching point pairs.
Further, in the optimization step, calculating the cross-correlation matching values of the corresponding extended image block in the i-th frame image by an image algorithm is specifically:
acquiring, in the i-th frame image, an extended image block of a second preset size centered on the corresponding matching point, and calculating the cross-correlation matching value pixel by pixel, line by line, within the extended image block, wherein the second preset size is larger than the first preset size.
Further, the first predetermined size is 6 × 6 to 10 × 10, and the second predetermined size is 12 × 12 to 16 × 16.
Further provided is a feature point-based plane tracking system, comprising the following modules:
a video acquisition module: used for acquiring an initial video file and designating a specific key frame image of the initial video file as the initial frame;
a corner point acquisition module: used for acquiring a target area to be tracked in the initial frame, cropping out an effective area, and extracting the corner points of the effective area;
a screening module: used for associating the corner points of the effective area in the (i-1)-th frame image with the corresponding corner points in the i-th frame as corner point pairs, screening out qualified corner point pairs as final matching point pairs, and calculating a final homography matrix from the final matching point pairs;
an optimization module: used for performing an alignment operation on the final matching point pairs using the final homography matrix, obtaining in the (i-1)-th frame an image block of a first preset size centered on each final matching point, mapping each pixel in the image block to the i-th frame image through the final homography matrix, calculating by an image algorithm the cross-correlation matching values of the corresponding extended image block in the i-th frame image, and defining the coordinates of the pixel with the maximum cross-correlation matching value as the plane-tracked coordinates.
Accordingly, the present invention provides the following effects and/or advantages:
the method obtains the angular points of the effective area in the initial frame, screens the angular points for multiple times to finally obtain stable final matching point pairs, and optimizes the positions of the matching point pairs, so that the most relevant coordinates are obtained as the tracked points, the offset error of the point pairs is reduced, and the stability of plane tracking is enhanced.
The invention tracks the position movement of the angular points through a tracking algorithm, thereby eliminating the angular points which are moved too much; points with overlarge offset can be removed through the positive operation and the inverse operation of the first homography matrix and the match algorithm; and then the second homography matrix is combined with the variance of the corresponding image block to filter out the point with overlarge variance, thereby providing more stable tracking for the subsequent steps, reducing the offset error and improving the practicability of the invention.
According to the method, through the optimization step, each pixel in the image block is mapped to the ith frame image by the final homography matrix, the cross-correlation matching value of the corresponding extended image block in the ith frame image is calculated through an image algorithm, the coordinate of the pixel with the maximum cross-correlation matching value is defined as the coordinate tracked by a plane, the most relevant pixel can be found, the correlation is greatly improved, and the tracking effect is improved.
The invention has high stability and robustness, and prevents the situations of exit and the like caused by the situations of point loss and the like in the calculation process.
It is to be understood that both the foregoing general description and the following detailed description of the present invention are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
Drawings
FIG. 1 is a schematic flow diagram of the process.
Fig. 2 is an exemplary diagram of an initial frame that needs to be tracked by the method.
Fig. 3 is a schematic diagram of a target area to be tracked.
Fig. 4 is a schematic view of the effective area of the target to be tracked.
FIG. 5 is a schematic diagram of the first screening step.
FIG. 6 is a schematic diagram of the second screening step.
Fig. 7 is a schematic diagram of an image block and an extended image block.
Fig. 8 is a schematic diagram of the region of the first cross-correlation match.
Fig. 9 is a schematic diagram of the region of the second cross-correlation match.
Detailed Description
To facilitate understanding of those skilled in the art, the structure of the present invention will now be described in further detail by way of examples in conjunction with the accompanying drawings:
referring to fig. 1, a feature point-based plane tracking method includes the following steps:
s1, video acquisition step: acquiring an initial video file, and designating a specific key frame image of the initial video file as an initial frame;
s2, corner point acquisition step: and acquiring a target area to be tracked in the initial frame, cutting out an effective area, and extracting an angular point of the effective area. The target area to be tracked is an area selected by a user.
Fig. 2 shows the initial frame to be tracked. The method selects the upper-body region of the girl in the video as the plane to be tracked. To eliminate interference from the background region, only the image inside that region is cropped out in preparation for subsequent feature extraction, because the feature points of the background areas are sparse or absent. The key chest-pattern region shown in fig. 3 is finally obtained as the target area to be tracked; the frame in fig. 3 indicates that the image inside it is the target area. The effective area tracked by the invention has a certain complexity: for example, the pattern differs from the base color of the clothes, and the textures also differ. The effective area is obtained from the region the user inputs for cropping, as shown in fig. 4. The result of extracting the corner points of the effective area is shown in fig. 5, where the small square frames added in fig. 5 are the corner points; their number is large, generally greater than 20.
Corner points are extreme points, i.e., points whose properties are particularly prominent in some respect. A corner point may be the intersection of two lines, or a point lying on two adjacent objects that have different principal directions.
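The patent does not prescribe a particular corner detector. As an illustrative sketch only, a minimal Harris-style detector can be written in Python with numpy; the function name `harris_corners` and all thresholds here are assumptions for illustration, not part of the invention:

```python
import numpy as np

def harris_corners(img, k=0.04, window=3, rel_thresh=0.01):
    """Minimal Harris-style corner detector (illustrative only).

    Returns (x, y) pixel coordinates whose Harris response exceeds
    rel_thresh * (max response)."""
    Iy, Ix = np.gradient(img.astype(float))          # image gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box_sum(a):                                  # window x window box filter
        pad = window // 2
        ap = np.pad(a, pad)
        out = np.zeros_like(a)
        for dy in range(window):
            for dx in range(window):
                out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box_sum(Ixx), box_sum(Iyy), box_sum(Ixy)
    # Harris response: det(M) - k * trace(M)^2 of the structure tensor M
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
    ys, xs = np.where(R > rel_thresh * R.max())
    return list(zip(xs.tolist(), ys.tolist()))

# A white square on a black background responds strongly at its four vertices
demo = np.zeros((20, 20))
demo[5:15, 5:15] = 1.0
corners = harris_corners(demo)
```

In practice a library detector with non-maximum suppression would typically be preferred; this sketch only shows the structure-tensor idea behind corner extraction.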
S3, a screening step: associating the corner points of the effective area in the (i-1)-th frame image with the corresponding corner points in the i-th frame as corner point pairs, screening out qualified corner point pairs as final matching point pairs, and calculating a final homography matrix from the final matching point pairs.
The screening steps are specifically as follows:
a first screening step: acquiring the (i-1)-th and i-th frame images and extracting the corner points of the effective area in each; predicting, by the KLT sparse optical flow method and from the corner points of the effective area of the (i-1)-th frame image, their corresponding positions in the i-th frame; associating the corner points of the effective area in the (i-1)-th frame image with the corresponding corner points in the i-th frame as corner point pairs; obtaining the motion track of each corner point between the two frames; calculating the Euclidean distance of each motion track, where a corner point pair whose Euclidean distance is smaller than or equal to a first preset threshold is defined as a qualified corner point pair; and counting the number of qualified corner point pairs,
wherein the first preset threshold is 3-5;
if the number of corner point pairs whose motion-track Euclidean distance is smaller than or equal to the first preset threshold is larger than a second preset threshold of 8-12, the tracking is judged to be successful, and the first homography matrix is calculated from the qualified corner point pairs.
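The distance test of the first screening step can be sketched in numpy, assuming the point pairs have already been predicted (e.g. by KLT); the function name and defaults are hypothetical, with the defaults taken from the 3-5 and 8-12 threshold ranges above:

```python
import numpy as np

def filter_corner_pairs(pts_prev, pts_curr, dist_thresh=4.0, min_pairs=10):
    """Keep corner pairs whose motion-track Euclidean length is within
    dist_thresh; tracking counts as successful only if more than
    min_pairs survive."""
    pts_prev = np.asarray(pts_prev, dtype=float)
    pts_curr = np.asarray(pts_curr, dtype=float)
    track_len = np.linalg.norm(pts_curr - pts_prev, axis=1)  # Euclidean distance
    keep = track_len <= dist_thresh
    success = int(keep.sum()) > min_pairs
    return pts_prev[keep], pts_curr[keep], success
```

For example, twelve pairs that move by about one pixel pass the test, while three runaway tracks are discarded, and tracking is still judged successful.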
Since the information of the initial frame is known, tracking next proceeds to locate the key points of the effective area in the i-th frame. The KLT sparse optical flow method is used for tracking and prediction: the moved positions of the corner points in the i-th frame are predicted from the corner points of the (i-1)-th frame, and some unqualified points are filtered out by a threshold on the offset error between corresponding points of the two frames. First, the corner points of the effective area of the (i-1)-th frame image are predicted by the KLT sparse optical flow method to obtain their positions in the i-th frame; the corner points of the (i-1)-th frame image are then connected to the corresponding corner points of the i-th frame image, yielding a series of line segments that are the motion tracks of the corner points. In this embodiment, the KLT sparse optical flow method is adopted directly from the prior art. Next, the lengths of the motion tracks are calculated; Euclidean distance is adopted in this embodiment. If the Euclidean distance of a motion track is too long, the tracking of that corner point is considered erroneous and the corner point is deleted, making subsequent tracking more stable. If the Euclidean distances of most motion tracks are too large, the tracking is considered failed and the method exits. Finally, the first homography matrix is calculated from the qualified corner point pairs. "Homography" is a term in projective geometry, also known as projective transformation; the homography matrix describes a projective mapping induced by a plane and is mainly used to transform an image from one view to another.
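The patent does not state how the homography matrix is computed from the qualified pairs. One standard possibility, given here only as a sketch under that assumption, is the Direct Linear Transform (DLT) solved by SVD:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: least-squares 3x3 homography mapping
    src -> dst from at least 4 point correspondences, solved via SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null-space vector holds the 9 entries
    return H / H[2, 2]              # fix the scale ambiguity
```

A production system would normally wrap such an estimator in RANSAC to reject outlier pairs; the screening steps of the method play a comparable outlier-rejection role.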
In this embodiment, if the number of corner point pairs whose motion-track Euclidean distance is smaller than or equal to 4 is greater than 10, the tracking is judged to be successful. In other embodiments, the first preset threshold may be any value from 3 to 5, and the second preset threshold any value from 8 to 12.
The right image of fig. 5 shows the result of the first screening step of the corner points, and it can be seen that the corner points in the box of fig. 5 are screened out.
a second screening step: applying the forward transform of the first homography matrix to the corner points of the effective area of the (i-1)-th frame image and the inverse transform to the result, screening the corner point pairs remaining in the (i-1)-th frame image to obtain first matching point pairs, and calculating a second homography matrix from the first matching point pairs.
The points of the (i-1)-th frame are transformed to the i-th frame using the calculated first homography matrix, and the resulting i-th frame points are then inverse-transformed back to the previous frame. Since normal matching points lie within the image space of both frames, points that are transformed beyond the boundaries of the i-th frame image are screened out in combination with a matching check. Points with excessive displacement can thus be filtered out.
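One possible reading of this forward/inverse screening, sketched with hypothetical names (`apply_h`, `screen_by_bounds`) in numpy: forward-map the (i-1)-th frame corners, inverse-map them back, and keep only the points whose images stay inside both frames.

```python
import numpy as np

def apply_h(H, pts):
    """Apply a 3x3 homography to N x 2 points (homogeneous normalization)."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]

def screen_by_bounds(H, pts_prev, shape_curr, shape_prev):
    """Forward-map the (i-1)-th frame corners with H, inverse-map the
    result back, and keep only points that stay inside both frames."""
    fwd = apply_h(H, pts_prev)
    back = apply_h(np.linalg.inv(H), fwd)
    (hc, wc), (hp, wp) = shape_curr, shape_prev
    in_curr = (fwd[:, 0] >= 0) & (fwd[:, 0] < wc) & (fwd[:, 1] >= 0) & (fwd[:, 1] < hc)
    in_prev = (back[:, 0] >= 0) & (back[:, 0] < wp) & (back[:, 1] >= 0) & (back[:, 1] < hp)
    keep = in_curr & in_prev
    return pts_prev[keep], fwd[keep]
```

For a pure 50-pixel rightward translation on a 100 x 100 frame, a point near the right edge maps outside the current frame and is discarded, while an interior point survives.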
Referring to fig. 6, after the second screening step, the corner point pairs in the right image frame are deleted, and the first matching point pairs in the right image are left after the screening.
a third screening step: applying the forward transform of the second homography matrix to the corner points of the effective area of the (i-1)-th frame image, acquiring in the (i-1)-th frame an image block to be calculated of a first preset size centered on each first matching point, calculating the variance of the image block, screening out the first matching point pairs whose variance is smaller than a threshold to obtain the final matching point pairs, and calculating the final homography matrix from the final matching point pairs.
In this step, the second homography matrix is recalculated from the point pairs left by the second screening step, and the corner points of the previous frame are transformed to the current frame using it. The variance of the local 8 x 8 image block centered on each point in the current frame is then computed; if the variance is below the set threshold, the texture at that point is considered too simple to track reliably (or the point results from a transform error), and the point is removed.
In this embodiment, the first matching point pairs with variance smaller than 8 are screened out; in other embodiments the threshold may be any value from 7 to 9.
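The variance test above might be sketched as follows; the 8 x 8 block size and threshold of 8 follow this embodiment, while the function name is an assumption:

```python
import numpy as np

def filter_by_patch_variance(img, pts, size=8, var_thresh=8.0):
    """Drop points whose local size x size image block has too little
    variance (texture too simple to track), per the third screening step."""
    half = size // 2
    kept = []
    for x, y in pts:
        xi, yi = int(round(x)), int(round(y))
        patch = img[yi - half:yi + half, xi - half:xi + half]
        # Keep only in-bounds patches with enough texture
        if patch.shape == (size, size) and patch.var() >= var_thresh:
            kept.append((xi, yi))
    return kept
```

A point centered in a noisy (textured) region passes, while one centered in a flat region is removed.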
S4, an optimization step: performing an alignment operation on the final matching point pairs using the final homography matrix; obtaining in the (i-1)-th frame an image block of a first preset size centered on each final matching point; mapping each pixel in the image block to the i-th frame image through the final homography matrix; obtaining in the i-th frame image an extended image block of a second preset size centered on the corresponding matching point, the second preset size being larger than the first; calculating the cross-correlation matching values within the extended image block pixel by pixel, line by line; and defining the coordinates of the pixel with the maximum cross-correlation matching value as the plane-tracked coordinates.
By way of example: because the object may rotate, move, and deform over time, the (i-1)-th and i-th frame images in the video may also rotate, move, and deform, and when this occurs the coordinates of the tracked points must be transformed accordingly. Taking each point of the (i-1)-th frame as a center, an 8 x 8 image block is extracted and perspective-transformed, yielding a new 8 x 8 image block that represents what the patch should look like in the current frame space. A 14 x 14 neighborhood around the predicted i-th frame point is then taken as the extended image block, as shown in fig. 7. Cross-correlation matching is then performed: with a step length of 1, 8 x 8 block-by-block correlation is computed within the 14 x 14 range. As shown in fig. 8, the correlation of the 8 x 8 region starting at the first row and first column is computed; then, as shown in fig. 9, the correlation of the 8 x 8 region starting at the first row and second column, and so on until all positions have been evaluated. A sequence of cross-correlation matching values is thus obtained, and the coordinates of the pixel with the largest value give the most likely position of the object in the i-th frame. The located position is the optimized predicted position of the point in the current frame.
Further, in this embodiment the first preset size is 8 x 8 (in other embodiments 6 x 6 or 10 x 10), and the second preset size is 14 x 14 (in other embodiments 12 x 12 or 16 x 16).
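The block-by-block search of the optimization step can be sketched with normalized cross-correlation (NCC); the patent says only "cross-correlation matching value", so NCC is an assumption here, as are the function names:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def refine_by_ncc(template, search, tsize=8):
    """Slide a tsize x tsize template over the search window with step 1
    and return the top-left offset of the best NCC match and its score."""
    best_score, best_xy = -2.0, (0, 0)
    H, W = search.shape
    for y in range(H - tsize + 1):
        for x in range(W - tsize + 1):
            s = ncc(template, search[y:y + tsize, x:x + tsize])
            if s > best_score:
                best_score, best_xy = s, (x, y)
    return best_xy, best_score
```

With the embodiment's sizes, an 8 x 8 template slides over a 14 x 14 extended block, giving 7 x 7 candidate positions; the maximum NCC identifies the refined location.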
Further provided is a feature point-based plane tracking system, comprising the following modules:
a video acquisition module: used for acquiring an initial video file and designating a specific key frame image of the initial video file as the initial frame;
a corner point acquisition module: used for acquiring a target area to be tracked in the initial frame, cropping out an effective area, and extracting the corner points of the effective area;
a screening module: used for associating the corner points of the effective area in the (i-1)-th frame image with the corresponding corner points in the i-th frame as corner point pairs, screening out qualified corner point pairs as final matching point pairs, and calculating a final homography matrix from the final matching point pairs;
an optimization module: used for performing an alignment operation on the final matching point pairs using the final homography matrix, obtaining in the (i-1)-th frame an image block of a first preset size centered on each final matching point, mapping each pixel in the image block to the i-th frame image through the final homography matrix, calculating by an image algorithm the cross-correlation matching values of the corresponding extended image block in the i-th frame image, and defining the coordinates of the pixel with the maximum cross-correlation matching value as the plane-tracked coordinates.
The function of the system is the same as the method described above and will not be described in detail here.
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.

Claims (7)

1. A plane tracking method based on feature points is characterized in that: comprises the following steps:
a video acquisition step: acquiring an initial video file, and designating a specific key frame image of the initial video file as an initial frame;
an angular point obtaining step: acquiring a target area to be tracked in the initial frame, cutting out an effective area, and extracting an angular point of the effective area;
a screening step: associating the corner points of the effective area in the image of the i-1 frame with the corresponding corner points in the i frame to form corner point pairs, screening the qualified corner point pairs to define the corner point pairs as final matching point pairs, and calculating a final homography matrix through the final matching point pairs; the method specifically comprises the following steps:
a first screening step: predicting the corresponding corner of the effective area corresponding to the image of the i-1 frame in the i frame, associating the corner of the effective area in the image of the i-1 frame with the corresponding corner of the i frame to form a corner pair, screening out the qualified corner pair, and calculating according to the corner pair to obtain a first homography matrix; the method specifically comprises the following steps:
predicting the corresponding corner of the effective area corresponding to the image of the i-1 frame in the i-frame by using a KLT sparse optical flow method, associating the corner of the effective area in the image of the i-1 frame with the corresponding corner of the i-frame as a corner pair to obtain each motion track of the corner of the effective area corresponding to the image of the i-1 frame and the corner of the effective area corresponding to the image of the i-frame, calculating the Euclidean distance of each motion track, defining the corner pair with the Euclidean distance being less than or equal to a first preset threshold as a qualified corner pair, and counting the number of the corner pairs,
if the number of the corner point pairs of which the Euclidean distance of the motion trail is smaller than or equal to a first preset threshold is larger than a second preset threshold, judging that the tracking is successful, and calculating according to the corner point pairs to obtain a first homography matrix;
and a second screening step: utilizing the first homography matrix to carry out positive operation on the corner points of the effective area corresponding to the i-1 frame image and carry out inverse operation on the calculation result, screening qualified corner point pairs to obtain first matching point pairs, and calculating a second homography matrix through the first matching point pairs;
and a third screening step: utilizing the second homography matrix to carry out positive operation on the corner points of the effective area corresponding to the i-1 frame image, screening a first matching point pair to obtain a final matching point pair, and calculating a final homography matrix through the final matching point pair;
an optimization step: performing an alignment operation on the final matching point pairs using the final homography matrix, obtaining in the (i-1)-th frame an image block of a first preset size centered on each final matching point, mapping each pixel in the image block to the i-th frame image through the final homography matrix, calculating the cross-correlation matching values of the corresponding extended image block in the i-th frame image, and defining the coordinates of the pixel with the maximum cross-correlation matching value as the plane-tracked coordinates.
2. The feature point-based plane tracking method according to claim 1, wherein before predicting the corresponding corners, in the i-th frame, of the corners of the effective area corresponding to the (i-1)-th frame image, the method further comprises:
acquiring the (i-1)-th frame image and the i-th frame image, and extracting the corners of the effective area corresponding to the (i-1)-th frame image and the corners of the effective area corresponding to the i-th frame image, the corners of the effective area being associated across the (i-1)-th and i-th frame images.
3. The feature point-based plane tracking method according to claim 1, wherein the first preset threshold is 4 and the second preset threshold is 10.
4. The feature point-based plane tracking method according to claim 1, wherein in the second screening step, screening the qualified corner pairs specifically comprises:
screening the corner pairs remaining in the (i-1)-th frame image.
5. The feature point-based plane tracking method according to claim 1, wherein in the third screening step, screening the first matching point pairs to obtain the final matching point pairs specifically comprises:
acquiring, in the (i-1)-th frame, an image block to be calculated of the first preset size centered on each first matching point, calculating the variance of the image block to be calculated, and screening the first matching point pairs against a variance threshold to obtain the final matching point pairs.
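The translated claim is ambiguous about whether low-variance patches are kept or dropped; the sketch below drops them, which is the usual choice for correlation-based matching, since a near-textureless patch cannot be localized reliably by NCC. The patch half-size and variance threshold are illustrative assumptions.

```python
import numpy as np

def variance_screen(gray, pts, half=4, var_thresh=50.0):
    """Drop matching points whose surrounding (2*half+1)^2 patch has low
    variance, i.e. is near-textureless; keep points NCC can localize."""
    h, w = gray.shape
    kept = []
    for x, y in pts:
        x, y = int(x), int(y)
        if x - half < 0 or y - half < 0 or x + half >= w or y + half >= h:
            continue                     # patch would fall off the image
        patch = gray[y - half:y + half + 1, x - half:x + half + 1]
        if patch.var() >= var_thresh:
            kept.append((x, y))
    return kept

# A checkerboard patch passes; a flat (zero-variance) patch is dropped.
gray = np.zeros((50, 50))
gray[20:30, 20:30] = (np.indices((10, 10)).sum(axis=0) % 2) * 255.0
kept = variance_screen(gray, [(25, 25), (5, 5)])
```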
6. The feature point-based plane tracking method according to claim 1, wherein in the optimization step, calculating the cross-correlation matching values of the corresponding extended image block in the i-th frame image through an image algorithm specifically comprises:
acquiring, in the i-th frame image, an extended image block of a second preset size centered on the corresponding second matching point, and calculating a cross-correlation matching value for each pixel, row by row, within the extended image block, wherein the second preset size is larger than the first preset size.
7. A feature point-based plane tracking system, characterized in that it comprises the following modules:
a video acquisition module: used for acquiring an initial video file and designating a specific key frame image of the initial video file as the initial frame;
a corner acquisition module: used for acquiring the target area to be tracked in the initial frame, cropping out the effective area, and extracting the corners of the effective area;
a screening module: used for associating the corners of the effective area in the (i-1)-th frame image with the corresponding corners of the i-th frame as corner pairs, screening qualified corner pairs to form final matching point pairs, and calculating a final homography matrix from the final matching point pairs; specifically used for:
a first screening step: predicting the corresponding corners, in the i-th frame, of the corners of the effective area corresponding to the (i-1)-th frame image, associating the corners of the effective area in the (i-1)-th frame image with the corresponding corners of the i-th frame as corner pairs, screening out qualified corner pairs, and calculating a first homography matrix from the corner pairs; the step specifically comprising:
predicting, with a KLT sparse optical flow method, the corresponding corners, in the i-th frame, of the corners of the effective area corresponding to the (i-1)-th frame image, and associating each corner of the effective area in the (i-1)-th frame image with its corresponding corner in the i-th frame as a corner pair, thereby obtaining the motion track between each corner of the effective area corresponding to the (i-1)-th frame image and the corresponding corner of the effective area in the i-th frame image; calculating the Euclidean distance of each motion track, defining the corner pairs whose Euclidean distance is less than or equal to a first preset threshold as qualified corner pairs, and counting the number of such corner pairs,
wherein if the number of corner pairs whose motion-track Euclidean distance is less than or equal to the first preset threshold is greater than a second preset threshold, tracking is judged to be successful, and a first homography matrix is calculated from the qualified corner pairs;
and a second screening step: performing a forward operation on the corners of the effective area corresponding to the (i-1)-th frame image with the first homography matrix and an inverse operation on the result, screening the qualified corner pairs to obtain first matching point pairs, and calculating a second homography matrix from the first matching point pairs;
and a third screening step: performing a forward operation on the corners of the effective area corresponding to the (i-1)-th frame image with the second homography matrix, screening the first matching point pairs to obtain final matching point pairs, and calculating a final homography matrix from the final matching point pairs;
an optimization module: used for performing a forward operation on the final matching points with the final homography matrix, obtaining, in the (i-1)-th frame, an image block of a first preset size centered on each final matching point, mapping each pixel in the image block into the i-th frame image through the final homography matrix, calculating, through an image algorithm, the cross-correlation matching values of the corresponding extended image block in the i-th frame image, and defining the coordinate of the pixel with the maximum cross-correlation matching value as the plane-tracked coordinate.
CN202110869838.4A 2021-07-30 2021-07-30 Feature point-based plane tracking method and system Active CN113689466B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110869838.4A CN113689466B (en) 2021-07-30 2021-07-30 Feature point-based plane tracking method and system


Publications (2)

Publication Number Publication Date
CN113689466A CN113689466A (en) 2021-11-23
CN113689466B true CN113689466B (en) 2022-07-12

Family

ID=78578327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110869838.4A Active CN113689466B (en) 2021-07-30 2021-07-30 Feature point-based plane tracking method and system

Country Status (1)

Country Link
CN (1) CN113689466B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408460A (en) * 2014-09-17 2015-03-11 University of Electronic Science and Technology of China A lane line detection and tracking method
CN110599605A (en) * 2019-09-10 2019-12-20 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and device, electronic equipment and computer readable storage medium
CN111242908A (en) * 2020-01-07 2020-06-05 Qingdao Xiaoniao Kankan Technology Co., Ltd. Plane detection method and device and plane tracking method and device
CN111754548A (en) * 2020-06-29 2020-10-09 Xi'an University of Science and Technology Multi-scale correlation filtering target tracking method and device based on response discrimination


Also Published As

Publication number Publication date
CN113689466A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN111462200B (en) Cross-video pedestrian positioning and tracking method, system and equipment
CN110631554B (en) Robot posture determining method and device, robot and readable storage medium
CN114782691B (en) Robot target identification and motion detection method based on deep learning, storage medium and equipment
US8401333B2 (en) Image processing method and apparatus for multi-resolution feature based image registration
US11170202B2 (en) Apparatus and method for performing 3D estimation based on locally determined 3D information hypotheses
CN110009732B (en) GMS feature matching-based three-dimensional reconstruction method for complex large-scale scene
CN111540005B (en) Loop detection method based on two-dimensional grid map
US11367195B2 (en) Image segmentation method, image segmentation apparatus, image segmentation device
CN112509003B (en) Method and system for solving target tracking frame drift
CN113284251B (en) Cascade network three-dimensional reconstruction method and system with self-adaptive view angle
CN111275743B (en) Target tracking method, device, computer readable storage medium and computer equipment
CN104574331A (en) Data processing method, device, computer storage medium and user terminal
CN112200157A (en) Human body 3D posture recognition method and system for reducing image background interference
CN112085031A (en) Target detection method and system
US11256949B2 (en) Guided sparse feature matching via coarsely defined dense matches
US8472756B2 (en) Method for producing high resolution image
CN113689466B (en) Feature point-based plane tracking method and system
KR20150097251A (en) Camera alignment method using correspondences between multi-images
CN113689467B (en) Feature point optimization method and system suitable for plane tracking
CN116402713A (en) Electric three-dimensional point cloud completion method based on two-dimensional image and geometric shape
CN114707611B (en) Mobile robot map construction method, storage medium and equipment based on graph neural network feature extraction and matching
CN116188535A (en) Video tracking method, device, equipment and storage medium based on optical flow estimation
CN105141963A (en) Image motion estimation method and device
CN110111249B (en) Method and system for acquiring and generating tunnel inner wall jigsaw image
CN117670939B (en) Multi-camera multi-target tracking method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220923

Address after: 361001 unit R, 2nd floor, No. 33-102, Punan 1st Road, Siming District, Xiamen City, Fujian Province

Patentee after: GAODING (XIAMEN) TECHNOLOGY Co.,Ltd.

Patentee after: Draft (Xiamen) Information Service Co.,Ltd.

Address before: 361001 unit R, 2nd floor, No. 33-102, Punan 1st Road, Siming District, Xiamen City, Fujian Province

Patentee before: GAODING (XIAMEN) TECHNOLOGY Co.,Ltd.