CN108021921A - Image characteristic point extraction system and its application - Google Patents
- Publication number
- CN108021921A CN108021921A CN201711183532.3A CN201711183532A CN108021921A CN 108021921 A CN108021921 A CN 108021921A CN 201711183532 A CN201711183532 A CN 201711183532A CN 108021921 A CN108021921 A CN 108021921A
- Authority
- CN
- China
- Prior art keywords
- feature point
- feature
- unit
- point
- feature points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image characteristic point extraction system and its application. The system includes a characteristic point judging unit, a feature point extraction unit, and a texture richness judging unit. The characteristic point judging unit judges whether points in an image are characteristic points according to a characteristic point judgment threshold. The feature point extraction unit is communicatively coupled to the characteristic point judging unit to extract the corresponding characteristic points. The texture richness judging unit is communicatively coupled to the characteristic point judging unit and the feature point extraction unit, and analyzes, according to a reference data, whether the characteristic points extracted by the feature point extraction unit meet the corresponding point requirement of the reference data. When they do not, the texture richness judging unit correspondingly forms update data to change the judgment threshold.
Description
Technical Field
The invention relates to an image feature point extraction system and application thereof, wherein the image feature point extraction system can extract a proper number of feature points aiming at object images with different texture richness so as to realize tracking of corresponding objects.
Background
Tracking of three-dimensional objects has long been a hotspot in machine vision research, particularly in the field of augmented reality, where machine vision-based methods have gained wide attention from researchers owing to advantages such as simplicity, low cost, and non-contact operation. At present, tracking methods for three-dimensional objects mainly include tracking based on three-dimensional point clouds, tracking based on edges, tracking based on feature points, and combinations of these methods, of which the feature point-based tracking method is widely applied.
The traditional ORB (Oriented FAST and Rotated BRIEF) feature point extraction method applies a uniform threshold and standard to the whole image. As a result, once the gray scale of the image varies unevenly, with some regions flat and smooth while others carry particularly rich image information, the extracted feature points become concentrated in the regions of severe change. Such over-concentration of the extracted feature points is extremely disadvantageous to later image matching and tracking. In other words, the conventional feature point-based tracking method cannot track objects with different texture richness.
On the other hand, in the process of tracking an object, the object is often influenced by various other factors. For example, when the texture-rich regions are concentrated, if such a region is disturbed, the corresponding feature points may not be extracted and the entire tracking may fail. The traditional ORB feature point extraction method cannot solve this problem. Similarly, if the texture of an object is not obvious, feature points sufficient to characterize the object are likely not to be extracted, which may also cause tracking to fail. In other words, the conventional feature point extraction method cannot extract feature points according to the actual situation of the tracked object, especially when a texture-rich area of the object moves out of the machine's field of view or is blocked by other objects during tracking.
Disclosure of Invention
An object of the present invention is to provide an image feature point extraction system and an application thereof, wherein the image feature point extraction system is capable of extracting feature points of an object so that the corresponding object can subsequently be tracked based on the extracted feature points.
An object of the present invention is to provide an image feature point extraction system and an application thereof, wherein the image feature point extraction system is capable of extracting a suitable number of feature points for object images with different texture richness, so that corresponding objects can subsequently be tracked according to the extracted feature points.
Another object of the present invention is to provide an image feature point extraction system and its application, wherein the image feature point extraction system can still extract feature points of a tracked object when a texture-rich region of an image of the tracked object cannot be acquired.
Another object of the present invention is to provide an image feature point extraction system and applications thereof, wherein the image feature point extraction system can perform cyclic feature point extraction on the tracked object image until the number of extracted feature points is sufficient for tracking the corresponding object.
Another object of the present invention is to provide an image feature point extraction system and an application thereof, wherein the image feature point extraction system is also capable of extracting feature points from an object image with uneven texture richness.
To achieve at least one of the above objects, the present invention provides an image feature point extraction system, including:
a feature point judgment unit for judging a feature point of a point in the image according to a feature point judgment threshold value;
a feature point extracting unit, wherein the feature point extracting unit is communicatively connected to the feature point judging unit to extract the corresponding feature points; and
a texture richness judging unit, wherein the texture richness judging unit is communicatively connected to the feature point judging unit and the feature point extracting unit, wherein the texture richness judging unit analyzes, according to a reference data, whether the feature points extracted by the feature point extracting unit meet the requirement of the corresponding points of the reference data, and when they do not, the texture richness judging unit correspondingly forms update data to change the judgment threshold.
According to an embodiment of the present invention, the texture richness determining unit includes a texture richness determining module and a feature point change determining module, wherein the image feature point extracting system further includes a homogenization processing unit, wherein the texture richness determining module is communicatively connected to the feature point determining unit and the feature point extracting unit to analyze whether the feature point extracted by the feature point extracting unit meets a requirement of a corresponding point of reference data according to reference data, wherein the feature point change determining module is communicatively connected to the feature point extracting unit and the texture richness determining module, wherein the feature point change determining module is capable of comparing the feature point currently extracted with the feature point extracted last time and determining whether a change before and after the feature point meets a change threshold, wherein the homogenization processing unit is communicatively connected to the feature point change determining module and the feature point determining unit to homogenize an image when the change before and after the feature point meets the change threshold.
According to an embodiment of the present invention, the feature point determination compares the pixel value of a center point with the pixel values of the 16 pixels on a circle of radius R around that pixel point in the image. If the number of circle pixels whose absolute difference from the center point exceeds the feature point judgment threshold δ reaches det(P_num), the center point is taken as a feature point, and the judgment condition is:

N = Σ(i=1..16) [ |I(x_i) - I(p)| > δ ], the center point p being a feature point when N ≥ det(P_num),

where I(p) is the pixel value of the center point, I(x_i) is the pixel value of the i-th pixel on the circle, and [·] is 1 when the condition holds and 0 otherwise.
according to an embodiment of the present invention, the feature point changing module determines whether the image needs to be homogenized according to the following formula:
F_num(I_j) < σ·F_num(T),

where F_num(I_j) is the number of feature points of the current object image, F_num(T) is the number of feature points of a key template frame of the object image, and σ is a coefficient determined by the texture richness of the object and the influence of the overall object features.
According to an embodiment of the present invention, the reference data corresponds to the number range δ1-δ2 of feature points for objects with different texture richness.
According to an embodiment of the present invention, the function for changing the feature point determination threshold is:
according to an embodiment of the present invention, the image feature point extracting system further includes a data acquiring unit, wherein the data acquiring unit is communicatively connected to the feature point judging unit, wherein the data acquiring unit is capable of acquiring the related data of at least one video frame.
In order to achieve at least one of the above objects, the present invention provides a target tracking system, comprising the image feature point extraction system of any one of claims 1 to 7, a feature point matching judgment unit and an analysis unit, wherein the feature point matching judgment unit matches the extracted feature points with feature points of a template frame according to a matching criterion to form a feature point set, wherein the analysis unit is communicatively connected to the feature point matching judgment unit to calculate homography transformation between the feature point set and the feature points of the template frame.
According to an embodiment of the present invention, the target tracking system, wherein the data acquiring unit includes a template frame acquiring module, wherein the template frame acquiring module is communicatively connected to the feature point determining unit, wherein the template frame acquiring module is capable of storing image-related data of at least one tracked object, wherein the feature point determining module is capable of determining feature points of the template frame, and wherein the feature point extracting unit is capable of extracting the feature points of the template frame accordingly.
In order to achieve at least one of the above objects, the present invention provides a method for extracting feature points of an image, wherein the method comprises the steps of:
(A) Judging whether a point in the current image data is a characteristic point according to a characteristic point judgment threshold;
(B) Extracting feature points according to the judgment result;
(C) Comparing the feature points extracted in step (B) with the reference data to judge whether the feature points meet the corresponding requirement of the reference data on the feature points; and
(D) If the requirement is not met, changing the feature point judgment threshold in step (A) and continuing to execute step (A).
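Steps (A) through (D) form a loop in which the judgment threshold is adjusted until the feature point count is acceptable. The following is a minimal sketch of that loop, with a pluggable `detect` routine and a simple additive threshold step; both are hypothetical placeholders, since the patent's actual detection routine and adjustment function are not reproduced in this text:

```python
def extract_feature_points(image, detect, delta, delta1, delta2,
                           step=5, max_iter=10):
    """Loop of steps (A)-(D): detect feature points with threshold delta,
    then adjust delta until the point count lies within [delta1, delta2]
    or the iteration limit is reached."""
    points = detect(image, delta)          # steps (A)+(B)
    for _ in range(max_iter):
        n = len(points)
        if delta1 <= n <= delta2:          # step (C): count is acceptable
            break
        # step (D): too few points -> relax the threshold; too many -> tighten it
        delta = delta - step if n < delta1 else delta + step
        points = detect(image, delta)      # back to steps (A)+(B)
    return points, delta
```

The iteration limit mirrors the MaxIndex bound described later in the specification: after enough adjustments the most recent extraction is accepted even if the count is still out of range.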
To achieve at least one of the above objects, the present invention provides a method for tracking an object, comprising the steps of:
(A1) Judging whether a point in the current image data is a characteristic point or not according to a characteristic point judgment threshold;
(B1) Extracting feature points according to the judgment result;
(C1) Comparing the feature points extracted in step (B1) with the reference data to judge whether the feature points meet the corresponding requirement of the reference data on the feature points;
(D1) If the requirement is met, taking the feature points extracted in step (B1) as the finally extracted feature points; if not, changing the feature point judgment threshold in step (A1) and continuing to execute step (A1);
(E1) Matching the extracted feature points with feature points of a template frame to form a feature point set; and
(F1) Computing a homography transformation between the feature points in the set of feature points and the feature points of the template frame.
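Step (F1) computes a homography between matched feature points. A minimal sketch of the standard direct linear transform (DLT) follows; the description does not prescribe this particular algorithm, but it is a common way to compute such a homography from at least four matches:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    (as in steps (E1)-(F1)) via the direct linear transform.
    Requires at least 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Two equations per correspondence from u = (h1.p)/(h3.p), v = (h2.p)/(h3.p)
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the last right-singular vector
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]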
Drawings
FIG. 1 is a diagram of an image feature point extraction system according to the present invention.
FIG. 2 is a schematic diagram of feature point determination and extraction according to the present invention.
FIG. 3 is a diagram of a target tracking system according to the present invention.
FIG. 4A is a schematic diagram of a texture-rich region of a tracked object image being occluded according to the present invention.
FIG. 4B is a diagram of the homogenization processing unit according to the present invention when a texture-rich region of the tracked object image is occluded.
FIG. 5 is a flowchart illustrating a method for extracting feature points of an image according to the present invention.
FIG. 6 is a flowchart of a method for tracking a target object according to the present invention.
Detailed Description
The following description is presented to disclose the invention so as to enable any person skilled in the art to practice the invention. The preferred embodiments in the following description are given by way of example only, and other obvious variations will occur to those skilled in the art. The basic principles of the invention, as defined in the following description, may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the invention. It will be understood by those skilled in the art that in the present disclosure, the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, which are merely for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus the above terms should not be construed as limiting the present invention.
Referring to fig. 1 and 2, the present invention provides an image feature point extraction system, wherein the image feature point extraction system comprises a feature point determination unit 10, a feature point extraction unit 20 and a texture richness determination unit 30, wherein the feature point extraction unit 20 is communicatively connected to the feature point determination unit 10 and the texture richness determination unit 30.
The feature point determining unit 10 is capable of judging a point in an image of a tracked object to determine whether the point is a required feature point, so as to form a corresponding determination result. The feature point extracting unit 20 is capable of obtaining the determination result from the feature point determining unit 10 and extracting the feature points that satisfy the condition according to the determination result. The feature point extracting unit 20 is further capable of counting the extracted feature points to form at least one corresponding extracted data, wherein the extracted data is implemented as the total number of feature points in the point set extracted by the feature point extracting unit 20. The texture richness determining unit 30 is capable of obtaining the extracted data from the feature point extracting unit 20 as well as a reference data, wherein the reference data corresponds to the number range δ1-δ2 of feature points for objects with different texture richness, and the texture richness determining unit 30 is capable of comparatively analyzing the extracted data and the reference data so as to form a corresponding analysis result.
Specifically, the feature point determination unit 10 is capable of determining the feature point of the acquired image according to a feature point determination threshold δ, wherein the determination method of the feature point may adopt a conventional determination method, and in order to enable those skilled in the art to understand the present invention, the following description of the present invention will be exemplified, and those skilled in the art will understand that the present invention is not limited in this respect.
In an embodiment of the present invention, the feature point judgment threshold δ is applied to the absolute difference between the pixel value of a center pixel point and the pixel values of the 16 pixels on a circle of radius R centered on it. If the number of circle pixels whose difference from the center point exceeds the feature point judgment threshold δ reaches det(P_num), the center point is taken as a feature point, and the judgment condition is:

N = Σ(i=1..16) [ |I(x_i) - I(p)| > δ ] ≥ det(P_num),

where I(p) is the pixel value of the center point, I(x_i) is the pixel value of the i-th circle pixel, and [·] is 1 when the condition holds and 0 otherwise.
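This judgment is essentially the FAST corner test. The following sketch illustrates it, assuming the standard radius-3 Bresenham circle of 16 pixels and a hypothetical requirement det(P_num) = 12; the description does not fix either of these values:

```python
import numpy as np

# Offsets of the 16 pixels on a Bresenham circle of radius 3 (as used by FAST)
CIRCLE_OFFSETS = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2),
                  (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0),
                  (-3, 1), (-2, 2), (-1, 3)]

def is_feature_point(img, x, y, delta, det_p_num=12):
    """Return True if the pixel at (x, y) qualifies as a feature point:
    at least det_p_num of the 16 circle pixels differ from the center
    point by more than the judgment threshold delta."""
    center = int(img[y, x])
    count = 0
    for dx, dy in CIRCLE_OFFSETS:
        if abs(int(img[y + dy, x + dx]) - center) > delta:
            count += 1
    return count >= det_p_num
```

A bright isolated pixel against a flat background satisfies the condition at any reasonable δ, while a pixel in a uniform region never does, which matches the intent of concentrating feature points where the gray scale changes sharply.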
the feature point extracting unit 20 is communicatively connected to the feature point judging unit 10, and is capable of performing statistics on the feature points judged by the feature point judging unit 10 to form corresponding extracted data.
The texture richness judging unit 30 is communicatively connected to the feature point extracting unit 20, and is capable of acquiring the extracted data from the feature point extracting unit 20, and analyzing and comparing the extracted data with the reference data, wherein when the number of feature points corresponding to the extracted data is less than the number of feature points of the reference data, the texture richness representing the object is low, resulting in insufficient number of extracted feature points. At this time, the texture richness judging unit 30 accordingly forms an update data, wherein the feature point judging unit 10 is communicatively connected to the texture richness judging unit 30, wherein the feature point judging unit 10 can acquire the update data from the texture richness judging unit 30, and the feature point judging unit 10 accordingly can automatically update the feature point judging threshold δ in the feature point judging unit 10 according to the update data, thereby accordingly lowering the criterion of the feature point extraction.
Specifically, when the texture richness of the object is low, the number of feature points on the image of the object is small and subsequent tracking of the object cannot be achieved at all. The texture richness determination unit 30 in the present invention therefore adjusts the feature point determination threshold δ so as to relax the determination condition for feature points, and the adjustment function thereof is:
wherein the feature point determination unit 10 further re-judges the feature points in the image according to the changed threshold δ to re-form the determination result, wherein the feature point extraction unit 20 re-forms the extracted data according to the determination result, and wherein the texture richness determination unit 30 further re-analyzes the extracted data against the reference data, wherein, once the number of extracted feature points falls within the range δ1-δ2, the feature point extraction unit 20 directly stores the extracted data.
When the texture richness of the object is high, in order to avoid extracting an excessive number of feature points, the threshold δ of the feature point determination unit 10 likewise needs to be adjusted, where the adjustment function is:
wherein the feature point determining unit 10 judges the feature points in the image again after adjusting the feature point determination threshold δ to re-form the determination result, wherein the feature point extracting unit 20 re-forms the extracted data accordingly according to the determination result, wherein the texture richness determining unit 30 further re-analyzes the extracted data against the reference data, and wherein, once the number of the extracted feature points falls within the range δ1-δ2, the feature point extracting unit 20 directly saves the extracted data and takes the feature points extracted this time as the finally extracted feature points.
It should be noted that the texture richness determining unit 30 can also count the number of judgment iterations, wherein an iteration limit MaxIndex can be set in the texture richness determining unit 30 so as to guarantee the timeliness of the image feature point extraction system. That is, after the feature point determination threshold δ has been changed several times, even if the number of feature points last extracted by the feature point extraction unit 20 remains outside the range δ1-δ2, once the number of iterations has reached the limit MaxIndex, the feature points extracted by the feature point extraction unit 20 this time are still used as the finally extracted feature points. As can be understood by those skilled in the art, by adjusting the feature point judgment threshold δ in real time, the image feature point extraction system can extract suitable feature points for objects with different texture richness.
Further, the texture richness judging unit 30 includes a texture richness judging module 31 and a feature point changing module 32, wherein the texture richness judging module 31 is communicatively connected to the feature point extracting unit 20 and the feature point judging unit 10, wherein the texture richness judging module 31 can obtain the extracted data from the feature point extracting unit 20, and can analyze and compare the extracted data with the reference data so as to form the updated data when the number of feature points corresponding to the extracted data cannot satisfy the number of feature points corresponding to the reference data.
The feature point change module 32 is communicatively connected to the feature point extraction unit 20 and the texture richness determination module 31, wherein the feature point change module 32 is capable of recording the extracted data formed by the feature point extraction unit 20 each time, and of comparing the number of feature points in the previously extracted data with the number in the currently acquired extracted data so as to form corresponding change data. The feature point change module 32 is further capable of acquiring change reference data, wherein the change reference data corresponds to a change threshold for the change in the number of feature points between two successive extractions. When the change in the number of feature points corresponding to the change data is smaller than the change threshold, the texture richness determination module 31 continues to perform determination processing on the extracted data; when the change is larger than the change threshold, the feature points have changed drastically, indicating that the currently tracked object is out of bounds or occluded, and the image then needs to be homogenized.
In an embodiment of the present invention, the feature point changing module 32 determines whether the image needs to be homogenized according to the following formula:
F_num(I_j) < σ·F_num(T),

where F_num(I_j) is the number of feature points of the current object image, F_num(T) is the number of feature points of the key template frame of the object image, and σ is a coefficient determined by the texture richness of the object and the influence of the overall object features.
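The trigger condition above is a single comparison between the current frame's feature point count and a fraction of the template frame's count. A minimal sketch, with σ passed in as the texture-dependent coefficient (its value is not fixed by the description):

```python
def needs_homogenization(f_num_current, f_num_template, sigma):
    """Trigger condition F_num(I_j) < sigma * F_num(T): the current frame
    has lost so many feature points relative to the key template frame
    (e.g. due to occlusion or the object moving out of view) that the
    image should be split into sub-images and re-processed."""
    return f_num_current < sigma * f_num_template
```

For instance, with σ = 0.5, a frame retaining only 40 of the template's 200 feature points would trigger homogenization, while a frame retaining 150 would not.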
Specifically, the image feature point extraction system further includes a homogenization processing unit 40, wherein the homogenization processing unit 40 is communicatively connected to the feature point change module 32 of the texture richness determination unit 30 and to the feature point extraction unit 20, wherein the homogenization processing unit 40 is capable of acquiring the change data formed by the feature point change module 32 so as to perform homogenization processing on the image of the object. Specifically, the homogenization processing unit 40 can divide the image into equal regions to form a plurality of corresponding sub-images, and the feature point extraction unit 20 can perform feature point extraction on each sub-image and statistically combine the feature points extracted from each sub-image to form the corresponding extracted data.
Accordingly, the texture richness determining module 31 of the texture richness determining unit 30 obtains the extracted data, wherein when the number of feature points corresponding to the extracted data fails to satisfy the number of feature points corresponding to the reference data, indicating that too many or too few feature points were extracted this time, the texture richness determining module 31 forms the updated data accordingly; after the feature point determining unit 10 obtains the updated data, the threshold δ is updated, and feature points are re-extracted from the sub-images.
As can be understood by those skilled in the art, with the feature point changing module 32 and the homogenization processing unit 40, even if an object in the analyzed image is occluded or out of bounds, the image feature point extraction system can extract a suitable number of feature points.
For example, referring to fig. 4A and 4B, when a texture-rich central region of the image of the object is occluded, the number of feature points extracted by the feature point extraction unit 20 drops sharply, and the feature point change module 32 forms corresponding change data. The homogenization processing unit 40 can obtain the change data and, according to it, equally divide the image into 9 parts to form 9 sub-images, after which the feature point extraction unit 20 performs feature point extraction on the 9 sub-images to form corresponding extracted data. The texture richness determination module 31 can obtain the extracted data and determine whether the number of feature points corresponding to it satisfies the number of feature points corresponding to the reference data; if not, the texture richness determination module 31 likewise forms corresponding updated data.
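The equal division into 9 sub-images can be sketched as follows; the grid size is a parameter here, though the description's example uses a 3x3 grid:

```python
import numpy as np

def split_into_subimages(img, rows=3, cols=3):
    """Divide the image into rows x cols equal regions (9 sub-images
    for the default 3x3 grid), as the homogenization processing unit
    does before feature points are extracted per sub-image."""
    h, w = img.shape[:2]
    subs = []
    for i in range(rows):
        for j in range(cols):
            subs.append(img[i * h // rows:(i + 1) * h // rows,
                            j * w // cols:(j + 1) * w // cols])
    return subs
```

Feature point extraction would then run on each sub-image independently, and the per-sub-image results would be merged into one extracted data set, so an occluded central block cannot suppress detection elsewhere.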
As shown in fig. 4B, when the texture-rich central area of the image is occluded, fewer of the extracted feature points fall in the occluded central area, and more are distributed over the other areas outside it, specifically over the areas corresponding to the 8 sub-images that are not occluded. The feature points extracted by the image feature point extraction system therefore avoid the occluded central area of the image.
As can be understood by those skilled in the art, when a texture-rich area of an image of a tracked object is blocked and feature point extraction cannot be performed on the area, the image feature point extraction system can still extract feature points of the tracked object.
Further, the image feature point extraction system further comprises a data acquisition unit 50, wherein the data acquisition unit 50 can acquire image data of the outside world, and specifically, the data acquisition unit 50 can be communicatively connected to an image acquisition device, such as at least one monocular camera, so as to directly perform feature point extraction on each video frame of the image acquisition device.
In the embodiment of the present invention, the data acquisition unit 50 is communicatively connected to the characteristic point determination unit 10, so that the acquired data can be transmitted to the characteristic point determination unit 10.
The data acquiring unit 50 includes a video frame acquiring module 51, wherein the video frame acquiring module 51 is communicatively connected to the feature point determining unit 10, wherein the video frame acquiring module 51 is capable of acquiring related data of at least one video frame, and wherein the feature point determining unit 10 is capable of acquiring the related data from the video frame acquiring module 51.
It is worth mentioning that the image feature point extraction system of the present invention can be used for tracking a three-dimensional object.
Referring to fig. 3, according to another aspect of the present invention, the present invention provides a target tracking system, wherein the target tracking system includes the feature point determining unit 10, the feature point extracting unit 20, the texture richness determining unit 30, and a feature point matching determining unit 60, wherein the feature point matching determining unit 60 is capable of performing matching determination on the feature points of the acquired image and the feature points of a key template frame image, so as to achieve object tracking, where the key template frame image corresponds to an image of a tracked object.
Also, the texture richness judging unit 30 includes the texture richness judging module 31 and the feature point changing module 32, wherein the texture richness judging module 31 is communicatively connected to the feature point extracting unit 20 and the feature point judging unit 10, wherein the texture richness judging module 31 can acquire the extracted data from the feature point extracting unit 20 and can analyze and compare the extracted data with the reference data to form the updated data when the number of the feature points corresponding to the extracted data cannot satisfy the number of the feature points corresponding to the reference data.
The feature point change module 32 is communicatively connected to the feature point extraction unit 20 and the texture richness determination module 31. The feature point change module 32 is capable of recording the extraction data formed by the feature point extraction unit 20 each time, and of comparing the number of feature points in the extraction data obtained in the previous extraction with the number in the extraction data obtained this time, so as to form corresponding change data. The feature point change module 32 is further capable of acquiring change reference data, which corresponds to a change threshold on the difference between two consecutive feature point counts. When the change in the feature point count corresponding to the change data is smaller than the change threshold, the texture richness determination module 31 continues to perform determination processing on the extraction data; when the change is larger than the change threshold, the feature points have changed drastically, which indicates that the currently tracked object is out of bounds or occluded, and the image then needs to be subjected to homogenization processing.
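The back-and-forth change check described above can be sketched as follows. This is a minimal illustration rather than the patented implementation, and the class and attribute names are hypothetical:

```python
class FeaturePointChangeMonitor:
    """Records the feature-point count of each extraction and flags a
    drastic change between two consecutive extractions."""

    def __init__(self, change_threshold):
        # change_threshold plays the role of the change reference data
        self.change_threshold = change_threshold
        self.prev_count = None

    def update(self, count):
        """Return True when the count change versus the previous extraction
        exceeds the threshold (tracked object likely out of bounds or
        occluded, so homogenization should be triggered)."""
        drastic = (self.prev_count is not None
                   and abs(count - self.prev_count) > self.change_threshold)
        self.prev_count = count
        return drastic
```

A steady count keeps tracking in the normal path; a sudden drop (or jump) returns True and would hand control to the homogenization processing unit 40.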
Specifically, the target tracking system includes the homogenization processing unit 40, wherein the homogenization processing unit 40 is communicatively connected to the feature point change module 32 of the texture richness determination unit 30 and to the feature point extraction unit 20, wherein the homogenization processing unit 40 is capable of acquiring the change data formed by the feature point change module 32 in order to perform homogenization processing on the image of the object. Specifically, the homogenization processing unit 40 can divide the image into equal regions to form a plurality of corresponding sub-images, and the feature point extraction unit 20 can perform feature point extraction on each sub-image and statistically combine the feature points extracted from each sub-image to form the corresponding extracted data.
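The grid split, per-cell extraction, and merge can be sketched as below; `extract_fn` is a stand-in for whatever detector the feature point extraction unit 20 applies, and the function name is an assumption:

```python
def homogenize_extract(image, rows, cols, extract_fn):
    """Divide `image` (a 2-D array of pixel values) into a rows x cols grid
    of sub-images, run `extract_fn` on each sub-image, and merge the
    per-cell feature points back into full-image coordinates."""
    h, w = len(image), len(image[0])
    features = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            sub = [row[x0:x1] for row in image[y0:y1]]
            # offset each sub-image feature point back to image coordinates
            features.extend((y0 + y, x0 + x) for (y, x) in extract_fn(sub))
    return features
```

Because every cell is forced through the detector, feature points end up spread across the whole frame instead of clustering in one richly textured corner, which is the purpose of the homogenization step.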
Accordingly, the texture richness determination module 31 of the texture richness determination unit 30 obtains the extracted data. When the number of feature points corresponding to the extracted data does not satisfy the number of feature points corresponding to the reference data, the number of feature points extracted this time is too large or too small, and the texture richness determination module 31 accordingly forms the updated data. After the feature point extraction unit 20 obtains the updated data, it extracts the feature points of the sub-images again with the updated threshold δ.
The data acquisition unit 50 further comprises a template frame acquisition module 52, wherein the template frame acquisition module 52 is communicatively connected to the feature point determination unit 10, wherein the template frame acquisition module 52 is capable of storing image-related data of at least one tracked object, wherein the feature point determination unit 10 is capable of determining feature points of the template frame, and wherein the feature point extraction unit 20 is accordingly capable of extracting the feature points of the template frame. Accordingly, the feature point matching determination unit 60 is communicatively connected to the feature point extraction unit 20 so as to be able to acquire the feature point set of the template frame from the feature point extraction unit 20 to form the matching reference.
The feature point matching determination unit 60 can simultaneously acquire the feature points F_num(I_i) of each video frame, and can perform matching determination between the feature points of the template frame and the feature points of the video frame to form a corresponding matching result.
Specifically, the feature point matching determination unit 60 has a matching threshold t, and is capable of calculating the Hamming distances D = {d_1, d_2, ..., d_n} between a feature point p of the template frame and every feature point of a video frame. The point corresponding to the minimum Hamming distance D_min = min{d_1, d_2, ..., d_n} is the nearest neighbour of p; if D_min < t, the two points are determined to match, otherwise p has no matching point.
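A minimal sketch of this nearest-neighbour test, with descriptors represented as bit lists; the function names are illustrative, not from the patent:

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary descriptors."""
    return sum(x != y for x, y in zip(a, b))

def match_point(desc_p, frame_descs, t):
    """Compute D = {d_1, ..., d_n} from template descriptor p to every
    video-frame descriptor; the minimum D_min picks the nearest neighbour,
    accepted only if D_min < t. Returns the matched index or None."""
    dists = [hamming(desc_p, d) for d in frame_descs]
    d_min = min(dists)
    return dists.index(d_min) if d_min < t else None
```

Raising t admits more (possibly noisier) matches; lowering it keeps only near-identical descriptors.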
The feature point matching determination unit 60 rearranges the feature points of the video frame according to the matching result to form the matched feature point set. Specifically, a ratio test is performed with the template frame image as the query set and the video frame image as the target set. First, a 2-nearest-neighbour query is performed with the feature points of the template frame, giving the distance from each feature point of the video frame to each feature point of the template frame; correspondingly, the nearest neighbour bn and the second-nearest neighbour bn' of each feature point are obtained, with distances dn and dn' respectively. A threshold test is then performed with a matching threshold t: if dn > t, the matched pair is rejected. Next, a ratio test is performed with a ratio threshold ε: if dn/dn' > ε, both bn and bn' are considered possible matching points of the query set, so the matched pair is eliminated. Finally, a cross test is performed on the resulting point sets: let the matched feature point sets of the query set and the target set be {s_n} and {p_n}, where n = 1, 2, 3, ...; the query set and target set are inverted, the matching points {s'_n} of {p_n} are solved for, and the correct matching pairs correspond to the query set {s_n} ∩ {s'_n}.
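The threshold, ratio, and cross tests can be sketched as follows; `dist` is any descriptor distance (Hamming for binary descriptors), and the function names and the exact comparison directions are assumptions based on the description above:

```python
def two_nearest(query, candidates, dist):
    """Indices and distances of the nearest and second-nearest candidates."""
    ranked = sorted(range(len(candidates)),
                    key=lambda i: dist(query, candidates[i]))
    bn, bn2 = ranked[0], ranked[1]
    return (bn, dist(query, candidates[bn])), (bn2, dist(query, candidates[bn2]))

def filtered_matches(queries, targets, dist, t, eps):
    """Threshold test (reject dn > t) plus ratio test (reject dn/dn' > eps)."""
    matches = {}
    for qi, q in enumerate(queries):
        (bn, dn), (_, dn2) = two_nearest(q, targets, dist)
        if dn <= t and dn2 > 0 and dn / dn2 <= eps:
            matches[qi] = bn
    return matches

def cross_checked(fwd, bwd):
    """Cross test: keep only pairs matched in both directions,
    i.e. the query set {s_n} intersected with {s'_n}."""
    return {q: tg for q, tg in fwd.items() if bwd.get(tg) == q}
```

Running `filtered_matches` in both directions and intersecting with `cross_checked` removes one-sided matches, which is exactly what the inversion of query and target sets accomplishes.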
Further, the target tracking system includes an analysis unit 70, wherein the analysis unit 70 is communicatively connected to the feature point matching determination unit 60, wherein the analysis unit 70 is capable of acquiring, from the feature point matching determination unit 60, the set of video frame feature points matched with the feature points of the template frame image, and of calculating a homography between that feature point set and the corresponding feature points of the template frame, so as to obtain the motion of the tracked object between two adjacent video frames from the homography, thereby realizing tracking of the object.
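The homography between the matched point sets can be estimated, for instance, with a plain direct linear transform (DLT) over at least four correspondences; this sketch omits the coordinate normalization and robust estimation (e.g. RANSAC) a production tracker would add:

```python
import numpy as np

def homography_dlt(src_pts, dst_pts):
    """Estimate the 3x3 homography H mapping src_pts -> dst_pts by solving
    A h = 0 with SVD (direct linear transform, >= 4 correspondences)."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)      # null-space vector = flattened H
    return H / H[2, 2]            # fix the scale so H[2,2] == 1
```

For a pure translation between two frames the recovered H is [[1, 0, tx], [0, 1, ty], [0, 0, 1]], and applying H to any point of the first frame reproduces its position in the second, which is how the motion of the tracked object is read off.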
Referring to fig. 5, according to another aspect of the present invention, the present invention provides a method for extracting feature points of an image, wherein the method includes the steps of:
(A) Judging whether a point in the current image data is a feature point according to a feature point judgment threshold;
(B) Extracting feature points according to the judgment result;
(C) Comparing the feature points extracted in step (B) with the feature points in reference data to judge whether the feature points meet the requirement of the reference data on the feature points; and
(D) If the requirement is not met, changing the feature point judgment threshold in step (A) and continuing to execute step (A).
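Steps (A) through (D) form a feedback loop that can be sketched as follows; the update rule, the reference range [lo, hi], and the function names are illustrative assumptions:

```python
def extract_with_adaptive_threshold(extract_fn, threshold, lo, hi,
                                    step=1, max_iters=20):
    """Extract with the current threshold; while the feature count falls
    outside the reference range [lo, hi], adjust the threshold and retry."""
    for _ in range(max_iters):
        feats = extract_fn(threshold)          # steps (A) + (B)
        if lo <= len(feats) <= hi:             # step (C)
            break
        # step (D): too few points -> relax the threshold; too many -> tighten
        threshold = threshold - step if len(feats) < lo else threshold + step
    return feats, threshold
```

The `max_iters` cap guards against oscillation when no threshold can land the count inside the reference range.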
Referring to fig. 6, according to another aspect of the present invention, the present invention provides a method for tracking an object, comprising the steps of:
(A1) Judging whether a point in the current image data is a feature point according to a feature point judgment threshold;
(B1) Extracting feature points according to the judgment result;
(C1) Comparing the feature points extracted in step (B1) with the feature points in reference data to judge whether the feature points meet the requirement of the reference data on the feature points;
(D1) If the requirement is met, taking the feature points extracted in step (B1) as the finally extracted feature points; if not, changing the feature point judgment threshold in step (A1) and continuing to execute step (A1);
(E1) Matching the extracted feature points with feature points of a template frame to form a feature point set; and
(F1) Computing a homography transformation between the feature points in the set of feature points and the feature points of the template frame.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are given by way of example only and are not limiting of the invention. The objects of the invention have been fully and effectively accomplished. The functional and structural principles of the present invention have been shown and described in the embodiments, and any variations or modifications may be made to the embodiments of the present invention without departing from the principles described.
Claims (11)
1. An image feature point extraction system, comprising:
a feature point judging unit for judging whether a point in an image is a feature point according to a feature point judgment threshold;
a feature point extracting unit, wherein the feature point extracting unit is communicatively connected to the feature point judging unit to extract the corresponding feature points; and
a texture richness judging unit, wherein said texture richness judging unit is communicatively connected to said feature point judging unit and said feature point extracting unit, wherein said texture richness judging unit analyzes, based on reference data, whether said feature points extracted by said feature point extracting unit meet the feature point requirement corresponding to said reference data, and when they do not, said texture richness judging unit correspondingly forms update data to change said judgment threshold.
2. The image feature point extraction system according to claim 1, wherein the texture richness judging unit includes a texture richness judging module and a feature point change module, wherein the image feature point extraction system further includes a homogenization processing unit, wherein the texture richness judging module is communicatively connected to the feature point judging unit and the feature point extracting unit to analyze, based on reference data, whether the feature points extracted by the feature point extracting unit meet the feature point requirement corresponding to the reference data, wherein the feature point change module is communicatively connected to the feature point extracting unit and the texture richness judging module, wherein the feature point change module is capable of comparing the feature points currently extracted with the feature points previously extracted and of judging whether the change between the two satisfies a change threshold, and wherein the homogenization processing unit is communicatively connected to the feature point change module and the feature point judging unit to homogenize an image when the change satisfies the change threshold.
3. The image feature point extraction system of claim 1, wherein the feature point judgment threshold is applied to the absolute value of the difference between the pixel value of a center point and the pixel values of the 16 pixels on a circle of radius R around that pixel point in the image: if the number of pixels on the circle whose absolute difference from the center point exceeds the feature point judgment threshold δ reaches det(P_num), the center point is taken as a feature point, the judgment condition being as follows:
4. the image feature point extraction system according to claim 2, wherein the feature point change module determines whether the image needs to be uniformized according to the following formula:
F_num(I_j) < σ·F_num(T),
wherein F_num(I_j) represents the number of feature points of the current object image, wherein F_num(T) is the number of feature points of a key template frame of the object image, and wherein σ is a coefficient reflecting the influence of the texture richness of the object on the overall object features.
5. The image feature point extraction system according to claim 4, wherein the reference data corresponds to a number range δ1 to δ2 of feature points for objects of different texture richness.
6. The image feature point extraction system according to claim 5, wherein the function of changing the feature point determination threshold is:
7. the image feature point extraction system of claim 1, wherein the image feature point extraction system further comprises a data acquisition unit, wherein the data acquisition unit is communicatively connected to the feature point determination unit, wherein the data acquisition unit is capable of acquiring data related to at least one video frame.
8. An object tracking system comprising the image feature point extraction system of any one of claims 1 to 7, a feature point matching determination unit and an analysis unit, wherein the feature point matching determination unit matches the extracted feature points with feature points of a template frame according to a matching criterion to form a feature point set, wherein the analysis unit is communicatively connected to the feature point matching determination unit to calculate homography between the feature point set and the feature points of the template frame.
9. The object tracking system of claim 8, wherein the data acquisition unit comprises a template frame acquisition module, wherein the template frame acquisition module is communicatively connected to the feature point determination unit, wherein the template frame acquisition module is capable of storing image-related data of at least one tracked object, wherein the feature point determination unit is capable of determining feature points of the template frame, and wherein the feature point extraction unit is capable of extracting the feature points of the template frame accordingly.
10. A method for extracting image feature points is characterized by comprising the following steps:
(A) Judging whether a point in the current image data is a feature point according to a feature point judgment threshold;
(B) Extracting feature points according to the judgment result;
(C) Comparing the feature points extracted in step (B) with the feature points in reference data to judge whether the feature points meet the requirement of the reference data on the feature points; and
(D) If the requirement is met, taking the feature points extracted in step (B) as the finally extracted feature points; if not, changing the feature point judgment threshold in step (A) and continuing to execute step (A).
11. A method of target tracking, comprising the steps of:
(A1) Judging whether a point in the current image data is a feature point according to a feature point judgment threshold;
(B1) Extracting feature points according to the judgment result;
(C1) Comparing the feature points extracted in step (B1) with the feature points in reference data to judge whether the feature points meet the requirement of the reference data on the feature points;
(D1) If the requirement is met, taking the feature points extracted in step (B1) as the finally extracted feature points; if not, changing the feature point judgment threshold in step (A1) and continuing to execute step (A1);
(E1) Matching the extracted feature points with feature points of a template frame to form a feature point set; and
(F1) Computing a homography transformation between the feature points in the set of feature points and the feature points of the template frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711183532.3A CN108021921A (en) | 2017-11-23 | 2017-11-23 | Image characteristic point extraction system and its application |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108021921A true CN108021921A (en) | 2018-05-11 |
Family
ID=62080171
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711183532.3A Pending CN108021921A (en) | 2017-11-23 | 2017-11-23 | Image characteristic point extraction system and its application |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108021921A (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101405783A (en) * | 2006-03-24 | 2009-04-08 | 丰田自动车株式会社 | Road division line detector |
CN103279952A (en) * | 2013-05-17 | 2013-09-04 | 华为技术有限公司 | Target tracking method and device |
US20140270484A1 (en) * | 2013-03-14 | 2014-09-18 | Nec Laboratories America, Inc. | Moving Object Localization in 3D Using a Single Camera |
CN104182974A (en) * | 2014-08-12 | 2014-12-03 | 大连理工大学 | A speeded up method of executing image matching based on feature points |
CN104200487A (en) * | 2014-08-01 | 2014-12-10 | 广州中大数字家庭工程技术研究中心有限公司 | Target tracking method based on ORB characteristics point matching |
US20140368645A1 (en) * | 2013-06-14 | 2014-12-18 | Qualcomm Incorporated | Robust tracking using point and line features |
CN105844663A (en) * | 2016-03-21 | 2016-08-10 | 中国地质大学(武汉) | Adaptive ORB object tracking method |
CN106022263A (en) * | 2016-05-19 | 2016-10-12 | 西安石油大学 | Vehicle tracking method in fusion with feature matching and optical flow method |
CN107122782A (en) * | 2017-03-16 | 2017-09-01 | 成都通甲优博科技有限责任公司 | A kind of half intensive solid matching method in a balanced way |
CN107248169A (en) * | 2016-03-29 | 2017-10-13 | 中兴通讯股份有限公司 | Image position method and device |
CN107369183A (en) * | 2017-07-17 | 2017-11-21 | 广东工业大学 | Towards the MAR Tracing Registration method and system based on figure optimization SLAM |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109060806A (en) * | 2018-08-29 | 2018-12-21 | 孙燕 | Chest bottom end material type recognition mechanism |
CN109060806B (en) * | 2018-08-29 | 2019-09-13 | 陈青 | Chest bottom end material type recognition mechanism |
WO2020052120A1 (en) * | 2018-09-12 | 2020-03-19 | 北京字节跳动网络技术有限公司 | Method and device for processing feature point of image |
CN110895699A (en) * | 2018-09-12 | 2020-03-20 | 北京字节跳动网络技术有限公司 | Method and apparatus for processing feature points of image |
US11403835B2 (en) | 2018-09-12 | 2022-08-02 | Beijing Bytedance Network Technology Co., Ltd. | Method and device for processing feature point of image |
CN110895699B (en) * | 2018-09-12 | 2022-09-13 | 北京字节跳动网络技术有限公司 | Method and apparatus for processing feature points of image |
CN112288040A (en) * | 2020-01-10 | 2021-01-29 | 牧今科技 | Method and system for performing image classification for object recognition |
CN112288040B (en) * | 2020-01-10 | 2021-07-23 | 牧今科技 | Method and system for performing image classification for object recognition |
CN113020428A (en) * | 2021-03-24 | 2021-06-25 | 北京理工大学 | Processing monitoring method, device and equipment of progressive die and storage medium |
CN113020428B (en) * | 2021-03-24 | 2022-06-28 | 北京理工大学 | Progressive die machining monitoring method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | CB02 | Change of applicant information | Address after: 202177 room 493-61, building 3, No. 2111, Beiyan highway, Chongming District, Shanghai; Applicant after: TAPUYIHAI (SHANGHAI) INTELLIGENT TECHNOLOGY Co.,Ltd. Address before: 201802 room 412, building 5, No. 1082, Huyi Road, Jiading District, Shanghai; Applicant before: TAPUYIHAI (SHANGHAI) INTELLIGENT TECHNOLOGY Co.,Ltd. |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20180511 |