CN105405154A - Target object tracking method based on color-structure characteristics - Google Patents
- Publication number
- CN105405154A (application number CN201510530842.2A)
- Authority
- CN
- China
- Prior art keywords
- target object
- color
- image
- frame image
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a target object tracking method based on color-structure characteristics, aiming to overcome the high complexity and low accuracy of tracking moving objects in video images in the prior art. The method includes: performing object detection on an image in a video to obtain at least one object in the current frame image of the video; performing superpixel segmentation on the object and determining the color feature and the structure feature of the object; determining a target object to be tracked in the current frame image by comparing and matching the color feature and the structure feature against an object to be tracked in a preset object model database, and recording the position information of the target object in the current frame image; and tracking the target object in the next frame image of the video according to the color feature and the structure feature of the target object in the current frame image, and updating the position information of the target object. The method effectively improves the accuracy and robustness of video tracking algorithms for targets with a single texture.
Description
Technical Field
The invention relates to the technical field of pattern recognition and computer vision, in particular to a target object tracking method based on color-structure characteristics.
Background
Augmented Reality (AR) technology seamlessly merges objects and information in the real world with computer-generated objects and information in the virtual world. Characterized by the combination of the virtual and the real and by real-time interaction, it provides people with richer information and a more convenient information acquisition experience, and enhances people's understanding and perception of the real world.
The augmented reality technology based on video has been developed rapidly in recent years due to its low application cost and its general applicability to various environments. How to accurately track objects in the real world is one of the keys for realizing the combination of virtuality and reality in the augmented reality technology. As a basis for realizing the augmented reality technology, a target tracking technology based on a video image is widely applied to the fields of safety monitoring, autonomous vehicle driving, navigation guidance and control, man-machine interaction and the like at present, and is one of the key research directions in the field of computer vision in recent years.
In video-based augmented reality technology, video object tracking usually requires a virtual object to be tracked and registered on a real object photographed in real time. For the tracking of moving objects, if the same tracking algorithm is repeated for each key frame image of a video sequence, the complexity and the calculation amount of the whole operation are very large.
Meanwhile, in view of the complexity of feature recognition of a moving object and tracking of an object with a form change in motion, how to effectively ensure the recognition accuracy of the moving object and the real-time performance of detection and tracking becomes one of the technical problems to be solved for realizing wide application of the augmented reality technology.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a target object tracking method based on color-structure characteristics that addresses the high complexity and low accuracy of tracking a moving object in a video image in the prior art. The target object in the video image is identified by combining color characteristics and structural characteristics and is compared and matched against a preset model database, so that the target object is determined and tracked, thereby improving the accuracy, real-time performance and robustness of a video-image-based target tracking system.
In view of the above, the present invention provides a target object tracking method based on color-structure features, including: carrying out object detection on an image in a video to acquire at least one object in a current frame image of the video; performing superpixel segmentation on the object according to the pixel color information of the object; determining color features and structural features of the object according to the super pixels meeting preset conditions in the object; comparing and matching color features and structural features with an object to be tracked in a preset object model database, determining a target object to be tracked in the current frame image, and recording position information of the target object in the current frame image; and tracking the target object in the next frame image of the video according to the color feature and the structural feature of the target object in the current frame image, and updating the position information of the target object.
Preferably, performing the object detection on an image in the video comprises: reading the image in the video, and performing the object detection on the image through foreground identification or contour identification.
Preferably, the method comprises: performing said superpixel segmentation on said object to obtain a set of l superpixels {S1, S2, S3, …, Sl}, where l is a positive integer greater than or equal to 1.
Preferably, determining the color feature and the structural feature of the object according to the superpixels meeting the preset condition in the object comprises: in the superpixel set of the object, a superpixel Sk contains nk pixels, and the size ρk of the superpixel Sk is ρk = nk / N, where N is the number of pixels contained in the object; the color features and structural features of the object are then calculated from the superpixels in the superpixel set of the object whose ρ is larger than a preset threshold.
Preferably, the method further comprises: and converting the pixel color information described based on the HSV color space into a color characteristic of the pixel expressed by Euclidean space coordinates under a cylindrical coordinate system.
Preferably, the structural features of the object include distances and angles of superpixels in the object.
Preferably, determining a target object to be tracked in the current frame image includes: and after the comparison and matching are carried out, calculating the matching degree of the object in the current frame image and the object to be tracked, and if the matching degree reaches a preset matching threshold value, determining the object in the current frame image as the target object.
Preferably, the method further comprises: and after the position information of the target object in the current frame image is recorded, estimating the position information of the target object in the next frame image according to the position information of the target object in the current frame image.
Preferably, tracking the target object in a next frame image of the video according to the color feature and the structural feature of the target object in the current frame image comprises: and extracting a sub-image in the next frame image according to the estimated position information of the target object in the next frame image, and determining the target object in the sub-image according to the color feature and the structural feature of the target object in the sub-image.
Preferably, the method further comprises: before the object detection is carried out on the image in the video, the object model database is established, and the color characteristic and the structural characteristic of the object to be tracked are stored.
According to the above technical scheme, when tracking the target object in a video image, the object in the image is segmented into superpixels of high color correlation, the color feature and the structural feature of the object are combined, the feature matching degree between the object in the video image and a model object is calculated, and the target object to be tracked is determined through feature matching. In the next frame image of the video, the object is tracked by comparing and matching against the color feature and the structural feature of the target object in the previous frame image. The technical scheme of the invention effectively overcomes the dependence of object tracking methods based on pixel feature description on the texture of the target object, and improves the applicability of the video tracking algorithm to targets with a single texture.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure and/or process particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments will be briefly described below. It is to be understood that the drawings in the following description are merely illustrative of some embodiments of the invention and that other drawings may be derived by those skilled in the art without inventive exercise from these drawings:
FIG. 1 is a flowchart illustrating a target object tracking method based on color-structure features according to a first embodiment of the present invention;
fig. 2 is a flowchart illustrating a target object tracking method based on color-structure characteristics according to a second embodiment of the present invention.
Detailed Description
So that the objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments thereof that are illustrated in the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, these are merely examples of the invention, which may be embodied in other ways than those specifically set forth herein. Therefore, the scope of the invention is not limited by the specific embodiments disclosed below.
Fig. 1 is a flowchart illustrating a target object tracking method based on color-structure characteristics according to a first embodiment of the present invention.
As shown in fig. 1, a target object tracking method based on color-structure features according to a first embodiment of the present invention mainly includes the following steps:
step S101, carrying out object detection on an image in a video to acquire at least one object in a current frame image of the video;
step S102, performing superpixel segmentation on the object according to the pixel color information of the object;
step S103, determining color characteristics and structural characteristics of the object according to the super pixels meeting preset conditions in the object;
step S104, comparing and matching the color characteristics and the structural characteristics of the object with an object to be tracked in a preset object model database, determining a target object to be tracked in the current frame image, and recording the position information of the target object in the current frame image;
step S105, tracking the target object in a next frame image of the video according to the color feature and the structural feature of the target object in the current frame image, obtaining the position information of the target object in the next frame image, and updating the position information of the target object.
In the next frame image of the video, the technology from step S101 to step S103 is used to perform object detection on the next frame image, and at least one object in the next frame image is acquired. And comparing and matching the color features and the structural features of the object in the next frame image with the target object in the previous frame image (namely, the current frame image) of the video, determining the target object in the next frame image, namely, tracking the target object in the next frame image, determining the position information of the target object in the next frame image, and updating the position information of the target object by using the position information of the target object in the next frame image.
After an object is obtained in the previous frame image of two adjacent frame images of the video, the color feature and the structural feature of the object are adopted to carry out comparison and matching in an object model database, a target object to be tracked is obtained, and the position information of the target object in the frame image is recorded. And determining the position information of the target object in the next frame image in the two adjacent frame images by using the color feature and the structural feature of the target object. And updating the position information of the target object according to the position information of the target object in the next frame image.
In the technical scheme, in order to accurately determine and track a target object in a video image, pixels of the image are grouped and clustered according to color features of the pixels, and then superpixels with high color correlation are adopted to perform superpixel segmentation on the object in the image. Based on the superpixels meeting the preset conditions, the color features and the structural features of the superpixels forming the object in the image are calculated, so that the data volume processed by the operation of analyzing and identifying the object in the image is greatly reduced, and meanwhile, the structural feature information related to the object in the image is retained to the maximum extent. And combining the color characteristic and the structural characteristic of each super pixel in the super pixel set forming the object to obtain the color characteristic and the structural characteristic set of the object. And determining a target object to be tracked in the video image by matching and comparing the similarity of the object in the image and the object to be tracked in the model database on the color characteristic and the structural characteristic, and realizing real-time and accurate tracking of the target object in the video image by comparing and matching the color characteristic and the structural characteristic of the target object in the previous frame of image. The technical scheme of the invention effectively overcomes the defect that the object tracking method based on pixel feature description depends on the texture of the target object, and simultaneously improves the applicability of the target object tracking algorithm in the video image to the single-texture target.
In the above technical solution, preferably, the video image sequence is read and analyzed, foreground recognition or contour recognition is performed by using a background difference method, and one or more main objects in the current frame image of the video are extracted, or all objects that can be recognized can be extracted as needed.
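As an illustration of this detection step, the sketch below uses OpenCV's MOG2 background subtractor to obtain foreground regions and their contours; the choice of MOG2, the median filter and the minimum-area threshold are assumptions of this example, not requirements of the method.

```python
import cv2

def detect_objects(frame, bg_subtractor, min_area=500):
    """Return bounding boxes of candidate foreground objects in one video frame."""
    fg_mask = bg_subtractor.apply(frame)                       # background difference
    fg_mask = cv2.medianBlur(fg_mask, 5)                       # suppress isolated noise pixels
    _, fg_mask = cv2.threshold(fg_mask, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # keep only contours large enough to be plausible objects
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# usage (hypothetical):
# bg = cv2.createBackgroundSubtractorMOG2()
# boxes = detect_objects(frame, bg)
```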
In the foregoing technical solution, preferably, the object is subjected to superpixel segmentation according to the obtained pixel color information of the object, yielding a set of l superpixels {S1, S2, S3, …, Sl}, where l is a positive integer greater than or equal to 1.
In the technical scheme, according to the pixel color of an object, the object obtained from a current frame image of a video is subjected to superpixel segmentation to obtain a plurality of regions with different colors, wherein each region is a superpixel. Wherein each super pixel comprises a plurality of pixels.
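For concreteness, the following is a minimal sketch of this segmentation step using the SLIC algorithm from scikit-image; the patent text does not prescribe a particular superpixel algorithm, so SLIC and the parameter values here are assumptions for illustration.

```python
from skimage.segmentation import slic

def segment_superpixels(object_rgb, n_segments=50):
    """Group the pixels of an object image into color-coherent superpixels.

    Returns an integer label map of the same height and width as the input,
    where each label identifies one superpixel Sk.
    """
    return slic(object_rgb, n_segments=n_segments, compactness=10, start_label=1)
```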
In the foregoing technical solution, preferably, in the superpixel set of any object, the superpixel Sk contains nk pixels; if the number of pixels contained in the object is N, the size ρk of the superpixel Sk is:
ρk = nk / N,
where k is the number of the superpixel and 1 ≤ k ≤ l. The color feature and the structural feature of the object are calculated from the superpixels in the object's superpixel set whose size ρ is larger than the preset threshold of 0.05.
In this technical solution, for the superpixels already segmented in an image object, the relative size ρ of each superpixel can be calculated from the number of pixels it contains; ρ represents the ratio of the size of the superpixel to the size of the image object. Superpixels with ρ greater than 0.05 contain more pixels than those with ρ less than 0.05 and can therefore provide more color and structural feature information. Accordingly, when calculating and analyzing the color and structural features, the superpixels of the image object are filtered by this condition: only the superpixels whose ρ exceeds 0.05 (other preset thresholds are also feasible) are selected and used to calculate the color feature and the structural feature of the image object.
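A minimal sketch of this size filter, assuming the object is represented by the superpixel label map produced in the segmentation step; the function name is a hypothetical helper for illustration:

```python
import numpy as np

def select_large_superpixels(labels, rho_threshold=0.05):
    """Return labels of superpixels whose relative size rho_k = n_k / N exceeds the threshold."""
    total_pixels = labels.size                         # N: number of pixels in the object region
    ids, counts = np.unique(labels, return_counts=True)
    rho = counts / total_pixels                        # rho_k for every superpixel
    return ids[rho > rho_threshold]
```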
In the foregoing technical solution, preferably, before determining the color feature of the object, the pixel color information described in the HSV color space is converted into a color feature of the pixel expressed in Euclidean space coordinates under a cylindrical coordinate system; the color feature of a superpixel is then described as (c1, c2, c3), computed from the HSV components, where h denotes hue, s denotes saturation, and v denotes brightness.
In the technical scheme, the RGB color space description value of the object pixel can be converted into HSV color space description through an HSV color model, and meanwhile, in order to more accurately perform color feature comparison and matching, the chromaticity coordinates described by the HSV color space description are uniformly converted into Euclidean space coordinates under a cylindrical coordinate system to describe the color feature of the super pixel.
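The exact conversion expression is not reproduced above, so the sketch below uses one common embedding of HSV into Euclidean coordinates on a cylinder/cone, c1 = s·v·cos(h), c2 = s·v·sin(h), c3 = v; treat this as an illustrative assumption rather than the patent's specific formula.

```python
import numpy as np

def hsv_to_euclidean(h, s, v):
    """Map an HSV triple (h in radians, s and v scaled to [0, 1]) to (c1, c2, c3)."""
    c1 = s * v * np.cos(h)
    c2 = s * v * np.sin(h)
    c3 = v
    return np.array([c1, c2, c3])
```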
In the above technical solution, preferably, the structural feature of the object includes a distance and an included angle of a super pixel in the object.
In this technical solution, the superpixel distances and included angles in the structural features of the object are calculated as follows. Select the m superpixels with size ρ larger than the preset threshold (0.05) in the superpixel set of the object, where m is a positive integer greater than or equal to 1. The center Ck of a superpixel Sk is defined as the coordinate average of all the pixels it contains, i.e.:
Ck = (1/nk) Σ pi,
where pi (i = 1, …, nk) are the coordinates of the pixels in Sk.
The center C0 of the object is defined as the coordinate average of the m superpixel centers:
C0 = (1/m) Σ Ck (k = 1, …, m).
The distance lk of a superpixel Sk is defined as the distance from its center Ck to the center C0 of the object, i.e.:
lk = ‖Ck − C0‖.
The m superpixels of the object are arranged by distance in ascending (or descending) order to obtain the superpixel set {S1, S2, S3, …, Sm}.
The main direction of the object is the direction from the center C0 of the object to the center C1 of S1, the superpixel whose distance from the object center is the smallest (or largest) among all superpixels of the object.
The angle θk of a superpixel Sk is defined as the angle between the vector from C0 to Ck and the main direction of the object.
The feature description of the object thus includes its color feature and its structural feature, where the color feature of the object is the set of color features of the m selected superpixels,
((c1(1), c2(1), c3(1)), (c1(2), c2(2), c3(2)), …, (c1(m), c2(m), c3(m)))T,
and the structural feature of the object is:
((l1, θ1), (l2, θ2), …, (lm, θm))T.
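The sketch below illustrates this structural-feature computation, assuming the object is given as a superpixel label map together with the ids of the selected (large enough) superpixels; the helper name and the tie to the earlier sketches are assumptions of this example.

```python
import numpy as np

def structural_features(labels, selected_ids):
    """Return (lk, thetak) for each selected superpixel, ordered by distance from the object center."""
    centers = {}
    for k in selected_ids:
        ys, xs = np.nonzero(labels == k)
        centers[k] = np.array([xs.mean(), ys.mean()])        # Ck: mean pixel coordinate
    c0 = np.mean(list(centers.values()), axis=0)             # C0: coordinate average of the centers
    order = sorted(selected_ids, key=lambda k: np.linalg.norm(centers[k] - c0))
    main_dir = centers[order[0]] - c0                        # main direction C0 -> C1 (assumed non-degenerate)
    feats = []
    for k in order:
        vec = centers[k] - c0
        l_k = np.linalg.norm(vec)                            # distance lk
        cos_t = np.dot(vec, main_dir) / (l_k * np.linalg.norm(main_dir) + 1e-12)
        theta_k = np.arccos(np.clip(cos_t, -1.0, 1.0))       # angle thetak to the main direction
        feats.append((l_k, theta_k))
    return feats
```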
In this technical scheme, the matching degree between the object in the current frame image and the object to be tracked is calculated by comparing and matching color features and structural features against the object to be tracked in the object model database. If the matching degree reaches the preset matching threshold, the corresponding object in the current frame image is determined as the target object to be tracked, and the position information of the target object in the current frame image of the video is recorded.
In this technical scheme, an object to be tracked is selected from the object model database, and the matching degree is calculated by separately comparing and matching its color features and structural features with those of the object in the image, specifically:
the superpixel similarity of the object to be tracked and the image object is defined, and specifically,
=wc c+ws s
wherein,cin order to be the degree of similarity of the color features,sfor structural feature similarity, wc,wsRespectively a color feature weight and a structural feature weight, wc+ws=1。
Respectively calculating the color feature similarity and the structural feature similarity of the object to be tracked and the superpixel in the image object through cosine distances, specifically, the color feature similaritycCalculated by the following expression:
the similarity of the structural features is calculated by the following expression:
in the above calculation expression, the characteristic parameter of the super pixel in the image object is represented by a superscript symbol q, and the characteristic parameter of the super pixel in the object to be tracked is represented by a superscript symbol r.
Through this feature similarity calculation, the similarity between a superpixel in the superpixel set of the object to be tracked and a superpixel in the superpixel set of the image object is obtained; if the similarity is greater than 0.7, the two superpixels are determined to match successfully. If the number of successfully matched superpixels between the two sets reaches or exceeds a preset proportion, for example 50%-90% of the total number of superpixels in the image object, the matching degree between the object in the image and the object to be tracked is determined to have reached the preset matching threshold, the matching succeeds, the image object is the target object to be tracked, and the position information of the target object is recorded.
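A hedged sketch of this matching logic: cosine similarity on the color vectors and on the (l, θ) structural vectors, combined with weights wc + ws = 1, followed by the match-ratio decision over the superpixel pairs. The equal weights, the thresholds and the helper names are illustrative assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def superpixel_similarity(color_q, struct_q, color_r, struct_r, w_c=0.5, w_s=0.5):
    """epsilon = w_c * epsilon_c + w_s * epsilon_s for one pair of superpixels."""
    return w_c * cosine_sim(color_q, color_r) + w_s * cosine_sim(struct_q, struct_r)

def object_matches(pairs, sim_threshold=0.7, ratio_threshold=0.5):
    """pairs: list of (color_q, struct_q, color_r, struct_r) candidate superpixel pairs."""
    matched = sum(superpixel_similarity(cq, sq, cr, sr) > sim_threshold
                  for cq, sq, cr, sr in pairs)
    return matched / max(len(pairs), 1) >= ratio_threshold
```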
In this technical solution, the similarity calculation uses the cosine distance. It should be noted that, for the structural similarity calculation between the object to be tracked and the object in the image, the Mahalanobis distance and other methods capable of achieving the same purpose may also be used; details are not described here.
In this technical solution, steps S101 to S103 are repeated in the next frame image of the video, and the target object to be tracked in the next frame image is determined by comparing and matching against the color feature and the structural feature of the target object in the previous frame image of the video (the current frame image in step S101). The specific comparison and matching method is consistent with the similarity comparison method and determination condition described above and is not repeated here. If the number of superpixels whose similarity reaches the preset similarity threshold reaches the preset proportion of the total number of superpixels in the object's superpixel set, for example, the number of superpixels with matching similarity greater than 0.7 reaches 50%-90% of the total number of superpixels in the image object, the target object is determined to be matched successfully, its position information is updated, and accurate tracking of the target object is realized.
Fig. 2 is a flowchart illustrating a target object tracking method based on color-structure characteristics according to a second embodiment of the present invention.
As shown in fig. 2, a target object tracking method based on color-structure features according to a second embodiment of the present invention includes the following steps:
step S201, carrying out object detection on an image in a video, and acquiring at least one object in a current frame image of the video;
step S202, performing superpixel segmentation on the object according to the acquired pixel color information of the object;
step S203, determining color characteristics and structural characteristics of the object according to the super pixels meeting preset conditions in the object;
step S204, comparing and matching color features and structural features with an object to be tracked in an object model database, determining a target object to be tracked in the current frame image, and recording position information of the target object in the current frame image;
step S205, in the current frame image of the video, according to the position information of the target object, estimating the position information of the target object in the next frame image by adopting a motion model;
step S206, in the next frame image of the video, taking the estimated position information as a reference position, extracting a sub-image within a preset range determined based on the reference position, tracking the target object in the sub-image according to the color feature and the structural feature of the target object in the current frame image by using the techniques of steps S202 to S203 on the sub-image, obtaining the position information of the target object in the sub-image, and updating the position information of the target object.
Specifically, the sub-image is subjected to superpixel segmentation to obtain its color features and structural features, which are compared and matched against the target object in the previous frame image (the current frame image) of the video to determine whether the target object to be tracked exists in the sub-image. If the matching degree between the color and structural features of the sub-image and those of the target object in the previous frame image reaches the preset matching degree threshold, the target object is determined to be tracked in the sub-image; based on the position information of the sub-image within the next frame image, the position information of the target object in the next frame image can then be determined, and the position information of the target object is updated with it.
In this technical solution, the position where the target object may appear in the next frame image of the video is estimated from the position information of the target object in the current frame image, an object motion model, the object's motion trajectory or motion trend, and the time interval between video frames. A sub-image is extracted from the next frame image within a preset region around this estimated position, for example a region in the range of 100%-200% centered on the position; superpixel segmentation, object identification and comparison are then performed on the sub-image, and whether it contains the target object, and therefore whether tracking succeeds, is determined from the matching degree of the color and structural features between the sub-image and the target object. Compared with identifying and comparing the entire content of the whole frame image, identifying and analyzing only this partial image region reduces the amount of image data to be processed, effectively shortens the identification time, locates the target object more quickly and accurately, and improves positioning efficiency.
In this technical solution, the motion state information of the target object, including its position, speed and acceleration, can be accurately estimated and predicted using least-squares filtering, Kalman filtering, extended Kalman filtering or particle filtering. Based on the motion state information of the target object in the previous frame image and the video frame interval, the motion state of the target object in the next frame image is predicted and a reference range for searching the target object in the next frame image is determined. This narrows the search range and reduces the amount and complexity of the object recognition computation, so that the target object can be searched and matched quickly and effectively, realizing real-time and accurate tracking.
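As an illustration of the prediction step, the sketch below uses OpenCV's Kalman filter with a constant-velocity state and crops a search window around the predicted position; the state layout, the window scale and all parameter values are assumptions of this example, and least-squares, extended Kalman or particle filters could be substituted as noted above.

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                       # state: (x, y, vx, vy); measurement: (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)

def predict_and_crop(frame, box_size, scale=1.5):
    """Predict the target position in the next frame and crop a search sub-image around it."""
    x, y = kf.predict()[:2].flatten()
    w, h = int(box_size[0] * scale), int(box_size[1] * scale)
    x0, y0 = max(int(x - w // 2), 0), max(int(y - h // 2), 0)
    return frame[y0:y0 + h, x0:x0 + w], (x0, y0)

# after the target is located in the sub-image, feed the measured center back:
# kf.correct(np.array([[cx], [cy]], dtype=np.float32))
```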
In any of the above embodiments, preferably, if matching against the color feature and the structural feature of the target object in the previous frame image of the video fails in the next frame image, the target object to be tracked may be re-determined by comparison and matching against the object to be tracked in the model database. If this matching succeeds, the position information of the target object is updated and the tracking of the target object is completed.
In this technical solution, the object to be tracked in the model database and the target object in the previous frame image of the video do not have exactly the same color and structural features and thus constitute different comparison samples. If matching against the color and structural features of the target object in the previous frame image fails in the next frame image, resulting in loss of the tracked target, a second comparison and matching can be performed against the color and structural features of the object to be tracked in the object model database. If the matching degree of this second comparison reaches the preset matching degree threshold, the target object is determined to be matched successfully and its position information is updated. This noticeably improves the matching accuracy of the target object and enhances the reliability and robustness of the tracking.
Further, after the target object to be tracked has been determined for the first time by comparison and matching with the object to be tracked in the object model database, the target object can, during subsequent tracking, be determined in later frame images of the video according to similarity by comparing its color features and structural features with those of the target object in the previous image. Alternatively, the comparison can be made against the color features and structural features of the object to be tracked in the object model database and the target object determined in the image according to the similarity; or both comparison modes can be adopted simultaneously to improve the accuracy and reliability of comparison and matching.
In any of the above embodiments, preferably, before the step of performing object detection on an image in a video and obtaining at least one object therefrom, an object model database may be further established in advance, and color features and structural features of the object to be tracked are stored for subsequent comparison and matching with the object in the image, and the target object to be tracked is determined from the image.
In any of the above embodiments, preferably, the object model database obtains image information of the object to be tracked in an online and/or offline manner, and updates the color feature and the structural feature of the object to be tracked in the object model database.
In summary, the present invention provides a target object tracking method based on color-structure features which determines the target object to be tracked in a video image through feature matching. Even when the background of the target object is complex, when the target object is occluded, or when rapid motion places the target object far apart in two adjacent video frames, the motion trend of the target object can be predicted, so that the target object is positioned and tracked quickly and accurately with good reliability and robustness.
It is again stated that all of the features disclosed in this specification, or all of the steps in any method or process so disclosed, may be combined in any combination, except mutually exclusive features and/or steps.
Any feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving equivalent or similar purposes, unless expressly stated otherwise. That is, unless expressly stated otherwise, each feature is only an example of a generic series of equivalent or similar features.
It will be appreciated by those skilled in the art that the steps of the method provided by the embodiments of the present application may be performed collectively on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented in program code executable by a computing device. Thus, they may be stored in a memory device for execution by a computing device, or they may be separately fabricated as individual integrated circuit modules, or multiple modules or steps thereof may be fabricated as a single integrated circuit module for implementation. Thus, the present invention is not limited to any specific combination of hardware and software.
Although the embodiments of the present invention have been described above, the above description is only for the convenience of understanding the technical solution of the present invention, and is not intended to limit the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A method for tracking a target object based on color-structure features, the method comprising:
carrying out object detection on an image in a video to acquire at least one object in a current frame image of the video;
performing superpixel segmentation on the object according to the pixel color information of the object;
determining color features and structural features of the object according to the super pixels meeting preset conditions in the object;
comparing and matching color features and structural features with an object to be tracked in a preset object model database, determining a target object to be tracked in the current frame image, and recording position information of the target object in the current frame image;
and tracking the target object in the next frame image of the video according to the color feature and the structural feature of the target object in the current frame image, and updating the position information of the target object.
2. The color-structure feature based target object tracking method according to claim 1, wherein performing object detection on an image in a video comprises:
and reading the image in the video, and carrying out the object detection on the image in the video through foreground identification or contour identification.
3. The color-structure feature based target object tracking method according to claim 1 or 2, characterized in that the method comprises:
performing said superpixel segmentation on said object to obtain a set of l superpixels {S1, S2, S3, …, Sl}, where l is a positive integer greater than or equal to 1.
4. The color-structure feature based target object tracking method according to claim 3, wherein determining the color feature and the structure feature of the object according to the super-pixel meeting the preset condition in the object comprises:
in the superpixel set of the object, a superpixel Sk contains nk pixels, and the size ρk of said superpixel Sk is:
ρk = nk / N,
where N is the number of pixels contained in the object; and
calculating the color feature and the structural feature of the object according to the superpixels in the superpixel set of the object whose ρ is larger than a preset threshold.
5. The color-structure feature based target object tracking method according to claim 4, characterized in that the method further comprises:
and converting the pixel color information described based on the HSV color space into a color characteristic of the pixel expressed by Euclidean space coordinates under a cylindrical coordinate system.
6. The color-structure feature based target object tracking method according to claim 4, wherein the structural features of the object include distances and angles of superpixels in the object.
7. The color-structure feature based target object tracking method according to claim 4, wherein determining a target object to be tracked in the current frame image comprises:
and after the comparison and matching are carried out, calculating the matching degree of the object in the current frame image and the object to be tracked, and if the matching degree reaches a preset matching threshold value, determining the object in the current frame image as the target object.
8. The color-structure feature based target object tracking method according to claim 1, further comprising:
and after the position information of the target object in the current frame image is recorded, estimating the position information of the target object in the next frame image according to the position information of the target object in the current frame image.
9. The color-structure feature based target object tracking method according to claim 8, wherein tracking the target object in a next frame image of the video according to the color feature and the structure feature of the target object in the current frame image comprises:
and extracting a sub-image in the next frame image according to the estimated position information of the target object in the next frame image, and determining the target object in the sub-image according to the color feature and the structural feature of the target object in the sub-image.
10. The color-structure feature based target object tracking method according to claim 1, further comprising:
before the object detection is carried out on the image in the video, the object model database is established, and the color characteristic and the structural characteristic of the object to be tracked are stored.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510530842.2A CN105405154B (en) | 2014-09-04 | 2015-08-26 | Target object tracking based on color-structure feature |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410450138.1A CN104240266A (en) | 2014-09-04 | 2014-09-04 | Target object tracking method based on color-structure features |
CN2014104501381 | 2014-09-04 | ||
CN201510530842.2A CN105405154B (en) | 2014-09-04 | 2015-08-26 | Target object tracking based on color-structure feature |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105405154A true CN105405154A (en) | 2016-03-16 |
CN105405154B CN105405154B (en) | 2018-06-15 |
Family
ID=52228272
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410450138.1A Pending CN104240266A (en) | 2014-09-04 | 2014-09-04 | Target object tracking method based on color-structure features |
CN201510530842.2A Active CN105405154B (en) | 2014-09-04 | 2015-08-26 | Target object tracking based on color-structure feature |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410450138.1A Pending CN104240266A (en) | 2014-09-04 | 2014-09-04 | Target object tracking method based on color-structure features |
Country Status (2)
Country | Link |
---|---|
CN (2) | CN104240266A (en) |
WO (1) | WO2016034059A1 (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108268823A (en) * | 2016-12-30 | 2018-07-10 | 纳恩博(北京)科技有限公司 | Target recognition methods and device again |
CN108492314A (en) * | 2018-01-24 | 2018-09-04 | 浙江科技学院 | Wireless vehicle tracking based on color characteristics and structure feature |
CN109918997A (en) * | 2019-01-22 | 2019-06-21 | 深圳职业技术学院 | A pedestrian target tracking method based on multi-instance learning |
CN110580707A (en) * | 2018-06-08 | 2019-12-17 | 杭州海康威视数字技术股份有限公司 | object tracking method and system |
CN110647658A (en) * | 2019-08-02 | 2020-01-03 | 惠州市德赛西威汽车电子股份有限公司 | Vehicle-mounted image feature automatic identification method and system based on cloud computing |
CN111383246A (en) * | 2018-12-29 | 2020-07-07 | 杭州海康威视数字技术股份有限公司 | Scroll detection method, device and equipment |
WO2020252974A1 (en) * | 2019-06-17 | 2020-12-24 | 北京影谱科技股份有限公司 | Method and device for tracking multiple target objects in motion state |
CN112244887A (en) * | 2019-07-06 | 2021-01-22 | 西南林业大学 | Carotid artery vessel wall motion trajectory extraction device and method based on B-ultrasonic image |
US10928898B2 (en) | 2019-01-03 | 2021-02-23 | International Business Machines Corporation | Augmented reality safety |
CN115439509A (en) * | 2022-11-07 | 2022-12-06 | 成都泰盟软件有限公司 | Multi-target tracking method and device, computer equipment and storage medium |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104240266A (en) * | 2014-09-04 | 2014-12-24 | 成都理想境界科技有限公司 | Target object tracking method based on color-structure features |
CN106156248B (en) * | 2015-04-28 | 2020-03-03 | 北京智谷睿拓技术服务有限公司 | Information processing method and apparatus |
CN106373143A (en) * | 2015-07-22 | 2017-02-01 | 中兴通讯股份有限公司 | Adaptive method and system |
CN107301651A (en) * | 2016-04-13 | 2017-10-27 | 索尼公司 | Object tracking apparatus and method |
CN105930815B (en) * | 2016-05-04 | 2022-10-04 | 中国农业大学 | A kind of underwater biological detection method and system |
CN109416535B (en) * | 2016-05-25 | 2022-11-11 | 深圳市大疆创新科技有限公司 | Aircraft navigation technology based on image recognition |
CN106780582B (en) * | 2016-12-16 | 2019-08-13 | 西安电子科技大学 | The image significance detection method merged based on textural characteristics and color characteristic |
CN106909935B (en) * | 2017-01-19 | 2021-02-05 | 博康智能信息技术有限公司上海分公司 | Target tracking method and device |
CN106909934B (en) * | 2017-01-19 | 2021-02-05 | 博康智能信息技术有限公司上海分公司 | Target tracking method and device based on self-adaptive search |
CN106897735A (en) * | 2017-01-19 | 2017-06-27 | 博康智能信息技术有限公司上海分公司 | The tracking and device of a kind of Fast Moving Object |
CN109658326B (en) * | 2017-10-11 | 2024-01-16 | 深圳市中兴微电子技术有限公司 | Image display method and device and computer readable storage medium |
CN108090436B (en) * | 2017-12-13 | 2021-11-19 | 深圳市航盛电子股份有限公司 | Method, system and medium for detecting moving object |
CN108229554A (en) * | 2017-12-29 | 2018-06-29 | 北京中船信息科技有限公司 | Integrated touch-control commander's table and command methods |
CN110163076B (en) * | 2019-03-05 | 2024-05-24 | 腾讯科技(深圳)有限公司 | Image data processing method and related device |
CN110503696B (en) * | 2019-07-09 | 2021-09-21 | 浙江浩腾电子科技股份有限公司 | Vehicle face color feature detection method based on super-pixel sampling |
CN112101207B (en) * | 2020-09-15 | 2023-12-22 | 精英数智科技股份有限公司 | Target tracking method and device, electronic equipment and readable storage medium |
CN113240712A (en) * | 2021-05-11 | 2021-08-10 | 西北工业大学 | Underwater cluster neighbor tracking measurement method based on vision |
CN113361388B (en) * | 2021-06-03 | 2023-11-24 | 北京百度网讯科技有限公司 | Image data correction method and device, electronic equipment and automatic driving vehicle |
CN114529624A (en) * | 2022-02-17 | 2022-05-24 | 浙江核新同花顺网络信息股份有限公司 | Image color matching method and system and image generation method and system |
CN115225815B (en) * | 2022-06-20 | 2023-07-25 | 南方科技大学 | Target intelligent tracking shooting method, server, shooting system, equipment and medium |
CN118657920B (en) * | 2024-08-21 | 2024-11-19 | 湖南苏科智能科技有限公司 | Article detection method, system, security inspection machine and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100128110A1 (en) * | 2008-11-21 | 2010-05-27 | Theofanis Mavromatis | System and method for real-time 3-d object tracking and alerting via networked sensors |
CN103092930A (en) * | 2012-12-30 | 2013-05-08 | 信帧电子技术(北京)有限公司 | Method of generation of video abstract and device of generation of video abstract |
EP2626835A1 (en) * | 2012-02-08 | 2013-08-14 | Samsung Electronics Co., Ltd | Object tracking apparatus and control method thereof |
CN103281477A (en) * | 2013-05-17 | 2013-09-04 | 天津大学 | Multi-level characteristic data association-based multi-target visual tracking method |
CN103426183A (en) * | 2013-07-10 | 2013-12-04 | 上海理工大学 | Method and device for tracking motion objects |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8027513B2 (en) * | 2007-03-23 | 2011-09-27 | Technion Research And Development Foundation Ltd. | Bitmap tracker for visual tracking under very general conditions |
CN101325690A (en) * | 2007-06-12 | 2008-12-17 | 上海正电科技发展有限公司 | Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow |
CN102930539B (en) * | 2012-10-25 | 2015-08-26 | 江苏物联网研究发展中心 | Based on the method for tracking target of Dynamic Graph coupling |
CN103037140B (en) * | 2012-12-12 | 2019-06-28 | 杭州国策商图科技有限公司 | A kind of target tracking algorism based on Block- matching |
CN104240266A (en) * | 2014-09-04 | 2014-12-24 | 成都理想境界科技有限公司 | Target object tracking method based on color-structure features |
- 2014-09-04: application CN201410450138.1A filed in China (published as CN104240266A, status: pending)
- 2015-08-26: application CN201510530842.2A filed in China (published as CN105405154B, status: active)
- 2015-08-26: application PCT/CN2015/088095 filed (published as WO2016034059A1, status: application filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100128110A1 (en) * | 2008-11-21 | 2010-05-27 | Theofanis Mavromatis | System and method for real-time 3-d object tracking and alerting via networked sensors |
EP2626835A1 (en) * | 2012-02-08 | 2013-08-14 | Samsung Electronics Co., Ltd | Object tracking apparatus and control method thereof |
CN103092930A (en) * | 2012-12-30 | 2013-05-08 | 信帧电子技术(北京)有限公司 | Method of generation of video abstract and device of generation of video abstract |
CN103281477A (en) * | 2013-05-17 | 2013-09-04 | 天津大学 | Multi-level characteristic data association-based multi-target visual tracking method |
CN103426183A (en) * | 2013-07-10 | 2013-12-04 | 上海理工大学 | Method and device for tracking motion objects |
Non-Patent Citations (3)
Title |
---|
WANG S et al.: "Superpixel tracking", IEEE International Conference on Computer Vision (ICCV) *
ZHOU Zhiping et al.: "Research on a superpixel-based target tracking method", Opto-Electronic Engineering *
GAO Yiwen et al.: "A license plate location algorithm based on color and structure features", Journal of Chongqing University of Arts and Sciences (Natural Science Edition) *
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108268823B (en) * | 2016-12-30 | 2021-07-20 | 纳恩博(北京)科技有限公司 | Target re-identification method and device |
CN108268823A (en) * | 2016-12-30 | 2018-07-10 | 纳恩博(北京)科技有限公司 | Target recognition methods and device again |
CN108492314B (en) * | 2018-01-24 | 2020-05-19 | 浙江科技学院 | Vehicle tracking method based on color characteristics and structural features |
CN108492314A (en) * | 2018-01-24 | 2018-09-04 | 浙江科技学院 | Wireless vehicle tracking based on color characteristics and structure feature |
CN110580707A (en) * | 2018-06-08 | 2019-12-17 | 杭州海康威视数字技术股份有限公司 | object tracking method and system |
CN111383246A (en) * | 2018-12-29 | 2020-07-07 | 杭州海康威视数字技术股份有限公司 | Scroll detection method, device and equipment |
CN111383246B (en) * | 2018-12-29 | 2023-11-07 | 杭州海康威视数字技术股份有限公司 | Scroll detection method, device and equipment |
US10928898B2 (en) | 2019-01-03 | 2021-02-23 | International Business Machines Corporation | Augmented reality safety |
CN109918997A (en) * | 2019-01-22 | 2019-06-21 | 深圳职业技术学院 | A pedestrian target tracking method based on multi-instance learning |
WO2020252974A1 (en) * | 2019-06-17 | 2020-12-24 | 北京影谱科技股份有限公司 | Method and device for tracking multiple target objects in motion state |
CN112244887A (en) * | 2019-07-06 | 2021-01-22 | 西南林业大学 | Carotid artery vessel wall motion trajectory extraction device and method based on B-ultrasonic image |
CN112244887B (en) * | 2019-07-06 | 2023-07-18 | 西南林业大学 | A device and method for extracting motion trajectory of carotid artery wall based on B-ultrasound image |
CN110647658A (en) * | 2019-08-02 | 2020-01-03 | 惠州市德赛西威汽车电子股份有限公司 | Vehicle-mounted image feature automatic identification method and system based on cloud computing |
CN115439509A (en) * | 2022-11-07 | 2022-12-06 | 成都泰盟软件有限公司 | Multi-target tracking method and device, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2016034059A1 (en) | 2016-03-10 |
CN105405154B (en) | 2018-06-15 |
CN104240266A (en) | 2014-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105405154A (en) | Target object tracking method based on color-structure characteristics | |
US12148204B2 (en) | Target detection method and apparatus | |
WO2020108362A1 (en) | Body posture detection method, apparatus and device, and storage medium | |
US9824294B2 (en) | Saliency information acquisition device and saliency information acquisition method | |
Keller et al. | A new benchmark for stereo-based pedestrian detection | |
WO2021238062A1 (en) | Vehicle tracking method and apparatus, and electronic device | |
CN103020986B (en) | A kind of motion target tracking method | |
CN104200495B (en) | A kind of multi-object tracking method in video monitoring | |
CN103310194B (en) | Pedestrian based on crown pixel gradient direction in a video shoulder detection method | |
JP2020520512A (en) | Vehicle appearance feature identification and vehicle search method, device, storage medium, electronic device | |
CN110263712B (en) | A Coarse and Fine Pedestrian Detection Method Based on Region Candidates | |
CN109711416B (en) | Target identification method and device, computer equipment and storage medium | |
US9443137B2 (en) | Apparatus and method for detecting body parts | |
Audebert et al. | How useful is region-based classification of remote sensing images in a deep learning framework? | |
KR20220043847A (en) | Method, apparatus, electronic device and storage medium for estimating object pose | |
CN104992453A (en) | Target tracking method under complicated background based on extreme learning machine | |
CN103761747B (en) | Target tracking method based on weighted distribution field | |
EP3073443A1 (en) | 3D Saliency map | |
CN103106409A (en) | Composite character extraction method aiming at head shoulder detection | |
CN118115927B (en) | Target tracking method, apparatus, computer device, storage medium and program product | |
CN106529472B (en) | Object detection method and device based on large scale high-resolution high spectrum image | |
Vafadar et al. | A vision based system for communicating in virtual reality environments by recognizing human hand gestures | |
Wang et al. | Hand posture recognition from disparity cost map | |
CN109508674B (en) | Airborne Down-View Heterogeneous Image Matching Method Based on Region Division | |
CN109523570A (en) | Beginning parameter transform model method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |