WO2016034059A1 - Target object tracking method based on color-structure features - Google Patents

Target object tracking method based on color-structure features

Info

Publication number
WO2016034059A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
color
feature
frame image
image
Application number
PCT/CN2015/088095
Other languages
English (en)
French (fr)
Inventor
柳寅秋
Original Assignee
成都理想境界科技有限公司
Application filed by 成都理想境界科技有限公司
Publication of WO2016034059A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion

Definitions

  • the invention relates to the field of pattern recognition and computer vision technology, in particular to a target object tracking method based on color-structure features.
  • Augmented Reality (AR) technology seamlessly integrates objects and information in the real world with objects and information in a computer-generated virtual world. Characterized by virtual-real integration and real-time interaction, it provides people with richer information and a more convenient information acquisition experience, and enhances their understanding and perception of the real world.
  • Video-based augmented reality technology has developed rapidly in recent years owing to its low application cost and broad applicability across environments. Accurately tracking objects in the real world is one of the keys to combining the virtual and the real in augmented reality.
  • As a foundation of augmented reality, target tracking based on video images is widely used in security monitoring, autonomous vehicle driving, navigation guidance and control, human-computer interaction, and other fields, and has been one of the key research directions in computer vision in recent years.
  • Video object tracking usually requires tracking and registering a virtual object onto a real object captured in real time. For moving objects, if the same tracking algorithm is repeated for every key frame image of a video sequence, the complexity and computational load of the whole operation become very large.
  • The technical problem to be solved by the present invention is to provide, in view of the high complexity and low accuracy of moving-object tracking in video images in the prior art, a target object tracking method based on color-structure features. By combining color features and structural features, target objects in the video image are identified and matched against a preset model database, so that the target object can be determined and tracked, improving the accuracy, real-time performance and robustness of video-based target tracking.
  • To this end, the present invention provides a target object tracking method based on color-structure features, comprising: performing object detection on an image in a video to acquire at least one object in the current frame image of the video; performing superpixel segmentation on the object according to its pixel color information; determining color features and structural features of the object according to the superpixels in the object that meet preset conditions; matching the color features and structural features against an object to be tracked in a preset object model database, determining the target object to be tracked in the current frame image, and recording the position information of the target object in the current frame image; and, according to the color features and structural features of the target object in the current frame image, tracking the target object in the next frame image of the video and updating the position information of the target object.
  • Performing object detection on the image in the video comprises: reading an image from the video and performing the object detection by foreground recognition or contour recognition.
  • The method comprises: performing the superpixel segmentation on the object, whereby the object yields a set {S1, S2, S3, …, Sl} of l superpixels, where l is a positive integer greater than or equal to 1.
  • Determining the color features and structural features of the object according to the superpixels that meet a preset condition includes: in the superpixel set of the object, the number of pixels contained in superpixel Sk being nk and the number of pixels contained in the object being N, the size ρk of superpixel Sk is ρk = nk / N. The color features and structural features of the object are then computed from the superpixels in the object's superpixel set whose ρ is greater than a preset threshold.
  • The method further comprises converting the pixel color information described in the HSV color space into color features of the pixel expressed as Euclidean coordinates in a cylindrical coordinate system.
  • the structural features of the object include the distance and angle of the superpixels in the object.
  • Determining the target object to be tracked in the current frame image includes: after the comparison matching, computing the matching degree between an object in the current frame image and the object to be tracked; if the matching degree reaches a preset matching threshold, that object in the current frame image is determined to be the target object.
  • The method further comprises: after recording the position information of the target object in the current frame image, estimating the position information of the target object in the next frame image according to its position information in the current frame image.
  • Tracking the target object in the next frame image of the video according to its color features and structural features in the current frame image comprises: extracting a sub-image from the next frame image according to the estimated position information of the target object in the next frame image, and determining the target object in the sub-image according to the color features and structural features of the target object.
  • the method further comprises: before performing the object detection on the image in the video, establishing the object model database, and storing color features and structural features of the object to be tracked.
  • FIG. 1 is a flow chart showing a target object tracking method based on a color-structure feature according to a first embodiment of the present invention.
  • FIG. 2 is a flow chart showing a target object tracking method based on a color-structure feature according to a second embodiment of the present invention.
  • FIG. 1 is a flow chart showing a target object tracking method based on a color-structure feature according to a first embodiment of the present invention.
  • the target object tracking method based on the color-structure feature according to the first embodiment of the present invention mainly includes the following steps.
  • Step S101 Perform object detection on an image in the video, and acquire at least one object in the current frame image of the video.
  • Step S102 Perform superpixel segmentation on the object according to pixel color information of the object.
  • Step S103 Determine a color feature and a structural feature of the object according to the super pixel that meets the preset condition in the object.
  • Step S104 Perform matching of the color feature and the structural feature on the object to be tracked in the preset object model database, determine the target object to be tracked in the current frame image, and record the target object in the current Location information in the frame image.
  • Step S105 Track the target object in the next frame image of the video according to the color features and structural features of the target object in the current frame image, obtain the position information of the target object in the next frame image, and update the position information of the target object.
  • In the next frame image of the video, using the techniques of steps S101 to S103, object detection is performed on the next frame image and at least one object is acquired. The objects in the next frame image are matched against the target object in the previous frame image (the aforementioned current frame image) by comparing color features and structural features; the target object is thereby determined in the next frame image, that is, the target object is tracked, its position information in the next frame image is determined, and the recorded position information of the target object is updated with this new position.
  • After an object is obtained in the earlier of two adjacent frame images of the video, its color features and structural features are matched in the object model database to obtain the target object to be tracked, and the position information of the target object in that frame image is recorded.
  • the positional information of the target object is determined in the next frame image of the adjacent two frames by using the color feature and the structural feature of the target object.
  • the location information of the target object is updated according to the location information of the target object in the next frame image.
  • The pixels of the image are first grouped and clustered according to their color features, and the objects in the image are then segmented into superpixels with high internal color correlation. Based on the superpixels that meet preset conditions, the color features and structural features of the superpixels constituting the objects in the image are computed, which greatly reduces the amount of data processed when analyzing and recognizing the objects while preserving, to the greatest extent, the structural feature information related to the objects in the image.
  • the color feature and the structural feature set of the object are obtained by combining the color feature and the structural feature of each super pixel in the super pixel set constituting the object.
  • The target object to be tracked in the video image is determined by matching the similarity of color features and structural features between objects in the image and the objects to be tracked in the model database; matching against the color features and structural features of the target object in the previous frame image then achieves real-time, accurate tracking of the target object in the video image.
  • The technical solution of the invention effectively overcomes the drawback that object tracking methods based on per-pixel feature descriptions depend on the texture of the target object, and improves the applicability of target object tracking algorithms in video images to targets with uniform texture.
  • The object is subjected to superpixel segmentation according to the pixel color information of the acquired object, yielding a set {S1, S2, S3, …, Sl} containing l superpixels.
  • The object obtained from the current frame image of the video is segmented into superpixels according to its pixel colors, producing a number of regions of different colors; each region is one superpixel.
  • each super pixel contains a plurality of pixels.
  • In the superpixel set of any object, the number of pixels contained in superpixel Sk is nk and the number of pixels contained in the object is N; the size ρk of superpixel Sk is ρk = nk / N, where k is the serial number of the superpixel, 1 ≤ k ≤ l.
  • The color features and structural features of the object are computed from the superpixels in the object's superpixel set whose size ρ is greater than the preset threshold of 0.05.
  • For the superpixels already segmented from an image object, the relative size ρ of each superpixel can be computed from the number of pixels it contains, representing the ratio of the superpixel's size to the size of the image object. Superpixels with ρ greater than 0.05 in the same image object contain more pixels than those with ρ less than 0.05 and can provide more color and structural feature information. Therefore, when computing and analyzing the color and structural features, the superpixels contained in the image object are filtered, and those in the object's superpixel set with ρ greater than 0.05 (other preset thresholds are also feasible) are selected for computing the color features and structural features of the image object.
  • The pixel color information described in the HSV color space is converted into color features of the pixel expressed as Euclidean coordinates in a cylindrical coordinate system, and the color feature of a superpixel is described as (c1, c2, c3), where h is the hue, s is the saturation and v is the brightness (value).
  • The RGB color space values of the object's pixels can be converted into an HSV color space description via the HSV color model; in order to compare and match color features more accurately, the chromaticity coordinates of the HSV description are uniformly converted into Euclidean coordinates in a cylindrical coordinate system, which are used to describe the color features of the superpixels.
  • the structural features of the object include the distance and the angle of the super pixels in the object.
  • The distances and angles of the superpixels in the object's structural features are computed as follows: from the superpixel set of the selected object, take the m superpixels whose size ρ is greater than the preset threshold (0.05), and define the center Ck of superpixel Sk as the coordinate average of all the pixels it contains, where m is a positive integer greater than or equal to 1.
  • The m superpixels in the object are arranged in ascending or descending order of their distance to the object center, giving the superpixel set {S1, S2, S3, …, Sm}.
  • The main direction of the object is the direction from the object's center C0 to the superpixel whose distance to C0 is the smallest (or largest) among all superpixels of the object, i.e., the direction from C0 to the center C1 of S1; the angle θk of superpixel Sk is defined as the angle between the vector from C0 to Ck and the main direction of the object.
  • The feature description of the object comprises its color features and structural features, where the color feature of the object is the sequence of superpixel color features ((c1(1), c2(1), c3(1)), …, (c1(m), c2(m), c3(m)))T, and the structural feature of the object is ((l1, θ1), (l2, θ2), …, (lm, θm))T.
  • By matching the color features and structural features against the objects to be tracked in the object model database, the matching degree between an object in the current frame image and the object to be tracked is computed; if the matching degree reaches a preset matching threshold, that object in the current frame image is determined to be the target object to be tracked, and its position information in the current frame image of the video is recorded.
  • An object to be tracked is selected in the object model database, and the matching degree of feature matching is computed by comparing its color features and structural features with those of the objects in the image. Specifically, the superpixel similarity δ between the object to be tracked and the image object is defined as δ = wc·δc + ws·δs, where δc is the color feature similarity, δs is the structural feature similarity, and wc and ws are the color feature weight and structural feature weight, with wc + ws = 1.
  • The color feature similarity and structural feature similarity between superpixels of the object to be tracked and of the object in the image are each computed using the cosine distance; for example, the color feature similarity is δc = (cq·cr) / (‖cq‖·‖cr‖). In the expressions, the feature parameters of superpixels in the image object carry the superscript q, and those of the object to be tracked carry the superscript r.
  • the similarity ⁇ of the super pixel set of the object to be tracked and the two super pixel features in the super pixel set of the image object can be obtained. If ⁇ >0.7, the super in the object to be tracked can be determined. The pixel matches the superpixel in the image object successfully. If the number of super-pixels in the super-pixel set and the image object set to be tracked reaches a preset ratio range, or exceeds a preset ratio, for example, the number of successfully matched super-pixels reaches 50 of the total number of super-pixels in the image object. % ⁇ 90%, it is determined that the matching degree of the object in the image and the object to be tracked reaches a preset matching threshold, and the matching is successful, the image object is the target object to be tracked, and the target object is recorded. Location information.
  • The similarity calculation uses the cosine distance. It should be noted that, for computing the structural similarity between the object to be tracked and the object in the image, the Mahalanobis distance or other methods that serve the purpose may also be used; details are not repeated here.
  • In the next frame image of the video, steps S101 to S103 are repeated, and the target object to be tracked in the next frame image is determined by matching against the color features and structural features of the target object in the previous frame image of the video (the current frame image in step S101). The specific matching method is the same as the similarity comparison and judgment conditions above and is not repeated here. If the number of superpixels whose similarity reaches the preset similarity threshold reaches the preset proportion of the total number of superpixels in the object's superpixel set, for example if the number of superpixels matched with similarity δ > 0.7 reaches 50% to 90% of the total number of superpixels in the image object, the target object is determined to be successfully matched, and its position information is updated, achieving accurate tracking of the target object.
  • FIG. 2 is a flow chart showing a target object tracking method based on a color-structure feature according to a second embodiment of the present invention.
  • a target object tracking method based on a color-structure feature includes the following steps:
  • Step S201 Perform object detection on the image in the video, and acquire at least one object in the current frame image of the video.
  • Step S202 Perform superpixel segmentation on the object according to the acquired pixel color information of the object.
  • Step S203 Determine a color feature and a structural feature of the object according to the super pixel that meets the preset condition in the object.
  • Step S204 Determine, by matching color features and structural features against the objects to be tracked in the object model database, the target object to be tracked in the current frame image, and record the position information of the target object in the current frame image.
  • Step S205 In the current frame image of the video, according to the position information of the target object, use a motion model to estimate the position information of the target object in the next frame image.
  • Step S206 In the next frame image of the video, using the estimated position information as a reference position, extract a sub-image within a preset range determined from the reference position; then, applying the techniques of steps S202 and S203 to the sub-image, track the target object in the sub-image according to the color features and structural features of the target object in the current frame image, obtain the position information of the target object in the sub-image, and update the position information of the target object.
  • Superpixel segmentation is performed on the sub-image to obtain its color features and structural features, which are compared with those of the target object in the previous frame image (the current frame image) of the video to determine whether the target object to be tracked is present in the sub-image. Specifically, if the matching degree between the sub-image and the color features and structural features of the target object in the previous frame image reaches the preset matching-degree threshold, the target object is determined to be tracked in the sub-image; its position information in the next frame image is determined from the position of the sub-image in the next frame image, and the recorded position information of the target object is updated accordingly.
  • Based on the position information of the target object in the current frame image, combined with the object's motion model, its motion range or motion trend, and the time interval between video frames, the position at which the target object may appear in the next frame is estimated. A sub-image covering 100% to 200% of the object's extent, centered on that position, is extracted from the possible region of the next frame image, and superpixel segmentation, object recognition and comparison are performed only on the sub-image. Whether the sub-image contains the target object is determined from the matching degree between the sub-image and the target object's color features and structural features, thereby determining whether tracking succeeded. In this way the target object is located more quickly and accurately, improving localization efficiency.
  • The motion state information of the target object, including its position, velocity and acceleration, can be accurately estimated and predicted using least-squares filtering, Kalman filtering, extended Kalman filtering or particle filtering. The motion state in the next frame image is predicted in combination with the video frame interval, and a reference search range for the target object in the next frame image is determined. This narrows the search range, reduces the computational complexity of object recognition, and allows the target object to be searched for and matched quickly and efficiently, achieving accurate real-time tracking.
  • Alternatively, the objects to be tracked in the model database may also be compared and matched to re-determine the target object to be tracked; if the match succeeds, the position information of the target object is updated and tracking of the target object is completed. Since the objects to be tracked in the model database have color features and structural features different from those of the target object in the previous frame image of the video, they constitute different comparison samples. Therefore, in the next frame image of the video, a second comparison match may further be performed against the color features and structural features of the objects to be tracked in the object model database. If the matching degree of this second match reaches the preset matching-degree threshold, the target object is determined to be successfully matched and its position information is updated, which can significantly improve the matching accuracy and enhance the credibility and robustness of target tracking.
  • When matching the target object in subsequent frame images of the video, the target object can be determined according to similarity by comparing color features and structural features with the target object in the previous image. The technical solution of the present invention may also compare color features and structural features with the objects to be tracked in the object model database and then determine the target object in the image by similarity matching; or both comparison methods may be used at the same time to improve the accuracy and credibility of the matching.
  • Before performing the object detection on the images in the video and acquiring the at least one object, the object model database may be established in advance, storing the color features and structural features of the objects to be tracked for subsequent matching with the objects in the image to determine the target object to be tracked.
  • The object model database acquires image information of the objects to be tracked online and/or offline, and updates the color features and structural features of the objects to be tracked in the database.
  • The present invention provides a target object tracking method based on color-structure features that determines the target object to be tracked in a video image by feature matching. Even when the background of the target object is complex, the target object is partially occluded, or fast motion causes a large displacement of the target object between two adjacent frame images, the method can predict the motion trend of the target object and achieve fast, accurate tracking, with good reliability and robustness.
  • The steps in the methods provided by the foregoing embodiments of the present application may be executed on a single computing device or distributed over a network of multiple computing devices.
  • They may be implemented in program code executable by a computing device.
  • They may be stored in a storage device and executed by a computing device, or fabricated into individual integrated-circuit modules, or multiple modules or steps may be implemented as a single integrated-circuit module.
  • The invention is not limited to any specific combination of hardware and software.
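The motion-prediction and sub-image extraction described in the bullets above can be sketched as follows. The patent names least-squares, Kalman, extended Kalman and particle filtering as candidate predictors; a minimal constant-velocity estimate is assumed here for illustration, and the search window follows the 100% to 200% extent factor stated in the text. All names are illustrative, not from the source.

```python
def predict_next_position(prev_pos, cur_pos):
    """Constant-velocity estimate: next = current + (current - previous).
    A stand-in for the Kalman/particle filters the patent mentions."""
    return (2 * cur_pos[0] - prev_pos[0], 2 * cur_pos[1] - prev_pos[1])

def subimage_bounds(center, obj_w, obj_h, frame_w, frame_h, scale=1.5):
    """Search window of `scale` times the object's extent (the patent
    suggests 100%-200%, i.e. scale in [1.0, 2.0]), centered on the
    estimated position and clipped to the frame."""
    half_w, half_h = obj_w * scale / 2, obj_h * scale / 2
    x0 = max(0, int(center[0] - half_w))
    y0 = max(0, int(center[1] - half_h))
    x1 = min(frame_w, int(center[0] + half_w))
    y1 = min(frame_h, int(center[1] + half_h))
    return x0, y0, x1, y1
```

Superpixel segmentation and feature matching would then run only inside these bounds, which is what keeps the per-frame cost low.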

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Addressing the defects of high complexity and low accuracy in tracking moving objects in video images in the prior art, the present invention provides a target object tracking method based on color-structure features. The method comprises: performing object detection on an image in a video, acquiring at least one object in the current frame image of the video, performing superpixel segmentation on the object, and determining the color features and structural features of the object; matching the color features and structural features against the objects to be tracked in a preset object model database, determining the target object to be tracked in the current frame image, and recording the position information of the target object in the current frame image; and, according to the color features and structural features of the target object in the current frame image, tracking the target object in the next frame image of the video and updating its position information. The invention effectively enhances the accuracy and robustness of video tracking algorithms for targets with uniform texture.

Description

Target object tracking method based on color-structure features
Cross-reference to related applications
This application claims priority to Chinese patent application CN201410450138.1, entitled "基于颜色-结构特征的目标对象跟踪方法" (Target object tracking method based on color-structure features) and filed on September 4, 2014, the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to the fields of pattern recognition and computer vision technology, and in particular to a target object tracking method based on color-structure features.
Background art
Augmented Reality (AR) technology seamlessly integrates objects and information in the real world with objects and information in a computer-generated virtual world. Characterized by virtual-real integration and real-time interaction, it can provide people with richer information and a more convenient information acquisition experience, and enhance their understanding and perception of the real world.
Video-based augmented reality technology has developed rapidly in recent years owing to its low application cost and broad applicability across environments. Accurately tracking objects in the real world is one of the keys to combining the virtual and the real in augmented reality. As a foundation of augmented reality, target tracking based on video images is currently widely used in security monitoring, autonomous vehicle driving, navigation guidance and control, human-computer interaction, and other fields, and has been one of the key research directions in computer vision in recent years.
In video-based augmented reality, video object tracking usually requires tracking and registering a virtual object onto a real object captured in real time. For moving objects, if the same tracking algorithm is repeated for every key frame image of a video sequence, the complexity and computational load of the whole operation become very large.
Meanwhile, given the complexity of recognizing the features of moving objects and of tracking objects whose appearance changes during motion, effectively guaranteeing recognition accuracy and real-time detection and tracking of moving objects has become one of the technical problems that must be solved for augmented reality to be widely applied.
Summary of the invention
The technical problem to be solved by the present invention is to provide, in view of the high complexity and low accuracy of moving-object tracking in video images in the prior art, a target object tracking method based on color-structure features, which identifies target objects in video images by combining color features and structural features, matches the objects against a preset model database to determine and track the target object, and improves the accuracy, real-time performance and robustness of video-based target tracking systems.
In view of this, the present invention provides a target object tracking method based on color-structure features, comprising: performing object detection on an image in a video to acquire at least one object in the current frame image of the video; performing superpixel segmentation on the object according to its pixel color information; determining the color features and structural features of the object according to the superpixels in the object that meet preset conditions; matching the color features and structural features against an object to be tracked in a preset object model database, determining the target object to be tracked in the current frame image, and recording the position information of the target object in the current frame image; and, according to the color features and structural features of the target object in the current frame image, tracking the target object in the next frame image of the video and updating the position information of the target object.
Preferably, performing object detection on an image in the video comprises: reading an image from the video and performing the object detection by foreground recognition or contour recognition.
Preferably, the method comprises: performing the superpixel segmentation on the object, whereby the object yields a set {S1, S2, S3, …, Sl} of l superpixels, where l is a positive integer greater than or equal to 1.
Preferably, determining the color features and structural features of the object according to the superpixels that meet preset conditions comprises: in the superpixel set of the object, the number of pixels contained in superpixel Sk being nk and the number of pixels in the object being N, the size ρk of superpixel Sk is:
ρk = nk / N  (1)
The color features and structural features of the object are computed from the superpixels in the object's superpixel set whose ρ is greater than a preset threshold.
Preferably, the method further comprises: converting the pixel color information described in the HSV color space into color features of the pixel expressed as Euclidean coordinates in a cylindrical coordinate system.
Preferably, the structural features of the object comprise the distances and angles of the superpixels in the object.
Preferably, determining the target object to be tracked in the current frame image comprises: after the comparison matching, computing the matching degree between an object in the current frame image and the object to be tracked, and, if the matching degree reaches a preset matching threshold, determining that object in the current frame image to be the target object.
Preferably, the method further comprises: after recording the position information of the target object in the current frame image, estimating the position information of the target object in the next frame image according to its position information in the current frame image.
Preferably, tracking the target object in the next frame image of the video according to its color features and structural features in the current frame image comprises: extracting a sub-image from the next frame image according to the estimated position information of the target object in the next frame image, and determining the target object in the sub-image according to the color features and structural features of the target object.
Preferably, the method further comprises: before performing the object detection on the images in the video, establishing the object model database and storing the color features and structural features of the objects to be tracked.
With the above technical solution of the present invention, when tracking a target object in a video image, the objects in the image are segmented into superpixels with high internal color correlation; by combining the superpixels with the color features and structural features of the objects, the feature matching degree between the objects in the video image and the model objects is computed, and the target object to be tracked is determined by feature matching. In the next frame image of the video, the target object is tracked by matching against its color features and structural features in the previous frame image. The technical solution of the present invention effectively overcomes the drawback that object tracking methods based on per-pixel feature descriptions depend on the texture of the target object, and improves the applicability of target object tracking algorithms in video images to targets with uniform texture.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by implementing the technical solution of the present invention. The objects and other advantages of the present invention can be realized and obtained through the structures and/or processes particularly pointed out in the description, the claims and the accompanying drawings.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
FIG. 1 is a schematic flow chart of a target object tracking method based on color-structure features according to a first embodiment of the present invention.
FIG. 2 is a schematic flow chart of a target object tracking method based on color-structure features according to a second embodiment of the present invention.
Detailed description of the embodiments
To make the objects, features and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with one another.
Many specific technical details are set forth in the following description to facilitate a full understanding of the present invention. However, these are only some embodiments of the present invention; the invention may also be implemented in ways other than those described here. Therefore, the scope of protection of the present invention is not limited by the specific embodiments disclosed below.
FIG. 1 is a schematic flow chart of a target object tracking method based on color-structure features according to a first embodiment of the present invention.
As shown in FIG. 1, the target object tracking method based on color-structure features according to the first embodiment of the present invention mainly includes the following steps.
Step S101: Perform object detection on an image in the video to acquire at least one object in the current frame image of the video.
Step S102: Perform superpixel segmentation on the object according to the pixel color information of the object.
Step S103: Determine the color features and structural features of the object according to the superpixels in the object that meet preset conditions.
Step S104: Match the color features and structural features of the object against an object to be tracked in a preset object model database, determine the target object to be tracked in the current frame image, and record the position information of the target object in the current frame image.
Step S105: According to the color features and structural features of the target object in the current frame image, track the target object in the next frame image of the video, obtain the position information of the target object in the next frame image, and update the position information of the target object.
In the next frame image of the video, using the techniques of steps S101 to S103, object detection is performed on the next frame image and at least one object is acquired. The objects in the next frame image are matched against the target object in the previous frame image of the video (i.e., the aforementioned current frame image) by comparing color features and structural features, and the target object is thereby determined in the next frame image; that is, the target object is tracked in the next frame image, its position information in the next frame image is determined, and the recorded position information of the target object is updated with this new position.
After an object is obtained in the earlier of two adjacent frame images of the video, its color features and structural features are matched in the object model database to obtain the target object to be tracked, and the position information of the target object in that frame image is recorded. The color features and structural features of the target object are then used to determine its position information in the later of the two adjacent frame images, and the position information of the target object is updated according to its position in that next frame image.
In this technical solution, in order to determine and track the target object in the video image accurately, the pixels of the image are first grouped and clustered according to their color features, and the objects in the image are then segmented into superpixels with high internal color correlation. Based on the superpixels that meet preset conditions, the color features and structural features of the superpixels constituting the objects in the image are computed, which greatly reduces the amount of data processed when analyzing and recognizing the objects while preserving, to the greatest extent, the structural feature information related to the objects in the image. By combining the color features and structural features of every superpixel in the superpixel set constituting an object, the color feature and structural feature set of the object is obtained. By matching the similarity of color features and structural features between objects in the image and the objects to be tracked in the model database, the target object to be tracked in the video image is determined; matching against the color features and structural features of the target object in the previous frame image then achieves real-time, accurate tracking of the target object in the video image. The technical solution of the present invention effectively overcomes the drawback that object tracking methods based on per-pixel feature descriptions depend on the texture of the target object, and improves the applicability of target object tracking algorithms in video images to targets with uniform texture.
In the above technical solution, preferably, the video image sequence is read and parsed, foreground recognition or contour recognition is performed using the background subtraction method, and one or more main objects in the current frame image of the video are extracted; alternatively, all recognizable objects may be extracted as needed.
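The background-subtraction step above can be illustrated with a minimal grayscale frame-differencing sketch. The patent does not fix an algorithm, so this is only the simplest possible variant (a production system would typically use a learned background model such as OpenCV's background subtractors); the function name and threshold are illustrative assumptions.

```python
def foreground_mask(frame, background, thresh=25):
    """Per-pixel background differencing on 2D grayscale images
    (lists of lists): 1 where |frame - background| > thresh, else 0.
    Connected regions of 1s would then be the candidate objects."""
    return [[1 if abs(f - b) > thresh else 0 for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]
```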
In the above technical solution, preferably, superpixel segmentation is performed on the object according to the acquired pixel color information of the object, whereby the object yields a set {S1, S2, S3, …, Sl} of l superpixels, where l is a positive integer greater than or equal to 1.
In this technical solution, the object obtained from the current frame image of the video is segmented into superpixels according to its pixel colors, producing a number of regions of different colors; each region is one superpixel, and each superpixel contains multiple pixels.
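The result of superpixel segmentation can be represented as a per-pixel label map; grouping pixel coordinates by label yields the set {S1, …, Sl} used in the following computations. The segmentation algorithm itself is not specified in the source (SLIC is a common choice), so only the grouping step is sketched here.

```python
from collections import defaultdict

def superpixel_sets(labels):
    """Group the pixel coordinates of a label map (list of lists) into
    superpixels: returns {label: [(row, col), ...]}."""
    sets = defaultdict(list)
    for r, row in enumerate(labels):
        for c, lab in enumerate(row):
            sets[lab].append((r, c))
    return dict(sets)
```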
In the above technical solution, preferably, in the superpixel set of any object, the number of pixels contained in superpixel Sk being nk and the number of pixels contained in the object being N, the size ρk of superpixel Sk is:
ρk = nk / N  (1)
where k is the serial number of the superpixel, 1 ≤ k ≤ l. The color features and structural features of the object are computed from the superpixels in the object's superpixel set whose size ρ is greater than the preset threshold of 0.05.
In this technical solution, for the superpixels already segmented from an image object, the relative size ρ of each superpixel can be computed from the number of pixels it contains, representing the ratio of the superpixel's size to the size of the image object. Superpixels with ρ greater than 0.05 in the same image object contain more pixels than those with ρ less than 0.05 and can provide more color and structural feature information. Therefore, when computing and analyzing the color and structural features of the superpixels, the superpixels contained in the image object are filtered, and those in the object's superpixel set whose ρ is greater than 0.05 (other preset thresholds are also feasible) are selected for computing the color features and structural features of the image object.
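The size filter ρk = nk / N of Equation (1) can be sketched directly; the 0.05 default follows the embodiment, and the function name is illustrative.

```python
def filter_superpixels(sp_sets, rho_thresh=0.05):
    """Keep only superpixels whose relative size rho_k = n_k / N exceeds
    the threshold (0.05 in the embodiment), where N is the total pixel
    count of the object."""
    total = sum(len(pix) for pix in sp_sets.values())
    return {k: pix for k, pix in sp_sets.items()
            if len(pix) / total > rho_thresh}
```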
In the above technical solution, preferably, before determining the color features of the object, the pixel color information described in the HSV color space is converted into color features of the pixels expressed as Euclidean coordinates in a cylindrical coordinate system; the color feature of a superpixel is then described as (c1, c2, c3), where
(c1, c2, c3) = (s·cos h, s·sin h, v)  (2)
with h denoting the hue, s the saturation and v the brightness (value).
In this technical solution, the RGB color space values of the object's pixels can be converted into an HSV color space description via the HSV color model; in order to compare and match color features more accurately, the chromaticity coordinates of the HSV description are uniformly converted into Euclidean coordinates in a cylindrical coordinate system, which are used to describe the color features of the superpixels.
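The conversion step can be sketched as follows. Equation (2) is reproduced in the source only as an image, so the natural cylindrical-to-Euclidean mapping is assumed here (hue as the angle, saturation as the radius, value as the height); the exact form used by the patent may differ.

```python
import math

def hsv_to_euclidean(h_deg, s, v):
    """Map an HSV triple (hue in degrees, s and v in [0, 1]) to Euclidean
    coordinates in the HSV cylinder: (s*cos h, s*sin h, v).
    NOTE: assumed mapping; the patent's Equation (2) is not legible."""
    h = math.radians(h_deg)
    return (s * math.cos(h), s * math.sin(h), v)
```

This makes the hue circle metrically well behaved: hues of 359 and 1 degree become nearby points instead of far-apart scalars.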
In the above technical solution, preferably, the structural features of the object include the distances and angles of the superpixels in the object.
In this technical solution, the method of computing the superpixel distances and angles that form the object's structural features is specifically as follows: select the m superpixels in the object's superpixel set whose size ρ exceeds the preset threshold (0.05), and define the center Ck of superpixel Sk as the mean of the coordinates of all pixels it contains, i.e.:

Ck = (1/nk) · Σ_{p ∈ Sk} p  Equation (3)

where m is a positive integer greater than or equal to 1.

The object's center C0 is defined as:

C0 = (1/m) · Σ_{k=1}^{m} Ck  Equation (4)

The distance lk of superpixel Sk is defined as the distance from the center Ck of superpixel Sk to the object's center C0, i.e.:

lk = ‖Ck − C0‖  Equation (5)

The m superpixels of the object are arranged by this distance in ascending (or descending) order, yielding the superpixel set {S1, S2, S3, ..., Sm}.
The object's principal direction is the direction from the object's center C0 to the center C1 of the superpixel whose center lies nearest to (or farthest from) the object's center among all of the object's superpixels, i.e. S1; this direction is the vector vec(C0C1).

The angle θk of superpixel Sk is defined as the angle between vec(C0Ck) and the object's principal direction vec(C0C1), i.e.:

θk = arccos( (vec(C0Ck) · vec(C0C1)) / (‖vec(C0Ck)‖ · ‖vec(C0C1)‖) )  Equation (6)
The feature description of the object then comprises its color features and structural features, where the object's color feature is:

((c1^(1), c2^(1), c3^(1)), (c1^(2), c2^(2), c3^(2)), ..., (c1^(m), c2^(m), c3^(m)))^T  Equation (7)

where the superscript indexes the superpixel, and the object's structural feature is:

((l1, θ1), (l2, θ2), ..., (lm, θm))^T  Equation (8)
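The center, distance, and angle computations above can be sketched in Python as follows. This is an illustrative sketch: superpixel centers are assumed to be given as 2-D points, the object center is taken as the mean of those centers, and the nearest center defines the principal direction; the function and variable names are not from the patent.

```python
import math

def structure_features(centers):
    """Distances l_k and angles theta_k of superpixel centers C_k,
    ordered by distance to the object center C0."""
    m = len(centers)
    # Object center C0: mean of the superpixel centers (Equation (4))
    c0 = (sum(x for x, _ in centers) / m, sum(y for _, y in centers) / m)

    def dist(c):
        return math.hypot(c[0] - c0[0], c[1] - c0[1])

    # Order by distance to C0; the nearest defines the principal direction
    ordered = sorted(centers, key=dist)
    main = (ordered[0][0] - c0[0], ordered[0][1] - c0[1])
    main_len = math.hypot(*main)

    feats = []
    for c in ordered:
        v = (c[0] - c0[0], c[1] - c0[1])
        l = math.hypot(*v)  # distance l_k (Equation (5))
        denom = l * main_len
        cos_t = (v[0] * main[0] + v[1] * main[1]) / denom if denom else 1.0
        # Angle theta_k against the principal direction (Equation (6))
        feats.append((l, math.acos(max(-1.0, min(1.0, cos_t)))))
    return feats
```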
In the above technical solution, by matching color features and structural features against the object to be tracked in the object model database, the degree of match between an object in the current frame image and the object to be tracked is computed; if the degree of match reaches a preset matching threshold, the object in the current frame image whose match with the object to be tracked reaches the preset matching threshold is determined to be the target object to be tracked, and the target object's position information in the current frame image of the video is recorded.
In this technical solution, the object to be tracked is selected in the object model database, its color features and structural features are matched against those of each object in the image, and the degree of feature match is computed, specifically as follows.

The superpixel similarity δ between the object to be tracked and the image object is defined as:

δ = wc·δc + ws·δs  Equation (9)

where δc is the color feature similarity, δs is the structural feature similarity, and wc and ws are the color feature weight and structural feature weight respectively, with wc + ws = 1.
The color feature similarity and the structural feature similarity between superpixels of the object to be tracked and of the object in the image are each computed via the cosine distance. Specifically, the color feature similarity δc is computed by:

δc = (c^q · c^r) / (‖c^q‖ · ‖c^r‖)  Equation (10)

and the structural feature similarity is computed by:

δs = (g^q · g^r) / (‖g^q‖ · ‖g^r‖), where g = (l, θ)  Equation (11)

In the above expressions, the feature parameters of superpixels in the image object are denoted by the superscript q, and those of the object to be tracked by the superscript r.
Through the above feature similarity computation, the similarity δ of a feature match between a superpixel from the set of the object to be tracked and a superpixel from the image object's set is obtained; if δ > 0.7, the superpixel of the object to be tracked is determined to match the superpixel of the image object successfully. If the number of successfully matched superpixels between the superpixel set of the object to be tracked and that of the image object reaches a preset proportion range, or exceeds a preset proportion, for example 50% to 90% of the total number of superpixels in the image object, then the degree of match between that object in the image and the object to be tracked is deemed to reach the preset matching threshold; the match succeeds, the image object is the target object to be tracked, and the target object's position information is recorded.
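Equations (9) to (11) can be sketched as follows. The equal default weights wc = ws = 0.5 are an assumption for illustration; the patent only requires wc + ws = 1.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors (Equations (10)/(11))."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def superpixel_similarity(color_q, color_r, struct_q, struct_r, wc=0.5, ws=0.5):
    """Weighted similarity delta = wc*delta_c + ws*delta_s (Equation (9))."""
    return (wc * cosine_similarity(color_q, color_r)
            + ws * cosine_similarity(struct_q, struct_r))
```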
In this technical solution, the cosine distance is used for similarity computation. It should be noted that, for computing the structural similarity between the object to be tracked and the object in the image, the Mahalanobis distance or other computation methods capable of achieving this purpose may also be used, which will not be elaborated here.
In this technical solution, in the next frame image of the video, steps S101 to S103 are repeated, and the target object to be tracked in the next frame image is determined by matching against the color features and structural features of the target object in the previous frame image of the video (the current frame image in step S101). The specific matching method is the same as the similarity comparison and similarity criteria described above and will not be repeated here. If the number of superpixels whose similarity reaches the preset similarity threshold reaches a preset proportion of the total number of superpixels in the object's superpixel set, for example if the number of superpixels with match similarity δ > 0.7 reaches 50% to 90% of the total number of superpixels in the image object, the target object is determined to be matched successfully, its position information is updated, and accurate tracking of the target object is achieved.
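The match-proportion criterion above can be sketched as a small helper; the 0.7 similarity threshold comes from the text, while the 50% default ratio is one point of the 50%-90% range mentioned.

```python
def object_matched(similarities, sim_threshold=0.7, min_ratio=0.5):
    """True when enough superpixels match: delta > sim_threshold for at
    least min_ratio of the image object's superpixels."""
    matched = sum(1 for d in similarities if d > sim_threshold)
    return matched / len(similarities) >= min_ratio
```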
Figure 2 shows a schematic flowchart of a target object tracking method based on color-structure features according to a second embodiment of the present invention.

As shown in Figure 2, the target object tracking method based on color-structure features according to the second embodiment of the present invention comprises the following steps:
Step S201: perform object detection on the images in the video to acquire at least one object in the current frame image of the video.

Step S202: segment the acquired object into superpixels according to its pixel color information.

Step S203: determine the object's color features and structural features according to the superpixels in the object that satisfy a preset condition.

Step S204: determine the target object to be tracked in the current frame image by matching color features and structural features against the object to be tracked in the object model database, and record the target object's position information in the current frame image.

Step S205: in the current frame image of the video, based on the target object's position information, use a motion model to estimate the target object's position information in the next frame image.

Step S206: in the next frame image of the video, taking the estimated position information as a reference position, extract a sub-image within a preset range determined from the reference position; apply the techniques of steps S202 to S203 to the sub-image, track the target object in the sub-image according to its color features and structural features in the current frame image, obtain the target object's position information in the sub-image, and update the target object's position information.
Specifically, the sub-image is segmented into superpixels and its color features and structural features are obtained; by matching the sub-image's color features and structural features against those of the target object in the previous frame image (the current frame image) of the video, it is determined whether the target object to be tracked exists in the sub-image. Specifically, if the match in color features and structural features between the sub-image and the target object in the previous frame image of the video reaches a preset matching threshold, the target object is determined to be tracked in the sub-image; from the sub-image's position information in the next frame image, the target object's position information in the next frame image can then be determined, and that position information is used to update the target object's position information.
In this technical solution, based on the target object's position information in the current frame image of the video, combined with an object motion model, and according to the object's motion trajectory or motion-trend characteristics and the time interval between video frames, the position at which the target object is likely to appear in the next frame image of the video is estimated. Within a preset region around that likely position in the next frame image, for example a region of 100% to 200% of the object's extent centered on that position, a sub-image is extracted, and superpixel segmentation and object recognition and comparison are performed only on the sub-image; the degree of match between the sub-image and the target object's color features and structural features determines whether the sub-image contains the target object, and thus whether tracking succeeds. Compared with recognizing and comparing the entire content of a whole frame, analyzing only part of the image reduces the amount of data to be recognized, effectively shortens the time needed to recognize the target object, locates the target object more quickly and accurately, and improves localization efficiency.
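A search window around the estimated position can be sketched as follows; the 1.5x default scale is one point inside the 100%-200% range mentioned above, and the function signature is an illustrative assumption.

```python
def search_window(center, obj_w, obj_h, frame_w, frame_h, scale=1.5):
    """Axis-aligned search region of `scale` times the object's bounding
    box, centered on the estimated position and clamped to the frame."""
    half_w, half_h = obj_w * scale / 2.0, obj_h * scale / 2.0
    x0 = max(0, int(center[0] - half_w))
    y0 = max(0, int(center[1] - half_h))
    x1 = min(frame_w, int(center[0] + half_w))
    y1 = min(frame_h, int(center[1] + half_h))
    return x0, y0, x1, y1
```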
In this technical solution, least-squares filtering, Kalman filtering, extended Kalman filtering, or particle filtering can be used to accurately estimate and predict the target object's motion state information, including its position, velocity, and acceleration. Based on the target object's motion state in the previous frame image of the video, combined with the inter-frame interval, its motion state in the next frame image is predicted, and a reference range for searching for the target object in the next frame image is determined. This narrows the search range and reduces the computational load and complexity of object recognition, so that the target object can be searched for and matched quickly and effectively, achieving real-time, accurate tracking.
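As the simplest instance of such motion prediction, a constant-velocity model (a stand-in for the Kalman or particle filters named above) can be sketched as:

```python
def predict_position(prev_pos, curr_pos, dt=1.0):
    """Constant-velocity prediction of the next-frame position from the
    positions in the two most recent frames."""
    vx = (curr_pos[0] - prev_pos[0]) / dt
    vy = (curr_pos[1] - prev_pos[1]) / dt
    return (curr_pos[0] + vx * dt, curr_pos[1] + vy * dt)
```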
In any of the above embodiments, preferably, if matching against the color features and structural features of the target object in the previous frame image of the video fails in the next frame image, the target object to be tracked may also be re-determined by matching against the objects to be tracked in the model database. If the match succeeds, the target object's position information is updated and tracking of the target object is completed.
In this technical solution, because the object to be tracked in the model database and the target object in the previous frame image of the video have color features and structural features that are not entirely identical, they constitute different comparison samples. Therefore, in the next frame image of the video, if matching against the color features and structural features of the target object in the previous frame image fails and the target is lost, a second round of matching can be performed against the color features and structural features of the object to be tracked in the object model database. If this second match reaches the preset matching threshold, the target object is determined to be matched successfully and its position information is updated, which significantly improves matching accuracy and enhances the credibility and robustness of target tracking.
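The two-stage decision above (frame-to-frame match first, model-database fallback second) can be sketched as a small state helper; the function name, string outcomes, and the 0.7 default threshold are illustrative assumptions.

```python
def match_with_fallback(sim_prev_frame, sim_database, threshold=0.7):
    """Prefer the frame-to-frame match; on failure, fall back to a second
    match against the object model database."""
    if sim_prev_frame > threshold:
        return "tracked"
    if sim_database > threshold:
        return "reacquired"
    return "lost"
```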
Further, after the target object to be tracked is first determined by matching against the object to be tracked in the object model database, the target object in subsequent frame images during tracking can be determined by comparing color features and structural features against the target object in preceding images and matching by similarity. The technical solution of the present invention may also determine the target object in an image by comparing color features and structural features against the object to be tracked in the object model database and then matching by similarity; or both comparison methods may be used simultaneously to improve matching accuracy and credibility.
In any of the above embodiments, preferably, before the step of performing object detection on the images in the video to acquire at least one object, an object model database may be built in advance, storing the color features and structural features of objects to be tracked, for subsequent matching against objects in the images to determine the target object to be tracked.
In any of the above embodiments, preferably, the object model database acquires image information of the objects to be tracked online and/or offline, and the color features and structural features of the objects to be tracked in the object model database are updated accordingly.
In summary, the present invention provides a target object tracking method based on color-structure features, which determines the target object to be tracked in video images through feature matching. When the target object's background is complex, the target object is occluded, or fast motion causes the target object's positions in two adjacent video frames to lie far apart, the method can predict the target object's motion trend and achieve fast, accurate localization and tracking of the target object, with good reliability and robustness.
To reiterate, all features disclosed in this specification, and all steps of any method or process disclosed, may be combined in any manner, except for mutually exclusive features and/or steps.

Any feature disclosed in this specification (including any appended claims, abstract, and drawings) may, unless specifically stated otherwise, be replaced by alternative features that are equivalent or serve a similar purpose. That is, unless specifically stated otherwise, each feature is merely one example of a series of equivalent or similar features.

Those skilled in the art will appreciate that the steps of the methods provided in the above embodiments of the present application may be executed centrally on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, or made into individual integrated circuit modules, or multiple of their modules or steps may be made into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.

Although the embodiments of the present invention are disclosed as above, the content described is merely an embodiment adopted to facilitate understanding of the technical solution of the present invention and is not intended to limit the present invention. Any person skilled in the art to which the present invention pertains may make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed by the present invention, but the scope of patent protection of the present invention shall still be subject to the scope defined by the appended claims.

Claims (10)

  1. A target object tracking method based on color-structure features, characterized in that the method comprises:
    performing object detection on images in a video to acquire at least one object in a current frame image of the video;
    performing superpixel segmentation on the object according to the object's pixel color information;
    determining color features and structural features of the object according to superpixels in the object that satisfy a preset condition;
    determining a target object to be tracked in the current frame image by matching color features and structural features against an object to be tracked in a preset object model database, and recording position information of the target object in the current frame image;
    tracking the target object in a next frame image of the video according to the color features and structural features of the target object in the current frame image, and updating the position information of the target object.
  2. The target object tracking method based on color-structure features according to claim 1, characterized in that performing object detection on the images in the video comprises:
    reading the images in the video, and performing the object detection on the images in the video through foreground recognition or contour recognition.
  3. The target object tracking method based on color-structure features according to claim 1 or 2, characterized in that the method comprises:
    performing the superpixel segmentation on the object, the object yielding a set {S1, S2, S3, ..., Sl} of l superpixels, where l is a positive integer greater than or equal to 1.
  4. The target object tracking method based on color-structure features according to claim 3, characterized in that determining the color features and structural features of the object according to the superpixels in the object that satisfy the preset condition comprises:
    in the superpixel set of the object, the number of pixels contained in superpixel Sk being nk, the size ρk of superpixel Sk being:
    ρk = nk / N
    where N is the number of pixels contained in the object;
    computing the color features and structural features of the object according to the superpixels in the superpixel set of the object whose ρ exceeds a preset threshold.
  5. The target object tracking method based on color-structure features according to claim 4, characterized in that the method further comprises:
    converting the pixel color information described on the basis of the HSV color space into color features of the pixels expressed by Euclidean space coordinates in a cylindrical coordinate system.
  6. The target object tracking method based on color-structure features according to claim 4, characterized in that the structural features of the object include distances and angles of the superpixels in the object.
  7. The target object tracking method based on color-structure features according to claim 4, characterized in that determining the target object to be tracked in the current frame image comprises:
    after performing the matching, computing a degree of match between an object in the current frame image and the object to be tracked, and, if the degree of match reaches a preset matching threshold, determining that object in the current frame image to be the target object.
  8. The target object tracking method based on color-structure features according to claim 1, characterized in that the method further comprises:
    after recording the position information of the target object in the current frame image, estimating position information of the target object in the next frame image according to the position information of the target object in the current frame image.
  9. The target object tracking method based on color-structure features according to claim 8, characterized in that tracking the target object in the next frame image of the video according to the color features and structural features of the target object in the current frame image comprises:
    extracting a sub-image in the next frame image according to the estimated position information of the target object in the next frame image, and determining the target object in the sub-image according to the color features and structural features of the target object in the sub-image.
  10. The target object tracking method based on color-structure features according to claim 1, characterized in that the method further comprises:
    before performing the object detection on the images in the video, building the object model database and storing the color features and structural features of the object to be tracked.
PCT/CN2015/088095 2014-09-04 2015-08-26 基于颜色-结构特征的目标对象跟踪方法 WO2016034059A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410450138.1 2014-09-04
CN201410450138.1A CN104240266A (zh) 2014-09-04 2014-09-04 基于颜色-结构特征的目标对象跟踪方法

Publications (1)

Publication Number Publication Date
WO2016034059A1 true WO2016034059A1 (zh) 2016-03-10

Family

ID=52228272

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/088095 WO2016034059A1 (zh) 2014-09-04 2015-08-26 基于颜色-结构特征的目标对象跟踪方法

Country Status (2)

Country Link
CN (2) CN104240266A (zh)
WO (1) WO2016034059A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930815A (zh) * 2016-05-04 2016-09-07 中国农业大学 一种水下生物检测方法和系统
CN106780582A (zh) * 2016-12-16 2017-05-31 西安电子科技大学 基于纹理特征和颜色特征融合的图像显著性检测方法
CN107301651A (zh) * 2016-04-13 2017-10-27 索尼公司 对象跟踪装置和方法
CN112101207A (zh) * 2020-09-15 2020-12-18 精英数智科技股份有限公司 一种目标跟踪方法、装置、电子设备及可读存储介质
CN112244887A (zh) * 2019-07-06 2021-01-22 西南林业大学 一种基于b超图像颈动脉血管壁运动轨迹提取装置与方法
CN113361388A (zh) * 2021-06-03 2021-09-07 北京百度网讯科技有限公司 图像数据修正方法、装置、电子设备及自动驾驶车辆
CN115225815A (zh) * 2022-06-20 2022-10-21 南方科技大学 目标智能追踪拍摄方法、服务器、拍摄系统、设备及介质

Families Citing this family (22)

Publication number Priority date Publication date Assignee Title
CN104240266A (zh) * 2014-09-04 2014-12-24 成都理想境界科技有限公司 基于颜色-结构特征的目标对象跟踪方法
CN106156248B (zh) * 2015-04-28 2020-03-03 北京智谷睿拓技术服务有限公司 信息处理方法和设备
CN106373143A (zh) * 2015-07-22 2017-02-01 中兴通讯股份有限公司 一种自适应跨摄像机多目标跟踪方法及系统
CN109416535B (zh) 2016-05-25 2022-11-11 深圳市大疆创新科技有限公司 基于图像识别的飞行器导航技术
CN108268823B (zh) * 2016-12-30 2021-07-20 纳恩博(北京)科技有限公司 目标再识别方法和装置
CN106897735A (zh) * 2017-01-19 2017-06-27 博康智能信息技术有限公司上海分公司 一种快速移动目标的跟踪方法及装置
CN106909935B (zh) * 2017-01-19 2021-02-05 博康智能信息技术有限公司上海分公司 一种目标跟踪方法及装置
CN106909934B (zh) * 2017-01-19 2021-02-05 博康智能信息技术有限公司上海分公司 一种基于自适应搜索的目标跟踪方法及装置
CN109658326B (zh) * 2017-10-11 2024-01-16 深圳市中兴微电子技术有限公司 一种图像显示方法及装置、计算机可读存储介质
CN108090436B (zh) * 2017-12-13 2021-11-19 深圳市航盛电子股份有限公司 一种运动物体的检测方法、系统及介质
CN108229554A (zh) * 2017-12-29 2018-06-29 北京中船信息科技有限公司 一体化触控指挥桌以及指挥方法
CN108492314B (zh) * 2018-01-24 2020-05-19 浙江科技学院 基于颜色特性和结构特征的车辆跟踪方法
CN110580707A (zh) * 2018-06-08 2019-12-17 杭州海康威视数字技术股份有限公司 一种对象跟踪方法及系统
CN111383246B (zh) * 2018-12-29 2023-11-07 杭州海康威视数字技术股份有限公司 条幅检测方法、装置及设备
US10928898B2 (en) 2019-01-03 2021-02-23 International Business Machines Corporation Augmented reality safety
CN109918997B (zh) * 2019-01-22 2023-04-07 深圳职业技术学院 一种基于多示例学习的行人目标跟踪方法
CN110163076B (zh) * 2019-03-05 2024-05-24 腾讯科技(深圳)有限公司 一种图像数据处理方法和相关装置
CN110264493B (zh) * 2019-06-17 2021-06-18 北京影谱科技股份有限公司 一种针对运动状态下的多目标对象追踪方法和装置
CN110503696B (zh) * 2019-07-09 2021-09-21 浙江浩腾电子科技股份有限公司 一种基于超像素采样的车脸颜色特征检测方法
CN110647658A (zh) * 2019-08-02 2020-01-03 惠州市德赛西威汽车电子股份有限公司 一种基于云计算的车载图像特征自动识别方法与系统
CN113240712A (zh) * 2021-05-11 2021-08-10 西北工业大学 一种基于视觉的水下集群邻居跟踪测量方法
CN115439509B (zh) * 2022-11-07 2023-02-03 成都泰盟软件有限公司 一种多目标跟踪方法、装置、计算机设备及存储介质

Citations (5)

Publication number Priority date Publication date Assignee Title
US20080232643A1 (en) * 2007-03-23 2008-09-25 Technion Research & Development Foundation Ltd. Bitmap tracker for visual tracking under very general conditions
CN101325690A (zh) * 2007-06-12 2008-12-17 上海正电科技发展有限公司 监控视频流中人流分析与人群聚集过程的检测方法及系统
CN102930539A (zh) * 2012-10-25 2013-02-13 江苏物联网研究发展中心 基于动态图匹配的目标跟踪方法
CN103037140A (zh) * 2012-12-12 2013-04-10 杭州国策商图科技有限公司 一种基于块匹配的鲁棒性极强的目标跟踪算法
CN104240266A (zh) * 2014-09-04 2014-12-24 成都理想境界科技有限公司 基于颜色-结构特征的目标对象跟踪方法

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US9520040B2 (en) * 2008-11-21 2016-12-13 Raytheon Company System and method for real-time 3-D object tracking and alerting via networked sensors
KR20130091441A (ko) * 2012-02-08 2013-08-19 삼성전자주식회사 물체 추적 장치 및 그 제어 방법
CN103092930B (zh) * 2012-12-30 2017-02-08 贺江涛 视频摘要生成方法和视频摘要生成装置
CN103281477B (zh) * 2013-05-17 2016-05-11 天津大学 基于多级别特征数据关联的多目标视觉跟踪方法
CN103426183B (zh) * 2013-07-10 2016-12-28 上海理工大学 运动物体跟踪方法以及装置



Also Published As

Publication number Publication date
CN105405154B (zh) 2018-06-15
CN104240266A (zh) 2014-12-24
CN105405154A (zh) 2016-03-16


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15837763; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15837763; Country of ref document: EP; Kind code of ref document: A1)
Kind code of ref document: A1