WO2020224460A1 - Target tracking method and portable terminal - Google Patents

Target tracking method and portable terminal

Info

Publication number
WO2020224460A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
target
frame
confidence
video frame
Prior art date
Application number
PCT/CN2020/086972
Other languages
English (en)
French (fr)
Inventor
姜文杰
Original Assignee
影石创新科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 影石创新科技股份有限公司
Publication of WO2020224460A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/47 Detecting features for summarising video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/49 Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Definitions

  • The invention belongs to the field of video, and particularly relates to a target tracking method and a portable terminal.
  • Target tracking is an important research direction in computer vision and has been widely applied in fields such as video surveillance and human-computer interaction. Target tracking generates the motion trajectory of a target by locating the target in each frame of a video; it is a method for continuously inferring the state of a target in a video sequence.
  • A panoramic video turns static panoramic pictures into dynamic panoramic video images, and the user can freely watch the dynamic video within the shooting angle range of the panoramic camera. When watching a panoramic video, a flat display can only show one viewing angle of the panoramic video at any given moment, so when the user wants to continuously watch a specific target object, the user may have to keep rotating the display's viewing angle because the target disappears from the current view; the operation is therefore cumbersome and also degrades the viewing experience.
  • The present invention provides a target tracking method, a computer-readable storage medium, and a portable terminal, which aim to detect and track the target object in a video frame by frame through a deep-learning target detection algorithm and correlation filters, so that when a panoramic video is played, the display is always centered on the specified target object for tracked playback.
  • In a first aspect, the present invention provides a target tracking method. The method includes: acquiring the target object to be tracked in a video frame using a deep-learning target detection algorithm; extracting features from the target object and training them to obtain a correlation filter; using the correlation filter obtained from the current video frame as a filter template and detecting the confidence of subsequent video frames frame by frame; and determining the position and status of target object tracking in subsequent video frames from a preset range of confidence values.
  • In a second aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the target tracking method described in the first aspect.
  • In a third aspect, the present invention provides a portable terminal, including: one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, and the processors, when executing the computer programs, implement the steps of the target tracking method described in the first aspect.
  • In the present invention, the specified target object in a panoramic video is detected and tracked frame by frame through a deep-learning target detection algorithm and correlation filters, so that when the panoramic video is played, the user selects the target object to be tracked and the video playback window automatically detects and follows the object's movement, keeping the object displayed at the center of the screen and improving the user experience.
  • FIG. 1 is a flowchart of a target tracking method provided by Embodiment 1 of the present invention.
  • FIG. 2 is a schematic diagram of the detection and tracking decision provided by Embodiment 1 of the present invention.
  • FIG. 3 is a schematic structural diagram of a portable terminal provided in Embodiment 3 of the present invention.
  • A target tracking method provided by Embodiment 1 of the present invention includes the following steps:
  • The target object is the object to be tracked selected by the user in the video frame, including but not limited to people, animals, and vehicles. Objects in the video frame are detected with a target detection algorithm, including but not limited to deep-learning detectors such as the SSD algorithm (Single Shot MultiBox Detector), the RCNN algorithm (Region-based Convolutional Neural Networks), and the YOLO family of algorithms (You Only Look Once).
  • The target object selected by the user is marked with a rectangular box; the length and width of the rectangular box are the adaptive length and width from the target object detection.
  • The video frame is a frame of a panoramic video; the panoramic video may be a movie resource downloaded from the Internet, or a video shot by the user with a panoramic camera.
  • Features are extracted from the target object region marked by the rectangular box; the features include but are not limited to color histogram features and HOG features (Histogram of Oriented Gradients).
  • The features are trained to obtain a correlation filter. Specifically, let the current video frame be the i-th frame, i > 0; define $y_i$ as the desired output, $x_i$ as the features extracted from the target object, and $h_i$ as the correlation filter. The training formula (1) is $y_i = \mathcal{F}^{-1}(\hat{x}_i \odot \hat{h}_i^{*})$, where $\mathcal{F}^{-1}$ denotes the inverse Fourier transform, $\hat{x}_i$ the Fourier transform of $x_i$, $\hat{h}_i^{*}$ the complex conjugate of the Fourier transform of $h_i$, and $\odot$ element-wise multiplication. From formula (1), the correlation filter of the i-th frame satisfies $\hat{h}_i^{*} = \hat{y}_i / \hat{x}_i$ (element-wise division).
  • Correlation filtering originates from the field of signal processing: correlation expresses the degree of similarity between two signals, and correlation operations are usually written as convolutions. The basic idea of a correlation-filter-based tracking method is to find a filter template such that, when the image of the next frame is convolved with the template, the region with the highest response confidence is the region where the target is predicted to be.
  • The correlation filter computed from the i-th video frame is $h_i$; by the convolution theorem, convolution in the time domain is equivalent to element-wise multiplication in the frequency domain, so the confidence of the (i+1)-th frame can be computed as $C = \mathcal{F}^{-1}(\hat{x}_{i+1} \odot \hat{h}_i^{*})$.
  • The preset range of confidence values is [4.5, 7].
  • When the detection confidence is above this range, the target object is tracked so that it is always tracked and displayed at the center of the display screen; when the detection confidence falls within the preset confidence interval, the method returns to step S101 to recompute the filter template of the target object; when the detection confidence is below the range, tracking is terminated.
  • Computing the confidence of subsequent video frames with the filter template and determining the tracking position and status of the target object specifically includes the following steps (a minimal end-to-end sketch follows at the end of this section):
  • When the detection confidence C ≥ 7.0, the position predicted by the current filter template is highly accurate; the region with the largest confidence value is obtained and the viewing angle of the panoramic video display is updated, so that the region containing the target object is always tracked and displayed at the center of the screen.
  • Step S1042: when the detected confidence value satisfies 7.0 > C ≥ 4.5, return to step S101 to re-detect and re-track the target object.
  • A detected confidence value with 7.0 > C ≥ 4.5 means the accuracy of the position predicted by the current filter template is low; the method must return to step S101, detect the target object of the current video frame again with the deep-learning target detection method, determine the object to be tracked through feature correlation, then initialize the correlation filter, computed as the filter template of the current video frame, and compute the target object of subsequent video frames frame by frame for tracking.
  • When the computed confidence C < 4.5, no target object has been detected and target object tracking can be ended.
  • In the present invention, the specified target object in a panoramic video is detected and tracked frame by frame through a deep-learning target detection algorithm and correlation filters, so that when the panoramic video is played, the user selects the target object to be tracked and the video playback window automatically detects and follows the object's movement, keeping the object displayed at the center of the screen and improving the user experience.
  • The second embodiment of the present invention provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, it implements the steps of the target tracking method provided in the first embodiment of the present invention.
  • The computer-readable storage medium may be a non-transitory computer-readable storage medium.
  • FIG. 3 shows a structural block diagram of the portable terminal provided in the third embodiment of the present invention.
  • A portable terminal 100 includes: one or more processors 101, a memory 102, and one or more computer programs.
  • The processor 101 and the memory 102 are connected by a bus.
  • The one or more computer programs are stored in the memory 102 and configured to be executed by the one or more processors 101, and the processor 101, when executing the computer programs, implements the steps of the target tracking method provided in the first embodiment of the present invention.
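Tying the bullets above together: the following is a minimal end-to-end sketch of the described pipeline, assuming same-sized 2-D feature maps extracted from each frame and a Gaussian desired output. The function names are illustrative, not from the patent, and a practical correlation-filter tracker (e.g. MOSSE-style) would add regularization and online template updates that the text above does not detail.

```python
import numpy as np

def train_filter(x: np.ndarray, y: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Formula (1) solved for the filter: H* = Y / X element-wise in the frequency domain."""
    return np.fft.fft2(y) / (np.fft.fft2(x) + eps)  # eps guards against division by zero

def confidence_map(x_next: np.ndarray, H_conj: np.ndarray) -> np.ndarray:
    """Formula (2): response map C = F^-1(X ⊙ H*) for the next frame's features."""
    return np.real(np.fft.ifft2(np.fft.fft2(x_next) * H_conj))

def track(feature_maps, desired_output):
    """Train on the first feature map (S102), then score each later frame (S103).

    Yields the response peak and its confidence; the caller compares the
    confidence against the preset range [4.5, 7] as described for step S104.
    """
    H = train_filter(feature_maps[0], desired_output)
    for x in feature_maps[1:]:
        response = confidence_map(x, H)
        peak = np.unravel_index(np.argmax(response), response.shape)
        yield peak, response[peak]
```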

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The present invention provides a target tracking method and a portable terminal. The method includes: acquiring the target object to be tracked in a video frame using a deep-learning target detection algorithm; extracting features from the target object and training them to obtain a correlation filter; using the correlation filter obtained from the current video frame as a filter template, and detecting the confidence of subsequent video frames frame by frame; and determining the position and status of target object tracking in subsequent video frames from a preset range of confidence values. The technical solution of the present invention detects and tracks the specified target object in a panoramic video frame by frame through a deep-learning target detection algorithm and correlation filters, achieving the effect that the target object is always displayed at the center of the display screen while the panoramic video is played, which improves the user experience.

Description

Target tracking method and portable terminal
Technical Field
The present invention belongs to the field of video, and particularly relates to a target tracking method and a portable terminal.
Background Art
Target tracking is an important research direction in computer vision and has been widely applied in fields such as video surveillance and human-computer interaction. Target tracking generates the motion trajectory of a target by locating the target in each frame of a video; it is a method for continuously inferring the state of a target in a video sequence.
A panoramic video turns static panoramic pictures into dynamic panoramic video images, and the user can freely watch the dynamic video within the shooting angle range of the panoramic camera. When watching a panoramic video, a flat display can only show one viewing angle of the panoramic video at any given moment; when the user wants to continuously watch a specific target object, the user may have to keep rotating the display's viewing angle because the target disappears from the current view, so the operation is cumbersome and the viewing experience suffers.
Technical Problem
The present invention proposes a target tracking method, a computer-readable storage medium, and a portable terminal, aiming to detect and track the target object in a video frame by frame through a deep-learning target detection algorithm and correlation filters, so that when a panoramic video is played, the display is always centered on the specified target object for tracked playback.
Technical Solution
In a first aspect, the present invention provides a target tracking method, the method including:
acquiring the target object to be tracked in a video frame using a deep-learning target detection algorithm;
extracting features from the target object and training them to obtain a correlation filter;
using the correlation filter obtained from the current video frame as a filter template, and detecting the confidence of subsequent video frames frame by frame;
determining the position and status of target object tracking in subsequent video frames from a preset range of confidence values.
In a second aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the target tracking method described in the first aspect.
In a third aspect, the present invention provides a portable terminal, including:
one or more processors;
a memory; and
one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, and the processors, when executing the computer programs, implement the steps of the target tracking method described in the first aspect.
Beneficial Effects
In the present invention, the specified target object in a panoramic video is detected and tracked frame by frame through a deep-learning target detection algorithm and correlation filters, so that when the panoramic video is played, the user selects the target object to be tracked and the video playback window automatically detects and follows the object's movement, keeping the object displayed at the center of the screen at all times and improving the user experience.
Brief Description of the Drawings
FIG. 1 is a flowchart of a target tracking method provided by Embodiment 1 of the present invention.
FIG. 2 is a schematic diagram of the detection and tracking decision provided by Embodiment 1 of the present invention.
FIG. 3 is a schematic structural diagram of the portable terminal provided by Embodiment 3 of the present invention.
Embodiments of the Invention
In order to make the objectives, technical solutions, and beneficial effects of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.
To illustrate the technical solutions of the present invention, specific embodiments are described below.
Embodiment 1:
Referring to FIG. 1, a target tracking method provided by Embodiment 1 of the present invention includes the following steps:
S101. Acquire the target object to be tracked in a video frame using a deep-learning target detection algorithm.
The target object is the object to be tracked selected by the user in the video frame, including but not limited to people, animals, and vehicles. Objects in the video frame are detected with a target detection algorithm, including but not limited to deep-learning detectors such as the SSD algorithm (Single Shot MultiBox Detector), the RCNN algorithm (Region-based Convolutional Neural Networks), and the YOLO family of algorithms (You Only Look Once).
The target object selected by the user is marked with a rectangular box; the length and width of the rectangular box are the adaptive length and width from the target object detection.
It should be noted that the video frame is a frame of a panoramic video; the panoramic video may be a movie resource downloaded from the Internet, or a video shot by the user with a panoramic camera.
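The patent does not prescribe a particular detector or selection interface. The following is a minimal sketch of step S101 under stated assumptions: detect_objects is a hypothetical stand-in for an SSD/RCNN/YOLO-style detector returning (x, y, w, h, label) boxes, and the user's selection is a click position inside the frame.

```python
import numpy as np

def select_target(frame: np.ndarray, click_xy, detect_objects):
    """Return the user-selected target box as (x, y, w, h), or None."""
    cx, cy = click_xy
    for (x, y, w, h, label) in detect_objects(frame):  # hypothetical detector hook
        if x <= cx <= x + w and y <= cy <= y + h:
            return (x, y, w, h)  # box width/height adapt to the detected object
    return None  # no detected object under the user's selection
```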
S102. Extract features from the target object and train them to obtain a correlation filter.
Features are extracted from the target object region marked by the rectangular box; the features include but are not limited to color histogram features and HOG features (Histogram of Oriented Gradients).
The features are trained to obtain the correlation filter. The training is specifically as follows:
Let the current video frame be the i-th video frame, i > 0; define $y_i$ as the desired output, $x_i$ as the features extracted from the target object, and $h_i$ as the correlation filter. The training formula (1) is:
$$y_i = \mathcal{F}^{-1}\left(\hat{x}_i \odot \hat{h}_i^{*}\right) \qquad (1)$$
In formula (1), $\mathcal{F}^{-1}$ denotes the inverse Fourier transform, $\hat{x}_i$ denotes the Fourier transform of $x_i$, $\hat{h}_i^{*}$ denotes the complex conjugate of the Fourier transform of $h_i$, and $\odot$ denotes element-wise multiplication.
From formula (1), the correlation filter $h_i$ obtained after feature training on the i-th video frame satisfies:
$$\hat{h}_i^{*} = \hat{y}_i \oslash \hat{x}_i$$
where $\oslash$ denotes element-wise division.
It should also be noted that correlation filtering originates from the field of signal processing: correlation expresses the degree of similarity between two signals, and correlation operations are usually written as convolutions. The basic idea of correlation-filter-based tracking is to find a filter template such that, when the image of the next frame is convolved with the template, the region with the highest response confidence is the region where the target is predicted to be.
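A minimal sketch of the single-sample training in formula (1), assuming a single-channel feature patch (a grayscale stand-in for the HOG or color features named above) and a Gaussian desired output $y_i$, a common choice in correlation-filter trackers such as MOSSE, though the patent does not specify the form of $y_i$:

```python
import numpy as np

def gaussian_response(shape, sigma: float = 2.0) -> np.ndarray:
    """Desired output y_i: a 2-D Gaussian peaked at the patch centre."""
    h, w = shape
    ys, xs = np.mgrid[:h, :w]
    return np.exp(-(((xs - w / 2) ** 2 + (ys - h / 2) ** 2) / (2 * sigma ** 2)))

def extract_features(frame: np.ndarray, box) -> np.ndarray:
    """Normalised grayscale patch with a cosine window (stand-in for HOG/colour)."""
    x, y, w, h = box
    patch = frame[y:y + h, x:x + w].astype(np.float64)
    patch = (patch - patch.mean()) / (patch.std() + 1e-6)
    return patch * np.outer(np.hanning(h), np.hanning(w))  # suppress boundary effects

def train_filter(x: np.ndarray, y: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Formula (1) solved for the filter: H* = Y / X element-wise in the frequency domain."""
    return np.fft.fft2(y) / (np.fft.fft2(x) + eps)  # eps guards against division by zero
```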
S103. Use the correlation filter obtained from the current video frame as a filter template, and detect the confidence of subsequent video frames frame by frame.
Specifically: the correlation filter computed from the i-th video frame is $h_i$. According to the convolution theorem, convolution in the time domain is equivalent to multiplication in the frequency domain, so for the (i+1)-th video frame the confidence is computed as:
$$C = x_{i+1} \otimes h_i = \mathcal{F}^{-1}\left(\hat{x}_{i+1} \odot \hat{h}_i^{*}\right) \qquad (2)$$
In formula (2), $\otimes$ denotes convolution, $x_{i+1}$ is the feature input of the (i+1)-th video frame, $\hat{x}_{i+1}$ denotes the Fourier transform of $x_{i+1}$, and $\hat{h}_i^{*}$ denotes the complex conjugate of the Fourier transform of $h_i$.
The confidence of the (i+1)-th video frame is computed from the correlation filter $h_i$ of the i-th video frame, and the region where the confidence reaches its maximum is the new region of the tracked target object in the (i+1)-th frame; in the same way, this filter template $h_i$ can be used to predict the target object in the (i+2)-th video frame.
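A minimal sketch of step S103, applying the frequency-domain filter from the previous sketch to the next frame's features and locating the response peak; the peak value is the confidence used in step S104. How the peak maps to a display viewing angle is not specified by the patent; peak_to_view_angle below is a hypothetical example assuming an equirectangular panorama, where columns map linearly to yaw and rows to pitch.

```python
import numpy as np

def track_step(x_next: np.ndarray, H_conj: np.ndarray):
    """Formula (2): response map C = F^-1(X_{i+1} ⊙ H*) and its peak location."""
    response = np.real(np.fft.ifft2(np.fft.fft2(x_next) * H_conj))
    peak = np.unravel_index(np.argmax(response), response.shape)
    return response, peak  # response[peak] is the detection confidence C

def peak_to_view_angle(peak, frame_shape):
    """Hypothetical re-centring: map a pixel peak in an equirectangular frame
    to the (yaw, pitch) in degrees that puts the target at the display centre."""
    row, col = peak
    height, width = frame_shape
    yaw = (col / width) * 360.0 - 180.0    # columns span -180..180 degrees of yaw
    pitch = 90.0 - (row / height) * 180.0  # rows span 90..-90 degrees of pitch
    return yaw, pitch
```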
S104. Determine the position and status of target object tracking in subsequent video frames from a preset range of confidence values.
Specifically, the preset range of confidence values is [4.5, 7]. When the detected confidence is above the preset range, the target object is tracked so that it is always tracked and displayed at the center of the display screen; when the detected confidence falls within the preset confidence interval, the method returns to step S101 and recomputes the filter template of the target object; when the detected confidence is below the range, tracking ends.
Referring to FIG. 2, in Embodiment 1 of the present invention, computing the confidence of subsequent video frames with the filter template and determining the position and status of target object tracking specifically includes the following steps:
S1041: when the detected confidence C ≥ 7.0, track the target object so that it is always tracked and displayed at the center of the display screen.
A detected confidence C ≥ 7.0 means the accuracy of the position predicted by the current filter template is high; the region with the largest confidence value is obtained and the viewing angle of the panoramic video display is updated, so that the region containing the target object is always tracked and displayed at the center of the display screen.
S1042: when the detected confidence value satisfies 7.0 > C ≥ 4.5, return to step S101 and re-detect and re-track the target object.
A detected confidence value with 7.0 > C ≥ 4.5 means the accuracy of the position predicted by the current filter template is low; the method must return to step S101, detect the target object of the current video frame again with the deep-learning target detection method, determine the object to be tracked through feature correlation, then initialize the correlation filter, computed as the filter template of the current video frame, and compute the target object of subsequent video frames frame by frame for tracking.
S1043: when the computed confidence C < 4.5, tracking ends.
A computed confidence C < 4.5 indicates that no target object was detected, and target object tracking can be ended.
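The three branches of FIG. 2 reduce to a small decision function; a sketch with the thresholds stated above (the name tracking_state is illustrative, not from the patent):

```python
HIGH, LOW = 7.0, 4.5  # the preset confidence range [4.5, 7]

def tracking_state(confidence: float) -> str:
    """Map a detected confidence C to the branch of steps S1041-S1043."""
    if confidence >= HIGH:
        return "track"      # S1041: keep the target centred on the display
    if confidence >= LOW:
        return "redetect"   # S1042: return to step S101 and re-train the filter
    return "end"            # S1043: no target detected, stop tracking
```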
In the present invention, the specified target object in a panoramic video is detected and tracked frame by frame through a deep-learning target detection algorithm and correlation filters, so that when the panoramic video is played, the user selects the target object to be tracked and the video playback window automatically detects and follows the object's movement, keeping the object displayed at the center of the screen at all times and improving the user experience.
Embodiment 2:
Embodiment 2 of the present invention provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, it implements the steps of the target tracking method provided in Embodiment 1 of the present invention. The computer-readable storage medium may be a non-transitory computer-readable storage medium.
Embodiment 3:
FIG. 3 shows a structural block diagram of the portable terminal provided in Embodiment 3 of the present invention. A portable terminal 100 includes: one or more processors 101, a memory 102, and one or more computer programs, where the processor 101 and the memory 102 are connected by a bus, the one or more computer programs are stored in the memory 102 and configured to be executed by the one or more processors 101, and the processor 101, when executing the computer programs, implements the steps of the target tracking method provided in Embodiment 1 of the present invention.
In the embodiments of the present invention, persons of ordinary skill in the art can understand that all or part of the steps in the methods of the above embodiments can be completed by instructing the relevant hardware through a program, and the program can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

Claims (8)

  1. A target tracking method, characterized by comprising the following steps:
    acquiring the target object to be tracked in a video frame using a deep-learning target detection algorithm;
    extracting features from the target object and training them to obtain a correlation filter;
    using the correlation filter obtained from the current video frame as a filter template, and detecting the confidence of subsequent video frames frame by frame;
    determining the position and status of target object tracking in subsequent video frames from a preset range of confidence values.
  2. The target tracking method according to claim 1, wherein acquiring the target object to be tracked in a video frame using a deep-learning target detection algorithm is specifically:
    detecting objects in the video frame with a deep-learning target detection algorithm, where the target detection method may be the SSD algorithm, the RCNN algorithm, or the YOLO algorithm;
    acquiring the target object selected by the user and marking it with a rectangular box, the length and width of the rectangular box being the adaptive length and width of the target object.
  3. The target tracking method according to claim 1, wherein extracting features from the target object and training them to obtain a correlation filter specifically comprises:
    extracting features from the target object region marked by the rectangular box;
    training the features to obtain the correlation filter.
  4. The target tracking method according to claim 3, wherein the training is specifically:
    letting the current video frame be the i-th video frame, i > 0; defining $y_i$ as the desired output, $x_i$ as the features extracted from the target object, and $h_i$ as the correlation filter, with the training formula (1) as follows:
    $$y_i = \mathcal{F}^{-1}\left(\hat{x}_i \odot \hat{h}_i^{*}\right) \qquad (1)$$
    where $\mathcal{F}^{-1}$ denotes the inverse Fourier transform, $\hat{x}_i$ denotes the Fourier transform of $x_i$, $\hat{h}_i^{*}$ denotes the complex conjugate of the Fourier transform of $h_i$, and $\odot$ denotes element-wise multiplication;
    from formula (1), the correlation filter $h_i$ of the i-th video frame obtained after feature training satisfies:
    $$\hat{h}_i^{*} = \hat{y}_i \oslash \hat{x}_i$$
    where $\oslash$ denotes element-wise division.
  5. The target tracking method according to claim 1, wherein using the correlation filter obtained from the current video frame as a filter template and detecting the confidence of subsequent video frames frame by frame is specifically:
    computing the confidence of the (i+1)-th video frame based on the correlation filter $h_i$ of the i-th video frame;
    the computation of the confidence being specifically:
    extracting features for the (i+1)-th video frame, with $x_{i+1}$ as the feature input of the (i+1)-th video frame, the confidence computation formula being:
    $$C = x_{i+1} \otimes h_i = \mathcal{F}^{-1}\left(\hat{x}_{i+1} \odot \hat{h}_i^{*}\right) \qquad (2)$$
    where $\otimes$ denotes convolution, $\hat{x}_{i+1}$ denotes the Fourier transform of $x_{i+1}$, and $\hat{h}_i^{*}$ denotes the complex conjugate of the Fourier transform of $h_i$.
  6. The target tracking method according to claim 1, wherein determining the position and status of target object tracking in subsequent video frames from a preset range of confidence values specifically comprises:
    the preset range of confidence values being [4.5, 7];
    computing the confidence of subsequent video frames with the filter template and determining the position and status of target object tracking, specifically:
    when the detected confidence C ≥ 7.0, tracking the target object so that it is always tracked and displayed at the center of the display screen;
    when the detected confidence value satisfies 7.0 > C ≥ 4.5, repeating the target tracking method to re-detect and re-track the target object;
    when the computed confidence C < 4.5, ending the target object tracking.
  7. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the target tracking method according to any one of claims 1 to 6 are implemented; the computer-readable storage medium may be a non-transitory computer-readable storage medium.
  8. A portable terminal, comprising:
    one or more processors;
    a memory; and
    one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, characterized in that the processors, when executing the computer programs, implement the steps of the target tracking method according to any one of claims 1 to 6.
PCT/CN2020/086972 2019-05-06 2020-04-26 Target tracking method and portable terminal WO2020224460A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910371093.1A CN110197126A (zh) 2019-05-06 2019-05-06 Target tracking method and apparatus, and portable terminal
CN201910371093.1 2019-05-06

Publications (1)

Publication Number Publication Date
WO2020224460A1 (zh)

Family

ID=67752467

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/086972 WO2020224460A1 (zh) 2019-05-06 2020-04-26 Target tracking method and portable terminal

Country Status (2)

Country Link
CN (1) CN110197126A (zh)
WO (1) WO2020224460A1 (zh)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197126A (zh) Target tracking method and apparatus, and portable terminal
CN110570448A (zh) Target tracking method and apparatus for panoramic video, and portable terminal
CN110647836B (zh) Robust deep-learning-based single-target tracking method
CN114821700A (zh) Picture frame updating method, updating apparatus, and storage medium
CN114095780A (zh) Panoramic video editing method and apparatus, storage medium, and device
CN112954443A (zh) Panoramic video playback method and apparatus, computer device, and storage medium


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107154024A (zh) Scale-adaptive target tracking method based on deep-feature kernelized correlation filters
CN108848304B (zh) Target tracking method and apparatus for panoramic video, and panoramic camera
CN109410246B (zh) Visual tracking method and apparatus based on correlation filtering
CN109697727A (zh) Target tracking method, system and storage medium based on correlation filtering and metric learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574445A (zh) Target tracking method and apparatus
CN105989367A (zh) Target acquisition method and device
CN107092883A (zh) Object recognition and tracking method
CN108734723A (zh) Correlation-filter target tracking method based on adaptive-weight joint learning
CN110197126A (zh) Target tracking method and apparatus, and portable terminal

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129337A (zh) Background-aware tracking method, computer-readable storage medium, and computer device
CN113129337B (zh) Background-aware tracking method, computer-readable storage medium, and computer device
CN113936036A (zh) Target tracking method and apparatus based on unmanned aerial vehicle video, and computer device
CN113936036B (zh) Target tracking method and apparatus based on unmanned aerial vehicle video, and computer device
CN114095750A (zh) Cloud platform monitoring method and related products
CN114764897A (zh) Behavior recognition method and apparatus, terminal device, and storage medium
CN117218162A (zh) AI-based panoramic tracking and view-control system
CN117218162B (zh) AI-based panoramic tracking and view-control system

Also Published As

Publication number Publication date
CN110197126A (zh) 2019-09-03

Similar Documents

Publication Publication Date Title
WO2020224460A1 (zh) Target tracking method and portable terminal
CN110728697B (zh) Infrared dim and small target detection and tracking method based on a convolutional neural network
US11102417B2 (en) Target object capturing method and device, and video monitoring device
US8913103B1 (en) Method and apparatus for focus-of-attention control
CN110049206B (zh) Image processing method and apparatus, and computer-readable storage medium
WO2017080399A1 (zh) Face position tracking method and apparatus, and electronic device
Benezeth et al. Review and evaluation of commonly-implemented background subtraction algorithms
Fang et al. Video saliency incorporating spatiotemporal cues and uncertainty weighting
WO2020094091A1 (zh) Image capture method, monitoring camera, and monitoring system
US10430667B2 (en) Method, device, and computer program for re-identification of objects in images obtained from a plurality of cameras
KR100860988B1 (ko) Method and apparatus for object detection in sequences
US10896495B2 (en) Method for detecting and tracking target object, target object tracking apparatus, and computer-program product
US20060056702A1 (en) Image processing apparatus and image processing method
KR20080054368A (ko) Flame detection method and apparatus
CN110287907B (zh) Object detection method and apparatus
JP2009027393A (ja) Video retrieval system and person retrieval method
JP4999794B2 (ja) Still region detection method and apparatus, program, and recording medium
KR101542206B1 (ko) Apparatus and method for object extraction and tracking using a coarse-to-fine technique
JP2010011016A (ja) Tracking point detection apparatus and method, program, and recording medium
CN112257492A (zh) Real-time intrusion detection and tracking method for multiple cameras
WO2018179119A1 (ja) Video analysis apparatus, video analysis method, and recording medium
CN111382646B (zh) Living-body recognition method, storage medium, and terminal device
CN109389624B (zh) Model drift suppression method based on similarity measurement, and apparatus therefor
Wang et al. Visual cue integration for small target motion detection in natural cluttered backgrounds
Wang et al. A novel visual saliency detection method for infrared video sequences

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20801446

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20801446

Country of ref document: EP

Kind code of ref document: A1