WO2013023474A1 - Intelligent tracking dome camera and tracking method thereof - Google Patents

Intelligent tracking dome camera and tracking method thereof (智能跟踪球机及其跟踪方法)

Info

Publication number
WO2013023474A1
Authority
WO
WIPO (PCT)
Prior art keywords
tracking
distance
moving object
intelligent
unit
Prior art date
Application number
PCT/CN2012/076223
Other languages
English (en)
French (fr)
Inventor
吴占伟
郭海训
全晓臣
刘志宇
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司
Publication of WO2013023474A1

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30196 - Human being; Person
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30232 - Surveillance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30241 - Trajectory

Definitions

  • The invention relates to the field of video surveillance, and in particular to security technology that uses an intelligent tracking dome camera for video surveillance.
  • Intelligent video surveillance uses computer vision technology to process, analyze and understand video signals. Without human intervention, it automatically analyzes image sequences to locate, identify and track changes in the monitored scene, and on that basis analyzes and judges the behavior of targets. It can therefore issue an alarm or provide useful information promptly when an abnormal situation occurs, effectively assist security personnel in handling a crisis, and minimize false alarms and missed alarms.
  • At present, the motion detection of automatic tracking dome cameras on the market is mainly based on the basic principles of image processing and pattern recognition: a dedicated image and video processing unit detects moving objects by frame differencing, optical flow and similar methods, and the dome camera then tracks the detected triggering object.
  • The inventors of the present invention have found that the main disadvantages of this approach are, first, that the processing load of the algorithm is large, since an algorithm must simulate the current object's motion trajectory to complete the trigger judgment; second, that it is susceptible to interference and places rather strict requirements on the motion environment of the detected object; and finally, that it is not intuitive and cannot reflect useful information such as the current object's speed and actual height. A new video surveillance technology is therefore needed that can greatly reduce algorithmic complexity and dependence on the scene, and whose triggered tracking is more accurate, effective and intuitive.
  • The object of the present invention is to provide an intelligent tracking dome camera and a tracking method thereof that reduce the amount of computation and the dependence on the scene, and effectively improve the accuracy, real-time performance and intuitiveness of automatic tracking.
  • An embodiment of the present invention provides a tracking method for an intelligent tracking dome camera, including the following steps: detecting a moving object from the picture obtained by a video capture unit; if a moving object is detected, calculating the distance from the moving object to a designated point; and if the periodically calculated distance span contains a preset alarm distance, tracking the moving object.
  • Embodiments of the present invention also provide an intelligent tracking dome camera, including:
  • a video capture unit;
  • a detection unit configured to detect a moving object from the picture obtained by the video capture unit;
  • a ranging unit configured to calculate the distance from the moving object to a designated point when the detection unit detects a moving object; and
  • a tracking unit configured to track the moving object if the distance span periodically calculated by the ranging unit contains the preset alarm distance.
  • With the ranging method based on the imaging principle of the video capture unit and the coordinate conversion principle, the distance information of any point in the current picture can be measured: not only the distance from the object to be measured to the video capture unit, but also the height of an object on the ground plane in the picture or the distance between any two points on the ground plane. The distance information obtained is therefore more comprehensive, effective and practical. There is no need to resort to an existing coordinate system or to rely on the relation between focusing and zooming, so external interference factors can be effectively excluded and adaptability is strong.
  • The present invention determines whether the alarm line is triggered by calculating distance rather than by using image processing and video detection algorithms based on the monitoring picture, which greatly reduces algorithmic complexity and dependence on the scene and effectively improves the accuracy, real-time performance and intuitiveness of automatic tracking. Further, the tracking magnification of the video capture unit is reduced or the tracking rate of the dome camera is increased according to the moving speed of the object, which ensures continuous and effective tracking. Further, when tracking is interrupted, the tracking magnification is changed according to the height change of the tracked object before and after the interruption, so that the tracked object stays at the center of the picture and can be tracked effectively.
  • FIG. 1 is a schematic flow chart of a tracking method for an intelligent tracking dome camera in a first embodiment of the present invention;
  • FIG. 2 is a schematic flow chart of a tracking method for an intelligent tracking dome camera in a second embodiment of the present invention;
  • FIG. 3 is a top-view model of the tracking dome camera;
  • FIG. 4 is a schematic diagram of the distance range of the current picture of the tracking dome camera;
  • FIG. 5 is a schematic diagram of the equidistance curves;
  • FIG. 6 is a schematic diagram of the trigger equidistance curve;
  • FIG. 7 is a calibration geometry model of the video capture unit;
  • FIG. 8 is a schematic diagram of a moving object triggering the alarm curve;
  • FIG. 9 is a schematic structural view of an intelligent tracking dome camera according to a third embodiment of the present invention, and FIG. 10 is a schematic structural view of an intelligent tracking dome camera according to a fourth embodiment of the present invention. Detailed description of the embodiments follows.
  • A first embodiment of the present invention relates to a tracking method for an intelligent tracking dome camera. FIG. 1 is a schematic flow chart of the method.
  • Specifically, as shown in FIG. 1, the tracking method mainly includes the following steps. In step 101, a moving object is detected from the picture obtained by the video capture unit.
  • In the embodiments of the present invention, the video capture unit refers to the component of the dome camera that acquires the video picture; it may also have other names, such as camera, picture capture module, and so on.
  • The flow then proceeds to step 102, where it is judged whether a moving object has been detected; if yes, the flow proceeds to step 103, otherwise it returns to step 102. If a moving object is detected, the distance from the moving object to a designated point is calculated.
  • In this embodiment, the designated point is preferably the intelligent tracking dome camera itself.
  • It will be appreciated that in other examples of the invention the designated point may also be a point other than the dome camera, for example the location of a target to be protected, such as the center of a door, the center of an exhibit, the seat of an important person, and so on.
  • In step 103, the distance from the moving object to the designated point is calculated.
  • The flow then proceeds to step 104, where it is judged whether the periodically calculated distance span contains the preset alarm distance. For example, if the distance is calculated periodically, the N-th measured distance is X, the (N+1)-th measured distance is Z, the alarm distance is Y, and X > Y > Z, then the periodically calculated distance span is judged to contain the preset alarm distance. If yes, the flow proceeds to step 105 and the moving object is tracked; if no, it returns to step 104.
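  • The span check in step 104 can be expressed in a few lines. The following Python sketch is illustrative rather than taken from the patent (the function names and the fixed sampling period are assumptions); it flags a trigger whenever the preset alarm distance falls between two consecutive periodic distance measurements.

    def span_contains_alarm(prev_dist, curr_dist, alarm_dist):
        # The span covered between two consecutive periodic samples contains
        # the alarm distance when it lies between the two measured values.
        lo, hi = sorted((prev_dist, curr_dist))
        return lo <= alarm_dist <= hi

    def alarm_triggers(distance_samples, alarm_dist):
        # Yield the indices of samples at which tracking should be triggered.
        # distance_samples: periodically measured distances from the moving
        # object to the designated point (e.g. the dome camera itself).
        prev = None
        for n, dist in enumerate(distance_samples):
            if prev is not None and span_contains_alarm(prev, dist, alarm_dist):
                yield n
            prev = dist

    # Example from the text: X > Y > Z, so the object crosses the alarm
    # distance Y between the N-th and (N+1)-th measurements.
    X, Y, Z = 12.0, 10.0, 8.0
    assert list(alarm_triggers([X, Z], alarm_dist=Y)) == [1]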
  • A second embodiment of the present invention relates to a tracking method for an intelligent tracking dome camera.
  • FIG. 2 is a schematic flow chart of the tracking method. The second embodiment improves on the first embodiment, the main improvement being that the coordinates of a point of the image coordinate system in the world coordinate system are calculated by the following formula, and the distance from the moving object to the designated point is then calculated from the world coordinates:

    X = (Z - H)·u / (v·cosΦ - f·sinΦ)
    Y = [Z·(v·sinΦ + f·cosΦ) - H·v/sinΦ] / (v·cosΦ - f·sinΦ)        (3)
    Z = Z

    where f is an internal parameter, namely the focal length of the video capture unit; H and Φ are external parameters, namely the assumed height of the video capture unit and the vertical angle between the video capture unit and the ground plane; (u, v) are point coordinates in the image coordinate system; and (X, Y, Z) are point coordinates in the world coordinate system.
  • Specifically, as shown in FIG. 2, the tracking method mainly includes the following steps. In step 201, the tracking dome camera is installed and the dome height H is obtained.
  • The flow then proceeds to step 202, where a tracking strategy is formulated, including determining the trigger tracking manner and the trigger tracking distance.
  • The flow then proceeds to step 203, where the farthest and nearest points in the current picture are determined, the distance range is obtained, and equidistance curves are drawn.
  • When the tracking dome camera has moved to a given position, the distance range of the current picture must be obtained. The distance range of the current picture depends on the distances from the dome camera to the farthest and nearest points, analyzed as follows.
  • FIG. 3 is a top-view model of the tracking dome camera. Because not every pixel in the image has a projection on the ground plane, the critical point that still intersects the ground plane must be found. In the figure, d is approximately the projection of the picture onto the vertical direction of the charge-coupled device ("CCD") of the video capture unit, f is the focal length, Φ is the horizontal angle of the dome camera, and d = f·tg(Φ). Assuming that the vertical dimension of the CCD is D, the image coordinates of the farthest imaging position on the CCD plane follow from the relation between d and D:
  • if f·tg(Φ) > D, the normalized image coordinates of the farthest point are whichever of A(-0.5, 0.5) and A'(0.5, 0.5) is farther away; otherwise the farthest point is whichever of B(-0.5, f·tg(Φ)/D) and B'(0.5, f·tg(Φ)/D) is farther away.
  • The nearest points are the lower corner points C(-0.5, -0.5) and C'(0.5, -0.5), whichever is closer to the dome camera.
  • As shown in FIG. 4, substituting the coordinates of A, A', B, B', C and C' into ranging formula (3) yields the farthest and nearest distances of the current picture.
  • According to the distance range of the current picture, ranging formula (3) is applied inversely under the equal-distance principle to obtain series of points at equal distances, so that several equidistance curves can be drawn on the current picture; the points on any one curve are all at the same distance from the dome camera, as shown in FIG. 5, where A, B, C, D and E are equidistance curves and the distance between every two adjacent curves is fixed.
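  • As an illustration of how such curves could be rasterized, the sketch below scans each image column and keeps the pixel row whose ground point lies closest to the requested radius. It deliberately uses a generic flat-ground pinhole back-projection (camera at height H, optical axis tilted down by a known angle) rather than the patent's formula (3); the helper names, image size and camera numbers are assumptions for illustration, and the same back-projection idea reappears in the ranging sketch given after the coordinate derivation below.

    import math

    def ground_point(u, v, f, cam_height, tilt):
        # Back-project normalized pixel (u, v) onto the ground plane, assuming a
        # pinhole camera at height cam_height whose optical axis is tilted down
        # by `tilt` radians; v grows downward from the principal point.
        denom = v * math.cos(tilt) + f * math.sin(tilt)
        if denom <= 0:
            return None                      # pixel at or above the horizon
        s = cam_height / denom               # ray parameter at the ground intersection
        x = s * u
        y = s * (f * math.cos(tilt) - v * math.sin(tilt))
        return x, y                          # ground coordinates, camera foot at (0, 0)

    def equidistance_curve(radius, width, height, f, cam_height, tilt):
        # For each image column, return the pixel row whose ground point is
        # closest in distance-from-the-camera-foot to `radius` (or None).
        curve = []
        for col in range(width):
            u = (col + 0.5) / width - 0.5    # normalized horizontal coordinate
            best_row, best_err = None, float("inf")
            for row in range(height):
                v = (row + 0.5) / height - 0.5
                p = ground_point(u, v, f, cam_height, tilt)
                if p is None:
                    continue
                err = abs(math.hypot(*p) - radius)
                if err < best_err:
                    best_row, best_err = row, err
            curve.append(best_row)
        return curve

    # E.g. a 10 m alarm curve for a dome mounted 6 m high, tilted 35 degrees down.
    rows = equidistance_curve(10.0, width=64, height=48, f=1.2,
                              cam_height=6.0, tilt=math.radians(35))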
  • In step 204, an equidistance curve for the trigger tracking distance is drawn in the current picture according to actual needs.
  • After the equidistance curves are completed, the trigger equidistance curve can be drawn visually in the picture according to the required trigger distance, as shown in FIG. 6, where the red curve is the trigger equidistance curve. Drawing the equidistance curves and the alarm line that triggers tracking gives the user a more intuitive view of where the alarm line lies.
  • In addition, it will be appreciated that the distance may be the distance from the moving object to the video capture unit, or the distance from the moving object to another designated point. The equidistance curves serve only as an aid and may be omitted; according to actual needs, the alarm line that triggers tracking may likewise be omitted. The equidistance curves may be drawn first and the alarm line afterwards, or the alarm line may be drawn directly.
  • In step 205, the moving-object detection algorithm is started to obtain the distance change information of all moving objects in the current picture. Specifically, as shown in FIG. 7, there are an image coordinate system UO'V, a video capture unit coordinate system XcYcOcZc, and a world coordinate system XYOZ, where:
  • O'Oc is the optical axis of the video capture unit;
  • Oc is the focal point of the video capture unit;
  • O' is the intersection of the optical axis and the CCD imaging target surface; and
  • O is the intersection of the optical axis and the ground plane.
  • The figure also shows the internal and external parameters of the video capture unit: f is the internal parameter, namely the focal length of the video capture unit;
  • H and Φ are the external parameters, namely the assumed height of the video capture unit and the vertical angle between the video capture unit and the ground plane.
  • Let the point coordinates be (u, v) in the image coordinate system, (Xc, Yc, Zc) in the video capture unit coordinate system, and (X, Y, Z) in the world coordinate system.
  • According to the imaging principle of the video capture unit, the conversion relation between the image coordinate system and the video capture unit coordinate system is given by equation (1) (presented in the original publication as a figure), where u, v and f are the unit-normalized coordinates and focal length.
  • From the translation and rotation relations between three-dimensional coordinate systems, the three-dimensional conversion relation between the video capture unit coordinate system and the world coordinate system is obtained, namely:

    Xc = X
    Yc = Z·cosΦ + Y·sinΦ        (2)
    Zc = Z·sinΦ - Y·cosΦ - H/sinΦ

  • Combining equations (1) and (2) gives the coordinates of a point of the image coordinate system in the world coordinate system:

    X = (Z - H)·u / (v·cosΦ - f·sinΦ)
    Y = [Z·(v·sinΦ + f·cosΦ) - H·v/sinΦ] / (v·cosΦ - f·sinΦ)        (3)
    Z = Z

    This establishes the conversion relation between the image coordinate system and the world coordinate system.
  • With this ranging method, based on the imaging principle of the video capture unit and the coordinate conversion principle, the distance information of any point in the current picture can be measured: not only the distance from the object to be measured to the video capture unit, but also the height of an object on the ground plane in the picture or the distance between any two points on the ground plane. The distance information obtained is therefore more comprehensive, effective and practical. There is no need to resort to an existing coordinate system or to rely on the relation between focusing and zooming, so external interference factors can be effectively excluded and adaptability is strong.
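  • To make the ranging idea concrete, here is a minimal monocular-ranging sketch in the same spirit as the derivation above. It is written from the standard flat-ground pinhole model (camera at height H, optical axis tilted down by Φ, v measured downward from the principal point) and is not guaranteed to reproduce the patent's equations (1) to (3) exactly; reading Φ as the downward tilt of the optical axis, the function names and the example numbers are all assumptions.

    import math

    def pixel_to_world(u, v, f, H, phi):
        # Ground-plane point seen at normalized pixel (u, v): X to the right,
        # Y along the ground away from the camera foot; the camera sits at
        # height H with its optical axis tilted down by phi; v grows downward.
        denom = v * math.cos(phi) + f * math.sin(phi)
        if denom <= 0:
            raise ValueError("pixel is at or above the horizon")
        s = H / denom                        # ray parameter at the ground intersection
        return s * u, s * (f * math.cos(phi) - v * math.sin(phi))

    def distance_to_camera(u, v, f, H, phi):
        # Straight-line distance from the camera to the ground point at (u, v).
        X, Y = pixel_to_world(u, v, f, H, phi)
        return math.sqrt(X * X + Y * Y + H * H)

    def object_height(v_foot, v_head, f, H, phi):
        # Height of a vertical object standing on the ground: intersect the ray
        # through the head pixel with the vertical line above the foot point.
        _, Y = pixel_to_world(0.0, v_foot, f, H, phi)
        s_head = Y / (f * math.cos(phi) - v_head * math.sin(phi))
        return H - s_head * (v_head * math.cos(phi) + f * math.sin(phi))

    # Dome camera 6 m up, tilted 35 degrees toward the ground, normalized f = 1.2:
    f, H, phi = 1.2, 6.0, math.radians(35)
    print(distance_to_camera(0.0, 0.10, f, H, phi))    # about 9.4 m to the ground point
    print(object_height(0.10, -0.05, f, H, phi))       # about 1.4 m tall object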
  • In other examples of the invention, ranging can also be performed by conversion between other coordinate systems, for example polar coordinates, in which case the corresponding formulas differ.
  • The flow then proceeds to step 206, where it is determined whether an object is crossing the line. If yes, the flow proceeds to step 207; if no, it returns to step 206.
  • In step 207, the tracking algorithm is started to track the line-crossing object. The distance from the moving object to the dome camera is calculated periodically, and if the moving object crosses the alarm line, tracking is performed.
  • The period may be any interval that meets the need, such as 1 second, 2 seconds or 3 seconds. Specifically, as shown in FIG. 8, if a moving object is at position C in the picture at time t and has moved to position C' at time t', the algorithm detects the change in the object's position, marks it as an object that has triggered the alarm curve, and tracks it.
  • In step 208, it is judged whether the proportion of the tracked object in the picture is appropriate. If yes, the flow proceeds to step 209, where tracking is maintained at the current, suitable magnification; if no, it proceeds to step 210.
  • In step 210, the lens is zoomed to obtain a suitable tracking magnification, after which the flow returns to step 208.
  • In the embodiments of the present invention, "magnification" or "tracking magnification" refers to the zoom ratio applied by the lens to the video image, i.e. the "Zoom" function.
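  • As a rough sketch of the scale check in steps 208 to 210, the routine below keeps the tracked object near a target fraction of the frame height by rescaling the current zoom factor; the target fraction, tolerance, zoom limits and names are assumptions for illustration and are not specified by the patent.

    def suggest_zoom(current_zoom, object_px_height, frame_px_height,
                     target_ratio=0.4, tolerance=0.1,
                     min_zoom=1.0, max_zoom=30.0):
        # Fraction of the frame height currently occupied by the tracked object.
        ratio = object_px_height / frame_px_height
        if abs(ratio - target_ratio) <= tolerance:
            return current_zoom              # step 209: keep the current magnification
        # step 210: zoom so the object would occupy roughly the target fraction
        # of the frame (apparent size scales approximately with the zoom factor).
        desired = current_zoom * target_ratio / ratio
        return max(min_zoom, min(max_zoom, desired))

    # An object filling 15% of a 1080-line frame at 4x zoom suggests roughly 10.7x.
    print(round(suggest_zoom(4.0, object_px_height=162, frame_px_height=1080), 1))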
  • Once an object touching the line is detected, the automatic tracking algorithm is started to track it.
  • In actual tracking, an important question is how to determine the tracking magnification and tracking speed; this, too, can be solved with the ranging algorithm. (1) From the distance of the detected object to the dome camera, the object's motion trend and speed can be obtained, and the tracking rate can be adjusted according to changes in that trend and speed to ensure continuous and effective tracking.
  • The specific steps are as follows. First, at a given magnification the dome camera has an optimal tracking speed in each of the horizontal and vertical directions, Va and Vb.
  • These speeds are related to the performance of the motor, timing belt and other components used in the moving part of the dome camera, and can be used as empirical values in this patent.
  • Once tracking of the object has started, the distance from the current moving object to the dome camera can be obtained in real time, and from the change of that distance per unit time an approximately uniform motion speed of the object can be derived.
  • According to the change of the distance between the tracked object and the dome camera per unit time and the change of the dome camera's position during that interval, the velocity of the tracked object is decomposed into the direction perpendicular to the equidistance lines and the direction parallel to them; the two component speeds, Vh and Vv, are obtained and converted into the horizontal and vertical motion speeds Vha and Vvb of the dome camera.
  • Vha and Vvb are compared with the optimal tracking speeds Va and Vb at the current magnification. If the difference is within an acceptable range, the dome camera keeps tracking at the existing speed and magnification; if the speed is too high, the dome magnification is reduced; if the speed is too low, the dome magnification is increased. The amount by which the magnification is reduced or increased is approximately linearly related to the change in the moving speed of the tracked object.
  • The object's motion speed is thus calculated from the difference of the distances measured per unit time, and the magnification of the video capture unit is reduced or the tracking rate of the dome camera is increased accordingly, which ensures continuous and effective tracking. The unit time may be 0.5 seconds, 1 second, 2 seconds, and so on.
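  • The speed-matching rule can be sketched as follows. Va and Vb are the empirical optimal pan and tilt tracking speeds at the current magnification, and the roughly linear relation between the speed mismatch and the magnification change described above is modelled here by simple proportional scaling; the tolerance, limits and names are illustrative assumptions.

    def adjust_for_speed(zoom, vha, vvb, va, vb, tolerance=0.2,
                         min_zoom=1.0, max_zoom=30.0):
        # vha, vvb: required dome pan/tilt speeds derived from the object's motion
        #           (decomposed along and across the equidistance lines).
        # va, vb:   empirical optimal tracking speeds at the current zoom.
        # Returns the zoom factor to use for the next period.
        ratio = max(vha / va, vvb / vb)      # how hard the dome has to work
        if abs(ratio - 1.0) <= tolerance:
            return zoom                      # keep current speed and magnification
        # Speed too high -> widen the view (reduce zoom); too low -> zoom in.
        # The change is approximately proportional to the speed mismatch.
        new_zoom = zoom / ratio
        return max(min_zoom, min(max_zoom, new_zoom))

    # Object moving fast enough to need 1.5x the optimal pan speed at 12x zoom:
    print(adjust_for_speed(zoom=12.0, vha=30.0, vvb=8.0, va=20.0, vb=15.0))   # 8.0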
  • (2) From the object-height calculation, the actual height of the currently tracked object can be obtained and, combined with the proportion of the picture the object occupies, used to adjust the zoom rate of the dome camera so that the tracked object remains at the center of the picture for effective tracking. This algorithm is mainly used when tracking is interrupted.
  • The specific steps are as follows. When the tracking algorithm is tracking an object, the actual height of the object can be obtained in real time. If tracking is not interrupted, the height of the object should not change noticeably, and the algorithm leaves the tracking magnification unchanged. If tracking is interrupted, the algorithm judges the height change of the tracked object before and after the interruption: if the tracked object has become lower, the tracking magnification is increased; if it has become taller, the tracking magnification is reduced; if the height has not changed, the tracking magnification is maintained.
  • In this way the actual height information of the currently tracked object is obtained and, combined with the proportion of the picture the object occupies, used to adjust the zoom rate of the dome camera, ensuring that the tracked object stays at the center of the picture for effective tracking.
  • It will be appreciated that this algorithm is mainly used when tracking is interrupted, but it can also be used in other situations.
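  • The interruption rule reduces to a three-way comparison on the measured object height. A minimal sketch follows; the relative tolerance and the proportional rescaling factor are assumptions, since the patent states only the direction of the change.

    def zoom_after_interruption(zoom, height_before, height_after, rel_tol=0.05,
                                min_zoom=1.0, max_zoom=30.0):
        # Compare the tracked object's measured height before and after the
        # tracking interruption and adjust the magnification accordingly.
        if height_before <= 0:
            return zoom
        change = (height_after - height_before) / height_before
        if abs(change) <= rel_tol:
            return zoom                      # height unchanged: keep the magnification
        # Object appears lower: zoom in; object appears taller: zoom out.
        new_zoom = zoom * height_before / height_after
        return max(min_zoom, min(max_zoom, new_zoom))

    print(zoom_after_interruption(10.0, height_before=1.7, height_after=1.7))  # 10.0
    print(zoom_after_interruption(10.0, height_before=1.7, height_after=0.9))  # zooms in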
  • The method embodiments of the present invention may all be implemented in software, hardware, firmware, and so on. Regardless of whether the invention is implemented in software, hardware or firmware, the instruction code may be stored in any type of computer-accessible memory (for example permanent or modifiable, volatile or non-volatile, solid-state or non-solid-state, fixed or replaceable media, and so on).
  • The memory may be, for example, Programmable Array Logic ("PAL"), Random Access Memory ("RAM"), Programmable Read-Only Memory ("PROM"), Read-Only Memory ("ROM"), Electrically Erasable Programmable ROM ("EEPROM"), a magnetic disk, an optical disc, a Digital Versatile Disc ("DVD"), and so on.
  • A third embodiment of the invention relates to an intelligent tracking dome camera. FIG. 9 is a schematic structural view of this dome camera.
  • As shown in FIG. 9, the intelligent tracking dome camera mainly includes: a video capture unit;
  • a detection unit configured to detect a moving object from the picture obtained by the video capture unit;
  • a ranging unit configured to calculate the distance from the moving object to a designated point when the detection unit detects a moving object; and
  • a tracking unit configured to track the moving object when the distance span periodically calculated by the ranging unit contains the preset alarm distance.
  • A fourth embodiment of the invention relates to an intelligent tracking dome camera. FIG. 10 is a schematic structural view of this dome camera. The fourth embodiment improves on the third embodiment, the main improvement being the following.
  • The ranging unit first calculates the coordinates of the image coordinate system in the world coordinate system by formula (3) above, and then calculates the distance from the moving object to the designated point from the coordinates in the world coordinate system;
  • where f is an internal parameter, namely the focal length of the video capture unit;
  • H and Φ are external parameters, namely the assumed height of the video capture unit and the vertical angle between the video capture unit and the ground plane;
  • the point coordinates in the image coordinate system are (u, v);
  • the point coordinates in the video capture unit coordinate system are (Xc, Yc, Zc); and
  • the point coordinates in the world coordinate system are (X, Y, Z).
  • Formula (3) is only a preferred ranging formula of the present invention; other ranging formulas of a different form may also be used, for example a formula derived in polar coordinates by a derivation analogous to that of formulas (1) and (2).
  • As shown in FIG. 10, the following units are added on the basis of the third embodiment:
  • a distance range obtaining unit configured to determine the farthest and nearest points in the current picture and obtain the distance range of the current picture;
  • a line drawing unit configured to draw equidistance curves within the distance range of the current picture obtained by the distance range obtaining unit, and to draw an alarm line for trigger tracking according to the preset alarm distance;
  • a first dome adjustment unit configured to calculate the object's motion speed from the difference of the distances calculated by the ranging unit per unit time, and according to that speed to reduce the tracking magnification of the video capture unit or increase the tracking rate of the dome camera; and
  • a second dome adjustment unit configured to obtain the actual height information of the current object from the object-height calculation and to adjust the zoom rate of the dome camera in combination with the proportion of the picture the object occupies. One possible decomposition of these units into code is sketched below.
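  • To make the division into logical units concrete, here is one possible (and purely hypothetical) decomposition in code; the class and field names mirror the units listed above but are not taken from the patent, and the capture, detection, ranging and pan-tilt-zoom internals are left as injected callables.

    from dataclasses import dataclass, field
    from typing import Callable, Optional, Tuple

    Frame = object          # placeholder for a captured video frame
    Point = Tuple[float, float]

    @dataclass
    class IntelligentTrackingDome:
        capture: Callable[[], Frame]                    # video capture unit
        detect: Callable[[Frame], Optional[Point]]      # detection unit
        measure: Callable[[Point], float]               # ranging unit (formula (3) or similar)
        track: Callable[[Point], None]                  # tracking unit (pan/tilt/zoom control)
        alarm_distance: float = 10.0
        _last_distance: Optional[float] = field(default=None, init=False)

        def process_period(self) -> None:
            # One ranging period: detect, measure, and trigger tracking when the
            # distance span crosses the preset alarm distance.
            frame = self.capture()
            target = self.detect(frame)
            if target is None:
                self._last_distance = None
                return
            dist = self.measure(target)
            prev, self._last_distance = self._last_distance, dist
            if prev is not None and min(prev, dist) <= self.alarm_distance <= max(prev, dist):
                self.track(target)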
  • This embodiment can be implemented in cooperation with the second embodiment. The related technical details mentioned in the second embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here; correspondingly, the related technical details mentioned in this embodiment can also be applied to the second embodiment.
  • Each unit mentioned in the intelligent tracking dome cameras of the present invention is a logical unit. Physically, a logical unit may be a single physical unit, part of a physical unit, or a combination of several physical units; the physical implementation of these logical units is not what matters most, and it is the combination of the functions they implement that is the key to solving the technical problem raised by the present invention.
  • In addition, in order to highlight the innovative part of the present invention, the above intelligent tracking dome cameras do not introduce units that are not closely related to solving the technical problem proposed by the invention; this does not mean that no other units exist. Although the invention has been illustrated and described with reference to certain preferred embodiments, those of ordinary skill in the art will understand that various changes in form and detail may be made without departing from the spirit and scope of the invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to the field of video surveillance and discloses an intelligent tracking dome camera and a tracking method thereof. In the invention, after a moving object is detected, the distance from the moving object to a designated point is calculated, and whether to trigger tracking is decided according to that distance; this reduces the amount of computation and makes the triggering of tracking more accurate. The distance information of any point in the current picture can be measured, so the distance information obtained is more comprehensive, effective and practical. There is no need to resort to an existing coordinate system or to rely on the relation between focusing and zooming, so external interference factors can be effectively excluded and adaptability is strong. Drawing the equidistance curves and the alarm line that triggers tracking gives the user a more intuitive view of the alarm line. Keeping the tracked object at the center of the picture ensures continuous and effective tracking.

Description

Intelligent tracking dome camera and tracking method thereof
Technical Field
The present invention relates to the field of video surveillance, and in particular to security technology that uses an intelligent tracking dome camera for video surveillance.
Background Art
Intelligent video surveillance uses computer vision technology to process, analyze and understand video signals. Without human intervention, it automatically analyzes image sequences to locate, identify and track changes in the monitored scene, and on that basis analyzes and judges the behavior of targets, so that it can issue an alarm or provide useful information in time when an abnormal situation occurs, effectively assist security personnel in handling a crisis, and minimize false alarms and missed alarms. At present, the motion detection of automatic tracking dome cameras on the market is mainly based on the basic principles of image processing and pattern recognition: a dedicated image and video processing unit detects moving objects by frame differencing, optical flow and similar methods, and the dome camera then tracks the detected triggering object. The inventors of the present invention have found that the main drawbacks of this approach are, first, that the processing load of the algorithm is large, since an algorithm must simulate the current object's motion trajectory to complete the trigger judgment; second, that it is susceptible to interference and places rather strict requirements on the motion environment of the detected object; and finally, that it is not intuitive and cannot reflect useful information such as the current object's motion speed and actual height. A new video surveillance technology is therefore urgently needed that can greatly reduce algorithmic complexity and dependence on the scene, and whose triggered tracking is more accurate, effective and intuitive.
Summary of the Invention
The object of the present invention is to provide an intelligent tracking dome camera and a tracking method thereof that reduce the amount of computation and the dependence on the scene and effectively improve the accuracy, real-time performance and intuitiveness of automatic tracking. To solve the above technical problem, an embodiment of the present invention provides a tracking method for an intelligent tracking dome camera, including the following steps: detecting a moving object from the picture obtained by a video capture unit;
if a moving object is detected, calculating the distance from the moving object to a designated point; and if the periodically calculated distance span contains a preset alarm distance, tracking the moving object. An embodiment of the present invention also provides an intelligent tracking dome camera, including:
a video capture unit;
a detection unit configured to detect a moving object from the picture obtained by the video capture unit;
a ranging unit configured to calculate the distance from the moving object to a designated point when the detection unit detects a moving object;
and a tracking unit configured to track the moving object when the distance span periodically calculated by the ranging unit contains the preset alarm distance. Compared with the prior art, the main differences and effects of the embodiments of the present invention are as follows:
after a moving object is detected, the distance from the moving object to the designated point is calculated and whether to trigger tracking is decided according to that distance, which reduces the amount of computation and makes the triggering of tracking more accurate. Further, the ranging method based on the imaging principle of the video capture unit and the coordinate conversion principle can measure the distance information of any point in the current picture; it can measure not only the distance from the object to be measured to the video capture unit, but also the height of an object on the ground plane in the picture or the distance between any two points on the ground plane, so the distance information obtained is more comprehensive, effective and practical. There is no need to resort to an existing coordinate system or to rely on the relation between focusing and zooming, so external interference factors can be effectively excluded and adaptability is strong.
Further, drawing the equidistance curves and the alarm line that triggers tracking gives the user a more intuitive view of the alarm line. Further, the present invention judges whether the alarm line is triggered by calculating distance rather than by using image processing and video detection algorithms based on the monitoring picture, which greatly reduces algorithmic complexity and dependence on the scene and effectively improves the accuracy, real-time performance and intuitiveness of automatic tracking. Further, reducing the tracking magnification of the video capture unit or increasing the tracking rate of the dome camera according to the object's motion speed ensures continuous and effective tracking. Further, when tracking is interrupted, changing the tracking magnification according to the height change of the tracked object before and after the interruption keeps the tracked object at the center of the picture for effective tracking.
Brief Description of the Drawings
FIG. 1 is a schematic flow chart of a tracking method for an intelligent tracking dome camera in the first embodiment of the present invention;
FIG. 2 is a schematic flow chart of a tracking method for an intelligent tracking dome camera in the second embodiment of the present invention;
FIG. 3 is a top-view model of the tracking dome camera;
FIG. 4 is a schematic diagram of the distance range of the current picture of the tracking dome camera; FIG. 5 is a schematic diagram of the equidistance curves;
FIG. 6 is a schematic diagram of the trigger equidistance curve;
FIG. 7 is a calibration geometry model of the video capture unit; FIG. 8 is a schematic diagram of a moving object triggering the alarm curve; FIG. 9 is a schematic structural view of an intelligent tracking dome camera in the third embodiment of the present invention; and FIG. 10 is a schematic structural view of an intelligent tracking dome camera in the fourth embodiment of the present invention.
Detailed Description of the Embodiments
In the following description, numerous technical details are set forth so that the reader may better understand the present application. Those of ordinary skill in the art will appreciate, however, that the technical solutions claimed in the present application can be implemented even without these technical details and with various changes and modifications based on the following embodiments. To make the objects, technical solutions and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.

A first embodiment of the present invention relates to a tracking method for an intelligent tracking dome camera. FIG. 1 is a schematic flow chart of the method. Specifically, as shown in FIG. 1, the tracking method mainly includes the following steps. In step 101, a moving object is detected from the picture obtained by the video capture unit. In the embodiments of the present invention, the video capture unit refers to the component of the dome camera that acquires the video picture; it may also have other names, such as camera, picture capture module, and so on. The flow then proceeds to step 102, where it is judged whether a moving object has been detected; if yes, the flow proceeds to step 103, otherwise it returns to step 102. If a moving object is detected, the distance from the moving object to a designated point is calculated. In this embodiment, the designated point is preferably the intelligent tracking dome camera itself. It will be appreciated that in other examples of the invention the designated point may also be a point other than the dome camera, for example the location of a target to be protected, such as the center of a door, the center of an exhibit, the seat of an important person, and so on. In step 103, the distance from the moving object to the designated point is calculated. The flow then proceeds to step 104, where it is judged whether the periodically calculated distance span contains the preset alarm distance. For example, if the distance is calculated periodically, the N-th distance is X, the (N+1)-th distance is Z, the alarm distance is Y, and X > Y > Z, then the periodically calculated distance span is judged to contain the preset alarm distance. If yes, the flow proceeds to step 105; if no, it returns to step 104. If the periodically calculated distance span contains the preset alarm distance, the moving object is tracked. In step 105, the moving object is tracked, after which the flow ends. After a moving object is detected, the distance from the moving object to the designated point is calculated and whether to trigger tracking is decided from that distance, which reduces the amount of computation and makes the triggering of tracking more accurate.

A second embodiment of the present invention relates to a tracking method for an intelligent tracking dome camera. FIG. 2 is a schematic flow chart of the method. The second embodiment improves on the first embodiment, the main improvement being that the coordinates of a point of the image coordinate system in the world coordinate system are calculated by the following formula:

X = (Z - H)·u / (v·cosΦ - f·sinΦ)
Y = [Z·(v·sinΦ + f·cosΦ) - H·v/sinΦ] / (v·cosΦ - f·sinΦ)        (3)
Z = Z

and the distance from the moving object to the designated point is then calculated from the coordinates in the world coordinate system, where f is an internal parameter, namely the focal length of the video capture unit; H and Φ are external parameters, namely the assumed height of the video capture unit and the vertical angle between the video capture unit and the ground plane; (u, v) are point coordinates in the image coordinate system; and (X, Y, Z) are point coordinates in the world coordinate system. Specifically, as shown in FIG. 2, the tracking method mainly includes the following steps. In step 201, the tracking dome camera is installed and the dome height H is obtained. The flow then proceeds to step 202, where a tracking strategy is formulated, including determining the trigger tracking manner and the trigger tracking distance. The flow then proceeds to step 203, where the farthest and nearest points in the current picture are determined, the distance range is obtained, and equidistance curves are drawn.
When the tracking dome camera has moved to a given position, the distance range of the current picture must be obtained. The distance range of the current picture depends on the distances from the dome camera to the farthest and nearest points, analyzed as follows.
FIG. 3 is a top-view model of the tracking dome camera. Because not every pixel in the image has a projection on the ground plane, the position of the critical point that still intersects the ground plane must be found. In the figure, d is approximately the projection of the picture onto the vertical direction of the charge-coupled device ("CCD") of the video capture unit, f is the focal length, Φ is the horizontal angle of the dome camera, and d = f·tg(Φ). Assuming that the vertical dimension of the CCD of the video capture unit is D, the image coordinates of the farthest imaging position on the CCD plane can be derived from the relation between d and D, namely:
if f·tg(Φ) > D, the normalized image coordinates of the farthest point are whichever of A(-0.5, 0.5) and A'(0.5, 0.5) is farther away;
otherwise the normalized image coordinates of the farthest point are whichever of B(-0.5, f·tg(Φ)/D) and B'(0.5, f·tg(Φ)/D) is farther away. The point nearest to the dome camera is whichever of the lower corner points C(-0.5, -0.5) and C'(0.5, -0.5) is closer to the dome camera. As shown in FIG. 4, substituting the coordinates of A, A', B, B', C and C' into ranging formula (3) yields the farthest and nearest distances of the current picture.
According to the distance range of the current picture, ranging formula (3) is applied inversely under the equal-distance principle to obtain series of points at equal distances, so that several equidistance curves are drawn on the current picture; the points on any one curve are all at the same distance from the dome camera. As shown in FIG. 5, A, B, C, D and E are equidistance curves, and the distance between every two adjacent curves is fixed.
The flow then proceeds to step 204, where an equidistance curve for the trigger tracking distance is drawn in the current picture according to actual needs.
After the equidistance curves are completed, the trigger equidistance curve can be drawn visually in the picture according to the required trigger distance, as shown in FIG. 6, where the red curve is the trigger equidistance curve.
Drawing the equidistance curves and the alarm line that triggers tracking gives the user a more intuitive view of the alarm line.
In addition, it will be appreciated that the distance may be the distance from the moving object to the video capture unit, or the distance from the moving object to another designated point. The equidistance curves serve only as an aid and may be omitted; according to actual needs, the alarm line that triggers tracking may likewise be omitted. The equidistance curves may be drawn first and the alarm line afterwards, or the alarm line may be drawn directly. The flow then proceeds to step 205, where the moving-object detection algorithm is started to obtain the distance change information of all moving objects in the current picture. Specifically, as shown in FIG. 7, there are an image coordinate system UO'V, a video capture unit coordinate system XcYcOcZc, and a world coordinate system XYOZ, where O'Oc is the optical axis of the video capture unit, Oc is the focal point of the video capture unit, O' is the intersection of the optical axis and the CCD imaging target surface, and O is the intersection of the optical axis and the ground plane. The figure also shows the internal and external parameters of the video capture unit: f is the internal parameter, namely the focal length of the video capture unit; H and Φ are the external parameters, namely the assumed height of the video capture unit and the vertical angle between the video capture unit and the ground plane. Let the point coordinates in the image coordinate system be (u, v), the point coordinates in the video capture unit coordinate system be (Xc, Yc, Zc), and the point coordinates in the world coordinate system be (X, Y, Z). According to the imaging principle of the video capture unit, the conversion relation between the image coordinate system and the video capture unit coordinate system is given by equation (1) (presented in the original publication as a figure), where u, v and f are the unit-normalized coordinates and focal length. From the translation and rotation relations between three-dimensional coordinate systems, the three-dimensional conversion relation between the video capture unit coordinate system and the world coordinate system is obtained, namely:

Xc = X
Yc = Z·cosΦ + Y·sinΦ        (2)
Zc = Z·sinΦ - Y·cosΦ - H/sinΦ

Combining equations (1) and (2) gives the coordinates of a point of the image coordinate system in the world coordinate system:

X = (Z - H)·u / (v·cosΦ - f·sinΦ)
Y = [Z·(v·sinΦ + f·cosΦ) - H·v/sinΦ] / (v·cosΦ - f·sinΦ)        (3)
Z = Z

This establishes the conversion relation between the image coordinate system and the world coordinate system. With this ranging method, based on the imaging principle of the video capture unit and the coordinate conversion principle, the distance information of any point in the current picture can be measured: not only the distance from the object to be measured to the video capture unit, but also the height of an object on the ground plane in the picture or the distance between any two points on the ground plane. The distance information obtained is therefore more comprehensive, effective and practical. There is no need to resort to an existing coordinate system or to rely on the relation between focusing and zooming, so external interference factors can be effectively excluded and adaptability is strong.
In addition, it will be appreciated that in other examples of the invention ranging may also be performed by conversion between other coordinate systems, for example polar coordinates, in which case the corresponding formulas differ. The flow then proceeds to step 206, where it is judged whether an object is crossing the line. If yes, the flow proceeds to step 207; if no, it returns to step 206. In step 207, the tracking algorithm is started to track the line-crossing object. The distance from the moving object to the dome camera is calculated periodically, and if the moving object crosses the alarm line, tracking is performed. The period here may be any interval that meets the need, such as 1 second, 2 seconds or 3 seconds. Specifically, as shown in FIG. 8, if a moving object is at position C in the picture at time t and has moved to position C' at time t', the algorithm detects the change in the object's position, marks it as an object that has triggered the alarm curve, and tracks it. The present invention judges whether the alarm line is triggered by calculating distance rather than by using image processing and video detection algorithms based on the monitoring picture, which greatly reduces algorithmic complexity and dependence on the scene and effectively improves the accuracy, real-time performance and intuitiveness of automatic tracking. The flow then proceeds to step 208, where it is judged whether the proportion of the tracked object in the picture is appropriate. If yes, the flow proceeds to step 209; if no, it proceeds to step 210. In step 209, tracking is maintained at the current, suitable magnification, after which the flow ends. In step 210, the lens is zoomed to obtain a suitable tracking magnification, after which the flow returns to step 208. In the embodiments of the present invention, "magnification" or "tracking magnification" refers to the zoom ratio applied by the lens to the video image, i.e. the "Zoom" function. Once an object touching the line is detected, the automatic tracking algorithm is started to track it. In actual tracking, an important question is how to determine the tracking magnification and tracking speed; this can also be solved with the ranging algorithm. (1) From the distance of the detected object to the dome camera, the object's motion trend and speed can be obtained, and the tracking rate is adjusted appropriately according to changes in that trend and speed to ensure continuous and effective tracking. The specific steps are as follows: first, at a given magnification the dome camera has an optimal tracking speed in each of the horizontal and vertical directions,
Va and Vb. These speeds are related to the performance of the motor, timing belt and other components used in the moving part of the dome camera, and may be used as empirical values in this patent. Once tracking of the object has started, the distance from the current moving object to the dome camera can be obtained in real time, and from the change of this distance per unit time an approximately uniform motion speed of the object can be derived. According to the change of the distance between the tracked object and the dome camera per unit time and the change of the dome camera's position during that interval, the velocity of the tracked object is decomposed into the direction perpendicular to the equidistance lines and the direction parallel to them; the two component speeds, denoted Vh and Vv, are obtained and converted into the horizontal and vertical motion speeds Vha and Vvb of the dome camera. Vha and Vvb are compared with the optimal tracking speeds Va and Vb at the current magnification. If the difference is within an acceptable range, the dome camera keeps tracking at the existing speed and magnification; if the speed is too high, the dome magnification is reduced; if the speed is too low, the dome magnification is increased. The amount by which the magnification is reduced or increased is approximately linearly related to the change in the moving speed of the tracked object.
The object's motion speed is thus calculated from the difference of the detected distances to the dome camera per unit time, and the magnification of the video capture unit is reduced or the tracking rate of the dome camera increased according to that speed, which ensures continuous and effective tracking. The unit time may be 0.5 seconds, 1 second, 2 seconds, and so on.
(2) From the object-height calculation, the actual height information of the currently tracked object can be obtained and, combined with the proportion of the picture the object occupies, used to adjust the zoom rate of the dome camera so that the tracked object remains at the center of the picture for effective tracking. This algorithm is mainly used when tracking is interrupted. The specific steps are as follows: when the tracking algorithm is tracking an object, the actual height of the object can be obtained in real time. If tracking is not interrupted, the height of the object should not change noticeably, and the algorithm leaves the tracking magnification unchanged. If tracking is interrupted, the algorithm judges the height change of the tracked object before and after the interruption: if the tracked object has become lower, the tracking magnification is increased; if it has become taller, the tracking magnification is reduced; if the height has not changed, the tracking magnification is maintained.
In this way the actual height information of the currently tracked object is obtained from the object-height calculation and, combined with the proportion of the picture the object occupies, used to adjust the zoom rate of the dome camera, ensuring that the tracked object stays at the center of the picture for effective tracking. In addition, it will be appreciated that this algorithm is mainly used when tracking is interrupted, but it can also be used in other situations. The method embodiments of the present invention may all be implemented in software, hardware, firmware, and so on. Regardless of whether the invention is implemented in software, hardware or firmware, the instruction code may be stored in any type of computer-accessible memory (for example permanent or modifiable, volatile or non-volatile, solid-state or non-solid-state, fixed or replaceable media, and so on). Likewise, the memory may be, for example, Programmable Array Logic ("PAL"), Random Access Memory ("RAM"), Programmable Read-Only Memory ("PROM"), Read-Only Memory ("ROM"), Electrically Erasable Programmable ROM ("EEPROM"), a magnetic disk, an optical disc, a Digital Versatile Disc ("DVD"), and so on. A third embodiment of the present invention relates to an intelligent tracking dome camera. FIG. 9 is a schematic structural view of this dome camera. As shown in FIG. 9, the intelligent tracking dome camera mainly includes: a video capture unit; a detection unit configured to detect a moving object from the picture obtained by the video capture unit; a ranging unit configured to calculate the distance from the moving object to a designated point when the detection unit detects a moving object; and a tracking unit configured to track the moving object when the distance span periodically calculated by the ranging unit contains the preset alarm distance.
This embodiment can be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here; correspondingly, the related technical details mentioned in this embodiment can also be applied in the first embodiment. A fourth embodiment of the present invention relates to an intelligent tracking dome camera. FIG. 10 is a schematic structural view of this dome camera. The fourth embodiment improves on the third embodiment, the main improvement being that the ranging unit first calculates the coordinates of the image coordinate system in the world coordinate system by the following formula:

X = (Z - H)·u / (v·cosΦ - f·sinΦ)
Y = [Z·(v·sinΦ + f·cosΦ) - H·v/sinΦ] / (v·cosΦ - f·sinΦ)        (3)
Z = Z

and then calculates the distance from the moving object to the designated point from the coordinates in the world coordinate system, where f is an internal parameter, namely the focal length of the video capture unit; H and Φ are external parameters, namely the assumed height of the video capture unit and the vertical angle between the video capture unit and the ground plane; the point coordinates in the image coordinate system are (u, v), the point coordinates in the video capture unit coordinate system are (Xc, Yc, Zc), and the point coordinates in the world coordinate system are (X, Y, Z). Formula (3) is only a preferred ranging formula of the present invention; other ranging formulas of a different form may also be used, for example a ranging formula of another form derived in polar coordinates by a derivation analogous to that of formulas (1) and (2) above. As shown in FIG. 10, the intelligent tracking dome camera further adds the following units on the basis of the third embodiment: a distance range obtaining unit configured to determine the farthest and nearest points in the current picture and obtain the distance range of the current picture; a line drawing unit configured to draw equidistance curves within the distance range of the current picture obtained by the distance range obtaining unit, and to draw an alarm line for trigger tracking according to the preset alarm distance; and a first dome adjustment unit configured to calculate the object's motion speed from the difference of the distances calculated by the ranging unit per unit time, and according to that speed to reduce the tracking magnification of the video capture unit or increase the tracking rate of the dome camera.
It also includes a second dome adjustment unit configured to obtain the actual height information of the current object from the object-height calculation, and to adjust the zoom rate of the dome camera in combination with the proportion of the picture the object occupies. This embodiment can be implemented in cooperation with the second embodiment. The related technical details mentioned in the second embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here; correspondingly, the related technical details mentioned in this embodiment can also be applied in the second embodiment. It should be noted that each unit mentioned in the intelligent tracking dome cameras of the present invention is a logical unit. Physically, a logical unit may be a single physical unit, part of a physical unit, or a combination of several physical units; the physical implementation of these logical units is not what matters most, and it is the combination of the functions they implement that is the key to solving the technical problem raised by the present invention. In addition, in order to highlight the innovative part of the present invention, the above intelligent tracking dome cameras do not introduce units that are not closely related to solving the technical problem proposed by the invention; this does not mean that the above intelligent tracking dome cameras contain no other units. Although the present invention has been illustrated and described with reference to certain preferred embodiments, those of ordinary skill in the art will understand that various changes in form and detail may be made without departing from the spirit and scope of the invention.

Claims

Claims
1. A tracking method for an intelligent tracking dome camera, characterized by including the following steps: detecting a moving object from the picture obtained by a video capture unit;
if a moving object is detected, calculating the distance from the moving object to a designated point; and if the periodically calculated distance span contains a preset alarm distance, tracking the moving object.
2. The tracking method for an intelligent tracking dome camera according to claim 1, characterized in that the designated point is the intelligent tracking dome camera itself.
3. The tracking method for an intelligent tracking dome camera according to claim 1, characterized in that the step of calculating the distance from the moving object to the designated point if a moving object is detected further includes the following sub-steps:
according to the following formula,

X = (Z - H)·u / (v·cosΦ - f·sinΦ)
Y = [Z·(v·sinΦ + f·cosΦ) - H·v/sinΦ] / (v·cosΦ - f·sinΦ)
Z = Z

calculating the coordinates of the moving object of the image coordinate system in the world coordinate system; and calculating the distance from the moving object to the designated point from the coordinates in the world coordinate system; where f is an internal parameter, namely the focal length of the video capture unit; H and Φ are external parameters, namely the assumed height of the video capture unit and the vertical angle between the video capture unit and the ground plane; (u, v) are point coordinates in the image coordinate system; and (X, Y, Z) are point coordinates in the world coordinate system.
4. The tracking method for an intelligent tracking dome camera according to any one of claims 1 to 3, characterized in that the step of detecting a moving object from the picture obtained by the video capture unit includes the following sub-steps:
determining the farthest and nearest points in the current picture and obtaining the distance range of the current picture; calculating and drawing equidistance curves according to the distance range on the display showing the current picture; and calculating and drawing, on the display, an alarm line for trigger tracking.
5. The tracking method for an intelligent tracking dome camera according to claim 4, characterized in that the step of determining the farthest and nearest points in the current picture and obtaining the distance range of the current picture further includes the following sub-steps: if f·tg(Φ) > D, the normalized image coordinates of the farthest point are whichever of A(-0.5, 0.5) and A'(0.5, 0.5) is farther away; otherwise the normalized image coordinates of the farthest point are B(-0.5, f·tg(Φ)/D) and B'(0.5, f·tg(Φ)/D); where f is the focal length, Φ is the horizontal angle of the dome camera, and D is the vertical dimension of the charge-coupled device of the video capture unit.
6. The tracking method for an intelligent tracking dome camera according to claim 5, characterized in that the step of tracking the moving object if the periodically calculated distance span contains the preset alarm distance includes the following sub-step: periodically calculating the distance from the moving object to the dome camera and, if the moving object crosses the alarm line, performing tracking.
7. The tracking method for an intelligent tracking dome camera according to claim 6, characterized in that the step of tracking the moving object further includes: calculating the object's motion speed from the difference of the detected distances to the dome camera per unit time, and changing the tracking magnification of the video capture unit or the tracking rate of the dome camera according to that speed.
8. The tracking method for an intelligent tracking dome camera according to claim 6, characterized in that the step of tracking the moving object further includes: if tracking is interrupted, judging the height change of the tracked object before and after the interruption; if the tracked object has become lower, increasing the tracking magnification; if the tracked object has become taller, reducing the tracking magnification; and if the height of the tracked object has not changed, maintaining the tracking magnification.
9. An intelligent tracking dome camera, characterized by including: a video capture unit;
a detection unit configured to detect a moving object from the picture obtained by the video capture unit;
a ranging unit configured to calculate the distance from the moving object to a designated point when the detection unit detects a moving object; and
a tracking unit configured to track the moving object when the distance span periodically calculated by the ranging unit contains a preset alarm distance.
10. The intelligent tracking dome camera according to claim 9, characterized in that the ranging unit calculates the distance from the moving object to the designated point in the following manner:
according to the following formula,

X = (Z - H)·u / (v·cosΦ - f·sinΦ)
Y = [Z·(v·sinΦ + f·cosΦ) - H·v/sinΦ] / (v·cosΦ - f·sinΦ)
Z = Z

calculating the coordinates of the image coordinate system in the world coordinate system; and calculating the distance from the moving object to the designated point from the coordinates in the world coordinate system; where f is an internal parameter, namely the focal length of the video capture unit; H and Φ are external parameters, namely the assumed height of the video capture unit and the vertical angle between the video capture unit and the ground plane; (u, v) are point coordinates in the image coordinate system; and (X, Y, Z) are point coordinates in the world coordinate system.
11. The intelligent tracking dome camera according to claim 9, characterized by further including: a distance range obtaining unit configured to determine the farthest and nearest points in the current picture and obtain the distance range of the current picture; and a line drawing unit configured to draw equidistance curves, within the distance range of the current picture obtained by the distance range obtaining unit, on the display showing the current picture, and to draw an alarm line for trigger tracking according to a preset alarm distance.
12. The intelligent tracking dome camera according to any one of claims 9 to 11, characterized by further including:
a first dome adjustment unit configured to calculate the object's motion speed from the difference of the distances calculated by the ranging unit per unit time, and to change the tracking magnification of the video capture unit or the tracking rate of the dome camera according to that speed.
13. The intelligent tracking dome camera according to any one of claims 9 to 11, characterized by further including:
a second dome adjustment unit configured to judge, when tracking is interrupted, the height change of the tracked object before and after the interruption, to increase the tracking magnification if the tracked object has become lower, to reduce the tracking magnification if the tracked object has become taller, and to maintain the tracking magnification if the height of the tracked object has not changed.
PCT/CN2012/076223 2011-08-16 2012-05-29 Intelligent tracking dome camera and tracking method thereof WO2013023474A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201110233861.0 2011-08-16
CN201110233861.0A CN102289820B (zh) 2011-08-16 2011-08-16 Intelligent tracking dome camera and tracking method thereof

Publications (1)

Publication Number Publication Date
WO2013023474A1 true WO2013023474A1 (zh) 2013-02-21

Family

ID=45336209

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/076223 WO2013023474A1 (zh) 2011-08-16 2012-05-29 Intelligent tracking dome camera and tracking method thereof

Country Status (2)

Country Link
CN (1) CN102289820B (zh)
WO (1) WO2013023474A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289820B (zh) * 2011-08-16 2014-04-02 杭州海康威视数字技术股份有限公司 智能跟踪球机及其跟踪方法
CN107846549A (zh) * 2016-09-21 2018-03-27 杭州海康威视数字技术股份有限公司 一种目标跟踪方法、装置及系统
CN107659790A (zh) * 2017-10-23 2018-02-02 上海集光安防科技股份有限公司 一种球机自动跟踪目标的方法
CN114071362A (zh) * 2021-11-05 2022-02-18 国网江苏省电力有限公司电力科学研究院 一种多目标动态监控方法、装置、设备及介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101098465A (zh) * 2007-07-20 2008-01-02 哈尔滨工程大学 Method for detecting and tracking moving targets in video surveillance
CN101616307A (zh) * 2009-07-08 2009-12-30 宝鸡市公安局 Video target tracking, selection and output method and system
CN101969548A (zh) * 2010-10-15 2011-02-09 中国人民解放军国防科学技术大学 Active video acquisition method and apparatus based on binocular cameras
CN102289820A (zh) * 2011-08-16 2011-12-21 杭州海康威视数字技术股份有限公司 Intelligent tracking dome camera and tracking method thereof

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100627841B1 (ko) * 2004-06-30 2006-09-25 에스케이 텔레콤주식회사 Method for providing a departure alarm service for a safe zone set from the past location information of a mobile terminal
CN1725266A (zh) * 2004-07-21 2006-01-25 上海高德威智能交通系统有限公司 Intelligent vehicle monitoring and recording system and method based on video triggering and speed measurement
US8761434B2 (en) * 2008-12-17 2014-06-24 Sony Computer Entertainment Inc. Tracking system calibration by reconciling inertial data with computed acceleration of a tracked object in the three-dimensional coordinate system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101098465A (zh) * 2007-07-20 2008-01-02 哈尔滨工程大学 Method for detecting and tracking moving targets in video surveillance
CN101616307A (zh) * 2009-07-08 2009-12-30 宝鸡市公安局 Video target tracking, selection and output method and system
CN101969548A (zh) * 2010-10-15 2011-02-09 中国人民解放军国防科学技术大学 Active video acquisition method and apparatus based on binocular cameras
CN102289820A (zh) * 2011-08-16 2011-12-21 杭州海康威视数字技术股份有限公司 Intelligent tracking dome camera and tracking method thereof

Also Published As

Publication number Publication date
CN102289820B (zh) 2014-04-02
CN102289820A (zh) 2011-12-21


Legal Events

Code 121: EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 12823360; Country of ref document: EP; Kind code of ref document: A1)
Code NENP: Non-entry into the national phase (Ref country code: DE)
Code 122: EP: PCT application non-entry in European phase (Ref document number: 12823360; Country of ref document: EP; Kind code of ref document: A1)