WO2020258889A1 - Tracking method for video tracking device, and video tracking device - Google Patents

Tracking method for video tracking device, and video tracking device

Info

Publication number
WO2020258889A1
Authority
WO
WIPO (PCT)
Prior art keywords
tracking
video image
target
video
tracking algorithm
Prior art date
Application number
PCT/CN2020/075481
Other languages
English (en)
Chinese (zh)
Inventor
高宗伟
Original Assignee
杭州海康微影传感科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康微影传感科技有限公司
Publication of WO2020258889A1 publication Critical patent/WO2020258889A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Definitions

  • This application relates to the field of security technology, and in particular to a tracking method of a video tracking device and a video tracking device.
  • Video tracking devices such as existing video trackers usually use a single tracking algorithm for target tracking.
  • The main tracking algorithms are the contrast tracking algorithm, the correlation tracking algorithm, and the binary tracking algorithm.
  • The correlation tracking algorithm can track multiple types of targets; when the tracked target has no clear boundaries, its motion is not very strong, and the scene is relatively complex, the tracking effect of the correlation tracking algorithm is good.
  • The contrast tracking algorithm can track fast-moving targets and adapts well to changes in target posture.
  • The binary tracking algorithm can automatically detect the target; its tracking gate adapts to the target size, its closed-loop speed is fast, its tracking is stable, and it is suitable for tracking aerial targets.
  • Because each tracking algorithm has its own preferred application scenarios, a video tracking device that uses a single tracking algorithm for target tracking will show different tracking effects when used in different scenarios.
  • the purpose of this application is to provide a tracking method for a video tracking device and a video tracking device, which can adaptively adjust the tracking algorithm according to scene changes, thereby ensuring the tracking effect.
  • an embodiment of the present application provides a tracking method for a video tracking device, the video tracking device supports multiple tracking algorithms; the method includes:
  • the shooting angle of the imaging device is adjusted according to the distance, so that the tracking target is at the center of the next video image captured by the imaging device.
  • an embodiment of the present application provides a video tracking device that supports multiple tracking algorithms;
  • the video tracking device includes a non-transitory computer-readable storage medium and a processor, wherein:
  • the non-transitory computer-readable storage medium is used to store instructions that can be executed by the processor, and when the instructions are executed by the processor, the processor is caused to:
  • the shooting angle of the imaging device is adjusted according to the distance, so that the tracking target is at the center of the next video image captured by the imaging device.
  • For each frame of video image captured by the imaging device, a tracking algorithm suitable for that frame is adaptively selected based on a parameter that characterizes the frame's scene complexity. The tracking target in the frame is then tracked with the selected algorithm, and the shooting angle of the imaging device is adjusted according to the distance between the tracking target and the center of the video image, so that the tracking target is at the center of the next video image captured by the imaging device. Since the tracking algorithm can be switched at any time according to the parameter characterizing each frame's scene complexity, the method adapts well to changes in the imaging device's shooting scene, thereby ensuring the video tracking effect.
  • FIG. 1 is a schematic diagram of the architecture of a video tracking system provided by an embodiment of the present application
  • FIG. 2 is a flowchart of a tracking method of a video tracking device provided by an embodiment of the present application
  • Fig. 3 is a schematic structural diagram of a video tracking device provided by an embodiment of the present application.
  • the tracking algorithm is adaptively adjusted according to the different complexity of each frame of video image captured by the imaging device, so as to avoid the difference in tracking effect caused by the change of the imaging device shooting scene.
  • Fig. 1 is a schematic diagram of the architecture of a video tracking system provided by an embodiment of the present application.
  • The video tracking system includes: an imaging device, a video tracking device, a display terminal, and a follow-up control device. They are introduced separately below.
  • The imaging device integrates an imaging core, which is used to shoot video; the imaging device sends each frame of video image captured by the imaging core to the video tracking device.
  • The imaging device not only sends each frame of video image captured by the imaging core to the video tracking device, but also sends the parameter that characterizes the scene complexity of each frame of video image to the video tracking device.
  • the parameters that characterize the scene complexity of each frame of video image may be represented by the definition evaluation parameter of each frame of video image.
  • the imaging core integrated with the imaging device may include a thermal imaging core and a visible light imaging core. Both the thermal imaging core and the visible light imaging core are used to capture video images, the former captures the thermal imaging video image, and the latter captures the visible light video image.
  • the imaging device can send both the thermal imaging video image captured by the thermal imaging core and the visible light video image captured by the visible light imaging core to the video tracking device.
  • The imaging device needs to send the thermal imaging video image to the video tracking device, and also needs to send the parameter that characterizes the scene complexity of the thermal imaging video image to the video tracking device.
  • When the video tracking device performs target tracking on the visible light video image, the imaging device needs to send the visible light video image to the video tracking device, and also needs to send the parameter representing the scene complexity of the visible light video image to the video tracking device.
  • Video tracking equipment can be implemented using a Digital Signal Processing (DSP) chip and supports multiple tracking algorithms.
  • For each frame of video image sent by the imaging device, the video tracking device selects an appropriate tracking algorithm based on the parameter that characterizes the frame's scene complexity, performs target tracking to obtain the position information of the tracking target in the frame, derives the tracking result from that position information, and then adjusts the shooting angle of the imaging device according to the tracking result, so that the tracking target is at the center of the video images subsequently captured by the imaging device.
  • the video tracking device can send the tracking result to the follow-up control device, and the follow-up control device adjusts the shooting angle of the imaging device.
  • the tracking result may be: the distance between the tracking target and the center of the video image in the video image calculated based on the position information.
  • The video tracking device performs target tracking on the thermal imaging video image sent by the imaging device, while the visible light video image sent by the imaging device is directly forwarded to the display terminal for display.
  • the imaging device and the video tracking device need to maintain timing synchronization.
  • The video tracking device can use the synchronization signal of the thermal imaging core or the visible light imaging core in the imaging device as its synchronization source and as the basis of its working sequence, so as to ensure that it is strictly synchronized with the thermal imaging core or the visible light imaging core.
  • On this basis, the video tracking device generates the working timing of each of its unit circuits, forming a unified timing synchronization system.
  • The follow-up control device executes the follow-up control operation according to the tracking result sent by the video tracking device: for example, it directly adjusts the shooting angle of the imaging device, or adjusts the shooting angle by moving and/or rotating itself, so as to ensure that the tracking target is located at the center of the video image captured by the imaging device.
  • Figure 2 is an example of target tracking of thermal imaging video images captured by the thermal imaging core in the imaging device.
  • The video tracking device can also perform target tracking on the visible light video image captured by the visible light imaging core in the imaging device.
  • the solution provided by the embodiment of the present application is applied to a video tracking device for tracking a tracking target in a video image.
  • FIG. 2 is a flowchart of a tracking method of a video tracking device provided in an embodiment of the present application. As shown in FIG. 2, the method includes the following steps 201 to 205.
  • Step 201 Obtain a video image and a parameter that characterizes the scene complexity of the video image from the imaging device.
  • the video tracking device can also obtain video images from the imaging device frame by frame.
  • The target tracking method is the same in either case, and can be implemented through steps 201 to 205 provided in the embodiments of the present application.
  • the definition evaluation parameter of the video image may be used as a parameter that characterizes the scene complexity of the video image.
  • the definition evaluation parameter of the video image can be expressed by using an active image format descriptor (Active Format Description, AFD).
  • The size of the AFD value can, to a certain extent, represent the complexity of the scene corresponding to the video image: the larger the AFD value, the higher the scene complexity of the video image; conversely, the smaller the AFD value, the lower the scene complexity. The complexity of the scene can therefore be determined from the AFD value of the video image.
  • For example, if the AFD value of the first frame of video image lies in [5000, 10000] and the AFD value of the second frame of video image lies in [500, 1000], then the scene complexity of the first frame is higher than that of the second frame.
  • the AFD value of a frame of video image can be obtained through the thermal imaging core.
  • The processor of the thermal imaging core performs high-frequency filtering on a frame of video image to obtain a set of data, each of which can be used to characterize the clarity of the video image: the larger a value in this set of data, the higher the clarity of the video image; conversely, the smaller the value, the lower the clarity.
  • Each data in this set of data can be referred to as the aforementioned AFD value.
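  • As an illustration of how such a clarity value could be computed, the following sketch sums the high-frequency energy left by a Laplacian high-pass filter. The Laplacian kernel and the absolute-sum aggregation are assumed choices for illustration; the patent does not specify the exact filter used by the thermal imaging core's processor.

```python
import numpy as np

def afd_value(image):
    """Sum of high-frequency energy of a grayscale image, used here as a
    stand-in for the AFD (definition/clarity) parameter described above.
    The Laplacian kernel is an assumed choice of high-pass filter."""
    img = image.astype(np.float64)
    # 3x3 Laplacian high-pass kernel
    k = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=np.float64)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    # valid-mode 2-D convolution, accumulated one kernel tap at a time
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return float(np.abs(out).sum())
```

A flat image yields 0, and the value grows with the amount of fine detail, matching the behavior described for the AFD value.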
  • the processor of the thermal imaging core may be a Field Programmable Gate Array (FPGA).
  • the imaging device may integrate a thermal imaging core, a visible light core, or other imaging cores capable of performing video shooting, and use the integrated imaging core to capture video images and send each frame of the captured video images to Video tracking equipment.
  • The imaging device also analyzes each frame of video image captured by the imaging core to determine its definition evaluation parameter, and sends that parameter to the video tracking device as the parameter characterizing the frame's scene complexity.
  • When only one imaging core is integrated in the imaging device, after the imaging device sends each frame of video image captured by the imaging core to the video tracking device, the video tracking device needs, on the one hand, to track each frame of video image, and on the other hand can also send each frame to the display terminal for display.
  • multiple imaging cores can be integrated in the imaging device, the imaging device sends each frame of video image captured by one of the imaging cores to the video tracking device, and the video tracking device performs target tracking on each frame of video image;
  • each frame of video images captured by other imaging cores can be sent to a video tracking device, and the video tracking device performs other video processing on each frame of video images captured by other imaging cores, for example, sending them to a display terminal for display.
  • a thermal imaging core and a visible light imaging core are integrated in the imaging device.
  • the imaging device integrating the visible light imaging core and the thermal imaging core performs video shooting of the current scene, where the thermal imaging core in the imaging device captures the thermal imaging video image of the current scene, and the visible light imaging core in the imaging device captures Obtain the visible light video image of the current scene.
  • The imaging device also analyzes the thermal imaging video image of the current scene to determine its definition evaluation parameter.
  • The imaging device sends each frame of thermal imaging video image captured by the thermal imaging core and each frame of visible light video image captured by the visible light imaging core to the video tracking device, and also sends the definition evaluation parameter of each frame of thermal imaging video image to the video tracking device as the parameter characterizing that frame's scene complexity.
  • Step 202 Select a tracking algorithm suitable for the video image from among multiple tracking algorithms based on the parameter that characterizes the scene complexity of the video image.
  • The video tracking device can support multiple tracking algorithms, such as the contrast tracking algorithm, the correlation tracking algorithm, and the binary tracking algorithm.
  • the parameter value range corresponding to each tracking algorithm can be preset.
  • For each tracking algorithm, the scene complexity of its applicable shooting scenes is consistent with the scene complexity represented by its corresponding parameter value range.
  • the definition evaluation parameter of the video image can be used as a parameter that characterizes the scene complexity of the video image.
  • The preset parameter value range corresponding to each tracking algorithm is the definition evaluation parameter value range corresponding to that algorithm.
  • The AFD values output by the imaging device can be divided into multiple levels by order of magnitude: ..., the (n-1)th level, the nth level, the (n+1)th level, ..., where n is the serial number of the level.
  • When the AFD value is near the (n-1)th level, that is, when the AFD value is near 40,000 (for example, 40,000 ± 15,000), the current scene is a simple scene, such as a sky scene or a sea scene, and it is more suitable for the video tracking device to select the binary tracking algorithm for target tracking.
  • When the AFD value is near the nth level, the video tracking device chooses the contrast tracking algorithm for target tracking.
  • When the AFD value is near the (n+1)th level, that is, when the AFD value is near 100,000 (for example, 100,000 ± 15,000), the current scene is very complex, and it is more suitable for the video tracking device to select the correlation tracking algorithm for target tracking.
  • For example, the following parameter value ranges can be set: ..., (25000, 55000), (55000, 85000), (85000, 115000), ...; among them, the parameter value range corresponding to the binary tracking algorithm is (25000, 55000), the parameter value range corresponding to the contrast tracking algorithm is (55000, 85000), and the parameter value range corresponding to the correlation tracking algorithm is (85000, 115000).
  • the video tracking device determines the binary tracking algorithm as a tracking algorithm suitable for the frame of thermal imaging video image, and uses the binary tracking algorithm to track the tracking target in the frame of thermal imaging video image.
  • After obtaining a frame of video image and its definition evaluation parameter from the imaging device, the video tracking device compares the definition evaluation parameter with the parameter value range corresponding to each tracking algorithm, finds the parameter value range that the definition evaluation parameter falls into, and determines the tracking algorithm corresponding to that range as the tracking algorithm suitable for the frame of video image.
  • the parameter value range corresponding to the definition evaluation parameter of the video image may be the parameter value range to which the definition evaluation parameter of the video image belongs.
  • this step 202 a specific method for selecting a tracking algorithm suitable for a video image from among multiple tracking algorithms based on a parameter that characterizes the scene complexity of the video image is shown in the following S11-S12.
  • The parameter value ranges corresponding to the various tracking algorithms are: Algorithm 1, (25000, 55000); Algorithm 2, (55000, 85000); Algorithm 3, (85000, 115000).
  • the value range of the parameter to which 44000 belongs is: (25000, 55000).
  • The tracking algorithm corresponding to the parameter value range (25000, 55000) is Algorithm 1, so the tracking algorithm suitable for the video image is Algorithm 1.
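  • The range-based selection described above can be sketched as a simple lookup. The algorithm names and range values below come from the example in this section; a real device would configure its own ranges.

```python
# Parameter ranges taken from the example above (open intervals).
ALGORITHM_RANGES = [
    ("binary",      25000,  55000),
    ("contrast",    55000,  85000),
    ("correlation", 85000, 115000),
]

def select_tracking_algorithm(afd):
    """Return the tracking algorithm whose preset definition-evaluation
    parameter range contains the given AFD value, or None if no range
    matches."""
    for name, lo, hi in ALGORITHM_RANGES:
        if lo < afd < hi:
            return name
    return None
```

For an AFD value of 44000 this returns `"binary"`, matching the worked example above.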
  • Step 203 Determine the tracking target in the video image according to the tracking algorithm suitable for the video image.
  • a frame of video image includes more than one type of object.
  • When using the contrast tracking algorithm to track a frame of video image, it is necessary to extract, from the frame, the tracking information for the contrast tracking algorithm corresponding to each detected target, for example edge information, contour length, area, center of gravity, and/or centroid. The tracking information of all detected targets is then compared with the tracking information of the detected target in the previous frame of video image, and the detected target whose tracking information has the greatest degree of matching with that of the target detected in the previous frame is determined as the tracking target in the current frame.
  • Before extracting the tracking information for the contrast tracking algorithm corresponding to each detected target from a frame of video image, the frame can also be preprocessed; the specific method is shown in X1-X3 below.
  • X1. Noise removal. Gaussian filtering can be used to remove scattered noise in the video image.
  • X2. Edge detection. Edge detection mainly identifies pixels with obvious brightness changes in the video image; edge detection algorithms can be divided into two types, search-based and zero-crossing-based.
  • X3. Determine the segmentation threshold range of the edge-detected video image, and perform binarization processing on the edge-detected video image according to the segmentation threshold range.
  • the determination of the segmentation threshold range is for the subsequent binarization of the edge-detected video image.
  • the segmentation threshold range can be determined according to the gray limit value of the video image and the maximum video signal amplitude.
  • the gray limit value can be the maximum gray value or the minimum gray value.
  • where P is the gray limit value of the video image, V_P is the maximum video signal amplitude of the video image (for example, 700 mV), and the preset contrast parameter value may range from 5% to 15% of the value of V_P; the segmentation thresholds T_V_min and T_V_max are derived from these quantities.
  • The gray values of pixels whose gray value is lower than T_V_min can be uniformly set to 0, the gray values of pixels whose gray value is greater than T_V_max can be uniformly set to 255, and the gray values of pixels between T_V_min and T_V_max can remain unchanged.
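  • The X1-X3 preprocessing chain can be sketched as follows. The 3x3 Gaussian kernel, the Sobel operator (a search-based edge detector), and applying the three-way thresholding to the gradient magnitude are all assumed choices for illustration; the patent leaves the concrete operators open.

```python
import numpy as np

def convolve2d(img, k):
    """Valid-mode 2-D convolution helper (no padding)."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * img[i:i + h - kh + 1, j:j + w - kw + 1]
    return out

def preprocess(img, t_min, t_max):
    """X1-X3 sketch: Gaussian denoise, Sobel edge magnitude, then the
    three-way thresholding described in the text."""
    img = img.astype(np.float64)
    # X1: 3x3 Gaussian smoothing removes scattered noise
    g = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=np.float64) / 16.0
    smooth = convolve2d(img, g)
    # X2: search-based edge detection via Sobel gradient magnitude
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    gx = convolve2d(smooth, sx)
    gy = convolve2d(smooth, sx.T)
    edges = np.hypot(gx, gy)
    # X3: binarize with the segmentation threshold range [t_min, t_max]:
    # below t_min -> 0, above t_max -> 255, in between unchanged
    out = edges.copy()
    out[edges < t_min] = 0.0
    out[edges > t_max] = 255.0
    return out
```

A flat region ends up at 0 while strong edges saturate to 255, so the detected targets stand out for the subsequent feature extraction.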
  • After the preprocessing, the tracking information for the contrast tracking algorithm corresponding to each detected target can be extracted from the preprocessed video image, so that the tracking target can be screened out of all detected targets in the video image according to the extracted tracking information.
  • the tracking target in the video image is determined according to the tracking algorithm suitable for the video image, which specifically includes:
  • the tracking target is selected from all the detected targets.
  • When the contrast tracking algorithm is used to track the target, the tracking information of the detected target in the previous frame of video image is needed; but when target tracking starts from the first frame of video image, there is no previous frame.
  • the tracking target can be determined in the first frame of video image by manual designation.
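  • For later frames, the "greatest matching degree" comparison against the previous frame's target can be sketched as follows. Representing each detection by a numeric feature vector (for example area, contour length, centroid coordinates) and using negative Euclidean distance as the matching score are assumptions for illustration; the patent does not fix the matching measure.

```python
import numpy as np

def match_score(feat_a, feat_b):
    """Similarity between two detections described by feature vectors such
    as (area, contour_length, centroid_x, centroid_y); higher is better.
    Negative Euclidean distance is an assumed metric."""
    a = np.asarray(feat_a, dtype=np.float64)
    b = np.asarray(feat_b, dtype=np.float64)
    return -float(np.linalg.norm(a - b))

def pick_tracking_target(detections, previous_target):
    """Return the detection whose tracking information best matches the
    target detected in the previous frame."""
    return max(detections, key=lambda d: match_score(d, previous_target))
```

The detection closest in feature space to the previous frame's target is selected as the tracking target in the current frame.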
  • When using the binary tracking algorithm to track a frame of video image, it is necessary to extract from the video image the tracking information for the binary tracking algorithm corresponding to each detected target; the aforementioned tracking information can include edge information, contour length, area, center of gravity, and/or centroid. The tracking information of all detected targets is then compared with that of the detected target in the previous frame of video image, and the detected target whose tracking information has the greatest degree of matching with that of the target detected in the previous frame is determined as the tracking target in the current frame.
  • Before extracting the tracking information for the binary tracking algorithm corresponding to each detected target from a frame of video image, the frame can also be preprocessed; the preprocessing method is the same as in the contrast tracking algorithm. After the preprocessing, area filling also needs to be performed for each detected target in the preprocessed frame; area filling makes each detected target stand out more in the video image.
  • The tracking information for the binary tracking algorithm corresponding to each detected target can then be extracted from the preprocessed and area-filled video image, so that the tracking target can be screened out of all detected targets in the frame according to the extracted tracking information.
  • the tracking target in the video image is determined according to the tracking algorithm suitable for the video image, which specifically includes the following S31 and S32.
  • the tracking target is selected from all the detection targets.
  • When there is no previous frame of video image, the tracking target in the first frame of video image may be determined by manual designation.
  • the video image can also be preprocessed, and the preprocessing method is the same as the preprocessing method in the contrast tracking algorithm.
  • the pre-selected template image containing the tracking target can be preset or extracted from previous video images.
  • The image containing the tracking target can be extracted from the video image in which the tracking target first appears and used as the template image, or extracted from the previous frame of video image as the template image.
  • the tracking target in the video image is determined according to the tracking algorithm suitable for the video image, which specifically includes the following S41.
  • a detection target with the greatest degree of matching with the template image is selected from all detection targets in the video image, and the screened detection target is determined as the tracking target of the video image.
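  • The template matching step can be sketched with zero-mean normalized cross-correlation (NCC), a common choice for measuring the "greatest degree of matching with the template image"; NCC itself is an assumed metric, not one named by the patent.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between a candidate patch and
    the template; 1.0 means a perfect (brightness-offset-invariant) match."""
    p = patch.astype(np.float64) - patch.mean()
    t = template.astype(np.float64) - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def best_match(image, template):
    """Slide the template over the image and return the top-left corner of
    the highest-NCC window, i.e. the matched target position."""
    th, tw = template.shape
    ih, iw = image.shape
    best, pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            s = ncc(image[y:y + th, x:x + tw], template)
            if s > best:
                best, pos = s, (y, x)
    return pos, best
```

The window with the highest score corresponds to the detection target with the greatest matching degree, which is then taken as the tracking target.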
  • Step 204 Extract the position information of the tracking target in the video image, and calculate the distance between the tracking target and the center of the video image based on the position information.
  • the location information of the tracking target mainly includes information such as the height, width, coordinates of the tracking target.
  • The height and width of the tracking target can be determined from the projections of the target onto the x-axis and y-axis of the coordinate system. For example, if the projection on the x-axis falls into the interval [x1, x2] and the projection on the y-axis falls into the interval [y1, y2], it can be determined that the width of the detected target is x2-x1, the height is y2-y1, and the coordinates of the tracking target's center point are ((x1+x2)/2, (y1+y2)/2).
  • the distance between the tracking target and the center of the video image can be obtained by calculating the distance between the coordinates.
  • the center of gravity and/or centroid of the tracking target can also be used as the location information of the tracking target.
  • The location of a specific point on the tracking target can also be used as the location information of the tracking target, such as a corner point or a protruding end point on the edge of the target.
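  • Step 204 reduces to a little arithmetic; a minimal sketch, using the bounding-box center formula above and a Euclidean distance to the image center:

```python
def center_offset(x1, x2, y1, y2, img_w, img_h):
    """Target center from its bounding box, plus its offset (dx, dy) and
    Euclidean distance from the image center, as in step 204."""
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0      # target center point
    dx, dy = cx - img_w / 2.0, cy - img_h / 2.0    # offset from image center
    return (cx, cy), (dx, dy), (dx * dx + dy * dy) ** 0.5
```

The returned distance (and, in practice, its direction) is what step 205 passes on to the follow-up control device.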
  • Step 205 Adjust the shooting angle of the imaging device according to the above distance, so that the tracking target is at the center of the next video image shot by the imaging device.
  • A specific implementation of adjusting the shooting angle of the imaging device according to the above distance is: send the distance between the tracking target and the center of the video image to the follow-up control device equipped with the video tracking device, so that the follow-up control device performs a follow-up control operation according to the distance to adjust the shooting angle of the imaging device.
  • The imaging device and the video tracking device are very close to each other or directly integrated; both are installed on the follow-up control device and move with it.
  • After the video tracking device determines the distance between the tracking target in the video image of the current scene and the center of the video image, it can send this distance to the follow-up control device. The follow-up control device can drive the imaging device to move by controlling its own movement, or directly control the imaging device to rotate or move, so that the tracking target is located at the center of the next frame of video image captured by the imaging device.
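  • One way the follow-up control device could turn a pixel offset into pan/tilt commands is via a pinhole-camera model; the fields of view and this control law are assumptions for illustration, since the patent does not specify how the follow-up control operation is computed.

```python
import math

def pantilt_correction(dx, dy, img_w, img_h, hfov_deg, vfov_deg):
    """Convert the target's pixel offset (dx, dy) from the image center into
    pan/tilt angle corrections, assuming a pinhole camera with the given
    horizontal/vertical fields of view."""
    # focal lengths in pixels implied by the fields of view
    fx = (img_w / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    fy = (img_h / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)
    pan = math.degrees(math.atan2(dx, fx))
    tilt = math.degrees(math.atan2(dy, fy))
    return pan, tilt
```

A target at the right edge of a 640-pixel-wide image with a 60° horizontal field of view maps to a 30° pan correction, i.e. half the field of view, as expected.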
  • For each frame of video image captured by the imaging device, a tracking algorithm suitable for the frame is adaptively selected based on the parameter that characterizes the frame's scene complexity; the selected tracking algorithm tracks the tracking target in the frame, and the shooting angle of the imaging device is adjusted according to the distance between the tracking target and the center of the video image, so that the tracking target is at the center of the next video image captured by the imaging device. Since the tracking algorithm can be switched at any time according to the parameter characterizing each frame's scene complexity, the method adapts well to changes in the imaging device's shooting scene, thereby ensuring the video tracking effect.
  • the tracking method of the video tracking device provided by the embodiment of the application is described in detail above, and the embodiment of the application also provides a video tracking device, which is described in detail below with reference to FIG. 3.
  • FIG. 3 is a schematic structural diagram of a video tracking device provided by an embodiment of the present application.
  • the video tracking device 300 includes a processor 301 and a non-transitory computer-readable storage medium 302, wherein,
  • the non-transitory computer-readable storage medium 302 is configured to store instructions that can be executed by the processor 301, and when the instructions are executed by the processor 301, the processor 301 is caused to:
  • the shooting angle of the imaging device is adjusted according to the distance, so that the tracking target is at the center of the next video image captured by the imaging device.
  • an imaging core is integrated in the imaging device
  • the processor 301 when acquiring a video image and a parameter characterizing the scene complexity of the video image from an imaging device, includes:
  • the method when the processor 301 selects a tracking algorithm suitable for the video image from among the multiple tracking algorithms based on the parameter, the method includes:
  • the tracking algorithm corresponding to the parameter value range to which the parameter belongs is determined as the tracking algorithm suitable for the video image.
  • the multiple tracking algorithms include: a contrast tracking algorithm, a binary tracking algorithm, and a related tracking algorithm;
  • when determining the tracking target in the video image according to the tracking algorithm suitable for the video image, the processor 301 is configured to:
  • the method includes: selecting, from among all detection targets in the video image, the detection target with the greatest degree of matching with the template image, and determining the selected detection target as the tracking target of the video image.
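One way to realize "the detection target with the greatest degree of matching with the template image" is normalized cross-correlation over candidate patches. The metric is an assumption for illustration; the excerpt does not say how the degree of matching is computed.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches.

    Returns a score in [-1, 1]; 1.0 means a perfect match. Flat patches
    (zero variance) score 0.0 rather than dividing by zero.
    """
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def pick_tracking_target(template, detections):
    """Return the detection patch most similar to the template image."""
    return max(detections, key=lambda patch: ncc(template, patch))
```

Given several same-sized candidate patches cut out around the detections, the candidate scoring highest against the template is kept as the tracking target for this frame.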
  • when adjusting the shooting angle of the imaging device according to the distance, the processor 301 is configured to: send the distance to a follow-up control device equipped with the video tracking device, so that the follow-up control device performs a follow-up control operation according to the distance, thereby adjusting the shooting angle of the imaging device.
  • when the video tracking device provided by the foregoing embodiments performs target tracking, for each frame of video image captured by the imaging device, a tracking algorithm suitable for the frame of video image is adaptively selected based on the parameter that characterizes the scene complexity of the frame of video image;
  • the tracking target in the frame of video image is tracked based on the selected tracking algorithm, and the shooting angle of the imaging device is adjusted according to the distance between the tracking target and the center of the video image, so that the tracking target is at the center position of the next video image captured by the imaging device. Since the video tracking device can switch the tracking algorithm at any time according to the parameter characterizing the scene complexity of each frame of video image, it adapts well to changes in the shooting scene of the imaging device, thereby ensuring the video tracking effect.
  • an embodiment of the present application also provides a tracking device of the video tracking device.
  • the following describes the tracking device of the video tracking device provided in the embodiment of the present application.
  • the aforementioned video tracking device supports multiple tracking algorithms; the tracking device of the aforementioned video tracking device includes:
  • an information acquisition module, configured to acquire a video image and a parameter characterizing the scene complexity of the video image from an imaging device;
  • an algorithm selection module, configured to select a tracking algorithm suitable for the video image from among the multiple tracking algorithms based on the parameter;
  • a target determination module, configured to determine a tracking target in the video image according to the tracking algorithm suitable for the video image;
  • a distance calculation module, configured to extract position information of the tracking target in the video image, and calculate the distance between the tracking target and the center of the video image based on the position information;
  • the angle adjustment module is configured to adjust the shooting angle of the imaging device according to the distance, so that the tracking target is at the center of the next video image shot by the imaging device.
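The five modules listed above can be wired together as one device object. This composition is purely illustrative; the class and callable names are assumptions, and each lambda in the usage below stands in for the corresponding module.

```python
class VideoTracker:
    """Hypothetical composition of the tracking device's modules."""

    def __init__(self, acquire, select, detect, servo):
        self.acquire = acquire  # information acquisition module
        self.select = select    # algorithm selection module
        self.detect = detect    # target determination module
        self.servo = servo      # angle adjustment module (takes an offset)

    def step(self):
        """Process one frame: select an algorithm, locate the target,
        compute its offset from the image center, and steer the imager."""
        frame, complexity = self.acquire()
        algorithm = self.select(complexity)
        x, y = self.detect(frame, algorithm)         # target position
        cx, cy = frame["w"] / 2.0, frame["h"] / 2.0  # distance calculation
        self.servo((x - cx, y - cy))                 # angle adjustment
```

Calling `step()` once per captured frame reproduces the module pipeline: acquisition, algorithm selection, target determination, distance calculation, and angle adjustment.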
  • an imaging core is integrated in the imaging device
  • the information acquisition module is specifically used for:
  • the algorithm selection module is specifically used for:
  • the tracking algorithm corresponding to the parameter value range to which the parameter belongs is determined as the tracking algorithm suitable for the video image.
  • the multiple tracking algorithms include: a contrast tracking algorithm, a binary tracking algorithm, and a correlation tracking algorithm;
  • the target determination module is specifically used for:
  • the target determination module is specifically used for:
  • the detection target with the greatest degree of matching with the template image is selected from among all detection targets in the video image, and the selected detection target is determined as the tracking target of the video image.
  • the angle adjustment module is specifically used for:
  • the distance is sent to a follow-up control device equipped with the video tracking device, so that the follow-up control device performs a follow-up control operation according to the distance, thereby adjusting the shooting angle of the imaging device.
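The excerpt only says that the distance is sent to the follow-up control device; it does not define a wire format. The JSON payload and UDP transport below are assumed purely to make the hand-off concrete.

```python
import json
import socket

def send_offset(sock, controller_addr, dx, dy):
    """Forward the target-to-center offset to the follow-up controller.

    The controller is expected to perform the follow-up control operation
    (servo movement) that re-centers the target.
    """
    message = json.dumps({"dx": dx, "dy": dy}).encode("utf-8")
    sock.sendto(message, controller_addr)
```

Decoupling the tracker from the servo this way matches the division of labor in the embodiment: the tracking device computes the distance, while the follow-up control device translates it into pan/tilt motion.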
  • the tracking algorithm suitable for the frame of video image is adaptively selected based on the parameter that characterizes the scene complexity of the frame of video image.
  • the tracking target in the frame of video image is tracked based on the selected tracking algorithm, and the shooting angle of the imaging device is adjusted according to the distance between the tracking target and the center of the video image, so that the tracking target is at the center position of the next video image captured by the imaging device. Since the tracking algorithm can be switched at any time according to the parameter that characterizes the scene complexity of each frame of video image, the device adapts well to changes in the shooting scene of the imaging device, thereby ensuring the video tracking effect.
  • an embodiment of the present application also provides a computer-readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the steps of the tracking method of the video tracking device described in the embodiments of the present application are realized.
  • an embodiment of the present application also provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the tracking method of the video tracking device described in the embodiments of the present application.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application provide a tracking method of a video tracking device, and a video tracking device. The video tracking device incorporates multiple tracking algorithms. The method comprises: acquiring, from an imaging device, a video image and a parameter representing the scene complexity of the video image; selecting a tracking algorithm suitable for the video image from among the multiple tracking algorithms based on the parameter; determining a tracking target in the video image according to the tracking algorithm suitable for the video image; extracting position information of the tracking target in the video image, and calculating the distance between the tracking target and the center of the video image based on the position information; and adjusting the shooting angle of the imaging device according to the distance, so that the tracking target is located at the center of the next video image captured by the imaging device. By applying the solution provided in the embodiments of the present application for target tracking, the tracking algorithm can be adjusted adaptively as the scene changes, thereby ensuring the tracking effect.
PCT/CN2020/075481 2019-06-25 2020-02-17 Tracking method of video tracking device, and video tracking device WO2020258889A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910555419.6 2019-06-25
CN201910555419.6A CN112132858A (zh) 2019-06-25 Tracking method of video tracking device, and video tracking device

Publications (1)

Publication Number Publication Date
WO2020258889A1 true WO2020258889A1 (fr) 2020-12-30

Family

ID=73849405

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/075481 WO2020258889A1 (fr) Tracking method of video tracking device, and video tracking device

Country Status (2)

Country Link
CN (1) CN112132858A (fr)
WO (1) WO2020258889A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113949830B (zh) * 2021-09-30 2023-11-24 国家能源集团广西电力有限公司 An image processing method
CN115170615B (zh) * 2022-09-02 2022-12-09 环球数科集团有限公司 High-speed vision system based on a smart camera and target tracking algorithm thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104184932A (zh) * 2013-05-20 2014-12-03 浙江大华技术股份有限公司 Dome camera control method and device
CN108496350A (zh) * 2017-09-27 2018-09-04 深圳市大疆创新科技有限公司 Focusing processing method and device
CN109815844A (zh) * 2018-12-29 2019-05-28 西安天和防务技术股份有限公司 Target detection method and device, electronic device, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090053295A (ko) * 2007-11-23 2009-05-27 주식회사 에스원 Video surveillance method and system
TWI482123B (zh) * 2009-11-18 2015-04-21 Ind Tech Res Inst Multi-state target tracking method and system
US9501701B2 (en) * 2014-01-31 2016-11-22 The Charles Stark Draper Laboratory, Inc. Systems and methods for detecting and tracking objects in a video stream
CN107016367B (zh) * 2017-04-06 2021-02-26 北京精英路通科技有限公司 Tracking control method and tracking control system
TWI618032B (zh) * 2017-10-25 2018-03-11 財團法人資訊工業策進會 Target detection and tracking method and system


Also Published As

Publication number Publication date
CN112132858A (zh) 2020-12-25

Similar Documents

Publication Publication Date Title
US20230077355A1 (en) Tracker assisted image capture
US10327045B2 (en) Image processing method, image processing device and monitoring system
US9124812B2 (en) Object image capture apparatus and method
KR101958116B1 (ko) 객체 세그먼트화를 위한 전경 마스크 정정을 위한 이미지 처리 디바이스 및 방법
US10573018B2 (en) Three dimensional scene reconstruction based on contextual analysis
US9628837B2 (en) Systems and methods for providing synchronized content
CN110691193B (zh) 摄像头切换方法、装置、存储介质及电子设备
US8508605B2 (en) Method and apparatus for image stabilization
US11823404B2 (en) Structured light depth imaging under various lighting conditions
US8913103B1 (en) Method and apparatus for focus-of-attention control
WO2020258889A1 (fr) Procédé de suivi de dispositif de suivi vidéo, et dispositif de suivi vidéo
WO2017080237A1 (fr) Procédé d'imagerie de caméra et dispositif de caméra
WO2022057670A1 (fr) Procédé, appareil et système de mise au point en temps réel, et support d'enregistrement lisible par ordinateur
JP2009093644A (ja) Computer-implemented method for tracking the 3D position of an object moving within a scene
WO2021008205A1 (fr) Traitement d'images
WO2019226673A1 (fr) Récupération de pieds manquants d'un objet humain à partir d'une séquence d'images en fonction d'une détection de plan de sol
CN106600548A (zh) Fisheye camera image processing method and system
US20220309627A1 (en) Face image straight line processing method, terminal device and storage medium
CN102129692A (zh) Dual-threshold scene moving target detection method and system
US20170048441A1 (en) System and method to control an imaging process
US20230368343A1 (en) Global motion detection-based image parameter control
US20240046426A1 (en) Noise removal for surveillance camera image by means of ai-based object recognition
Guilluy et al. Feature trajectories selection for video stabilization
JP2002312792A (ja) Image processing apparatus, image processing method, recording medium, and program
KR101432783B1 (ko) Image preprocessing device for camera performance evaluation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20832359

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20832359

Country of ref document: EP

Kind code of ref document: A1