WO2019242672A1 - Target tracking method, apparatus and system - Google Patents

Target tracking method, apparatus and system

Info

Publication number
WO2019242672A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
tracking
attribute information
frame
identifier
Prior art date
Application number
PCT/CN2019/092027
Other languages
English (en)
French (fr)
Inventor
戚玉青
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司
Publication of WO2019242672A1 publication Critical patent/WO2019242672A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion

Definitions

  • the present application relates to the field of image processing technology, and in particular, to a method, a device, and a system for tracking an object.
  • Target tracking refers to tracking a moving object in a continuous image sequence, obtaining the position of the moving object in each frame of the image, and then determining the moving trajectory of the moving object.
  • Target tracking has a wide range of applications in video surveillance, autonomous driving and video entertainment.
  • At present, the target tracking method includes: initializing feature information of a target moving object; predicting the position of the target moving object in the current image based on current motion information of the target moving object; determining, from the current image, multiple candidate frames containing moving objects based on the predicted position; determining, based on the feature information of the target moving object and the feature information of the moving object contained in each candidate frame, a confidence of the moving object contained in each candidate frame; and selecting the moving object contained in the candidate frame with the highest confidence as the target moving object, thereby determining the current motion trajectory of the target moving object.
  • In this approach, a confidence has to be calculated for every candidate frame of every image, and the confidence calculation is relatively complex; when many candidate frames are determined in one image, the calculation is time-consuming, so the efficiency of target tracking is low and the tracking effect is poor.
  • the purpose of the embodiments of the present application is to provide a target tracking method, device, and system to improve the efficiency of achieving target tracking and optimize the tracking effect.
  • the specific technical solutions are as follows:
  • To achieve the above purpose, an embodiment of the present application provides a target tracking method, where the method includes: acquiring a target image containing a tracking target; detecting the target image to determine multiple target frames containing moving objects; extracting attribute information of each target frame; obtaining, from the multiple target frames, a target frame whose attribute information is the same as preset target attribute information, as the current frame, where the target attribute information is the attribute information of the tracking target; and determining a motion trajectory of the tracking target according to position information of the current frame.
  • Optionally, the step of detecting the target image and determining multiple target frames containing moving objects includes: determining a type of the tracking target as a target type; detecting the target image to determine multiple candidate frames containing moving objects; and selecting, from the multiple candidate frames, a candidate frame of a moving object of the same type as the target type as a target frame.
  • the target type includes one or more of a vehicle, a person, and a human face.
  • Optionally, the method further includes: determining an identifier of each target frame, where all target frames containing the same moving object have the same identifier.
  • In this case, the step of extracting attribute information of each target frame includes: determining whether a target identifier corresponding to the tracking target is recorded, and if not, extracting the attribute information of each target frame.
  • After the target frame with the same attribute information as the preset target attribute information is obtained as the current frame, the method further includes: using the identifier of the current frame as the target identifier, and recording a correspondence between the target identifier and the tracking target.
  • Optionally, the method further includes: if it is determined that the target identifier is recorded, obtaining, from the multiple target frames, a target frame with the same identifier as the target identifier, as the current frame.
  • Optionally, the method further includes: determining motion information of the tracking target; determining target position information of the image acquisition device according to the motion information; and sending the target position information to the image acquisition device, so that the image acquisition device adjusts its position according to the target position information.
  • Optionally, the target position information includes a pitch angle, a yaw angle, and a roll angle of the image acquisition device.
  • Optionally, the tracking target is a vehicle, and the attribute information includes a vehicle color and a vehicle type.
  • an embodiment of the present application further provides a target tracking device, where the device includes:
  • a first acquisition module configured to acquire a target image including a tracking target
  • a first determining module configured to detect the target image and determine multiple target frames containing moving objects
  • An extraction module for extracting attribute information of each target frame
  • a second obtaining module configured to obtain, from a plurality of target frames, a target frame having the same attribute information as preset target attribute information as the current frame; the target attribute information is attribute information of the tracking target;
  • a second determining module is configured to determine a motion trajectory of the tracking target according to the position information of the current frame.
  • Optionally, the first determining module is specifically configured to: determine a type of the tracking target as a target type; detect the target image to determine multiple candidate frames containing moving objects; and select, from the multiple candidate frames, a candidate frame of a moving object of the same type as the target type as a target frame.
  • the target type includes one or more of a vehicle, a person, and a human face.
  • the first determining module is further configured to determine an identifier of each target frame; wherein the identifiers of all target frames including the same moving object are the same;
  • the extraction module is specifically configured to determine whether a target identifier corresponding to the tracking target is recorded; if not, extract attribute information of each target frame;
  • the second obtaining module is further configured to, after obtaining the target frame with the same attribute information as the preset target attribute information as the current frame, use the identifier of the current frame as the target identifier and record a correspondence between the target identifier and the tracking target.
  • the second obtaining module is further configured to, if it is determined that the target identifier is recorded, obtain a target frame having the same identifier as the target identifier from multiple target frames as the current frame.
  • the device further includes:
  • a third determining unit configured to determine motion information of the tracking target
  • a fourth determining unit configured to determine target position information of the image acquisition device according to the motion information
  • a sending unit is configured to send the target position information to the image acquisition device, so that the image acquisition device adjusts a position according to the target position information.
  • the target position information includes a pitch angle, a yaw angle, and a roll angle of the image acquisition device.
  • the tracking target is a vehicle
  • the attribute information includes a vehicle color and a vehicle type.
  • an embodiment of the present application further provides an electronic device including a processor and a memory; the memory is configured to store a computer program, and the processor is configured to execute the program stored in the memory to implement any of the above target tracking method steps.
  • an embodiment of the present application further provides a target tracking system, which includes an image acquisition device and any of the foregoing target tracking devices.
  • an embodiment of the present application further provides a machine-readable storage medium, where the machine-readable storage medium stores a computer program, and when the computer program is executed by a processor, any of the foregoing target tracking method steps is implemented.
  • the electronic device after obtaining a target image including a tracking target, extracts attribute information of each target frame including a moving object, and obtains a target frame with the same attribute information as the target attribute information from multiple target frames as the current frame. After that, the electronic device determines the motion trajectory of the tracking target according to the position information of the current frame.
  • the target attribute information is attribute information of the tracking target.
  • the electronic device can determine the motion trajectory of the tracking target through the attribute information.
  • the attribute information is structured data, and the complexity of extracting the attribute information is lower than the complexity of calculating the confidence level, which improves the efficiency of achieving target tracking and further optimizes the tracking effect.
  • Of course, any product or method implementing the present application does not necessarily need to achieve all of the advantages described above at the same time.
  • FIG. 1 is a first schematic flowchart of a target tracking method according to an embodiment of the present application;
  • FIG. 2 is a second schematic flowchart of a target tracking method according to an embodiment of the present application;
  • FIG. 3 is a third schematic flowchart of a target tracking method according to an embodiment of the present application;
  • FIG. 4 is a fourth schematic flowchart of a target tracking method according to an embodiment of the present application;
  • FIG. 5 is a first schematic structural diagram of a target tracking device according to an embodiment of the present application.
  • FIG. 6 is a second schematic structural diagram of a target tracking device according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the target tracking method can be applied to an image acquisition device or an electronic device connected to the image acquisition device.
  • the electronic device may be a mobile phone, a tablet computer, a desktop computer, or the like.
  • the following description is made by taking an electronic device as an example.
  • The target tracking method includes: acquiring a target image containing a tracking target; detecting the target image to determine multiple target frames containing moving objects; extracting attribute information of each target frame; obtaining, from the multiple target frames, a target frame whose attribute information is the same as preset target attribute information, as the current frame, where the target attribute information is the attribute information of the tracking target; and determining the motion trajectory of the tracking target based on the position information of the current frame.
  • the electronic device can determine the motion trajectory of the tracking target through the attribute information.
  • the attribute information is structured data, and the complexity of extracting the attribute information is lower than the complexity of calculating the confidence level, which improves the efficiency of achieving target tracking and further optimizes the tracking effect.
  • FIG. 1 is a schematic flowchart of a first method of a target tracking method according to an embodiment of the present application. The method includes the following steps.
  • Step 101 Obtain a target image including a tracking target.
  • the target image acquired by the electronic device may be an image sent by the image acquisition device after acquisition.
  • the target image acquired by the electronic device may also be an image input by a user.
  • the image acquisition device may be a dome camera, a binocular camera, and the like.
  • the tracking target may be a moving object detected by the electronic device in the image. That is, after the electronic device obtains the image, all the moving objects detected from the image are used as the tracking target.
  • the tracking target can also be a moving object specified by the user. That is, before the electronic device performs target tracking, the user inputs information of the tracking target in advance.
  • Step 102 The target image is detected, and a plurality of target frames containing moving objects are determined.
  • the electronic device may use a HOG (Histogram of Oriented Gradients) + SVM (Support Vector Machine) algorithm, a YOLO (You Only Look Once) algorithm, or a neural network model to detect the target image and determine the multiple target frames containing moving objects.
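  • As one possible realization of this detection step, the sketch below uses OpenCV's built-in HOG + SVM pedestrian detector; this is only an illustration under the assumption of a person-type target, since the embodiment equally allows a YOLO detector or another neural network model, and the image file name is hypothetical.

```python
import cv2

# Minimal detection sketch (step 102): OpenCV's default HOG + SVM people detector,
# chosen here only as one of the detector options the embodiment names.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_target_frames(image):
    """Return (x, y, w, h) boxes for moving objects detected in the image."""
    boxes, _weights = hog.detectMultiScale(image, winStride=(8, 8), scale=1.05)
    return [tuple(int(v) for v in box) for box in boxes]

image = cv2.imread("target_image.jpg")   # hypothetical target image from the camera
target_frames = detect_target_frames(image)
```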
  • the above step 102 may include the following steps.
  • Step 1021 Determine the type of the tracking target as the target type.
  • the target type may include one or more of a vehicle, a person, and a human face.
  • the target type may also include other types, which are not limited in the embodiment of the present application.
  • the target type may be a type specified by the user in advance, or may be a type determined by the electronic device when performing image detection. For example, after the electronic device obtains an image, it detects the image, determines multiple moving objects in the image, and the type of each moving object, and stores the type of each moving object. When the electronic device needs to track a moving object detected from the image, it obtains the type of the tracking target from the types of stored moving objects as the target type.
  • Step 1022 Detect the target image, and determine multiple candidate frames containing moving objects.
  • the electronic device detects the target image and can determine multiple candidate frames, for example, a candidate frame containing a human face, a candidate frame containing a person, and a candidate frame containing a vehicle.
  • the embodiment of this application does not limit the execution order of steps 1021 and 1022.
  • Step 1023 From multiple candidate frames, select a candidate frame of a moving object of the same type as the target type as the target frame.
  • the electronic device filters out, from the multiple candidate frames, those containing moving objects whose type differs from the target type, and retains the candidate frames containing moving objects of the same type as the target type as the target frames.
  • the candidate frames of the above-mentioned moving objects of the same type as the target type are candidate frames containing the same types of moving objects as the target type.
  • the electronic device detects a target image, and the determined candidate frames include: a candidate frame containing a human face, a candidate frame containing a person, and a candidate frame containing a vehicle. If the tracking target is a vehicle, the electronic device can filter out the candidate frame containing the face and the candidate frame containing the person, retain the candidate frame containing the vehicle, and use the candidate frame containing the vehicle as the target frame.
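  • Filtering in this way effectively reduces the number of candidate frames that have to be processed subsequently and improves tracking efficiency. The short sketch below illustrates this selection step; representing each candidate as a dictionary with a detector-assigned type label is an assumption made purely for illustration.

```python
def select_target_frames(candidates, target_type):
    """Step 1023: keep only candidate frames whose moving object matches the target type."""
    return [c for c in candidates if c["type"] == target_type]

candidates = [
    {"type": "face",    "box": (40, 30, 60, 60)},
    {"type": "person",  "box": (35, 20, 80, 200)},
    {"type": "vehicle", "box": (300, 150, 220, 140)},
]
# With a vehicle tracking target, only the vehicle candidate is kept as a target frame.
target_frames = select_target_frames(candidates, "vehicle")
```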
  • Step 103 Extract attribute information of each target frame.
  • In the embodiments of the present application, the attribute information is structured data, that is, information obtained after the electronic device performs video structuring on the image data.
  • Video structuring means intelligently analyzing the original video to extract key information and producing a textual semantic description of the extracted key information.
  • The complexity of extracting the attribute information by the electronic device is low, much lower than the complexity of calculating a confidence level.
  • the attribute information of the target frame is the attribute information of the moving object contained in the target frame.
  • For example, if the target frame contains a vehicle, the attribute information may include a vehicle color, a vehicle model, and a vehicle brand.
  • If the target frame contains a human face, the attribute information may include a polygon formed by key points on the face, the eye distance, and the like.
  • the attribute information may also include other information, which is not repeated here one by one.
  • the user sets the attribute information of the tracking target in advance, that is, the user sets the target attribute information in advance.
  • the electronic device can extract attribute information of each target frame according to the target attribute information. For example, if the tracking target is a vehicle and the user sets the target attribute information including the vehicle color and the vehicle model in advance, the electronic device extracts the vehicle color and the vehicle model of the target frame from the target frame.
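  • The embodiment does not prescribe a particular attribute-extraction algorithm, so the sketch below is only an assumed illustration of extracting one attribute (vehicle color) from a target frame, using a crude dominant-hue heuristic; a production video-structuring pipeline would normally use a trained classifier.

```python
import cv2
import numpy as np

def extract_vehicle_color(image, box):
    """Assumed illustration of step 103: classify the dominant hue of the boxed region."""
    x, y, w, h = box
    hsv = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])
    hue = int(np.argmax(hist))
    if hue < 10 or hue > 170:
        return "red"
    if 20 <= hue < 35:
        return "yellow"
    return "other"
```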
  • Step 104 Obtain a target frame with the same attribute information as the preset target attribute information from the multiple target frames as the current frame.
  • the target attribute information is attribute information of the tracking target.
  • the electronic device filters out, from the multiple target frames, those whose attribute information differs from the target attribute information, and retains the target frame whose attribute information is the same as the target attribute information as the current frame.
  • the target frame with the same attribute information as the preset target attribute information is a target frame containing attribute information of a moving object that is the same as the preset target attribute information.
  • the target attribute information is: the vehicle color is “red” and the vehicle type is “small car”.
  • the attribute information of the target frame extracted by the electronic device is:
  • {Attribute information 1 of target frame 1: vehicle color is "black", vehicle type is "small car"};
  • {Attribute information 2 of target frame 2: vehicle color is "yellow", vehicle type is "SUV"};
  • {Attribute information 3 of target frame 3: vehicle color is "yellow", vehicle type is "mini car"};
  • {Attribute information 4 of target frame 4: vehicle color is "red", vehicle type is "small car"}.
  • the electronic device can determine that the attribute information 4 is the same as the target attribute information, and the attribute information 1-3 is different from the target attribute information.
  • the electronic device filters out target frames 1-3, keeps target frame 4, and uses target frame 4 as the current frame.
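  • The comparison in this example can be written as a simple dictionary match, as in the sketch below; treating each attribute record as a plain dictionary is an assumption about the data layout, not something the embodiment fixes.

```python
target_attribute_info = {"vehicle_color": "red", "vehicle_type": "small car"}

target_frames = [
    {"id": 1, "vehicle_color": "black",  "vehicle_type": "small car"},
    {"id": 2, "vehicle_color": "yellow", "vehicle_type": "SUV"},
    {"id": 3, "vehicle_color": "yellow", "vehicle_type": "mini car"},
    {"id": 4, "vehicle_color": "red",    "vehicle_type": "small car"},
]

def find_current_frame(frames, target_attrs):
    """Step 104: return the first target frame whose attributes all match the preset ones."""
    for frame in frames:
        if all(frame.get(key) == value for key, value in target_attrs.items()):
            return frame
    return None

current_frame = find_current_frame(target_frames, target_attribute_info)  # target frame 4
```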
  • Step 105 Determine the trajectory of the tracking target according to the position information of the current frame.
  • the position information may be position information of the current frame in the image coordinate system, or may be position information of the current frame in the world coordinate system.
  • After the electronic device determines the position information of the current frame, it adds the position information to the motion trajectory set of the tracking target and thereby determines the motion trajectory of the tracking target.
  • the track set of the tracking target includes position information 1 and position information 2.
  • the track of the tracking target is: position information 1 ⁇ position information 2. If the electronic device determines that the position information of the current frame is position information 3, the position information 3 is added to the motion track set of the tracking target, and then the motion track of the tracking target is determined as: position information 1 ⁇ position information 2 ⁇ position information 3.
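  • Step 105 then reduces to appending the current frame's position to the trajectory set, as in the minimal sketch below; using (x, y) image coordinates is just one of the two coordinate options the text allows.

```python
trajectory = [(120, 340), (135, 330)]      # position information 1 and position information 2

def update_trajectory(trajectory, current_position):
    """Step 105: extend the motion trajectory with the current frame's position."""
    trajectory.append(current_position)
    return trajectory

update_trajectory(trajectory, (150, 322))  # position information 3
# trajectory now reads: position 1 -> position 2 -> position 3
```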
  • the electronic device can determine the motion trajectory of the tracking target through the attribute information.
  • the attribute information is structured data.
  • the complexity of extracting the attribute information is lower than the complexity of calculating the confidence level, which improves the efficiency of achieving target tracking, improves the real-time performance of target tracking, and then optimizes the tracking effect.
  • To further improve the efficiency of target tracking and optimize the tracking effect, in an embodiment of the present application, referring to the second schematic flowchart of the target tracking method shown in FIG. 2, based on FIG. 1, the method may include the following steps.
  • Step 201 Obtain a target image including a tracking target.
  • Step 201 is the same as step 101.
  • Step 202 Detect the target image, and determine a plurality of target frames containing moving objects.
  • Step 202 is the same as step 102.
  • Step 203 Determine the identifier of each target frame. Among them, the identifiers of all target frames containing the same moving object are the same.
  • In an embodiment of the present application, when the electronic device detects multiple target frames containing the same moving object, it marks these target frames with the same identifier. For example, the electronic device marks the target frame containing vehicle 1 in image 1 with identifier a. When the electronic device acquires image 2 and again detects a target frame containing vehicle 1 in image 2, it also marks that target frame with identifier a.
  • In an embodiment of the present application, when the electronic device detects the target image and determines multiple candidate frames containing moving objects, it marks an identifier for each candidate frame. After the electronic device determines the target frames, it can directly obtain the identifiers of the target frames.
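  • The embodiment requires that target frames containing the same moving object share an identifier but does not say how that association is computed; one common choice, assumed here purely for illustration, is to reuse the identifier of the previously seen box that overlaps the new detection most strongly (IoU matching).

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def assign_identifier(box, known_boxes, next_id, threshold=0.5):
    """Reuse the identifier of the best-overlapping known box, otherwise issue a new one."""
    best_id, best_overlap = None, 0.0
    for obj_id, old_box in known_boxes.items():
        overlap = iou(box, old_box)
        if overlap > best_overlap:
            best_id, best_overlap = obj_id, overlap
    if best_overlap >= threshold:
        return best_id, next_id
    return next_id, next_id + 1
```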
  • Step 204 Determine whether a target identifier corresponding to the tracking target is recorded. If not, step 205 is performed. If yes, go to step 208.
  • In this embodiment of the present application, the identifier of a target frame corresponds one-to-one to a moving object: target frames with the same identifier contain the same moving object. If the target identifier corresponding to the tracking target is recorded, the electronic device can directly search for the current frame according to the target identifier without comparing attribute information to determine the current frame, which effectively improves the efficiency of target tracking.
  • Step 205 Extract attribute information of each target frame.
  • Step 205 is the same as step 103.
  • Step 206 Obtain a target frame with the same attribute information as the preset target attribute information from the multiple target frames as the current frame.
  • the target attribute information is attribute information of the tracking target. Go to steps 207 and 209.
  • Step 206 is the same as step 104.
  • Step 207 Use the identifier of the current frame as the target identifier, and record the correspondence between the target identifier and the tracking target.
  • the electronic device records a target identifier corresponding to the tracking target, so that subsequent electronic devices can track the tracking target according to the recorded target identifier, and improve the efficiency of target tracking.
  • Step 208 Obtain a target frame with the same identifier as the target identifier from the multiple target frames as the current frame. Go to step 209.
  • If the target identifier is recorded in the electronic device, a target frame with the same identifier as the target identifier can be obtained from the multiple target frames.
  • If the identifier of the target frame acquired by the electronic device is the same as the target identifier, it can be determined that the acquired target frame contains the tracking target, and the acquired target frame is used as the current frame.
  • the electronic device directly searches for the current frame according to the target identifier, and does not have to compare the attribute information to determine the current frame, which effectively improves the efficiency of achieving target tracking.
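  • Steps 204-208 therefore form a simple two-branch lookup, sketched below; it reuses find_current_frame from the earlier sketch, and keeping the tracker state in a plain dictionary is an assumption.

```python
def get_current_frame(target_frames, target_attrs, tracker_state):
    """Steps 204-208: prefer lookup by recorded target identifier, else match attributes."""
    target_id = tracker_state.get("target_id")
    if target_id is not None:                                   # step 208
        return next((f for f in target_frames if f["id"] == target_id), None)
    current = find_current_frame(target_frames, target_attrs)   # steps 205-206
    if current is not None:
        tracker_state["target_id"] = current["id"]              # step 207
    return current
```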
  • Step 209 Determine the trajectory of the tracking target according to the position information of the current frame.
  • Step 209 is the same as step 105.
  • In this embodiment of the present application, the identifier of a target frame can also be regarded as one kind of attribute information.
  • By comparing the identifier of each target frame with the identifier corresponding to the tracking target, the electronic device can determine the motion trajectory of the tracking target, which improves the efficiency of target tracking and further optimizes the tracking effect.
  • To improve the tracking effect, in an embodiment of the present application, referring to the third schematic flowchart of the target tracking method shown in FIG. 3, based on FIG. 1, the method may include the following steps.
  • Step 301 Obtain a target image including a tracking target.
  • Step 302 Detect the target image and determine a plurality of target frames containing moving objects.
  • Step 303 Extract attribute information of each target frame.
  • Step 304 Obtain a target frame with the same attribute information as the preset target attribute information from the multiple target frames as the current frame.
  • the target attribute information is attribute information of the tracking target.
  • Step 305 Determine the trajectory of the tracking target according to the position information of the current frame.
  • Steps 301-305 are the same as steps 101-105.
  • Step 306 Determine the motion information of the tracking target.
  • the movement information may include the movement speed and movement direction of the tracking target.
  • Here, the movement speed may be expressed as the number of meters moved per unit time in the world coordinate system, or as the number of pixels moved per unit time in the image coordinate system; this is not limited in the embodiments of the present application.
  • The direction of movement may be east, south, west, north, northeast, southeast, northwest, southwest, and the like.
  • The direction of movement may also be expressed as a clock direction, such as the 1 o'clock, 2 o'clock, ..., or 12 o'clock direction; this is not limited in the embodiments of the present application.
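  • A minimal way to obtain such motion information from the two most recent trajectory positions is sketched below; the pixel units, unit frame interval, and eight-way compass quantisation are assumptions for illustration.

```python
import math

def motion_info(prev_pos, curr_pos, dt=1.0):
    """Step 306: movement speed and compass-style direction between two positions."""
    dx, dy = curr_pos[0] - prev_pos[0], curr_pos[1] - prev_pos[1]
    speed = math.hypot(dx, dy) / dt                   # pixels (or meters) per unit time
    angle = math.degrees(math.atan2(-dy, dx)) % 360   # image y grows downward, so negate dy
    compass = ["east", "northeast", "north", "northwest",
               "west", "southwest", "south", "southeast"]
    return speed, compass[int((angle + 22.5) // 45) % 8]

speed, direction = motion_info((135, 330), (150, 322))   # uses the trajectory example above
```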
  • Step 306 may be performed before or after any of the preceding steps; for example, step 306 may be performed after step 304, with step 305 performed thereafter. This is not limited in this embodiment of the present application.
  • Step 307 Determine target position information of the image acquisition device according to the motion information of the tracking target.
  • the image acquisition device may be a dome camera, a binocular camera, or the like.
  • the target position information may include a pitch angle, a yaw angle, a roll angle, and the like of the image acquisition device.
  • the electronic device determines position information, such as a pitch angle, a yaw angle, and a roll angle, of the image acquisition device according to the motion information of the tracking target as target position information.
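  • The embodiment does not give a formula for converting the motion information into camera angles, so the pinhole-camera sketch below, which steers the device so that the predicted target position returns to the image centre, is an assumption rather than the patent's method; the focal length in pixels is a hypothetical calibration parameter.

```python
import math

def camera_adjustment(predicted_pos, image_size, focal_px, current_pan, current_tilt):
    """Assumed illustration of step 307: pitch/yaw needed to re-centre the predicted position."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    yaw = current_pan + math.degrees(math.atan2(predicted_pos[0] - cx, focal_px))
    pitch = current_tilt + math.degrees(math.atan2(predicted_pos[1] - cy, focal_px))
    return {"pitch": pitch, "yaw": yaw, "roll": 0.0}   # roll is typically left unchanged

target_position_info = camera_adjustment((980, 240), (1920, 1080), 1200.0, 30.0, -5.0)
```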
  • Step 308 Send the target position information to the image acquisition device.
  • the image acquisition device can adjust the position according to the target position information, such as adjusting the pitch angle, yaw angle, and roll angle, etc., and then determine the detection area again to track the target as much as possible.
  • The technical solution provided in the embodiments of the present application is described below with reference to the fourth schematic flowchart of the target tracking method shown in FIG. 4, in which the tracking target is a vehicle and the target attribute information includes a vehicle color and a vehicle model.
  • Step 1 The dome camera collects an image including the tracking target, and sends the acquired image to the electronic device.
  • Step 2 The electronic device detects the received image, determines a plurality of candidate frames containing moving objects, filters out candidate frames containing non-vehicles, and retains the candidate frames containing vehicles as target frames.
  • non-vehicles include people and faces.
  • Step 3 The electronic device marks an identifier for each target frame. Target frames containing the same vehicle have the same identifier; target frames containing different vehicles have different identifiers.
  • If the electronic device does not store an ID (identifier) corresponding to the tracking target, step 4 is performed. If the electronic device stores an ID corresponding to the tracking target, step 6 is performed.
  • Step 4 The electronic device extracts attribute information of each target frame.
  • the attribute information of the target frame includes a vehicle color and a vehicle type.
  • Step 5 The electronic device compares the attribute information of the target frame with the target attribute information, and obtains a target frame with the same attribute information as the preset target attribute information from the multiple target frames as the current frame. In addition, the electronic device obtains the ID of the current frame, and stores the ID of the current frame as the ID corresponding to the tracking target. After that, go to step 7.
  • That is, if the vehicle color in the attribute information of a target frame is the same as the vehicle color in the target attribute information, and the vehicle model in the attribute information of that target frame is the same as the vehicle model in the target attribute information, the electronic device determines that target frame as the current frame, obtains the ID of that target frame, and stores the ID of that target frame as the ID corresponding to the tracking target.
  • Step 6 The electronic device compares the ID of the target frame with the ID corresponding to the stored tracking target, and uses the target frame with the same ID as the ID corresponding to the tracking target as the current frame. After that, go to step 7.
  • Step 7 The electronic device determines the motion trajectory of the tracking target according to the position information of the current frame.
  • Step 8 The electronic device determines target position information of the dome camera according to the motion information of the tracking target, and sends the target position information to the dome camera.
  • the dome camera adjusts according to the target position information to re-determine the detection area, and collects the image containing the tracking target according to the re-determined detection area.
  • the electronic device detects an image sent by the dome camera according to a detection area newly determined by the dome camera to determine a target frame.
  • the electronic device can determine the motion trajectory of the tracking target through the attribute information.
  • the attribute information is structured data, and the complexity of extracting the attribute information is lower than the complexity of calculating the confidence level, which improves the efficiency of achieving target tracking and further optimizes the tracking effect.
  • the electronic device updates the detection area of the dome camera in real time according to the motion information of the tracking target, which further improves the tracking effect.
  • FIG. 5 is a first schematic structural diagram of a target tracking device according to an embodiment of the present application.
  • the device includes:
  • a first acquisition module 501 configured to acquire a target image including a tracking target
  • a first determining module 502 configured to detect a target image and determine a plurality of target frames containing moving objects
  • An extraction module 503, configured to extract attribute information of each target frame
  • a second acquisition module 504 is configured to acquire, from multiple target frames, a target frame with the same attribute information as the preset target attribute information as the current frame; the target attribute information is the attribute information of the tracking target;
  • the second determining module 505 is configured to determine a motion trajectory of the tracking target according to the position information of the current frame.
  • In an embodiment of the present application, the first determining module 502 may be specifically configured to: determine a type of the tracking target as a target type; detect the target image to determine multiple candidate frames containing moving objects; and select, from the multiple candidate frames, a candidate frame of a moving object of the same type as the target type as a target frame.
  • the target type may include one or more of a vehicle, a person, and a human face.
  • the first determining module 502 may be further configured to determine an identifier of each target frame; wherein the identifiers of all target frames including the same moving object are the same;
  • the extraction module 503 may specifically be used to determine whether a target identifier corresponding to a tracking target is recorded; if not, extract attribute information of each target frame;
  • the second obtaining module 504 may be further configured to, after obtaining the target frame with the same attribute information as the preset target attribute information as the current frame, use the identifier of the current frame as the target identifier and record the correspondence between the target identifier and the tracking target.
  • the second obtaining module 504 may be further configured to, if it is determined that the target identifier is recorded, obtain a target frame with the same identifier as the target identifier from the multiple target frames as the current frame.
  • In an embodiment of the present application, referring to the second schematic structural diagram of the target tracking device shown in FIG. 6, based on FIG. 5, the device may further include:
  • a third determining unit 506, configured to determine motion information of a tracking target
  • a fourth determining unit 507 configured to determine target position information of the image acquisition device according to the motion information
  • the sending unit 508 is configured to send the target position information to the image acquisition device, so that the image acquisition device adjusts the position according to the target position information.
  • the target position information may include a pitch angle, a yaw angle, and a roll angle of the image acquisition device.
  • the tracking target is a vehicle
  • the attribute information includes a vehicle color and a vehicle type.
  • the electronic device can determine the motion trajectory of the tracking target through the attribute information.
  • the attribute information is structured data, and the complexity of extracting the attribute information is lower than the complexity of calculating the confidence level, which improves the efficiency of achieving target tracking and further optimizes the tracking effect.
  • Based on the same inventive concept, an embodiment of the present application further provides an electronic device, as shown in FIG. 7, including a processor 701 and a memory 702; the memory 702 is configured to store a computer program, and the processor 701 is configured to implement any of the target tracking method embodiments shown in FIG. 1 to FIG. 4 when executing the computer program stored in the memory 702.
  • The target tracking method includes: acquiring a target image containing a tracking target; detecting the target image to determine multiple target frames containing moving objects; extracting attribute information of each target frame; obtaining, from the multiple target frames, a target frame whose attribute information is the same as preset target attribute information, as the current frame, where the target attribute information is the attribute information of the tracking target; and determining the motion trajectory of the tracking target according to the position information of the current frame.
  • the electronic device can determine the motion trajectory of the tracking target through the attribute information.
  • the attribute information is structured data, and the complexity of extracting the attribute information is lower than the complexity of calculating the confidence level, which improves the efficiency of achieving target tracking and further optimizes the tracking effect.
  • The memory may include a RAM (Random Access Memory) and may also include an NVM (Non-Volatile Memory), for example, at least one magnetic disk memory.
  • the memory may also be at least one storage device located far from the foregoing processor.
  • The processor may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • Based on the same inventive concept, an embodiment of the present application further provides a machine-readable storage medium. The machine-readable storage medium stores a computer program, and when the computer program is executed by a processor, any of the target tracking method embodiments shown in FIG. 1 to FIG. 4 is implemented. The target tracking method may include: acquiring a target image containing a tracking target; detecting the target image to determine multiple target frames containing moving objects; extracting attribute information of each target frame; obtaining, from the multiple target frames, a target frame whose attribute information is the same as preset target attribute information, as the current frame, where the target attribute information is the attribute information of the tracking target; and determining the motion trajectory of the tracking target according to the position information of the current frame.
  • the electronic device can determine the motion trajectory of the tracking target through the attribute information.
  • the attribute information is structured data, and the complexity of extracting the attribute information is lower than the complexity of calculating the confidence level, which improves the efficiency of achieving target tracking and further optimizes the tracking effect.
  • an embodiment of the present application further provides a target tracking system, which includes an image acquisition device and any one of the above target tracking devices.
  • Based on the same inventive concept, an embodiment of the present application further provides a computer program. When the computer program is executed by a processor, any of the target tracking method embodiments shown in FIG. 1 to FIG. 4 is implemented.
  • The target tracking method may include: acquiring a target image containing a tracking target; detecting the target image to determine multiple target frames containing moving objects; extracting attribute information of each target frame; obtaining, from the multiple target frames, a target frame whose attribute information is the same as preset target attribute information, as the current frame, where the target attribute information is the attribute information of the tracking target; and determining the motion trajectory of the tracking target according to the position information of the current frame.
  • the electronic device can determine the motion trajectory of the tracking target through the attribute information.
  • the attribute information is structured data, and the complexity of extracting the attribute information is lower than the complexity of calculating the confidence level, which improves the efficiency of achieving target tracking and further optimizes the tracking effect.
  • Each embodiment in this specification is described in a related manner; for the same or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments.
  • In particular, for the target tracking device, electronic device, machine-readable storage medium, target tracking system, and computer program embodiments, since they are substantially similar to the target tracking method embodiments, the description is relatively simple; for the relevant parts, reference may be made to the partial description of the target tracking method embodiments shown in FIG. 1 to FIG. 4.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A target tracking method, apparatus, and system. The method includes: acquiring a target image containing a tracking target (101); detecting the target image to determine multiple target frames containing moving objects (102); extracting attribute information of each target frame (103); obtaining, from the multiple target frames, a target frame whose attribute information is the same as preset target attribute information, as the current frame, where the target attribute information is the attribute information of the tracking target (104); and determining the motion trajectory of the tracking target according to the position information of the current frame (105). The method can improve the efficiency of target tracking and optimize the tracking effect.

Description

一种目标跟踪方法、装置及系统
本申请要求于2018年6月22日提交中国专利局、申请号为201810654154.0发明名称为“一种目标跟踪方法、装置及系统”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理技术领域,特别是涉及一种目标跟踪方法、装置及系统。
背景技术
目标跟踪是指在连续的图像序列中对运动物体进行跟踪,得到运动物体在每一帧图像中的位置,进而确定运动物体的运动轨迹。目标跟踪在视频监控、自动驾驶和视频娱乐等领域有着广泛的应用。
目前,目标跟踪方法包括:初始化目标运动物体的特征信息,基于当前目标运动物体的运动信息,预测目标运动物体在当前图像中的位置;然后根据预测的位置,从当前图像中确定多个包含运动物体的候选框;根据目标运动物体的特征信息,以及每一候选框包含的运动物体的特征信息,确定每一候选框包含的运动物体的置信度;选取置信度最大的候选框包含的运动物体作为目标运动物体,进而确定为目标运动物体的当前运动轨迹。
在对目标进行跟踪时,每获取到一帧图像的一个包含运动物体的候选框,均需要计算一次置信度。置信度的计算较为复杂。若在一帧图像中确定出多个候选框,置信度的计算将消耗时间较多,这使得实现目标跟踪的效率较低,跟踪效果较差。
发明内容
本申请实施例的目的在于提供一种目标跟踪方法、装置及系统,以提高实现目标跟踪的效率,优化跟踪效果。具体技术方案如下:
为实现上述目的,本申请实施例提供了一种目标跟踪方法,所述方法包括:
获取包含跟踪目标的目标图像;
对所述目标图像进行检测,确定多个包含运动物体的目标框;
提取各个目标框的属性信息;
从多个目标框中,获取属性信息与预设的目标属性信息相同的目标框,作为当前框;所述目标属性信息为所述跟踪目标的属性信息;
根据所述当前框的位置信息,确定所述跟踪目标的运动轨迹。
可选的,所述对所述目标图像进行检测,确定多个包含运动物体的目标框的步骤,包括:
确定所述跟踪目标的类型,作为目标类型;
对所述目标图像进行检测,确定多个包含运动物体的候选框;
从多个候选框中,选择类型与所述目标类型相同的运动物体的候选框,作为目标框。
可选的,所述目标类型包括:车辆、人和人脸中的一种或多种。
可选的,所述方法还包括:
确定每一目标框的标识;其中,包含同一运动物体的所有目标框的标识相同;
所述提取各个目标框的属性信息的步骤,包括:
判断是否记录有所述跟踪目标对应的目标标识;若否,则提取各个目标框的属性信息;
在获取属性信息与预设的目标属性信息相同的目标框,作为当前框之后,还包括:
将当前框的标识作为目标标识,记录所述目标标识与所述跟踪目标的对应关系。
可选的,所述方法还包括:
若判定记录有所述目标标识,则从多个目标框中,获取标识与所述目标标识相同的目标框,作为当前框。
可选的,所述方法还包括:
确定所述跟踪目标的运动信息;
根据所述运动信息,确定图像采集设备的目标位置信息;
将所述目标位置信息发送给所述图像采集设备,以使所述图像采集设备根据所述目标位置信息调整位置。
可选的,所述目标位置信息包括:所述图像采集设备的俯仰角、偏航角和滚转角。
可选的,所述跟踪目标为车辆,所述属性信息包括车辆颜色和车型。
为实现上述目的,本申请实施例还提供了一种目标跟踪装置,所述装置包括:
第一获取模块,用于获取包含跟踪目标的目标图像;
第一确定模块,用于对所述目标图像进行检测,确定多个包含运动物体的目标框;
提取模块,用于提取各个目标框的属性信息;
第二获取模块,用于从多个目标框中,获取属性信息与预设的目标属性信息相同的目标框,作为当前框;所述目标属性信息为所述跟踪目标的属性信息;
第二确定模块,用于根据所述当前框的位置信息,确定所述跟踪目标的运动轨迹。
可选的,所述第一确定模块,具体用于:
确定所述跟踪目标的类型,作为目标类型;
对所述目标图像进行检测,确定多个包含运动物体的候选框;
从多个候选框中,选择类型与所述目标类型相同的运动物体的候选框,作为目标框。
可选的,所述目标类型包括:车辆、人和人脸中的一种或多种。
可选的,所述第一确定模块,还用于确定每一目标框的标识;其中,包含同一运动物体的所有目标框的标识相同;
所述提取模块,具体用于判断是否记录有所述跟踪目标对应的目标标识;若否,则提取各个目标框的属性信息;
所述第二获取模块,还用于在获取属性信息与预设的目标属性信息相同的目标框,作为当前框之后,将当前框的标识作为目标标识,记录所述目标标识与所述跟踪目标的对应关系。
可选的,所述第二获取模块,还用于若判定记录有所述目标标识,则从多个目标框中,获取标识与所述目标标识相同的目标框,作为当前框。
可选的,所述装置还包括:
第三确定单元,用于确定所述跟踪目标的运动信息;
第四确定单元,用于根据所述运动信息,确定图像采集设备的目标位置信息;
发送单元,用于将所述目标位置信息发送给所述图像采集设备,以使所述图像采集设备根据所述目标位置信息调整位置。
可选的,所述目标位置信息包括:所述图像采集设备的俯仰角、偏航角和滚转角。
可选的,所述跟踪目标为车辆,所述属性信息包括车辆颜色和车型。
为实现上述目的,本申请实施例还提供了一种电子设备,包括处理器和存储器;所述存储器,用于存放计算机程序;所述处理器,用于执行所述存储器上所存放的程序,实现上述任一目标跟踪方法步骤。
为实现上述目的,本申请实施例还提供一种目标跟踪系统,包括图像采集设备,以及上述任一目标跟踪装置。
为实现上述目的,本申请实施例还提供了一种机器可读存储介质,所述机器可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述任一目标跟踪方法步骤。
本申请实施例中,电子设备获取到包含跟踪目标的目标图像后,提取包含运动物体的各个目标框的属性信息,从多个目标框中获取属性信息与目标属性信息相同的目标框,作为当前框。之后,电子设备根据当前框的位置信息,确定跟踪目标的运动轨迹。其中,目标属性信息为跟踪目标的属性信息。
可见,本申请实施例提供的技术方案中,电子设备通过属性信息,就可以确定跟踪目标的运动轨迹。属性信息为结构化数据,提取属性信息的复杂度低于计算置信度的复杂度,提高了实现目标跟踪的效率,进而优化了跟踪效果。当然,实施本申请的任一产品或方法必不一定需要同时达到以上所述的所有优点。
附图说明
为了更清楚地说明本申请实施例或相关技术中的技术方案,下面将对实施例或相关技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例提供的目标跟踪方法的第一种流程示意图;
图2为本申请实施例提供的目标跟踪方法的第二种流程示意图;
图3为本申请实施例提供的目标跟踪方法的第三种流程示意图;
图4为本申请实施例提供的目标跟踪方法的第四种流程示意图;
图5为本申请实施例提供的目标跟踪装置的第一种结构示意图;
图6为本申请实施例提供的目标跟踪装置的第二种结构示意图;
图7为本申请实施例提供的电子设备的一种结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
目前,在对目标进行跟踪时,每获取到一帧图像的一个候选框,均需要 根据特征信息计算一次置信度。置信度的计算较为复杂。若在一帧图像中确定出多个候选框,置信度的计算将消耗时间较多,这使得实现目标跟踪的效率较低,跟踪效果较差。
为了提高实现目标跟踪的效率,优化跟踪效果,本申请实施例提供了一种目标跟踪方法、装置及系统。该目标跟踪方法可以应用于图像采集设备,也可以应用于与图像采集设备连接的电子设备等。其中,电子设备可以为手机、平板电脑和台式电脑等。为便于理解,下面均以执行主体为电子设备为例进行说明。
该目标跟踪方法包括:获取包含跟踪目标的目标图像;对目标图像进行检测,确定多个包含运动物体的目标框;提取各个目标框的属性信息;从多个目标框中,获取属性信息与预设的目标属性信息相同的目标框,作为当前框,其中,目标属性信息为跟踪目标的属性信息;根据当前框的位置信息,确定跟踪目标的运动轨迹。
本申请实施例中,电子设备通过属性信息,就可以确定跟踪目标的运动轨迹。属性信息为结构化数据,提取属性信息的复杂度低于计算置信度的复杂度,提高了实现目标跟踪的效率,进而优化了跟踪效果。
下面通过具体实施例,对本申请实施例提供的目标跟踪方法进行说明。
参考图1,图1为本申请实施例提供的目标跟踪方法的第一种流程示意图。该方法包括如下步骤。
步骤101:获取包含跟踪目标的目标图像。
电子设备获取的目标图像,可以为图像采集设备采集后发送来的图像。电子设备获取的目标图像,也可以为用户输入的图像。其中,图像采集设备可以为球机、双目摄像机等。
跟踪目标可以为电子设备在图像中检测到的运动物体。也就是,电子设备获取到图像后,将从图像中检测到的运动物体均作为跟踪目标。
跟踪目标也可以为用户指定的运动物体。也就是,电子设备在进行目标跟踪前,用户预先输入跟踪目标的信息。
步骤102:对目标图像进行检测,确定多个包含运动物体的目标框。
本申请实施例中,电子设备可以采用HOG(Histogram of Oriented Gradient,方向梯度直方图))+SVM(Support Vector Machine,支持向量机)算法、YOLO(You Only Look Once)算法或神经网络模型等,对目标图像进行检测,确定多个包含运动物体的目标框。
在本申请的一个实施例中,上述步骤102可以包括如下步骤。
步骤1021:确定跟踪目标的类型,作为目标类型。
这里,目标类型可以包括:车辆、人和人脸中的一种或多种。目标类型还可以包括其他类型,本申请实施例对此不进行限定。
目标类型可以为用户预先指定的类型,也可以为电子设备在进行图像检测时确定的类型。例如,电子设备获取到图像后,对图像进行检测,确定图像中多个运动物体,以及每一运动物体的类型,存储每一运动物体的类型。当电子设备需要跟踪从图像中检测到的运动物体时,从存储的运动物体的类型中,获取跟踪目标的类型,作为目标类型。
步骤1022:对目标图像进行检测,确定多个包含运动物体的候选框。
运动物体有多种类型,例如人脸、人和车辆等。基于此,电子设备对目标图像进行检测,可以确定多种候选框,例如,包含人脸的候选框、包含人的候选框和包含车辆的候选框等。
本申请实施例不限定步骤1021和1022的执行顺序。
步骤1023:从多个候选框中,选择类型与目标类型相同的运动物体的候选框,作为目标框。
本申请实施例中,电子设备过滤掉多个候选框中类型与目标类型不同的运动物体的候选框,保留类型与目标类型相同的运动物体的候选框,作为目标框。上述类型与目标类型相同的运动物体的候选框,即为包含运动物体的类型与目标类型相同的候选框。
例如,电子设备对目标图像进行检测,所确定的候选框包括:包含人脸的候选框、包含人的候选框和包含车辆的候选框。若跟踪目标为车辆,则电 子设备可以过滤掉包含人脸的候选框和包含人的候选框,保留包含车辆的候选框,将包含车辆的候选框作为目标框。
这样,有效减少了后续处理的候选框的数量,提高了目标跟踪效率。
步骤103:提取各个目标框的属性信息。
本申请实施例中,属性信息为结构化数据,即为电子设备对图像数据进行视频结构化处理后得到的信息。视频结构化就是通过对原始视频进行智能分析,提取出关键信息,并对提取的关键信息进行文本的语义描述。电子设备提取属性信息的复杂度低,远低于计算置信度的复杂度。
目标框的属性信息,即为目标框包含的运动物体的属性信息。目标框的属性信息的种类可以有多种。例如,若目标框包含车辆,属性信息可以包括车辆颜色、车型和车辆品牌等;若目标框包含人脸,属性信息可以包括人脸上关键点构成的多边形和眼间距等。属性信息还可以包括其他信息,此处不再一一赘述。
一种实现方式中,为减少工作量,提高电子设备的工作效率,用户预先设定跟踪目标的属性信息,即用户预先设定目标属性信息。电子设备可根据目标属性信息,提取各个目标框的属性信息。例如,跟踪目标为车辆,用户预先设定目标属性信息包括车辆颜色和车型,则电子设备从目标框中,提取目标框的车辆颜色和车型。
步骤104:从多个目标框中,获取属性信息与预设的目标属性信息相同的目标框,作为当前框。其中,目标属性信息为跟踪目标的属性信息。
本申请实施例中,电子设备从多个目标框中,过滤掉属性信息与目标属性信息不同的目标框,保留属性信息与目标属性信息相同的目标框,作为当前框。上述属性信息与预设的目标属性信息相同的目标框,即为包含运动物体的属性信息与预设的目标属性信息相同的目标框。
例如,目标属性信息为:车辆颜色为“红色”,车型为“小型车”。电子设备提取的目标框的属性信息有:
{目标框1的属性信息1:车辆颜色为“黑色”,车型为“小型车”};
{目标框2的属性信息2:车辆颜色为“黄色”,车型为“SUV型车”};
{目标框3的属性信息3:车辆颜色为“黄色”,车型为“微型车”};
{目标框4的属性信息4:车辆颜色为“红色”,车型为“小型车”}。
此时,电子设备可确定属性信息4与目标属性信息相同,属性信息1-3与目标属性信息不同。电子设备过滤掉目标框1-3,保留目标框4,将目标框4作为当前框。
步骤105:根据当前框的位置信息,确定跟踪目标的运动轨迹。
这里,位置信息可以为当前框在图像坐标系中的位置信息,也可以为当前框在世界坐标系中的位置信息。
电子设备确定当前框的位置信息后,将位置信息添加至跟踪目标的运动轨迹集合中,进而确定跟踪目标的运动轨迹。例如,跟踪目标的运动轨迹集合中包括位置信息1和位置信息2,跟踪目标的运动轨迹为:位置信息1→位置信息2。若电子设备确定当前框的位置信息为位置信息3,将位置信息3添加至跟踪目标的运动轨迹集合中,进而确定跟踪目标的运动轨迹为:位置信息1→位置信息2→位置信息3。
应用本申请实施例提供的技术方案,电子设备通过属性信息,就可以确定跟踪目标的运动轨迹。属性信息为结构化数据,提取属性信息的复杂度低于计算置信度的复杂度,提高了实现目标跟踪的效率,提高了目标跟踪的实时性,进而优化了跟踪效果。
为了进一步提高实现目标跟踪的效率,优化跟踪效果,在本申请的一个实施例中,参考图2所示的目标跟踪方法的第二种流程示意图,基于图1,该包括可以包括如下步骤。
步骤201:获取包含跟踪目标的目标图像。
步骤201与步骤101相同。
步骤202:对目标图像进行检测,确定多个包含运动物体的目标框。
步骤202与步骤102相同。
步骤203:确定每一目标框的标识。其中,包含同一运动物体的所有目标框的标识相同。
在本申请的一个实施例中,电子设备在检测到包含同一运动物体的多个目标框时,为这多个目标框标记同一标识。例如,电子设备为图像1中包含车辆1的目标框标记标识a。当电子设备获取到图像2时,在图像2中同样检测到包含车辆1的目标框,则同样为图像2中包含车辆1的目标框标记标识a。
在本申请的一个实施例中,电子设备对目标图像进行检测,确定多个包含运动物体的候选框时,就为每一候选框标记标识。电子设备确定目标框后,可以直接获取目标框的标识。
步骤204:判断是否记录有跟踪目标对应的目标标识。若否,则执行步骤205。若是,则执行步骤208。
本申请实施例中,目标框的标识与运动物体一一对应,目标框的标识相同,目标框包含的运动物体相同。若记录有跟踪目标对应的目标标识,则电子设备可以直接根据目标标识查找当前框,而不必对比属性信息确定当前框,有效提高了实现目标跟踪的效率。
步骤205:提取各个目标框的属性信息。
步骤205与步骤103相同。
步骤206:从多个目标框中,获取属性信息与预设的目标属性信息相同的目标框,作为当前框。其中,目标属性信息为跟踪目标的属性信息。执行步骤207和209。
步骤206与步骤104相同。
步骤207:将当前框的标识作为目标标识,记录目标标识与跟踪目标的对应关系。
电子设备记录下跟踪目标对应的目标标识,以便于后续电子设备根据记录的目标标识对跟踪目标进行跟踪,提高实现目标跟踪的效率。
步骤208:从多个目标框中,获取标识与目标标识相同的目标框,作为当前框。执行步骤209。
若电子设备中记录有目标标识,则可以从多个目标框中获取标识与目标标识相同的目标框。电子设备获取的目标框的标识与目标标识相同,则可确定获取的目标框包含跟踪目标,将获取的目标框作为当前框。
电子设备直接根据目标标识查找当前框,而不必对比属性信息确定当前框,有效提高了实现目标跟踪的效率。
步骤209:根据当前框的位置信息,确定跟踪目标的运动轨迹。
步骤209与步骤105相同。
本申请实施例中,目标框的标识也是属性信息中的一种。电子设备通过对比目标框的标识与跟踪目标对应的标识,就可以确定跟踪目标的运动轨迹,提高了实现目标跟踪的效率,进而优化了跟踪效果。
为了提高跟踪效果,在本申请的一个实施例中,参考图3所示的目标跟踪方法的第三种流程示意图,基于图1,该包括可以包括:
步骤301:获取包含跟踪目标的目标图像。
步骤302:对目标图像进行检测,确定多个包含运动物体的目标框。
步骤303:提取各个目标框的属性信息。
步骤304:从多个目标框中,获取属性信息与预设的目标属性信息相同的目标框,作为当前框。其中,目标属性信息为跟踪目标的属性信息。
步骤305:根据当前框的位置信息,确定跟踪目标的运动轨迹。
步骤301-305与步骤101-105相同。
步骤306:确定跟踪目标的运动信息。
其中,运动信息可以包括跟踪目标的运动速度和运动方向等。
这里,运动速度可以为在世界坐标系下单位时间内移动了多少米,也可以为在图像坐标系下单位时间内移动了多少个像素点。本申请实施例不进行限定。
运动方向可以为东、南、西、北、东北、东南、西北、西南等。运动方 向也可以为时钟的方向,例如1点钟方向、2点钟方向…12点钟方向等。本申请实施例不进行限定。
步骤306可以在步骤305任意步骤之前或之后执行,例如,步骤304之后步骤306,再之后执行步骤305。本申请实施例对此不进行限定。
步骤307:根据跟踪目标的运动信息,确定图像采集设备的目标位置信息。
本申请实施例中,图像采集设备可以为球机、双目摄像机等。目标位置信息可以包括图像采集设备的俯仰角、偏航角和滚转角等。电子设备根据跟踪目标的运动信息,确定图像采集设备的俯仰角、偏航角和滚转角等位置信息,作为目标位置信息。
步骤308:将目标位置信息发送给图像采集设备。
这样,图像采集设备就可根据目标位置信息调整位置,例如调整俯仰角、偏航角和滚转角等,进而重新确定检测区域,以尽可能的跟踪目标。
下面结合图4所示的目标跟踪方法的第四种流程示意图,对本申请实施例提供的技术方案进行说明。其中,跟踪目标为车辆,目标属性信息为车辆颜色和车型。
步骤1,球机采集包含跟踪目标的图像,并将采集的图像发送给电子设备。
步骤2,电子设备对接收的图像进行检测,确定多个包含运动物体的候选框,并过滤掉包含非车辆的候选框,保留包含车辆的候选框作为目标框。
其中,非车辆包括人和人脸等。
步骤3,电子设备为每一目标框标记标识。包含同一车辆的目标框,标识相同。包含不同车辆的目标框,标识不同。
若电子设备未存储跟踪目标对应的ID(Identity,标识),则执行步骤4。若电子设备存储了跟踪目标对应的ID,则步骤6。
步骤4,电子设备提取各个目标框的属性信息。这里,目标框的属性信息包括车辆颜色和车型。
步骤5,电子设备对比目标框的属性信息和目标属性信息,从多个目标框 中,获取属性信息与预设的目标属性信息相同的目标框,作为当前框。另外,电子设备获取当前框的ID,将当前框的ID作为跟踪目标对应的ID存储。之后,执行步骤7。
也就是,若一目标框的属性信息中车辆颜色与目标属性信息中车辆颜色相同,且该目标框的属性信息中车型与目标属性信息中车型相同,则电子设备将该目标框确定为当前框,获取该目标框的ID,将该目标框的ID作为跟踪目标对应的ID存储。
步骤6,电子设备将目标框的ID与存储的跟踪目标对应的ID进行对比,将ID与跟踪目标对应的ID相同的目标框作为当前框。之后,执行步骤7。
步骤7,电子设备根据当前框的位置信息,确定跟踪目标的运动轨迹。
步骤8,电子设备根据跟踪目标的运动信息,确定球机的目标位置信息,将目标位置信息发送给球机。
球机根据目标位置信息进行调整,以重新确定检测区域,并根据重新确定检测区域采集包含跟踪目标的图像。
另外,电子设备根据球机重新确定的检测区域,对球机发送的图像进行检测,确定目标框。
可见,本申请实施例提供的技术方案中,电子设备通过属性信息,就可以确定跟踪目标的运动轨迹。属性信息为结构化数据,提取属性信息的复杂度低于计算置信度的复杂度,提高了实现目标跟踪的效率,进而优化了跟踪效果。另外,电子设备根据跟踪目标的运动信息,实时更新球机的检测区域,进一步提高了跟踪效果。
基于相同的发明构思,根据上述目标跟踪方法,本申请实施例还提供了一种目标跟踪装置。参考图5,图5为本申请实施例提供的目标跟踪装置的第一种结构示意图。该装置包括:
第一获取模块501,用于获取包含跟踪目标的目标图像;
第一确定模块502,用于对目标图像进行检测,确定多个包含运动物体的目标框;
提取模块503,用于提取各个目标框的属性信息;
第二获取模块504,用于从多个目标框中,获取属性信息与预设的目标属性信息相同的目标框,作为当前框;目标属性信息为跟踪目标的属性信息;
第二确定模块505,用于根据当前框的位置信息,确定跟踪目标的运动轨迹。
在本申请的一个实施例中,第一确定模块502,具体可以用于:
确定跟踪目标的类型,作为目标类型;
对目标图像进行检测,确定多个包含运动物体的候选框;
从多个候选框中,选择类型与目标类型相同的运动物体的候选框,作为目标框。
在本申请的一个实施例中,目标类型可以包括:车辆、人和人脸中的一种或多种。
在本申请的一个实施例中,第一确定模块502,还可以用于确定每一目标框的标识;其中,包含同一运动物体的所有目标框的标识相同;
提取模块503,具体可以用于判断是否记录有跟踪目标对应的目标标识;若否,则提取各个目标框的属性信息;
第二获取模块504,还可以用于在获取属性信息与预设的目标属性信息相同的目标框,作为当前框之后,将当前框的标识作为目标标识,记录目标标识与跟踪目标的对应关系。
在本申请的一个实施例中,第二获取模块504,还可以用于若判定记录有目标标识,则从多个目标框中,获取标识与目标标识相同的目标框,作为当前框。
在本申请的一个实施例中,参考图6所示的目标跟踪装置的第二种结构示意图,基于图5,还可以包括:
第三确定单元506,用于确定跟踪目标的运动信息;
第四确定单元507,用于根据运动信息,确定图像采集设备的目标位置信 息;
发送单元508,用于将目标位置信息发送给图像采集设备,以使图像采集设备根据目标位置信息调整位置。
在本申请的一个实施例中,目标位置信息可以包括:图像采集设备的俯仰角、偏航角和滚转角。
在本申请的一个实施例中,跟踪目标为车辆,属性信息包括车辆颜色和车型。
应用本申请实施例提供的技术方案,电子设备通过属性信息,就可以确定跟踪目标的运动轨迹。属性信息为结构化数据,提取属性信息的复杂度低于计算置信度的复杂度,提高了实现目标跟踪的效率,进而优化了跟踪效果。
基于相同的发明构思,根据上述目标跟踪方法,本申请实施例还提供了一种电子设备,如图7所示,包括处理器701和存储器702;存储器702,用于存放计算机程序;处理器701,用于执行存储器702上所存放的计算机程序时,实现上述图1-图4所示的任一目标跟踪方法实施例。其中,目标跟踪方法,包括:
获取包含跟踪目标的目标图像;
对目标图像进行检测,确定多个包含运动物体的目标框;
提取各个目标框的属性信息;
从多个目标框中,获取属性信息与预设的目标属性信息相同的目标框,作为当前框;目标属性信息为跟踪目标的属性信息;
根据当前框的位置信息,确定跟踪目标的运动轨迹。
应用本申请实施例提供的技术方案,电子设备通过属性信息,就可以确定跟踪目标的运动轨迹。属性信息为结构化数据,提取属性信息的复杂度低于计算置信度的复杂度,提高了实现目标跟踪的效率,进而优化了跟踪效果。
存储器可以包括RAM(Random Access Memory,随机存取存储器),也可以包括NVM(Non-Volatile Memory,非易失性存储器),例如至少一个磁 盘存储器。可选的,存储器还可以是至少一个位于远离前述处理器的存储装置。
处理器可以是通用处理器,包括CPU(Central Processing Unit,中央处理器)、NP(Network Processor,网络处理器)等;还可以是DSP(Digital Signal Processing,数字信号处理器)、ASIC(Application Specific Integrated Circuit,专用集成电路)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)或其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
基于相同的发明构思,根据上述目标跟踪方法,本申请实施例还提供了一种机器可读存储介质,机器可读存储介质内存储有计算机程序,计算机程序被处理器执行时实现上述图1-图4所示的任一目标跟踪方法实施例。其中,目标跟踪方法,可包括:
获取包含跟踪目标的目标图像;
对目标图像进行检测,确定多个包含运动物体的目标框;
提取各个目标框的属性信息;
从多个目标框中,获取属性信息与预设的目标属性信息相同的目标框,作为当前框;目标属性信息为跟踪目标的属性信息;
根据当前框的位置信息,确定跟踪目标的运动轨迹。
应用本申请实施例提供的技术方案,电子设备通过属性信息,就可以确定跟踪目标的运动轨迹。属性信息为结构化数据,提取属性信息的复杂度低于计算置信度的复杂度,提高了实现目标跟踪的效率,进而优化了跟踪效果。
基于相同的发明构思,根据上述目标跟踪方法,本申请实施例还提供一种目标跟踪系统,包括图像采集设备,以及上述任一目标跟踪装置。
基于相同的发明构思,根据上述目标跟踪方法,本申请实施例还提供了一种计算机程序,计算机程序被处理器执行时实现上述图1-图4所示的任一目标跟踪方法实施例。其中,目标跟踪方法,可包括:
获取包含跟踪目标的目标图像;
对目标图像进行检测,确定多个包含运动物体的目标框;
提取各个目标框的属性信息;
从多个目标框中,获取属性信息与预设的目标属性信息相同的目标框,作为当前框;目标属性信息为跟踪目标的属性信息;
根据当前框的位置信息,确定跟踪目标的运动轨迹。
应用本申请实施例提供的技术方案,电子设备通过属性信息,就可以确定跟踪目标的运动轨迹。属性信息为结构化数据,提取属性信息的复杂度低于计算置信度的复杂度,提高了实现目标跟踪的效率,进而优化了跟踪效果。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中的各个实施例均采用相关的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于目标跟踪装置、电子设备、机器可读存储介质、目标跟踪系统、计算机程序实施例而言,由于其基本相似于目标跟踪方法实施例,所以描述的比较简单,相关之处参见图1-图4所示的目标跟踪方法实施例的部分说明即可。
以上所述仅为本申请的较佳实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和原则之内所作的任何修改、等同替换、改进等,均包含在本申请的保护范围内。

Claims (20)

  1. 一种目标跟踪方法,其特征在于,所述方法包括:
    获取包含跟踪目标的目标图像;
    对所述目标图像进行检测,确定多个包含运动物体的目标框;
    提取各个目标框的属性信息;
    从多个目标框中,获取属性信息与预设的目标属性信息相同的目标框,作为当前框;所述目标属性信息为所述跟踪目标的属性信息;
    根据所述当前框的位置信息,确定所述跟踪目标的运动轨迹。
  2. 根据权利要求1所述的方法,其特征在于,所述对所述目标图像进行检测,确定多个包含运动物体的目标框的步骤,包括:
    确定所述跟踪目标的类型,作为目标类型;
    对所述目标图像进行检测,确定多个包含运动物体的候选框;
    从多个候选框中,选择类型与所述目标类型相同的运动物体的候选框,作为目标框。
  3. 根据权利要求2所述的方法,其特征在于,所述目标类型包括:车辆、人和人脸中的一种或多种。
  4. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    确定每一目标框的标识;其中,包含同一运动物体的所有目标框的标识相同;
    所述提取各个目标框的属性信息的步骤,包括:
    判断是否记录有所述跟踪目标对应的目标标识;若否,则提取各个目标框的属性信息;
    在获取属性信息与预设的目标属性信息相同的目标框,作为当前框之后,还包括:
    将当前框的标识作为目标标识,记录所述目标标识与所述跟踪目标的对 应关系。
  5. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    若判定记录有所述目标标识,则从多个目标框中,获取标识与所述目标标识相同的目标框,作为当前框。
  6. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    确定所述跟踪目标的运动信息;
    根据所述运动信息,确定图像采集设备的目标位置信息;
    将所述目标位置信息发送给所述图像采集设备,以使所述图像采集设备根据所述目标位置信息调整位置。
  7. 根据权利要求6所述的方法,其特征在于,所述目标位置信息包括:所述图像采集设备的俯仰角、偏航角和滚转角。
  8. 根据权利要求1-7任一项所述的方法,其特征在于,所述跟踪目标为车辆,所述属性信息包括车辆颜色和车型。
  9. 一种目标跟踪装置,其特征在于,所述装置包括:
    第一获取模块,用于获取包含跟踪目标的目标图像;
    第一确定模块,用于对所述目标图像进行检测,确定多个包含运动物体的目标框;
    提取模块,用于提取各个目标框的属性信息;
    第二获取模块,用于从多个目标框中,获取属性信息与预设的目标属性信息相同的目标框,作为当前框;所述目标属性信息为所述跟踪目标的属性信息;
    第二确定模块,用于根据所述当前框的位置信息,确定所述跟踪目标的运动轨迹。
  10. 根据权利要求9所述的装置,其特征在于,所述第一确定模块,具体用于:
    确定所述跟踪目标的类型,作为目标类型;
    对所述目标图像进行检测,确定多个包含运动物体的候选框;
    从多个候选框中,选择类型与所述目标类型相同的运动物体的候选框,作为目标框。
  11. 根据权利要求10所述的装置,其特征在于,所述目标类型包括:车辆、人和人脸中的一种或多种。
  12. 根据权利要求9所述的装置,其特征在于,
    所述第一确定模块,还用于确定每一目标框的标识;其中,包含同一运动物体的所有目标框的标识相同;
    所述提取模块,具体用于判断是否记录有所述跟踪目标对应的目标标识;若否,则提取各个目标框的属性信息;
    所述第二获取模块,还用于在获取属性信息与预设的目标属性信息相同的目标框,作为当前框之后,将当前框的标识作为目标标识,记录所述目标标识与所述跟踪目标的对应关系。
  13. 根据权利要求12所述的装置,其特征在于,所述第二获取模块,还用于若判定记录有所述目标标识,则从多个目标框中,获取标识与所述目标标识相同的目标框,作为当前框。
  14. 根据权利要求9所述的装置,其特征在于,还包括:
    第三确定单元,用于确定所述跟踪目标的运动信息;
    第四确定单元,用于根据所述运动信息,确定图像采集设备的目标位置信息;
    发送单元,用于将所述目标位置信息发送给所述图像采集设备,以使所述图像采集设备根据所述目标位置信息调整位置。
  15. 根据权利要求14所述的装置,其特征在于,所述目标位置信息包括:所述图像采集设备的俯仰角、偏航角和滚转角。
  16. 根据权利要求9-15任一项所述的装置,其特征在于,所述跟踪目标 为车辆,所述属性信息包括车辆颜色和车型。
  17. 一种目标跟踪系统,其特征在于,包括图像采集设备,以及权利要求9-16中任一项所述的装置。
  18. 一种电子设备,其特征在于,包括处理器和存储器;所述存储器,用于存放计算机程序;所述处理器,用于执行所述存储器上所存放的程序,实现权利要求1-8任一所述的方法步骤。
  19. 一种机器可读存储介质,其特征在于,存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1-8任一所述的方法步骤。
  20. 一种计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求1-8任一所述的方法步骤。
PCT/CN2019/092027 2018-06-22 2019-06-20 一种目标跟踪方法、装置及系统 WO2019242672A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810654154.0A CN110706247B (zh) 2018-06-22 2018-06-22 一种目标跟踪方法、装置及系统
CN201810654154.0 2018-06-22

Publications (1)

Publication Number Publication Date
WO2019242672A1 true WO2019242672A1 (zh) 2019-12-26

Family

ID=68983454

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/092027 WO2019242672A1 (zh) 2018-06-22 2019-06-20 一种目标跟踪方法、装置及系统

Country Status (2)

Country Link
CN (1) CN110706247B (zh)
WO (1) WO2019242672A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292352A (zh) * 2020-01-20 2020-06-16 杭州电子科技大学 多目标跟踪方法、装置、设备及存储介质
CN112434566A (zh) * 2020-11-04 2021-03-02 深圳云天励飞技术股份有限公司 客流统计方法、装置、电子设备及存储介质
CN112907622A (zh) * 2021-01-20 2021-06-04 厦门市七星通联科技有限公司 视频中目标物体的轨迹识别方法、装置、设备、存储介质
CN113127758A (zh) * 2019-12-31 2021-07-16 深圳云天励飞技术有限公司 物品存放点推送方法、装置、电子设备及存储介质
CN116744102A (zh) * 2023-06-19 2023-09-12 北京拙河科技有限公司 一种基于反馈调节的球机跟踪方法及装置

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111339852B (zh) * 2020-02-14 2023-12-26 阿波罗智联(北京)科技有限公司 追踪方法、装置、电子设备和计算机可读存储介质
CN111898436A (zh) * 2020-06-29 2020-11-06 北京大学 一种基于视觉信号的多目标跟踪处理优化方法
CN112053556B (zh) * 2020-08-17 2021-09-21 青岛海信网络科技股份有限公司 一种交通监控复眼动态识别交通事故自我进化系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739551A (zh) * 2009-02-11 2010-06-16 北京智安邦科技有限公司 运动目标识别方法及系统
US20120163657A1 (en) * 2010-12-24 2012-06-28 Canon Kabushiki Kaisha Summary View of Video Objects Sharing Common Attributes
CN105631418A (zh) * 2015-12-24 2016-06-01 浙江宇视科技有限公司 一种人数统计的方法和装置
CN107105207A (zh) * 2017-06-09 2017-08-29 北京深瞐科技有限公司 目标监控方法、目标监控装置及摄像机

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102149531B1 (ko) * 2015-11-09 2020-08-31 한국전자통신연구원 넷플로우 기반 연결 핑거프린트 생성 및 경유지 역추적 방법
CN106899827A (zh) * 2015-12-17 2017-06-27 杭州海康威视数字技术股份有限公司 图像数据采集、查询、视频监控方法、设备及系统
CN105654512B (zh) * 2015-12-29 2018-12-07 深圳微服机器人科技有限公司 一种目标跟踪方法和装置
CN107403437A (zh) * 2016-05-19 2017-11-28 上海慧流云计算科技有限公司 机器人跟踪物体的方法、装置及机器人
CN105979210B (zh) * 2016-06-06 2019-02-05 深圳市深网视界科技有限公司 一种基于多枪多球摄像机阵列的行人识别系统
CN106845385A (zh) * 2017-01-17 2017-06-13 腾讯科技(上海)有限公司 视频目标跟踪的方法和装置
CN108009473B (zh) * 2017-10-31 2021-08-24 深圳大学 基于目标行为属性视频结构化处理方法、系统及存储装置
CN108171207A (zh) * 2018-01-17 2018-06-15 百度在线网络技术(北京)有限公司 基于视频序列的人脸识别方法和装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739551A (zh) * 2009-02-11 2010-06-16 北京智安邦科技有限公司 运动目标识别方法及系统
US20120163657A1 (en) * 2010-12-24 2012-06-28 Canon Kabushiki Kaisha Summary View of Video Objects Sharing Common Attributes
CN105631418A (zh) * 2015-12-24 2016-06-01 浙江宇视科技有限公司 一种人数统计的方法和装置
CN107105207A (zh) * 2017-06-09 2017-08-29 北京深瞐科技有限公司 目标监控方法、目标监控装置及摄像机

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113127758A (zh) * 2019-12-31 2021-07-16 深圳云天励飞技术有限公司 物品存放点推送方法、装置、电子设备及存储介质
CN113127758B (zh) * 2019-12-31 2024-05-07 深圳云天励飞技术有限公司 物品存放点推送方法、装置、电子设备及存储介质
CN111292352A (zh) * 2020-01-20 2020-06-16 杭州电子科技大学 多目标跟踪方法、装置、设备及存储介质
CN111292352B (zh) * 2020-01-20 2023-08-25 杭州电子科技大学 多目标跟踪方法、装置、设备及存储介质
CN112434566A (zh) * 2020-11-04 2021-03-02 深圳云天励飞技术股份有限公司 客流统计方法、装置、电子设备及存储介质
CN112434566B (zh) * 2020-11-04 2024-05-07 深圳云天励飞技术股份有限公司 客流统计方法、装置、电子设备及存储介质
CN112907622A (zh) * 2021-01-20 2021-06-04 厦门市七星通联科技有限公司 视频中目标物体的轨迹识别方法、装置、设备、存储介质
CN116744102A (zh) * 2023-06-19 2023-09-12 北京拙河科技有限公司 一种基于反馈调节的球机跟踪方法及装置
CN116744102B (zh) * 2023-06-19 2024-03-12 北京拙河科技有限公司 一种基于反馈调节的球机跟踪方法及装置

Also Published As

Publication number Publication date
CN110706247A (zh) 2020-01-17
CN110706247B (zh) 2023-03-07

Similar Documents

Publication Publication Date Title
WO2019242672A1 (zh) 一种目标跟踪方法、装置及系统
WO2019218824A1 (zh) 一种移动轨迹获取方法及其设备、存储介质、终端
US10891465B2 (en) Methods and apparatuses for searching for target person, devices, and media
US9317762B2 (en) Face recognition using depth based tracking
WO2020082258A1 (zh) 一种多目标实时跟踪方法、装置及电子设备
JP2018509678A (ja) ターゲット取得の方法及び装置
CN110852269B (zh) 一种基于特征聚类的跨镜头人像关联分析方法及装置
WO2019057197A1 (zh) 运动目标的视觉跟踪方法、装置、电子设备及存储介质
CN107798313A (zh) 一种人体姿态识别方法、装置、终端和存储介质
EP3531340B1 (en) Human body tracing method, apparatus and device, and storage medium
Yang et al. Binary descriptor based nonparametric background modeling for foreground extraction by using detection theory
CN111553234A (zh) 融合人脸特征与Re-ID特征排序的行人跟踪方法及装置
WO2017107345A1 (zh) 一种图像处理方法及装置
EP4209959A1 (en) Target identification method and apparatus, and electronic device
Zhang et al. Fast moving pedestrian detection based on motion segmentation and new motion features
Ait Abdelali et al. An adaptive object tracking using Kalman filter and probability product kernel
KR20200046152A (ko) 얼굴 인식 방법 및 얼굴 인식 장치
Sadkhan et al. An investigate on moving object tracking and detection in images
WO2020232697A1 (zh) 一种在线人脸聚类的方法及系统
de-la-Calle-Silos et al. Mid-level feature set for specific event and anomaly detection in crowded scenes
CN116012421A (zh) 目标跟踪方法及装置
Li et al. Multitarget tracking of pedestrians in video sequences based on particle filters
CN115035160A (zh) 一种基于视觉跟随的目标追踪方法、装置、设备及介质
CN109145737B (zh) 一种快速人脸识别方法、装置、电子设备及存储介质
WO2020237674A1 (zh) 目标跟踪方法、目标跟踪装置和无人机

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19823221

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19823221

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 30/06/2021)
