WO2020114136A1 - Method and device for evaluating algorithm performance - Google Patents

Method and device for evaluating algorithm performance

Info

Publication number
WO2020114136A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
predicted
target object
value
real
Prior art date
Application number
PCT/CN2019/112914
Other languages
French (fr)
Chinese (zh)
Inventor
刘若鹏
栾琳
季春霖
赵盟盟
Original Assignee
西安光启未来技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 西安光启未来技术研究院
Publication of WO2020114136A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20224 Image subtraction

Definitions

  • This application relates to, but is not limited to, the field of electronic tracking, and in particular to an algorithm performance measurement method and device.
  • MSE: mean square error
  • RMSE: root mean square error
  • The embodiments of the present application provide an algorithm performance measurement method and device, so as to at least solve the problem that performance evaluation schemes for target tracking system algorithms in the related art are not comprehensive enough.
  • An algorithm performance measurement method is provided, including: for video stream data, acquiring prediction data of the target object's movement trajectory predicted by a first algorithm and real data of the target object's movement trajectory, where the first algorithm is used to track the target object; acquiring at least one of the following parameters according to the prediction data and the real data: an average difference, the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding prediction data; a second value, the number of prediction data items with no corresponding real data; and a third value, the number of times the correspondence between the prediction data and the real data changes after the real walking trajectories of multiple target objects cross; and measuring the performance of the first algorithm according to at least one of these parameters.
  • An algorithm performance measurement device is provided, including: a first acquisition module, configured to acquire, for video stream data, prediction data of the target object's movement trajectory predicted by a first algorithm and real data of the target object's movement trajectory, where the first algorithm is used to track the target object; a second acquisition module, configured to acquire at least one of the following parameters according to the prediction data and the real data: an average difference, the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding prediction data; a second value, the number of prediction data items with no corresponding real data; and a third value, the number of times the correspondence between the prediction data and the real data changes after the real walking trajectories of multiple target objects cross; and a measurement module, configured to measure the performance of the first algorithm according to at least one of these parameters, where the first algorithm is used to track the target object.
  • A storage medium is provided, in which a computer program is stored, where the computer program is configured to perform, when run, the steps in any one of the above method embodiments.
  • An electronic device is provided, including a memory and a processor, where a computer program is stored in the memory and the processor is configured to run the computer program to perform the steps in any one of the above method embodiments.
  • Through this scheme, the tracking system algorithm to be evaluated is determined, and one or more parameters describing how the tracking system algorithm tracks the target object in the video are obtained.
  • The one or more parameters measure the tracking system algorithm from multiple aspects, including its stability and accuracy. This solves the problem in the related art that performance evaluation schemes for target tracking system algorithms are not comprehensive enough, and measures the performance of the target tracking system algorithm completely and objectively.
  • FIG. 1 is a block diagram of a hardware structure of a computer terminal of an algorithm performance measurement method according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of a file format according to a specific embodiment of the present application.
  • FIG. 4 is a structural block diagram of an algorithm performance measurement device according to an embodiment of the present application.
  • The scheme of this application is used to measure the performance of a target tracking system algorithm; the system may be an intelligent pedestrian tracking system applied to urban security and other fields.
  • FIG. 1 is a block diagram of a hardware structure of a computer terminal of an algorithm performance measurement method according to an embodiment of the present application.
  • As shown in FIG. 1, the computer terminal 10 may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data.
  • Optionally, the computer terminal may further include a transmission device 106 for communication functions and an input/output device 108.
  • The structure shown in FIG. 1 is merely illustrative and does not limit the structure of the computer terminal described above.
  • For example, the computer terminal 10 may further include more or fewer components than those shown in FIG. 1, or have a configuration different from that shown in FIG. 1.
  • the memory 104 may be used to store software programs and modules of application software, such as program instructions/modules corresponding to the algorithm performance measurement method in the embodiments of the present application.
  • The processor 102 runs the software programs and modules stored in the memory 104 to execute various functional applications and data processing, that is, to implement the above method.
  • the memory 104 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 104 may further include memories remotely provided with respect to the processor 102, and these remote memories may be connected to the computer terminal 10 through a network. Examples of the above network include but are not limited to the Internet, intranet, local area network, mobile communication network, and combinations thereof.
  • the transmission device 106 is used to receive or send data via a network.
  • the above specific example of the network may include a wireless network provided by a communication provider of the computer terminal 10.
  • In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices through a base station so as to communicate with the Internet.
  • the transmission device 106 may be a radio frequency (Radio Frequency, RF) module, which is used to communicate with the Internet in a wireless manner.
  • RF: radio frequency
  • FIG. 2 is a flowchart of an algorithm performance measurement method according to an embodiment of the present application. As shown in FIG. 2, the process includes the following steps:
  • Step S202: For the video stream data, obtain the prediction data of the target object's movement trajectory predicted by the first algorithm and the real data of the target object's movement trajectory, where the first algorithm is used to track the target object;
  • The above video is the video to which the tracking system is applied.
  • the tracking system tracks the walking path of the target object in the video.
  • videos recorded by multiple cameras can be combined to achieve target tracking in a certain range or scene.
  • For example, the video from one camera can only track the trajectory of the target object in a square, whereas combining it with videos of the surrounding environment can track the target object's trajectory across the urban area.
  • Step S204: Acquire at least one of the following parameters according to the prediction data and the real data: an average difference, the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding prediction data; a second value, the number of prediction data items with no corresponding real data; and a third value, the number of times the correspondence between the prediction data and the real data changes after the real walking trajectories of multiple target objects cross;
  • The above average difference may correspond to the position accuracy index in subsequent embodiments;
  • The above first value may correspond to the statistical loss count indicator miss in subsequent embodiments;
  • The second value may correspond to the false alarm indicator fp;
  • The third value may correspond to the cross misjudgment indicator mix. The average of the first value, the second value, and the third value may be called the statistical accuracy index.
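The relationship between these four parameters can be illustrated with a minimal sketch. All names below (TrackingMetrics, position_accuracy, miss, fp, mix) are illustrative assumptions rather than identifiers from the application; only the definitions of the quantities follow the text above.

```python
# Illustrative sketch only: names are assumptions, not the application's own identifiers.
from dataclasses import dataclass

@dataclass
class TrackingMetrics:
    position_accuracy: float  # average difference: mean per-frame gap between predicted and real positions
    miss: float               # first value: real data items with no corresponding prediction
    fp: float                 # second value: predictions with no corresponding real data item
    mix: float                # third value: correspondence changes after real trajectories cross

    def statistical_accuracy(self) -> float:
        # The text describes the statistical accuracy index as the average of miss, fp and mix.
        return (self.miss + self.fp + self.mix) / 3.0

metrics = TrackingMetrics(position_accuracy=4.2, miss=0.8, fp=0.5, mix=0.1)
print(metrics.statistical_accuracy())
```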
  • The solution of the above embodiment calculates the average difference over all frames from the per-frame differences; using this average difference to measure the accuracy of the tracking system algorithm is more accurate and complete.
  • Step S206: Measure the performance of the first algorithm according to at least one of the parameters.
  • Through the above steps, the tracking system algorithm to be evaluated is determined, and one or more parameters describing how the tracking system algorithm tracks the target object in the video are obtained.
  • The one or more parameters measure the tracking system algorithm from multiple aspects, including its stability and accuracy. This solves the problem in the related art that performance evaluation schemes for target tracking system algorithms are not comprehensive enough, and measures the performance of the target tracking system algorithm completely and objectively.
  • Optionally, obtaining the prediction data of the target object's movement trajectory predicted by the first algorithm and the real data of the target object's movement trajectory includes: for each frame of the video stream data, acquiring the predicted position of the target object predicted by the first algorithm and the real position of the target object; and determining that the prediction data includes the predicted position and the real data includes the real position.
  • Optionally, obtaining the prediction data of the target object's movement trajectory predicted by the first algorithm and the real data of the target object's movement trajectory includes: storing, in the video stream data, a mark for first information of the target object, where the first information includes at least one of the following: the number of video frames, the identification (ID) of the target object in each frame image, the face size of the target object, and the coordinates of the center point of the target object's face;
  • and obtaining the prediction data and the real data about the target object according to the stored, marked first information.
  • In the above optional embodiment, the target object's activity is fully marked and recorded, and the marking information uses a unified format across the entire video; for example, a person keeps a unique ID from the beginning to the end of the video, which facilitates later data analysis.
  • Optionally, the way of obtaining the first value and/or the second value includes: for each frame of the video stream data, obtaining the prediction data and the real data; taking the number of real data items with no corresponding prediction data as a third value, obtaining a second average difference of the video stream data according to the third values of the multiple frames, and using the second average difference as the first value; and taking the number of prediction data items with no corresponding real data as a fourth value, obtaining a third average difference of the video stream data according to the fourth values of the multiple frames, and using the third average difference as the second value.
  • The solution of the above embodiment calculates the second average difference and the third average difference over all frames from the per-frame counts; using the second average difference or the third average difference to measure the accuracy of the tracking system algorithm is more accurate and complete.
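As a hedged illustration of the per-frame counting just described, the sketch below assumes each frame's annotations are a mapping from target ID to position; the data layout and function names are assumptions, not the application's implementation.

```python
# Sketch under an assumed data layout: each frame maps a target ID to a position, e.g.
# real_frames = [{"p1": (10, 20), "p2": (30, 40)}, ...]; predicted_frames has the same shape.

def average_miss_and_fp(real_frames, predicted_frames):
    miss_counts, fp_counts = [], []
    for real, pred in zip(real_frames, predicted_frames):
        # per-frame count of real targets with no corresponding prediction (miss)
        miss_counts.append(len(set(real) - set(pred)))
        # per-frame count of predictions with no corresponding real target (fp)
        fp_counts.append(len(set(pred) - set(real)))
    n = max(len(miss_counts), 1)
    # Averaging over frames yields the "second average difference" (used as the first value)
    # and the "third average difference" (used as the second value) described above.
    return sum(miss_counts) / n, sum(fp_counts) / n

real_frames = [{"p1": (10, 20), "p2": (30, 40)}, {"p1": (12, 21)}]
predicted_frames = [{"p1": (11, 19)}, {"p1": (12, 22), "p3": (90, 90)}]
print(average_miss_and_fp(real_frames, predicted_frames))  # (0.5, 0.5)
```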
  • Optionally, the way of obtaining the third value includes: for the video stream data, after the real walking trajectories of multiple target objects cross, acquiring the prediction data of the multiple target objects predicted by the first algorithm; obtaining the number of changes in the correspondence between the prediction data and the real data before and after the real walking trajectories cross; and obtaining a fourth average difference of the video stream data according to the numbers of changes over the multiple frames, and using the fourth average difference as the third value.
  • The solution of the above embodiment calculates the fourth average difference over all frames from the per-frame counts; using the fourth average difference to measure the accuracy of the tracking system algorithm is more accurate and complete.
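A minimal sketch of counting correspondence changes around a trajectory crossing is given below. The matching convention (a per-target mapping from real ID to predicted ID before and after each crossing) is an assumption made for illustration.

```python
# Sketch only: the matching convention is an assumption. An "assignment" gives, per frame,
# the predicted ID associated with each real target ID, e.g. {"p1": "p1", "p2": "p2"}.

def count_identity_changes(assignment_before, assignment_after):
    """Count real targets whose predicted identity changed across a trajectory crossing."""
    changes = 0
    for real_id, pred_before in assignment_before.items():
        pred_after = assignment_after.get(real_id)
        if pred_after is not None and pred_after != pred_before:
            changes += 1
    return changes

def average_mix(crossings):
    """crossings: list of (assignment_before, assignment_after) pairs, one per crossing event."""
    if not crossings:
        return 0.0
    return sum(count_identity_changes(b, a) for b, a in crossings) / len(crossings)

# Two pedestrians cross and the tracker swaps their identities afterwards:
before = {"p1": "p1", "p2": "p2"}
after = {"p1": "p2", "p2": "p1"}
print(average_mix([(before, after)]))  # 2.0
```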
  • Target tracking evaluation methods in the related art mostly address single-target or multi-target simple-path scenarios and can only measure the performance of the tracking algorithm itself; there is no reasonable evaluation method for the overall performance of pedestrian detection, tracking, and recognition in complex scenes. Measuring the overall performance of a pedestrian tracking system requires measuring the combined performance of every module completely, rather than considering only tracking accuracy as traditional methods do. The scheme of this application is proposed for this problem in combination with practical applications.
  • This application provides a complete method system for measuring the accuracy of a pedestrian tracking and identification system.
  • The system may include a target detection module, a face ID recognition module, and a target trajectory tracking module.
  • the system can be used in the field of urban security.
  • This application can comprehensively and objectively measure the accuracy and performance of the tracking and identification system, and has very important application value for the optimization and evaluation of product development.
  • the complete pedestrian intelligent tracking system in this application should include key functional modules such as target detection, path tracking, and face recognition.
  • This application provides a complete set of methods for measuring the accuracy of pedestrian tracking and identification systems.
  • the system is divided into three parts: data preprocessing methods, position accuracy indicators, and statistical accuracy indicators.
  • Data preprocessing refers to cropping and marking the video data to be evaluated, including face cropping, name unification, frame-number alignment, and file-format unification.
  • The position accuracy index refers to the error between the trajectory predicted by target tracking and the actual trajectory in the video; it reflects the ability of the system's tracking algorithm to predict the target position.
  • The statistical accuracy indicators are divided into three types, which respectively describe the performance of the system's target detection module, the face recognition module, and the tracking module when targets cross.
  • This evaluation system can completely and objectively measure the performance of an intelligent pedestrian tracking system and makes up for the shortcomings of traditional tracking evaluation methods.
  • Tracking evaluation methods in the related art mostly preprocess the data with simple position marking, whereas the purpose of this system is to measure detection, tracking, and recognition effects comprehensively; therefore the pedestrian target's activity must be fully marked and recorded.
  • The content to be recorded includes: the number of video frames, the IDs of the different people in each frame (the same person must have the same ID in different frames), the width and height of each pedestrian's face, the coordinates of the center point of each pedestrian's face, and so on. After the above records are obtained, they are organized into a program-readable .json file.
  • FIG. 3 is a schematic diagram of the file format according to specific embodiment 1 of the present application.
  • As shown in FIG. 3, box 1 is the outer field of the complete data; box 2 is the video frame number field; box 3 is a fixed field; box 4 is the pedestrian name ID field; box 5 is the file path field.
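Since FIG. 3 itself is not reproduced here, the sketch below is only a hypothetical reconstruction of an annotation file with the five labeled fields (outer data field, video frame number field, fixed field, pedestrian name ID field, file path field); every key name is invented for illustration, and the real schema is the one defined by the figure.

```python
# Hypothetical annotation record mirroring the five fields described for FIG. 3.
# All key names are invented for illustration; the actual schema is defined by the figure.
import json

annotation = {
    "data": {                          # box 1: outer field of the complete data
        "frame_0001": {                # box 2: video frame number field
            "faces": {                 # box 3: fixed field
                "person_A": {          # box 4: pedestrian name ID field
                    "center": [320, 180],  # face center point coordinates
                    "size": [64, 80],      # face width and height
                },
            },
        },
    },
    "video_path": "path/to/video.mp4", # box 5: file path field
}

with open("annotations.json", "w", encoding="utf-8") as f:
    json.dump(annotation, f, ensure_ascii=False, indent=2)
```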
  • This data preprocessing method can completely record the activity trajectories of all pedestrians at all times to achieve an objective measurement of the performance of the entire system.
  • the position accuracy index can measure the accuracy of the tracking algorithm.
  • On the basis of the completed data preprocessing, the correspondence between real targets and the system's prediction results must first be established; this can be realized by one-to-one matching on frame number and ID name.
  • After the target correspondences are established, the error between the predicted position coordinates and the real position coordinates of each target in each frame can be calculated, and all errors over all frames are summed.
  • The average error is then obtained by dividing by the number of predicted-real correspondence pairs. This index measures the performance of the system's tracking module.
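A minimal sketch of this position accuracy computation is shown below, assuming per-frame annotations keyed by target ID and using Euclidean distance for the position error; the distance choice and data layout are assumptions, since the text only speaks of the error between predicted and real position coordinates.

```python
# Sketch of the position accuracy index under assumed per-frame annotations keyed by target ID.
# Euclidean distance is an assumption made for illustration.
import math

def position_accuracy(real_frames, predicted_frames):
    total_error, pair_count = 0.0, 0
    for real, pred in zip(real_frames, predicted_frames):
        for target_id, (rx, ry) in real.items():
            if target_id in pred:                  # correspondence by frame number and ID
                px, py = pred[target_id]
                total_error += math.hypot(px - rx, py - ry)
                pair_count += 1
    # Sum of all errors over all frames, averaged over the number of predicted-real pairs.
    return total_error / pair_count if pair_count else float("nan")

real_frames = [{"p1": (10.0, 20.0)}, {"p1": (12.0, 21.0)}]
predicted_frames = [{"p1": (11.0, 19.0)}, {"p1": (12.0, 23.0)}]
print(position_accuracy(real_frames, predicted_frames))
```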
  • The statistical accuracy index measures the combined performance of the system's target detection module, ID recognition module, and tracking module. It is divided into three indicators: the statistical loss count indicator miss, the false alarm indicator fp, and the cross misjudgment indicator mix; the statistical accuracy index is the average of these three indicators. As with the position accuracy index, the statistical accuracy index is calculated on the basis of the established correspondence between real targets and the system's prediction results.
  • The statistical loss count indicator miss refers to the number of targets present in the real data that are missing from the prediction data; the loss is caused by missed detections in the detection module or by the tracking module losing the target. The false alarm indicator fp refers to targets that erroneously appear in the prediction data but do not exist in the real data; fp is caused by over-detection in the detection module or misrecognition in the face recognition module. The cross misjudgment indicator mix refers to cases where, at trajectory intersections, the system's prediction is exactly the opposite of the real targets; mix is caused by erroneous tracking in the tracking module or misrecognition in the recognition module.
  • For each of these indicators, the counts are first summed over all targets in all frames and then divided by the number of predicted-real correspondence pairs to obtain an average.
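The normalisation described above might look like the following sketch, where each indicator's per-frame counts are summed and divided by the number of predicted-real correspondence pairs before the three indicators are averaged; the numbers and helper name are illustrative assumptions only.

```python
# Sketch of the normalisation described above: each indicator is summed over all targets in
# all frames and divided by the number of predicted-real correspondence pairs. The per-frame
# counting itself is assumed to come from sketches such as the ones shown earlier.
def normalised_indicator(per_frame_counts, correspondence_pairs):
    if correspondence_pairs == 0:
        return float("nan")
    return sum(per_frame_counts) / correspondence_pairs

miss = normalised_indicator([1, 0, 2], correspondence_pairs=30)
fp = normalised_indicator([0, 1, 0], correspondence_pairs=30)
mix = normalised_indicator([0, 0, 1], correspondence_pairs=30)
statistical_accuracy_index = (miss + fp + mix) / 3.0  # average of the three indicators
print(statistical_accuracy_index)
```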
  • Using the above indicator system, the performance of a pedestrian tracking and recognition system can be fully measured and evaluated, which is of great significance for system development and testing.
  • This application provides a complete set of methods for measuring the accuracy of pedestrian tracking and recognition systems. Because of the complexity of the system's modules, traditional evaluation indicators cannot achieve the desired evaluation effect. By establishing data preprocessing and two broad categories of indicators, this application comprehensively and objectively measures the combined performance of each module of the system, which is of important evaluation significance for product development and performance assessment.
  • The method according to the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform; of course, it can also be implemented by hardware, but in many cases the former is the better implementation.
  • The part of the technical solution of the present application that contributes over the existing technology can essentially be embodied in the form of a software product.
  • The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions that enable a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
  • an algorithm performance measurement device is also provided.
  • The device is used to implement the above embodiments and preferred implementations, and what has already been described will not be repeated.
  • As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function.
  • Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
  • FIG. 4 is a structural block diagram of an algorithm performance measurement device according to an embodiment of the present application. As shown in FIG. 4, the device includes:
  • the first obtaining module 42 is configured to obtain, for the video stream data, the prediction data of the target object's movement trajectory predicted by the first algorithm and the real data of the target object's movement trajectory, where the first algorithm is used to track the target object;
  • the second obtaining module 44 is configured to acquire at least one of the following parameters according to the prediction data and the real data: an average difference, the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding prediction data; a second value, the number of prediction data items with no corresponding real data; and a third value, the number of times the correspondence between the prediction data and the real data changes after the real walking trajectories of multiple target objects cross;
  • the measurement module 46 is configured to measure the performance of the first algorithm according to at least one of the parameters, wherein the first algorithm is used to track the target object.
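A minimal sketch of how the three modules could be wired together is shown below; the class layout, constructor arguments, and method names are assumptions for illustration and not the application's actual implementation.

```python
# Illustrative wiring of the described modules (42, 44, 46); all names are assumptions.
class AlgorithmPerformanceDevice:
    def __init__(self, acquire_data, compute_parameters, evaluate):
        self.first_obtaining_module = acquire_data         # module 42: prediction and real data
        self.second_obtaining_module = compute_parameters  # module 44: the four parameters
        self.measurement_module = evaluate                 # module 46: performance measurement

    def run(self, video_stream):
        predicted, real = self.first_obtaining_module(video_stream)
        parameters = self.second_obtaining_module(predicted, real)
        return self.measurement_module(parameters)

device = AlgorithmPerformanceDevice(
    acquire_data=lambda video: ([], []),
    compute_parameters=lambda pred, real: {"miss": 0.0, "fp": 0.0, "mix": 0.0},
    evaluate=lambda params: sum(params.values()) / len(params),
)
print(device.run(video_stream=None))
```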
  • Through the above device, the tracking system algorithm to be evaluated is determined, and one or more parameters describing how the tracking system algorithm tracks the target object in the video are obtained.
  • The one or more parameters measure the tracking system algorithm from multiple aspects, including its stability and accuracy. This solves the problem in the related art that performance evaluation schemes for target tracking system algorithms are not comprehensive enough, and measures the performance of the target tracking system algorithm completely and objectively.
  • Optionally, the first obtaining module 42 is configured to obtain, for each frame of the video stream data, the predicted position of the target object predicted by the first algorithm and the real position of the target object, and to determine that the prediction data includes the predicted position and the real data includes the real position.
  • Optionally, the first obtaining module 42 is further configured to store, in the video stream data, a mark for first information of the target object, where the first information includes at least one of the following: the number of video frames, the identification ID of the target object in each frame image, the face size of the target object, and the coordinates of the center point of the target object's face; and to obtain the prediction data and the real data about the target object according to the stored, marked first information.
  • It should be noted that the above modules can be implemented by software or hardware; for the latter, they can be implemented in, but not limited to, the following ways: the above modules are all located in the same processor, or the above modules are located in different processors in any combination.
  • the embodiments of the present application also provide a storage medium.
  • the above storage medium may be set to store program code for performing the following steps:
  • an average difference, the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position;
  • the first value, the number of real data items with no corresponding prediction data;
  • the second value, the number of prediction data items with no corresponding real data;
  • the third value, the number of times the correspondence between the prediction data and the real data changes after the real walking trajectories of multiple target objects cross;
  • Optionally, the above storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, and other media that can store program code.
  • An embodiment of the present application further provides an electronic device, including a memory and a processor, where the computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any one of the foregoing method embodiments.
  • the electronic device may further include a transmission device and an input-output device, where the transmission device is connected to the processor, and the input-output device is connected to the processor.
  • the foregoing processor may be configured to perform the following steps through a computer program:
  • an average difference, the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position;
  • the first value, the number of real data items with no corresponding prediction data;
  • the second value, the number of prediction data items with no corresponding real data;
  • the third value, the number of times the correspondence between the prediction data and the real data changes after the real walking trajectories of multiple target objects cross;
  • Obviously, those skilled in the art should understand that the above modules or steps of the present application can be implemented with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in a different order than described here,
  • or they may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module. In this way, this application is not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a method and device for evaluating algorithm performance. The method comprises: determining a tracking-system algorithm to be evaluated, and obtaining one or more parameters of the tracking-system algorithm with respect to a tracked target object in a video, wherein the one or more parameters are used to evaluate the tracking-system algorithm comprehensively, including evaluating various aspects such as the stability and accuracy of the tracking-system algorithm. The solution resolves the issue in the related art in which performance evaluation of a target-tracking-system algorithm is not comprehensive enough, and can fully and objectively evaluate the performance of the target-tracking-system algorithm.

Description

Algorithm performance measurement method and device

Technical field

This application relates to, but is not limited to, the field of electronic tracking, and in particular to an algorithm performance measurement method and device.

Background art

In the related art, the most common metric in target tracking evaluation is the mean square error (MSE). MSE refers to the expected squared difference between the true value and the estimated value. In practice, because the expectation is usually difficult to obtain, computing the MSE directly is very difficult. Therefore, a commonly used indicator is the root mean square error (RMSE), which uses sampled values from Monte Carlo simulation to statistically approximate the expected value. RMSE is one of the most commonly used indicators in the field of multi-target tracking. However, the RMSE indicator has several shortcomings: first, it is not a distance in Euclidean space; second, when the number of targets is large, for example hundreds of targets, using RMSE as a multi-target tracking evaluation index becomes overly cumbersome.
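As a hedged illustration of the Monte Carlo approximation of RMSE mentioned above, the sketch below uses a toy scenario (a noisy estimator of a known scalar true value) invented purely for illustration.

```python
# Minimal sketch of approximating RMSE from Monte Carlo samples, as the background describes.
# The toy scenario is invented for illustration only.
import math
import random

def monte_carlo_rmse(true_value, estimator, runs=10000):
    squared_errors = [(estimator() - true_value) ** 2 for _ in range(runs)]
    return math.sqrt(sum(squared_errors) / runs)

random.seed(0)
true_position = 5.0
noisy_estimator = lambda: true_position + random.gauss(0.0, 1.0)
print(monte_carlo_rmse(true_position, noisy_estimator))  # close to 1.0, the noise standard deviation
```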
Technical problem

In fact, in the multi-target case, the tracking performance of any single target is increasingly de-emphasized; people pay more attention to evaluating the tracking performance of the overall target group rather than the tracking performance of a single target. Therefore, applying RMSE to complex target situations is cumbersome and not comprehensive. For the problem in the related art that performance evaluation schemes for target tracking system algorithms are not comprehensive enough, no effective solution has yet been proposed.

Technical solution

The embodiments of the present application provide an algorithm performance measurement method and device, so as to at least solve the problem that performance evaluation schemes for target tracking system algorithms in the related art are not comprehensive enough.

According to an embodiment of the present application, an algorithm performance measurement method is provided, including: for video stream data, acquiring prediction data of the target object's movement trajectory predicted by a first algorithm and real data of the target object's movement trajectory, where the first algorithm is used to track the target object; acquiring at least one of the following parameters according to the prediction data and the real data: an average difference, the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding prediction data; a second value, the number of prediction data items with no corresponding real data; and a third value, the number of times the correspondence between the prediction data and the real data changes after the real walking trajectories of multiple target objects cross; and measuring the performance of the first algorithm according to at least one of these parameters.

According to another embodiment of the present application, an algorithm performance measurement device is also provided, including: a first acquisition module, configured to acquire, for video stream data, prediction data of the target object's movement trajectory predicted by a first algorithm and real data of the target object's movement trajectory, where the first algorithm is used to track the target object; a second acquisition module, configured to acquire at least one of the following parameters according to the prediction data and the real data: an average difference, the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding prediction data; a second value, the number of prediction data items with no corresponding real data; and a third value, the number of times the correspondence between the prediction data and the real data changes after the real walking trajectories of multiple target objects cross; and a measurement module, configured to measure the performance of the first algorithm according to at least one of these parameters, where the first algorithm is used to track the target object.

According to yet another embodiment of the present application, a storage medium is also provided, in which a computer program is stored, where the computer program is configured to perform, when run, the steps in any one of the above method embodiments.

According to yet another embodiment of the present application, an electronic device is also provided, including a memory and a processor, where a computer program is stored in the memory and the processor is configured to run the computer program to perform the steps in any one of the above method embodiments.
Beneficial effects

Through this application, the tracking system algorithm to be evaluated is determined, and one or more parameters describing how the tracking system algorithm tracks the target object in the video are obtained. The one or more parameters measure the tracking system algorithm from multiple aspects, including its stability and accuracy. This scheme solves the problem in the related art that performance evaluation schemes for target tracking system algorithms are not comprehensive enough, and measures the performance of the target tracking system algorithm completely and objectively.

Brief description of the drawings

The drawings described here are used to provide a further understanding of the present application and form a part of the present application. The schematic embodiments of the present application and their descriptions are used to explain the present application and do not constitute an undue limitation on the present application. In the drawings:

FIG. 1 is a block diagram of the hardware structure of a computer terminal for an algorithm performance measurement method according to an embodiment of the present application;

FIG. 2 is a flowchart of an algorithm performance measurement method according to an embodiment of the present application;

FIG. 3 is a schematic diagram of a file format according to specific embodiment 1 of the present application;

FIG. 4 is a structural block diagram of an algorithm performance measurement device according to an embodiment of the present application.

Embodiments of the invention

Hereinafter, the present application will be described in detail with reference to the drawings and in conjunction with the embodiments. It should be noted that, in the case of no conflict, the embodiments in the present application and the features in the embodiments can be combined with each other.

It should be noted that the terms "first" and "second" in the description and claims of the present application and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence.
The scheme of this application is used to measure the performance of a target tracking system algorithm; the system may be an intelligent pedestrian tracking system applied to urban security and other fields.

Embodiment 1

The method embodiment provided in Embodiment 1 of the present application may be executed in a computer terminal or a similar computing device. Taking running on a computer terminal as an example, FIG. 1 is a block diagram of the hardware structure of a computer terminal for an algorithm performance measurement method according to an embodiment of the present application. As shown in FIG. 1, the computer terminal 10 may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data. Optionally, the computer terminal may further include a transmission device 106 for communication functions and an input/output device 108. Those of ordinary skill in the art can understand that the structure shown in FIG. 1 is merely illustrative and does not limit the structure of the computer terminal described above. For example, the computer terminal 10 may further include more or fewer components than those shown in FIG. 1, or have a configuration different from that shown in FIG. 1.

The memory 104 may be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the algorithm performance measurement method in the embodiments of the present application. The processor 102 runs the software programs and modules stored in the memory 104 to execute various functional applications and data processing, that is, to implement the above method. The memory 104 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memories remotely located with respect to the processor 102, and these remote memories may be connected to the computer terminal 10 through a network. Examples of such a network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.

The transmission device 106 is used to receive or send data via a network. A specific example of the above network may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In one example, the transmission device 106 may be a radio frequency (RF) module, which is used to communicate with the Internet in a wireless manner.

In this embodiment, a method for measuring the performance of an algorithm running on the above computer terminal is provided. FIG. 2 is a flowchart of an algorithm performance measurement method according to an embodiment of the present application. As shown in FIG. 2, the process includes the following steps:

Step S202: For the video stream data, obtain the prediction data of the target object's movement trajectory predicted by the first algorithm and the real data of the target object's movement trajectory, where the first algorithm is used to track the target object;

The above video is the video to which the tracking system is applied, and the tracking system tracks the walking trajectory of the target object in the video. Optionally, videos recorded by multiple cameras can be combined to track the target within a certain range or scene. For example, the video from one camera can only track the trajectory of the target object in a square, whereas combining it with videos of the surrounding environment can track the target object's trajectory across the urban area.
Step S204: Acquire at least one of the following parameters according to the prediction data and the real data: an average difference, the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding prediction data; a second value, the number of prediction data items with no corresponding real data; and a third value, the number of times the correspondence between the prediction data and the real data changes after the real walking trajectories of multiple target objects cross;

The above average difference may correspond to the position accuracy index in subsequent embodiments, the above first value may correspond to the statistical loss count indicator miss in subsequent embodiments, the second value may correspond to the false alarm indicator fp, and the third value may correspond to the cross misjudgment indicator mix; the average of the first value, the second value, and the third value may be called the statistical accuracy index.

The solution of the above embodiment calculates the average difference over all frames from the per-frame differences; using this average difference to measure the accuracy of the tracking system algorithm is more accurate and complete.

Step S206: Measure the performance of the first algorithm according to at least one of the parameters.

Through the above steps, the tracking system algorithm to be evaluated is determined, and one or more parameters describing how the tracking system algorithm tracks the target object in the video are obtained. The one or more parameters measure the tracking system algorithm from multiple aspects, including its stability and accuracy. This scheme solves the problem in the related art that performance evaluation schemes for target tracking system algorithms are not comprehensive enough, and measures the performance of the target tracking system algorithm completely and objectively.

Optionally, obtaining the prediction data of the target object's movement trajectory predicted by the first algorithm and the real data of the target object's movement trajectory includes: for each frame of the video stream data, acquiring the predicted position of the target object predicted by the first algorithm and the real position of the target object; and determining that the prediction data includes the predicted position and the real data includes the real position.

Optionally, obtaining the prediction data of the target object's movement trajectory predicted by the first algorithm and the real data of the target object's movement trajectory includes: storing, in the video stream data, a mark for first information of the target object, where the first information includes at least one of the following: the number of video frames, the identification (ID) of the target object in each frame image, the face size of the target object, and the coordinates of the center point of the target object's face; and obtaining the prediction data and the real data about the target object according to the stored, marked first information.

In the above optional embodiment, the target object's activity is fully marked and recorded, and the marking information uses a unified format across the entire video; for example, a person keeps a unique ID from the beginning to the end of the video, which facilitates later data analysis.
Optionally, the way of obtaining the first value and/or the second value includes: for each frame of the video stream data, obtaining the prediction data and the real data; taking the number of real data items with no corresponding prediction data as a third value, obtaining a second average difference of the video stream data according to the third values of the multiple frames, and using the second average difference as the first value; and taking the number of prediction data items with no corresponding real data as a fourth value, obtaining a third average difference of the video stream data according to the fourth values of the multiple frames, and using the third average difference as the second value. The solution of the above embodiment calculates the second average difference or the third average difference over all frames from the per-frame counts; using the second average difference or the third average difference to measure the accuracy of the tracking system algorithm is more accurate and complete.

Optionally, the way of obtaining the third value includes: for the video stream data, after the real walking trajectories of multiple target objects cross, acquiring the prediction data of the multiple target objects predicted by the first algorithm; obtaining the number of changes in the correspondence between the prediction data and the real data before and after the real walking trajectories cross; and obtaining a fourth average difference of the video stream data according to the numbers of changes over the multiple frames, and using the fourth average difference as the third value. The solution of the above embodiment calculates the fourth average difference over all frames from the per-frame counts; using the fourth average difference to measure the accuracy of the tracking system algorithm is more accurate and complete.
Target tracking evaluation methods in the related art mostly address single-target or multi-target simple-path scenarios and can only measure the performance of the tracking algorithm itself; there is no reasonable evaluation method for the overall performance of pedestrian detection, tracking, and recognition in complex scenes. Measuring the overall performance of a pedestrian tracking system requires measuring the combined performance of every module completely, rather than considering only tracking accuracy as traditional methods do. The scheme of this application is proposed for this problem in combination with practical applications.

This application provides a complete method system for measuring the accuracy of a pedestrian tracking and recognition system. The system may include a target detection module, a face ID recognition module, and a target trajectory tracking module, and can be used in the field of urban security. This application can comprehensively and objectively measure the accuracy and performance of the tracking and recognition system, and has very important application value for the optimization and evaluation of product development.

A complete intelligent pedestrian tracking system in this application should include key functional modules such as target detection, path tracking, and face recognition. This application provides a complete set of methods for measuring the accuracy of pedestrian tracking and recognition systems. The method system is divided into three parts: a data preprocessing method, a position accuracy index, and statistical accuracy indicators. Data preprocessing refers to cropping and marking the video data to be evaluated, including face cropping, name unification, frame-number alignment, and file-format unification. The position accuracy index refers to the error between the trajectory predicted by target tracking and the actual trajectory in the video; it reflects the ability of the system's tracking algorithm to predict the target position. The statistical accuracy indicators are divided into three types, which respectively describe the performance of the system's target detection module, the face recognition module, and the tracking module when targets cross.

This evaluation system can completely and objectively measure the performance of an intelligent pedestrian tracking system and makes up for the shortcomings of traditional tracking evaluation methods.
下面结合本申请具体实施例进一步说明。The following is further described in conjunction with specific embodiments of the present application.
例子1:数据预处理方法Example 1: Data preprocessing method
相关技术中的跟踪评价方法对数据的预处理多为简单的位置打标,而本体系的目的是全面衡量检测、跟踪、识别效果,因此需要对行人目标活动过程进行全面打标记录。需要记录的内容包括:视频帧数、每帧内不同人的ID(同一个人在不同帧内ID要相同)、行人脸部长宽尺寸、行人脸部中心点坐标等。获取上述记录后,整理为程序可读的.json格式文件,图3是根据本申请具体实施例一的文件格式的示意图,如图3所示,具体内容格式如图3所示,其中1号框为完整数据外层字段;2号框为视频帧数字段;3号框为固定字段;4号框为行人姓名ID字段;5号框为文件路径字段。这样的数据预处理方式可以完整记录所有行人在所有时刻的活动轨迹,以实现对整个系统性能的客观衡量。The tracking evaluation methods in related technologies mostly preprocess the data for simple location marking, and the purpose of this system is to comprehensively measure the detection, tracking, and recognition effects. Therefore, it is necessary to fully mark the pedestrian target activity process. The content to be recorded includes: the number of video frames, the ID of different people in each frame (the same person ID must be the same in different frames), the width of the pedestrian face, the coordinates of the center point of the pedestrian face, etc. After obtaining the above records, it is organized into a program-readable .json format file. FIG. 3 is a schematic diagram of the file format according to the specific embodiment 1 of the present application. As shown in FIG. 3, the specific content format is shown in FIG. 3, of which No. 1 The box is the outer field of the complete data; the box 2 is the digital field of the video frame; the box 3 is the fixed field; the box 4 is the pedestrian name ID field; the box 5 is the file path field. This data preprocessing method can completely record the activity trajectories of all pedestrians at all times to achieve an objective measurement of the performance of the entire system.
Example 2: Position accuracy indicator
The position accuracy indicator measures the accuracy of the tracking algorithm. After data preprocessing is completed, the correspondence between real targets and the system's predictions must first be established, which can be done by matching frame numbers and ID names one to one. Once the correspondence is established, the error between the position coordinates predicted by the system and the real position coordinates of each target in each frame can be calculated; all errors over all frames are summed and then divided by the number of prediction-truth pairs to obtain the average error. This indicator measures the performance of the system's tracking module.
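A minimal sketch of this calculation, assuming both the system's predictions and the ground truth have already been indexed as {frame_id: {person_id: (x, y)}} by a loader such as the one sketched above:

```python
import math


def average_position_error(predictions, ground_truth):
    """Mean Euclidean position error over all (frame, ID) pairs present in both data sets."""
    total_error = 0.0
    pair_count = 0
    for frame_id, true_objects in ground_truth.items():
        pred_objects = predictions.get(frame_id, {})
        for person_id, (tx, ty) in true_objects.items():
            if person_id in pred_objects:        # correspondence established by frame number and ID
                px, py = pred_objects[person_id]
                total_error += math.hypot(px - tx, py - ty)
                pair_count += 1
    return total_error / pair_count if pair_count else 0.0
```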
Example 3: Statistical accuracy indicators
The statistical accuracy indicators measure the combined performance of the system's target detection module, ID recognition module, and tracking module. They comprise three specific indicators: the miss indicator (the number of lost targets), the false alarm indicator fp, and the cross misjudgment indicator mix; the overall statistical accuracy indicator is the average of the three. As with the position accuracy indicator, the statistical accuracy indicators are calculated on the basis of the correspondence established between real targets and the system's predictions. The miss indicator counts targets present in the real data that are missing from the predicted data; misses are caused by the detection module failing to detect a target or by the tracking module losing it. The false alarm indicator fp counts targets that appear in the predicted data but do not exist in the real data; false alarms are caused by spurious detections or by misrecognition in the face recognition module. The cross misjudgment indicator mix counts cases where, at a trajectory crossing, the system's predictions are swapped relative to the real targets; such errors are caused by the tracking module following the wrong target or by the recognition module misidentifying it. For each indicator, the counts are first summed over all targets in all frames and then averaged over the number of prediction-truth pairs.
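A sketch of the per-frame counting behind the miss and fp indicators, using the same assumed indexing; detecting trajectory crossings and identity swaps is more involved, so in this sketch the mix count is simply supplied by the caller:

```python
def statistical_accuracy(predictions, ground_truth, mix_count=0):
    """Average of the miss, fp, and mix indicators, each normalized by the number
    of matched prediction-truth pairs.

    mix_count: number of prediction/real correspondences that changed after
    trajectory crossings (assumed to be counted elsewhere and passed in).
    """
    miss = fp = pairs = 0
    for frame_id in set(ground_truth) | set(predictions):
        true_ids = set(ground_truth.get(frame_id, {}))
        pred_ids = set(predictions.get(frame_id, {}))
        miss += len(true_ids - pred_ids)   # real targets with no corresponding prediction
        fp += len(pred_ids - true_ids)     # predictions with no corresponding real target
        pairs += len(true_ids & pred_ids)
    if pairs == 0:
        return 0.0
    return (miss / pairs + fp / pairs + mix_count / pairs) / 3.0
```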
Using the above indicator system, the performance of a pedestrian tracking and recognition system can be measured and evaluated in full, which is of great significance for system development and testing.
This application provides a complete methodology for measuring the accuracy of a pedestrian tracking and recognition system. Because of the complexity of the system's modules, traditional evaluation indicators cannot evaluate it adequately. By establishing data preprocessing and two main categories of indicators, this application comprehensively and objectively measures the combined performance of each module of the system, providing an important basis for product development and performance evaluation.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
Embodiment 2
This embodiment further provides an algorithm performance measurement device. The device is used to implement the above embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
FIG. 4 is a structural block diagram of an algorithm performance measurement device according to an embodiment of the present application. As shown in FIG. 4, the device includes:
a first acquisition module 42, configured to acquire, for video stream data, prediction data of the target object's action trajectory predicted by a first algorithm and real data of the target object's action trajectory, wherein the first algorithm is used to track the target object;
a second acquisition module 44, configured to acquire at least one of the following parameters according to the prediction data and the real data: an average difference, which is the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, which is the number of real data items without corresponding prediction data; a second value, which is the number of prediction data items without corresponding real data; and a third value, which is the number of correspondences between prediction data and real data that change after the real walking trajectories of multiple target objects cross; and
a measurement module 46, configured to measure the performance of the first algorithm according to at least one of the parameters, wherein the first algorithm is used to track the target object.
With the above device, the tracking system algorithm to be evaluated is determined, and one or more parameters describing how that algorithm tracks the target object in the video are acquired. These parameters measure the tracking system algorithm in all respects, including its stability and accuracy. This solution solves the problem in the related art that performance evaluation schemes for target tracking system algorithms are not comprehensive enough, and measures the performance of the target tracking system algorithm completely and objectively.
Optionally, the first acquisition module 42 is configured to acquire, for frame data of the video stream data, the predicted position of the target object predicted by the first algorithm and the real position of the target object, and to determine that the prediction data includes the predicted position and the real data includes the real position.
Optionally, the first acquisition module 42 is further configured to store, in the video stream data, annotations of first information of the target object, where the first information includes at least one of the following: the video frame number, the identification ID of the target object in each frame of image data, the face size of the target object, and the coordinates of the center point of the target object's face; and to acquire the prediction data and the real data of the target object according to the stored, annotated first information.
It should be noted that each of the above modules may be implemented by software or hardware. For the latter, this may be achieved in, but is not limited to, the following ways: the above modules are all located in the same processor; or the above modules are distributed among different processors in any combination.
Embodiment 3
An embodiment of the present application further provides a storage medium. Optionally, in this embodiment, the storage medium may be configured to store program code for performing the following steps:
S1: for video stream data, acquire prediction data of the target object's action trajectory predicted by a first algorithm and real data of the target object's action trajectory, where the first algorithm is used to track the target object;
S2: acquire at least one of the following parameters according to the prediction data and the real data: an average difference, which is the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, which is the number of real data items without corresponding prediction data; a second value, which is the number of prediction data items without corresponding real data; and a third value, which is the number of correspondences between prediction data and real data that change after the real walking trajectories of multiple target objects cross;
S3: measure the performance of the first algorithm according to at least one of the parameters.
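Putting the pieces together, a minimal end-to-end sketch of steps S1 to S3, reusing the assumed helpers load_annotations, average_position_error, and statistical_accuracy from the earlier examples (the predictions are assumed to be exported in the same .json format as the ground truth):

```python
def measure_first_algorithm(prediction_json, ground_truth_json, mix_count=0):
    # S1: acquire the predicted and the real trajectory data for the video stream
    predictions = load_annotations(prediction_json)
    ground_truth = load_annotations(ground_truth_json)

    # S2: derive the evaluation parameters from the two data sets
    average_difference = average_position_error(predictions, ground_truth)
    statistical_indicator = statistical_accuracy(predictions, ground_truth, mix_count)

    # S3: report the parameters used to measure the first algorithm's performance
    return {
        "average_difference": average_difference,
        "statistical_accuracy": statistical_indicator,
    }
```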
Optionally, in this embodiment, the storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or any other medium that can store program code.
An embodiment of the present application further provides an electronic device, including a memory and a processor. A computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any of the above method embodiments.
Optionally, the electronic device may further include a transmission device and an input/output device, where the transmission device and the input/output device are each connected to the processor.
Optionally, in this embodiment, the processor may be configured to perform the following steps through a computer program:
S1: for video stream data, acquire prediction data of the target object's action trajectory predicted by a first algorithm and real data of the target object's action trajectory, where the first algorithm is used to track the target object;
S2: acquire at least one of the following parameters according to the prediction data and the real data: an average difference, which is the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, which is the number of real data items without corresponding prediction data; a second value, which is the number of prediction data items without corresponding real data; and a third value, which is the number of correspondences between prediction data and real data that change after the real walking trajectories of multiple target objects cross;
S3: measure the performance of the first algorithm according to at least one of the parameters.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementations, and details are not repeated here.
Obviously, those skilled in the art should understand that the above modules or steps of the present application may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; in some cases the steps shown or described may be performed in an order different from that given here, or they may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module. In this way, the present application is not limited to any specific combination of hardware and software.
Industrial applicability
The above are only preferred embodiments of the present application and are not intended to limit the present application. For those skilled in the art, the present application may have various modifications and changes. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall be included within the scope of protection of the present application.

Claims (10)

  1. A method for measuring the performance of an algorithm, comprising:
    acquiring, for video stream data, prediction data of a target object's action trajectory predicted by a first algorithm and real data of the target object's action trajectory, wherein the first algorithm is used to track the target object;
    acquiring at least one of the following parameters according to the prediction data and the real data: an average difference, which is the mean of the position differences of multiple frames of the video stream data, wherein the position difference of each frame is the difference between the predicted position and the real position; a first value, which is the number of real data items without corresponding prediction data; a second value, which is the number of prediction data items without corresponding real data; and a third value, which is the number of correspondences between prediction data and real data that change after the real walking trajectories of multiple target objects cross; and
    measuring the performance of the first algorithm according to at least one of the parameters.
  2. The method according to claim 1, wherein acquiring the prediction data of the target object's action trajectory predicted by the first algorithm and the real data of the target object's action trajectory comprises:
    acquiring, for frame data of the video stream data, the predicted position of the target object predicted by the first algorithm and the real position of the target object; and
    determining that the prediction data comprises the predicted position and the real data comprises the real position.
  3. The method according to claim 1, wherein acquiring the prediction data of the target object's action trajectory predicted by the first algorithm and the real data of the target object's action trajectory comprises:
    storing, in the video stream data, annotations of first information of the target object, wherein the first information comprises at least one of the following: the video frame number, an identification ID of the target object in each frame of image data, a face size of the target object, and coordinates of a center point of the target object's face; and
    acquiring the prediction data and the real data of the target object according to the stored, annotated first information.
  4. The method according to claim 1, wherein acquiring the first value and/or the second value comprises:
    acquiring, for each frame of the video stream data, the prediction data and the real data;
    taking the number of real data items without corresponding prediction data as a third numerical value, acquiring a second average difference of the video stream data according to the third numerical values of multiple frames of data, and taking the second average difference as the first value; and
    taking the number of prediction data items without corresponding real data as a fourth numerical value, acquiring a third average difference of the video stream data according to the fourth numerical values of multiple frames of data, and taking the third average difference as the second value.
  5. The method according to claim 1, wherein acquiring the third value comprises:
    for the video stream data, after the real walking trajectories of a plurality of the target objects cross, acquiring the prediction data of the plurality of target objects predicted by the first algorithm;
    acquiring the number of changes in the correspondence between the prediction data and the real data before and after the real walking trajectories cross; and
    acquiring a fourth average difference of the video stream data according to the numbers of changes of multiple frames of data, and taking the fourth average difference as the third value.
  6. A device for measuring the performance of an algorithm, comprising:
    a first acquisition module, configured to acquire, for video stream data, prediction data of a target object's action trajectory predicted by a first algorithm and real data of the target object's action trajectory, wherein the first algorithm is used to track the target object;
    a second acquisition module, configured to acquire at least one of the following parameters according to the prediction data and the real data: an average difference, which is the mean of the position differences of multiple frames of the video stream data, wherein the position difference of each frame is the difference between the predicted position and the real position; a first value, which is the number of real data items without corresponding prediction data; a second value, which is the number of prediction data items without corresponding real data; and a third value, which is the number of correspondences between prediction data and real data that change after the real walking trajectories of multiple target objects cross; and
    a measurement module, configured to measure the performance of the first algorithm according to at least one of the parameters, wherein the first algorithm is used to track the target object.
  7. The device according to claim 6, wherein the first acquisition module is configured to acquire, for frame data of the video stream data, the predicted position of the target object predicted by the first algorithm and the real position of the target object;
    and to determine that the prediction data comprises the predicted position and the real data comprises the real position.
  8. The device according to claim 6, wherein the first acquisition module is further configured to store, in the video stream data, annotations of first information of the target object, wherein the first information comprises at least one of the following: the video frame number, an identification ID of the target object in each frame of image data, a face size of the target object, and coordinates of a center point of the target object's face;
    and to acquire the prediction data and the real data of the target object according to the stored, annotated first information.
  9. A storage medium, wherein a computer program is stored in the storage medium, and the computer program is configured to perform, when run, the method according to any one of claims 1 to 5.
  10. An electronic device, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to perform the method according to any one of claims 1 to 5.
PCT/CN2019/112914 2018-12-06 2019-10-24 Method and device for evaluating algorithm performance WO2020114136A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811493566.7 2018-12-06
CN201811493566.7A CN111292359A (en) 2018-12-06 2018-12-06 Method and device for measuring performance of algorithm

Publications (1)

Publication Number Publication Date
WO2020114136A1 true WO2020114136A1 (en) 2020-06-11

Family

ID=70974092

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/112914 WO2020114136A1 (en) 2018-12-06 2019-10-24 Method and device for evaluating algorithm performance

Country Status (2)

Country Link
CN (1) CN111292359A (en)
WO (1) WO2020114136A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104462808A (en) * 2014-12-04 2015-03-25 河海大学 Method for fitting safe horizontal displacement and dynamic data of variable sliding window of water level
US9129400B1 (en) * 2011-09-23 2015-09-08 Amazon Technologies, Inc. Movement prediction for image capture
CN107492113A (en) * 2017-06-01 2017-12-19 南京行者易智能交通科技有限公司 A kind of moving object in video sequences position prediction model training method, position predicting method and trajectory predictions method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8582811B2 (en) * 2011-09-01 2013-11-12 Xerox Corporation Unsupervised parameter settings for object tracking algorithms
US20140341465A1 (en) * 2013-05-16 2014-11-20 The Regents Of The University Of California Real-time pose estimation system using inertial and feature measurements
CN104977022B (en) * 2014-04-04 2018-02-27 西北工业大学 Multiple-target system Performance Evaluation emulation mode
CN107679578B (en) * 2017-10-12 2020-03-31 北京旷视科技有限公司 Target recognition algorithm testing method, device and system
CN108364301B (en) * 2018-02-12 2020-09-04 中国科学院自动化研究所 Visual tracking algorithm stability evaluation method and device based on cross-time overlapping rate

Also Published As

Publication number Publication date
CN111292359A (en) 2020-06-16

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19893503

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19893503

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.06.2022)
