WO2020114136A1 - Method and device for evaluating algorithm performance - Google Patents
- Publication number
- WO2020114136A1 (PCT/CN2019/112914)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- predicted
- target object
- value
- real
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
Definitions
- This application relates to, but is not limited to, the field of electronic tracking, and in particular to an algorithm performance measurement method and device.
- MSE: mean square error
- RMSE: root mean square error
- the embodiments of the present application provide an algorithm performance measurement method and device, so as to at least solve the problem that performance evaluation schemes for target tracking system algorithms in the related art are insufficient.
- an algorithm performance measurement method, which includes: for video stream data, acquiring predicted data of the target object's action trajectory predicted by a first algorithm, and real data of the target object's action trajectory, wherein the first algorithm is used to track the target object; acquiring, based on the predicted data and the real data, at least one of the following parameters: an average difference value, the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding predicted data; a second value, the number of predicted data items with no corresponding real data; a third value, the number of changes in the correspondence between the predicted data and the real data after the real walking trajectories of multiple target objects cross; and measuring the performance of the first algorithm according to at least one of the parameters.
- an algorithm performance measurement device, including: a first acquisition module, configured to acquire, for video stream data, predicted data of the target object's action trajectory predicted by a first algorithm, and real data of the target object's action trajectory, wherein the first algorithm is used to track the target object; a second acquisition module, configured to acquire, based on the predicted data and the real data, at least one of the following parameters: an average difference value, the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding predicted data; a second value, the number of predicted data items with no corresponding real data; a third value, the number of changes in the correspondence between the predicted data and the real data after the real walking trajectories of multiple target objects cross; and a measurement module, configured to measure the performance of the first algorithm according to at least one of the parameters, wherein the first algorithm is used to track the target object.
- a storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the above method embodiments when run.
- an electronic device including a memory and a processor, where the memory stores a computer program and the processor is configured to run the computer program to perform the steps in any one of the above method embodiments.
- the solution determines the tracking system algorithm to be evaluated, and obtains one or more parameters of the tracking system algorithm's tracking of the target object in the video.
- the one or more parameters are used to measure the tracking system algorithm in all aspects, including its stability and accuracy.
- the above scheme thus solves the problem that performance evaluation schemes for target tracking system algorithms in the related art are insufficient, and measures the performance of the target tracking system algorithm completely and objectively.
- FIG. 1 is a block diagram of a hardware structure of a computer terminal for an algorithm performance measurement method according to an embodiment of the present application;
- FIG. 2 is a flowchart of an algorithm performance measurement method according to an embodiment of the present application;
- FIG. 3 is a schematic diagram of a file format according to a specific embodiment of the present application.
- FIG. 4 is a structural block diagram of an algorithm performance measurement device according to an embodiment of the present application.
- the scheme of this application is used to measure the performance of the algorithm of a target tracking system, which may be an intelligent pedestrian tracking system applied in urban security and other fields.
- FIG. 1 is a block diagram of a hardware structure of a computer terminal of an algorithm performance measurement method according to an embodiment of the present application.
- the computer terminal 10 may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data.
- the computer terminal may also include a transmission device 106 for communication functions and an input/output device 108.
- FIG. 1 is merely an illustration, which does not limit the structure of the computer terminal described above.
- the computer terminal 10 may further include more or fewer components than those shown in FIG. 1, or have a configuration different from that shown in FIG. 1.
- the memory 104 may be used to store software programs and modules of application software, such as program instructions/modules corresponding to the algorithm performance measurement method in the embodiments of the present application.
- the processor 102 executes the software programs and modules stored in the memory 104 to perform various functional applications and data processing, that is, to implement the above method.
- the memory 104 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
- the memory 104 may further include memories remotely provided with respect to the processor 102, and these remote memories may be connected to the computer terminal 10 through a network. Examples of the above network include but are not limited to the Internet, intranet, local area network, mobile communication network, and combinations thereof.
- the transmission device 106 is used to receive or send data via a network.
- the above specific example of the network may include a wireless network provided by a communication provider of the computer terminal 10.
- the transmission device 106 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices through a base station so as to communicate with the Internet.
- the transmission device 106 may be a radio frequency (Radio Frequency, RF) module, which is used to communicate with the Internet in a wireless manner.
- RF Radio Frequency
- FIG. 2 is a flowchart of a method for measuring performance of an algorithm according to an embodiment of the present application. As shown in FIG. 2, the process includes the following steps:
- Step S202: For the video stream data, obtain the predicted data of the action trajectory of the target object predicted by the first algorithm, and the real data of the action trajectory of the target object, wherein the first algorithm is used to track the target object;
- the above video is the video of the tracking system application.
- the tracking system tracks the walking path of the target object in the video.
- videos recorded by multiple cameras can be combined to achieve target tracking in a certain range or scene.
- for example, the video from a single camera can only track the target object's trajectory within a square, while combining it with video of the surrounding environment can track the target object's trajectory across an urban area.
- Step S204: Acquire at least one of the following parameters according to the predicted data and the real data: an average difference value, the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding predicted data; a second value, the number of predicted data items with no corresponding real data; a third value, the number of changes in the correspondence between the predicted data and the real data after the real walking trajectories of multiple target objects cross;
- the above average difference may be equivalent to the position accuracy index in subsequent embodiments
- the above first value may be equivalent to the statistical miss indicator (miss) in subsequent embodiments
- the second value may be equivalent to the false alarm indicator (fp)
- the third value may be equivalent to the cross misjudgment indicator (mix), and the average of the first value, the second value, and the third value may be called the statistical accuracy index.
- the solution of the above embodiment calculates the average difference over all frames from the per-frame differences, and uses this average difference to measure the accuracy of the tracking system algorithm more accurately and completely.
- Step S206: Measure the performance of the first algorithm according to at least one of the parameters.
- the solution determines the tracking system algorithm to be evaluated, and obtains one or more parameters of the tracking system algorithm's tracking of the target object in the video.
- the one or more parameters are used to measure the tracking system algorithm in all aspects, including its stability and accuracy.
- the above scheme thus solves the problem that performance evaluation schemes for target tracking system algorithms in the related art are insufficient, and measures the performance of the target tracking system algorithm completely and objectively.
- obtaining the predicted data of the target object's action trajectory predicted by the first algorithm and the real data of the target object's action trajectory includes: for the frame data of the video stream data, acquiring the predicted position of the target object predicted by the first algorithm and the real position of the target object; and determining that the predicted data includes the predicted position and the real data includes the real position.
- obtaining the predicted data of the target object's action trajectory predicted by the first algorithm and the real data of the target object's action trajectory includes: in the video stream data, storing a mark for first information of the target object, wherein the first information includes at least one of the following: the number of video frames, the identification ID of the target object in each frame image, the face size of the target object, and the coordinate position of the center point of the target object's face;
- the predicted data and the real data about the target object are then obtained according to the stored marked first information.
- the target object's activity process is fully marked and recorded, and the marking information is unified across the entire video; for example, each person keeps a unique ID from the beginning to the end of the video, which facilitates later data analysis.
- the method for obtaining the first value and/or the second value includes: for each frame of the video stream data, obtaining the predicted data and the real data; taking the number of real data items with no corresponding predicted data as a third value, obtaining a second average difference of the video stream data according to the third values of multiple frames, and using the second average difference as the first value; taking the number of predicted data items with no corresponding real data as a fourth value, obtaining a third average difference of the video stream data according to the fourth values of multiple frames, and using the third average difference as the second value.
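As an illustration, the per-frame computation described above might be sketched as follows (representing each frame's targets as a set of IDs is an assumption for illustration, not part of the application):

```python
def miss_and_fp(real_ids_per_frame, pred_ids_per_frame):
    """Return (first value, second value): per-frame miss and false-alarm
    counts averaged over all frames of the video stream."""
    n = len(real_ids_per_frame)
    # per-frame count of real targets with no corresponding prediction (miss)
    miss_counts = [len(r - p) for r, p in zip(real_ids_per_frame, pred_ids_per_frame)]
    # per-frame count of predictions with no corresponding real target (fp)
    fp_counts = [len(p - r) for r, p in zip(real_ids_per_frame, pred_ids_per_frame)]
    return sum(miss_counts) / n, sum(fp_counts) / n

# Three frames: target "B" is lost in frame 0, spurious "C" appears in frame 1.
real = [{"A", "B"}, {"A", "B"}, {"A"}]
pred = [{"A"}, {"A", "B", "C"}, {"A"}]
first_value, second_value = miss_and_fp(real, pred)
print(first_value, second_value)  # averages: 1/3 miss, 1/3 fp
```

The sets stand in for the correspondence between real and predicted data established per frame; any matching scheme that yields per-frame unmatched counts would fit the same averaging.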
- the solution of the above embodiment calculates the second average difference or the third average difference over all frames from the per-frame differences, and uses the second or third average difference to measure the accuracy of the tracking system algorithm more accurately and completely.
- the method for obtaining the third value includes: for the video stream data, after the real walking trajectories of a plurality of the target objects cross, acquiring the predicted data of the plurality of target objects predicted by the first algorithm; obtaining the number of changes in the correspondence between the predicted data and the real data before and after the real walking trajectories cross; and obtaining a fourth average difference of the video stream data according to the numbers of changes of multiple frames, using the fourth average difference as the third value.
- the solution of the above embodiment calculates the fourth average difference over all frames from the per-frame changes, and uses the fourth average difference to measure the accuracy of the tracking system algorithm more accurately and completely.
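A minimal sketch of counting correspondence changes at a single crossing (the mapping-based representation of the correspondence is an illustrative assumption):

```python
def mix_count(before, after):
    """Count targets whose predicted-to-real correspondence changed across a
    trajectory crossing. `before` and `after` map real target IDs to the
    predicted track IDs they correspond to, before and after the crossing."""
    return sum(1 for rid, tid in before.items() if after.get(rid) != tid)

# Two pedestrians cross; the tracker swaps their track IDs afterwards.
before = {"personA": "track1", "personB": "track2"}
after = {"personA": "track2", "personB": "track1"}
print(mix_count(before, after))  # both correspondences changed -> 2
```

Averaging such counts over the frames (or crossing events) of the video stream yields the fourth average difference used as the third value.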
- the target tracking evaluation methods in related technologies mostly cover single-target or simple multi-target path scenarios and can only measure the performance of the tracking algorithm itself; there is no reasonable evaluation method for the overall performance of pedestrian detection, tracking, and recognition in complex scenes. To measure the overall performance of a pedestrian tracking system, the combined performance of each module must be measured completely, rather than only considering tracking accuracy as traditional methods do. In response to this problem, and in combination with practical applications, this application is proposed.
- This application provides a complete method system for measuring the accuracy of a pedestrian tracking and identification system.
- this system may include a target detection module, a face ID recognition module, and a target trajectory tracking module.
- the system can be used in the field of urban security.
- This application can comprehensively and objectively measure the accuracy and performance of the tracking and identification system, and has very important application value for the optimization and evaluation of product development.
- the complete pedestrian intelligent tracking system in this application should include key functional modules such as target detection, path tracking, and face recognition.
- This application provides a complete set of methods for measuring the accuracy of pedestrian tracking and identification systems.
- the system is divided into three parts: data preprocessing methods, position accuracy indicators, and statistical accuracy indicators.
- data preprocessing refers to the interception and marking of the video data to be evaluated, including the interception of faces, unification of names, frame number alignment, and unification of the file format
- the position accuracy index refers to the error between the trajectory predicted by target tracking and the actual trajectory in the video, which reflects the ability of the system's tracking algorithm to predict the target position
- the statistical accuracy indicators are divided into three types, which describe the performance of the system's target detection module, face recognition module, and tracking module when targets cross.
- This evaluation system can completely and objectively measure the performance of an intelligent pedestrian tracking system, making up for the shortcomings of traditional tracking evaluation methods.
- the tracking evaluation methods in related technologies mostly preprocess the data with simple location marking only, while the purpose of this system is to comprehensively measure detection, tracking, and recognition effects; therefore, the pedestrian target's activity process must be fully marked.
- the content to be recorded includes: the number of video frames, the IDs of different people in each frame (the same person's ID must be identical across frames), the width of each pedestrian's face, the coordinates of the center point of each pedestrian's face, and so on. After the above records are obtained, they are organized into a program-readable .json format file.
- FIG. 3 is a schematic diagram of the file format according to specific embodiment 1 of the present application. As shown in FIG. 3, box 1 is the outer field of the complete data; box 2 is the video frame number field; box 3 is a fixed field; box 4 is the pedestrian name ID field; box 5 is the file path field.
- This data preprocessing method can completely record the activity trajectories of all pedestrians at all times to achieve an objective measurement of the performance of the entire system.
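A hedged example of what such a .json annotation file might look like — the field names below are assumptions chosen for illustration, since the exact keys of FIG. 3 are not reproduced in this text:

```python
import json

# Hypothetical annotation layout: each frame records every pedestrian's ID,
# face width, and face-center coordinates, mirroring the boxed fields of FIG. 3.
annotation = {
    "frames": [                        # outer field of the complete data (box 1)
        {
            "frame": 0,                # video frame number (box 2)
            "targets": [               # fixed field holding per-target records (box 3)
                {
                    "id": "person_001",      # pedestrian name/ID (box 4)
                    "face_width": 42,
                    "face_center": [318, 204],
                },
            ],
        },
    ],
    "video_path": "demo.mp4",          # file path field (box 5)
}

# Serialize to a program-readable .json representation, as the preprocessing requires.
text = json.dumps(annotation, indent=2)
parsed = json.loads(text)
print(parsed["frames"][0]["targets"][0]["id"])  # person_001
```

The same layout would be produced for both the real (marked) data and the system's predictions, so that later analysis can match the two by frame number and ID.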
- the position accuracy index can measure the accuracy of the tracking algorithm.
- to compute it, the correspondence between the real targets and the system's prediction results must be established, which can be done according to the frame number and the ID name.
- the error between the predicted position coordinates and the real position coordinates of each target can then be calculated in each frame, and all errors over all frames summed;
- the average error is obtained by dividing this sum by the number of matched pairs between predicted positions and real positions. This indicator can measure the performance of the system's tracking module.
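The matching-and-averaging procedure above can be sketched as follows — an illustrative implementation, assuming Euclidean error over face-center coordinates keyed by frame number and ID (the data layout is an assumption):

```python
from math import hypot

def position_accuracy(real, pred):
    """Average position error over all matched (frame, ID) pairs.

    `real` and `pred` map (frame_number, target_id) -> (x, y) face-center
    coordinates; the correspondence is established by frame number and ID.
    """
    total_error = 0.0
    pairs = 0
    for key, (rx, ry) in real.items():
        if key in pred:                          # matched pair: same frame, same ID
            px, py = pred[key]
            total_error += hypot(px - rx, py - ry)  # Euclidean position error
            pairs += 1
    return total_error / pairs if pairs else 0.0

real = {(0, "A"): (10.0, 10.0), (1, "A"): (12.0, 10.0)}
pred = {(0, "A"): (13.0, 14.0), (1, "A"): (12.0, 10.0)}
print(position_accuracy(real, pred))  # (5.0 + 0.0) / 2 = 2.5
```

A lower value indicates that the tracking module predicts target positions more accurately; unmatched targets are handled by the statistical indicators rather than this index.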
- the statistical accuracy index can measure the comprehensive performance of the system's target detection module, ID recognition module, and tracking module. It is divided into three indicators: the statistical miss indicator (miss), the false alarm indicator (fp), and the cross misjudgment indicator (mix); the statistical accuracy index is the average of these three. As with the position accuracy index, the statistical accuracy index is calculated on the basis of the established correspondence between real targets and the system's prediction results.
- the statistical miss indicator (miss) refers to target numbers that exist in the real data but are lost in the predicted data; the cause of such a loss is a missed detection by the detection module or a loss by the tracking module. The false alarm indicator (fp) refers to targets that appear in the predicted data but do not exist in the real data; the cause of fp is over-detection by the detection module or misrecognition by the face recognition module. The cross misjudgment indicator (mix) refers to the system's predictions being swapped relative to the real targets at trajectory intersections, caused by erroneous tracking by the tracking module or misidentification by the recognition module.
- for each of the above indicators, the counts should first be summed over all targets in all frames, and then divided by the number of matched pairs between predicted positions and real positions to obtain the average.
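Combining the three indicators into the statistical accuracy index can be sketched as follows (a minimal illustration, assuming each indicator has already been averaged as described above):

```python
def statistical_accuracy(miss, fp, mix):
    """Statistical accuracy index: the average of the three indicators,
    each already normalized over matched predicted-real pairs."""
    return (miss + fp + mix) / 3.0

print(round(statistical_accuracy(0.2, 0.1, 0.3), 6))  # 0.2
```

A lower index indicates better combined performance of the detection, recognition, and tracking modules.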
- in this way, the performance of a pedestrian tracking and identification system can be measured and evaluated, which is of great significance for system development and testing.
- This application provides a complete set of methods for measuring the accuracy of pedestrian tracking and identification systems. Due to the complexity of the system's modules, the desired evaluation cannot be achieved using traditional evaluation indicators. By establishing data preprocessing and two categories of indicators, this application comprehensively and objectively measures the combined performance of each module of the system, which is of important value for product development and performance evaluation.
- the method according to the above embodiments can be implemented by means of software plus a necessary general hardware platform, and of course it can also be implemented by hardware, but in many cases the former is the better implementation.
- the technical solution of the present application, or the part contributing to the existing technology, can essentially be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to enable a terminal device (which may be a mobile phone, computer, server, or network device, etc.) to perform the methods described in the embodiments of the present application.
- an algorithm performance measurement device is also provided.
- the device is used to implement the above-mentioned embodiments and preferred implementation modes, and descriptions that have already been described will not be repeated.
- the term "module" may be a combination of software and/or hardware that implements a predetermined function.
- although the devices described in the following embodiments are preferably implemented in software, implementation in hardware or in a combination of software and hardware is also possible and conceivable.
- FIG. 4 is a structural block diagram of an algorithm performance measurement device according to an embodiment of the present application. As shown in FIG. 4, the device includes:
- the first obtaining module 42 is used to obtain, for the video stream data, the predicted data of the target object's action trajectory predicted by the first algorithm and the real data of the target object's action trajectory, wherein the first algorithm is used to track the target object;
- the second obtaining module 44 is configured to obtain at least one of the following parameters according to the predicted data and the real data: an average difference value, the mean of the position differences of multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding predicted data; a second value, the number of predicted data items with no corresponding real data; a third value, the number of changes in the correspondence between the predicted data and the real data after the real walking trajectories of multiple target objects cross;
- the measurement module 46 is configured to measure the performance of the first algorithm according to at least one of the parameters, wherein the first algorithm is used to track the target object.
- the device determines the tracking system algorithm to be evaluated, and obtains one or more parameters of the tracking system algorithm's tracking of the target object in the video.
- the one or more parameters are used to measure the tracking system algorithm in all aspects, including its stability and accuracy.
- the above scheme thus solves the problem that performance evaluation schemes for target tracking system algorithms in the related art are insufficient, and measures the performance of the target tracking system algorithm completely and objectively.
- the first obtaining module 42 is configured to obtain, for the frame data of the video stream data, the predicted position of the target object predicted by the first algorithm and the real position of the target object, and to determine that the predicted data includes the predicted position and the real data includes the real position.
- the first obtaining module 42 is further configured to store a mark for first information of the target object in the video stream data, where the first information includes at least one of the following: the number of video frames, the identification ID of the target object in each frame image, the face size of the target object, and the coordinate position of the center point of the target object's face; and to obtain the predicted data and the real data about the target object according to the stored marked first information.
- the above modules can be implemented by software or hardware. For the latter, they can be implemented in the following manners, but are not limited thereto: the above modules are all located in the same processor; or the above modules, in any combination, are located in different processors.
- the embodiments of the present application also provide a storage medium.
- the above storage medium may be set to store program code for performing the following steps:
- an average difference value: the mean of the position differences of multiple frames of the video stream data, wherein the position difference of each frame is the difference between the predicted position and the real position;
- a first value: the number of real data items with no corresponding predicted data;
- a second value: the number of predicted data items with no corresponding real data;
- a third value: the number of changes in the correspondence between the predicted data and the real data after the real walking trajectories of multiple target objects cross;
- the above storage medium may include, but is not limited to: a USB flash disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a removable hard disk, and various media such as magnetic disks or optical discs that can store program code.
- An embodiment of the present application further provides an electronic device, including a memory and a processor, where the computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any one of the foregoing method embodiments.
- the electronic device may further include a transmission device and an input-output device, where the transmission device is connected to the processor, and the input-output device is connected to the processor.
- the foregoing processor may be configured to perform the following steps through a computer program:
- an average difference value: the mean of the position differences of multiple frames of the video stream data, wherein the position difference of each frame is the difference between the predicted position and the real position;
- a first value: the number of real data items with no corresponding predicted data;
- a second value: the number of predicted data items with no corresponding real data;
- a third value: the number of changes in the correspondence between the predicted data and the real data after the real walking trajectories of multiple target objects cross;
- the modules or steps of the present application can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices. Optionally, they can be implemented with program code executable by the computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in a different order than here, or they may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module. In this way, this application is not limited to any specific combination of hardware and software.
Claims (10)
- 一种算法的性能衡量方法,其特征在于,包括:An algorithm performance measurement method, characterized by including:针对视频流数据,获取第一算法预测的所述目标对象行动轨迹的预测数据,和所述目标对象行动轨迹的真实数据,其中,所述第一算法用于对所述目标对象进行追踪;For the video stream data, obtain the predicted data of the target object's action trajectory predicted by the first algorithm and the real data of the target object's action trajectory, where the first algorithm is used to track the target object;依据所述预测数据和所述真实数据获取以下参数至少之一:平均差值,所述视频流数据的多个帧数据的位置差值的均值,其中,每个帧数据的位置差值为预测位置与真实位置的差值;第一数值,没有对应预测数据的真实数据的个数;第二数值,没有对应真实数据的预测数据的个数;第三数值,在多个目标对象的真实行走轨迹交叉之后,预测数据与真实数据的对应关系发生变化的个数;Acquire at least one of the following parameters according to the prediction data and the real data: average difference, the average value of the position difference of multiple frames of the video stream data, where the position difference of each frame of data is the prediction The difference between the position and the real position; the first value, the number of real data that does not correspond to the predicted data; the second value, the number of predicted data that does not correspond to the real data; the third value, the real walking on multiple target objects After the trajectories cross, the number of correspondences between the predicted data and the real data changes;依据所述参数至少之一衡量所述第一算法的性能。The performance of the first algorithm is measured according to at least one of the parameters.
- 根据权利要求1所述的方法,其特征在于,获取第一算法预测的所述目标对象行动轨迹的预测数据,和所述目标对象行动轨迹的真实数据,包括:The method according to claim 1, wherein obtaining the predicted data of the target object's action trajectory predicted by the first algorithm and the real data of the target object's action trajectory include:针对所述视频流数据的帧数据,获取所述第一算法预测出的所述目标对象的预测位置,与所述目标对象的真实位置;For the frame data of the video stream data, obtain the predicted position of the target object predicted by the first algorithm and the real position of the target object;确定所述预测数据包括所述预测位置,所述真实数据包括所述真实位置。It is determined that the predicted data includes the predicted position, and the real data includes the true position.
- 根据权利要求1所述的方法,其特征在于,获取第一算法预测的所述目标对象行动轨迹的预测数据,和所述目标对象行动轨迹的真实数据,包括:The method according to claim 1, wherein obtaining the predicted data of the target object's action trajectory predicted by the first algorithm and the real data of the target object's action trajectory include:在所述视频流数据中,为所述目标对象的第一信息存储标记,其中,所述第一信息包括以下至少之一:视频帧数,每帧数据图像中的目标对象的标识ID,目标对象的脸部尺寸、目标对象的脸部中心点坐标位置;In the video stream data, a mark is stored for the first information of the target object, wherein the first information includes at least one of the following: the number of video frames, the identification ID of the target object in each frame of the data image, and the target The size of the subject's face and the coordinate position of the center point of the target's face;依据存储有标记的所述第一信息,获取关于所述目标对象的预测数据和所述真实数据。According to the first information in which the mark is stored, the predicted data and the real data about the target object are acquired.
- The method according to claim 1, wherein acquiring the first value and/or the second value comprises: for each frame of the video stream data, acquiring the prediction data and the real data; taking the number of real data items with no corresponding prediction data as a third value, acquiring a second average difference of the video stream data from the third values of multiple frames, and taking the second average difference as the first value; and taking the number of prediction data items with no corresponding real data as a fourth value, acquiring a third average difference of the video stream data from the fourth values of multiple frames, and taking the third average difference as the second value.
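The per-frame counting and averaging of unmatched items described above can be sketched as follows (a hypothetical illustration, assuming each frame's real and predicted data are represented as sets of target IDs that a separate matching step has already aligned):

```python
# Sketch: average counts of unmatched ground-truth items (misses) and
# unmatched predictions (false positives) across frames.
# `frames` is an assumed list of (gt_ids, pred_ids) set pairs, one per frame.

def unmatched_counts(frames):
    """Return (average misses per frame, average false positives per frame)."""
    misses = [len(gt - pred) for gt, pred in frames]          # real with no prediction
    false_pos = [len(pred - gt) for gt, pred in frames]       # prediction with no real
    n = len(frames)
    return sum(misses) / n, sum(false_pos) / n
```

In the claim's terms, the first returned value corresponds to the first value (averaged third values) and the second to the second value (averaged fourth values).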
- The method according to claim 1, wherein acquiring the third value comprises: for the video stream data, after the real walking trajectories of multiple target objects cross, acquiring the prediction data of the multiple target objects predicted by the first algorithm; acquiring the number of changes in the correspondence between the prediction data and the real data before and after the real walking trajectories cross; and acquiring a fourth average difference of the video stream data from the numbers of changes over multiple frames, and taking the fourth average difference as the third value.
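Counting correspondence changes (ID switches) could be sketched as follows (illustrative only; `assignments` is an assumed per-frame mapping from ground-truth ID to predicted ID, and this sketch counts a change anywhere in the clip, whereas the claim restricts attention to changes around trajectory crossings):

```python
# Sketch: count frames where a ground-truth track's assigned predicted ID
# changes relative to the previous frame it appeared in.

def count_id_switches(assignments):
    """assignments: list of dicts, one per frame, mapping gt_id -> pred_id."""
    switches = 0
    last_seen = {}  # gt_id -> most recent pred_id
    for frame in assignments:
        for gt_id, pred_id in frame.items():
            if gt_id in last_seen and last_seen[gt_id] != pred_id:
                switches += 1
            last_seen[gt_id] = pred_id
    return switches
```

When two real trajectories cross and the tracker swaps the two targets' identities, both tracks change their assigned predicted ID, so this counter increments by two for that crossing.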
- A device for measuring the performance of an algorithm, comprising: a first acquisition module, configured to acquire, for video stream data, prediction data of a target object's movement trajectory predicted by a first algorithm and real data of the target object's movement trajectory, wherein the first algorithm is used to track the target object; a second acquisition module, configured to acquire, from the prediction data and the real data, at least one of the following parameters: an average difference, being the mean of the position differences over multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, being the number of real data items with no corresponding prediction data; a second value, being the number of prediction data items with no corresponding real data; and a third value, being the number of prediction-to-real-data correspondences that change after the real walking trajectories of multiple target objects cross; and a measurement module, configured to measure the performance of the first algorithm according to at least one of the parameters, wherein the first algorithm is used to track the target object.
- The device according to claim 6, wherein the first acquisition module is configured to acquire, for frame data of the video stream data, the predicted position of the target object output by the first algorithm and the real position of the target object; and to determine that the prediction data comprises the predicted position and the real data comprises the real position.
- The device according to claim 6, wherein the first acquisition module is further configured to store, in the video stream data, a mark for first information of the target object, wherein the first information comprises at least one of the following: the video frame number, the identification (ID) of the target object in each frame image, the face size of the target object, and the coordinates of the face center point of the target object; and to acquire the prediction data and the real data about the target object according to the marked first information.
- A storage medium, wherein a computer program is stored in the storage medium, and the computer program is configured to perform, when run, the method according to any one of claims 1 to 5.
- An electronic device, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to perform the method according to any one of claims 1 to 5.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811493566.7 | 2018-12-06 | ||
CN201811493566.7A CN111292359A (en) | 2018-12-06 | 2018-12-06 | Method and device for measuring performance of algorithm |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020114136A1 true WO2020114136A1 (en) | 2020-06-11 |
Family
ID=70974092
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/112914 WO2020114136A1 (en) | 2018-12-06 | 2019-10-24 | Method and device for evaluating algorithm performance |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111292359A (en) |
WO (1) | WO2020114136A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104462808A (en) * | 2014-12-04 | 2015-03-25 | 河海大学 | Method for fitting safe horizontal displacement and dynamic data of variable sliding window of water level |
US9129400B1 (en) * | 2011-09-23 | 2015-09-08 | Amazon Technologies, Inc. | Movement prediction for image capture |
CN107492113A (en) * | 2017-06-01 | 2017-12-19 | 南京行者易智能交通科技有限公司 | A kind of moving object in video sequences position prediction model training method, position predicting method and trajectory predictions method |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8582811B2 (en) * | 2011-09-01 | 2013-11-12 | Xerox Corporation | Unsupervised parameter settings for object tracking algorithms |
US20140341465A1 (en) * | 2013-05-16 | 2014-11-20 | The Regents Of The University Of California | Real-time pose estimation system using inertial and feature measurements |
CN104977022B (en) * | 2014-04-04 | 2018-02-27 | 西北工业大学 | Multiple-target system Performance Evaluation emulation mode |
CN107679578B (en) * | 2017-10-12 | 2020-03-31 | 北京旷视科技有限公司 | Target recognition algorithm testing method, device and system |
CN108364301B (en) * | 2018-02-12 | 2020-09-04 | 中国科学院自动化研究所 | Visual tracking algorithm stability evaluation method and device based on cross-time overlapping rate |
- 2018-12-06: CN CN201811493566.7A patent/CN111292359A/en active Pending
- 2019-10-24: WO PCT/CN2019/112914 patent/WO2020114136A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN111292359A (en) | 2020-06-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9824460B2 (en) | Method, apparatus and system for acquiring headcount | |
US10728536B2 (en) | System and method for camera commissioning beacons | |
CN111160243A (en) | Passenger flow volume statistical method and related product | |
CN111148030A (en) | Fingerprint database updating method and device, server and storage medium | |
US11200406B2 (en) | Customer flow statistical method, apparatus and device | |
CN112465866A (en) | Multi-target track acquisition method, device, system and storage medium | |
CN113205037A (en) | Event detection method and device, electronic equipment and readable storage medium | |
CN108647587A (en) | Demographic method, device, terminal and storage medium | |
CN112562005A (en) | Space calibration method and system | |
CN111524394A (en) | Method, device and system for improving accuracy of comprehensive track monitoring data of apron | |
CN115546705A (en) | Target identification method, terminal device and storage medium | |
CN111899279A (en) | Method and device for detecting motion speed of target object | |
CN113762229B (en) | Intelligent identification method and system for building equipment in building site | |
CN111461222A (en) | Method and device for acquiring target object track similarity and electronic equipment | |
CN116563841B (en) | Detection method and detection device for power distribution network equipment identification plate and electronic equipment | |
CN112989916A (en) | Crowd counting method combining density estimation and target detection | |
WO2020114136A1 (en) | Method and device for evaluating algorithm performance | |
CN116645612A (en) | Forest resource asset determination method and system | |
WO2021114985A1 (en) | Companionship object identification method and apparatus, server and system | |
CN113810665A (en) | Video processing method, device, equipment, storage medium and product | |
Dias et al. | Real-time visual ground-truth system for indoor robotic applications | |
CN112241686A (en) | Trajectory comparison matching method and system based on feature vectors | |
CN111243289A (en) | Target vehicle tracking method and device, storage medium and electronic device | |
CN117877100B (en) | Behavior mode determining method and device, electronic equipment and storage medium | |
Ding et al. | Who is partner: A new perspective on data association of multi-object tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19893503 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19893503 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.06.2022) |
|