CN111292359A - Method and device for measuring performance of algorithm - Google Patents

Method and device for measuring performance of algorithm

Info

Publication number
CN111292359A
CN111292359A (application CN201811493566.7A)
Authority
CN
China
Prior art keywords
data
real
predicted
target object
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811493566.7A
Other languages
Chinese (zh)
Inventor
刘若鹏
栾琳
季春霖
赵盟盟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Guangqi Intelligent Technology Co ltd
Original Assignee
Xi'an Guangqi Future Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Guangqi Future Technology Research Institute filed Critical Xi'an Guangqi Future Technology Research Institute
Priority to CN201811493566.7A priority Critical patent/CN111292359A/en
Priority to PCT/CN2019/112914 priority patent/WO2020114136A1/en
Publication of CN111292359A publication Critical patent/CN111292359A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/254 - Analysis of motion involving subtraction of images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20224 - Image subtraction

Abstract

The application provides a method and a device for measuring the performance of an algorithm. The method determines a tracking-system algorithm to be evaluated and obtains one or more parameters that characterize how the algorithm tracks a target object in a video. Together, these parameters measure the tracking-system algorithm comprehensively, covering aspects such as its stability and accuracy.

Description

Method and device for measuring performance of algorithm
Technical Field
The present application relates to, but not limited to, the field of electronic tracking, and in particular, to a method and an apparatus for measuring performance of an algorithm.
Background
In the related art, the mean-square error (MSE) is a common metric for evaluating target tracking. The MSE is the expected value of the squared difference between the true value and the estimated value. In practice, the MSE is very difficult to compute directly, because the expectation is usually hard to obtain. A frequently used alternative is the root-mean-square error (RMSE), which approximates the expectation statistically using sampled values from Monte Carlo simulations. The RMSE is the most commonly used metric in multi-target tracking, but it has several disadvantages. First, it is not a distance in Euclidean space. Second, when the number of targets is large, for example hundreds, the RMSE is too redundant as a multi-target evaluation index: with many targets, the tracking performance of any single target matters less and less, and what people care about is the tracking performance of the target group as a whole, not of each individual target. The RMSE is therefore cumbersome and not comprehensive in complex multi-target situations.
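As a point of reference, the RMSE baseline criticized above can be sketched in a few lines. The function name and the use of 2-D positions are illustrative choices of this sketch, not details taken from the patent:

```python
import math

def rmse(true_positions, estimated_positions):
    """Monte Carlo approximation of the MSE expectation: average the
    squared position error over sampled frames, then take the root."""
    assert len(true_positions) == len(estimated_positions)
    total = 0.0
    for (tx, ty), (ex, ey) in zip(true_positions, estimated_positions):
        total += (tx - ex) ** 2 + (ty - ey) ** 2
    return math.sqrt(total / len(true_positions))
```

For a single target this is simple; as the text notes, with hundreds of targets a per-target RMSE report quickly becomes unwieldy.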
No effective solution has yet been proposed for the problem that performance evaluation schemes for target-tracking-system algorithms in the related art are not comprehensive enough.
Disclosure of Invention
The embodiments of the present application provide a method and a device for measuring the performance of an algorithm, so as to at least solve the problem that performance evaluation schemes for target-tracking-system algorithms in the related art are not comprehensive enough.
According to an embodiment of the present application, a performance measurement method of an algorithm is provided, including: for video stream data, acquiring predicted data of a target object's motion trajectory predicted by a first algorithm, and real data of the target object's motion trajectory, where the first algorithm is used to track the target object; obtaining at least one of the following parameters from the predicted data and the real data: an average difference, the mean of the position differences over multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding predicted data; a second value, the number of predicted data items with no corresponding real data; a third value, the number of changes in the correspondence between predicted data and real data after the real walking tracks of multiple target objects cross; and measuring the performance of the first algorithm according to at least one of the parameters.
According to another embodiment of the present application, an apparatus for measuring the performance of an algorithm is also provided, including: a first obtaining module, configured to acquire, for video stream data, predicted data of a target object's motion trajectory predicted by a first algorithm, and real data of the target object's motion trajectory, where the first algorithm is used to track the target object; a second obtaining module, configured to obtain at least one of the following parameters from the predicted data and the real data: an average difference, the mean of the position differences over multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding predicted data; a second value, the number of predicted data items with no corresponding real data; a third value, the number of changes in the correspondence between predicted data and real data after the real walking tracks of multiple target objects cross; and a measuring module, configured to measure the performance of the first algorithm according to at least one of the parameters, where the first algorithm is used to track the target object.
According to a further embodiment of the present application, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present application, there is also provided an electronic device, comprising a memory in which a computer program is stored and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
With the method and device, the tracking-system algorithm to be evaluated is determined, and one or more parameters describing how it tracks a target object in a video are obtained. These parameters measure the algorithm comprehensively, including its stability and accuracy; they solve the problem that performance evaluation schemes for target-tracking-system algorithms in the related art are not comprehensive, and measure the performance of the target-tracking-system algorithm completely and objectively.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware structure of a computer terminal of a performance measurement method of an algorithm according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of performance measurement of an algorithm according to an embodiment of the application;
FIG. 3 is a diagram illustrating a file format according to a first embodiment of the present application;
fig. 4 is a block diagram of a performance measuring apparatus of an algorithm according to an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The scheme of this application measures the performance of a target-tracking-system algorithm; it can serve as part of an intelligent pedestrian tracking system and be applied in fields such as urban security.
Example one
The method of the first embodiment can be executed on a computer terminal or a similar computing device. Taking a computer terminal as an example, fig. 1 is a block diagram of the hardware structure of a computer terminal running the performance measurement method of the algorithm according to an embodiment of the present application. As shown in fig. 1, the computer terminal 10 may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data. Optionally, the computer terminal may further include a transmission device 106 for communication and an input/output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only illustrative and does not limit the structure of the computer terminal. For example, the computer terminal 10 may include more or fewer components than shown in fig. 1, or have a different configuration.
The memory 104 may be used to store software programs and modules of application software, such as program instructions/modules corresponding to the performance measurement method of the algorithm in the embodiment of the present application, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In this embodiment, a performance measurement method of an algorithm running on the computer terminal is provided, and fig. 2 is a flowchart of the performance measurement method of the algorithm according to the embodiment of the present application, as shown in fig. 2, the flowchart includes the following steps:
step S202, aiming at video stream data, obtaining predicted data of the target object action track predicted by a first algorithm and real data of the target object action track, wherein the first algorithm is used for tracking the target object;
the video is a video applied by a tracking system, the tracking system tracks the walking track of a target object in the video, and optionally, videos recorded by a plurality of cameras may be combined to realize target tracking in a certain range or scene. For example, the video of one camera can track the track of the target object on the square only, and the video of the surrounding environment can track the moving track of the target object in the urban area.
Step S204: obtain at least one of the following parameters from the predicted data and the real data: an average difference, the mean of the position differences over multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding predicted data; a second value, the number of predicted data items with no corresponding real data; a third value, the number of changes in the correspondence between predicted data and real data after the real walking tracks of multiple target objects cross;
the average difference may be equal to a position accuracy indicator in a subsequent embodiment, the first value may be equal to a miss count indicator in the subsequent embodiment, the second value may be equal to a false alarm indicator fp, the third value may be equal to a cross-misjudgment indicator mix, and an average of the first value, the second value, and the third value may be referred to as a statistical accuracy indicator.
The scheme of this embodiment computes the average difference over all frames from the per-frame image differences, so that using the average difference to measure the accuracy of the tracking-system algorithm is more accurate and complete.
Step S206, the performance of the first algorithm is measured according to at least one of the parameters.
Through these steps, the tracking-system algorithm to be evaluated is determined, and one or more parameters describing how it tracks a target object in the video are obtained. These parameters measure the algorithm comprehensively, including its stability and accuracy; they solve the problem that performance evaluation schemes for target-tracking-system algorithms in the related art are not comprehensive, and measure the performance of the target-tracking-system algorithm completely and objectively.
Optionally, obtaining predicted data of the target object action trajectory predicted by the first algorithm and real data of the target object action trajectory comprises: acquiring the predicted position of the target object predicted by the first algorithm and the real position of the target object aiming at the frame data of the video stream data; determining that the predicted data includes the predicted location and the real data includes the real location.
Optionally, obtaining the predicted data of the target object's motion trajectory predicted by the first algorithm and the real data of the target object's motion trajectory includes: storing a mark for first information of the target object in the video stream data, where the first information includes at least one of: the video frame number, the identification (ID) of the target object in each frame image, the face size of the target object, and the coordinates of the center of the target object's face; and acquiring the predicted data and the real data about the target object according to the marked first information.
In this optional embodiment, the target object's movement is marked and recorded as a whole, and the marking information is unified across the entire video. For example, each person keeps a unique ID from beginning to end of the video, which facilitates later data analysis.
Optionally, the first value and/or the second value are obtained as follows: acquire the predicted data and real data for each frame of the video stream data; take the per-frame number of real data items with no corresponding predicted data, average it over multiple frames to obtain a second average difference of the video stream data, and use this second average difference as the first value; take the per-frame number of predicted data items with no corresponding real data, average it over multiple frames to obtain a third average difference of the video stream data, and use this third average difference as the second value. Computing the second or third average difference over all frames from the per-frame differences makes the measurement of the tracking-system algorithm's accuracy more accurate and complete.
Optionally, the third value is obtained as follows: for the video stream data, acquire the predicted data of multiple target objects output by the first algorithm after the targets' real walking tracks cross; obtain the number of changes in the correspondence between predicted data and real data before and after the crossing; and average this number over multiple frames to obtain a fourth average difference of the video stream data, which is used as the third value. Computing the fourth average difference over all frames from the per-frame differences makes the measurement of the tracking-system algorithm's accuracy more accurate and complete.
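A minimal sketch of counting correspondence changes (the third value) follows. Representing per-frame matches as dicts from real-target ID to predicted-track ID is an assumption of this sketch, not a format defined by the patent:

```python
def count_mix(assignments):
    """Count correspondence changes across frames. `assignments` is a list
    of per-frame dicts mapping real-target ID -> matched predicted-track ID.
    A change is counted whenever a real target's matched predicted ID
    differs from its most recent previous match."""
    changes = 0
    prev = {}
    for frame in assignments:
        for real_id, pred_id in frame.items():
            if real_id in prev and prev[real_id] != pred_id:
                changes += 1
        prev.update(frame)
    return changes
```

For example, if targets A and B swap predicted tracks after their paths cross, two changes are counted for that frame.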
Most target-tracking evaluation methods in the related art are applied to single-target or simple multi-target scenarios and can only measure the performance of the tracking algorithm itself. A reasonable method for evaluating the overall performance of pedestrian detection, tracking, and identification in complex scenes is lacking. Measuring the overall performance of a pedestrian tracking system requires measuring the combined performance of every module, rather than considering only tracking accuracy as conventional methods do. The scheme of this application is proposed for this problem in combination with practical applications.
This application provides a complete methodology for measuring the accuracy of a pedestrian tracking and identification system. The system may include a target detection module, a face-ID identification module, and a target trajectory tracking module, and can be used in the field of urban security. The methodology measures the accuracy performance of the tracking and identification system comprehensively and objectively, which is of great value for optimizing and evaluating product development.
A complete intelligent pedestrian tracking system includes key functional modules such as target detection, path tracking, and face recognition. This application provides a complete methodology for measuring the accuracy of such a system, divided into three parts: a data preprocessing method, a position accuracy index, and a statistical accuracy index. Data preprocessing means intercepting and marking the video data to be evaluated, including cropping faces, unifying names, aligning frame numbers, and unifying file formats. The position accuracy index is the error between the trajectory predicted by target tracking and the actual trajectory in the video, reflecting the ability of the system's tracking algorithm to predict the target's position. The statistical accuracy index is divided into three sub-indexes describing, respectively, the performance of the system's target detection module, face recognition module, and tracking module when targets cross.
This evaluation methodology measures the performance of an intelligent pedestrian tracking system completely and objectively, making up for the shortcomings of traditional tracking evaluation methods.
This is further illustrated by the following specific examples.
Example 1: data preprocessing method
Most tracking evaluation methods in the related art preprocess data by simply marking positions. Because this methodology aims to comprehensively measure detection, tracking, and identification, the pedestrian's movement must be comprehensively marked and recorded. The content to record includes: the video frame number, the ID of each person in each frame (the same person must have the same ID across frames), the length and width of the pedestrian's face, the coordinates of the center of the pedestrian's face, and so on. These records are collected and organized into a JSON file readable by a program. Fig. 3 is a schematic diagram of the file format according to the first embodiment of the present application: box No. 1 is the outer field of the complete data; box No. 2 is the video frame-number field; box No. 3 is a fixed field; box No. 4 is the pedestrian name-ID field; box No. 5 is the file-path field. This preprocessing completely records the activity tracks of all pedestrians at all moments, enabling objective measurement of the whole system's performance.
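A hypothetical annotation record in the spirit of fig. 3 might look as follows. The field names (`file_path`, `frames`, `persons`, etc.) are illustrative, since the patent describes the recorded content but not the exact keys:

```python
import json

# One frame's annotation: frame number, plus per-person ID, face size,
# and face-center coordinates; a file-path field identifies the video.
annotation = {
    "file_path": "videos/square_cam_01.mp4",
    "frames": [
        {
            "frame_no": 1,
            "persons": [
                {"id": "person_001", "face_w": 64, "face_h": 80,
                 "center_x": 312, "center_y": 148},
            ],
        },
    ],
}

text = json.dumps(annotation, indent=2)   # serialize for storage
loaded = json.loads(text)                 # evaluation programs read it back
```

The key property the text requires is that `"person_001"` stays the same ID for that pedestrian in every frame of the video.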
Example 2: position accuracy index
The position accuracy index measures the accuracy of the tracking algorithm. After data preprocessing is complete, a correspondence between each real target and the system's prediction is first established, matched one-to-one by frame number and ID name. Once the correspondence is established, the error between the predicted and real position coordinates of each target in each frame can be calculated; all errors over all frames are summed, and the sum is divided by the number of predicted-real position pairs to obtain the average error. This index measures the performance of the system's tracking module.
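The position accuracy computation described above can be sketched as follows, using the (frame number, ID) pair as the matching key; the dict-based interface is an assumption of this sketch, not an API defined by the patent:

```python
import math

def position_accuracy(truth, pred):
    """Average Euclidean error over all (frame_no, person_id) keys present
    in both the ground truth and the prediction; each value is an (x, y)
    face-center coordinate."""
    matched = set(truth) & set(pred)   # one-to-one matches by frame and ID
    if not matched:
        return float("nan")
    total = sum(math.dist(truth[k], pred[k]) for k in matched)
    return total / len(matched)        # divide by the number of matched pairs
```

Targets with no counterpart on the other side are deliberately excluded here; they are counted instead by the statistical accuracy indexes.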
Example 3: statistical accuracy index
The statistical accuracy index measures the combined performance of the system's target detection module, ID identification module, and tracking module. It comprises three sub-indexes: the miss-count index miss, the false-alarm index fp, and the cross-misjudgment index mix; the statistical accuracy index is the average of the three. Like the position accuracy index, the statistical accuracy index is calculated after establishing the correspondence between real targets and the system's predictions. The miss index counts targets present in the real data but missing from the predicted data, caused by the detection module missing a detection or the tracking module losing the track. The false-alarm index fp counts targets that appear in the predicted data but are not in the real data, caused by the detection module over-detecting or the face recognition module misidentifying. The cross-misjudgment index mix counts cases where, at a trajectory intersection, the system's predictions are swapped relative to the real targets, caused by the tracking module following the wrong track or the identification module misidentifying. Each index sums over all targets in all frames and is then divided by the number of predicted-real correspondence pairs.
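A sketch of the three statistical sub-indexes follows. Representing each frame as a set of target IDs, and averaging raw counts rather than normalized rates, are assumptions of this sketch rather than details fixed by the patent:

```python
def statistical_accuracy(truth_ids, pred_ids, mix_count):
    """`truth_ids` and `pred_ids` are per-frame sets of target IDs;
    `mix_count` is the precomputed cross-misjudgment count. Returns the
    total miss count, the total false-alarm count, and the mean of the
    three indexes as the statistical accuracy figure."""
    miss = sum(len(t - p) for t, p in zip(truth_ids, pred_ids))  # in truth, missing from prediction
    fp = sum(len(p - t) for t, p in zip(truth_ids, pred_ids))    # predicted but not in truth
    return miss, fp, (miss + fp + mix_count) / 3.0
```

For example, with two frames where the prediction drops one real target and invents one spurious one, miss and fp are each 1; with one cross-misjudgment, the averaged figure is 1.0.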
By using the index system, the performance of the pedestrian tracking and identifying system can be evaluated, and the system has important significance for system development and test.
This application provides a complete methodology for measuring the accuracy of a pedestrian tracking and identification system. Because of the complexity of the system's modules, traditional evaluation indexes cannot achieve the desired evaluation. By establishing the data preprocessing step and the two major index families, the comprehensive performance of every module of the system is measured completely and objectively, which is of important evaluative significance for product development and performance assessment.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
Example two
In this embodiment, a performance measurement device of an algorithm is also provided. The device implements the foregoing embodiments and preferred embodiments; details already described are not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a block diagram of a performance measuring apparatus of an algorithm according to an embodiment of the present application, as shown in fig. 4, the apparatus includes:
a first obtaining module 42, configured to acquire, for video stream data, the predicted data of a target object's motion trajectory predicted by a first algorithm, and the real data of the target object's motion trajectory, where the first algorithm is used to track the target object;
a second obtaining module 44, configured to obtain at least one of the following parameters from the predicted data and the real data: an average difference, the mean of the position differences over multiple frames of the video stream data, where the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding predicted data; a second value, the number of predicted data items with no corresponding real data; a third value, the number of changes in the correspondence between predicted data and real data after the real walking tracks of multiple target objects cross;
a measuring module 46, configured to measure the performance of the first algorithm according to at least one of the parameters, where the first algorithm is used to track the target object.
With this device, the tracking-system algorithm to be evaluated is determined, and one or more parameters describing how it tracks a target object in a video are obtained. These parameters measure the algorithm comprehensively, including its stability and accuracy; they solve the problem that performance evaluation schemes for target-tracking-system algorithms in the related art are not comprehensive, and measure the performance of the target-tracking-system algorithm completely and objectively.
Optionally, the first obtaining module 42 is configured to acquire, for frame data of the video stream data, the predicted position of the target object predicted by the first algorithm and the real position of the target object, and to determine that the predicted data include the predicted position and the real data include the real position.
Optionally, the first obtaining module 42 is further configured to store a mark for first information of the target object in the video stream data, where the first information includes at least one of: the video frame number, the identification (ID) of the target object in each frame image, the face size of the target object, and the coordinates of the center of the target object's face; and to acquire the predicted data and the real data about the target object according to the marked first information.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
EXAMPLE III
Embodiments of the present application also provide a storage medium. Alternatively, in the present embodiment, the storage medium may be configured to store program codes for performing the following steps:
S1, for video stream data, acquiring predicted data of the target object's motion trajectory as predicted by a first algorithm and real data of the target object's motion trajectory, wherein the first algorithm is used to track the target object;
S2, obtaining at least one of the following parameters from the predicted data and the real data: an average difference value, which is the mean of the position differences over a plurality of frames of the video stream data, wherein the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding predicted data; a second value, the number of predicted data items with no corresponding real data; and a third value, the number of changes in the correspondence between predicted data and real data after the real walking trajectories of multiple target objects cross;
S3, measuring the performance of the first algorithm according to at least one of the parameters.
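The parameters of steps S1 to S3 can be illustrated with a minimal sketch. This is not the patented implementation: the greedy nearest-neighbour matching and the `match_dist` gating threshold are assumptions introduced only to make the four parameters computable on per-frame position data.

```python
from math import hypot

def evaluate_tracker(real_frames, pred_frames, match_dist=50.0):
    """Compute the four parameters of steps S1-S3 for one video.

    real_frames / pred_frames: per-frame dicts mapping object ID -> (x, y).
    IDs in real_frames are ground-truth identities; IDs in pred_frames are
    the tracker's identities. match_dist is an assumed gating threshold
    (pixels) for pairing a prediction with a real object.
    """
    diffs = []        # per-pair position differences (for the average)
    misses = 0        # first value: real data with no matching prediction
    false_alarms = 0  # second value: predictions with no matching real data
    id_switches = 0   # third value: changes of the real-ID -> predicted-ID mapping
    last_match = {}   # real ID -> predicted ID matched in an earlier frame

    for real, pred in zip(real_frames, pred_frames):
        unmatched_pred = set(pred)
        for rid, (rx, ry) in real.items():
            # greedily pick the nearest still-unmatched prediction
            best, best_d = None, match_dist
            for pid in unmatched_pred:
                px, py = pred[pid]
                d = hypot(rx - px, ry - py)
                if d < best_d:
                    best, best_d = pid, d
            if best is None:
                misses += 1
                continue
            unmatched_pred.remove(best)
            diffs.append(best_d)
            if rid in last_match and last_match[rid] != best:
                id_switches += 1  # correspondence changed: identity switch
            last_match[rid] = best
        false_alarms += len(unmatched_pred)

    avg_diff = sum(diffs) / len(diffs) if diffs else 0.0
    return avg_diff, misses, false_alarms, id_switches
```

When the real trajectories of two targets cross and the tracker swaps their labels, each affected target's correspondence changes once, so the third value counts one switch per swapped target, which matches the description of counting changes in the predicted-to-real correspondence after the trajectories cross.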
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or any other medium capable of storing program code.
Embodiments of the present application further provide an electronic device comprising a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic device may further include a transmission device and an input/output device, both of which are connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute, by means of the computer program, the following steps:
S1, for video stream data, acquiring predicted data of the target object's motion trajectory as predicted by a first algorithm and real data of the target object's motion trajectory, wherein the first algorithm is used to track the target object;
S2, obtaining at least one of the following parameters from the predicted data and the real data: an average difference value, which is the mean of the position differences over a plurality of frames of the video stream data, wherein the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding predicted data; a second value, the number of predicted data items with no corresponding real data; and a third value, the number of changes in the correspondence between predicted data and real data after the real walking trajectories of multiple target objects cross;
S3, measuring the performance of the first algorithm according to at least one of the parameters.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general-purpose computing device. They may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented as program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. Alternatively, they may each be fabricated as a separate integrated circuit module, or multiple of them may be fabricated as a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The above description covers only preferred embodiments of the present application and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its scope of protection.

Claims (10)

1. A method for measuring performance of an algorithm, comprising:
for video stream data, acquiring predicted data of a target object's motion trajectory as predicted by a first algorithm and real data of the target object's motion trajectory, wherein the first algorithm is used to track the target object;
obtaining at least one of the following parameters from the predicted data and the real data: an average difference value, which is the mean of the position differences over a plurality of frames of the video stream data, wherein the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding predicted data; a second value, the number of predicted data items with no corresponding real data; and a third value, the number of changes in the correspondence between predicted data and real data after the real walking trajectories of multiple target objects cross;
measuring the performance of the first algorithm according to at least one of the parameters.
2. The method of claim 1, wherein obtaining the predicted data of the target object's motion trajectory predicted by the first algorithm and the real data of the target object's motion trajectory comprises:
acquiring, for frame data of the video stream data, the predicted position of the target object as predicted by the first algorithm and the real position of the target object;
determining that the predicted data includes the predicted position and the real data includes the real position.
3. The method of claim 1, wherein obtaining the predicted data of the target object's motion trajectory predicted by the first algorithm and the real data of the target object's motion trajectory comprises:
storing and marking first information of the target object in the video stream data, wherein the first information includes at least one of: the video frame number, the identification (ID) of the target object in each frame image, the face size of the target object, and the coordinates of the center point of the target object's face;
acquiring the predicted data and the real data about the target object according to the stored and marked first information.
4. The method of claim 1, wherein obtaining the first and/or second values comprises:
acquiring the predicted data and the real data for each frame of the video stream data;
taking the number of real data items with no corresponding predicted data as a third numerical value, obtaining a second average difference value of the video stream data from the third numerical values of the plurality of frames, and taking the second average difference value as the first value;
taking the number of predicted data items with no corresponding real data as a fourth numerical value, obtaining a third average difference value of the video stream data from the fourth numerical values of the plurality of frames, and taking the third average difference value as the second value.
5. The method of claim 1, wherein obtaining the third value comprises:
for the video stream data, acquiring the predicted data of a plurality of target objects as predicted by the first algorithm after the real walking trajectories of the plurality of target objects cross;
acquiring the number of changes in the correspondence between the predicted data and the real data before and after the real walking trajectories cross;
obtaining a fourth average difference value of the video stream data from the numbers of changes over the plurality of frames, and taking the fourth average difference value as the third value.
6. An apparatus for measuring performance of an algorithm, comprising:
a first obtaining module, configured to acquire, for video stream data, predicted data of a target object's motion trajectory as predicted by a first algorithm and real data of the target object's motion trajectory, wherein the first algorithm is used to track the target object;
a second obtaining module, configured to obtain at least one of the following parameters from the predicted data and the real data: an average difference value, which is the mean of the position differences over a plurality of frames of the video stream data, wherein the position difference of each frame is the difference between the predicted position and the real position; a first value, the number of real data items with no corresponding predicted data; a second value, the number of predicted data items with no corresponding real data; and a third value, the number of changes in the correspondence between predicted data and real data after the real walking trajectories of multiple target objects cross;
a measuring module, configured to measure the performance of the first algorithm according to at least one of the parameters, wherein the first algorithm is used to track the target object.
7. The apparatus according to claim 6, wherein the first obtaining module is configured to obtain, for frame data of the video stream data, the predicted position of the target object as predicted by the first algorithm and the real position of the target object;
and to determine that the predicted data comprises the predicted position and the real data comprises the real position.
8. The apparatus of claim 6, wherein the first obtaining module is further configured to store and mark first information of the target object in the video stream data, wherein the first information includes at least one of: the video frame number, the identification (ID) of the target object in each frame image, the face size of the target object, and the coordinates of the center point of the target object's face;
and to obtain the predicted data and the real data about the target object according to the stored and marked first information.
9. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of any one of claims 1 to 5 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has a computer program stored therein, and the processor is arranged to execute the computer program to perform the method of any one of claims 1 to 5.
CN201811493566.7A 2018-12-06 2018-12-06 Method and device for measuring performance of algorithm Pending CN111292359A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811493566.7A CN111292359A (en) 2018-12-06 2018-12-06 Method and device for measuring performance of algorithm
PCT/CN2019/112914 WO2020114136A1 (en) 2018-12-06 2019-10-24 Method and device for evaluating algorithm performance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811493566.7A CN111292359A (en) 2018-12-06 2018-12-06 Method and device for measuring performance of algorithm

Publications (1)

Publication Number Publication Date
CN111292359A true CN111292359A (en) 2020-06-16

Family

ID=70974092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811493566.7A Pending CN111292359A (en) 2018-12-06 2018-12-06 Method and device for measuring performance of algorithm

Country Status (2)

Country Link
CN (1) CN111292359A (en)
WO (1) WO2020114136A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130058523A1 (en) * 2011-09-01 2013-03-07 Xerox Corporation Unsupervised parameter settings for object tracking algorithms
US20140341465A1 (en) * 2013-05-16 2014-11-20 The Regents Of The University Of California Real-time pose estimation system using inertial and feature measurements
CN104977022A (en) * 2014-04-04 2015-10-14 西北工业大学 Multi-target track system performance evaluation simulation method
CN107679578A (en) * 2017-10-12 2018-02-09 北京旷视科技有限公司 The method of testing of Target Recognition Algorithms, apparatus and system
CN108364301A (en) * 2018-02-12 2018-08-03 中国科学院自动化研究所 Based on across when Duplication Vision Tracking stability assessment method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9129400B1 (en) * 2011-09-23 2015-09-08 Amazon Technologies, Inc. Movement prediction for image capture
CN104462808B (en) * 2014-12-04 2017-06-16 河海大学 Level of security displacement and the slip variable window dynamic data approximating method of water level
CN107492113B (en) * 2017-06-01 2019-11-05 南京行者易智能交通科技有限公司 A kind of moving object in video sequences position prediction model training method, position predicting method and trajectory predictions method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130058523A1 (en) * 2011-09-01 2013-03-07 Xerox Corporation Unsupervised parameter settings for object tracking algorithms
US20140341465A1 (en) * 2013-05-16 2014-11-20 The Regents Of The University Of California Real-time pose estimation system using inertial and feature measurements
CN104977022A (en) * 2014-04-04 2015-10-14 西北工业大学 Multi-target track system performance evaluation simulation method
CN107679578A (en) * 2017-10-12 2018-02-09 北京旷视科技有限公司 The method of testing of Target Recognition Algorithms, apparatus and system
CN108364301A (en) * 2018-02-12 2018-08-03 中国科学院自动化研究所 Based on across when Duplication Vision Tracking stability assessment method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zhou Jin et al., "Research on performance evaluation of dim small target tracking algorithms", Opto-Electronic Engineering, pages 3 *
Zhang Shengjie et al., "Performance evaluation of data fusion algorithms for multi-target tracking systems", Computer Measurement & Control, pages 3 *
Su Yonggang et al., "Comparative study of the performance of Mean Shift-based target tracking algorithms", Laser & Infrared, pages 2 *

Also Published As

Publication number Publication date
WO2020114136A1 (en) 2020-06-11

Similar Documents

Publication Publication Date Title
CN109961106B (en) Training method and device of trajectory classification model and electronic equipment
CN111753757B (en) Image recognition processing method and device
CN111148030A (en) Fingerprint database updating method and device, server and storage medium
CN112950717A (en) Space calibration method and system
CN112562005A (en) Space calibration method and system
CN111524394A (en) Method, device and system for improving accuracy of comprehensive track monitoring data of apron
CN112651307A (en) Personnel trajectory tracking method, system, device and storage medium
CN111899279A (en) Method and device for detecting motion speed of target object
CN116563841B (en) Detection method and detection device for power distribution network equipment identification plate and electronic equipment
CN115546705A (en) Target identification method, terminal device and storage medium
CN114743165A (en) Method and device for determining vehicle trajectory, storage medium and electronic device
CN109376689B (en) Crowd analysis method and device
CN113077018A (en) Target object identification method and device, storage medium and electronic device
CN113505720A (en) Image processing method and device, storage medium and electronic device
CN111292359A (en) Method and device for measuring performance of algorithm
CN115409472B (en) Intelligent case handling process management method and system and electronic equipment
CN116645612A (en) Forest resource asset determination method and system
WO2021051568A1 (en) Method and apparatus for constructing road network topological structure, and computer device and storage medium
CN111739056A (en) Trajectory tracking system
CN113573263B (en) Method for determining line time based on signaling data and related device
CN113132891B (en) Passenger flow statistical method and system based on mobile signaling
CN113810665A (en) Video processing method, device, equipment, storage medium and product
EP2874117A1 (en) Method and apparatus for determining position related properties of a motion video camera
CN111243289A (en) Target vehicle tracking method and device, storage medium and electronic device
CN111288998A (en) Map drawing method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221207

Address after: 710000 second floor, building B3, yunhuigu, No. 156, Tiangu 8th Road, software new town, high tech Zone, Xi'an, Shaanxi

Applicant after: Xi'an Guangqi Intelligent Technology Co.,Ltd.

Address before: 710003 2nd floor, B3, yunhuigu, 156 Tiangu 8th Road, software new town, Xi'an City, Shaanxi Province

Applicant before: Xi'an Guangqi Future Technology Research Institute
