CN114973165A - Event recognition algorithm testing method and device and electronic equipment - Google Patents

Event recognition algorithm testing method and device and electronic equipment

Info

Publication number
CN114973165A
Authority
CN
China
Prior art keywords
subsequence
image
event
images
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210823341.3A
Other languages
Chinese (zh)
Other versions
CN114973165B (en)
Inventor
殷俊
吴立
黄鹏
周祥明
侯国飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202210823341.3A
Publication of CN114973165A
Application granted
Publication of CN114973165B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/44 - Event detection
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a testing method and device for an event recognition algorithm, and to an electronic device, which are used for automatically testing performance evaluation indexes of the event recognition algorithm. The method comprises: obtaining a sample image sequence and an annotated image sequence; calling an event recognition algorithm to recognize, from the sample image sequence, at least one first subsequence related to a target event; detecting whether the image arrangement order and/or the total number of images in the first subsequence meets a preset condition of the target event, to obtain a first detection result; then, for a second subsequence that meets the preset condition among the first subsequences, detecting whether the images in the second subsequence match the annotated images in the annotated image set, to obtain a second detection result; and finally calculating a performance evaluation index of the event recognition algorithm according to the first detection result and the second detection result. Based on this method, the spatio-temporal logic of the target event occurring in the scene is abstracted, automated testing is realized, and testing efficiency is effectively improved.

Description

Event recognition algorithm testing method and device and electronic equipment
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a method and an apparatus for testing an event recognition algorithm, and an electronic device.
Background
In an application scenario of intelligent transportation, an event recognition algorithm is generally adopted to analyze video data of a specified time period and a specified area, so as to recognize a traffic event occurring in the specified time period and the specified area, for example, a red light running event, a lane changing event, an overspeed event, a line pressing event, a collision event or a throwing event.
In practical application, the recognition accuracy of the event recognition algorithm affects subsequent analysis and judgment, and some traffic scenes therefore place increasingly high requirements on it. However, there is still no method for testing the performance evaluation indexes of an event recognition algorithm, so it cannot be guaranteed that the recognition accuracy of the event recognition algorithm meets the service requirements of an actual traffic scene.
Disclosure of Invention
The application provides a testing method and device for an event recognition algorithm, and an electronic device, which are used for automatically testing performance evaluation indexes of the event recognition algorithm.
In a first aspect, the present application provides a method for testing an event recognition algorithm, the method including:
acquiring a sample image sequence and an annotated image sequence; the sample image sequence comprises a plurality of sample images continuously acquired for a specified area, and the annotated image sequence is obtained by annotating a target object in the sample image sequence based on an annotation mode associated with a target event;
calling an event identification algorithm to identify at least one first subsequence associated with the target event from the sample image sequence;
detecting whether the image arrangement order and/or the total number of images in the first subsequence meets a preset condition of the target event, to obtain a first detection result; the preset condition is determined based on the logical order of the event elements contained in the target event;
for a second subsequence that meets the preset condition among the first subsequences, detecting whether the images in the second subsequence match the annotated images in the annotated image set, to obtain a second detection result;
and calculating the performance evaluation index of the event recognition algorithm according to the first detection result and the second detection result.
By this method, the spatio-temporal logic of an event in an actual scene can be abstracted from that scene, which guarantees the accuracy of the test with respect to that spatio-temporal logic. In addition, the testing process can be automated; when facing massive test data or complex, large-scale event recognition algorithms, such automated execution effectively reduces the dependence of algorithm testing on manual work and effectively improves testing efficiency.
In one possible design, the acquiring a sample image sequence and an annotation image sequence includes: acquiring a sample image sequence; wherein the sequence of sample images comprises a plurality of sample images successively acquired for a specified area; acquiring an auxiliary line for assisting in judging whether a target event occurs in the designated area; wherein the auxiliary line is generated according to a logic rule of the target event in the designated area; and acquiring an annotated image sequence for annotating the target object in the sample image sequence, and annotating the auxiliary line in each annotated image in the annotated image sequence.
In this method, the logic rule abstracts the spatio-temporal logic of the event in the actual scene. For example, the spatio-temporal logic of traffic events and violation events in a traffic scene is summarized, and one event is abstracted into a plurality of stages, each stage corresponding to an image that satisfies the spatio-temporal logic. On this basis, the correctness of the recognition result of the event recognition algorithm can be judged from the logic of how the event occurs, which guarantees the accuracy of the test with respect to the spatio-temporal logic.
In one possible design, identifying at least one first subsequence associated with the target event from the sequence of sample images includes: calling an event recognition algorithm, and recognizing a first image of the target object passing through a first auxiliary line, a second image passing through a second auxiliary line and a third image passing through a third auxiliary line from the sample image sequence; wherein the first auxiliary line, the second auxiliary line and the third auxiliary line are generated according to the space-time logic rule of the target event in the designated area; generating a first subsequence associated with the target event based on the respective acquisition time order of the first image, the second image, and the third image.
By this method, three auxiliary lines are set for the scene; the three auxiliary lines are an abstract embodiment of the spatio-temporal logic of how the event actually unfolds in the scene. By abstracting this spatio-temporal logic, the correctness of the recognition result of the event recognition algorithm can be judged from the logic of how the event occurs, which further guarantees the accuracy of the test.
In one possible design, the detecting whether the image arrangement order and/or the total number of images in the first subsequence meets a preset condition of the target event, to obtain a first detection result, includes: if it is detected that the image arrangement order in each sub-sample image sequence is the same as the acquisition order, the first detection result of that sub-sample image sequence is detection pass; if it is detected that the image arrangement order in each sub-sample image sequence differs from the acquisition order, the first detection result of that sub-sample image sequence is detection failure; and/or, if it is detected that the total number of images in each sub-sample image sequence is the same as the preset number of images corresponding to the target event, the first detection result of that sub-sample image sequence is detection pass; and if it is detected that the total number of images in each sub-sample image sequence differs from the preset number of images corresponding to the target event, the first detection result of that sub-sample image sequence is detection failure.
By this method, the preset condition of the target event is set based on the spatio-temporal logic rule of the target event, and the first detection result is obtained by detecting whether the image arrangement order and/or the total number of images in the first subsequence meets that preset condition. The method strips out the spatio-temporal logical relation of the target event, sets a preset condition for its occurrence, and automatically detects the first detection result of the corresponding first subsequence. Because this automatic detection involves only the total number of images and the image arrangement order in the first subsequence, the first detection result can be generated quickly, saving the computing time and computing resources spent in obtaining it.
In a possible design, the detecting, for a second subsequence that meets the preset condition among the first subsequences, whether the images in the second subsequence match the annotated images in the annotated image set, to obtain a second detection result, includes: for each second subsequence that meets the preset condition, performing the following operations: determining a first detection frame of the target object in the ith image of the second subsequence; determining a second detection frame of the target object in the annotated image corresponding to that ith image; calculating the coincidence degree between the first detection frame and the second detection frame; if the coincidence degree corresponding to the ith image is smaller than a preset threshold, the second detection result of the second subsequence in which that image is located is detection failure; and if the coincidence degree corresponding to every image in the second subsequence is greater than or equal to the preset threshold, the second detection result of the second subsequence is detection success.
Based on this method, after the first subsequence is detected according to the spatio-temporal logic rule of the target event, a second detection is performed on each second subsequence whose first detection result is detection pass. The second detection mainly checks whether the coincidence degree between a first detection frame of the target object in the second subsequence and a second detection frame of the same target object in the annotated image sequence is greater than or equal to a threshold, so as to obtain a second detection result, based on which the performance evaluation index of the event recognition algorithm for identifying the target event is calculated.
In a possible design, the detecting, for a second subsequence that meets the preset condition in the first subsequence, whether an image in the second subsequence matches an annotated image in the set of annotated images, to obtain a second detection result includes: for a second subsequence meeting the preset condition in the first subsequence, executing the following operations: determining a first object identification of the target object in the first image of the second subsequence; determining a second object identifier of the target object in the corresponding annotation image of the first image; if the first object identifier is matched with the second object identifier, the second detection result of the second subsequence in which the first image is located is successful; and if the first object identifier is not matched with the second object identifier, determining that the second detection result of the second subsequence in which the first image is located is detection failure.
Based on this method, after the first subsequence is detected according to the spatio-temporal logic rule of the target event, a second detection is performed on each second subsequence whose first detection result is detection pass. The second detection mainly checks whether the first object identifier of the target object in the second subsequence matches the second object identifier of the same target object in the annotated image sequence, so as to obtain a second detection result, based on which the performance evaluation index of the event recognition algorithm for identifying the target event is calculated.
In a second aspect, the present application provides a device for testing an event recognition algorithm, the device comprising:
the acquisition module is used for acquiring a sample image sequence and an annotation image sequence; the method comprises the steps that a sample image sequence comprises a plurality of sample images which are continuously collected aiming at a specified area, and an annotation image sequence is obtained by annotating a target object in the sample image sequence based on an annotation mode associated with a target event;
the identification module is used for calling an event identification algorithm to identify at least one first subsequence related to the target event from the sample image sequence;
the first detection module is used for detecting whether the image arrangement order and/or the total number of images in the first subsequence meet the preset condition of the target event or not to obtain a first detection result; the preset condition is determined based on the logic sequence of each event element contained in the target event;
the second detection module is used for detecting, for a second subsequence that meets the preset condition among the first subsequences, whether the images in the second subsequence match the annotated images in the annotated image set, to obtain a second detection result;
and the calculating module is used for calculating the performance evaluation index of the event recognition algorithm according to the first detection result and the second detection result.
In one possible design, the obtaining module is specifically configured to obtain a sample image sequence; wherein the sequence of sample images comprises a plurality of sample images acquired consecutively for a specified area; acquiring an auxiliary line for assisting in judging whether a target event occurs in the designated area; wherein the auxiliary line is generated according to a logic rule of the target event in the designated area; and acquiring an annotated image sequence for annotating the target object in the sample image sequence, and annotating the auxiliary line in each annotated image in the annotated image sequence.
In a possible design, the identification module is specifically configured to invoke an event recognition algorithm to identify, from the sample image sequence, a first image of the target object passing through a first auxiliary line, a second image passing through a second auxiliary line, and a third image passing through a third auxiliary line; wherein the first auxiliary line, the second auxiliary line and the third auxiliary line are generated according to the space-time logic rule of the target event in the designated area; generating a first subsequence associated with the target event based on respective temporal orders of acquisition of the first, second and third images.
In a possible design, the first detection module is specifically configured to: if it is detected that the image arrangement order in each sub-sample image sequence is the same as the acquisition order, obtain a first detection result of that sub-sample image sequence as detection pass; if it is detected that the image arrangement order in each sub-sample image sequence differs from the acquisition order, obtain a first detection result of that sub-sample image sequence as detection failure; and/or, if it is detected that the total number of images in each sub-sample image sequence is the same as the preset number of images corresponding to the target event, obtain a first detection result of that sub-sample image sequence as detection pass; and if it is detected that the total number of images in each sub-sample image sequence differs from the preset number of images corresponding to the target event, obtain a first detection result of that sub-sample image sequence as detection failure.
In a possible design, the second detecting module is specifically configured to, for each second subsequence that meets the preset condition among the first subsequences, perform the following operations: determining a first detection frame of the target object in the ith image of the second subsequence; determining a second detection frame of the target object in the annotated image corresponding to that ith image; calculating the coincidence degree between the first detection frame and the second detection frame; if the coincidence degree corresponding to the ith image is smaller than a preset threshold, the second detection result of the second subsequence in which that image is located is detection failure;
and if the coincidence degree corresponding to every image in the second subsequence is greater than or equal to the preset threshold, obtaining a second detection result of the second subsequence as detection success.
In a possible design, the second detecting module is specifically configured to, for a second subsequence that meets the preset condition in the first subsequence, perform the following operations: determining a first object identification of the target object in the first image of the second subsequence; determining a second object identifier of the target object in the corresponding annotation image of the first image; if the first object identifier is matched with the second object identifier, the second detection result of the second subsequence in which the first image is located is successful; and if the first object identifier is not matched with the second object identifier, determining that the second detection result of the second subsequence in which the first image is located is detection failure.
In a third aspect, the present application provides an electronic device, comprising:
a memory for storing a computer program;
the processor is used for realizing the steps of the testing method of the event recognition algorithm when executing the computer program stored in the memory.
In a fourth aspect, the present application provides a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the method steps of testing an event recognition algorithm as described above.
For the technical effects of the second to fourth aspects and of each possible design thereof, reference may be made to the description above of the technical effects of the first aspect and of its possible designs, which is not repeated here.
Drawings
FIG. 1 is a flow chart of a method for testing an event recognition algorithm provided herein;
FIG. 2 is a schematic diagram illustrating logic rules for a red light running event according to the present application;
FIG. 3 is a schematic diagram of a first subsequence provided herein;
FIG. 4 is a schematic diagram illustrating preset conditions for a red light running event according to the present application;
FIG. 5a is a schematic diagram illustrating the success of the first subsequence detection provided herein;
FIG. 5b is a schematic diagram of a first sub-sequence failure detection provided herein;
FIG. 6a is a schematic diagram illustrating the success of another first subsequence detection provided herein;
FIG. 6b is a schematic diagram of another first subsequence detection failure provided herein;
FIG. 7 is a diagram of a second sub-sequence and a corresponding annotated image provided in the present application;
FIG. 8 is a flow chart of a test platform test event recognition algorithm provided herein;
FIG. 9 is a schematic diagram of a testing apparatus for an event recognition algorithm provided herein;
fig. 10 is a schematic diagram of a structure of an electronic device provided in the present application;
FIG. 11 is a schematic illustration of a visual interface of an automated testing platform provided herein;
fig. 12 is a schematic view of a visualization interface of another automated testing platform provided in the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, the present application will be further described in detail with reference to the accompanying drawings. The particular methods of operation in the method embodiments may also be applied to apparatus embodiments or system embodiments.
In the description of the present application, "plurality" is understood to mean "at least two". "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist simultaneously, or B exists alone. "A is connected with B" may indicate: A and B are directly connected, or A and B are connected through C. In addition, the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or order.
The embodiment of the application provides a test method and device for an event recognition algorithm and electronic equipment, which are used for automatically testing performance evaluation indexes of the event recognition algorithm.
It should be noted that the test method for the event recognition algorithm provided in the embodiment of the present application may be applied to an AI (Artificial Intelligence) open platform, and in particular, to an automated test platform applied to an AI open platform. The automatic test platform can automatically test the test data to be detected and the event recognition algorithm to be detected and output the performance evaluation index of the event recognition algorithm, and optionally, the automatic test platform can also output the recognition result of the event recognition algorithm for the test data.
Specifically, a developer can upload an event recognition algorithm and test data through the automated testing platform; in more detail, the test data includes a sequence of continuously acquired sample images and a corresponding sequence of annotated images. After receiving the event recognition algorithm and the test data, the automated testing platform detects the performance evaluation indexes of the event recognition algorithm based on the testing method provided by the present application.
As shown in fig. 11, a visual interface of an automated testing platform is provided, in which a first display area and a second display area are included. The automatic test platform can acquire test data uploaded by research personnel based on the upload data control in the first display area; the automatic test platform can also acquire an event recognition algorithm uploaded by research personnel based on the uploading algorithm control in the second display area.
Then, the automatic test platform carries out an automatic test process based on the test data and the event recognition algorithm, and generates an algorithm test report after the test process is completed, wherein the algorithm test report can comprise performance evaluation indexes of the algorithm and a test result of the test.
Further, the algorithm test report may be displayed on a visualization interface of the automated testing platform, and the visualization interface shown in fig. 12 may include a third display area and a fourth display area, where the third display area may be used to display performance evaluation indexes of the algorithm, and the fourth display area may be used to display a test result of the current test.
Furthermore, the technical features included in the embodiments of the present application may be combined freely; those skilled in the art should understand that, depending on the practical application, technical solutions obtained by reasonably combining the technical features of the embodiments can likewise solve the same technical problem or achieve the same technical effect.
The method provided by the embodiment of the application is further described in detail with reference to the attached drawings.
Referring to fig. 1, an embodiment of the present application provides a method for testing an event recognition algorithm, which includes the following specific processes:
step 101: acquiring a sample image sequence and an annotation image sequence;
in the embodiment of the application, a plurality of sample images continuously acquired for a specified area are acquired first, and the plurality of sample images form a sample image sequence according to the sequence of acquisition time. Then, according to the logic rule of the target event occurring in the designated area, determining an auxiliary line for assisting in judging whether the target event occurs in the designated area, marking the auxiliary line for each sample image in the sample image sequence, and marking the target object in each sample image to obtain a marked image sequence.
The sample images are continuously acquired by acquisition equipment at a fixed position; the acquisition equipment can be a camera, including but not limited to a fisheye camera, a security camera, an infrared camera and the like.
The target event is defined based on the space-time logic of the target object in the designated area, and can be specifically determined by combining the application requirements of the actual application scene. For example, in the scene of intelligent transportation, if the target object is a motor vehicle, the target event may be a red light running event, a lane change event, an overspeed event, a line pressing event, a collision event or a throwing event; if the target object is a non-motor vehicle, the target event may be face attribute recognition or the like.
The logic rule is set based on how the target event plausibly occurs in the designated area, and may be set in combination with the application requirements of the actual application scenario. For example, in an intelligent traffic scene, if the designated area is an intersection including a zebra crossing, the target object is a motor vehicle, and the target event is a red light running event, three auxiliary lines are set for the designated area based on the zebra crossing of the intersection; the three auxiliary lines are then used to assist in judging whether a motor vehicle passing through the intersection has run the red light.
The annotated images in the annotated image sequence are arranged in acquisition-time order and correspond one-to-one to the sample images in the sample image sequence. Each annotated image carries the annotation information of its sample image, which may include a detection frame of the target object, an object identifier of the target object, the auxiliary lines of the designated area, and the like.
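To make the structure of such annotation information concrete, the following is a minimal Python sketch of how one annotated frame could be represented; all field names are illustrative assumptions, since the patent does not specify a data format.

    from dataclasses import dataclass
    from typing import List, Tuple

    Point = Tuple[int, int]

    # Illustrative record for one annotated frame; field names are
    # assumptions, not the patent's format.
    @dataclass
    class AnnotatedImage:
        frame_index: int                             # position in acquisition order
        detection_frame: Tuple[int, int, int, int]   # target object's box (x1, y1, x2, y2)
        object_id: str                               # object identifier, e.g. a license plate
        auxiliary_lines: List[Tuple[Point, Point]]   # each auxiliary line as two endpoints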
Further, the logic rule for the occurrence of a target event is described by way of example below. Referring to fig. 2, fig. 2 is a schematic diagram of the logic rule corresponding to a red light running event. Fig. 2 shows a lane for which three auxiliary lines are set: a front line, a middle line and a stop line. If acquisition device A captures motor vehicle a crossing the front line at T1, crossing the middle line at T2, and crossing the stop line at T3, where T1 < T2 < T3, it can be determined that motor vehicle a has run the red light.
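As a hedged illustration of this logic rule only, the following minimal Python sketch (all names hypothetical) checks whether a vehicle's recorded crossing times of the three auxiliary lines satisfy T1 < T2 < T3:

    def runs_red_light(crossing_times):
        """crossing_times: dict mapping auxiliary-line name to capture time,
        or None if the line was never crossed. Returns True only when the
        front, middle and stop lines are crossed in strict time order."""
        t1 = crossing_times.get("front_line")
        t2 = crossing_times.get("middle_line")
        t3 = crossing_times.get("stop_line")
        if None in (t1, t2, t3):
            return False  # at least one auxiliary line was never crossed
        return t1 < t2 < t3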
In summary, the embodiment of the present application provides an annotation method associated with target events. It can annotate not only the target objects in the sample images but also the acquisition scene of the sample images: how the target event occurs in the acquisition scene is abstracted into a spatio-temporal logical relationship, expressed in the form of auxiliary lines or auxiliary boxes, and the auxiliary lines are annotated on the acquired images to assist in judging whether the target event occurs. The target object in the sample image sequence is then annotated based on the annotation mode associated with the target event, yielding the annotated image sequence.
Step 102: calling an event identification algorithm to identify at least one first subsequence associated with the target event from the sample image sequence;
in the embodiment of the present application, the event identification algorithm is used to identify the relevant sample images of the target event from the sample image sequence, and the relevant sample images of the same target object with the target event occurring once may form a first subsequence. For example, if the target event is a red light running event, the event recognition algorithm may recognize, from the sample image sequence, a related sample image of the same target object where the red light running event occurs, that is, a first subsequence related to the red light running event.
Specifically, the event recognition algorithm is used to recognize the target event, and the spatio-temporal logic rule of the target event occurring in the designated area is that the same target object passes the auxiliary lines arranged in the target area, where the number of auxiliary lines can be set according to the actual application. Taking three auxiliary lines as an example, the event recognition algorithm is called to recognize, from the sample image sequence, a first image of the same target object passing the first auxiliary line, a second image passing the second auxiliary line and a third image passing the third auxiliary line; the first image, the second image and the third image are then arranged in acquisition-time order to generate a first subsequence associated with the target event.
For example, referring to fig. 3, which is a schematic diagram of an event identification algorithm identifying first subsequences associated with a target event: the sample image sequence is a set of sample images arranged in acquisition-time order, each sample image being acquired for the specified area. The event identification algorithm identifies first subsequences from this sequence; each identified first subsequence indicates one occurrence of the target event in the specified area, so the m first subsequences identified in fig. 3 indicate that the target event occurred m times in the specified area.
In this way, the event identification algorithm can be called to identify at least one first subsequence associated with the target event. However, the event identification algorithm is influenced by how it was built and by the application environment, so the first subsequences it identifies are not necessarily accurate. The identified first subsequences therefore need to be further detected in order to generate a performance evaluation index for evaluating the identification accuracy of the event identification algorithm.
Step 103: detecting whether the image arrangement order and/or the total number of images in the first subsequence meet the preset condition of the target event or not to obtain a first detection result;
the first sub-sequence associated with a target event may include one image or multiple images, and the total number of the images should satisfy the preset condition of the target event.
Further, if the first sub-sequence associated with a target event includes multiple images, the arrangement order of the images in the same sub-sequence should also satisfy the preset condition of the target event.
The preset condition may be understood as a condition that must be satisfied for determining that the target event has occurred; in more detail, the preset condition may be determined based on the logical order of the event elements contained in the target event. Taking a red light running event as an example, the logic rule of the target event is expressed by three auxiliary lines, and the preset condition may be understood as the target object passing the three auxiliary lines in chronological order.
For example, fig. 2 shows the logic rule of a red light running event, and based on that rule the preset condition of the red light running event can be as shown in fig. 4. In fig. 4, a red light running event needs to include three images: a first image of the target object crossing the front line, a second image crossing the middle line, and a third image crossing the stop line. And, if the first image is acquired at time t1, the second image at time t2, and the third image at time t3, then t1 < t2 < t3.
In the embodiment of the application, if the total number of the images in each sub-sample image sequence is detected to be the same as the number of the preset images corresponding to the target event, obtaining a first detection result of each sub-sample image sequence as a detection pass; and if the total number of the images in each sub-sample image sequence is different from the preset number of images corresponding to the target event, obtaining a first detection result of each sub-sample image sequence as detection failure.
Specifically, fig. 5a shows a single first subsequence containing three images, acquired at times t1, t2 and t3 respectively. Since the preset number of images corresponding to the target event is three, the first detection result of this first subsequence can be judged as detection pass.
Fig. 5b shows a single first subsequence containing two images, acquired at times t1 and t2 respectively. Since the preset number of images corresponding to the target event is three, the first detection result of this first subsequence is judged as detection failure.
Further, in this embodiment of the application, if it is detected that the image arrangement order in each first subsequence is the same as the acquisition order, the first detection result of that first subsequence is detection pass; and if it is detected that the image arrangement order in each first subsequence differs from the acquisition order, the first detection result of that first subsequence is detection failure.
Specifically, fig. 6a shows a single first subsequence containing three images: the first image is acquired at time t1 as the target object passes the first auxiliary line, the second image at time t2 passing the second auxiliary line, and the third image at time t3 passing the third auxiliary line, with t1 < t2 < t3. The first detection result of this first subsequence can therefore be judged as detection pass.
Fig. 6b shows a single first subsequence containing three images: the first image is acquired at time t2 passing the first auxiliary line, the second image at time t1 passing the second auxiliary line, and the third image at time t3 passing the third auxiliary line, with t1 < t2 < t3. The image arrangement order differs from the acquisition order, so the first detection result of this first subsequence is judged as detection failure.
In summary, based on the spatio-temporal logic rule of the target event, a preset condition for its occurrence is set, and the first detection result is obtained by detecting whether the image arrangement order and/or the total number of images in the first subsequence meets that preset condition. The method strips out the spatio-temporal logical relation of the target event and automatically detects the first detection result of the corresponding first subsequence; because the automatic detection involves only the total number of images and the image arrangement order in the first subsequence, the first detection result can be generated quickly, saving computing time and computing resources.
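A minimal sketch of this first detection, assuming a first subsequence is given as a list of (capture_time, auxiliary_line_index) pairs in the order the algorithm emitted them; this representation is an assumption for illustration, not the patent's:

    def first_detection(subsequence, expected_count=3):
        """Pass only if the total number of images matches the preset number
        and the image arrangement order equals the acquisition order."""
        if len(subsequence) != expected_count:
            return False  # total number of images violates the preset condition
        times = [t for t, _line in subsequence]
        # arrangement order passes only with strictly increasing capture times
        return all(a < b for a, b in zip(times, times[1:]))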
Step 104: for a second subsequence that meets the preset condition among the first subsequences, detecting whether the images in the second subsequence match the annotated images in the annotated image set, to obtain a second detection result;
in this embodiment of the application, for the second subsequence that meets the preset condition in the first subsequence, it is further required to detect whether the identification information of the image in each second subsequence matches with the annotation information of the corresponding annotated image in the annotated image set, so as to obtain a second detection result.
Specifically, for a single second subsequence that meets the preset condition, taking the first image in the second subsequence as an example: first, a first detection frame of the target object in the first image of the second subsequence is determined, and a second detection frame of the target object in the annotated image corresponding to that first image is determined. The coincidence degree between the first detection frame and the second detection frame is then calculated and compared with a preset threshold. If the coincidence degree is smaller than the preset threshold, the second detection result of the second subsequence is detection failure; if the coincidence degree is greater than or equal to the preset threshold, the second image in the second subsequence is judged in the same manner, and so on. Only when the coincidence degree corresponding to every image in the second subsequence is greater than or equal to the preset threshold can the second detection result of the second subsequence be judged as detection success.
It should be noted that the second detection frame is annotation information corresponding to the annotated image, and the coincidence degree may be calculated as an intersection-over-union (IoU).
For example, fig. 7 shows a second subsequence and the corresponding annotated images. In fig. 7, the second subsequence includes three images, each containing a first detection frame of the target object; the three images correspond to three annotated images, each containing a second detection frame of the target object. By calculating the coincidence degree between the first detection frame and the second detection frame of each corresponding image pair, a first coincidence degree for the first image, a second coincidence degree for the second image and a third coincidence degree for the third image are obtained. If the first, second and third coincidence degrees are all greater than or equal to 0.5, the second detection result of the second subsequence can be judged as detection success; if any of them is less than 0.5, the second detection result of the second subsequence is judged as detection failure.
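A hedged sketch of this overlap-based second detection, computing IoU over (x1, y1, x2, y2) boxes; the box format and function names are assumptions for illustration:

    def iou(box_a, box_b):
        """Coincidence degree as intersection-over-union of two boxes."""
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union else 0.0

    def second_detection_by_overlap(pred_boxes, gt_boxes, threshold=0.5):
        """Detection success only if every image's coincidence degree
        reaches the preset threshold."""
        return all(iou(p, g) >= threshold for p, g in zip(pred_boxes, gt_boxes))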
Further, in this embodiment of the present application, it is further required to determine whether the first object identifier in the first image of the second subsequence is the same as the second object identifier in the corresponding labeled image, so as to further obtain a more accurate second detection result.
Specifically, for a first image of a single second subsequence, determining a first object identifier for identifying a target object in the first image by calling an event recognition algorithm, determining a second object identifier for identifying the target object in an annotated image corresponding to the first image, and then judging whether the first object identifier is matched with the second object identifier: if so, judging that the second detection result of the second subsequence is successful; if not, the second detection result of the second subsequence is judged to be detection failure.
For example, for the first image of the second subsequence and its corresponding annotated image, the event recognition algorithm is called to recognize the license plate number of the target object in the first image as the first object identifier; the annotation information of the annotated image corresponding to the first image is then determined, and the license plate number of the target object in that annotated image is obtained as the second object identifier. Whether the first object identifier is consistent with the second object identifier is then judged: if so, the second detection result of the second subsequence is detection success; if not, the second detection result of the second subsequence is detection failure.
In some embodiments, it is also possible to directly retrieve, from the annotation information corresponding to the sequence of annotated images, whether there is an object identifier matching the first object identifier: if so, judging that the second detection result of the second subsequence is successful; if not, the second detection result of the second subsequence is judged to be detection failure.
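Both identifier-matching variants above can be sketched in the same hedged spirit; the function names are hypothetical:

    def second_detection_by_id(pred_id, annotated_id):
        # Direct comparison with the corresponding annotated image.
        return pred_id == annotated_id

    def second_detection_by_retrieval(pred_id, all_annotated_ids):
        # Retrieval variant: pass if any annotated identifier matches.
        return pred_id in set(all_annotated_ids)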
In summary, after the first subsequence is detected according to the spatio-temporal logic rule of the target event, a second detection is further performed on each second subsequence whose first detection result is detection pass. The second detection mainly checks whether the coincidence degree between the first detection frame of the target object in the second subsequence and the second detection frame of the same target object in the annotated image sequence is greater than or equal to a threshold, so as to obtain a second detection result, based on which the performance evaluation index of the event recognition algorithm for identifying the target event is calculated.
Step 105: and calculating the performance evaluation index of the event recognition algorithm according to the first detection result and the second detection result.
In the embodiment of the application, the accuracy and recall rate of the target events identified by calling the event recognition algorithm can be counted and used as the performance evaluation indexes of the event recognition algorithm.
Specifically, the first detection result includes a first number of first subsequences whose first detection result is detection pass, and a second number whose first detection result is detection failure; the second detection result includes, among the first number of subsequences, a third number whose second detection result is detection success and a fourth number whose second detection result is detection failure. The accuracy and recall rate of the target events are calculated from the first number, the second number, the third number and the fourth number, and used as the performance evaluation indexes of the event recognition algorithm.
It should be noted that the above accuracy and recall rate are only some possible performance evaluation indexes; those skilled in the art will appreciate that other metrics may likewise serve as performance evaluation indexes of the event recognition algorithm, which are not described in detail herein.
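One plausible reading of these counts is sketched below; treating first-detection passes that also pass the second detection as true positives is an assumption, as is the extra ground-truth event count needed for recall:

    def performance_indexes(n_pass1, n_fail1, n_pass2, n_fail2, n_annotated_events):
        """n_pass1/n_fail1: first subsequences passing/failing the first detection;
        n_pass2/n_fail2: among the passing ones, those passing/failing the second
        detection; n_annotated_events: events in the annotated sequence (assumed)."""
        predicted = n_pass1 + n_fail1      # every event the algorithm reported
        true_positives = n_pass2           # passed both detections
        precision = true_positives / predicted if predicted else 0.0
        recall = true_positives / n_annotated_events if n_annotated_events else 0.0
        return precision, recall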
Further, based on the method for testing the event recognition algorithm, the embodiment of the application also provides a possible testing platform, wherein the input of the platform comprises a sample image sequence, a labeled image sequence and an event recognition algorithm, and the platform can automatically execute the event recognition algorithm based on the input and output the performance evaluation index of the event recognition algorithm.
For an application scenario of intelligent transportation, a flow of testing the event recognition algorithm on the testing platform described above may be as shown in fig. 8.
Step 801: acquiring a sample image sequence and an annotation image sequence;
in the embodiment of the application, the sample image sequence may be a video material collected for different traffic scenes, and the annotation image sequence is a group of images pre-annotated for each frame of a time period in which a traffic event occurs, wherein a corresponding position and a license plate are annotated for each vehicle.
Step 802: acquiring auxiliary line configuration;
in the embodiment of the application, auxiliary lines are configured according to the space-time logic of traffic events occurring in traffic scenes, and three auxiliary lines including a front line, a middle line and a stop line can be configured for different traffic scenes based on the logic of some traffic events.
Step 803: acquiring an event identification algorithm;
in an embodiment of the present application, an event identification algorithm is used to identify traffic events.
Step 804: responding to an instruction for executing the event recognition algorithm, and obtaining a recognition result for calling the event recognition algorithm to recognize the sample image sequence;
in an embodiment of the application, invoking the event recognition algorithm may identify an image sequence associated with the traffic event from the sample image sequence as a recognition result.
Step 805: comparing the identification result with the marked image sequence to obtain a comparison result;
in the embodiment of the application, the recognition result is compared with the corresponding marked image, specifically, a cross comparison of the vehicle position in the corresponding image is calculated, the size between the cross comparison and a preset threshold value is compared, whether the license plates of the vehicles in the corresponding image are matched or not is compared, if the cross comparison is larger than or equal to the preset threshold value and the license plates are matched, the comparison is successful, and the comparison result is obtained based on the comparison result.
Step 806: and calculating the performance evaluation index of the event recognition algorithm based on the comparison result.
In the embodiment of the application, the accuracy and the recall rate can be calculated according to the comparison result, and the accuracy and the recall rate are used as performance evaluation indexes of the event recognition algorithm.
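Pulling steps 801-806 together, a hedged end-to-end sketch of the platform flow, reusing the first_detection and performance_indexes sketches above; `algorithm` and `match_fn` are caller-supplied stand-ins for the unspecified event recognition algorithm and the annotation comparison of step 805:

    def run_test(sample_sequence, annotated_sequence, aux_lines,
                 algorithm, match_fn, n_annotated_events):
        subsequences = algorithm(sample_sequence, aux_lines)           # step 804
        first_pass = [s for s in subsequences if first_detection(s)]   # order/count check
        second_pass = [s for s in first_pass
                       if match_fn(s, annotated_sequence)]             # step 805
        return performance_indexes(len(first_pass),
                                   len(subsequences) - len(first_pass),
                                   len(second_pass),
                                   len(first_pass) - len(second_pass),
                                   n_annotated_events)                 # step 806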
Based on the technical scheme provided by the embodiment of the application, the spatio-temporal logic of an event in an actual scene can be abstracted from that scene. For example, the spatio-temporal logic of traffic events and violation events in a traffic scene is summarized, and one event is abstracted into a plurality of stages, each stage corresponding to an image that satisfies the spatio-temporal logic. On this basis, a testing method suitable for event recognition algorithms is provided. By abstracting the spatio-temporal logic, the testing method can judge the correctness of the recognition result of the event recognition algorithm from the logic of how the event occurs, which further guarantees the accuracy of the test.
Furthermore, the testing method for the event recognition algorithm enables an automated testing process. The user only needs to prepare the test data (a sample image sequence and an annotated image sequence) and the event recognition algorithm as the input of the test platform, and can directly obtain the final output, namely the performance evaluation indexes of the event recognition algorithm. The equipment only needs to obtain the sample image sequence, the annotated image sequence and the event recognition algorithm as input; it then automatically executes the testing method based on a preset configuration and outputs the performance evaluation indexes of the event recognition algorithm as the output of the test platform. When facing massive test data or complex, large-scale event recognition algorithms, such automated execution effectively reduces the dependence of algorithm testing on manual work and effectively improves testing efficiency.
Based on the same inventive concept, the present application further provides a testing apparatus for an event recognition algorithm. The apparatus is used to automatically test the performance evaluation indexes of an event recognition algorithm, solving the problem that, for lack of such a testing method, the recognition accuracy of the event recognition algorithm cannot be guaranteed to meet the service requirements of an actual traffic scene. By abstracting the spatio-temporal logic of the target event occurring in the scene, automated testing is realized and testing efficiency is effectively improved. Referring to fig. 9, the apparatus 9 includes:
an obtaining module 901, which obtains a sample image sequence and an annotation image sequence; the sample image sequence comprises a plurality of sample images which are continuously acquired aiming at a specified area, and the annotation image sequence is obtained by annotating a target object in the sample image sequence based on an annotation mode associated with a target event;
an identifying module 902, which invokes an event identifying algorithm to identify at least one first subsequence associated with the target event from the sample image sequence;
a first detecting module 903, configured to detect whether an image arrangement order and/or a total number of images in the first subsequence meet a preset condition of the target event, to obtain a first detection result; the preset condition is determined based on the logic sequence of each event element contained in the target event;
a second detecting module 904, configured to detect, for a second subsequence that meets the preset condition in the first subsequence, whether an image in the second subsequence matches an annotated image in the annotated image set, to obtain a second detection result;
a calculating module 905, configured to calculate a performance evaluation index of the event recognition algorithm according to the first detection result and the second detection result.
In one possible design, the obtaining module 901 is specifically configured to: acquire a sample image sequence, wherein the sample image sequence comprises a plurality of sample images continuously acquired for a specified area; acquire an auxiliary line for assisting in judging whether a target event occurs in the specified area, wherein the auxiliary line is generated according to a logic rule of the target event in the specified area; and acquire an annotation image sequence in which the target object in the sample image sequence is annotated, the auxiliary line being annotated in each annotated image of the annotation image sequence.
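For concreteness, one plausible layout for this annotation input is sketched below; the field names and the line representation are assumptions for illustration, not a format prescribed by the application:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AuxiliaryLine:
    name: str                          # e.g. "stop_line" or "lane_boundary"
    start: Tuple[int, int]             # (x, y) endpoint in pixel coordinates
    end: Tuple[int, int]

@dataclass
class AnnotatedImage:
    frame_index: int                   # acquisition order within the sequence
    object_id: str                     # identity label of the target object
    bbox: Tuple[int, int, int, int]    # (x, y, w, h) detection frame
    auxiliary_lines: List[AuxiliaryLine] = field(default_factory=list)
```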
In one possible design, the identifying module 902 is specifically configured to: invoke an event recognition algorithm to identify, from the sample image sequence, a first image of the target object passing through a first auxiliary line, a second image of the target object passing through a second auxiliary line, and a third image of the target object passing through a third auxiliary line, wherein the first auxiliary line, the second auxiliary line, and the third auxiliary line are generated according to the space-time logic rule of the target event in the specified area; and generate a first subsequence associated with the target event based on the respective acquisition time order of the first image, the second image, and the third image.
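A sketch of this identification step, assuming a crossing predicate `crossed(image, line)` supplied by the event recognition algorithm and images carrying a `frame_index` field as in the structure above (both assumptions):

```python
def build_first_subsequence(images, auxiliary_lines, crossed):
    """Pick, per auxiliary line, the earliest image in which the target
    object crosses that line (images are assumed to be in acquisition
    order), and assemble the picks into a candidate first subsequence."""
    subsequence = []
    for line in auxiliary_lines:          # first, second, third line
        hit = next((img for img in images if crossed(img, line)), None)
        if hit is None:
            return None                   # no crossing: no candidate event
        subsequence.append(hit)
    # Arrange the picks by acquisition time, as the design describes;
    # the first detection step will later re-verify this ordering.
    return sorted(subsequence, key=lambda img: img.frame_index)
```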
In one possible design, the first detecting module 903 is specifically configured to: if it is detected that the image arrangement order in each sub-sample image sequence is the same as the acquisition order, obtain a first detection result of that sub-sample image sequence as a detection pass; if it is detected that the image arrangement order in each sub-sample image sequence is different from the acquisition order, obtain a first detection result of that sub-sample image sequence as a detection failure; and/or, if it is detected that the total number of images in each sub-sample image sequence is the same as the preset number of images corresponding to the target event, obtain a first detection result of that sub-sample image sequence as a detection pass; and if it is detected that the total number of images in each sub-sample image sequence is different from the preset number of images corresponding to the target event, obtain a first detection result of that sub-sample image sequence as a detection failure.
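A minimal sketch of this first detection, assuming each image carries a `frame_index` field recording its acquisition order (an assumption carried over from the sketch above):

```python
def first_detection(subsequence, expected_count):
    """Check both preset conditions: acquisition order and total count."""
    order_ok = all(a.frame_index <= b.frame_index
                   for a, b in zip(subsequence, subsequence[1:]))
    count_ok = len(subsequence) == expected_count
    return order_ok and count_ok      # True: detection pass; False: failure
```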
In one possible design, the second detecting module 904 is specifically configured to, for each second subsequence of the first subsequences that meets the preset condition, perform the following operations: determining a first detection frame of the target object in the ith image of the second subsequence; determining a second detection frame of the target object in the annotated image corresponding to the ith image; calculating the coincidence degree between the first detection frame and the second detection frame; if the coincidence degree corresponding to the ith image is smaller than a preset threshold, determining that the second detection result of the second subsequence in which the ith image is located is a detection failure; and if the coincidence degree corresponding to every image in the second subsequence is greater than or equal to the preset threshold, obtaining a second detection result of the second subsequence as a detection success.
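The application does not prescribe how the coincidence degree is computed; intersection-over-union (IoU) of the two detection frames is one common choice and is sketched below with (x, y, w, h) boxes and a purely illustrative threshold:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) rectangles."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def second_detection_by_overlap(box_pairs, threshold=0.5):
    """box_pairs: (algorithm frame, annotated frame) for each image in the
    second subsequence; the subsequence passes only if every pair meets
    the threshold. The threshold value here is illustrative only."""
    return all(iou(a, b) >= threshold for a, b in box_pairs)
```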
In one possible design, the second detecting module 904 is specifically configured to, for a second subsequence that meets the preset condition in the first subsequence, perform the following operations: determining a first object identifier of the target object in the first image of the second subsequence; determining a second object identifier of the target object in the annotated image corresponding to the first image; if the first object identifier matches the second object identifier, determining that the second detection result of the second subsequence in which the first image is located is a detection success; and if the first object identifier does not match the second object identifier, determining that the second detection result of the second subsequence in which the first image is located is a detection failure.
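A sketch of this identifier-based variant, again with assumed field names (`object_id`, `frame_index`) from the structures above:

```python
def second_detection_by_id(subsequence, annotations_by_frame):
    """Compare the object identifier the algorithm assigned in the first
    image of the second subsequence against the annotated identifier in
    the corresponding annotated image."""
    first_image = subsequence[0]
    annotated = annotations_by_frame[first_image.frame_index]
    return first_image.object_id == annotated.object_id
```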
Based on the above apparatus, the space-time logic of an event can be abstracted from the actual scene; for example, the space-time logic of traffic events and violation events in a traffic scene can be summarized so that one event is abstracted into a plurality of stages, each stage corresponding to one image that satisfies the space-time logic, thereby further ensuring the accuracy of the test with respect to the space-time logic. In addition, an automated testing process can be realized; when facing massive test data or complex, numerous event recognition algorithms, such automatic execution effectively reduces the dependence of algorithm testing on manual work and effectively improves the testing efficiency.
Based on the same inventive concept, an embodiment of the present application further provides an electronic device, where the electronic device may implement the function of the apparatus for testing an event recognition algorithm, and with reference to fig. 10, the electronic device includes:
at least one processor 11, and a memory 12 connected to the at least one processor 11. In this embodiment, the specific connection medium between the processor 11 and the memory 12 is not limited; fig. 10 takes the case where the processor 11 and the memory 12 are connected through a bus 10 as an example. The bus 10 is shown in fig. 10 by a thick line; the connection form between other components is merely illustrative and not limiting. The bus 10 may be divided into an address bus, a data bus, a control bus, and the like; for ease of illustration, only one thick line is shown in fig. 10, but this does not mean that there is only one bus or one type of bus. Alternatively, the processor 11 may also be referred to as a controller; the name is not limited here.
In the embodiment of the present application, the memory 12 stores instructions executable by the at least one processor 11, and the at least one processor 11 may execute the test method of the event recognition algorithm discussed above by executing the instructions stored in the memory 12. The processor 11 may implement the functions of the various modules in the apparatus/system shown in fig. 9.
The processor 11 is the control center of the apparatus/system. It may connect the various parts of the entire control device by using various interfaces and lines, and performs the various functions of the apparatus/system and processes data by running or executing the instructions stored in the memory 12 and invoking the data stored in the memory 12, thereby monitoring the apparatus/system as a whole.
In one possible design, the processor 11 may include one or more processing units. The processor 11 may integrate an application processor, which primarily handles the operating system, user interfaces, application programs, and the like, and a modem processor, which primarily handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 11. In some embodiments, the processor 11 and the memory 12 may be implemented on the same chip; in other embodiments, they may be implemented separately on their own chips.
The processor 11 may be a general-purpose processor such as a central processing unit (CPU), a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logical blocks disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or any conventional processor, or the like. The steps of the testing method of the event recognition algorithm disclosed in the embodiments of the present application may be executed directly by a hardware processor, or by a combination of hardware and software modules in the processor.
The memory 12, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 12 may include at least one type of storage medium, for example, a flash memory, a hard disk, a multimedia card, a card-type memory, a random access memory (RAM), a static random access memory (SRAM), a programmable read-only memory (PROM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The memory 12 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 12 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
By programming the processor 11, the code corresponding to the testing method of the event recognition algorithm described in the foregoing embodiments can be solidified into the chip, so that the chip can execute the steps of the testing method of the embodiment shown in fig. 1 at run time. How to program the processor 11 is well known to those skilled in the art and is not described in detail here.
Based on the same inventive concept, the embodiment of the present application further provides a storage medium storing computer instructions, which, when run on a computer, cause the computer to execute the test method of the event recognition algorithm discussed above.
In some possible embodiments, the various aspects of the method for testing an event recognition algorithm provided herein may also be embodied in the form of a program product comprising program code for causing the control apparatus to perform the steps of the method for testing an event recognition algorithm according to various exemplary embodiments of the present application described herein above when the program product is run on a device.
It should be apparent to one skilled in the art that embodiments of the present application may be provided as a method, apparatus/system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for testing an event recognition algorithm, the method comprising:
acquiring a sample image sequence and an annotation image sequence; wherein the sample image sequence comprises a plurality of sample images continuously acquired for a specified area, and the annotation image sequence is obtained by annotating a target object in the sample image sequence based on an annotation mode associated with a target event;
calling an event recognition algorithm to identify at least one first subsequence associated with the target event from the sample image sequence;
detecting whether the image arrangement order and/or the total number of images in the first subsequence meet a preset condition of the target event, to obtain a first detection result; wherein the preset condition is determined based on the logic sequence of each event element contained in the target event;
for a second subsequence that meets the preset condition in the first subsequence, detecting whether the images in the second subsequence match the annotated images in the annotated image set, to obtain a second detection result;
and calculating the performance evaluation index of the event recognition algorithm according to the first detection result and the second detection result.
2. The method of claim 1, wherein the obtaining of the sequence of sample images and the sequence of annotation images comprises:
acquiring a sample image sequence; wherein the sequence of sample images comprises a plurality of sample images successively acquired for a specified area;
acquiring an auxiliary line for assisting in judging whether a target event occurs in the specified area; wherein the auxiliary line is generated according to a logic rule of the target event in the specified area;
and acquiring an annotated image sequence for annotating the target object in the sample image sequence, and annotating the auxiliary line in each annotated image in the annotated image sequence.
3. The method of claim 1, wherein the invoking an event recognition algorithm to identify at least one first subsequence associated with the target event from the sample image sequence comprises:
calling an event recognition algorithm, and identifying, from the sample image sequence, a first image of the target object passing through a first auxiliary line, a second image of the target object passing through a second auxiliary line, and a third image of the target object passing through a third auxiliary line; wherein the first auxiliary line, the second auxiliary line and the third auxiliary line are generated according to the space-time logic rule of the target event in the specified area;
generating a first subsequence associated with the target event based on the respective acquisition time order of the first image, the second image, and the third image.
4. The method as claimed in claim 1, wherein the detecting whether the image arrangement order and/or the total number of images in the first subsequence meet the preset condition of the target event, to obtain a first detection result, comprises:
if it is detected that the image arrangement order in each sub-sample image sequence is the same as the acquisition order, obtaining a first detection result of each sub-sample image sequence as a detection pass;
if it is detected that the image arrangement order in each sub-sample image sequence is different from the acquisition order, obtaining a first detection result of each sub-sample image sequence as a detection failure; and/or
if it is detected that the total number of images in each sub-sample image sequence is the same as the preset number of images corresponding to the target event, obtaining a first detection result of each sub-sample image sequence as a detection pass;
and if it is detected that the total number of images in each sub-sample image sequence is different from the preset number of images corresponding to the target event, obtaining a first detection result of each sub-sample image sequence as a detection failure.
5. The method of claim 1, wherein the detecting, for a second subsequence of the first subsequence that meets the preset condition, whether the images in the second subsequence match the annotated images in the annotated image set, to obtain a second detection result, comprises:
for each second subsequence meeting the preset condition in the first subsequence, performing the following operations:
determining a first detection frame of the target object in the ith image of the second subsequence;
determining a second detection frame of the target object in the annotated image corresponding to the ith image;
calculating the coincidence degree between the first detection frame and the second detection frame;
if the coincidence degree corresponding to the ith image is smaller than a preset threshold, determining that the second detection result of the second subsequence in which the ith image is located is a detection failure;
and if the coincidence degree corresponding to every image in the second subsequence is greater than or equal to the preset threshold, obtaining a second detection result of the second subsequence as a detection success.
6. The method according to any one of claims 1 to 5, wherein the detecting, for a second subsequence of the first subsequence that meets the preset condition, whether the images in the second subsequence match the annotated images in the annotated image set, to obtain a second detection result, comprises:
for a second subsequence meeting the preset condition in the first subsequence, executing the following operations:
determining a first object identification of the target object in the first image of the second subsequence;
determining a second object identifier of the target object in the corresponding annotation image of the first image;
if the first object identifier matches the second object identifier, determining that the second detection result of the second subsequence in which the first image is located is a detection success;
and if the first object identifier does not match the second object identifier, determining that the second detection result of the second subsequence in which the first image is located is a detection failure.
7. An apparatus for testing an event recognition algorithm, the apparatus comprising:
the acquisition module is used for acquiring a sample image sequence and an annotation image sequence; the sample image sequence comprises a plurality of sample images continuously acquired for a specified area, and the annotation image sequence is obtained by annotating a target object in the sample image sequence based on an annotation mode associated with a target event;
the identification module is used for calling an event recognition algorithm to identify at least one first subsequence associated with the target event from the sample image sequence;
the first detection module is used for detecting whether the image arrangement order and/or the total number of images in the first subsequence meet the preset condition of the target event or not to obtain a first detection result; the preset condition is determined based on the logic sequence of each event element contained in the target event;
the second detection module is used for detecting, for a second subsequence that meets the preset condition in the first subsequence, whether the images in the second subsequence match the annotated images in the annotated image set, to obtain a second detection result;
and the calculating module is used for calculating the performance evaluation index of the event recognition algorithm according to the first detection result and the second detection result.
8. The apparatus according to claim 7, wherein the acquisition module is specifically configured to: acquire a sample image sequence, wherein the sample image sequence comprises a plurality of sample images continuously acquired for a specified area; acquire an auxiliary line for assisting in judging whether a target event occurs in the specified area, wherein the auxiliary line is generated according to a logic rule of the target event in the specified area; and acquire an annotation image sequence in which the target object in the sample image sequence is annotated, the auxiliary line being annotated in each annotated image of the annotation image sequence.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1-6 when executing the computer program stored on the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1-6.
CN202210823341.3A 2022-07-14 2022-07-14 Event recognition algorithm testing method and device and electronic equipment Active CN114973165B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210823341.3A CN114973165B (en) 2022-07-14 2022-07-14 Event recognition algorithm testing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN114973165A true CN114973165A (en) 2022-08-30
CN114973165B CN114973165B (en) 2022-10-25

Family

ID=82970049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210823341.3A Active CN114973165B (en) 2022-07-14 2022-07-14 Event recognition algorithm testing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114973165B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002024741A (en) * 2000-06-02 2002-01-25 Internatl Business Mach Corp <Ibm> Method for distinguishing partial cyclic pattern in event sequence and corresponding event subsequence
CN111191666A (en) * 2018-11-14 2020-05-22 网易(杭州)网络有限公司 Method and device for testing image target detection algorithm
CN111860140A (en) * 2020-06-10 2020-10-30 北京迈格威科技有限公司 Target event detection method and device, computer equipment and storage medium
CN111968378A (en) * 2020-07-07 2020-11-20 浙江大华技术股份有限公司 Motor vehicle red light running snapshot method and device, computer equipment and storage medium
CN112528716A (en) * 2019-09-19 2021-03-19 杭州海康威视数字技术股份有限公司 Event information acquisition method and device
CN113052048A (en) * 2021-03-18 2021-06-29 北京百度网讯科技有限公司 Traffic incident detection method and device, road side equipment and cloud control platform
KR20210127121A (en) * 2020-12-11 2021-10-21 베이징 바이두 넷컴 사이언스 테크놀로지 컴퍼니 리미티드 Road event detection method, apparatus, device and storage medium
CN113963438A (en) * 2021-10-20 2022-01-21 上海商汤智能科技有限公司 Behavior recognition method and device, equipment and storage medium
CN114333344A (en) * 2021-12-29 2022-04-12 以萨技术股份有限公司 Motor vehicle violation snapshot method and device and electronic equipment
CN114743132A (en) * 2022-03-22 2022-07-12 深圳云天励飞技术股份有限公司 Target algorithm selection method and device, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ARASH JAHANGIRI: "Red-light running violation prediction using observational and simulator data", Accident Analysis & Prevention *
YANJIE ZENG et al.: "Robust Multivehicle Tracking With Wasserstein Association Metric in Surveillance Videos", IEEE Access *
XIA Ping et al.: "Intelligent video surveillance alarm algorithm based on time-series combinational logic operations", Modern Electronics Technique (in Chinese) *
WANG Zhuo: "Research on traffic event detection algorithm based on visual saliency", CNKI Master's Electronic Journals (in Chinese) *

Also Published As

Publication number Publication date
CN114973165B (en) 2022-10-25

Similar Documents

Publication Publication Date Title
EP3806064A1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN111476191B (en) Artificial intelligent image processing method based on intelligent traffic and big data cloud server
CN112085952A (en) Vehicle data monitoring method and device, computer equipment and storage medium
CN115830399B (en) Classification model training method, device, equipment, storage medium and program product
CN113869137A (en) Event detection method and device, terminal equipment and storage medium
CN114627394B (en) Muck vehicle fake plate identification method and system based on unmanned aerial vehicle
CN114943750A (en) Target tracking method and device and electronic equipment
CN114973165B (en) Event recognition algorithm testing method and device and electronic equipment
CN113674318A (en) Target tracking method, device and equipment
CN111369790B (en) Vehicle passing record correction method, device, equipment and storage medium
CN112966687A (en) Image segmentation model training method and device and communication equipment
CN111626419A (en) Convolutional neural network structure, target detection method and device
CN112287905A (en) Vehicle damage identification method, device, equipment and storage medium
CN115620098B (en) Evaluation method and system of cross-camera pedestrian tracking algorithm and electronic equipment
CN112989869B (en) Optimization method, device, equipment and storage medium of face quality detection model
CN110751065B (en) Training data acquisition method and device
CN112997192A (en) Gesture recognition method and device, terminal device and readable storage medium
CN112270257A (en) Motion trajectory determination method and device and computer readable storage medium
CN112487966B (en) Mobile vendor behavior recognition management system
CN116091553B (en) Track determination method, track determination device, electronic equipment, vehicle and storage medium
CN114407918B (en) Takeover scene analysis method, takeover scene analysis device, takeover scene analysis equipment and storage medium
CN116824515B (en) Graphic fault diagnosis method and device, electronic equipment and storage medium
CN116246128B (en) Training method and device of detection model crossing data sets and electronic equipment
CN114005093A (en) Driving behavior warning method, device, equipment and medium based on video analysis
CN114510423A (en) Data processing method, vehicle communication device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant