CN113963029A - Track splicing and event detection method, device, equipment and computer storage medium - Google Patents


Info

Publication number
CN113963029A
Authority
CN
China
Prior art keywords
tracking
track
target
tracked
reference object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111272252.6A
Other languages
Chinese (zh)
Inventor
胡武林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202111272252.6A priority Critical patent/CN113963029A/en
Publication of CN113963029A publication Critical patent/CN113963029A/en
Priority to PCT/CN2022/095304 priority patent/WO2023071171A1/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Geometry (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

Embodiments of the present disclosure disclose a track splicing and event detection method, apparatus, and device, and a computer-readable storage medium. The track splicing method includes the following steps: performing target tracking on a video stream to obtain at least one piece of reference information corresponding to at least one reference object and trace information of a tracking object corresponding to the current video frame, where the at least one reference object is tracked from video frames preceding the current video frame; in the case that no reference object matching the tracking object is found among the at least one reference object and it is therefore determined that the track of the tracking object is interrupted, screening out, from the at least one reference object and based on the trace information and the at least one piece of reference information, a target object to be track-spliced with the tracking object; and performing track splicing on the target object and the tracking object to obtain the tracking track of the tracking object. With the method and apparatus, the target tracking effect can be improved.

Description

Track splicing and event detection method, device, equipment and computer storage medium
Technical Field
The present disclosure relates to computer vision technology, and in particular to a method, an apparatus, a device, and a computer storage medium for track splicing and event detection.
Background
Target tracking in video streams is an important direction in intelligent video processing and is widely applied; for example, it can be used to track electric vehicles and cars on roads and, combined with scene-specific processing and judgment, to detect events such as whether an electric vehicle enters a building or whether a road is congested. However, in the related art, when target tracking is performed on a tracking object in a video stream, phenomena such as target loss and tracking errors easily occur, causing the tracking track of the tracking object to be interrupted and ultimately degrading the target tracking effect.
Disclosure of Invention
The embodiments of the present disclosure provide a track splicing method, an event detection method, an apparatus, a device, and a computer-readable storage medium, which can improve the target tracking effect.
The technical solutions of the embodiments of the present disclosure are implemented as follows:
the embodiment of the disclosure provides a track splicing method, which includes:
performing target tracking on a video stream to obtain at least one piece of reference information corresponding to at least one reference object and trace information of a tracking object corresponding to the current video frame, where the at least one reference object is tracked from video frames preceding the current video frame;
in the case that no reference object matching the tracking object is found among the at least one reference object and it is determined that the track of the tracking object is interrupted, screening out, from the at least one reference object and based on the trace information and the at least one piece of reference information, a target object to be track-spliced with the tracking object;
and performing track splicing on the target object and the tracking object to obtain the tracking track of the tracking object.
The embodiment of the disclosure provides an event detection method, which includes:
performing target tracking on a video stream to obtain at least one piece of reference information corresponding to at least one reference object and trace information of a tracking object corresponding to the current video frame;
in the case that it is determined that the track of the tracking object is interrupted, screening out a target object from the at least one reference object based on the trace information and the at least one piece of reference information, and performing track splicing on the target object and the tracking object to obtain the tracking track of the tracking object;
determining a first projection area of the tracking object, and determining a second projection area corresponding to the first projection area according to the tracking track;
and performing line-crossing detection on the first projection area of the tracking object according to the second projection area to obtain a detection result, and generating an event result for the tracking object according to the detection result.
The embodiment of the present disclosure provides a track splicing apparatus, including:
a target tracking unit, configured to perform target tracking on a video stream to obtain at least one piece of reference information corresponding to at least one reference object and trace information of a tracking object corresponding to the current video frame, where the at least one reference object is tracked from video frames preceding the current video frame;
a target determining unit, configured to, when no reference object matching the tracking object is found among the at least one reference object and it is determined that the track of the tracking object is interrupted, screen out, from the at least one reference object and based on the trace information and the at least one piece of reference information, a target object to be track-spliced with the tracking object;
and a track splicing unit, configured to perform track splicing on the target object and the tracking object to obtain the tracking track of the tracking object.
An embodiment of the present disclosure provides an event detection apparatus, including:
a target tracking unit, configured to perform target tracking on a video stream to obtain at least one piece of reference information corresponding to at least one reference object and trace information of a tracking object corresponding to the current video frame;
a target determining unit, configured to, when it is determined that the track of the tracking object is interrupted, screen out a target object from the at least one reference object based on the trace information and the at least one piece of reference information;
a track splicing unit, configured to perform track splicing on the target object and the tracking object to obtain the tracking track of the tracking object;
a projection determining unit, configured to determine a first projection area of the tracking object and determine a second projection area corresponding to the first projection area according to the tracking track;
and an event detection unit, configured to perform line-crossing detection on the first projection area of the tracking object according to the second projection area to obtain a detection result, and generate an event result for the tracking object according to the detection result.
An embodiment of the present disclosure provides an electronic device, including: a memory for storing an executable computer program; and a processor, configured to implement the above track splicing method or event detection method when executing the executable computer program stored in the memory.
An embodiment of the present disclosure provides a computer-readable storage medium storing a computer program that causes a processor to execute the above track splicing method or event detection method.
By adopting the track splicing method of the above technical solution, after performing target tracking on the video stream, the electronic device can determine whether the track of the tracking object is interrupted based on the trace information of the tracking object tracked in the current video frame and the at least one piece of reference information of the at least one reference object tracked in video frames preceding the current video frame. When it is confirmed that the track is interrupted, the electronic device finds, among the at least one reference object, a target object whose track can be spliced with that of the tracking object, and splices the two tracks so that the track of the tracking object is reconnected. A complete track of the tracking object is thus obtained, the influence of the interruption on the tracking object is reduced, and the target tracking effect is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a first schematic flowchart of a track stitching method provided in an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a second track stitching method provided in the embodiment of the present disclosure;
fig. 3 is a third schematic flowchart of a track stitching method provided in the embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a current trajectory provided by embodiments of the present disclosure;
fig. 5 is a first flowchart illustrating an event detection method according to an embodiment of the present disclosure;
fig. 6 is a flowchart illustrating a second event detection method according to an embodiment of the disclosure;
FIG. 7 is a schematic diagram of a plurality of first boundary vertices crossing a line provided by an embodiment of the present disclosure;
FIG. 8A is a first diagram illustrating that no line crossing occurs at a plurality of first boundary vertices provided by an embodiment of the present disclosure;
FIG. 8B is a second schematic diagram illustrating no line crossing of the first boundary vertices provided by the embodiment of the disclosure;
FIG. 9 is a schematic diagram illustrating a distribution of determined first boundary vertices and corresponding matching vertices on a reference line according to an embodiment of the present disclosure;
FIG. 10 is an alternative architectural diagram of an event detection system provided by embodiments of the present disclosure;
fig. 11 is a processing diagram of track stitching performed by a server on a tracked target in a video according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a track splicing apparatus provided in the embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of an event detection device according to an embodiment of the present disclosure;
fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the purpose, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present disclosure, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the description that follows, the terms "first", "second", and the like are intended only to distinguish similar objects and do not indicate a particular ordering. It should be understood that "first", "second", and the like may be interchanged in a specific order or sequence where permitted, so that the embodiments of the present disclosure described herein can be practiced in an order other than that illustrated or described herein.
Before the embodiments of the present disclosure are described in further detail, the terms and expressions used in the embodiments of the present disclosure are explained; the following explanations apply to these terms and expressions.
1) Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to create intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level techniques. Basic artificial intelligence technology generally includes sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technology mainly includes computer vision, speech processing, natural language processing, and machine learning/deep learning.
2) Computer Vision (CV) technology is a science that studies how to make machines "see"; more specifically, it uses cameras and computers, in place of human eyes, to identify, track, and measure targets, and further performs image processing so that the processed images are more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can obtain information from images or multidimensional data. Computer vision technology generally includes image processing, image recognition, image semantic understanding, image retrieval, OCR, video analysis, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, as well as common biometric technologies such as face recognition and fingerprint recognition.
3) Target tracking refers to the process of continuously tracking a tracking object in a video stream, so as to continuously determine the position of the tracking object in each video frame of the video stream and thereby form a track for the tracking object.
4) Track splicing refers to the process of splicing different tracks of the same tracking object in a video stream into one track. The track of the same tracking object should be continuous, but because the target may be lost during target tracking, one track of the same tracking object can be divided into several different tracks.
5) Event detection is the process of judging, based on the track of a tracking object, whether a specific event has occurred for that object. For example, when the tracking object is an electric vehicle, it may be determined, based on the track of the electric vehicle, whether it has illegally entered a building.
Target tracking in video streams is an important direction in intelligent video processing and has very wide applications. It is commonly applied in the security field, for example by tracking an electric vehicle and, combined with scene-specific processing and judgment, determining whether the electric vehicle illegally enters a building; it is also applied in the intelligent transportation field, for example by tracking cars and, combined with scene-specific processing and judgment, determining whether a road is congested.
However, in the related art, when target tracking is performed on a tracking object in a video stream, phenomena such as target loss and tracking errors easily occur, so that the track of one tracking object across the time-series frames of the video stream is divided into several different tracks, and it is wrongly concluded that several tracks of several different tracking objects have been tracked in the video stream. Therefore, in the related art, the tracking track of the tracking object is easily interrupted during target tracking, resulting in a poor target tracking effect.
Further, when an anomaly such as an interruption occurs in the tracking track of the tracking object, event detection for that object is inevitably affected. For example, when the tracking object is an electric vehicle, a line-crossing judgment generally needs to be made on the electric vehicle based on its tracking track to determine whether it has entered a building, i.e., to complete event detection for the electric vehicle. However, when the tracking track of the electric vehicle is interrupted, the track of the same electric vehicle is divided into several different tracks, and the electric vehicle in each track may be treated as a different electric vehicle. In that case, when the reference line for the line-crossing judgment lies between two of the tracks, the line-crossing judgment cannot be completed for the electric vehicle, causing missed detections and lowering the accuracy of event detection. Therefore, when an anomaly such as an interruption occurs in the tracking track, the accuracy of event detection for the tracking object is reduced.
In addition, in the related art, when event detection based on line-crossing detection is performed on a tracking object such as an electric vehicle, the detection result is often obtained directly from the line-crossing behavior of the object's center point. For example, according to the tracking track of the electric vehicle, the center point of the electric vehicle is obtained from each of two consecutive frames, and it is then judged whether the two center points lie on opposite sides of a reference line; if they do, the electric vehicle is judged to have crossed the line, and an event detection result is obtained. However, when the tracking object is near the reference line, or moves along the reference line, its center point frequently jumps back and forth across the line. Even if the object never actually crosses the line, or crosses it only once, it may therefore be judged to have crossed many times, aggravating false alarms and lowering the accuracy of event detection.
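The naive center-point scheme described above can be sketched in a few lines (the helper names are illustrative; the disclosure itself gives no code). The side of the reference line is taken from the sign of a cross product, and a crossing is reported whenever the sign flips between consecutive frames, so a center jittering near the line triggers repeated reports:

```python
def side_of_line(p, a, b):
    # Sign of the 2D cross product: > 0 left of the line a->b, < 0 right, 0 on it.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def center_crossed(prev_c, curr_c, a, b):
    # Naive rule: a crossing is reported whenever the two consecutive
    # center points lie on opposite sides of the reference line.
    return side_of_line(prev_c, a, b) * side_of_line(curr_c, a, b) < 0

# Reference line along the x-axis.
a, b = (0.0, 0.0), (10.0, 0.0)
# A center jittering around the line: the object never really leaves
# the line's neighborhood, yet every sign flip counts as a "crossing".
centers = [(5.0, 0.2), (5.1, -0.1), (5.2, 0.15), (5.3, -0.05)]
crossings = sum(center_crossed(c0, c1, a, b) for c0, c1 in zip(centers, centers[1:]))
# Three sign flips -> three reported crossings for what is really jitter.
```

This is exactly the false-alarm behavior the disclosure's projection-area approach is meant to avoid.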
The embodiment of the disclosure provides a track splicing method, which can improve the target tracking effect. The track splicing method provided by the embodiment of the disclosure is applied to electronic equipment for track splicing.
The following describes exemplary applications of the electronic device provided by the embodiments of the present disclosure. The electronic device may be implemented as various types of user terminals (hereinafter referred to as terminals), such as AR glasses, a notebook computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, or a portable game device); it may also be implemented as a server, or as a device cluster composed of user terminals and servers.
In the following, the track stitching method provided by the embodiment of the present disclosure will be described in conjunction with exemplary applications and implementations of the electronic device provided by the embodiment of the present disclosure.
Fig. 1 is a first schematic flowchart of a track splicing method provided in an embodiment of the present disclosure, and will be described with reference to the steps shown in fig. 1.
S101, performing target tracking on the video stream to obtain at least one piece of reference information corresponding to at least one reference object and trace information of a tracking object corresponding to the current video frame.
The embodiments of the present disclosure apply to scenarios in which target tracking is performed on a tracking object in a video stream, for example, tracking an electric vehicle in the surveillance video of a building lobby, or tracking players in a live video of a football pitch. The electronic device may input the video stream into a trained deep learning model suitable for target tracking, or perform optical flow analysis on the video stream, to carry out the tracking. It can be understood that when the electronic device performs target tracking on the video stream, it in effect produces relevant information for the objects or persons present in each video frame. The electronic device treats an object or person identified in a video frame preceding the current video frame as a reference object, and treats an object or person identified in the current video frame as the tracking object. That is, the at least one reference object is tracked from video frames preceding the current video frame, and the at least one piece of reference information is determined from those preceding video frames.
In the embodiment of the present disclosure, the trace information may include a first object identifier that identifies the tracking object, and may further include first image-related information corresponding to the tracking object (e.g., the tracking area of the tracking object and the timestamp of the current video frame), which is not limited herein. The first object identifier is used to distinguish the tracking object from other objects.
Similarly, the reference information may include a second object identifier of the reference object and a track identifier of the reference object, and may further include second image-related information of the reference object (e.g., the tracking area of the reference object and the timestamp of the video frame in which it was tracked), which is not limited herein. The second object identifier is used to distinguish the reference object from other objects, and the track identifier indicates which object's track the track of the reference object can be spliced to.
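As a concrete, purely illustrative reading of the two records just described, the trace information and reference information can be modeled as small dictionaries. The field names below are assumptions, since the disclosure only lists what each record "may include":

```python
# Hypothetical field names for the two records described in the disclosure.
trace_info = {
    "object_id": 7,             # first object identifier of the tracking object
    "bbox": (120, 80, 60, 40),  # tracking area as (x, y, w, h)
    "timestamp": 1032,          # timestamp of the current video frame
}
reference_info = {
    "object_id": 3,             # second object identifier of the reference object
    "track_id": 7,              # which object's track this track can be spliced to
    "bbox": (118, 78, 60, 40),
    "timestamp": 1031,
}
# A reference object whose track identifier names the tracking object's
# first object identifier is a candidate for track splicing.
is_candidate = reference_info["track_id"] == trace_info["object_id"]
```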
S102, in the case that no reference object matching the tracking object is found among the at least one reference object and it is determined that the track of the tracking object is interrupted, screening out, from the at least one reference object and based on the trace information and the at least one piece of reference information, a target object to be track-spliced with the tracking object.
After obtaining the tracking object and the at least one reference object, the electronic device searches the at least one reference object for a reference object matching the tracking object. If such a reference object is found, this indicates that the track of the tracking object and the track of the matching reference object together form one continuous, complete track, so no track interruption has occurred for the tracking object.
Conversely, when the electronic device does not find a reference object matching the tracking object among the at least one reference object, this indicates that the track of the tracking object is independent of the tracks of the reference objects appearing before the current video frame; that is, the electronic device has determined several separate tracks for the video stream, and a track interruption has occurred.
In some embodiments, the electronic device may determine whether a track interruption has occurred for the tracking object based on whether the first object identifier is the same as any of the second object identifiers of the at least one reference object. If one of the second object identifiers is identical to the first object identifier, the electronic device determines that no track interruption has occurred; if none of them is identical to the first object identifier, the electronic device determines that a track interruption has occurred.
In other embodiments, the electronic device may use the tracking object as a template and perform template matching against the at least one reference object to determine whether a track interruption has occurred. When a reference object whose similarity to the tracking object reaches a threshold is matched, the electronic device determines that no track interruption has occurred; when no such reference object is matched, the electronic device determines that a track interruption has occurred.
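The two interruption checks just described can be sketched as follows. The function names, record fields, and the 0.8 similarity threshold are illustrative assumptions, not values given by the disclosure:

```python
def track_broken_by_id(first_id, reference_infos):
    # Identifier check: the track is unbroken only if some reference object
    # carries the same object identifier as the tracking object.
    return all(ref["object_id"] != first_id for ref in reference_infos)

def track_broken_by_similarity(similarities, threshold=0.8):
    # Template-matching check: `similarities` holds one appearance score per
    # reference object; if no score reaches the threshold, the track is broken.
    return max(similarities, default=0.0) < threshold

refs = [{"object_id": 3}, {"object_id": 5}]
broken_a = track_broken_by_id(7, refs)              # no identifier matches 7
broken_b = track_broken_by_similarity([0.4, 0.6])   # best score below 0.8
```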
In some embodiments, the electronic device may find the target object from the at least one reference object by comparing only the first object identifier in the trace information with the track identifier of each of the at least one reference object in the at least one piece of reference information.
In other embodiments, the electronic device may instead find the target object by comparing only the first image-related information in the trace information with the second image-related information of each of the at least one reference object.
In still other embodiments, the electronic device may combine the two comparisons. For example, the electronic device may first compare the first object identifier with the track identifier of each reference object and, if the target object is not found, continue by comparing the first image-related information with the second image-related information of each reference object.
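The combined, identifier-first comparison with an image-information fallback can be sketched as below. The disclosure does not specify how image-related information is compared; intersection-over-union (IoU) of the tracking areas is used here purely as an illustrative choice, and all names and thresholds are assumptions:

```python
def iou(b1, b2):
    # Intersection-over-union of two axis-aligned boxes given as (x, y, w, h).
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2 = min(b1[0] + b1[2], b2[0] + b2[2])
    y2 = min(b1[1] + b1[3], b2[1] + b2[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    return inter / union if union else 0.0

def screen_target(trace, refs, iou_threshold=0.5):
    # First comparison: a reference object whose track identifier names the
    # tracking object's first object identifier.
    for ref in refs:
        if ref.get("track_id") == trace["object_id"]:
            return ref
    # Fallback comparison on image-related information: pick the reference
    # object whose tracking area overlaps the tracking object's area most.
    best = max(refs, key=lambda r: iou(r["bbox"], trace["bbox"]), default=None)
    if best is not None and iou(best["bbox"], trace["bbox"]) >= iou_threshold:
        return best
    return None

trace = {"object_id": 7, "bbox": (0, 0, 10, 10)}
refs = [
    {"object_id": 1, "track_id": 99, "bbox": (100, 100, 10, 10)},
    {"object_id": 2, "track_id": 7,  "bbox": (1, 1, 10, 10)},
]
target = screen_target(trace, refs)  # matched via the track identifier
```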
S103, performing track splicing on the target object and the tracking object to obtain the tracking track of the tracking object.
After the electronic device determines the target object for track splicing with the tracking object, the tracking track of the tracking object can be obtained by connecting the track of the target object with the track of the tracking object in time order; alternatively, the position information of the tracking object can be merged into the track of the target object, that is, the track of the target object is extended in time order based on the position information of the tracking object. In this way, the electronic device completes the track splicing process for the target object and the tracking object.
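The first splicing option, connecting the two tracks in time order, amounts to a time-sorted concatenation of their samples. The representation below (timestamped 2D positions) is an illustrative assumption; the disclosure does not fix a track format:

```python
def splice_tracks(target_track, tracking_track):
    # Each track is a list of (timestamp, (x, y)) samples; connecting the two
    # fragments in time order yields one continuous tracking track.
    return sorted(target_track + tracking_track, key=lambda sample: sample[0])

target_track = [(0, (1.0, 1.0)), (1, (2.0, 1.5))]    # track before the break
tracking_track = [(3, (4.0, 2.5)), (4, (5.0, 3.0))]  # track after the break
full_track = splice_tracks(target_track, tracking_track)
```

The second option, extending the target object's track with the tracking object's position, is the same operation applied incrementally, one new sample at a time.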
It can be understood that, compared with the related art, in which an abnormality such as a track break is easily produced when target tracking is performed on a tracked object, in the embodiment of the present disclosure the electronic device, after performing target tracking on the video stream, determines whether a track break has occurred for the tracked object based on the track information of the tracked object tracked in the current video frame and the at least one reference information of the at least one reference object tracked in video frames before the current video frame. In the case that a track break has occurred for the tracked object, a target object capable of being track-spliced with the tracked object is found from the at least one reference object, and by performing track splicing on the tracked object and the target object, the track of the tracked object can be reconnected. A complete track of the tracked object can thus be obtained, the influence of the track break on the tracked object is reduced, and the target tracking effect is improved.
Referring to fig. 2, fig. 2 is a second schematic flowchart of the track splicing method provided by the embodiment of the present disclosure. In some embodiments of the present disclosure, screening out, based on the track information and the at least one reference information, the target object to be track-spliced with the tracked object from the at least one reference object, that is, the specific implementation process of S102, may include: S1021, and either one of S1022 and S1023, as follows:
S1021, analyzing the first object identifier of the tracked object from the trace information, and analyzing the respective track identifier of the at least one reference object from the at least one reference information.
The electronic device parses the trace information of the tracked object to read the first object identifier of the tracked object, and at the same time parses the at least one piece of reference information one by one to obtain the track identifier corresponding to each of the at least one reference object. Then, the electronic device compares the respective track identifiers of the at least one reference object with the first object identifier one by one, and determines whether a track identifier matching the first object identifier exists among them, so that the target object corresponding to the tracked object is searched for in the corresponding manner (S1022 or S1023) according to whether such a matching track identifier exists.
S1022, when the matching identifier identical to the first object identifier is found from the respective track identifier of the at least one reference object, determining the reference object corresponding to the matching identifier in the at least one reference object as the target object.
The matching identifier is a track identifier identical to the first object identifier. When a matching identifier identical to the first object identifier is found, it indicates that, although it has been determined that a track break occurred for the tracked object, a reference object that can be directly track-spliced with the tracked object still exists among the at least one reference object. At this time, the electronic device searches for the reference object corresponding to the matching identifier from the at least one reference object according to the correspondence between track identifiers and reference objects, and determines that reference object as the target object capable of being track-spliced with the tracked object.
S1023, in the case that a matching identifier identical to the first object identifier is not found among the respective track identifiers of the at least one reference object, screening out the target object from the at least one reference object based on the first image-related information in the track information and the second image-related information in the at least one reference information.
When a matching identifier identical to the first object identifier is not found among the respective track identifiers of the at least one reference object, it indicates that not only has a track break occurred for the tracked object, but also that no reference object capable of being directly track-spliced with the tracked object exists among the at least one reference object. At this time, the electronic device can continue to search, using the first image-related information and the second image-related information, for a reference object whose correlation with the tracked object is sufficiently high. It should be noted that a reference object with a sufficiently high degree of correlation with the tracked object may be regarded as a "deformation" of the tracked object in a video frame before the current video frame (the same object may continuously change position in the video stream); it is still substantially the same object as the tracked object and can be track-spliced with the tracked object, so the electronic device may determine that reference object as the target object.
In the embodiment of the disclosure, the electronic device may determine whether a target object capable of being directly track-spliced with the tracked object exists among the at least one reference object according to whether a matching identifier identical to the first object identifier of the tracked object exists among the respective track identifiers of the at least one reference object. When the target object cannot be found from the track identifiers and the first object identifier, the electronic device may continue, by combining the first image-related information of the tracked object and the respective second image-related information of the reference objects, to mine a potential target object capable of being track-spliced with the tracked object. In this way, the electronic device can obtain the target object so as to facilitate subsequent track splicing with the tracked object and improve the tracking effect.
In some embodiments of the present disclosure, the first image-related information comprises: a first timestamp, a first tracking area, and a first type of the tracked object; the at least one second image-related information comprises: a respective second timestamp, second tracking area, and second type of the at least one reference object.
At this time, the screening of the target object from the at least one reference object based on the first image-related information in the trace information and the second image-related information in the at least one reference information, that is, the implementation process of S1023, may include: s1023a, as follows:
S1023a, when a reference object satisfying the determination condition with the tracked object is found from the at least one reference object, determining the reference object satisfying the determination condition as the target object.
Wherein the determination condition includes one or more of the following: the overlapping area of the second tracking area with the first tracking area is greater than an area threshold, the difference between the first timestamp and the second timestamp is less than a time threshold, and the second type is the same as the first type.
When the overlapping area of the first tracking area and the second tracking area is greater than the area threshold, the appearance position of the tracked object is close to the appearance position of the reference object; when the difference between the first timestamp and the second timestamp is less than the time threshold, the appearance time of the tracked object is close to the appearance time of the reference object; and when the second type is the same as the first type, the tracked object and the reference object are objects of the same type. Whether the tracked object and the reference object are located relatively close to each other, and/or appear at relatively close times, and/or are of the same type indicates that the tracked object and the reference object may be the same object. Therefore, in the embodiment of the present disclosure, the electronic device may set the determination condition based on one or more of the dimensions of appearance position, appearance time, and type, so as to find, through the determination condition, a reference object that is sufficiently correlated with the tracked object and is likely to be the same object from the at least one reference object, thereby obtaining a target object capable of being track-spliced with the tracked object.
It is understood that the overlapping area of the second tracking area and the first tracking area being greater than the area threshold may mean that the intersection of the first tracking area and the second tracking area is greater than a certain area, or that the intersection-over-union ratio of the first tracking area and the second tracking area is greater than a certain value, and the disclosure is not limited herein.
It can also be understood that the second timestamp is a timestamp of a video frame prior to the current video frame, and thus, in the embodiment of the present disclosure, the second timestamp is smaller than the first timestamp, and the electronic device can directly subtract the second timestamp from the first timestamp to obtain a difference value of the timestamps.
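A minimal Python sketch of the determination condition, assuming axis-aligned tracking areas given as `(x1, y1, x2, y2)` boxes and checking all three dimensions jointly (the disclosure permits any subset); the field names and thresholds are illustrative assumptions:

```python
# Hypothetical sketch of the determination condition in S1023a.
def overlap_area(box_a, box_b):
    # Intersection area of two axis-aligned rectangles; 0 if they are disjoint.
    w = min(box_a[2], box_b[2]) - max(box_a[0], box_b[0])
    h = min(box_a[3], box_b[3]) - max(box_a[1], box_b[1])
    return max(w, 0) * max(h, 0)

def satisfies_condition(first, second, area_threshold, time_threshold):
    # Appearance position, appearance time, and type are compared in turn;
    # the second timestamp precedes the first, so the difference is first - second.
    return (overlap_area(second["area"], first["area"]) > area_threshold
            and first["timestamp"] - second["timestamp"] < time_threshold
            and second["type"] == first["type"])
```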
In the embodiment of the present disclosure, the electronic device may set the determination condition from one or more dimensions of the appearance position, the appearance time, and the type, so as to obtain a reference object that may be the same object as the tracked object by searching for a reference object that satisfies the determination condition with the tracked object. Therefore, the electronic equipment can also find the target object according to the condition that the first object identifier is different from the track identifier of the at least one reference object, so that the success rate of finding the target object is ensured.
Referring to fig. 3, fig. 3 is a third schematic flowchart of a track stitching method provided in the embodiment of the present disclosure. In some embodiments of the present disclosure, performing track stitching on the target object and the tracked object to obtain the tracking track of the tracked object, that is, the specific implementation process of S103 may include:
S1031, determining the track corresponding to the target object as the historical track of the tracked object.
S1032, connecting the historical track and the current track of the tracked object to obtain the tracking track of the tracked object.
After the target object is found, the electronic device obtains the track corresponding to the target object. Since the target object is determined from at least one reference object tracked from a video frame prior to the current video frame, the corresponding trajectory of the target object can be used as the historical trajectory of the tracked object. The electronic equipment connects the historical track and the current track of the tracked object in a time sequence, so that the historical track and the current track form a complete track, and the track is the tracking track of the tracked object.
It can be understood that the current track includes the position information of the tracking object in the current video frame. That is to say, in the embodiment of the present disclosure, the current trajectory may be a trajectory starting from the position information of the tracking object in the current video frame, or may be a trajectory starting from the position information of the tracking object in the video frame before the current video frame.
Illustratively, fig. 4 is a schematic diagram of a current trajectory provided by an embodiment of the present disclosure. The current track may be a track 4-a starting from position information 4-1 of the tracking object in the current video frame, or may be a track 4-B starting from position information 4-2 of the tracking object in a video frame before the current video frame, which is not limited in this disclosure.
In some embodiments, the electronic device may determine the track corresponding to the target object as the historical track of the tracked object by modifying the first object identifier of the tracked object to the target second object identifier corresponding to the target object. In other embodiments, the electronic device may further modify the target trajectory identifier of the target object to the first object identifier of the tracked object, so as to determine the trajectory corresponding to the target object as the historical trajectory of the tracked object.
In the embodiment of the disclosure, the electronic device can splice the track corresponding to the target object as the historical track with the current track of the tracked object in time sequence, so that the electronic device can obtain the complete tracking track of the tracked object, thereby improving the tracking effect.
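The splicing of S1031-S1032 can be sketched as a time-ordered concatenation of the two tracks; track points are assumed here to be `(timestamp, x, y)` tuples, which is an illustrative representation only:

```python
# Minimal sketch: splice the target object's track (as historical track) with
# the tracked object's current track in time sequence.
def splice_tracks(historical_track, current_track):
    spliced = historical_track + current_track
    # Order points by timestamp so the result forms one continuous track.
    spliced.sort(key=lambda point: point[0])
    return spliced
```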
In some embodiments of the present disclosure, in order to facilitate the next round of track splicing, the electronic device further updates the target second image-related information corresponding to the target object by using the first image-related information of the tracked object, for example, modifying the target second tracking area of the target object to the first tracking area of the tracked object, and modifying the target second timestamp of the target object to the first timestamp of the tracked object, so that the second image-related information of the target object is always up to date when the next round of track splicing is performed.
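This update can be sketched as an in-place overwrite of the stored record; the dictionary keys are assumed for illustration:

```python
# Hypothetical sketch: refresh the target object's second image-related
# information with the tracked object's latest first image-related information.
def refresh_reference(target_ref, track_info):
    target_ref["tracking_area"] = track_info["tracking_area"]
    target_ref["timestamp"] = track_info["timestamp"]
    return target_ref
```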
It should be noted that the track splicing method provided in the embodiment of the present disclosure may further include: s201, the following steps are carried out:
S201, when a reference object satisfying the determination condition with the tracked object is not found from the at least one reference object, adding the tracked object to the at least one reference object to obtain the updated at least one reference object.
When, for the first tracking area of the tracked object, the overlapping area with the second tracking area of each reference object is smaller than or equal to the area threshold, the difference between the first timestamp of the tracked object and the second timestamp of each reference object is greater than or equal to the time threshold, and the first type of the tracked object is different from the second type of each reference object, that is, when no reference object satisfies the determination condition with the tracked object, it is determined that the tracked object is an object completely different from all previous reference objects. At this time, the electronic device adds the tracked object to the at least one reference object, that is, the tracked object is stored as a completely new reference object, so that this new reference object can be used for track splicing when subsequently needed.
In the embodiment of the disclosure, under the condition that neither the tracked object nor any one of the reference objects meets the judgment condition, the electronic device stores the tracked object as a new reference object so as to continue to perform track splicing subsequently.
In some embodiments of the present disclosure, adding the tracking object to the at least one reference object to obtain the updated at least one reference object, that is, the specific implementation process of S201 may include: S2011-S2012, as follows:
S2011, when the number of the at least one reference object reaches the rated storage number of reference objects, deleting the first reference object among the at least one reference object to obtain the number-corrected at least one reference object.
In the embodiment of the disclosure, at least one reference object is stored in the buffer, and the buffer generally has a rated capacity, so that the number of reference objects which can be stored in the buffer is limited. Based on this, when the electronic device adds the tracking object to the at least one reference object, it needs to first determine whether the number of the at least one reference object reaches the rated storage number of the reference object. When the number of the at least one reference object reaches the rated storage number, which indicates that the storage capacity in the buffer is completely consumed, the electronic device deletes the first reference object in the at least one reference object from the buffer, so that the number of the at least one reference object is smaller than the rated storage number, that is, the number-corrected at least one reference object is obtained.
It is understood that the rated storage number may be set manually according to actual conditions, for example, 100, 50, etc., or may be determined according to the total capacity of the buffer and the capacity consumed by a single reference object during storage, and the disclosure is not limited thereto.
S2012, adding the tracked object to the number-corrected at least one reference object to obtain the updated at least one reference object.
After the electronic device obtains the number-corrected at least one reference object, the tracked object is added to it to obtain the updated at least one reference object, so that the number of the updated at least one reference object does not exceed the rated storage number, thereby preventing the buffer from becoming abnormal due to memory overflow or the like.
In the embodiment of the disclosure, when the electronic device adds the tracked object to the at least one reference object, it first checks whether the number of reference objects has reached the rated storage number; when the rated storage number has been reached, the earliest reference object, which may no longer be used, is deleted before the tracked object is added, so that the number of reference objects does not exceed the rated storage number, thereby further ensuring the safety of the buffer.
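S2011-S2012 amount to a bounded buffer with oldest-first eviction; a sketch under the assumption that the reference objects are kept in arrival order in a Python list (the capacity value is illustrative):

```python
# Hypothetical sketch of the bounded reference buffer: evict the first
# (oldest) reference object once the rated storage number is reached.
RATED_STORAGE_NUMBER = 100  # illustrative; the disclosure leaves this configurable

def add_reference(references, tracking_object, capacity=RATED_STORAGE_NUMBER):
    if len(references) >= capacity:
        references.pop(0)  # delete the first reference object (S2011)
    references.append(tracking_object)  # add the tracked object (S2012)
    return references
```

A `collections.deque(maxlen=capacity)` would give the same oldest-first eviction behavior without the explicit `pop`.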
It should be noted that the track splicing method provided in the embodiment of the present disclosure may further include: s202, the following steps are carried out:
s202, under the condition that the track interruption of the tracking object is determined not to occur, replacing the reference object matched with the tracking object in the at least one reference object with the tracking object to obtain the updated at least one reference object.
When the electronic equipment finds the same second object identifier from the respective second object identifiers of the at least one reference object according to the first object identifier of the tracked object, it is determined that the tracked object has no track interruption. At this time, in order to facilitate track splicing when performing next round of target tracking on the video stream, the electronic device replaces and updates the reference object matched with the tracking object in the at least one reference object by using the tracking object, so that the reference object stored in the buffer is always the latest, that is, the updated at least one reference object is obtained.
In the embodiment of the disclosure, when it is determined that the track interruption does not occur to the tracked object, the electronic device can directly replace and update the reference object matched with the tracked object by using the tracked object, so that track splicing is performed based on the latest reference object when the requirement of track splicing subsequently exists.
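A minimal sketch of the replacement in S202, assuming each stored reference object carries an `object_id` field (an illustrative name):

```python
# Hypothetical sketch: replace the reference object matched by identifier
# with the latest tracked object so the buffer stays up to date.
def replace_matched_reference(references, tracking_object):
    for i, ref in enumerate(references):
        if ref["object_id"] == tracking_object["object_id"]:
            references[i] = tracking_object
            break
    return references
```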
In the following, the event detection method provided by the embodiment of the present disclosure will be described in conjunction with exemplary applications and implementations of the electronic device provided by the embodiment of the present disclosure.
Fig. 5 is a first flowchart of an event detection method provided in the embodiment of the present disclosure, which will be described with reference to the steps shown in fig. 5.
S301, performing target tracking on the video stream to obtain at least one reference information corresponding to at least one reference object and trace information of a tracking object corresponding to the current video frame.
It should be noted that the processing procedure in this step is similar to the processing procedure in S101, and is not described herein again.
S302, under the condition that the track interruption of the tracked object is determined, screening out the target object from the at least one reference object based on the track information and the at least one reference information, and carrying out track splicing on the target object and the tracked object to obtain the tracking track of the tracked object.
It should be noted that this step is similar to the processing procedure of S102-S103, and is not described herein again.
S303, determining a first projection area of the tracked object, and determining a second projection area corresponding to the first projection area according to the tracking track.
The electronic device parses, from the trace information of the tracked object, the first tracking area of the tracked object in the current video frame, and then generates the first projection region of the tracked object for cross-line detection according to the first tracking area. Then, the electronic device may screen out a reference video frame from the video frames before the current video frame, determine the appearance region of the tracked object in the reference video frame according to the tracking track, and determine the second projection region corresponding to the first projection region according to the appearance region.
In some embodiments, the electronic device may directly determine the first tracking area as the first projection region, and directly determine the appearance region of the tracked object in the reference video frame as the second projection region.
In other embodiments, the electronic device may intercept a partial region from the first tracking area as the first projection region, for example, intercept the bottom portion of the tracked object or the region where a preset part of the tracked object is located as the first projection region. Then, the electronic device intercepts, from the appearance region of the tracked object in the reference video frame, a region with the same size and relative position as the first projection region as the second projection region.
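The bottom-portion option can be sketched as follows, assuming boxes `(x1, y1, x2, y2)` with y increasing downward; the 0.2 ratio is an assumed parameter, not a value from the disclosure:

```python
# Hypothetical sketch: take the bottom strip of the first tracking area as
# the first projection region for cross-line detection.
def bottom_projection(box, ratio=0.2):
    x1, y1, x2, y2 = box
    height = y2 - y1
    # Keep only the lowest `ratio` fraction of the box.
    return (x1, y2 - height * ratio, x2, y2)
```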
It is to be understood that the reference video frame may be a frame previous to the current video frame, or may be any video frame before the current video frame, for example, an nth video frame before the current video frame, and the like, and the disclosure is not limited herein.
S304, according to the second projection area, performing line crossing detection on the first projection area of the tracked object to obtain a detection result, and generating an event result of the tracked object according to the detection result.
After the first projection region and the second projection region are determined, the electronic device can judge, in combination with the second projection region, whether line crossing has occurred for the first projection region, so as to obtain the detection result. When the detection result indicates that line crossing has occurred for the first projection region, the electronic device determines the event result to be that line crossing has occurred for the tracked object; otherwise, when the detection result indicates that no line crossing has occurred for the first projection region, the electronic device determines the event result to be that no line crossing has occurred for the tracked object.
In some embodiments, the electronic device may determine whether the first projection region crosses the line according to each first boundary vertex of the first projection region and each second boundary vertex of the second projection region. In other embodiments, the electronic device may further determine whether the first projection region crosses the line according to a plurality of points in the first projection region and the points corresponding to them in the second projection region. The present disclosure is not limited thereto.
It can be understood that, in the embodiment of the present disclosure, the electronic device changes the related-art practice of performing cross-line detection on the tracked object based on its center point into performing cross-line detection based on the projection region of the tracked object, that is, a whole region of the tracking area rather than a single point. Therefore, even if the tracked object moves along the reference line or near the reference line, a correct judgment result can be given, so that false alarms are reduced, the accuracy of cross-line detection is greatly improved, and the accuracy of event detection for the tracked object is improved.
Referring to fig. 6, fig. 6 is a second schematic flowchart of the event detection method provided in the embodiment of the present disclosure. In some embodiments of the present disclosure, performing cross-line detection on the first projection region of the tracked object according to the second projection region to obtain the detection result, that is, the specific implementation process of S304, may include: S3041-S3042, as follows:
S3041, performing, according to the plurality of second boundary vertices of the second projection region, line-crossing judgment on the plurality of first boundary vertices of the first projection region of the tracked object respectively, to obtain a vertex judgment result.
S3042, when the vertex judgment result indicates that all the first boundary vertices have line crossing, determining that the detection result is that the first projection area has line crossing.
In the embodiment of the disclosure, the electronic device determines whether the first projection area crosses the line according to whether each first boundary vertex of the first projection area crosses the line. It is understood that the second projection region corresponds to a history region of the first projection region, and the two are linked in time sequence. Therefore, the electronic device needs to determine whether the first boundary vertices of the first projection area cross the line according to the second boundary vertices of the second projection area, so as to obtain a vertex determination result. The vertex judgment result represents whether line crossing occurs to the first boundary vertices. When the vertex judgment result represents that all the first boundary vertices are crossed, the electronic equipment confirms that the first projection area is crossed, and accordingly corresponding detection results are generated.
In some embodiments, the electronic device may determine whether the plurality of first boundary vertices are all crossed according to the distribution of each first boundary vertex and each second boundary vertex on the two sides of the reference line. In other embodiments, the electronic device may further calculate whether the amount of change in the position of each first boundary vertex and the corresponding second boundary vertex exceeds a change threshold value to determine whether the line crossing of the plurality of first boundary vertices occurs.
In the embodiment of the disclosure, the electronic device can determine whether the first projection area has line crossing based on line crossing judgment of a plurality of first boundary vertexes of the first projection area, so that only when the line crossing occurs in the plurality of first boundary vertexes, the line crossing in the first projection area is determined, and whether the line crossing occurs in the tracked object is determined.
In some embodiments of the present disclosure, according to a plurality of second boundary vertices of the second projection area, performing line crossing judgment on a plurality of first boundary vertices of the first projection area of the tracked object, respectively, to obtain vertex judgment results, that is, a specific implementation process of S3041 may include: s3041a, and any one of S3041b and S3041c, as follows:
S3041a, determining the positional relationship between the plurality of first boundary vertices of the first projection region, the plurality of second boundary vertices of the second projection region, and the reference line.
It can be understood that, determining the positional relationship between the plurality of first boundary vertices and the plurality of second boundary vertices and the reference line is to determine the distribution of the plurality of first boundary vertices and the plurality of second boundary vertices on both sides of the reference line, so that whether the plurality of first boundary vertices cross the line can be determined according to the distribution.
S3041b, determining that the vertex determination result indicates that the plurality of first boundary vertices cross the line when the position relationship indicates that the plurality of first boundary vertices and the plurality of second boundary vertices are respectively distributed on both sides of the reference line.
Under the condition that the position relation represents that the plurality of first boundary vertexes and the plurality of second boundary vertexes are respectively distributed on two sides of the datum line, namely the plurality of first boundary vertexes are distributed on one side of the datum line, and the plurality of second boundary vertexes are distributed on the other side of the datum line, the electronic equipment can determine that the plurality of first boundary vertexes are crossed, and therefore a vertex judgment result is obtained.
Illustratively, fig. 7 is a schematic diagram of a plurality of first boundary vertices crossing the line provided by an embodiment of the disclosure. Referring to fig. 7, the first projection region and the second projection region are two quadrangles of the same size. The first boundary vertices of the first projection region 7-1 are denoted a, b, c, and d, the second boundary vertices of the second projection region 7-2 are denoted a', b', c', and d', and the reference line is A. When the first boundary vertices a, b, c, and d are all distributed on the right side of the reference line A, and the second boundary vertices a', b', c', and d' are all distributed on the left side of the reference line A, the electronic device determines that all of the first boundary vertices a, b, c, and d have crossed the line.
S3041c, when the positional relationship indicates that the plurality of first boundary vertices and the plurality of second boundary vertices are all distributed on the same side of the reference line, or the plurality of first boundary vertices are distributed on both sides of the reference line, determining that the vertex determination result indicates that the plurality of first boundary vertices are not crossed.
When the plurality of first boundary vertices and the plurality of second boundary vertices are all distributed on the same side of the reference line, the electronic device determines that no line crossing occurs in any of the plurality of first boundary vertices. When the plurality of first boundary vertices are distributed on two sides of the reference line, that is, some first boundary vertices are distributed on one side of the reference line, and other first boundary vertices are distributed on the other side of the reference line, the electronic device may consider that an invalid result is obtained when performing the line crossing judgment on the plurality of first boundary vertices, and thus determine the vertex judgment result as that the plurality of first boundary vertices do not cross the line.
For example, fig. 8A is a first schematic diagram of a plurality of first boundary vertices not crossing the line provided by an embodiment of the disclosure. When the first boundary vertices of the first projection region 8-1, denoted a, b, c, and d, and the second boundary vertices of the second projection region 8-2, denoted a', b', c', and d', are all distributed on the left side of the reference line A, the electronic device obtains a vertex judgment result that none of the plurality of first boundary vertices has crossed the line. Fig. 8B is a second schematic diagram of a plurality of first boundary vertices not crossing the line provided by an embodiment of the disclosure. When the first boundary vertices a, b, c, and d of the first projection region 8-1 are located on both sides of the reference line A, the electronic device likewise obtains a vertex judgment result that none of the plurality of first boundary vertices has crossed the line.
In the embodiment of the disclosure, the electronic device can determine whether the multiple first boundary vertexes are crossed according to the distribution conditions of the determined multiple first boundary vertexes and the determined multiple second boundary vertexes on the two sides of the reference line, so as to obtain a vertex determination result, so that a line crossing determination result of the first projection area can be determined according to the vertex determination result subsequently.
In some embodiments of the present disclosure, determining the position relationship between the plurality of first boundary vertices of the first projection region and the plurality of second boundary vertices of the second projection region and the reference line, that is, a specific implementation procedure of S3041a may include: S401-S403, as follows:
s401, from the plurality of second boundary vertices of the second projection area, a matching vertex of each of the plurality of first boundary vertices in the first projection area is screened out.
The electronic equipment matches each first boundary vertex in the first projection area with each second boundary vertex in the second projection area, so that the matching vertex at the same position as each first boundary vertex can be determined.
For example, for a first boundary vertex at the upper left corner of the first projection region, the electronic device may use a second boundary vertex at the upper left corner of the second projection region as a matching vertex corresponding to the first boundary vertex.
S402, generating first vectors aiming at the first boundary vertexes and any point in the reference line respectively, and generating second vectors aiming at the matching vertexes and any point in the reference line.
The electronic equipment selects a point from the datum line, generates a first vector by using the selected point and each first boundary vertex, and generates a second vector by using the selected point and a matched vertex corresponding to each first boundary vertex.
In some embodiments, the electronic device may generate a first vector pointing from any point of the reference line to the first boundary vertex and a second vector pointing from any point of the reference line to the matching vertex. In other embodiments, the electronic device may generate a first vector pointing from the first boundary vertex to any point of the reference line and a second vector pointing from the matching vertex to any point of the reference line. The present disclosure is not limited thereto.
And S403, determining the position relationship that the plurality of first boundary vertexes and the plurality of second boundary vertexes are distributed on two sides of the reference line respectively under the condition that the direction of the first projection of the first vector in the normal direction of the reference line is opposite to the direction of the second projection of the second vector in the normal direction of the reference line.
After obtaining the first vector and the second vector, the electronic device determines a normal direction with respect to the reference line, and then records a projection vector of the first vector in the normal direction as a first projection, and records a projection vector of the second vector in the normal direction as a second projection. Then, the electronic device compares whether the directions of the first projection and the second projection are the same, confirms that the first boundary vertex and the matching vertex corresponding to the first boundary vertex are both on the same side of the reference line when the directions of the first projection and the second projection are the same, and determines that the first boundary vertex and the matching vertex corresponding to the first boundary vertex are on both sides of the reference line when the directions of the first projection and the second projection are opposite. In this way, the electronic device can determine the distribution of the plurality of first boundary vertices and the plurality of second boundary vertices on the reference line.
Exemplarily, fig. 9 is a schematic diagram of determining a distribution of a first boundary vertex and a corresponding matching vertex in a reference line according to an embodiment of the present disclosure. In fig. 9, the electronic device takes an arbitrary point o of the reference line a as an origin, and constructs a first vector, i.e., a first vector with a first boundary vertex n
Figure BDA0003329200150000221
At the same time, a second vector is constructed by using the matching vertex m corresponding to the o and the first boundary vertex n, namely
Figure BDA0003329200150000222
The electronic device will then sum the normal direction, i.e. normal vector, at the reference line a
Figure BDA0003329200150000223
Performing an upward projection to obtain a first projection
Figure BDA0003329200150000224
And a second projection
Figure BDA0003329200150000225
Finally, the electronic device passes the judgment
Figure BDA0003329200150000226
And
Figure BDA0003329200150000227
whether the directions of the first boundary vertices and the corresponding matching vertices are the same or not is obtained, the first boundary vertices and the corresponding matching vertices are distributed on two sides of the reference line, and when the analysis is completed on all the first boundary vertices and the corresponding matching vertices, the distribution conditions of the plurality of first boundary vertices and the plurality of second boundary vertices on the reference line can be determined.
It is to be understood that the electronic device may determine the first projection by point-multiplying the first vector by a unit vector on the normal vector of the reference line, i.e. the normal vector, while determining the second projection by point-multiplying the second vector by the normal vector of the reference line. Then, the electronic device may compare the product of the first projection and the second projection with 0, determine that the first boundary vertex and the matching vertex corresponding to the first boundary vertex are distributed on the same side of the reference line when the product of the first projection and the second projection is greater than 0, and determine that the first boundary vertex and the matching vertex corresponding to the first boundary vertex are distributed on both sides of the reference line when the product of the first projection and the second projection is less than 0.
In the embodiment of the disclosure, the electronic device may determine a first projection for a first vector generated by each first boundary vertex and any point in the reference line, determine a second projection for a second vector generated by the matching vertex and any point in the reference line, compare directions of the first projection and the second projection, and determine distribution conditions of the plurality of first boundary vertices and the plurality of second boundary vertices on both sides of the reference line, so as to determine whether the first projection area has a line crossing according to the distribution conditions.
In some embodiments of the present disclosure, the tracking object is a vehicle, and the first projection area and the second projection area are appearance areas of a preset portion of the vehicle, the preset portion including: any one of wheels, a vehicle bottom seat and pedals. That is, the event detection method in the present disclosure is directed to event detection for a vehicle. The vehicle may be a car, a truck, or an electric vehicle, a bicycle, etc., and the disclosure is not limited herein.
In this case, it can be seen that the preset positions are all bottom areas of the vehicle, so that the electronic device can only take an area where the bottom area of the vehicle is located from the first tracking area as the first projection area.
That is to say, in the embodiment of the present disclosure, the width of the first projection area is the width of the first tracking area, the height is a preset proportion of the height of the first tracking area, and the first tracking area is an appearance area of the vehicle in the current video frame.
Accordingly, the electronic device takes the same area with the first projection area as the second projection area from the second tracking area. Therefore, the width of the second projection area is the width of the second tracking area, and the height is a preset proportion of the height of the second tracking area; the second tracking area is an appearance area of the vehicle found from the video frame before the current video frame according to the tracking track.
It can be understood that the preset proportion can be set according to actual conditions, as long as the area where the preset part is located can be included. For example, the preset ratio may be 0.3, 0.5, etc., and the disclosure is not limited thereto.
In the embodiment of the disclosure, the electronic device may only intercept an area where a preset part of the vehicle appears, to obtain the first projection area and the second projection area, where the preset part is a key part that is closer to the reference line in a spatial position. Therefore, the cross-line detection can be performed only aiming at the key parts, the accuracy of the cross-line detection is improved, and the accuracy of the event detection is also improved.
In the following, an exemplary application will be explained when the electronic device is implemented as a server. Referring to fig. 10, fig. 10 is an alternative architecture diagram of an event detection system provided in the embodiment of the present disclosure. To support the implementation of an event detection application, in the event detection system 100, the video capture device 400-1 and the alerting device 400-2 are connected to the server 200 via the network 300, and the network 300 may be a wide area network or a local area network, or a combination thereof.
The video capture device 400-1 is configured to capture a video stream and transmit the video stream to the server 200 via the network 300.
The server 200 performs target tracking on the video stream to obtain at least one reference information corresponding to at least one reference object and trace information of a tracking object corresponding to a current video frame; under the condition that the track interruption of the tracked object is determined, screening out a target object from at least one reference object based on the track information and at least one piece of reference information, and carrying out track splicing on the target object and the tracked object to obtain the tracking track of the tracked object; determining a first projection area of the tracked object, and determining a second projection area corresponding to the first projection area according to the tracking track; and performing line crossing detection on the first projection area of the tracked object according to the second projection area to obtain a detection result, and generating an event result of the tracked object according to the detection result to realize event detection.
The server 200 is also used to send event results to the alerting device 400-2 via the network 300.
The alerting device 400-2 receives the event result transmitted from the server 200 and broadcasts the event result in a voice or animation manner. For example, the alert device 400-2 plays a voice of "do not push the electric vehicle into the elevator", or the like.
Next, a process of implementing the embodiment of the present disclosure in an actual application scenario is described.
The embodiment of the disclosure is realized in the scene of detecting whether the electric vehicle has an entrance event, namely detecting whether the electric vehicle illegally enters a residential building, an office building and the like.
First, the server creates a buffer of reference objects (at least one reference object) for each tracking task, i.e., video (video stream) captured by each camera set in the building. Then, the server performs track splicing processing on the tracking target in the video.
Fig. 11 is a processing process diagram of track stitching performed by a server on a tracked target in a video according to an embodiment of the present disclosure. Referring to fig. 11, the process includes:
s501, detecting and outputting the tracking target.
S502, judging whether a reference target identical to the trackId of the tracking target exists in the buffer. If yes, S503 is performed, otherwise, S504 is performed.
When a certain tracking task outputs a result, whether a reference target of the trackId exists in the buffer is searched through the trackId (first object identifier) of the tracking target (tracking object).
And S503, updating the reference target.
If the reference target of the trackId exists in the buffer (in the case of determining that the tracking target does not generate a trajectory termination), the reference target of the trackId is updated by using the tracking target (a reference object matching the tracking object in at least one reference target is replaced by the tracking object), and the process continues to execute S509.
And S504, judging whether track splicing is needed or not. If yes, S505 is performed, otherwise S506 is performed.
If the reference target of the trackId does not exist in the buffer (the reference object matched with the tracking object is not found from at least one reference object, and the track interruption of the tracking object is determined), traversing all the reference targets of the current tracking task from the buffer, and sequentially judging whether track splicing with the tracking target is needed.
First, the server will match the linkIds (respective trajectory identifications of at least one reference object) of all reference targets with the trackIds of the tracked targets in turn. When the linkId of a certain reference target is equal to the trackId of the reference target (a matching identifier identical to the first object identifier is found from the respective track identifiers of at least one reference object, and the reference object corresponding to the matching identifier is determined as the target object), the three conditions (determination conditions) for track splicing are not required to be judged, and track splicing is directly performed. And when the linkIds of all the reference targets are not equal to the trackIds of the reference targets (the matching identification which is the same as the first object identification is not found in the respective track identification of at least one reference object), carrying out track splicing on the reference targets meeting three conditions with the tracking targets. The three conditions are as follows:
firstly, performing an intersection ratio (overlapping area) by using a target frame (a first tracking area) of a tracking target and a target frame (a second tracking area) of a reference target, wherein the intersection ratio is greater than a set threshold (area threshold);
secondly, the timestamp (first timestamp) of the tracking target is larger than the timestamp (second timestamp) of the reference target, and the timestamp difference is smaller than a set threshold (time threshold);
third, the type of the tracking target (first type) is identical to the type of the reference target (second type).
And S505, splicing the tracking target (track splicing is carried out on the target object and the tracking object), and updating the reference target.
And the server modifies the linkID of the reference target into the trackId of the tracking target and modifies the trackId of the tracking target into the trackId of the reference target so as to realize splicing of the tracking target. Thereafter, the server modifies the target frame and the time stamp of the reference target to the target frame and the time stamp of the tracking target to update the reference target. Thereafter, the server proceeds to S509.
S506, judging whether the buffer capacity is exceeded. If yes, S507 is executed, otherwise S508 is executed.
The tracking target and the reference target do not need to be track spliced (from the at least one reference object, a reference object meeting the determination condition with the tracking object is not found), and the tracking target needs to be added to the buffer (the tracking object is added to the at least one reference object). At this time, the server determines whether the number of reference targets of the current tracking task exceeds a set capacity.
S507, deleting the oldest reference object (deleting the first reference object of the at least one reference object when the nominal storage number of the reference objects is reached).
When the number of reference targets exceeds the set capacity, the server deletes the oldest reference target according to the time stamp, and proceeds to S508.
And S508, adding a reference target.
The server adds this tracking target as a reference target (adds the tracking object to the at least one reference object, and adds the tracking object to the quantity-corrected at least one reference object), and then executes S509.
And S509, outputting the tracking target to downstream logic.
I.e. further event detection.
In the embodiment of the disclosure, event detection is mainly realized according to a plane crossing strategy.
The server firstly intercepts 0.3 (preset proportion) of the height of the bottom of the target frame, namely the height of the target frame, and a rectangle with the width of the target frame is a surface (a first projection area), and determines whether the electric vehicle is crossed by judging whether the rectangle is crossed. This is done because when judging the electric motor car overline, it is actually only necessary to judge whether its wheel has the overline to only need pay close attention to the bottom of target frame.
Then, the server determines that the rectangle has line crossing when all four vertices (a plurality of first boundary vertices) of the selected rectangle have line crossing.
Specifically, the logic for judging that one vertex of the rectangle completes the line crossing is as follows: randomly selecting one point of a line A (a datum line) as an original point o, making a normal vector F with the length of the line A being 1 on the original point o, forming two vectors M (a second vector) and N (a first vector) with the original point o by two points M (a matched top and bottom in a second projection region), N (a first boundary vertex in a first projection region) of two frames in front and at back, respectively making point multiplication (a second projection and a first projection) with the normal vector by M and N, and indicating that the points M and N are respectively arranged at two sides of the line A if the product of the two point multiplication results is less than 0 (the directions of the first projection and the second projection are opposite), and indicating that the points M and N are respectively arranged at two sides of the line A; if the product of the two dot products is greater than 0, it means that the dots m and n are on the side of the first dot A, and it means that there is no crossover.
When the four points of the bottom surface of the tracking target and the four points of the bottom surface of the reference target (the first boundary vertexes and the second boundary vertexes are distributed on the same side of the datum line) are on the same side of the line, determining that no line crossing occurs, and updating the reference bottom surface; when four points of the bottom surface of the tracking target are scattered on two sides of the line (a plurality of first boundary vertexes are distributed on two sides of the datum line), no processing is carried out, and no line crossing is determined to occur; when the four points of the bottom surface of the tracking target are positioned on the same side of the line and the four points of the reference bottom surface are positioned on the other side of the line (the first boundary vertexes and the second boundary vertexes are respectively distributed on the two sides of the reference line), judging that the surface line crossing is established, and obtaining the event result of the electric vehicle line crossing and the entrance event.
By the aid of the method, when the target of the electric vehicle is tracked and the track is interrupted, the disconnected tracks are spliced again, electric vehicle entering detection is performed based on the spliced tracks, and detection precision is improved; meanwhile, the cross-line judgment strategy of the electric vehicle is changed from the original point cross-line to the surface cross-line, so that the robustness of cross-line detection is improved, the accuracy of the electric vehicle in-line detection is improved, and the recall rate of the electric vehicle detection is further improved.
The present disclosure further provides a track splicing apparatus, and fig. 12 is a schematic structural diagram of the track splicing apparatus provided in the embodiment of the present disclosure; as shown in fig. 12, the track splicing device 1 includes:
a target tracking unit 11, configured to perform target tracking on the video stream to obtain at least one reference information corresponding to at least one reference object and trace information of a tracking object corresponding to a current video frame; the at least one reference object is tracked from a video frame preceding the current video frame;
a target determining unit 12, configured to, when a reference object matching the tracked object is not found from the at least one reference object and it is determined that a trajectory interruption occurs in the tracked object, screen a target object subjected to trajectory splicing with the tracked object from the at least one reference object based on the trajectory information and the at least one reference information;
and the track splicing unit 13 is configured to perform track splicing on the target object and the tracked object to obtain a tracking track of the tracked object.
In some embodiments of the present disclosure, the target determining unit 12 is further configured to parse a first object identifier of the trace object from the trace information, and parse a track identifier of each of the at least one reference object from the at least one reference information; determining a reference object corresponding to the matching identifier in at least one reference object as the target object under the condition that the matching identifier identical to the first object identifier is found from the respective track identifier of the at least one reference object; and screening the target object from the at least one reference object based on first image related information in the trace information and second image related information in the at least one reference object when the matching identifier which is the same as the first object identifier is not found in the respective trace identifier of the at least one reference object.
In some embodiments of the present disclosure, the first image-related information comprises: a first timestamp, a first tracking area, and a first type of the tracked object; the at least one second image-related information comprises: a respective second timestamp, second tracking area, and second type of the at least one reference object;
the target determining unit 12 is further configured to determine, when a reference object that satisfies a determination condition with the tracking object is found from the at least one reference object, the reference object that satisfies the determination condition as the target object; wherein the determination condition includes: one or more of the second tracking area overlapping the first tracking area by more than an area threshold, the second timestamp differing from the first timestamp by less than a time threshold, and the second type being the same as the first type.
In some embodiments of the present disclosure, the track stitching unit 13 is further configured to determine a track corresponding to the target object as a historical track of the tracked object; connecting the historical track and the current track of the tracked object to obtain the tracking track of the tracked object; wherein the current track includes the position information of the tracking object in the current video frame.
In some embodiments of the present disclosure, with continued reference to fig. 12, the trajectory splicing device 1 further comprises: a reference updating unit 14;
the reference updating unit 14 is configured to, when a reference object that meets the determination condition with the tracked object is not found from the at least one reference object, add the tracked object to the at least one reference object to obtain an updated at least one reference object.
In some embodiments of the present disclosure, the reference updating unit 14 is further configured to delete a first reference object of the at least one reference object when the number of the at least one reference object reaches the rated storage number of the reference objects, so as to obtain at least one reference object with a modified number; and adding the tracking object to the corrected at least one reference object to obtain an updated at least one reference object.
In some embodiments of the present disclosure, the reference updating unit 14 is further configured to, in a case that it is determined that the track interruption does not occur to the tracking object, replace a reference object, which is matched with the tracking object, in the at least one reference object with the tracking object, to obtain an updated at least one reference object.
Fig. 13 is a schematic structural diagram of an event detection device provided in an embodiment of the present disclosure; as shown in fig. 13, the event detection device 2 includes:
a target tracking unit 11, configured to perform target tracking on the video stream to obtain at least one reference information corresponding to at least one reference object and trace information of a tracking object corresponding to a current video frame;
a target determining unit 12, configured to, if it is determined that a trajectory interruption occurs in the tracked object, screen out a target object from the at least one reference object based on the trajectory information and the at least one reference information;
a track splicing unit 13, configured to perform track splicing on the target object and the tracked object to obtain a tracking track of the tracked object;
a projection determining unit 21, configured to determine a first projection area of the tracked object, and determine a second projection area corresponding to the first projection area according to the tracking track;
and the event detection unit 22 is configured to perform cross-line detection on the first projection area of the tracked object according to the second projection area to obtain a detection result, and generate an event result of the tracked object according to the detection result.
In some embodiments of the present disclosure, the event detecting unit 22 is further configured to perform line crossing judgment on a plurality of first boundary vertices of the first projection area of the tracked object according to a plurality of second boundary vertices of the second projection area, so as to obtain vertex judgment results; and when the vertex judgment result represents that all the first boundary vertices have line crossing, determining that the detection result is that the first projection area has line crossing.
In some embodiments of the present disclosure, the event detecting unit 22 is further configured to determine a position relationship between the plurality of first boundary vertices of the first projection area and the plurality of second boundary vertices of the second projection area and a reference line, respectively; determining that the vertex judgment result represents that the plurality of first boundary vertices cross the line under the condition that the plurality of first boundary vertices and the plurality of second boundary vertices are respectively distributed on two sides of the datum line according to the position relation representation; and when the position relationship indicates that the plurality of first boundary vertexes and the plurality of second boundary vertexes are distributed on the same side of the datum line or the plurality of first boundary vertexes are distributed on two sides of the datum line, determining that the vertex judgment result indicates that the plurality of first boundary vertexes do not cross the line.
In some embodiments of the present disclosure, the event detecting unit 22 is further configured to filter out matching vertices of each of the plurality of first boundary vertices in the first projection region from a plurality of second boundary vertices in the second projection region; generating a first vector for each of the plurality of first boundary vertices and any point in a reference line, and a second vector for each of the matching vertices and any point in the reference line; the positional relationship is determined such that the plurality of first boundary vertices and the plurality of second boundary vertices are respectively distributed on both sides of the reference line when a first projection of the first vector in the normal direction of the reference line is opposite to a second projection of the second vector in the normal direction of the reference line.
In some embodiments of the present disclosure, the tracking object is a vehicle, and the first projection area and the second projection area are appearance areas of a preset portion of the vehicle, where the preset portion includes: any one of wheels, a vehicle bottom seat and pedals; the width of the first projection area is the width of a first tracking area, the height is a preset proportion of the height of the first tracking area, and the first tracking area is an appearance area of a vehicle in the current video frame; the width of the second projection area is the width of a second tracking area, and the height is a preset proportion of the height of the second tracking area; the second tracking area is an appearance area of the vehicle found from a video frame before the current video frame according to the tracking track.
An embodiment of the present disclosure further provides an electronic device, fig. 14 is a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, and as shown in fig. 14, the electronic device 3 includes: a memory 31 and a processor 32, wherein the memory 31 and the processor 32 are connected by a communication bus 33; a memory 31 for storing an executable computer program; the processor 32 is configured to implement the method provided by the embodiment of the present disclosure, for example, the track splicing method or the event detection method provided by the embodiment of the present disclosure, when executing the executable computer program stored in the memory 31.
The present disclosure provides a computer-readable storage medium, which stores a computer program, and is configured to cause the processor 32 to execute the computer program to implement a method provided by the present disclosure, for example, a track splicing method or an event detection method provided by the present disclosure.
In some embodiments of the present disclosure, the storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments of the disclosure, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts, or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but need not correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example, in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present disclosure, and is not intended to limit the scope of the present disclosure. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present disclosure are included in the protection scope of the present disclosure.

Claims (16)

1. A track splicing method is characterized by comprising the following steps:
performing target tracking on the video stream to obtain at least one piece of reference information corresponding to at least one reference object and trace information of a tracking object corresponding to a current video frame; the at least one reference object is tracked from a video frame preceding the current video frame;
under the condition that a reference object matching the tracking object is not found from the at least one reference object and it is determined that a track interruption occurs in the tracking object, screening, from the at least one reference object, a target object to be track-spliced with the tracking object based on the trace information and the at least one piece of reference information;
and carrying out track splicing on the target object and the tracking object to obtain the tracking track of the tracking object.
2. The method of claim 1, wherein the screening out target objects from the at least one reference object for trajectory stitching with the tracked object based on the trace information and the at least one reference information comprises:
parsing a first object identifier of the tracking object from the trace information, and parsing a respective track identifier of the at least one reference object from the at least one piece of reference information;
determining a reference object corresponding to the matching identifier in at least one reference object as the target object under the condition that the matching identifier identical to the first object identifier is found from the respective track identifier of the at least one reference object;
and screening the target object from the at least one reference object based on first image-related information in the trace information and second image-related information in the at least one piece of reference information, when the matching identifier which is the same as the first object identifier is not found in the respective track identifier of the at least one reference object.
3. The method of claim 2, wherein the first image-related information comprises: a first timestamp, a first tracking area, and a first type of the tracking object; the second image-related information comprises: a second timestamp, a second tracking area, and a second type of each of the at least one reference object;
the screening out the target object from the at least one reference object based on first image-related information in the trace information and second image-related information in at least one reference information includes:
when a reference object meeting a judgment condition with respect to the tracking object is found from the at least one reference object, determining the reference object meeting the judgment condition as the target object;
wherein the determination condition includes: one or more of the second tracking area overlapping the first tracking area by more than an area threshold, the second timestamp differing from the first timestamp by less than a time threshold, and the second type being the same as the first type.
4. The method according to any one of claims 1 to 3, wherein the track stitching for the target object and the tracking object to obtain the tracking track of the tracking object comprises:
determining the track corresponding to the target object as the historical track of the tracking object;
connecting the historical track and the current track of the tracked object to obtain the tracking track of the tracked object; wherein the current track includes the position information of the tracking object in the current video frame.
5. The method of claim 3, further comprising:
and adding the tracking object to the at least one reference object to obtain the updated at least one reference object, under the condition that a reference object meeting the judgment condition with respect to the tracking object is not found from the at least one reference object.
6. The method of claim 5, wherein the adding the tracking object to the at least one reference object to obtain an updated at least one reference object comprises:
when the number of the at least one reference object reaches a preset maximum storage number of reference objects, deleting a first reference object from the at least one reference object to obtain the corrected at least one reference object;
and adding the tracking object to the corrected at least one reference object to obtain an updated at least one reference object.
7. The method of claim 1, further comprising:
and under the condition that the track interruption of the tracking object is determined not to occur, replacing the reference object matched with the tracking object in the at least one reference object with the tracking object to obtain the updated at least one reference object.
8. An event detection method, comprising:
performing target tracking on the video stream to obtain at least one piece of reference information corresponding to at least one reference object and trace information of a tracking object corresponding to a current video frame;
under the condition that the track interruption of the tracking object is determined, screening a target object from the at least one reference object based on the track information and the at least one reference information, and carrying out track splicing on the target object and the tracking object to obtain a tracking track of the tracking object;
determining a first projection area of the tracked object, and determining a second projection area corresponding to the first projection area according to the tracking track;
and performing line crossing detection on the first projection area of the tracked object according to the second projection area to obtain a detection result, and generating an event result of the tracked object according to the detection result.
9. The method according to claim 8, wherein the performing cross-line detection on the first projection region of the tracked object according to the second projection region to obtain a detection result comprises:
according to a plurality of second boundary vertices of the second projection area, respectively performing line crossing judgment on a plurality of first boundary vertices of the first projection area of the tracked object to obtain a vertex judgment result;
and when the vertex judgment result represents that all of the plurality of first boundary vertices cross the line, determining that the detection result is that the first projection area crosses the line.
10. The method according to claim 9, wherein the performing, according to the plurality of second boundary vertices of the second projection area, line crossing judgment on the plurality of first boundary vertices of the first projection area of the tracked object to obtain vertex judgment results includes:
determining positional relationships between the plurality of first boundary vertices of the first projection area and the plurality of second boundary vertices of the second projection area and a reference line, respectively;
determining that the vertex judgment result represents that the plurality of first boundary vertices cross the line, under the condition that the positional relationships represent that the plurality of first boundary vertices and the plurality of second boundary vertices are respectively distributed on two sides of the reference line;
and determining that the vertex judgment result represents that the plurality of first boundary vertices do not cross the line, when the positional relationships represent that the plurality of first boundary vertices and the plurality of second boundary vertices are distributed on the same side of the reference line, or that the plurality of first boundary vertices are distributed on two sides of the reference line.
11. The method of claim 10, wherein the determining the positional relationships between the plurality of first boundary vertices of the first projection area and the plurality of second boundary vertices of the second projection area and the reference line, respectively, comprises:
screening out matching vertices of the plurality of first boundary vertices of the first projection area from the plurality of second boundary vertices of the second projection area;
generating a first vector between each of the plurality of first boundary vertices and any point on the reference line, and a second vector between each of the matching vertices and the point on the reference line;
and determining, in a case that a first projection of the first vector in a normal direction of the reference line is opposite in direction to a second projection of the second vector in the normal direction of the reference line, that the positional relationship is that the plurality of first boundary vertices and the plurality of second boundary vertices are respectively distributed on two sides of the reference line.
12. The method according to any one of claims 8 to 11, wherein the tracking object is a vehicle, and the first projection region and the second projection region are appearance regions of a preset portion of the vehicle, the preset portion including: any one of wheels, a vehicle bottom seat and pedals;
the width of the first projection area is the width of a first tracking area, the height is a preset proportion of the height of the first tracking area, and the first tracking area is an appearance area of a vehicle in the current video frame;
the width of the second projection area is the width of a second tracking area, and the height is a preset proportion of the height of the second tracking area; the second tracking area is an appearance area of the vehicle found from a video frame before the current video frame according to the tracking track.
13. A track splicing apparatus, comprising:
the target tracking unit is used for carrying out target tracking on the video stream to obtain at least one piece of reference information corresponding to at least one reference object and trace information of a tracking object corresponding to the current video frame; the at least one reference object is tracked from a video frame preceding the current video frame;
a target determining unit, configured to, when a reference object matching the tracked object is not found from the at least one reference object and it is determined that a trajectory interruption occurs in the tracked object, screen a target object subjected to trajectory splicing with the tracked object from the at least one reference object based on the trace information and the at least one reference information;
and the track splicing unit is used for carrying out track splicing on the target object and the tracking object to obtain the tracking track of the tracking object.
14. An event detection device, comprising:
the target tracking unit is used for carrying out target tracking on the video stream to obtain at least one piece of reference information corresponding to at least one reference object and trace information of a tracking object corresponding to the current video frame;
a target determining unit, configured to, in a case that it is determined that a track interruption occurs in the tracking object, screen out a target object from the at least one reference object based on the trace information and the at least one piece of reference information;
the track splicing unit is used for carrying out track splicing on the target object and the tracking object to obtain a tracking track of the tracking object;
the projection determining unit is used for determining a first projection area of the tracked object and determining a second projection area corresponding to the first projection area according to the tracking track;
and the event detection unit is used for performing cross-line detection on the first projection area of the tracked object according to the second projection area to obtain a detection result and generating an event result of the tracked object according to the detection result.
15. An electronic device, comprising:
a memory for storing an executable computer program;
a processor for implementing the trajectory stitching method of any one of claims 1 to 7, or the event detection method of any one of claims 8 to 12, when executing an executable computer program stored in the memory.
16. A computer-readable storage medium, in which a computer program is stored for causing a processor to, when executed, implement the trajectory stitching method of any one of claims 1 to 7 or the event detection method of any one of claims 8 to 12.
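The cross-line judgment of claims 9 to 11 amounts to a point-side test against the reference line: the side a vertex lies on is the sign of the projection of the vector from a point on the line to that vertex onto the line's normal, and a crossing is reported only when all first-boundary vertices lie on the opposite side from the second-boundary vertices. The following is a minimal sketch, not the claimed implementation; the tuple-based point format, function names, and the representation of the reference line by two points are assumptions.

```python
# Hypothetical sketch of the vertex-side test behind the cross-line judgment.
# A vertex's side of the reference line is the sign of the projection of the
# vector (line point -> vertex) onto the line's normal direction.

def side_of_line(p, a, b):
    """Sign of the projection of vector a->p onto the normal of line a->b.
    +1 and -1 correspond to the two sides; 0 means the point is on the line."""
    nx, ny = -(b[1] - a[1]), b[0] - a[0]          # normal of direction a->b
    proj = (p[0] - a[0]) * nx + (p[1] - a[1]) * ny
    return (proj > 0) - (proj < 0)

def crossed_line(first_vertices, second_vertices, a, b):
    """True only when every first-boundary vertex lies strictly on the
    opposite side of the reference line from every second-boundary vertex
    (same side, or first vertices straddling the line, counts as no crossing)."""
    s1 = {side_of_line(p, a, b) for p in first_vertices}
    s2 = {side_of_line(p, a, b) for p in second_vertices}
    return len(s1) == 1 and len(s2) == 1 and s1 != s2 and 0 not in s1 | s2

line_a, line_b = (0.0, 0.0), (10.0, 0.0)          # reference line: the x-axis
second = [(1.0, 2.0), (3.0, 2.0)]                 # earlier projection area vertices
first = [(1.0, -1.0), (3.0, -1.0)]                # current projection area vertices
print(crossed_line(first, second, line_a, line_b))   # True
```

Comparing signed projections onto the normal is equivalent to the usual 2-D cross-product side test; requiring all first vertices on one side and all second vertices on the other mirrors the "distributed on two sides" condition of claim 10.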
CN202111272252.6A 2021-10-29 2021-10-29 Track splicing and event detection method, device, equipment and computer storage medium Withdrawn CN113963029A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111272252.6A CN113963029A (en) 2021-10-29 2021-10-29 Track splicing and event detection method, device, equipment and computer storage medium
PCT/CN2022/095304 WO2023071171A1 (en) 2021-10-29 2022-05-26 Trajectory splicing method and apparatus, event detection method and apparatus, and electronic device, computer-readable storage medium and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111272252.6A CN113963029A (en) 2021-10-29 2021-10-29 Track splicing and event detection method, device, equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN113963029A (en) 2022-01-21

Family

ID=79468276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111272252.6A Withdrawn CN113963029A (en) 2021-10-29 2021-10-29 Track splicing and event detection method, device, equipment and computer storage medium

Country Status (2)

Country Link
CN (1) CN113963029A (en)
WO (1) WO2023071171A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023071171A1 (en) * 2021-10-29 2023-05-04 上海商汤智能科技有限公司 Trajectory splicing method and apparatus, event detection method and apparatus, and electronic device, computer-readable storage medium and computer program product
CN117011816A (en) * 2022-05-04 2023-11-07 动态Ad有限责任公司 Trace segment cleaning of trace objects

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN118097568A (en) * 2024-04-24 2024-05-28 天津众合智控科技有限公司 Personal object association tracking method and system based on target detection algorithm

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JP6962145B2 (en) * 2017-11-13 2021-11-05 富士通株式会社 Image processing programs, image processing methods and information processing equipment
CN110443833B (en) * 2018-05-04 2023-09-26 佳能株式会社 Object tracking method and device
CN111753609B (en) * 2019-08-02 2023-12-26 杭州海康威视数字技术股份有限公司 Target identification method and device and camera
CN110532916B (en) * 2019-08-20 2022-11-04 北京地平线机器人技术研发有限公司 Motion trail determination method and device
CN113963029A (en) * 2021-10-29 2022-01-21 深圳市商汤科技有限公司 Track splicing and event detection method, device, equipment and computer storage medium


Also Published As

Publication number Publication date
WO2023071171A1 (en) 2023-05-04

Similar Documents

Publication Publication Date Title
CN111626350B (en) Target detection model training method, target detection method and device
CN113963029A (en) Track splicing and event detection method, device, equipment and computer storage medium
CN111046980B (en) Image detection method, device, equipment and computer readable storage medium
CN111145214A (en) Target tracking method, device, terminal equipment and medium
CN103716687A (en) Method and system for using fingerprints to track moving objects in video
CN108174152A (en) A kind of target monitoring method and target monitor system
CN114495128B (en) Subtitle information detection method, device, equipment and storage medium
CN105138525A (en) Traffic video processing device and method, and retrieval device and method
CN113762314B (en) Firework detection method and device
CN112434566A (en) Passenger flow statistical method and device, electronic equipment and storage medium
CN114170516A (en) Vehicle weight recognition method and device based on roadside perception and electronic equipment
CN114297432A (en) Video retrieval method, device and equipment and computer readable storage medium
CN116453119A (en) Road detection method, apparatus, computer, readable storage medium, and program product
CN111652181A (en) Target tracking method and device and electronic equipment
CN111767839B (en) Vehicle driving track determining method, device, equipment and medium
CN113822128A (en) Traffic element identification method, device, equipment and computer readable storage medium
CN116721229A (en) Method, device, equipment and storage medium for generating road isolation belt in map
CN111008622A (en) Image object detection method and device and computer readable storage medium
CN115661444A (en) Image processing method, device, equipment, storage medium and product
CN117011481A (en) Method and device for constructing three-dimensional map, electronic equipment and storage medium
CN115576990A (en) Method, device, equipment and medium for evaluating visual truth value data and perception data
CN115345782A (en) Image processing method, image processing apparatus, computer, readable storage medium, and program product
CN114201675A (en) Content recommendation method and device, equipment, medium and product
CN117113281B (en) Multi-mode data processing method, device, agent and medium
CN113129330B (en) Track prediction method and device for movable equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code: ref country code: HK; ref legal event code: DE; ref document number: 40064032
WW01 Invention patent application withdrawn after publication; application publication date: 20220121