CN117095030A - Method, device and equipment for tracking object of road side system - Google Patents

Method, device and equipment for tracking object of road side system

Info

Publication number
CN117095030A
CN117095030A CN202310655876.9A CN202310655876A
Authority
CN
China
Prior art keywords
target
detection
information
image information
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310655876.9A
Other languages
Chinese (zh)
Inventor
关鹏飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunkong Zhixing Technology Co Ltd
Original Assignee
Yunkong Zhixing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunkong Zhixing Technology Co Ltd filed Critical Yunkong Zhixing Technology Co Ltd
Priority to CN202310655876.9A
Publication of CN117095030A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of this specification disclose a method, an apparatus, and a device for tracking an object of a roadside system. The scheme may include: acquiring historical track information of a detection object, the historical track information being generated from information collected by roadside sensing equipment of a roadside system; determining a corresponding target detection frame candidate region according to the historical track information, the target detection frame candidate region representing a region in which the detection object is estimated to appear; identifying a target object in the target detection frame candidate region; segmenting the image information of the target detection frame candidate region based on the target object to obtain foreground image information and background image information corresponding to the target object; judging, based on the foreground image information and the background image information, whether the similarity between the target object and the detection object is greater than or equal to a threshold; if the similarity between the target object and the detection object is greater than or equal to the threshold, determining that the target object and the detection object are the same object; and adding a tracking identifier corresponding to the detection object to the target object.

Description

Method, device and equipment for tracking object of road side system
Technical Field
The present application relates to the field of computer vision, and in particular, to a method, an apparatus, and a device for tracking an object in a roadside system.
Background
Multi-object tracking is an important technology in the field of computer vision: it detects and tracks multiple objects simultaneously in a video or image sequence. Multi-object tracking has wide application prospects, such as autonomous driving, pedestrian detection and tracking, intelligent video surveillance, and medical image analysis.
When tracking multiple targets, a target frame can be identified from an image by a target detector, and tracking is then performed by feature matching or detection matching. However, ID jumps easily occur during tracking, making the target tracking inaccurate.
Disclosure of Invention
The embodiment of the specification provides a method, a device and equipment for tracking an object of a road side system, so as to solve the problem of inaccurate target tracking in the existing multi-target tracking method.
In order to solve the above technical problems, the embodiments of the present specification are implemented as follows:
the method for tracking the object of the roadside system provided in the embodiments of the present disclosure may include:
acquiring historical track information of a detection object; the historical track information is generated by information collected by road side sensing equipment of a road side system;
Determining a corresponding target detection frame candidate region according to the historical track information; the target detection frame candidate region represents a region where the detection object is estimated to appear;
identifying a target object in the target detection frame candidate region;
segmenting the image information of the target detection frame candidate region based on the target object to obtain foreground image information and background image information corresponding to the target object; the foreground image information represents image information of the target object; the background image information represents image information of other objects that occlude the target object;
judging whether the similarity between the target object and the detection object is greater than or equal to a threshold value based on the foreground image information and the background image information;
if the similarity between the target object and the detection object is greater than or equal to a threshold value, determining that the target object and the detection object are the same object;
and adding the tracking identification corresponding to the detection object to the target object.
An apparatus for tracking an object for a roadside system provided in an embodiment of the present disclosure may include:
the information acquisition module is used for acquiring historical track information of the detection object; the historical track information is generated by information collected by road side sensing equipment of a road side system;
The first determining module is used for determining a corresponding target detection frame candidate area according to the historical track information; the target detection frame candidate region represents a region where the detection object is estimated to appear;
the identification module is used for identifying the target object in the target detection frame candidate area;
the segmentation module is used for segmenting, based on the target object, the image information of the target detection frame candidate region to obtain foreground image information and background image information corresponding to the target object; the foreground image information represents image information of the target object; the background image information represents image information of other objects that occlude the target object;
the judging module is used for judging whether the similarity between the target object and the detection object is larger than or equal to a threshold value or not based on the foreground image information and the background image information;
the second determining module is used for determining that the target object and the detection object are the same object if the similarity between the target object and the detection object is greater than or equal to a threshold value;
and the tracking identification adding module is used for adding the tracking identification corresponding to the detection object into the target object.
An apparatus for tracking an object for a roadside system provided in an embodiment of the present disclosure may include:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring historical track information of a detection object; the historical track information is generated by information collected by road side sensing equipment of a road side system;
determining a corresponding target detection frame candidate region according to the historical track information; the target detection frame candidate region represents a region where the detection object is estimated to appear;
identifying a target object in the target detection frame candidate region;
segmenting the image information of the target detection frame candidate region based on the target object to obtain foreground image information and background image information corresponding to the target object; the foreground image information represents image information of the target object; the background image information represents image information of other objects that occlude the target object;
Judging whether the similarity between the target object and the detection object is greater than or equal to a threshold value based on the foreground image information and the background image information;
if the similarity between the target object and the detection object is greater than or equal to a threshold value, determining that the target object and the detection object are the same object;
and adding the tracking identification corresponding to the detection object to the target object.
At least one embodiment of the present disclosure can achieve the following beneficial effects: a corresponding target detection frame candidate region is determined from the historical track information of the detection object collected by the roadside sensing equipment of the roadside system, and the target object located in the target detection frame candidate region is obtained, so that the similarity judgment between the detection object and the target object is performed over a reduced detection region, and the tracking identifier corresponding to the detection object is added to the target object when the target object and the detection object are the same object. This reduces the probability of ID jumps of the detection object during tracking and improves tracking accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments described in the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an overall scheme architecture of a method for tracking an object in an actual application scenario according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method for tracking objects for a roadside system according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a method for determining a candidate region of a target detection frame of a different type of history trace according to an embodiment of the present disclosure;
FIG. 4 is a swim lane diagram of a method for tracking objects for a roadside system provided in an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an apparatus for tracking an object of a roadside system according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an apparatus for tracking an object for a roadside system according to an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of one or more embodiments of the present specification more clear, the technical solutions of one or more embodiments of the present specification will be clearly and completely described below in connection with specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without undue burden, are intended to be within the scope of one or more embodiments herein.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
In the prior art, the ID jump that occurs after a target is occluded and then reappears can be handled by appearance-based feature matching learning: a similar target is recalled from the historical track through the similarity of appearance feature vectors. However, this recall approach distinguishes similar targets poorly, for example two identically dressed pedestrians appearing at an intersection at the same time, or two cars of the same color and model stopped at an intersection waiting for a red light. In such scenes the appearance of the targets easily causes target ID jumps, making the tracking of the targets inaccurate.
In order to solve the drawbacks of the prior art, the present solution provides the following embodiments:
fig. 1 is a schematic diagram of an overall scheme architecture of a method for tracking an object of a roadside system in an actual application scenario in an embodiment of the present disclosure.
As shown in fig. 1, the scheme mainly may include: image information 1, a roadside system 2, and tracking result information 3. The roadside system 2 can acquire a plurality of historical tracks from the historical track library, determine the target objects located in the candidate region of each historical track in the acquired image information 1, match or judge the similarity between the target objects in the candidate regions and the detection objects of the corresponding historical tracks, and add the tracking identifier of the corresponding detection object to a target object that meets the condition, thereby obtaining the tracking result information 3. The roadside system 2 may be roadside sensing equipment that integrates a processor, a radar, a camera, and the like and processes data; it may also be a roadside system composed of edge computing equipment and roadside sensing equipment together. The image information 1 may be collected by the roadside sensing equipment of the roadside system. The tracking method can track objects in the monitoring area of the roadside sensing system.
Fig. 2 is a flowchart of a method for tracking an object of a roadside system according to an embodiment of the present disclosure. From the program perspective, the execution subject of the flow may be a program or an application client that is installed on an application server.
As shown in fig. 2, the process may include the steps of:
step 202: acquiring historical track information of a detection object; the historical track information is generated by information collected by road side sensing equipment of a road side system.
The detection object in the embodiments of this specification may include a walking person, a riding person, various vehicles, and the like. The detection object may also be something else, such as a license plate or various animals, and the user can set it as required. The historical track information may include the position information of the detection frame of the detection object at each moment, the length and width information of the detection frame, the ID information of the detection object, the foreground feature vector information and background feature vector information of the detection object at each moment, and the like. The position information of the detection frame can be represented by the height and width of the center point of the target frame in the image. The information at each moment in the historical track information can be obtained by the roadside system processing, at a set frequency, the image information or video information collected by the collection equipment. The collection equipment may be the roadside sensing equipment of the roadside system, and the like. The roadside system can be roadside sensing equipment that integrates a processor, a radar, a camera, and the like and is used to collect and process data; it can also be a roadside system composed of edge computing equipment and roadside sensing equipment, having both a data collection function and a data processing function.
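As an illustrative sketch only (not the patent's own data format), the historical track information described above could be organized per detection object roughly as follows; all class and field names are hypothetical:

from dataclasses import dataclass, field
from typing import Optional
import numpy as np

@dataclass
class TrackFrame:
    # Per-moment record of a detection object; a value of None marks empty tracking data.
    center_xy: Optional[tuple]          # detection-frame center (width, height position) in the image
    box_wh: Optional[tuple]             # detection-frame width and height
    fg_feature: Optional[np.ndarray]    # foreground appearance feature vector
    bg_feature: Optional[np.ndarray]    # background appearance feature vector

@dataclass
class HistoricalTrack:
    track_id: int                                 # ID (tracking identifier) of the detection object
    frames: list = field(default_factory=list)    # one TrackFrame per moment, oldest first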
Step 204: determining a corresponding target detection frame candidate region according to the historical track information; the target detection frame candidate region represents a region where the detection object is estimated to appear.
The region in which the detection object appears estimated in the embodiment of the present specification may be a region in which the detection object may appear in the image at the current time, i.e., time T. The historical track may represent the track of the detected object before time T. The history trajectory information may be information of the detection object at each time before the T time.
Step 206: and identifying the target object in the target detection frame candidate area.
In the embodiments of this specification, the roadside system may identify the target object with a 2D target detection model, such as YOLO or Faster R-CNN, to determine the target objects located in the target detection frame candidate region. There may be one or more target objects in the target detection frame candidate region. A target object may be an object that has a detection frame but has not yet been assigned an ID, which makes it easier to distinguish between a plurality of target objects with overlapping parts. If an object in the image is only partially located in the target detection frame candidate region, that object may still be determined to be a target object in the target detection frame candidate region.
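A minimal sketch of this selection step, assuming a generic 2D detector that returns axis-aligned boxes (cx, cy, w, h); treating a box as belonging to the candidate region when any part of it falls inside the circle follows the partial-presence rule above, and the function name is an assumption:

import numpy as np

def targets_in_candidate_region(detections, center, radius):
    """Keep detections whose box at least partially enters the circular candidate region.

    detections: list of (cx, cy, w, h) boxes from a 2D detector such as YOLO.
    center: (x, y) center of the target detection frame candidate region.
    radius: candidate-region radius in pixels.
    """
    selected = []
    for cx, cy, w, h in detections:
        # Closest point of the box to the circle center.
        nearest_x = np.clip(center[0], cx - w / 2, cx + w / 2)
        nearest_y = np.clip(center[1], cy - h / 2, cy + h / 2)
        if np.hypot(nearest_x - center[0], nearest_y - center[1]) <= radius:
            selected.append((cx, cy, w, h))
    return selected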
Step 208: dividing the image information of the target detection frame candidate region based on the target object to obtain foreground image information and background image information corresponding to the target object; the foreground image information represents image information of the target object; the background image information represents image information of other objects occluded to the target object.
In the embodiments of this specification, each target object has a corresponding detection frame image, and it can be understood that the subject in the part framed by the detection frame is the target object. The roadside system can segment the detection frame image information with an instance segmentation model to obtain the foreground image information and the background image information. The background image information may be image information of other objects that occlude the target object, or image information of other objects that are occluded by the target object. Which part is selected as the background image information can be decided when training the instance segmentation model, so that the background image information produced by the instance segmentation model meets the user's requirements.
In practical applications, the roadside system may segment the detection frame image corresponding to each target object with an instance segmentation model to obtain the foreground image and the background image of each target object. The instance segmentation model may segment the subject object in the detection frame at the pixel level. That is, if the subject object is involved in occlusion, whether it is occluded by other objects or partial regions of other objects appear in the detection frame, the partial regions of those other objects in the detection frame image may be filled with the RGB three-channel pixel value (128, 128, 128) to obtain the foreground image; and the subject object together with the partial regions of other objects occluded by the subject object may be filled with the RGB three-channel pixel value (128, 128, 128) to obtain the background image. It will be appreciated that another kind of background image, i.e. one showing the other objects occluded by the subject object, may also be obtained by filling in this way.
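The gray-fill idea can be illustrated with a short sketch, assuming a pixel mask of the subject object from an instance segmentation model is available; this simplified version only grays out the non-subject region for the foreground and the subject region for the background, and is an illustration rather than the patent's segmentation model:

import numpy as np

GRAY = np.array([128, 128, 128], dtype=np.uint8)

def split_foreground_background(crop, subject_mask):
    """crop: HxWx3 detection-frame image; subject_mask: HxW boolean mask of the subject object."""
    foreground = crop.copy()
    foreground[~subject_mask] = GRAY      # hide everything that is not the subject object
    background = crop.copy()
    background[subject_mask] = GRAY       # hide the subject object, keep the occluding objects
    return foreground, background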
Step 210: and judging whether the similarity between the target object and the detection object is greater than or equal to a threshold value based on the foreground image information and the background image information.
If a plurality of target objects exist in the target detection frame candidate region in the embodiments of this specification, the similarity between the detection object and each target object may be judged one by one based on the foreground image information and background image information of the target objects, so as to avoid inaccurate tracking of the detection object caused by missing data. If no target object exists in the target detection frame candidate region, the detection object corresponding to the target detection frame candidate region can be marked as not successfully tracked.
Step 212: and if the similarity between the target object and the detection object is greater than or equal to a threshold value, determining that the target object and the detection object are the same object.
In the embodiments of this specification, a similarity between the target object and the detection object that is greater than or equal to the threshold may indicate that the detection object is successfully tracked. A detection object whose similarity is smaller than the threshold is marked as not successfully tracked, and the target object can be marked as having no corresponding detection object, so as to distinguish them from successfully tracked detection objects and target objects.
Step 214: and adding the tracking identification corresponding to the detection object to the target object.
The tracking identifier in the embodiments of this specification may be the ID of the detection object. The roadside system can add the ID of the detection object to the detection frame of the target object, so that the ID of the detection object is visible when the result is visualized. The position information, the ID information, and the target object information of the target detection frame may be stored in the corresponding historical track as the track information of the detection object at time T, so that they serve as reference data when the tracking processing of the detection object is performed on the image at time T+1.
It should be understood that the method according to one or more embodiments of the present disclosure may include the steps in which some of the steps are interchanged as needed, or some of the steps may be omitted or deleted.
In the method in fig. 2, the historical track information of the detected object collected by the road side sensing device of the road side system is used for determining a corresponding target detection frame candidate region and obtaining a target object located in the target detection frame candidate region, so that the detected object can perform similarity judgment with the target object of the reduced detection region, and the tracking identifier corresponding to the detected object is added to the target object under the condition that the target object and the detected object are the same object. Therefore, the frequency of ID jump of the detection object in the tracking process is reduced, and the tracking accuracy is improved.
Based on the method of fig. 2, the present description examples also provide some specific implementations of the method, as described below.
The corresponding target detection frame candidate region can be estimated according to the position information of the target detection frame of the detection object contained in the history track information. Optionally, determining the corresponding candidate region of the target detection frame in the embodiment of the present disclosure may specifically include:
based on the historical track information, predicting a first center point and a second center point of a target detection frame of the detection object by adopting Kalman filtering; the first center point represents the center point of a target detection frame of the detection object at a first moment; the second center point represents the center point of a target detection frame of the detection object at a second moment; the first time and the second time are adjacent times;
calculating a first region radius according to the first center point and the second center point;
and determining the target detection frame candidate region according to the first region radius.
Kalman filtering in the embodiments of this specification is an algorithm that uses a linear system state equation to optimally estimate the system state from the system's input and output observation data, and it can predict the motion track of an object. The Kalman filter may take, as the observation sequence, the position information of the target detection frame of the detection object before time T contained in the historical track information together with the noise in the image, so that the center-point position of the target detection frame of the detection object at time T can be predicted. The first time may represent time T and the second time may represent time T+1. The roadside system can calculate the distance between the two points from the coordinates of the first center point and the coordinates of the second center point, use this distance as the first region radius, and draw a circle centered at the midpoint of the two points as the target detection frame candidate region. The coordinates of the first center point may be represented by the height and width of the center point in the image, and similarly for the second center point.
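A sketch of the determined-track case under these definitions: the radius is the distance between the two Kalman-predicted centers and the circle is centered at their midpoint; the Kalman prediction itself is assumed to happen elsewhere, and the function name is an assumption:

import numpy as np

def candidate_region_determined(c1, c2):
    """c1, c2: Kalman-predicted detection-frame centers at two adjacent times (e.g. T and T+1)."""
    c1, c2 = np.asarray(c1, dtype=float), np.asarray(c2, dtype=float)
    r1 = np.linalg.norm(c2 - c1)       # first region radius: distance between the two centers
    center = (c1 + c2) / 2.0           # circle drawn around the midpoint of the two centers
    return center, r1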
The candidate region of the target detection frame can be determined by combining the length information and the width information of the target detection frame on the basis of the position information of the target detection frame. The determining the corresponding candidate region of the target detection frame in the embodiment of the present disclosure may specifically include:
based on the historical track information, predicting a third center point and a fourth center point of the target detection frame of the detection object by Kalman filtering; the third center point represents the center point of the target detection frame of the detection object at a third moment; the fourth center point represents the center point of the target detection frame of the detection object at a fourth moment; the third moment and the fourth moment are non-adjacent and are separated by one time unit;
acquiring the width and the height of the target detection frame;
calculating a second region radius based on the third center point, the fourth center point, the width, and the height;
and determining the target detection frame candidate region according to the radius of the second region.
In the embodiments of this specification, the size of the target detection frame of the same detection object may be fixed, or may change with the angle from which the vehicle body or the person is photographed. When calculating the radius of the target detection frame candidate region at time T, the width and height of the target detection frame of the detection object at time T-1 can be obtained. The third time may represent time T and the fourth time may represent time T+2. The third time may be the same as the first time described above. The coordinates of the third center point and of the fourth center point may be calculated in the same way as the coordinates of the first center point described above. The roadside system can calculate the second region radius according to the formula r2 = distance(box3(x3, y3), box4(x4, y4)) + (w/2 + h/2)/2. Here, box3(x3, y3) may represent the coordinates of the predicted third center point in the image; box4(x4, y4) may represent the coordinates of the predicted fourth center point in the image; r2 may represent the second region radius; w may represent the width of the target detection frame of the detection object at time T-1; and h may represent the height of the target detection frame of the detection object at time T-1. In short, the second region radius may be obtained by adding the average of half the height and half the width of the target detection frame at time T-1 to the distance between the two points. A circle is drawn with the second region radius as radius and the midpoint between the third center point and the fourth center point as its center, giving the target detection frame candidate region.
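Continuing the sketch for the uncertain-track case, with w and h taken from the detection frame at time T-1, following the formula above; the function name is an assumption:

import numpy as np

def candidate_region_uncertain(c3, c4, w, h):
    """c3, c4: predicted centers at two moments one time unit apart (e.g. T and T+2);
    w, h: width and height of the detection frame at time T-1."""
    c3, c4 = np.asarray(c3, dtype=float), np.asarray(c4, dtype=float)
    r2 = np.linalg.norm(c4 - c3) + (w / 2.0 + h / 2.0) / 2.0   # second region radius
    center = (c3 + c4) / 2.0                                   # circle centered at the midpoint
    return center, r2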
In order to make the tracked object more accurate, the historical tracks of different detected objects can be divided. The method in the embodiment of the specification can further comprise the following steps:
judging whether tracking data corresponding to at least two continuous moments are empty or not in the historical track information;
if the historical track information does not contain at least two consecutive moments whose tracking data are empty, determining the historical track corresponding to the historical track information as a determined track;
And if the tracking data corresponding to at least two continuous moments are empty in the historical track information, determining the historical track corresponding to the historical track information as an uncertain track.
The tracking data being empty in the embodiments of this specification may indicate that the historical track information contains no information of the detection object for that moment, such as the foreground feature vector, the background feature vector, and the detection frame position information for that moment. The at least two consecutive moments may be consecutive moments adjacent to the current moment; for example, if the tracking data at times T-1, T-2, and T-3 are empty, the track is an uncertain track, whereas if the tracking data from time T-9 to time T-5 are empty but the tracking data after time T-5 are not empty, the track may be a determined track. If the tracking data for at least ten consecutive moments of a track are empty, the track may be deleted from the historical track library.
In practical applications, in order to make the calculation of the target detection frame candidate region more accurate, different types of tracks use different calculation modes: if the historical track is a determined track, the candidate region can be calculated with its radius taken as the first region radius; if the historical track is an uncertain track, the candidate region can be calculated with its radius taken as the second region radius. In either calculation, the data in the historical track that is closest to time T and not empty may be used as the reference for calculating the corresponding region radius.
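The track bookkeeping described above could look roughly as follows; the thresholds of 2 and 10 consecutive empty moments follow the text, and everything else is an illustrative assumption:

def classify_track(frames, max_uncertain=2, max_keep=10):
    """frames: per-moment records ending at the most recent moment; None marks empty tracking data.
    Returns 'determined', 'uncertain', or 'delete'."""
    trailing_empty = 0
    for frame in reversed(frames):        # count consecutive empty moments ending at the latest moment
        if frame is None:
            trailing_empty += 1
        else:
            break
    if trailing_empty >= max_keep:
        return "delete"                   # drop the track from the historical track library
    if trailing_empty >= max_uncertain:
        return "uncertain"                # use the second-region-radius candidate region
    return "determined"                   # use the first-region-radius candidate region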
Fig. 3 is a schematic diagram of a method for determining the target detection frame candidate region for different types of historical tracks according to an embodiment of the present disclosure. As shown in fig. 3, the framed person in the figure may represent a detection object, the solid arrow may represent the travel trajectory of the person estimated from the person's track before time T, the circle may represent the target detection frame candidate region, and the dashed arrow may represent the radius of the candidate region. The upper half 301 of fig. 3 may represent the method of determining the target detection frame candidate region for a determined track, where r1 may represent the first region radius; the lower half 302 of fig. 3 may represent the method of determining the target detection frame candidate region for an uncertain track, where r2 may represent the second region radius.
The similarity of the detection object and the target object can be judged based on the appearance feature vector information. The historical track information in the embodiment of the present specification at least includes first appearance feature vector information of the detection object; the determining, based on the foreground image information and the background image information, whether the similarity between the target object and the detection object is greater than or equal to a threshold value may specifically include:
determining second appearance feature vector information of the target object based on the foreground image information and the background image information;
And judging whether the appearance similarity of the detection object and the target object is greater than or equal to a threshold value according to the first appearance feature vector information and the second appearance feature vector information.
The foreground feature vector information and the background feature vector information contained in the history track information in the embodiment of the present specification may be used to represent the first appearance feature vector information of the detection object. The foreground image information and the background image information of the target object can represent the image information of the target object which is framed in the image at the moment T; the road side system can process the image information to obtain second appearance characteristic vector information of the target object at the moment T. The foreground feature vector and the background feature vector of the track can represent first appearance feature vector information corresponding to the last moment of the detected object in the historical track; the road side system can compare according to the appearance feature vectors at two moments and judge whether the appearance similarity is greater than or equal to a threshold value.
The recognition model can be adopted to process the target object, so that appearance characteristic vector information corresponding to the target object is obtained. In this embodiment of the present disclosure, the first appearance feature vector information includes first foreground feature vector information and first background feature vector information of the detection object, and the second appearance feature vector information includes second foreground feature vector information and second background feature vector information;
The determining the second appearance feature vector information of the target object based on the foreground image information and the background image information may specifically include:
identifying the foreground image information by adopting a first Re-ID model to obtain second foreground feature vector information;
identifying the background image information by adopting a second Re-ID model to obtain second background feature vector information;
the determining, based on the foreground image information and the background image information, whether the similarity between the target object and the detection object is greater than or equal to a threshold value may specifically include:
calculating a first feature vector distance based on the first foreground feature vector information and the second foreground feature vector information;
calculating a second feature vector distance based on the first background feature vector information and the second background feature vector information;
the first feature vector distance and the second feature vector distance are weighted and summed to obtain a summation result;
and judging whether the summation result is larger than or equal to a threshold value.
The Re-ID model in the embodiments of this specification may be used for pedestrian re-identification and vehicle re-identification. The first Re-ID model can be obtained by training with historical foreground image information as samples; the second Re-ID model can be trained with historical background image information as samples. The corresponding Re-ID models are used to identify the foreground image information and the background image information of the target object respectively, obtaining the corresponding feature vectors.
The roadside system can compare appearance similarity from the obtained feature vector information using the formula a × |distance(foreground feature vector of the target detection frame, foreground feature vector of the track)| + b × |distance(background feature vector of the target detection frame, background feature vector of the track)| ≥ th, to determine whether the appearance similarity between the detection object and the target object is greater than or equal to the threshold. Here a, b, and th are hyper-parameters of the appearance matching formula, with a + b = 1; a and b are weights and th is the set threshold, and the user can set the corresponding hyper-parameters according to the current scene. The corresponding vector distances are calculated and weighted-summed to determine whether the appearance similarity is greater than or equal to the threshold. A determined track has been tracked continuously, so the background information in consecutive frame images is likely to be similar, whereas an uncertain track is missing several frames, so its background information is less likely to be similar. Therefore the weight corresponding to the foreground feature vector of an uncertain track is set larger than the foreground weight of a determined track, and the weight corresponding to the background feature vector of an uncertain track is set smaller than the background weight of a determined track, which makes the recognition result more accurate.
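A sketch of the weighted appearance-matching rule, keeping the comparison exactly as the formula above states it; Euclidean distance is used here as an assumption, and the default values of a, b, and th are placeholders:

import numpy as np

def appearance_match(fg_det, bg_det, fg_trk, bg_trk, a=0.5, b=0.5, th=1.0):
    """Return True when the weighted feature-vector distance meets the threshold th.

    fg_det / bg_det: foreground / background feature vectors of the target detection frame.
    fg_trk / bg_trk: foreground / background feature vectors stored in the historical track.
    a, b: hyper-parameter weights with a + b = 1; th: matching threshold.
    """
    d_fg = np.linalg.norm(np.asarray(fg_det) - np.asarray(fg_trk))
    d_bg = np.linalg.norm(np.asarray(bg_det) - np.asarray(bg_trk))
    # Mirrors the formula: a*|d_fg| + b*|d_bg| >= th.
    return a * abs(d_fg) + b * abs(d_bg) >= th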
The method can track a plurality of detection objects at the same time, so as to improve the efficiency of multi-object tracking. In the embodiments of this specification, the detection object is any object in a detection object set, where the detection object set includes a plurality of first detection objects whose historical tracks are determined tracks and a plurality of second detection objects whose historical tracks are uncertain tracks; and a plurality of target objects exist in the target detection frame candidate regions corresponding to the detection objects;
judging whether the similarity between the target object and the detection object is greater than or equal to a threshold value; if the similarity between the target object and the detection object is greater than or equal to a threshold, determining that the target object and the detection object are the same object may specifically include:
judging whether the similarity between each target object and each first detection object is greater than or equal to a threshold value;
if the similarity between the first target object and the first target detection object is greater than or equal to a threshold value, determining the first target object and the first target detection object as the same object; the first target object is any object in each target object, and the first target detection object is any object in each first detection object;
Judging whether the similarity between other target objects and each second detection object is greater than or equal to a threshold value; the other target objects are target objects, of which the similarity with each first detection object is smaller than a threshold value, in each target object;
if the similarity between the second target object and the second target detection object is greater than or equal to a threshold value, determining the second target object and the second target detection object as the same object; the second target object is any object in other target objects, and the second target detection object is any object in the second detection objects.
In the embodiments of this specification, the similarity determination may first be performed between the first detection object of each determined track and the target objects in the target detection frame candidate region corresponding to that determined track. The roadside system can store information such as the target detection frame position, the foreground feature vector, and the background feature vector of a first target object whose similarity with the first target detection object is greater than or equal to the threshold into the corresponding determined track; a target object whose similarity with the first target detection object is smaller than the threshold may be treated as an other target object. After the determined tracks have been processed, the roadside system can perform similarity determination based on the second target detection objects of the uncertain tracks, i.e. judge whether the similarity between each second target detection object and the other target objects located in its corresponding target detection frame candidate region is greater than or equal to the threshold, and store information such as the target detection frame position, the foreground feature vector, and the background feature vector of a second target object whose similarity is greater than or equal to the threshold into the corresponding uncertain track. For example, suppose the tracks of vehicle A and vehicle B stored in the historical track library are determined tracks, the tracks of vehicle C and pedestrian D are uncertain tracks, the target objects in the candidate region of vehicle A are vehicle 1, vehicle 2, pedestrian 3, and pedestrian 4, and the target objects in the candidate region of vehicle B are vehicle 5 and vehicle 6. If the similarity between vehicle 2 and vehicle A is greater than the threshold and the similarity between vehicle 5 and vehicle B is greater than the threshold, the information of vehicle 2 can be stored in the track of vehicle A and the information of vehicle 5 in the track of vehicle B. If vehicle 1 is located in the candidate region of vehicle C and its similarity with vehicle C is greater than the threshold, the information of vehicle 1 is stored in the track of vehicle C. Pedestrian 3 and pedestrian 4 are located in the candidate region of pedestrian D, but their similarity with pedestrian D is smaller than the threshold, so pedestrian D, pedestrian 3, and pedestrian 4 may be specially marked.
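The determined-first, uncertain-second matching order described here could be sketched as follows; the track object with a candidate_region.contains() helper and the similar() callback are hypothetical conveniences rather than anything defined in the patent:

def match_in_candidate_regions(determined_tracks, uncertain_tracks, target_objects, similar):
    """Two-stage candidate-region matching: determined tracks first, then uncertain tracks.

    determined_tracks / uncertain_tracks: tracks carrying a candidate_region and stored features.
    target_objects: detections of the current frame (each with a position and features).
    similar(track, obj): True when the similarity is greater than or equal to the threshold.
    Returns (matched pairs, leftover target objects, unmatched tracks).
    """
    matches, unmatched_tracks = [], []
    remaining = list(target_objects)
    for stage in (determined_tracks, uncertain_tracks):
        for track in stage:
            # Only target objects inside this track's candidate region are considered.
            candidates = [o for o in remaining if track.candidate_region.contains(o)]
            hit = next((o for o in candidates if similar(track, o)), None)
            if hit is not None:
                matches.append((track, hit))
                remaining.remove(hit)     # a matched target object is removed from later stages
            else:
                unmatched_tracks.append(track)
    return matches, remaining, unmatched_tracks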
To make tracking more accurate, the remaining target objects that do not match the track, and the tracks that do not match the target objects, may be globally matched. The method in the embodiment of the specification can further comprise the following steps:
determining a third target object from the other target objects; the third target object represents a target object of which the similarity between the other target objects and each second detection object is smaller than a threshold value;
determining a first historical track from the historical tracks; the similarity between the detection object corresponding to the first historical track and each target object is smaller than a threshold value;
matching a third detection object corresponding to the first historical track with the third target objects by using the Hungarian algorithm to obtain a matching result; the third target objects include objects that are not located in the target detection frame candidate region of the third detection object;
if the matching result indicates that the third target object is matched with the third detection object, determining the third detection object and the third target object as the same object;
if the matching result indicates that the third detection object matched with the third target object does not exist, a new track corresponding to the third target object is established;
And if the matching result indicates that the third target object matched with the third detection object does not exist, marking the first historical track as an uncertain track.
In the embodiments of this specification, after the similarity determination for the first detection objects of the determined tracks and the second detection objects of the uncertain tracks, the roadside system may take the remaining detection objects of determined and uncertain tracks that were not matched to any target object, and take the target objects that were not matched to any detection object as third target objects. A third detection object may indicate a detection object that failed to be tracked within its candidate region; a third target object may indicate a target object whose similarity to every detection object whose candidate region contains it is smaller than the threshold.
In the embodiments of this specification, the Hungarian algorithm is the most common algorithm for bipartite graph matching; it solves the maximum matching of a bipartite graph using augmenting paths. A bipartite graph is a special class of graphs whose vertices can be divided into two parts, with no edges inside either part. The roadside system can take all third target objects as the first part and all third detection objects as the second part, with no connections inside either part, and match the two parts to obtain a maximum matching result; the information of each successfully matched third target object can then be stored in the track of the corresponding third detection object. A third target object that is not successfully matched is taken as a new detection object, and a new track is established for it. For a third detection object that is not successfully matched, it can be judged whether its tracking data for at least ten consecutive moments (including time T) are empty; if so, the track corresponding to that third target detection object is deleted. If not, it is judged whether its tracking data for at least two consecutive moments (including time T) are empty; if so, the corresponding track is marked as an uncertain track; if not, it remains marked as a determined track. In practical applications, when the third target objects and third detection objects in the whole image are matched by the Hungarian algorithm, the tracking result is more accurate, because after matching within the candidate regions few target objects and target detection objects remain, and the accuracy of the Hungarian algorithm is then higher.
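The global matching step could be sketched with scipy's linear_sum_assignment, which performs minimum-cost bipartite assignment (the Hungarian method); the cost function and gating threshold below are illustrative assumptions:

import numpy as np
from scipy.optimize import linear_sum_assignment

def global_match(third_objects, third_detections, cost_fn, max_cost=1.0):
    """Match remaining target objects against remaining tracks over the whole image.

    third_objects / third_detections: descriptors that failed candidate-region matching.
    cost_fn(obj, det): matching cost, e.g. an appearance feature-vector distance.
    Returns (matched pairs, unmatched object indices -> new tracks,
             unmatched detection indices -> tracks to be re-marked).
    """
    if not third_objects or not third_detections:
        return [], list(range(len(third_objects))), list(range(len(third_detections)))
    cost = np.array([[cost_fn(o, d) for d in third_detections] for o in third_objects])
    rows, cols = linear_sum_assignment(cost)              # minimum-cost bipartite matching
    pairs = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
    matched_objs = {r for r, _ in pairs}
    matched_dets = {c for _, c in pairs}
    new_tracks = [i for i in range(len(third_objects)) if i not in matched_objs]
    unmatched_tracks = [j for j in range(len(third_detections)) if j not in matched_dets]
    return pairs, new_tracks, unmatched_tracks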
To more clearly illustrate a method for tracking an object of a roadside system provided in the embodiments of the present disclosure, fig. 4 is a swim lane diagram of a method for tracking an object of a roadside system provided in the embodiments of the present disclosure. As shown in fig. 4, the method may include an image information processing stage, an object tracking stage, and a tracking result processing stage, and may specifically include:
step 402: and acquiring image information of a preset area.
The preset area in the embodiment of the present disclosure may represent an area monitored by the roadside system. The image information may be collected by a road side awareness device of the road side system.
Step 404: and identifying the image information by adopting a target detector to obtain detection frame image information corresponding to each target object.
Step 406: and dividing the image information of the detection frame to obtain a second appearance characteristic vector.
The target detector in the embodiments of this specification may be a 2D target detection model such as YOLO or Faster R-CNN. The detection frame image information may include the image information of the target object and may also include image information of other objects. Therefore, the detection frame image needs to be segmented into a background image and a foreground image with an instance segmentation model, and the foreground image and the background image are identified with the corresponding Re-ID models to obtain the second foreground feature vector and the second background feature vector of each target object in the whole picture, and thereby the second appearance feature vector of each target object.
Step 408: and the road side system acquires track information of each first detection object from the historical track library.
Step 410: and determining a first target detection frame candidate area of the first detection object according to the track information of the first detection object.
In this embodiment of the present disclosure, the first detection object represents a detection object corresponding to a determined track in the history track library. The first detection frame candidate region may be calculated according to a candidate region determination method for determining a trajectory.
Step 412: and determining the second appearance characteristic vector of the first target object in the first target detection frame candidate region according to the position information of the detection frame of each target object, the second appearance characteristic vector and the first target detection frame candidate region.
Step 414: and judging whether the similarity between the first detection object and the first target object is greater than or equal to a threshold value according to the first appearance feature vector in the track information of the first target detection object and the second appearance feature vector of the corresponding first target object.
The first detection object in the embodiment of the present specification makes a similarity determination with only the first target object within the range of its candidate region.
Step 416: and the road side system acquires track information of each second detection object from the historical track library.
Step 418: and determining a second target detection frame candidate region of the second detection object according to the track information of the second detection object.
The second detection object in the embodiment of the present disclosure represents a detection object corresponding to an uncertain track in the history track library. The second target detection frame candidate region may be calculated according to a candidate region determination method of an uncertain track. The calculation of the second target detection frame candidate region may be performed simultaneously with the calculation of the first target detection frame candidate region.
Step 420: and determining the second target object in the second target detection frame candidate area according to the position information of the detection frame of each second target object and the second target detection frame candidate area.
In the embodiment of the present disclosure, if the similarity between the first target object and the first detection object is smaller than the threshold value, the second target object included in each of the second target detection frame candidate areas may be determined from the first target objects whose similarity is smaller than the threshold value.
Step 422: and judging whether the similarity between the second detection object and the second target object is greater than or equal to a threshold value according to the appearance feature vector in the track information of the second detection object and the appearance feature vector of the corresponding second target object.
The second detection object in the embodiment of the present specification makes a similarity determination only with the second target object within the range of its candidate region.
Step 424: and matching the remaining successfully unmatched detection objects with the target objects by adopting a Hungary algorithm.
The remaining detection objects that are not successfully matched in the embodiment of the present disclosure represent a determined track that is not successfully matched and a detection object corresponding to the determined track. The remaining target objects that have not been successfully matched represent objects in the entire image that have not been successfully matched by the previous two matches. This matching is not limited to the candidate region range, but the matching of the whole map is performed.
Step 426: judging whether the matching is successful.
Step 428: if the matching is unsuccessful, judging whether it is the target object whose matching is unsuccessful.
Step 430: if it is not the target object whose matching is unsuccessful, sending the updated track information to the historical track library.
In the embodiment of the present disclosure, if the matching of the target object is unsuccessful, or if the matching of the detection object is unsuccessful, the track information may be updated correspondingly according to the number of consecutive moments when the tracking data in the track information of the detection object is null, for example: marking as an uncertain track, deleting track information, etc. Track information is updated for the successfully matched detection objects, and tracking data of the detection objects at the moment T can be stored in the track information.
Step 432: if it is a target object that failed to match, new track information is established and sent to the historical track library.
In the embodiment of the present disclosure, a target object that is not successfully matched may be treated as a new object: new track information is established for it, the tracking data of the object is stored in the new track information, and the new track is marked as an uncertain track.
By the above method, a plurality of objects can be tracked at the same time, which increases the tracking speed; the tracking range of each object is also narrowed, which reduces the frequency of ID switches of the tracked objects and improves the tracking accuracy.
Based on the same idea, the embodiment of the present specification also provides an apparatus corresponding to the above method. Fig. 5 is a schematic structural diagram of an apparatus for tracking an object of a roadside system according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus may include:
an information obtaining module 502, configured to obtain historical track information of a detection object; the historical track information is generated from information collected by road side sensing equipment of a road side system;
a first determining module 504, configured to determine a corresponding candidate region of the target detection frame according to the historical track information; the target detection frame candidate region represents a region where the detection object is estimated to appear;
An identifying module 506, configured to identify a target object in the target detection frame candidate region;
the segmentation module 508 is configured to segment, based on the target object, the image information of the target detection frame candidate region to obtain foreground image information and background image information corresponding to the target object; the foreground image information represents image information of the target object; the background image information represents image information of other objects occluding the target object;
a judging module 510, configured to judge, based on the foreground image information and the background image information, whether the similarity between the target object and the detection object is greater than or equal to a threshold;
a second determining module 512, configured to determine that the target object and the detection object are the same object if the similarity between the target object and the detection object is greater than or equal to a threshold;
a tracking identification adding module 514, configured to add the tracking identification corresponding to the detection object to the target object.
Based on the same idea, the embodiment of the present specification also provides equipment corresponding to the above method.
Fig. 6 is a schematic structural diagram of an apparatus for tracking an object of a roadside system according to an embodiment of the present disclosure. As shown in fig. 6, the apparatus 600 may include:
at least one processor 610; and,
a memory 630 communicatively coupled to the at least one processor; wherein,
the memory 630 stores instructions 620 executable by the at least one processor 610 to enable the at least one processor 610 to:
acquiring historical track information of a detection object; the historical track information is generated from information collected by road side sensing equipment of a road side system;
determining a corresponding target detection frame candidate region according to the historical track information; the target detection frame candidate region represents a region where the detection object is estimated to appear;
identifying a target object in the target detection frame candidate region;
dividing the image information of the target detection frame candidate region based on the target object to obtain foreground image information and background image information corresponding to the target object; the foreground image information represents image information of the target object; the background image information represents image information of other objects occluding the target object;
judging whether the similarity between the target object and the detection object is greater than or equal to a threshold value based on the foreground image information and the background image information;
If the similarity between the target object and the detection object is greater than or equal to a threshold value, determining that the target object and the detection object are the same object;
and adding the tracking identification corresponding to the detection object to the target object.
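The steps enumerated above compose into one per-frame pass. The following sketch shows the control flow only; the component functions are passed in as stand-ins and their names are hypothetical:

```python
def track_frame(image, tracks, predict_region, detect_in_region, segment_fg_bg,
                similarity, threshold=0.7):
    """One processing pass over a frame, following the enumerated steps (control-flow sketch only).

    predict_region(track) -> candidate region derived from the historical track information,
    detect_in_region(image, region) -> target objects identified in that region,
    segment_fg_bg(image, target) -> (foreground image information, background image information),
    similarity(track, fg, bg) -> similarity between the track's detection object and the target.
    """
    matched = []
    for track in tracks:
        region = predict_region(track)                     # candidate region from history
        for target in detect_in_region(image, region):     # target objects inside the region
            fg, bg = segment_fg_bg(image, target)          # foreground / background split
            if similarity(track, fg, bg) >= threshold:     # same object
                matched.append((track, target))            # target inherits the tracking identification
                break
    return matched
```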
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus shown in fig. 6 is described relatively briefly because it is substantially similar to the method embodiment; for relevant details, reference may be made to the corresponding parts of the description of the method embodiment.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (10)

1. A method for tracking objects for a roadside system, comprising:
acquiring historical track information of a detection object; the historical track information is generated from information collected by road side sensing equipment of a road side system;
determining a corresponding target detection frame candidate region according to the historical track information; the target detection frame candidate region represents a region where the detection object is estimated to appear;
identifying a target object in the target detection frame candidate region;
dividing the image information of the target detection frame candidate region based on the target object to obtain foreground image information and background image information corresponding to the target object; the foreground image information represents image information of the target object; the background image information represents image information of other objects occluding the target object;
Judging whether the similarity between the target object and the detection object is greater than or equal to a threshold value based on the foreground image information and the background image information;
if the similarity between the target object and the detection object is greater than or equal to a threshold value, determining that the target object and the detection object are the same object;
and adding the tracking identification corresponding to the detection object to the target object.
2. The method according to claim 1, wherein the determining the corresponding target detection frame candidate region specifically includes:
based on the historical track information, predicting a first center point and a second center point of a target detection frame of the detection object by adopting Kalman filtering; the first center point represents the center point of a target detection frame of the detection object at a first moment; the second center point represents the center point of a target detection frame of the detection object at a second moment; the first time and the second time are adjacent times;
calculating a first region radius according to the first center point and the second center point;
and determining the target detection frame candidate region according to the first region radius.
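For illustration only and not as part of the claim, the radius of claim 2 can be sketched as the distance between the two predicted center points, enlarged by an assumed margin; the constant-velocity step below stands in for the Kalman filter prediction:

```python
import math

def predict_next_center(center, velocity, dt=1.0):
    """Constant-velocity prediction step, a stand-in for the Kalman filter prediction."""
    return (center[0] + velocity[0] * dt, center[1] + velocity[1] * dt)

def region_radius_adjacent(first_center, second_center, margin=1.5):
    """Radius derived from the predicted centers at two adjacent moments (claim 2 style).

    margin is an assumed scale factor that enlarges the region to tolerate prediction error.
    """
    dx = second_center[0] - first_center[0]
    dy = second_center[1] - first_center[1]
    return margin * math.hypot(dx, dy)
```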
3. The method according to claim 1, wherein the determining the corresponding target detection frame candidate region specifically includes:
Based on the historical track information, predicting a third center point and a fourth center point of a target detection frame of the detection object by adopting Kalman filtering; the third center point represents the center point of the target detection frame of the detection object at a third moment; the fourth center point represents the center point of the target detection frame of the detection object at a fourth moment; the third time and the fourth time are non-adjacent times separated by a time unit;
acquiring the width and the height of the target detection frame;
calculating a second region radius based on the third center point, the fourth center point, the width, and the height;
and determining the target detection frame candidate region according to the radius of the second region.
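Likewise for illustration only, a radius in the style of claim 3 can combine the displacement between the two non-adjacent centers with the size of the detection frame; the exact combination rule below (adding half the frame diagonal) is an assumption:

```python
import math

def region_radius_spaced(third_center, fourth_center, width, height, margin=1.0):
    """Radius from centers at non-adjacent moments plus the detection-frame size (claim 3 style)."""
    dx = fourth_center[0] - third_center[0]
    dy = fourth_center[1] - third_center[1]
    return margin * (math.hypot(dx, dy) + 0.5 * math.hypot(width, height))
```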
4. A method according to claim 2 or 3, characterized in that the method further comprises:
judging whether tracking data corresponding to at least two continuous moments are empty or not in the historical track information;
if the historical track information does not contain tracking data that is empty for at least two continuous moments, determining the historical track corresponding to the historical track information as a determined track;
and if the historical track information contains tracking data that is empty for at least two continuous moments, determining the historical track corresponding to the historical track information as an uncertain track.
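A minimal sketch of the track classification in claim 4, assuming the tracking data is stored as a per-moment list with None where no data was recorded:

```python
def classify_track(tracking_data):
    """Return 'uncertain' if the data is empty at two or more consecutive moments, else 'determined'."""
    consecutive_empty = 0
    for record in tracking_data:
        consecutive_empty = consecutive_empty + 1 if record is None else 0
        if consecutive_empty >= 2:
            return "uncertain"
    return "determined"
```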
5. The method of claim 1, wherein the historical track information includes at least first appearance feature vector information of the detection object; the step of judging whether the similarity between the target object and the detection object is greater than or equal to a threshold value based on the foreground image information and the background image information specifically comprises the following steps:
determining second appearance feature vector information of the target object based on the foreground image information and the background image information;
and judging whether the appearance similarity of the detection object and the target object is greater than or equal to a threshold value according to the first appearance feature vector information and the second appearance feature vector information.
6. The method of claim 5, wherein the first appearance feature vector information comprises first foreground feature vector information and first background feature vector information of the detection object, and the second appearance feature vector information comprises second foreground feature vector information and second background feature vector information;
the determining, based on the foreground image information and the background image information, second appearance feature vector information of the target object specifically includes:
Identifying the foreground image information by adopting a first Re-ID model to obtain second foreground feature vector information;
identifying the background image information by adopting a second Re-ID model to obtain second background feature vector information;
the step of judging whether the similarity between the target object and the detection object is greater than or equal to a threshold value based on the foreground image information and the background image information specifically comprises the following steps:
calculating a first feature vector distance based on the first foreground feature vector information and the second foreground feature vector information;
calculating a second feature vector distance based on the first background feature vector information and the second background feature vector information;
weighting and summing the first feature vector distance and the second feature vector distance to obtain a summation result;
and judging whether the summation result is greater than or equal to the threshold value.
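For illustration only, the weighted summation of claim 6 can be sketched as below; the Euclidean distance, the 0.7/0.3 weights, and the comparison direction (kept as written in the claim, although with raw distances a smaller value would normally indicate a better match) are assumptions:

```python
import numpy as np

def weighted_feature_distance(first_fg, first_bg, second_fg, second_bg, w_fg=0.7, w_bg=0.3):
    """Weighted sum of the foreground and background feature-vector distances (claim 6 style)."""
    d_fg = np.linalg.norm(np.asarray(first_fg, dtype=float) - np.asarray(second_fg, dtype=float))
    d_bg = np.linalg.norm(np.asarray(first_bg, dtype=float) - np.asarray(second_bg, dtype=float))
    return w_fg * d_fg + w_bg * d_bg

def summation_reaches_threshold(summation_result, threshold):
    """Judge whether the summation result is greater than or equal to the threshold."""
    return summation_result >= threshold
```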
7. The method according to claim 1, wherein the detection object is any object in a detection object set, and the detection object set includes a plurality of first detection objects whose historical tracks are determined tracks and a plurality of second detection objects whose historical tracks are uncertain tracks; if a plurality of target objects exist in the target detection frame candidate regions corresponding to the detection objects;
judging whether the similarity between the target object and the detection object is greater than or equal to a threshold value; if the similarity between the target object and the detection object is greater than or equal to a threshold, determining that the target object and the detection object are the same object specifically includes:
judging whether the similarity between each target object and each first detection object is greater than or equal to a threshold value;
if the similarity between the first target object and the first target detection object is greater than or equal to a threshold value, determining the first target object and the first target detection object as the same object; the first target object is any object in each target object, and the first target detection object is any object in each first detection object;
judging whether the similarity between other target objects and each second detection object is greater than or equal to a threshold value; the other target objects are the target objects, among the target objects, whose similarity with each first detection object is smaller than the threshold value;
if the similarity between the second target object and the second target detection object is greater than or equal to a threshold value, determining the second target object and the second target detection object as the same object; the second target object is any object in other target objects, and the second target detection object is any object in the second detection objects.
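The two-stage association of claim 7 can be sketched as a greedy pass over determined tracks first and uncertain tracks second; the greedy best-match selection (rather than a strict one-to-one assignment) and the similarity callable are simplifying assumptions:

```python
def hierarchical_match(targets, determined_tracks, uncertain_tracks, similarity, threshold=0.7):
    """Match targets against determined tracks first, then leftovers against uncertain tracks.

    similarity(track, target) -> float stands in for the candidate-region-gated appearance
    comparison described above. Returns (matches, still_unmatched_targets).
    """
    matches, remaining = [], list(targets)
    for tracks in (determined_tracks, uncertain_tracks):
        still_unmatched = []
        for target in remaining:
            best = max(tracks, key=lambda t: similarity(t, target), default=None)
            if best is not None and similarity(best, target) >= threshold:
                matches.append((best, target))
            else:
                still_unmatched.append(target)
        remaining = still_unmatched
    return matches, remaining
```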
8. The method of claim 7, wherein the method further comprises:
determining a third target object from the other target objects; the third target object represents a target object of which the similarity between the other target objects and each second detection object is smaller than a threshold value;
determining a first historical track from the historical tracks; the similarity between the detection object corresponding to the first historical track and each target object is smaller than a threshold value;
matching a third detection object corresponding to the first historical track with the third target object by adopting a Hungarian algorithm to obtain a matching result; the third target object includes an object that is not located in a target detection frame candidate region of the third detection object;
If the matching result indicates that the third target object is matched with the third detection object, determining the third detection object and the third target object as the same object;
if the matching result indicates that the third detection object matched with the third target object does not exist, a new track corresponding to the third target object is established;
and if the matching result indicates that the third target object matched with the third detection object does not exist, marking the first historical track as an uncertain track.
9. An apparatus for tracking objects for a roadside system, comprising:
the information acquisition module is used for acquiring historical track information of the detection object; the historical track information is generated from information collected by road side sensing equipment of a road side system;
the first determining module is used for determining a corresponding target detection frame candidate area according to the historical track information; the target detection frame candidate region represents a region where the detection object is estimated to appear;
the identification module is used for identifying the target object in the target detection frame candidate area;
the segmentation module is used for segmenting the image information of the target detection frame candidate region based on the target object to obtain foreground image information and background image information corresponding to the target object; the foreground image information represents image information of the target object; the background image information represents image information of other objects occluding the target object;
The judging module is used for judging whether the similarity between the target object and the detection object is larger than or equal to a threshold value or not based on the foreground image information and the background image information;
the second determining module is used for determining that the target object and the detection object are the same object if the similarity between the target object and the detection object is greater than or equal to a threshold value;
and the tracking identification adding module is used for adding the tracking identification corresponding to the detection object into the target object.
10. An apparatus for tracking objects for a roadside system, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring historical track information of a detection object; the historical track information is generated from information collected by road side sensing equipment of a road side system;
determining a corresponding target detection frame candidate region according to the historical track information; the target detection frame candidate region represents a region where the detection object is estimated to appear;
Identifying a target object in the target detection frame candidate region;
dividing the image information of the target detection frame candidate region based on the target object to obtain foreground image information and background image information corresponding to the target object; the foreground image information represents image information of the target object; the background image information represents image information of other objects occluding the target object;
judging whether the similarity between the target object and the detection object is greater than or equal to a threshold value based on the foreground image information and the background image information;
if the similarity between the target object and the detection object is greater than or equal to a threshold value, determining that the target object and the detection object are the same object;
and adding the tracking identification corresponding to the detection object to the target object.
CN202310655876.9A 2023-06-05 2023-06-05 Method, device and equipment for tracking object of road side system Pending CN117095030A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310655876.9A CN117095030A (en) 2023-06-05 2023-06-05 Method, device and equipment for tracking object of road side system

Publications (1)

Publication Number: CN117095030A
Publication Date: 2023-11-21

Family ID: 88778007

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination