CN114333409B - Target tracking method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114333409B
CN114333409B (application CN202111664378.8A)
Authority
CN
China
Prior art keywords
target, video, vehicle, determining, acquired
Prior art date
Legal status
Active
Application number
CN202111664378.8A
Other languages
Chinese (zh)
Other versions
CN114333409A (en)
Inventor
师小凯 (Shi Xiaokai)
唐俊 (Tang Jun)
Current Assignee
Beijing Elite Road Technology Co ltd
Original Assignee
Beijing Elite Road Technology Co ltd
Application filed by Beijing Elite Road Technology Co ltd filed Critical Beijing Elite Road Technology Co ltd
Priority to CN202111664378.8A
Publication of CN114333409A
Application granted
Publication of CN114333409B

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure provides a target tracking method, a target tracking device, electronic equipment and a storage medium, and relates to the technical field of intelligent transportation. The method comprises the following steps: acquiring a first video acquired by first video acquisition equipment, and determining vehicle information of a target vehicle in the first video; determining a corresponding tracking target of the target vehicle in a second video acquired by second video acquisition equipment based on predetermined position transformation information and the vehicle information; and establishing an association relation between the vehicle information and the tracking target, and determining the parking information of the target vehicle according to the association relation. According to the technical scheme, the inaccurate timing of targets in a parking space caused by the inability to track targets continuously while the first video acquisition equipment rotates, and the service-life loss caused by frequent rotation of the first video acquisition equipment, can be avoided, and the stability of an on-road parking system and the accuracy of parking data can be improved.

Description

Target tracking method, device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of intelligent transportation, in particular to a target tracking method, a target tracking device, electronic equipment and a storage medium.
Background
With the development of the economy in China, the number of vehicles in use has been increasing continuously, and to alleviate the parking problem, on-road parking systems have been developed and rapidly popularized. An on-road parking system comprises two video acquisition devices, namely a gun camera and a dome camera, which are used for acquiring road video in real time and monitoring parking-space timing.
In on-road parking systems in the prior art, because the monitoring field of view of the gun camera is large, license plate information cannot be seen clearly, so the dome camera needs to be mobilized to identify the license plate when a vehicle enters a parking space. The dome camera is thus rotated frequently, which greatly reduces its service life; moreover, during the rotation of the dome camera, tracking is interrupted, the target cannot be continuously tracked, and the timing of the target in the parking space becomes inaccurate.
Disclosure of Invention
The disclosure provides a target tracking method, a target tracking device, electronic equipment and a storage medium.
According to an aspect of the present disclosure, there is provided a target tracking method including:
acquiring a first video acquired by first video acquisition equipment, and determining vehicle information of a target vehicle in the first video;
determining a corresponding tracking target of the target vehicle in a second video acquired by second video acquisition equipment based on the predetermined position transformation information and vehicle information;
establishing an association relation between the vehicle information and the tracking target, and determining parking information of the target vehicle according to the association relation;
the acquisition range of the first video acquisition device intersects the acquisition range of the second video acquisition device.
According to another aspect of the present disclosure, there is provided a target tracking apparatus including:
the acquisition module is used for acquiring a first video acquired by the first video acquisition equipment and determining vehicle information of a target vehicle in the first video;
the first determining module is used for determining a tracking target corresponding to the target vehicle in the second video acquired by the second video acquisition equipment based on the predetermined position transformation information and the vehicle information;
the second determining module is used for establishing an association relation between the vehicle information and the tracking target and determining parking information of the target vehicle according to the association relation;
the acquisition range of the first video acquisition device intersects the acquisition range of the second video acquisition device.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method in any of the embodiments of the present disclosure.
According to the target tracking method, the device, the electronic equipment and the storage medium, the parking information of the target vehicle is determined through the association relation between the vehicle information of the target vehicle in the first video acquired by the first video acquisition equipment and the corresponding tracking target in the second video acquired by the second video acquisition equipment. The service-life loss caused by frequent rotation of the first video acquisition equipment, and the inaccurate target timing caused by the inability to track the target in the parking space continuously during the rotation of the first video acquisition equipment, are avoided, and the stability of the on-road parking system and the accuracy of parking data can be improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a target tracking method according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a target tracking method according to an embodiment of the disclosure;
FIG. 3 is a flow chart of a method of target tracking in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a target tracking apparatus according to an embodiment of the disclosure;
fig. 5 is a block diagram of an electronic device for implementing the target tracking method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The present disclosure provides a target tracking method, and fig. 1 is a flowchart of a target tracking method according to an embodiment of the present disclosure. The method may be applied to a target tracking apparatus, which may, for example, be deployed in a terminal, a server, or other processing device for execution. In some possible implementations, the method may also be implemented by a processor invoking computer-readable instructions stored in a memory. As shown in fig. 1, the method includes:
Step S101, acquiring a first video acquired by first video acquisition equipment, and determining vehicle information of a target vehicle in the first video;
step S102, determining a corresponding tracking target of a target vehicle in a second video acquired by second video acquisition equipment based on predetermined position transformation information and vehicle information;
step S103, establishing an association relation between the vehicle information and the tracking target, and determining parking information of the target vehicle according to the association relation;
the acquisition range of the first video acquisition device intersects the acquisition range of the second video acquisition device.
In the embodiment of the disclosure, because the acquisition range of the first video acquisition device and the acquisition range of the second video acquisition device intersect, cross-device target tracking can be realized based on the images acquired in the intersection of the two devices. In one embodiment, the first video acquisition device is, for example, a dome camera in an on-road parking system, and the second video acquisition device is, for example, a gun camera in an on-road parking system. Compared with the gun camera, the acquisition range of the dome camera is small while that of the gun camera is large; the focal length of the dome camera is longer than that of the gun camera, so a target in the video acquired by the dome camera appears larger than in the video acquired by the gun camera, and license plate information of a vehicle can be obtained from the video acquired by the dome camera. The dome camera can rotate in different directions according to control instructions to acquire images of a vehicle, while the acquisition direction of the gun camera is fixed.
Besides the dome camera and the gun camera, the on-road parking system further comprises a computing module; the gun camera and the dome camera acquire video streams near an on-road parking space in real time, and data calculation and analysis are carried out by the computing module, which can be realized by local terminal equipment or a remote server. In the embodiment of the disclosure, in order to process data in real time and reduce processing latency, terminal equipment is taken as the execution body, and data processing is performed by the local terminal equipment.
The vehicle information of the target vehicle may include, but is not limited to, a vehicle position, license plate information, a vehicle traveling direction, a traveling speed, the time at which the vehicle information was determined, and the like. The position transformation information may be a perspective transformation matrix from the position of the target vehicle in the image acquired by the dome camera to the position of the tracking target in the image acquired by the gun camera.
Establishing an association relation between the vehicle information and the tracking target may optionally be establishing an association relation between the license plate identifier in the vehicle information and the tracking identifier of the tracking target. By establishing this association relation, the vehicle information of each target in the video acquired by the gun camera can be known, so that the parking information of the target vehicle is determined without frequently controlling the rotation of the dome camera to acquire vehicle information. In the related art, the rotation of the dome camera is frequently controlled; that rotation has mechanical delay, and by the time the dome camera has rotated to the corresponding position, the target's license plate may be occluded, so it cannot be recognized. In the embodiment of the disclosure, the target video is acquired by the gun camera, and the license plate information of the target can be obtained according to the correspondence relation, so the problem of an occluded, unrecognizable license plate during the rotation of the dome camera is avoided.
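The association step above can be sketched as follows (a minimal illustration; the data layout, field names, and function names are assumptions, not the patented implementation): the license plate recognized in the dome-camera video is stored against the tracking identifier of the corresponding gun-camera target, so that vehicle information can later be looked up without rotating the dome camera again.

```python
# Hypothetical association table: gun-camera tracking identifier -> vehicle info.
associations = {}

def associate(track_id, vehicle_info):
    """Bind a gun-camera tracking target to the recognized vehicle information."""
    associations[track_id] = vehicle_info

def vehicle_info_for(track_id):
    """Look up vehicle info for a tracked target; None if not yet associated."""
    return associations.get(track_id)

# Example: plate recognized by the dome camera, target 17 tracked by the gun camera.
associate(17, {"plate": "A12345", "speed_kmh": 20})
```

Once the association exists, any later event on gun-camera target 17 (e.g. leaving the parking space) can be attributed to plate "A12345" directly.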
According to the target tracking method, the parking information of the target vehicle is determined through the association relation between the vehicle information of the target vehicle in the first video acquired by the first video acquisition equipment and the corresponding tracking target in the second video acquired by the second video acquisition equipment. The service-life loss caused by frequent rotation of the first video acquisition equipment, and the inaccurate target timing caused by the inability to track the target in the parking space continuously during the rotation of the first video acquisition equipment, are avoided, and the stability of the on-road parking system and the accuracy of parking data can be improved.
The specific manner of predetermining the position transformation information is as follows:
in one possible implementation, the target tracking method further includes:
determining the positions of a plurality of first key points in the region corresponding to the parking spaces in the video acquired by the first video acquisition equipment;
determining the positions of second key points corresponding to the first key points in the video acquired by the second video acquisition equipment according to the parking space identification of the parking space and the positions of the first key points;
and determining position transformation information according to the positions of the plurality of first key points and the positions of the corresponding second key points.
The parking-space corresponding area in the video acquired by the first video acquisition equipment comprises the parking space area in the video image and at least one area of a vehicle parked in the parking space area. For example, the position of one first key point is determined on the lane line of parking space A1, the position of one first key point is determined on the left headlight of vehicle B1 parked in parking space A1, and first key points are determined at the upper-left corner and the lower-right corner of the front windshield of vehicle B2 parked in parking space A2, so that the positions of four first key points are obtained. First key points on a vehicle body can be extracted by a segmentation algorithm, and first key points on a lane line can be extracted by a lane-line detection algorithm.
And determining the position of a second key point corresponding to each first key point in the video acquired by the second video acquisition equipment according to the parking space identification and the position corresponding to each first key point, and calculating a perspective transformation matrix according to the position of each first key point and the position of the corresponding second key point to serve as position transformation information.
The parking space identification can be identification information of different parking spaces which are configured in advance in an image acquired by the dome camera and an image acquired by the gun camera, or can be a parking space number of each parking space displayed in the acquired image. In the image collected by the dome camera and the image collected by the gun camera, the parking space identification of the same parking space is the same.
In the embodiment of the disclosure, key points are determined and calibrated in the overlapping area of the monitoring fields of view of the dome camera and the gun camera, so that the key points in the images acquired by the two devices correspond one to one; the mutual position-transformation matrix between the dome camera and the gun camera is obtained as the position transformation information, through which targets can be transformed between the dome-camera view and the gun-camera view.
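With four matched key-point pairs, the perspective transformation matrix can be fitted as a standard homography estimate. The sketch below solves the 8-unknown linear system in pure Python for illustration (in practice a library routine such as OpenCV's getPerspectiveTransform would be used; the function names here are assumptions):

```python
def solve_homography(src, dst):
    """Fit H (3x3, H[2][2] = 1) so that each src point maps to its dst point."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    n = 8
    # Gaussian elimination with partial pivoting on the 8x8 system A.h = b.
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (b[r] - sum(A[r][k] * h[k] for k in range(r + 1, n))) / A[r][r]
    return [h[0:3], h[3:6], [h[6], h[7], 1.0]]

def apply_homography(H, pt):
    """Map one point through H using homogeneous coordinates."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

For instance, mapping the unit square onto a square of side 2 yields a pure scaling matrix, and any interior point transforms accordingly.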
In one possible implementation manner, acquiring a first video acquired by a first video acquisition device, and determining vehicle information of a target vehicle in the first video includes:
acquiring a first video acquired by a first video acquisition device, and if the number of times of occurrence of the target vehicle in the first video exceeds a preset threshold value and the position of the target vehicle in the first video meets a preset position condition, determining the vehicle information of the target vehicle in the first video.
In practical application, since the first video is acquired in real time, multiple vehicles may appear in it when determining the target vehicle. Some vehicles merely pass by temporarily and are not going to park in a parking space, so their vehicle information need not be determined. When the number of occurrences of a vehicle exceeds the preset threshold, that is, the vehicle appears in multiple frames of images, and the position of the vehicle is in the algorithm-analysis area, it indicates that the vehicle may need to park in a parking space; the vehicle is then taken as the target vehicle and its vehicle information is determined, for example by recognizing the license plate to obtain license plate information and determining the position of the vehicle.
In the embodiment of the disclosure, the vehicle information of the target vehicle is determined by configuring the preset threshold value and the preset position condition, so that the waste of calculation resources caused by determining the vehicle information for the vehicle which does not need to stop is avoided.
In one possible implementation manner, determining a tracking target corresponding to the target vehicle in the second video acquired by the second video acquisition device based on the predetermined position transformation information and the vehicle information includes:
determining a predicted position of the target vehicle in a second video acquired by the second video acquisition device based on the position transformation information, the vehicle position in the vehicle information and the driving information which are acquired in advance;
based on the predicted locations and the locations of the plurality of targets in the second video, a corresponding tracked target of the target vehicle in the second video is determined.
In practical applications, the vehicle information may include a vehicle position and travel information, and the travel information may include a travel direction, a travel speed, and a travel time. According to the vehicle position of the target vehicle and the position transformation information, the position of the target vehicle in the second video can be obtained. Since the position of the target vehicle changes while it travels, a travel offset is determined according to the travel information and added to the position in the second video to obtain the predicted position, which is taken as the final position of the target vehicle corresponding to the second video. The second video includes multiple targets; if the bounding box of the predicted position of the target vehicle intersects the bounding boxes of several targets, the target with the largest intersection can be taken as the tracking target corresponding to the target vehicle in the second video.
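The prediction step can be sketched as follows (a hedged illustration; the function and parameter names are assumptions): the vehicle position observed by the dome camera is first mapped into gun-camera coordinates via the pre-computed position transformation, then shifted by the displacement travelled since the observation.

```python
def predict_position(pos, to_gun_view, direction, speed, elapsed):
    """Predict where the vehicle appears in the gun-camera view.

    pos: (x, y) in the dome-camera view.
    to_gun_view: callable applying the position transformation between views.
    direction: unit travel-direction vector; speed: units per second;
    elapsed: seconds since the vehicle information was determined.
    """
    x, y = to_gun_view(pos)
    dx, dy = direction
    # Add the travel offset accumulated since the observation.
    return (x + dx * speed * elapsed, y + dy * speed * elapsed)
```

With an identity transformation and a vehicle moving along x at 2 units/s for 3 s, the point (3, 4) is predicted at (9, 4).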
In the embodiment of the disclosure, the tracking target of the target vehicle in the second video is determined based on the position transformation information, the vehicle position and the driving information, the calculation process is simple, and the calculation result is accurate.
In one possible implementation, determining a corresponding tracking target of the target vehicle in the second video based on the predicted position and the position of each target in the second video includes:
determining a target similarity between the target vehicle and the matched plurality of targets in the case that the predicted position matches the positions of the plurality of targets in the second video;
and determining a corresponding tracking target of the target vehicle in the second video based on the target similarity.
In practical applications, matching between the predicted position and the positions of the multiple targets in the second video may mean that the bounding boxes intersect; by first checking whether the predicted position intersects the position of each target in the second video and filtering out targets without intersection, the calculation amount is reduced. For the targets that intersect the predicted position, the similarity between the target vehicle and each such target is calculated to obtain multiple target similarities, and the target with the highest similarity is determined as the tracking target corresponding to the target vehicle in the second video.
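The filter-then-rank logic above can be sketched as follows (an illustrative assumption: boxes are (x1, y1, x2, y2) tuples, and `similarity` is any caller-supplied appearance-similarity function, e.g. a feature-embedding comparison):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_target(pred_box, candidates, similarity):
    """candidates: {track_id: box}. Filter out non-intersecting targets first,
    then return the overlapping target with the highest similarity score
    (None if nothing overlaps the predicted box)."""
    overlapping = [tid for tid, box in candidates.items()
                   if iou(pred_box, box) > 0]
    if not overlapping:
        return None
    return max(overlapping, key=similarity)
```

The cheap geometric filter runs on every candidate, while the (typically more expensive) similarity function runs only on the survivors.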
In the embodiment of the disclosure, under the condition that the predicted positions are matched with the positions of the plurality of targets in the second video, the similarity between the target vehicle and the targets with matched positions is further calculated, and the tracking targets corresponding to the target vehicle are determined through the similarity, so that the accuracy of the calculation result can be improved.
In one possible implementation manner, determining parking information of the target vehicle according to the association relationship includes:
determining a parking start time and a parking end time corresponding to the tracking target based on the running state of the tracking target in the second video;
and determining the parking information of the target vehicle according to the association relation, the parking start time and the parking end time.
In practical application, the parking start time and the parking end time corresponding to the tracking target can be determined from the time the tracking target enters the parking space and the time it leaves the parking space in the second video. The vehicle information of the target vehicle corresponding to the tracking target is obtained according to the association relation, and the parking information is determined according to the vehicle information, the parking start time and the parking end time. The parking information may be a parking-timing order, which is sent to a server of a parking-timing platform and may be issued by the server to the user of the target vehicle.
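Assembling the parking information can be sketched as follows (the field names and order format here are illustrative assumptions; in the described system the resulting order is sent to the parking-timing platform's server):

```python
from datetime import datetime

def build_parking_order(vehicle_info, start, end):
    """Combine the associated vehicle info with the parking start/end times."""
    return {
        "plate": vehicle_info["plate"],
        "start": start.isoformat(),
        "end": end.isoformat(),
        # Billable duration in whole minutes.
        "duration_min": int((end - start).total_seconds() // 60),
    }
```

For example, a stay from 08:00 to 09:30 produces an order with a 90-minute duration.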
In the embodiment of the disclosure, after the vehicle information of the target vehicle is determined from the video acquired by the first video acquisition equipment, the vehicle information corresponding to the tracking target in the second video can be obtained according to the association relation. When the vehicle leaves the parking space, the first video acquisition equipment does not need to be rotated again to acquire the vehicle information, avoiding the service-life loss caused by frequent rotation of the first video acquisition equipment.
In one possible implementation, the method further includes:
and under the condition that the tracking target is associated with the vehicle information, comparing the similarity of the vehicle corresponding to the associated vehicle information and the tracking target with the similarity of the target vehicle and the tracking target, and determining whether to update the vehicle information associated with the tracking target according to the comparison result.
In practical application, whether the tracking target is associated with the vehicle information is determined, and if the tracking target is not associated with the vehicle information, the vehicle information of the target vehicle is associated with the tracking target.
If the tracking target already has associated vehicle information, the stored similarity between the vehicle corresponding to the associated vehicle information and the tracking target is compared with the calculated similarity between the target vehicle and the tracking target. If the former is smaller than the latter, the vehicle information associated with the tracking target is updated, and the vehicle information of the target vehicle is associated with the tracking target.
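The update rule above can be sketched as follows (the data layout is an assumption): each tracking target stores its associated vehicle information together with the similarity that produced the association, and a new candidate replaces it only when its similarity is higher.

```python
def maybe_update_association(assoc, track_id, vehicle_info, similarity):
    """assoc: {track_id: (vehicle_info, similarity)}.

    Associate if no entry exists, or replace the entry if the new
    similarity is strictly higher. Returns True when assoc changed.
    """
    current = assoc.get(track_id)
    if current is None or similarity > current[1]:
        assoc[track_id] = (vehicle_info, similarity)
        return True
    return False
```

Keeping the similarity alongside the vehicle info is what makes the later comparison possible without recomputing the original match.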
In the embodiment of the disclosure, the association accuracy of the target vehicle and the tracking target can be improved by updating the vehicle information associated with the tracking target.
In one possible implementation, the method further includes:
generating a rotation control instruction to control the first video acquisition device to rotate from a preset position to a target position to acquire a third video under the condition that any vehicle in the second video meets the preset condition;
and after the third video acquisition is completed, controlling the first video acquisition equipment to return to the preset position.
In practical application, the acquisition direction of the first video acquisition device can be configured, so that the first video acquisition device acquires video at a preset position according to the acquisition direction; when a snapshot task exists, the first video acquisition device is controlled to rotate to a target position to acquire video. The duration of the snapshot task may be about 15 seconds: for example, 5 seconds for the first video acquisition device to rotate to the target position, 5 seconds to acquire video at the target position, and 5 seconds to rotate from the target position back to the preset position. The duration of the snapshot task may be configured according to specific needs, which is not limited by the present disclosure.
When any vehicle in the second video meets the preset condition, a snapshot task is generated. The preset condition can be any condition under which the first video acquisition device needs to be controlled to acquire vehicle information. The first video acquisition device is then controlled to rotate from the preset position to the position of the vehicle corresponding to the snapshot task to acquire video and determine the vehicle information, for example by identifying the license plate; after the snapshot task is completed, the first video acquisition device is controlled to rotate back to the preset position.
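The snapshot-task control flow can be sketched as follows (the camera API here is entirely hypothetical; a real dome camera would be driven through its PTZ control protocol): rotate to the target position, capture, then return to the preset position.

```python
class DomeCamera:
    """Minimal stand-in that records the positions the camera was driven to."""
    def __init__(self, preset):
        self.preset = preset
        self.position = preset
        self.history = [preset]

    def rotate_to(self, position):
        self.position = position
        self.history.append(position)

    def capture(self):
        return f"video@{self.position}"

def run_snapshot_task(camera, target_position):
    """Execute one snapshot task: rotate out, capture, rotate back."""
    camera.rotate_to(target_position)
    video = camera.capture()
    camera.rotate_to(camera.preset)  # return once the task completes
    return video
```

The invariant worth noting is that the camera always ends a task back at its preset position, so the gun camera's preset-position tracking can resume.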
In the embodiment of the disclosure, when any vehicle in the second video meets the preset condition, the first video acquisition device can be controlled to rotate to the target position to acquire video, and after the acquisition task is completed it rotates back to the preset position, realizing the cooperative work of the first video acquisition device and the second video acquisition device.
In one possible implementation, the method further includes:
determining whether the targets acquired twice in the same parking space at the preset position are the same vehicle, and if so, generating the parking information corresponding to the vehicle after the parking end time of the vehicle is determined.
In practical application, when the first video acquisition device acquiring video at the preset position rotates to the target position to execute a snapshot task, the tracking of targets at the preset position is interrupted; a vehicle in a parking space at the preset position may have left, and a new vehicle may have entered the same parking space. Therefore, it must be determined whether the targets acquired twice in the same parking space at the preset position are the same vehicle. If they are the same vehicle, the target is tracked continuously until it leaves the parking space; the parking end time of the vehicle is determined, the parking information of the vehicle is generated and sent to the server, and the server sends a parking order for the vehicle to the user terminal.
If the targets acquired twice in the same parking space at the preset position are not the same vehicle, the data are cut off, and the new target is tracked anew.
In the embodiment of the disclosure, whether to continue tracking the target is decided by determining whether the targets acquired twice in the same parking space at the preset position are the same vehicle, so that generating two parking orders for the same vehicle can be avoided and user experience is improved.
In one possible implementation manner, determining whether the target acquired twice in the same parking space at the preset position is the same vehicle includes:
for a first target in a video acquired at a preset position for the first time, acquiring a first target identification of a corresponding target in the video acquired by second video acquisition equipment;
for a second target in the video acquired at the preset position for the second time, acquiring a second target identifier of a corresponding target in the video acquired by the second video acquisition equipment;
comparing the first target identifier with the second target identifier;
if the first target identifier is matched with the second target identifier, determining that targets acquired by the same parking space at the preset position twice are the same vehicle.
In practical application, for the first target in the video acquired at the preset position for the first time, the first target is mapped, according to the position transformation information, into the video acquired by the second video acquisition device to obtain the first target identifier of the corresponding target. The second target identifier of the target corresponding to the second target in the video acquired by the second video acquisition device is obtained in the same way. The first target identifier is then compared with the second target identifier; if they are the same, it is determined that the targets acquired twice in the same parking space at the preset position are the same vehicle.
In the embodiment of the disclosure, whether the targets acquired twice in the same parking space at the preset position are the same vehicle is determined by checking whether their corresponding target identifiers in the video acquired by the second video acquisition device are the same, which yields a highly accurate result.
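The identifier comparison described above can be sketched as follows; this is a minimal illustration, and the function name and data layout are assumptions rather than part of the disclosed system:

```python
def same_vehicle_by_id(first_id, second_id):
    """Compare the target identifiers obtained by mapping the two captures
    of the same parking space into the second (gun-camera) video."""
    return first_id is not None and first_id == second_id

# Before and after the snapshot task, the same parking space maps to the
# same tracking identifier in the gun-camera video -> same vehicle.
assert same_vehicle_by_id("track_17", "track_17") is True
assert same_vehicle_by_id("track_17", "track_42") is False
```

Because the gun camera never rotates, its tracking identifiers persist through the dome camera's snapshot task, which is what makes this comparison meaningful.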
In one possible implementation manner, determining whether the targets acquired twice in the same parking space at the preset position are the same vehicle includes:
for a first target in the video acquired at the preset position for the first time, acquiring a first target identifier of the corresponding target in the video acquired by the second video acquisition device;
for a second target in the video acquired at the preset position for the second time, acquiring a second target identifier of the corresponding target in the video acquired by the second video acquisition device;
comparing the first target identifier with the second target identifier;
if the first target identifier does not match the second target identifier, determining the vehicle similarity between the first target and the second target;
if the vehicle similarity exceeds a similarity threshold, determining that the targets acquired twice in the same parking space at the preset position are the same vehicle.
In practical application, for the first target in the video acquired at the preset position for the first time, the first target is mapped, according to the position transformation information, into the video acquired by the second video acquisition device to obtain the first target identifier of the corresponding target; the second target identifier of the target corresponding to the second target is obtained in the same way. The first target identifier is then compared with the second target identifier. If the two identifiers differ, the vehicle similarity between the two targets in the first video can be determined by a target matching algorithm, and if the vehicle similarity exceeds the similarity threshold, the two targets are determined to be the same vehicle.
In the embodiment of the disclosure, when the first target identifier and the second target identifier differ, the vehicle similarity between the first target and the second target is additionally calculated, and whether they are the same vehicle is decided from that similarity, which further improves the accuracy of the result.
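Combining the two implementations gives a two-stage decision, sketched below; the threshold value of 0.8 is an assumed example, not a value specified by the disclosure:

```python
def same_vehicle(first_id, second_id, vehicle_similarity, threshold=0.8):
    """Two-stage decision: matching identifiers settle the question;
    otherwise fall back to appearance similarity against a threshold."""
    if first_id == second_id:
        return True
    return vehicle_similarity > threshold

assert same_vehicle("t1", "t1", 0.0) is True    # identifiers match
assert same_vehicle("t1", "t2", 0.92) is True   # similarity rescues an identifier mismatch
assert same_vehicle("t1", "t2", 0.40) is False  # genuinely different vehicles
```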
Fig. 2 is a schematic diagram of a target tracking method according to an embodiment of the disclosure. As shown in Fig. 2, the acquisition devices at monitoring point No. 1 include a gun camera (a fixed-view camera) and a dome camera (a rotatable PTZ camera). In this embodiment, the first video acquisition device is the dome camera and the second video acquisition device is the gun camera. The acquisition range of the gun camera is the "gun camera monitoring state" shown in the figure, and the acquisition range of the dome camera is the "dome camera snapshot detail" shown in the figure; the acquisition range of the gun camera is larger than that of the dome camera, and the monitoring ranges of the two cameras overlap. The dome camera acquires video at the preset position and recognizes the license plate information of vehicles in the video; when a snapshot task exists, a control instruction is generated to rotate the dome camera to the target position, where it acquires video and recognizes vehicle license plates. The gun camera cooperates with the dome camera to time vehicle parking and generate a vehicle timing order.
FIG. 3 is a flow chart of a target tracking method according to an embodiment of the disclosure. As shown in fig. 3, the method includes:
Step S301, acquiring a first video acquired by first video acquisition equipment, and determining vehicle information of a target vehicle in the first video;
step S302, determining a predicted position of the target vehicle in a second video acquired by the second video acquisition device based on the pre-acquired position transformation information and the vehicle position and driving information in the vehicle information;
step S303, determining a corresponding tracking target of the target vehicle in the second video based on the predicted position and the positions of a plurality of targets in the second video;
step S304, establishing an association relation between the vehicle information and the tracking target;
step S305, determining a parking start time and a parking end time corresponding to the tracking target based on the running state of the tracking target in the second video;
step S306, determining the parking information of the target vehicle according to the association relation, the parking start time and the parking end time.
The acquisition range of the first video acquisition device intersects the acquisition range of the second video acquisition device.
According to the target tracking method, the parking information of the target vehicle is determined through the association between the vehicle information of the target vehicle in the first video acquired by the first video acquisition device and the corresponding tracking target in the second video acquired by the second video acquisition device. This avoids both the shortened service life caused by frequent rotation of the first video acquisition device and the inaccurate timing caused by the inability to continuously track targets in parking spaces while the first video acquisition device rotates, and can improve the stability of the in-road parking system and the accuracy of the parking data.
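Steps S304-S306 amount to attaching parking times to an association record; the sketch below illustrates this bookkeeping under assumed names and data shapes (none of them prescribed by the disclosure):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Association:
    plate: str                        # vehicle information recognised in the first video
    track_id: int                     # tracking target in the second video
    park_start: Optional[float] = None
    park_end: Optional[float] = None  # filled in when the tracked target leaves

def parking_info(assoc):
    """Step S306: combine the association with the start and end times read
    off the tracking target's running state in the second video."""
    if assoc.park_start is None or assoc.park_end is None:
        return None  # vehicle still parked; no complete record yet
    return {"plate": assoc.plate, "duration": assoc.park_end - assoc.park_start}

a = Association(plate="A12345", track_id=7, park_start=100.0, park_end=460.0)
assert parking_info(a) == {"plate": "A12345", "duration": 360.0}
assert parking_info(Association(plate="B67890", track_id=8)) is None
```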
Fig. 4 is a schematic diagram of a target tracking apparatus according to an embodiment of the disclosure. As shown in fig. 4, the object tracking device may include:
an acquiring module 401, configured to acquire a first video acquired by a first video acquisition device, and determine vehicle information of a target vehicle in the first video;
a first determining module 402, configured to determine, based on predetermined position transformation information and vehicle information, a tracking target corresponding to the target vehicle in the second video acquired by the second video acquisition device;
a second determining module 403, configured to establish an association relationship between the vehicle information and the tracking target, and determine parking information of the target vehicle according to the association relationship;
the acquisition range of the first video acquisition device intersects the acquisition range of the second video acquisition device.
In one possible implementation manner, the apparatus further includes a third determining module, configured to:
determining the positions of a plurality of first key points in the region corresponding to the parking spaces in the video acquired by the first video acquisition equipment;
determining the positions of second key points corresponding to the first key points in the video acquired by the second video acquisition equipment according to the parking space identification of the parking space and the positions of the first key points;
and determining the position transformation information according to the positions of the plurality of first key points and the positions of the corresponding second key points.
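One way to realize the position transformation from key-point pairs is to fit a 2D affine map between the two camera views; the sketch below does this from three pairs with a small Gauss-Jordan solver. Treating the transformation as affine (rather than a full homography) is an assumption made for illustration:

```python
def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with partial pivoting."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def affine_from_keypoints(src, dst):
    """Fit an affine map taking first-video key points to their second-video
    counterparts, and return it as a callable on (x, y) positions."""
    rows = [[x, y, 1.0] for x, y in src]
    ax = solve3(rows, [x for x, _ in dst])   # coefficients for x'
    ay = solve3(rows, [y for _, y in dst])   # coefficients for y'
    return lambda p: (ax[0] * p[0] + ax[1] * p[1] + ax[2],
                      ay[0] * p[0] + ay[1] * p[1] + ay[2])

# Toy calibration: every key point shifts by (+5, -2) between the views.
f = affine_from_keypoints([(0, 0), (1, 0), (0, 1)], [(5, -2), (6, -2), (5, -1)])
assert f((2, 3)) == (7.0, 1.0)
```

In practice more than three pairs per parking space would be collected and fitted in a least-squares sense (or a full homography estimated), but the role of the transformation is the same.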
In one possible implementation, the obtaining module is configured to:
acquiring a first video acquired by the first video acquisition device, and determining the vehicle information of a target vehicle in the first video if the number of times the target vehicle appears in the first video exceeds a preset threshold and the position of the target vehicle in the first video satisfies a preset position condition.
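This filtering step can be sketched as below; the appearance threshold and the rectangular position region are illustrative assumptions:

```python
def confirm_target_vehicle(appearances, position, min_count=5,
                           region=(0, 0, 1920, 540)):
    """Confirm a target vehicle only after it has appeared in enough frames
    and its position lies inside the preset region (x0, y0, x1, y1)."""
    x, y = position
    x0, y0, x1, y1 = region
    return appearances > min_count and x0 <= x <= x1 and y0 <= y <= y1

assert confirm_target_vehicle(8, (500, 300)) is True
assert confirm_target_vehicle(3, (500, 300)) is False   # too few appearances
assert confirm_target_vehicle(8, (500, 900)) is False   # outside the region
```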
In one possible implementation manner, the first determining module is configured to:
determining a predicted position of the target vehicle in a second video acquired by the second video acquisition device based on the pre-acquired position transformation information and the vehicle position and driving information in the vehicle information;
based on the predicted locations and the locations of the plurality of targets in the second video, a corresponding tracked target of the target vehicle in the second video is determined.
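Matching the predicted position against the targets detected in the second video can be done by nearest-neighbour gating, as sketched below; the distance gate of 50 pixels is an assumed value:

```python
def match_tracked_target(predicted, targets, max_dist=50.0):
    """Return the identifier of the second-video target nearest to the
    predicted position, or None if every target is outside the gate."""
    best, best_d = None, max_dist
    for tid, (x, y) in targets.items():
        d = ((x - predicted[0]) ** 2 + (y - predicted[1]) ** 2) ** 0.5
        if d < best_d:
            best, best_d = tid, d
    return best

targets = {"t1": (100.0, 40.0), "t2": (300.0, 45.0)}
assert match_tracked_target((105.0, 42.0), targets) == "t1"
assert match_tracked_target((900.0, 900.0), targets) is None
```

When several targets fall inside the gate, the disclosure resolves the ambiguity by target similarity, as described next.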
In one possible implementation manner, the first determining module is configured to:
determining a target similarity between the target vehicle and the matched plurality of targets in the case that the predicted position matches the positions of the plurality of targets in the second video;
and determining a corresponding tracking target of the target vehicle in the second video based on the target similarity.
In one possible implementation, the second determining module is configured to:
determining a parking start time and a parking end time corresponding to the tracking target based on the running state of the tracking target in the second video;
and determining the parking information of the target vehicle according to the association relation, the parking start time and the parking end time.
In one possible implementation manner, the apparatus further comprises a comparison module, configured to:
and under the condition that the tracking target is associated with the vehicle information, comparing the similarity of the vehicle corresponding to the associated vehicle information and the tracking target with the similarity of the target vehicle and the tracking target, and determining whether to update the vehicle information associated with the tracking target according to the comparison result.
In one possible implementation, the device further includes a control module configured to:
generating a rotation control instruction to control the first video acquisition device to rotate from a preset position to a target position to acquire a third video under the condition that any vehicle in the second video meets the preset condition;
and after the third video acquisition is completed, controlling the first video acquisition equipment to return to the preset position.
In one possible implementation manner, the apparatus further includes a fourth determining module, configured to:
determining whether the targets acquired twice in the same parking space at the preset position are the same vehicle, and if so, generating the parking information corresponding to the vehicle after determining the parking end time of the vehicle.
In one possible implementation manner, the fourth determining module is specifically configured to:
for a first target in the video acquired at the preset position for the first time, acquiring a first target identifier of the corresponding target in the video acquired by the second video acquisition device;
for a second target in the video acquired at the preset position for the second time, acquiring a second target identifier of the corresponding target in the video acquired by the second video acquisition device;
comparing the first target identifier with the second target identifier;
if the first target identifier matches the second target identifier, determining that the targets acquired twice in the same parking space at the preset position are the same vehicle.
In one possible implementation manner, the fourth determining module is specifically configured to:
for a first target in the video acquired at the preset position for the first time, acquiring a first target identifier of the corresponding target in the video acquired by the second video acquisition device;
for a second target in the video acquired at the preset position for the second time, acquiring a second target identifier of the corresponding target in the video acquired by the second video acquisition device;
comparing the first target identifier with the second target identifier;
if the first target identifier does not match the second target identifier, determining the vehicle similarity between the first target and the second target;
if the vehicle similarity exceeds a similarity threshold, determining that the targets acquired twice in the same parking space at the preset position are the same vehicle.
The functions of each unit, module or sub-module in each apparatus of the embodiments of the present disclosure may be referred to the corresponding descriptions in the above method embodiments, which are not repeated herein.
In the technical scheme of the disclosure, the acquisition, storage, and application of the personal user information involved all comply with the relevant laws and regulations and do not violate public order and good morals.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method in any of the embodiments of the present disclosure.
Fig. 5 illustrates a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the apparatus 500 includes a computing unit 501 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, ROM 502, and RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Various components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, etc.; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508 such as a magnetic disk, an optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 501 performs the respective methods and processes described above, such as the target tracking method. For example, in some embodiments, the target tracking method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into RAM 503 and executed by computing unit 501, one or more steps of the object tracking method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the target tracking method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (23)

1. A target tracking method, comprising:
acquiring a first video acquired by first video acquisition equipment, and determining vehicle information of a target vehicle in the first video;
determining a tracking target corresponding to the target vehicle in a second video acquired by second video acquisition equipment based on predetermined position transformation information and the vehicle information;
establishing an association relation between the vehicle information and the tracking target, and determining parking information of the target vehicle according to the association relation;
The first video acquisition device is a dome camera in the in-road parking system, and the second video acquisition device is a gun camera in the in-road parking system; the acquisition range of the first video acquisition device intersects the acquisition range of the second video acquisition device;
the method further comprises the steps of:
determining the positions of a plurality of first key points in the region corresponding to the parking spaces in the video acquired by the first video acquisition equipment;
determining the positions of second key points corresponding to the first key points in the video acquired by the second video acquisition equipment according to the parking space identification of the parking space and the positions of the first key points;
and determining the position transformation information according to the positions of the plurality of first key points and the positions of the corresponding second key points.
2. The method of claim 1, wherein the acquiring the first video acquired by the first video acquisition device, determining vehicle information of a target vehicle in the first video, comprises:
acquiring a first video acquired by a first video acquisition device, and determining vehicle information of a target vehicle in the first video if the frequency of occurrence of the target vehicle in the first video exceeds a preset threshold and the position of the target vehicle in the first video meets a preset position condition.
3. The method of claim 1, wherein the determining, based on the predetermined position transformation information and the vehicle information, a corresponding tracking target of the target vehicle in the second video captured by the second video capturing device comprises:
determining a predicted position of the target vehicle in a second video acquired by second video acquisition equipment based on pre-acquired position transformation information, vehicle position in the vehicle information and driving information;
based on the predicted locations and locations of a plurality of targets in the second video, a corresponding tracked target of the target vehicle in the second video is determined.
4. The method of claim 3, wherein the determining a corresponding tracked target for the target vehicle in the second video based on the predicted location and the location of each target in the second video comprises:
determining a target similarity between the target vehicle and the matched plurality of targets in the case that the predicted position matches the positions of the plurality of targets in the second video;
and determining a corresponding tracking target of the target vehicle in the second video based on the target similarity.
5. The method of claim 1, wherein the determining parking information of the target vehicle according to the association relationship comprises:
determining a parking start time and a parking end time corresponding to the tracking target based on the running state of the tracking target in the second video;
and determining the parking information of the target vehicle according to the association relation, the parking start time and the parking end time.
6. The method of claim 1, further comprising:
and under the condition that the tracking target is associated with the vehicle information, comparing the similarity between the vehicle corresponding to the associated vehicle information and the tracking target with the similarity between the target vehicle and the tracking target, and determining whether to update the vehicle information associated with the tracking target according to the comparison result.
7. The method of claim 1, further comprising:
generating a rotation control instruction to control the first video acquisition device to rotate from a preset position to a target position to acquire a third video under the condition that any vehicle in the second video meets a preset condition;
and after the third video acquisition is completed, controlling the first video acquisition equipment to return to the preset position.
8. The method of claim 7, further comprising:
and determining whether targets acquired in the same parking space at the preset position are the same vehicle or not, and if so, generating parking information corresponding to the vehicle after determining the parking end time of the vehicle.
9. The method of claim 8, wherein the determining whether the two targets acquired in the same parking space at the preset location are the same vehicle comprises:
for a first target in a video acquired at a preset position for the first time, acquiring a first target identification of a corresponding target in the video acquired by second video acquisition equipment;
for a second target in the video acquired at the preset position for the second time, acquiring a second target identification of a corresponding target of the second target in the video acquired by second video acquisition equipment;
comparing the first target identifier with the second target identifier;
and if the first target identifier is matched with the second target identifier, determining that targets acquired by the same parking space at the preset position twice are the same vehicle.
10. The method of claim 8, wherein the determining whether the two targets acquired in the same parking space at the preset location are the same vehicle comprises:
For a first target in a video acquired at a preset position for the first time, acquiring a first target identification of a corresponding target in the video acquired by second video acquisition equipment;
for a second target in the video acquired at the preset position for the second time, acquiring a second target identification of a corresponding target of the second target in the video acquired by second video acquisition equipment;
comparing the first target identifier with the second target identifier;
if the first target identifier and the second target identifier are not matched, determining the vehicle similarity of the first target and the second target;
and if the similarity of the vehicles exceeds a similarity threshold, determining that the targets acquired by the same parking space at the preset position twice are the same vehicle.
11. An object tracking device comprising:
the acquisition module is used for acquiring a first video acquired by first video acquisition equipment and determining vehicle information of a target vehicle in the first video;
the first determining module is used for determining a corresponding tracking target of the target vehicle in a second video acquired by the second video acquisition equipment based on the predetermined position transformation information and the vehicle information;
The second determining module is used for establishing an association relation between the vehicle information and the tracking target and determining parking information of the target vehicle according to the association relation;
the first video acquisition device is a dome camera in the in-road parking system, and the second video acquisition device is a gun camera in the in-road parking system; the acquisition range of the first video acquisition device intersects the acquisition range of the second video acquisition device;
the apparatus further comprises a third determination module for:
determining the positions of a plurality of first key points in the region corresponding to the parking spaces in the video acquired by the first video acquisition equipment;
determining the positions of second key points corresponding to the first key points in the video acquired by the second video acquisition equipment according to the parking space identification of the parking space and the positions of the first key points;
and determining the position transformation information according to the positions of the plurality of first key points and the positions of the corresponding second key points.
12. The apparatus of claim 11, wherein the acquisition module is configured to:
acquire a first video acquired by a first video acquisition device, and determine vehicle information of a target vehicle in the first video if the number of occurrences of the target vehicle in the first video exceeds a preset threshold and the position of the target vehicle in the first video satisfies a preset position condition.
13. The apparatus of claim 11, wherein the first determining module is configured to:
determine a predicted position of the target vehicle in a second video acquired by a second video acquisition device based on the pre-acquired position transformation information, and on the vehicle position and driving information in the vehicle information;
and determine a corresponding tracking target of the target vehicle in the second video based on the predicted position and the positions of a plurality of targets in the second video.
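The matching step in claim 13 can be sketched as: project the vehicle's last known position forward by its driving velocity, then take the nearest detection in the second video within a gating distance. All names and the distance gate here are assumptions, not from the patent:

```python
import math

def predict_position(pos, velocity, dt):
    # Constant-velocity prediction of where the vehicle should appear
    return (pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)

def match_target(predicted, detections, max_dist=50.0):
    """detections: {target_id: (x, y)} in the second video's frame.
    Returns the id of the nearest detection within the gate, else None."""
    best_id, best_d = None, max_dist
    for det_id, (x, y) in detections.items():
        d = math.hypot(x - predicted[0], y - predicted[1])
        if d < best_d:
            best_id, best_d = det_id, d
    return best_id
```

Returning None when nothing falls inside the gate leaves room for the similarity check of claim 14 when several targets are ambiguous.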
14. The apparatus of claim 13, wherein the first determining module is configured to:
determine a target similarity between the target vehicle and each of the matched targets in the case that the predicted position matches the positions of a plurality of targets in the second video;
and determine a corresponding tracking target of the target vehicle in the second video based on the target similarities.
15. The apparatus of claim 11, wherein the second determining module is configured to:
determine a parking start time and a parking end time corresponding to the tracking target based on the running state of the tracking target in the second video;
and determine the parking information of the target vehicle according to the association relationship, the parking start time and the parking end time.
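The start and end times in claim 15 can be read off the tracked target's per-frame running state as its stopped/moving transitions. A minimal sketch under an assumed state representation (one boolean per frame):

```python
def parking_interval(states):
    """states: iterable of (timestamp, is_stopped) for one tracked target,
    in time order. Returns (parking_start, parking_end); parking_end is
    None while the vehicle is still parked."""
    start = end = None
    prev_stopped = False
    for t, stopped in states:
        if stopped and not prev_stopped and start is None:
            start = t  # first frame the target comes to rest
        elif prev_stopped and not stopped and start is not None and end is None:
            end = t    # first frame it moves again
        prev_stopped = stopped
    return start, end
```

In practice a debounce (requiring the state to persist for several frames) would be added so momentary stops in traffic are not billed as parking.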
16. The apparatus of claim 11, further comprising a comparison module configured to:
in the case that the tracking target is already associated with vehicle information, compare the similarity between the vehicle corresponding to the associated vehicle information and the tracking target with the similarity between the target vehicle and the tracking target, and determine, according to the comparison result, whether to update the vehicle information associated with the tracking target.
17. The apparatus of claim 11, further comprising a control module configured to:
generate a rotation control instruction to control the first video acquisition device to rotate from a preset position to a target position to acquire a third video in the case that any vehicle in the second video satisfies a preset condition;
and control the first video acquisition device to return to the preset position after the acquisition of the third video is completed.
18. The apparatus of claim 17, further comprising a fourth determining module configured to:
determine whether the targets acquired twice for the same parking space at the preset position are the same vehicle, and if so, generate parking information corresponding to the vehicle after determining the parking end time of the vehicle.
19. The apparatus of claim 18, wherein the fourth determining module is specifically configured to:
for a first target in the video acquired at the preset position the first time, acquire a first target identifier of its corresponding target in the video acquired by the second video acquisition device;
for a second target in the video acquired at the preset position the second time, acquire a second target identifier of its corresponding target in the video acquired by the second video acquisition device;
compare the first target identifier with the second target identifier;
and if the first target identifier matches the second target identifier, determine that the targets acquired twice for the same parking space at the preset position are the same vehicle.
20. The apparatus of claim 18, wherein the fourth determining module is specifically configured to:
for a first target in the video acquired at the preset position the first time, acquire a first target identifier of its corresponding target in the video acquired by the second video acquisition device;
for a second target in the video acquired at the preset position the second time, acquire a second target identifier of its corresponding target in the video acquired by the second video acquisition device;
compare the first target identifier with the second target identifier;
if the first target identifier and the second target identifier do not match, determine a vehicle similarity between the first target and the second target;
and if the vehicle similarity exceeds a similarity threshold, determine that the targets acquired twice for the same parking space at the preset position are the same vehicle.
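Claims 19 and 20 together describe a two-stage check: compare target identifiers first, and fall back to appearance similarity only on a mismatch. A sketch using cosine similarity over appearance feature vectors; the feature representation and the threshold value are assumptions:

```python
import numpy as np

def is_same_vehicle(id1, id2, feat1, feat2, sim_threshold=0.85):
    # Stage 1 (claim 19): matching identifiers settle it immediately
    if id1 == id2:
        return True
    # Stage 2 (claim 20): identifiers differ, so compare appearance
    # features; above the threshold we treat it as an id switch, not
    # a new vehicle
    a = np.asarray(feat1, dtype=float)
    b = np.asarray(feat2, dtype=float)
    sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sim > sim_threshold
```

The fallback matters because the dome camera may leave its preset position between the two acquisitions, so the tracker in the second video can assign a fresh identifier to a vehicle that never moved.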
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-10.
23. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-10.
CN202111664378.8A 2021-12-31 2021-12-31 Target tracking method, device, electronic equipment and storage medium Active CN114333409B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111664378.8A CN114333409B (en) 2021-12-31 2021-12-31 Target tracking method, device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114333409A CN114333409A (en) 2022-04-12
CN114333409B true CN114333409B (en) 2023-08-29

Family

ID=81021789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111664378.8A Active CN114333409B (en) 2021-12-31 2021-12-31 Target tracking method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114333409B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019546B (en) * 2022-05-26 2024-09-17 北京精英路通科技有限公司 Parking prompt method and device, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340856B (en) * 2018-12-19 2024-04-02 杭州海康威视系统技术有限公司 Vehicle tracking method, device, equipment and storage medium
CN110245565A (en) * 2019-05-14 2019-09-17 东软集团股份有限公司 Wireless vehicle tracking, device, computer readable storage medium and electronic equipment
CN110634306A (en) * 2019-08-30 2019-12-31 上海能塔智能科技有限公司 Method and device for determining vehicle position, storage medium and computing equipment
CN111275983B (en) * 2020-02-14 2022-11-01 阿波罗智联(北京)科技有限公司 Vehicle tracking method, device, electronic equipment and computer-readable storage medium
CN113393492A (en) * 2021-05-27 2021-09-14 浙江大华技术股份有限公司 Target tracking method, target tracking device, electronic device and storage medium
CN113469115B (en) * 2021-07-20 2024-07-02 阿波罗智联(北京)科技有限公司 Method and device for outputting information


Similar Documents

Publication Publication Date Title
CN110910665B (en) Signal lamp control method and device and computer equipment
CN113392794B (en) Vehicle line crossing identification method and device, electronic equipment and storage medium
CN112528927B (en) Confidence determining method based on track analysis, road side equipment and cloud control platform
CN111652112B (en) Lane flow direction identification method and device, electronic equipment and storage medium
CN115641359B (en) Method, device, electronic equipment and medium for determining movement track of object
CN114037966A (en) High-precision map feature extraction method, device, medium and electronic equipment
CN114036253A (en) High-precision map data processing method and device, electronic equipment and medium
CN114333409B (en) Target tracking method, device, electronic equipment and storage medium
CN113538963A (en) Method, apparatus, device and storage medium for outputting information
CN116469073A (en) Target identification method, device, electronic equipment, medium and automatic driving vehicle
CN113052047B (en) Traffic event detection method, road side equipment, cloud control platform and system
CN113722342A (en) High-precision map element change detection method, device and equipment and automatic driving vehicle
CN113011298A (en) Truncated object sample generation method, target detection method, road side equipment and cloud control platform
CN115973190A (en) Decision-making method and device for automatically driving vehicle and electronic equipment
CN113670295B (en) Data processing method, device, electronic equipment and readable storage medium
CN115062240A (en) Parking lot sorting method and device, electronic equipment and storage medium
CN110849327B (en) Shooting blind area length determination method and device and computer equipment
CN112991446A (en) Image stabilization method and device, road side equipment and cloud control platform
CN114049615B (en) Traffic object fusion association method and device in driving environment and edge computing equipment
CN118072280A (en) Method and device for detecting change of traffic light, electronic equipment and automatic driving vehicle
CN115223374B (en) Vehicle tracking method and device and electronic equipment
CN118644548A (en) Method, device, electronic equipment and storage medium for determining object position
CN114529768B (en) Method, device, electronic equipment and storage medium for determining object category
CN116434552A (en) Non-motor vehicle lane snapshot method and device, electronic equipment and storage medium
CN115810270A (en) Vehicle steering detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant