CN113824880B - Vehicle tracking method based on target detection and UWB positioning
- Publication number: CN113824880B
- Application number: CN202110989507.4A
- Authority: CN (China)
- Prior art keywords: vehicle, monitoring, relative position, position coordinates, area
- Legal status: Active
Classifications
- H04N23/61 — Control of cameras or camera modules based on recognised objects
- H04N23/695 — Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
- H04W4/029 — Location-based management or tracking services
- H04W4/44 — Services specially adapted for vehicles, for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
- Y02T10/40 — Engine management systems
Abstract
The invention discloses a vehicle tracking method based on target detection and UWB positioning. The method comprises: obtaining a top view of the monitoring area as a reference map, selecting any point in the reference map as the origin of a plane coordinate system, and obtaining the relative position coordinates of each pixel point in the reference map with respect to the origin; obtaining the H matrix between image coordinates in the monitoring images shot by the monitoring gun cameras and relative position coordinates; performing registration and binding of registration numbers for vehicles entering the monitoring area, and determining the relative position coordinates of each vehicle based on the UWB base station; acquiring images of vehicles entering the monitoring area through the monitoring gun cameras, and obtaining the relative position coordinates of the center points of the corresponding target frames according to the H matrix; registering the data of the UWB base station with the data of the monitoring gun cameras; and acquiring all UWB positioning and vehicle images corresponding to the vehicle under the same registration number, thereby realizing vehicle tracking. The invention realizes accurate vehicle tracking.
Description
Technical Field
The application belongs to the field of artificial intelligence security monitoring, and particularly relates to a vehicle tracking method based on target detection and UWB positioning.
Background
Security monitoring is important for material warehouses, prisons and other such places. At present, there is no effective means of tracking a vehicle through its whole journey: supervisory personnel usually have to accompany the vehicle, or dedicated personnel have to watch the monitors for long periods.
Current target tracking techniques typically adopt either picture tracking with monitoring gun cameras or position tracking based on positioning information. Picture tracking with monitoring gun cameras generally uses a deep-learning target detection model based on a convolutional network: the network extracts features by reducing the height and width of the image features while increasing their dimension, and the extracted features are used to detect targets. Practical engineering mostly uses the YOLOv5 algorithm; YOLOv5 is a comparatively accurate and efficient algorithm, but vehicle tracking must retrieve information in real time, so the running speed of the algorithm still needs further improvement. Position tracking based on positioning information mostly relies on RFID, infrared or UWB; UWB positioning works through the signal transmission time between base station and tag and yields accurate positioning information, but when used alone it provides only positions without any visual confirmation, and positioning drift easily causes vehicle tracking errors.
Disclosure of Invention
The purpose of the application is to provide a vehicle tracking method based on target detection and UWB positioning, which is used for realizing accurate vehicle tracking.
In order to achieve the above purpose, the technical scheme adopted by the application is as follows:
A vehicle tracking method based on target detection and UWB positioning, wherein a monitoring area is covered by monitoring gun cameras, a UWB base station is installed in the monitoring area, and UWB tags are installed on vehicles entering the monitoring area; the vehicle tracking method based on target detection and UWB positioning comprises:
step 1, obtaining a top view of the monitoring area as a reference map, selecting any point in the reference map as an origin to establish a plane coordinate system, and obtaining the relative position coordinates of each pixel point in the reference map based on the origin;
step 2, acquiring a monitoring image shot by a monitoring gun camera, correcting the monitoring image, selecting m key points in the corrected monitoring image, acquiring the image coordinates and relative position coordinates of the m key points, and calculating the H matrix from them;
step 3, performing registration and binding of registration numbers for vehicles entering the monitoring area, and determining the relative position coordinates of the vehicles based on the UWB base station;
step 4, acquiring an image of a vehicle entering the monitoring area through the monitoring gun camera, obtaining the target frame of the vehicle in the image through a YOLOv5 model, and obtaining the relative position coordinates corresponding to the center point of the target frame according to the H matrix;
step 5, registering the data of the UWB base station with the data of the monitoring gun camera, the registering comprising:
taking the set $A' = \{(x'_1, y'_1), (x'_2, y'_2), (x'_3, y'_3), \dots, (x'_N, y'_N)\}$ of relative position coordinates of all vehicles in the images acquired by the monitoring gun cameras at the current moment, and the set $A = \{(x_1, y_1), (x_2, y_2), (x_3, y_3), \dots, (x_N, y_N)\}$ of relative position coordinates of all vehicles acquired by the UWB base station at the current moment;

taking the $i$-th ($i \in \{1, 2, 3, \dots, N\}$) vehicle $p'_i$ in set $A'$, with relative position coordinates $(x'_i, y'_i)$; the registration number of vehicle $p'_i$ is then:

$$o_i = \mathop{\arg\min}_{j \in \{1, 2, 3, \dots, N\}} d(p_j, p'_i)$$

where $o_i$ is the registration number of vehicle $p'_i$ in the image acquired by the monitoring gun camera, $p_j$ is the $j$-th vehicle acquired by the UWB base station, whose relative position coordinates are $(x_j, y_j)$, $d(p_j, p'_i)$ is the distance between vehicle $p_j$ and vehicle $p'_i$, and $N$ is the total number of registered vehicles at the current moment;
step 6, according to the registered data, acquiring all UWB positioning and vehicle images corresponding to the vehicle under the same registration number, thereby realizing tracking of the vehicle.
The following alternatives are provided not as additional limitations on the overall scheme above, but only as further additions or preferences; each alternative may be combined with the overall scheme on its own, and multiple alternatives may also be combined with each other, provided no technical or logical contradiction arises.
Preferably, the obtaining of the relative position coordinates of each pixel point in the reference map based on the origin comprises:
selecting n key points in the reference map, obtaining the relative position coordinates of the key points with respect to the origin by surveying, and inputting the relative position coordinates of the n key points into ArcGIS for grid registration, thereby obtaining the relative position coordinates of each pixel point in the reference map with respect to the origin.
Preferably, when the monitoring area contains a plurality of monitoring gun cameras, shooting-area binding needs to be performed for each monitoring gun camera, the shooting-area binding comprising:
selecting, in the reference map, a corresponding rectangular frame as the shooting area to be bound, according to the viewing-angle shooting range of each monitoring gun camera;
and establishing a correspondence with the monitoring gun camera by taking the relative position coordinates of the lower-left corner and the upper-right corner of the shooting area as the binding data.
Preferably, the monitoring area contains a plurality of monitoring gun cameras, and determining the current area of the vehicle in the monitoring area according to the shooting area bound to each monitoring gun camera comprises:
traversing all monitoring gun cameras and obtaining the relative position coordinates of the vehicle in the reference map;
if the relative position coordinates of the vehicle are in one shooting area, taking that shooting area as the current area of the vehicle and the monitoring gun camera corresponding to that shooting area as the currently used monitoring gun camera;
if the relative position coordinates of the vehicle are in several shooting areas, keeping the current area of the vehicle and the currently used monitoring gun camera unchanged;
Let the relative position coordinates of the vehicle be $(b_x, b_y)$, the relative position coordinates of the lower-left corner of the shooting area to be judged be $(x^r_{\min}, y^r_{\min})$, and the relative position coordinates of the upper-right corner of the shooting area to be judged be $(x^r_{\max}, y^r_{\max})$. Whether the relative position coordinates of the vehicle lie in the shooting area to be judged is determined as follows: if the condition $x^r_{\min} \le b_x \le x^r_{\max}$ and $y^r_{\min} \le b_y \le y^r_{\max}$ is satisfied, the relative position coordinates of the vehicle are considered to be in the shooting area to be judged; otherwise, the relative position coordinates of the vehicle are not in the shooting area to be judged.
Preferably, the YOLOv5 model is an improved YOLOv5 model, the improved YOLOv5 model adopts a detection network with a two-layer structure, and an efficient channel attention ECA module is added between a backbone network and a head network.
The vehicle tracking method based on target detection and UWB positioning has the following advantages: by combining UWB with monitoring gun cameras, it solves the problems that tracking based on surveillance cameras alone loses targets and cannot determine the identity of an object after camera switching; by registering the data of the UWB base station with the data of the monitoring gun cameras, the relative position and trajectory of vehicles across the whole area, as well as their behavior under surveillance, are accurately obtained, so that registered vehicles are continuously tracked and traced through the whole process, facilitating subsequent retrieval and review.
Drawings
FIG. 1 is a flow chart of a method of vehicle tracking based on target detection and UWB positioning of the present application;
FIG. 2 is a schematic diagram of an embodiment of overlapping shooting areas corresponding to a surveillance camera in the present application;
FIG. 3 is a schematic structural diagram of the improved YOLOv5 model of the present application;
fig. 4 is a schematic structural diagram of the efficient channel attention ECA module of the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
In one embodiment, in order to solve the problem that tracking errors or target loss easily occur in vehicle tracking in the prior art, a vehicle tracking method based on target detection and UWB positioning is provided.
When tracking vehicles, a monitoring area to be tracked is first determined and covered by the viewing-angle shooting ranges of the monitoring gun cameras; meanwhile, a UWB base station is installed in the monitoring area and a UWB tag is installed on every vehicle entering the monitoring area, so that vehicle tracking combining images and UWB is realized.
As shown in fig. 1, the vehicle tracking method based on target detection and UWB positioning of the present embodiment includes the steps of:
In order to facilitate registration of the subsequent monitoring-image data with the UWB system data, this embodiment first obtains a reference map as the base registration image. When determining the relative position coordinates of each pixel point in the reference map, n key points (n ≥ 6; 8 is preferred considering precision and processing speed) are selected in the reference map, their relative position coordinates with respect to the origin are obtained by surveying, and the relative position coordinates of the n key points are input into ArcGIS (a GIS mapping platform) for grid registration, thereby obtaining the relative position coordinates of each pixel point in the reference map with respect to the origin.
It should be noted that the surveying may be performed manually or automatically by a computer using a distance measurement method, and the key points may be selected at random.
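As an illustration of this grid-registration step, the following is a minimal Python sketch in which an OpenCV homography fit stands in for the ArcGIS georeferencing workflow; the key-point values are made-up placeholders, not data from the application.

```python
import cv2
import numpy as np

# n = 8 key points: pixel coordinates in the reference map and the surveyed
# relative position coordinates (meters) of the same physical points.
pix = np.array([[120, 80], [900, 75], [890, 610], [130, 600],
                [500, 90], [510, 590], [125, 340], [895, 330]], np.float32)
rel = np.array([[0, 0], [80, 0], [80, 55], [0, 55],
                [40, 0], [40, 55], [0, 28], [80, 28]], np.float32)

G, _ = cv2.findHomography(pix, rel)  # pixel -> relative position coordinates

# Relative position coordinate of every pixel of an h x w reference map.
h, w = 640, 960
u, v = np.meshgrid(np.arange(w, dtype=np.float32),
                   np.arange(h, dtype=np.float32))
grid = cv2.perspectiveTransform(np.stack([u.ravel(), v.ravel()], 1)[None], G)
rel_coords = grid.reshape(h, w, 2)  # rel_coords[v, u] = (x, y) w.r.t. origin
```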
Step 2, acquiring the monitoring image shot by the monitoring gun camera and correcting it, selecting m key points (m ≥ 6; 8 is preferred considering precision and processing speed) in the corrected monitoring image, acquiring the image coordinates and relative position coordinates of the m key points, and calculating the H matrix from the acquired image coordinates and relative position coordinates of the m key points.
For image correction, the intrinsic parameters and distortion coefficients of every monitoring gun camera are first computed from a chessboard calibration board using the calibrateCamera function in OpenCV, and the monitoring gun camera images are then corrected using the undistort function in OpenCV.
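A minimal sketch of this correction step with OpenCV follows; the 9x6 chessboard layout and the file paths are assumptions for illustration.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner-corner count of the chessboard (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# Intrinsic matrix K and distortion coefficients dist of this camera
_, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)

frame = cv2.imread("frame.jpg")
corrected = cv2.undistort(frame, K, dist)  # the corrected monitoring image
```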
The image coordinates of the m key points are their coordinates in the monitoring image. Since the monitoring image shot by a monitoring gun camera covers part of the monitoring area, the key points selected in the monitoring image necessarily also lie in the reference map; the m key points can therefore be marked manually in the reference map and their relative position coordinates obtained from the marks. Finally, the computation of the H matrix is completed using the findHomography function in OpenCV.
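Putting step 2 and step 4 together, a minimal sketch of computing the H matrix and projecting a detected target-frame center into relative coordinates might look as follows; the m = 6 key-point values are placeholders.

```python
import cv2
import numpy as np

# Image coordinates of the m key points in the corrected monitoring image
img_kps = np.array([[210, 700], [1650, 690], [1580, 260],
                    [300, 255], [960, 710], [955, 250]], np.float32)
# Relative position coordinates of the same key points in the reference map
rel_kps = np.array([[12, 4], [44, 4], [44, 30],
                    [12, 30], [28, 4], [28, 30]], np.float32)

H, _ = cv2.findHomography(img_kps, rel_kps)

def box_center_to_relative(x1, y1, x2, y2):
    """Project the center of a YOLOv5 target frame through H (step 4)."""
    c = np.array([[[(x1 + x2) / 2.0, (y1 + y2) / 2.0]]], np.float32)
    x, y = cv2.perspectiveTransform(c, H)[0, 0]
    return float(x), float(y)
```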
It will be readily appreciated that when there are multiple monitoring gun cameras, an H matrix needs to be calculated for each monitoring gun camera in order to subsequently obtain the relative position coordinates of the vehicle in the reference map.
Step 3, performing registration and binding of registration numbers for vehicles entering the monitoring area, and determining the relative position coordinates of the vehicles based on the UWB base station.
In order to simplify the calculation process, the UWB system in this embodiment directly uses the reference map after grid registration by ArcGIS as its base map; that is, the UWB positioning obtained by the UWB base station is directly a relative position coordinate with respect to the origin in the reference map.
In order to avoid loss of tracking caused by switching of monitoring cameras while the vehicle moves, this embodiment registers every vehicle entering the monitoring area. A UWB tag is installed in advance on each vehicle that needs to enter the monitoring area, with the vehicle information bound in the UWB tag. When the UWB tag interacts with the UWB base station, the tag sends the vehicle information to the base station, and the background server binds a registration number to the newly received vehicle information to complete registration. That is, each UWB positioning (i.e., relative position coordinate) determined in this embodiment corresponds to a registration number that identifies the vehicle information. When the background server has not received UWB positioning for a given registration number for a certain time, that registration number is logged off.
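The registration bookkeeping described above could be sketched as follows; the record layout and the 30-second log-off timeout are assumptions, not values from the application.

```python
import time
import itertools

TIMEOUT_S = 30.0                 # assumed log-off timeout
_next_no = itertools.count(1)    # registration-number generator
registry = {}                    # tag_id -> registration record

def on_uwb_report(tag_id, vehicle_info, pos):
    """Handle one tag report received by the UWB base station."""
    entry = registry.setdefault(
        tag_id, {"reg_no": next(_next_no), "vehicle_info": vehicle_info})
    entry["pos"] = pos                     # current relative position coords
    entry["last_seen"] = time.monotonic()

def purge_stale():
    """Log off registration numbers whose UWB positioning has stopped."""
    now = time.monotonic()
    for tag_id in [t for t, e in registry.items()
                   if now - e["last_seen"] > TIMEOUT_S]:
        del registry[tag_id]
```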
Step 4, acquiring an image of a vehicle entering the monitoring area through the monitoring gun camera, obtaining the target frame of the vehicle in the image through a YOLOv5 model, and obtaining the relative position coordinates in the reference map corresponding to the center point of the target frame according to the H matrix.
When acquiring images of a vehicle, if the monitoring area contains a plurality of monitoring gun cameras, shooting-area binding needs to be performed for each monitoring gun camera, the shooting-area binding comprising: selecting, in the reference map, a corresponding rectangular frame as the shooting area to be bound according to the viewing-angle shooting range of each monitoring gun camera; and establishing a correspondence with the monitoring gun camera by taking the relative position coordinates of the lower-left corner and the upper-right corner of the shooting area as the binding data.
If the viewing-angle shooting range of a monitoring gun camera is rectangular, the frame is selected in the reference map directly according to the viewing-angle shooting range; if it is non-rectangular, the frame is selected in the reference map according to the maximum inscribed rectangle of the viewing-angle shooting range.
That is, the total coverage area $A$ composed of all actual monitoring gun cameras is obtained as the monitoring area, where $A$ is:

$$A = \{a_1, a_2, a_3, \dots, a_q\}$$

where $a_r$ ($r \in \{1, 2, 3, \dots, q\}$) is the rectangular frame covered by the $r$-th monitoring gun camera, $q$ is the number of monitoring gun cameras, and $a_r$ is expressed as:

$$a_r = \left(x^r_{\min},\; y^r_{\min},\; x^r_{\max},\; y^r_{\max}\right)$$

where $x^r_{\min}$, $y^r_{\min}$, $x^r_{\max}$ and $y^r_{\max}$ are respectively the x coordinate of the minimum coordinate point of the rectangular frame in the reference map, the y coordinate of the minimum coordinate point, the x coordinate of the maximum coordinate point and the y coordinate of the maximum coordinate point; that is, $(x^r_{\min}, y^r_{\min})$ is the relative position coordinate of the lower-left corner of the rectangular frame in the reference map, and $(x^r_{\max}, y^r_{\max})$ is the relative position coordinate of its upper-right corner.
As shown in fig. 2, if the monitoring area contains a plurality of monitoring gun cameras, the current area of the vehicle in the monitoring area needs to be determined according to the shooting area bound to each monitoring gun camera, the determining comprising: traversing all monitoring gun cameras and obtaining the relative position coordinates of the vehicle in the reference map; if the relative position coordinates of the vehicle are in one shooting area, taking that shooting area as the current area of the vehicle and the monitoring gun camera corresponding to that shooting area as the currently used monitoring gun camera; if the relative position coordinates of the vehicle are in several shooting areas, keeping the current area of the vehicle and the currently used monitoring gun camera unchanged.
In the figure, when a vehicle passes through shooting area 1 of a monitoring gun camera, the monitoring gun camera corresponding to shooting area 1 is turned on; while the vehicle is located in the common area of shooting areas 1 and 2, the monitoring gun camera corresponding to shooting area 1 remains on; when the vehicle reaches shooting area 2, the monitoring gun camera corresponding to shooting area 2 is turned on and the monitoring gun camera corresponding to shooting area 1 is turned off.
When judging whether the relative position coordinates of the vehicle are in a shooting area to be judged, let the relative position coordinates of the vehicle be $(b_x, b_y)$, the relative position coordinates of the lower-left corner of the shooting area to be judged be $(x^r_{\min}, y^r_{\min})$, and the relative position coordinates of the upper-right corner be $(x^r_{\max}, y^r_{\max})$. The judgment is: if the condition $x^r_{\min} \le b_x \le x^r_{\max}$ and $y^r_{\min} \le b_y \le y^r_{\max}$ is satisfied, the relative position coordinates of the vehicle are considered to be in the shooting area to be judged; otherwise, they are not.
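The area judgment and the hand-off rule above can be sketched compactly; the rectangle encoding and camera ids are placeholders.

```python
def in_area(bx, by, rect):
    """rect = (x_min, y_min, x_max, y_max) in relative position coordinates."""
    x_min, y_min, x_max, y_max = rect
    return x_min <= bx <= x_max and y_min <= by <= y_max

def update_current_camera(bx, by, areas, current):
    """areas: camera_id -> rect. While the vehicle stays inside the current
    camera's area (including overlap zones) nothing changes; otherwise the
    first area containing the vehicle takes over."""
    hits = [cam for cam, rect in areas.items() if in_area(bx, by, rect)]
    if current in hits:
        return current
    return hits[0] if hits else None
```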
As shown in fig. 3, in order to obtain a better target recognition effect, this embodiment uses a modified YOLOv5 for target detection. The improvement mainly comprises two points: 1) for vehicles and the deployed monitoring gun cameras, a vehicle in the image frame is a relatively large target, so the fusion and prediction of YOLOv5 at the highest scale are removed and a detection network with a two-layer structure is adopted, saving detection time while maintaining the detection effect; 2) an efficient channel attention (ECA) module is added between the backbone network and the head network to generate more discriminative features, enabling the network to acquire more context information and obtain more accurate detection results.
The modified YOLOv5 model thus comprises a backbone network, a head network and a detection network, wherein the backbone network comprises a Focus module, a (first) Conv module, a (first) C3 module (BottleneckCSP x 3), a (second) Conv module, a (second) C3 module, a (third) Conv module, a (third) C3 module, a (fourth) Conv module, an SPP module, a (fourth) C3 module and an efficient channel attention ECA module, connected in this order from the input side to the output side.
The head network comprises a (fifth) Conv module, a (first) Upsample module, a (first) Concat module, a (fifth) C3 module, a (sixth) Conv module, a (second) Upsample module, a (second) Concat module, a (sixth) C3 module, a (seventh) Conv module, a (third) Concat module, a (seventh) C3 module, an (eighth) Conv module, a (fourth) Concat module and an (eighth) C3 module, connected sequentially in the data-flow direction. The input of the (fifth) Conv module is connected to the output of the efficient channel attention ECA module, the input of the (first) Concat module is also connected to the output of the (third) C3 module, the input of the (second) Concat module is also connected to the output of the (second) C3 module, the input of the (third) Concat module is also connected to the output of the (sixth) Conv module, and the input of the (fourth) Concat module is also connected to the output of the (fifth) Conv module.
The detection network comprises a Conv(512, s×3×(c+5), 3, 2) module connected to the output of the (seventh) C3 module, a Sigmoid function connected to that Conv(512, s×3×(c+5), 3, 2) module, a Conv(512, s×3×(c+5), 3, 2) module connected to the output of the (eighth) C3 module, and a Sigmoid function connected to that Conv(512, s×3×(c+5), 3, 2) module. It should be noted that the "first", "second", etc. preceding each module are only for convenience in distinguishing identical modules in the description of the structure and are not limitations on the modules themselves. The Focus module, Conv module, C3 module, SPP module and the like in the modified YOLOv5 model are all modules of the unmodified existing YOLOv5 model, so the structure of each module is not described in this embodiment.
The efficient channel attention ECA module is shown in fig. 4, where X is the input feature and H, W and C are respectively the height, width and number of channels of the input feature. X is average-pooled to obtain an intermediate output $O_1$, and $O_1$ is then passed through a one-dimensional convolution to obtain $O_2$, where k is the neighborhood size of the cross-channel interaction. The efficient channel attention ECA module adopted in this embodiment is an existing structure and is not described further here.
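A minimal PyTorch sketch of such an ECA module (global average pooling, a one-dimensional convolution across channels, then sigmoid gating) is given below; k = 3 is an assumed neighborhood size.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention: pool -> 1-D conv over channels -> gate."""
    def __init__(self, k: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                        # x: (B, C, H, W)
        o1 = x.mean(dim=(2, 3))                  # average pooling -> (B, C)
        o2 = self.conv(o1.unsqueeze(1))          # 1-D conv -> (B, 1, C)
        w = self.sigmoid(o2).squeeze(1)          # channel attention weights
        return x * w.view(x.size(0), -1, 1, 1)   # reweight the input feature
```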
The improved YOLOv5 model is trained on KITTI2012, KITTI2015, COCO and a self-labeled vehicle dataset: first 50,000 iterations of coarse training are performed, followed by 20,000 iterations of fine training, with learning rates of 0.0001 and 0.001 respectively.
Step 5, registering the data of the UWB base station with the data of the monitoring gun camera, the registering comprising the following steps:

taking the set $A' = \{(x'_1, y'_1), (x'_2, y'_2), (x'_3, y'_3), \dots, (x'_N, y'_N)\}$ of relative position coordinates of all vehicles in the images acquired by the monitoring gun cameras at the current moment, and the set $A = \{(x_1, y_1), (x_2, y_2), (x_3, y_3), \dots, (x_N, y_N)\}$ of relative position coordinates of all vehicles acquired by the UWB base station at the current moment;

taking the $i$-th ($i \in \{1, 2, 3, \dots, N\}$) vehicle $p'_i$ in set $A'$, with relative position coordinates $(x'_i, y'_i)$; the registration number of vehicle $p'_i$ is then:

$$o_i = \mathop{\arg\min}_{j \in \{1, 2, 3, \dots, N\}} d(p_j, p'_i)$$

where $o_i$ is the registration number of vehicle $p'_i$ in the image acquired by the monitoring gun camera, $p_j$ is the $j$-th vehicle acquired by the UWB base station, whose relative position coordinates are $(x_j, y_j)$, $d(p_j, p'_i)$ is the distance between vehicle $p_j$ and vehicle $p'_i$, and $N$ is the total number of registered vehicles at the current moment. That is, each vehicle detected in the image is assigned the registration number of the UWB-positioned vehicle nearest to it.
The distance between vehicle $p_j$ and vehicle $p'_i$ may be measured by any existing method for measuring the distance between two coordinates, for example the Euclidean distance $d(p_j, p'_i) = \sqrt{(x_j - x'_i)^2 + (y_j - y'_i)^2}$.
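A minimal sketch of this registration step, pairing each camera detection with the nearest UWB-positioned vehicle, could read:

```python
import math

def register_detections(camera_coords, uwb_vehicles):
    """camera_coords: list of (x', y') from step 4; uwb_vehicles: list of
    (reg_no, (x, y)) from the UWB base station. Returns o_i for each
    detection, i.e. o_i = argmin_j d(p_j, p'_i)."""
    result = []
    for xi, yi in camera_coords:
        reg_no, _ = min(
            uwb_vehicles,
            key=lambda v: math.hypot(v[1][0] - xi, v[1][1] - yi))
        result.append(reg_no)
    return result
```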
Step 6, according to the registered data, acquiring all UWB positioning and vehicle images corresponding to the vehicle under the same registration number, thereby realizing tracking of the vehicle.
This embodiment converts the dynamically continuous images containing the target detection results of each vehicle, from registration to log-off, into a video stored locally, and stores the position track in MySQL. In this way each vehicle is tracked and traced through the whole process after registration, which facilitates subsequent data review.
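A minimal sketch of this persistence step is given below; the table layout, file naming and connection parameters are assumptions for illustration.

```python
import cv2
import pymysql

def save_track(reg_no, frames, track, fps=25):
    """frames: annotated BGR images; track: list of (timestamp, x, y)."""
    h, w = frames[0].shape[:2]
    vw = cv2.VideoWriter(f"vehicle_{reg_no}.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for f in frames:
        vw.write(f)
    vw.release()

    conn = pymysql.connect(host="localhost", user="tracker",
                           password="secret", database="tracking")
    with conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO track (reg_no, ts, x, y) VALUES (%s, %s, %s, %s)",
            [(reg_no, ts, x, y) for ts, x, y in track])
    conn.commit()
    conn.close()
```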
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered within the scope of this description.
The above examples merely represent a few embodiments of the present application; they are described in relative detail but are not to be construed as limiting the scope of the invention. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the spirit of the present application, and these all fall within the scope of protection of the present application. Accordingly, the scope of protection of this application shall be determined by the appended claims.
Claims (5)
1. A vehicle tracking method based on target detection and UWB positioning, wherein a monitoring area is covered by monitoring gun cameras, a UWB base station is installed in the monitoring area, and UWB tags are installed on vehicles entering the monitoring area, the vehicle tracking method based on target detection and UWB positioning comprising:
step 1, obtaining a top view of the monitoring area as a reference map, selecting any point in the reference map as an origin and establishing a plane coordinate system, and obtaining the relative position coordinates of each pixel point in the reference map based on the origin;
step 2, acquiring a monitoring image shot by a monitoring gun camera, correcting the monitoring image, selecting m key points in the corrected monitoring image, acquiring the image coordinates and relative position coordinates of the m key points, and calculating the H matrix according to the acquired image coordinates and relative position coordinates of the m key points; the method for acquiring the relative position coordinates of the m key points comprising: marking the m key points in the reference map, and obtaining the relative position coordinates of the m key points in the reference map according to the marks;
step 3, performing registration and binding of registration numbers for vehicles entering the monitoring area, determining the relative position coordinates of the vehicles based on the UWB base station, using the reference map as the base map of the UWB system, and taking the relative position coordinates determined by the UWB base station as relative position coordinates with respect to the origin in the reference map;
step 4, acquiring an image of a vehicle entering the monitoring area through the monitoring gun camera, obtaining the target frame of the vehicle in the image through a YOLOv5 model, and obtaining the relative position coordinates in the reference map corresponding to the center point of the target frame according to the H matrix;
step 5, registering the data of the UWB base station with the data of the monitoring gun camera, the registering comprising:
taking the set $A' = \{(x'_1, y'_1), (x'_2, y'_2), (x'_3, y'_3), \dots, (x'_N, y'_N)\}$ of relative position coordinates of all vehicles in the images acquired by the monitoring gun cameras at the current moment, and the set $A = \{(x_1, y_1), (x_2, y_2), (x_3, y_3), \dots, (x_N, y_N)\}$ of relative position coordinates of all vehicles acquired by the UWB base station at the current moment;

taking the $i$-th ($i \in \{1, 2, 3, \dots, N\}$) vehicle $p'_i$ in set $A'$, with relative position coordinates $(x'_i, y'_i)$; the registration number of vehicle $p'_i$ is then:

$$o_i = \mathop{\arg\min}_{j \in \{1, 2, 3, \dots, N\}} d(p_j, p'_i)$$

where $o_i$ is the registration number of vehicle $p'_i$ in the image acquired by the monitoring gun camera, $p_j$ is the $j$-th vehicle acquired by the UWB base station, whose relative position coordinates are $(x_j, y_j)$, $d(p_j, p'_i)$ is the distance between vehicle $p_j$ and vehicle $p'_i$, and $N$ is the total number of registered vehicles at the current moment;
step 6, according to the registered data, acquiring all UWB positioning and vehicle images corresponding to the vehicle under the same registration number, thereby realizing tracking of the vehicle.
2. The vehicle tracking method based on target detection and UWB positioning according to claim 1, wherein the obtaining of the relative position coordinates of each pixel point in the reference map based on the origin comprises:
selecting n key points in the reference map, obtaining the relative position coordinates of the key points with respect to the origin by surveying, and inputting the relative position coordinates of the n key points into ArcGIS for grid registration, thereby obtaining the relative position coordinates of each pixel point in the reference map with respect to the origin.
3. The vehicle tracking method based on target detection and UWB positioning according to claim 1, wherein the monitoring area contains a plurality of monitoring gun cameras and shooting-area binding needs to be performed for each monitoring gun camera, the shooting-area binding comprising:
selecting, in the reference map, a corresponding rectangular frame as the shooting area to be bound, according to the viewing-angle shooting range of each monitoring gun camera;
and establishing a correspondence with the monitoring gun camera by taking the relative position coordinates of the lower-left corner and the upper-right corner of the shooting area as the binding data.
4. The vehicle tracking method based on target detection and UWB positioning according to claim 3, wherein the monitoring area contains a plurality of monitoring gun cameras and the current area of the vehicle in the monitoring area needs to be determined according to the shooting area bound to each monitoring gun camera, the determining of the current area of the vehicle in the monitoring area comprising:
traversing all monitoring gun cameras and obtaining the relative position coordinates of the vehicle in the reference map;
if the relative position coordinates of the vehicle are in one shooting area, taking that shooting area as the current area of the vehicle and the monitoring gun camera corresponding to that shooting area as the currently used monitoring gun camera;
if the relative position coordinates of the vehicle are in several shooting areas, keeping the current area of the vehicle and the currently used monitoring gun camera unchanged;
letting the relative position coordinates of the vehicle be $(b_x, b_y)$, the relative position coordinates of the lower-left corner of the shooting area to be judged be $(x^r_{\min}, y^r_{\min})$, and the relative position coordinates of the upper-right corner of the shooting area to be judged be $(x^r_{\max}, y^r_{\max})$, whether the relative position coordinates of the vehicle lie in the shooting area to be judged is determined as follows: if the condition $x^r_{\min} \le b_x \le x^r_{\max}$ and $y^r_{\min} \le b_y \le y^r_{\max}$ is satisfied, the relative position coordinates of the vehicle are considered to be in the shooting area to be judged; otherwise, the relative position coordinates of the vehicle are not considered to be in the shooting area to be judged.
5. The vehicle tracking method based on target detection and UWB positioning according to claim 3, wherein the YOLOv5 model is a modified YOLOv5 model that adopts a detection network with a two-layer structure and adds an efficient channel attention ECA module between the backbone network and the head network.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110989507.4A | 2021-08-26 | 2021-08-26 | Vehicle tracking method based on target detection and UWB positioning

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110989507.4A | 2021-08-26 | 2021-08-26 | Vehicle tracking method based on target detection and UWB positioning
Publications (2)

Publication Number | Publication Date
---|---
CN113824880A | 2021-12-21
CN113824880B | 2023-05-19
Family

ID=78913586

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202110989507.4A | Vehicle tracking method based on target detection and UWB positioning | 2021-08-26 | 2021-08-26

Country Status (1)

Country | Link
---|---
CN | CN113824880B
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114627447A (en) * | 2022-03-10 | 2022-06-14 | 山东大学 | Road vehicle tracking method and system based on attention mechanism and multi-target tracking |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111784747A (en) * | 2020-08-13 | 2020-10-16 | 上海高重信息科技有限公司 | Vehicle multi-target tracking system and method based on key point detection and correction |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101906329B1 (en) * | 2010-12-15 | 2018-12-07 | 한국전자통신연구원 | Apparatus and method for indoor localization based on camera |
CN108647638B (en) * | 2018-05-09 | 2021-10-12 | 东软睿驰汽车技术(上海)有限公司 | Vehicle position detection method and device |
CN109035294B (en) * | 2018-06-01 | 2022-05-06 | 广东工业大学 | Image extraction system and method for moving target |
CN110570475A (en) * | 2018-06-05 | 2019-12-13 | 上海商汤智能科技有限公司 | vehicle-mounted camera self-calibration method and device and vehicle driving method and device |
CN109635657B (en) * | 2018-11-12 | 2023-01-06 | 平安科技(深圳)有限公司 | Target tracking method, device, equipment and storage medium |
US11226624B2 (en) * | 2019-04-11 | 2022-01-18 | Motorola Solutions, Inc. | System and method for enabling a 360-degree threat detection sensor system to monitor an area of interest surrounding a vehicle |
CN110418110A (en) * | 2019-07-25 | 2019-11-05 | 浙江钧普科技股份有限公司 | A kind of video frequency tracking system and algorithm based on UWB technology |
CN113115208A (en) * | 2021-04-12 | 2021-07-13 | 云汉逐影(北京)科技有限公司 | UWB-based target tracking and target image reconstruction technology |
Similar Documents

Publication | Title
---|---
CN107738612B | Automatic parking space detection and identification system based on panoramic vision auxiliary system
US9185402B2 | Traffic camera calibration update utilizing scene analysis
CN110650316A | Intelligent patrol and early warning processing method and device, electronic equipment and storage medium
CN111860352B | Multi-lens vehicle track full tracking system and method
CN111079600A | Pedestrian identification method and system with multiple cameras
CN105976392B | Vehicle tyre detection method and device based on maximum output probability
CN102446355B | Method for detecting target protruding from plane based on double viewing fields without calibration
CN113034586B | Road inclination angle detection method and detection system
CN111914653B | Personnel marking method and device
CN111967396A | Processing method, device and equipment for obstacle detection and storage medium
CN109961013A | Lane line recognition method, apparatus, device and computer-readable storage medium
CN113256731A | Target detection method and device based on monocular vision
CN107506753B | Multi-vehicle tracking method for dynamic video monitoring
CN112084892B | Road abnormal event detection management device and method thereof
CN111461222B | Method and device for obtaining track similarity of target object and electronic equipment
CN113824880B | Vehicle tracking method based on target detection and UWB positioning
CN112966638A | Transformer station operator identification and positioning method based on multiple characteristics
CN112489240B | Commodity display inspection method, inspection robot and storage medium
KR100994722B1 | Method for tracking moving object on multiple cameras using probabilistic camera hand-off
CN116110127A | Multi-linkage gas station cashing behavior recognition system
CN113674361B | Vehicle-mounted all-round-looking calibration implementation method and system
CN113743380B | Active tracking method based on video image dynamic monitoring
CN112232272B | Pedestrian recognition method by fusing laser and visual image sensor
CN114037822A | Method and system for detecting driving license
CN114004876A | Dimension calibration method, dimension calibration device and computer readable storage medium
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
EE01 | Entry into force of recordation of patent licensing contract
Licensing details (EE01):

- Application publication date: 2021-12-21
- Assignee: State Grid Zhejiang Xinxing Technology Co.,Ltd.
- Assignor: STATE GRID ZHEJIANG ELECTRIC POWER CO., LTD. NINGBO POWER SUPPLY Co.
- Contract record no.: X2023980053734
- Denomination of invention: A Vehicle Tracking Method Based on Object Detection and UWB Localization
- Granted publication date: 2023-05-19
- License type: Common License
- Record date: 2023-12-22