CN115201809A - VTS radar target fusion method, system and equipment based on surveillance video assistance - Google Patents

VTS radar target fusion method, system and equipment based on surveillance video assistance

Info

Publication number
CN115201809A
CN115201809A (application CN202210739521.3A)
Authority
CN
China
Prior art keywords
target
radar
video
targets
radar target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210739521.3A
Other languages
Chinese (zh)
Inventor
陈伟能
唐吉
曹阳
汪慧勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202210739521.3A priority Critical patent/CN115201809A/en
Publication of CN115201809A publication Critical patent/CN115201809A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to the technical field of radar target fusion, and in particular to a VTS radar target de-fusion method, system and device based on surveillance video assistance. The method comprises: calibrating a monitoring area and a radar sector, and solving the projection relation between the coordinates of the monitoring area and the coordinates of the radar sector; determining an association relation between video targets and radar targets, and associating the longitude and latitude tracks of the video targets with the longitude and latitude tracks of the radar targets; predicting radar target fusion occurrence nodes, sampling the video targets, and detecting radar target separation nodes; and selecting an optimal association item and re-associating the radar target separated after fusion with the optimal association item. By means of surveillance video assistance, the method maximizes the credibility of target identity confirmation after de-fusion, reduces the probability of mis-tracking and track loss after radar targets fuse, improves tracking stability, and confirms target identity after de-fusion more accurately than traditional methods.

Description

VTS radar target fusion method, system and equipment based on surveillance video assistance
Technical Field
The invention relates to the technical field of radar target fusion, in particular to a VTS radar target fusion method, a VTS radar target fusion system and VTS radar target fusion equipment based on surveillance video assistance.
Background
The Vessel Traffic Service (VTS) system is the most important maritime equipment for ensuring the safety of vessel navigation, and plays a vital role in reducing vessel traffic accidents, improving vessel navigation efficiency, preventing environmental pollution of water areas and the like. A VTS monitors vessels sailing in a bay and entering or leaving port using communication facilities such as AIS base stations, radars, CCTV, radiotelephones and shipborne terminals, and provides the vessels with the safety information required for navigation. The system can monitor whether a vessel deviates from its route, its heading, its speed, crossings between vessels and the like, so as to promptly provide the safe navigation information required when vessels enter and leave port.
In a Vessel Traffic Service (VTS) system, two ships may cross or meet during navigation, particularly in inland waterways; under such conditions, the radar video echoes of the two ships fuse into a single echo. When this happens, the ships often maneuver to avoid collision and may change speed and course, which poses a great challenge to target tracking; mis-tracking or track loss frequently occurs, affecting the continuous and stable tracking of the ships and thus the stable tracking performance of the system. This affects the water traffic supervision of VTS users, and target ships cannot be notified promptly and accurately when collisions or other accidents are likely to occur, which may lead to major accidents.
When radar targets that have fused subsequently separate, they cannot be re-associated with their pre-fusion identity information, so special processing is needed. Classical methods resolve target fusion by using track association and track estimation, but such methods are of limited effectiveness and have a considerable probability of error.
Disclosure of Invention
In order to solve the technical problems in the prior art, the invention provides a VTS radar target de-fusion method, system and device based on surveillance video assistance.
The first object of the invention is to provide a VTS radar target de-fusion method based on surveillance video assistance.
The second object of the invention is to provide a VTS radar target de-fusion system based on surveillance video assistance.
It is a third object of the invention to provide a computer apparatus.
The first purpose of the invention can be achieved by adopting the following technical scheme:
the VTS radar target fusion method based on surveillance video assistance comprises the following steps:
firstly, calibrating a monitoring area and a radar sector, selecting a plurality of video targets in a picture of the monitoring area, reading image position coordinates of the video targets, selecting radar targets corresponding to the video targets in the radar sector, reading longitude and latitude coordinates of image positions of the radar targets, and converting the longitude and latitude coordinates of the image positions of the radar targets into Mercator coordinates;
step two, solving the projection relation between the coordinates of the monitoring area and the coordinates of the radar sector, and calculating a relation matrix H of the image position coordinates of the video target of the monitoring area and the mercator coordinates of the radar target of the radar sector by a least square method;
step three, determining an association relation between the video targets and the radar targets: respectively detecting and tracking the video targets in the monitoring area and the radar targets in the radar sector, converting the image position coordinates of the video targets in the monitoring area into Mercator coordinates through the relation matrix H, converting the Mercator coordinates of the video targets into longitude and latitude, recording the longitude and latitude tracks of the video targets and of the radar targets, and associating the longitude and latitude tracks of the video targets with those of the radar targets;
step four, predicting radar target fusion occurrence nodes, firstly setting a time interval, calculating the average speed of each radar target in the set time interval, and predicting and calculating the position of each radar target after T seconds; traversing the predicted and calculated positions, and marking the current two radar targets when the distance between the two radar targets is smaller than the size of the echo target;
step five, sampling the video targets: identifying the video targets in the monitoring area corresponding to the marked radar targets through the association relation between video targets and radar targets, extracting the ship images corresponding to the video targets in the monitoring area, and recording the IDs of the ship images corresponding to the video targets; after waiting for several seconds, if at least one of the marked radar targets loses tracking, judging that the radar targets have fused, otherwise cancelling the marks on the marked radar targets and returning to step four;
detecting a radar target separation node, and judging that the radar target is separated after being fused when a new radar target appears around the single tracking radar target in the overlapping area of the radar sector and the monitoring area and the new radar target is associated with the video target again;
and seventhly, performing image characteristic cross comparison on the video target which is judged to be in the radar target fusion situation and the video target which is judged to be in the radar target separation situation after fusion, selecting an optimal association item, and re-associating the radar target which is separated after fusion with the optimal association item.
The second purpose of the invention can be achieved by adopting the following technical scheme:
The VTS radar target de-fusion system based on surveillance video assistance comprises:
the calibration module is used for calibrating a monitoring area and a radar sector, selecting a plurality of video targets in a picture of the monitoring area, reading image position coordinates of the video targets, selecting radar targets corresponding to the video targets in the radar sector, reading longitude and latitude coordinates of image positions of the radar targets, and converting the longitude and latitude coordinates of the image positions of the radar targets into mercator coordinates;
the correlation module is used for solving the projection relation between the coordinates of the monitoring area and the coordinates of the radar sector, and calculating a relation matrix H between the image position coordinates of the video targets of the monitoring area and the Mercator coordinates of the radar targets of the radar sector by the least squares method; and for determining an association relation between the video targets and the radar targets: respectively detecting and tracking the video targets in the monitoring area and the radar targets in the radar sector, converting the image position coordinates of the video targets in the monitoring area into Mercator coordinates through the relation matrix H, converting the Mercator coordinates of the video targets into longitude and latitude, recording the longitude and latitude tracks of the video targets and of the radar targets, and associating the longitude and latitude tracks of the video targets with those of the radar targets;
the prediction fusion occurrence module is used for predicting radar target fusion occurrence nodes: firstly setting a time interval, calculating the average speed of each radar target within the set time interval, and predicting the position of each radar target after T seconds; traversing the predicted positions, and marking the two current radar targets when the distance between two radar targets is smaller than the size of the echo target;
the video target image sampling module is used for sampling the video targets: identifying the video targets in the monitoring area corresponding to the marked radar targets through the association relation between video targets and radar targets, extracting the ship images corresponding to the video targets in the monitoring area, and recording the IDs of the ship images corresponding to the video targets; after waiting for several seconds, if at least one of the marked radar targets loses tracking, judging that the radar targets have fused, otherwise cancelling the marks on the marked radar targets and returning to the fourth step;
the predicted target separation module is used for detecting radar target separation nodes: when a new radar target appears around a singly tracked radar target in the overlapping area of the radar sector and the monitoring area and the new radar target is re-associated with a video target, it is judged that the radar targets have separated after fusion;
and the optimal association item association module is used for traversing the N video targets which are judged to be in the radar target fusion situation and the video targets which are judged to be in the radar target separation situation after fusion to perform image characteristic cross comparison, selecting the optimal association item, and re-associating the radar target separated after fusion with the optimal association item.
The third purpose of the invention can be achieved by adopting the following technical scheme:
a computer device comprises a processor and a memory for storing a program executable by the processor, wherein when the processor executes the program stored by the memory, the VTS radar target fusion method based on surveillance video assistance is realized.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the invention provides a VTS radar target de-fusion method, a system and equipment based on surveillance video assistance, which utilize the surveillance video assistance method to maximize the credibility of target identity confirmation after target de-fusion through image sampling before fusion and image target comparison identity after fusion based on radar echoes, reduce the probability of false tracking and track loss after radar target fusion, improve the tracking stability, and have more accurate effect of confirming the target identity after de-fusion compared with the traditional method.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
FIG. 1 is a flow chart of a VTS radar target de-fusion method in an embodiment of the present invention;
FIG. 2 is a schematic diagram of an image selection point and a radar coordinate selection point in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a relationship matrix in an embodiment of the invention;
FIG. 4 is a schematic diagram of a velocity difference probability distribution according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a radar fusion and separation process in an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described in further detail with reference to the accompanying drawings and examples, and it is obvious that the described examples are some, but not all, examples of the present invention, and the embodiments of the present invention are not limited thereto. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Example 1
As shown in FIG. 1, the VTS radar target de-fusion method based on surveillance video assistance according to the present invention includes the following steps:
the method comprises the steps of firstly, calibrating a monitoring area and a radar sector, selecting a plurality of video targets in a picture of the monitoring area, reading image position coordinates of the video targets, selecting radar targets corresponding to the video targets in the radar sector, reading longitude and latitude coordinates of image positions of the radar targets, and converting the longitude and latitude coordinates of the image positions of the radar targets into Mercator coordinates.
As shown in FIG. 2, the image selection points and the radar coordinate selection points correspond to each other. In this embodiment, the monitoring camera is adjusted within the coverage range of the radar so that the video covers the water area to be monitored. Four video targets are selected in the video picture and their image position coordinates are taken; the image position coordinate of a video target is the coordinate of the midpoint of the bottom of the target's image, and the set of image position coordinates of the video targets is recorded as P_camera, denoted (u, v, 1)^T. Four corresponding radar targets are selected in the radar sector; the image position of a radar target is the gravity center point of the target's echo image, the longitude and latitude coordinates of this point are read and converted into Mercator coordinates, and the set of Mercator coordinates of the radar targets is recorded as P_radar, denoted (x, y, 1)^T. The formula for converting longitude and latitude coordinates into Mercator coordinates is:

x = R · Ln · π/180
y = R · ln( tan( π/4 + La · π/360 ) )

where R is the earth radius, R = 6378.1370 kilometers, Ln is the geographical longitude in degrees, and La is the geographical latitude in degrees.
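For illustration only, a minimal Python sketch of this longitude/latitude-to-Mercator conversion is given below; the function name lonlat_to_mercator and the example point are illustrative assumptions and not part of the disclosure:

```python
import math

R = 6378.137  # spherical earth radius in kilometers, as used above

def lonlat_to_mercator(lon_deg, lat_deg):
    """Project geographic longitude/latitude (degrees) onto spherical Mercator coordinates (km)."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# Illustrative example: a point at 113.5 deg E, 22.7 deg N
print(lonlat_to_mercator(113.5, 22.7))
```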
And step two, solving the projection relation between the coordinates of the monitoring area and the coordinates of the radar sector, and calculating a relation matrix H of the image position coordinates of the video target of the monitoring area and the mercator coordinates of the radar target of the radar sector by a least square method.
The projection relation between the coordinates of the monitoring area and the coordinates of the radar sector is solved. The mapping relation between the image position coordinates of the video targets of the monitoring area and the Mercator coordinates of the radar targets of the radar sector is:

P_radar = H · P_camera

that is, (x, y, 1)^T = H (u, v, 1)^T with

H = | h11 h12 h13 |
    | h21 h22 h23 |
    | h31 h32 h33 |

where P_radar is the set of Mercator coordinates of the radar targets, denoted (x, y, 1)^T; P_camera is the set of image position coordinates of the video targets, denoted (u, v, 1)^T; and H is a 3 × 3 non-singular matrix.

The relation matrix H is solved by the least squares method. Normalizing h33 = 1, the mapping relation is converted into the following form:

x = (h11·u + h12·v + h13) / (h31·u + h32·v + 1)
y = (h21·u + h22·v + h23) / (h31·u + h32·v + 1)

Substituting the 4 corresponding point pairs above according to the principle of least squares yields a linear system

A · h = b

where, as shown in FIG. 3, A denotes the resulting 8 × 8 coefficient matrix, h denotes the middle 8 × 1 vector of unknown elements (h11, h12, h13, h21, h22, h23, h31, h32)^T, and b denotes the 8 × 1 vector on the right-hand side of the equals sign. From the matrix operation relation, h is obtained as

h = (A^T A)^{-1} (A^T b)

and the solved elements are brought back into the full relation matrix H.
Step three, determining the association relation between the video targets and the radar targets: the video targets in the monitoring area and the radar targets in the radar sector are detected and tracked respectively, the image position coordinates of the video targets in the monitoring area are converted into Mercator coordinates through the relation matrix H, the Mercator coordinates of the video targets are converted into longitude and latitude, the longitude and latitude tracks of the video targets and of the radar targets are recorded, and the longitude and latitude tracks of the video targets are associated with those of the radar targets.
Specifically, pictures containing ships are collected from the field-of-view images and labeled, and a YOLOv4 algorithm model is trained on them; the YOLOv4 model is used to detect the video targets in the monitoring area, and a KCF algorithm is used to track the video targets; a connected-domain analysis algorithm is used to detect the radar targets in the radar sector, and a Kalman tracking algorithm is used to track them.
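The YOLOv4/KCF video pipeline is not reproduced here; by way of illustration only, the sketch below shows one possible form of the connected-domain (connected-component) analysis used for radar echo extraction, using OpenCV. The intensity threshold and minimum echo area are illustrative assumptions:

```python
import cv2
import numpy as np

def detect_radar_echoes(radar_frame, intensity_thresh=40, min_area=20):
    """Extract candidate radar echoes from a single-channel (uint8) radar image by thresholding
    followed by connected-component analysis; returns (cx, cy, area) per echo in pixel coordinates."""
    _, binary = cv2.threshold(radar_frame, intensity_thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    echoes = []
    for i in range(1, n):                         # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cx, cy = centroids[i]
            echoes.append((float(cx), float(cy), int(stats[i, cv2.CC_STAT_AREA])))
    return echoes
```

The echo centroids returned by such a step would then be fed to the Kalman tracker mentioned above.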
The longitude and latitude tracks of the video target and the longitude and latitude tracks of the radar target are correlated, and the correlation method comprises the following steps:
1. and selecting a video target in the monitoring area as a pre-associated object by adopting a nearest neighbor domain method for each radar target tracked by the radar, and recording the distance between each radar target and the pre-associated object.
Here, the earth is assumed to be a spherical model, and the distance S between the two is calculated as:

S = 2R · arcsin( sqrt( sin²(α/2) + cos(Lat1) · cos(Lat2) · sin²(β/2) ) )

where Lng1, Lat1 are the longitude and latitude of the radar target, and Lng2, Lat2 are the longitude and latitude of the traversal point; α = Lat1 − Lat2 is the difference between the two latitudes; β = Lng1 − Lng2 is the difference between the two longitudes; and R = 6378.137 km is the equatorial radius of the earth.
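For illustration only, a Python sketch of this spherical-model distance is given below; it implements the haversine form written above, and the function name is an illustrative choice:

```python
import math

R_EQ = 6378.137  # equatorial radius of the earth in kilometers, as used above

def sphere_distance_km(lng1, lat1, lng2, lat2):
    """Great-circle distance S between a radar target (lng1, lat1) and a traversal point
    (lng2, lat2), all given in degrees, on a spherical earth model."""
    lat1_r, lat2_r = math.radians(lat1), math.radians(lat2)
    alpha = lat1_r - lat2_r                           # latitude difference
    beta = math.radians(lng1) - math.radians(lng2)    # longitude difference
    a = math.sin(alpha / 2) ** 2 + math.cos(lat1_r) * math.cos(lat2_r) * math.sin(beta / 2) ** 2
    return 2 * R_EQ * math.asin(math.sqrt(a))
```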
2. The ship course and ship speed of each video target are calculated from the track of the video target.
In this embodiment, the course is recorded continuously for 5 seconds at 0.1-second intervals with true north as the reference direction, and the track is recorded in pixel coordinates (u, v), where u is the pixel coordinate measured from the top-left vertex of the image (taken as the origin) with the rightward direction of the image as positive, and v is the pixel coordinate measured from the same origin with the downward direction of the image as positive.
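For illustration only, the sketch below derives a heading (clockwise from true north) and a speed from such a pixel-coordinate track by mapping pixels into the Mercator plane through the relation matrix H; the 0.1-second sampling interval follows this embodiment, while the end-point differencing and the neglect of the Mercator scale factor are illustrative simplifications:

```python
import math
import numpy as np

def heading_and_speed(track_uv, H, dt=0.1):
    """Estimate heading (degrees, clockwise from true north) and speed from a pixel track
    sampled every dt seconds, mapping pixels to Mercator kilometers through H."""
    def to_mercator(u, v):
        x, y, w = np.asarray(H) @ np.array([u, v, 1.0])
        return x / w, y / w
    x0, y0 = to_mercator(*track_uv[0])
    x1, y1 = to_mercator(*track_uv[-1])
    dx, dy = x1 - x0, y1 - y0                     # +x is east, +y is north in the Mercator plane
    heading = math.degrees(math.atan2(dx, dy)) % 360.0
    elapsed = dt * (len(track_uv) - 1)
    # Speed is measured in the Mercator plane; the latitude-dependent scale factor is ignored here.
    speed_kmh = (math.hypot(dx, dy) / elapsed) * 3600.0 if elapsed > 0 else 0.0
    return heading, speed_kmh
```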
3. A probability threshold is set; the probability values of the ship distance difference, the ship course difference and the ship speed difference between the video target and the radar target are calculated respectively, and a joint probability value is calculated.
The ship distance difference and the probability value P conform to the following formula:

P = exp( −Δd² / (2σ²) )

where Δd is the ship distance difference, which follows a distribution with σ = 7;
the ship course difference and the probability value P conform to the following formula:

P = exp( −Δc² / (2σ²) )

where Δc is the ship course difference, which follows a distribution with σ = 20;
the ship speed difference and the probability value P conform to the following formula:

P = exp( −Δv² / (2σ²) )

where Δv is the ship speed difference, which follows a distribution with σ = 1.5.
The joint probability value is the product of three probability values, namely the probability value of the ship distance difference, the probability value of the ship course difference and the probability value of the ship speed difference.
The probability threshold is selected as any value between 0.3 and 0.45. The three difference values are obtained as follows: the ship distance difference is calculated by the same method as in item 1 above; the ship course difference is the absolute value of the difference between the ship course measured by the radar system and the target course calculated after video conversion; the ship speed difference is the absolute value of the difference between the ship speed measured by the radar system and the target speed calculated after video conversion.
4. Pre-association relations whose joint probability value is lower than the set probability threshold are cancelled, and association relations whose joint probability value is higher than the set probability threshold are confirmed.
In this embodiment, the threshold is preferably set to 0.4. Assuming that the ship distance difference between the video-converted target and the radar target is 6, the ship course difference is 12° and the ship speed difference is 1.0, the probabilities calculated are respectively 0.693, 0.835 and 0.801; the joint probability value is their product, 0.463. Since this is higher than the set probability threshold of 0.4, the association relation between the two targets is confirmed.
As shown in FIG. 4, the probability distribution of the speed difference is the probability distribution curve for σ = 1.5; the marked point in the figure indicates that when the speed difference between the two targets is 1, the probability of association obtained from the distribution curve is 0.801.
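For illustration only, the following Python sketch reproduces the joint-probability association test above, using the exp(−d²/(2σ²)) probability form with σ = 7, 20 and 1.5 and the preferred threshold 0.4 of this embodiment; the function names are illustrative assumptions:

```python
import math

def gauss_prob(delta, sigma):
    """Probability value of a difference delta under the exp(-d^2 / (2*sigma^2)) form used above."""
    return math.exp(-delta ** 2 / (2 * sigma ** 2))

def joint_association_prob(dist_diff, course_diff, speed_diff,
                           sigma_d=7.0, sigma_c=20.0, sigma_v=1.5):
    """Joint probability: product of the distance, course and speed probability values."""
    return (gauss_prob(dist_diff, sigma_d)
            * gauss_prob(course_diff, sigma_c)
            * gauss_prob(speed_diff, sigma_v))

# Worked example from this embodiment: distance diff 6, course diff 12 deg, speed diff 1.0
p = joint_association_prob(6, 12, 1.0)
print(round(p, 3), p > 0.4)    # ~0.463 -> the association relation is confirmed
```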
Step four, predicting radar target fusion generation nodes, setting a time interval, calculating the average speed of each radar target in the set time interval, and predicting and calculating the position of each radar target after T seconds; and traversing the predicted and calculated positions, and marking the current two radar targets when the distance between the two radar targets is smaller than the size of the echo target.
Specifically, a time interval is set, the average speed of each radar target within this interval is calculated, and the position of each radar target after 3 seconds is predicted; the average speed is obtained by dividing the total displacement of the ship's track points over the last 3 seconds by the elapsed time. The predicted positions are traversed, and two radar targets are marked when the distance between them is smaller than the maximum distance from the center of gravity of their echoes to the echo edge. FIG. 5 schematically illustrates the radar target fusion and separation process.
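For illustration only, a sketch of this fusion-node prediction is given below; the track and echo-size data structures, the constant-velocity extrapolation and the pairwise check are illustrative assumptions consistent with the description above:

```python
from itertools import combinations

def predict_position(track, horizon=3.0, window=3.0):
    """Predict an (x, y) position 'horizon' seconds ahead from the average velocity over the
    last 'window' seconds. track: list of (t, x, y) samples, oldest first, in Mercator km."""
    t_end, x_end, y_end = track[-1]
    recent = [p for p in track if p[0] >= t_end - window]
    t0, x0, y0 = recent[0]
    dt = max(t_end - t0, 1e-6)                       # assumes the track spans the window
    vx, vy = (x_end - x0) / dt, (y_end - y0) / dt    # average velocity over the window
    return x_end + vx * horizon, y_end + vy * horizon

def mark_fusion_candidates(tracks, echo_radius):
    """Return pairs of target IDs whose predicted positions come closer than the larger of the
    two center-of-gravity-to-edge echo distances. tracks / echo_radius are dicts keyed by ID."""
    predicted = {tid: predict_position(tr) for tid, tr in tracks.items()}
    marked = []
    for (id_a, pa), (id_b, pb) in combinations(predicted.items(), 2):
        dist = ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5
        if dist < max(echo_radius[id_a], echo_radius[id_b]):
            marked.append((id_a, id_b))
    return marked
```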
Step five, sampling the video targets: the video targets in the monitoring area corresponding to the marked radar targets are identified through the association relation between video targets and radar targets, the ship images corresponding to these video targets are extracted from the monitoring area, and the ID (identity) of the ship image corresponding to each video target is recorded; after waiting for several seconds (preferably 5 seconds), if at least one of the marked radar targets loses tracking, it is judged that the radar targets have fused; otherwise the marks on the marked radar targets are cancelled and the process returns to step four.
And step six, detecting radar target separation nodes, and judging the separation situation after the radar targets are fused when new radar targets appear around the single tracking radar target in the overlapping area of the radar sector and the monitoring area and the new radar target is associated with the video target again.
Step seven, the N video targets judged to be in the radar-target-fused situation are traversed and their image features are cross-compared with those of the video target re-associated when the separated-after-fusion situation is judged; the optimal association item is selected, and the radar target separated after fusion is re-associated with the optimal association item.
Specifically, traversing the N video targets judged to be in the radar-target-fused situation and cross-comparing their image features with the video target judged to be in the separated-after-fusion situation comprises the following steps:
(1) Each time the new radar target in step six is re-associated with a video target, the ship image of that video target is extracted from the video monitoring area and recorded as image 1, and its HOG feature is extracted and recorded as feature 1. The HOG (Histogram of Oriented Gradients) feature is a feature descriptor used for object detection in computer vision and image processing.
(2) The ship images of the video targets extracted in step five are resized to the size of image 1, the HOG feature of the ship image of each video target is extracted and recorded into feature group 2, and the HOG feature of each ship image in feature group 2 is compared with feature 1 in turn, calculating the Hamming distance between the features.
(3) The video target judged to be in the radar-target-fused situation whose Hamming distance to feature 1 is smallest is selected as the optimal association item, and the ID of the video target currently associated with the radar target separated after fusion is replaced with the pre-fusion ID of the optimal association item selected from feature group 2, so that the radar target separated after fusion is re-associated with the optimal association item.
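For illustration only, a Python sketch of this HOG-based cross-comparison is given below, using scikit-image for the HOG features and OpenCV for resizing. Because HOG vectors are real-valued, they are binarized against their median here before the Hamming distance is taken; that binarization step, like the crop size and HOG parameters, is an illustrative assumption not stated in the patent:

```python
import cv2
import numpy as np
from skimage.feature import hog

def hog_signature(ship_img_bgr, size=(128, 64)):
    """Resize a ship image crop to a common size and return a binarized HOG vector."""
    gray = cv2.cvtColor(cv2.resize(ship_img_bgr, size), cv2.COLOR_BGR2GRAY)
    feat = hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return (feat > np.median(feat)).astype(np.uint8)   # binarize so a Hamming distance applies

def best_association(separated_crop, pre_fusion_crops):
    """Return the pre-fusion video-target ID whose HOG signature ('feature group 2') is closest,
    in Hamming distance, to the re-detected target's signature ('feature 1')."""
    ref = hog_signature(separated_crop)
    distances = {vid: int(np.sum(ref != hog_signature(img)))   # Hamming distance
                 for vid, img in pre_fusion_crops.items()}
    return min(distances, key=distances.get)
```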
Example 2:
the embodiment provides a VTS radar target fusion solving system based on surveillance video assistance, which includes a calibration module, an association module, a prediction fusion generation module, a video target image sampling module, a prediction target separation module, and an optimal association module, and the specific functions of each module are as follows:
the calibration module is used for calibrating a monitoring area and a radar sector, selecting a plurality of video targets in a picture of the monitoring area, reading image position coordinates of the video targets, selecting radar targets corresponding to the video targets in the radar sector, reading longitude and latitude coordinates of image positions of the radar targets, and converting the longitude and latitude coordinates of the image positions of the radar targets into mercator coordinates;
the correlation module is used for solving the projection relation between the coordinates of the monitoring area and the coordinates of the radar sector, and calculating a relation matrix H of the image position coordinates of the video target of the monitoring area and the mercator coordinates of the radar target of the radar sector through a least square method; determining an incidence relation between a video target and a radar target, respectively detecting and tracking the video target in a monitoring area and the radar target in a radar sector, converting image position coordinates of the video target in the monitoring area into mercator coordinates through a relation matrix H, converting the mercator coordinates of the video target into longitude and latitude, recording longitude and latitude tracks of the video target and longitude and latitude tracks of the radar target, and correlating the longitude and latitude tracks of the video target and the longitude and latitude tracks of the radar target.
The prediction fusion occurrence module is used for predicting radar target fusion occurrence nodes: firstly setting a time interval, calculating the average speed of each radar target within the set time interval, and predicting the position of each radar target after T seconds; the predicted positions are traversed, and the two current radar targets are marked when the distance between two radar targets is smaller than the size of the echo target.
The video target image sampling module is used for sampling the video targets: identifying the video targets in the monitoring area corresponding to the marked radar targets through the association relation between video targets and radar targets, extracting the ship images corresponding to the video targets in the monitoring area, and recording the IDs of the ship images corresponding to the video targets; after waiting for several seconds, if at least one of the marked radar targets loses tracking, it is judged that the radar targets have fused, otherwise the marks on the marked radar targets are cancelled and the process returns to the fourth step.
And the predicted target separation module is used for detecting radar target separation nodes, and judging that the radar targets are separated after fusion when new radar targets appear around the single tracking radar target in the overlapping area of the radar sector and the monitoring area and are associated with the video target again.
And the optimal association item association module is used for traversing the N video targets which are judged to be in the radar target fusion situation and the video targets which are judged to be in the radar target separation situation after fusion to perform image characteristic cross comparison, selecting the optimal association item, and re-associating the radar target separated after fusion with the optimal association item.
Example 3:
the present embodiment provides a computer device, which may be a server, a computer, or the like, and includes a processor, a memory, an input device, a display, and a network interface, which are connected by a system bus, where the processor is configured to provide computing and control capabilities, the memory includes a nonvolatile storage medium and an internal memory, the nonvolatile storage medium stores an operating system, a computer program, and a database, the internal memory provides an environment for the operating system and the computer program in the nonvolatile storage medium to run, and when the processor executes the computer program stored in the memory, the VTS radar target fusion method based on surveillance video assistance in embodiment 1 is implemented as follows:
firstly, calibrating a monitoring area and a radar sector, selecting a plurality of video targets in a picture of the monitoring area, reading image position coordinates of the video targets, selecting radar targets corresponding to the video targets in the radar sector, reading longitude and latitude coordinates of image positions of the radar targets, and converting the longitude and latitude coordinates of the image positions of the radar targets into Mercator coordinates;
step two, solving the projection relation between the coordinates of the monitoring area and the coordinates of the radar sector, and calculating a relation matrix H between the image position coordinates of the video targets of the monitoring area and the Mercator coordinates of the radar targets of the radar sector by the least squares method;
step three, determining an association relation between the video targets and the radar targets: respectively detecting and tracking the video targets in the monitoring area and the radar targets in the radar sector, converting the image position coordinates of the video targets in the monitoring area into Mercator coordinates through the relation matrix H, converting the Mercator coordinates of the video targets into longitude and latitude, recording the longitude and latitude tracks of the video targets and of the radar targets, and associating the longitude and latitude tracks of the video targets with those of the radar targets;
step four, predicting radar target fusion occurrence nodes, firstly setting a time interval, calculating the average speed of each radar target in the set time interval, and predicting and calculating the position of each radar target after T seconds; traversing the predicted and calculated positions, and marking two current radar targets when the distance between the two radar targets is smaller than the size of the echo target;
step five, sampling the video targets: identifying the video targets in the monitoring area corresponding to the marked radar targets through the association relation between video targets and radar targets, extracting the ship images corresponding to the video targets in the monitoring area, and recording the IDs of the ship images corresponding to the video targets; after waiting for several seconds, if at least one of the marked radar targets loses tracking, judging that the radar targets have fused, otherwise cancelling the marks on the marked radar targets and returning to step four;
and step six, detecting radar target separation nodes, and judging that the radar targets are separated after being fused when new radar targets appear around the single tracking radar target in the overlapping area of the radar sector and the monitoring area and are associated with the video target again.
And step seven, traversing the N video targets which are judged to be in the fused state of the radar targets and the video targets which are judged to be in the separated state after the radar targets are fused, performing image characteristic cross comparison, selecting an optimal association item, and re-associating the fused and separated radar targets with the optimal association item.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such modifications are intended to be included in the scope of the present invention.

Claims (10)

1. The VTS radar target de-fusion method based on surveillance video assistance is characterized by comprising the following steps:
firstly, calibrating a monitoring area and a radar sector, selecting a plurality of video targets in a picture of the monitoring area, reading image position coordinates of the video targets, selecting radar targets corresponding to the video targets in the radar sector, reading longitude and latitude coordinates of image positions of the radar targets, and converting the longitude and latitude coordinates of the image positions of the radar targets into Mercator coordinates;
step two, solving the projection relation between the coordinates of the monitoring area and the coordinates of the radar sector, and calculating a relation matrix H of the image position coordinates of the video target of the monitoring area and the mercator coordinates of the radar target of the radar sector by a least square method;
step three, determining an association relation between the video targets and the radar targets: respectively detecting and tracking the video targets in the monitoring area and the radar targets in the radar sector, converting the image position coordinates of the video targets in the monitoring area into Mercator coordinates through the relation matrix H, converting the Mercator coordinates of the video targets into longitude and latitude, recording the longitude and latitude tracks of the video targets and of the radar targets, and associating the longitude and latitude tracks of the video targets with those of the radar targets;
step four, predicting radar target fusion occurrence nodes, firstly setting a time interval, calculating the average speed of each radar target in the set time interval, and predicting and calculating the position of each radar target after T seconds; traversing the predicted and calculated positions, and marking two current radar targets when the distance between the two radar targets is smaller than the size of the echo target;
step five, sampling the video targets: identifying the video targets in the monitoring area corresponding to the marked radar targets through the association relation between video targets and radar targets, extracting the ship images corresponding to the video targets in the monitoring area, and recording the IDs of the ship images corresponding to the video targets; after waiting for several seconds, if at least one of the marked radar targets loses tracking, judging that the radar targets have fused, otherwise cancelling the marks on the marked radar targets and returning to step four;
detecting a radar target separation node, and judging that the radar targets are separated after being fused when new radar targets appear around the single tracking radar target in the overlapping area of the radar sector and the monitoring area and the new radar target is associated with the video target again;
and step seven, traversing the N video targets which are judged to be in the radar target fusion situation and the video targets which are judged to be in the radar target separation situation after the radar target fusion, performing image characteristic cross comparison, selecting an optimal association item, and re-associating the radar target which is separated after the fusion with the optimal association item.
2. The VTS radar target de-fusion method based on surveillance video assistance according to claim 1, wherein the image position coordinate of a video target is the coordinate of the midpoint of the bottom of the video target's image, and the set of image position coordinates of the video targets is recorded as P_camera, denoted (u, v, 1)^T; the longitude and latitude coordinates of the image position of a radar target are the longitude and latitude coordinates of the gravity center point of the radar target's image, and the set of Mercator coordinates of the radar targets is recorded as P_radar, denoted (x, y, 1)^T.
3. The surveillance video assistance-based VTS radar target de-fusion method of claim 2, wherein the mapping relation between the image position coordinates of the video targets of the monitoring area and the Mercator coordinates of the radar targets of the radar sector is as follows:

P_radar = H · P_camera

that is, (x, y, 1)^T = H (u, v, 1)^T with

H = | h11 h12 h13 |
    | h21 h22 h23 |
    | h31 h32 h33 |

where H is a 3 × 3 non-singular matrix.
4. The surveillance video assistance-based VTS radar target de-fusion method of claim 1, wherein respectively detecting and tracking the video targets of the monitoring area and the radar targets in the radar sector comprises: collecting pictures containing ships from the field-of-view images, labeling them and training a YOLOv4 algorithm model, detecting the video targets in the monitoring area with the YOLOv4 algorithm model, and tracking the video targets with a KCF algorithm; detecting the radar targets in the radar sector with a connected-domain analysis algorithm, and tracking the radar targets in the radar sector with a Kalman tracking algorithm.
5. The surveillance video assistance-based VTS radar target de-fusion method of claim 4, wherein the associating the latitude and longitude trajectory of the video target with the latitude and longitude trajectory of the radar target comprises:
selecting a video target in a monitoring area as a pre-associated object by adopting a nearest neighbor domain method for each radar target tracked by the radar, and recording the distance between each radar target and the pre-associated object;
calculating the ship course and the ship speed of each video target according to the track of the video target;
setting a probability threshold, respectively calculating a ship distance difference probability value, a ship course difference probability value and a ship speed difference probability value according to a ship distance difference, a ship course difference and a ship speed difference between a video target and a radar target, and calculating a joint probability value;
canceling the pre-association relation with the joint probability value lower than the set probability threshold value, and determining the association relation with the joint probability value higher than the set probability threshold value.
6. The surveillance video assistance-based VTS radar target de-fusion method according to claim 5, wherein the ship distance difference and the probability value P conform to the following formula:

P = exp( −Δd² / (2σ²) )

where Δd is the ship distance difference, which follows a distribution with σ = 7;

the ship course difference and the probability value P conform to the following formula:

P = exp( −Δc² / (2σ²) )

where Δc is the ship course difference, which follows a distribution with σ = 20;

the ship speed difference and the probability value P conform to the following formula:

P = exp( −Δv² / (2σ²) )

where Δv is the ship speed difference, which follows a distribution with σ = 1.5.
7. The VTS radar target de-fusion method based on surveillance video assistance of claim 6, wherein the joint probability value is a product of three probability values, namely a probability value of a ship distance difference, a probability value of a ship course difference and a probability value of a ship speed difference; the probability threshold is any value between 0.3 and 0.45.
8. The surveillance video assistance-based VTS radar target de-fusion method of claim 1, wherein traversing N video targets determined as the radar target fused situation and performing image feature cross-comparison on video targets determined as the radar target separation situation after fusion comprises:
extracting the ship image of the video target in the monitoring area and recording it as image 1, and extracting the HOG feature of the ship image and recording it as feature 1, each time the new radar target in step six is re-associated with a video target;

resizing the ship images of the video targets judged to be in the radar-target-fused situation to the size of image 1, respectively extracting the HOG feature of the ship image of each video target, comparing the HOG feature of the ship image of each video target with feature 1 in turn, and calculating the Hamming distance between the features;

and selecting the video target judged to be in the radar-target-fused situation whose Hamming distance is smallest as the optimal association item, and replacing the ID of the video target currently associated with the radar target separated after fusion with the ID of the optimal association item.
9. A VTS radar target de-fusion system based on surveillance video assistance, characterized in that the system comprises:
the calibration module is used for calibrating a monitoring area and a radar sector, selecting a plurality of video targets in a picture of the monitoring area, reading image position coordinates of the video targets, selecting radar targets corresponding to the video targets in the radar sector, reading longitude and latitude coordinates of image positions of the radar targets, and converting the longitude and latitude coordinates of the image positions of the radar targets into mercator coordinates;
the correlation module is used for solving the projection relation between the coordinates of the monitoring area and the coordinates of the radar sector, and calculating a relation matrix H between the image position coordinates of the video targets of the monitoring area and the Mercator coordinates of the radar targets of the radar sector by the least squares method; and for determining an association relation between the video targets and the radar targets: respectively detecting and tracking the video targets in the monitoring area and the radar targets in the radar sector, converting the image position coordinates of the video targets in the monitoring area into Mercator coordinates through the relation matrix H, converting the Mercator coordinates of the video targets into longitude and latitude, recording the longitude and latitude tracks of the video targets and of the radar targets, and associating the longitude and latitude tracks of the video targets with those of the radar targets;
the system comprises a prediction fusion generation module, a prediction fusion generation module and a fusion generation module, wherein the prediction fusion generation module is used for predicting radar target fusion generation nodes, firstly setting a time interval, calculating the average speed of each radar target in the set time interval, and predicting and calculating the position of each radar target after T seconds; traversing the predicted and calculated positions, and marking the current two radar targets when the distance between the two radar targets is smaller than the size of the echo target;
the video target image sampling module is used for sampling the video targets: identifying the video targets in the monitoring area corresponding to the marked radar targets through the association relation between video targets and radar targets, extracting the ship images corresponding to the video targets in the monitoring area, and recording the IDs of the ship images corresponding to the video targets; after waiting for several seconds, if at least one of the marked radar targets loses tracking, judging that the radar targets have fused, otherwise cancelling the marks on the marked radar targets and returning to the fourth step;
the system comprises a predicted target separation module, a target fusion module and a target fusion module, wherein the predicted target separation module is used for detecting radar target separation nodes, and when a new radar target appears around a single tracking radar target in an overlapping area of a radar sector and a monitoring area and the new radar target is associated with a video target again, the radar target is judged to be separated after being fused;
and the optimal association item association module is used for traversing the N video targets which are judged to be in the radar target fusion situation and the video targets which are judged to be in the radar target separation situation after fusion to perform image characteristic cross comparison, selecting the optimal association item, and re-associating the radar target separated after fusion with the optimal association item.
10. A computer device comprising a processor and a memory for storing a program executable by the processor, wherein the processor, when executing the program stored in the memory, implements the surveillance video assistance-based VTS radar target de-fusion method of any one of claims 1-8.
CN202210739521.3A 2022-06-28 2022-06-28 VTS radar target fusion method, system and equipment based on surveillance video assistance Pending CN115201809A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210739521.3A CN115201809A (en) 2022-06-28 2022-06-28 VTS radar target fusion method, system and equipment based on surveillance video assistance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210739521.3A CN115201809A (en) 2022-06-28 2022-06-28 VTS radar target fusion method, system and equipment based on surveillance video assistance

Publications (1)

Publication Number Publication Date
CN115201809A true CN115201809A (en) 2022-10-18

Family

ID=83578166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210739521.3A Pending CN115201809A (en) 2022-06-28 2022-06-28 VTS radar target fusion method, system and equipment based on surveillance video assistance

Country Status (1)

Country Link
CN (1) CN115201809A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115657012A (en) * 2022-12-23 2023-01-31 深圳佑驾创新科技有限公司 Matching method, device and equipment of image target and radar target and storage medium


Similar Documents

Publication Publication Date Title
CN103714718B (en) A kind of inland river bridge area ship safe navigation precontrol system
CN113808282B (en) Multi-navigation element data fusion method
Wilthil et al. A target tracking system for ASV collision avoidance based on the PDAF
CN104660993B (en) Maritime affairs intelligent control method and system based on AIS and CCTV
CN105184816A (en) Visual inspection and water surface target tracking system based on USV and detection tracking method thereof
JP2013083623A (en) Integration method of satellite information and ship information for integrally monitoring ship
CN104535066A (en) Marine target and electronic chart superposition method and system in on-board infrared video image
CN110751077B (en) Optical remote sensing picture ship detection method based on component matching and distance constraint
CN113112540B (en) Method for positioning ship image target by using AIS (automatic identification system) Calibration CCTV (CCTV) camera in VTS (video tape server) system
Tang et al. Detection of abnormal vessel behaviour based on probabilistic directed graph model
Silveira et al. Assessment of ship collision estimation methods using AIS data
CN111163290A (en) Device and method for detecting and tracking night navigation ship
CN112347218B (en) Unmanned ship environment map generation method and unmanned ship sensing system
CN111968046A (en) Radar photoelectric sensor target association fusion method based on topological structure
CN115201809A (en) VTS radar target fusion method, system and equipment based on surveillance video assistance
CN111009008A (en) Self-learning strategy-based automatic airport airplane tagging method
CN110458089B (en) Marine target association system and method based on high-low orbit optical satellite observation
Wu et al. A new multi-sensor fusion approach for integrated ship motion perception in inland waterways
CN110686679B (en) High-orbit optical satellite offshore target interruption track correlation method
CN109814074A (en) Multiple targets tracking based on image aspects processing
Schöller et al. Vision-based object tracking in marine environments using features from neural network detections
Xu et al. Trajectory clustering for SVR-based Time of Arrival estimation
Lu et al. Study on Marine Fishery Law Enforcement Inspection System based on Improved YOLO V5 with UAV
CN112686106A (en) Method for converting video image into maritime radar image
CN115308762B (en) Ship identification method and device based on laser radar and AIS

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination