CN114047505A - Target detection display control method and device supporting photoelectric cross positioning in radar failure scene


Info

Publication number: CN114047505A
Application number: CN202111232889.2A
Authority: CN (China)
Prior art keywords: radar, photoelectric, module, data, source
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 姜山, 李兆阳, 何聪, 孙泽, 王彦成
Current assignee: Sichuan Jiuzhou ATC Technology Co Ltd
Original assignee: Sichuan Jiuzhou ATC Technology Co Ltd
Application filed by Sichuan Jiuzhou ATC Technology Co Ltd
Priority to: CN202111232889.2A
Publication of: CN114047505A

Classifications

    • G01S13/86: Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867: Combination of radar systems with cameras
    • G01S13/06: Systems determining position data of a target
    • G01S13/50: Systems of measurement based on relative movement of target
    • G01S7/415: Identification of targets based on measurements of movement associated with the target
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/25: Fusion techniques
    • G06N3/045: Combinations of networks


Abstract

The invention provides a target detection display control method and device supporting photoelectric cross positioning in a radar failure scene. The method comprises protocol conversion, target data processing, equipment health detection, multi-source radar fusion, multi-source photoelectric cross positioning, target classification and identification, and display control management. Built on a streaming data processing framework, the invention fuses and displays multi-source signals while radar equipment and photoelectric equipment operate simultaneously; by relying on the strengths of photoelectric detection and combining the cross positioning of several photoelectric devices, it achieves target classification, identification and tracking while positioning aerial targets more accurately, and when the radar fails it can still conduct operations using the target position data generated by photoelectric cross positioning.

Description

Target detection display control method and device supporting photoelectric cross positioning in radar failure scene
Technical Field
The invention relates to the technical field of target detection, and in particular to a target detection display control method and device supporting photoelectric cross positioning in a radar failure scene.
Background
Unmanned aerial vehicle (UAV) targets are characterized by low flying height, slow flight speed, small radar cross section (RCS), strong maneuverability and strong anti-interference capability. Traditional means are therefore severely limited in detecting them: detection is difficult, and stable detection, tracking and identification of such targets are hard to achieve.
At present, detection of low-altitude UAVs at home and abroad relies mainly on radar, assisted by photoelectric detection, spectrum detection and similar means. Radar detection of low-altitude UAV targets works around the clock with excellent detection range and target tracking performance, but it cannot effectively identify targets (for example, it cannot distinguish a bird target from a UAV target); moreover, in certain special scenes or terrain conditions radar cannot be used for detection at all, or the radar equipment is prone to failure.
In such cases a multi-source photoelectric detection means is needed: by relying on the strengths of photoelectric detection and combining the cross positioning of several photoelectric detection devices, high-level functions such as video tracking, classification and identification of targets can be realized while accurately positioning targets in the air.
A photoelectric detection device can provide not only the video signal, optical information and visual characteristics of the detected object; by means of an integrated turntable, direction controller, field-of-view manager and the like, it can also provide relative position information of an aerial target such as pitch and azimuth. Existing UAV target detection devices, however, are either equipped with only a single photoelectric detection device, which cannot form a networking advantage and serves merely as supplementary input to radar detection, or they network several photoelectric detection devices but do not make full use of the relative position information those devices provide, using the network mostly for gap-filling video surveillance.
In other words, in a radar failure scene an existing UAV target detection device relying purely on photoelectric detection equipment cannot give reasonably accurate positioning information for a UAV target; it can give only some of the UAV's visual characteristics and video information, which contribute little to subsequent UAV countermeasure and management operations. The prior art therefore cannot satisfy the operational characteristics and requirements of low-altitude UAV detection and countermeasures.
Disclosure of Invention
The invention aims to provide a target detection display control method and device supporting photoelectric cross positioning in a radar failure scene, so as to solve the problem that the prior art cannot satisfy the operational characteristics and requirements of low-altitude UAV detection and countermeasures.
The invention provides a target detection display control method supporting photoelectric cross positioning in a radar failure scene, comprising the following steps:
S1, a protocol conversion module receives photoelectric messages and a photoelectric video stream from photoelectric equipment and radar messages from radar equipment, forwards the received photoelectric video stream directly to a target classification and identification module, and converts the received photoelectric messages and radar messages into a unified data format before sending them to a target data processing module; wherein:
the radar message includes: heartbeat data and radar target data of the radar device;
the photoelectric message comprises: heartbeat data and photoelectric detection data of the photoelectric equipment;
S2, the target data processing module processes the converted photoelectric messages and radar messages:
aggregating radar target data into a local radar track and outputting the local radar track to a multi-source radar fusion module;
the photoelectric detection data are directly forwarded to a multi-source photoelectric cross positioning module;
directly forwarding heartbeat data of the radar equipment and heartbeat data of the photoelectric equipment to an equipment health detection module;
S3, the equipment health detection module detects the health states of the radar equipment and photoelectric equipment in real time from their heartbeat data, and according to the detected states outputs a data source failure instruction, a data source rejection instruction or a data source recovery instruction to the multi-source radar fusion module and the multi-source photoelectric cross positioning module, so that data from the corresponding radar or photoelectric equipment are failed, rejected or recovered when processed in those modules;
S4, the multi-source radar fusion module adopts an SF distributed fusion algorithm to associate and fuse multiple local radar tracks from the target data processing module into the system radar track of the current fusion period;
S5, the multi-source photoelectric cross positioning module derives target positioning data from the photoelectric detection data supplied by the target data processing module through a cross positioning algorithm based on spatial analytic geometry;
S6, the target classification and identification module classifies and identifies targets in the photoelectric video stream using a deep-learning convolutional neural network built on the YoloV3 framework, obtaining target classification and identification results;
and S7, the display control management module displays the system radar track, the target positioning data and the target classification and identification results, and plays the photoelectric video stream.
Optionally, the unified data format in step S1 adopts a JSON format.
Further, aggregating the radar target data into local radar tracks and outputting them to the multi-source radar fusion module in step S2 comprises:
calculating the Mahalanobis distance between radar target data items using a tracking algorithm;
comparing the calculated Mahalanobis distances with a preset Mahalanobis distance threshold;
and aggregating the radar target data whose distances are smaller than the preset Mahalanobis distance threshold into local radar tracks.
Further, step S3 includes the following sub-steps:
S31, the equipment health detection module starts several state detection threads that detect the health states of the radar equipment and photoelectric equipment in real time from their heartbeat data;
S32, each state detection thread extracts from the heartbeat data the ID numbers of the devices still operating healthily, and caches per device the ID number, the count of consecutively received heartbeats and the count of consecutive cycles without heartbeat data, for real-time detection of the device's health state:
(1) if the count of consecutive cycles without heartbeat data from a device exceeds the failure threshold, the device's health state is set to failed, and the instruction sending thread is woken to send a data source failure instruction to the multi-source radar fusion module and the multi-source photoelectric cross positioning module, so that data from that device are failed during processing, i.e. they no longer participate in the processing of either module;
(2) if the count of consecutive cycles without heartbeat data from a device exceeds the rejection threshold, the device's health state is set to rejected, its related data are deleted from the cache, and the instruction sending thread is woken to send a data source rejection instruction to both modules, so that data from that device are no longer read during their processing;
(3) if, while a device is in the failed or rejected state, its count of consecutively received heartbeats exceeds the recovery threshold, its data are cached again and its health state is set to healthy, and the instruction sending thread is woken to send a data source recovery instruction to both modules, so that data from that device are recovered during processing, i.e. they participate in the processing of both modules again.
Further, assuming that there are N local radar tracks from the target data processing module in step S4, the calculation formula of the SF distributed fusion algorithm is as follows:

$$P(k|k) = \Big[ \sum_{i=1}^{N} P_i^{-1}(k|k) \Big]^{-1}$$

$$\hat{x}(k|k) = P(k|k) \sum_{i=1}^{N} P_i^{-1}(k|k)\,\hat{x}_i(k|k)$$

wherein $\hat{x}_i(k|k)$ represents the optimal estimate of the i-th local radar track at time k, $P_i(k|k)$ represents the mean square error of the i-th local radar track at time k, $\hat{x}(k|k)$ represents the optimal estimate of the system radar track at time k, and $P(k|k)$ represents the mean square error of the system radar track at time k.
Further, step S5 includes the following sub-steps:
S51, the longitude and latitude of each photoelectric device are acquired through the positioning modules carried by the two photoelectric devices. Assuming that all calculations are carried out in an xyz coordinate system with azimuth measured in the horizontal plane from the x axis, let the position of photoelectric device 1 be $(x_1, y_1, z_1)$ and the position of photoelectric device 2 be $(x_2, y_2, z_2)$, and let the target azimuth angles detected by photoelectric devices 1 and 2 at time k be $\alpha_1$ and $\alpha_2$ and the target pitch angles be $\beta_1$ and $\beta_2$ respectively. The position of the target at time k, $(x(k), y(k), z(k))$, is then calculated according to the following formula:

$$x(k) = \frac{y_2 - y_1 + x_1\tan\alpha_1 - x_2\tan\alpha_2}{\tan\alpha_1 - \tan\alpha_2}, \qquad y(k) = y_1 + \big(x(k) - x_1\big)\tan\alpha_1,$$

$$z(k) = z_1 + \sqrt{\big(x(k) - x_1\big)^2 + \big(y(k) - y_1\big)^2}\;\tan\beta_1$$
S52, photoelectric detection data provided by two photoelectric devices whose lines of sight can form an intersection point are selected in the multi-source photoelectric cross positioning module, and a calculation thread is started that performs the calculation of step S51 once per fixed interval; as long as the photoelectric devices remain active and the directions the two devices face continue to form an intersection point, successive position coordinates of the target are generated, thereby obtaining the target positioning data.
Further, step S6 includes the following sub-steps:
S61, the photoelectric video stream is sampled and segmented using an image extraction technique to obtain a series of video frame images;
S62, inter-frame difference processing is applied to the video frame images to improve image quality;
S63, the video frame images processed by inter-frame differencing are fed to a deep-learning convolutional neural network built on the YoloV3 framework for image processing and image feature extraction;
and S64, the extracted image features are input into the target classification and identification model to obtain the target classification and identification result.
Further, in step S7, the display control management module displays the system radar track, the target positioning data and the target classification and identification results through a human-computer interaction interface, and plays the photoelectric video stream.
The invention also provides a device for implementing the above target detection display control method supporting photoelectric cross positioning in a radar failure scene. The device comprises a display control management module, together with a protocol conversion module, a target data processing module, an equipment health detection module, a multi-source radar fusion module, a multi-source photoelectric cross positioning module and a target classification and identification module, all controlled by the display control management module;
the input end of the protocol conversion module is used for connecting radar equipment and photoelectric equipment; the protocol conversion module is connected with the input end of the target classification identification module on one hand and connected with the target data processing module on the other hand;
the target data processing module is respectively connected with the equipment health detection module, the multi-source radar fusion module and the multi-source photoelectric cross positioning module, and the equipment health detection module, the multi-source radar fusion module, the multi-source photoelectric cross positioning module and the target classification identification module are all connected with the display control management module.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
1. Based on a streaming data processing framework, the invention fuses and displays multi-source signals while radar equipment and photoelectric equipment operate simultaneously; by relying on the strengths of photoelectric detection and combining the cross positioning of several photoelectric devices, it achieves target classification, identification and tracking while positioning aerial targets more accurately, and when the radar fails it can still conduct operations using the target position data generated by photoelectric cross positioning.
2. The invention realizes automatic analysis of the relative positions of targets detected by photoelectric equipment, cross positioning, track initiation and display management;
3. The invention supports fusion of the radar track with the photoelectric cross positioning track, as well as their independent display and switching between them;
4. The invention supports low-altitude target monitoring functions such as display of the system radar track, target positioning data and target classification and identification results, and playback of the photoelectric video stream;
5. Through unified protocol conversion, the invention can accommodate different surveillance data sources on demand with few changes, ensuring consistency of the data carried in the messages of different surveillance sources.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the invention and should not be regarded as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic architecture diagram of a target detection display control method and device supporting photoelectric cross positioning in a radar failure scene according to an embodiment of the present invention.
FIG. 2 is a flow chart of the device health detection module processing data according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a multi-source photoelectric cross location module processing data according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
This embodiment provides a target detection display control method supporting photoelectric cross positioning in a radar failure scene. It mainly adopts a stream-processing technical framework to complete functions such as reception, processing, fusion, detection, cross positioning, display, control and management of radar messages, photoelectric messages and the photoelectric video stream.
As shown in fig. 1, the display control method includes the following steps:
S1, the protocol conversion module receives photoelectric messages and a photoelectric video stream from the photoelectric equipment and radar messages from the radar equipment, forwards the received photoelectric video stream directly to the target classification and identification module, and converts the received photoelectric messages and radar messages into a unified data format before sending them to the target data processing module; in this embodiment, the JSON format is adopted as the unified data format (illustrated after the message descriptions below). Wherein:
the radar message includes: heartbeat data and radar target data of the radar device; the radar target data mainly comprises longitude, latitude, pitch, azimuth and speed;
the photoelectric message comprises: heartbeat data and photoelectric detection data of the photoelectric equipment; the photoelectric detection data mainly comprise pitch and azimuth;
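As an illustration, a converted radar target message in the unified JSON format might look like the following sketch. The patent fixes JSON as the format but not the field names, so every key below is a hypothetical assumption.

```python
import json

# Hypothetical unified message after protocol conversion; all field names are
# illustrative assumptions, not the patent's actual schema.
radar_target_message = {
    "source_id": "radar-01",       # device ID, also used for heartbeat tracking
    "kind": "radar_target",        # vs. "heartbeat" / "photoelectric_detection"
    "timestamp": 1634899200.0,     # seconds since epoch
    "longitude": 104.07,
    "latitude": 30.67,
    "pitch": 12.5,                 # degrees
    "azimuth": 231.0,              # degrees
    "speed": 18.3,                 # m/s
}
print(json.dumps(radar_target_message))
```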
S2, the target data processing module processes the converted photoelectric messages and radar messages:
(1) aggregating radar target data into a local radar track and outputting the local radar track to a multi-source radar fusion module; specifically, the method comprises the following steps:
calculating the Mahalanobis distance between radar target data items, based on the longitude, latitude, pitch, azimuth and speed they contain, using a tracking algorithm; the tracking algorithm is prior art and is not described again here;
comparing the calculated Mahalanobis distances with a preset Mahalanobis distance threshold; the threshold is preset according to requirements;
aggregating the radar target data whose distances are smaller than the preset Mahalanobis distance threshold into local radar tracks (a sketch of this gating follows item (3) below);
(2) The photoelectric detection data are directly forwarded to a multi-source photoelectric cross positioning module;
(3) directly forwarding heartbeat data of the radar equipment and heartbeat data of the photoelectric equipment to an equipment health detection module;
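A minimal sketch of the Mahalanobis gating in item (1), assuming the five-component report vector (longitude, latitude, pitch, azimuth, speed) named above; the covariance values and the gate threshold are illustrative assumptions, and the surrounding tracking algorithm is not reproduced.

```python
import numpy as np

def mahalanobis(a: np.ndarray, b: np.ndarray, cov_inv: np.ndarray) -> float:
    """Mahalanobis distance between two target reports."""
    d = a - b
    return float(np.sqrt(d @ cov_inv @ d))

# Report layout: [longitude, latitude, pitch, azimuth, speed]
track_head = np.array([104.0700, 30.6700, 12.5, 231.0, 18.3])
new_report = np.array([104.0712, 30.6693, 12.8, 230.6, 18.1])

# Assumed per-component measurement spread (diagonal covariance).
cov_inv = np.linalg.inv(np.diag([1e-6, 1e-6, 1.0, 4.0, 1.0]))

GATE = 3.0  # preset Mahalanobis distance threshold (assumed value)
if mahalanobis(track_head, new_report, cov_inv) < GATE:
    print("report aggregated into this local radar track")
```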
S3, the equipment health detection module detects the health states of the radar equipment and photoelectric equipment in real time from their heartbeat data, and according to the detected states outputs a data source failure instruction, a data source rejection instruction or a data source recovery instruction to the multi-source radar fusion module and the multi-source photoelectric cross positioning module, so that data from the corresponding radar or photoelectric equipment are failed, rejected or recovered when processed in those modules; as shown in fig. 2, specifically:
S31, the equipment health detection module starts several state detection threads that detect the health states of the radar equipment and photoelectric equipment in real time from their heartbeat data;
S32, each state detection thread extracts from the heartbeat data the ID numbers of the devices still operating healthily, and caches per device the ID number, the count of consecutively received heartbeats and the count of consecutive cycles without heartbeat data, for real-time detection of the device's health state:
(1) if the count of consecutive cycles without heartbeat data from a device exceeds the failure threshold, the device's health state is set to failed, and the instruction sending thread is woken to send a data source failure instruction to the multi-source radar fusion module and the multi-source photoelectric cross positioning module, so that data from that device are failed during processing, i.e. they no longer participate in the processing of either module;
(2) if the count of consecutive cycles without heartbeat data from a device exceeds the rejection threshold, the device's health state is set to rejected, its related data are deleted from the cache, and the instruction sending thread is woken to send a data source rejection instruction to both modules, so that data from that device are no longer read during their processing;
(3) if, while a device is in the failed or rejected state, its count of consecutively received heartbeats exceeds the recovery threshold, its data are cached again and its health state is set to healthy, and the instruction sending thread is woken to send a data source recovery instruction to both modules, so that data from that device are recovered during processing, i.e. they participate in the processing of both modules again (a sketch of this state machine follows).
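The following sketch captures the three-state logic of sub-steps (1) to (3). The threshold values, the DeviceState structure and the instruction strings are assumptions; the patent fixes only the three states and the counter-versus-threshold comparisons.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed threshold values; the patent leaves them configurable.
FAIL_THRESHOLD, REJECT_THRESHOLD, RECOVER_THRESHOLD = 3, 10, 3

@dataclass
class DeviceState:
    status: str = "healthy"            # healthy | failed | rejected
    consecutive_received: int = 0
    consecutive_missed: int = 0

def on_detection_cycle(dev: DeviceState, heartbeat_seen: bool) -> Optional[str]:
    """Advance one detection cycle; return the instruction to broadcast, if any."""
    if heartbeat_seen:
        dev.consecutive_received += 1
        dev.consecutive_missed = 0
        if dev.status != "healthy" and dev.consecutive_received > RECOVER_THRESHOLD:
            dev.status = "healthy"                 # sub-step (3)
            return "data_source_recover"
    else:
        dev.consecutive_missed += 1
        dev.consecutive_received = 0
        if dev.status != "rejected" and dev.consecutive_missed > REJECT_THRESHOLD:
            dev.status = "rejected"                # sub-step (2): also purge the cache
            return "data_source_reject"
        if dev.status == "healthy" and dev.consecutive_missed > FAIL_THRESHOLD:
            dev.status = "failed"                  # sub-step (1)
            return "data_source_fail"
    return None
```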
S4, the multi-source radar fusion module adopts an SF distributed fusion algorithm to associate and fuse multiple local radar tracks from the target data processing module into the system radar track of the current fusion period; assuming that there are N local radar tracks from the target data processing module, the calculation formula of the SF distributed fusion algorithm is as follows:
$$P(k|k) = \Big[ \sum_{i=1}^{N} P_i^{-1}(k|k) \Big]^{-1}$$

$$\hat{x}(k|k) = P(k|k) \sum_{i=1}^{N} P_i^{-1}(k|k)\,\hat{x}_i(k|k)$$

wherein $\hat{x}_i(k|k)$ represents the optimal estimate of the i-th local radar track at time k, $P_i(k|k)$ represents the mean square error of the i-th local radar track at time k, $\hat{x}(k|k)$ represents the optimal estimate of the system radar track at time k, and $P(k|k)$ represents the mean square error of the system radar track at time k.
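The SF fusion above is an inverse-covariance-weighted convex combination of the local estimates, which a few lines of numpy can express directly. This is a minimal sketch; the two local tracks are made-up numbers for demonstration.

```python
import numpy as np

def sf_fuse(estimates, covariances):
    """Convex-combination (SF) fusion of N local track estimates x_i
    with mean-square-error matrices P_i."""
    P_invs = [np.linalg.inv(P) for P in covariances]
    P_fused = np.linalg.inv(sum(P_invs))
    x_fused = P_fused @ sum(Pi @ x for Pi, x in zip(P_invs, estimates))
    return x_fused, P_fused

# Two hypothetical 2-D local estimates with different accuracies.
x1, P1 = np.array([100.0, 20.0]), np.diag([4.0, 1.0])
x2, P2 = np.array([101.0, 19.5]), np.diag([1.0, 4.0])
x_sys, P_sys = sf_fuse([x1, x2], [P1, P2])
print(x_sys, P_sys)   # the fused estimate leans toward the more accurate source
```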
S5, the multi-source photoelectric cross positioning module derives target positioning data from the photoelectric detection data supplied by the target data processing module through a cross positioning algorithm based on spatial analytic geometry; as shown in fig. 3, specifically:
S51, the longitude and latitude of each photoelectric device are acquired through the positioning modules carried by the two photoelectric devices. Assuming that all calculations are carried out in an xyz coordinate system with azimuth measured in the horizontal plane from the x axis, let the position of photoelectric device 1 be $(x_1, y_1, z_1)$ and the position of photoelectric device 2 be $(x_2, y_2, z_2)$, and let the target azimuth angles detected by photoelectric devices 1 and 2 at time k be $\alpha_1$ and $\alpha_2$ and the target pitch angles be $\beta_1$ and $\beta_2$ respectively. The position of the target at time k, $(x(k), y(k), z(k))$, is then calculated according to the following formula:

$$x(k) = \frac{y_2 - y_1 + x_1\tan\alpha_1 - x_2\tan\alpha_2}{\tan\alpha_1 - \tan\alpha_2}, \qquad y(k) = y_1 + \big(x(k) - x_1\big)\tan\alpha_1,$$

$$z(k) = z_1 + \sqrt{\big(x(k) - x_1\big)^2 + \big(y(k) - y_1\big)^2}\;\tan\beta_1$$
S52, photoelectric detection data provided by two photoelectric devices whose lines of sight can form an intersection point are selected in the multi-source photoelectric cross positioning module, and a calculation thread is started that performs the calculation of step S51 once per fixed interval; as long as the photoelectric devices remain active and the directions the two devices face continue to form an intersection point, successive position coordinates of the target are generated, thereby obtaining the target positioning data.
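A minimal sketch of the S51 computation under the same assumption (azimuth measured in the horizontal plane from the x axis); the station positions and angles are made-up, and the parallel-bearing guard reflects the S52 requirement that the two devices actually form an intersection point.

```python
import math

def cross_locate(p1, alpha1, beta1, p2, alpha2, beta2):
    """Intersect two horizontal bearing lines, then lift the intersection
    by the first station's pitch angle (step S51)."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    t1, t2 = math.tan(alpha1), math.tan(alpha2)
    if abs(t1 - t2) < 1e-9:
        raise ValueError("bearings nearly parallel: no usable intersection")
    x = (y2 - y1 + x1 * t1 - x2 * t2) / (t1 - t2)
    y = y1 + (x - x1) * t1
    z = z1 + math.hypot(x - x1, y - y1) * math.tan(beta1)
    return x, y, z

# Hypothetical stations 1 km apart, both seeing the target 45 degrees inward.
print(cross_locate((0.0, 0.0, 0.0), math.radians(45), math.radians(10),
                   (1000.0, 0.0, 0.0), math.radians(135), math.radians(10)))
# -> (500.0, 500.0, ~124.7): the target sits midway, about 125 m up.
```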
S6, the target classification and identification module classifies and identifies targets in the photoelectric video stream using a deep-learning convolutional neural network built on the YoloV3 framework, obtaining target classification and identification results; specifically:
S61, the photoelectric video stream is sampled and segmented using an image extraction technique to obtain a series of video frame images;
S62, inter-frame difference processing is applied to the video frame images to improve image quality;
S63, the video frame images processed by inter-frame differencing are fed to a deep-learning convolutional neural network built on the YoloV3 framework for image processing and image feature extraction; the YoloV3-based deep-learning convolutional neural network is prior art and is not described again here;
and S64, the extracted image features are input into the target classification and identification model to obtain the target classification and identification result. The target classification and identification model is a classifier trained in advance on training samples; a common classifier for target classification and identification can be selected as required. A sketch of the preprocessing stages follows.
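A sketch of the S61 and S62 preprocessing stages, assuming OpenCV as the video library (the patent names none): frames are sampled from the stream, then inter-frame differencing highlights moving regions before the YoloV3 detector of S63 runs; the detector itself is omitted.

```python
import cv2

def preprocess(video_source, sample_every=5, diff_threshold=25):
    """Yield (frame, motion_mask) pairs ready for the YoloV3 stage (S63)."""
    cap = cv2.VideoCapture(video_source)
    prev_gray, idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        if idx % sample_every:            # S61: temporal sampling of the stream
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            diff = cv2.absdiff(gray, prev_gray)   # S62: inter-frame difference
            _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
            yield frame, mask
        prev_gray = gray
    cap.release()
```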
And S7, the display control management module displays the system radar track, the target positioning data and the target classification and identification results, and plays the photoelectric video stream. In this embodiment, the display control management module builds on the geographic data and the spatial computation and display capabilities provided by a GIS engine and applies techniques from computer graphics, UI interaction, computer vision and network communication: it reads the system radar track, target positioning data, target classification and identification results and the photoelectric video stream, and after combination, packaging and rendering presents them through a human-computer interaction interface while playing the video stream. Meanwhile, the human-computer interaction interface monitors the user's interface and command operations and feeds them back to each functional module via events, network streams and the like.
As shown in fig. 1, to implement the above target detection display control method supporting photoelectric cross positioning in a radar failure scene, the display control device provided in this embodiment comprises a display control management module, together with a protocol conversion module, a target data processing module, an equipment health detection module, a multi-source radar fusion module, a multi-source photoelectric cross positioning module and a target classification and identification module, all controlled by the display control management module;
the input end of the protocol conversion module is used for connecting radar equipment and photoelectric equipment; the protocol conversion module is connected with the input end of the target classification identification module on one hand and connected with the target data processing module on the other hand;
the target data processing module is respectively connected with the equipment health detection module, the multi-source radar fusion module and the multi-source photoelectric cross positioning module, and the equipment health detection module, the multi-source radar fusion module, the multi-source photoelectric cross positioning module and the target classification identification module are all connected with the display control management module.
The protocol conversion module, target data processing module, equipment health detection module, multi-source radar fusion module, multi-source photoelectric cross positioning module, target classification and identification module and display control management module of the display control device work as described in the display control method above and are not described again.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A target detection display control method supporting photoelectric cross positioning in a radar failure scene is characterized by comprising the following steps:
S1, a protocol conversion module receives photoelectric messages and a photoelectric video stream from photoelectric equipment and radar messages from radar equipment, forwards the received photoelectric video stream directly to a target classification and identification module, and converts the received photoelectric messages and radar messages into a unified data format before sending them to a target data processing module; wherein:
the radar message includes: heartbeat data and radar target data of the radar device;
the photoelectric message comprises: heartbeat data and photoelectric detection data of the photoelectric equipment;
S2, the target data processing module processes the converted photoelectric messages and radar messages:
aggregating radar target data into a local radar track and outputting the local radar track to a multi-source radar fusion module;
the photoelectric detection data are directly forwarded to a multi-source photoelectric cross positioning module;
directly forwarding heartbeat data of the radar equipment and heartbeat data of the photoelectric equipment to an equipment health detection module;
S3, the equipment health detection module detects the health states of the radar equipment and photoelectric equipment in real time from their heartbeat data, and according to the detected states outputs a data source failure instruction, a data source rejection instruction or a data source recovery instruction to the multi-source radar fusion module and the multi-source photoelectric cross positioning module, so that data from the corresponding radar or photoelectric equipment are failed, rejected or recovered when processed in those modules;
S4, the multi-source radar fusion module adopts an SF distributed fusion algorithm to associate and fuse multiple local radar tracks from the target data processing module into the system radar track of the current fusion period;
S5, the multi-source photoelectric cross positioning module derives target positioning data from the photoelectric detection data supplied by the target data processing module through a cross positioning algorithm based on spatial analytic geometry;
S6, the target classification and identification module classifies and identifies targets in the photoelectric video stream using a deep-learning convolutional neural network built on the YoloV3 framework, obtaining target classification and identification results;
and S7, the display control management module displays the system radar track, the target positioning data and the target classification and identification results, and plays the photoelectric video stream.
2. The target detection display control method supporting photoelectric cross positioning in a radar failure scene according to claim 1, wherein the unified data format in step S1 is the JSON format.
3. The target detection display control method supporting photoelectric cross positioning in a radar failure scene according to claim 1, wherein aggregating the radar target data into local radar tracks and outputting them to the multi-source radar fusion module in step S2 comprises:
calculating the Mahalanobis distance between radar target data items using a tracking algorithm;
comparing the calculated Mahalanobis distances with a preset Mahalanobis distance threshold;
and aggregating the radar target data whose distances are smaller than the preset Mahalanobis distance threshold into local radar tracks.
4. The target detection display control method supporting photoelectric cross positioning in a radar failure scene according to claim 1, wherein step S3 comprises the following sub-steps:
S31, the equipment health detection module starts several state detection threads that detect the health states of the radar equipment and photoelectric equipment in real time from their heartbeat data;
S32, each state detection thread extracts from the heartbeat data the ID numbers of the devices still operating healthily, and caches per device the ID number, the count of consecutively received heartbeats and the count of consecutive cycles without heartbeat data, for real-time detection of the device's health state:
(1) if the count of consecutive cycles without heartbeat data from a device exceeds the failure threshold, the device's health state is set to failed, and the instruction sending thread is woken to send a data source failure instruction to the multi-source radar fusion module and the multi-source photoelectric cross positioning module, so that data from that device are failed during processing, i.e. they no longer participate in the processing of either module;
(2) if the count of consecutive cycles without heartbeat data from a device exceeds the rejection threshold, the device's health state is set to rejected, its related data are deleted from the cache, and the instruction sending thread is woken to send a data source rejection instruction to both modules, so that data from that device are no longer read during their processing;
(3) if, while a device is in the failed or rejected state, its count of consecutively received heartbeats exceeds the recovery threshold, its data are cached again and its health state is set to healthy, and the instruction sending thread is woken to send a data source recovery instruction to both modules, so that data from that device are recovered during processing, i.e. they participate in the processing of both modules again.
5. The target detection display control method supporting photoelectric cross positioning in a radar failure scene according to claim 1, wherein, assuming that there are N local radar tracks from the target data processing module in step S4, the calculation formula of the SF distributed fusion algorithm is as follows:

$$P(k|k) = \Big[ \sum_{i=1}^{N} P_i^{-1}(k|k) \Big]^{-1}$$

$$\hat{x}(k|k) = P(k|k) \sum_{i=1}^{N} P_i^{-1}(k|k)\,\hat{x}_i(k|k)$$

wherein $\hat{x}_i(k|k)$ represents the optimal estimate of the i-th local radar track at time k, $P_i(k|k)$ represents the mean square error of the i-th local radar track at time k, $\hat{x}(k|k)$ represents the optimal estimate of the system radar track at time k, and $P(k|k)$ represents the mean square error of the system radar track at time k.
6. The target detection display control method supporting photoelectric cross positioning in a radar failure scene according to claim 1, wherein step S5 comprises the following sub-steps:
S51, the longitude and latitude of each photoelectric device are acquired through the positioning modules carried by the two photoelectric devices. Assuming that all calculations are carried out in an xyz coordinate system with azimuth measured in the horizontal plane from the x axis, let the position of photoelectric device 1 be $(x_1, y_1, z_1)$ and the position of photoelectric device 2 be $(x_2, y_2, z_2)$, and let the target azimuth angles detected by photoelectric devices 1 and 2 at time k be $\alpha_1$ and $\alpha_2$ and the target pitch angles be $\beta_1$ and $\beta_2$ respectively. The position of the target at time k, $(x(k), y(k), z(k))$, is then calculated according to the following formula:

$$x(k) = \frac{y_2 - y_1 + x_1\tan\alpha_1 - x_2\tan\alpha_2}{\tan\alpha_1 - \tan\alpha_2}, \qquad y(k) = y_1 + \big(x(k) - x_1\big)\tan\alpha_1,$$

$$z(k) = z_1 + \sqrt{\big(x(k) - x_1\big)^2 + \big(y(k) - y_1\big)^2}\;\tan\beta_1$$
S52, photoelectric detection data provided by two photoelectric devices whose lines of sight can form an intersection point are selected in the multi-source photoelectric cross positioning module, and a calculation thread is started that performs the calculation of step S51 once per fixed interval; as long as the photoelectric devices remain active and the directions the two devices face continue to form an intersection point, successive position coordinates of the target are generated, thereby obtaining the target positioning data.
7. The target detection display control method supporting photoelectric cross positioning in a radar failure scene according to claim 1, wherein step S6 comprises the following sub-steps:
S61, the photoelectric video stream is sampled and segmented using an image extraction technique to obtain a series of video frame images;
S62, inter-frame difference processing is applied to the video frame images to improve image quality;
S63, the video frame images processed by inter-frame differencing are fed to a deep-learning convolutional neural network built on the YoloV3 framework for image processing and image feature extraction;
and S64, the extracted image features are input into the target classification and identification model to obtain the target classification and identification result.
8. The target detection display control method supporting photoelectric cross positioning in a radar failure scene according to claim 1, wherein in step S7 the display control management module displays the system radar track, the target positioning data and the target classification and identification results through a human-computer interaction interface, and plays the photoelectric video stream.
9. An apparatus for implementing the target detection display control method supporting photoelectric cross positioning in a radar failure scene according to any one of claims 1 to 8, wherein the apparatus comprises a display control management module, and a protocol conversion module, a target data processing module, an equipment health detection module, a multi-source radar fusion module, a multi-source photoelectric cross positioning module and a target classification identification module which are controlled by the display control management module;
the input end of the protocol conversion module is used for connecting radar equipment and photoelectric equipment; the protocol conversion module is connected with the input end of the target classification identification module on one hand and connected with the target data processing module on the other hand;
the target data processing module is respectively connected with the equipment health detection module, the multi-source radar fusion module and the multi-source photoelectric cross positioning module, and the equipment health detection module, the multi-source radar fusion module, the multi-source photoelectric cross positioning module and the target classification identification module are all connected with the display control management module.
CN202111232889.2A 2021-10-22 2021-10-22 Target detection display control method and device supporting photoelectric cross positioning in radar failure scene Pending CN114047505A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111232889.2A CN114047505A (en) 2021-10-22 2021-10-22 Target detection display control method and device supporting photoelectric cross positioning in radar failure scene


Publications (1)

Publication Number Publication Date
CN114047505A true CN114047505A (en) 2022-02-15

Family

ID=80205938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111232889.2A Pending CN114047505A (en) 2021-10-22 2021-10-22 Target detection display control method and device supporting photoelectric cross positioning in radar failure scene

Country Status (1)

Country Link
CN (1) CN114047505A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115932765A (en) * 2022-12-13 2023-04-07 扬州宇安电子科技有限公司 Radar failure automatic detection system and method based on multi-source data analysis
CN115932765B (en) * 2022-12-13 2023-10-13 扬州宇安电子科技有限公司 Radar failure automatic detection system and method based on multi-source data analysis


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination