CN113890977A - Airborne video processing device and unmanned aerial vehicle with same - Google Patents

Airborne video processing device and unmanned aerial vehicle with same

Info

Publication number
CN113890977A
CN113890977A (application CN202111193580.7A)
Authority
CN
China
Prior art keywords
video
video processing
data
fpga
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111193580.7A
Other languages
Chinese (zh)
Inventor
魏志强
温明
马希超
葛珊
阮建明
王洪庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Third Research Institute Of China Electronics Technology Group Corp
Original Assignee
Third Research Institute Of China Electronics Technology Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Third Research Institute Of China Electronics Technology Group Corp filed Critical Third Research Institute Of China Electronics Technology Group Corp
Priority to CN202111193580.7A
Publication of CN113890977A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/14 Picture signal circuitry for video frequency region

Abstract

The invention provides an airborne video processing device and an unmanned aerial vehicle equipped with the same. The airborne video processing device comprises an integrated FPGA and video processing chip. The FPGA receives visible light television video signals and infrared thermal imager video signals, performs format conversion to obtain video data, and transmits the video data to the video processing chip through an interactive interface. The video processing chip encodes the video data according to a preset protocol, stores the encoded data in a local memory, and transmits the encoded video data to ground receiving equipment through a wireless data link; alternatively, the video processing chip transmits the encoded video data to the FPGA, which forwards it to the ground receiving equipment through the wireless data link. The airborne video processing device adopts an FPGA + SOC hardware architecture, can realize video compression and storage, multi-target detection, target tracking, target speed prediction and other functions, and reserves multiple interfaces, achieving an integrated, multifunctional, miniaturized design that meets the practical video processing requirements of unmanned aerial vehicles.

Description

Airborne video processing device and unmanned aerial vehicle with same
Technical Field
The invention relates to the technical field of video processing, in particular to an airborne video processing device and an unmanned aerial vehicle with the same.
Background
Unmanned aerial vehicles are increasingly widely used in military, police and civil applications. When a UAV-mounted photoelectric pod performs target reconnaissance, detection, identification and tracking, the acquired target video images must be transmitted to a ground station in real time through wireless communication link equipment, so that operators can grasp the target state and the surrounding environment of the flight area in real time and make decisions. Meanwhile, the acquired target video images must be compressed and stored on the airborne photoelectric equipment for later analysis.
Because of the limited wireless link bandwidth, the target images acquired by optical payloads such as a high-definition camera and a thermal infrared imager, amounting to hundreds of megabits or even gigabits per second, cannot be transmitted directly over the wireless link; the video data must be encoded and compressed to ensure real-time picture transmission and normal system operation. Meanwhile, because different wireless link devices have different external interfaces, the compression coding system must be configured with different external interfaces to improve the adaptability of the photoelectric pod equipment. In addition, the weight and power consumption of the photoelectric pod are key technical indicators for the unmanned aerial vehicle and directly affect its endurance time.
Limited by the computing speed and resources of the core processing chip, the photoelectric pod's compression coding function and its target detection and tracking function are implemented as separate modules; that is, image compression coding, video storage, target detection, and target identification and tracking are each independent processing modules, and the original video and the tracking video with superimposed target state information must be transferred between the modules through corresponding video interfaces. The integration degree is therefore low, and the multiple circuit board modules not only increase the size and weight of the equipment but also add data transmission delay between circuit modules and increase the power consumption and cost of the equipment.
Disclosure of Invention
The invention provides an airborne video processing device and an unmanned aerial vehicle with the same, and aims to solve the technical problem of how to realize light-weight and integrated design of the airborne video processing device.
The onboard video processing device comprises an FPGA and a video processing chip which are arranged in an integrated mode, the FPGA receives visible light television video signals and infrared thermal imager video signals and carries out format conversion to obtain video data, the video data are transmitted to the video processing chip through an interactive interface, the video processing chip encodes the video data according to a preset protocol and then stores the video data in a local memory, and the encoded video data are transmitted to ground receiving equipment through a wireless data link, or the video processing chip transmits the encoded video data to the FPGA and the FPGA transmits the encoded video data to the ground receiving equipment through the wireless data link.
According to some embodiments of the invention, the onboard video processing device is further provided with: the SDI decoding chip is used for carrying out video decoding on received visible light television video signals and then transmitting the video signals to the FPGA, and the A/D decoding chip is used for decoding the received infrared thermal imager video signals and then transmitting the video signals to the FPGA.
In some embodiments of the present invention, the video processing chip has: a video input module, a video processing module and a video coding module,
the FPGA converts the received decoded visible light television video signal into a BT.1120 time sequence, converts the received decoded infrared thermal imager video signal into a BT.656 time sequence, inputs the video signal into the video input module to obtain image data, transmits the image data to the video processing module to be preprocessed to obtain video processing data, and the video coding module codes the video processing data according to an H.264/H.265 protocol.
According to some embodiments of the present invention, the video processing chip directly transmits the video data stream of the video input module to the video processing module in an online mode, and the video processing module sends the video processing data to the video encoding module in a manner of collecting and sending the video processing data in a row unit for encoding.
In some embodiments of the present invention, the video processing chip transmits the encoded video data to the wireless data link through an ethernet port, or transmits the encoded video data to the FPGA through a PCIe expansion bus.
According to some embodiments of the present invention, when the video processing module performs encoding, video data, airplane data, and load status data are combined into a video data stream according to a preset protocol, and the video data stream is stored in the local memory, and the FPGA reads the video data stream in the local memory through the PCIe interface, performs parallel-to-serial conversion, and then sends a clock and drives output according to a set code stream.
In some embodiments of the present invention, the FPGA has multiple asynchronous communication 422 interfaces for communicating with the aircraft flight control, inertial navigation, servo system and upper computer.
According to some embodiments of the invention, the video processing chip has a target tracking module that performs target tracking using a kernel correlation filtering algorithm based on histogram of oriented gradients features.
In some embodiments of the present invention, the video processing chip employs Hi3519A or Hi3559A.
According to the unmanned aerial vehicle provided by the embodiment of the invention, the unmanned aerial vehicle is provided with the airborne photoelectric pod, and the airborne photoelectric pod adopts the airborne video processing device to perform video compression storage, multi-target detection, target tracking and target speed prediction.
The airborne video processing device provided by the invention has the following advantages:
The onboard video processing device adopts an FPGA + SOC (HiSilicon Hi3519A) hardware architecture that exploits the respective performance and functional advantages of the FPGA and the Hi3519A, realizing a multifunctional video processing module with multiple reserved hardware input and output interfaces to meet the practical video processing requirements of existing unmanned aerial vehicles. In software it can realize video compression and storage, multi-target detection, target tracking, target speed prediction and other functions, enriching the module's functionality and achieving integration, multifunctionality and miniaturization. A KCF target tracking algorithm based on HOG features improves the long-duration target tracking capability and tracking accuracy of the photoelectric pod. The design is easy to extend functionally, improves the integration and miniaturization level of the equipment, and offers richer module functions and stronger adaptability. It is suitable for existing unmanned aerial vehicles or tethered unmanned aerial vehicles to realize video transmission and photoelectric pod control communication, meeting practical application requirements.
Drawings
FIG. 1 is a schematic diagram of an onboard video processing device according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the hardware components of an onboard video processing device according to an embodiment of the invention;
fig. 3 is a schematic view illustrating a video encoding process of an onboard video processing apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a data transmission flow of high speed synchronization 422 of an onboard video processing device according to an embodiment of the invention;
FIG. 5 is a functional architecture diagram of the HiSilicon processor software of an onboard video processing device according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the front side of the physical onboard video processing device according to an embodiment of the invention;
fig. 7 is a schematic diagram of the back side of the physical onboard video processing device according to an embodiment of the invention.
Detailed Description
To further explain the technical means and effects of the present invention adopted to achieve the intended purpose, the present invention will be described in detail with reference to the accompanying drawings and preferred embodiments.
The unmanned airborne photoelectric pod carries a visible light television and a thermal infrared imager as imaging sensors, with an SDI digital video interface and a PAL analog video interface respectively. When targets are detected, identified and tracked, the acquired target video images must be transmitted to a ground station in real time through wireless communication link equipment, so that operators can grasp the target state in real time and make decisions; meanwhile, the acquired target video images must be compressed and stored on the photoelectric pod equipment itself for later analysis. Because of the limited wireless communication link bandwidth, the target images acquired by optical payloads such as a high-definition camera and a thermal infrared imager, amounting to hundreds of megabits or even gigabits per second, cannot be transmitted directly over the wireless link, and the video data must be compressed and encoded. Meanwhile, because different wireless link devices have different video interface forms, the compression coding system must be configured with different external interfaces to improve the adaptability of the photoelectric pod equipment. In addition, the weight and power consumption of the photoelectric pod are key technical indicators for the unmanned aerial vehicle and directly affect its endurance time.
In the prior art, limited by the computing speed and resources of the core processing chip, the photoelectric pod compression coding function and the target detection and tracking function are generally separate modules; that is, image compression coding, video storage, and target detection, identification and tracking are each independent processing modules, and the original video and the tracking video with superimposed target state information must be transferred between the modules through corresponding video interfaces. The integration degree is low; the multiple circuit board modules not only increase the size and weight of the equipment but also add data transmission delay between circuit modules and increase the power consumption and cost of the equipment.
The disadvantages of the prior art are as follows:
the image processing module has single function, only has compression coding and storage functions and has no target detection, identification and tracking functions; the number of external interfaces is small, and the adaptability of the loader is not strong; the integration degree is not high, and the hardware cost, the power consumption, the volume, the weight and the like are increased.
In view of the above disadvantages, the present invention aims to provide a multifunctional, lightweight, miniaturized and highly integrated image processing module for photoelectric pod equipment. It offers rich external hardware interfaces and software functions, provides video compression and encoding while realizing target detection, identification and tracking, reduces equipment volume, power consumption and cost, and meets the practical requirements of unmanned aerial vehicles for the photoelectric pod.
The onboard video processing device comprises an FPGA and a video processing chip which are arranged in an integrated mode, the FPGA receives visible light television video signals and infrared thermal imager video signals and carries out format conversion to obtain video data, the video data are transmitted to the video processing chip through an interactive interface, the video processing chip encodes the video data according to a preset protocol and then stores the video data in a local memory, and the encoded video data are transmitted to ground receiving equipment through a wireless data link, or the video processing chip transmits the encoded video data to the FPGA, and the FPGA transmits the encoded video data to the ground receiving equipment through the wireless data link.
According to some embodiments of the invention, the onboard video processing device is further provided with: the SDI decoding chip is used for carrying out video decoding on received visible light television video signals and then transmitting the video signals to the FPGA, and the A/D decoding chip is used for decoding the received infrared thermal imager video signals and then transmitting the video signals to the FPGA.
In some embodiments of the invention, a video processing chip has: a video input module, a video processing module and a video coding module,
the FPGA converts the received decoded visible light television video signal into a BT.1120 time sequence, converts the received decoded infrared thermal imager video signal into a BT.656 time sequence, inputs the video signal into a video input module to obtain image data, transmits the image data to a video processing module to be preprocessed to obtain video processing data, and the video coding module codes the video processing data according to an H.264/H.265 protocol.
According to some embodiments of the present invention, the video processing chip directly transmits the video data stream of the video input module to the video processing module in an online mode, and the video processing module transmits the video processing data to the video encoding module in a manner of acquiring and transmitting the video processing data in a row unit for encoding.
In some embodiments of the present invention, the video processing chip transmits the encoded video data to the wireless data link through the ethernet port or to the FPGA through the PCIe expansion bus.
According to some embodiments of the invention, when the video processing module performs encoding, video data, airplane data and load state data are combined into a video data stream according to a preset protocol and stored in the local memory, and the FPGA reads the video data stream in the local memory through the PCIe interface, performs parallel-to-serial conversion, and then sends a clock and drives output according to a set code stream.
In some embodiments of the invention, the FPGA has multiple asynchronous communication 422 interfaces for communicating with the aircraft flight control, inertial navigation, servo system, and upper computer.
According to some embodiments of the invention, the video processing chip has a target tracking module that performs target tracking using a kernel correlation filtering algorithm based on histogram of oriented gradients features.
In some embodiments of the present invention, the video processing chip employs Hi3519A or Hi3559A.
According to the unmanned aerial vehicle provided by the embodiment of the invention, the unmanned aerial vehicle is provided with the airborne photoelectric pod, and the airborne photoelectric pod adopts the airborne video processing device to perform video compression storage, multi-target detection, target tracking and target speed prediction.
The airborne video processing device provided by the invention has the following advantages:
The onboard video processing device adopts an FPGA + SOC (HiSilicon Hi3519A) hardware architecture that exploits the respective performance and functional advantages of the FPGA and the Hi3519A, realizing a multifunctional video processing module with multiple reserved hardware input and output interfaces to meet the practical video processing requirements of existing unmanned aerial vehicles. In software it can realize video compression and storage, multi-target detection, target tracking, target speed prediction and other functions, enriching the module's functionality and achieving integration, multifunctionality and miniaturization. A KCF target tracking algorithm based on HOG features improves the long-duration target tracking capability and tracking accuracy of the photoelectric pod. The design is easy to extend functionally, improves the integration and miniaturization level of the equipment, and offers richer module functions and stronger adaptability. It is suitable for existing unmanned aerial vehicles or tethered unmanned aerial vehicles to realize video transmission and photoelectric pod control communication, meeting practical application requirements.
An onboard video processing apparatus according to the present invention is described in detail below with reference to the accompanying drawings. It is to be understood that the following description is only exemplary in nature and should not be taken as a specific limitation on the invention.
Fig. 1 shows a block diagram of the two-way video compression and storage system, which adopts an FPGA + SOC (HiSilicon Hi3519A) architecture. The two video signals from the visible light television and the thermal infrared imager are passed through dedicated decoding chips to the FPGA. The FPGA performs format conversion on the two video channels and transmits them to the Hi3519A SOC over a BT.1120 interface. The Hi3519A can superimpose OSD (On-Screen Display) information as required, then compresses, encodes and packs the video according to the H.264/H.265 protocol. One stream is stored in the local memory; the other is transmitted to the wireless data link through the Ethernet port on the SOC, or is transmitted to the FPGA through the PCIe expansion bus, and the FPGA forwards the received data to the wireless data link equipment through a high-speed synchronous 422 interface (whether the synchronous 422 interface or the Ethernet port is used depends on the video interface provided by the wireless link equipment). The video is transmitted in real time to the ground display and control station, where the received compressed video is decoded according to the protocol to restore the video image for display. The module is also provided with multiple asynchronous 422 interfaces, enabling communication with the photoelectric pod servo system, optical sensors, the airborne inertial navigation system, the flight control system, the upper computer and other equipment.
1. Input video analysis module:
The visible light television outputs 1080p@30 Hz HD-SDI video. The onboard video processing device uses a dedicated SDI decoding chip (LMH0387) to complete the video decoding, and the decoded valid video data are output to the FPGA as 20-bit CMOS-level signals together with the F, V and H synchronization signals.
The thermal infrared imager outputs PAL video. The onboard video processing device uses a dedicated A/D chip (TW9912) to complete the video analog-to-digital conversion, and the video data are output to the FPGA as 8-bit data together with the HS, VS and PIXCLK signals.
The FPGA parses and converts the received video data and transmits it to the HiSilicon Hi3519A SOC processor in BT.1120 format.
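Both the BT.1120 and BT.656 interfaces mark line boundaries with embedded timing reference codes. As a hypothetical illustration only (a Python sketch, not the FPGA logic actually used in the device), the code below locates the FF 00 00 XY codes in an 8-bit BT.656 byte stream and decodes the F, V and H flags that the FPGA timing conversion relies on:

```python
# Minimal sketch (assumed, not the patent's FPGA implementation): scan a
# BT.656 byte stream for timing reference codes (FF 00 00 XY) and decode
# the F (field), V (vertical blanking) and H (SAV/EAV) flags from XY.
def find_timing_codes(stream: bytes):
    codes = []
    for i in range(len(stream) - 3):
        if stream[i] == 0xFF and stream[i + 1] == 0x00 and stream[i + 2] == 0x00:
            xy = stream[i + 3]
            codes.append({
                "offset": i,
                "field": (xy >> 6) & 1,         # F: field 1 / field 2
                "v_blank": (xy >> 5) & 1,       # V: vertical blanking
                "is_eav": bool((xy >> 4) & 1),  # H: 0 = SAV, 1 = EAV
            })
    return codes

if __name__ == "__main__":
    # Synthetic example: an SAV code, a few active pixels, then an EAV code.
    line = bytes([0xFF, 0x00, 0x00, 0x80]) + bytes([0x10, 0x80] * 4) + bytes([0xFF, 0x00, 0x00, 0x9D])
    for c in find_timing_codes(line):
        print(c)
```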
2. Video compression coding module:
The FPGA converts the SDI video channel and the infrared video channel into BT.1120 and BT.656 timing respectively and inputs them to the Video Input (VI) module of the Hi3519A. After interface conversion, cropping, scaling and similar operations, the image data are output to the Video Processing (VPSS) module, which performs unified preprocessing (denoising, de-interlacing, etc.) on the input image, followed by scaling, sharpening and other operations, and finally outputs an image at the set resolution to the Video Encoding (VENC) module. The VENC module performs H.264/H.265 encoding and bit-stream control on the image data, and the encoded data are sent to the FPGA for transmission. All modules in this chain are implemented by hardware blocks inside the Hi3519A; the whole encoding process only requires calling and configuring them through software programming.
In this functional module, the Hi3519A uses online mode to pass the VI data stream directly to the VPSS module, and the VPSS module forwards video data to the VENC module line by line as it is acquired. This reduces the latency that would be incurred if the VPSS module processed a complete frame image before passing it to the VENC module.
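The latency benefit of line-level hand-off can be illustrated with a small back-of-the-envelope sketch; the line count and per-line time below are assumptions, and this is a conceptual model rather than the Hi3519A data path:

```python
# Conceptual sketch (assumed numbers, not the Hi3519A implementation): compare
# when the encoder stage can start working under frame-level vs line-level hand-off.
LINES_PER_FRAME = 1080
LINE_TIME_US = 30.0          # assumed time to receive one video line

def frame_level_start_latency():
    # Encoder sees data only after the full frame has been buffered.
    return LINES_PER_FRAME * LINE_TIME_US

def line_level_start_latency():
    # Encoder sees the first line as soon as it has been processed.
    return 1 * LINE_TIME_US

if __name__ == "__main__":
    print("frame-level hand-off: encoder starts after %.1f us" % frame_level_start_latency())
    print("line-level hand-off:  encoder starts after %.1f us" % line_level_start_latency())
```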
3. Video data sending module:
The encoded video data, aircraft data and load status data are packed and then transmitted to the wireless data link through the high-speed synchronous 422 interface or the Ethernet interface for wireless transmission to the ground control station. During compression coding in the Hi3519A, a frame header and a frame tail are added to the video data, aircraft data and load status data according to the protocol to form a complete video data stream, which is stored in the DDR memory. The FPGA reads the data stream through the PCIe interface, performs parallel-to-serial conversion, and then drives the clock and data outputs at the configured bit rate, as shown in Fig. 4. Likewise, the video data stream can also be transmitted to the ground station through the Ethernet interface of the Hi3519A SOC.
The high-speed synchronous 422 sending module processes data at three independent levels, frames, bytes and bits, and mainly provides the following functions (a sketch of the byte- and bit-level steps follows this list):
Processor communication: the synchronous 422 controller in slave device mode is mapped into the processor's memory space through PCIe (Peripheral Component Interconnect Express), allowing the processor to directly control the transmission and reception of data frames;
Frame processing: the sending end reads out the transmission data and adds the frame header and frame tail to form a complete frame;
Byte processing: according to the protocol, the sending end inserts the corresponding stuffing bytes between frames;
Bit processing: the sending end completes byte-to-bit parallel-to-serial conversion and drives the clock and data outputs;
HDMI video output: the composited, uncompressed dual-channel visible light and infrared video stream can be output to equipment with the corresponding interface.
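A minimal sketch of the byte- and bit-level steps above is given below; the idle byte value and gap length are assumptions, not the protocol actually used by the device:

```python
# Sketch of the byte- and bit-level steps (assumed values; the patent does not
# disclose the actual stuffing bytes or gap length): insert idle stuffing bytes
# between frames on the synchronous link, then serialize bytes to bits MSB first.
IDLE = 0xAA            # assumed inter-frame stuffing byte
FRAME_GAP_BYTES = 8    # assumed minimum inter-frame gap

def serialize_frames(frames):
    """Concatenate frames, inserting stuffing bytes between them."""
    out = bytearray()
    for i, frame in enumerate(frames):
        if i > 0:
            out += bytes([IDLE]) * FRAME_GAP_BYTES
        out += frame
    return bytes(out)

def to_bits(data: bytes):
    """Byte-to-bit parallel-to-serial conversion, MSB first, as the 422 line clocks it out."""
    for byte in data:
        for k in range(7, -1, -1):
            yield (byte >> k) & 1

if __name__ == "__main__":
    f1 = bytes([0xEB, 0x90, 0x01, 0x02, 0x03, 0x14, 0x6F])
    f2 = bytes([0xEB, 0x90, 0xAA, 0xBB, 0x14, 0x6F])
    stream = serialize_frames([f1, f2])
    print(stream.hex())
    print(list(to_bits(stream[:2])))   # first two bytes as a serial bit stream
```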
4. Target tracking module:
The target tracking module runs in the floating point unit (FPU) built into the Hi3519A SOC and is invoked as an algorithm plug-in; its invocation timing and data exchange are controlled by the CPU. The module's input is the memory address of the current frame of the original video, and its outputs are the target's pixel miss-distance and the tracking state.
The target tracking module employs a Kernel Correlation Filter (KCF) algorithm based on Histogram of Oriented Gradients (HOG) features. Compared with grayscale features, HOG features make the algorithm robust to geometric deformation and to changes in illumination and color, and the use of texture structure improves stability on moving targets. The KCF algorithm has a relatively low computational load and good real-time performance; filter training in the initialization stage and real-time correction and learning of the tracking result help avoid the tracking failures that can be caused by rapid motion, deformation, occlusion and similar conditions.
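The core of a KCF tracker can be sketched as follows. This minimal example works on a raw grayscale patch rather than HOG features and does not reproduce the Hi3519A FPU implementation, so it should be read only as an illustration of the kernel correlation filtering idea:

```python
import numpy as np

# Minimal single-channel KCF core (illustrative sketch; the patent's module uses
# HOG feature channels, which are omitted here for brevity).
def gaussian_kernel_correlation(x, z, sigma=0.5):
    """Gaussian kernel correlation of all circular shifts, computed via FFT."""
    c = np.fft.ifft2(np.fft.fft2(x).conj() * np.fft.fft2(z)).real
    d = (np.sum(x ** 2) + np.sum(z ** 2) - 2.0 * c) / x.size
    return np.exp(-np.clip(d, 0, None) / (sigma ** 2))

def train(x, y, lam=1e-4):
    """Ridge regression in the Fourier domain: alpha_hat = y_hat / (k_hat + lambda)."""
    k = gaussian_kernel_correlation(x, x)
    return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

def detect(alpha_hat, x, z):
    """Response map for a new patch z; the peak indicates the target displacement."""
    k = gaussian_kernel_correlation(x, z)
    return np.fft.ifft2(np.fft.fft2(k) * alpha_hat).real

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.standard_normal((64, 64))
    # Gaussian-shaped regression target used for training.
    gy, gx = np.mgrid[0:64, 0:64]
    y = np.exp(-((gy - 32) ** 2 + (gx - 32) ** 2) / (2 * 2.0 ** 2))
    alpha_hat = train(patch, y)
    response = detect(alpha_hat, patch, np.roll(patch, (3, 5), axis=(0, 1)))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    print("response peak at", dy, dx)
```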
5. Target speed prediction module:
In order to improve target tracking accuracy and compensate for the delays introduced by circuit D/A conversion, data frame access, miss-distance calculation and similar links, the airborne video processing device uses a target speed prediction module to introduce feedforward control into the servo system. The speed prediction module runs in the DSP built into the Hi3519A SOC; it reads data such as the azimuth and pitch angles of the photoelectric pod and the target tracking error, and applies a Kalman filtering algorithm to predict the target speed. The predicted velocity and acceleration signals are fed into the input of the velocity-loop channel as feedforward control, improving servo performance and thus target tracking accuracy.
6. Target detection module:
The target detection module runs in the Neural Network Inference Engine (NNIE) built into the Hi3519A SOC and can perform automatic multi-target detection and identification against complex backgrounds. The detection model is YOLO V3, a single-stage model: target detection is formulated as a single regression problem that extracts bounding boxes and class probabilities directly from the image, with a single convolutional neural network predicting multiple bounding boxes and class probabilities. The model therefore achieves a fast detection speed while maintaining high accuracy.
The target detection module uses offline training and online detection. A YOLO V3 model built with the Caffe framework is trained offline on the required image library; after training, the model parameters are converted into a dedicated binary file with the RuyiStudio development tool provided by HiSilicon and imported into the onboard storage space. During operation, the CPU passes the image to be detected to the NNIE, which loads the imported model parameters, completes target detection in hardware, and outputs the class, confidence, position, size and other information of multiple targets.
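The post-processing stage of such a detector, confidence thresholding followed by class-wise non-maximum suppression, can be sketched as follows; the thresholds and box format are assumptions, and the NNIE inference itself is not shown:

```python
# Detector post-processing sketch (assumed thresholds; the YOLO V3 network and
# NNIE loading are not reproduced): keep confident boxes and suppress overlaps
# per class using intersection-over-union.
def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(dets, conf_thresh=0.4, iou_thresh=0.45):
    """dets: list of (box, score, class_id); returns the kept detections."""
    dets = [d for d in dets if d[1] >= conf_thresh]
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for d in dets:
        if all(d[2] != k[2] or iou(d[0], k[0]) < iou_thresh for k in kept):
            kept.append(d)
    return kept

if __name__ == "__main__":
    raw = [([10, 10, 60, 60], 0.92, 0), ([12, 12, 62, 62], 0.80, 0),
           ([100, 40, 150, 90], 0.55, 1), ([0, 0, 20, 20], 0.20, 0)]
    for box, score, cls in nms(raw):
        print(cls, score, box)
```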
7. Communication module:
through a plurality of asynchronous 422 serial ports extended by the FPGA, the airborne video processing device can realize real-time communication with a plurality of devices or systems.
1) Communication with the airborne flight control system. The module can send photoelectric pod status information, such as the servo azimuth, pitch and working mode, the optical sensor focal length, the remaining recording time of the storage module and the number of stored video clips, to the aircraft flight control system; the airborne flight control system sends the aircraft heading angle, pitch angle, roll angle, longitude and latitude, altitude and other information to the module.
2) Communication with the airborne inertial navigation system. The module can receive the aircraft heading angle, pitch angle, roll angle, longitude and latitude, altitude and other information sent by the carrier inertial navigation system; this information can be used for electronic image de-rotation and geographic tracking and positioning.
3) Communicating with the optoelectronic pod servo system. The module can receive servo state information, an optical imaging sensor field angle, current working optical imaging sensor information and laser ranging working state information which are sent by a servo system; the module sends a photoelectric pod device control instruction, a working mode control instruction, an optical imaging sensor control instruction and a laser range finder control instruction to the servo system.
4) And communicating with an upper computer display control system. The module receives a photoelectric pod equipment servo control instruction, an optical imaging sensor control instruction, a laser range finder control instruction and a video storage module control instruction which are sent by a display control system; the state information sent to the display control system by the module comprises the servo state of photoelectric pod equipment, the state of an optical imaging sensor, the state of a laser range finder and the working state of a video storage module.
8. Software functions:
The Hi3519A SOC is the main control processor and completes video encoding, storage, communication management and other functions. The FPGA handles the video and communication interfaces, completing video and data format conversion and the interface to the main control processor. Fig. 5 shows the Hi3519A software functional architecture, which is built on the hardware platform and mainly comprises the system boot, the Linux kernel, the driver layer and the application layer. The basic functions implemented are as follows:
1) Video encoding: completes encoding of the video data (1 SDI + 1 PAL video); the encoding mode (H.264/H.265) and the bit rate are configurable;
2) Video playback: supports one channel of video playback, i.e. reading recorded video files for playback;
3) OSD module: superimposes carrier attitude information, carrier and pod information, etc. onto the video for encoding, local storage and transmission;
4) Video file storage management: the encoded video can be stored on the electronic disk in the corresponding format and automatically overwritten when the disk is full;
5) RS422 interface control instruction parsing: receives and parses control instructions and, as required, implements video recording, video playback, OSD superposition, synchronous transmission of packed data and other functions;
6) BIT acquisition: periodically reports storage system status information, including the remaining recording time, the number of recorded files, the storage system state and other information;
7) Self-check: power-on self-check performs an initial device state check, and periodic self-check performs regular module self-checks, improving the reliability and stability of the equipment.
The software is based on a Linux 2.6.38 SMP development kit, is highly reliable, and is easy to upgrade, extend and maintain.
In summary, the onboard video processing device provided by the invention adopts an FPGA + SOC (HiSilicon Hi3519A) hardware architecture and implements a multifunctional video image processing module. It reserves a synchronous high-speed 422 video output interface, an Ethernet video output port and an HDMI video output port, as well as multiple asynchronous 422 communication interfaces for communication with the airborne flight control system, the inertial navigation system, the servo system and the upper computer, improving the installed adaptability of the module and meeting the video transmission and equipment control requirements of both wireless and tethered unmanned aerial vehicles.
In software, the airborne video processing device can realize video compression and storage, multi-target detection, target tracking, target speed prediction and other functions, enriching the module's functionality and achieving integration, multifunctionality and miniaturization.
The onboard video processing device adopts a KCF target tracking algorithm based on HOG features, improving the long-duration target tracking capability and tracking accuracy of the photoelectric pod.
The invention provides a method for multi-target detection and target speed prediction, and the module design is easy to extend in both hardware and software: for example, additional RS422 interfaces can be extended in hardware through the FPGA to directly control more optical sensors, and functions such as stabilized pointing can be added in software. This improves the integration and miniaturization level of the equipment and gives the module richer functions and stronger adaptability.
The airborne video processing device provided by the invention has the following advantages:
1) The FPGA + SOC (HiSilicon Hi3519A) hardware architecture exploits the respective performance and functional advantages of the FPGA and the Hi3519A, realizing a multifunctional video processing module with multiple reserved hardware input and output interfaces that meets the practical video processing requirements of existing unmanned aerial vehicles.
2) The module software can realize the functions of video compression storage, multi-target detection, target tracking, target speed prediction and the like, enriches the module functions and realizes integration, multiple functions and miniaturization.
3) The module adopts a KCF target tracking algorithm based on HOG features, improving the long-duration target tracking capability and tracking accuracy of the photoelectric pod.
4) The module design method is easy for function expansion, improves the integration and miniaturization levels of equipment, and has richer module functions and stronger adaptability.
5) The device is suitable for existing unmanned aerial vehicles or tethered unmanned aerial vehicles to realize video transmission and photoelectric pod control communication, meeting practical application requirements.
The following is a description of terms referred to in this application:
1. GTP transceiver:
A GTP transceiver is a transceiver inside the FPGA with a line rate of 500 Mb/s to 6.6 Gb/s. It can be flexibly configured using the programmable resources in the FPGA to suit different requirements such as Ethernet and SATA 1.0 interfaces, and serves as the physical layer of various high-speed serial interfaces.
2. HDMI interface:
The High-Definition Multimedia Interface is a fully digital video and audio transmission interface. It supports various television and computer video formats, including SDTV and HDTV video pictures, plus multi-channel digital audio, and can transmit uncompressed audio and video signals.
3. PAL video, SDI video:
PAL is an abbreviation of Phase Alternating Line. It is sometimes used to refer to the 625-line, 25 frames per second, interlaced, PAL colour-coded television system.
SDI (Serial Digital Interface) is a standard for transmitting digital video over coaxial cable. Its three formats, SD-SDI, HD-SDI and 3G-SDI, correspond to transmission rates of 270 Mb/s, 1.485 Gb/s and 2.97 Gb/s respectively. Like HDMI, SDI carries uncompressed audio and video signals; it has low loss and strong interference resistance and is widely used in the broadcasting, television and surveillance fields.
4. SOC: System on Chip, also called a system-level chip;
5. HiSilicon Hi3519A SOC related terms:
VI: video input;
VPSS: video processing;
VENC: video encoding;
PHY: port physical layer;
U-Boot: Universal Boot Loader, an open-source project released under the GNU General Public License (GPL), used for system boot;
MMZ: Media Memory Zone, one of the two physical memory regions of the HiSilicon chip, used as the multimedia memory area.
While the invention has been described in connection with specific embodiments thereof, it is to be understood that, as indicated by the appended drawings and description, the invention may be embodied in other specific forms without departing from the spirit or scope of the invention.

Claims (10)

1. An airborne video processing device is characterized by comprising an FPGA and a video processing chip which are arranged in an integrated mode, wherein the FPGA receives visible light television video signals and infrared thermal imager video signals and carries out format conversion to obtain video data, the video data are transmitted to the video processing chip through an interactive interface, the video processing chip encodes the video data according to a preset protocol and then stores the video data in a local storage, and the encoded video data are transmitted to ground receiving equipment through a wireless data link, or the video processing chip transmits the encoded video data to the FPGA and then the FPGA transmits the encoded video data to the ground receiving equipment through the wireless data link.
2. The onboard video processing device according to claim 1, further provided with: the SDI decoding chip is used for carrying out video decoding on received visible light television video signals and then transmitting the video signals to the FPGA, and the A/D decoding chip is used for decoding the received infrared thermal imager video signals and then transmitting the video signals to the FPGA.
3. The on-board video processing device according to claim 2, wherein the video processing chip has: the FPGA converts received decoded visible light television video signals into BT.1120 time sequences, converts received decoded infrared thermal imager video signals into BT.656 time sequences, inputs the video signals into the video input module to obtain image data, transmits the image data to the video processing module to be preprocessed to obtain video processing data, and the video coding module codes the video processing data according to an H.264/H.265 protocol.
4. The device of claim 3, wherein the video processing chip directly transmits the video data stream from the video input module to the video processing module in an online mode, and the video processing module sends the video processing data to the video encoding module in a row unit in a manner of collecting and sending data simultaneously for encoding.
5. The on-board video processing device according to claim 1, wherein the video processing chip transmits the encoded video data to the wireless data link through an ethernet port or to the FPGA through a PCIe expansion bus.
6. The onboard video processing device according to claim 5, wherein when the video processing module performs encoding, video data, airplane data and load status data are combined into a video data stream according to a preset protocol and stored in the local memory, and the FPGA reads the video data stream in the local memory through the PCIe interface, performs parallel-serial conversion on the video data stream, and then sends a clock and drives output according to a set code stream.
7. The on-board video processing device according to claim 1, wherein the FPGA has multiple asynchronous communication 422 interfaces for communicating with the on-board flight control, inertial navigation, servo system and upper computer.
8. The onboard video processing device according to claim 1, wherein the video processing chip has a target tracking module, and the target tracking module performs target tracking by using a kernel correlation filtering algorithm based on histogram of oriented gradients.
9. The on-board video processing device according to any of claims 1-8, wherein the video processing chip employs Hi3519A or Hi3559A.
10. An unmanned aerial vehicle, characterized in that the unmanned aerial vehicle is provided with an onboard electro-optical pod for video compression storage, multi-target detection, target tracking and target speed prediction using an onboard video processing device according to any one of claims 1-9.
CN202111193580.7A 2021-10-13 2021-10-13 Airborne video processing device and unmanned aerial vehicle with same Pending CN113890977A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111193580.7A CN113890977A (en) 2021-10-13 2021-10-13 Airborne video processing device and unmanned aerial vehicle with same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111193580.7A CN113890977A (en) 2021-10-13 2021-10-13 Airborne video processing device and unmanned aerial vehicle with same

Publications (1)

Publication Number Publication Date
CN113890977A 2022-01-04

Family

ID=79002683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111193580.7A Pending CN113890977A (en) 2021-10-13 2021-10-13 Airborne video processing device and unmanned aerial vehicle with same

Country Status (1)

Country Link
CN (1) CN113890977A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115041429A (en) * 2022-08-15 2022-09-13 中国电子科技集团公司第三十研究所 Multi-variety mixed line testing device and method for communication module products
CN115474090A (en) * 2022-08-31 2022-12-13 北京理工大学 Heterogeneous embedded real-time processing architecture supporting video target detection and tracking and application thereof
CN117055599A (en) * 2023-08-31 2023-11-14 北京航翊科技有限公司 Unmanned aerial vehicle flight control method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110265134A1 (en) * 2009-11-04 2011-10-27 Pawan Jaggi Switchable multi-channel data transcoding and transrating system
WO2017125916A1 (en) * 2016-01-19 2017-07-27 Vision Cortex Ltd Method and system for emulating modular agnostic control of commercial unmanned aerial vehicles (uavs)
CN107993257A (en) * 2017-12-28 2018-05-04 中国科学院西安光学精密机械研究所 A kind of intelligence IMM Kalman filtering feedforward compensation target tracking methods and system
CN111050107A (en) * 2019-11-11 2020-04-21 湖南君瀚信息技术有限公司 Wireless high-definition low-delay video transmission device, system and method
CN111932588A (en) * 2020-08-07 2020-11-13 浙江大学 Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning
CN112532978A (en) * 2020-11-25 2021-03-19 中国电子科技集团公司第五十四研究所 High-performance audio and video coding device based on HEVC
CN112751585A (en) * 2021-01-15 2021-05-04 天津航天中为数据系统科技有限公司 Unmanned aerial vehicle wireless communication terminal
CN113449566A (en) * 2020-03-27 2021-09-28 北京机械设备研究所 Intelligent image tracking method and system for low-speed small target in human-in-loop



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination