CN111695541A - Unmanned aerial vehicle forest fire prevention system and method based on machine vision

Unmanned aerial vehicle forest fire prevention system and method based on machine vision

Info

Publication number
CN111695541A
Authority
CN
China
Prior art keywords
unit
unmanned aerial
aerial vehicle
data
identification
Prior art date
Legal status
Pending
Application number
CN202010562077.3A
Other languages
Chinese (zh)
Inventor
何宜兵
段立新
宋博然
张神力
蔡忠鹏
Current Assignee
Shenzhen Tianhai Chenguang Technology Co ltd
Original Assignee
Shenzhen Tianhai Chenguang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Tianhai Chenguang Technology Co ltd
Priority to CN202010562077.3A
Publication of CN111695541A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2477 Temporal data queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/248 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • H04N7/185 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Fuzzy Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to an unmanned aerial vehicle forest fire prevention system and method based on machine vision. The system comprises an unmanned aerial vehicle, a ground station, an intelligent algorithm terminal and a display device. The method comprises the following steps: the unmanned aerial vehicle sends the forest scene video data shot by the pod to the ground station; the ground station forwards the received video data to the intelligent algorithm terminal; the intelligent algorithm terminal intelligently analyzes the received video, gives an identification result, and forwards the video and the identification result to the display device; the display device displays on the GIS map the position information of smoke, ignition points, fire fields or fire wires, water sources and vehicles at the forest scene, together with the fire spreading information. The unmanned aerial vehicle forest fire prevention system and method based on machine vision enable forest fire prevention departments to comprehensively grasp the overall situation of a forest fire when it occurs, facilitating timely and efficient arrangement and deployment of fire extinguishing.

Description

Unmanned aerial vehicle forest fire prevention system and method based on machine vision
Technical Field
The invention relates to the field of machine vision application, in particular to an unmanned aerial vehicle forest fire prevention system and method based on machine vision.
Background
Forest fires are one of the major forestry disasters worldwide; a number of forest fires occur every year and cause great losses of forestry resources. When a forest fire occurs, the fire often spreads rapidly, so a scientific means of grasping the overall situation of the fire is needed. At present, unmanned aerial vehicles are generally used for inspection, but during inspection the unmanned aerial vehicle only collects video of the forest fire scene and cannot present detailed, specific information about the fire, such as smoke, the position of the ignition point, the size and position of the fire field, the position and length of the fire wire, the position of water sources and the position of fire trucks, and therefore cannot provide systematic reference for emergency command.
Disclosure of Invention
In order to solve the problem that the prior art cannot provide more detailed, specific, intelligent and structured information about a forest fire scene for emergency command when a forest fire occurs, the invention provides an unmanned aerial vehicle forest fire prevention system and method based on machine vision. The system and method enable forest fire prevention departments to comprehensively grasp the overall situation of a forest fire when it occurs, facilitating timely and efficient arrangement and deployment of fire extinguishing.
The technical scheme provided by the invention is as follows:
an unmanned aerial vehicle forest fire prevention system based on machine vision, wherein, the system includes:
and the unmanned aerial vehicle is used for carrying out real-time aerial photography on the forest fire scene to obtain real-time video data.
And the ground station is used for controlling the unmanned aerial vehicle and receiving real-time video and data of aerial photography of the unmanned aerial vehicle.
And the intelligent algorithm terminal is used for analyzing the real-time video data of the forest fire scene to obtain an intelligent analysis and identification result.
And the display equipment is used for displaying the position information of the smoke, the ignition point, the fire field or the fire wire, the water source and the vehicle of the forest scene on the GIS map according to the obtained identification result information and the fire spreading information.
The unmanned aerial vehicle forest fire prevention system based on machine vision, wherein the unmanned aerial vehicle specifically includes:
and the power unit is used for providing power for the flight and stability of the unmanned aerial vehicle.
And the main control unit is used for maintaining the stability and navigation of the unmanned aerial vehicle and receiving and processing the remote control command of the ground station.
And the pod execution unit is used for aerial photographing real-time video data of the forest fire scene.
And the GIS unit is used for mapping and corresponding the aerial real-time video data and the actual map coordinates.
And the sensor unit comprises a wind speed sensor, a wind direction sensor, a magnetometer, a gyroscope and the like.
And the communication link unit is used for receiving the remote control signal instruction and transmitting video and data to the ground station.
The unmanned aerial vehicle forest fire prevention system based on machine vision, wherein the ground station specifically includes:
and the communication link unit is used for receiving the real-time video and data aerial-photographed by the unmanned aerial vehicle and sending a remote control command to the unmanned aerial vehicle.
And the remote control unit is used for sending a remote control instruction to the unmanned aerial vehicle.
And the streaming media forwarding unit is used for sending the received real-time video and data to the intelligent algorithm terminal.
The unmanned aerial vehicle forest fire prevention system based on machine vision, wherein the intelligent algorithm terminal specifically includes:
and the communication unit is used for receiving the real-time video and the data and sending the video data and the identification result data.
And the frame extracting unit is used for performing frame extracting processing on the received real-time video data to obtain picture data.
And the smoke identification unit is used for identifying smoke in the picture and outputting identification result information.
And the ignition point identification unit is used for identifying the ignition point in the picture and outputting identification result information.
And the fire scene identification unit is used for identifying the fire scene edge information in the picture and outputting identification result information.
And the fire wire identification unit is used for identifying the fire wire in the picture and outputting identification result information.
And the water source identification unit is used for identifying the water source in the picture and outputting identification result information.
And the vehicle identification unit is used for identifying the fire fighting truck in the picture and outputting identification result information.
The machine vision algorithm models of the smoke identification unit, the ignition point identification unit, the fire scene identification unit, the fire wire identification unit, the water source identification unit and the vehicle identification unit are based on a deep convolution neural network.
And the identification result unit is used for adding a time stamp and position information to the identification result, packaging the identification result into a data frame and sending the data frame to the display equipment by the communication unit.
The unmanned aerial vehicle forest fire prevention system based on machine vision, wherein the display device specifically includes:
and the communication unit is used for receiving the video and the identification result data pushed by the intelligent algorithm terminal.
And the coordinate mapping unit is used for mapping the identification result data to a specific GIS map.
And the GIS display unit is used for dynamically displaying the structured data of the identification result on the GIS map.
And the decoding synchronization unit is used for decoding and time-synchronizing the received video and the identification result structured data.
And the video display unit is used for displaying the real-time video and the structured data.
An unmanned aerial vehicle forest fire prevention method based on machine vision, wherein the method comprises the following steps:
and the unmanned aerial vehicle sends the forest scene video data shot by the pod to the ground station.
And the ground station forwards the received video data to the intelligent algorithm terminal.
The intelligent algorithm terminal intelligently analyzes the received video, gives an identification result, and forwards the video and the identification result to the display equipment.
The display equipment displays the position information of smoke, ignition points, fire fields or fire wires, water sources and vehicles on the forest scene on the GIS map, and the information of the fire spreading trend.
The unmanned aerial vehicle forest fire prevention method based on the machine vision is characterized in that the unmanned aerial vehicle sends forest scene video data shot by a pod to a ground station, and the method specifically comprises the following steps:
the power unit of the unmanned aerial vehicle provides power for the unmanned aerial vehicle to stably fly in the forest fire scene.
And the pod execution unit of the unmanned aerial vehicle takes aerial real-time video data of the scene of the forest fire.
And the GIS unit of the unmanned aerial vehicle maps and corresponds the aerial real-time video data and the actual map coordinates.
And the sensor unit of the unmanned aerial vehicle acquires information such as wind direction and wind speed.
And the communication link unit of the unmanned aerial vehicle receives the remote control signal instruction and transmits video and data to the ground station.
The unmanned aerial vehicle forest fire prevention method based on machine vision is characterized in that the ground station forwards the received video data to the intelligent algorithm terminal, and the method specifically comprises the following steps:
and the communication link unit of the ground station receives real-time video and data aerial-photographed by the unmanned aerial vehicle and is used for sending a remote control command to the unmanned aerial vehicle.
And the remote control unit of the ground station sends a remote control instruction to the unmanned aerial vehicle.
And the streaming media forwarding unit of the ground station sends the received real-time video and data to the intelligent algorithm terminal.
The unmanned aerial vehicle forest fire prevention method based on the machine vision comprises the following steps that the intelligent algorithm terminal intelligently analyzes a received video, gives an identification result, and forwards the video and the identification result to a display device, and specifically comprises the following steps:
and the communication unit of the intelligent algorithm terminal receives the real-time video and data and sends the video data and the identification result data.
And the frame extracting unit of the intelligent algorithm terminal performs frame extracting processing on the received real-time video data to obtain picture data.
And the smoke identification unit of the intelligent algorithm terminal identifies smoke in the picture and outputs identification result information.
And the ignition point identification unit of the intelligent algorithm terminal identifies the ignition point in the picture and outputs identification result information.
And the fire scene identification unit of the intelligent algorithm terminal identifies the fire scene edge information in the picture and outputs identification result information.
And the live wire identification unit of the intelligent algorithm terminal identifies the live wire in the picture and outputs identification result information.
And the water source identification unit of the intelligent algorithm terminal identifies the water source in the picture and outputs identification result information.
And the vehicle identification unit of the intelligent algorithm terminal identifies the fire fighting vehicle in the picture and outputs identification result information.
The machine vision algorithm models of the smoke identification unit, the ignition point identification unit, the fire scene identification unit, the fire wire identification unit, the water source identification unit and the vehicle identification unit of the intelligent algorithm terminal are based on a deep convolutional neural network.
And the identification result unit of the intelligent algorithm terminal adds the time stamp and the position information to the identification result, encapsulates the identification result into a data frame and sends the data frame to the display equipment through the communication unit.
The unmanned aerial vehicle forest fire prevention method based on the machine vision comprises the following steps that the display equipment displays position information of forest scene smoke, ignition points, fire fields or fire wires, water sources and vehicles and fire spreading trend information on a GIS map, and specifically comprises the following steps:
and the communication unit of the display equipment receives the video and the identification result data pushed by the intelligent algorithm terminal.
And the coordinate mapping unit of the display equipment maps the identification result data to a specific GIS map.
And the GIS display unit of the display equipment dynamically displays the structured data of the identification result on the GIS map.
And the decoding synchronization unit of the display equipment decodes and time-synchronizes the received video and the identification result structured data.
And the video display unit of the display equipment displays real-time video and structured data.
The unmanned aerial vehicle forest fire prevention system and method based on machine vision enable forest fire prevention departments to comprehensively grasp the overall situation of a forest fire when it occurs, facilitating timely and efficient arrangement and deployment of fire extinguishing.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a system architecture block diagram of a forest fire prevention system of an unmanned aerial vehicle based on machine vision.
Fig. 2 is a functional structure block diagram of an unmanned aerial vehicle in the system architecture of the forest fire prevention system of the unmanned aerial vehicle based on machine vision.
Fig. 3 is a functional structure block diagram of a ground station in a system architecture of a forest fire prevention system of an unmanned aerial vehicle based on machine vision.
Fig. 4 is a functional structure block diagram of an intelligent algorithm terminal in a system architecture of the unmanned aerial vehicle forest fire prevention system based on machine vision.
Fig. 5 is a functional structure block diagram of a display device in a system architecture of the unmanned aerial vehicle forest fire prevention system based on machine vision.
Fig. 6 is a flowchart of a preferred embodiment of a forest fire prevention method for an unmanned aerial vehicle based on machine vision according to the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and more explicit, the present invention is described in further detail below. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
The invention provides a system architecture block diagram of an unmanned aerial vehicle forest fire prevention system based on machine vision, which is shown in fig. 1. The system specifically comprises:
unmanned aerial vehicle 100, ground station 200, intelligent algorithm terminal 300, display device 400.
The unmanned aerial vehicle 100 is used for performing real-time aerial photography of a forest fire scene to obtain real-time video data; the unmanned aerial vehicle 100 is an industrial unmanned aerial vehicle; its aerial photography is performed by the pod execution unit; the aerial photography height and flight path are controlled by the ground station.
The ground station 200 is used for controlling the unmanned aerial vehicle and receiving the real-time video and data aerial-photographed by the unmanned aerial vehicle; controlling the unmanned aerial vehicle includes controlling its flight direction, altitude, speed and other parameters; the real-time video data are the real-time video and data aerial-photographed by the pod execution unit of the drone 100.
The intelligent algorithm terminal 300 is used for analyzing real-time video data of a forest fire scene to obtain an intelligent analysis and identification result; and sends the intelligent analysis recognition result and the related information to the display device 400.
The display device 400 is used for displaying, on the GIS map, the position information of smoke, ignition points, fire fields or fire wires, water sources and vehicles according to the obtained identification result information, together with the fire spreading information; it can also display the real-time video of smoke, ignition points, fire fields or fire wires, water sources and vehicles at the forest scene; the fire spreading trend information is derived from the ignition point, fire field or fire wire information and the wind speed and wind direction information provided by the unmanned aerial vehicle 100.
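A minimal sketch of one way such a spread trend could be estimated, assuming a simple wind-driven advance of the fire line points; the spread factor, time horizon and wind-direction convention are illustrative assumptions rather than part of the described system:

```python
import math

def estimate_spread(fire_line_points, wind_speed_mps, wind_dir_deg, horizon_s=600, k=0.05):
    """Project each fire-line point downwind to estimate its position after horizon_s seconds.

    fire_line_points: list of (lon, lat) in degrees; wind_dir_deg: direction the wind blows toward.
    k is an assumed dimensionless spread factor (rate of spread ~ k * wind speed).
    """
    rate = k * wind_speed_mps                            # assumed rate of spread in m/s
    dist = rate * horizon_s                              # metres travelled over the horizon
    dx = dist * math.sin(math.radians(wind_dir_deg))     # east component (m)
    dy = dist * math.cos(math.radians(wind_dir_deg))     # north component (m)
    projected = []
    for lon, lat in fire_line_points:
        # rough metre-to-degree conversion near latitude `lat`
        projected.append((lon + dx / (111_320 * math.cos(math.radians(lat))),
                          lat + dy / 110_540))
    return projected
```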
The invention provides a functional structure block diagram of the unmanned aerial vehicle in the system architecture of the unmanned aerial vehicle forest fire prevention system based on machine vision, which is shown in fig. 2. The unmanned aerial vehicle specifically comprises:
power unit 101, master control unit 102, pod execution unit 103, GIS unit 104, sensor unit 105, communication link unit 106.
The power unit 101 of the unmanned aerial vehicle 100 is used for providing power for the flight and stabilization of the unmanned aerial vehicle; the power unit 101 is a standard unmanned aerial vehicle subsystem and is not described herein again.
The main control unit 102 of the drone 100 is configured to maintain stability and navigation of the drone, and to receive and process remote control commands from the ground station; the main control unit 102 is a standard unmanned aerial vehicle main controller and is not described herein again.
The pod execution unit 103 of the unmanned aerial vehicle 100 is used for aerial photography of real-time video data of a forest fire scene; the pod execution unit is uniquely identified by a unique pod ID; the pod ID and the equipment ID of the unmanned aerial vehicle uniquely correspond; the aerial video also comprises identification information such as unmanned aerial vehicle equipment ID, pod ID, video ID and the like; the aerial video is provided with geographical position information; the geographic position information refers to geographic position information of a GIS map.
The GIS unit 104 of the unmanned aerial vehicle 100 is configured to map and correspond real-time video data of aerial photography and actual map coordinates; the mapping correspondence refers to the correspondence between the pixel coordinate position information of the real-time video and the position information of the actual physical map, so as to obtain the specific position information of each pixel point in the video on the physical map.
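A minimal sketch of such a pixel-to-map mapping, assuming a nadir-pointing pod camera with a known horizontal field of view and a known drone position, altitude and heading; all parameter names are illustrative assumptions, and a real system would use the pod's gimbal angles and camera calibration:

```python
import math

def pixel_to_geo(px, py, img_w, img_h, drone_lon, drone_lat, alt_m,
                 hfov_deg=62.0, heading_deg=0.0):
    """Map an image pixel to an approximate (lon, lat), assuming a nadir-pointing pod camera."""
    ground_w = 2 * alt_m * math.tan(math.radians(hfov_deg) / 2)   # ground footprint width (m)
    ground_h = ground_w * img_h / img_w                           # footprint height (m)
    # offsets from the image centre in metres (x to the right, y forward)
    x = (px - img_w / 2) / img_w * ground_w
    y = (img_h / 2 - py) / img_h * ground_h
    # rotate by the drone heading to obtain east/north offsets
    east = x * math.cos(math.radians(heading_deg)) + y * math.sin(math.radians(heading_deg))
    north = -x * math.sin(math.radians(heading_deg)) + y * math.cos(math.radians(heading_deg))
    lat = drone_lat + north / 110_540
    lon = drone_lon + east / (111_320 * math.cos(math.radians(drone_lat)))
    return lon, lat
```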
The sensor unit 105 of the drone 100 includes a wind speed sensor, a wind direction sensor, a magnetometer, a gyroscope, and the like; the wind speed sensor is used for detecting the wind speed in the current aerial photography; the wind direction sensor is used for detecting the wind direction during current aerial photography.
The communication link unit 106 of the unmanned aerial vehicle 100 is configured to receive a remote control signal instruction and transmit video and data to a ground station; the remote control signal instruction is transmitted to the unmanned aerial vehicle by the ground station through the communication link unit 106; the video and data are transmitted by the communication link unit 106 to the ground station; the video is a real-time video aerial-photographed by the pod execution unit 103; the data includes, but is not limited to: coordinate mapping data, wind speed data and wind direction data of the GIS unit.
The invention provides a functional structure block diagram of the ground station in the system architecture of the unmanned aerial vehicle forest fire prevention system based on machine vision, which is shown in fig. 3. The ground station specifically comprises:
communication link unit 201, remote control unit 202, and streaming media forwarding unit 203.
The communication link unit 201 of the ground station 200 is configured to receive real-time video and data obtained by aerial photography by the unmanned aerial vehicle, and send a remote control instruction to the unmanned aerial vehicle; the real-time video is a live-action video of a forest fire scene aerial-photographed by the pod execution unit 103 of the unmanned aerial vehicle 100; the video is uniquely identified by an unmanned aerial vehicle device ID, a pod ID and a video ID; the data includes, but is not limited to: coordinate mapping data, wind speed data and wind direction data of the GIS unit.
The remote control unit 202 of the ground station 200 is configured to send a remote control instruction to the unmanned aerial vehicle; the remote control unit 202 sends a remote control instruction to the main control unit 102 of the drone 100 through the communication link 201; the main control unit 102 receives the control instruction sent by the remote control unit 202, and converts the control instruction into a power system instruction to control the flight of the unmanned aerial vehicle.
The streaming media forwarding unit 203 of the ground station 200 is configured to send the received real-time video and data to an intelligent algorithm terminal; the streaming media forwarding unit 203 executes a streaming media forwarding function based on the RTSP streaming media protocol; the real-time video data refers to real-time video data received from the drone 100; the received data includes, but is not limited to: coordinate mapping data, wind speed data and wind direction data of the GIS unit.
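A minimal sketch of RTSP-based forwarding, relaying the received stream to the intelligent algorithm terminal without re-encoding; the FFmpeg invocation and the stream URLs are illustrative assumptions:

```python
import subprocess

def relay_rtsp(source_url: str, target_url: str) -> subprocess.Popen:
    """Relay an incoming RTSP stream to the intelligent algorithm terminal without re-encoding.

    Both URLs are placeholders; a deployment would use the ground station's actual
    ingest address and the terminal's RTSP server address.
    """
    cmd = [
        "ffmpeg",
        "-rtsp_transport", "tcp",   # favour TCP on a lossy long-range radio link
        "-i", source_url,           # stream received from the drone's communication link
        "-c", "copy",               # forward the compressed video as-is
        "-f", "rtsp", target_url,   # push to the algorithm terminal's RTSP server
    ]
    return subprocess.Popen(cmd)

# Example (placeholder addresses):
# relay_rtsp("rtsp://groundstation.local/drone1", "rtsp://algorithm-terminal.local/live/drone1")
```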
The invention provides a functional structure block diagram of the intelligent algorithm terminal in the system architecture of the unmanned aerial vehicle forest fire prevention system based on machine vision, which is shown in fig. 4. The intelligent algorithm terminal specifically comprises:
the system comprises a communication unit 301, a frame extracting unit 302, a smoke identification unit 303, a fire point identification unit 304, a fire scene identification unit 305, a fire wire identification unit 306, a water source identification unit 307, a vehicle identification unit 308 and an identification result unit 309.
The communication unit 301 of the intelligent algorithm terminal 300 is configured to receive real-time video and data, and send video data and identification result data; the communication unit 301 receives and transmits data based on the RTSP streaming media protocol; the real-time video is a real-time video forwarded by the ground station; the video is uniquely identified by an unmanned aerial vehicle device ID, a pod ID and a video ID; the data received from the ground station includes, but is not limited to: coordinate mapping data, wind speed data and wind direction data of the GIS unit.
The frame extracting unit 302 of the intelligent algorithm terminal 300 is configured to perform frame extraction processing on the received real-time video data to obtain picture data; the process of frame extraction processing is actually a process of decoding the real-time video data and then coding to obtain picture data; typically, three to five frames are extracted in one second, and in practical application, the frequency of frame extraction can be adjusted according to actual needs; the encoding format of the picture includes but is not limited to: JPEG, JPEG2000, BMP; the picture is uniquely identified by an unmanned aerial vehicle device ID, a pod ID, a video ID and a picture ID; the picture also corresponds to coordinate mapping data, wind speed data and wind direction data of the GIS unit.
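A minimal sketch of such frame extraction with OpenCV, sampling roughly three frames per second and re-encoding them as JPEG pictures; the identifier format and output naming are illustrative assumptions:

```python
import time
import cv2

def extract_frames(stream_url, out_dir, frames_per_second=3,
                   drone_id="UAV01", pod_id="POD01", video_id="VID01"):
    """Decode the real-time stream and re-encode sampled frames as JPEG pictures.

    The IDs are illustrative; each picture only needs to carry the drone, pod,
    video and picture identifiers as described above.
    """
    cap = cv2.VideoCapture(stream_url)
    interval = 1.0 / frames_per_second
    last, picture_idx = 0.0, 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        now = time.time()
        if now - last >= interval:          # sample roughly 3-5 frames per second
            last = now
            name = f"{out_dir}/{drone_id}_{pod_id}_{video_id}_{picture_idx:06d}.jpg"
            cv2.imwrite(name, frame, [cv2.IMWRITE_JPEG_QUALITY, 90])
            picture_idx += 1
    cap.release()
```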
The smoke recognition unit 303 of the intelligent algorithm terminal 300 is configured to recognize smoke in a picture and output recognition result information; the smoke recognition unit 303 is a smoke recognition algorithm model obtained by learning a large number of smoke videos or pictures based on a deep neural network of machine learning; the smoke identification unit 303 performs object detection and segmentation on the picture data to obtain smoke identification result information of the picture; the smoke recognition result information includes: whether smoke exists or not, position information of the smoke in the picture and a smoke identification frame; mapping the position information by pixel point coordinates of smoke appearing in the picture and coordinates in a GIS map to obtain actual physical coordinate information; the position information is uniquely identified in the picture by a plurality of pixel points; the smoke identification frame is used for calibrating the size of the identified smoke area and is generally a rectangular frame.
The ignition point recognition unit 304 of the intelligent algorithm terminal 300 is configured to recognize an ignition point in a picture and output recognition result information; the fire point identification unit 304 is a fire point identification algorithm model obtained by learning a large number of fire point videos or pictures based on a machine learning deep neural network; the fire point identification unit 304 performs object detection and segmentation on the picture data to obtain fire point identification result information of the picture; the fire point recognition result information includes: whether the ignition point exists or not, position information of the ignition point in the picture and an identification frame of the ignition point; the position information is mapped by pixel point coordinates of the fire point appearing in the picture and coordinates in the GIS map to obtain actual physical coordinate information; the identification frame of the ignition point is used for calibrating the size of the identified ignition point area and is generally a rectangular frame.
The fire scene recognition unit 305 of the intelligent algorithm terminal 300 is configured to recognize fire scene edge information in a picture and output recognition result information; the fire scene recognition unit 305 is a fire scene recognition algorithm model obtained by learning a large number of fire scene videos or pictures based on a deep neural network of machine learning; the fire scene recognition unit 305 performs object detection and segmentation on the picture data to obtain fire scene recognition result information of the picture; the fire scene identification result information includes: whether a fire scene exists, the position information of the fire scene edge in the picture and an identification frame of the fire scene; mapping the position information by pixel point coordinates of the fire scene edge in the picture and coordinates in the GIS map to obtain actual physical coordinate information; the fire scene edge is uniquely identified in the picture by a plurality of pixel points; the identification frame of the fire scene is used for calibrating the size of the identified fire scene area and is generally a rectangular frame; the rectangular frame for identifying the fire scene can be a plurality of frames.
The fire wire identification unit 306 of the intelligent algorithm terminal 300 is configured to identify a fire wire in a picture and output identification result information; the live wire identification unit 306 is a live wire identification algorithm model obtained by learning a large number of live wire videos or pictures based on a deep neural network of machine learning; the live wire identification unit 306 performs object detection and segmentation on the picture data to obtain live wire identification result information of the picture; the fire wire identification result information includes: whether a live wire exists, the position information of the live wire in the picture and an identification frame of the live wire; mapping the position information by pixel point coordinates of the live wire appearing in the picture and coordinates in the GIS map to obtain actual physical coordinate information; the live wire is uniquely identified in the picture by a plurality of pixel points; the identification frame of the live wire is used for calibrating the size and the range of the identified live wire area, and is generally a rectangular frame; the rectangular frame for identifying the fire wire can be a plurality of frames.
The water source identification unit 307 of the intelligent algorithm terminal 300 is configured to identify a water source in a picture and output identification result information; the water source identification unit 307 is a water source identification algorithm model obtained by learning a large number of water source videos or pictures based on a deep neural network of machine learning; the water source identification unit 307 performs object detection and segmentation on the picture data to obtain water source identification result information of the picture; the water source identification result information includes: whether a water source exists or not, position information of the edge of the water source in the picture and an identification frame of the water source; mapping the position information by pixel point coordinates of the edge of the water source in the picture and coordinates in the GIS map to obtain actual physical coordinate information; the edge of the water source is uniquely identified in the picture by a plurality of pixel points; the water source identification frame is used for calibrating the size of the identified water source area and is generally a rectangular frame; the rectangular frame for identifying the water source can be a plurality of frames.
The vehicle identification unit 308 of the intelligent algorithm terminal 300 is configured to identify the fire fighting vehicle in the picture and output identification result information; the vehicle identification unit 308 is used for identifying the fire fighting truck and giving the position information of the fire fighting truck.
The machine vision algorithm models of the smoke identification unit 303, the ignition point identification unit 304, the fire scene identification unit 305, the fire wire identification unit 306, the water source identification unit 307 and the vehicle identification unit 308 of the intelligent algorithm terminal 300 are based on a deep convolutional neural network.
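A minimal inference sketch, using a generic torchvision detector as a stand-in for these units; the class count, score threshold and preprocessing are illustrative assumptions, and each unit would load weights trained on its own target imagery (smoke, ignition point, fire field, fire wire, water source, vehicle):

```python
import torch
import torchvision

# Stand-in deep convolutional detector; the patent does not name a specific architecture.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=7)  # 6 targets + background (assumed)
model.eval()

def detect(picture_bgr, score_threshold=0.5):
    """Run detection on one extracted picture (HxWx3 uint8 BGR array) and return boxes, labels, scores."""
    img = torch.from_numpy(picture_bgr[:, :, ::-1].copy()).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([img])[0]                  # torchvision detectors take a list of CHW tensors
    keep = out["scores"] > score_threshold
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]
```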
The recognition result unit 309 of the intelligent algorithm terminal 300 is configured to add a timestamp and location information to the recognition result, encapsulate the result into a data frame, and send the data frame to a display device through a communication unit; the time stamp information is added to the identification result so as to realize the synchronization of the structural data presentation and the video display of the identification result; the timestamp is a UTC time; the position information is obtained by coordinate mapping; the recognition results include, but are not limited to: type, location, region size, timestamp; the type refers to a recognition result type, and is specifically divided into the following types according to the recognition result condition: smoke, ignition point, fire field, fire line, water source, vehicle.
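A minimal sketch of one possible encoding of such a data frame as JSON with a UTC timestamp; the field names are illustrative assumptions:

```python
import json
from datetime import datetime, timezone

def pack_result_frame(result_type, map_position, region_size,
                      drone_id, pod_id, video_id, picture_id):
    """Package one recognition result as a JSON data frame (field names are assumed).

    result_type: one of "smoke", "ignition_point", "fire_field", "fire_line", "water_source", "vehicle".
    map_position: (lon, lat) after GIS coordinate mapping; region_size: (width_m, height_m).
    """
    frame = {
        "type": result_type,
        "position": {"lon": map_position[0], "lat": map_position[1]},
        "region_size": {"width_m": region_size[0], "height_m": region_size[1]},
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),  # UTC timestamp for video synchronisation
        "source": {"drone_id": drone_id, "pod_id": pod_id,
                   "video_id": video_id, "picture_id": picture_id},
    }
    return json.dumps(frame).encode("utf-8")
```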
The invention provides a functional structure block diagram of the display device in the system architecture of the unmanned aerial vehicle forest fire prevention system based on machine vision, which is shown in fig. 5. The display device specifically comprises:
communication unit 401, coordinate mapping unit 402, GIS display unit 403, decoding synchronization unit 404, and video display unit 405.
The communication unit 401 of the display device 400 is configured to receive a video and identification result data pushed by an intelligent algorithm terminal; the transmission of the video is based on an RTSP streaming media protocol; the recognition result data includes, but is not limited to: type, location, area size, timestamp.
The coordinate mapping unit 402 of the display device 400 is configured to map the identification result data to a specific GIS map; the coordinate mapping unit finds out specific position information of the identification result on the GIS map according to the position information in the identification result data; and associating the position information with the type and the area size; the type refers to a recognition result type, and is specifically divided into the following types according to the recognition result condition: smoke, ignition point, fire field, fire line, water source, vehicle.
The GIS display unit 403 of the display device 400 is configured to dynamically display the structured data of the recognition result on a GIS map; the GIS display unit 403 displays the position and area size information of smoke, ignition point, fire field, fire wire, water source, vehicle and the like on a GIS map according to the mapping information of the coordinate mapping unit 402, and has structural data description information; at a certain point on the GIS map, real-time video information of a fire scene can be checked; the decoding and presentation of the real-time video information is processed by the decoding synchronization unit 404 and the video display unit 405.
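A minimal sketch of how a mapped recognition result could be handed to a GIS display layer as a GeoJSON feature; the property names are illustrative assumptions and reuse the data-frame fields sketched above:

```python
def result_to_geojson(result):
    """Convert one decoded recognition result (a dict with type, position, region_size,
    timestamp_utc, all assumed field names) into a GeoJSON point feature for GIS display."""
    return {
        "type": "Feature",
        "geometry": {
            "type": "Point",
            "coordinates": [result["position"]["lon"], result["position"]["lat"]],
        },
        "properties": {
            "category": result["type"],              # smoke, ignition_point, fire_field, ...
            "region_size_m": result["region_size"],
            "timestamp_utc": result["timestamp_utc"],
        },
    }

# A GIS layer can then be refreshed with a FeatureCollection of such features:
# layer = {"type": "FeatureCollection", "features": [result_to_geojson(r) for r in results]}
```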
The decoding synchronization unit 404 of the display device 400 is configured to decode and time-synchronize the received video and the identification result structured data; the decoding synchronization unit 404 calls a video decoder to decode the video; the decoding synchronization unit 404 calls a font library to render the structured data information; the decoding synchronization unit 404 is further configured to obtain the timestamp information of the video and of the structured data information; the timestamp information is used to implement the synchronization function.
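A minimal sketch of such timestamp-based synchronization, matching recognition data frames to a decoded video frame by timestamp; the 0.5-second tolerance is an assumed display parameter:

```python
from bisect import bisect_left

def match_results_to_frame(frame_ts, result_frames, tolerance_s=0.5):
    """Return the recognition data frames whose timestamps fall within tolerance_s
    of a decoded video frame's timestamp, so overlays stay in step with the video.

    result_frames must be sorted by their "timestamp" value (seconds since epoch, UTC).
    """
    timestamps = [r["timestamp"] for r in result_frames]
    i = bisect_left(timestamps, frame_ts - tolerance_s)
    matched = []
    while i < len(result_frames) and timestamps[i] <= frame_ts + tolerance_s:
        matched.append(result_frames[i])
        i += 1
    return matched
```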
The video display unit 405 of the display device 400 is configured to display real-time video and structured data; the video display unit 405 is configured to display the video data and the structured data decoded by the decoding synchronization unit 404.
The invention provides a flowchart of a preferred embodiment of the unmanned aerial vehicle forest fire prevention method based on machine vision, which is shown in fig. 6. The method specifically comprises the following steps:
step S100: and the unmanned aerial vehicle sends the forest scene video data shot by the pod to the ground station.
The power unit 101 of the unmanned aerial vehicle 100 provides power for the unmanned aerial vehicle to stably fly in a forest fire scene.
The pod execution unit 103 of the unmanned aerial vehicle 100 takes aerial real-time video data of a forest fire scene.
The GIS unit 104 of the drone 100 maps and corresponds the real-time video data of the aerial photograph with the actual map coordinates.
The sensor unit 105 of the drone 100 acquires information such as wind direction and wind speed.
The communication link unit 106 of the drone 100 receives remote control signal commands and transmits video and data to ground stations.
The power unit 101, the pod execution unit 103, the GIS unit 104, the sensor unit 105, and the communication link unit 106 of the unmanned aerial vehicle 100 are already described in detail in the functional block diagram of the unmanned aerial vehicle in the system architecture of the forest fire protection system of the unmanned aerial vehicle based on machine vision in fig. 2, and are not described herein again.
Step S200: and the ground station forwards the received video data to the intelligent algorithm terminal.
The communication link unit 201 of the ground station 200 receives real-time video and data of the aerial photo of the unmanned aerial vehicle, and is used for sending a remote control command to the unmanned aerial vehicle.
The remote control unit 202 of the ground station 200 sends a remote control instruction to the drone.
The streaming media forwarding unit 203 of the ground station 200 sends the received real-time video and data to the intelligent algorithm terminal.
The communication link unit 201, the remote control unit 202, and the streaming media forwarding unit 203 of the ground station 200 have already been described in detail in the functional structure block diagram part of the ground station in the system architecture of the forest fire prevention system of the unmanned aerial vehicle based on machine vision, which is proposed in fig. 3, and are not described herein again.
Step S300: the intelligent algorithm terminal intelligently analyzes the received video, gives an identification result, and forwards the video and the identification result to the display equipment.
The communication unit 301 of the intelligent algorithm terminal 300 receives real-time video and data, and transmits video data and recognition result data.
The frame extracting unit 302 of the intelligent algorithm terminal 300 performs frame extraction processing on the received real-time video data to obtain picture data.
The smoke recognition unit 303 of the intelligent algorithm terminal 300 recognizes smoke in the picture and outputs recognition result information.
The ignition point recognition unit 304 of the intelligent algorithm terminal 300 recognizes the ignition point in the picture and outputs recognition result information.
The fire scene recognition unit 305 of the intelligent algorithm terminal 300 recognizes fire scene edge information in a picture and outputs recognition result information.
The live wire recognition unit 306 of the intelligent algorithm terminal 300 recognizes the live wire in the picture and outputs recognition result information.
The water source identification unit 307 of the intelligent algorithm terminal 300 identifies the water source in the picture and outputs identification result information.
The vehicle recognition unit 308 of the intelligent algorithm terminal 300 recognizes the fire engine in the picture and outputs recognition result information.
The machine vision algorithm models of the smoke identification unit 303, the ignition point identification unit 304, the fire scene identification unit 305, the fire wire identification unit 306, the water source identification unit 307 and the vehicle identification unit 308 of the intelligent algorithm terminal 300 are based on a deep convolutional neural network.
The recognition result unit 309 of the intelligent algorithm terminal 300 adds a timestamp and position information to the recognition result, and encapsulates the result into a data frame, which is sent to the display device by the communication unit.
The communication unit 301, the frame extracting unit 302, the smoke identifying unit 303, the ignition point identifying unit 304, the fire scene identifying unit 305, the fire wire identifying unit 306, the water source identifying unit 307, the vehicle identifying unit 308, and the identification result unit 309 of the intelligent algorithm terminal 300 are described in detail in the functional structure block diagram part of the intelligent algorithm terminal in the system architecture of the unmanned aerial vehicle forest fire prevention system based on machine vision, which is provided in fig. 4, and are not described herein again.
Step S400: the display equipment displays the position information of smoke, ignition points, fire fields or fire wires, water sources and vehicles on the forest scene on the GIS map, and the fire spreading information.
The communication unit 401 of the display device 400 receives the video and the recognition result data pushed by the intelligent algorithm terminal.
The coordinate mapping unit 402 of the display device 400 maps the recognition result data to a specific GIS map.
The GIS display unit 403 of the display device 400 dynamically displays the structured data of the recognition result on a GIS map.
The decoding synchronization unit 404 of the display device 400 decodes and time synchronizes the received video and the identification structured data.
The video display unit 405 of the display device 400 displays real-time video as well as structured data.
The communication unit 401, the coordinate mapping unit 402, the GIS display unit 403, the decoding synchronization unit 404, and the video display unit 405 of the display device 400 are already described in detail in the functional structure block diagram part of the display device in the system architecture of the unmanned aerial vehicle forest fire prevention system based on machine vision, which is provided in fig. 5, and are not described herein again.
The unmanned aerial vehicle forest fire prevention system and method based on machine vision enable forest fire prevention departments to comprehensively grasp the overall situation of a forest fire when it occurs, facilitating timely and efficient arrangement and deployment of fire extinguishing.
It should be understood that the invention is not limited to the embodiments described above, but that modifications and variations can be made by one skilled in the art in light of the above teachings, and all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. An unmanned aerial vehicle forest fire prevention system based on machine vision, characterized in that the system comprises:
the unmanned aerial vehicle is used for carrying out real-time aerial photography on a forest fire scene to obtain real-time video data;
the ground station is used for controlling the unmanned aerial vehicle and receiving real-time video and data aerial-photographed by the unmanned aerial vehicle;
the intelligent algorithm terminal is used for analyzing real-time video data of the forest fire scene to obtain an intelligent analysis and identification result;
and the display equipment is used for displaying the position information of the smoke, the ignition point, the fire field or the fire wire, the water source and the vehicle of the forest scene on the GIS map according to the obtained identification result information and the fire spreading information.
2. The unmanned aerial vehicle forest fire prevention system based on machine vision as claimed in claim 1, wherein the unmanned aerial vehicle specifically comprises:
the power unit is used for providing power for the flight and stability of the unmanned aerial vehicle;
the main control unit is used for maintaining the stability and navigation of the unmanned aerial vehicle and receiving and processing a remote control command of the ground station;
the pod execution unit is used for aerial photographing real-time video data of a forest fire scene;
the GIS unit is used for mapping and corresponding the real-time video data of aerial photography and the actual map coordinate;
the sensor unit comprises a wind speed sensor, a wind direction sensor, a magnetometer, a gyroscope and the like;
and the communication link unit is used for receiving the remote control signal instruction and transmitting video and data to the ground station.
3. The unmanned aerial vehicle forest fire prevention system based on machine vision as claimed in claim 1, wherein the ground station specifically comprises:
the communication link unit is used for receiving real-time videos and data aerial-photographed by the unmanned aerial vehicle and sending remote control instructions to the unmanned aerial vehicle;
the remote control unit is used for sending a remote control instruction to the unmanned aerial vehicle;
and the streaming media forwarding unit is used for sending the received real-time video and data to the intelligent algorithm terminal.
4. The unmanned aerial vehicle forest fire prevention system based on machine vision as claimed in claim 1, wherein the intelligent algorithm terminal specifically comprises:
the communication unit is used for receiving real-time videos and data and sending the video data and the identification result data;
the frame extracting unit is used for carrying out frame extracting processing on the received real-time video data to obtain picture data;
the smoke identification unit is used for identifying smoke in the picture and outputting identification result information;
the ignition point identification unit is used for identifying the ignition point in the picture and outputting identification result information;
the fire scene identification unit is used for identifying fire scene edge information in the picture and outputting identification result information;
the live wire identification unit is used for identifying the live wire in the picture and outputting identification result information;
the water source identification unit is used for identifying the water source in the picture and outputting identification result information;
the vehicle identification unit is used for identifying the fire fighting vehicle in the picture and outputting identification result information;
the machine vision algorithm models of the smoke identification unit, the ignition point identification unit, the fire scene identification unit, the fire wire identification unit, the water source identification unit and the vehicle identification unit are based on a deep convolution neural network;
and the identification result unit is used for adding a time stamp and position information to the identification result, packaging the identification result into a data frame and sending the data frame to the display equipment by the communication unit.
5. The unmanned aerial vehicle forest fire prevention system based on machine vision as claimed in claim 1, wherein the display device specifically comprises:
the communication unit is used for receiving the video and the identification result data pushed by the intelligent algorithm terminal;
the coordinate mapping unit is used for mapping the identification result data to a specific GIS map;
the GIS display unit is used for dynamically displaying the structured data of the identification result on a GIS map;
a decoding synchronization unit for decoding and time-synchronizing the received video and the identification result structured data;
and the video display unit is used for displaying the real-time video and the structured data.
6. An unmanned aerial vehicle forest fire prevention method based on machine vision is characterized by comprising the following steps:
the unmanned aerial vehicle sends the forest scene video data shot by the pod to the ground station;
the ground station forwards the received video data to an intelligent algorithm terminal;
the intelligent algorithm terminal intelligently analyzes the received video, gives an identification result, and forwards the video and the identification result to the display equipment;
the display equipment displays the position information of smoke, ignition points, fire fields or fire wires, water sources and vehicles on the forest scene on the GIS map, and the fire spreading information.
7. The forest fire prevention method of the unmanned aerial vehicle based on the machine vision as claimed in claim 6, wherein the unmanned aerial vehicle sends forest scene video data shot by a pod to a ground station, and the method comprises the following steps:
the power unit of the unmanned aerial vehicle provides power for the unmanned aerial vehicle to stably fly in the forest fire scene;
the pod execution unit of the unmanned aerial vehicle takes aerial photograph of real-time video data of a forest fire scene;
the GIS unit of the unmanned aerial vehicle maps and corresponds the real-time video data of aerial photography and the actual map coordinates;
the sensor unit of the unmanned aerial vehicle acquires information such as wind direction and wind speed;
and the communication link unit of the unmanned aerial vehicle receives the remote control signal instruction and transmits video and data to the ground station.
8. The unmanned aerial vehicle forest fire prevention method based on machine vision as claimed in claim 6, wherein the ground station forwards the received video data to an intelligent algorithm terminal, specifically comprising the steps of:
the communication link unit of the ground station receives real-time video and data aerial-photographed by the unmanned aerial vehicle and is used for sending a remote control command to the unmanned aerial vehicle;
the remote control unit of the ground station sends a remote control instruction to the unmanned aerial vehicle;
and the streaming media forwarding unit of the ground station sends the received real-time video and data to the intelligent algorithm terminal.
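As a minimal sketch of the streaming media forwarding step, the snippet below relays UDP video packets from the UAV link to the intelligent algorithm terminal; the ports, addresses and choice of raw UDP are assumptions, and a production ground station would more likely rely on an RTSP/RTMP media server.

import socket

def relay_stream(listen_port, terminal_addr):
    # Receive video packets from the UAV communication link and forward them,
    # unchanged, to the intelligent algorithm terminal.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("0.0.0.0", listen_port))
    while True:
        packet, _ = rx.recvfrom(65535)
        tx.sendto(packet, terminal_addr)

# relay_stream(5600, ("192.168.1.50", 5600))  # example addresses only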
9. The unmanned aerial vehicle forest fire prevention method based on machine vision as claimed in claim 6, wherein the intelligent algorithm terminal intelligently analyzes the received video, gives an identification result, and forwards the video and the identification result to the display device, specifically comprising the steps of:
the communication unit of the intelligent algorithm terminal receives real-time video and data and sends video data and identification result data;
the frame extracting unit of the intelligent algorithm terminal performs frame extracting processing on the received real-time video data to obtain picture data;
the smoke identification unit of the intelligent algorithm terminal identifies smoke in the picture and outputs identification result information;
the ignition point identification unit of the intelligent algorithm terminal identifies the ignition point in the picture and outputs identification result information;
the fire scene identification unit of the intelligent algorithm terminal identifies fire scene edge information in the picture and outputs identification result information;
the fire line identification unit of the intelligent algorithm terminal identifies the fire line in the picture and outputs identification result information;
the water source identification unit of the intelligent algorithm terminal identifies the water source in the picture and outputs identification result information;
the vehicle identification unit of the intelligent algorithm terminal identifies the fire fighting vehicle in the picture and outputs identification result information;
the machine vision algorithm models of the smoke identification unit, the ignition point identification unit, the fire scene identification unit, the fire line identification unit, the water source identification unit and the vehicle identification unit of the intelligent algorithm terminal are based on a deep convolutional neural network;
and the identification result unit of the intelligent algorithm terminal adds the time stamp and the position information to the identification result, encapsulates the identification result into a data frame and sends the data frame to the display device through the communication unit.
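The frame-extraction and per-class identification steps can be sketched as below, assuming OpenCV is available on the intelligent algorithm terminal; the detectors dictionary and its detect() method stand in for the six CNN-based identification units and are hypothetical, as is the choice of keeping every tenth frame.

import cv2  # OpenCV; assumed available on the intelligent algorithm terminal

def analyze_stream(source, detectors, every_n_frames=10):
    # Decode the incoming video, keep every n-th frame (frame extraction),
    # then run each class-specific detector on the extracted picture.
    cap = cv2.VideoCapture(source)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        if frame_idx % every_n_frames:
            continue
        results = {}
        for name, detector in detectors.items():   # smoke, ignition point, fire scene,
            results[name] = detector.detect(frame) # fire line, water source, vehicle
        yield frame_idx, results                   # passed on to the identification result unit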
10. The unmanned aerial vehicle forest fire prevention method based on machine vision as claimed in claim 6, wherein the display device displays, on the GIS map, the position information of smoke, ignition points, fire field or fire line, water sources and vehicles in the forest scene, together with fire spread information, specifically comprising the steps of:
the communication unit of the display device receives the video and the identification result data pushed by the intelligent algorithm terminal;
the coordinate mapping unit of the display device maps the identification result data to a specific GIS map;
the GIS display unit of the display device dynamically displays the structured data of the identification result on a GIS map;
the decoding synchronization unit of the display device decodes and time-synchronizes the received video and the identification result structured data;
and the video display unit of the display device displays the real-time video and the structured data.
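One plausible reading of the decoding synchronization step is nearest-timestamp matching between decoded video frames and identification result records, sketched below; the 200 ms tolerance and the record layout are assumptions.

import bisect

def synchronize(frame_timestamps_ms, result_records, tolerance_ms=200):
    # Pair each decoded frame with the identification result closest in time,
    # so the overlay drawn by the video display unit matches the picture shown.
    result_times = [r["timestamp_ms"] for r in result_records]  # assumed sorted ascending
    pairs = []
    for ft in frame_timestamps_ms:
        i = bisect.bisect_left(result_times, ft)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(result_times)]
        if not candidates:
            continue
        best = min(candidates, key=lambda j: abs(result_times[j] - ft))
        if abs(result_times[best] - ft) <= tolerance_ms:
            pairs.append((ft, result_records[best]))
    return pairs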
CN202010562077.3A 2020-06-18 2020-06-18 Unmanned aerial vehicle forest fire prevention system and method based on machine vision Pending CN111695541A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010562077.3A CN111695541A (en) 2020-06-18 2020-06-18 Unmanned aerial vehicle forest fire prevention system and method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010562077.3A CN111695541A (en) 2020-06-18 2020-06-18 Unmanned aerial vehicle forest fire prevention system and method based on machine vision

Publications (1)

Publication Number Publication Date
CN111695541A true CN111695541A (en) 2020-09-22

Family

ID=72481804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010562077.3A Pending CN111695541A (en) 2020-06-18 2020-06-18 Unmanned aerial vehicle forest fire prevention system and method based on machine vision

Country Status (1)

Country Link
CN (1) CN111695541A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202472841U (en) * 2011-12-19 2012-10-03 南京农业大学 Forest fire monitoring and early warning system based on IOT
CN103824138A (en) * 2012-11-19 2014-05-28 郭志华 Forest fire hazard emergency command decision management GIS three-dimensional platform
CN105096508A (en) * 2015-07-27 2015-11-25 中国电子科技集团公司第三十八研究所 Forest-fire-prevention digital informatization integration command system
CN108416963A (en) * 2018-05-04 2018-08-17 湖北民族学院 Forest Fire Alarm method and system based on deep learning
CN110517433A (en) * 2019-08-21 2019-11-29 深圳云感物联网科技有限公司 Forest Fire Prevention Direction based on GIS dispatches system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668397A (en) * 2020-12-04 2021-04-16 普宙飞行器科技(深圳)有限公司 Fire real-time detection and analysis method and system, storage medium and electronic equipment
CN113409484A (en) * 2021-06-24 2021-09-17 广东电网有限责任公司 Intelligent disaster investigation system
WO2023284520A1 (en) * 2021-07-12 2023-01-19 环球数科集团有限公司 Multi-feature fusion based fire identification system
CN113823056A (en) * 2021-09-26 2021-12-21 中电科西北集团有限公司 Unmanned aerial vehicle forest fire prevention monitoring system based on remote monitoring
CN114558267A (en) * 2022-03-03 2022-05-31 上海应用技术大学 Industrial scene fire prevention and control system
CN115689076A (en) * 2022-08-23 2023-02-03 北京化工大学 Forest fire rescue vehicle path optimization method for loading fire extinguishing bomb
CN115689076B (en) * 2022-08-23 2023-06-16 北京化工大学 Forest fire rescue vehicle path optimization method loaded with fire extinguishing bomb

Similar Documents

Publication Publication Date Title
CN111695541A (en) Unmanned aerial vehicle forest fire prevention system and method based on machine vision
US11024092B2 (en) System and method for augmented reality content delivery in pre-captured environments
CN112581590B (en) Unmanned aerial vehicle cloud edge terminal cooperative control method for 5G security rescue networking
CN103686084A (en) Panoramic video monitoring method used for cooperative real-time reconnaissance of multiple unmanned aerial vehicles
CN110866991A (en) Marine inspection supervisory systems based on unmanned aerial vehicle takes photo by plane
CN111307291B (en) Surface temperature anomaly detection and positioning method, device and system based on unmanned aerial vehicle
CN105516604A (en) Aerial video sharing method and system
CN110636255A (en) Unmanned aerial vehicle image and video transmission and distribution system and method based on 4G network
WO2022062860A1 (en) Data processing method, apparatus and device for point cloud media, and storage medium
WO2023029551A1 (en) Image stitching method and system based on multiple unmanned aerial vehicles
CN111656356A (en) Object recognition system using distributed neural network
CN110460579B (en) Flight data display method, system and device and readable storage medium
CN111708916A (en) Unmanned aerial vehicle cluster video intelligent processing system and method
CN116824480A (en) Monitoring video analysis method and system based on deep stream
WO2023060405A1 (en) Unmanned aerial vehicle monitoring method and apparatus, and unmanned aerial vehicle and monitoring device
CN114326764A (en) Rtmp transmission-based smart forestry unmanned aerial vehicle fixed-point live broadcast method and unmanned aerial vehicle system
KR101674033B1 (en) Image mapping system of a closed circuit television based on the three dimensional map
CN108304420A (en) Unmanned plane image processing method and device
CN115731633A (en) Visualization method and system for multiple data acquired by sensor
CN113347390B (en) POS information link transmission system and method thereof
US11825066B2 (en) Video reproduction apparatus, reproduction method, and program
CN116740316B (en) XR technology-based high-precision business hall people stream monitoring panoramic display method
CN114598692B (en) Point cloud file transmission method, application method, device, equipment and storage medium
US20230334716A1 (en) Apparatus and method for providing 3-dimensional spatial data based on spatial random access
KR102548624B1 (en) System for Generating Safety Alarm In Risk Predicted Area by Analyzing Risk in Advance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200922