WO2022110912A1 - Unmanned aerial vehicle video-based forest fire spreading data assimilation method and apparatus - Google Patents


Info

Publication number
WO2022110912A1
Authority
WO
WIPO (PCT)
Prior art keywords
fire
time
line
position information
model
Prior art date
Application number
PCT/CN2021/112848
Other languages
French (fr)
Chinese (zh)
Inventor
陈涛
黄丽达
孙占辉
袁宏永
刘春慧
王晓萌
白硕
张立凡
王镜闲
Original Assignee
清华大学
北京辰安科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学 and 北京辰安科技股份有限公司
Publication of WO2022110912A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/188: Vegetation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29: Geographical information databases
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00: Fire alarms; Alarms responsive to explosion
    • G08B17/12: Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B17/125: Actuation by presence of radiation or particles by using a video camera to detect fire or smoke
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10: Adaptation technologies in agriculture
    • Y02A40/28: Adaptation technologies in agriculture specially adapted for farming

Definitions

  • The present application relates to the field of data processing, and in particular to a method, apparatus, electronic device, and storage medium for data assimilation of forest fire spread based on unmanned aerial vehicle (UAV) video; it belongs to the application field of data assimilation.
  • Forest fires are difficult to control: they destroy forest ecosystems, cause environmental pollution, and threaten the safety of human life and property.
  • Emergency-management workers therefore urgently need front-line fire information and accurate forest fire spread predictions within a short period of time, so as to gain valuable time for emergency rescue work.
  • Geostationary satellites have high orbits, wide coverage, and high observation frequency, but relatively low spatial resolution: they can generally only detect fire fields on the kilometer scale and have difficulty effectively monitoring small fire fields. Compared with geostationary satellites, polar-orbiting satellites collect remote-sensing images with high spatial resolution but low observation frequency.
  • The present application aims to solve, at least to a certain extent, one of the technical problems in the related art.
  • To this end, the present application proposes a method, apparatus, electronic device, and computer-readable storage medium for forest fire spread data assimilation based on UAV video, which can improve the accuracy of the forest fire spread model and predict fire information with high accuracy, thereby providing objective fire information for forest fire fighting.
  • A method for assimilating forest fire spread data based on UAV video includes: acquiring the meteorological data and basic geographic information data of the place where the fire occurred, and obtaining the fire line state analysis value at time K-1 for that place; and inputting the meteorological data, the basic geographic information data, and the fire line state analysis value at time K-1 into the forest fire spread model to obtain the predicted fire line position information at time K.
  • The forest fire spread model includes a Rothermel model and a Huygens wave model. Inputting the meteorological data, the basic geographic information data, and the fire line state analysis value at time K-1 into the forest fire spread model to obtain the predicted fire line position information at time K includes: inputting the meteorological data and the basic geographic information data into the Rothermel model to obtain the forest fire spread speed at time K-1; and inputting the forest fire spread speed at time K-1 and the fire line state analysis value at time K-1 into the Huygens wave model to predict the fire line position, obtaining the predicted fire line position information at time K.
  • Obtaining the fire line observation position information at time K from the thermal-imaging video of the fire field area includes: obtaining the thermal image of the fire field area at time K from the video; determining the temperature corresponding to each pixel of the thermal image; extracting the fire field range from the thermal image according to the per-pixel temperature and a temperature threshold; performing edge extraction on the fire field range to obtain the pixel positions of the fire line; and converting the pixel positions of the fire line into GPS (Global Positioning System) coordinates to obtain the fire line observation position information at time K.
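As a minimal sketch of the threshold-and-edge-extraction step described above (the temperature threshold value and the assumption that the thermal frame has already been decoded into a per-pixel temperature array are illustrative, not from the application):

```python
import numpy as np

def extract_fire_line_pixels(temp_map, temp_threshold=300.0):
    """Extract fire-line (edge) pixel positions from a per-pixel
    temperature map of one thermal-imaging frame.

    temp_map: 2-D array of temperatures (one value per pixel).
    temp_threshold: pixels at or above this value are treated as
    inside the fire field (illustrative value).
    """
    fire = temp_map >= temp_threshold            # fire-field mask

    # A fire pixel lies on the fire line if any 4-neighbour is outside
    # the fire field (a simple morphological edge, no OpenCV needed).
    padded = np.pad(fire, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    edge = fire & ~interior

    rows, cols = np.nonzero(edge)                # edge pixel positions
    return np.stack([rows, cols], axis=1)
```

The returned pixel positions would then be converted to GPS coordinates as described in the following paragraphs.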
  • GPS: Global Positioning System
  • The number of UAVs is at least one. For thermal-imaging video of the fire field area shot by at least one UAV at multiple observation points, multiple pixel positions of the fire line are obtained. Converting the pixel positions of the fire line into GPS coordinates and obtaining the fire line observation position information at time K includes: performing coordinate conversion on the multiple pixel positions of the fire line to obtain multiple coordinate values of the fire line in the UAV geographic coordinate system.
  • From the multiple coordinate values of the fire line in the UAV geographic coordinate system, multiple observation elevation-angle matrices and multiple azimuth-angle matrices of the fire line are calculated; Kalman-filter estimation is performed on the fire line position according to these matrices to obtain the estimated coordinate value of the fire line; and the estimated coordinate value is converted into GPS coordinates to obtain the fire line observation position information at time K.
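The angle-based estimation above can be illustrated with a simplified least-squares triangulation of a single fire-line point from several UAV observation points. This is a stand-in for the Kalman-filter estimation described in the application; the east/north/up frame and compass-style azimuth convention are assumptions:

```python
import numpy as np

def triangulate_from_angles(uav_positions, azimuths, elevations):
    """Least-squares estimate of one fire-line point from several UAV
    observation points (simplified stand-in for the Kalman-filter
    estimation in the application).

    uav_positions: (N, 3) east/north/up coordinates of each UAV.
    azimuths, elevations: per-UAV viewing angles (radians) toward the point;
    azimuth measured clockwise from north, elevation from the horizontal.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, az, el in zip(np.asarray(uav_positions, float), azimuths, elevations):
        # Unit line-of-sight direction in the east/north/up frame.
        d = np.array([np.cos(el) * np.sin(az),
                      np.cos(el) * np.cos(az),
                      np.sin(el)])
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ p
    # Point minimizing the summed squared distance to all sight rays.
    return np.linalg.solve(A, b)
```

With two or more non-parallel sight rays the normal matrix is full rank and the estimate is unique.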
  • Converting the pixel positions of the fire line into GPS coordinates and obtaining the fire line observation position information at time K includes: obtaining the DEM (Digital Elevation Model) geographic information of the place where the fire occurred; obtaining the GPS information, attitude information, and built-in (intrinsic) parameters of the UAV; generating a virtual perspective of the UAV viewpoint from the DEM geographic information and the UAV's GPS information, attitude information, and built-in parameters; simulating the actual UAV imaging process from that virtual perspective to obtain a simulated image; determining, from the pixel positions of the fire line, the pixel coordinates of the fire line in the simulated image; and converting those pixel coordinates into GPS coordinates to obtain the fire line observation position information at time K.
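A much-simplified sketch of projecting one fire-line pixel to ground coordinates follows. It assumes a pinhole camera with zero roll and flat terrain at a known elevation, instead of the full DEM-based virtual-perspective simulation described above; all parameter names are illustrative:

```python
import numpy as np

def pixel_to_ground(pixel, cam_pos, yaw, pitch, f_px, cx, cy, ground_z=0.0):
    """Project one image pixel to ground coordinates (sketch: pinhole
    camera, roll = 0, flat terrain at ground_z instead of a DEM).

    pixel: (u, v) pixel coordinates of the fire-line point.
    cam_pos: (east, north, up) UAV camera position.
    yaw: heading (radians, 0 = north, clockwise positive).
    pitch: downward camera tilt (radians).
    f_px: focal length in pixels; cx, cy: principal point.
    """
    u, v = pixel
    a, b = (u - cx) / f_px, (v - cy) / f_px
    ray = np.array([a, 1.0, -b])          # pixel ray in ENU at zero attitude
    # Tilt the ray downward by `pitch` about the east axis.
    cp, sp = np.cos(-pitch), np.sin(-pitch)
    R_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    # Rotate by the compass heading about the up axis.
    c, s = np.cos(yaw), np.sin(yaw)
    R_yaw = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])
    d = R_yaw @ R_pitch @ ray
    cam_pos = np.asarray(cam_pos, float)
    t = (ground_z - cam_pos[2]) / d[2]    # intersect ray with flat ground
    return cam_pos + t * d
```

In the full method the ray would instead be intersected with the DEM surface, and the east/north result converted to GPS coordinates.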
  • DEM: Digital Elevation Model
  • Judging, according to the predicted fire line position information and the fire line observation position information at time K, whether parameter adjustment of the forest fire spread model is needed includes: calculating the deviation between the predicted fire line position information at time K and the fire line observation position information; judging whether the deviation converges within the target range; if the deviation does not converge within the target range, judging whether the number of iterations of the fire spread model is less than the maximum number of iterations; if it is less, determining that parameter adjustment of the forest fire spread model is required; and if the deviation converges within the target range and/or the number of iterations is greater than or equal to the maximum number of iterations, stopping parameter adjustment of the forest fire spread model.
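The decision logic above can be sketched as follows; the deviation metric (mean point-wise distance), tolerance, and iteration budget are illustrative assumptions, not values from the application:

```python
import numpy as np

def needs_parameter_adjustment(pred_pos, obs_pos, n_iter,
                               tol=5.0, max_iter=10):
    """Decide whether the spread-model parameters should be adjusted.

    pred_pos, obs_pos: (m, 2) arrays of fire-line marker coordinates.
    n_iter: iterations already performed for this time step.
    tol, max_iter: illustrative convergence tolerance and iteration budget.
    Returns (adjust?, deviation).
    """
    # Mean point-wise distance between predicted and observed fire line.
    deviation = np.mean(np.linalg.norm(pred_pos - obs_pos, axis=1))
    if deviation <= tol:            # converged: stop adjusting
        return False, deviation
    if n_iter >= max_iter:          # iteration budget exhausted: stop
        return False, deviation
    return True, deviation          # adjust parameters and re-predict
```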
  • Adjusting the model parameters of the forest fire spread model according to the predicted fire line position information and the fire line observation position information at time K, and recalculating the predicted fire line position information at time K with the parameter-adjusted model, includes: calculating the deviation between the predicted fire line position information at time K and the fire line observation position information; adjusting the forest fire spread speed at time K-1 according to a preset forest-fire-spread-speed update coefficient matrix and the deviation; and inputting the adjusted forest fire spread speed at time K-1 and the fire line state analysis value at time K-1 into the Huygens wave model to obtain the predicted fire line position information at time K again.
  • Adjusting the forest fire spread speed at time K-1 according to the preset forest-fire-spread-speed update coefficient matrix and the deviation includes: multiplying the update coefficient matrix by the deviation, and adding the resulting product to the forest fire spread speed at time K-1 to obtain the adjusted speed.
  • Calculating the fire line state analysis value at time K according to the recalculated predicted fire line position information at time K and the fire line observation position information includes: performing, based on the ensemble Kalman filter algorithm, least-squares fitting between the recalculated predicted fire line position information at time K and the fire line observation position information, to obtain the fire line state analysis value at time K.
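A minimal textbook ensemble-Kalman-filter analysis step (perturbed-observations form, with the observation operator taken as the identity and an assumed observation error) illustrates how a forecast fire-line ensemble is fused with the observed fire line; it is a generic sketch, not the application's exact multi-convergent formulation:

```python
import numpy as np

def enkf_analysis(ensemble_f, obs, obs_std=1.0, rng=None):
    """One ensemble-Kalman-filter analysis step.

    ensemble_f: (N, m) forecast ensemble, each row one stacked fire line.
    obs: (m,) observed fire-line state (observation operator = identity).
    obs_std: assumed observation-error standard deviation.
    """
    rng = np.random.default_rng(rng)
    N, m = ensemble_f.shape
    X = ensemble_f - ensemble_f.mean(axis=0)           # anomalies
    P = X.T @ X / (N - 1)                              # ensemble covariance
    R = (obs_std ** 2) * np.eye(m)                     # observation error
    K = P @ np.linalg.inv(P + R)                       # Kalman gain (H = I)
    # Perturbed observations keep the analysis spread statistically correct.
    obs_pert = obs + obs_std * rng.standard_normal((N, m))
    return ensemble_f + (obs_pert - ensemble_f) @ K.T  # analysis ensemble
```

With a small observation error, the analysis ensemble mean is drawn close to the observation, which is the behavior the assimilation step above relies on.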
  • An apparatus for assimilating forest fire spread data based on UAV video includes: a first acquisition module for acquiring the meteorological data and basic geographic information data of the place where the fire occurred; a second acquisition module for obtaining the fire line state analysis value at time K-1 for that place; a third acquisition module for inputting the meteorological data, the basic geographic information data, and the fire line state analysis value at time K-1 into the forest fire spread model to obtain the predicted fire line position information at time K; a fourth acquisition module for obtaining the UAV-based thermal-imaging video of the fire field area; a fifth acquisition module for obtaining the fire line observation position information at time K from the thermal-imaging video; a judgment module for judging, according to the predicted fire line position information and the fire line observation position information at time K, whether parameter adjustment of the forest fire spread model is needed; an adjustment module for adjusting, when parameter adjustment is needed, the model parameters of the forest fire spread model according to the predicted fire line position information and the fire line observation position information at time K, and recalculating the predicted fire line position information at time K with the parameter-adjusted model; and a data assimilation module for calculating the fire line state analysis value at time K according to the recalculated predicted fire line position information at time K and the fire line observation position information.
  • The forest fire spread model includes a Rothermel model and a Huygens wave model, and the third acquisition module is specifically configured to: input the meteorological data and the basic geographic information data into the Rothermel model to obtain the forest fire spread speed at time K-1; and input the forest fire spread speed at time K-1 and the fire line state analysis value at time K-1 into the Huygens wave model for fire line position prediction, obtaining the predicted fire line position information at time K.
  • The fifth acquisition module is specifically configured to: obtain the thermal image of the fire field area at time K from the thermal-imaging video; determine the temperature corresponding to each pixel of the thermal image; extract the fire field range from the thermal image according to the per-pixel temperature and a temperature threshold; perform edge extraction on the fire field range to obtain the pixel positions of the fire line; and convert the pixel positions of the fire line into GPS coordinates to obtain the fire line observation position information at time K.
  • The number of UAVs is at least one, and the fifth acquisition module is specifically configured to obtain multiple pixel positions of the fire line from the thermal-imaging videos of the fire field area captured by the at least one UAV at multiple observation points.
  • The fifth acquisition module is specifically configured to: perform coordinate conversion on the multiple pixel positions of the fire line to obtain multiple coordinate values of the fire line in the UAV geographic coordinate system; calculate multiple observation elevation-angle matrices and multiple azimuth-angle matrices of the fire line from those coordinate values; perform Kalman-filter estimation of the fire line position according to the matrices to obtain the estimated coordinate value of the fire line; and convert the estimated coordinate value into GPS coordinates to obtain the fire line observation position information at time K.
  • The fifth acquisition module is specifically configured to: obtain the DEM geographic information of the place where the fire occurred; obtain the GPS information, attitude information, and built-in parameters of the UAV; generate a virtual perspective of the UAV viewpoint from the DEM geographic information and the UAV's GPS information, attitude information, and built-in parameters; simulate the actual UAV imaging process from that virtual perspective to obtain a simulated image; determine the pixel coordinates of the fire line in the simulated image; and convert those pixel coordinates into GPS coordinates to obtain the fire line observation position information at time K.
  • The judgment module is specifically configured to: calculate the deviation between the predicted fire line position information at time K and the fire line observation position information; judge whether the deviation converges within the target range; if not, judge whether the number of iterations of the fire spread model is less than the maximum number of iterations; if it is less, determine that parameter adjustment of the forest fire spread model is required; and if the deviation converges within the target range and/or the number of iterations is greater than or equal to the maximum, stop adjusting the parameters of the forest fire spread model.
  • The adjustment module is specifically configured to: calculate the deviation between the predicted fire line position information at time K and the fire line observation position information; adjust the forest fire spread speed at time K-1 according to the preset forest-fire-spread-speed update coefficient matrix and the deviation; and input the adjusted forest fire spread speed at time K-1 and the fire line state analysis value at time K-1 into the Huygens wave model to obtain the predicted fire line position information at time K again.
  • The adjustment module is specifically configured to: multiply the forest-fire-spread-speed update coefficient matrix by the deviation, and add the resulting product to the forest fire spread speed at time K-1.
  • The data assimilation module is specifically configured to: perform, based on the ensemble Kalman filter algorithm, least-squares fitting between the recalculated predicted fire line position information at time K and the fire line observation position information, to obtain the fire line state analysis value at time K.
  • An electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method for assimilating forest fire spread data based on UAV video described in the embodiments of the first aspect of the present application is implemented.
  • A computer-readable storage medium has a computer program stored thereon; when the computer program is executed by a processor, it implements the UAV-video-based forest fire spread data assimilation method according to the embodiments of the first aspect of the present application.
  • By combining the fire-point positioning technology of UAV thermal-imaging video, the position of the fire line is identified through intelligent analysis of the video, the parameters of the forest fire spread model are corrected in real time with the observed fire line position, and the model is iterated dynamically to realize the data assimilation process. This effectively solves the problem that the fire line cannot be obtained in real time and the model parameters cannot be corrected in time, which would otherwise make the accuracy of the prediction results impossible to guarantee.
  • In addition, the UAV used in this application can move and shoot quickly, cover a large area, and transmit video rapidly.
  • This application further proposes a multi-convergent ensemble Kalman filter data assimilation method for the unsteady meteorological conditions in the forest fire area: while the observed fire line position corrects the parameters of the forest fire spread model in real time, the forest fire spread speed is iterated dynamically, effectively improving the accuracy of the forest fire spread model.
  • FIG. 1 is a schematic flowchart of a method for assimilating forest fire spread data based on UAV video according to an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of a method for assimilating forest fire spread data based on UAV video according to an embodiment of the present application;
  • FIG. 3 is a schematic flowchart of a method for assimilating forest fire spread data based on UAV video according to another embodiment of the present application;
  • FIG. 4 is a flowchart of obtaining the predicted fire line position information at time K according to an embodiment of the present application;
  • FIG. 5 is a schematic flowchart of obtaining the fire line observation position information at time K according to another embodiment of the present application;
  • FIG. 6 is a schematic diagram of obtaining three-dimensional coordinate information of a target with multiple UAVs according to an embodiment of the present application;
  • FIG. 7 is a schematic flowchart of a method for assimilating forest fire spread data based on UAV video according to an embodiment of the present application;
  • FIG. 8 is a flowchart of obtaining the predicted fire line position information according to an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of an apparatus for assimilating forest fire spread data based on UAV video according to an embodiment of the present application.
  • FIG. 1 is a flowchart of a method for assimilating forest fire spread data based on drone video according to an embodiment of the present application.
  • The forest fire spread data assimilation method in the embodiments of the present application can be applied to the UAV-video-based forest fire spread data assimilation apparatus of the present application, and the apparatus can be implemented in software and/or hardware.
  • The apparatus can be integrated into electronic equipment.
  • The forest fire spread data assimilation method includes the following steps:
  • Step 101: obtain the meteorological data and basic geographic information data of the place where the fire occurred, and obtain the fire line state analysis value at time K-1 for that place.
  • The meteorological data may include, but is not limited to, any one or more of wind speed, wind direction, air temperature, precipitation probability, precipitation amount, air pressure, air humidity, air oxygen content, and the like.
  • For example, the meteorological data may include wind speed and wind direction.
  • The basic geographic information data may include, but is not limited to, any one or more of the underlying surface type, forest moisture content, forest slope map, slope aspect, forest combustible materials, the physical and chemical properties of forest combustible materials, and the like.
  • The physical and chemical properties may include, but are not limited to, any one or more of density, ignition point, calorific value, flammability, and the like.
  • For example, the basic geographic information data may include the underlying surface type, forest moisture content, forest slope map, slope aspect, and forest combustible materials.
  • The fire line state analysis value at time K-1 for the place where the fire occurred can be obtained as follows: the meteorological data, the basic geographic information data, and the fire line state analysis value at time K-2 are input into the forest fire spread model to obtain the predicted fire line position information at time K-1; the thermal-imaging video of the fire field area is then obtained from the UAV, and the fire line observation position information at time K-1 is obtained from this video; and, according to the predicted fire line position information and the observation position information, it is determined whether the parameters of the forest fire spread model need to be adjusted.
  • Time K represents a certain time point while the forest fire is burning; time K-1 represents the time point one time step earlier; time K-2 represents the time point two time steps earlier; and so on.
  • When calculating the fire line state analysis value at time K-1, the forest fire spread model is used to predict the fire line position at time K-1 based on the fire line state analysis value at the previous time (i.e., time K-2), the meteorological data, and the basic geographic information data of the place where the fire occurred, obtaining the predicted fire line position at time K-1.
  • If, according to the fire line observation position information, it is determined that the parameters of the fire spread model need to be adjusted dynamically, the parameters are adjusted, and the fire line state analysis value at time K-1 is calculated from the recalculated predicted fire line position information at time K-1 and the observed fire line position information at time K-1.
  • Otherwise, the fire line state analysis value at time K-1 is calculated directly from the predicted fire line position information at time K-1 obtained in the first prediction and the fire line observation position information.
  • The initial fire line state analysis value for the place where the fire occurred may be obtained as follows: the correspondence between meteorology, basic geographic information, and fire line state analysis values can be obtained in advance through multiple simulation tests; the initial fire line state analysis value can then be obtained from this correspondence together with the meteorological data and basic geographic information data of the place where the fire occurred. That is, when a fire occurs somewhere, the initial fire line state analysis value can be predicted using the empirical values obtained from the simulation tests.
  • Step 102: input the meteorological data, the basic geographic information data, and the fire line state analysis value at time K-1 into the forest fire spread model, and obtain the predicted fire line position information at time K.
  • The forest fire spread model may be any model that simulates the spread of a forest fire from input information, including the Rothermel model, the Huygens wave model, a combination of the Rothermel and Huygens wave models, the McArthur model, and the like.
  • In this embodiment, the forest fire spread model includes the Rothermel model and the Huygens wave model.
  • The specific implementation of inputting the meteorological data, the basic geographic information data, and the fire line state analysis value at time K-1 into the forest fire spread model to obtain the predicted fire line position information at time K may be as follows: input the meteorological data and basic geographic information data into the Rothermel model to obtain the forest fire spread speed at time K-1, and input the forest fire spread speed at time K-1 and the fire line state analysis value at time K-1 into the Huygens wave model for fire line position prediction, obtaining the predicted fire line position information at time K.
  • A specific implementation of step 102 may be as follows: the forest fire spread speed R_0 of each fire point is first obtained through the Rothermel model.
  • Each fire point is regarded as a point on the wave front and serves as the next wave source (i.e., a secondary wave source); the wave continues to propagate, which yields the predicted fire line position at the next time step K: $X_k^f = [e_1, n_1, e_2, n_2, \ldots, e_m, n_m]^T$, where the subscript k denotes time, the superscript f denotes the fire line prediction (forecast) matrix, $(e_j, n_j)$ are the coordinates of point j on the fire line, and m is the number of marked points on the fire line perimeter.
  • The formulas of the forest fire spread model are expressed as follows:
  • Formula (1) represents the Rothermel model: $R_0 = \dfrac{I_R \, \xi \, (1 + \phi_{sw})}{\rho_b \, \varepsilon \, Q_{ig}}$, where $R_0$ is the fire spread rate of a certain fire point, $I_R$ is the reaction intensity, $\xi$ is the propagation rate (propagating flux ratio), $\rho_b$ is the density of combustibles, $\varepsilon$ is the effective heating coefficient, $Q_{ig}$ is the heat required to ignite a unit mass of combustibles, and $\phi_{sw}$ is the wind-speed and slope correction coefficient.
  • Formula (2) represents the Huygens wave model: $X_k^f = H(X_{k-1}^a, R_0)$, where $H$ denotes the Huygens model, the superscript a denotes the state analysis matrix of the ensemble of predicted and observed fire lines, and $X_{k-1}^a$ is the state analysis matrix of the model at the previous time step K-1.
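Formulas (1) and (2) can be sketched in code as follows. The Rothermel evaluation is a direct transcription; the Huygens step is a deliberately minimal version that advances each fire-line point outward along its local normal by $R_0 \cdot \Delta t$ (circular wavelets, whereas the full model uses wind-stretched elliptical wavelets):

```python
import numpy as np

def rothermel_r0(i_r, xi, rho_b, eps, q_ig, phi_sw):
    """Formula (1): head-fire spread rate of one fire point."""
    return i_r * xi * (1.0 + phi_sw) / (rho_b * eps * q_ig)

def huygens_step(fire_line, r0, dt):
    """Formula (2), minimal sketch: each fire-line point acts as a
    secondary wave source and is advanced outward along the local
    normal by r0 * dt.

    fire_line: (m, 2) array of (e_j, n_j) points in closed-polygon order.
    """
    fire_line = np.asarray(fire_line, float)
    # Tangent via central differences on the closed polygon.
    tangent = np.roll(fire_line, -1, axis=0) - np.roll(fire_line, 1, axis=0)
    normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)
    normal /= np.linalg.norm(normal, axis=1, keepdims=True)
    # Make normals point away from the polygon centroid (outward).
    out = fire_line - fire_line.mean(axis=0)
    sign = np.sign(np.sum(normal * out, axis=1, keepdims=True))
    return fire_line + sign * normal * r0 * dt
```

For a circular fire line the normals are exactly radial, so one step simply grows the radius by r0 * dt.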
  • Step 103: obtain the thermal-imaging video of the fire field area shot by the UAV, and obtain the fire line observation position information at time K from that video.
  • The thermal-imaging video shot by the UAV relies on, but is not limited to, infrared thermal-imaging technology: every object in nature emits infrared radiation as long as its temperature is above absolute zero (-273 °C), owing to the thermal motion of its molecules, and the wavelength of this radiation is inversely related to its temperature.
  • Infrared thermal-imaging technology converts the detected radiant energy of an object, through systematic processing, into a thermal image of the target (which may be grayscale and/or pseudo-color).
  • The UAV is equipped with such a thermal imager, and the pixel information of the thermal image reflects the temperature of the shooting area, from which the fire line observation position information at time K can be obtained.
  • The thermal imager on the UAV sends the captured thermal-imaging video to the forest fire spread data assimilation apparatus through a communication connection, so that the apparatus obtains the thermal-imaging video of the fire field area from the UAV based on that connection.
  • the manner used for the above-mentioned communication connection may be a mobile Internet manner, a wireless communication manner, or the like.
• the mobile Internet connection can be a 3G (3rd-generation), 4G (4th-generation) or 5G (5th-generation) mobile network, etc.
• the wireless communication can be WIFI (Wireless Fidelity), digital wireless data-transmission radio, UWB (Ultra-Wide Band) transmission, Zigbee transmission, etc.
• Step 104: according to the fire line predicted position information and the fire line observation position information at time K, determine whether the parameters of the forest fire spread model need to be adjusted.
  • the embodiment of the present application may implement forest fire spread data assimilation based on a multi-convergence ensemble Kalman filtering method.
• the multi-convergent ensemble Kalman filter method first selects the state-analysis fire line position at time K-1 and the fire spread speed V_{K-1} as the state parameters to be corrected.
• the Rothermel-Huygens model takes the fire line state analysis value at time K-1 as input and predicts the fire line position at time K; the fire line observation position at time K is obtained from the UAV. The deviation between the fire line predicted position information and the fire line observation position information at time K is then calculated, and whether to adjust the parameters of the forest fire spread model is decided according to the convergence of that deviation.
  • the specific implementation process of judging whether to adjust the parameters of the forest fire spread model according to the predicted position information of the fire line and the observed position information of the fire line at time K may include:
• Step 201: calculate the deviation between the fire line predicted position information and the fire line observation position information at time K.
  • the deviation is defined as follows:
• the above formula (3) can be used to calculate the deviation between the fire line predicted position information at time K and the fire line observation position information at time K.
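One plausible concrete reading of the deviation in formula (3) is the mean Euclidean distance between corresponding predicted and observed fire line points; the exact form of formula (3) is not reproduced in the text, so this sketch is an assumption.

```python
import numpy as np

def fire_line_deviation(predicted, observed):
    """Mean Euclidean distance between corresponding predicted and
    observed fire-line points (an assumed reading of formula (3)).

    predicted, observed: sequences of (x, y) points, same length
    and point-to-point correspondence.
    """
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.mean(np.linalg.norm(predicted - observed, axis=1)))
```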
  • Step 202 judging whether the deviation converges within the target range.
  • Step 203 if the deviation does not converge within the target range, determine whether the number of iterations for the forest fire spread model is less than the maximum number of iterations.
• a variable may be set to record the number of iterations of the forest fire spread model. The maximum number of iterations may be a manually set constant recorded in the system in advance, or a recommended value that is dynamically adjusted in actual operation according to experience or field conditions. Comparing the recorded number of iterations with the set maximum gives the relative relationship between the two.
• N_iteration is the maximum number of iterations, and h is the current number of iterations of the forest fire spread model. This judgment is made after the deviation between the fire line predicted position information at time K and the fire line observation position information at time K has failed to converge within the target range.
  • Step 204 if the number of iterations of the forest fire spread model is less than the maximum number of iterations, it is determined that the parameters of the forest fire spread model need to be adjusted.
• Step 205: if the deviation converges within the target range, and/or the number of iterations of the forest fire spread model is greater than or equal to the maximum number of iterations, stop adjusting the parameters of the forest fire spread model. It can be seen that, through steps 201 to 205, under unsteady meteorological conditions in the forest fire area, the multi-convergent ensemble Kalman filter data assimilation method can correct fire spread parameters such as the fire line position in real time; dynamically iterating the forest fire spread speed effectively improves the accuracy of the forest fire spread model.
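The control flow of steps 201-205 can be sketched as a small loop. The `predict`, `observe_deviation` and `adjust` callables are placeholders standing in for the spread model, the deviation of formula (3) and the parameter correction, respectively; they are not part of the application.

```python
def assimilate_step(predict, observe_deviation, adjust, target_eps, n_iteration_max):
    """Iterate model adjustment until the deviation converges within
    the target range or the iteration cap is reached (steps 201-205).

    predict()            -> current fire-line prediction
    observe_deviation(p) -> deviation of prediction p from observation
    adjust(d)            -> correct model parameters given deviation d
    """
    h = 0                               # current iteration count
    position = predict()
    dev = observe_deviation(position)
    while dev > target_eps and h < n_iteration_max:
        adjust(dev)                     # step 204: adjust model parameters
        position = predict()            # recompute the fire-line prediction
        dev = observe_deviation(position)
        h += 1
    return position, dev, h             # step 205: stop condition reached
```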
• Step 105: if the parameters of the forest fire spread model need to be adjusted, adjust the model parameters according to the fire line predicted position information and the fire line observation position information at time K, recalculate the fire line predicted position information at time K with the adjusted model, and calculate the fire line state analysis value at time K from the recalculated fire line predicted position information and the fire line observation position information.
• the specific implementation process of calculating the fire line state analysis value at time K from the recalculated fire line predicted position information and the fire line observation position information may be as follows: a least-squares fit is performed between the recalculated fire line predicted position information at time K and the fire line observation position information, yielding the fire line state analysis value at time K, i.e. the state analysis matrix (the state-analysis fire line has minimal error with respect to the real fire line position). The steps for calculating the fire line state analysis value are as follows:
• N is the number of ensemble members in the state variable set, and 1_N is an N×N matrix whose elements all equal 1/N; the mean vector is formed from the column means of the elements of the prediction matrix.
• the observation vector y^o can be obtained, and perturbations are added to it to generate an observation matrix containing N observation vectors.
  • the process of adding disturbance is as follows:
• the perturbed observation vectors are assembled to form the observation matrix.
• Y^o ∈ R^{m×N}, that is, the observation matrix Y^o has m rows and N columns
  • the added perturbation can be stored in a matrix:
  • the ensemble observation error covariance matrix can be expressed as:
  • H is the observation operator, which maps X from the state space to the observation space.
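The perturbed-observation ensemble analysis described above (observation vector perturbed into an m×N matrix Y^o, ensemble observation error covariance, observation operator H) can be sketched compactly. This is a generic stochastic EnKF analysis step under simplifying assumptions — a linear H, Gaussian perturbations, and a diagonal observation error covariance — not the application's exact formulas.

```python
import numpy as np

def enkf_analysis(Xf, y_obs, H, obs_std, rng=None):
    """Perturbed-observation ensemble Kalman analysis step.

    Xf      : (n, N) forecast ensemble (N members, state dimension n)
    y_obs   : (m,) observation vector
    H       : (m, n) linear observation operator (state -> observation space)
    obs_std : observation error standard deviation (assumed uncorrelated)
    """
    rng = np.random.default_rng(rng)
    n, N = Xf.shape
    m = y_obs.shape[0]
    # Perturb the observation vector to build the m x N observation matrix Y^o
    Yo = y_obs[:, None] + rng.normal(0.0, obs_std, size=(m, N))
    # Ensemble anomalies about the ensemble mean
    A = Xf - Xf.mean(axis=1, keepdims=True)
    HA = H @ A
    # Sample forecast covariances in observation space
    Pf_HT = A @ HA.T / (N - 1)
    S = HA @ HA.T / (N - 1) + (obs_std ** 2) * np.eye(m)
    K = Pf_HT @ np.linalg.inv(S)       # Kalman gain
    return Xf + K @ (Yo - H @ Xf)      # analysis ensemble X^a
```

With a tight observation error the analysis ensemble mean is pulled close to the observation, which is the behavior the assimilation loop in the text relies on.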
  • the embodiment of the present application proposes a multi-convergent ensemble Kalman filter data assimilation method for the unsteady meteorological conditions in the forest fire area.
• correcting the parameters of the forest fire spread model in real time from the fire line position and dynamically iterating the forest fire spread speed effectively improve the accuracy of the forest fire spread model.
• in the UAV-video-based forest fire spread data assimilation method of this embodiment, the meteorological data of the fire occurrence place, the basic geographic information data, and the fire line state analysis value at time K-1 are fed into the forest fire spread model to obtain the fire line predicted position information at time K; this prediction is compared with the fire line observation position information obtained by the UAV to judge whether the parameters of the forest fire spread model need to be adjusted.
  • the model parameters are adjusted according to the predicted position information of the fire line at time K and the observed position information, and the predicted position information of the fire line at time K is recalculated according to the adjusted forest fire spread model, and the analysis value of the fire line state at time K is recalculated.
  • This method of forest fire spread data assimilation based on UAV video uses UAV as the front-end monitoring equipment, extracts the fire line in real time, and obtains the position information of the fire line.
• an assimilated forest fire spread model with dynamically adjustable parameters is proposed, which effectively solves the problem that the fire line cannot be obtained in real time and the parameters of the forest fire spread model cannot be corrected in time, so that the accuracy of the prediction results cannot be guaranteed, and thereby improves the prediction accuracy of the model.
  • the UAV has the advantages of high mobility and low cost.
• the UAV can send back live video in real time, so that the observation fire line can be updated at intervals of minutes or even seconds, thus effectively avoiding the drawback of satellite remote sensing data that temporal resolution and spatial resolution constrain each other. This can greatly improve the timeliness and accuracy of forest fire spread model prediction, thereby improving the prediction accuracy of the fire area and providing objective fire information for forest firefighting.
  • FIG. 3 is a flowchart of a method for assimilating forest fire spread data based on drone video according to another embodiment of the present application.
  • the forest fire spread data assimilation method includes:
  • Step 301 Obtain meteorological data and basic geographic information data of the fire occurrence place, and obtain the fire line state analysis value at the time K-1 of the fire occurrence place.
• Step 302: input the meteorological data, basic geographic information data and the fire line state analysis value at time K-1 into the forest fire spread model to obtain the fire line predicted position information at time K, and obtain the thermal imaging video of the fire field area shot by the drone.
  • Step 303 Obtain the thermal image of the fire field area at time K from the thermal imaging video of the fire field area, and determine the temperature information corresponding to each pixel in the thermal image of the fire field area.
• Step 304: according to the temperature information corresponding to each pixel of the thermal image and the temperature threshold, extract the fire field range from the thermal image of the fire field area, and perform edge extraction on the fire field range to obtain the pixel positions of the fire line.
• Step 305: convert the pixel positions of the fire line into fire line GPS coordinates to obtain the fire line observation position information at time K.
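Steps 303-305 (per-pixel temperature, threshold to get the fire field range, edge extraction to get the fire line pixels) can be sketched with plain array operations; the 4-neighbor edge rule here is one simple choice of edge extractor, not the application's specified one.

```python
import numpy as np

def fire_line_pixels(temp, threshold):
    """Extract fire-line pixel positions from a per-pixel temperature
    grid: threshold to obtain the fire field range (step 304), then
    keep only the boundary pixels of that range as the fire line.
    """
    mask = temp >= threshold
    # Pad with False so fire pixels on the image border count as edge pixels
    p = np.pad(mask, 1, constant_values=False)
    # A pixel is interior if all four 4-neighbors are also fire pixels
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    edge = mask & ~interior
    return np.argwhere(edge)   # (row, col) pixel positions of the fire line
```

For a 3×3 hot patch this returns the eight boundary pixels and discards the single interior pixel, which is exactly the fire-line-versus-fire-field distinction the steps describe.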
• the process of converting the fire line pixel information into GPS information is the inverse of the camera imaging process; as shown in FIG. 4, camera imaging is the projection transformation from the 3D scene to the 2D image plane captured by the drone.
  • the essence of camera imaging is the process of central perspective projection in photographic geometry.
  • the points on the three-dimensional ground determine the observation results through the viewing cone space and viewpoint orientation specified by the projection matrix, and the two-dimensional image of the camera picture and the three-dimensional geographic information form a corresponding relationship through the viewing cone and viewpoint orientation.
  • Converting two-dimensional picture information into three-dimensional coordinate information is the inverse process of the above process.
• the first example is a fire line positioning technique that does not use DEM information.
• the same fire field area can be photographed by at least one drone at multiple observation points; multiple pixel positions of the fire line are obtained from the thermal imaging videos captured at these observation points, and the fire line observation position information at time K is then calculated from these pixel positions.
  • this example includes the following steps:
• Step 501: perform coordinate transformation on the multiple pixel positions of the fire line to obtain multiple coordinate values of the fire line in the UAV geographic coordinate system.
• Step 502: calculate multiple observation elevation angle matrices and multiple azimuth angle matrices of the fire line from these coordinate values.
• Step 503: perform Kalman filter estimation of the fire line position from the multiple observation elevation angle matrices and azimuth angle matrices to obtain the estimated coordinates of the fire line.
• Step 504: convert the estimated coordinates of the fire line into GPS coordinates to obtain the fire line observation position information at time K.
• a specific implementation of steps 501-504 without DEM information may be as follows:
• the UAV locates a ground target mainly as follows: data are collected and processed by the airborne sensors to obtain the relative distance and angle between the UAV and the target, and the target position coordinates are calculated from the UAV's own position and attitude data, as shown in FIG. 6. The UAV observes the same target from multiple positions, and the accurate three-dimensional coordinates of the target are obtained through the vision-based multi-point angle observation fire line positioning method. For this multi-point angle target positioning, the pixel information of the fire line is used, according to the imaging principle, to calculate the elevation and azimuth matrices of the fire line relative to the UAV, and the system state equation and observation equation are established; the result is then converted into the fire line position coordinates in the geodetic coordinate system. Observing at time K yields the actually observed fire line position at time K.
• the main steps to obtain the observation fire line are as follows: the pixel information of the fire line is converted through coordinate transformation into values in the UAV geographic coordinate system, and the elevation and azimuth angle matrices of the fire line points relative to the UAV geographic coordinates are calculated.
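The elevation and azimuth angles of a fire line point relative to the UAV can be sketched as follows, assuming a local east-north-up frame with both positions already expressed in it; the frame convention and function name are illustrative, not from the application.

```python
import math

def elevation_azimuth(uav_pos, target_pos):
    """Elevation and azimuth of a fire-line point as seen from the UAV.

    uav_pos, target_pos: (x, y, z) in the same local geographic frame,
    assumed here to be x east, y north, z up.
    Returns (elevation, azimuth) in radians; azimuth is measured
    clockwise from north, elevation is negative for targets below the UAV.
    """
    dx = target_pos[0] - uav_pos[0]
    dy = target_pos[1] - uav_pos[1]
    dz = target_pos[2] - uav_pos[2]
    horizontal = math.hypot(dx, dy)
    elevation = math.atan2(dz, horizontal)
    azimuth = math.atan2(dx, dy) % (2 * math.pi)
    return elevation, azimuth
```

Stacking these angle pairs over the fire line points and over the observation points gives the elevation and azimuth matrices that steps 502-503 feed into the Kalman filter.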
• the second example is a fire line positioning technique combined with DEM information; as shown in FIG. 7, this example includes the following steps:
  • Step 701 Obtain the DEM geographic information of the place where the fire occurred.
  • Step 702 Obtain GPS information, attitude information and built-in parameters of the UAV.
• Step 703: generate a virtual perspective of the drone's position according to the DEM geographic information and the drone's GPS information, attitude information and built-in parameters.
  • Step 704 simulate the actual UAV imaging process according to the virtual viewing angle of the UAV point to obtain a simulated image.
• Step 705: determine the pixel coordinates of the fire line in the simulated image according to the pixel position of the fire line.
• Step 706: convert the pixel coordinates of the fire line in the simulated image into GPS coordinates to obtain the fire line observation position information at time K.
  • a specific implementation process of the steps 701-706 in combination with DEM information may be as follows:
• the forest DEM geographic information is processed by the TS-GIS (TypeScript-Geographic Information System) engine to form a virtual perspective of the drone position and generate a projection matrix. Using the projection matrix, the spatial coordinates corresponding to the fire line pixels in the thermal image can be obtained. Observing at time K yields the actually observed fire line position at time K.
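Inverting the projection matrix to go from a fire line pixel back to a ground coordinate can be sketched with a pinhole model. This sketch replaces the full DEM intersection with a flat horizontal plane for simplicity, and the intrinsics/rotation conventions are assumptions, not the TS-GIS engine's actual API.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, C, z_ground=0.0):
    """Back-project a fire-line pixel to a ground coordinate.

    Inverts the pinhole projection and intersects the viewing ray with
    the horizontal plane z = z_ground (a flat-terrain stand-in for the
    DEM intersection of step 706).

    K : 3x3 camera intrinsics;  R : world-to-camera rotation;
    C : 3-vector camera center in world coordinates (z up).
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    ray_world = R.T @ ray_cam                           # rotate into world frame
    t = (z_ground - C[2]) / ray_world[2]                # ray-plane intersection
    return C + t * ray_world                            # 3D ground point
```

For a nadir-looking camera 100 m above flat ground, the principal-point pixel maps straight down to the point beneath the drone, which is the sanity check one would expect of the inverse projection.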
• the fire line positioning process is as follows:
  • the TS-GIS engine can display 3D DEM information.
  • Step 306 according to the predicted position information of the fire line and the observed position information of the fire line at time K, determine whether parameter adjustment of the forest fire spread model is required.
• Step 307: if the parameters of the forest fire spread model need to be adjusted, adjust the model parameters according to the fire line predicted position information and the fire line observation position information at time K, recalculate the fire line predicted position information at time K with the adjusted model, and calculate the fire line state analysis value at time K from the recalculated fire line predicted position information and the fire line observation position information at time K.
• the forest fire spread speed parameter in the forest fire spread model may be adjusted according to the deviation between the fire line predicted position information and the fire line observation position information, and the fire line predicted position information is then recalculated by the adjusted forest fire spread model.
• the specific implementation process of adjusting the model parameters of the forest fire spread model according to the fire line predicted position information and the fire line observation position information at time K, and recalculating the fire line predicted position information at time K with the adjusted model, may include:
• Step 801: calculate the deviation between the fire line predicted position information and the fire line observation position information at time K.
  • Step 802 Adjust the forest fire spread rate at time K-1 according to the preset forest fire spread rate update coefficient matrix and the deviation.
• adjusting the forest fire spread rate at time K-1 according to the preset forest fire spread rate update coefficient matrix and the deviation can be illustrated as follows: multiply the forest fire spread rate update coefficient matrix by the deviation, and add the resulting product to the forest fire spread rate at time K-1 to obtain the adjusted rate.
  • Step 803 Input the adjusted forest fire spread speed at time K-1 and the analysis value of the fire line state at time K-1 into the Huygens fluctuation model, and obtain the predicted fire line position information at time K again.
  • the specific implementation process of the steps 801-803 may be as follows:
  • the forest fire spread rate R 0,k-1 at time K-1 calculated by the Rothermel model is the forest fire spread rate in the forest fire spread model at that time.
  • the thermal airflow and convection of the fire field will affect the wind direction and wind speed of the fire field.
• the wind speed and wind direction of the fire field are not steady, so the fire spread speed is not steady; therefore, the fire spread speed in the forest fire spread model also needs to be dynamically adjusted, taking the above non-steady-state factors into account when updating the forest fire spread speed.
  • C is the update coefficient matrix of forest fire spread speed
  • Errh is the deviation between the predicted value and the observed value obtained from formula (3).
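The speed update described in steps 801-803 — multiply the update coefficient matrix C by the deviation Err_h and add the product to the rate at time K-1 — can be written directly; the shapes chosen here (rate vector, column coefficient matrix, scalar deviation) are one plausible layout, not specified in the text.

```python
import numpy as np

def update_spread_rate(V_prev, C, err_h):
    """Non-steady-state correction of the forest fire spread rate:
    V_k = V_{k-1} + C * Err_h, where C is the preset update coefficient
    matrix and Err_h the deviation from formula (3).
    """
    return np.asarray(V_prev) + np.asarray(C) @ np.atleast_1d(err_h)
```

The adjusted rate is then fed, together with the fire line state analysis value at time K-1, back into the Huygens fluctuation model to re-predict the fire line at time K (step 803).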
• the obtained meteorological data, basic geographic information data and fire line state analysis value at time K-1 are input into the forest fire spread model to obtain the fire line predicted position information at time K. The thermal imaging video of the fire field area is obtained from the drone, the thermal image of the fire field area at time K is extracted, and the temperature information corresponding to each pixel is determined; according to the temperature information and the temperature threshold, the fire field range is extracted and edge extraction is performed on it to obtain the pixel positions of the fire line, which are converted into GPS coordinates to obtain the fire line observation position information at time K. According to the fire line predicted position information and observation position information at time K, it is determined whether the parameters of the forest fire spread model need to be adjusted.
• the UAV-video-based forest fire spread data assimilation method of this embodiment uses the drone as the front-end monitoring device, extracts the fire line in real time to obtain its position information, and proposes an assimilated forest fire spread model with dynamically adjustable parameters. This model effectively solves the problems that a simulation model cannot be dynamically adjusted to changes in the simulated environment, that the forest fire model is unsuitable for non-steady state, and that environmental changes cannot be transmitted in real time, thereby improving the prediction accuracy of the model.
• UAVs have the advantages of high maneuverability and low cost, and can send back live video in real time, so that the observation fire line can be updated at intervals of minutes or even seconds.
  • the data assimilation method adopted by the model continuously assimilates the forest fire spread model, improves the prediction accuracy of the burned area, and provides objective fire information for forest fire fighting work.
• this embodiment provides a method for obtaining the fire line observation position from the thermal imaging video of the fire field area. This method obtains the fire line observation position information and at the same time displays it intuitively, providing direct guidance and strong support for forest fire extinguishing work.
  • the present application also proposes a data assimilation device for forest fire spread based on drone video.
• FIG. 9 is a schematic structural diagram of an apparatus for assimilating forest fire spread data based on drone video according to an embodiment of the present application. As shown in FIG. 9, the apparatus includes:
  • the first acquisition module 901 is used to acquire meteorological data and basic geographic information data of the fire occurrence place;
• the second acquisition module 902 is used to obtain the fire line state analysis value at time K-1 of the fire occurrence place;
  • the third acquisition module 903 is used to input the meteorological data, basic geographic information data and the fire line state analysis value at the time of K-1 into the forest fire spread model to obtain the predicted fire line position information at the time K of the fire occurrence place;
• a fourth acquisition module 904, configured to acquire the thermal imaging video of the fire field area shot by the drone;
  • the fifth acquisition module 905 is configured to acquire the fire line observation position information at the K moment according to the thermal imaging video of the fire field area;
  • the judgment module 906 is used for judging whether it is necessary to adjust the parameters of the forest fire spread model according to the predicted position information of the line of fire at time K and the observed position information of the line of fire;
• the adjustment module 907 is configured to, when the parameters of the forest fire spread model need to be adjusted, adjust the model parameters according to the fire line predicted position information and the fire line observation position information at time K, and recalculate the fire line predicted position information at time K according to the adjusted forest fire spread model;
• the data assimilation module 908 is configured to calculate the fire line state analysis value at time K according to the recalculated fire line predicted position information and the fire line observation position information at time K.
  • the forest fire spread model includes the Rothermel model and the Huygens fluctuation model; in the embodiments of the present application, the third acquisition module 903 is specifically used to: obtain the meteorological data and basic geographic information data of the fire occurrence place Input to the Rothermel model to obtain the forest fire spread speed at K-1 time; input the forest fire spread speed at K-1 time and the fire line state analysis value at K-1 time into the Huygens fluctuation model to predict the fire line position, and obtain The predicted location information of the live line at time K.
• the fifth acquisition module 905 is specifically configured to: acquire the thermal image of the fire field area at time K from the thermal imaging video; determine the temperature information corresponding to each pixel in the thermal image; extract the fire field range from the thermal image according to the per-pixel temperature information and the temperature threshold; perform edge extraction on the fire field range to obtain the pixel positions of the fire line; and convert the pixel positions into fire line GPS coordinates to obtain the fire line observation position information at time K.
• when the number of unmanned aerial vehicles is at least one, the fifth acquisition module 905 is specifically configured to obtain the thermal imaging videos of the fire field area captured by the at least one UAV at multiple observation points, and to obtain multiple pixel positions of the fire line.
• the specific process by which the fifth acquisition module 905 converts the pixel positions of the fire line into fire line GPS coordinates to obtain the fire line observation position information at time K may be as follows: perform coordinate transformation on the multiple pixel positions of the fire line to obtain multiple coordinate values in the UAV geographic coordinate system; calculate multiple observation elevation angle matrices and azimuth angle matrices of the fire line; estimate the fire line position by Kalman filtering from these matrices to obtain the estimated coordinates of the fire line; and convert the estimated coordinates into GPS coordinates to obtain the fire line observation position information at time K.
• alternatively, the fifth acquisition module 905 converts the pixel positions of the fire line into fire line GPS coordinates to obtain the fire line observation position information at time K as follows: obtain the DEM geographic information of the fire place; obtain the GPS information, attitude information and built-in parameters of the UAV; generate a virtual perspective of the UAV position from the DEM geographic information and the UAV's GPS information, attitude information and built-in parameters; simulate the actual UAV imaging process from the virtual perspective to obtain a simulated image; determine the pixel coordinates of the fire line in the simulated image from the pixel positions of the fire line; and convert these pixel coordinates into GPS coordinates to obtain the fire line observation position information at time K.
• the judging module 906 is specifically configured to: calculate the deviation between the fire line predicted position information and the fire line observation position information at time K; judge whether the deviation converges within the target range; if it does not, judge whether the number of iterations of the forest fire spread model is less than the maximum number of iterations; if it is, determine that the parameters of the forest fire spread model need to be adjusted; if the deviation converges within the target range, and/or the number of iterations is greater than or equal to the maximum number of iterations, stop adjusting the parameters of the forest fire spread model.
• the specific process by which the adjustment module 907 adjusts the model parameters according to the fire line predicted position information and the fire line observation position information at time K and recalculates the fire line predicted position information at time K with the adjusted model may be as follows: calculate the deviation between the fire line predicted position information and the fire line observation position information at time K; adjust the forest fire spread speed at time K-1 according to the preset update coefficient matrix and the deviation; and input the adjusted forest fire spread speed at time K-1 and the fire line state analysis value at time K-1 into the Huygens fluctuation model to obtain the fire line predicted position information at time K again.
• the specific process by which the adjustment module 907 adjusts the forest fire spread rate at time K-1 according to the preset forest fire spread rate update coefficient matrix and the deviation may be as follows: multiply the update coefficient matrix by the deviation, and add the resulting product to the forest fire spread rate at time K-1 to obtain the adjusted rate.
• the specific process by which the data assimilation module 908 calculates the fire line state analysis value at time K may be as follows: based on the ensemble Kalman filter algorithm, a least-squares fit is performed between the recalculated fire line predicted position information at time K and the fire line observation position information to obtain the fire line state analysis value at time K.
• the UAV-video-based forest fire spread data assimilation apparatus of this embodiment obtains the meteorological data and basic geographic information data of the fire occurrence place; obtains the fire line state analysis value at time K-1; inputs the meteorological data, basic geographic information data and the fire line state analysis value at time K-1 into the forest fire spread model to obtain the fire line predicted position information at time K; and obtains the thermal imaging video of the fire field area shot by the drone.
  • the model parameters of the forest fire spread model are adjusted according to the predicted position information of the fire line and the observation position information of the fire line at time K, and the predicted position information of the fire line at time K is recalculated according to the forest fire spread model adjusted by the model parameters;
  • The fire line state analysis value at time K is calculated from the recalculated predicted fire line position information at time K and the observed fire line position information.
  • This UAV-video-based forest fire spread data assimilation device uses a UAV as the front-end monitoring device, extracts the fire line in real time, and obtains the fire line position information.
  • A forest fire spread assimilation model with dynamic parameter adjustment is proposed, which effectively solves the problems that a simulation model cannot be dynamically adjusted to changes in the simulated environment, that the forest fire model is not suitable for non-steady states, and that environmental changes cannot be transmitted in real time, thereby improving the prediction accuracy of the model.
  • UAVs have the advantages of high maneuverability and low cost, and can send back live video in real time, so that the observed fire line can be identified and updated at intervals of minutes or even seconds.
  • The data assimilation method adopted by the model continuously assimilates observations into the forest fire spread model, improves the prediction accuracy of the burned area, and provides objective fire field information for forest fire fighting work.
  • the present application further provides an electronic device and a readable storage medium.
  • FIG. 10 is a block diagram of an electronic device for the UAV-video-based forest fire spread data assimilation method according to an embodiment of the present application.
  • Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions are by way of example only, and are not intended to limit implementations of the application described and/or claimed herein.
  • the electronic device includes: one or more processors 1001, a memory 1002, and interfaces for connecting various components, including a high-speed interface and a low-speed interface.
  • the various components are interconnected using different buses and may be mounted on a common motherboard or otherwise as desired.
  • the processor may process instructions executed within the electronic device, including instructions stored in or on memory to display graphical information of the GUI on an external input/output device, such as a display device coupled to the interface.
  • If desired, multiple processors and/or multiple buses may be used together with multiple memories.
  • multiple electronic devices may be connected, each providing some of the necessary operations (eg, as a server array, a group of blade servers, or a multiprocessor system).
  • a processor 1001 is used as an example.
  • the memory 1002 is the non-transitory computer-readable storage medium provided by the present application.
  • the memory stores instructions executable by at least one processor, so that the at least one processor executes the method for data assimilation of forest fire spread based on drone video provided by the present application.
  • the non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the method for assimilating forest fire spread data based on drone video provided by the present application.
  • the memory 1002 can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the method for assimilating forest fire spread data based on drone video in the embodiments of the present application
  • Corresponding program instructions/modules, for example, the first acquisition module 901, the second acquisition module 902, the third acquisition module 903, the fourth acquisition module 904, the fifth acquisition module 905, the judgment module 906, the adjustment module 907, and the data assimilation module 908 shown in FIG. 9.
  • The processor 1001 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions and modules stored in the memory 1002, that is, implements the UAV-video-based forest fire spread data assimilation method of the above method embodiments.
  • The memory 1002 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the electronic device, etc. Additionally, the memory 1002 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 1002 may optionally include memory located remotely from the processor 1001, which may be connected via a network to the electronic device for UAV-video-based forest fire spread data assimilation. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • The electronic device for the UAV-video-based forest fire spread data assimilation method may further include: an input device 1003 and an output device 1004.
  • The processor 1001, the memory 1002, the input device 1003 and the output device 1004 may be connected by a bus or in other ways; in FIG. 10, connection by a bus is taken as an example.
  • The input device 1003 can receive input numeric or character information, and generate key signal inputs related to user settings and function control of the electronic device for UAV-video-based forest fire spread data assimilation, such as a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, a joystick, and other input devices.
  • Output devices 1004 may include display devices, auxiliary lighting devices (eg, LEDs), haptic feedback devices (eg, vibration motors), and the like.
  • the display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
  • Various implementations of the systems and techniques described herein can be implemented in digital electronic circuitry, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that receives data and instructions from a storage system, at least one input device, and at least one output device, and transmits data and instructions to the storage system, the at least one input device, and the at least one output device.
  • The terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (for example, magnetic disks, optical disks, memories, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including machine-readable media that receive machine instructions as machine-readable signals.
  • The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
  • To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and pointing device (e.g., a mouse or trackball) through which the user can provide input to the computer.
  • Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, voice input, or tactile input).
  • The systems and techniques described herein may be implemented on a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user's computer having a graphical user interface or web browser through which the user may interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
  • the components of the system may be interconnected by any form or medium of digital data communication (eg, a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
  • a computer system can include clients and servers.
  • Clients and servers are generally remote from each other and usually interact through a communication network.
  • the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • The server can be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system intended to solve the problems existing in traditional physical host and VPS (Virtual Private Server) services.


Abstract

The present application discloses an unmanned aerial vehicle video-based forest fire spreading data assimilation method and apparatus, an electronic device, and a computer-readable storage medium. Said method comprises: acquiring meteorological data, basic geographic information data and a fire line state analysis value at a (K-1)th moment of a fire site; inputting the described information into a forest fire spreading model, and acquiring fire line prediction position information at a Kth moment; acquiring a thermal imaging video of a fire field area photographed by an unmanned aerial vehicle, and acquiring fire line observation position information at the Kth moment; determining, according to the fire line prediction position and the fire line observation position at the Kth moment, whether parameters of the model need to be adjusted; and if the parameters of the model need to be adjusted, adjusting the model parameters according to the fire line prediction position and the fire line observation position at the Kth moment, and recalculating a fire line prediction position at the Kth moment to obtain a fire line state analysis value at the Kth moment. The present application has a low cost, can dynamically iterate the forest fire spreading model, acquires an accurate fire line prediction position, and saves valuable time for forest fire rescue.

Description

Forest fire spread data assimilation method and device based on UAV video
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 202011367733.0, entitled "Forest fire spread data assimilation method and device based on UAV video", filed by Tsinghua University and Beijing Chen'an Technology Co., Ltd. on November 27, 2020.
TECHNICAL FIELD
The present application relates to the field of data processing, and in particular to a UAV-video-based forest fire spread data assimilation method, device, electronic device and storage medium, belonging to the application field of data assimilation.
BACKGROUND
Forest fires are difficult to control; they destroy forest ecosystems, cause environmental pollution, and threaten human life and property. When a fire breaks out, emergency management workers urgently need to obtain front-line fire information and accurate forest fire spread predictions within a short time, so as to gain valuable time for emergency rescue and firefighting.
However, most existing fire line position acquisition is based on satellite remote sensing data. Limited by satellite orbits, the temporal and spatial resolutions of satellite-based forest fire monitoring constrain each other, so resolution and timeliness cannot meet the requirements at the same time. Remote sensing satellites for forest fire monitoring mainly fall into two types: geostationary satellites and polar-orbiting satellites. Geostationary satellites have high orbital altitudes, wide coverage and high observation frequency, but relatively low spatial resolution; the fire fields they can detect are generally on the kilometer scale, and small fire fields are difficult to monitor effectively. Compared with geostationary satellites, polar-orbiting satellites collect remote sensing images with high spatial resolution but low observation frequency; even when multiple series of various polar-orbiting satellites observe the earth simultaneously, only about 10 observations per day can be achieved at any given location, so full-time, full-area coverage is impossible. When the fire area is blocked by terrain, satellite remote sensing sensors cannot monitor the blocked area, and satellite remote sensing has poor maneuverability and insufficient flexibility. Remote sensing technology therefore cannot fully meet the requirements of real-time fire field monitoring in terms of resolution, timeliness and flexibility.
Therefore, how to obtain fire field information in real time, quickly and with high accuracy has become an urgent problem to be solved.
SUMMARY OF THE INVENTION
The present application aims to solve, at least to a certain extent, one of the technical problems in the related art.
To this end, the present application proposes a UAV-video-based forest fire spread data assimilation method, device, electronic device, and computer-readable storage medium, which can improve the accuracy of a forest fire spread model; using this model, fire field information can be predicted quickly, in real time and with high accuracy, thereby providing objective fire field information for forest fire fighting.
According to a first aspect of the present application, a UAV-video-based forest fire spread data assimilation method is provided, including: acquiring meteorological data and basic geographic information data of the fire occurrence place, and acquiring the fire line state analysis value of the fire occurrence place at time K-1; inputting the meteorological data of the fire occurrence place, the basic geographic information data and the fire line state analysis value at time K-1 into the forest fire spread model to obtain the predicted fire line position information at time K; acquiring a thermal imaging video of the fire field area captured by a UAV, and obtaining the observed fire line position information at time K from the thermal imaging video; judging, according to the predicted fire line position information and the observed fire line position information at time K, whether the parameters of the forest fire spread model need to be adjusted; and, if so, adjusting the model parameters of the forest fire spread model according to the predicted fire line position information and the observed fire line position information at time K, recalculating the predicted fire line position information at time K according to the forest fire spread model with adjusted parameters, and calculating the fire line state analysis value at time K according to the recalculated predicted fire line position information at time K and the observed fire line position information.
Optionally, the forest fire spread model includes a Rothermel model and a Huygens wave model. Inputting the meteorological data of the fire occurrence place, the basic geographic information data and the fire line state analysis value at time K-1 into the forest fire spread model to obtain the predicted fire line position information at time K includes: inputting the meteorological data of the fire occurrence place and the basic geographic information data into the Rothermel model to obtain the forest fire spread rate at time K-1; and inputting the forest fire spread rate at time K-1 and the fire line state analysis value at time K-1 into the Huygens wave model to predict the fire line position, obtaining the predicted fire line position information at time K.
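As a rough illustration of the Huygens-type prediction step (each point of the K-1 fire line advances at its local Rothermel-derived spread rate), the sketch below moves every vertex of a polygonal fire line along its outward normal. A full Huygens implementation would grow wind- and slope-skewed ellipses at each vertex and take their envelope; all names and values here are illustrative assumptions, not the patent's code.

```python
import numpy as np

def huygens_step(front, spread_rate, dt):
    """One simplified Huygens-style propagation step.
    front: (n, 2) ordered vertices of the closed fire line at time K-1,
           listed counter-clockwise
    spread_rate: (n,) local spread speed (e.g. from a Rothermel-type model)
    dt: time step in seconds
    Each vertex is moved outward along the local outward normal by rate*dt."""
    nxt = np.roll(front, -1, axis=0)
    prv = np.roll(front, 1, axis=0)
    tangent = nxt - prv                               # central-difference tangent
    normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)
    normal /= np.linalg.norm(normal, axis=1, keepdims=True)
    return front + normal * (spread_rate * dt)[:, None]

# Toy example: circular fire line of radius 50 m, uniform 0.5 m/s spread, 60 s step.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
front0 = 50.0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
front1 = huygens_step(front0, np.full(100, 0.5), dt=60.0)
```

For a circle with uniform spread rate, every vertex advances radially, so the front stays circular with radius increased by rate × dt.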
Optionally, obtaining the observed fire line position information at time K from the thermal imaging video of the fire field area includes: acquiring a thermal image of the fire field area at time K from the thermal imaging video; determining the temperature information corresponding to each pixel in the thermal image; extracting the fire field range from the thermal image according to the temperature information corresponding to each pixel and a temperature threshold; performing edge extraction on the fire field range in the thermal image to obtain the pixel positions of the fire line; and converting the pixel positions of the fire line into GPS (Global Positioning System) coordinates of the fire line to obtain the observed fire line position information at time K.
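The thresholding and edge-extraction steps above can be sketched as follows. This minimal illustration uses a simple 4-neighbour morphological edge rather than any specific edge detector named in the patent, and the image and temperature values are toy assumptions.

```python
import numpy as np

def extract_fire_line(temps, threshold):
    """Extract fire-line pixels from a thermal image (illustrative sketch).
    temps: (H, W) per-pixel temperature; threshold: fire temperature cutoff.
    A pixel lies on the fire line if it is inside the fire area but has at
    least one 4-neighbour outside it (a simple morphological edge)."""
    fire = temps >= threshold                        # fire-area mask
    padded = np.pad(fire, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    edge = fire & ~interior
    return np.argwhere(edge)                         # (row, col) pixel positions

# Toy 6x6 thermal image with a hot 3x3 block (values in degrees, hypothetical).
img = np.full((6, 6), 20.0)
img[2:5, 2:5] = 400.0
line_pixels = extract_fire_line(img, threshold=300.0)
```

For the 3×3 hot block, only the centre pixel is interior, so the extracted fire line is the 8 boundary pixels; these pixel positions would then be converted to GPS coordinates.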
Optionally, the number of UAVs is at least one, and multiple pixel positions of the fire line are obtained from the thermal imaging videos of the fire field area captured by the at least one UAV at multiple observation points. Converting the pixel positions of the fire line into GPS coordinates of the fire line to obtain the observed fire line position information at time K includes: performing coordinate conversion on the multiple pixel positions of the fire line respectively to obtain multiple coordinate values of the fire line in the UAV geographic coordinate system; calculating multiple observation elevation angle matrices and multiple azimuth angle matrices of the fire line according to the multiple coordinate values of the fire line in the UAV geographic coordinate system; performing Kalman filter estimation of the fire line position according to the multiple observation elevation angle matrices and multiple azimuth angle matrices of the fire line to obtain a coordinate estimate of the fire line; and converting the coordinate estimate of the fire line into GPS coordinates to obtain the observed fire line position information at time K.
Optionally, converting the pixel positions of the fire line into GPS coordinates of the fire line to obtain the observed fire line position information at time K includes: acquiring the DEM (Digital Elevation Model) geographic information of the fire occurrence place; acquiring the GPS information, attitude information and built-in parameters of the UAV; generating a virtual viewpoint of the UAV position according to the DEM geographic information and the GPS information, attitude information and built-in parameters of the UAV; simulating the actual UAV imaging process according to the virtual viewpoint of the UAV position to obtain a simulated image; determining the pixel coordinates of the fire line in the simulated image according to the pixel positions of the fire line; and converting the pixel coordinates of the fire line in the simulated image into GPS coordinates to obtain the observed fire line position information at time K.
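As a much-simplified illustration of projecting a fire-line pixel to ground coordinates through a camera model: the sketch below assumes a nadir-pointing pinhole camera and flat terrain, whereas the patent's method intersects view rays with DEM terrain and accounts for the UAV's attitude. All function names and parameters are assumptions for illustration.

```python
import numpy as np

def pixel_to_ground(pixel, cam_pos, focal_px, img_size, ground_z=0.0):
    """Project a fire-line pixel onto the ground plane (illustrative sketch).
    pixel: (u, v) pixel coordinates; cam_pos: (x, y, z) camera position in a
    local metric frame; focal_px: focal length in pixels; img_size: (W, H).
    Assumes a nadir-pointing pinhole camera over flat terrain at ground_z."""
    u, v = pixel
    cx, cy = img_size[0] / 2.0, img_size[1] / 2.0
    # View-ray direction in the local frame (x east, y north, z up):
    direction = np.array([(u - cx) / focal_px, (v - cy) / focal_px, -1.0])
    t = cam_pos[2] - ground_z            # scale so the ray reaches the ground
    hit = np.asarray(cam_pos, dtype=float) + t * direction
    return hit[:2]                       # local ground (x, y); GPS conversion next

# Drone 100 m above the local origin, 1000 px focal length, 800x600 image.
xy = pixel_to_ground((500, 300), cam_pos=(0.0, 0.0, 100.0),
                     focal_px=1000.0, img_size=(800, 600))
```

A pixel 100 px right of the image centre maps to a point 10 m east of the point below the drone; the resulting local coordinates would then be converted to GPS latitude/longitude.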
Optionally, judging whether the parameters of the forest fire spread model need to be adjusted according to the predicted fire line position information and the observed fire line position information at time K includes: calculating the deviation between the predicted fire line position information at time K and the observed fire line position information; judging whether the deviation converges within a target range; if the deviation does not converge within the target range, judging whether the number of completed iterations of the fire spread model is less than a maximum number of iterations; if the number of completed iterations of the fire spread model is less than the maximum number of iterations, determining that the parameters of the forest fire spread model need to be adjusted; and, if the deviation converges within the target range, and/or the number of completed iterations of the fire spread model is greater than or equal to the maximum number of iterations, stopping the parameter adjustment of the forest fire spread model.
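The convergence/iteration decision described above reduces to a simple predicate; the sketch below uses illustrative names and toy values, not the patent's own thresholds.

```python
def needs_adjustment(deviation_norm, iter_count, tol, max_iter):
    """Decision rule from the text (names illustrative): adjust the model
    only when the prediction/observation deviation has not yet converged
    into the target range AND the iteration budget is not exhausted."""
    converged = deviation_norm <= tol
    budget_left = iter_count < max_iter
    return (not converged) and budget_left

# Toy examples (units hypothetical, e.g. metres of fire-line offset):
a = needs_adjustment(deviation_norm=12.0, iter_count=3, tol=5.0, max_iter=10)
b = needs_adjustment(deviation_norm=2.0, iter_count=3, tol=5.0, max_iter=10)
c = needs_adjustment(deviation_norm=12.0, iter_count=10, tol=5.0, max_iter=10)
```

Case `a` triggers another parameter-adjustment pass; `b` stops because the deviation has converged; `c` stops because the maximum iteration count has been reached.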
Optionally, adjusting the model parameters of the forest fire spread model according to the predicted fire line position information and the observed fire line position information at time K, and recalculating the predicted fire line position information at time K according to the forest fire spread model with adjusted parameters, includes: calculating the deviation between the predicted fire line position information at time K and the observed fire line position information; adjusting the forest fire spread rate at time K-1 according to a preset forest fire spread rate update coefficient matrix and the deviation; and inputting the adjusted forest fire spread rate at time K-1 and the fire line state analysis value at time K-1 into the Huygens wave model to obtain the predicted fire line position information at time K again.
Optionally, adjusting the forest fire spread rate at time K-1 according to the preset forest fire spread rate update coefficient matrix and the deviation includes: multiplying the forest fire spread rate update coefficient matrix by the deviation, and adding the resulting product to the adjusted forest fire spread rate at time K-1.
Optionally, calculating the fire line state analysis value at time K according to the recalculated predicted fire line position information at time K and the observed fire line position information includes: performing, based on the ensemble Kalman filter algorithm, a least-squares fit between the recalculated predicted fire line position information at time K and the observed fire line position information to obtain the fire line state analysis value at time K.
According to a second aspect of the present application, a UAV-video-based forest fire spread data assimilation device is provided, including: a first acquisition module, configured to acquire meteorological data and basic geographic information data of the fire occurrence place; a second acquisition module, configured to acquire the fire line state analysis value of the fire occurrence place at time K-1; a third acquisition module, configured to input the meteorological data of the fire occurrence place, the basic geographic information data and the fire line state analysis value at time K-1 into the forest fire spread model to obtain the predicted fire line position information at time K; a fourth acquisition module, configured to acquire a thermal imaging video of the fire field area captured by a UAV; a fifth acquisition module, configured to obtain the observed fire line position information at time K from the thermal imaging video of the fire field area; a judgment module, configured to judge, according to the predicted fire line position information and the observed fire line position information at time K, whether the parameters of the forest fire spread model need to be adjusted; an adjustment module, configured to, when the parameters of the forest fire spread model need to be adjusted, adjust the model parameters of the forest fire spread model according to the predicted fire line position information and the observed fire line position information at time K, and recalculate the predicted fire line position information at time K according to the forest fire spread model with adjusted parameters; and a data assimilation module, configured to calculate the fire line state analysis value at time K according to the recalculated predicted fire line position information at time K and the observed fire line position information.
Optionally, the forest fire spread model includes a Rothermel model and a Huygens wave model, and the third acquisition module is specifically configured to: input the meteorological data of the fire occurrence place and the basic geographic information data into the Rothermel model to obtain the forest fire spread rate at time K-1; and input the forest fire spread rate at time K-1 and the fire line state analysis value at time K-1 into the Huygens wave model to predict the fire line position, obtaining the predicted fire line position information at time K.
Optionally, the fifth acquisition module is specifically configured to: acquire a thermal image of the fire field area at time K from the thermal imaging video of the fire field area; determine the temperature information corresponding to each pixel in the thermal image; extract the fire field range from the thermal image according to the temperature information corresponding to each pixel and a temperature threshold; perform edge extraction on the fire field range in the thermal image to obtain the pixel positions of the fire line; and convert the pixel positions of the fire line into GPS coordinates of the fire line to obtain the observed fire line position information at time K.
Optionally, the number of UAVs is at least one, and the fifth acquisition module is specifically configured to obtain multiple pixel positions of the fire line from the thermal imaging videos of the fire field area captured by the at least one UAV at multiple observation points.
Optionally, the fifth acquisition module is specifically configured to: perform coordinate conversion on the multiple pixel positions of the fire line respectively to obtain multiple coordinate values of the fire line in the UAV geographic coordinate system; calculate multiple observation elevation angle matrices and multiple azimuth angle matrices of the fire line according to the multiple coordinate values of the fire line in the UAV geographic coordinate system; perform Kalman filter estimation of the fire line position according to the multiple observation elevation angle matrices and multiple azimuth angle matrices to obtain a coordinate estimate of the fire line; and convert the coordinate estimate of the fire line into GPS coordinates to obtain the observed fire line position information at time K.
Optionally, the fifth acquisition module is specifically configured to: obtain DEM geographic information of the fire location; obtain the GPS information, attitude information, and built-in parameters of the UAV; generate a virtual viewpoint for the UAV position according to the DEM geographic information and the UAV's GPS information, attitude information, and built-in parameters; simulate the actual UAV imaging process from the virtual viewpoint to obtain a simulated image; determine, from the pixel positions of the fire line, the pixel coordinates of the fire line in the simulated image; and convert those pixel coordinates into GPS coordinates to obtain the observed fire-line position information at time K.
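The geometric heart of the virtual-viewpoint approach is projecting an image pixel onto the terrain using the UAV pose and camera parameters. The sketch below assumes the simplest case, a nadir-pointing pinhole camera over flat terrain (the flat `ground_alt` stands in for the DEM lookup); the field names are hypothetical.

```python
def pixel_to_ground(u, v, cam):
    """Project an image pixel to ground coordinates for a nadir-pointing camera.

    cam: dict with the UAV position (east, north, alt), the ground elevation
    `ground_alt` (a flat-terrain stand-in for the DEM), the focal length `f`
    in pixels, and the principal point (cx, cy). Illustrative geometry only;
    a real implementation would intersect the pixel ray with the DEM surface
    and account for the full attitude (roll, pitch, yaw) of the gimbal.
    """
    h = cam["alt"] - cam["ground_alt"]                     # height above ground
    east = cam["east"] + (u - cam["cx"]) * h / cam["f"]
    north = cam["north"] - (v - cam["cy"]) * h / cam["f"]  # image v grows downward
    return east, north
```

Applying this to every fire-line pixel yields ground coordinates that can then be expressed as GPS coordinates.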
Optionally, the judging module is specifically configured to: calculate the deviation between the predicted fire-line position information and the observed fire-line position information at time K; judge whether the deviation converges within a target range; if the deviation does not converge within the target range, judge whether the number of completed iterations of the fire spread model is less than a maximum number of iterations; if the number of completed iterations is less than the maximum, determine that the parameters of the forest fire spread model need to be adjusted; and if the deviation converges within the target range, and/or the number of completed iterations is greater than or equal to the maximum, stop adjusting the parameters of the forest fire spread model.
Optionally, the adjustment module is specifically configured to: calculate the deviation between the predicted fire-line position information and the observed fire-line position information at time K; adjust the forest fire spread speed at time K-1 according to a preset spread-speed update coefficient matrix and the deviation; and input the adjusted spread speed at time K-1 together with the fire-line state analysis value at time K-1 into the Huygens wave model to obtain recalculated predicted fire-line position information at time K.
Optionally, the adjustment module is specifically configured to multiply the spread-speed update coefficient matrix by the deviation, and add the resulting product to the forest fire spread speed at time K-1 to obtain the adjusted spread speed.
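The multiply-then-add update just described is a one-liner; the sketch below assumes the spread speed and deviation are expressed as vectors and the update coefficient matrix is conformable.

```python
import numpy as np

def update_spread_speed(speed_prev, coeff_matrix, deviation):
    """Adjust the forest fire spread speed at time K-1.

    Multiplies the preset spread-speed update coefficient matrix by the
    deviation between the predicted and observed fire-line positions, then
    adds the product to the previous spread speed, as described above.
    """
    return np.asarray(speed_prev, dtype=float) + (
        np.asarray(coeff_matrix, dtype=float) @ np.asarray(deviation, dtype=float)
    )
```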
Optionally, the data assimilation module is specifically configured to perform, based on an ensemble Kalman filtering algorithm, a least-squares fit between the recalculated predicted fire-line position information at time K and the observed fire-line position information, to obtain the fire-line state analysis value at time K.
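The patent does not spell out the ensemble Kalman filter formulation, so the following is a minimal textbook-style analysis step for the simplest case (the fire-line state observed directly, H = I, with perturbed observations). It illustrates how the analysed state is a least-squares blend of forecast and observation.

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_cov):
    """Minimal ensemble Kalman filter analysis step (direct observation H = I).

    ensemble : (N, n) array, N forecast ensemble members of the fire-line state
    obs      : (n,) observed fire-line state
    obs_cov  : (n, n) observation-error covariance
    Returns the analysed ensemble; its mean is the covariance-weighted
    least-squares blend of forecast and observation. Illustrative only.
    """
    x = np.asarray(ensemble, dtype=float)
    obs = np.asarray(obs, dtype=float)
    pf = np.atleast_2d(np.cov(x, rowvar=False))       # forecast covariance
    gain = pf @ np.linalg.inv(pf + obs_cov)           # Kalman gain
    rng = np.random.default_rng(0)
    analysed = np.empty_like(x)
    for i, member in enumerate(x):
        perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov)
        analysed[i] = member + gain @ (perturbed - member)
    return analysed
```

When the forecast spread is large relative to the observation error, the gain approaches the identity and the analysed ensemble collapses onto the observation; in the opposite limit the forecast dominates.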
According to a third aspect of the present application, an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the UAV-video-based forest fire spread data assimilation method described in the embodiments of the first aspect of the present application.
According to a fourth aspect of the present application, a computer-readable storage medium is provided, on which a computer program is stored, where the computer program, when executed by a processor, implements the UAV-video-based forest fire spread data assimilation method described in the embodiments of the first aspect of the present application.
According to the technical solutions of the embodiments of the present application, fire-point positioning based on UAV thermal imaging video is combined with intelligent analysis of that video to identify the fire-line position; the parameters of the forest fire spread model are corrected in real time from the observed fire-line position while the model is dynamically iterated, realizing the data assimilation process of the forest fire spread model. This effectively solves the problem that the fire line cannot be obtained in real time and the model parameters cannot be corrected in time, so that the accuracy of the prediction results cannot be guaranteed. In addition, by using UAVs for aerial photography of the forest fire scene, the present application allows rapid repositioning, large coverage, and fast video return; analysing fire-line data from the returned UAV video is low-cost, timely, and flexible, and avoids the drawback of satellite remote sensing data, whose temporal and spatial resolutions constrain each other, thereby greatly improving the timeliness and accuracy of forest fire spread model prediction. Furthermore, for the unsteady meteorological conditions of a forest fire area, the present application proposes a multi-convergence ensemble Kalman filtering data assimilation method: while the observed fire-line position corrects the model parameters in real time, the forest fire spread speed is iterated dynamically, effectively improving the accuracy of the forest fire spread model.
Additional aspects and advantages of the present application will be set forth in part in the following description, and in part will become apparent from the following description or be learned through practice of the present application.
Description of the Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a UAV-video-based forest fire spread data assimilation method according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a UAV-video-based forest fire spread data assimilation method according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a UAV-video-based forest fire spread data assimilation method according to another embodiment of the present application;
FIG. 4 is a flowchart of obtaining the predicted fire-line position information at time K according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of obtaining the observed fire-line position information at time K according to another embodiment of the present application;
FIG. 6 is a schematic diagram of multiple UAVs obtaining three-dimensional coordinate information of a target according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of a UAV-video-based forest fire spread data assimilation method according to an embodiment of the present application;
FIG. 8 is a flowchart of obtaining the predicted fire-line position information according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a UAV-video-based forest fire spread data assimilation apparatus according to an embodiment of the present application.
Detailed Description of the Embodiments
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, where identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary, intended to explain the present application, and should not be construed as limiting the present application.
The UAV-video-based forest fire spread data assimilation method, apparatus, electronic device, and storage medium of the embodiments of the present application are described below with reference to the accompanying drawings.
FIG. 1 is a flowchart of a UAV-video-based forest fire spread data assimilation method according to an embodiment of the present application. It should be noted that the forest fire spread data assimilation method of the embodiments of the present application can be applied to the UAV-video-based forest fire spread data assimilation apparatus of the embodiments of the present application; the apparatus may be implemented in software and/or hardware and may be integrated into an electronic device.
As shown in FIG. 1, the forest fire spread data assimilation method includes the following steps:
Step 101: obtain the meteorological data and basic geographic information data of the fire location, and obtain the fire-line state analysis value of the fire location at time K-1.
In some embodiments of the present application, the meteorological data may include, but is not limited to, any one or more of wind speed, wind direction, air temperature, precipitation probability, precipitation amount, air pressure, air humidity, air oxygen content, and the like. As an example, the meteorological data may include wind speed and wind direction.
In some embodiments of the present application, the basic geographic information data may include, but is not limited to, any one or more of the underlying surface type, forest moisture content, forest slope map, slope aspect, forest combustible material, and the physical and chemical properties of the forest combustible material. The physical and chemical properties may include, but are not limited to, any one or more of density, ignition point, calorific value, flammability, and the like. As an example, the basic geographic information data may include the underlying surface type, forest moisture content, forest slope map, slope aspect, and forest combustible material.
In the present application, the fire-line state analysis value of the fire location at time K-1 may be obtained as follows: the meteorological data and basic geographic information data of the fire location and the fire-line state analysis value at time K-2 are input into the forest fire spread model to obtain the predicted fire-line position information at time K-1; a thermal imaging video of the fire area captured by a UAV is then obtained, and the observed fire-line position information at time K-1 is obtained from this video; according to the predicted and observed fire-line position information at time K-1, it is judged whether the parameters of the forest fire spread model need to be adjusted; if adjustment is needed, the model parameters are adjusted according to the predicted and observed fire-line position information at time K-1; the predicted fire-line position information at time K-1 is then recalculated with the parameter-adjusted forest fire spread model, and the fire-line state analysis value at time K-1 is calculated from the recalculated predicted position information and the observed position information at time K-1.
In the present application, time K denotes a time point during the burning of the forest fire, time K-1 denotes the time point one time step before that time point, time K-2 denotes the time point two time steps before it, and so on.
That is to say, when calculating the fire-line state analysis value of the fire location at time K-1, the fire-line position at time K-1 can be predicted with the forest fire spread model from the fire-line state analysis value at the preceding time (i.e., time K-2), the meteorological data, and the basic geographic information data, yielding the predicted fire-line position information at time K-1. When the predicted and observed fire-line position information at time K-1 indicate that the parameters of the forest fire spread model need to be dynamically adjusted, the parameters are adjusted, the predicted fire-line position information at time K-1 is recalculated with the parameter-adjusted model, and the fire-line state analysis value at time K-1 is then calculated from the recalculated predicted position information and the observed position information at time K-1. If no parameter adjustment is required, the predicted fire-line position information at time K-1 need not be recalculated; in that case the fire-line state analysis value at time K-1 can be calculated directly from the first-pass predicted position information and the observed position information at time K-1.
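The predict / observe / adjust / analyse cycle just described can be sketched as a single step function. All five callables below are hypothetical stand-ins for the components named in the text (spread model, UAV observation, convergence test, parameter adjustment, EnKF analysis); the sketch only fixes the order of operations.

```python
def assimilation_step(state_prev, weather, gis,
                      observe, predict, adjust, analyse, needs_adjust):
    """One assimilation cycle producing the state analysis value for time K.

    state_prev   : fire-line state analysis value at time K-1
    predict      : spread model; accepts optional adjusted parameters
    observe      : fire line extracted from UAV thermal video at time K
    needs_adjust : deviation-based decision (steps 104/201-205 in the text)
    adjust       : parameter adjustment from predicted vs. observed positions
    analyse      : EnKF blend of prediction and observation
    """
    pred = predict(state_prev, weather, gis)      # fire-line prediction at K
    obs = observe()                               # fire-line observation at K
    if needs_adjust(pred, obs):
        model_params = adjust(pred, obs)
        pred = predict(state_prev, weather, gis, model_params)  # recompute
    return analyse(pred, obs)                     # state analysis value at K
```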
It should be noted that, in the embodiments of the present application, when K=1, the initial fire-line state analysis value of the fire location may be obtained as follows: a correspondence between meteorology, basic geographic information, and fire-line state analysis values may be established in advance through multiple simulation tests, and the initial state analysis value of the fire line at the fire location can then be obtained from this correspondence and the meteorological data and basic geographic information data of the fire location. In other words, when a fire breaks out somewhere, empirical values obtained from multiple simulation tests can first be used to predict the initial fire-line state analysis value of that location.
Step 102: input the meteorological data and basic geographic information data of the fire location and the fire-line state analysis value at time K-1 into the forest fire spread model, and obtain the predicted fire-line position information at time K.
In some embodiments of the present application, the forest fire spread model includes models capable of simulating forest fire spread from the input information, such as the Rothermel model, the Huygens wave model, a model combining the Rothermel model and the Huygens wave model, and the McArthur model. In the embodiments of the present application, the forest fire spread model includes the Rothermel model and the Huygens wave model.
As an example, inputting the meteorological data and basic geographic information data of the fire location and the fire-line state analysis value at time K-1 into the forest fire spread model to obtain the predicted fire-line position information at time K may be implemented as follows: input the meteorological data and basic geographic information data into the Rothermel model to obtain the forest fire spread speed at time K-1, and input the spread speed at time K-1 together with the fire-line state analysis value at time K-1 into the Huygens wave model to predict the fire-line position, yielding the predicted fire-line position information at time K.
As an example, step 102 may be implemented as follows:
First, from the meteorological and basic geographic information data of the fire location, which may include wind speed, wind direction, underlying surface type, forest moisture content, forest slope map, slope aspect, forest combustible material, and so on, the forest fire spread speed R_0 of each fire point is obtained through the Rothermel model. According to Huygens' wave theory, a fire point is regarded as a point on the wavefront, and each fire point serves as the next (secondary) wave source from which the wave continues to propagate; this gives the predicted fire-line position P_k^f at the next time step K, where P_k^f = [(e_1, n_1), ..., (e_m, n_m)], the subscript k denotes time, the superscript denotes the matrix state of the fire line, f denotes the prediction matrix of the fire line, (e_j, n_j) are the coordinates of point j on the fire line, and m is the number of marked points on the fire-line perimeter. The forest fire spread model is formulated as follows:

    R_0 = I_R ζ (1 + Φ_sw) / (ρ_b ε Q_ig)    (1)

    P_k^f = H(P_{k-1}^a, R_0)    (2)

Equation (1) is the Rothermel model, in which R_0 is the forest fire spread speed at a given fire point, I_R is the reaction intensity, ζ is the propagation rate, ρ_b is the density of the combustible material, ε is the effective heating coefficient, Q_ig is the heat required to ignite a unit mass of combustible material, and Φ_sw is the wind-speed and slope correction coefficient. Equation (2) is the Huygens wave model, in which H denotes the Huygens model, the superscript a denotes the state analysis matrix combining the ensemble-predicted fire line and the observed fire line, and P_{k-1}^a denotes the state analysis matrix of the model at the previous time step K-1.
As can be seen from equations (1) and (2), the forest fire spread speed is obtained by inputting meteorology, terrain, vegetation, initial fire source, and other data into the Rothermel model, and the predicted fire-line position information at the current time is obtained by inputting the spread speed and the state analysis matrix of the fire-line model at the previous time step into the Huygens wave model.
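A minimal sketch of the two-stage prediction follows: a literal transcription of equation (1), and a strongly simplified Huygens step in which every marked fire-line point expands outward along its local normal. The isotropic expansion is an illustrative assumption; the real model expands each point elliptically under wind and slope.

```python
import math

def rothermel_speed(i_r, zeta, phi_sw, rho_b, eps, q_ig):
    """Forest fire spread speed R_0 per equation (1)."""
    return i_r * zeta * (1.0 + phi_sw) / (rho_b * eps * q_ig)

def huygens_step(fire_line, speed, dt):
    """Propagate each marked fire-line point as a secondary wave source.

    fire_line is a list of (e_j, n_j) points forming a counter-clockwise
    closed perimeter. Each point moves outward along the local normal,
    approximated from its two neighbours on the perimeter.
    """
    m = len(fire_line)
    new_line = []
    for j, (e, n) in enumerate(fire_line):
        ep, np_ = fire_line[(j - 1) % m]
        en, nn = fire_line[(j + 1) % m]
        te, tn = en - ep, nn - np_               # tangent from neighbours
        norm = math.hypot(te, tn) or 1.0
        oe, on = tn / norm, -te / norm           # outward normal (CCW perimeter)
        new_line.append((e + speed * dt * oe, n + speed * dt * on))
    return new_line
```

Iterating `huygens_step` with per-point speeds from `rothermel_speed` yields successive predicted fire lines P_k^f.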
Step 103: obtain a thermal imaging video of the fire area captured by a UAV, and obtain the observed fire-line position information at time K from the thermal imaging video of the fire area.
In some embodiments of the present application, the method of obtaining a thermal imaging video from the video captured by the UAV includes, but is not limited to, the following infrared thermal imaging technique: every object in nature emits infrared radiation as a result of the thermal motion of its internal molecules, provided its temperature is above absolute zero (-273°C), and the wavelength of this radiation is inversely proportional to its temperature. The thermal imaging technique adopted in this embodiment is infrared thermal imaging, which converts the detected radiant energy of an object, after system processing, into a thermal image of the target object (which may be a grayscale and/or pseudo-color image). The UAV carries such a thermal imager, whose pixel information reflects the temperature information of the imaged area. In this step, this property, namely that thermal imaging pixel information reflects the temperature of the imaged area, is used to obtain the observed fire-line position information at time K.
In some embodiments of the present application, the thermal imager on the UAV may send the thermal imaging video it captures to the forest fire spread data assimilation apparatus through a communication connection, so that the apparatus obtains the thermal imaging video of the fire area captured by the thermal imager on the UAV. That is to say, the UAV carries a thermal imager for capturing thermal imaging video of the fire area, and the UAV and the forest fire spread data assimilation apparatus communicate over a communication connection through which the apparatus can obtain the thermal imaging video of the fire area from the UAV.
As an example, the above communication connection may use the mobile Internet, wireless communication, or the like. The mobile Internet may be one of a 3G (3rd-generation mobile network), 4G (4th-generation mobile network), or 5G (5th-generation mobile communication technology) network; the wireless communication may be one of WiFi (Wireless Fidelity), a digital wireless data transmission radio, UWB (Ultra-Wideband) transmission, Zigbee transmission, and the like.
Step 104: judge, according to the predicted fire-line position information and the observed fire-line position information at time K, whether the parameters of the forest fire spread model need to be adjusted.
Optionally, the embodiments of the present application may implement forest fire spread data assimilation based on a multi-convergence ensemble Kalman filtering method. The multi-convergence ensemble Kalman filtering method first selects the state-analysis fire-line position at time K-1 and the fire spread speed V_{k-1} as the state parameters to be corrected. The Rothermel-Huygens model predicts the fire-line position P_k^f at time K from the fire-line state analysis value P_{k-1}^a at time K-1, and the observed fire-line position at time K is P_k^o. Then, the deviation between the predicted fire-line position information P_k^f and the observed fire-line position information P_k^o at time K is calculated, and whether the parameters of the forest fire spread model need to be adjusted is decided according to the convergence of this deviation.
In some embodiments of the present application, as shown in FIG. 2, judging whether the parameters of the forest fire spread model need to be adjusted according to the predicted and observed fire-line position information at time K may be implemented as follows:
Step 201: calculate the deviation between the predicted fire-line position information and the observed fire-line position information at time K.

In some embodiments of the present application, the deviation is defined as follows:

    Err = P_k^o - P_k^f    (3)

That is, the deviation between the predicted fire-line position P_k^f and the observed fire-line position P_k^o at time K can be calculated using formula (3).
Step 202: judge whether the deviation converges within the target range.

Optionally, once the deviation between the predicted fire-line position P_k^f and the observed fire-line position P_k^o at time K is obtained, whether the deviation converges within the target range can be judged by the following formula:

    ||Err_h|| / ||P_k^o|| ≤ C_factor    (4)

where ||Err_h|| and ||P_k^o|| are the 2-norms of the deviation and of the observed data at the h-th iteration step, respectively, and C_factor is the criterion for judging whether the computation has converged. If, after h iterations, ||Err_h|| / ||P_k^o|| is less than or equal to C_factor, the iteration of the model is stopped and the deviation is considered to have converged within the target range; if ||Err_h|| / ||P_k^o|| is greater than C_factor, the deviation is considered not to have converged within the target range, and parameter adjustment of the model needs to continue.
Step 203: if the deviation does not converge within the target range, judge whether the number of completed iterations of the forest fire spread model is less than the maximum number of iterations.

In some embodiments of the present application, a variable may be set to record the number of iterations of the forest fire spread model. The maximum number of iterations may be a manually set constant, recorded in the system in advance or given in the system as a recommended value, and in practice it may be adjusted dynamically according to experience or on-site conditions. Comparing the recorded number of iterations with the set maximum yields their relative relationship. For example, let N_iteration be the maximum iteration limit and h the current number of completed iterations of the forest fire spread model. After it is judged that the deviation between the predicted fire-line position P_k^f and the observed fire-line position P_k^o at time K has not converged within the target range, it is necessary to judge whether the current number of completed iterations is less than the maximum. If h ≥ N_iteration, the current number of completed iterations of the forest fire spread model is greater than or equal to the maximum, and the iteration of the model is stopped, i.e., no further parameter adjustment of the model is performed. If h < N_iteration, the current number of completed iterations of the forest fire spread model is less than the maximum, and step 204 can be executed.
Step 204, if the number of iterations of the forest fire spread model is less than the maximum number of iterations, it is determined that the parameters of the forest fire spread model need to be adjusted.
Step 205, if the deviation converges within the target range, and/or the number of iterations of the forest fire spread model is greater than or equal to the maximum number of iterations, stop adjusting the parameters of the forest fire spread model. It can be seen that, through the above steps 201 to 205, under the unsteady meteorological conditions of a forest fire area, the multi-convergence ensemble Kalman filter data assimilation method can correct fire spread parameters such as the fire line position in real time while dynamically iterating the fire spread rate, effectively improving the accuracy of the forest fire spread model.
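As an illustrative sketch (not part of the original disclosure), the control flow of steps 201 to 205 can be expressed as follows; `predict`, `err`, and `adjust` are hypothetical callables standing for a run of the spread model, the deviation Err_h computation, and the parameter adjustment of step 204:

```python
def assimilate_with_convergence(predict, err, c_factor, n_iteration, adjust):
    """Multi-convergence iteration control (steps 201-205, sketched).

    predict: returns the predicted fire-line position at time K
    err: computes the deviation Err_h between prediction and observation
    c_factor: convergence threshold C_factor
    n_iteration: maximum iteration count N_iteration
    adjust: adjusts the spread-model parameters from the deviation
    """
    h = 0                       # current iteration count for the spread model
    x_pre = predict()
    while True:
        deviation = err(x_pre)
        if deviation <= c_factor:   # deviation converged within the target range
            return x_pre, h, True
        if h >= n_iteration:        # iteration budget exhausted, stop anyway
            return x_pre, h, False
        adjust(deviation)           # parameter adjustment (step 204)
        x_pre = predict()           # re-run the spread model
        h += 1
```

A convergent `adjust` drives the predicted position toward the observation within the iteration budget; a no-op `adjust` exits via the N_iteration limit instead.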
Step 105, if parameter adjustment of the forest fire spread model is required, adjust the model parameters of the forest fire spread model according to the predicted fire line position information and the observed fire line position information at time K, recalculate the predicted fire line position information at time K with the parameter-adjusted forest fire spread model, and calculate the fire line state analysis value at time K from the recalculated predicted fire line position information and the observed fire line position information at time K.
In some embodiments of the present application, the fire line state analysis value at time K may be calculated from the recalculated predicted fire line position information and the observed fire line position information at time K as follows: based on the ensemble Kalman filter algorithm, a least-squares fit is performed between the recalculated predicted fire line position information at time K and the observed fire line position information, yielding the fire line state analysis value at time K.
As an example, this least-squares fit between the recalculated predicted fire line position information and the observed fire line position information at time K, based on the ensemble Kalman filter algorithm, may proceed as follows:
Under the condition that the deviation Err_h between the observed fire line position and the predicted fire line position satisfies formula (4), the deviation can be considered to have converged within the target range, and the state-analysis fire line position can be obtained from a least-squares fit of the observed fire line and the predicted fire line, i.e., the state analysis matrix (the state-analysis fire line has the smallest error with respect to the true fire line position). The fire line state analysis value is computed in the following steps:
1) Calculate the ensemble prediction error covariance matrix P_e:

X̄^f = X^f 1_N          (5)

P_e = (X^f − X̄^f)(X^f − X̄^f)^T / (N − 1)          (6)

where N is the number of elements in the state variable ensemble, 1_N is an N×N matrix whose elements are all 1/N, and X̄^f is the matrix whose columns each hold the mean vector of the columns of the prediction matrix X^f.
2) Generate the observation ensemble. At the data assimilation time step, the observation vector y^o is available; perturbations are added to the observation vector to generate an observation matrix containing N observation vectors:

y_j = y^o + ε_j,  j = 1, 2, ..., N          (7)

The perturbed observation vectors form the observation matrix:

Y^o = (y_1, y_2, ..., y_N) ∈ R^{m×N}          (8)
where R^{m×N} denotes the domain of Y^o, i.e., Y^o has m rows and N columns.
Meanwhile, the added perturbations can be stored in a matrix:

E = (ε_1, ε_2, ..., ε_N)          (9)

The ensemble observation error covariance matrix can be expressed as:

R_e = E E^T / (N − 1)          (10)
3) The ensemble Kalman filter gain is calculated as:

K_e = P_e H^T (H P_e H^T + R_e)^{-1}          (11)

where H is the observation operator, which maps X from the state space to the observation space.
4) Update the system state analysis value:

X^a = X^f + K_e (Y^o − H X^f)          (12)
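As an illustrative sketch (not part of the original disclosure), the analysis step of equations (5) to (12) can be written with NumPy; the Gaussian form of the observation perturbations and the observation-error standard deviation `obs_std` are assumptions, since the perturbation distribution of ε_j is not specified above:

```python
import numpy as np

def enkf_analysis(Xf, yo, H, obs_std, rng=None):
    """One ensemble-Kalman-filter analysis step, following equations (5)-(12).

    Xf: n x N prediction matrix (each column one ensemble member's state)
    yo: length-m observation vector (observed fire-line position)
    H: m x n observation operator mapping state space to observation space
    obs_std: assumed observation-error standard deviation (illustrative)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, N = Xf.shape
    one_N = np.full((N, N), 1.0 / N)           # 1_N: N x N matrix of 1/N entries
    Xp = Xf - Xf @ one_N                       # X^f - X_bar^f, with X_bar^f = X^f 1_N (5)
    Pe = Xp @ Xp.T / (N - 1)                   # (6): ensemble prediction error covariance
    E = obs_std * rng.standard_normal((yo.size, N))   # perturbations epsilon_j (9)
    Yo = yo[:, None] + E                       # (7)-(8): perturbed observation matrix
    Re = E @ E.T / (N - 1)                     # (10): ensemble observation error covariance
    Ke = Pe @ H.T @ np.linalg.inv(H @ Pe @ H.T + Re)  # (11): ensemble Kalman gain
    Xa = Xf + Ke @ (Yo - H @ Xf)               # (12): state analysis matrix
    return Xa
```

With a small `obs_std`, the analysis ensemble mean is pulled strongly toward the observation, which is the intended behavior when the UAV-observed fire line is trusted.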
Thus, for the unsteady meteorological conditions of forest fire areas, the embodiments of the present application propose a multi-convergence ensemble Kalman filter data assimilation method: the fire line position corrects the parameters of the forest fire spread model in real time while the fire spread rate is dynamically iterated, effectively improving the accuracy of the forest fire spread model.
To sum up, the UAV-video-based forest fire spread data assimilation method of the embodiments of the present application obtains the predicted fire line position information at time K through the forest fire spread model from the meteorological data of the fire location, the basic geographic information data, the fire line state analysis value at time K-1, and other data; compares the predicted fire line position information at time K with the observed fire line position information obtained by the UAV; and judges whether parameter adjustment of the forest fire spread model is required. If adjustment is required, the model parameters are adjusted according to the predicted and observed fire line position information at time K, the predicted fire line position information at time K is recalculated with the adjusted forest fire spread model, and the fire line state analysis value at time K is recalculated. This UAV-video-based method uses a UAV as the front-end monitoring device to extract the fire line in real time and obtain its position information, and proposes an assimilated forest fire spread model whose parameters can be adjusted dynamically. It effectively solves the problem that the fire line cannot be obtained in real time and that the model parameters cannot be corrected in time, which otherwise prevents the accuracy of the prediction results from being guaranteed, and improves the prediction accuracy of the model. UAVs have the advantages of high mobility and low cost and can stream live video in real time, so that the observed fire line can be updated at minute-level or even second-level intervals. This effectively avoids the drawback of satellite remote sensing data, whose temporal and spatial resolutions constrain each other, and can greatly improve the timeliness and accuracy of forest fire spread model predictions, thereby improving the prediction accuracy of the burned area and providing objective fire-field information for forest fire fighting.
It should be noted that, in order to obtain more accurate observed fire line position information, the fire field area can be photographed with a thermal imaging camera carried by the UAV, and the thermal imaging video captured by the UAV can then be used to calculate the observed fire line position information at time K. Specifically, FIG. 3 is a flowchart of a UAV-video-based forest fire spread data assimilation method according to another embodiment of the present application. As shown in FIG. 3, the forest fire spread data assimilation method includes:
Step 301, obtain the meteorological data and basic geographic information data of the fire location, and obtain the fire line state analysis value at time K-1 of the fire location.

Step 302, input the meteorological data of the fire location, the basic geographic information data, and the fire line state analysis value at time K-1 into the forest fire spread model to obtain the predicted fire line position information at time K, and obtain a thermal imaging video of the fire field area captured by the UAV.

Step 303, obtain the thermal image of the fire field area at time K from the thermal imaging video of the fire field area, and determine the temperature information corresponding to each pixel in the thermal image.

Step 304, according to the temperature information corresponding to each pixel of the thermal image and a temperature threshold, extract the fire field range from the thermal image, and perform edge extraction on the fire field range to obtain the pixel positions of the fire line.

Step 305, convert the pixel positions of the fire line into GPS coordinates of the fire line to obtain the observed fire line position information at time K.
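The thresholding and edge extraction of steps 303 and 304 can be sketched as follows; the 4-neighbour edge definition is an assumption (the disclosure does not fix an edge-extraction operator), and the pixel-to-GPS conversion of step 305 is omitted here:

```python
import numpy as np

def extract_fire_line_pixels(temps, threshold):
    """Steps 303-304, sketched: threshold the thermal image to obtain the fire
    field range, then take its edge pixels as the fire line.

    temps: HxW array of per-pixel temperatures from the thermal image
    threshold: the temperature threshold separating fire from background
    """
    fire = temps >= threshold                          # fire field range (step 304)
    # an interior fire pixel has all four neighbours on fire; edge pixels do not
    padded = np.pad(fire, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    edge = fire & ~interior
    rows, cols = np.nonzero(edge)
    return list(zip(rows.tolist(), cols.tolist()))     # fire-line pixel positions
```

For a solid hot region, only its boundary pixels are returned, which is the fire line in pixel coordinates.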
In some embodiments of the present application, the process of converting fire line pixel information into GPS information is the inverse of the camera imaging process. As shown in FIG. 4, imaging is a projective transformation from the three-dimensional scene to the two-dimensional image plane captured by the UAV. The essence of camera imaging is the central perspective projection of projective geometry: points on the three-dimensional ground determine the observation result through the view frustum and viewpoint orientation specified by the projection matrix, and the two-dimensional camera image corresponds to the three-dimensional geographic information through the view frustum and viewpoint orientation. Converting two-dimensional image information into three-dimensional coordinate information is the inverse of this process.
In the embodiments of the present application, there are many ways to convert the pixel positions of the fire line into GPS coordinates of the fire line, which can be selected and configured according to the specific application scenario. Examples are as follows:
In a first example, a fire line positioning technique that does not use DEM information, at least one UAV photographs the same fire field area from multiple observation points; multiple pixel positions of the fire line are obtained from the thermal imaging videos of the fire field area captured at these observation points, and the observed fire line position information at time K is then calculated from these pixel positions. Specifically, as shown in FIG. 5, this example includes the following steps:
Step 501, perform coordinate transformation on the multiple pixel positions of the fire line to obtain multiple coordinate values of the fire line in the UAV geographic coordinate system.

Step 502, calculate multiple observation elevation angle matrices and multiple azimuth angle matrices of the fire line according to its multiple coordinate values in the UAV geographic coordinate system.

Step 503, perform Kalman filter estimation on the fire line position according to the multiple observation elevation angle matrices and azimuth angle matrices of the fire line to obtain a coordinate estimate of the fire line.

Step 504, convert the coordinate estimate of the fire line into GPS coordinates to obtain the observed fire line position information at time K.

For example, a specific implementation of steps 501-504 without DEM information may be as follows:
The UAV locates ground targets mainly by collecting and processing data with airborne sensors to obtain the relative distance and angle between the UAV and the target, and then solving the target position coordinates together with the UAV's own position and attitude data. As shown in FIG. 6, the UAV observes the same target from multiple positions, and accurate three-dimensional coordinates of the target can be obtained through the vision-based multi-point angle observation fire line positioning method.
For target positioning by multi-point angle observation, the relative elevation angle and azimuth angle matrices between the fire line and the UAV are calculated from the fire line pixel information according to the imaging principle, the system state equation and observation equation are established, the position of the fire line relative to the UAV is estimated by an unscented Kalman filter, and this estimate is then converted into position coordinates of the fire line in the geodetic coordinate system. Observing at time K yields the actual observed fire line position at time K. The main steps for obtaining the observed fire line are as follows:
1) The fire line pixel information is converted through coordinate transformation into values in the UAV geographic coordinate system, and the elevation angle and azimuth angle matrices of the fire line points relative to the UAV geographic coordinates are calculated.

2) By observing the fire line from multiple points, multiple observation elevation angle matrices and azimuth angle matrices of the fire line are obtained, and the estimated fire line position is obtained in combination with Kalman filtering.

3) The estimated fire line position is converted through coordinates into the GPS coordinates of the fire line, i.e., the actual observed fire line position.
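For illustration only, a minimal numeric sketch of steps 1) to 3): the angle computation assumes a local east-north-up frame, and the least-squares intersection of azimuth lines below is a simplified stand-in for the unscented Kalman filter estimate described above (the final conversion to GPS coordinates is omitted):

```python
import numpy as np

def bearing_angles(uav_pos, target_pos):
    """Elevation angle and azimuth (radians) of a fire-line point as seen from
    a UAV position, in a local east-north-up frame (step 1, sketched)."""
    d = np.asarray(target_pos, float) - np.asarray(uav_pos, float)
    azimuth = np.arctan2(d[0], d[1])                   # east-of-north convention
    elevation = np.arctan2(d[2], np.hypot(d[0], d[1]))
    return elevation, azimuth

def triangulate(uav_positions, azimuths):
    """Least-squares ground fix from several azimuth observations (step 2,
    simplified): each azimuth az_i from UAV position (e_i, n_i) constrains the
    target (x, y) to the line sin(az_i)*(y - n_i) = cos(az_i)*(x - e_i)."""
    A, b = [], []
    for (pe, pn), az in zip(uav_positions, azimuths):
        A.append([-np.cos(az), np.sin(az)])
        b.append(-np.cos(az) * pe + np.sin(az) * pn)
    sol, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return sol   # (east, north) estimate of the fire-line point
```

Two or more well-separated observation points make the line intersection well conditioned, mirroring why the multi-point observation of FIG. 6 is needed.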
In a second example, a fire line positioning technique combined with DEM information, as shown in FIG. 7, includes the following steps:
Step 701, obtain the DEM geographic information of the fire location.

Step 702, obtain the GPS information, attitude information, and built-in parameters of the UAV.

Step 703, generate a virtual viewpoint at the UAV position according to the DEM geographic information and the UAV's GPS information, attitude information, and built-in parameters.

Step 704, simulate the actual UAV imaging process from the virtual viewpoint at the UAV position to obtain a simulated image.

Step 705, determine the pixel coordinates of the fire line in the simulated image according to the pixel positions of the fire line.

Step 706, convert the pixel coordinates of the fire line in the simulated image into GPS coordinates to obtain the observed fire line position information at time K.

For example, a specific implementation of steps 701-706 combining DEM information may be as follows:
Based on the DEM geographic information of the forest, a virtual viewpoint at the UAV position is formed through the TS-GIS (TypeScript - Geographic Information System) engine, and a projection matrix is generated. Using the projection matrix, the spatial coordinates corresponding to the fire line pixels in the thermal image can be obtained. Observing at time K yields the actual observed fire line position at time K. The fire line positioning process is as follows:
1) Combining the DEM data of the forest area with remote sensing image data sources, the TS-GIS engine can display three-dimensional DEM information.

2) The camera GPS information, attitude information, and built-in camera parameters are input, and, using the consistency between perspective imaging and photogrammetric imaging, the actual camera imaging process is simulated from the TS-GIS virtual camera viewpoint to obtain a simulated image.

3) In the constructed three-dimensional scene under the virtual viewpoint, the simulated image generated by setting the projection matrix and view matrix corresponds to the pixel coordinates of the target in the monitoring image, and GPS positioning of the fire line is performed. This is the actual observed fire line position.
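A minimal sketch of the back-projection underlying this process, under the simplifying assumption that the DEM is a flat plane z = ground_z (the TS-GIS engine instead intersects the viewing ray with the real terrain); `K`, `R`, and `cam_pos` stand for the camera's built-in intrinsic parameters, attitude rotation, and GPS-derived position:

```python
import numpy as np

def pixel_to_ground(u, v, K, R, cam_pos, ground_z):
    """Back-project a fire-line pixel (u, v) to ground coordinates, the inverse
    of the central-perspective imaging described above.

    K: 3x3 camera intrinsic matrix
    R: camera-to-world rotation matrix (from the UAV attitude)
    cam_pos: camera position in world coordinates
    ground_z: terrain elevation, simplified here to a horizontal plane
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray, camera frame
    ray_world = R @ ray_cam                              # rotate into world frame
    t = (ground_z - cam_pos[2]) / ray_world[2]           # intersect plane z = ground_z
    return np.asarray(cam_pos, float) + t * ray_world    # ground point (x, y, z)
```

With a real DEM, the same ray would be marched against the terrain surface instead of intersected with a single plane.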
Step 306, according to the predicted fire line position information and the observed fire line position information at time K, judge whether parameter adjustment of the forest fire spread model is required.

Step 307, if parameter adjustment of the forest fire spread model is required, adjust the model parameters of the forest fire spread model according to the predicted and observed fire line position information at time K, recalculate the predicted fire line position information at time K with the parameter-adjusted forest fire spread model, and calculate the fire line state analysis value at time K from the recalculated predicted fire line position information and the observed fire line position information at time K.
In some embodiments of the present application, the fire spread rate parameter in the forest fire spread model may be adjusted according to the deviation between the predicted and observed fire line position information, and the predicted fire line position information is then recalculated with the adjusted forest fire spread model.
As an example, as shown in FIG. 8, adjusting the model parameters of the forest fire spread model according to the predicted and observed fire line position information at time K, and recalculating the predicted fire line position information at time K with the parameter-adjusted forest fire spread model, may include:
Step 801, calculate the deviation between the predicted fire line position information and the observed fire line position information at time K.

Step 802, adjust the forest fire spread rate at time K-1 according to a preset forest fire spread rate update coefficient matrix and the deviation.

In the embodiments of the present application, adjusting the forest fire spread rate at time K-1 according to the preset forest fire spread rate update coefficient matrix and the deviation can be illustrated as follows: the method includes multiplying the forest fire spread rate update coefficient matrix by the deviation, and adding the resulting product to the forest fire spread rate at time K-1 to obtain the adjusted rate.

Step 803, input the adjusted forest fire spread rate at time K-1 and the fire line state analysis value at time K-1 into the Huygens wave model to obtain the predicted fire line position information at time K again.

For example, a specific implementation of steps 801-803 may be as follows:
When the deviation has not converged within the target range, the forest fire spread rate R_{0,k-1} at time K-1 calculated by the Rothermel model is the current spread rate in the forest fire spread model. When a forest fire occurs, the thermal airflow and convection of the fire field affect its wind direction and wind speed; neither the wind field of the fire area nor the fire spread rate is steady, so the spread rate of the forest fire spread model must also be adjusted dynamically. These unsteady factors are therefore taken into account to update the forest fire spread rate.
1) Corrected forest fire spread rate:

R_{h,k-1} = R_{0,k-1} + C·Err_h          (13)

where C is the forest fire spread rate update coefficient matrix, and Err_h is the deviation between the predicted and observed values obtained from formula (3).
2) Update the forest fire spread rate:

R_{0,k-1} = R_{h,k-1}          (14)
The updated forest fire spread rate is re-input into the forest fire spread model, the predicted fire line position at time K is recalculated, and steps 306-307 are iterated.
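The update of formulas (13) and (14) can be sketched as follows; treating the spread rate and the deviation as vectors and C as a matrix mirrors the notation above:

```python
import numpy as np

def corrected_spread_rate(r0_prev, C, err_h):
    """Formulas (13)-(14), sketched: the Rothermel spread rate R_0,k-1 is
    corrected by the prediction/observation deviation Err_h scaled by the
    spread rate update coefficient matrix C; the corrected rate R_h,k-1 then
    replaces R_0,k-1 before the fire line position is re-predicted."""
    r_h = np.asarray(r0_prev, float) + np.asarray(C, float) @ np.asarray(err_h, float)  # (13)
    return r_h                                                                          # (14)
```

Each call corresponds to one pass of the steps 306-307 iteration, after which the updated rate is fed back into the Huygens wave model.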
According to the UAV-video-based forest fire spread data assimilation method of the embodiments of the present application, the obtained meteorological data of the fire location, the basic geographic information data, and the fire line state analysis value at time K-1 are input into the forest fire spread model to obtain the predicted fire line position information at time K. A thermal imaging video of the fire field area captured by the UAV is obtained, the thermal image of the fire field area at time K is extracted from it, and the temperature information corresponding to each pixel in the thermal image is determined. According to this temperature information and a temperature threshold, the fire field range is extracted, edge extraction is performed on the fire field range to obtain the pixel positions of the fire line, and the coordinate estimate of the fire line is converted into GPS coordinates to obtain the observed fire line position information at time K. According to the predicted and observed fire line position information at time K, it is judged whether parameter adjustment of the forest fire spread model is required. If parameter adjustment is required, the model parameters of the forest fire spread model are adjusted according to the predicted and observed fire line position information at time K, the predicted fire line position information at time K is recalculated with the parameter-adjusted forest fire spread model, and the fire line state analysis value at time K is calculated from the recalculated predicted and observed fire line position information at time K. The method implemented in this embodiment uses a UAV as the front-end monitoring device to extract the fire line in real time and obtain its position information, and proposes an assimilated forest fire spread model with dynamically adjustable parameters. It effectively solves the problems that a simulation model cannot be dynamically adjusted to changes in the simulated environment, that forest fire models are unsuited to unsteady conditions, and that environmental changes cannot be transmitted in real time, thereby improving model prediction accuracy. UAVs have the advantages of high maneuverability and low cost, and can stream live video in real time, so that the observed fire line can be updated at minute-level or even second-level intervals. The data assimilation method adopted by the model continuously assimilates the forest fire spread model, improves the prediction accuracy of the burned area, and provides objective fire-field information for forest fire fighting. When the deviation has not converged, a solution incorporating the unsteady factors at the fire scene into the model is proposed, which further improves the model's prediction accuracy. Meanwhile, this embodiment provides a method for obtaining the observed fire line position from the regional thermal imaging video; this method obtains the observed fire line position information and also displays the observed fire line position intuitively, providing direct guidance and strong support for forest fire extinguishing.
To implement the above embodiments, the present application also proposes a UAV-video-based forest fire spread data assimilation apparatus. FIG. 9 is a schematic structural diagram of a UAV-video-based forest fire spread data assimilation apparatus according to an embodiment of the present application. As shown in FIG. 9, the apparatus includes:
a first acquisition module 901, configured to acquire the meteorological data and basic geographic information data of the fire location;

a second acquisition module 902, configured to acquire the fire line state analysis value at time K-1 of the fire location;

a third acquisition module 903, configured to input the meteorological data of the fire location, the basic geographic information data, and the fire line state analysis value at time K-1 into the forest fire spread model to acquire the predicted fire line position information at time K;

a fourth acquisition module 904, configured to acquire the thermal imaging video of the fire field area captured by the UAV;

a fifth acquisition module 905, configured to acquire the observed fire line position information at time K according to the thermal imaging video of the fire field area;

a judgment module 906, configured to judge, according to the predicted fire line position information at time K and the observed fire line position information, whether parameter adjustment of the forest fire spread model is required;

an adjustment module 907, configured to, when parameter adjustment of the forest fire spread model is required, adjust the model parameters of the forest fire spread model according to the predicted and observed fire line position information at time K, and recalculate the predicted fire line position information at time K with the parameter-adjusted forest fire spread model;

a data assimilation module 908, configured to calculate the fire line state analysis value at time K according to the recalculated predicted fire line position information and the observed fire line position information at time K.
In some embodiments of the present application, the forest fire spread model includes a Rothermel model and a Huygens wave model. In these embodiments, the third acquisition module 903 is specifically configured to: input the meteorological data of the place where the fire occurs and the basic geographic information data into the Rothermel model to obtain the forest fire spread speed at time K-1; and input the forest fire spread speed at time K-1 and the fire-line state analysis value at time K-1 into the Huygens wave model to predict the fire-line position, obtaining the predicted fire-line position information at time K.
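As an illustrative sketch only, the two-stage prediction above (Rothermel rate of spread feeding a Huygens perimeter expansion) can be outlined as follows. The coefficients and the simplified spread formula are assumptions for demonstration, not the patented model; the real Rothermel model combines fuel-bed, moisture, wind and slope factors.

```python
import math

def rothermel_speed(wind_speed, slope, fuel_load):
    """Toy stand-in for the Rothermel rate-of-spread calculation.

    A simple monotone combination is used purely to illustrate the
    data flow from weather/terrain inputs to a spread speed (m/s);
    all coefficients here are hypothetical.
    """
    base = 0.05 * fuel_load                          # no-wind, no-slope spread
    return base * (1.0 + 0.3 * wind_speed + 0.2 * slope)

def huygens_step(perimeter, speed, dt):
    """Advance each fire-line vertex along its outward normal by speed*dt.

    `perimeter` is a closed list of (x, y) vertices ordered
    counter-clockwise; each vertex acts as a Huygens wavelet source.
    """
    n = len(perimeter)
    advanced = []
    for i in range(n):
        x0, y0 = perimeter[i - 1]
        x1, y1 = perimeter[(i + 1) % n]
        # outward normal of the chord through the two neighbours (CCW polygon)
        tx, ty = x1 - x0, y1 - y0
        norm = math.hypot(tx, ty) or 1.0
        nx, ny = ty / norm, -tx / norm
        px, py = perimeter[i]
        advanced.append((px + nx * speed * dt, py + ny * speed * dt))
    return advanced

# Predict the time-K perimeter from the K-1 analysis state.
speed = rothermel_speed(wind_speed=4.0, slope=0.1, fuel_load=1.5)
perimeter_k = huygens_step([(0, 1), (-1, 0), (0, -1), (1, 0)], speed, dt=60.0)
```

In a full implementation the spread speed would vary per vertex with local fuel and wind, and the expansion would use an elliptical wavelet rather than a uniform normal offset.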
In some embodiments of the present application, the fifth acquisition module 905 is specifically configured to: obtain a thermal image of the fire area at time K from the thermal imaging video of the fire area; determine the temperature information corresponding to each pixel in the thermal image; extract the fire field range from the thermal image according to the temperature information corresponding to each pixel and a temperature threshold; perform edge extraction on the fire field range in the thermal image to obtain the pixel positions of the fire line; and convert the pixel positions of the fire line into GPS coordinates of the fire line to obtain the observed fire-line position information at time K.
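The thresholding and edge-extraction steps above can be sketched as follows (a minimal illustration assuming the camera-specific intensity-to-temperature mapping has already been applied; the threshold value and neighbourhood rule are assumptions):

```python
import numpy as np

def extract_fire_line(thermal, temp_threshold=300.0):
    """Extract fire-line pixel positions from one thermal frame.

    `thermal` is a 2-D array of per-pixel temperatures. A pixel belongs
    to the fire line if it exceeds the threshold but has at least one
    4-neighbour below it, i.e. it lies on the edge of the fire region.
    """
    fire = thermal >= temp_threshold                     # fire-field mask
    padded = np.pad(fire, 1, mode="constant")            # False border
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &   # all four
                padded[1:-1, :-2] & padded[1:-1, 2:])    # neighbours burn
    edge = fire & ~interior                              # boundary pixels
    return np.argwhere(edge)                             # (row, col) pairs
```

The returned pixel coordinates are then handed to the coordinate-conversion step to obtain GPS positions.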
In some embodiments of the present application, there is at least one UAV, and the fifth acquisition module 905 is specifically configured to obtain multiple pixel positions of the fire line from the thermal imaging videos of the fire area captured by the at least one UAV at multiple observation points. In these embodiments, the fifth acquisition module 905 may convert the pixel positions of the fire line into GPS coordinates and obtain the observed fire-line position information at time K as follows: perform coordinate conversion on the multiple pixel positions of the fire line to obtain multiple coordinate values of the fire line in the UAV geographic coordinate system; calculate multiple observation elevation angle matrices and multiple azimuth angle matrices of the fire line according to these coordinate values; perform Kalman filter estimation of the fire-line position according to the multiple observation elevation angle matrices and azimuth angle matrices to obtain estimated coordinate values of the fire line; and convert the estimated coordinate values of the fire line into GPS coordinates to obtain the observed fire-line position information at time K.
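A greatly simplified sketch of this multi-view estimation is given below: each view's elevation/azimuth observation is projected onto flat ground, and the per-view position estimates are fused with a scalar Kalman update. This is an assumption-laden stand-in for the angle-matrix Kalman filter described above (the patent's filter operates on the angle observations directly, and real terrain would use the DEM rather than a flat plane).

```python
import math

def ray_ground_intersection(drone_pos, elevation, azimuth):
    """Project a line of sight onto the ground plane z = 0.

    `drone_pos` is (x, y, z) in a local frame; `elevation` is the
    depression angle below horizontal and `azimuth` is measured from
    the +y (north) axis, both in radians. Flat terrain is assumed.
    """
    x, y, z = drone_pos
    horiz = z / math.tan(elevation)      # horizontal range to the hit point
    return (x + horiz * math.sin(azimuth), y + horiz * math.cos(azimuth))

def fuse_observations(points, obs_var=25.0):
    """Fuse several single-view (x, y) estimates with a scalar Kalman update.

    Each view contributes an independent estimate with variance `obs_var`;
    the sequential update is equivalent to an inverse-variance weighted mean.
    """
    est, var = points[0], obs_var
    for p in points[1:]:
        gain = var / (var + obs_var)                        # Kalman gain
        est = tuple(e + gain * (o - e) for e, o in zip(est, p))
        var = (1.0 - gain) * var                            # shrinking variance
    return est
```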
In some embodiments of the present application, the fifth acquisition module 905 may convert the pixel positions of the fire line into GPS coordinates and obtain the observed fire-line position information at time K as follows: obtain the DEM geographic information of the place where the fire occurs; obtain the GPS information, attitude information and built-in parameters of the UAV; generate a virtual view of the UAV position according to the DEM geographic information and the UAV's GPS information, attitude information and built-in parameters; simulate the actual UAV imaging process from the virtual view of the UAV position to obtain a simulated image; determine the pixel coordinates of the fire line in the simulated image according to the pixel positions of the fire line; and convert the pixel coordinates of the fire line in the simulated image into GPS coordinates to obtain the observed fire-line position information at time K.
In some embodiments of the present application, the judgment module 906 is specifically configured to: calculate the deviation between the predicted fire-line position information and the observed fire-line position information at time K; judge whether the deviation converges within a target range; if the deviation does not converge within the target range, judge whether the number of completed iterations of the fire spread model is less than a maximum number of iterations; if the number of completed iterations is less than the maximum, determine that the parameters of the forest fire spread model need to be adjusted; and if the deviation converges within the target range, and/or the number of completed iterations is greater than or equal to the maximum, stop adjusting the parameters of the forest fire spread model.
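The convergence/iteration decision above amounts to the following control loop (an illustrative sketch; the deviation metric, tolerance and callback names are assumptions, not the claimed implementation):

```python
import math

def mean_deviation(pred, obs):
    """Mean distance between matched predicted and observed fire-line points."""
    return sum(math.hypot(px - ox, py - oy)
               for (px, py), (ox, oy) in zip(pred, obs)) / len(pred)

def assimilate_step(predict, observed, adjust, tol=0.1, max_iter=10):
    """Parameter-adjustment loop of the judgment and adjustment modules.

    `predict()` runs the spread model and returns the predicted fire line;
    `adjust(deviation)` tunes the model parameters. Iteration stops once
    the deviation converges within the target range `tol` or the iteration
    budget `max_iter` is exhausted.
    """
    predicted = predict()
    for _ in range(max_iter):
        deviation = mean_deviation(predicted, observed)
        if deviation <= tol:          # converged within the target range
            break
        adjust(deviation)             # adjust the spread-model parameters
        predicted = predict()         # re-run the adjusted model
    return predicted
```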
In some embodiments of the present application, the adjustment module 907 may adjust the model parameters of the forest fire spread model according to the predicted and observed fire-line position information at time K, and recalculate the predicted fire-line position information at time K with the parameter-adjusted model, as follows: calculate the deviation between the predicted fire-line position information and the observed fire-line position information at time K; adjust the forest fire spread speed at time K-1 according to a preset forest fire spread speed update coefficient matrix and the deviation; and input the adjusted forest fire spread speed at time K-1 and the fire-line state analysis value at time K-1 into the Huygens wave model to obtain the predicted fire-line position information at time K again.
In some embodiments of the present application, the adjustment module 907 may adjust the forest fire spread speed at time K-1 according to the preset forest fire spread speed update coefficient matrix and the deviation as follows: multiply the forest fire spread speed update coefficient matrix by the deviation, and add the resulting product to the forest fire spread speed at time K-1.
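This multiply-then-add correction is a one-liner; the sketch below assumes a vector-valued spread speed and deviation purely for shape concreteness:

```python
import numpy as np

def update_spread_speed(speed_k1, deviation, update_matrix):
    """Speed correction described above: the preset update coefficient
    matrix is multiplied by the deviation vector and the product is added
    to the spread speed at time K-1 (the shapes are illustrative)."""
    return speed_k1 + update_matrix @ deviation
```

The update coefficient matrix plays the role of a gain: a larger gain pulls the model speed toward the observation more aggressively on each iteration.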
In some embodiments of the present application, the data assimilation module 908 may calculate the fire-line state analysis value at time K according to the recalculated predicted fire-line position information and the observed fire-line position information at time K as follows: based on an ensemble Kalman filter algorithm, perform least-squares fitting between the recalculated predicted fire-line position information at time K and the observed fire-line position information to obtain the fire-line state analysis value at time K.
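A minimal ensemble Kalman filter analysis step is sketched below for concreteness. It assumes a direct observation operator (H = I) and uses the perturbed-observation variant; the patent's state vector, observation operator and localization choices are not specified here, so this is an illustration of the technique, not the claimed implementation.

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_var=1.0, seed=0):
    """Ensemble Kalman filter analysis: least-squares-optimal combination
    of the forecast ensemble of fire-line states with the observed
    fire-line position.

    `ensemble` is (n_members, n_state); `obs` is (n_state,). Perturbed
    observations keep the analysis spread statistically consistent.
    """
    rng = np.random.default_rng(seed)
    n = ensemble.shape[0]
    anom = ensemble - ensemble.mean(axis=0)           # forecast anomalies
    P = anom.T @ anom / (n - 1)                       # sample covariance
    R = obs_var * np.eye(obs.size)                    # observation covariance
    K = P @ np.linalg.inv(P + R)                      # Kalman gain (H = I)
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=ensemble.shape)
    return ensemble + (perturbed - ensemble) @ K.T    # analysis ensemble
```

The mean of the analysis ensemble is the fire-line state analysis value fed back into the next prediction cycle.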
It should be noted that the foregoing explanation of the UAV video-based forest fire spreading data assimilation method also applies to the UAV video-based forest fire spreading data assimilation apparatus of the embodiments of the present application; since the implementation principles are similar, details are not repeated here.
To sum up, the UAV video-based forest fire spreading data assimilation apparatus of the embodiments of the present application acquires the meteorological data and basic geographic information data of the place where the fire occurs; acquires the fire-line state analysis value at time K-1; inputs the meteorological data, the basic geographic information data and the fire-line state analysis value at time K-1 into the forest fire spread model to obtain the predicted fire-line position information at time K; acquires the thermal imaging video of the fire area captured by a UAV; obtains the observed fire-line position information at time K from the thermal imaging video; judges, according to the predicted and observed fire-line position information at time K, whether the parameters of the forest fire spread model need to be adjusted; when parameter adjustment is needed, adjusts the model parameters according to the predicted and observed fire-line position information at time K and recalculates the predicted fire-line position information at time K with the parameter-adjusted model; and calculates the fire-line state analysis value at time K according to the recalculated predicted fire-line position information and the observed fire-line position information. This apparatus uses a UAV as the front-end monitoring device and extracts the fire line in real time to obtain its position information, and proposes an assimilated forest fire spread model whose parameters can be adjusted dynamically. It effectively solves the problems that a simulation model cannot be adjusted dynamically as the simulated environment changes, that forest fire models do not adapt to non-steady-state conditions, and that environmental changes cannot be transmitted in real time, thereby improving the prediction accuracy of the model. UAVs offer high maneuverability and low cost and can return live video in real time, so the observed fire line can be updated at minute-level or even second-level intervals. The data assimilation method adopted continuously assimilates the forest fire spread model, improves the prediction accuracy of the burned area, and provides objective fire-field information for forest fire fighting work.
According to embodiments of the present application, the present application further provides an electronic device and a readable storage medium.
FIG. 10 is a block diagram of an electronic device for the UAV video-based forest fire spreading data assimilation method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementations of the application described and/or claimed herein.
As shown in FIG. 10, the electronic device includes one or more processors 1001, a memory 1002, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are interconnected by different buses and may be mounted on a common motherboard or in other ways as required. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to the interface). In other implementations, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Likewise, multiple electronic devices may be connected, with each device providing part of the necessary operations (for example, as a server array, a group of blade servers, or a multiprocessor system). In FIG. 10, one processor 1001 is taken as an example.
The memory 1002 is the non-transitory computer-readable storage medium provided by the present application. The memory stores instructions executable by at least one processor, so that the at least one processor executes the UAV video-based forest fire spreading data assimilation method provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the UAV video-based forest fire spreading data assimilation method provided by the present application.
As a non-transitory computer-readable storage medium, the memory 1002 may be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the UAV video-based forest fire spreading data assimilation method in the embodiments of the present application (for example, the first acquisition module 901, the second acquisition module 902, the third acquisition module 903, the fourth acquisition module 904, the fifth acquisition module 905, the judgment module 906, the adjustment module 907 and the data assimilation module 908 shown in FIG. 9). By running the non-transitory software programs, instructions and modules stored in the memory 1002, the processor 1001 executes the various functional applications and data processing of the server, that is, implements the UAV video-based forest fire spreading data assimilation method in the above method embodiments.
The memory 1002 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created by the use of the electronic device for UAV video-based forest fire spreading data assimilation, and the like. In addition, the memory 1002 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 1002 may optionally include memories remotely located relative to the processor 1001, and these remote memories may be connected through a network to the electronic device for UAV video-based forest fire spreading data assimilation. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The electronic device for the UAV video-based forest fire spreading data assimilation method may further include an input device 1003 and an output device 1004. The processor 1001, the memory 1002, the input device 1003 and the output device 1004 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 10.
The input device 1003 may receive input numeric or character information and generate key signal inputs related to the user settings and function control of the electronic device for UAV video-based forest fire spreading data assimilation; examples include a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, and a joystick. The output device 1004 may include a display device, auxiliary lighting devices (for example, LEDs), haptic feedback devices (for example, vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described herein may be realized in digital electronic circuit systems, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device. These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus and/or device (for example, a magnetic disk, an optical disk, a memory, or a programmable logic device (PLD)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having a display device (for example, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user can provide input to the computer. Other kinds of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback, or haptic feedback), and input from the user may be received in any form (including acoustic input, voice input, or tactile input).
The systems and techniques described herein may be implemented in a computing system that includes back-end components (for example, as a data server), or that includes middleware components (for example, an application server), or that includes front-end components (for example, a user computer with a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described herein), or in a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system may be interconnected by digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
A computer system may include clients and servers. A client and a server are generally remote from each other and usually interact through a communication network. The client-server relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship with each other. The server may be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that overcomes the defects of difficult management and weak business scalability found in traditional physical host and VPS (Virtual Private Server) services.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like mean that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, the schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of the different embodiments or examples, provided that they do not contradict each other.
Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present application.

Claims (12)

  1. An unmanned aerial vehicle (UAV) video-based forest fire spreading data assimilation method, characterized by comprising:
    acquiring meteorological data and basic geographic information data of a place where a fire occurs, and acquiring a fire-line state analysis value at time K-1 for the place where the fire occurs;
    inputting the meteorological data, the basic geographic information data and the fire-line state analysis value at time K-1 into a forest fire spread model to obtain predicted fire-line position information at time K;
    acquiring a thermal imaging video of the fire area captured by a UAV, and obtaining observed fire-line position information at time K from the thermal imaging video of the fire area;
    judging, according to the predicted fire-line position information and the observed fire-line position information at time K, whether parameters of the forest fire spread model need to be adjusted; and
    if the parameters of the forest fire spread model need to be adjusted, adjusting the model parameters of the forest fire spread model according to the predicted fire-line position information and the observed fire-line position information at time K, recalculating the predicted fire-line position information at time K with the parameter-adjusted forest fire spread model, and calculating a fire-line state analysis value at time K according to the recalculated predicted fire-line position information at time K and the observed fire-line position information.
  2. The method according to claim 1, characterized in that the forest fire spread model comprises a Rothermel model and a Huygens wave model, and inputting the meteorological data, the basic geographic information data and the fire-line state analysis value at time K-1 into the forest fire spread model to obtain the predicted fire-line position information at time K comprises:
    inputting the meteorological data of the place where the fire occurs and the basic geographic information data into the Rothermel model to obtain a forest fire spread speed at time K-1; and
    inputting the forest fire spread speed at time K-1 and the fire-line state analysis value at time K-1 into the Huygens wave model to predict the fire-line position, obtaining the predicted fire-line position information at time K.
  3. The method according to claim 1, characterized in that obtaining the observed fire-line position information at time K from the thermal imaging video of the fire area comprises:
    obtaining a thermal image of the fire area at time K from the thermal imaging video of the fire area;
    determining temperature information corresponding to each pixel in the thermal image of the fire area;
    extracting a fire field range from the thermal image of the fire area according to the temperature information corresponding to each pixel and a temperature threshold;
    performing edge extraction on the fire field range in the thermal image of the fire area to obtain pixel positions of the fire line; and
    converting the pixel positions of the fire line into GPS coordinates of the fire line to obtain the observed fire-line position information at time K.
  4. The method according to claim 3, wherein the number of unmanned aerial vehicles is at least one, and a plurality of pixel positions of the fire line are obtained from thermal imaging videos of the fire field area captured by the at least one unmanned aerial vehicle at a plurality of observation points; and wherein converting the pixel positions of the fire line into GPS coordinates of the fire line to obtain the fire line observation position information at time K comprises:
    performing coordinate transformation on the plurality of pixel positions of the fire line to obtain a plurality of coordinate values of the fire line in a UAV geographic coordinate system;
    calculating a plurality of observation elevation angle matrices and a plurality of azimuth angle matrices of the fire line from the plurality of coordinate values of the fire line in the UAV geographic coordinate system;
    performing Kalman filter estimation of the fire line position from the plurality of observation elevation angle matrices and azimuth angle matrices to obtain a coordinate estimate of the fire line; and
    converting the coordinate estimate of the fire line into GPS coordinates to obtain the fire line observation position information at time K.
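As a simplified stand-in for the Kalman filter estimate of claim 4, a fire-line point seen from several UAV observation points can be located by least-squares intersection of the bearing rays defined by each elevation/azimuth pair. The local ENU frame and the angle conventions (azimuth from north, elevation from the horizon) are assumptions of this sketch, not taken from the patent.

```python
import numpy as np

def triangulate_point(origins, elevations, azimuths):
    """Least-squares intersection of bearing rays from several UAV positions.

    origins: (N, 3) UAV positions in a local east-north-up frame (assumed).
    elevations, azimuths: per-observation angles in radians.
    Each observation defines a ray; we solve for the point minimising the
    summed squared distance to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, el, az in zip(np.asarray(origins, float), elevations, azimuths):
        d = np.array([np.cos(el) * np.sin(az),   # east component
                      np.cos(el) * np.cos(az),   # north component
                      np.sin(el)])               # up component
        P = np.eye(3) - np.outer(d, d)           # projector perpendicular to ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```

Two UAVs 10 m apart, both sighting the same point at 45 degrees either side of north, recover the point exactly.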
  5. The method according to claim 3, wherein converting the pixel positions of the fire line into GPS coordinates of the fire line to obtain the fire line observation position information at time K comprises:
    obtaining DEM geographic information of the fire location;
    obtaining GPS information, attitude information, and intrinsic parameters of the unmanned aerial vehicle;
    generating a virtual viewpoint at the UAV position from the DEM geographic information and the GPS information, attitude information, and intrinsic parameters of the unmanned aerial vehicle;
    simulating the actual UAV imaging process from the virtual viewpoint at the UAV position to obtain a simulated image;
    determining pixel coordinates of the fire line in the simulated image according to the pixel positions of the fire line; and
    converting the pixel coordinates of the fire line in the simulated image into GPS coordinates to obtain the fire line observation position information at time K.
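The geometric core of claim 5, mapping a pixel through the camera model to a ground coordinate, can be illustrated with a flat-ground ray cast. A real implementation would intersect the ray with the DEM surface instead of a plane; the yaw/pitch conventions (yaw from north, pitch downward from the horizon) and the pinhole model are assumptions of this sketch.

```python
import numpy as np

def pixel_to_ground(pixel_uv, img_wh, fov_h_deg, cam_pos, yaw_deg, pitch_deg,
                    ground_z=0.0):
    """Cast a pixel's camera ray onto the plane z = ground_z.

    pixel_uv: (u, v) pixel; img_wh: (width, height); fov_h_deg: horizontal
    field of view; cam_pos: UAV position (east, north, up) in a local frame.
    """
    w, h = img_wh
    f = (w / 2.0) / np.tan(np.radians(fov_h_deg) / 2.0)  # focal length, pixels
    # Pinhole ray in the camera frame: +x right, +y down, +z forward.
    ray_cam = np.array([pixel_uv[0] - w / 2.0, pixel_uv[1] - h / 2.0, f])
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    forward = np.array([np.sin(yaw) * np.cos(pitch),
                        np.cos(yaw) * np.cos(pitch),
                        -np.sin(pitch)])
    right = np.array([np.cos(yaw), -np.sin(yaw), 0.0])
    down = np.cross(forward, right)
    ray_enu = ray_cam[0] * right + ray_cam[1] * down + ray_cam[2] * forward
    t = (cam_pos[2] - ground_z) / -ray_enu[2]            # ray-plane intersection
    return np.asarray(cam_pos, float) + t * ray_enu
```

A camera 100 m up, pointing straight down with a 90-degree horizontal field of view, maps the centre pixel to the point directly below and the right image edge to a point 100 m east.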
  6. The method according to claim 1, wherein determining, from the fire line predicted position information at time K and the fire line observation position information, whether parameter adjustment of the forest fire spread model is required comprises:
    calculating the deviation between the fire line predicted position information at time K and the fire line observation position information;
    determining whether the deviation converges within a target range;
    if the deviation does not converge within the target range, determining whether the number of completed iterations of the fire spread model is less than a maximum number of iterations;
    if the number of completed iterations of the fire spread model is less than the maximum number of iterations, determining that parameter adjustment of the forest fire spread model is required; and
    if the deviation converges within the target range, and/or the number of completed iterations of the fire spread model is greater than or equal to the maximum number of iterations, stopping parameter adjustment of the forest fire spread model.
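The convergence and iteration-budget test of claim 6 reduces to a small decision function. The deviation metric (mean point-wise distance) and the threshold values are illustrative assumptions; the claim does not fix either.

```python
import numpy as np

def needs_parameter_adjustment(predicted, observed, iterations_done,
                               target_range=50.0, max_iterations=10):
    """Return True if the spread model's parameters should be adjusted again.

    predicted / observed: (n, 2) arrays of paired fire-line point coordinates.
    The deviation is the mean point-wise distance between the two sets.
    """
    deviation = float(np.mean(np.linalg.norm(
        np.asarray(predicted, float) - np.asarray(observed, float), axis=1)))
    if deviation <= target_range:            # converged: stop adjusting
        return False
    return iterations_done < max_iterations  # adjust only while budget remains
```

Adjustment proceeds only while the deviation stays outside the target range and the iteration budget is not yet exhausted.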
  7. The method according to claim 2, wherein adjusting the model parameters of the forest fire spread model according to the fire line predicted position information at time K and the fire line observation position information, and recalculating the fire line predicted position information at time K with the parameter-adjusted forest fire spread model, comprises:
    calculating the deviation between the fire line predicted position information at time K and the fire line observation position information;
    adjusting the forest fire spread rate at time K-1 according to a preset forest fire spread rate update coefficient matrix and the deviation; and
    inputting the adjusted forest fire spread rate at time K-1 and the fire line state analysis value at time K-1 into the Huygens wave model to obtain the fire line predicted position information at time K again.
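The Huygens-style propagation step named in claim 7 can be illustrated crudely: each fire-line vertex advances along its outward normal by rate times time step. Real Huygens wave models grow elliptical wavelets shaped by wind and slope; the circular-wavelet simplification and the counter-clockwise polygon convention are assumptions of this sketch.

```python
import numpy as np

def propagate_fire_line(points, rates, dt):
    """Advance each vertex of a closed fire-line polygon along its normal.

    points: (n, 2) closed fire-line polygon (x, y), counter-clockwise (assumed);
    rates: (n,) spread rate at each vertex; dt: time step.
    """
    p = np.asarray(points, float)
    # Tangent by central differences along the closed polygon.
    tangent = np.roll(p, -1, axis=0) - np.roll(p, 1, axis=0)
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
    # For a counter-clockwise polygon, (t_y, -t_x) points outward.
    normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)
    return p + normal * (np.asarray(rates, float) * dt)[:, None]
```

A unit diamond with unit spread rate expands uniformly to twice its size after one unit time step.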
  8. The method according to claim 7, wherein adjusting the forest fire spread rate at time K-1 according to the preset forest fire spread rate update coefficient matrix and the deviation comprises:
    multiplying the forest fire spread rate update coefficient matrix by the deviation, and adding the obtained product to the forest fire spread rate at time K-1 to obtain the adjusted forest fire spread rate.
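Claim 8's update rule is a single linear correction: new rate = old rate + coefficient matrix times deviation. The shapes below (per-vertex rate vector, square coefficient matrix) are assumptions for illustration.

```python
import numpy as np

def update_spread_rate(rate, coeff_matrix, deviation):
    """Apply the claim-8 correction to the spread rate at time K-1.

    rate: per-fire-line-point spread rates, shape (n,).
    coeff_matrix: preset update coefficient matrix, shape (n, n).
    deviation: predicted-minus-observed position error, shape (n,).
    """
    return (np.asarray(rate, float)
            + np.asarray(coeff_matrix, float) @ np.asarray(deviation, float))
```

With a diagonal gain of 0.5, a deviation of (2, 4) nudges the rates (1, 2) to (2, 4).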
  9. The method according to any one of claims 1 to 8, wherein calculating the fire line state analysis value at time K from the recalculated fire line predicted position information at time K and the fire line observation position information comprises:
    performing, based on an ensemble Kalman filter algorithm, a least-squares fit of the recalculated fire line predicted position information at time K to the fire line observation position information to obtain the fire line state analysis value at time K.
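A minimal sketch of the ensemble Kalman filter analysis step of claim 9, assuming an identity observation operator, Gaussian errors, and a stochastic (perturbed-observation) variant; none of these choices are fixed by the claim.

```python
import numpy as np

def enkf_analysis(ensemble, observation, obs_noise_std=1.0, rng=None):
    """One EnKF analysis step blending a forecast ensemble with an observation.

    ensemble: (m, n) forecast fire-line states (m members, n state variables).
    observation: (n,) observed fire-line positions (H = identity assumed).
    Returns the analysed ensemble mean: the least-squares-optimal blend of
    forecast and observation under the Gaussian assumptions above.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    m, n = ensemble.shape
    mean = ensemble.mean(axis=0)
    anom = ensemble - mean
    Pf = anom.T @ anom / (m - 1)                 # forecast covariance
    R = (obs_noise_std ** 2) * np.eye(n)         # observation covariance
    K = Pf @ np.linalg.inv(Pf + R)               # Kalman gain
    # Perturb the observation for each member (stochastic EnKF).
    perturbed = observation + rng.normal(0.0, obs_noise_std, (m, n))
    analysed = ensemble + (perturbed - ensemble) @ K.T
    return analysed.mean(axis=0)
```

With a two-member ensemble of spread {0, 2} and an observation at 10, the gain is 2/3 and the analysis lands roughly two thirds of the way toward the observation.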
  10. An unmanned aerial vehicle video-based forest fire spread data assimilation apparatus, comprising:
    a first acquisition module configured to acquire meteorological data and basic geographic information data of a fire location;
    a second acquisition module configured to acquire a fire line state analysis value at time K-1 for the fire location;
    a third acquisition module configured to input the meteorological data and basic geographic information data of the fire location and the fire line state analysis value at time K-1 into the forest fire spread model to obtain fire line predicted position information at time K;
    a fourth acquisition module configured to acquire a thermal imaging video of the fire field area captured by an unmanned aerial vehicle;
    a fifth acquisition module configured to obtain fire line observation position information at time K from the thermal imaging video of the fire field area;
    a judgment module configured to determine, from the fire line predicted position information at time K and the fire line observation position information, whether parameter adjustment of the forest fire spread model is required;
    an adjustment module configured to, when parameter adjustment of the forest fire spread model is required, adjust the model parameters of the forest fire spread model according to the fire line predicted position information at time K and the fire line observation position information, and recalculate the fire line predicted position information at time K with the parameter-adjusted forest fire spread model; and
    a data assimilation module configured to calculate a fire line state analysis value at time K from the recalculated fire line predicted position information at time K and the fire line observation position information.
  11. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the unmanned aerial vehicle video-based forest fire spread data assimilation method according to any one of claims 1 to 9.
  12. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the unmanned aerial vehicle video-based forest fire spread data assimilation method according to any one of claims 1 to 9.
PCT/CN2021/112848 2020-11-27 2021-08-16 Unmanned aerial vehicle video-based forest fire spreading data assimilation method and apparatus WO2022110912A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011367733.0A CN112464819B (en) 2020-11-27 2020-11-27 Forest fire spread data assimilation method and device based on unmanned aerial vehicle video
CN202011367733.0 2020-11-27

Publications (1)

Publication Number Publication Date
WO2022110912A1

Family

Family ID: 74809410

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/112848 WO2022110912A1 (en) 2020-11-27 2021-08-16 Unmanned aerial vehicle video-based forest fire spreading data assimilation method and apparatus

Country Status (2)

Country Link
CN (1) CN112464819B (en)
WO (1) WO2022110912A1 (en)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112464819B (en) * 2020-11-27 2024-01-12 清华大学 Forest fire spread data assimilation method and device based on unmanned aerial vehicle video
CN112947264A (en) * 2021-04-21 2021-06-11 苏州希盟科技股份有限公司 Control method and device for dispenser, electronic equipment and medium
CN113554845B (en) * 2021-06-25 2022-09-30 东莞市鑫泰仪器仪表有限公司 Be used for forest fire prevention thermal imaging device
CN114495416A (en) * 2021-12-29 2022-05-13 北京辰安科技股份有限公司 Fire monitoring method and device based on unmanned aerial vehicle and terminal equipment
CN115518316B (en) * 2022-09-20 2024-02-20 珠海安擎科技有限公司 Intelligent fire protection system based on interconnection of unmanned aerial vehicle, cloud platform and AR glasses
CN117745536A (en) * 2023-12-25 2024-03-22 东北林业大学 Forest fire large-scale live wire splicing method and system based on multiple unmanned aerial vehicles

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202976376U (en) * 2012-11-22 2013-06-05 华南农业大学 Forest fire monitoring and emergency command system based unmanned aerial vehicle
CN106021666A (en) * 2016-05-10 2016-10-12 四川大学 Mountain fire disaster early-warning method for overhead power transmission line
US9977963B1 (en) * 2017-03-03 2018-05-22 Northrop Grumman Systems Corporation UAVs for tracking the growth of large-area wildland fires
CN108763811A (en) * 2018-06-08 2018-11-06 中国科学技术大学 Dynamic data drives forest fire appealing prediction technique
CN112307884A (en) * 2020-08-19 2021-02-02 航天图景(北京)科技有限公司 Forest fire spreading prediction method based on continuous time sequence remote sensing situation data and electronic equipment
CN112464819A (en) * 2020-11-27 2021-03-09 清华大学 Forest fire spreading data assimilation method and device based on unmanned aerial vehicle video

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819926B (en) * 2012-08-24 2015-04-29 华南农业大学 Fire monitoring and warning method on basis of unmanned aerial vehicle
KR20170101516A (en) * 2016-02-29 2017-09-06 한국전자통신연구원 Apparatus and method for fire monitoring using unmanned aerial vehicle
CN109472421A (en) * 2018-11-22 2019-03-15 广东电网有限责任公司 A kind of power grid mountain fire sprawling method for early warning and device
CN109871613B (en) * 2019-02-18 2023-05-19 南京林业大学 Forest fire discrimination model acquisition method and prediction application
CN110390135B (en) * 2019-06-17 2023-04-21 北京中科锐景科技有限公司 Method for improving forest fire spreading prediction precision


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115099493A (en) * 2022-06-27 2022-09-23 东北林业大学 CNN-based forest fire spreading rate prediction method in any direction
CN115099493B (en) * 2022-06-27 2023-11-10 东北林业大学 Forest fire spreading rate prediction method in any direction based on CNN
CN115661245A (en) * 2022-10-24 2023-01-31 东北林业大学 Large-scale live wire instantaneous positioning method based on unmanned aerial vehicle
CN115671617A (en) * 2022-11-03 2023-02-03 国网冀北电力有限公司超高压分公司 Fire positioning method, device, equipment and storage medium for flexible direct current converter station
CN116952081A (en) * 2023-07-26 2023-10-27 武汉巨合科技有限公司 Aerial monitoring system and monitoring method for parameter images of drop points of fire extinguishing bomb
CN116952081B (en) * 2023-07-26 2024-04-16 武汉巨合科技有限公司 Aerial monitoring system and monitoring method for parameter images of drop points of fire extinguishing bomb
CN117152592A (en) * 2023-10-26 2023-12-01 青岛澳西智能科技有限公司 Building information and fire information visualization system and method
CN117152592B (en) * 2023-10-26 2024-01-30 青岛澳西智能科技有限公司 Building information and fire information visualization system and method
CN117163302A (en) * 2023-10-31 2023-12-05 安胜(天津)飞行模拟系统有限公司 Aircraft instrument display method, device, equipment and storage medium
CN117163302B (en) * 2023-10-31 2024-01-23 安胜(天津)飞行模拟系统有限公司 Aircraft instrument display method, device, equipment and storage medium
CN117689520A (en) * 2024-02-01 2024-03-12 青岛山科智汇信息科技有限公司 Grassland fire extinguishing bomb coverage capability evaluation method, medium and system
CN117689520B (en) * 2024-02-01 2024-05-10 青岛山科智汇信息科技有限公司 Grassland fire extinguishing bomb coverage capability evaluation method, medium and system

Also Published As

Publication number Publication date
CN112464819B (en) 2024-01-12
CN112464819A (en) 2021-03-09

Similar Documents

Publication Publication Date Title
WO2022110912A1 (en) Unmanned aerial vehicle video-based forest fire spreading data assimilation method and apparatus
US20190206073A1 (en) Aircraft information acquisition method, apparatus and device
US9185289B2 (en) Generating a composite field of view using a plurality of oblique panoramic images of a geographic area
Stipaničev et al. Advanced automatic wildfire surveillance and monitoring network
US10740875B1 (en) Displaying oblique imagery
CN111649724A (en) Visual positioning method and device based on mobile edge calculation
US10726614B2 (en) Methods and systems for changing virtual models with elevation information from real world image processing
CN109102566A (en) A kind of indoor outdoor scene method for reconstructing and its device of substation
WO2023125587A1 (en) Fire monitoring method and apparatus based on unmanned aerial vehicle
Renwick et al. Drone-based reconstruction for 3D geospatial data processing
Qiao et al. Ground target geolocation based on digital elevation model for airborne wide-area reconnaissance system
CN109977609A (en) A kind of ground high temperature heat source Infrared Image Simulation method based on true remotely-sensed data
Li et al. Verification of monocular and binocular pose estimation algorithms in vision-based UAVs autonomous aerial refueling system
JP2020008802A (en) Three-dimensional map generation device and three-dimensional map generation method
WO2023273415A1 (en) Positioning method and apparatus based on unmanned aerial vehicle, storage medium, electronic device, and product
Bradley et al. Georeferenced mosaics for tracking fires using unmanned miniature air vehicles
Qu et al. Retrieval of 30-m-resolution leaf area index from China HJ-1 CCD data and MODIS products through a dynamic Bayesian network
WO2021051220A1 (en) Point cloud fusion method, device, and system, and storage medium
US11557059B2 (en) System and method for determining position of multi-dimensional object from satellite images
Hu et al. A spatiotemporal intelligent framework and experimental platform for urban digital twins
Zheng et al. Dual LIDAR online calibration and mapping and perception system
CN116597155A (en) Forest fire spreading prediction method and system based on multi-platform collaborative computing mode
Stødle et al. High-performance visualisation of UAV sensor and image data with raster maps and topography in 3D
KR101640189B1 (en) appratus and method for setting path by using geographical information
CN114359425A (en) Method and device for generating ortho image, and method and device for generating ortho exponential graph

Legal Events

Code  Description
121   EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21896417; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: DE)
122   EP: PCT application non-entry in European phase (Ref document number: 21896417; Country of ref document: EP; Kind code of ref document: A1)