CN116385239A - Emergency management method based on dynamic perception fusion of disaster site information - Google Patents

Emergency management method based on dynamic perception fusion of disaster site information

Info

Publication number
CN116385239A
Authority
CN
China
Prior art keywords
disaster
time
data
information
disaster site
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310361774.6A
Other languages
Chinese (zh)
Inventor
洪赢政
薛林
高浩翔
孙青松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Fire Research Institute of MEM
Original Assignee
Shanghai Fire Research Institute of MEM
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Fire Research Institute of MEM
Priority to CN202310361774.6A
Publication of CN116385239A
Legal status: Pending

Classifications

    • G06Q50/265: Personal security, identity or safety
    • G06F18/213: Feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/251: Fusion techniques of input or preprocessed data
    • G06F18/253: Fusion techniques of extracted features
    • G06N3/0442: Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N3/045: Combinations of networks
    • G06N3/0455: Auto-encoder networks; encoder-decoder networks
    • G06N3/0464: Convolutional networks [CNN, ConvNet]
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Business, Economics & Management (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Computer Security & Cryptography (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Primary Health Care (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Alarm Systems (AREA)
  • Emergency Alarm Devices (AREA)

Abstract

Disaster site emergency management refers to organizing and coordinating emergency resources when sudden events such as natural disasters and accidents occur, and carrying out emergency rescue and post-disaster recovery, so as to improve the efficiency of on-site rescue and recovery and minimize the casualties and property losses caused by disasters. However, current on-site sensing technology and monitoring equipment are limited in resources, and on-site personnel are insufficient, so disaster-site information collection and processing are inefficient and can hardly meet the needs of disaster emergency management. The invention therefore provides an emergency management method based on dynamic perception fusion of disaster site information: it integrates the Internet of Things, big data, artificial intelligence, and related technologies to perform real-time, dynamic, and interactive perception fusion of data, generates a disaster site situation awareness map, and uses the map for data analysis and timely early warning, thereby achieving accurate disaster handling, intelligent rescue command, and efficient dispatching.

Description

Emergency management method based on dynamic perception fusion of disaster site information
Technical Field
The invention belongs to the field of disaster site emergency management, and particularly relates to an emergency management method based on dynamic perception fusion of disaster site information.
Background
Disaster site emergency management consists of the management activities of organizing and coordinating emergency resources and carrying out emergency rescue and post-disaster recovery when sudden events such as natural disasters and accidents occur. In recent years disasters have occurred frequently, posing a great threat to people's lives, property, and social stability, which makes disaster site emergency management particularly important. On the one hand, it improves the efficiency of on-site rescue and recovery and minimizes the casualties and property losses caused by disasters; on the other hand, it strengthens the sense of social responsibility of governments, enterprises, institutions, and other parties, helps them better fulfil that responsibility, and reduces the negative impact of disasters. In addition, strengthening disaster site emergency management helps protect the environment, maintain national security, improve rescue efficiency, and save lives and property to the greatest extent, and is therefore important work. Disaster site emergency management mainly comprises the links of disaster early warning, emergency response, resource allocation, rescue, and recovery, and still has shortcomings. Information perception at current disaster sites has many deficiencies: limited deployment of sensor technology and monitoring equipment means that disaster areas cannot be covered comprehensively and certain key information cannot be acquired in time; the information collection and processing capabilities of on-site personnel are limited, so disaster information cannot be identified and fed back timely and accurately; information-island problems between departments are serious, causing information silos and duplicated work; and intelligent, automated technical support is lacking, so information collection and processing are inefficient and can hardly meet the needs of disaster emergency management.
Therefore, an emergency management method based on dynamic perception fusion of disaster site information is provided: it integrates the Internet of Things, big data, artificial intelligence, and related technologies to perform real-time, dynamic, and interactive perception fusion of data and generate a disaster site situation awareness map, which is then used for data analysis and timely early warning, achieving accurate disaster handling, intelligent rescue command, and efficient dispatching.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, and provides an emergency management method based on dynamic perception fusion of disaster site information, which adopts the following technical scheme:
S1, collecting human-machine-object ternary-space perception data of the disaster site for emergency management command decisions. The sensors may be meteorological, optical, and audio sensors deployed on site specifically for disaster management, or wearable sensing devices carried by on-scene rescue personnel; these devices mainly collect real-time on-site information such as disaster dynamics, video, and environmental indicators, as well as geographic and meteorological information. The social media applications may include Weibo, WeChat, Douyin, and other platforms on which users share audio-visual multimedia content related to the disaster scene; information collected from the social media applications is filtered based on disaster-related keywords or semantic matching.
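As an illustrative, non-limiting sketch of the social-media filtering described in step S1 (the post structure and the keyword list below are assumptions for illustration, not part of the claimed method), keyword-based filtering could look like this:

```python
# Hedged sketch: keyword filtering of collected social-media posts.
# Keywords and post fields are illustrative; semantic matching would
# replace the simple substring test below.
DISASTER_KEYWORDS = {"earthquake", "flood", "fire", "landslide", "collapse", "trapped"}

def filter_disaster_posts(posts):
    """Keep only posts whose text mentions at least one disaster keyword."""
    relevant = []
    for post in posts:
        text = post.get("text", "").lower()
        if any(keyword in text for keyword in DISASTER_KEYWORDS):
            relevant.append(post)
    return relevant

posts = [
    {"user": "a", "text": "Building collapse near the river, people trapped"},
    {"user": "b", "text": "Nice weather today"},
]
print(filter_disaster_posts(posts))  # keeps only the first post
```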
S2, preprocessing the collected multi-source heterogeneous disaster-site data, including: filtering the collected data to remove duplicates and information irrelevant to the disaster site; removing sensing noise caused by environmental factors such as wind, rain, or other interference sources using signal-processing techniques such as filtering and smoothing; and normalizing the multi-source heterogeneous data into a common format using techniques such as scaling or standardization.
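The preprocessing of step S2 can be illustrated with a minimal sketch; the window size and the consecutive-duplicate heuristic are illustrative assumptions:

```python
# Hedged sketch of step-S2 preprocessing: de-duplication, smoothing,
# and min-max normalization of one sensor channel.
import numpy as np

def preprocess(series, window=5):
    """De-duplicate, smooth with a moving average, and scale to [0, 1]."""
    x = np.asarray(series, dtype=float)
    # Drop consecutive duplicate readings (a simple stand-in for the
    # duplicate-removal step; real deduplication would use record IDs).
    keep = np.concatenate(([True], np.diff(x) != 0))
    x = x[keep]
    kernel = np.ones(window) / window
    smoothed = np.convolve(x, kernel, mode="same")   # simple low-pass filter
    lo, hi = smoothed.min(), smoothed.max()
    return (smoothed - lo) / (hi - lo + 1e-12)       # min-max normalization

print(preprocess([21.0, 21.0, 20.8, 35.0, 21.2, 21.1, 20.9]))
```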
S3, designing a temporal feature extraction module to extract time features from the multi-source heterogeneous data, and a spatial feature extraction module to extract spatial features from each factor layer after temporal fusion.
S4, the temporal features extracted in step S3 from the multiple perception data sources at the disaster site inevitably contain redundant information, which makes the network difficult to fit. A temporal feature fusion network is therefore designed: it receives the features of the perceived time-series information extracted by the temporal feature extraction module, handles the long-term dependencies of the time data, memorizes the disaster-site feature information, compresses the temporal features of the disaster-site perception information, and fuses the temporal and spatial features into the same dimension.
S5, designing a disaster-site multi-source heterogeneous perception information spatial feature fusion module comprising a deep belief network and a deep convolutional neural network. The module first flattens all the shallow features of each type of image element into one-dimensional vectors as input variables of the deep belief network; the fused feature data and the original data are then used as inputs to the deep convolutional neural network. The two networks are trained simultaneously: after feature extraction, the deep belief network outputs a one-dimensional feature vector and the deep convolutional neural network outputs a two-dimensional feature matrix, and the output features of the two networks are combined into a new feature matrix and fed into a logistic regression classifier for discrimination.
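A minimal, non-limiting sketch of the dual-branch fusion of step S5 is given below; the deep belief network is approximated by a plain fully connected stack (a true DBN would be pre-trained layer-wise with restricted Boltzmann machines), and all layer sizes are illustrative assumptions:

```python
# Hedged sketch of the step-S5 dual-branch spatial feature fusion.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, vec_dim=64, n_classes=4):
        super().__init__()
        self.dbn_branch = nn.Sequential(            # 1-D shallow-feature branch
            nn.Linear(vec_dim, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
        )
        self.cnn_branch = nn.Sequential(            # 2-D raw-data branch
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        # logistic-regression classifier over the concatenated features
        self.classifier = nn.Linear(32 + 8 * 4 * 4, n_classes)

    def forward(self, shallow_vec, raw_image):
        v = self.dbn_branch(shallow_vec)            # one-dimensional feature vector
        m = self.cnn_branch(raw_image)              # flattened two-dimensional features
        return self.classifier(torch.cat([v, m], dim=1))

net = FusionNet()
logits = net(torch.randn(2, 64), torch.randn(2, 1, 16, 16))
print(logits.shape)  # torch.Size([2, 4])
```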
S6, after the disaster-site spatio-temporal features are fused in step S5, the deep convolutional neural network outputs a two-dimensional feature map and the deep belief network outputs a one-dimensional feature vector, and the two outputs are reconstructed into a two-dimensional feature matrix. During backward error propagation, the fused disaster-site feature matrix is split into a one-dimensional feature vector and an independent feature map, which participate respectively in the parameter optimization of the deep belief network and of the deep convolutional neural network in step S5. Considering the two networks and the dimensionality of the data, the invention selects a Bayesian algorithm and random search as the parameter optimization algorithms of the model. A storage structure is designed for parameter transfer, and the parameters produced by the combined action of random search and Bayesian optimization are passed to the next layer of the network, realizing parameter optimization of each layer.
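The hyperparameter search of step S6 can be sketched as follows; the objective function is a stand-in for one training-and-validation run of the fusion network, the search ranges are assumptions, and a Bayesian optimizer (e.g. a Gaussian process fitted over past trials) would refine the purely random proposals shown here:

```python
# Hedged sketch of the step-S6 search: random-search phase only.
import random

def validation_loss(lr, hidden):
    """Placeholder objective standing in for a train-and-validate run."""
    return (lr - 1e-3) ** 2 + (hidden - 64) ** 2 * 1e-6

best = None
for _ in range(50):                       # random-search proposals
    trial = {"lr": 10 ** random.uniform(-5, -1),
             "hidden": random.randint(16, 256)}
    loss = validation_loss(**trial)
    if best is None or loss < best[0]:
        best = (loss, trial)

print("best hyperparameters:", best[1])
```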
S7, for the disaster-site spatio-temporal feature extraction and fusion network of steps S3, S4, S5, and S6, a superposition of the mean squared error loss and the cross-entropy loss is used as the loss function of the network, expressed as follows:
L = (1/n) Σ_{k=1}^{n} (y_k - a_k)^2 - (1/n) Σ_{k=1}^{n} [ y_k ln(a_k) + (1 - y_k) ln(1 - a_k) ]
a=f(w·x+b)
where L is the loss function value, y is the actual label value of the landslide samples, a is the model prediction, x is the model input, n is the total number of landslide samples, f is the activation function, and w and b are the network parameters.
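A minimal sketch of this superposed loss, following the variable definitions above; the equal weighting of the two terms is an assumption:

```python
# Hedged sketch of the step-S7 loss: mean-squared-error plus binary
# cross-entropy over labels y and predictions a.
import numpy as np

def combined_loss(y, a, eps=1e-12):
    y, a = np.asarray(y, float), np.asarray(a, float)
    mse = np.mean((y - a) ** 2)
    bce = -np.mean(y * np.log(a + eps) + (1 - y) * np.log(1 - a + eps))
    return mse + bce

print(combined_loss([1, 0, 1], [0.9, 0.2, 0.7]))
```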
S8, based on the spatio-temporal features reconstructed in step S7, performing disaster state judgment and classification with deep learning algorithms: a convolutional neural network classifies the image data, a long short-term memory network classifies the time-series data, and a Transformer-based language model performs disaster classification, sentiment analysis, entity recognition, and similar tasks on the text data. A disaster site situation awareness map is constructed from the state judgment and classification results; it contains multi-dimensional information such as maps, video feeds, and sensor data, and adopts virtual reality, three-dimensional visualization, and related technologies to improve the readability and understandability of the information.
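The per-modality classification of step S8 can be sketched as a simple dispatch; the stand-in models below are untrained and all sizes are illustrative assumptions:

```python
# Hedged sketch of step-S8 dispatch: images to a CNN, sensor time series
# to an LSTM, text token embeddings to a Transformer encoder.
import torch
import torch.nn as nn

image_model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 5))
sequence_model = nn.LSTM(input_size=6, hidden_size=32, batch_first=True)
text_model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True), num_layers=2)

def classify(modality, batch):
    if modality == "image":
        return image_model(batch)                  # class logits
    if modality == "timeseries":
        out, _ = sequence_model(batch)
        return out[:, -1]                          # last hidden state as features
    if modality == "text":
        return text_model(batch).mean(dim=1)       # pooled token features
    raise ValueError(modality)

print(classify("image", torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 5])
```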
S9, based on the disaster site situation awareness map constructed in step S8, establishing a disaster information database containing information such as disaster type, disaster degree, affected area, casualties, and material losses; determining the emergency situation at the disaster site, including the scale of the disaster, the distribution of affected people, and material demand; and, on the basis of analyzing the disaster-site information, formulating disaster emergency management countermeasures, including emergency decisions such as personnel scheduling, material allocation, and rescue path planning.
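A minimal sketch of the disaster information database of step S9; the column names mirror the fields listed above, while the schema itself is an illustrative assumption:

```python
# Hedged sketch of a step-S9 disaster information table in SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE disaster_info (
        id              INTEGER PRIMARY KEY,
        disaster_type   TEXT NOT NULL,      -- e.g. flood, fire, earthquake
        degree          TEXT,               -- severity level
        area_km2        REAL,               -- affected area
        casualties      INTEGER,
        material_loss   REAL,               -- estimated loss
        recorded_at     TEXT                -- ISO-8601 timestamp
    )
""")
conn.execute(
    "INSERT INTO disaster_info (disaster_type, degree, area_km2, casualties, "
    "material_loss, recorded_at) VALUES (?, ?, ?, ?, ?, ?)",
    ("flood", "severe", 12.5, 3, 2.4e6, "2023-04-07T08:30:00"),
)
for row in conn.execute("SELECT disaster_type, casualties FROM disaster_info"):
    print(row)
```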
Preferably, in step S3, the space-time feature extraction of the disaster site multi-source heterogeneous sensing information specifically includes:
In order to fuse the disaster-site environment perception with the wearable perception information carried on the human body, the invention constructs a multi-source heterogeneous information temporal feature extraction module: it first completes the extraction of time-series features with a multi-input multi-output CNN, and then compresses the spatial data of the time series with a multi-input single-output LSTM to generate a spatio-temporal high-dimensional feature map of the disaster-site perception information. On this basis, an up-sampling layer is constructed to generate a feature map of the same size as the input data.
The invention also designs a disaster-site multi-source heterogeneous perception information spatial feature extraction module: the given multi-source heterogeneous perception data are assembled into (source, time, length, width) volumes for three-dimensional convolution and fed into a deep convolutional neural network for spatial feature extraction; features over the spatial and temporal dimensions are computed simultaneously, with several consecutive data frames and three-dimensional convolution kernels stacked to form multi-dimensional data. The value at position (x, y, z) of the j-th feature map in the i-th layer of the deep convolutional neural network is given by the following equation.
v_{ij}^{xyz} = f( b_{ij} + Σ_m Σ_{p=0}^{P_i-1} Σ_{q=0}^{Q_i-1} Σ_{r=0}^{R_i-1} w_{ijm}^{pqr} · v_{(i-1)m}^{(x+p)(y+q)(z+r)} )
where v_{ij}^{xyz} is the value at (x, y, z) of the j-th feature map in the i-th layer; f is the activation function; b_{ij} is the bias of this feature map; P_i and Q_i are the spatial sizes of the three-dimensional convolution kernel and R_i is its size along the time dimension; w_{ijm}^{pqr} is the value at (p, q, r) of the kernel connected to the m-th feature map of the previous layer; and v_{(i-1)m}^{(x+p)(y+q)(z+r)} is the value at (x+p, y+q, z+r) of the m-th feature map in layer i-1.
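A minimal sketch of this three-dimensional convolution; all tensor sizes are illustrative assumptions:

```python
# Hedged sketch of the 3-D convolution in the equation above: input
# arranged as (batch, channels=sources, time, height, width), so one
# kernel spans the temporal and both spatial dimensions.
import torch
import torch.nn as nn

conv3d = nn.Conv3d(in_channels=4,          # number of perception sources
                   out_channels=16,        # feature maps j of the layer
                   kernel_size=(3, 3, 3))  # (R_i, P_i, Q_i): time x height x width

x = torch.randn(1, 4, 8, 32, 32)           # (batch, source, time, length, width)
features = torch.tanh(conv3d(x))           # activation over b_ij + weighted sums
print(features.shape)                      # torch.Size([1, 16, 6, 30, 30])
```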
Preferably, the process of updating the disaster site multi-source heterogeneous information time feature fusion network in step S4 includes three steps:
(1) Establish the temporal fusion data, build the forget gate, and input the disaster-site related factor feature data. The forget gate is calculated as follows.
z_f = σ(W_f · [h_{t-1}, x_t] + b_f)
where z_f is the forget gate activation value, σ is the sigmoid function, W_f is the forget gate weight matrix, h_{t-1} is the output value of the temporal fusion data and factor feature data at the previous time step, x_t is the input value of the temporal fusion data and factor feature data at the current time step, and b_f is the forget gate bias term.
The forget gate determines which information from the cell state C_{t-1} of the previous time step is kept or discarded. The gate reads the output value h_{t-1} of the temporal fusion data and factor feature data at time t-1, the input value x_t at the current time step, and the forget gate bias term b_f, and computes the forget gate activation value z_f through the sigmoid function. The output activation value lies between 0 and 1 and represents how much of the previous cell state C_{t-1} is retained: 1 means fully retained into C_t, and 0 means fully discarded.
(2) Compute the state information C_t of the temporal fusion data and factor feature data at time t. This step first determines the cell state C_t at the current time step, which involves the memory gate activation value z_i and the memory cell input state z. The calculation formulas are as follows:
z_i = σ(W_i · [h_{t-1}, x_t] + b_i)
z = tanh(W · [h_{t-1}, x_t] + b)
C_t = z_f × C_{t-1} + z_i × z
where z_i is the memory gate activation value, z is the memory cell input state, σ is the sigmoid function, W_i is the memory gate weight matrix, W is the weight matrix of the memory cell input state, b_i is the memory gate bias term, b is the bias term of the memory cell input state, tanh is the hyperbolic tangent function, C_t is the state information of the temporal fusion data and factor feature data at time t, z_f is the forget gate activation value, and C_{t-1} is the state information of the temporal fusion data and factor feature data at time t-1.
(3) Compute the output state of the temporal fusion data and factor feature data at the current time step (time t). z_o is the output gate, which controls the degree to which the state information C_t at time t is transferred to h_t. The calculation formulas are as follows:
z_o = σ(W_o · [h_{t-1}, x_t] + b_o)
h_t = z_o × tanh(C_t)
where W_o is the output gate weight matrix, b_o is the output gate bias term, and tanh is the hyperbolic tangent function.
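One update step of this gated temporal fusion cell, transcribing the equations above directly; dimensions and initialisation are illustrative assumptions:

```python
# Hedged sketch of one LSTM-style fusion step (z_f, z_i, z, C_t, z_o, h_t).
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x_t, h_prev, C_prev, params):
    W_f, b_f, W_i, b_i, W, b, W_o, b_o = params
    concat = np.concatenate([h_prev, x_t])
    z_f = sigmoid(W_f @ concat + b_f)      # forget gate
    z_i = sigmoid(W_i @ concat + b_i)      # memory (input) gate
    z = np.tanh(W @ concat + b)            # memory cell input state
    C_t = z_f * C_prev + z_i * z           # updated cell state
    z_o = sigmoid(W_o @ concat + b_o)      # output gate
    h_t = z_o * np.tanh(C_t)               # output state
    return h_t, C_t

hidden, inputs = 8, 4
rng = np.random.default_rng(0)
params = [rng.standard_normal((hidden, hidden + inputs)) if i % 2 == 0
          else np.zeros(hidden) for i in range(8)]
h, C = lstm_step(rng.standard_normal(inputs), np.zeros(hidden), np.zeros(hidden), params)
print(h.shape, C.shape)  # (8,) (8,)
```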
The beneficial effects of the invention are as follows: the invention integrates the Internet of Things, big data, artificial intelligence, and related technologies to perform real-time, dynamic, and interactive perception fusion of data and generate a disaster site situation awareness map, and provides an emergency management method based on dynamic perception fusion of disaster site information that helps achieve accurate disaster handling, intelligent rescue command, and efficient dispatching.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are only some embodiments of the present invention; other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Wherein:
FIG. 1 is a drawing of the abstract of the specification of the present invention;
FIG. 2 is a flow chart of extracting spatiotemporal features of multi-source heterogeneous perception information in a disaster scene in an embodiment of the invention;
FIG. 3 is a schematic diagram of a disaster scene multi-source heterogeneous information time feature fusion network according to an embodiment of the present invention;
FIG. 4 is a flow chart of a process for updating a multi-source heterogeneous information time feature fusion network in a disaster scene according to the method of the invention;
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Example 1
The invention provides an emergency management method based on dynamic perception fusion of disaster site information, which comprises the following steps:
S1, collecting human-machine-object ternary-space perception data of the disaster site for emergency management command decisions. The sensors may be meteorological, optical, and audio sensors deployed on site specifically for disaster management, or wearable sensing devices carried by on-scene rescue personnel; these devices mainly collect real-time on-site information such as disaster dynamics, video, and environmental indicators, as well as geographic and meteorological information. The social media applications may include Weibo, WeChat, Douyin, and other platforms on which users share audio-visual multimedia content related to the disaster scene; information collected from the social media applications is filtered based on disaster-related keywords or semantic matching.
S2, preprocessing the collected multi-source heterogeneous disaster-site data, including: filtering the collected data to remove duplicates and information irrelevant to the disaster site; removing sensing noise caused by environmental factors such as wind, rain, or other interference sources using signal-processing techniques such as filtering and smoothing; and normalizing the multi-source heterogeneous data into a common format using techniques such as scaling or standardization.
S3, designing a temporal feature extraction module to extract time features from the multi-source heterogeneous data, and a spatial feature extraction module to extract spatial features from each factor layer after temporal fusion.
S4, the temporal features extracted in step S3 from the multiple perception data sources at the disaster site inevitably contain redundant information, which makes the network difficult to fit. A temporal feature fusion network is therefore designed: it receives the features of the perceived time-series information extracted by the temporal feature extraction module, handles the long-term dependencies of the time data, memorizes the disaster-site feature information, compresses the temporal features of the disaster-site perception information, and fuses the temporal and spatial features into the same dimension.
S5, designing a disaster-site multi-source heterogeneous perception information spatial feature fusion module comprising a deep belief network and a deep convolutional neural network. The module first flattens all the shallow features of each type of image element into one-dimensional vectors as input variables of the deep belief network; the fused feature data and the original data are then used as inputs to the deep convolutional neural network. The two networks are trained simultaneously: after feature extraction, the deep belief network outputs a one-dimensional feature vector and the deep convolutional neural network outputs a two-dimensional feature matrix, and the output features of the two networks are combined into a new feature matrix and fed into a logistic regression classifier for discrimination.
S6, after the disaster-site spatio-temporal features are fused in step S5, the deep convolutional neural network outputs a two-dimensional feature map and the deep belief network outputs a one-dimensional feature vector, and the two outputs are reconstructed into a two-dimensional feature matrix. During backward error propagation, the fused disaster-site feature matrix is split into a one-dimensional feature vector and an independent feature map, which participate respectively in the parameter optimization of the deep belief network and of the deep convolutional neural network in step S5. Considering the two networks and the dimensionality of the data, the invention selects a Bayesian algorithm and random search as the parameter optimization algorithms of the model. A storage structure is designed for parameter transfer, and the parameters produced by the combined action of random search and Bayesian optimization are passed to the next layer of the network, realizing parameter optimization of each layer.
S7, for the disaster-site spatio-temporal feature extraction and fusion network of steps S3, S4, S5, and S6, a superposition of the mean squared error loss and the cross-entropy loss is used as the loss function of the network, expressed as follows:
L = (1/n) Σ_{k=1}^{n} (y_k - a_k)^2 - (1/n) Σ_{k=1}^{n} [ y_k ln(a_k) + (1 - y_k) ln(1 - a_k) ]
a=f(w·x+b)
where L is the loss function value, y is the actual label value of the landslide samples, a is the model prediction, x is the model input, n is the total number of landslide samples, f is the activation function, and w and b are the network parameters.
S8, based on the spatio-temporal features reconstructed in step S7, performing disaster state judgment and classification with deep learning algorithms: a convolutional neural network classifies the image data, a long short-term memory network classifies the time-series data, and a Transformer-based language model performs disaster classification, sentiment analysis, entity recognition, and similar tasks on the text data. A disaster site situation awareness map is constructed from the state judgment and classification results; it contains multi-dimensional information such as maps, video feeds, and sensor data, and adopts virtual reality, three-dimensional visualization, and related technologies to improve the readability and understandability of the information.
S9, based on the disaster site situation awareness map constructed in step S8, establishing a disaster information database containing information such as disaster type, disaster degree, affected area, casualties, and material losses; determining the emergency situation at the disaster site, including the scale of the disaster, the distribution of affected people, and material demand; and, on the basis of analyzing the disaster-site information, formulating disaster emergency management countermeasures, including emergency decisions such as personnel scheduling, material allocation, and rescue path planning.
Example two
On the basis of the first embodiment, as shown in fig. 2, the spatio-temporal feature extraction of the disaster-site multi-source heterogeneous perception information in step S3 of the present invention is specifically as follows: a multi-source heterogeneous information temporal feature extraction module is constructed, which first completes the extraction of time-series features with a multi-input multi-output CNN, and then compresses the spatial data of the time series with a multi-input single-output LSTM to generate a spatio-temporal high-dimensional feature map of the disaster-site perception information. On this basis, an up-sampling layer is constructed to generate a feature map of the same size as the input data.
The invention also designs a disaster-site multi-source heterogeneous perception information spatial feature extraction module: the given multi-source heterogeneous perception data are assembled into (source, time, length, width) volumes for three-dimensional convolution and fed into a deep convolutional neural network for spatial feature extraction; features over the spatial and temporal dimensions are computed simultaneously, with several consecutive data frames and three-dimensional convolution kernels stacked to form multi-dimensional data. The value at position (x, y, z) of the j-th feature map in the i-th layer of the deep convolutional neural network is given by the following equation.
v_{ij}^{xyz} = f( b_{ij} + Σ_m Σ_{p=0}^{P_i-1} Σ_{q=0}^{Q_i-1} Σ_{r=0}^{R_i-1} w_{ijm}^{pqr} · v_{(i-1)m}^{(x+p)(y+q)(z+r)} )
where v_{ij}^{xyz} is the value at (x, y, z) of the j-th feature map in the i-th layer; f is the activation function; b_{ij} is the bias of this feature map; P_i and Q_i are the spatial sizes of the three-dimensional convolution kernel and R_i is its size along the time dimension; w_{ijm}^{pqr} is the value at (p, q, r) of the kernel connected to the m-th feature map of the previous layer; and v_{(i-1)m}^{(x+p)(y+q)(z+r)} is the value at (x+p, y+q, z+r) of the m-th feature map in layer i-1.
Example III
On the basis of the first and second embodiments, as shown in fig. 3 and fig. 4, the process of updating the disaster-site multi-source heterogeneous information temporal feature fusion network in the method of the present invention comprises the following three steps:
(1) Input the disaster-site related factor feature data, establish the temporal fusion data, and build the forget gate. The forget gate is calculated as follows.
z_f = σ(W_f · [h_{t-1}, x_t] + b_f)
where z_f is the forget gate activation value, σ is the sigmoid function, W_f is the forget gate weight matrix, h_{t-1} is the output value of the temporal fusion data and factor feature data at the previous time step, x_t is the input value of the temporal fusion data and factor feature data at the current time step, and b_f is the forget gate bias term.
The forget gate determines which information from the cell state C_{t-1} of the previous time step is kept or discarded. The gate reads the output value h_{t-1} of the temporal fusion data and factor feature data at time t-1, the input value x_t at the current time step, and the forget gate bias term b_f, and computes the forget gate activation value z_f through the sigmoid function. The output activation value lies between 0 and 1 and represents how much of the previous cell state C_{t-1} is retained: 1 means fully retained into C_t, and 0 means fully discarded.
(2) Compute the state information C_t of the temporal fusion data and factor feature data at time t. This step first determines the cell state C_t at the current time step, which involves the memory gate activation value z_i and the memory cell input state z. The calculation formulas are as follows:
z_i = σ(W_i · [h_{t-1}, x_t] + b_i)
z = tanh(W · [h_{t-1}, x_t] + b)
C_t = z_f × C_{t-1} + z_i × z
where z_i is the memory gate activation value, z is the memory cell input state, σ is the sigmoid function, W_i is the memory gate weight matrix, W is the weight matrix of the memory cell input state, b_i is the memory gate bias term, b is the bias term of the memory cell input state, tanh is the hyperbolic tangent function, C_t is the state information of the temporal fusion data and factor feature data at time t, z_f is the forget gate activation value, and C_{t-1} is the state information of the temporal fusion data and factor feature data at time t-1.
(3) Compute the output state of the temporal fusion data and factor feature data at the current time step (time t). z_o is the output gate, which controls the degree to which the state information C_t at time t is transferred to h_t. The calculation formulas are as follows:
z_o = σ(W_o · [h_{t-1}, x_t] + b_o)
h_t = z_o × tanh(C_t)
where W_o is the output gate weight matrix, b_o is the output gate bias term, and tanh is the hyperbolic tangent function.
In summary, the embodiments of the present invention provide an emergency management method based on dynamic perception fusion of disaster site information, which integrates the Internet of Things, big data, artificial intelligence, and related technologies to perform real-time, dynamic, and interactive perception fusion of data and generate a disaster site situation awareness map; on this basis, scientific, accurate, and efficient digital command and dispatch are implemented, realizing holographic perception of operational information such as on-site personnel, equipment, materials, and the operating environment.
The above examples represent only a few embodiments of the present application; their description is relatively specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that various modifications and improvements can be made by those of ordinary skill in the art without departing from the concept of the present application, and all of these fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (3)

1. An emergency management method based on dynamic perception fusion of disaster site information, characterized by comprising the following steps:
S1, collecting human-machine-object ternary-space perception data of the disaster site for emergency management command decisions. The sensors may be meteorological, optical, and audio sensors deployed on site specifically for disaster management, or wearable sensing devices carried by on-scene rescue personnel; these devices mainly collect real-time on-site information such as disaster dynamics, video, and environmental indicators, as well as geographic and meteorological information. The social media applications may include Weibo, WeChat, Douyin, and other platforms on which users share audio-visual multimedia content related to the disaster scene; information collected from the social media applications is filtered based on disaster-related keywords or semantic matching.
S2, preprocessing the collected multi-source heterogeneous disaster-site data, including: filtering the collected data to remove duplicates and information irrelevant to the disaster site; removing sensing noise caused by environmental factors such as wind, rain, or other interference sources using signal-processing techniques such as filtering and smoothing; and normalizing the multi-source heterogeneous data into a common format using techniques such as scaling or standardization.
S3, designing a temporal feature extraction module to extract time features from the multi-source heterogeneous data, and a spatial feature extraction module to extract spatial features from each factor layer after temporal fusion.
S4, the temporal features extracted in step S3 from the multiple perception data sources at the disaster site inevitably contain redundant information, which makes the network difficult to fit. A temporal feature fusion network is therefore designed: it receives the features of the perceived time-series information extracted by the temporal feature extraction module, handles the long-term dependencies of the time data, memorizes the disaster-site feature information, compresses the temporal features of the disaster-site perception information, and fuses the temporal and spatial features into the same dimension.
S5, designing a disaster-site multi-source heterogeneous perception information spatial feature fusion module comprising a deep belief network and a deep convolutional neural network. The module first flattens all the shallow features of each type of image element into one-dimensional vectors as input variables of the deep belief network; the fused feature data and the original data are then used as inputs to the deep convolutional neural network. The two networks are trained simultaneously: after feature extraction, the deep belief network outputs a one-dimensional feature vector and the deep convolutional neural network outputs a two-dimensional feature matrix, and the output features of the two networks are combined into a new feature matrix and fed into a logistic regression classifier for discrimination.
S6, after the disaster-site spatio-temporal features are fused in step S5, the deep convolutional neural network outputs a two-dimensional feature map and the deep belief network outputs a one-dimensional feature vector, and the two outputs are reconstructed into a two-dimensional feature matrix. During backward error propagation, the fused disaster-site feature matrix is split into a one-dimensional feature vector and an independent feature map, which participate respectively in the parameter optimization of the deep belief network and of the deep convolutional neural network in step S5. Considering the two networks and the dimensionality of the data, a Bayesian algorithm and random search are selected as the parameter optimization algorithms of the model. A storage structure is designed for parameter transfer, and the parameters produced by the combined action of random search and Bayesian optimization are passed to the next layer of the network, realizing parameter optimization of each layer.
S7, for the disaster-site spatio-temporal feature extraction and fusion network of steps S3, S4, S5, and S6, a superposition of the mean squared error loss and the cross-entropy loss is used as the loss function of the network, expressed as follows:
L = (1/n) Σ_{k=1}^{n} (y_k - a_k)^2 - (1/n) Σ_{k=1}^{n} [ y_k ln(a_k) + (1 - y_k) ln(1 - a_k) ]
a=f(w·x+b)
where L is the loss function value, y is the actual label value of the landslide samples, a is the model prediction, x is the model input, n is the total number of landslide samples, f is the activation function, and w and b are the network parameters.
S8, based on the spatio-temporal features reconstructed in step S7, performing disaster state judgment and classification with deep learning algorithms: a convolutional neural network classifies the image data, a long short-term memory network classifies the time-series data, and a Transformer-based language model performs disaster classification, sentiment analysis, entity recognition, and similar tasks on the text data. A disaster site situation awareness map is constructed from the state judgment and classification results; it contains multi-dimensional information such as maps, video feeds, and sensor data, and adopts virtual reality, three-dimensional visualization, and related technologies to improve the readability and understandability of the information.
S9, based on the disaster site situation awareness map constructed in step S8, establishing a disaster information database containing information such as disaster type, disaster degree, affected area, casualties, and material losses; determining the emergency situation at the disaster site, including the scale of the disaster, the distribution of affected people, and material demand; and, on the basis of analyzing the disaster-site information, formulating disaster emergency management countermeasures, including emergency decisions such as personnel scheduling, material allocation, and rescue path planning.
2. The emergency management method based on dynamic perception fusion of disaster site information according to claim 1, characterized in that the spatio-temporal feature extraction of the disaster-site multi-source heterogeneous perception information is specifically as follows:
In order to fuse the disaster-site environment perception with the wearable perception information carried on the human body, a multi-source heterogeneous information temporal feature extraction module is constructed: it first completes the extraction of time-series features with a multi-input multi-output CNN, and then compresses the spatial data of the time series with a multi-input single-output LSTM to generate a spatio-temporal high-dimensional feature map of the disaster-site perception information. On this basis, an up-sampling layer is constructed to generate a feature map of the same size as the input data.
A disaster-site multi-source heterogeneous perception information spatial feature extraction module is also designed: the given multi-source heterogeneous perception data are assembled into (source, time, length, width) volumes for three-dimensional convolution and fed into a deep convolutional neural network for spatial feature extraction; features over the spatial and temporal dimensions are computed simultaneously, with several consecutive data frames and three-dimensional convolution kernels stacked to form multi-dimensional data. The value at position (x, y, z) of the j-th feature map in the i-th layer of the deep convolutional neural network is given by the following equation.
v_{ij}^{xyz} = f( b_{ij} + Σ_m Σ_{p=0}^{P_i-1} Σ_{q=0}^{Q_i-1} Σ_{r=0}^{R_i-1} w_{ijm}^{pqr} · v_{(i-1)m}^{(x+p)(y+q)(z+r)} )
where v_{ij}^{xyz} is the value at (x, y, z) of the j-th feature map in the i-th layer; f is the activation function; b_{ij} is the bias of this feature map; P_i and Q_i are the spatial sizes of the three-dimensional convolution kernel and R_i is its size along the time dimension; w_{ijm}^{pqr} is the value at (p, q, r) of the kernel connected to the m-th feature map of the previous layer; and v_{(i-1)m}^{(x+p)(y+q)(z+r)} is the value at (x+p, y+q, z+r) of the m-th feature map in layer i-1.
3. The emergency management method based on dynamic perception fusion of disaster site information according to claim 1, characterized in that the process of updating the disaster-site multi-source heterogeneous information temporal feature fusion network comprises the following three steps:
(1) Establish the temporal fusion data, build the forget gate, and input the disaster-site related factor feature data. The forget gate is calculated as follows.
z_f = σ(W_f · [h_{t-1}, x_t] + b_f)
where z_f is the forget gate activation value, σ is the sigmoid function, W_f is the forget gate weight matrix, h_{t-1} is the output value of the temporal fusion data and factor feature data at the previous time step, x_t is the input value of the temporal fusion data and factor feature data at the current time step, and b_f is the forget gate bias term.
The forget gate determines which information from the cell state C_{t-1} of the previous time step is kept or discarded. The gate reads the output value h_{t-1} of the temporal fusion data and factor feature data at time t-1, the input value x_t at the current time step, and the forget gate bias term b_f, and computes the forget gate activation value z_f through the sigmoid function. The output activation value lies between 0 and 1 and represents how much of the previous cell state C_{t-1} is retained: 1 means fully retained into C_t, and 0 means fully discarded.
(2) Compute the state information C_t of the temporal fusion data and factor feature data at time t. This step first determines the cell state C_t at the current time step, which involves the memory gate activation value z_i and the memory cell input state z. The calculation formulas are as follows:
z_i = σ(W_i · [h_{t-1}, x_t] + b_i)
z = tanh(W · [h_{t-1}, x_t] + b)
C_t = z_f × C_{t-1} + z_i × z
where z_i is the memory gate activation value, z is the memory cell input state, σ is the sigmoid function, W_i is the memory gate weight matrix, W is the weight matrix of the memory cell input state, b_i is the memory gate bias term, b is the bias term of the memory cell input state, tanh is the hyperbolic tangent function, C_t is the state information of the temporal fusion data and factor feature data at time t, z_f is the forget gate activation value, and C_{t-1} is the state information of the temporal fusion data and factor feature data at time t-1.
(3) Compute the output state of the temporal fusion data and factor feature data at the current time step (time t). z_o is the output gate, which controls the degree to which the state information C_t at time t is transferred to h_t. The calculation formulas are as follows:
z_o = σ(W_o · [h_{t-1}, x_t] + b_o)
h_t = z_o × tanh(C_t)
where W_o is the output gate weight matrix, b_o is the output gate bias term, and tanh is the hyperbolic tangent function.
CN202310361774.6A 2023-04-07 2023-04-07 Emergency management method based on dynamic perception fusion of disaster site information Pending CN116385239A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310361774.6A CN116385239A (en) 2023-04-07 2023-04-07 Emergency management method based on dynamic perception fusion of disaster site information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310361774.6A CN116385239A (en) 2023-04-07 2023-04-07 Emergency management method based on dynamic perception fusion of disaster site information

Publications (1)

Publication Number Publication Date
CN116385239A 2023-07-04

Family

ID=86968964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310361774.6A Pending CN116385239A (en) 2023-04-07 2023-04-07 Emergency management method based on dynamic perception fusion of disaster site information

Country Status (1)

Country Link
CN (1) CN116385239A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116843074A (en) * 2023-07-06 2023-10-03 南宁师范大学 Typhoon disaster damage prediction method based on CNN-LSTM model
CN117035164A (en) * 2023-07-10 2023-11-10 江苏省地质调查研究院 Ecological disaster monitoring method and system
CN117035164B (en) * 2023-07-10 2024-03-12 江苏省地质调查研究院 Ecological disaster monitoring method and system
CN116843850A (en) * 2023-07-24 2023-10-03 保利长大工程有限公司 Emergency terrain data acquisition method, system and computer readable storage medium
CN116843850B (en) * 2023-07-24 2024-05-28 保利长大工程有限公司 Emergency terrain data acquisition method, system and computer readable storage medium
CN117648670A (en) * 2024-01-24 2024-03-05 润泰救援装备科技河北有限公司 Rescue data fusion method, electronic equipment, storage medium and rescue fire truck
CN117648670B (en) * 2024-01-24 2024-04-12 润泰救援装备科技河北有限公司 Rescue data fusion method, electronic equipment, storage medium and rescue fire truck

Similar Documents

Publication Publication Date Title
CN116385239A (en) Emergency management method based on dynamic perception fusion of disaster site information
CN109670548B (en) Multi-size input HAR algorithm based on improved LSTM-CNN
CN111091045A (en) Sign language identification method based on space-time attention mechanism
CN111639544A (en) Expression recognition method based on multi-branch cross-connection convolutional neural network
CN113255443A (en) Pyramid structure-based method for positioning time sequence actions of graph attention network
CN116343330A (en) Abnormal behavior identification method for infrared-visible light image fusion
WO2024098956A1 (en) Method for fusing social media data and moving track data
Tahir et al. Wildfire detection in aerial images using deep learning
CN112950780A (en) Intelligent network map generation method and system based on remote sensing image
CN115423998A (en) Visible light forest fire detection method based on lightweight anchor-free detection model
CN114202803A (en) Multi-stage human body abnormal action detection method based on residual error network
CN115393690A (en) Light neural network air-to-ground observation multi-target identification method
CN110728186B (en) Fire detection method based on multi-network fusion
Gadhavi et al. Transfer learning approach for recognizing natural disasters video
Sun et al. Two-stage deep regression enhanced depth estimation from a single RGB image
Sun Analyzing multispectral satellite imagery of south american wildfires using deep learning
Bouzidi et al. Enhancing crisis management because of deep learning, big data and parallel computing environment: survey
Li et al. A new data fusion framework of business intelligence and analytics in economy, finance and management
CN116206158A (en) Scene image classification method and system based on double hypergraph neural network
CN115965819A (en) Lightweight pest identification method based on Transformer structure
Li Multimodal visual image processing of mobile robot in unstructured environment based on semi-supervised multimodal deep network
CN116778214A (en) Behavior detection method, device, equipment and storage medium thereof
CN110070018A (en) A kind of earthquake disaster scene recognition method of combination deep learning
CN113762032A (en) Image processing method, image processing device, electronic equipment and storage medium
Zhou et al. ASSD-YOLO: a small object detection method based on improved YOLOv7 for airport surface surveillance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination