WO2019235415A1 - Disaster state determination system and disaster determination flight system - Google Patents

Disaster state determination system and disaster determination flight system Download PDF

Info

Publication number
WO2019235415A1
WO2019235415A1 (PCT/JP2019/021955)
Authority
WO
WIPO (PCT)
Prior art keywords
disaster
unit
state
deep learning
determination system
Prior art date
Application number
PCT/JP2019/021955
Other languages
French (fr)
Japanese (ja)
Inventor
山本 慎也
Original Assignee
全力機械株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 全力機械株式会社 filed Critical 全力機械株式会社
Priority to JP2020523087A priority Critical patent/JP7065477B2/en
Publication of WO2019235415A1 publication Critical patent/WO2019235415A1/en

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B64: AIRCRAFT; AVIATION; COSMONAUTICS
    • B64D: EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENTS OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D47/00: Equipment not otherwise provided for
    • B64D47/08: Arrangements of cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems

Definitions

  • the present invention relates to a disaster situation determination system and a disaster determination flight system.
  • Patent Literature 1 (Japanese Patent No. 5760155) discloses a search support system that improves search efficiency in order to realize rescue within 72 hours of a disaster.
  • The search activity support system described in Patent Document 1 comprises, each connected to a network, a first terminal installed at a disaster response headquarters, a management device installed in a data center, a second terminal carried by the search commander, and a third terminal carried by each searcher.
  • The first terminal has transmitting means for sending the management device a signal requesting the transmission of map data including the disaster site, receiving means for receiving the map data sent from the management device, display means for displaying a map based on the received map data, input means for designating a search range on the displayed map to create a search map, transmitting means for sending the search map to the management device, receiving and display means for the search requirements (the time and personnel needed for the search) transmitted by the management device, and receiving and display means for the search map returned after each searcher's position has been plotted on it.
  • The management device has processing means for acquiring specific map data based on the map data transmission request signal received from the first terminal, transmitting means for sending the acquired map data to the first terminal, storage means for storing the search map received from the first terminal, processing means for obtaining the search requirements for the designated search range based on the stored search map, transmitting means for sending the obtained search requirements to the first terminal or to the first and second terminals, processing means for adding symbols indicating each searcher's position to the stored search map based on the position information received from the third terminals, and transmitting means for sending the search map, with or without the symbols, to at least the first and second terminals.
  • The second terminal has receiving means and display means for the search map and the search requirements transmitted by the management device, together with communication means for calls between the search commander and the searchers via the third terminals; the third terminal has positioning means such as GPS, transmitting means for sending the obtained position information to the management device, and communication means for calls with the second terminal.
  • Patent Document 2 (Japanese Patent Laid-Open No. 2018-63707) discloses an image analysis system that, as an aid to identification work using dental information, makes it possible to easily obtain identity-confirmation information by collating, for example, a dental panoramic X-ray image of a found body with dental records made during the person's lifetime, and that has high reliability with respect to tooth-number information in order to improve the accuracy of the image analysis processing.
  • Patent Document 3 Japanese Patent Application Laid-Open No. 2017-216757 discloses a system monitoring apparatus and a program for predicting the next state of the target system by reflecting information obtained from outside the target system.
  • The system monitoring device described in Patent Literature 3 includes a storage unit that stores state transition information comprising a plurality of states of the target system and the transition paths from one state to another; an input unit that receives measurement information measured by sensors provided in the target system and peripheral information related to the target system; a state detection unit that detects the current state of the target system from the measurement information received by the input unit; a transition probability calculation unit that calculates, based on the detected current state, the stored state transition information, and the received peripheral information, the probability of transition to each state other than the current one; a transition state determination unit that determines, based on those transition probabilities, the next state to which the system will transition; and an output unit that outputs the determination result of the transition state determination unit.
  • Patent Document 4 Japanese Patent Laid-Open No. 2017-181870 discloses an information processing apparatus that realizes dynamic update of map information in accordance with changes in the real world with higher accuracy.
  • The information processing apparatus described in Patent Document 4 includes an acquisition unit that acquires observation information on a unit space of the real world from one or more sensors, and a communication unit that transmits the observation information acquired in that unit space based on information on inconsistencies between a reference map and the real-world unit space.
  • Patent Document 5 Japanese Patent Laid-Open No. 2017-135545 discloses a network management system and a network management method that can quickly use a service without separately preparing a network for the integrated management system.
  • The network management system described in Patent Document 5 comprises: an integrated management system that is connected to an access node accessed by user terminals and to a management system managing the service nodes that provide services, and that builds the network between the access node and the service nodes; an authentication unit that holds an authentication key for using a service and authenticates a user terminal when the access node receives from it a service setting request containing the authentication key; a communication management unit, arranged in the access node, that sets up a control session separate from the service session after the authentication unit authenticates the user terminal; a setting request unit that transmits the authenticated user terminal's service setting request from the access node over the control session to either the integrated management system or the management system; and an internal request receiving unit, formed in the access node, that receives the transmitted service setting request and constructs the service session enabling communication between the user terminal and the service node.
  • Patent Document 6 Japanese Patent Laid-Open No. 2018-17103 discloses an information processing apparatus, an information processing method, and a program for comprehensively determining the state of a surface such as a building.
  • The information processing apparatus described in Patent Document 6 includes a learning unit that performs deep learning on images of a surface based on teaching data indicating the state of the surface; an image acquisition unit that acquires images of an area including the surface, each associated with position information indicating where it was captured; and a state determination unit that determines the state of the surface in the acquired images based on the learning result of the learning unit.
  • The main object of the present invention is to provide a disaster situation determination system and a disaster determination flight system that determine the disaster state in real time from video taken after a disaster occurs.
  • A disaster situation determination system according to one aspect of the present invention includes a recording unit that records artificially created disaster videos showing disaster states, a deep learning unit that learns disaster states using the disaster videos recorded in the recording unit, and a display unit that displays a disaster situation map after the deep learning unit determines the disaster state of video taken after the disaster.
  • In this case, the disaster situation can be analyzed in real time and a disaster situation map displayed. That is, conventional deep learning requires video of actual disasters as teacher data, but little such video can be obtained. The present inventor therefore found that the disaster state can be determined in real time from post-disaster video by artificially creating videos that show disaster states and using them as teacher data.
  • The artificially created disaster video showing the disaster state may be created in three dimensions.
  • Because the artificially created disaster video is three-dimensional, video of an actual disaster can be determined accurately.
  • The disaster situation determination system according to another aspect may further include a route guidance instruction unit that displays on the display unit a passable route based on the disaster situation map.
  • Since the route guidance instruction unit can display a passable route based on the disaster situation map, relief supplies and rescuers can be moved safely.
  • In the disaster situation determination system according to a further aspect, the artificially created disaster video may include at least one of a collapsed building, a collapsed bridge, a collapsed mountain, a collapsed embankment, a collapsed road, a collapsed tunnel, an earthquake, a tsunami, a fire, a flood, a ground crack, and an electrical line failure.
  • Because the artificially created disaster video shows at least one of these concrete disaster states, the disaster situation can be determined easily and accurately.
  • the disaster situation determination system is the disaster situation determination system according to the fourth aspect of the invention, wherein the deep learning unit may display the probability of the disaster situation on the display unit.
  • Because the deep learning unit can display the probability of a disaster state, it can indicate, for example, whether a building has completely collapsed or is likely to collapse further in a future secondary disaster. That is, secondary disasters frequently follow an earthquake, fire, flood, tsunami, or the like; secondary and tertiary disasters are naturally included.
  • The disaster situation determination system according to a further aspect may further include a weather display unit, and the deep learning unit may display weather information on the disaster situation map on the display unit in accordance with weather forecast information from the weather display unit.
  • In a further aspect, the deep learning unit may divide the post-disaster video into a plurality of images, determine the disaster state of each image, and then synthesize the images.
  • In this case, both the image processing speed and the accuracy of the disaster determination can be increased.
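As a rough illustration of this split-determine-synthesize step (the patent does not specify a tiling scheme, so the block size and the per-tile determination function below are assumptions for illustration), the processing could be sketched in plain Python as:

```python
def determine_by_tiles(frame, tile, determine):
    """Split a frame (a list of pixel rows) into tile x tile blocks,
    run the disaster-state determination on each block independently,
    then stitch the per-block results back into a grid the same
    shape as the frame."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = [row[x:x + tile] for row in frame[y:y + tile]]
            result = determine(block)  # e.g. a disaster-level score
            for yy in range(y, min(y + tile, h)):
                for xx in range(x, min(x + tile, w)):
                    out[yy][xx] = result
    return out
```

Because each block is determined independently, the per-block calls could also run in parallel, which is one way the text's claim of increased processing speed can be realized.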
  • A disaster determination flight system according to another aspect of the present invention includes the disaster situation determination system described above and a flying object that is equipped with an imaging device and can transmit disaster video.
  • In this case, the disaster video can be transmitted from the flying object, and the disaster situation determination system can easily display the disaster situation map.
  • FIG. 1 is a schematic diagram showing an example of the overall configuration of a disaster situation determination system 100 according to the present invention.
  • the disaster situation determination system 100 includes a recording unit 110, a deep learning unit 120, and a display unit 130.
  • When a disaster occurs, the disaster situation determination system 100 determines the actual disaster level SL using a deep-learning disaster model 150 described later and creates disaster map data 240.
  • The disaster in the present invention includes natural disasters and man-made disasters. Specifically, it includes meteorological disasters, rain, torrential rain, floods, river flooding, landslide disasters, slope failures, landslides, debris flows, tornadoes, storm surges, avalanches, snowstorms, lightning strikes, droughts, earthquakes, tsunamis, earthquake fires, eruptions, cinders, lava flows, pyroclastic flows, mudflows, large-scale accidents, fires, train accidents, aviation accidents, maritime accidents, traffic accidents, explosion accidents, coal mine accidents, oil spill accidents, chemical pollution accidents, nuclear accidents, terrorism, war, war damage, and any other disasters.
  • At least tens of thousands of items of teacher data 140 are normally needed to construct the deep-learning disaster model 150. It has therefore been common sense that creating a highly accurate disaster model 150 would take not merely decades but centuries.
  • In the present embodiment, the disaster model 150 learns from 500 to 100,000 items of teacher data 140 for each disaster type to be determined, preferably from 2,000 to 25,000 items, and more preferably from 8,000 to 12,500 items. Learning with more than the lower limit allows disaster determination to be carried out with high accuracy; beyond the upper limit, the learning effect saturates.
  • The recording unit 110 is a general recording device in the present embodiment, but the present invention is not limited to this; a recording facility such as cloud storage may be used.
  • The recording unit 110 records a plurality of artificially created disaster data 200, and also records the disaster model 150 trained by the deep learning unit 120.
  • The recording unit 110 further records the map data 230 on which the disaster map is based, as well as the disaster map data 240 and/or the route map 250 created by the deep learning unit 120.
  • The deep learning unit 120 performs deep learning with a multilayer neural network based on the teacher data 140 recorded in the recording unit 110. That is, the deep learning unit 120 creates a disaster model 150 in advance for each disaster type from the teacher data 140 recorded in the recording unit 110.
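The per-disaster-type model creation described above might be organized as in the following sketch. The class and method names are illustrative only and do not appear in the patent; any training engine (YOLO, R-CNN, and so on) could stand in for `train_fn`:

```python
from collections import defaultdict


class DisasterModelRegistry:
    """Hypothetical registry holding teacher data per disaster type
    and producing one trained disaster model per type, as the text
    describes for the deep learning unit 120."""

    def __init__(self):
        self._teacher_data = defaultdict(list)  # disaster type -> videos
        self._models = {}                       # disaster type -> model

    def add_teacher_data(self, disaster_type, video):
        # Register one artificially created disaster video as teacher data.
        self._teacher_data[disaster_type].append(video)

    def train_all(self, train_fn):
        # train_fn is any engine that builds a model from a list of videos.
        for dtype, videos in self._teacher_data.items():
            self._models[dtype] = train_fn(videos)
        return self._models
```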
  • The deep learning unit 120 in this embodiment can use YOLO (Redmon, Joseph, et al., "YOLOv3: An Incremental Improvement", arXiv preprint arXiv:1804.02767).
  • In YOLO, object classification (Classification) and bounding-box regression (Bounding Box Regression) are computed for each region of the input image.
  • The width of the disaster video input to YOLO is set to 320 to 10,000 pixels, and the height to 240 to 5,000 pixels.
  • The width and height of the image before convolution are preferably both 104 to 1,664 pixels, and more preferably 208 to 832 pixels. Thereby, as described later, determination using the disaster model 150 can be performed with high accuracy on actual disaster video.
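The input-size constraints above can be sketched as a small preprocessing helper. Note the assumptions: the patent gives only the ranges, so the choice of a square pre-convolution size and the rounding down to a multiple of 32 (the usual stride requirement of YOLOv3-style networks) are illustrative, not stated in the text:

```python
def preprocess_size(width, height):
    """Clamp a frame to the input ranges given in the text
    (width 320-10,000 px, height 240-5,000 px) and pick a square
    pre-convolution size inside the preferred 208-832 px window,
    rounded down to a multiple of 32 as YOLOv3-style networks
    typically require (an assumption; the patent does not state
    a rounding rule)."""
    width = min(max(width, 320), 10_000)
    height = min(max(height, 240), 5_000)
    side = min(max(min(width, height), 208), 832)
    side = (side // 32) * 32  # round down to a multiple of 32
    return width, height, side
```

For example, a 1920x1080 frame would be kept as-is and analyzed at the 832-pixel upper end of the preferred window.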
  • In the present embodiment, YOLO is used as the deep learning unit 120, but the present invention is not limited to this; R-CNN, SPPnet, or any other engine (computer program) may be used.
  • In the present embodiment, the disaster model 150 is recorded in the recording unit 110, but the present invention is not limited to this; the disaster model 150 may instead be recorded in the deep learning unit 120.
  • The display unit 130 is a liquid crystal display or a plasma display.
  • The display unit 130 includes a communication unit 131 and can communicate with the deep learning unit 120.
  • The display unit 130 can display the disaster map data 240 and the route map 250 obtained from the deep learning unit 120.
  • In the present embodiment, the display unit 130 is a liquid crystal display or a plasma display, but the present invention is not limited to this; any display device such as a mobile terminal, a mobile phone, a smartphone, a tablet terminal, or a head-mounted display can be used.
  • FIG. 2 is a schematic diagram illustrating an example of the teacher data 140.
  • The teacher data 140 consists of a plurality of artificially created items; that is, various teacher data 140 are formed according to the state of each disaster.
  • The teacher data 140 consists of 3D video, so, unlike a two-dimensional image, the amount of information can be increased.
  • The teacher data 140 can be artificially created using software such as OpenGL. Furthermore, although the present embodiment uses artificially created data, when a disaster actually occurred in the past and video data of it exists, that past video data may be added to form the teacher data 140.
  • The teacher data 140 consists of a plurality of artificially created items. Each item may be a set of two-dimensional images obtained by partitioning the disaster video data into short segments; for example, teacher data 140 whose video is 30 seconds long may be divided every 5 seconds.
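The 30-second / 5-second partitioning example can be expressed as a small helper. The function name and the half-open (start, end) windows are assumptions for illustration; the patent only gives the durations:

```python
def segment_boundaries(duration_s, step_s=5):
    """Split a clip of duration_s seconds into consecutive
    (start, end) windows of at most step_s seconds each,
    as in the text's 30-second clip divided every 5 seconds."""
    return [(t, min(t + step_s, duration_s))
            for t in range(0, duration_s, step_s)]
```

A 30-second clip thus yields six 5-second segments, each of which can be turned into a set of two-dimensional teacher images.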
  • FIG. 3 is a schematic diagram illustrating an example of creating the disaster model 150.
  • Learning is performed using a neural network with a multilayer structure (a deep neural network).
  • The deep learning model is an expression of the structure of this deep neural network.
  • The deep learning unit 120 uses the teacher data 140 to generate at least part of the deep learning model (the structure of the deep neural network) without human intervention, and outputs a disaster model 150 that includes the generated deep learning model. The deep learning model is therefore constructed automatically.
  • Actual disaster video is then analyzed using the neural network thus obtained, so, unlike conventional machine learning methods, processes such as region search and feature extraction are not required and processing can be performed at higher speed.
  • The disaster model 150 may be ranked according to the severity of the disaster; specifically, it may be divided into a plurality of ranks such as disaster levels SL1 to SL5. For example, for one building in an earthquake disaster, disaster level SL1 is a state in which a wall is cracked, SL2 a state in which window glass is broken, SL3 a state in which the probability of collapse is less than 50%, SL4 a state in which the possibility of collapse is 70% or more, and SL5 a collapsed state.
  • The disaster levels may be divided into any number of ranks, such as 2, 3, 4, 10, or 100.
  • When the disaster level is divided into 100 ranks, it can be displayed as a percentage probability; when divided into 10 ranks, it can be displayed as a ratio.
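The SL1 to SL5 example can be sketched as a mapping from observable states to levels. The handling of collapse probabilities between 50% and 70%, and the `"SL0"` fallback label, are assumptions; the patent leaves that band unspecified:

```python
def disaster_level(p_collapse, collapsed=False,
                   glass_broken=False, wall_cracked=False):
    """Map a building's observed state onto the example disaster
    levels SL1-SL5 from the text: SL1 cracked wall, SL2 broken
    window glass, SL3 collapse probability below 50%, SL4 collapse
    possibility of 70% or more, SL5 collapsed."""
    if collapsed:
        return "SL5"
    if p_collapse >= 0.70:
        return "SL4"
    if 0.0 < p_collapse < 0.50:
        return "SL3"
    if glass_broken:
        return "SL2"
    if wall_cracked:
        return "SL1"
    return "SL0"  # hypothetical label: no damage detected
```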
  • FIG. 4 is a schematic diagram illustrating an example of the disaster map data 240.
  • The disaster map data 240 is created by the deep learning unit 120 based on the map data 230 and displayed on the display unit 130.
  • the disaster map data 240 may be a planar map as shown in FIG. 4 or map data composed of a three-dimensional image.
  • the disaster map data 240 may be displayed by adding future weather forecast information.
  • Because the disaster map data 240 is information at the present time, weather forecast information, particularly the weather forecast and AMeDAS data, is important in the case of, for example, a flood disaster, an earthquake disaster, or a heavy rain disaster.
  • The disaster map data 240 may display predicted changes in time series, making it possible to predict the disaster situation hours or days after the present time.
  • FIG. 5 is a schematic diagram illustrating an example of a route map 250.
  • The deep learning unit 120 displays on the display unit 130 a route map 250 showing the route to the plaza B5, based on the disaster map data 240. That is, the deep learning unit 120 shows as the route map 250 a route to the plaza B5 via the bridge B2 that avoids the vicinity of the building B3 and the house B4.
  • FIG. 6 is a schematic diagram showing an example of the overall configuration of the disaster determination flight system 700 according to the present invention.
  • The disaster determination flight system 700 includes a flying body 520 and the disaster situation determination system 100.
  • the disaster situation determination system 100 is the same as that shown in FIG.
  • The flying body 520 captures an actual disaster video 300 and transmits it to the deep learning unit 120 of the disaster situation determination system 100.
  • The flying body 520 is equipped with the imaging device 510 and moves in flight.
  • The imaging device 510 may be a camera, a video camera, or any other device that acquires video or still images.
  • the flying body 520 may be an airplane, an unmanned airplane, a drone, a helicopter, a kite, or the like.
  • Since the disaster determination flight system 700 can immediately acquire the disaster video 300 and give it to the disaster situation determination system 100 when an actual disaster occurs, information can be provided appropriately to municipalities, residents, the police, firefighting services, and the Self-Defense Forces.
  • The disaster determination flight system 700 can transmit the disaster video 300 from the flying body 520, and the disaster situation determination system 100 can easily display the disaster map data 240 on the display unit 130 in real time.
  • In this specification, the disaster situation determination system 100 corresponds to the "disaster situation determination system", the disaster determination flight system 700 to the "disaster determination flight system", the teacher data 140 to the "artificially created disaster video showing a disaster state", the recording unit 110 to the "recording unit", the disaster map data 240 to the "disaster situation map", the route map 250 to the "passable route", the deep learning unit 120 to the "route guidance instruction unit" and the "weather display unit", the display unit 130 to the "display unit", the imaging device 510 to the "imaging device", the flying body 520 to the "flying object", and the disaster video 300 to the "disaster video".
  • DESCRIPTION OF SYMBOLS: 100 disaster situation determination system; 110 recording unit; 120 deep learning unit; 130 display unit; 140 teacher data; 240 disaster map data; 250 route map; 300 disaster video; 510 imaging device (photographing device); 520 flying body; 700 disaster determination flight system

Abstract

[Problem] To provide a disaster state determination system and a disaster determination flight system for determining in real time a disaster state from a disaster video after occurrence of a disaster. [Solution] A disaster state determination system 100 according to the present invention includes: a recording unit 110 that records teaching data 140 indicating an artificially generated disaster state; a deep learning unit 120 that learns the disaster state by using the teaching data 140 recorded in the recording unit 110; and a display unit 130 that, through determination of the disaster state of a disaster video 300 after occurrence of a disaster by the deep learning unit 120, displays disaster map data 240. A disaster determination flight system 700 includes the disaster state determination system 100, and a flying object 520 that has an imaging device 510 mounted thereon and that can transmit the disaster video.

Description

Disaster situation determination system and disaster determination flight system
 The present invention relates to a disaster situation determination system and a disaster determination flight system.
 Conventionally, various research and development related to disasters has been conducted. For example, Patent Literature 1 (Japanese Patent No. 5760155) discloses a search support system that improves search efficiency in order to realize rescue within 72 hours of a disaster.
 The search activity support system described in Patent Document 1 comprises, each connected to a network, a first terminal installed at a disaster response headquarters, a management device installed in a data center, a second terminal carried by the search commander, and a third terminal carried by each searcher. The first terminal has transmitting means for sending the management device a signal requesting the transmission of map data including the disaster site, receiving means for receiving the map data sent from the management device, display means for displaying a map based on the received map data, input means for designating a search range on the displayed map to create a search map, transmitting means for sending the search map to the management device, receiving and display means for the search requirements (the time and personnel needed for the search) transmitted by the management device, and receiving and display means for the search map returned after each searcher's position has been plotted on it. The management device has processing means for acquiring specific map data based on the map data transmission request signal received from the first terminal, transmitting means for sending the acquired map data to the first terminal, storage means for storing the search map received from the first terminal, processing means for obtaining the search requirements for the designated search range based on the stored search map, transmitting means for sending the obtained search requirements to the first terminal or to the first and second terminals, processing means for adding symbols indicating each searcher's position to the stored search map based on the position information received from the third terminals, and transmitting means for sending the search map, with or without the symbols, to at least the first and second terminals. The second terminal has receiving means and display means for the search map and the search requirements transmitted by the management device, together with communication means for calls between the search commander and the searchers via the third terminals; the third terminal has positioning means such as GPS, transmitting means for sending the obtained position information to the management device, and communication means for calls with the second terminal.
Patent Document 2 (JP 2018-63707 A) discloses an image analysis system that, as an aid to identification work using dental information, makes it possible to obtain identity-confirmation information easily by collating, for example, a dental panoramic X-ray image of a discovered body against ante-mortem dental records of the person to be identified, and that offers high reliability in tooth-number information so as to improve the accuracy of the image analysis.
The image analysis system of Patent Document 2 comprises at least a management system having a management server and a management database, and an analysis system having an analysis processor and an analysis database. It provides: means for inputting, via the management server and/or the analysis processor, first information on the image type to be subjected to image analysis; means for storing that first information as a first record on the management server and/or the analysis database; means for inputting second information on the tooth numbers assigned to an image of that image type; means for storing the second information as a second record; means for inputting third information on the tooth condition for each tooth number; and means for storing the third information as a third record.
Further, Patent Document 3 (JP 2017-216757 A) discloses a system monitoring apparatus and program that predict the next state of a target system by reflecting information obtained from outside the target system.
The system monitoring apparatus of Patent Document 3 comprises: a storage unit that stores state transition information covering a plurality of states of the target system and the transition paths from one state to another; an input unit that accepts measurement information from sensors installed in the target system together with peripheral information related to the system; a state detection unit that detects the current state of the target system from the measurement information; a transition probability calculation unit that calculates the probability of transition from the current state to each other state on the basis of the detected current state, the stored state transition information, and the peripheral information; a transition state determination unit that determines, from those probabilities, the next state to be reached; and an output unit that outputs the determination result.
Patent Document 4 (JP 2017-181870 A) discloses an information processing apparatus that updates map information dynamically, and with higher accuracy, in response to changes in the real world.
The information processing apparatus of Patent Document 4 comprises an acquisition unit that acquires observation information on a unit space of the real world from one or more sensors, and a communication unit that transmits the observation information acquired in that unit space on the basis of information about inconsistencies between a reference map and the real-world unit space.
Patent Document 5 (JP 2017-135545 A) discloses a network management system and a network management method that let a service be used quickly without separately preparing a network to the integrated management system.
The network management system of Patent Document 5 comprises a management system that manages the access nodes accessed by user terminals and the service nodes that provide services, and an integrated management system, connected to the management system, that builds the network between the access nodes and the service nodes. It has: an authentication unit that holds an authentication key for using a service and authenticates a user terminal when an access node receives a service setting request containing the key from the terminal; a communication management unit, placed in the access node, that sets up a control session separate from the service session once the authentication unit has authenticated the user terminal; a setting request unit that forwards the authenticated terminal's service setting request from the access node to either the integrated management system or the management system over that control session; and an internal request reception unit that receives the forwarded request and performs the service construction, forming in the access node a service session that enables communication between the user terminal and the service node.
Patent Document 6 (JP 2018-17103 A) discloses an information processing apparatus, information processing method, and program for comprehensively judging the condition of a surface such as a building face.
The information processing apparatus of Patent Document 6 comprises: a learning unit that performs deep learning on images of a surface, using teaching data indicating the surface's condition; an image acquisition unit that acquires images of regions containing the surface, each associated with position information indicating where it was captured; and a state determination unit that judges the condition of the surface in the acquired images on the basis of the learning result.
Patent Document 1: Japanese Patent No. 5760155
Patent Document 2: JP 2018-63707 A
Patent Document 3: JP 2017-216757 A
Patent Document 4: JP 2017-181870 A
Patent Document 5: JP 2017-135545 A
Patent Document 6: JP 2018-17103 A
However, although Patent Documents 1 to 6 describe deep learning techniques and disaster-map creation techniques, deep learning needs a large amount of data, and these approaches cannot respond unless many disasters actually occur.
That is, collecting a large number of both disaster types and immediately-post-disaster data would take an enormous amount of time (disasters spanning decades to centuries) and enormous labour, and was therefore difficult to realise.
The main object of the present invention is to provide a disaster situation determination system and a disaster determination flight system that judge the disaster state from post-disaster disaster video in real time.
(1)
A disaster situation determination system according to one aspect includes a recording unit that records artificially created disaster video showing disaster states, a deep learning unit that learns disaster states from the disaster video recorded in the recording unit, and a display unit that displays a disaster situation map after the deep learning unit has judged the disaster state in post-disaster video.
In this case, the disaster situation can be analysed in real time and displayed as a disaster situation map. Conventional deep learning would require video of actual disasters as teacher data, but such video cannot be obtained in quantity. The inventor therefore found that by artificially creating disaster video showing disaster states and using it as teacher data, the disaster state can be judged from post-disaster video in real time.
(2)
In the disaster situation determination system according to the second invention, the artificially created disaster video showing a disaster state may be formed in three dimensions.
In this case, because the artificially created disaster video showing the disaster state is three-dimensional, judgements on real post-disaster video can be made accurately.
(3)
The disaster situation determination system according to the third invention, in the system of the first or second invention, may further include a route guidance instruction unit that displays passable routes on the display unit on the basis of the disaster situation map.
In this case, since the route guidance instruction unit can display passable routes based on the disaster situation map, relief supplies and rescue parties can be moved safely.
(4)
In the disaster situation determination system according to the fourth invention, the artificially created disaster video showing a disaster state may include at least one of: a collapsed building, a collapsed bridge, a collapsed mountain, a collapsed embankment, a collapsed road, a collapsed tunnel, an earthquake, a tsunami, a fire, a flood, a ground crack, and an electric-line failure.
In this case, because the artificially created disaster video covers these concrete disaster states, the disaster situation can be judged easily and accurately.
(5)
In the disaster situation determination system according to the fifth invention, the deep learning unit may display the probability of the disaster state on the display unit.
In this case, because the deep learning unit can display the probability of the disaster state, it can show whether a building has collapsed completely or may collapse further in a future secondary disaster. Earthquakes, fires, floods, tsunamis and the like can recur, so secondary and tertiary disasters are naturally included.
(6)
The disaster situation determination system according to the sixth invention may further include a weather display unit, and the deep learning unit may display weather information on the disaster situation map on the display unit in accordance with weather forecast information from the weather display unit.
In this case, since weather information is also shown on the display unit, worsening of the disaster situation can be anticipated.
(7)
In the disaster situation determination system according to the seventh invention, the deep learning unit may divide the post-disaster video into individual images, divide each image further into multiple sub-images, judge the disaster state of each sub-image, and then composite the results.
In this case, because the disaster video is divided into individual images, both the image processing speed and the accuracy of the disaster judgement can be raised.
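The split-judge-composite step described above can be sketched as follows. `classify_tile` is only a stand-in for the deep-learning judgement (here a simple brightness threshold), and the tile size and grid layout are illustrative assumptions, not values taken from the patent.

```python
# Split a frame into a grid of tiles, judge each tile, and recombine
# the per-tile judgements into one result grid.

def split_into_tiles(frame, tile_h, tile_w):
    """Split a 2-D frame (list of pixel rows) into a grid of tiles."""
    h, w = len(frame), len(frame[0])
    return [
        [[r[left:left + tile_w] for r in frame[top:top + tile_h]]
         for left in range(0, w, tile_w)]
        for top in range(0, h, tile_h)
    ]

def classify_tile(tile):
    # Stand-in for the deep-learning judgement: flag a tile as
    # "damaged" if any pixel exceeds a brightness threshold.
    return "damaged" if any(p > 128 for row in tile for p in row) else "intact"

def merge_judgements(tiles):
    """Judge every tile and recombine the results into one grid."""
    return [[classify_tile(t) for t in row] for row in tiles]

# 8x8 frame: left half dark, right half bright.
frame = [[0] * 4 + [255] * 4 for _ in range(8)]
grid = merge_judgements(split_into_tiles(frame, 4, 4))
```

Because each tile is judged independently, the per-tile calls can also be parallelised, which is one way the division can raise processing speed.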
(8)
A disaster determination flight system according to another aspect includes the disaster situation determination system of any of the first to seventh inventions and a flying object that carries an imaging device and can transmit disaster video.
In this case, the flying object of the disaster determination flight system transmits video of the disaster situation, and the disaster situation determination system can readily display the disaster situation map.
FIG. 1 is a schematic diagram showing an example of the overall configuration of a disaster situation determination system according to the present invention. FIG. 2 is a schematic diagram showing an example of teacher data. FIG. 3 is a schematic diagram showing an example of creating a disaster model. FIG. 4 is a schematic diagram showing an example of disaster map data. FIG. 5 is a schematic diagram showing an example of a route map. FIG. 6 is a schematic diagram showing an example of the overall configuration of a disaster determination flight system according to the present invention.
Embodiments of the present invention are described below with reference to the drawings. In the following description, identical parts carry identical reference numerals; their names and functions are also the same, so their detailed description is not repeated.
<Embodiment>
(Disaster situation determination system 100)
FIG. 1 is a schematic diagram showing an example of the overall configuration of a disaster situation determination system 100 according to the present invention.
As illustrated in FIG. 1, the disaster situation determination system 100 includes a recording unit 110, a deep learning unit 120, and a display unit 130.
When a disaster occurs, the disaster situation determination system 100 according to the present invention judges the actual disaster level SL from a deep-learning disaster model 150, described later, and creates disaster map data 240.
A disaster in the present invention includes both natural and man-made disasters. Specifically, it includes weather disasters, rain, torrential rain, floods, river flooding, sediment disasters, slope failures, landslips, debris flows, landslides, tornadoes, storm surges, avalanches, snowstorms, lightning strikes, hail, earthquakes, tsunamis, earthquake fires, eruptions, volcanic bombs, lava flows, pyroclastic flows, mudflows, large-scale accidents, fires, train accidents, aviation accidents, maritime accidents, traffic accidents, explosions, coal-mine accidents, oil spills, chemical-contamination accidents, nuclear accidents, terrorism, war, war damage, and any other disaster.
In general, it is known that at least tens of thousands of items of teacher data 140 are needed to build a deep-learning disaster model 150, so common sense says that creating a highly accurate disaster model 150 would take not merely decades but centuries.
In the present embodiment, the disaster model 150 is trained with 500 to 100,000 items of teacher data 140 per disaster type to be judged, preferably 2,000 to 25,000 items, and more preferably 8,000 to 12,500 items. Training with at least the lower limit allows disasters to be judged with high accuracy; beyond the upper limit, the effect of further training saturates.
(Recording unit 110)
In the present embodiment, the recording unit 110 is a general recording device, although it is not limited to this and a recording service such as a cloud may be used.
The recording unit 110 records a plurality of artificially created items of disaster data 200, as well as the disaster model 150 deep-learned by the deep learning unit 120.
The recording unit 110 also records the map data 230 on which the disaster map is based, together with the disaster map data 240 and/or route map 250 created by the deep learning unit 120.
(Deep learning unit 120)
The deep learning unit 120 performs deep learning with a multilayer neural network on the basis of the teacher data 140 recorded in the recording unit 110. That is, using the teacher data 140 recorded in the recording unit 110, the deep learning unit 120 creates a disaster model 150 in advance for each disaster type.
The deep learning unit 120 in this embodiment can use YOLO (Redmon, Joseph, et al., "YOLOv3: An Incremental Improvement", arXiv preprint arXiv:1804.02767). YOLO divides the whole image into a grid in advance and performs object classification and bounding-box regression for each region; because it is built as a single network, it can process with high accuracy and at high speed.
The disaster video input to YOLO is set to a width of 320 to 10,000 pixels and a height of 240 to 5,000 pixels; the image size before convolution is set, in both width and height, to 104 to 1,664 pixels, and more preferably to 208 to 832 pixels.
As described later, this allows judgements using the disaster model 150 to be made on real disaster video with high accuracy.
In the present embodiment YOLO is used as the deep learning unit 120, but the unit is not limited to this; R-CNN, SPPnet, or any other engine (computer program) may be used.
Likewise, although the disaster model 150 is recorded in the recording unit 110 in this embodiment, it is not limited to this and may instead be recorded in the deep learning unit 120.
(Display unit 130)
The display unit 130 is a liquid crystal display or plasma display. It has a communication unit 131 and can communicate with the deep learning unit 120.
The display unit 130 can display the disaster map data 240 obtained from the deep learning unit 120, as well as the route map 250 obtained from the deep learning unit 120.
The display unit 130 is not limited to a liquid crystal or plasma display; it includes any display such as a portable terminal, mobile phone, smartphone, tablet, or head-mounted display.
(Teacher data 140)
FIG. 2 is a schematic diagram showing an example of the teacher data 140.
As shown in FIG. 2, the teacher data 140 consists of a plurality of artificially created items, formed in various ways according to the state of each disaster. In the present embodiment, the teacher data 140 is three-dimensional video; unlike two-dimensional images, it therefore carries a larger amount of information.
For example, the teacher data 140 can be created artificially with software such as OpenGL.
Although the teacher data 140 of the present embodiment is artificially created, when a real disaster has occurred in the past and video data of it exists, that past video data may be added to the teacher data 140.
The individual items of teacher data 140 may also be disaster video divided into short intervals, and may be two-dimensional images: for example, an item of teacher data 140 whose video runs 30 seconds may be divided every 5 seconds.
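The 30-second / 5-second division mentioned above can be sketched as a small helper. It only computes (start, end) boundaries in seconds; reading and cutting an actual video file is outside the sketch.

```python
# Compute the segment boundaries for dividing a teacher-data video
# of total_s seconds into consecutive segment_s-second chunks.

def split_segments(total_s, segment_s):
    """Cover total_s seconds with consecutive segment_s-second spans."""
    bounds = []
    start = 0
    while start < total_s:
        end = min(start + segment_s, total_s)
        bounds.append((start, end))
        start = end
    return bounds
```

A 30-second video thus yields six 5-second items; a duration that is not an exact multiple simply gets a shorter final segment.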
(Disaster model 150)
FIG. 3 is a schematic diagram showing an example of creating the disaster model 150.
As FIG. 3 shows, the learning uses a multilayer neural network (deep neural network), and the deep-learning model is a representation of that network's structure. Using the teacher data 140, the deep learning unit 120 generates at least part of the deep-learning model (the structure of the deep neural network) without human intervention, and outputs the disaster model 150 as the deep-learning model containing the generated components.
The deep-learning model is therefore constructed automatically. In the present invention, real disaster video is analysed with the neural network obtained in this way; unlike conventional machine-learning methods, no region search or feature extraction steps are needed, so processing is faster.
The disaster model 150 may also be ranked by disaster level, for example into a plurality of ranks from disaster level SL1 to disaster level SL5.
Concretely, for one building in an earthquake disaster, disaster level SL1 is a state in which a wall is cracked, SL2 is a state in which window glass is broken, SL3 is a probability of collapse below 50%, SL4 is a probability of collapse of 70% or more, and SL5 is a collapsed state.
The description used disaster levels SL1 to SL5, but the number of levels is not limited to five and may be any number of ranks, such as 2, 3, 4, 10, or 100. With 100 levels, for example, the result can be shown as a percentage; with 10 levels, as a fraction.
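The rank-to-display mapping described above can be sketched as follows: a model score in [0, 1] is quantised into n_ranks levels, so 100 ranks read as a percentage and 10 ranks as tenths. The linear quantisation rule here is an illustrative assumption; the patent does not fix one.

```python
# Quantise a disaster-model score into a display rank.

def score_to_rank(score, n_ranks):
    """Quantise a score in [0, 1] into ranks 1..n_ranks."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie in [0, 1]")
    return min(int(score * n_ranks) + 1, n_ranks)

def rank_label(score):
    """Map a score onto the five-level SL scale used in the text."""
    return f"SL{score_to_rank(score, 5)}"
```

With n_ranks = 100 the rank number can be read directly as a percent-style probability, matching the display options above.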
(Disaster map data 240)
FIG. 4 is a schematic diagram showing an example of the disaster map data 240.
When the real disaster video 300 shown in FIG. 1 is given to the disaster situation determination system 100, the deep learning unit 120 creates disaster map data 240 on the basis of the map data 230 and displays it on the display unit 130.
As shown in FIG. 4, when the real disaster is a river flood, the map shows that bridge B1 has been washed away, bridge B2 is undamaged, building B3 is flooded up to the first floor, house B4 is on fire, and square B5 is undamaged.
The disaster map data 240 may be a flat map, as shown in FIG. 4, or map data consisting of three-dimensional imagery.
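The flood example above can be expressed as a minimal data structure in which each structure on the base map carries its judged status. The identifiers B1 to B5 follow the text; the dictionary layout and status strings are illustrative assumptions.

```python
# Per-structure damage annotations for the flood example.

damage_map = {
    "B1": {"kind": "bridge",   "status": "washed away"},
    "B2": {"kind": "bridge",   "status": "no damage"},
    "B3": {"kind": "building", "status": "flooded to 1st floor"},
    "B4": {"kind": "house",    "status": "on fire"},
    "B5": {"kind": "square",   "status": "no damage"},
}

def undamaged(dmap):
    """List the structures the map marks as undamaged."""
    return sorted(k for k, v in dmap.items() if v["status"] == "no damage")
```

Rendering the disaster map then amounts to drawing each annotation at the structure's position on the base map data 230.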
As FIG. 4 also shows, the disaster map data 240 may additionally display forthcoming weather forecast information. Because the disaster map data 240 reflects the present moment, forecast information, in particular weather forecasts and AMeDAS forecasts, matters in cases such as flood, earthquake, and heavy-rain disasters.
The disaster map data 240 may also display predicted changes over time, allowing the disaster situation to be forecast hours or days ahead of the present moment.
(Route map 250)
 Next, FIG. 5 is a schematic diagram showing an example of the route map 250.
 Based on the disaster map data 240, the deep learning unit 120 displays on the display unit 130 the route map 250, which shows a route to the square B5. That is, the deep learning unit 120 displays on the display unit 130, as the route map 250, a path that reaches the square B5 via the bridge B2 while avoiding the areas around the building B3 and the house B4.
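The route selection described above can be viewed as a shortest-path search over the map graph with damaged areas excluded; the following is a hedged sketch in which the toy graph, the damaged set, and the function names are assumptions for illustration:

```python
from collections import deque

def passable_route(graph, damaged, start, goal):
    """Breadth-first search for a route that never enters a damaged node.

    `graph` maps node -> list of adjacent nodes; `damaged` is the set of
    nodes the disaster map marks as unsafe (e.g. around B3 and B4).
    Returns the list of nodes from start to goal, or None if unreachable.
    """
    if start in damaged or goal in damaged:
        return None
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen and nxt not in damaged:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy graph mirroring FIG. 5: two bridges lead toward the square B5,
# but B1 is washed away and the areas around B3/B4 are unsafe.
graph = {
    "start": ["B1", "B2", "B3"],
    "B1": ["B5"], "B2": ["B5"], "B3": ["B4"], "B4": ["B5"],
    "B5": [],
}
route = passable_route(graph, damaged={"B1", "B3", "B4"}, start="start", goal="B5")
```

Under these assumptions the search returns the path through the undamaged bridge B2, consistent with the route shown on the display unit 130 in the example.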
(Other embodiments)
 FIG. 6 is a schematic diagram showing an example of the overall configuration of a disaster determination flight system 700 according to the present invention.
 As shown in FIG. 6, the disaster determination flight system 700 includes a flying vehicle 500 and the disaster situation determination system 100.
 The disaster situation determination system 100 is the same as that shown in FIG. 1.
 The flying vehicle 500 includes an imaging device 510, which captures the actual disaster video 300 and includes a transmission unit that transmits it to the deep learning unit 120 of the disaster situation determination system 100, and a flying body 520 that flies while carrying the imaging device 510.
 Here, the imaging device 510 may be a camera, a video camera, or any other device that acquires video, and may acquire still images instead. The flying body 520 may be an airplane, an unmanned aircraft, a drone, a helicopter, a kite, or the like.
 When an actual disaster occurs, the disaster determination flight system 700 according to the present invention can immediately acquire the disaster video 300 and supply it to the disaster situation determination system 100, and can therefore provide appropriate information to municipalities, residents, the police, fire departments, the Self-Defense Forces, and the like.
 As described above, the disaster determination flight system 700 transmits the disaster video 300 from the flying body 520, and the disaster situation determination system 100 can easily display the disaster map data 240 on the display unit 130 in real time.
[Correspondence between each part in the embodiments and each component in the claims]
 In this specification, the disaster situation determination system 100 corresponds to the "disaster situation determination system", the disaster determination flight system 700 corresponds to the "disaster determination flight system", the teacher data 140 corresponds to the "disaster video showing an artificially created disaster state", the recording unit 110 corresponds to the "recording unit", the disaster map data 240 corresponds to the "disaster situation map", the route map 250 corresponds to the "passable route", the deep learning unit 120 corresponds to the "route guidance instruction unit", the "weather display unit", and the "deep learning unit", the display unit 130 corresponds to the "display unit", the imaging device 510 corresponds to the "photographing device", the flying body 520 corresponds to the "flying object", and the disaster video 300 corresponds to the "disaster video".
DESCRIPTION OF SYMBOLS

 100 Disaster situation determination system
 110 Recording unit
 120 Deep learning unit
 130 Display unit
 140 Teacher data
 240 Disaster map data
 250 Route map
 300 Disaster video
 510 Imaging device (photographing device)
 520 Flying body
 700 Disaster determination flight system

Claims (8)

  1.  A disaster situation determination system comprising:
     a recording unit that records a disaster video showing an artificially created disaster state;
     a deep learning unit that learns disaster states using the disaster video recorded in the recording unit; and
     a display unit that displays a disaster situation map in which the disaster state of a disaster video taken after a disaster has been determined by the deep learning unit.
  2.  The disaster situation determination system according to claim 1, wherein the disaster video showing the artificially created disaster state is formed in three dimensions.
  3.  The disaster situation determination system according to claim 1 or 2, further comprising a route guidance instruction unit,
     wherein the route guidance instruction unit displays a passable route on the display unit based on the disaster situation map.
  4.  The disaster situation determination system according to any one of claims 1 to 3, wherein the disaster video showing the artificially created disaster state includes at least one of a collapsed building, a collapsed bridge, a collapsed mountain, a collapsed embankment, a collapsed road, a collapsed tunnel, an earthquake, a tsunami, a fire, a flood, a ground crack, and a failure of an electric power line.
  5.  The disaster situation determination system according to any one of claims 1 to 4, wherein the deep learning unit displays a probability of the disaster situation on the display unit.
  6.  The disaster situation determination system according to any one of claims 1 to 5, further comprising a weather display unit,
     wherein the deep learning unit displays weather information on the disaster situation map on the display unit according to weather forecast information from the weather display unit.
  7.  The disaster situation determination system according to any one of claims 1 to 6, wherein the deep learning unit treats the disaster video taken after a disaster as a single image, divides that image into a plurality of parts, determines the disaster state of each part, and then combines the results.
  8.  A disaster determination flight system comprising:
     the disaster situation determination system according to at least one of claims 1 to 7; and
     a flying object that is equipped with a photographing device and can transmit a disaster video.
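The divide-judge-recombine procedure recited in claim 7 can be sketched as follows; the tile size, the toy image, and the stand-in `judge` classifier are all assumptions made for illustration, not part of the claimed implementation:

```python
def split_into_tiles(image, tile_h, tile_w):
    """Split a 2-D image (list of rows) into equal tiles, row-major order."""
    h, w = len(image), len(image[0])
    tiles = []
    for top in range(0, h, tile_h):
        for left in range(0, w, tile_w):
            tiles.append([row[left:left + tile_w] for row in image[top:top + tile_h]])
    return tiles

def judge_and_merge(image, tile_h, tile_w, judge):
    """Judge each tile's disaster state, then recombine into a grid of labels."""
    labels = [judge(t) for t in split_into_tiles(image, tile_h, tile_w)]
    cols = len(image[0]) // tile_w
    return [labels[i:i + cols] for i in range(0, len(labels), cols)]

# Toy 4x4 "image": 1 marks damaged pixels, 0 undamaged.
image = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 0],
]
# Stand-in for the per-tile classifier of the deep learning unit.
damaged = lambda tile: "damaged" if any(any(row) for row in tile) else "intact"
grid = judge_and_merge(image, 2, 2, damaged)
```

Here each 2x2 tile is judged independently and the per-tile labels are reassembled into a grid covering the whole image, mirroring the division and recombination described in the claim.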
PCT/JP2019/021955 2018-06-04 2019-06-03 Disaster state determination system and disaster determination flight system WO2019235415A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020523087A JP7065477B2 (en) 2018-06-04 2019-06-03 Disaster situation judgment system and disaster judgment flight system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-106648 2018-06-04
JP2018106648 2018-06-04

Publications (1)

Publication Number Publication Date
WO2019235415A1 true WO2019235415A1 (en) 2019-12-12

Family

ID=68770375

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/021955 WO2019235415A1 (en) 2018-06-04 2019-06-03 Disaster state determination system and disaster determination flight system

Country Status (2)

Country Link
JP (1) JP7065477B2 (en)
WO (1) WO2019235415A1 (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08124064A (en) * 1994-10-24 1996-05-17 Fuji Facom Corp Fire detection using image and escape guiding device in fire
JPH10257458A (en) * 1997-03-12 1998-09-25 Hochiki Corp Multiple complex house managing system
JP2010097430A (en) * 2008-10-16 2010-04-30 Tokyo Univ Of Agriculture & Technology Smoke detection device and smoke detection method
WO2018079400A1 (en) * 2016-10-24 2018-05-03 ホーチキ株式会社 Fire monitoring system
WO2018083798A1 (en) * 2016-11-07 2018-05-11 株式会社ラムロック Monitoring system and mobile robot device
JP2018084955A (en) * 2016-11-24 2018-05-31 株式会社小糸製作所 Unmanned aircraft

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3721407B2 (en) 2003-05-23 2005-11-30 九州大学長 Sediment disaster prediction system, method, and program


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7382859B2 (en) 2020-03-09 2023-11-17 株式会社Nttドコモ Disaster scale estimation device
WO2021245765A1 (en) * 2020-06-02 2021-12-09 三菱電機ビルテクノサービス株式会社 Elevator system
JPWO2021245765A1 (en) * 2020-06-02 2021-12-09
JP7206440B2 (en) 2020-06-02 2023-01-17 三菱電機ビルソリューションズ株式会社 elevator system
CN115697877A (en) * 2020-06-02 2023-02-03 三菱电机楼宇解决方案株式会社 Elevator system
CN115697877B (en) * 2020-06-02 2023-10-03 三菱电机楼宇解决方案株式会社 Elevator system
CN111985355A (en) * 2020-08-01 2020-11-24 桂林理工大学 Remote sensing building earthquake damage assessment method and system based on deep learning and cloud computing
WO2022070808A1 (en) * 2020-10-01 2022-04-07 富士フイルム株式会社 Disaster information processing device, method for operating disaster information processing device, program for operating disaster information processing device, and disaster information processing system
CN112883907A (en) * 2021-03-16 2021-06-01 云南师范大学 Landslide detection method and device for small-volume model
CN113296072A (en) * 2021-05-24 2021-08-24 伍志方 Method and system for automatically identifying thunderstorm strong wind based on YOLOv3 model

Also Published As

Publication number Publication date
JPWO2019235415A1 (en) 2021-05-13
JP7065477B2 (en) 2022-05-12

Similar Documents

Publication Publication Date Title
WO2019235415A1 (en) Disaster state determination system and disaster determination flight system
WO2023061039A1 (en) Tailing pond risk monitoring and early-warning system based on internet of things
Duque et al. Synthesis of unmanned aerial vehicle applications for infrastructures
US20190051046A1 (en) Incident Site Investigation and Management Support System Based on Unmanned Aerial Vehicles
Mandirola et al. Use of UAS for damage inspection and assessment of bridge infrastructures
US20100280755A1 (en) Method, apparatus, and system for rapid assessment
US20200401138A1 (en) Large scale unmanned monitoring device assessment of utility system components
US10726268B2 (en) Building black box
Huang et al. A method for using unmanned aerial vehicles for emergency investigation of single geo-hazards and sample applications of this method
van Aardt et al. Geospatial disaster response during the Haiti earthquake: A case study spanning airborne deployment, data collection, transfer, processing, and dissemination
Meyer et al. UAV-based post disaster assessment of cultural heritage sites following the 2014 South Napa Earthquake
Congress et al. Methodology for resloping of rock slope using 3D models from UAV-CRP technology
Blyth et al. Documentation, structural health monitoring and numerical modelling for damage assessment of the Morris Island Lighthouse
Jalinoos et al. Experimental evaluation of unmanned aerial system for measuring bridge movement
RU2467298C1 (en) System of satellite monitoring of engineering facilities displacements using satellite navigation systems glonass/gps
WO2021084698A1 (en) Analysis device and analysis method
Huyck et al. Remote sensing for disaster response: A rapid, image-based perspective
Crawford et al. Rapid disaster data dissemination and vulnerability assessment through synthesis of a web-based extreme event viewer and deep learning
Perez Jimeno et al. An integrated framework for non-destructive evaluation of bridges using UAS: A case study
JP6968307B2 (en) Disaster response support device and disaster response support method
D’Urso et al. Rescue Management and Assessment of Structural Damage by Uav in Post-Seismic Emergency
Yasin et al. A review of Small Unmanned Aircraft System (UAS) advantages as a tool in condition survey works
Tsai et al. Using mobile disaster response system in bridge management
Yıldız et al. Using drone technologies for construction project management: A narrative review
Huang et al. Method and application of using unmanned aerial vehicle for emergency investigation of single geo-hazard

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19815191

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020523087

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19815191

Country of ref document: EP

Kind code of ref document: A1