CN116976499A - Storm flood model system based on massive parallel computing - Google Patents


Info

Publication number: CN116976499A
Application number: CN202310762951.1A
Authority: CN (China)
Prior art keywords: data, computing, calculation, module, node
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 马强, 刘昌军, 文磊, 张顺福, 王龙阳, 朱化鹏, 何丽佳, 李小萌
Current and original assignee: China Institute of Water Resources and Hydropower Research (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by China Institute of Water Resources and Hydropower Research

Classifications

    • G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G01N15/082: Investigating permeability by forcing a fluid through a sample
    • G01N33/24: Earth materials
    • G06F16/2255: Hash tables
    • G06F16/2282: Tablespace storage structures; management thereof
    • G06F16/25: Integrating or interfacing systems involving database management systems
    • G06F16/29: Geographical information databases
    • G06F30/23: Design optimisation, verification or simulation using finite element methods [FEM] or finite difference methods [FDM]
    • G06F9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/5077: Logical partitioning of resources; management or configuration of virtualized resources
    • G06Q10/0635: Risk analysis of enterprise or organisation activities
    • G06Q50/26: Government or public services
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/12: Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • G06F2111/10: Numerical modelling
    • G06F2113/08: Fluids

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Tourism & Hospitality (AREA)
  • Chemical & Material Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Biochemistry (AREA)
  • Quality & Reliability (AREA)
  • Educational Administration (AREA)
  • Immunology (AREA)
  • Game Theory and Decision Science (AREA)
  • Remote Sensing (AREA)
  • Analytical Chemistry (AREA)
  • Pathology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • Environmental & Geological Engineering (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geology (AREA)
  • Medical Informatics (AREA)
  • Food Science & Technology (AREA)
  • Medicinal Chemistry (AREA)
  • Computing Systems (AREA)

Abstract

The application discloses a storm flood model system based on massive parallel computing, in the technical field of flood prediction, comprising: a data processing module for deriving the basic data of the storm flood model by analyzing soil texture data; a computing unit module for performing small-watershed analysis according to the basic data; a parallel computing module for controlling the computing unit modules to analyze in parallel the small watersheds between the upstream and downstream nodes of a primary watershed; a node server distribution module for distributing the parallel computing module to node servers by primary watershed and predicting the storm flood risk of each primary watershed; a database module for storing the data generated by the system; and a database logic control module for controlling data interaction between the database module and the other functional modules to realize the module functions of the system. The application provides a new technical approach to storm flood prediction.

Description

Storm flood model system based on massive parallel computing
Technical Field
The application relates to the technical field of flood prediction, in particular to a storm flood model system based on massive parallel computing.
Background
Flood disasters have long been among the major natural disasters constraining the development of human society. Flood disasters are classified into river floods, lake floods, storm floods, and so on. In the prior art, the prediction of storm floods is still at an early stage, so a new storm flood prediction system is urgently needed to meet the demand for predicting flood disasters caused by typhoons.
Disclosure of Invention
In order to solve the above problems, the present application provides a storm flood model system based on massive parallel computing, comprising:
a data processing module for deriving the basic data of the storm flood model by analyzing soil texture data;
a computing unit module for performing small-watershed analysis according to the basic data;
a parallel computing module for controlling the computing unit modules to analyze in parallel the small watersheds between the upstream and downstream nodes of a primary watershed;
a node server distribution module for distributing the parallel computing module to node servers by primary watershed and predicting the storm flood risk of each primary watershed;
a database module for storing the soil texture data, basic data, small-watershed analysis data, parallel computation results, primary watershed composition data and prediction data;
and a database logic control module for controlling data interaction between the database module and the other functional modules to realize the module functions of the system.
Preferably, the data processing module uses the soil texture data to analyze the correspondence between soil texture type and slope infiltration capacity in each small watershed, rasterizes the data, computes the infiltration coefficient of each grid cell of the small watershed, takes a weighted average over cells sharing the same infiltration coefficient according to their share of all cells in the watershed, obtains the stable infiltration rate and maximum infiltration rate of each small watershed's underlying surface, generalizes these to determine the infiltration characteristic parameters of the small watershed, and obtains the basic data of the model by analyzing the nonlinear runoff-generation characteristics of the small watershed.
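The weighted-average step described above can be sketched as follows. This is an illustrative Python sketch (function and variable names are assumptions, not taken from the patent): grid cells sharing an infiltration coefficient are pooled, and each distinct coefficient is weighted by its areal share of the watershed.

```python
from collections import defaultdict

def basin_infiltration_rate(grid_cells):
    """grid_cells: iterable of (infiltration_coeff, cell_area) tuples
    for one small watershed. Returns the area-weighted mean coefficient."""
    area_by_coeff = defaultdict(float)
    total_area = 0.0
    for coeff, area in grid_cells:
        area_by_coeff[coeff] += area   # pool cells sharing a coefficient
        total_area += area
    # weight each distinct coefficient by its areal share of the watershed
    return sum(c * a / total_area for c, a in area_by_coeff.items())
```

Applying the same routine to stable and maximum per-cell rates would give the two underlying-surface rates named in the text.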
Preferably, the data processing module is further used to obtain the proportions of clay, silt and sand mineral particles in the soil, perform interpolation analysis to obtain soil texture data for the 0-20 cm surface soil, correct it against hydrological characteristics to obtain soil texture type data, obtain the maximum response depth and runoff-generation time of each soil texture type under short-duration heavy rain from the small-watershed runoff characteristic parameters, and construct the basic data, where the small-watershed runoff characteristic parameters include saturated water content, residual water content, soil field capacity, soil wetting-front capillary suction, hydraulic conductivity, and the parameters of the soil water-retention curve model.
Preferably, the computing unit module is further configured to construct a structure from each small watershed's upstream inlet list, downstream outlet, outlet number, type field and computing unit name; initialize a hash map watershedMap keyed by small-watershed number and a hash map unitMap keyed by unit name; traverse the input file to build watershedMap and unitMap; and loop over unitMap, counting for each small watershed of the same computing unit its upstream inlet nodes FCD, as well as the outlet number ONDCD and outlet OCD of the computing unit. The output comprises: serial number ID, computing unit name UNITCD, upstream computing unit FCD, downstream computing unit OCD, computing unit outlet number ONDCD, and classification code GB.
Preferably, the computing unit module is further configured to split the watershed, river channel and node files, where the splitting process comprises:
copying the original RIVL data, then deleting the coastline and the reservoir edge lines by selecting them by attribute in the attribute table and deleting them;
assigning the computing unit name to the river channel line file RIVL using the Analysis Tools > Overlay > Spatial Join tool;
checking the attribute tables of the WATA and river shapefiles in ArcGIS, ensuring that no fields exist other than the original fields and the newly added WSCU_Name field, and deleting any redundant fields in the ArcGIS attribute table;
running the prepared Matlab program to split WATA, RIVL and NODE into computing unit modules.
Preferably, the parallel computing module is further used to build the river channel topology from grid data including elevation, slope, river channel, land use and soil texture, to clip and process the grid data, and to model and analyze the small watersheds.
Preferably, the modeling process of the parallel computing module comprises the following steps:
run hydro1-6.py in Python;
run outflow_HRU.m in MATLAB;
run crt_bat.m in MATLAB, then run all_crt.bat, noting whether each unit runs successfully;
run hydro7-8.py in Python;
run copy_file.py in Python to copy the input files required by the model into a results folder and generate the groovy control files in batch;
run get_extension_index.m in Matlab to obtain the boundary (four-extent) coordinates of each computing unit and the lower-left coordinates of the corresponding grid's UTM projection;
select the precipitation data field, place it in the rain data folder, run the rain data script in Python, and extract the precipitation data for each unit;
and run createRainIndex.m in Matlab to obtain the precipitation data indexes.
Preferably, the parallel computing module performs parallel computation along three dimensions: space, time and sub-process.
Spatial parallelism: taking into account the computational dependencies between simulation units, the computation tasks of different simulation units are distributed by spatial decomposition to multiple computing units for parallel computation.
Temporal parallelism: from the time perspective, the model is simulated at successive moments of a continuous time series, with the output of one moment serving as the input of the next.
Sub-process parallelism: the flood model involves numerous runoff generation and concentration sub-processes, which are computed in parallel.
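A minimal sketch of the spatial parallelism described above, under the assumption (not stated in the patent) that the dependency graph between simulation units is acyclic: units whose upstream units have all finished are released together as a "wave" and computed concurrently. The function names and the toy simulate callable are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def run_in_waves(upstream, simulate):
    """upstream: dict unit -> list of units that must finish first.
    simulate: callable applied to each unit. Returns the waves in order."""
    remaining = dict(upstream)
    waves = []
    with ThreadPoolExecutor() as pool:
        while remaining:
            # a unit is ready once none of its upstream units remain
            ready = sorted(u for u, ups in remaining.items()
                           if all(v not in remaining for v in ups))
            if not ready:
                raise RuntimeError("dependency cycle between units")
            list(pool.map(simulate, ready))  # independent units run concurrently
            waves.append(ready)
            for u in ready:
                del remaining[u]
    return waves
```

Temporal parallelism then corresponds to repeating this scheme at successive time steps, feeding each moment's output into the next.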
Preferably, the database module is configured to store rainfall data converted from binary rainfall data into ASCII files, together with computing unit parameter information, server parallel scheduling information and computing node server resource allocation information. The computing unit parameter information includes the basic data and the computing unit modeling result files, where the basic data include: computing unit code, name, upstream and downstream topological unit information, the river on which the unit lies, the unit outlet node, the computation group it belongs to, the unit's boundary (four-extent) attribute information, and the unit's gridded attribute information. The computing unit modeling result files include: a small-watershed parameter file, a river channel parameter file, a node parameter file, a land use parameter file, a soil texture parameter file, an elevation parameter file, and a flood model calculation parameter file.
The server parallel scheduling information includes: program number, program name, the computing node the program belongs to, the physical storage address of the program data, the program-to-computing-unit association ID, the program step-length setting, the IP address of the computing node the program belongs to, the calculation result type, return parameter settings, the start time of the test data, the program parameter configuration, and the region in which the group is located.
The computing node server resource allocation information includes: computing node server name, IP address, primary watershed name, and connection information.
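The binary-to-ASCII rainfall conversion mentioned above could look like the following sketch. The patent does not give the binary record layout; here a flat sequence of little-endian 32-bit floats, one value per grid cell, is assumed, and the function name is an illustrative choice.

```python
import struct

def rain_binary_to_ascii(raw: bytes, ncols: int) -> str:
    """Unpack little-endian 32-bit floats and lay them out ncols per line."""
    n = len(raw) // 4
    values = struct.unpack(f"<{n}f", raw)
    rows = (values[i:i + ncols] for i in range(0, n, ncols))
    return "\n".join(" ".join(f"{v:.2f}" for v in row) for row in rows)
```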
Preferably, the node server distribution module is configured to set up a node monitoring module at each node and to invoke multiple computing programs in parallel using a dynamic process distribution technique, specifically performing the following steps:
the client remotely invokes, through an MPICH parallel computing command, the node monitoring module on each computing node participating in the computing task;
the node monitoring module reads the number of processing cores of the node's CPU in real time and starts a corresponding number of computing programs to execute in parallel, while monitoring the running state of all computing programs on the current node in real time;
the node monitoring module returns the computation state of the current node to the client through the TCP/IP network communication protocol;
the client collects the state return values of all nodes participating in the computing task and then statistically analyzes the computation results;
and the client reports the result of the computing task to the server.
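The node-monitor steps above can be sketched as follows. This is an illustrative stand-in, not the patent's implementation: threads stand in for the separately launched computing programs, the returned dictionary stands in for the state report sent to the client over TCP/IP, and all names are assumptions.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def monitor_node(task_fn, tasks):
    """Run the node's tasks with one worker per CPU core and return the
    status payload the monitor would report back to the client."""
    cores = os.cpu_count() or 1            # read the node's core count
    with ThreadPoolExecutor(max_workers=cores) as pool:
        results = list(pool.map(task_fn, tasks))
    # the state that would be returned to the client over TCP/IP
    return {"cores": cores, "completed": len(results), "results": results}
```

The real system would launch separate computing programs (processes) under MPICH rather than threads; the core-counting and state-reporting structure is the point of the sketch.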
The application discloses the following technical effects:
according to the method, through the design of parallel computing, the computing efficiency of the system is improved, and a new technical idea is provided for the prediction of storm flood.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a parallel computing structure of a flooding model according to the application;
FIG. 2 is a schematic diagram of a parallel computing system according to the present application;
FIG. 3 is a schematic diagram of server parallel scheduling in accordance with the present application;
FIG. 4 is a schematic structural diagram of a flood model system according to the present application;
FIG. 5 is a schematic diagram of the parallel computing architecture according to the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
Referring to FIGS. 1-5, the application provides a storm flood model system based on massive parallel computing. The system is constructed as follows:
First, small-watershed unit data extraction and small-watershed and grid model construction are performed based on the results of mountain torrent disaster investigation and evaluation and on typhoon rainfall data. The data are processed with the CART model developed in this application, a model database is constructed, the model is deployed on the computing units, and simulation is then performed according to the parallel computing principle.
1. Small-watershed data extraction:
The soil texture data are used to analyze the correspondence between soil texture type and slope infiltration characteristics of each small watershed. The data are rasterized and the infiltration coefficient of each grid cell of the small watershed is computed; cells with the same infiltration coefficient are then averaged, weighted by their share of all cells in the watershed, to obtain the stable infiltration rate and maximum infiltration rate of each small watershed's underlying surface, from which the infiltration characteristic parameters of the small watershed are generalized. The nonlinear runoff-generation characteristics of the small watershed are further analyzed, providing basic data for building the mixed runoff-generation model.
2. Soil texture data processing:
A soil type data set is collected and collated to obtain the proportions of clay, silt and sand mineral particles in the soil. Interpolation analysis using the international (ISSS) soil texture classification standard yields soil texture data for the 0-20 cm surface soil, which is corrected against hydrological characteristics to obtain the soil texture type data. Based on hydrological characteristics, five additional types are added: rock, stone and gravel, water areas, towns, and hardened ground.
3. Small-watershed infiltration characteristic parameters:
A small watershed has a short concentration time, large variation in rain intensity, and markedly nonlinear runoff generation. Mixed runoff-generation theory can better simulate the nonlinear runoff process of a small watershed. Based on the small-watershed soil texture classification results, the 7 small-watershed runoff characteristic parameters required by the mixed runoff model (saturated water content, residual water content, soil field capacity, soil wetting-front capillary suction, hydraulic conductivity, and the soil water-retention curve Van Genuchten model parameters alpha and n) are statistically analyzed, yielding parameter values for the 12 standard soils and for stone and crushed gravel. The 3 types of rock, water areas, and towns and hardened ground can be treated as impermeable layers.
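For reference, the Van Genuchten parameters (alpha and n) named above belong to the standard Van Genuchten soil water-retention model from the hydrology literature (this equation is not quoted from the patent):

```latex
\theta(h) \;=\; \theta_r \;+\; \frac{\theta_s - \theta_r}{\bigl[\,1 + (\alpha\,|h|)^{n}\,\bigr]^{m}},
\qquad m = 1 - \tfrac{1}{n},
```

where theta_s and theta_r are the saturated and residual water contents listed among the seven runoff characteristic parameters, and h is the pressure head.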
4. Soil infiltration response depth and runoff time:
The storm response depth is a key parameter determining the soil's water storage capacity, and the runoff-generation time reflects how quickly the soil responds to rainfall. The infiltration process curves of the 9 soil types under different initial water contents and rainfall processes (within 6 hours) are solved using the Richards equation with a finite element numerical method, yielding the maximum response depth and runoff-generation time of the different soil texture types under short-duration storm conditions.
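For reference, the one-dimensional vertical form of the Richards equation that the finite element method above solves is standard in the literature (not quoted from the patent):

```latex
\frac{\partial \theta}{\partial t}
\;=\; \frac{\partial}{\partial z}\!\left[\, K(h)\left(\frac{\partial h}{\partial z} + 1\right) \right],
```

where theta is the volumetric water content, h the pressure head, K(h) the unsaturated hydraulic conductivity, and z the vertical coordinate (positive upward).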
5. Collecting and collating related soil data sets:
Soil type at a minimum resolution of 250 m, soil texture types at different depths, and hydraulic parameters such as saturated water content, effective water content and wilting-point water content are obtained; these are important references for studying the runoff characteristics of small watersheds.
6. Small-watershed model modeling:
6.1 Manual division of computing units: each unit must be a complete watershed with only one outlet, each node can correspond to only one computing unit, and the watershed area should be about 120-200 square kilometers. In addition, a reservoir must be an independent computing unit (8.5.2020 update: reservoir upstream areas < 10 km² are merged with the reservoir, and coastline units are merged with the units sharing the same outlet node).
A field WSCU_Name is added to the file (data type: text) to hold the computing unit number. Numbering rule: the first 5 characters of the small-watershed number, followed by the area code and a serial number; the serial number ensures that the ID is not repeated. Using the attribute field calculator in ArcGIS: Left([WSCD], 5) & "_<area code>_<serial number>".
6.2 Topology coding of the computing units:
Export the attribute table of the WATA file to which the WSCU_Name field has been added (the attribute table of the file before the 1-c fusion): define a query that excludes areas where WSCU_Name is NULL, then open the attribute table and export it as a dbf file.
Open the dbf file in Excel and replace every comma in IWSCD with a semicolon (note: English characters; all commas are replaced with semicolons), and delete the rows whose GB field is 250100. Save the file in csv format.
Open WSTopolity.py (a test run on Python 3.7 showed no problems), modify input_path (marked in red) to the storage path and filename of the csv from 5-b, and modify output_path to the topology output path (freely settable). Running the program yields the computing unit topology table.
Program description:
The structure is constructed from each small watershed's upstream inlet list, downstream outlet, outlet number, type field and computing unit name.
A hash map watershedMap keyed by small-watershed number and a hash map unitMap keyed by unit name are initialized.
The input file is traversed to build watershedMap and unitMap.
The program loops over unitMap and, for each small watershed of the same computing unit, counts its upstream inlet nodes FCD, as well as the outlet number ONDCD and outlet OCD of the computing unit.
The output comprises: serial number ID, computing unit name UNITCD, upstream computing unit FCD, downstream computing unit OCD, computing unit outlet number ONDCD, and classification code GB.
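The program description above can be sketched in Python as follows. The record layout is a simplification, the single-outlet-per-unit assumption comes from section 6.1, and the field handling is illustrative rather than the patent's exact code.

```python
def build_topology(records):
    """records: dicts with keys WSCD (small-watershed number), UNITCD
    (computing unit name), FCD (list of upstream small-watershed numbers),
    OCD (outlet), ONDCD (outlet number), GB (classification code)."""
    watershedMap, unitMap = {}, {}
    for rec in records:                       # one pass over the input file
        watershedMap[rec["WSCD"]] = rec
        unitMap.setdefault(rec["UNITCD"], []).append(rec)
    rows = []
    for i, (unit, basins) in enumerate(unitMap.items(), start=1):
        # upstream computing units: units feeding any basin of this unit
        fcd = sorted({watershedMap[f]["UNITCD"]
                      for b in basins for f in b["FCD"]
                      if f in watershedMap
                      and watershedMap[f]["UNITCD"] != unit})
        out = basins[0]   # each unit has a single outlet (see 6.1)
        rows.append({"ID": i, "UNITCD": unit, "FCD": fcd,
                     "OCD": out["OCD"], "ONDCD": out["ONDCD"],
                     "GB": out["GB"]})
    return rows
```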
In Excel, replace all UNITCD values corresponding to GB = 410402 with -1 (take care not to over-replace, e.g. whf62_3_11 being wrongly changed to whf62_3_1); afterwards, delete all ";" and ";-1" strings (replace them with blanks).
Compare against ArcGIS to check whether the topology is correct (errors may stem from unit division, so this is effectively a secondary check of the unit division).
6.3 Splitting the watershed, river channel and node files:
Copy the original RIVL data; delete the coastline and the reservoir edge lines by selecting them by attribute in the attribute table and then deleting:
GB = 210501, RVLEN = 0: reservoir edge lines
GB = 250200: coastline
Assign the computing unit name to the river channel line file RIVL using the Analysis Tools > Overlay > Spatial Join tool. Set it up as shown in the figure and name the output RIVL_<area code>_sj.shp; after the join, delete the small watersheds whose WSCU_Name is empty. Copy the small-watershed (wata_<zone number>.shp), river channel (RIVL_<zone number>_sj.shp) and original NODE.shp files into two folders ("hainan <zone number>_project", for hydrological model entry into the database, and "hainan <zone number>\base_data", for grid model data modeling). In this first part, only the files in the hydrological-model entry folder are manipulated.
Check the shp attribute tables of WATA and the river channels in ArcGIS, ensuring that there are no fields other than the original fields and the newly added WSCU_Name field; if there are redundant fields, delete them in the ArcGIS attribute table.
Run the prepared Matlab program (splitRIVL.m) to split WATA, RIVL and NODE into computing units; the program automatically splits the small-watershed, river channel and node layers into the corresponding folders by computing unit, and at the same time outputs a batch.xls for batch clipping.
6.4, clipping the land use and soil texture shp files:
First open the land use and soil layers in ArcGIS and name them "USLU" and "SLTA" respectively.
Copy the batch.xls file generated in step 3-d, right-click Analysis-Extract-Clip and choose Batch, and paste the table into the opened batch dialog to generate the entries in batch. After setting the options as shown in the figure, click OK; the clipping results are written automatically to the corresponding folders.
6.5, warehousing shp files in each folder:
Create a new project named "place name-area code", import all the computing units into the same project file, and check them after import.
The key points of the inspection are as follows:
the window correctly displays the small watershed, river channel and nodes of each computing unit;
basic information management contains the attribute information of the watershed data, river channel data and node data;
parameter data management contains all four required tables, with none missing.
7. Modeling a grid model:
7.1, grid data preparation:
The grid data required include DEM (elevation), SLOPE (gradient), USLU (land use) and SLTA (soil texture). DEM, SLOPE, USLU and SLTA are clipped after being processed into the projection zone, and the RIVL data must be rasterized after the river topology has been established. Each dataset is named with the English abbreviation above.
The DEM uses the GMTED2010 global elevation dataset at 30 arc-second resolution (about 1 km × 1 km; aggregated data, aggregation method: average). First resample it to 0.01° in ArcGIS (Resample tool, BILINEAR interpolation). Then convert the projection (UTM zone 49N, Project Raster tool, default resolution, BILINEAR interpolation). Using the Clip (Data Management) tool, set the clip extent to the shp layer of the entire region (the red box in the figure). Name the result DEM_UTM.tif; it is the template for all grid data.
Calculate the gradient with the Slope (Spatial Analyst) tool; name the output SLOPE_UTM and leave the parameters at their defaults.
Convert USLU and SLTA to grids with the Feature to Raster (Conversion) tool; for field handling refer to P17 and P19 of PPT2, select the new field as the conversion Field, and set the Output cell size to the DEM_UTM.tif file. Set the processing extent in the Environments dialog accordingly. Name the results USLU_UTM.tif and SLTA_UTM.tif.
Put the four grid files obtained above into the "hainan area code\hainan_ras" folder (the "hainan area code" folder was created in step one-3-b).
7.2, river topology establishment:
In ArcGIS, open the file "WATA_area code.shp" in the "hainan area code\base_data" folder from step one-3-b. Using the Data Management Tools-Generalization-Dissolve tool, select the newly added computing unit number as the Dissolve_Field to merge the small watersheds having the same computing unit number; the WSAREA of the small watersheds is preserved.
Split each computing unit into a separate file using the Analysis Tools-Extract-Split tool, with file names of the form "computing unit number.shp";
Add RVCD_1, TOCD and ID fields to the "RIVL_area code_sj.shp" file; all three fields are of type Integer (INT).
Using the Project tool, convert NODE.shp to the UTM projection (same zone and grid as before) and name it NODE_UTM.shp.
Run "splitRIVL1.m" in MATLAB; the three input files correspond to "WATA_area code.shp", "RIVL_area code.shp" and "NODE.shp" in the base_data folder. The small watershed, river channel and node layers of every computing unit are output into the shp folder. Then topologically encode the river channel data in each folder; for the coding method refer to P24-25 of PPT2. RVCD_1 is the river reach serial number, TOCD is the downstream reach serial number (0 at the outlet), and ID is the river classification. The coding rules are as follows:
RVCD_1 is numbered by river order, layer by layer and clockwise within a layer: number the first-order reaches before the second-order reaches; within the same order, number the first layer first (i.e. the source-most reaches; each node passed enters the next layer) and then the second layer; within the same layer, order the reaches clockwise starting from the outlet node.
The river classification follows the ordering principle: source reaches are order 1; when two reaches of the same order meet, the downstream reach rises by one order; when reaches of different orders meet, the downstream reach keeps the higher order. A reach into which no node converges keeps its order unchanged.
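The classification rule above is the classic Strahler stream ordering and can be sketched from the TOCD topology alone. The dictionary input format and reach ids are illustrative assumptions; this is not the actual splitRIVL1.m implementation:

```python
from collections import defaultdict


def strahler_orders(tocd):
    """Compute the river order (ID field) from reach topology.

    tocd maps reach id -> downstream reach id (0 for the outlet reach).
    Source reaches are order 1; the order rises by one only when two or more
    upstream reaches of the current highest order meet, otherwise the reach
    keeps the highest upstream order.
    """
    upstream = defaultdict(list)
    for rid, down in tocd.items():
        if down:
            upstream[down].append(rid)

    order = {}

    def solve(rid):
        if rid in order:
            return order[rid]
        ups = upstream[rid]
        if not ups:                       # source reach
            order[rid] = 1
        else:
            up_orders = [solve(u) for u in ups]
            top = max(up_orders)
            # rise one level only if the highest order arrives more than once
            order[rid] = top + 1 if up_orders.count(top) > 1 else top
        return order[rid]

    for rid in tocd:
        solve(rid)
    return order
```

For a simple Y-shaped network (reaches 1 and 2 joining into reach 3), both sources are order 1 and the downstream reach becomes order 2, matching the rule in the text.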
7.3, clipping and processing grid data:
Run [raster2ascii.m] in MATLAB to check whether any river topology coding is missing; if so, a rivltopocheck.xls file is output in the same directory, recording the units without river topology coding. After filling in the missing coding, delete the rivltopocheck.xls file and run [exptshpattr.m] again, until no rivltopocheck.xls is output.
Run [RIVL_ToRaster.m], [extract_by_mask_band.m] and [extract_by_mask_batchsample.m] in MATLAB; four xls files numbered 1, 2, 3 and 5 will appear in the batch folder.
Batch-process the projection in ArcGIS: copy [1.batch_rivl_prj.xls] into the batch processing window table and click OK;
Drag DEM_UTM into ArcGIS, use the batch function of Feature to Raster, set the environment variables, select DEM_UTM as the extent, and modify the last two digits of the upper and lower extent values.
Paste [2.batch_rivl2raster.xls] from the batch folder and run the batch process; once the results are produced, check whether the grids are aligned.
Extract the DEM by mask (Extract by Mask): paste [3.batch_extract_date.xls] from the batch folder, run the batch process, and set the environment variables as follows.
Use QGIS to batch-clip DEM, SLOPE, USLU and SLTA. Create a new output folder under the "hainan area code" folder, then run gen-clip-bat.py (first modify the paths on lines 87 and 90 of the script). Four bat files (excluding single.bat) are generated in the folder.
Modify the Python default environment and run the four bat files. Clipping may fail for some units: find the corresponding shp file in the WATA folder and check whether the polygon edges contain holes; if so, remove the holes in editor mode, manually modify the batch file, and copy it to single.bat. After all clipping is complete, copy the files into the ras folder.
Run the [batch_clip_ras.m] program in MATLAB to clip the ras batch to the computing unit size.
Convert the grids to ASCII with ArcGIS: paste [5.batch_extract_subscriber.xls] from the batch folder and run the batch process.
7.4, modeling grid data:
Run hydro1-6.py in Python.
Run outflow_hru.m in MATLAB.
Run crt_bat.m in MATLAB, then run all_crt.bat, noting whether each unit runs successfully.
Run hydro7-8.py in Python.
Run copy_file.py in Python to copy the input files required by the model into the results folder and generate the groovy control files in batch.
Run get_extension_index.m in Matlab to obtain the bounding extent of each computing unit and the lower-left coordinates of the corresponding UTM-projected grid, as input for the precipitation data clipping.
Select a period of precipitation data and place it in the raindata folder, run raindata.py in Python, and extract the precipitation data for each unit.
Run createRainIndex.m in Matlab to build the precipitation data index.
Repeat step one-2-c; the output file is named CUTopo4ras.csv; copy it into the results folder and run copyinputq.m.
Run run_bat.m to obtain all_run.bat. If all the output log files are empty, the modeling has no problems.
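The final check (every log file empty means success) can be sketched as follows, assuming all_run.bat leaves one .log file per unit in a known directory; the directory layout is an assumption:

```python
from pathlib import Path


def failed_units(log_dir):
    """Return the names of non-empty .log files.

    An empty list means every computing unit's modeling run left an empty
    log, i.e. the modeling has no problems.
    """
    return sorted(p.name for p in Path(log_dir).glob("*.log")
                  if p.stat().st_size > 0)
```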
8. CRT model program development:
A tool (CRT) for generating grid confluence relationships is compiled. The flow direction between grids can use either the D4 or the D8 method, and the confluence percentage between grids can be expressed in two ways: averaging, in which the same amount of water flows to each downstream grid; or distributing the flow in proportion to the elevation difference. The compiled exe file runs on Windows systems.
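The two confluence-percentage methods can be sketched as follows. This is an illustrative reimplementation of the idea, not the compiled CRT tool itself, assuming the DEM is a simple 2-D list of elevations:

```python
# Neighbour stencils: D4 is the four edge neighbours, D8 adds the diagonals.
D4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
D8 = D4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]


def flow_fractions(dem, r, c, stencil=D8, proportional=True):
    """Return {(row, col): fraction} of the flow leaving cell (r, c).

    Flow goes only to strictly lower neighbours; it is either split equally
    (the "average" method) or in proportion to the elevation drop.
    """
    rows, cols = len(dem), len(dem[0])
    drops = {}
    for dr, dc in stencil:
        rr, cc = r + dr, c + dc
        if 0 <= rr < rows and 0 <= cc < cols and dem[rr][cc] < dem[r][c]:
            drops[(rr, cc)] = dem[r][c] - dem[rr][cc]
    if not drops:
        return {}                        # pit or outlet cell
    if proportional:                     # weight by elevation difference
        total = sum(drops.values())
        return {cell: d / total for cell, d in drops.items()}
    share = 1.0 / len(drops)             # equal split ("average" method)
    return {cell: share for cell in drops}
```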
9. Parallel computing design:
As shown in FIG. 1, since the simulation area is large and the simulation time on a single server is long, the application develops a distributed parallel computing technique based on a virtualized computing resource pool and performs dynamic continuous flood analysis over groups of simulation computing units, achieving parallel simulation across space, time and sub-processes.
Based on the watershed grid topology, the application proposes the following principles for dividing flood model computing units:
9.1, independent large watersheds are divided separately;
9.2, the computing unit area must be smaller than 1000 square kilometers, and an outlet node may serve as the outlet of only one computing unit;
9.3, computing unit naming: the first 5 digits of the small watershed code + the local area code + a numeric code.
Parallel computation is performed from three aspects of space, time and subprocess:
Spatial parallelism: the model comprises multiple river channels and multiple simulation units (slopes and grids); taking the computational dependencies among simulation units into account, the computation tasks of different simulation units are distributed to multiple computing units for parallel computation through spatial decomposition.
Temporal parallelism: the model simulation proceeds over many time steps in a continuous sequence, with the output of each step serving as the input of the next.
Sub-process parallelism: the flood model involves numerous runoff-generation and confluence sub-processes, which are computed in parallel.
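The spatial decomposition above can be sketched as a wave scheduler: units whose upstream units have all finished may run concurrently. The dependency-map input format is an assumption for illustration:

```python
def schedule_waves(upstream):
    """Group computing units into parallel-executable waves.

    upstream maps unit -> set of units it depends on (its upstream units).
    All units within one wave have no unfinished dependencies and can be
    computed in parallel; waves run in sequence.
    """
    pending = {u: set(deps) for u, deps in upstream.items()}
    done, waves = set(), []
    while pending:
        ready = sorted(u for u, deps in pending.items() if deps <= done)
        if not ready:
            raise ValueError("cyclic dependency between computing units")
        waves.append(ready)
        done.update(ready)
        for u in ready:
            del pending[u]
    return waves
```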
As shown in FIG. 2, the model simulation of the present application adopts a cluster strategy. A cluster is an aggregate of independent computers (nodes) connected by a high-performance network; each node can serve interactive users as a single computing resource, or cooperate with the others and present itself as one centralized computing resource for parallel computing. A cluster is a low-cost, easy-to-build and highly scalable parallel architecture, and client computers of differing performance can be incorporated into the cluster system.
10. Data slice parsing
The binary rainfall data are converted into ASCII rainfall files. In readrain.py, left, bottom, top and right are the extent of the computation area (geographic coordinates, i.e. in degrees); ncols and nrows are the numbers of columns and rows; xllcorner and yllcorner are the lower-left corner coordinates of the computation extent (projected coordinate system, i.e. in meters); cellsize is the grid cell size in meters.
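A minimal parser for the header fields named above (ncols, nrows, xllcorner, yllcorner, cellsize) can be sketched as follows; the exact layout of the converted files is an assumption based on the standard ESRI ASCII grid header:

```python
def parse_ascii_header(lines):
    """Parse 'key value' header lines of an ESRI ASCII grid into a dict.

    Column/row counts become ints; corner coordinates and cell size floats.
    """
    header = {}
    for line in lines:
        key, value = line.split()
        key = key.lower()
        header[key] = int(value) if key in ("ncols", "nrows") else float(value)
    return header
```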
11. Database management call
The application uses a SQL Server database to manage and call the parameter configuration information required for distributed parallel computation. The China typhoon flood model system comprises, from top to bottom, computing node servers (ComputeNode), groups (Group) and computing units (Unit). To make full use of the flood model program, shorten flood analysis computation time and improve the efficiency of mass data interaction, the system divides several computing units (Unit) into one group (Group); to meet the parallel computing requirement, multiple groups (Group) of the same watershed are assigned to one computing node server (ComputeNode). Accordingly, the database management parameters fall into two classes: the parameter information of the computing units of each province and city, and the parallel scheduling information of the servers, as shown in FIG. 3.
11.1, storage of computing Unit parameter information
As the smallest unit of flood model scheduling, the basic information of a computing unit (Unit) mainly comprises the computing unit code, name, upstream and downstream topological unit information, the river the unit lies on, the unit outlet node, the computing group it belongs to, its bounding-extent attribute information and its rasterized attribute information. The computing unit is the core product of flood model modeling. Its modeling result files include the small watershed parameter file, river channel parameter file, node parameter file, land use parameter file, soil texture parameter file, elevation (dem) parameter file and the flood model calculation parameter file (x_par.csv); the modeling results are stored mainly as files on the computing node servers. The computing unit basic information table in the database enables the system to dynamically slice and parse typhoon rainfall data according to the basic information of each computing unit and to output the flood inundation extent result datasets accurately to the designated geographic positions.
11.2, storing parallel scheduling information of the server:
To make full use of the flood computation program, shorten flood analysis computation time and improve the efficiency of mass data interaction, the system divides several computing units (Unit) into one group (Group), and one flood computation program calls the data of one group (Group) for calculation; to meet the parallel computing requirement, multiple groups (Group) of the same watershed are divided among computing node servers (ComputeNode), which are allocated by first-level watershed.
The Group information table is the flood computation program information table; each group corresponds to one flood computation program. Its fields mainly comprise: program number, program name, computing node the program belongs to, physical address of the program data storage, program-computing unit association ID, program step size setting, IP address of the computing node the program belongs to, computation result type, return parameter setting, test data start time, program parameter configuration and the province the group belongs to.
11.3, computing node server (ComputeNode) resource allocation information table:
This table performs resource allocation and scheduling management for the parallel computation programs; its basic information mainly comprises the computing node server name, IP address, first-level watershed name and connection information.
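The three-level ComputeNode-Group-Unit hierarchy described in sections 11.1-11.3 can be sketched as relational tables. SQLite stands in here for the SQL Server database actually used, and all table and column names are illustrative assumptions:

```python
import sqlite3

# Three-level hierarchy: a node server hosts groups, a group holds units.
SCHEMA = """
CREATE TABLE ComputeNode (
    node_id   INTEGER PRIMARY KEY,
    name      TEXT,
    ip        TEXT,
    basin     TEXT       -- first-level watershed the node is allocated to
);
CREATE TABLE GroupInfo (
    group_id  INTEGER PRIMARY KEY,
    node_id   INTEGER REFERENCES ComputeNode(node_id),
    data_path TEXT,      -- physical address of the program data storage
    step_size REAL       -- program step size setting
);
CREATE TABLE Unit (
    unit_code   TEXT PRIMARY KEY,
    group_id    INTEGER REFERENCES GroupInfo(group_id),
    name        TEXT,
    outlet_node TEXT
);
"""


def open_registry():
    """Create an in-memory scheduling registry with the three tables."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    return conn
```

A query joining the three tables then answers scheduling questions such as "which units run on the node serving basin WH".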
12. Computing resource allocation:
Flood model practice faces a large number of computation tasks characterized by little input and output, heavy memory consumption, a heavy computational load and no data interaction during execution. Such tasks can be packaged as single-thread executable programs and pre-deployed on all computing nodes; a node monitoring module is set on each node, and a process dynamic allocation technique is used to invoke multiple computation programs in parallel. The specific implementation steps are as follows:
the client remotely invokes the node monitoring modules on the computing nodes participating in the computation task through an MPICH parallel computing command;
the node monitoring module reads the number of processing cores of the node's CPU in real time and starts a corresponding number of computation programs to execute in parallel, monitoring the running states of all computation programs on the current node in real time;
the node monitoring module returns the computation state of the current node to the client through the TCP/IP network communication protocol;
the client collects the state return values of all nodes participating in the computation task and then performs statistical analysis on the computation results;
and the client reports the result of the computation task to the server.
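The core-counting and parallel-launch steps above can be sketched with a process pool. The worker is a placeholder for the packaged single-thread computing program, and this is only an illustration of the dynamic allocation idea, not the MPICH-based implementation itself:

```python
import multiprocessing as mp
import os


def run_unit(unit_id):
    """Placeholder for one packaged single-thread computing program."""
    return unit_id, "finished"


def monitor_node(unit_ids):
    """Run one computing program per CPU core and collect each state.

    The pool size follows the number of processing cores, mirroring the
    node monitoring module's dynamic process allocation.
    """
    cores = os.cpu_count() or 1
    # prefer fork where available so the sketch also runs from a plain script
    ctx = mp.get_context("fork") if "fork" in mp.get_all_start_methods() else mp
    with ctx.Pool(processes=cores) as pool:
        return dict(pool.map(run_unit, unit_ids))
```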
In summary, if X computing nodes participate in one computation task and each node has Y CPU cores, then X·Y computation programs can execute in parallel simultaneously, giving X·Y times the single-thread execution efficiency. This design has been applied in practice in the application and achieves good results. The composition of the China typhoon flood model system and of the parallel computing system is shown in FIGS. 4-5.
12.1, setting up the server side:
The server program can start several client programs simultaneously for computation and allocates the available computing nodes to the clients. As shown in the figure, the server program simultaneously starts the computing node servers of the 8 large watersheds to perform flood computation.
12.2, setting up the client:
The client is responsible for processing computation tasks, sending and receiving input and output data, and communicating with the node monitoring modules on all computing nodes. In the application, a client is a computing node server (ComputeNode) and represents one first-level watershed; if a client is configured with the Zhujiang (Pearl River) basin (WH), then when the client starts, all computing unit programs contained in the Zhujiang (Pearl River) basin (WH) participate in the computation.
12.3, setting up the node monitoring module:
The node monitoring module monitors the execution of the computation programs running on its node. In the application, a node communication module corresponds to one flood model program (Group); after the flood model program starts, the computing units in the group are calculated, and the computation result datasets are stored as files under the client-specified path.
12.4, computation program:
The computation program is responsible for executing specific computation tasks and covers the numerous runoff-generation and confluence sub-processes of the flood model. It is the smallest component of resource scheduling; in the application it corresponds to a computing unit (Unit), each computing unit comprising a complete flood model runoff-generation and confluence process.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In the description of the present application, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A storm flood model system based on massively parallel computing, comprising:
the data processing module is used for analyzing and acquiring basic data of the storm flood model through the soil texture data;
the calculation unit module is used for carrying out small-drainage-basin analysis according to the basic data;
the parallel computing module is used for controlling a plurality of computing unit modules and carrying out parallel analysis on a plurality of small watersheds between an upstream node and a downstream node of a primary watershed;
the node server distribution module is used for distributing the parallel calculation module to a node server for calculation according to the primary drainage basin, and predicting the storm flood risk of the primary drainage basin;
the database module is used for storing the soil texture data, the basic data, the small-drainage-basin analysis data, the parallel calculation result, the primary drainage basin composition data and the prediction data;
and the database logic control module is used for controlling the database module to interact with other functional modules to realize the module functions of the system.
2. The massive parallel computing-based stormwater flood model system as claimed in claim 1, wherein:
the data processing module analyzes the correspondence between soil texture type and the infiltration characteristics of small watershed slopes using the soil texture data, performs rasterization, calculates the permeability coefficient of each grid cell of the small watershed, takes a weighted average of grid cells having the same permeability coefficient according to their proportion among all grid cells in the watershed to obtain the steady infiltration rate and maximum infiltration rate of the underlying surface of each small watershed, preliminarily determines the infiltration characteristic parameters of the small watershed, and obtains the basic data of the model by analyzing the nonlinear runoff-generation characteristics of the small watershed.
3. The massive parallel computing-based stormwater flood model system as claimed in claim 2, wherein:
the data processing module is further used for obtaining the combination proportions of clay, silt and sand particles among the mineral particles in the soil, performing interpolation analysis to obtain the soil texture data of the 0-20 cm surface soil, correcting it according to hydrological characteristics to obtain soil texture type data, and obtaining the maximum response depth and runoff-generation time of each soil texture type under short-duration heavy rain conditions from the small watershed runoff-generation characteristic parameters to construct the basic data, wherein the small watershed runoff-generation characteristic parameters comprise saturated water content, residual water content, soil field capacity, soil wetting-front capillary suction, hydraulic conductivity and soil water retention curve model parameters.
4. A massive parallel computing based stormwater flood model system as claimed in claim 3, wherein:
the computing unit module is further used for constructing a structure from the upstream inlet list, downstream outlet, outlet number, type field and computing unit name of each small watershed; initializing a hash map watershedMap keyed by small watershed number and a hash map unitMap keyed by unit name; traversing the input file to build the watershedMap and unitMap; looping over the unitMap to collect, for each small watershed of the same computing unit, the upstream inlet nodes FCD and the computing unit outlet number ONDCD and outlet OCD; the output result comprises: serial number ID, computing unit name UNITCD, upstream computing unit FCD, downstream computing unit OCD, computing unit outlet number ONDCD and classification code GB.
5. The massive parallel computing-based stormwater flood model system as claimed in claim 4, wherein:
the computing unit module is also used for splitting watershed, river channel and node files, wherein the splitting process comprises the following steps:
copying original RIVL data, deleting a coastline, deleting an edge line of a reservoir, selecting according to attributes in an attribute table, and then deleting the edge line of the reservoir and the coastline;
assigning the computing unit name to the river line file RIVL by using the Analysis Tools-Overlay-Spatial Join tool;
checking the attribute tables of the shps of the WATA and the river in the ArcGIS, ensuring that no other fields except the original field and the newly added WSCU_Name field exist, and deleting the redundant fields in the attribute tables of the ArcGIS;
running the written Matlab program, and splitting WATA, RIVL and NODE into the computing unit modules.
6. The massive parallel computing-based stormwater flood model system as claimed in claim 5, wherein:
the parallel computing module is also used for establishing the river channel topology from grid data comprising elevation, gradient, river channel, land use and soil texture, clipping and processing the grid data, and modeling and analyzing the small watersheds.
7. The massive parallel computing-based stormwater flood model system as claimed in claim 6, wherein:
the modeling process of the parallel computing module comprises the following steps:
running hydro1-6.py in Python;
running outflow_hru.m in MATLAB;
running crt_bat.m in MATLAB, then running all_crt.bat, noting whether each unit runs successfully;
running hydro7-8.py in Python;
running copy_file.py in Python to copy the input files required by the model into the results folder and generate the groovy control files in batch;
running get_extension_index.m in Matlab to obtain the bounding extent of each computing unit and the lower-left coordinates of the corresponding UTM-projected grid;
selecting a period of precipitation data to place in the raindata folder, running raindata.py in Python, and extracting the precipitation data for each unit;
and running createRainIndex.m in Matlab to build the precipitation data index.
8. The massive parallel computing-based stormwater flood model system as claimed in claim 7, wherein:
the parallel computing module performs parallel computation in three respects: space, time and sub-processes, wherein spatial parallelism means that, taking the computational dependencies among simulation units into account, the computation tasks of different simulation units are distributed to multiple computing units for parallel computation through spatial decomposition;
temporal parallelism: the model simulation proceeds over many time steps in a continuous sequence, with the output of each step serving as the input of the next;
sub-process parallelism: the flood model involves numerous runoff-generation and confluence sub-processes, which are computed in parallel.
9. The massive parallel computing-based stormwater flood model system as claimed in claim 8, wherein:
the database module is used for storing rainfall data converted from binary rainfall data into ASCII files, and comprises computing unit parameter information, server parallel scheduling information and computing node server resource allocation information, wherein the computing unit parameter information comprises basic data and computing unit modeling result files, and the basic data comprise: computing unit code, name, upstream and downstream topological unit information, the river the unit lies on, the unit outlet node, the computing group it belongs to, the bounding-extent attribute information of the computing unit and the rasterized attribute information of the computing unit; the computing unit modeling result files comprise: a small watershed parameter file, a river channel parameter file, a node parameter file, a land use parameter file, a soil texture parameter file, an elevation parameter file and a flood model calculation parameter file;
the server parallel scheduling information includes: program number, program name, computing node to which the program belongs, program data storage physical address, program and computing unit association ID, program step length setting, computing node IP address to which the program belongs, computing result type, return parameter setting, test data starting time, program parameter configuration and region where the group is located;
the computing node server resource allocation information includes: node server name, IP address, primary basin name, and connection information are calculated.
10. The massive parallel computing-based stormwater flood model system as claimed in claim 9, wherein:
the node server distribution module is used for setting a node monitoring module at each node, adopting a process dynamic distribution technology, and calling a plurality of calculation programs to work in parallel, and specifically comprises the following steps:
the client side remotely invokes the node monitoring module on the computing node participating in the computing task through the MPICH parallel computing command;
the node monitoring module reads the number of processing cores of the node CPU processor in real time, and starts a corresponding number of calculation programs to execute in parallel according to the number of the CPU cores; monitoring the running states of all calculation programs on the current node in real time;
the node monitoring module returns the calculation state of the current node to the client through a TCP/IP network communication protocol;
the client is in charge of collecting state return values of all nodes participating in the calculation task, and then carrying out statistical analysis on the calculation result;
and the client reports the result of the calculation task to the server.
CN202310762951.1A 2023-06-27 2023-06-27 Storm flood model system based on massive parallel computing Pending CN116976499A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310762951.1A CN116976499A (en) 2023-06-27 2023-06-27 Storm flood model system based on massive parallel computing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310762951.1A CN116976499A (en) 2023-06-27 2023-06-27 Storm flood model system based on massive parallel computing

Publications (1)

Publication Number Publication Date
CN116976499A true CN116976499A (en) 2023-10-31

Family

ID=88470362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310762951.1A Pending CN116976499A (en) 2023-06-27 2023-06-27 Storm flood model system based on massive parallel computing

Country Status (1)

Country Link
CN (1) CN116976499A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507549A (en) * 2020-12-03 2021-03-16 中国水利水电科学研究院 Modular hydrological simulation system
CN115618584A (en) * 2022-09-30 2023-01-17 苏州九张通算信息产业有限公司 Urban rainstorm waterlogging simulation data processing method based on distributed computation

Non-Patent Citations (1)

Title
ZHAI Xiaoyan et al., "Development and application of the China flash flood hydrological model: a case study of small and medium-sized catchments in Anhui Province", Journal of Basic Science and Engineering, vol. 28, no. 05, pages 1018-1036 *

Similar Documents

Publication Publication Date Title
CN103559375B (en) The numerical simulation of scheduling engineering water correction and visual simulation system
Zeng et al. Designing and implementing an SWMM-based web service framework to provide decision support for real-time urban stormwater management
Erener et al. Landslide susceptibility assessment: what are the effects of mapping unit and mapping method?
Paudel et al. Comparing the capability of distributed and lumped hydrologic models for analyzing the effects of land use change
CN103092572A (en) Parallelization method of distributed hydrological simulation under cluster environment
CN111553963A (en) Meta-grid generation method and device based on geographic information
Faudzi et al. Two-dimensional simulation of sultan Abu Bakar dam release using HEC-RAS
Jagadeesh et al. Flood plain modelling of Krishna lower basin using ArcGIS, HEC-GeoRAS and HEC-RAS
Zhi et al. A 3D dynamic visualization method coupled with an urban drainage model
Wang et al. An integrated method for calculating DEM-based RUSLE LS
Zhang et al. Hydrologic information extraction based on arc hydro tool and DEM
Mantilla et al. Extending generalized Horton laws to test embedding algorithms for topologic river networks
CN116882741A (en) Method for dynamically and quantitatively evaluating super-standard flood disasters
CN116976499A (en) Storm flood model system based on massive parallel computing
Hu et al. Batch modeling of 3D city based on Esri CityEngine
Ghosh et al. Fractal generation of artificial sewer networks for hydrologic simulations
CN115797589A (en) Model rendering method and device, electronic equipment and storage medium
CN115618584A (en) Urban rainstorm waterlogging simulation data processing method based on distributed computation
Roy et al. 3D web-based GIS for flood visualization and emergency response
CN114648617A (en) Water system extraction method based on digital elevation model DEM
Kraemer et al. Automating arc hydro for watershed delineation
Abdalla et al. WebGIS-based flood emergency management scenario
Guo et al. Application of GIS and remote sensing techniques for water resources management
Liu et al. Analysis and dynamic simulation of urban rainstorm waterlogging
Branger et al. Use of open-source GIS for the pre-processing of distributed hydrological models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination