CN112560215A - Electric power line selection method based on deep reinforcement learning - Google Patents

Electric power line selection method based on deep reinforcement learning

Info

Publication number
CN112560215A
CN112560215A
Authority
CN
China
Prior art keywords
line selection
model
power line
areas
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011561371.9A
Other languages
Chinese (zh)
Other versions
CN112560215B (en)
Inventor
宋军
詹伟
苗田
张中
孔龙
罗伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gansu Diantong Electric Power Engineering Design Consulting Co ltd
Original Assignee
Gansu Diantong Electric Power Engineering Design Consulting Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gansu Diantong Electric Power Engineering Design Consulting Co ltd filed Critical Gansu Diantong Electric Power Engineering Design Consulting Co ltd
Priority to CN202011561371.9A priority Critical patent/CN112560215B/en
Publication of CN112560215A publication Critical patent/CN112560215A/en
Application granted granted Critical
Publication of CN112560215B publication Critical patent/CN112560215B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/10 Geometric CAD
    • G06F30/18 Network design, e.g. design based on topological or interconnect aspects of utility systems, piping, heating ventilation air conditioning [HVAC] or cabling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention discloses a power line selection method based on deep reinforcement learning. The method comprises the following steps: screening the influence factors; collecting the data corresponding to the influence factors; standardizing the influence factors and rasterizing them into an image; breaking the continuous path into segments, and combining the images, the agent positions and the path segments into samples to build a sample library; constructing a deep reinforcement learning model based on the DQN and the FCN; dividing the sample library into a training set and a test set, training the model on the training set, and then evaluating it on the test set; and performing power line selection in the specified line selection area with the tested model. The method rasterizes the influence factors considered in power line selection into a two-dimensional image of quantized values, so that the neural network can perceive the environment. By using the FCN and the DQN in combination, the method perceives the environment better, makes better decisions, and solves the problem that traditional path planning algorithms cannot respond in time to a complex and changeable environment.

Description

Electric power line selection method based on deep reinforcement learning
Technical Field
The invention relates to a power line selection method based on deep reinforcement learning, and belongs to the technical field of power transmission line design.
Background
Power line selection is the first task in designing a high-voltage overhead transmission line. Existing power line selection methods fall into two main categories. The first is computer-aided line selection: remote sensing images, digital elevation models, surface feature data and the like are overlaid in a two-dimensional or three-dimensional scene, and the transmission line path is determined interactively, based on line selection experience and subject to the line selection constraints. This method depends on the experience of the line selection personnel; the quality of the paths chosen by different personnel is uneven, and the work is difficult to standardize. The second is GIS-based automatic line selection, a path planning method over continuous space: the continuous space is discretized into a cost surface, the various influencing factors are quantified into numerical values, and a shortest-path algorithm is run over the surface to obtain the transmission line path. This method avoids the defects of the first, but because it relies on traditional path planning algorithms it cannot respond in time to a complex and changeable environment.
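As a concrete illustration of the GIS-style baseline described above (not part of the invention), the cost surface can be searched with Dijkstra's shortest-path algorithm. The grid values and the helper name `cheapest_path_cost` below are made up for this sketch:

```python
import heapq

# Sketch of the baseline: a cost surface (grid of per-cell traversal costs)
# searched with Dijkstra over 4-connected cells. All numbers are illustrative.
def cheapest_path_cost(cost, start, goal):
    """Minimum accumulated cell cost from start to goal (inclusive of both)."""
    rows, cols = len(cost), len(cost[0])
    best = {start: cost[start[0]][start[1]]}
    heap = [(best[start], start)]
    while heap:
        c, (r, q) = heapq.heappop(heap)
        if (r, q) == goal:
            return c
        if c > best.get((r, q), float("inf")):
            continue                      # stale heap entry
        for dr, dq in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nr, nq = r + dr, q + dq
            if 0 <= nr < rows and 0 <= nq < cols:
                nc = c + cost[nr][nq]
                if nc < best.get((nr, nq), float("inf")):
                    best[(nr, nq)] = nc
                    heapq.heappush(heap, (nc, (nr, nq)))
    return float("inf")

surface = [[1, 1, 9],                     # 1 = cheap to traverse, 9 = costly
           [9, 1, 9],
           [9, 1, 1]]
total = cheapest_path_cost(surface, (0, 0), (2, 2))
```

The path hugs the low-cost cells, which is exactly the behaviour the patent criticizes as static: the surface must be recomputed whenever the environment changes.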
The process of power line selection is both a multi-criterion and a multi-step decision process: each decision extends the line forward by a certain distance. Reinforcement learning, an important area of machine learning, focuses on decision problems: how to act based on the environment so as to maximize the expected benefit. Automatic power line selection can therefore be achieved by means of reinforcement learning techniques.
A reinforcement learning model comprises two parts: the agent and the environment. The agent learns and makes decisions by observing the state of the environment and receiving reward feedback; the environment changes state under the influence of the agent's actions and feeds a reward back to the agent. Classical reinforcement learning, however, is limited to low-dimensional problems and struggles to perceive an environment state as complex as the one arising in power line selection.
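The agent-environment loop can be made concrete with tabular Q-learning on a toy problem. This is a deliberate simplification of what the patent does (the patent replaces the table with a deep network); the corridor environment, rewards and hyperparameters are invented for illustration only:

```python
import numpy as np

# Toy agent-environment loop: tabular Q-learning on a 5-cell corridor.
# The agent observes its state, acts, and the environment returns the next
# state plus a reward; the Q-table is updated from that feedback.
n_states, n_actions = 5, 2               # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def env_step(state, action):
    """Toy environment: reward 1 whenever the rightmost cell is occupied."""
    nxt = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    return nxt, float(nxt == n_states - 1)

for _ in range(3000):                    # random exploration of (state, action)
    s = int(rng.integers(n_states))
    a = int(rng.integers(n_actions))
    s2, r = env_step(s, a)
    # Q-learning update from the environment's state/reward feedback
    Q[s, a] += 0.5 * (r + 0.9 * Q[s2].max() - Q[s, a])

policy = Q.argmax(axis=1)                # learned policy: always head right
```

The table works only because the corridor has five states; the high-dimensional image states of line selection are precisely what makes a table infeasible and motivates the deep network below.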
Deep learning is built on deep neural networks. Such a network is essentially a function with strong fitting capability: by continuously adjusting its parameters, its actual outputs are driven toward the training targets. A deep learning model is composed of multiple layers of neurons, the output of each layer serving as the input of the next, which yields a hierarchical representation of the data. Because deep neural networks learn so well, they can perceive the complex environment state in power line selection; relying on their powerful function approximation ability, a line selection policy can be established, improving the path planning ability of the reinforcement learning algorithm.
Disclosure of Invention
To solve the problems of existing automatic power line selection, and drawing on the characteristics of deep learning and reinforcement learning, the invention provides a power line selection method based on deep reinforcement learning. Deep reinforcement learning is an organic combination of the two: reinforcement learning defines the problem and the optimization target, deep learning solves the modeling of the policy and the value function, and the objective function is then optimized with the error back-propagation algorithm.
The method provided by the invention realizes power line selection by means of a fully convolutional network (FCN) and a Deep Q Network (DQN). The core idea of the DQN is to approximate the optimal action-value function with a neural network whose parameters are obtained through the Q-learning algorithm, thereby obtaining the optimal policy mapping states to actions. The line selection method comprises the following steps:
(1) Screen the influence factors. Determine the influence factors to be considered during power line selection, according to the design specifications for power line selection and expert consulting opinions.
(2) Collect the data. Collect the data corresponding to each influence factor, then clean, correct and otherwise process it.
(3) Quantify and rasterize the influence factors. Standardize the data corresponding to the influence factors, display each dataset as a layer in a two-dimensional GIS environment, and finally rasterize all the layers into one multiband image.
(4) Construct the sample library. Break the continuous line path into segments, and combine the images generated from the influence factors, the agent positions and the trends of the corresponding path segments into state-decision pairs that form the sample library.
(5) Establish the model. The invention uses a DQN-based FCN model, which preserves the original relative position information of the image data and can fully represent the inherent attributes of the path planning environment; an attention mechanism is also introduced to exploit the key local information of the features.
(6) Train and test the model. Using the hold-out method, 70% of the samples in the sample library serve as the training set and the remaining 30% as the test set. The model is trained on the training set, directly optimizing the objective function so as to learn the model weights, and is finally evaluated on the test set.
(7) Plan the transmission line path with the model. First, referring to step (3), the line selection area outside the sample library is quantized and imaged; the image of the area is then fed to the model as input, and an optimal path is finally output.
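The DQN idea underlying steps (5) and (6) — approximating the action-value function with a parameterized model and nudging it toward the Q-learning target y = r + γ·max Q(s′, a′) — can be sketched minimally. A linear approximator stands in for the FCN here, and all data and hyperparameters are made up:

```python
import numpy as np

# Tiny sketch of the DQN update: Q(s, a) is represented by a parameterized
# approximator (linear, standing in for the FCN) and its weights take one
# gradient step of (y - Q(s, a))^2 toward the Q-learning target.
rng = np.random.default_rng(0)
n_features, n_actions = 4, 8              # 8 planning directions
W = np.zeros((n_actions, n_features))     # weights of the Q approximator

def q_values(state):
    return W @ state                      # Q(s, a) for every action a

def dqn_update(s, a, r, s_next, gamma=0.9, lr=0.1, done=False):
    """One gradient step on the approximator weights for transition (s,a,r,s')."""
    y = r if done else r + gamma * q_values(s_next).max()
    td_error = y - q_values(s)[a]
    W[a] += lr * td_error * s             # dQ/dW[a] = s for a linear model
    return td_error

s, s_next = rng.random(n_features), rng.random(n_features)
err = dqn_update(s, a=2, r=1.0, s_next=s_next)
```

In the patent's method the same target drives back-propagation through the convolutional layers instead of this single linear update.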
The process of power line selection is, in essence, both a multi-criterion and a multi-step decision process, and the deep reinforcement learning-based power line selection method provided by the invention is an effective way to solve it. The beneficial effects of the invention are: (1) the influence factors considered in power line selection are rasterized into a two-dimensional image of quantized values, so that the neural network can perceive the environment; (2) by using the FCN and the DQN in combination, the invention perceives the environment better, makes better decisions, and solves the problem that traditional path planning algorithms cannot respond in time to a complex and changeable environment.
Drawings
FIG. 1 is a flow chart of power line selection based on deep reinforcement learning;
FIG. 2 is a diagram of a quantized value interval;
FIG. 3 is a schematic diagram of the DQN-based FCN model;
FIG. 4 is a flow chart of model training and prediction;
figure 5 is a flow chart of a model-based path planning algorithm.
Detailed Description
The following further describes the embodiments of the present invention with reference to the drawings and examples.
Fig. 1 is a flow chart of power line selection based on deep reinforcement learning. The method comprises the following steps:
Step 1: screen the influence factors. According to the relevant power design specifications and expert consulting opinions, determine the influence factors to be considered during line selection, such as residential areas, planning areas, industrial areas (factories, mining areas, wind farms and the like), flood beaches, water bodies (including rivers, lakes, wetlands, reservoirs and the like), forest land, cultivated land, grassland, desert and bare surfaces, geology, slope, traffic accessibility, crossings, areas where line construction is difficult, icing areas, polluted areas, turning angles, distances, hydrological conditions, meteorological conditions, disaster risk areas, scenic areas, nature reserves, core planning areas, historical and cultural sites, airports, military areas, and non-traversable objects.
Step 2: collect the data. First, according to the influence factors determined in step 1, collect the corresponding data in the area of the existing transmission lines, such as remote sensing imagery, topographic maps, DEM (digital elevation model) data, geological data, land use data, hydrological and meteorological data, icing area and polluted area data, lightning risk area data, technical specifications and the like. Second, collect the paths of the existing transmission lines. Third, process the collected data: unify the spatial reference, eliminate redundant information, verify data correctness and so on.
Step 3: quantify and standardize the influence factors. The factors considered in power line selection are many and varied, some qualitative and some quantitative, and the quantitative ones are measured in different units or on different scales, which would adversely affect the subsequent path planning. Standardization converts the actual values of all influence factors onto one unified measurement scale by a mathematical procedure, eliminating the incomparability caused by dimensional differences. Since the factors are both qualitative and quantitative, the Delphi method can be used to assign values on a uniform scale to all of them. The specific steps are as follows: (1) form an expert group; (2) present each factor to be standardized to the experts, defining the quantization interval of the factor as 1 to 9, where 1 represents the most suitable for building the transmission line and 9 the least suitable, as shown in figure 2; (3) the experts feed back quantized values for all factors according to experience and related materials; (4) the quantized values are collected and summarized, and the summary is fed back to each expert so that they may revise their values; (5) the revised values are collected and distributed to the experts again, until the quantized values of each factor are sufficiently uniform; (6) the quantized values of each factor are averaged, and the average is the normalized value of that factor.
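A toy sketch of this step, with invented expert scores and a tiny land-cover grid: the final Delphi round averages the experts' 1-9 scores into one normalized value per factor, and the factor grids are then rasterized into bands of a multiband image for the network to perceive.

```python
import numpy as np

# Hypothetical converged scores from 5 experts on the 1-9 suitability scale
# (1 = most suitable for the line, 9 = least suitable).
expert_scores = {
    "water body": [9, 9, 8, 9, 9],
    "grassland":  [2, 3, 2, 2, 3],
}
normalized = {f: float(np.mean(s)) for f, s in expert_scores.items()}

# Rasterize: a tiny 4x4 land-cover grid (0 = grassland, 1 = water body)
# becomes one band of quantized values; other factors would add more bands.
land_cover = np.array([[0, 0, 1, 1],
                       [0, 0, 1, 1],
                       [0, 0, 0, 1],
                       [0, 0, 0, 0]])
band = np.where(land_cover == 1, normalized["water body"], normalized["grassland"])
multiband = np.stack([band])   # shape (bands, H, W); real use stacks many factors
```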
Step 4: construct the sample library. Since the path plan consists of a series of decisions, the continuous path is broken into segments when making samples. The image at each path segment, the agent's position and the trend of the segment are combined into a state-decision pair, and each such pair is taken as one sample of the sample library.
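The segmentation itself can be sketched as follows; the path vertices are made up, and the state is reduced to the agent position alone (a real sample would also carry the image window):

```python
# Break a continuous path (a list of grid vertices) into unit segments and
# label each with its bearing, giving (agent_position, decision) sample pairs.
DIRECTIONS = {(0, 1): "E", (0, -1): "W", (1, 0): "S", (-1, 0): "N",
              (-1, 1): "NE", (-1, -1): "NW", (1, 1): "SE", (1, -1): "SW"}

def sign(x):
    return (x > 0) - (x < 0)

def path_to_samples(path):
    """Yield (agent_position, decision) pairs from consecutive path vertices."""
    samples = []
    for (r0, c0), (r1, c1) in zip(path, path[1:]):
        step = (sign(r1 - r0), sign(c1 - c0))
        samples.append(((r0, c0), DIRECTIONS[step]))
    return samples

samples = path_to_samples([(0, 0), (0, 1), (1, 2), (2, 2)])
```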
Step 5: establish the model. Based on deep reinforcement learning and on the characteristics of the path planning problem, the invention uses a DQN-based FCN model, shown in fig. 3. The model preserves the original relative position information of the image and represents the inherent attributes of the path planning environment, and it introduces an attention mechanism to make full use of the key local information of the features. The model has 5 convolutional layers and 1 Softmax layer; its input is an image and its output is a probability value for each direction. From front to back, the convolutional layers have 150, 100, 50 and 8 filters. In convolutional layers 1-4 the spatial dimensions of the feature tensor remain the same as those of the original image. The attention mechanism is introduced in the 5th convolutional layer: according to the agent's coordinates in the image, it locates the agent's position in the feature tensor through a one-to-one mapping, takes each channel value of the feature tensor at that position as the feature, and so reduces the spatial size of the output to 1 × 1. The Softmax layer takes the output of the 5th convolutional layer as the agent's score in each planning direction — east, west, south, north, northeast, northwest, southeast and southwest — maps the scores to probability values, and in the final decision the direction with the highest probability is taken for single-step planning.
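Our reading of the attention-plus-Softmax head can be sketched without the convolutional stack: given a feature tensor whose 8 channels score the 8 directions, the head extracts the feature vector at the agent's pixel and turns it into probabilities. The tensor here is random, standing in for the output of convolutional layers 1-4:

```python
import numpy as np

# Sketch of the decision head described in step 5: pick the agent's pixel
# from the feature tensor (the "attention" step), then Softmax its 8 channel
# scores into probabilities over the 8 planning directions.
rng = np.random.default_rng(7)
features = rng.normal(size=(8, 32, 32))   # (channels, H, W) stand-in features

def decision_probs(feat, agent_rc):
    r, c = agent_rc
    scores = feat[:, r, c]                # one score per direction at the agent
    e = np.exp(scores - scores.max())     # numerically stable softmax
    return e / e.sum()

probs = decision_probs(features, (10, 20))
best_direction = int(probs.argmax())      # single-step planning decision
```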
Step 6: train and test the model, as shown in fig. 4. First, using the hold-out method, 70% of the samples in the sample library are taken as the training set and the remaining 30% as the test set; the data distributions of the two sets should be the same. The model is then trained on the training set as follows: (1) set the weights of the neural network by random initialization; (2) optimize the weights with the RMSProp algorithm; (3) during training, use the cross-entropy loss function as the objective and minimize it with a gradient-based optimization algorithm, so that the model weights are learned. Finally, the trained model is tested on the test set. The output of the model is the decision it considers best, so its quality can be evaluated by measuring the accuracy of its decisions.
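The optimization loop can be illustrated on a toy stand-in for the network: a softmax classifier over the 8 directions trained with cross-entropy loss and RMSProp updates. The "states" and labels below are synthetic, not real line selection samples, and zero initialization replaces the random initialization for determinism:

```python
import numpy as np

# Toy version of step 6's loop: cross-entropy objective + RMSProp updates.
rng = np.random.default_rng(1)
n_samples, n_features, n_dirs = 200, 16, 8
X = rng.normal(size=(n_samples, n_features))                    # stand-in states
y = (X @ rng.normal(size=(n_features, n_dirs))).argmax(axis=1)  # learnable labels

W = np.zeros((n_dirs, n_features))    # zeros for determinism (patent: random init)
v = np.zeros_like(W)                  # RMSProp running average of grad^2

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for _ in range(100):
    p = softmax(X @ W.T)                          # forward pass
    p[np.arange(n_samples), y] -= 1.0             # grad of cross-entropy w.r.t. logits
    grad = p.T @ X / n_samples
    v = 0.9 * v + 0.1 * grad ** 2                 # RMSProp accumulator
    W -= 0.01 * grad / (np.sqrt(v) + 1e-8)        # scaled weight update

loss = -np.log(softmax(X @ W.T)[np.arange(n_samples), y]).mean()
```

The loss starts at ln 8 (uniform guessing over 8 directions) and decreases as the weights are learned.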
Step 7: perform power line selection with the model, as shown in fig. 5. First, once the start point, end point and line selection range are given, the data corresponding to the influence factors within the range are collected, standardized and converted into an image. Second, the trained model reads the image and the agent's position. Third, the model computes its output, and the action corresponding to the maximum of the probability vector is selected as the decision. The agent then acts according to the decision, updates its position, and feeds the new position back to the environment. At this point it is judged whether the planning has timed out: if so, the path planning ends; otherwise it is judged whether the agent has reached the target position. If the target has been reached, the path planning ends; otherwise the model continues to make decisions until it does. This completes the method of the invention.
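The control flow of this step — act on the highest-probability direction, update the position, check timeout and goal — can be sketched as a greedy rollout. The `decide` callable stands in for the trained model; the stub used here always heads east, purely to exercise the loop:

```python
import numpy as np

# Greedy rollout sketch of step 7's planning loop.
STEPS = {"E": (0, 1), "W": (0, -1), "S": (1, 0), "N": (-1, 0),
         "NE": (-1, 1), "NW": (-1, -1), "SE": (1, 1), "SW": (1, -1)}
DIRS = list(STEPS)

def plan_path(decide, start, goal, max_steps=100):
    """Take the model's highest-probability direction until goal or timeout."""
    pos, path = start, [start]
    for _ in range(max_steps):                # timeout guard
        if pos == goal:
            return path                       # reached the end point
        probs = decide(pos)                   # model output: 8 direction probabilities
        dr, dc = STEPS[DIRS[int(np.argmax(probs))]]
        pos = (pos[0] + dr, pos[1] + dc)      # act and update the agent position
        path.append(pos)
    return path                               # timed out

always_east = lambda pos: np.eye(8)[0]        # stub model: P(E) = 1
path = plan_path(always_east, (0, 0), (0, 3))
```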
The beneficial effects of the invention are: (1) the influence factors considered in power line selection are rasterized into a two-dimensional image of quantized values, so that the neural network can perceive the environment; (2) by using the FCN and the DQN in combination, the invention perceives the environment better, makes better decisions, and solves the problem that traditional path planning algorithms cannot respond in time to a complex and changeable environment.

Claims (7)

1. A power line selection method based on deep reinforcement learning, characterized by comprising the following steps:
step 1: define the influence factors to be considered during line selection;
step 2: collect the data corresponding to each influence factor in the area of the existing transmission lines, and process the collected data;
step 3: quantify and standardize the influence factors, that is, standardize the factors that are qualitative or that have different measurement units or scales, so that all factors share a uniform measurement scale;
step 4: construct a sample library: break the continuous path into segments, combine the image at each path segment, the agent's position and the trend of the segment into a state-decision pair, and take each pair as a sample of the library;
step 5: based on deep reinforcement learning, establish a fully convolutional neural network model based on a deep Q network;
step 6: train and test the model, with 70% of the samples in the sample library as the training set and the remaining 30% as the test set;
step 7: perform power line selection with the model.
2. The power line selection method of claim 1, wherein:
the influence factors in the step 1 are residential areas, planning areas, industrial areas, river flood beaches, water body forest lands, cultivated lands, grasslands, deserts and bare earth surfaces, geology, gradients, traffic accessibility, cross spanning, difficult line construction areas, ice areas, dirty areas, corners, distances, hydrologic conditions, meteorological conditions, disaster risk areas, scenic areas, natural protection areas, core planning areas, historical cultural trails, airports, military areas and non-traversable objects.
3. The power line selection method of claim 2, wherein:
the data corresponding to the step 2 specifically comprises remote sensing image data, topographic map data, DEM (digital elevation model), geological data, land utilization data, hydrological meteorological data, ice region dirty region data, lightning hazard risk region data, technical specifications and the like.
4. The power line selection method of claim 3, wherein:
the specific steps in the step 3 are as follows: (1) forming an expert group; (2) an influence factor needing standardization is provided for experts, the quantitative value-taking interval of the factor is defined to be 1-9, 1 represents the most suitable power transmission line to be built, and 9 represents the least suitable; (3) the expert feeds back the quantized values of all the factors according to experience and related materials; (4) summarizing the quantization values of the experts, carrying out induction, and feeding back to each expert to modify the quantization value of each expert; (5) summarizing the modified values of the experts, and distributing the summarized result to the experts again until the quantized values of the factors are more uniform; (6) the quantized values of each factor are averaged, and the average is the normalized value of the factor.
5. The power line selection method of claim 4, wherein:
the model in the step 5 comprises 5 convolutional layers and 1 Softmax layer, the input of the model is an image, and the output of the model is probability values in all directions.
6. The power line selection method of claim 5, wherein:
the training process in the step 6 is as follows: (1) setting the weight of the neural network by using a random initialization mode; (2) performing weight optimization by using a RMSProp algorithm; (3) during training, the target function uses a cross entropy loss function, and the loss function is optimized through an optimization algorithm based on gradient, so that the purpose of model weight learning is achieved.
7. The power line selection method of claim 6, wherein:
the specific steps of the step 7 are as follows: and selecting the action corresponding to the maximum value of the probability vector as a decision result according to the calculated output result, enabling the intelligent agent to act according to the decision result, updating own position information at the same time, feeding back the position information to the environment, judging whether the planning is overtime or not, ending the path planning if the planning is overtime, judging whether the intelligent agent reaches the target position or not if the intelligent agent reaches the target position, ending the path planning if the intelligent agent reaches the target position, and otherwise, continuing to perform model decision until the intelligent agent is ended.
CN202011561371.9A 2020-12-25 2020-12-25 Electric power line selection method based on deep reinforcement learning Active CN112560215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011561371.9A CN112560215B (en) 2020-12-25 2020-12-25 Electric power line selection method based on deep reinforcement learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011561371.9A CN112560215B (en) 2020-12-25 2020-12-25 Electric power line selection method based on deep reinforcement learning

Publications (2)

Publication Number Publication Date
CN112560215A true CN112560215A (en) 2021-03-26
CN112560215B CN112560215B (en) 2022-11-11

Family

ID=75032591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011561371.9A Active CN112560215B (en) 2020-12-25 2020-12-25 Electric power line selection method based on deep reinforcement learning

Country Status (1)

Country Link
CN (1) CN112560215B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113082969A (en) * 2021-04-07 2021-07-09 唐锦婷 Exhaust gas treatment system
CN113902963A (en) * 2021-12-10 2022-01-07 交通运输部公路科学研究所 Method and device for evaluating fire detection capability of tunnel
CN114399084A (en) * 2021-12-20 2022-04-26 嘉兴恒创电力设计研究院有限公司 Rapid line selection method and system based on power grid vector diagram and satellite image diagram

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116865A (en) * 2013-03-08 2013-05-22 华北电力大学 Multidimensional collaborative power grid planning method
US20170310382A1 (en) * 2016-04-21 2017-10-26 University Of Louisiana At Lafayette Experimental smartphone ground station grid system and method
CN109992923A (en) * 2018-11-20 2019-07-09 国网陕西省电力公司 A kind of transmission line of electricity paths planning method stage by stage based on variable resolution cost surface
CN110046213A (en) * 2018-11-20 2019-07-23 国网陕西省电力公司 A kind of electric power selection method for taking path distortion correction and scissors crossing correction into account
CN110081889A (en) * 2019-06-11 2019-08-02 广东工业大学 A kind of robot path planning method based on stochastical sampling and intensified learning
CN110417664A (en) * 2019-07-31 2019-11-05 国家电网有限公司信息通信分公司 Business route distribution method and device based on power telecom network
GB201917294D0 (en) * 2019-11-27 2020-01-08 Instadeep Ltd Electrical circuit design
CN111414549A (en) * 2019-05-14 2020-07-14 北京大学 Intelligent general assessment method and system for vulnerability of recommendation system
WO2020160427A1 (en) * 2019-02-01 2020-08-06 Duke Energy Corporation Advanced power distribution platform
CN112086958A (en) * 2020-07-29 2020-12-15 国家电网公司西南分部 Power transmission network extension planning method based on multi-step backtracking reinforcement learning algorithm
CN112085280A (en) * 2020-09-11 2020-12-15 东南大学 Power transmission channel path optimization method considering geographic factors

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116865A (en) * 2013-03-08 2013-05-22 华北电力大学 Multidimensional collaborative power grid planning method
US20170310382A1 (en) * 2016-04-21 2017-10-26 University Of Louisiana At Lafayette Experimental smartphone ground station grid system and method
CN109992923A (en) * 2018-11-20 2019-07-09 国网陕西省电力公司 A kind of transmission line of electricity paths planning method stage by stage based on variable resolution cost surface
CN110046213A (en) * 2018-11-20 2019-07-23 国网陕西省电力公司 A kind of electric power selection method for taking path distortion correction and scissors crossing correction into account
WO2020160427A1 (en) * 2019-02-01 2020-08-06 Duke Energy Corporation Advanced power distribution platform
CN111414549A (en) * 2019-05-14 2020-07-14 北京大学 Intelligent general assessment method and system for vulnerability of recommendation system
CN110081889A (en) * 2019-06-11 2019-08-02 广东工业大学 A kind of robot path planning method based on stochastical sampling and intensified learning
CN110417664A (en) * 2019-07-31 2019-11-05 国家电网有限公司信息通信分公司 Business route distribution method and device based on power telecom network
GB201917294D0 (en) * 2019-11-27 2020-01-08 Instadeep Ltd Electrical circuit design
CN112086958A (en) * 2020-07-29 2020-12-15 国家电网公司西南分部 Power transmission network extension planning method based on multi-step backtracking reinforcement learning algorithm
CN112085280A (en) * 2020-09-11 2020-12-15 东南大学 Power transmission channel path optimization method considering geographic factors

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113082969A (en) * 2021-04-07 2021-07-09 唐锦婷 Exhaust gas treatment system
CN113082969B (en) * 2021-04-07 2024-01-23 唐锦婷 Exhaust gas treatment system
CN113902963A (en) * 2021-12-10 2022-01-07 交通运输部公路科学研究所 Method and device for evaluating fire detection capability of tunnel
CN113902963B (en) * 2021-12-10 2022-06-17 交通运输部公路科学研究所 Method and device for evaluating fire detection capability of tunnel
CN114399084A (en) * 2021-12-20 2022-04-26 嘉兴恒创电力设计研究院有限公司 Rapid line selection method and system based on power grid vector diagram and satellite image diagram

Also Published As

Publication number Publication date
CN112560215B (en) 2022-11-11

Similar Documents

Publication Publication Date Title
CN112560215B (en) Electric power line selection method based on deep reinforcement learning
Sajedi‐Hosseini et al. Spatial prediction of soil erosion susceptibility using a fuzzy analytical network process: Application of the fuzzy decision making trial and evaluation laboratory approach
Torkashvand et al. DRASTIC framework improvement using stepwise weight assessment ratio analysis (SWARA) and combination of genetic algorithm and entropy
Nadiri et al. Assessment of groundwater vulnerability using supervised committee to combine fuzzy logic models
Nouri et al. Predicting urban land use changes using a CA–Markov model
Fijani et al. Optimization of DRASTIC method by supervised committee machine artificial intelligence to assess groundwater vulnerability for Maragheh–Bonab plain aquifer, Iran
Yang et al. Incorporating ecological constraints into urban growth boundaries: A case study of ecologically fragile areas in the Upper Yellow River
Xu et al. Suitability evaluation of urban construction land based on geo-environmental factors of Hangzhou, China
CN105243435B (en) A kind of soil moisture content prediction technique based on deep learning cellular Automation Model
CN109146204A (en) A kind of wind power plant booster stations automatic addressing method of comprehensiveestimation
CN109359166B (en) Space growth dynamic simulation and driving force factor contribution degree synchronous calculation method
CN106780089A (en) Permanent basic farmland demarcation method based on neutral net cellular Automation Model
Kidd et al. Digital mapping of a soil drainage index for irrigated enterprise suitability in Tasmania, Australia
Ahmed Modelling spatio-temporal urban land cover growth dynamics using remote sensing and GIS techniques: A case study of Khulna City
CN110991497A (en) Urban land use change cellular automata simulation method based on BSVC (binary coded VC) method
CN111539904B (en) Disaster vulnerability prediction method based on rainfall
Joorabian Shooshtari et al. Land use and cover change assessment and dynamic spatial modeling in the Ghara-su Basin, Northeastern Iran
Fataei et al. Industrial state site selection using MCDM method and GIS in Germi, Ardabil, Iran
CN104361255B (en) It is a kind of to improve cellular automata urban sprawl analogy method
CN115659853B (en) Nonlinear mixed-effect strain coefficient downscaling method and system
CN114861277A (en) Long-time-sequence national soil space function and structure simulation method
CN115730684A (en) Air quality detection system based on LSTM-CNN model
Nadiri et al. Formulating Convolutional Neural Network for mapping total aquifer vulnerability to pollution
Dinda et al. Modelling the future vulnerability of urban green space for priority-based management and green prosperity strategy planning in Kolkata, India: a PSR-based analysis using AHP-FCE and ANN-Markov model
CN113902580A (en) Historical farmland distribution reconstruction method based on random forest model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant