CN110544011B - Intelligent system combat effectiveness evaluation and optimization method - Google Patents


Info

Publication number
CN110544011B
CN110544011B (application number CN201910698203.5A)
Authority
CN
China
Prior art keywords
network model
population
value
optimization
executing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910698203.5A
Other languages
Chinese (zh)
Other versions
CN110544011A (en)
Inventor
李妮
李玉红
余延超
龚光红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201910698203.5A priority Critical patent/CN110544011B/en
Publication of CN110544011A publication Critical patent/CN110544011A/en
Application granted granted Critical
Publication of CN110544011B publication Critical patent/CN110544011B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/086Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses an intelligent system combat effectiveness evaluation and optimization method, which comprises the following steps: constructing a deep neural network model, inputting optimization sample data, and training, verifying and testing the model to obtain an optimized network model; inputting the sample data to be evaluated into the optimized network model and outputting an effectiveness evaluation value. Single-objective effectiveness optimization is performed with a GA algorithm and multi-objective effectiveness optimization with the NSGAII algorithm, wherein the performance index value of each population individual, as predicted by the optimized network model, is used as that individual's fitness value.

Description

Intelligent system combat effectiveness evaluation and optimization method
Technical Field
The invention belongs to the field of computer simulation and artificial intelligence, and relates to a combat effectiveness evaluation and optimization method for an intelligent weapon system of systems (WsoS).
Background
Military standards such as the U.S. Navy definition and the national military standard GJB 1364-92 define effectiveness as the "ability" of weaponry to achieve its "prescribed objectives of use" under "prescribed conditions". WsoS combat effectiveness is the degree to which a WsoS completes a specific combat task under a given threat, condition, environment and combat scheme; it is an important index for measuring the level of WsoS construction and comprehensively reflects the combat capability of the system of systems. Combat effectiveness evaluation and optimization of a WsoS can predict or verify its effectiveness in achieving combat-mission objectives under specific conditions, and also facilitates verification and optimization of the combat scheme. Therefore, carrying out WsoS combat effectiveness evaluation and optimization research is of important military significance for comprehensively strengthening WsoS construction, improving the overall technical level of the WsoS, and enhancing combat capability.
The study of equipment combat effectiveness evaluation began at the start of the 20th century and has a history of over 100 years. For evaluating the combat effectiveness of weapon equipment systems, static evaluation methods such as analytical methods and multi-index comprehensive evaluation have mostly been adopted in the past, for example the Availability-Dependability-Capability (ADC) method, the System Effectiveness Analysis (SEA) method, the index method, test statistics, the Lanchester equations, fuzzy comprehensive evaluation, and the analytic hierarchy process. A WsoS is composed of multiple equipment systems that are loosely coupled, contain different hierarchical structures, and have complex interrelations such as cooperation and dependence; it therefore has diverse mission tasks, strong uncertainty and numerous influencing factors, and is a particularly complex giant system. Many conventional equipment-system effectiveness evaluation methods consequently cannot meet the requirements of WsoS effectiveness evaluation when facing such complex nonlinear systems. For example, analytical methods are good at evaluating a single piece of equipment, a single equipment model, or a certain type of combat capability, but because characteristics such as system-of-systems complexity cannot be mathematically derived and accurately calculated, they cannot evaluate characteristics such as the emergent behavior and uncertainty of WsoS effectiveness.
A WsoS is also a typical multi-attribute system, so multi-index comprehensive evaluation and combat-ring-based network methods are widely applied in the effectiveness evaluation field. The key step of multi-index comprehensive evaluation is to select a reasonable and scientific index system from many indexes so as to reduce the uncertainty caused by excessive indexes. At present, index systems are mostly constructed with the analytic hierarchy process based on small-data patterns and qualitative expert judgment. Such methods reflect the holistic characteristics of the system of systems to a certain extent, but cannot describe the capability that emerges in the dynamic interaction process of the system of systems, the association relationships between capability indexes, or the aggregation relationships between layers; moreover, subjective expert factors account for a large proportion, so the comprehensive, objective and real-time effectiveness evaluation requirements of system-of-systems combat under informatized conditions cannot be met. The combat-ring-based networked evaluation method can highlight the influence of the cooperation relationships between equipment in system-of-systems combat activities on combat effectiveness, but must consider optimization of the combat-ring structure and the edge-weight problem in the network. Modern war involves more participating forces, a high degree of informatization and large battlefield uncertainty, so constructing an index system and a combat-ring network model for WsoS combat effectiveness becomes more complicated, the solving time longer, and the reliability lower; the traditional methods are therefore unsatisfactory.
In terms of system-of-systems effectiveness optimization, sensitivity analysis methods such as direct Monte Carlo calculation and the Sobol index method, as well as Kriging surrogate-model construction, are generally adopted. The Sobol index method analyzes the influence of a single input parameter and of all input parameters on the effectiveness output and can qualitatively guide effectiveness optimization, but cannot quantitatively give the input-parameter values at which effectiveness is optimal. A Kriging model can fit the relationship between input parameters and effectiveness output, but the fit is an explicit functional form whose reliability is low for more complex relationships (which it sometimes cannot fit at all), and the optimization is one-directional.
In recent years, the big-data concept has led to the accumulation of a large amount of rich simulation test data in combat training and equipment research. Against the background of swarm intelligence algorithms, machine learning and deep learning penetrating various industries, introducing data-driven artificial intelligence algorithms into system-of-systems combat effectiveness evaluation can avoid a series of complex solving processes in the traditional index-based and combat-ring methods and provides an intelligent means for evaluating WsoS combat effectiveness; introducing artificial intelligence into effectiveness optimization, to inversely obtain the combination of scenario-parameter values at which combat effectiveness is optimal, can provide a quantitative intelligent means for WsoS combat effectiveness optimization. However, there is currently little research in this area.
Disclosure of Invention
In order to solve the problems of the traditional WsoS combat effectiveness evaluation methods (complex process, long solving time, low reliability) and of traditional WsoS effectiveness optimization (inability to optimize quantitatively, complex process, limited applicability), the method fully utilizes the large amount of existing simulation test data: a deep neural network that automatically learns from the data is introduced to realize intelligent evaluation of WsoS combat effectiveness, and swarm intelligence algorithms are introduced to realize intelligent optimization of WsoS combat effectiveness.
The invention provides an intelligent system combat effectiveness evaluation and optimization method, which comprises the following steps:
s100, constructing a deep neural network model, inputting sample data for optimization, and training, verifying and testing the model to obtain an optimized network model; if the performance evaluation is needed, step S200 is executed, and if the performance optimization is needed, step S300 is executed.
And S200, inputting sample data to be evaluated into the optimized network model, and outputting an efficiency evaluation value.
Step S300, determining whether single-objective effectiveness optimization is to be performed; if yes, executing step S400, and if no, executing step S500.
And step S400, performing single-objective effectiveness optimization by adopting a GA algorithm, wherein the performance index value of each population individual predicted by the optimized network model is used as that individual's fitness value.
And step S500, performing multi-objective effectiveness optimization by adopting the NSGAII algorithm, wherein the performance index value of each population individual predicted by the optimized network model is used as that individual's fitness value.
Optionally, step S100 includes the steps of:
step S110, dividing the sample data for optimization into a training set, a test set and a verification set.
And step S120, respectively carrying out standardization processing on the training set, the test set and the verification set to obtain a standardized training set, a standardized test set and a standardized verification set.
And step S130, constructing a deep neural network model, and setting a loss function, model precision and maximum learning times of the deep neural network model.
And S140, setting an optimizer of the deep neural network model, inputting the standardized training set into the deep neural network model, and performing model training.
Step S150, judging whether training has reached the set model precision or the maximum number of learning iterations; if so, training ends and step S160 is executed, otherwise step S140 is executed.
Step S160, checking whether the model performance meets the requirements or not based on the standardized verification set, and if so, executing step S180; if not, go to step S170.
Step S170, adjusting the network hyper-parameter according to the checking result of the model performance, and executing step S140.
And step S180, inputting the standardized test set to obtain an effectiveness predicted value on the test set.
Step S190, judging whether the generalization error of the deep neural network model meets the requirement; if so, obtaining the optimized network model, and executing step S200 or step S300, otherwise, executing step S1100.
Step S1100, adjusting the network hyper-parameter according to the generalization error condition, and executing step S140.
Optionally, step S400 further includes the steps of:
and step S410, initializing GA parameters, population and value ranges of the parameters.
Step S420, calling the optimized network model to predict the performance index value of each population individual, which is used as that individual's fitness value.
Step S430, judging whether a termination condition is reached; if so, go to step S450, otherwise go to step S440.
In step S440, the selection, crossover, and mutation operations are performed, and step S420 is performed.
And step S450, outputting the optimal scenario-parameter combination.
Optionally, step S500 further includes the steps of:
step S510, initializing NSGAII parameters, populations and value ranges of the parameters.
Step S520, determining whether the first-generation offspring population has been generated; if so, adding 1 to the evolution generation counter and executing step S540, otherwise executing step S530.
Step S530, calling the optimized network model to predict the performance index value of each population individual as its fitness value, and performing non-dominated sorting on the population individuals; performing selection, crossover and mutation; and executing step S520.
In step S540, the child population and the parent population are merged.
Step S550, judging whether a new parent population is generated; if yes, go to step S570, otherwise go to step S560.
Step S560, calling the optimized network model to predict the performance index value of each population individual as its fitness value, and performing non-dominated sorting on the population individuals; calculating the crowding distance of the individuals in each non-dominated layer; selecting suitable individuals to form the new parent population; and executing step S550.
And step S570, performing selection, crossover and mutation to generate a new offspring population.
Step S580, determining whether the maximum number of iterations is reached; if yes, go to step S590, otherwise, add 1 to the number of iterations, go to step S540.
In step S590, the Pareto optimal solution is output.
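The elitist merge-and-truncate core of steps S540–S560 can be sketched as follows (an illustrative Python sketch for objective minimization; `select_parents` and the helper names are hypothetical, and plain objective vectors stand in for the index values predicted by the optimized network model):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fronts(objs, idx):
    """Split an index set into non-dominated layers by repeated peeling."""
    remaining, layers = list(idx), []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        layers.append(front)
        remaining = [i for i in remaining if i not in front]
    return layers

def crowding(objs, front):
    """Crowding distance of the individuals in one non-dominated layer."""
    dist = {i: 0.0 for i in front}
    for k in range(len(objs[front[0]])):
        order = sorted(front, key=lambda i: objs[i][k])
        dist[order[0]] = dist[order[-1]] = float('inf')  # boundary advantage
        span = objs[order[-1]][k] - objs[order[0]][k]
        if span == 0:
            continue
        for a, b, c in zip(order, order[1:], order[2:]):
            if dist[b] != float('inf'):
                dist[b] += (objs[c][k] - objs[a][k]) / span
    return dist

def select_parents(parent_objs, child_objs, n):
    """Elitist environmental selection (S540-S560): merge parent and child
    populations, rank by non-dominated layer, fill the next parent population
    layer by layer, and break the last layer by descending crowding distance."""
    objs = parent_objs + child_objs            # S540: merge
    chosen = []
    for front in fronts(objs, range(len(objs))):
        if len(chosen) + len(front) <= n:
            chosen.extend(front)
        else:
            d = crowding(objs, front)
            front.sort(key=lambda i: d[i], reverse=True)
            chosen.extend(front[:n - len(chosen)])
            break
    return chosen                              # indices into the merged population
```

The returned indices identify the surviving individuals in the merged population; a real implementation would carry the decision variables along with the objective vectors.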
Optionally, the individual non-dominated sorting comprises the steps of:
step a1, initializing serial number i =1, and the original population is Q (1).
And a2, finding the non-dominated solution set of the population Q(i), which serves as the i-th non-dominated layer, denoted Fi.
And a3, assigning the non-dominated rank value i to all individuals in the non-dominated layer Fi.
And a4, removing all individuals in the non-dominated layer Fi; the remaining individuals form a new population, denoted Q(i+1).
Step a5, judging whether population layering is finished; if yes, executing the step a6, if not, adding 1 to i, and returning to execute the step a2.
And a6, outputting the non-dominated sorting of all the individuals in the population.
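Steps a1–a6 amount to repeatedly peeling off the current non-dominated front of the remaining population. A minimal Python sketch (for objective minimization; the function names are illustrative, not part of the patent):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Peel non-dominated layers as in steps a1-a6; returns the layer
    rank (1 = first front) of each individual in input order."""
    remaining = list(range(len(objs)))   # a1: the original population
    rank = {}
    layer = 1
    while remaining:
        # a2: individuals not dominated by anyone still remaining
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        for i in front:                  # a3: assign the layer's rank value
            rank[i] = layer
        # a4: remove the layer; the rest form the next population
        remaining = [i for i in remaining if i not in front]
        layer += 1                       # a5: next layer until none remain
    return [rank[i] for i in range(len(objs))]   # a6: all ranks
```

This O(n²) peeling loop is the simplest correct form; NSGAII implementations usually use the faster bookkeeping variant with domination counts.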
Optionally, the crowding distance calculation includes the following steps:
step b1, initializing individual crowding distances of the same layer.
And b2, arranging the individuals on the same layer in an ascending order according to the ith objective function value.
And b3, assigning a large (infinite) crowding distance to the individuals at the edges of the sorted order, so that boundary individuals have a selection advantage.
And b4, calculating the crowding distance of the individuals in the middle of the sorting.
Step b5, judging whether all objective functions have been processed; if yes, executing step b6, otherwise returning to step b2 for the next objective function.
And b6, outputting the crowding distances of all individuals of the population.
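Steps b1–b6 can be sketched as follows for one non-dominated layer (an illustrative Python sketch; the function name is hypothetical):

```python
def crowding_distance(objs):
    """Crowding distance per steps b1-b6 for the individuals of one
    non-dominated layer, given their objective vectors."""
    n = len(objs)
    dist = [0.0] * n                                         # b1: initialize
    for k in range(len(objs[0])):                            # b5: loop objectives
        order = sorted(range(n), key=lambda i: objs[i][k])   # b2: ascending sort
        dist[order[0]] = dist[order[-1]] = float('inf')      # b3: boundary advantage
        f_min, f_max = objs[order[0]][k], objs[order[-1]][k]
        if f_max == f_min:
            continue
        for idx in range(1, n - 1):                          # b4: interior individuals
            i = order[idx]
            if dist[i] != float('inf'):
                dist[i] += (objs[order[idx + 1]][k]
                            - objs[order[idx - 1]][k]) / (f_max - f_min)
    return dist                                              # b6: all distances
```

Each interior individual accumulates the normalized gap between its two sorted neighbors per objective; larger distances indicate less crowded, hence preferable, individuals.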
Optionally, the normalization processing in step S120 adopts the zero-mean (z-score) normalization method, so that each processed dimensional feature follows a standard normal distribution; the conversion formula is:

x_pre = (x − μ) / σ

where x is the value of a dimensional feature, μ is the mean of that feature over the sample set, σ is its standard deviation, and x_pre is the normalized value of that feature.
The invention has the following beneficial effects:
(1) The system-of-systems effectiveness evaluation is based on a deep neural network and realizes evaluation and prediction of combat effectiveness by learning the internal law of scenario-parameter and effectiveness-index sample data. This avoids the complex and lengthy processes required by traditional effectiveness evaluation methods, such as running simulation models, index modeling, index-system construction and comprehensive effectiveness aggregation.
(2) The effectiveness optimization realizes intelligent optimization of both a single effectiveness index and multiple effectiveness indexes.
(3) The effectiveness evaluation and effectiveness optimization in the invention form a closed-loop process, so the reliability of the evaluation and optimization results is higher.
(4) The effectiveness evaluation algorithm and effectiveness optimization algorithm can be replaced by various neural network models and swarm intelligence models; the method has strong extensibility and is easy to operate and implement.
(5) The method can be applied to the evaluation and optimization of various performance indexes at each level of various equipment systems of systems, and also to index evaluation and optimization in other similar application fields.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings required in the embodiments are briefly described below; it is obvious that those skilled in the art can obtain other drawings from these drawings without inventive labor.
FIG. 1 is a flow chart of an intelligent system combat effectiveness evaluation and optimization method provided by the present invention;
FIG. 2 is a flow chart of deep neural network model training, testing and validation provided by the present invention;
FIG. 3 is a flowchart of GA-based single performance indicator optimization provided by the present invention;
FIG. 4 is a schematic diagram of the relationship between the intelligent evaluation and optimization of WsoS effectiveness according to the present invention;
FIG. 5 is a flowchart of the NSGAII-based multiple performance index optimization provided by the present invention;
FIG. 6 is a flow chart of the NSGAII individual non-dominated sorting process provided by the present invention;
FIG. 7 is a flow of calculating crowding distance of population individuals in NSGAII provided by the present invention;
FIG. 8 is a deep neural network architecture constructed in accordance with the present invention;
FIG. 9 shows the optimization results of the single performance indicator of the present invention;
FIG. 10 shows the optimization results of the multiple (two) performance indicators of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flow chart of an intelligent system combat effectiveness evaluation and optimization method provided by the present invention. The invention provides an intelligent system combat effectiveness evaluation and optimization method, which comprises the following steps:
and S100, constructing a deep neural network model, inputting sample data for optimization, and training, verifying and testing the model to obtain the optimized network model. If the performance evaluation is needed, step S200 is executed, and if the performance optimization is needed, step S400 is executed. Specifically, as shown in fig. 2, step S100 includes the following steps.
Step S110, dividing the optimization sample data into a training set, a test set and a verification set. The full optimization sample set is randomly divided, in a certain proportion, into a training set, a test set and a verification set; the three sets have no intersection, and their union is the full optimization sample set.
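As an illustration, the random disjoint partition of step S110 might look like the following sketch (the 70/15/15 proportions and the function name are hypothetical, not specified by the patent):

```python
import random

def split_dataset(samples, train=0.7, val=0.15, seed=0):
    """Randomly partition the optimization sample set into disjoint
    training, verification and test subsets whose union is the full set."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)            # random, reproducible split
    n_train = round(train * len(idx))
    n_val = round(val * len(idx))
    train_set = [samples[i] for i in idx[:n_train]]
    val_set = [samples[i] for i in idx[n_train:n_train + n_val]]
    test_set = [samples[i] for i in idx[n_train + n_val:]]
    return train_set, val_set, test_set
```

Because the index list is shuffled once and sliced into consecutive ranges, the three subsets are disjoint by construction and together cover every sample.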
And step S120, respectively standardizing the training set, test set and verification set to obtain a standardized training set, standardized test set and standardized verification set. A WsoS contains many equipment types whose performance parameters differ, so the parameters have different measurement units and wide value ranges. To improve the training speed and accuracy of the deep neural network model, the features (parameters) of each dimension are standardized along their respective dimension using the zero-mean normalization (z-score) method, so that each processed dimensional feature follows a standard normal distribution; the conversion formula is as follows.
x_pre = (x − μ) / σ

where x is the value of a dimensional feature, μ is the mean of that feature over the sample set, σ is its standard deviation, and x_pre is the normalized value of that feature.
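A minimal sketch of the z-score transform applied to one feature dimension (illustrative only; assumes a non-constant feature so σ is nonzero):

```python
def z_score(column):
    """Zero-mean normalization x_pre = (x - mu) / sigma for one
    feature dimension, using the population standard deviation."""
    mu = sum(column) / len(column)
    sigma = (sum((x - mu) ** 2 for x in column) / len(column)) ** 0.5
    return [(x - mu) / sigma for x in column]
```

After the transform, the column has zero mean and unit variance; in practice μ and σ are computed on the training set only and reused for the test and verification sets.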
And step S130, constructing the deep neural network model and setting its loss function, model precision and maximum number of learning iterations. The deep neural network model is built on the TensorFlow deep learning framework; a fully connected Deep Neural Network (DNN), a Convolutional Neural Network (CNN) or the like can be selected. The number of input-layer neurons, the number of hidden layers, the number of neurons in each layer and the activation functions of the deep neural network model are set, the weights and thresholds are initialized, and the loss function, model precision and maximum number of learning iterations are set.
And step S140, setting an optimizer of the deep neural network model, inputting the standardized training set into the deep neural network model, and performing model training. There are various alternatives for the selection of the optimizer, which can be directly selected in tensoflow, and a random gradient descent (SGD), an Adaptive moment estimation (Adam), an accelerated gradient descent (RMSProp), an Adaptive gradient descent (adagard), and the like are commonly used. After the optimizer is selected, based on a specific optimizer, the standardized training set is input into a deep neural network model for learning, and in the learning process, the weight and the threshold of the network are gradually adjusted with the aim of minimizing the prediction error.
Step S150, determining whether training has reached the set model precision or the maximum number of learning iterations. If so, training ends and step S160 is executed; otherwise, step S140 is executed.
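The train-until-precision-or-limit loop of steps S140–S150 can be illustrated, independently of any deep learning framework, with a one-variable linear model trained by gradient descent (a hedged sketch only; the actual method trains a TensorFlow deep network with a chosen optimizer):

```python
def train_sgd(xs, ys, lr=0.1, max_epochs=500, tol=1e-6):
    """Minimal gradient-descent training loop mirroring S140-S150:
    iterate until the loss meets the set precision (tol) or the
    maximum number of learning iterations is reached."""
    w, b = 0.0, 0.0
    loss = float('inf')
    for _ in range(max_epochs):
        # forward pass and mean-squared-error loss
        preds = [w * x + b for x in xs]
        loss = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(xs)
        if loss < tol:                 # S150: precision reached, stop training
            break
        # analytic gradients of the MSE w.r.t. w and b
        gw = 2 * sum((p - y) * x for p, x, y in zip(preds, xs, ys)) / len(xs)
        gb = 2 * sum((p - y) for p, y in zip(preds, ys)) / len(xs)
        w -= lr * gw                   # the "optimizer" update step
        b -= lr * gb
    return w, b, loss
```

The same two-way exit condition (precision or iteration cap) is what step S150 checks for the deep network.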
And step S160, checking, on the standardized verification set, whether the model performance meets the requirements. Specifically, it is checked whether the model diverges on the standardized verification set, and whether the classification accuracy (for classification problems) or the prediction error (for regression problems) meets the set requirements. If yes, execute step S180; if not, execute step S170.
and step S170, adjusting the network hyper-parameter according to the checking result of the model performance. Step S140 is performed.
Adjusting the network hyper-parameters can improve network performance. Common hyper-parameters include the learning rate, momentum, number of iterations and weight initialization. If the weights are initialized with numbers that are too small, a zero-gradient network may result, so a uniform-distribution initialization is generally adopted. The learning rate determines the update speed of the weights and thresholds: if it is too large, the result overshoots the optimum; if too small, descent is too slow; different optimizers selected in TensorFlow call for different learning rates. When the verification error and training error differ greatly, the number of iterations must be increased or the network structure adjusted.
The network hyper-parameters can be adjusted manually, i.e., single or multiple parameters are assigned respectively, steps S140-S160 are executed again, and the performance of the deep neural network model under the manual settings is observed. Alternatively, a more efficient automatic hyper-parameter optimization method can be substituted, such as the commonly used grid search, random search, GA-based or particle swarm optimization (PSO) search algorithms, or the gradient-based DrMAD method.
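Of the automatic tuning options mentioned, grid search is the simplest; a generic sketch (the objective `train_fn` is a hypothetical stand-in for training the network and scoring it on the verification set):

```python
import itertools

def grid_search(train_fn, grid):
    """Exhaustive hyper-parameter grid search: evaluate train_fn on the
    Cartesian product of all candidate values and keep the lowest score
    (e.g. verification error)."""
    best_params, best_score = None, float('inf')
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = train_fn(**params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Grid search is exhaustive and therefore expensive; random search or the GA/PSO variants named above sample the same space more cheaply when there are many hyper-parameters.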
And step S180, inputting the standardized test set to obtain an effectiveness predicted value on the test set.
And step S190, judging whether the generalization error of the deep neural network model meets the requirement. If so, the optimized network model is obtained and step S200 or step S300 is executed; otherwise, step S1100 is executed. If the effect on the standardized test set is much worse than on the standardized training set, the model is over-fitted and the generalization error does not meet the requirement, so step S1100 is executed; otherwise the generalization error meets the requirement, the optimized network model is obtained, effectiveness evaluation and optimization can be performed, and step S200 or step S300 is executed.
Step S1100, adjusting the network hyper-parameters according to the generalization error, then executing step S140. For the over-fitting phenomenon, regularization is adopted to weaken it; a commonly used method is Dropout (randomly deleting hidden-layer neurons). Alternatives include early stopping, L1 regularization, L2 regularization and data-set augmentation.
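Inverted Dropout, the regularization named above, can be sketched as follows (illustrative only; deep learning frameworks such as TensorFlow provide this as a built-in layer):

```python
import random

def dropout(activations, p=0.5, training=True, seed=None):
    """Inverted Dropout: during training, zero each hidden activation
    with probability p and rescale survivors by 1/(1-p) so the expected
    layer output is unchanged; at inference time, pass values through."""
    if not training or p == 0.0:
        return list(activations)
    rng = random.Random(seed)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```

Randomly silencing neurons prevents co-adaptation of hidden units, which is why it weakens over-fitting.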
Step S200, inputting sample data to be evaluated into the optimized network model to obtain an efficiency evaluation value.
And step S300, judging whether single-objective effectiveness optimization is to be performed. If yes, step S400 is executed; otherwise, step S500 is executed.
And S400, performing single-objective effectiveness optimization by adopting the GA algorithm. Specifically, as shown in fig. 3, step S400 includes the following steps.
Step S410, initializing the GA parameters, the population, and the value range of each parameter. The maximum number of iterations, population size, crossover probability, mutation probability and the value range of each parameter are initialized. Chromosomes are real-coded: the value of each gene is the actual value of the corresponding parameter, and each chromosome represents an individual, i.e., one combination of parameter values. The chromosome encoding can be replaced by other schemes, such as the commonly used binary encoding, but binary encoding is less efficient when there are many parameters.
Step S420, calling the optimized network model to predict the performance index value of each population individual, which is used as that individual's fitness value. Based on the optimized network model obtained above, the effectiveness of each parameter-combination sample within the parameter value ranges is predicted and used as the individual's fitness value.
Step S430, determine whether the termination condition is reached. If yes, go to step S450, otherwise go to step S440.
In step S440, the selection, crossover, and mutation operations are performed, and step S420 is executed. The selection operator is roulette-wheel selection; alternative methods include stochastic universal sampling, linear ranking, exponential ranking, tournament selection, and the like. The crossover operator is single-point crossover; other common alternatives include two-point, multi-point, and uniform crossover. The mutation operator is single-point mutation, which is easy to implement; other common alternatives include two-point and multi-point mutation.
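The three operators named above can be sketched as follows (a minimal Python illustration; the function names and the list-of-floats chromosome representation are assumptions for illustration, not part of the patent):

```python
import random

def roulette_select(pop, fitness):
    """Roulette-wheel selection: pick one individual with probability
    proportional to its (non-negative) fitness value."""
    total = sum(fitness)
    r = random.uniform(0, total)
    acc = 0.0
    for ind, fit in zip(pop, fitness):
        acc += fit
        if acc >= r:
            return ind
    return pop[-1]

def single_point_crossover(a, b):
    """Single-point crossover: exchange the gene tails after a random cut."""
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def single_point_mutate(ind, bounds):
    """Single-point mutation: re-draw one randomly chosen gene
    within its parameter's value range."""
    child = list(ind)
    j = random.randrange(len(child))
    lo, hi = bounds[j]
    child[j] = random.uniform(lo, hi)
    return child
```

One GA generation would apply `roulette_select` to build a mating pool, cross pairs with probability pc, and mutate offspring with probability pm.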
And S450, outputting the optimal planned parameter combination and ending the process.
And step S500, performing multi-objective effectiveness optimization by adopting the NSGA-II algorithm, i.e. the non-dominated sorting genetic algorithm with an elitist strategy. Specifically, as shown in fig. 5, step S500 includes the following steps.
Step S510, initializing NSGAII parameters, populations and value ranges of the parameters. This step is similar to step S410, and is specifically referred to step S410, and will not be repeated here.
In step S520, it is determined whether the first-generation offspring population has been generated. If so, the generation counter is incremented by 1 and, from the second generation on, step S540 is executed; otherwise, step S530 is executed.
Step S530, calling the optimized network model to predict the effectiveness index values of the population individuals as the individual fitness values, and performing non-dominated sorting of the population; then performing selection, crossover, and mutation. Step S520 is executed.
Step S540, merging the child population and the parent population.
Step S550, determining whether a new parent population is generated. If yes, go to step S570, otherwise go to step S560.
Step S560, calling the optimized network model to predict the effectiveness index values of the population individuals as the individual fitness values, and performing non-dominated sorting of the population; calculating the crowding distance of the individuals in each non-dominated layer; and selecting suitable individuals to form a new parent population. Step S550 is executed. Specifically, if the non-dominated ranks of two individuals differ, the individual with the lower rank is better; otherwise, if the two individuals belong to the same front, the individual with the larger crowding distance is preferred, and the preferred individuals are selected to form the new parent population.
Step S570, selecting, crossing, and mutating to generate a new offspring population.
In step S580, it is determined whether the maximum number of iterations has been reached. If yes, step S590 is executed; otherwise, the generation counter is incremented by 1 and step S540 is executed.
In step S590, the pareto optimal solution is output.
The calculation of the individual fitness values mentioned in steps S530 and S560 above is similar to step S420; see step S420 for details, which are not repeated here.
The non-dominated sorting of individuals mentioned in steps S530 and S560 above, as shown in fig. 6, specifically includes the following steps:
step a1, initializing the counter i = 1; the original population is Q(1).
And a step a2, finding the non-dominated solution set of the population Q(i), which forms the i-th non-dominated layer, denoted Fi.
And a step a3, assigning the non-dominated rank value i to all individuals in the layer Fi.
And a4, removing all individuals in the Fi non-dominant layer, and forming a new population by the rest individuals, wherein the new population is marked as Q (i + 1).
Step a5, judging whether population layering is finished; if yes, executing the step a6, if not, adding 1 to i, and returning to execute the step a2.
And a6, outputting the non-dominated sorting of all the individuals in the population.
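Steps a1-a6 peel off one non-dominated front per iteration. A minimal sketch of this layered sorting in Python (assuming all objectives are to be maximized; function names are illustrative):

```python
def dominates(p, q):
    """p dominates q (maximization): p is no worse in every objective
    and strictly better in at least one."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def non_dominated_sort(objs):
    """Repeat steps a2-a5: find the non-dominated set of the remaining
    population, assign it rank i, remove it, and continue with Q(i+1).
    Returns a dict mapping individual index -> rank (1 = first front)."""
    remaining = set(range(len(objs)))
    rank = {}
    layer = 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)}
        for i in front:
            rank[i] = layer
        remaining -= front
        layer += 1
    return rank
```

This O(n²) layer-peeling form mirrors the text; NSGA-II implementations usually use the faster bookkeeping variant, but the output ranking is the same.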
As shown in fig. 7, the congestion distance calculation in step S560 specifically includes the following steps:
step b1, initializing individual crowding distances of the same layer.
And b2, arranging the individuals in the same layer in an ascending order according to the ith objective function value.
And b3, assigning a large value to the crowding distances of the individuals at the edges of the ordering, so that they have a selection advantage.
And b4, calculating the crowding distance of the individuals in the middle of the sorting.
And b5, judging whether all the objective functions have been calculated. If yes, executing step b6; otherwise, adding 1 to the objective index i and returning to step b2.
And b6, outputting the crowding distances of all individuals of the population.
To explain the scheme in further detail, the invention provides specific embodiment 1, which applies the above content to a concrete case and further illustrates the technical scheme of the invention.
Example 1
The invention was run on an Intel Core™ 2 Duo CPU E8400 @ 3.00 GHz with 2.00 GB of memory and a 32-bit operating system.
The WsoS combat simulation system is a multi-weapon distributed system-of-systems combat simulation covering the typical combat elements of the red and blue sides, including combat equipment of various purposes for sea, land, and air weapons. A system-of-systems battle executes a combat task under a specific scenario file according to a certain confrontation flow, and each side exhibits a certain combat effectiveness during the process. This application is based on a large amount of existing scenario-parameter combination data and combat-effectiveness data. Suppose there are m scenario parameters to be studied, denoted x1, x2, …, xm, each taking real values within a certain range, xi ∈ [di1, di2], i = 1, 2, …, m. The aims are to predict the red side's combat effectiveness for an unlabeled parameter combination (x1, x2, …, xm), and to determine how one or several parameters should be combined and valued so that the red side's combat-effectiveness indexes are optimal. The specific steps are as follows:
and S100, constructing a deep neural network model, inputting sample data for optimization, and training, verifying and testing the model to obtain the optimized network model. If the performance evaluation is needed, step S200 is executed, and if the performance optimization is needed, step S400 is executed. Specifically, step S100 includes the following steps.
Step S110, dividing the sample data for optimization into a training set, a test set and a verification set. The full optimization sample set is randomly divided into a training set, a test set and a verification set in a certain proportion; the three sets have no intersection, and their union is the full optimization sample set. When the optimization sample data are few (several hundred samples or less), the division proportion may be 60%, 20%, 20%. For large-scale data (thousands to tens of thousands of samples or more), the verification and test sets can be reduced appropriately while still fulfilling their function, and the division proportion may be 80%, 10%, 10%.
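A minimal sketch of this random, disjoint 60/20/20 split (the function name and NumPy-based layout are illustrative assumptions):

```python
import numpy as np

def split_dataset(X, y, ratios=(0.6, 0.2, 0.2), seed=42):
    """Randomly partition the samples into disjoint train/test/verification
    sets whose union is the full optimization sample set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(ratios[0] * len(X))
    n_test = int(ratios[1] * len(X))
    tr, te, va = np.split(idx, [n_train, n_train + n_test])
    return (X[tr], y[tr]), (X[te], y[te]), (X[va], y[va])
```

Because the split is driven by a single permutation of indices, the three sets are guaranteed disjoint and to cover every sample.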
And step S120, standardizing the training set, the test set and the verification set respectively to obtain a standardized training set, a standardized test set and a standardized verification set. The WsoS contains many equipment types whose performance parameters differ, so the measurement units of the parameters differ and their value ranges vary widely. To improve the training speed and accuracy of the deep neural network model, the features (parameters) of each dimension are standardized along their own dimension using the zero-mean normalization (z-score) method, so that each processed feature dimension conforms to the standard normal distribution. The conversion formula is as follows.
x_pre = (x − μ) / σ

where x is the value of a feature in some dimension, μ is the mean of that feature over the sample set, σ is its standard deviation, and x_pre is the standardized feature value.
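The per-dimension z-score transform can be sketched in a few lines (the function name is illustrative; columns are feature dimensions):

```python
import numpy as np

def zscore(X):
    """Zero-mean normalization per feature dimension:
    x_pre = (x - mu) / sigma, computed column-wise."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma
```

After the transform, each column has mean 0 and standard deviation 1, which is what puts features with very different units and ranges on a common scale for training.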
Step S130, constructing the deep neural network model and setting its loss function, model accuracy, and maximum number of learning iterations, as shown in fig. 8. The DNN is built on the TensorFlow deep-learning framework: the number of input-layer neurons, the number of hidden layers, the number of neurons per layer, and the activation function are set; the weights and thresholds are initialized; and the loss function, model accuracy, and maximum number of learning iterations are set.
For example, for the system-of-systems combat-effectiveness prediction problem, the DNN is configured with an input layer of m neurons, 4 hidden layers, and 1 output layer, the four hidden layers containing 1024, 512, 256 and 128 neurons respectively; the error function is the mean-square-error loss, and the structure of the DNN is shown in fig. 7. X denotes the input; wi and bi (i = 1, 2, 3, 4) denote the weights and thresholds of each hidden layer, with dimensions w1: m×1024, b1: 1×1024; w2: 1024×512, b2: 1×512; w3: 512×256, b3: 1×256; w4: 256×128, b4: 1×128. wo and bo denote the weights and thresholds of the output-layer neurons; their dimensions depend on the number of neurons in the last hidden layer and on the number p of effectiveness indexes to be learned: wo: 128×p, bo: 1×p. y denotes the p predicted effectiveness index values.
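The stated m → 1024 → 512 → 256 → 128 → p layer dimensions can be checked with a bare NumPy forward pass (a sketch only: the ReLU hidden activation and linear output are assumptions, since the patent does not name the activation function):

```python
import numpy as np

m, p = 10, 2                      # input features and effectiveness indexes
sizes = [m, 1024, 512, 256, 128, p]
rng = np.random.default_rng(0)

# Weight/threshold shapes follow the text: w1 m*1024, b1 1*1024, ..., wo 128*p, bo 1*p
params = [(rng.standard_normal((a, b)) * 0.01, np.zeros((1, b)))
          for a, b in zip(sizes[:-1], sizes[1:])]

def forward(x):
    """Forward pass through the 4 hidden layers and the output layer."""
    h = x
    for w, b in params[:-1]:
        h = np.maximum(h @ w + b, 0.0)   # assumed ReLU hidden activation
    w_o, b_o = params[-1]
    return h @ w_o + b_o                 # assumed linear output

y = forward(rng.standard_normal((5, m)))  # a batch of 5 samples
```

The output shape (batch, p) confirms that the weight dimensions in the text chain together consistently.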
And step S140, setting the optimizer of the deep neural network model, inputting the standardized training set into the deep neural network model, and performing model training. The Adam optimizer is selected; the DNN learns the standardized training set, gradually adjusting the network weights and thresholds during learning with the goal of minimizing the mean-square-error loss function.
Step S150, determining whether training has reached the set model accuracy or the maximum number of learning iterations. If so, training ends and step S160 is executed; otherwise, step S140 is executed.
And step S160, checking whether the model performance meets the requirements based on the standardized verification set. Specifically, it is checked whether the model diverges on the standardized verification set and whether the classification accuracy (for a classification problem), the prediction error (for a regression problem), and the like meet the set requirements. If yes, go to step S180; if not, go to step S170.
and step S170, adjusting the network hyper-parameter according to the checking result of the model performance. Step S140 is performed.
The verification error is analyzed and new values are assigned to the hyper-parameters. For example, if the verification result overshoots the optimum, the learning rate is reduced; if verification converges too slowly, the learning rate is increased. If the verification error differs greatly from the training error, the number of iterations is increased or a different network structure is tried, e.g. reducing the number of hidden layers.
And step S180, inputting the standardized test set to obtain an effectiveness predicted value on the test set.
And step S190, judging whether the generalization error of the deep neural network model meets the requirement. If so, the optimized network model is obtained and step S200 or step S300 is executed; otherwise, step S1100 is executed. The standardized test set verifies the generalization ability of the neural network model: if the performance on the standardized test set is much worse than on the standardized training set, the model is over-fitted, the generalization error does not meet the requirement, and step S1100 is executed. Otherwise, the generalization error meets the requirement, the optimized network model (DNN) is obtained, and effectiveness evaluation and optimization can be performed by executing step S200 or step S300.
Step S1100, adjusting the network hyper-parameters according to the generalization error, and executing step S140. For the over-fitting phenomenon, Dropout regularization is adopted during network training: the Dropout rate is set to a certain proportion, e.g. 20%, and the corresponding TensorFlow function module randomly deletes 20% of the hidden-layer neurons.
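The mechanism behind the 20% dropout can be sketched without the framework (in TensorFlow/Keras it corresponds to something like `tf.keras.layers.Dropout(0.2)`; the NumPy version below is an illustrative "inverted dropout" sketch, not the patent's code):

```python
import numpy as np

def dropout(h, rate=0.2, training=True, seed=0):
    """Inverted dropout: during training, randomly zero a `rate` fraction of
    hidden activations and rescale the survivors by 1/(1-rate), so nothing
    needs to change at inference time."""
    if not training:
        return h
    rng = np.random.default_rng(seed)
    keep = 1.0 - rate
    mask = rng.random(h.shape) < keep
    return h * mask / keep
```

Randomly silencing neurons each step prevents co-adaptation of hidden units, which is why it weakens over-fitting.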
Step S200, inputting sample data to be evaluated into the optimized network model (DNN) to obtain an efficiency evaluation value.
And step S300, judging whether single-objective effectiveness optimization is to be performed. If yes, go to step S400, otherwise go to step S500.
And step S400, performing single-objective effectiveness optimization by adopting the GA algorithm. Specifically, step S400 includes the following steps.
Step S410, initializing the GA parameters, the population, and the value ranges of the parameters. The maximum number of iterations, the population size, the crossover probability, the mutation probability, and the value range of every parameter of the GA algorithm are initialized. Chromosomes are encoded with real numbers: the value of each gene is the actual value of the corresponding parameter, so each chromosome represents an individual, i.e. one combination of parameter values. Taking x1 ∈ [6, 12] as an example, if the value is 8.1, the 1st gene locus of the corresponding chromosome takes the value 8.1; concatenating all parameter values in order forms a chromosome of length m.
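Real-coded population initialization within the per-parameter ranges [di1, di2] can be sketched as follows (the function name and NumPy layout are illustrative assumptions):

```python
import numpy as np

def init_population(bounds, pop_size, seed=1):
    """Real-number encoding: each row is one chromosome (one value
    combination); each gene is drawn uniformly from its parameter's
    range [d_i1, d_i2]."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + rng.random((pop_size, len(bounds))) * (hi - lo)
```

For the x1 ∈ [6, 12] example, the first column of every chromosome lands in [6, 12], so a gene value of 8.1 is stored directly, with no decoding step as binary encoding would need.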
In step S420, the optimized network model (DNN) is called to predict the effectiveness index value of each population individual, which is used as the individual fitness value. Based on the optimized network model (DNN) obtained above, the parameter-combination samples within the value ranges are predicted and taken as the individual fitness values. Each parameter-combination sample has length m and is the chromosome phenotype after real-number encoding. For example, if the effectiveness index to be optimized corresponds to the first of the p columns in the sample label data, calling the DNN on the unlabeled length-m sample data yields p columns of predictions, of which only the first column is taken as the fitness value of the population individual.
Step S430, determine whether the termination condition is reached. The termination condition is the maximum number of iterations. If so, go to step S450, otherwise go to step S440.
In step S440, the selection, crossover, and mutation operations are performed, and step S420 is performed. The selection operator is selected as roulette, the crossover operator is selected as single-point crossover, and the mutation operator is selected as single-point mutation.
And S450, outputting the optimal planned parameter combination and ending the process.
And step S500, performing multi-objective effectiveness optimization by adopting the NSGA-II algorithm. Specifically, step S500 includes the following steps.
Step S510, initializing NSGAII parameters, populations and value ranges of the parameters.
In step S520, it is determined whether the first-generation offspring population has been generated. If yes, the generation counter is incremented by 1 and step S540 is executed; otherwise, step S530 is executed.
Step S530, calling the optimized network model to predict the effectiveness index values of the population individuals as the individual fitness values, and performing non-dominated sorting of the population; then performing selection, crossover, and mutation. Step S520 is executed.
Step S540, merging the child population and the parent population.
Step S550, determine whether to generate a new parent population. If yes, go to step S570, otherwise go to step S560.
Step S560, calling the optimized network model to predict the effectiveness index values of the population individuals as the individual fitness values, and performing non-dominated sorting of the population; calculating the crowding distance of the individuals in each non-dominated layer; and selecting suitable individuals to form a new parent population. Step S550 is executed. Specifically, if the non-dominated ranks of two individuals differ, the individual with the lower rank is better; otherwise, if the two individuals belong to the same front, the individual with the larger crowding distance is preferred, and the preferred individuals are selected to form the new parent population.
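The preference rule stated here (lower rank first, then larger crowding distance) is NSGA-II's crowded-comparison operator, which can be written directly (the function name is illustrative):

```python
def crowded_better(rank_a, dist_a, rank_b, dist_b):
    """NSGA-II crowded comparison: an individual with a lower non-dominated
    rank wins; within the same front, the larger crowding distance wins."""
    if rank_a != rank_b:
        return rank_a < rank_b
    return dist_a > dist_b
```

Applying this comparison while filling the new parent population from the merged parent+offspring pool is what gives NSGA-II both convergence pressure (rank) and diversity pressure (crowding distance).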
Step S570, selecting, crossing, and mutating to generate a new offspring population.
In step S580, it is determined whether the maximum number of iterations has been reached. If so, go to step S590, otherwise, add 1 to the iteration count, go to step S540.
In step S590, the pareto optimal solution is output, and the process ends.
The calculation of the individual fitness values mentioned in steps S530 and S560 above is similar to step S420; specifically, each parameter-combination sample has length m and is the chromosome phenotype after real-number encoding. For example, if there are two effectiveness indexes to be optimized, corresponding to the first and third of the p columns in the sample label data, calling the DNN on the unlabeled length-m sample data yields p columns of predictions, of which only the first and third columns are taken as the fitness values of the population individuals.
The non-dominated sorting of the individuals mentioned in the above steps S530 and S560 specifically includes the following steps:
step a1, initializing the counter i = 1; the original population is Q(1).
And a step a2, finding the non-dominated solution set of the population Q(i), which forms the i-th non-dominated layer, denoted Fi.
And a3, assigning the non-dominated rank value i to all individuals in the layer Fi.
And a4, removing all individuals in the Fi non-dominant layer, and forming a new population by the rest individuals, wherein the new population is marked as Q (i + 1).
Step a5, judging whether population layering is finished; if yes, executing the step a6, if not, adding 1 to i, and returning to execute the step a2.
And a6, outputting the non-dominated sorting of all individuals in the population.
The congestion distance calculation in step S560 includes the following steps:
step b1, initializing the crowding distances of the individuals of the same layer. The crowding distance L[i]d of every individual i in the layer is initialized to 0, where the subscript d denotes distance; the objective-function index j is initialized to 1.
And b2, arranging the individuals of the layer in ascending order of the j-th objective function value.
And b3, assigning a large value to the crowding distances of the individuals at the edges of the ordering (for example, setting the crowding distances of the first and last individuals to infinity), so that they have a selection advantage.
Step b4, calculating the crowding distance of the individuals in the interior of the ordering as follows:

L[i]d = L[i]d + ( L[i+1]j − L[i−1]j ) / ( fj_max − fj_min )

where L[i]d is the crowding distance of the i-th individual of the layer, L[i+1]j and L[i−1]j are the j-th objective function values of the (i+1)-th and (i−1)-th individuals of the layer, and fj_max and fj_min are respectively the maximum and minimum values of the j-th objective function in the layer.
And b5, judging whether all the objective functions have been calculated. If yes, executing step b6; otherwise, adding 1 to j and returning to step b2.
And b6, outputting the crowding distances of all individuals of the population.
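Steps b1-b6 can be sketched for one front as follows (the function name is illustrative; `objs` holds each individual's objective values):

```python
def crowding_distance(objs):
    """Per-front crowding distance: for each objective j, sort the front,
    give the two boundary individuals infinite distance (step b3), and add
    the normalized gap between each interior individual's neighbours
    (step b4)."""
    n = len(objs)
    dist = [0.0] * n
    n_obj = len(objs[0])
    for j in range(n_obj):
        order = sorted(range(n), key=lambda i: objs[i][j])  # step b2
        f_min, f_max = objs[order[0]][j], objs[order[-1]][j]
        dist[order[0]] = dist[order[-1]] = float('inf')      # step b3
        if f_max == f_min:
            continue  # degenerate objective: no spread to normalize
        for k in range(1, n - 1):                            # step b4
            i = order[k]
            if dist[i] != float('inf'):
                dist[i] += (objs[order[k + 1]][j] - objs[order[k - 1]][j]) / (f_max - f_min)
    return dist
```

The normalization by (fj_max − fj_min) keeps objectives with different units comparable, matching the formula in step b4.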
As a specific application example, suppose the scenario parameters (factors) to be examined number m = 10, denoted x1, x2, …, x10; the level values of each factor are shown in table 1.
TABLE 1
Factor variable name Factor level 1 Factor level 2 Factor level 3
x1 6 8 12
x2 2 3 4
x3 1.2 2 2.8
x4 30 60 90
x5 12.5 15 17.5
x6 0.85 0.95 1.05
x7 40 80 120
x8 1 2.5 4
x9 35 45 55
x10 12.5 22.5 32.5
Corresponding to table 1, 1000 samples of scenario-parameter and system-effectiveness values are obtained, each sample having p = 2 effectiveness indexes. The sample set is divided into a training set, a verification set and a test set in the proportion 60%, 20%, 20% and standardized respectively. The constructed DNN is then trained by loop iteration on the standardized training, verification and test sets according to steps S130-S1100, finally yielding an optimized DNN with good performance. Finally, parameter-combination data with unknown effectiveness values are input into the optimized DNN to obtain the corresponding effectiveness evaluation values, i.e. intelligent evaluation of system effectiveness is realized. For example, the optimized DNN evaluates the sample data [8, 3.0, 1.2, 30, 17.5, 0.85, 40, 1.0, 35, 12.5] and obtains the effectiveness evaluation value [0.270175, 0.56976455].
Based on the optimized DNN, the value ranges of the planned parameters are shown in table 1.
According to step S400, taking the GA optimization of the vessel's anti-reconnaissance capability index as an example, the number of iterations is set to 100, the population size to 10, the initial crossover probability pc = 0.9, and the mutation probability pm = 0.1; the maximum anti-reconnaissance capability, 0.590648, is obtained at the 57th generation. The result is shown in fig. 9, and the corresponding scenario-parameter value combination is:
[10.931,2.628,1.354,87.828,16.689,1.025,73.806,3.531,52.391,31.813]。
According to step S500, taking the NSGA-II optimization of the anti-reconnaissance capability and situation completeness as an example, the number of iterations is set to 100, the population size to 10, the initial crossover probability pc = 0.9, and the mutation probability pm = 0.1. After reaching the maximum number of iterations, 50 pareto optimal solutions are finally obtained, as shown in fig. 10. Each optimal solution corresponds to one set of scenario-parameter values, from which a decision maker can select a suitable optimal solution according to certain consideration criteria.
In this way, the intelligent WsoS combat-effectiveness evaluation and optimization method accomplishes intelligent evaluation of WsoS combat effectiveness and intelligent optimization of single or multiple effectiveness indexes.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (1)

1. An intelligent system combat effectiveness evaluation and optimization method is characterized by comprising the following steps:
s100, constructing a deep neural network model, determining planned parameters, system combat effectiveness and sample data, inputting the sample data for optimization, and training, verifying and testing the model to obtain an optimized network model; if the performance evaluation is needed, executing step S200, and if the performance optimization is needed, executing step S300;
step S200, inputting sample data to be evaluated into the optimized network model, and outputting an efficiency evaluation value;
step S300, judging whether single-objective effectiveness optimization is to be performed; if so, executing step S400; otherwise, executing step S500;
step S400, performing single-objective effectiveness optimization by adopting a GA algorithm, wherein the effectiveness index value of each population individual predicted by the optimized network model is used as the individual fitness value;
step S500, performing multi-efficiency optimization by adopting an NSGAII algorithm;
step S100 includes the steps of:
step S110, dividing sample data for optimization into a training set, a test set and a verification set;
step S120, respectively carrying out standardization processing on the training set, the test set and the verification set to obtain a standardized training set, a standardized test set and a standardized verification set;
step S130, constructing a deep neural network model, and setting a loss function, model precision and maximum learning times of the deep neural network model;
step S140, setting an optimizer of the deep neural network model, inputting the standardized training set into the deep neural network model, and performing model training;
step S150, judging whether the training reaches the set model precision or the maximum learning frequency; if so, ending the training, executing step S160, otherwise, executing step S140;
step S160, checking whether the model performance meets the requirements or not based on the standardized verification set, and if so, executing step S180; if not, go to step S170;
step S170, adjusting network hyper-parameters according to the checking result of the model performance, and executing step S140;
step S180, inputting a standardized test set to obtain an efficiency predicted value on the test set;
step S190, judging whether the generalization error of the deep neural network model meets the requirement; if so, obtaining the optimized network model, and executing the step S200 or the step S300, otherwise, executing the step S1100;
step S1100, adjusting network hyper-parameters according to the generalization error condition, and executing step S140;
step S400 further includes the steps of:
step S410, initializing GA parameters, population and value ranges of the parameters;
step S420, calling the individual performance index value of the population predicted by the optimized network model as an individual adaptive value;
step S430, judging whether a termination condition is reached; if yes, go to step S450, otherwise go to step S440;
step S440, executing selection, crossing and mutation operations, and executing step S420;
step S450, outputting the optimal set parameter combination;
step S500 further includes the steps of:
step S510, initializing NSGAII parameters, populations and value ranges of the parameters;
step S520, determining whether a first-generation offspring population is generated; if so, adding 1 to the generation counter and, from the second generation on, executing step S540; otherwise, executing step S530;
step S530, calling the optimized network model to predict the effectiveness index values of the population individuals as the individual fitness values, and performing non-dominated sorting of the population; performing selection, crossover and mutation; executing step S520;
step S540, combining the offspring population and the parent population;
step S550, judging whether a new parent population is generated; if yes, go to step S570, otherwise go to step S560;
step S560, calling the optimized network model to predict the effectiveness index values of the population individuals as the individual fitness values, and performing non-dominated sorting of the population; calculating the crowding distance of the individuals in each non-dominated layer; selecting suitable individuals to form a new parent population; executing step S550;
step S570, selecting, crossing and mutating to generate a new filial generation population;
step S580, determining whether the maximum number of iterations is reached; if yes, go to step S590, otherwise, add 1 to the iteration number, go to step S540;
in step S590, the pareto optimal solution is output.
CN201910698203.5A 2019-07-31 2019-07-31 Intelligent system combat effectiveness evaluation and optimization method Active CN110544011B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910698203.5A CN110544011B (en) 2019-07-31 2019-07-31 Intelligent system combat effectiveness evaluation and optimization method


Publications (2)

Publication Number Publication Date
CN110544011A CN110544011A (en) 2019-12-06
CN110544011B true CN110544011B (en) 2023-03-24

Family

ID=68709886




Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070203589A1 (en) * 2005-04-08 2007-08-30 Manyworlds, Inc. Adaptive Recombinant Process Methods

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635344A (en) * 2018-11-14 2019-04-16 AVIC Shenyang Aircraft Design and Research Institute Effectiveness evaluation model construction method and device based on simulation tests
CN109858093A (en) * 2018-12-28 2019-06-07 Zhejiang University of Technology Multi-objective optimization design method for air source heat pumps based on an SVR neural network-aided non-dominated sorting genetic algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Combat effectiveness evaluation method for a certain type of equipment based on a GA-BP neural network; Zheng Yujun et al.; Journal of Air Force Radar Academy; 2012-10-31; Vol. 26, No. 5; pp. 346-348 *

Also Published As

Publication number Publication date
CN110544011A (en) 2019-12-06

Similar Documents

Publication Publication Date Title
CN110544011B (en) Intelligent system combat effectiveness evaluation and optimization method
Cheng et al. Optimizing hydropower reservoir operation using hybrid genetic algorithm and chaos
Benardos et al. Optimizing feedforward artificial neural network architecture
CN104978612A (en) Distributed big data system risk predicating method based on AHP-RBF
CN109118013A (en) A kind of management data prediction technique, readable storage medium storing program for executing and forecasting system neural network based
CN112364560B (en) Intelligent prediction method for working hours of mine rock drilling equipment
CN113094988A (en) Data-driven slurry circulating pump operation optimization method and system
CN111832949A (en) Construction method of equipment combat test identification index system
CN115221793A (en) Tunnel surrounding rock deformation prediction method and device
CN104732067A (en) Industrial process modeling forecasting method oriented at flow object
CN116542382A (en) Sewage treatment dissolved oxygen concentration prediction method based on mixed optimization algorithm
CN106845696B (en) Intelligent optimization water resource configuration method
CN115982141A (en) Characteristic optimization method for time series data prediction
CN114021432A (en) Stress corrosion cracking crack propagation rate prediction method and system
CN112200208B (en) Cloud workflow task execution time prediction method based on multi-dimensional feature fusion
CN113792984B (en) Cloud model-based anti-air defense anti-pilot command control model capability assessment method
Jalalvand et al. A multi-objective risk-averse workforce planning under uncertainty
CN115713144A (en) Short-term wind speed multi-step prediction method based on combined CGRU model
CN111414927A (en) Method for evaluating seawater quality
CN115034070A (en) Multi-objective optimization and VIKOR method-based complex mechanical product selection, assembly and optimization and decision method
Xia et al. Robust system portfolio modeling and solving in complex system of systems construction
Cococcioni et al. Identification of Takagi-Sugeno fuzzy systems based on multi-objective genetic algorithms
Sun et al. Consistency modification of judgment matrix based on genetic algorithm in analytic hierarchy process
CN110942149B (en) Feature variable selection method based on information change rate and condition mutual information
CN112650770B (en) MySQL parameter recommendation method based on query work load analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant