CN113222797A - Combat method and system - Google Patents
- Publication number
- CN113222797A (application CN202110479320.XA)
- Authority
- CN
- China
- Prior art keywords
- neural network
- deep neural
- combat
- weapon
- battlefield
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 22
- 238000013528 artificial neural network Methods 0.000 claims abstract description 102
- 210000002569 neuron Anatomy 0.000 claims description 6
- 238000012549 training Methods 0.000 claims description 6
- 230000015572 biosynthetic process Effects 0.000 claims description 3
- 230000004083 survival effect Effects 0.000 claims description 3
- 238000000605 extraction Methods 0.000 claims 1
- 238000002360 preparation method Methods 0.000 claims 1
- 230000002787 reinforcement Effects 0.000 description 9
- 238000010586 diagram Methods 0.000 description 2
- 230000003993 interaction Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000007123 defense Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000005611 electricity Effects 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 230000035807 sensation Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/26—Government or public services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0633—Workflow analysis
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Human Resources & Organizations (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Tourism & Hospitality (AREA)
- Strategic Management (AREA)
- Economics (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Educational Administration (AREA)
- General Engineering & Computer Science (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Development Economics (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- Entrepreneurship & Innovation (AREA)
- General Business, Economics & Management (AREA)
- Artificial Intelligence (AREA)
- Marketing (AREA)
- Life Sciences & Earth Sciences (AREA)
- Quality & Reliability (AREA)
- Operations Research (AREA)
- Game Theory and Decision Science (AREA)
- Primary Health Care (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a combat method and a combat system, wherein the combat method comprises the following steps: taking own weapons as own intelligent bodies, extracting own intelligent body data, and performing deep neural network analysis on own intelligent body combat parameters; carrying out deep neural network analysis of battlefield situation awareness and threat assessment; and combining the deep neural network analysis of own intelligent agent combat parameters and the neural network analysis of battlefield situation awareness and threat assessment to obtain a combat decision. The invention combines the actual scene with the deep neural network, can accurately and quickly grasp the battlefield situation of strategic and campaign levels, lightens the pressure of the combat unit and improves the accuracy of combat command.
Description
Technical Field
The invention relates to the technical field of information, in particular to a combat method and system.
Background
With the rapid advance of new technologies, modern war has entered the information age, and its basic form is integrated system-versus-system confrontation supported by information technology. In informatized war, the battlefield has expanded into a complex system of joint operations conducted across the multi-dimensional domains of land, sea, air, space, the electromagnetic spectrum and cyberspace. The large number of connections between combat units and the command system, and among personnel, equipment and the battlefield environment, form a highly complex network; as a battle unfolds, a large amount of data is generated at every moment, forming battlefield big data.
Battlefield information is massive, the battlefield environment is complicated, combat objects are diversified, and combat intensity is increasing sharply, all of which pose great challenges to a commander's precise command; the difficulty lies in grasping the battlefield situation at the strategic and campaign levels accurately and quickly. Military command decision-making is one of the core links in a combat system, and how to better assist a commander in making decisions in an increasingly complex and changeable battlefield environment has become a hot problem in the field of military research. Methods based on deep neural networks and multi-agent reinforcement learning can open new possibilities for military command.
Reinforcement learning is an important branch of machine learning: an agent learns through interaction with its environment, searching for a policy that maximizes return or achieves a specific goal; it is closely related to game theory. With the rapid development and wide application of deep neural networks, more and more traditional reinforcement learning algorithms have been combined with deep neural networks to form deep reinforcement learning methods, so as to handle the more complex, higher-dimensional scenarios of the real world.
Multi-agent reinforcement learning applies reinforcement learning and game theory to multi-agent systems, enabling multiple agents to complete more complex tasks by interacting and making decisions in higher-dimensional, dynamic real scenarios; it has gradually become a frontier hotspot in the field of reinforcement learning research.
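As a concrete illustration of the interaction loop described above, a minimal tabular Q-learning sketch follows; the toy environment, the `step(s, a)` interface, and all hyper-parameters are invented for the example and are not part of the invention:

```python
import random

def q_learning(step, n_states, n_actions, episodes=300,
               alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning: the agent learns through interaction with the
    environment, searching for a policy that maximizes cumulative return.
    `step(s, a) -> (next_state, reward, done)` is an assumed interface."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < eps:                       # explore
                a = random.randrange(n_actions)
            else:                                           # exploit
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # temporal-difference update toward the bootstrapped return
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

def chain_step(s, a):
    """Toy 4-state chain: action 1 moves right; reaching state 3 pays 1."""
    s2 = min(s + 1, 3) if a == 1 else s
    return s2, (1.0 if s2 == 3 and s != 3 else 0.0), s2 == 3
```

Seeding `random` makes a run reproducible; after a few hundred episodes the learned Q-values come to favor moving right toward the rewarding terminal state.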
Disclosure of Invention
To solve at least one of the above problems, a first aspect of the present invention is directed to a combat method, the method comprising:
the method comprises the following steps: taking own weapons as own intelligent bodies, extracting own intelligent body data, and performing deep neural network analysis on own intelligent body combat parameters;
carrying out deep neural network analysis of battlefield situation awareness and threat assessment;
and combining the deep neural network analysis of own intelligent agent combat parameters and the neural network analysis of battlefield situation awareness and threat assessment to obtain a combat decision.
Preferably, the extracting data of the own intelligent agent and the deep neural network analysis of the fighting parameters of the own intelligent agent comprise:
depicting each own intelligent agent with a deep neural network, the deep neural networks comprising A_1, A_2, …, A_n, wherein n is the number of own intelligent agents, namely the number of own weapons;
Preferably, the extracting of the own intelligent agent data specifically includes: extracting the weapon operational parameters of the own weapon, using them as the input of the deep neural network, and performing forward propagation to obtain the output layer A_i = (A_i1, A_i2, …, A_im) of the deep neural network, where m is the total number of output-layer neurons of the deep neural network A_i.
Preferably, the deep neural network analysis for battlefield situation awareness and threat assessment includes:
threat assessment of the various enemy weapons is carried out; the battlefield map is divided into grids according to two-dimensional coordinates, and a battlefield situation gray-level map set T = (T_1, T_2, …, T_t) is generated; the battlefield situation gray-level map set is imported as input information into a deep neural network O_NN; forward propagation is carried out to obtain the output layer O = (O_1, …, O_p) of the deep neural network O_NN.
Preferably, the weapon operational parameters include: three-dimensional space coordinates, navigation speed, heading, formation membership, task execution state, carried weapons and their number, survival status, damage percentage, time of destruction, and remaining fuel.
Preferably, the threat assessment of various enemy weapons specifically includes:
carrying out threat assessment according to the coordinates, weapon power and distance of each weapon;
the battlefield situation gray-level map set consists of the gray-level maps T_j (j ≤ t) of the t classes of enemy weapons in the grid.
The obtaining of the combat decision by combining the deep neural network analysis of the own intelligent agent combat parameters and the battlefield situation awareness neural network analysis comprises the following steps:
constructing a deep neural network C_i for each own intelligent agent, taking the output layer A_i = (A_i1, A_i2, …, A_im) and the output layer O = (O_1, …, O_p) of the battlefield situation deep neural network O_NN as the input layer of the deep neural network C_i; forward propagation is carried out to obtain the output layer C_i = (C_i1, …, C_iq),
wherein (C_i1, …, C_iq) are the probabilities of the respective combat instructions in the combat decision instruction set D_i = (D_i1, …, D_iq) of the own intelligent agent; the instruction corresponding to the maximum value in C_i is taken as the decision instruction, and its reward value R_i is calculated; the reward value R_i is used for training the corresponding deep neural network.
In a second aspect, the present invention provides a combat system comprising:
the operational parameter analysis module: carrying out deep neural network analysis according to weapon operation parameters of own weapons;
a situation generation module: generating a combat situation gray level map according to the distribution of enemy weapons on a battlefield map and combat parameters of the enemy weapons, and performing neural network analysis;
a decision generation module: and generating the fighting decision of the weapon of the own party according to the situation generating module and the fighting parameter analyzing module.
Specifically, the deep neural network analysis according to weapon operational parameters of own weapons includes:
regarding each own weapon as an own intelligent agent; depicting each own intelligent agent with a deep neural network, the deep neural networks comprising A_1, A_2, …, A_n, wherein n is the number of own intelligent agents, namely the number of own weapons;
specifically, the extracting of the own intelligent agent data includes: extracting the weapon operational parameters of the own weapon, using them as the input of the deep neural network, and performing forward propagation to obtain the output layer A_i = (A_i1, A_i2, …, A_im) of the deep neural network, where m is the total number of output-layer neurons of the deep neural network A_i.
Preferably, the generating of the fighting decision of the own weapon according to the situation generating module and the fighting parameter analyzing module includes:
threat assessment of the various enemy weapons is carried out; the battlefield map is divided into grids according to two-dimensional coordinates, and a battlefield situation gray-level map set T = (T_1, T_2, …, T_t) is generated; the battlefield situation gray-level map set is imported as input information into a deep neural network O_NN; forward propagation is carried out to obtain the output layer O = (O_1, …, O_p) of the deep neural network O_NN.
The step of generating a combat situation gray level map according to the distribution of enemy weapons on a battlefield map and combat parameters and carrying out deep neural network analysis comprises the following steps:
constructing a deep neural network C_i for each own intelligent agent, taking the output layer A_i = (A_i1, A_i2, …, A_im) and the output layer O = (O_1, …, O_p) of the battlefield situation deep neural network O_NN as the input layer of the deep neural network C_i; forward propagation is carried out to obtain the output layer C_i = (C_i1, …, C_iq), wherein (C_i1, …, C_iq) are the probabilities of the respective combat instructions in the combat decision instruction set D_i = (D_i1, …, D_iq) of the own intelligent agent; the instruction corresponding to the maximum value in C_i is taken as the decision instruction, and its reward value R_i is calculated; the reward value R_i is used for training the corresponding deep neural network.
The invention has the following beneficial effects:
the invention combines the actual scene with the deep neural network, can accurately and quickly grasp the battlefield situation of strategic and campaign levels, lightens the pressure of the combat unit and improves the accuracy of combat command.
Drawings
FIG. 1 is a diagram illustrating the steps of a method of combat according to one embodiment of the present invention;
fig. 2 is a block diagram of a combat system according to an embodiment of the present invention.
Detailed Description
In order to more clearly illustrate the invention, the invention is further described below with reference to preferred embodiments and the accompanying drawings. Similar parts in the figures are denoted by the same reference numerals. It is to be understood by persons skilled in the art that the following detailed description is illustrative and not restrictive, and is not to be taken as limiting the scope of the invention.
Embodiment 1 provides a method of combat, as shown in fig. 1, the method comprising:
s001: taking own weapons (including bombers, early warning machines, fighters, air defense warships and the like) as own intelligent bodies, extracting data of the own intelligent bodies, and performing deep neural network analysis on operational parameters of the own intelligent bodies;
preferably, each of the n own weapons is taken as an individual intelligent agent, each own intelligent agent is represented by a deep neural network, and these deep neural networks comprise A_1, A_2, …, A_n;
specifically, extracting the own intelligent agent data comprises: extracting the weapon operational parameters of the own weapon, using them as the input of the deep neural network, and performing forward propagation to obtain the output layer A_i = (A_i1, A_i2, …, A_im) of the deep neural network, where m is the total number of output-layer neurons of the deep neural network A_i (i.e., the deep neural network corresponding to the i-th own weapon);
preferably, the weapon operational parameters of the own weapon include: three-dimensional space coordinates, navigation speed, heading, formation membership, task execution state, carried weapons and their number, survival status, damage percentage, time of destruction, remaining fuel, and the like.
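The per-agent analysis of S001 can be sketched in Python/NumPy as follows; the parameter field names, layer sizes, ReLU hidden activations and softmax output are assumptions made for illustration, since the patent does not fix a concrete architecture:

```python
import numpy as np

def extract_weapon_params(weapon):
    # Flatten the operational parameters listed above into a feature vector.
    # The dictionary keys here are illustrative, not from the patent.
    return np.array([
        *weapon["position"],        # three-dimensional space coordinates
        weapon["speed"],            # navigation speed
        weapon["heading"],          # heading
        weapon["formation_id"],     # formation membership
        weapon["task_state"],       # task execution state
        weapon["num_weapons"],      # carried weapons count
        float(weapon["alive"]),     # survival status
        weapon["damage_pct"],       # damage percentage
        weapon["fuel"],             # remaining fuel
    ], dtype=np.float64)

def forward(params, layers):
    """Forward propagation through a fully connected network A_i.

    `layers` is a list of (W, b) pairs; ReLU on hidden layers and
    softmax on the output layer are assumed activation choices."""
    x = params
    for W, b in layers[:-1]:
        x = np.maximum(0.0, W @ x + b)   # hidden layer with ReLU
    W, b = layers[-1]
    z = W @ x + b
    e = np.exp(z - z.max())              # numerically stable softmax
    return e / e.sum()                   # output layer A_i = (A_i1, ..., A_im)
```

With randomly initialized weights this only demonstrates the data flow; in the method the weights would come from training on the reward signal described later.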
S002: carrying out deep neural network analysis of battlefield situation awareness and threat assessment;
preferably, threat assessment of the various enemy weapons is carried out; the battlefield map is divided into an m×n grid according to two-dimensional coordinates, each class of enemy weapon generates a battlefield situation gray-level map T_j over the grid, and the set of gray-level maps of all enemy weapon classes is T = (T_1, T_2, …, T_t);
the battlefield situation gray-level map set is imported as input information into a deep neural network O_NN; forward propagation is carried out to obtain the output layer O = (O_1, …, O_p) of the deep neural network O_NN.
Preferably, the threat assessment of various enemy weapons specifically includes:
threat assessment is carried out according to the coordinates, weapon power and distance of each enemy weapon;
specifically, the battlefield situation gray-level map set consists of the gray-level maps T_j (j ≤ t) of the t classes of enemy weapons in the grid.
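The gridding and threat-assessment step of S002 might be sketched as below; the grid size, map extent, and the threat formula (power scaled by distance to an own reference point) are illustrative assumptions, since the patent only names coordinates, weapon power and distance as inputs:

```python
import numpy as np

def situation_gray_map(enemy_weapons, grid=(32, 32), extent=100.0):
    """Rasterize one class of enemy weapons into a battlefield
    situation gray-level map T_j over a 2-D grid.

    Each weapon deposits a threat value into its grid cell; the
    threat value used here (power / distance-to-origin) is an
    assumed stand-in for the patent's unspecified assessment."""
    T = np.zeros(grid)
    for w in enemy_weapons:
        x, y = w["coord"]                        # two-dimensional coordinates
        gx = min(int(x / extent * grid[0]), grid[0] - 1)
        gy = min(int(y / extent * grid[1]), grid[1] - 1)
        dist = max(np.hypot(x, y), 1.0)          # distance to own reference point
        T[gx, gy] += w["power"] / dist           # per-weapon threat contribution
    m = T.max()
    return T / m if m > 0 else T                 # normalize to gray levels in [0, 1]
```

One such map per enemy weapon class yields the set T = (T_1, …, T_t) that is fed into the situation network.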
S003: combining the deep neural network analysis of the own intelligent agent combat parameters with the battlefield situation awareness neural network analysis to obtain a combat decision;
the method specifically comprises the following steps:
constructing a deep neural network C_i for each own intelligent agent, taking the output layer A_i = (A_i1, A_i2, …, A_im) and the output layer O = (O_1, …, O_p) of the battlefield situation deep neural network O_NN as the input layer of the deep neural network C_i; forward propagation is carried out to obtain the output layer C_i = (C_i1, …, C_iq);
wherein (C_i1, …, C_iq) are the probabilities of the respective combat instructions in the combat decision instruction set D_i = (D_i1, …, D_iq) of the own intelligent agent; the instruction corresponding to the maximum value in C_i is taken as the decision instruction, and its reward value R_i is calculated; the reward value R_i is used for training the corresponding deep neural network.
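The decision step S003 — concatenating A_i and O as the input layer of C_i and taking the maximum-probability instruction — can be sketched as follows; the network weights and the instruction set D_i are placeholders invented for the example:

```python
import numpy as np

def decide(A_i, O, layers, instructions):
    """Decision network C_i: concatenate the agent output A_i and the
    situation output O as the input layer, forward-propagate, and pick
    the instruction with maximum probability.

    `layers` are (W, b) pairs; ReLU/softmax are assumed activations."""
    x = np.concatenate([A_i, O])          # input layer of C_i
    for W, b in layers[:-1]:
        x = np.maximum(0.0, W @ x + b)
    W, b = layers[-1]
    z = W @ x + b
    e = np.exp(z - z.max())
    C_i = e / e.sum()                     # probabilities over D_i = (D_i1, ..., D_iq)
    k = int(np.argmax(C_i))               # maximum-value decision instruction
    return instructions[k], C_i
```

In the full method, the reward value R_i computed for the chosen instruction would drive training of C_i; that update is omitted here because the patent does not specify the reward function or the training algorithm.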
Embodiment 2 provides a battle system, as shown in fig. 2, comprising:
the operational parameter analysis module: carrying out deep neural network analysis according to weapon operation parameters of own weapons;
a situation generation module: generating a combat situation gray scale map in the first combat case sample according to the distribution of enemy weapons on the battlefield map and the combat parameters of the enemy weapons;
a decision generation module: and generating the fighting decision of the weapon of the own party according to the situation generating module and the fighting parameter analyzing module.
The deep neural network analysis according to the weapon operational parameters of the own weapon comprises the following steps:
regarding each own weapon as an own intelligent agent; depicting each own intelligent agent with a deep neural network, the deep neural networks comprising A_1, A_2, …, A_n, wherein n is the number of own intelligent agents, namely the number of own weapons;
the extracting of the own intelligent agent data specifically comprises: extracting the weapon operational parameters of the own weapon, using them as the input of the deep neural network, and performing forward propagation to obtain the output layer A_i = (A_i1, A_i2, …, A_im) of the deep neural network, where m is the total number of output-layer neurons of the deep neural network A_i.
The operation decision of the own weapon generated according to the situation generation module and the operation parameter analysis module comprises the following steps:
threat assessment of the various enemy weapons is carried out; the battlefield map is divided into grids according to two-dimensional coordinates, and a battlefield situation gray-level map set T = (T_1, T_2, …, T_t) is generated; the battlefield situation gray-level map set is imported as input information into a deep neural network O_NN; forward propagation is carried out to obtain the output layer O = (O_1, …, O_p) of the deep neural network O_NN.
The step of generating a combat situation gray level map according to the distribution of enemy weapons on a battlefield map and combat parameters and carrying out deep neural network analysis comprises the following steps:
constructing a deep neural network C_i for each own intelligent agent, taking the output layer A_i = (A_i1, A_i2, …, A_im) and the output layer O = (O_1, …, O_p) of the battlefield situation deep neural network O_NN as the input layer of the deep neural network C_i; forward propagation is carried out to obtain the output layer C_i = (C_i1, …, C_iq), wherein (C_i1, …, C_iq) are the probabilities of the respective combat instructions in the combat decision instruction set D_i = (D_i1, …, D_iq) of the own intelligent agent; the instruction corresponding to the maximum value in C_i is taken as the decision instruction, and its reward value R_i is calculated; the reward value R_i is used for training the corresponding deep neural network, i.e., for the combat decision of the own weapon corresponding to the decision instruction.
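Schematically, the three modules of Embodiment 2 compose as in the sketch below; the class name and callable interfaces are assumptions made for illustration, not part of the patent:

```python
class CombatSystem:
    """Schematic wiring of the three modules of Embodiment 2."""

    def __init__(self, param_analyzer, situation_generator, decision_generator):
        self.param_analyzer = param_analyzer            # operational parameter analysis module
        self.situation_generator = situation_generator  # situation generation module
        self.decision_generator = decision_generator    # decision generation module

    def decide(self, own_weapons, enemy_weapons):
        # One A_i per own weapon, one situation output O from the
        # enemy distribution, then one decision per own agent.
        A = [self.param_analyzer(w) for w in own_weapons]
        O = self.situation_generator(enemy_weapons)
        return [self.decision_generator(a, O) for a in A]
```

Any callables with these shapes can be plugged in; for instance, with toy stand-ins `CombatSystem(lambda w: w * 2, lambda es: sum(es), lambda a, O: a + O)` the data flow through the three modules can be traced end to end.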
It should be understood that the above-mentioned embodiments of the present invention are only examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention, and it will be obvious to those skilled in the art that other variations or modifications may be made on the basis of the above description, and all embodiments may not be exhaustive, and all obvious variations or modifications may be included within the scope of the present invention.
Claims (10)
1. A combat method is characterized in that,
the method comprises the following steps: taking own weapons as own intelligent bodies, extracting own intelligent body data, and performing deep neural network analysis on own intelligent body combat parameters;
carrying out deep neural network analysis of battlefield situation awareness and threat assessment;
and combining the deep neural network analysis of own intelligent agent combat parameters and the neural network analysis of battlefield situation awareness and threat assessment to obtain a combat decision.
2. The method of claim 1,
the extraction of own intelligent agent data and the deep neural network analysis of own intelligent agent operational parameters comprise the following steps:
depicting each own intelligent agent with a deep neural network, the deep neural networks comprising A_1, A_2, …, A_n, wherein n is the number of own intelligent agents, namely the number of own weapons;
the extracting of the own intelligent agent data specifically comprises: extracting the weapon operational parameters of the own weapon, using them as the input of the deep neural network, and performing forward propagation to obtain the output layer A_i = (A_i1, A_i2, …, A_im) of the deep neural network, where m is the total number of output-layer neurons of the deep neural network A_i.
3. The method of claim 1,
the deep neural network analysis for battlefield situation awareness and threat assessment comprises the following steps:
threat assessment of the various enemy weapons is carried out; the battlefield map is divided into grids according to two-dimensional coordinates, and a battlefield situation gray-level map set T = (T_1, T_2, …, T_t) is generated; the battlefield situation gray-level map set is imported as input information into a deep neural network O_NN; forward propagation is carried out to obtain the output layer O = (O_1, …, O_p) of the deep neural network O_NN.
4. The method of claim 2,
the weapon operational parameters include: three-dimensional space coordinates, navigation speed, heading, formation membership, task execution state, carried weapons and their number, survival status, damage percentage, time of destruction, and remaining fuel.
5. The method of claim 3,
the threat assessment of various enemy weapons specifically comprises the following steps:
carrying out threat assessment according to the coordinates, weapon power and distance of each weapon;
the battlefield situation gray-level map set consists of the gray-level maps T_j (j ≤ t) of the t classes of enemy weapons in the grid.
6. The method of claim 1,
the obtaining of the combat decision by combining the deep neural network analysis of the own intelligent agent combat parameters and the battlefield situation awareness neural network analysis comprises the following steps:
constructing a deep neural network C_i for each own intelligent agent, taking the output layer A_i = (A_i1, A_i2, …, A_im) and the output layer O = (O_1, …, O_p) of the battlefield situation deep neural network O_NN as the input layer of the deep neural network C_i; forward propagation is carried out to obtain the output layer C_i = (C_i1, …, C_iq),
wherein (C_i1, …, C_iq) are the probabilities of the respective combat instructions in the combat decision instruction set D_i = (D_i1, …, D_iq) of the own intelligent agent; the instruction corresponding to the maximum value in C_i is taken as the decision instruction, and its reward value R_i is calculated; the reward value R_i is used for training the corresponding deep neural network.
7. A combat system, characterized in that,
the combat system includes:
the operational parameter analysis module: carrying out deep neural network analysis according to weapon operation parameters of own weapons;
a situation generation module: generating a combat situation gray level map according to the distribution of enemy weapons on a battlefield map and combat parameters of the enemy weapons, and performing neural network analysis;
a decision generation module: and generating the fighting decision of the weapon of the own party according to the situation generating module and the fighting parameter analyzing module.
8. The combat system of claim 7,
the deep neural network analysis according to the weapon operational parameters of the own weapon comprises the following steps:
regarding each own weapon as an own intelligent agent; depicting each own intelligent agent with a deep neural network, the deep neural networks comprising A_1, A_2, …, A_n, wherein n is the number of own intelligent agents, namely the number of own weapons;
the extracting of the own intelligent agent data specifically comprises: extracting the weapon operational parameters of the own weapon, using them as the input of the deep neural network, and performing forward propagation to obtain the output layer A_i = (A_i1, A_i2, …, A_im) of the deep neural network, where m is the total number of output-layer neurons of the deep neural network A_i.
9. The combat system of claim 7,
the operation decision of the own weapon generated according to the situation generation module and the operation parameter analysis module comprises the following steps:
threat assessment of the various enemy weapons is carried out; the battlefield map is divided into grids according to two-dimensional coordinates, and a battlefield situation gray-level map set T = (T_1, T_2, …, T_t) is generated; the battlefield situation gray-level map set is imported as input information into a deep neural network O_NN; forward propagation is carried out to obtain the output layer O = (O_1, …, O_p) of the deep neural network O_NN.
10. The combat system of claim 7,
the step of generating a combat situation gray level map according to the distribution of enemy weapons on a battlefield map and combat parameters and carrying out deep neural network analysis comprises the following steps:
constructing a deep neural network C_i for each own intelligent agent, taking the output layer A_i = (A_i1, A_i2, …, A_im) and the output layer O = (O_1, …, O_p) of the battlefield situation deep neural network O_NN as the input layer of the deep neural network C_i; forward propagation is carried out to obtain the output layer C_i = (C_i1, …, C_iq), wherein (C_i1, …, C_iq) are the probabilities of the respective combat instructions in the combat decision instruction set D_i = (D_i1, …, D_iq) of the own intelligent agent; the instruction corresponding to the maximum value in C_i is taken as the decision instruction, and its reward value R_i is calculated; the reward value R_i is used for training the corresponding deep neural network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110479320.XA CN113222797A (en) | 2021-04-30 | 2021-04-30 | Combat method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110479320.XA CN113222797A (en) | 2021-04-30 | 2021-04-30 | Combat method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113222797A true CN113222797A (en) | 2021-08-06 |
Family
ID=77090299
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110479320.XA Pending CN113222797A (en) | 2021-04-30 | 2021-04-30 | Combat method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113222797A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111145348A (en) * | 2019-11-19 | 2020-05-12 | 扬州船用电子仪器研究所(中国船舶重工集团公司第七二三研究所) | Visual generation method of self-adaptive battle scene |
CN112560332A (en) * | 2020-11-30 | 2021-03-26 | 北京航空航天大学 | Aviation soldier system intelligent behavior modeling method based on global situation information |
Non-Patent Citations (5)
Title |
---|
Lv Xuezhi et al.: "Cognition method for initial campaign situation based on deep learning", Fire Control & Command Control, vol. 45, no. 04, 29 May 2020 (2020-05-29), pages 10-17 *
Wu Zhaoxin; Li Hui; Wang Zhuang; Tao Wei; Wu Haolin; Hou Xianle: "Design of an intelligent simulation platform based on deep reinforcement learning", Tactical Missile Technology, no. 04, 15 July 2020 (2020-07-15), pages 193-200 *
Yang Ping; Bi Yiming; Sun Shuling: "Research on maneuver-unit agents with autonomous decision-making capability", Acta Armamentarii, no. 11, 15 November 2007 (2007-11-15), pages 1363-1366 *
Tang Runze et al.: "Application of artificial intelligence in unmanned battlefield situation prediction and game confrontation", Modern Defence Technology, vol. 48, no. 05, 15 October 2020 (2020-10-15), pages 25-31 *
Tang Runze et al.: "Multi-weapon cross-domain intelligent cooperative air combat applications and key technologies", Modern Defence Technology, vol. 49, no. 02, 15 April 2021 (2021-04-15), pages 26-34 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111240353B (en) | Unmanned aerial vehicle collaborative air combat decision method based on genetic fuzzy tree | |
CN114239228A (en) | Efficiency evaluation method based on modeling and analysis of massive countermeasure simulation deduction data | |
CN101964019A (en) | Against behavior modeling simulation platform and method based on Agent technology | |
CN113705102B (en) | Deduction simulation system, deduction simulation method, deduction simulation equipment and deduction simulation storage medium for sea-air cluster countermeasure | |
CN113791634A (en) | Multi-aircraft air combat decision method based on multi-agent reinforcement learning | |
CN113723013B (en) | Multi-agent decision-making method for continuous space soldier chess deduction | |
CN105678030B (en) | Divide the air-combat tactics team emulation mode of shape based on expert system and tactics tactics | |
CN112861257B (en) | Aircraft fire control system precision sensitivity analysis method based on neural network | |
CN115329594B (en) | Large-scale missile cluster attack and defense confrontation simulation acceleration method and system | |
Biltgen et al. | A Methodology for Capability-Focused Technology Evaluation of Systems of Systems | |
CN113625569B (en) | Small unmanned aerial vehicle prevention and control decision method and system based on hybrid decision model | |
CN109544082A (en) | A kind of system and method for digital battlefield confrontation | |
Fawkes | Developments in Artificial Intelligence: Opportunities and Challenges for Military Modeling and Simulation | |
CN114912741A (en) | Effectiveness evaluation method and device for combat system structure and storage medium | |
CN115293022A (en) | Aviation soldier intelligent agent confrontation behavior modeling method based on OptiGAN and spatiotemporal attention | |
CN113222797A (en) | Combat method and system | |
CN114247144B (en) | Multi-agent confrontation simulation method and device, electronic equipment and storage medium | |
CN115457809A (en) | Multi-agent reinforcement learning-based flight path planning method under opposite support scene | |
CN114254722B (en) | Multi-intelligent-model fusion method for game confrontation | |
Hu et al. | A Neural Network-Based Intelligent Decision-Making in the Air-Offensive Campaign with Simulation | |
US12061673B1 (en) | Multi-agent planning and autonomy | |
Lihua et al. | Multi-platform fire control strike track planning method based on deep enhance learning | |
CN118133958B (en) | Military combat command system and method based on augmented reality and knowledge graph | |
CN112926729B (en) | Man-machine confrontation intelligent agent strategy making method | |
Farlik et al. | Aspects of the surface-to-air missile systems modelling and simulation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |