CN112766595B - Command control device, method, system, computer equipment and medium - Google Patents

Command control device, method, system, computer equipment and medium

Info

Publication number
CN112766595B
CN112766595B (application CN202110123302.8A)
Authority
CN
China
Prior art keywords
scene
data
action
module
information database
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110123302.8A
Other languages
Chinese (zh)
Other versions
CN112766595A (en)
Inventor
常予
蔡明春
苏磊
张蕴弛
李梦芯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Electronic System Engineering
Original Assignee
Beijing Institute of Electronic System Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Electronic System Engineering
Priority to CN202110123302.8A
Publication of CN112766595A
Application granted
Publication of CN112766595B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51 - Indexing; Data structures therefor; Storage structures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 - Operations research, analysis or management
    • G06Q10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 - Operations research, analysis or management
    • G06Q10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06311 - Scheduling, planning or task assignment for a person or group
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/26 - Government or public services
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Databases & Information Systems (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Primary Health Care (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a command control device, method, system, computer equipment and medium. In one embodiment, the command control device includes: a perception information input module for acquiring multiple classes of scene perception data; a distributed cloud processing module for processing and identifying the scene perception data to generate at least one scene information database corresponding to the scene perception data; a man-machine interaction module for acquiring an action target output by a user on the basis of the scene perception data; a parallel simulation module for running simulations on the scene information database and the action target to generate simulation deduction data; a situation prediction module for inputting the scene information database and the action target into a trained prediction model to generate situation prediction information; and a task output module for generating an action task from the deduction data and the prediction information.

Description

Command control device, method, system, computer equipment and medium
Technical Field
The present application relates to the field of command and control technologies, and in particular, to a command control device, method, system, computer device and medium applied to an unmanned mobile unit cluster.
Background
In the era of intelligent operations, scene perception data are massive and heterogeneous, unmanned platforms are widely deployed, and the operational space is expanding at an accelerating pace from the physical and information domains into the cognitive domain. This places higher demands on the accuracy, timeliness and effectiveness of command and control, and makes command and control of intelligent unmanned mobile unit clusters a clear trend.
Thus, there is a need for new command control apparatus, methods, systems, computer devices, and media.
Disclosure of Invention
To solve at least one of the above problems, a first embodiment of the present application provides a command control apparatus, including:
the perception information input module is used for acquiring multi-class scene perception data;
a distributed cloud processing module for processing and identifying the scene-aware data to generate at least one scene information database corresponding to the scene-aware data;
the man-machine interaction module is used for acquiring an action target output by a user based on the scene perception data;
the parallel simulation module is used for simulating according to the scene information database and the action target to generate simulation deduction data;
the situation prediction module is used for inputting the scene information database and the action targets into the trained prediction model to generate situation prediction information;
and the task output module is used for generating an action task according to the deduction data and the prediction information.
Furthermore, the parallel simulation module is further used for outputting simulation deduction data to the situation prediction module;
the situation prediction module is further used for expanding the scene information database according to the simulation deduction data and inputting the expanded scene information database and the action target into a trained prediction model to generate situation prediction information.
Further, the distributed cloud processing module includes:
the filtering processing unit is used for carrying out filtering processing on scene perception data, wherein the scene perception data comprises list perception data and image perception data;
the sliding window processing unit is used for cutting, classifying and connecting the filtered list perception data;
and the image processing unit is used for identifying, matching and labeling the filtered image scene perception data.
Further, the prediction model is generated by training with machine learning, where the machine learning includes one or more of reinforcement learning, deep learning, and deep reinforcement learning.
Furthermore, the parallel simulation module is further used for generating a multiple simulation model by utilizing a twin service technology and a simulation service technology, and generating simulation deduction data according to the scene information database and the action target.
Further, the scene perception data includes: state data and environment data acquired by a plurality of sensors;
the scene information database includes: the system comprises a blue party action situation database, a red party action situation database and a scene environment database, wherein the scene environment database compensates the blue party action situation database and the red party action situation database;
the action tasks include: action planning, force distribution grouping and unit node allocation.
A second embodiment of the present application provides a method of command and control using the command control device described above, including:
acquiring multiple types of scene perception data;
processing and identifying the scene-aware data to generate at least one scene information database corresponding to the scene-aware data;
acquiring an action target output by a user based on the scene perception data;
simulating according to the scene information database and the action target to generate simulation deduction data;
inputting the scene information database and the action targets into a trained prediction model to generate situation prediction information;
and generating an action task according to the deduction data and the prediction information.
A third embodiment of the present application provides a command control system, including:
at least one command control device as described above;
and at least two action units which are communicated with each other, wherein the action units acquire action tasks of the command control device through a communication network and execute the action tasks.
A fourth embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described above.
A fifth embodiment of the application provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, said processor implementing a method as described above when executing said program.
The beneficial effects of the application are as follows:
aiming at the existing problems at present, the application establishes a command control device, a method, a system, computer equipment and a medium, performs distributed storage on scene perception data through a distributed cloud processing module, forms a large database by utilizing a parallel simulation module and performs deduction, and generates situation prediction information by utilizing a situation prediction module under a training simulation environment, thereby generating action tasks closely related to the scene perception data, realizing real-time acquisition of the scene perception data and real-time perception and prediction of global scene situations, and having wide application prospects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a structural frame diagram of a command control device according to an embodiment of the present application;
FIG. 2 shows a flow chart of a command method according to one embodiment of the application;
FIG. 3 illustrates a structural framework diagram of a command and control system according to one specific example of the present application;
FIG. 4 is a block diagram of a communication system between a command control device and a mobile subunit according to one embodiment of the application;
fig. 5 shows a schematic structural diagram of a computer device according to another embodiment of the present application.
Detailed Description
In order to more clearly illustrate the present application, the present application will be further described with reference to preferred embodiments and the accompanying drawings. Like parts in the drawings are denoted by the same reference numerals. It is to be understood by persons skilled in the art that the following detailed description is illustrative and not restrictive, and that this application is not limited to the details given herein.
It is noted that relational terms such as first and second, and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As shown in fig. 1, a first embodiment of the present application proposes a command control apparatus, including:
the perception information input module is used for acquiring multi-class scene perception data;
a distributed cloud processing module for processing and identifying the scene-aware data to generate at least one scene information database corresponding to the scene-aware data;
the man-machine interaction module is used for acquiring an action target output by a user based on the scene perception data;
the parallel simulation module is used for simulating according to the scene information database and the action target to generate simulation deduction data;
the situation prediction module is used for inputting the scene information database and the action targets into the trained prediction model to generate situation prediction information;
and the task output module is used for generating an action task according to the deduction data and the prediction information.
In this embodiment, the distributed cloud processing module stores the scene perception data in a distributed manner, the parallel simulation module builds a large database and performs deduction on it, and the situation prediction module generates situation prediction information in a trained simulation environment. Action tasks closely tied to the scene perception data are thereby generated, real-time acquisition of scene perception data and real-time perception and prediction of the global scene situation are realized, and the approach has broad application prospects.
In an alternative embodiment, the scene-aware data includes: status data and environmental data acquired with a number of sensors.
In a specific example, in a scene oriented to an unmanned action unit cluster, the equipment and action tracks of both the red and blue sides are sensed by a plurality of sensing and detection devices such as radars, detection aircraft and unmanned aerial vehicles, so as to obtain state data such as the azimuth angles, speeds, coordinates and distances of both sides; environmental data such as weather, wind force and temperature are sensed by environmental detection devices such as temperature, pressure and speed sensors, so that real-time acquisition of scene front-end data and real-time analysis and monitoring of the scene situation are realized. The environment data and the state data are transmitted to the distributed cloud processing module via communication links such as satellites, base stations or routing nodes. In one specific example, as shown in FIG. 3, the perception data also include historical data related to previous actions, base data, superior command information, and the state data and environment data acquired with the sensors.
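The application does not prescribe a concrete data format for this perception data; the following minimal Python sketch merely illustrates one possible way to represent the state and environment records named in the example above, with all field names and values chosen purely for illustration.

```python
# Illustrative only: the application does not define concrete data structures.
# These records mirror the state data (azimuth, speed, coordinates, distance)
# and environment data (weather, wind, temperature) mentioned in the example.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class StateRecord:
    side: str                      # "red" or "blue"
    unit_id: str
    azimuth_deg: float
    speed_mps: float
    position: Tuple[float, float]  # scene coordinates
    distance_m: float
    timestamp: float

@dataclass
class EnvironmentRecord:
    weather: str
    wind_mps: float
    temperature_c: float
    timestamp: float

@dataclass
class PerceptionFrame:
    states: List[StateRecord]
    environment: EnvironmentRecord

# One perception frame as it might be forwarded to the distributed cloud processing module.
frame = PerceptionFrame(
    states=[StateRecord("blue", "uav-01", 132.5, 48.0, (1200.0, 830.0), 5400.0, 0.0)],
    environment=EnvironmentRecord("clear", 3.2, 18.5, 0.0),
)
print(frame)
```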
In an alternative embodiment, the distributed cloud processing module includes:
the filtering processing unit is used for carrying out filtering processing on scene perception data, wherein the scene perception data comprises list perception data and image perception data;
the sliding window processing unit is used for cutting, classifying and connecting the filtered list perception data;
and the image processing unit is used for identifying, matching and labeling the filtered image scene perception data.
In this embodiment, in order to obtain accurate scene perception data, the acquired scene perception data output as waveform signals are first filtered to improve their signal-to-noise ratio. Sliding-window processing is then applied to the list-type scene perception data, realizing their cutting, classification and connection, while the picture data among the acquired scene perception data are identified, realizing identification, matching and labeling of the image scene perception data. The result is a scene information database of multiple types of scene information that is both accurate and broad in coverage.
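As a minimal sketch of the filtering and sliding-window stages described above, the snippet below applies a simple moving-average filter and fixed-size overlapping windows; the filter type, window size and step are assumptions made for illustration and are not specified by the application.

```python
# A minimal sketch of the filtering and sliding-window stages; the moving-average
# filter and the fixed window size/step are illustrative assumptions.
from typing import List

def moving_average_filter(signal: List[float], k: int = 3) -> List[float]:
    """Smooth a sampled waveform to improve its signal-to-noise ratio."""
    half = k // 2
    smoothed = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        smoothed.append(sum(window) / len(window))
    return smoothed

def sliding_windows(samples: List[float], size: int, step: int) -> List[List[float]]:
    """Cut the filtered list data into overlapping segments for later classification."""
    return [samples[i:i + size] for i in range(0, len(samples) - size + 1, step)]

raw = [0.1, 0.9, 0.2, 1.1, 0.3, 1.2, 0.4, 1.3]
filtered = moving_average_filter(raw)
segments = sliding_windows(filtered, size=4, step=2)
print(filtered)
print(segments)
```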
In an alternative embodiment, the scene information database includes: the system comprises a blue party action situation database, a red party action situation database and a scene environment database, wherein the scene environment database compensates the blue party action situation database and the red party action situation database.
In a specific example, after the acquired scene perception data are processed and classified by the distributed processing module, clustered blue-side action situation, red-side action situation and scene environment databases are formed. Further, the scene environment database is used to compensate and fuse the adversary red-side action situation database, so that the influence of environmental data on the red side is fully considered, the scene information database becomes more accurate, and real-time acquisition of scene information together with accurate generation of the global situation is achieved.
In a specific example, as shown in fig. 3, the distributed cloud processing module is a distributed cloud platform that can rapidly store and process various data from the scene front end, enabling remote deployment of the command device and rapid information transmission.
While the perception information input module transmits the scene perception data to the distributed cloud processing module, it also transmits the data to the man-machine interaction module; a commander reviews the scene perception data output by the man-machine interaction module and issues an action target according to the current scene situation. In one specific example, as shown in fig. 3, the man-machine interaction module is a human-machine interaction platform.
The parallel simulation module performs simulation deduction according to the scene information database output by the distributed cloud processing module and the action target forwarded by the man-machine interaction module, generating simulation deduction data that take the action target as the outcome and the scene information data as the conditions. The simulation deduction data form a large collection of simulation results, from which the feasibility of achieving the action target under the current situation can be determined.
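One way to read "determining the feasibility of the action target from a large collection of simulation results" is as a Monte Carlo estimate over repeated rollouts. The sketch below is a hypothetical illustration under that reading; the success model (a noisy speed against a time budget) and every parameter value are invented for the example and are not defined by the application.

```python
# Hypothetical illustration: estimate the feasibility of an action target from many
# simulated rollouts. The success model and parameter values are assumptions.
import random

def run_one_rollout(own_speed: float, target_distance: float, time_budget: float) -> bool:
    """One simulated excursion: does the unit reach the target within the time budget?"""
    disturbance = random.gauss(1.0, 0.15)     # random perturbation of effective speed
    travel_time = target_distance / max(own_speed * disturbance, 1e-6)
    return travel_time <= time_budget

def simulate_deduction(n_runs: int = 1000) -> float:
    """Return the fraction of rollouts in which the action target is achieved."""
    random.seed(42)
    successes = sum(
        run_one_rollout(own_speed=45.0, target_distance=5000.0, time_budget=130.0)
        for _ in range(n_runs)
    )
    return successes / n_runs

print(f"estimated feasibility of the action target: {simulate_deduction():.2f}")
```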
Meanwhile, the situation prediction module inputs the scene information database and the action target into a trained prediction model to generate situation prediction information. In an alternative embodiment, the prediction model is generated by training with machine learning, where the machine learning includes one or more of reinforcement learning, deep learning, and deep reinforcement learning.
In a specific example, during the training of the prediction model, a historical scene information database provides the training data basis for large-scale machine-learning training, testing and evaluation, and artificial intelligence algorithms (deep learning, reinforcement learning, and the like) are used to realize analyses such as maneuver type identification, flight trajectory prediction, threat level classification and agent action planning, so that situation prediction information of higher precision is obtained.
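As a toy stand-in for the deep learning and reinforcement learning models named above, the sketch below trains a two-feature logistic regression for binary threat-level classification on synthetic "historical" records; the features (closing speed, distance), the labels and the data are all invented for illustration and do not represent the application's models or data.

```python
# Toy sketch of the prediction-model training step: a two-feature logistic regression
# for threat-level classification trained on synthetic historical records.
import math
import random

def train_threat_classifier(data, epochs=500, lr=0.1):
    """data: list of ((closing_speed, distance), threat_label) pairs, features scaled to [0, 1]."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            z = w[0] * x1 + w[1] * x2 + b
            p = 1.0 / (1.0 + math.exp(-z))
            grad = p - y                      # gradient of the log-loss with respect to z
            w[0] -= lr * grad * x1
            w[1] -= lr * grad * x2
            b -= lr * grad
    return w, b

def predict_threat(w, b, x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic "historical" records: slow/far tracks labelled 0, fast/close tracks labelled 1.
random.seed(0)
history = [((random.uniform(0.0, 0.5), random.uniform(0.5, 1.0)), 0) for _ in range(50)]
history += [((random.uniform(0.5, 1.0), random.uniform(0.0, 0.5)), 1) for _ in range(50)]

w, b = train_threat_classifier(history)
print("threat probability for a fast, close track:", round(predict_threat(w, b, (0.9, 0.1)), 2))
```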
In an alternative embodiment, as shown in fig. 1, the parallel simulation module is further configured to output simulation deduction data to the situation prediction module;
the situation prediction module is further used for expanding the scene information database according to the simulation deduction data and inputting the expanded scene information database and the action target into a trained prediction model to generate situation prediction information.
In this embodiment, the parallel simulation module performs intelligent parallel simulation on the scene perception data and the action target in multiple digital twin spaces, obtains a large amount of simulated scene perception data through parallel simulation, and uses these simulated data to expand the scene information database, so that the situation prediction information the prediction model produces from the scene information database becomes more accurate.
In this embodiment, the scene information database of the situation prediction model is associated with the simulation deduction data of the parallel simulation module and is further expanded with the large amount of simulation deduction data that the parallel simulation module obtains. This improves the robustness of the machine learning and the accuracy of the situation prediction information; at the same time, the situation prediction information can correct the scene situation development deduced by the parallel simulation module, ensuring the reliability of the parallel deduction.
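A minimal sketch of this database expansion, under the assumption that records are simple (features, label) pairs, is given below; the cap on the simulated share is an illustrative choice, not something specified by the application.

```python
# Minimal sketch: append simulation deduction records to the historical records
# before (re)training the prediction model. The record format and the cap on the
# simulated share are assumptions for illustration.
import random

def expand_scene_database(historical, simulated, max_ratio=2.0):
    """Merge simulated records with historical ones, capping the simulated share."""
    cap = int(len(historical) * max_ratio)
    return historical + simulated[:cap]

random.seed(1)
historical = [((random.random(), random.random()), random.randint(0, 1)) for _ in range(100)]
simulated = [((random.random(), random.random()), random.randint(0, 1)) for _ in range(500)]
expanded = expand_scene_database(historical, simulated)
print(len(expanded))  # 100 historical records plus up to 200 simulated records
```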
In an alternative embodiment, the parallel simulation module is further configured to generate multiple simulation models using a twin service technique and a simulation service technique, and generate simulation deduction data according to the scene information database and the action target.
The parallel simulation module of this embodiment uses a twin service technology to generate multiple twin spaces and performs multiple simulation deductions in those spaces according to the scene information database and the action target, obtaining a large amount of scene simulation deduction data through parallel simulation. On the one hand, the simulation deduction data yield a variety of action schemes for the action target, raising the likelihood that the action target succeeds; on the other hand, the simulation deduction data expand the training database of the artificial intelligence model and improve the accuracy of scene situation prediction.
The parallel simulation module outputs the deduced simulation deduction data, together with the situation prediction information generated by the situation prediction module, to the task output module, and the task output module generates the multi-element action task for the commanded action units according to the situation prediction information of the scene and the parallel deduction results of the various action schemes. In an alternative embodiment, the action tasks include: action planning, force distribution grouping and unit node allocation.
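The sketch below shows one plausible, simplified reading of this task output step: rank the candidate action schemes produced by the parallel deduction by their predicted success, then emit an action task containing a plan, a force grouping and a node assignment. The scheme fields, the scoring rule and the output structure are assumptions, not structures defined by the application.

```python
# Simplified illustration of the task output step; all fields and the scoring rule
# are assumptions made for the example.
def build_action_task(schemes, situation_confidence):
    """schemes: list of dicts with 'plan', 'units' and 'predicted_success' in [0, 1]."""
    best = max(schemes, key=lambda s: s["predicted_success"] * situation_confidence)
    return {
        "action_plan": best["plan"],
        "force_grouping": {"group-1": list(best["units"])},
        "node_assignment": {unit: f"node-{i}" for i, unit in enumerate(best["units"])},
    }

candidate_schemes = [
    {"plan": "flank-left", "units": ["uav-01", "uav-02"], "predicted_success": 0.62},
    {"plan": "direct-approach", "units": ["uav-03"], "predicted_success": 0.48},
]
print(build_action_task(candidate_schemes, situation_confidence=0.9))
```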
According to this embodiment, the distributed cloud processing module stores the scene perception data in a distributed manner, the parallel simulation module builds a large database and performs deduction on it, and the situation prediction module generates situation prediction information in a trained simulation environment; action tasks closely tied to the scene perception data are thereby generated, real-time acquisition of scene perception data and real-time perception and prediction of the overall scene situation are realized, and the approach has broad application prospects.
Therefore, the distributed command device of this embodiment improves the reliability and scalability of the command device in coping with complex scenes and increases the responsiveness of the command system. Moreover, combining the parallel simulation module with situation prediction strengthens the command control device's ability to perceive and predict the scene situation, and raises the autonomous controllability of the command device and the level of intelligent action in new scenes.
Corresponding to the aforementioned command device, as shown in fig. 2, another embodiment of the present application further provides a method of command and control using the command control device described above, including:
acquiring multiple types of scene perception data;
processing and identifying the scene-aware data to generate at least one scene information database corresponding to the scene-aware data;
acquiring an action target output by a user based on the scene perception data;
simulating according to the scene information database and the action target to generate simulation deduction data;
inputting the scene information database and the action targets into a trained prediction model to generate situation prediction information;
and generating an action task according to the deduction data and the prediction information.
In the method of command and control using the command control device described above, scene perception data are stored in a distributed manner by the distributed cloud processing module, a large database is formed and deduced by the parallel simulation module, and situation prediction information is generated by the situation prediction module in a trained simulation environment. Action tasks closely related to the scene perception data are thereby generated, real-time acquisition of the scene perception data and real-time perception and prediction of the overall scene situation are realized, and the method has broad application prospects.
Since the command control method provided by this embodiment corresponds to the command control device provided by the above embodiments, the implementations described earlier also apply to the command control method of this embodiment and are not repeated here.
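Purely as an illustration of how the six method steps above chain together, the skeleton below wires placeholder callables for each step; every function name and return value is a stand-in chosen for the example, not an interface defined by the application.

```python
# Skeleton of the six-step method under assumed interfaces; all callables here are
# trivial placeholders so the chain runs end to end.
def command_control_pipeline(acquire, process, get_target, simulate, predict, emit_task):
    perception = acquire()                     # step 1: multi-class scene perception data
    scene_db = process(perception)             # step 2: scene information database(s)
    target = get_target(perception)            # step 3: action target from the user
    deduction = simulate(scene_db, target)     # step 4: simulation deduction data
    prediction = predict(scene_db, target)     # step 5: situation prediction information
    return emit_task(deduction, prediction)    # step 6: action task

task = command_control_pipeline(
    acquire=lambda: {"states": [], "environment": {}},
    process=lambda p: {"red": [], "blue": [], "environment": p["environment"]},
    get_target=lambda p: "secure-area-A",
    simulate=lambda db, t: {"feasibility": 0.7},
    predict=lambda db, t: {"threat_level": "low"},
    emit_task=lambda d, p: {"plan": "advance", "feasibility": d["feasibility"]},
)
print(task)
```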
Corresponding to the aforementioned command device, as shown in fig. 3, another embodiment of the present application further provides a command control system, including:
at least one command control device as described above;
and at least two action units which are communicated with each other, wherein the action units acquire action tasks of the command control device through a communication network and execute the action tasks.
The command control devices provided by the application can be deployed at the scene front end. Each command control device is an intelligent node; the nodes are peers, mutually independent, and can be grouped and networked, which improves the survivability of the command control system in the scene. Meanwhile, this modular command control mode breaks with the traditional centralized mode of relaying commands level by level and enables efficient, rapid decision-making in the command process.
In this embodiment, each action unit maintains communication within the coverage area of the mobile base station; multiple types of action units can be rapidly allocated according to the action tasks and, through allocation, cooperation and combination along the observe-orient-decide-act (OODA) loop, form a complete dynamic action formation for executing the action tasks. In one specific example, the action units include: unmanned action units, manned action units, and hybrid action units. The action units of the embodiment of the application can allocate force resources in a self-maintaining and self-managing manner, realize autonomous cooperation and dynamic formation, form a large-scale dynamic kill web, and greatly improve the efficiency of precise strikes and dynamic decision-making.
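As a very small illustration of dynamic formation, the sketch below greedily assigns the nearest available action unit to each sub-task of an action task, a grouping that could be recomputed on every OODA cycle; the greedy distance-based rule and the unit/task coordinates are assumptions chosen for the example.

```python
# Illustrative sketch of dynamic formation: greedily assign the nearest available
# action unit to each sub-task. The cost model (straight-line distance) is an assumption.
import math

def form_grouping(units, subtasks):
    """units: {unit_id: (x, y)}; subtasks: {task_id: (x, y)}. Returns task_id -> unit_id."""
    free = dict(units)
    assignment = {}
    for task_id, task_pos in subtasks.items():
        if not free:
            break
        nearest = min(free, key=lambda u: math.dist(free[u], task_pos))
        assignment[task_id] = nearest
        free.pop(nearest)
    return assignment

units = {"uav-01": (0.0, 0.0), "ugv-02": (5.0, 1.0), "uav-03": (2.0, 8.0)}
subtasks = {"recon-north": (2.5, 9.0), "strike-east": (6.0, 0.5)}
print(form_grouping(units, subtasks))
```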
In a specific example, as shown in fig. 4, a specific implementation manner of the communication network system proposed in this embodiment includes:
satellite network, communication device and wireless base station.
The communication device includes: a data gateway, a central router, an Ethernet switch, a distributed server and an authorization server. The communication device is connected to multiple wireless base stations through the satellite network and is simultaneously connected to the wired switch, realizing communication among the commander terminal, the task output module, the distributed cloud processing module and each action unit.
On this basis, the communication system can eliminate bottlenecks among the communication units, obtain flexible configuration, efficient scheduling and top-level management of computing resources, better realize the interconnection and intercommunication of scene command control systems, and provide technical support for the integration of information and firepower in weapon systems.
Another embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements: acquiring multiple types of scene perception data; processing and identifying the scene-aware data to generate at least one scene information database corresponding to the scene-aware data; acquiring an action target output by a user based on the scene perception data; simulating according to the scene information database and the action target to generate simulation deduction data; inputting the scene information database and the action targets into a trained prediction model to generate situation prediction information; and generating an action task according to the deduction data and the prediction information.
In practical applications, the computer-readable storage medium may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
As shown in fig. 5, another embodiment of the present application provides a schematic structural diagram of a computer device. The computer device 12 shown in fig. 5 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
As shown in FIG. 5, the computer device 12 is in the form of a general purpose computing device. Components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard disk drive"). Although not shown in fig. 5, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
The computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the computer device 12, and/or any devices (e.g., network card, modem, etc.) that enable the computer device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Moreover, computer device 12 may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through network adapter 20. As shown in fig. 5, the network adapter 20 communicates with other modules of the computer device 12 via the bus 18. It should be appreciated that although not shown in fig. 5, other hardware and/or software modules may be used in connection with computer device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processor unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, implementing a command control method provided by an embodiment of the present application.
It should be understood that the foregoing examples of the present application are provided merely for clearly illustrating the present application and are not intended to limit the embodiments of the present application, and that various other changes and modifications may be made therein by one skilled in the art without departing from the spirit and scope of the present application as defined by the appended claims.

Claims (7)

1. A command control device, comprising:
the sensing information input module is used for acquiring multiple types of scene sensing data, wherein the scene sensing data comprises state data and environment data acquired by a plurality of sensors;
a distributed cloud processing module for processing and identifying the scene-aware data to generate at least one scene information database corresponding to the scene-aware data, the scene information database comprising: the system comprises a blue party action situation database, a red party action situation database and a scene environment database, wherein the scene environment database compensates the blue party action situation database and the red party action situation database;
the man-machine interaction module is used for acquiring an action target output by a user based on the scene perception data;
the parallel simulation module is used for simulating according to the scene information database and the action target to generate simulation deduction data;
the situation prediction module is used for inputting the scene information database and the action targets into the trained prediction model to generate situation prediction information;
the task output module is used for generating an action task according to the deduction data and the prediction information, and the action task comprises the following steps: action planning, force distribution grouping and unit node allocation;
wherein, the distributed cloud processing module includes:
the filtering processing unit is used for carrying out filtering processing on scene perception data, wherein the scene perception data comprises list perception data and image perception data;
the sliding window processing unit is used for cutting, classifying and connecting the filtered list perception data;
the image processing unit is used for identifying, matching and labeling the filtered image scene perception data;
the parallel simulation module is further used for generating a multiple simulation model by utilizing a twin service technology and a simulation service technology, and generating simulation deduction data according to the scene information database and the action target.
2. The apparatus of claim 1, wherein:
the parallel simulation module is also used for outputting simulation deduction data to the situation prediction module;
the situation prediction module is also used for expanding the scene information database according to the simulation deduction data and inputting the expanded scene information database and the action target into a trained prediction model to generate situation prediction information.
3. The apparatus of claim 2, wherein:
the predictive model is generated for training using machine learning, including one or more of reinforcement learning, deep learning, and deep reinforcement learning.
4. A method of controlling with a command control apparatus according to any one of claims 1 to 3, comprising:
acquiring multiple types of scene perception data;
processing and identifying the scene-aware data to generate at least one scene information database corresponding to the scene-aware data;
acquiring an action target output by a user based on the scene perception data;
simulating according to the scene information database and the action target to generate simulation deduction data;
inputting the scene information database and the action targets into a trained prediction model to generate situation prediction information;
and generating an action task according to the deduction data and the prediction information.
5. A command control system, comprising:
at least one command and control device according to any one of claims 1-3;
and at least two action units which are communicated with each other, wherein the action units acquire action tasks of the command control device through a communication network and execute the action tasks.
6. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method as claimed in claim 4.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method as claimed in claim 4 when executing the program.
CN202110123302.8A 2021-01-29 2021-01-29 Command control device, method, system, computer equipment and medium Active CN112766595B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110123302.8A CN112766595B (en) 2021-01-29 2021-01-29 Command control device, method, system, computer equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110123302.8A CN112766595B (en) 2021-01-29 2021-01-29 Command control device, method, system, computer equipment and medium

Publications (2)

Publication Number Publication Date
CN112766595A CN112766595A (en) 2021-05-07
CN112766595B (en) 2023-09-29

Family

ID=75706589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110123302.8A Active CN112766595B (en) 2021-01-29 2021-01-29 Command control device, method, system, computer equipment and medium

Country Status (1)

Country Link
CN (1) CN112766595B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113438303B (en) * 2021-06-23 2023-07-25 南京孩乐康智能科技有限公司 Remote auxiliary work system and method, electronic equipment and storage medium
CN114201042B (en) * 2021-11-09 2023-09-15 北京电子工程总体研究所 Distributed comprehensive integrated seminar device, system, construction method and interaction method
CN114819458A (en) * 2021-12-31 2022-07-29 第四范式(北京)技术有限公司 Simulation model construction method and simulation model construction device


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8019712B2 (en) * 2008-01-30 2011-09-13 The Boeing Company Intelligent threat assessment module, method and system for space situational awareness system
US9952591B2 (en) * 2015-11-24 2018-04-24 Northrop Grumman Systems Corporation Spatial-temporal forecasting for predictive situational awareness

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598066A (en) * 2018-12-05 2019-04-09 百度在线网络技术(北京)有限公司 Effect evaluation method, device, equipment and the storage medium of prediction module
CN110288160A (en) * 2019-06-27 2019-09-27 北京华如科技股份有限公司 A kind of situation dynamic prediction method based on parallel simulation
CN111429583A (en) * 2020-03-23 2020-07-17 北京智汇云舟科技有限公司 Space-time situation perception method and system based on three-dimensional geographic information

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Parallel simulation and deduction method driven by real-time situation data; Zhou Fang; Mao Shaojie; Wu Yunchao; Li Yuping; Journal of China Academy of Electronics and Information Technology (Issue 04); pp. 33-38 *
Research on situation awareness of data leakage in command information systems; Li Qiang et al.; Proceedings of the 7th China Command and Control Conference; pp. 585-589 *
The form of intelligent warfare and its supporting technology system; Cai Mingchun et al.; National Defense Technology; Vol. 38, No. 1; pp. 94-98 *

Also Published As

Publication number Publication date
CN112766595A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN112766595B (en) Command control device, method, system, computer equipment and medium
Chen et al. An edge traffic flow detection scheme based on deep learning in an intelligent transportation system
CN110363058B (en) Three-dimensional object localization for obstacle avoidance using one-shot convolutional neural networks
US10140719B2 (en) System and method for enhancing target tracking via detector and tracker fusion for unmanned aerial vehicles
CN109145680A (en) A kind of method, apparatus, equipment and computer storage medium obtaining obstacle information
WO2020147500A1 (en) Ultrasonic array-based obstacle detection result processing method and system
Wei et al. Survey of connected automated vehicle perception mode: from autonomy to interaction
CN112650300B (en) Unmanned aerial vehicle obstacle avoidance method and device
US11776275B2 (en) Systems and methods for 3D spatial tracking
CN109656319B (en) Method and equipment for presenting ground action auxiliary information
Hentati et al. Mobile target tracking mechanisms using unmanned aerial vehicle: Investigations and future directions
Chung et al. Toward robotic sensor webs: Algorithms, systems, and experiments
CN117851570A (en) Star crowd intelligent cooperative control method and system based on large language model
Jolfaei et al. Guest editorial introduction to the special issue on deep learning models for safe and secure intelligent transportation systems
WO2023051398A1 (en) Security compensation method and apparatus, and storage medium and electronic device
Kulbacki et al. Intelligent video monitoring system with the functionality of online recognition of people’s behavior and interactions between people
CN113031600B (en) Track generation method and device, storage medium and electronic equipment
Shi et al. Integrated approach to AUV docking based on nonlinear offset-free model predictive control
Schelle et al. Modelling visual communication with UAS
CN112748746B (en) Method, device, equipment and storage medium for acquiring flight information of aircraft
CN116540568B (en) Large-scale distributed unmanned aerial vehicle cluster simulation system
CN117973820B (en) Task dynamic allocation system and method based on artificial intelligence
CN108199868A (en) A kind of group system distributed control method based on tactics cloud
US20230109494A1 (en) Methods and devices for building a training dataset
Fuhrmann Using Scene-Aware Voice Dialogs in Human-Drone Interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant