CN116167274A - Simulation combat attack and defense training method, related device and storage medium - Google Patents


Info

Publication number: CN116167274A
Application number: CN202211529886.XA
Authority: CN (China)
Prior art keywords: party, virtual object, combat, target, monitoring range
Legal status: Pending
Other languages: Chinese (zh)
Inventor: name withheld at the applicant's request
Current Assignee: Beijing Real AI Technology Co Ltd
Original Assignee: Beijing Real AI Technology Co Ltd
Application filed by Beijing Real AI Technology Co Ltd
Publication of CN116167274A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/20: Design optimisation, verification or simulation
    • G06F 30/27: Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0486: Drag-and-drop
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45504: Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V 10/225: Image preprocessing by selection of a specific region containing or referencing a pattern, based on a marking or identifier characterising the area
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the present application disclose a simulated combat attack and defense training method, a related device, and a storage medium. The method is applied to a simulated combat attack and defense training system comprising a simulated combat platform, a first party combat system, and a second party combat system. The simulated combat platform includes at least one first party virtual object and at least one second party virtual object, and the first party combat system comprises a first party detection model and a first party virtual controller. The method comprises the following steps: acquiring first scanning data within a first monitoring range, where the first scanning data comprise data of at least one second party target virtual object that has entered the first monitoring range, and the second party target virtual object is a second party virtual object to which a countermeasure pattern (i.e., an adversarial pattern) has been added; and inputting the first scanning data into the first party detection model to obtain a first detection result. This scheme improves the attack effect of the countermeasure pattern on the model and shortens the iteration cycle of the model.

Description

Simulation combat attack and defense training method, related device and storage medium
The present application claims priority to Chinese patent application No. 2022112057950, entitled "Simulation fight platform construction method, apparatus, computer device, and storage medium," filed with the Chinese Patent Office on September 30, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of artificial intelligence, and in particular to a simulated combat attack and defense training method, a related device, and a storage medium.
Background
With the continuous development of artificial intelligence technology, battlefield operations are evolving toward intelligent forms, and intelligent unmanned combat equipment is becoming a leading element of military activity. In military scenarios, artificial intelligence technology can be applied to automatic reconnaissance by unmanned aerial vehicles, automatic confirmation of attack targets, intelligent unmanned autonomous combat equipment, bionic robots, and the like.
However, an attacker can exploit vulnerabilities of artificial intelligence (AI) image detection or recognition algorithm models: after a specific countermeasure pattern is added to combat equipment, the detection models that the other party's combat equipment, such as unmanned aerial vehicles and intelligent devices, relies on can be attacked so that they can no longer correctly recognize video or image content and therefore cannot work normally.
To verify the robustness of a detection model in attack and defense training, the countermeasure pattern currently has to be printed and pasted onto the surface of the combat equipment; image information of the equipment carrying the pattern is then captured by a camera in a real scene and input into the detection model for robustness verification. However, a real battlefield environment is very complex, and because of limitations of the site, the training environment, and so on, the physical environment available for training differs greatly from a real physical combat environment. As a result, when attack and defense training is carried out in such a physical environment, the attack effect of the countermeasure pattern on the model is poor. Moreover, because the prior art requires the countermeasure pattern to be printed as a physical object, the time cost of attacking the model is high and the iteration cycle of the model is long.
Disclosure of Invention
The embodiments of the present application provide a simulated combat attack and defense training method, a related device, and a storage medium, with which simulated attack and defense training can be performed on a simulated combat platform, improving the attack effect of the countermeasure pattern on the model and shortening the iteration cycle of the model.
In a first aspect, an embodiment of the present application provides a simulated combat attack and defense training method.
The method is applied to a simulated combat attack and defense training system. The system comprises a simulated combat platform, a first party combat system, and a second party combat system; the simulated combat platform includes at least one first party virtual object and at least one second party virtual object; the first party combat system comprises a first party detection model and a first party virtual controller; and image and point cloud data within a first monitoring range corresponding to the first party virtual object on the simulated combat platform are transmitted to the first party detection model through an interface. The method comprises the following steps:
acquiring first scanning data within the first monitoring range, where the first scanning data comprise data of at least one second party target virtual object that has entered the first monitoring range, and the second party target virtual object is a second party virtual object to which the countermeasure pattern has been added; and
inputting the first scanning data into the first party detection model to obtain a first detection result, the first detection result being used to control the first party virtual controller to generate at least one combat instruction for the first party virtual object.
In some embodiments, the simulated combat platform is pre-configured with a plurality of weather parameters, the method further comprising:
receiving a weather parameter selection instruction directed at a target weather parameter, where the target weather parameter is one of the plurality of weather parameters; and
in response to the weather parameter selection instruction, setting the weather parameter of the simulated combat platform's display interface to the target weather parameter.
In some embodiments, the display interface of the simulated combat platform includes a first area and a second area, where the first area displays a function icon for each preset candidate countermeasure pattern and the second area displays the current picture of the simulated combat platform. In this case, determining at least one target countermeasure pattern to be added from the preset candidate countermeasure patterns and importing the at least one target countermeasure pattern into at least one first party target virtual object comprises:
receiving a user's first operation instruction directed at a first function icon in the first area;
in response to the first operation instruction, selecting the countermeasure pattern corresponding to the first function icon from the plurality of candidate countermeasure patterns as the target countermeasure pattern; and
displaying the target countermeasure pattern on the first party target virtual object.
In some embodiments, determining at least one target countermeasure pattern to be added from the preset candidate countermeasure patterns and importing the at least one target countermeasure pattern into at least one first party target virtual object comprises:
determining at least one target countermeasure pattern from the plurality of candidate countermeasure patterns according to a preset countermeasure script; and
importing the at least one target countermeasure pattern onto the at least one first party target virtual object.
In a second aspect, an embodiment of the present application further provides a simulated combat attack and defense training device deployed in a simulated combat attack and defense training system. The system includes a simulated combat platform, a first party combat system, and a second party combat system; the simulated combat platform includes at least one first party virtual object and at least one second party virtual object; the first party combat system comprises a first party detection model and a first party virtual controller; and image and point cloud data within a first monitoring range corresponding to the first party virtual object on the simulated combat platform are transmitted to the first party detection model through an interface. The device includes:
a transceiver module, configured to acquire the first scanning data within the first monitoring range, where the first scanning data comprise data of at least one second party target virtual object that has entered the first monitoring range, and the second party target virtual object is a second party virtual object to which the countermeasure pattern has been added; and
a processing module, configured to input the first scanning data into the first party detection model to obtain a first detection result, the first detection result being used to control the first party virtual controller to generate at least one combat instruction for the first party virtual object.
In some embodiments, after performing the step of inputting the first scanning data into the first party detection model to obtain a first detection result, the processing module is further configured to:
if the first detection result is that the second party virtual object was scanned, determine that the first party detection model's defense succeeded; and
if the first detection result is that the second party virtual object was not scanned, determine that the first party detection model's defense failed.
In some embodiments, the simulated combat platform is further provided with a preset place, the first party virtual objects include unmanned aerial vehicle (UAV) combat equipment, and the processing module is further configured to:
when the second party virtual object is detected entering the first party's preset range, display the UAV combat equipment within the second monitoring range of the second party virtual object and control the UAV combat equipment to move to the preset place, so as to lure the second party virtual object into following the UAV combat equipment to the preset place.
In some embodiments, the processing module is further configured to:
determining at least one target countermeasure pattern to be added currently from a plurality of preset candidate countermeasure patterns;
at least one target countermeasure pattern is imported into at least one first party target virtual object, wherein the first party target virtual object is the first party virtual object in a second monitoring range, and the second monitoring range is a monitoring range corresponding to the second party virtual object.
In some embodiments, the processing module, after performing the step of importing at least one of the target countermeasure patterns into at least one first party target virtual object, is further configured to:
acquiring the current environmental characteristics of the first party target virtual object through the transceiver module;
the target countermeasure pattern on the first party target virtual object is adjusted according to the environmental characteristic.
In some embodiments, the second party virtual objects include ground virtual objects and aerial virtual objects; after performing the step of importing at least one target countermeasure pattern into at least one first party target virtual object, the processing module is further configured to:
if a ground virtual object is detected while the first party target virtual object is advancing, perform local camouflage adjustment of the target countermeasure pattern on the first party target virtual object according to the azimuth of the ground virtual object relative to the first party target virtual object; and
if an aerial virtual object is detected while the first party target virtual object is advancing, perform full-vehicle camouflage adjustment of the target countermeasure pattern on the first party target virtual object.
In some embodiments, the second party combat system comprises a second party detection model and a second party virtual controller, and image and point cloud data within the second monitoring range on the simulated combat platform are transmitted to the second party detection model through an interface. After performing the step of importing at least one target countermeasure pattern into at least one first party target virtual object, the processing module is further configured to:
Acquiring second scanning data in the second monitoring range through the transceiver module;
and inputting the second scanning data into the second party detection model to obtain a second detection result, wherein the second detection result is used for controlling the second party virtual controller to generate at least one combat instruction of the second party virtual object.
In some embodiments, the simulated combat platform is preset with a plurality of weather parameters, and the transceiver module is further configured to receive a weather parameter selection instruction for a target weather parameter, where the target weather parameter is one of the plurality of weather parameters;
and the processing module is also used for responding to the weather parameter selection instruction and setting the weather parameter of the simulated combat platform display interface as the target weather parameter.
In some embodiments, the display interface of the simulated combat platform includes a first area and a second area, where the first area displays a function icon for each preset candidate countermeasure pattern and the second area displays the current picture of the simulated combat platform. When performing the steps of determining at least one target countermeasure pattern to be added from the preset candidate countermeasure patterns and importing the at least one target countermeasure pattern into at least one first party target virtual object, the processing module is specifically configured to:
receive, through the transceiver module, a user's first operation instruction directed at a first function icon in the first area;
in response to the first operation instruction, select the countermeasure pattern corresponding to the first function icon from the plurality of candidate countermeasure patterns as the target countermeasure pattern; and
display the target countermeasure pattern on the first party target virtual object.
In some embodiments, when performing the steps of determining at least one target countermeasure pattern to be added from the preset candidate countermeasure patterns and importing the at least one target countermeasure pattern into at least one first party target virtual object, the processing module is specifically configured to:
determine at least one target countermeasure pattern from the plurality of candidate countermeasure patterns according to a preset countermeasure script; and
import the at least one target countermeasure pattern onto the at least one first party target virtual object.
In a third aspect, embodiments of the present application further provide a computer device, including a memory and a processor, where the memory stores a computer program and the processor implements the above method when executing the computer program.
In a fourth aspect, embodiments of the present application also provide a computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, implement the above-described method.
Compared with the prior art, the embodiments of the present application provide a simulated combat attack and defense training system. On the one hand, attack and defense training can be performed inside this system: physical-world attack and defense training is simulated through simulation scenes, and the training environment provided by the simulated combat platform is close to the real combat environment, so the attack effect of the countermeasure pattern on the model is improved. On the other hand, the scheme does not require printing the countermeasure pattern and manually pasting it onto combat equipment; instead, countermeasure samples can be generated continuously during simulated attack and defense training. The scheme can therefore attack the detection model continuously during training, probing it more extensively, faster, and more comprehensively in a short time, which improves the attack effect and shortens the iteration cycle of the model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
Fig. 1 is an application scenario schematic diagram of a simulated combat attack and defense training method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of another application scenario of the simulated combat attack and defense training method provided in the embodiment of the present application;
fig. 3 is a flow chart of a simulated combat attack and defense training method according to an embodiment of the present application;
fig. 4 is a schematic diagram of another application scenario of the simulated combat attack and defense training method provided in the embodiment of the present application;
fig. 5 is a schematic diagram of another application scenario of the simulated combat attack and defense training method provided in the embodiment of the present application;
FIG. 6 is a schematic flow chart of a simulated combat attack and defense training method according to another embodiment of the present application;
fig. 7 is a schematic diagram of another application scenario of the simulated combat attack and defense training method provided in the embodiment of the present application;
fig. 8 is a schematic diagram of another application scenario of the simulated combat attack and defense training method provided in the embodiment of the present application;
Fig. 9 is a schematic diagram of another application scenario of the simulated combat attack and defense training method provided in the embodiment of the present application;
fig. 10a is a schematic diagram of another application scenario of the simulated combat attack and defense training method according to the embodiment of the present application;
fig. 10b is a schematic diagram of another application scenario of the simulated combat attack and defense training method provided in the embodiment of the present application;
FIG. 11 is a schematic block diagram of a simulated combat attack and defense training device provided in an embodiment of the present application;
FIG. 12 is a schematic diagram of a server according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a terminal in an embodiment of the present application;
fig. 14 is a schematic structural diagram of a server in an embodiment of the present application.
Detailed Description
The terms "first," "second," and the like in the description, claims, and drawings of the embodiments are used to distinguish between similar objects and do not necessarily describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described. Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those expressly listed, but may include other steps or modules not expressly listed or inherent to it. The partitioning of modules in the embodiments of the application is only one kind of logical partitioning; in an actual implementation, several modules may be combined or integrated into another system, some features may be omitted or not implemented, and the couplings, direct couplings, or communication connections between modules shown or discussed may be indirect couplings or communication connections through interfaces, which may be electrical or take other forms; none of this limits the embodiments of the application. Modules or sub-modules described as separate components may or may not be physically separate, may or may not be physical modules, and may be distributed over a plurality of circuit modules; some or all of them may be selected according to actual needs to achieve the purposes of the embodiments of the present application.
The embodiments of the present application provide a simulated combat attack and defense training method, a related device, and a storage medium. The execution subject of the method may be the simulated combat attack and defense training device provided by the embodiments, a simulated combat attack and defense training system equipped with that device, or a computer device integrated with such a system, where the device or system may be implemented in hardware or software and the computer device may be a terminal or a server.
When the computer device is a server, the server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
When the computer device is a terminal, the terminal may include devices carrying multimedia data processing functions (e.g., video data playing and music data playing), such as smart phones, tablet computers, notebook computers, desktop computers, smart televisions, smart speakers, personal digital assistants (PDAs), smart watches, and the like, but is not limited thereto.
The scheme of the embodiments of the present application can be realized based on artificial intelligence technology, and particularly relates to the field of computer vision within artificial intelligence and to fields such as cloud computing, cloud storage, and databases within cloud technology, each of which is described below.
Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, giving machines the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, at both the hardware level and the software level. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly covers computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer vision (CV) is a science that studies how to make machines "see": it replaces human eyes with cameras and computers to perform machine vision tasks such as recognition, tracking, and measurement of targets, and further performs graphic processing so that the computer produces images more suitable for human eyes to observe or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theory and technology in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, model robustness detection, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as biometric techniques such as common model robustness detection and fingerprint recognition.
With the research and advancement of artificial intelligence technology, it is being studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, unmanned aerial vehicles, robots, smart medical care, and smart customer service. With the development of technology, artificial intelligence will be applied in ever more fields and deliver increasingly important value.
The solution of the embodiment of the present application may be implemented based on cloud technology, and in particular, relates to the technical fields of cloud computing, cloud storage, database, and the like in the cloud technology, and will be described below.
Cloud technology refers to a hosting technology that unifies hardware, software, network, and other resources in a wide area network or local area network to realize computation, storage, processing, and sharing of data. It is the general term for the network, information, integration, management-platform, and application technologies applied under the cloud computing business model; these resources can form a pool and be used flexibly and on demand. Cloud computing technology will become an important support. Background services of technical network systems, such as video websites, image websites, and other portals, require a large amount of computing and storage resources. With the rapid development and application of the internet industry, each article may well have its own identification mark in the future, which will need to be transmitted to a background system for logical processing; data of different levels will be processed separately, and all kinds of industry data need strong system backing, which can only be realized through cloud computing. In the embodiments of the present application, the recognition results may be stored via cloud technology.
Cloud storage (cloud storage) is a new concept that extends and develops in the concept of cloud computing, and a distributed cloud storage system (hereinafter referred to as a storage system for short) refers to a storage system that integrates a large number of storage devices (storage devices are also referred to as storage nodes) of various types in a network to work cooperatively through application software or application interfaces through functions such as cluster application, grid technology, and a distributed storage file system, so as to provide data storage and service access functions for the outside. In the embodiment of the application, the information such as network configuration and the like can be stored in the storage system, so that the server can conveniently call the information.
At present, the storage method of such a storage system is as follows: when logical volumes are created, each logical volume is allocated physical storage space, which may consist of the disks of one or several storage devices. A client stores data on a certain logical volume, that is, on a file system. The file system divides the data into a number of parts, each of which is an object; an object contains not only the data itself but also additional information such as a data identifier (ID). The file system writes each object into the physical storage space of the logical volume and records the storage location of each object, so that when a client requests access to the data, the file system can locate it according to the recorded storage location information.
The process by which the storage system allocates physical storage space for a logical volume is specifically as follows: physical storage space is divided in advance into stripes according to capacity estimates for the objects to be stored on the logical volume (estimates that often leave a large margin relative to the capacity actually needed) and according to the redundant array of independent disks (RAID) scheme; one logical volume can be understood as one stripe, and physical storage space is thereby allocated for the logical volume.
A database can be regarded as an electronic filing cabinet, that is, a place for storing electronic files, in which users can add, query, update, and delete data. A "database" is a collection of data that is stored together in a way that can be shared by multiple users, has as little redundancy as possible, and is independent of applications.
A database management system (DBMS) is computer software designed for managing databases, generally providing basic functions such as storage, retrieval, security, and backup. Database management systems can be classified according to the database models they support, such as relational or XML (Extensible Markup Language); according to the type of computer supported, such as server clusters or mobile phones; according to the query language used, such as SQL (Structured Query Language) or XQuery; according to their performance focus, such as maximum scale or maximum operating speed; or by other classification schemes. Regardless of the classification used, some DBMSs can span categories, for example supporting multiple query languages simultaneously. In the embodiments of the present application, the recognition results can be stored in a database management system for convenient retrieval by the server.
It should be specifically noted that the terminals referred to in the embodiments of the present application may be devices that provide voice and/or data connectivity to users, handheld devices with wireless connection functions, or other processing devices connected to wireless modems, such as mobile telephones (or "cellular" telephones) and computers with mobile terminals, which may be portable, pocket-sized, hand-held, computer-built-in, or vehicle-mounted mobile devices that exchange voice and/or data with a radio access network; for example, personal communication service (PCS) telephones, cordless telephones, Session Initiation Protocol (SIP) phones, wireless local loop (WLL) stations, and personal digital assistants (PDAs).
Referring to fig. 1, fig. 1 is a schematic application scenario diagram of the simulated combat attack and defense training method according to an embodiment of the present application. The method is applied to the simulated combat attack and defense training system in fig. 1, which comprises a simulated combat platform, a first party ("my side") combat system, and a second party ("enemy") combat system. The simulated combat platform comprises a my-side combat zone, an enemy combat zone, at least one first party virtual object, and at least one second party virtual object. The first party combat system comprises a first party detection model and a first party virtual controller, and image and point cloud data within the first monitoring range corresponding to the first party virtual object on the simulated combat platform are transmitted to the first party detection model through an interface. The second party combat system comprises a second party detection model and a second party virtual controller, and image and point cloud data within the second monitoring range corresponding to the second party virtual object are likewise transmitted to the second party detection model through an interface.
Specifically, the simulated combat attack and defense training system may acquire first scanning data within the first monitoring range, the first scanning data comprising data of at least one second party target virtual object that has entered the first monitoring range, where the second party target virtual object is a second party virtual object to which a countermeasure pattern has been added; input the first scanning data into the first party detection model to obtain a first detection result used to control the first party virtual controller to generate at least one combat instruction for the first party virtual object; acquire second scanning data within the second monitoring range; and input the second scanning data into the second party detection model to obtain a second detection result used to control the second party virtual controller to generate at least one combat instruction for the second party virtual object.
After the simulation scene of the simulated combat platform is built, the countermeasure image generated by the attack algorithm can be delivered to the simulation platform via Transmission Control Protocol (TCP) transmission and then imported into the simulation environment of the simulated combat platform through a runtime FBX importer plug-in.
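As a rough illustration of this transmission step, the following Python sketch pushes a generated countermeasure image to the simulation platform over TCP; the host, port, and length-prefixed framing are assumptions made for illustration, not the platform's actual protocol.

    import socket
    import struct

    def send_countermeasure_image(image_bytes: bytes,
                                  host: str = "127.0.0.1",
                                  port: int = 9000) -> None:
        # Hypothetical endpoint where the simulation platform listens.
        with socket.create_connection((host, port)) as sock:
            # Length-prefix the payload (4-byte big-endian) so the receiver
            # knows how many bytes to read before importing the texture,
            # e.g. via its runtime FBX importer plug-in.
            sock.sendall(struct.pack(">I", len(image_bytes)) + image_bytes)

    with open("countermeasure_pattern.png", "rb") as f:
        send_countermeasure_image(f.read())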
It should be noted that, as shown in fig. 2, to improve the similarity between the combat scene in the simulated combat platform and a real combat scene, the combat scene in this embodiment is automatically constructed by a neural network from dynamic parameters (images, illumination, etc.) of real military combat scenes; the user can change the battlefield layout in the simulated combat platform as required and iterate the simulated scene in a loop.
It should be noted that, in this embodiment, the first party and second party virtual objects include various types of virtual objects, for example unmanned aerial vehicles, intelligent combat equipment (such as tanks), bionic robots, fixed monitoring devices, and important sites (such as munition depots and granaries). Besides the fixed monitoring devices, the unmanned aerial vehicles, intelligent vehicles, and bionic robots also have monitoring functions (they are fitted with cameras), and different types of virtual objects have different monitoring ranges. For example, an unmanned aerial vehicle's monitoring range is a circle centered on the vehicle's position whose radius is the monitoring radius corresponding to the vehicle's height (the correspondence between height and monitoring radius is preset in the platform). For the intelligent vehicle and the bionic robot in this embodiment, the on-board camera provides a first-person view, and the area covered by that first-person picture is taken as the monitoring range.
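The monitoring-range logic described above can be sketched as follows; the height-to-radius table and the class layout are illustrative assumptions, since the platform is only said to preset such a correspondence.

    from dataclasses import dataclass

    # Hypothetical preset mapping from UAV height (m) to monitoring radius (m).
    HEIGHT_TO_RADIUS = [(50, 100), (100, 250), (200, 600)]

    @dataclass
    class Drone:
        x: float
        y: float
        height: float

        def monitoring_radius(self) -> float:
            # Use the radius of the smallest preset height bracket that
            # contains the drone's current height.
            for max_height, radius in HEIGHT_TO_RADIUS:
                if self.height <= max_height:
                    return radius
            return HEIGHT_TO_RADIUS[-1][1]

        def in_monitoring_range(self, ox: float, oy: float) -> bool:
            # The monitoring range is a circle centered on the drone's position.
            r = self.monitoring_radius()
            return (ox - self.x) ** 2 + (oy - self.y) ** 2 <= r ** 2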
In the following, Embodiment One takes the first party virtual object as the defender and the second party virtual object as the attacker, while Embodiment Two takes the first party virtual object as the attacker and the second party virtual object as the defender. For ease of understanding, the first party is described as "my side" and the second party as the "enemy."
Embodiment one:
In this embodiment, the first party virtual object is the defender and the second party virtual object is the attacker. The method is applied to the simulated combat attack and defense training system shown in fig. 1, and fig. 3 is a flow diagram of the simulated combat attack and defense training method provided by this embodiment of the application. As shown in fig. 3, the method includes the following steps S110 to S150.
S110, the simulated combat platform acquires first scanning data within a first monitoring range.
At this point the first party virtual object is the defender, the second party virtual object is the attacker, and the second party virtual object has entered the my-side combat zone.
The first scanning data comprise data of at least one second party target virtual object that has entered the first monitoring range, where the second party target virtual object is a second party virtual object to which the countermeasure pattern has been added. The first monitoring range is the visual monitoring range corresponding to the first party virtual object.
That is, a disguised second party virtual object enters the monitoring range of the first party virtual object; the first party virtual object scans the data within its field of view in real time, and if a second party target virtual object has entered the first monitoring range, the first scanning data acquired at that moment include that second party target virtual object.
S120, the simulated combat platform inputs the first scanning data into the first party detection model.
After acquiring the first scanning data, the simulated combat platform inputs it into the first party detection model, so that the model receives the first scanning data.
Specifically, the simulated combat platform scans data within the first monitoring range in real time and transmits the scanned data to the first party detection model in real time.
S130, the first party detection model obtains a first detection result from the first scanning data.
In this embodiment, the first detection result is used to control the first party virtual controller to generate at least one combat instruction for the first party virtual object. If the first detection result is that the second party virtual object was scanned, the defense of the first party detection model is determined to be successful; if the first detection result is that the second party virtual object was not scanned, the defense of the first party detection model is determined to have failed.
If the defense is determined to be successful, defense-success information is sent out and the countermeasure pattern on the current second party target virtual object is optimized; this repeats until a countermeasure pattern added to the second party virtual object causes the first party detection model's defense to fail. If the defense fails, a simulation result indicating a successful attack by the second party target virtual object is output.
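The optimize-until-defense-fails loop just described can be sketched as follows; the confidence function and the scalar "pattern strength" are hypothetical stand-ins for the attack algorithm and the rendered countermeasure pattern, used here only to show the control flow.

    import random

    def detector_confidence(pattern_strength: float) -> float:
        # Stand-in for the first party detection model: detection confidence
        # drops as the countermeasure pattern gets stronger.
        return max(0.0, 0.9 - pattern_strength + random.uniform(-0.05, 0.05))

    def attack_defense_loop(threshold: float = 0.5, max_iters: int = 100):
        strength = 0.0
        for i in range(max_iters):
            if detector_confidence(strength) < threshold:
                # Defense failed: report a successful attack.
                return "attack succeeded", i, strength
            strength += 0.05  # defense succeeded, so optimize the pattern
        return "defense succeeded", max_iters, strength

    print(attack_defense_loop())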
It should be noted that the first party detection model in this embodiment includes a defense algorithm against such attacks, which can be implemented through the following steps A and B:
A. Data preprocessing: recognizing and filtering noise in the scanned pictures/video streams;
B. Adversarial training: training the model on both countermeasure (adversarial) sample image data and normal image data with supervised learning, yielding a hardened, safer model, namely the first party detection model.
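Step B can be illustrated with a minimal adversarial-training sketch in PyTorch. The patent does not name the attack used to generate training-time countermeasure samples, so an FGSM-style perturbation is assumed here purely for illustration.

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=8 / 255):
        # Generate a countermeasure (adversarial) sample from a clean batch.
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()

    def adversarial_train_epoch(model, loader, optimizer, device="cpu"):
        model.train()
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = fgsm(model, x, y)
            optimizer.zero_grad()
            # Supervised training on normal and adversarial data together.
            loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()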
S140, the first party detection model sends the first detection result to the first party virtual controller.
After the first party detection model obtains the first detection result, it sends the result to the first party virtual controller.
In some embodiments, to reduce the waste of resources, the first detection result is sent to the first party virtual controller only when the first detection result is that the second party virtual object is scanned.
S150, the first party virtual controller generates at least one combat instruction for the first party virtual object according to the first detection result.
In this embodiment, if the first detection result is that the second party virtual object was scanned, the first party virtual controller dispatches combat equipment among the first party virtual objects on the simulated combat platform to the position where the second party virtual object was scanned and performs attack processing, such as blasting, on it. For example, as shown in fig. 4, the simulated combat platform contains an enemy combat zone and a my-side combat zone; equipment a among the enemy's second party virtual objects is camouflaged (i.e., a countermeasure pattern is added) and then enters the first monitoring range of the my-side combat zone, where it is identified by my side and blasted.
In some embodiments, when it is determined from the first detection result that a second party virtual object is present in the first monitoring range, a countermeasure image is additionally added to the first party virtual objects according to the type of the identified second party virtual object. Specifically, the countermeasure image is added to the first party virtual objects within a preset range of the scanned second party virtual object (for example, a circle centered on the scanned object with a preset length, such as 20 m, as its radius). To improve the attack effect of the countermeasure image, when the scanned second party virtual object is a ground virtual object (for example, a smart car or a robot), the countermeasure image is added to the first party virtual objects in the preset range according to the orientation of the ground virtual object relative to each first party virtual object; when the scanned second party virtual object is an aerial virtual object (such as an unmanned aerial vehicle), the countermeasure pattern must camouflage the whole vehicle.
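A minimal sketch of this response follows, assuming simple object records; the 20 m radius follows the text's example, while the class and method names are hypothetical.

    import math
    from dataclasses import dataclass, field

    PRESET_RADIUS_M = 20.0  # the preset length used as the radius in the example

    @dataclass
    class SimObject:
        name: str
        position: tuple            # (x, y)
        kind: str = "ground"       # "ground" or "aerial"
        pattern: dict = field(default_factory=dict)

        def apply_pattern(self, mode: str, facing=None):
            # Record how the countermeasure image is applied (placeholder).
            self.pattern = {"mode": mode, "facing": facing}

    def respond_to_scanned_enemy(enemy: SimObject, friendly: list):
        for obj in friendly:
            if math.dist(obj.position, enemy.position) > PRESET_RADIUS_M:
                continue  # outside the preset range around the scanned enemy
            if enemy.kind == "aerial":   # e.g. a UAV: camouflage the whole vehicle
                obj.apply_pattern(mode="full_vehicle")
            else:                        # ground threat: camouflage the facing side
                obj.apply_pattern(mode="local", facing=enemy.position)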
Further, the environmental features around the first party virtual object also need to be acquired, and the countermeasure image on the first party virtual object is then adjusted according to those features, for example by adjusting the countermeasure pattern toward camouflage closer to the combat scene.
In some embodiments, in order to wipe out the enemy forces more thoroughly, my side lures enemies that have entered my side's preset range into a preset camouflaged trap area through coordinated land-air combat and blasts them together, as shown in fig. 5. Specifically, the first party virtual objects include unmanned aerial vehicle combat equipment, which is displayed within the second monitoring range of the second party virtual object when the second party virtual object is detected entering the first party's preset range, and which is controlled to move to the preset place so as to lure the second party virtual objects (the remaining equipment b and equipment c) into following it to the preset place.
The preset range is a range near the preset place on my side; the preset place is a camouflaged trap in which blasting points are buried in advance.
In summary, on the one hand, the embodiment of the application realizes simulated attack and defense training within the simulated combat attack and defense training system: physical-world training is simulated through simulation scenes, and the training environment provided by the simulated combat platform is close to the real combat environment, so the attack effect of the countermeasure pattern on the model is improved. On the other hand, the countermeasure pattern does not need to be printed and manually pasted onto combat equipment; countermeasure samples can be generated continuously during simulated training, so the detection model can be attacked continuously, more extensively, faster, and more comprehensively in a short time, improving the attack effect and shortening the iteration cycle of the model.
Embodiment two:
In this embodiment, the first party virtual object is the attacker and the second party virtual object is the defender. The method is applied to the simulated combat attack and defense training system shown in fig. 1, and fig. 6 is a flow diagram of the simulated combat attack and defense training method provided by this embodiment of the application. As shown in fig. 6, the method includes the following steps S210 to S270.
S210, the simulated combat platform determines at least one target countermeasure pattern to be added from a plurality of preset candidate countermeasure patterns.
In this embodiment, the first party virtual object is the attacker and the second party virtual object is the defender; the first party virtual object enters the enemy combat zone.
A plurality of candidate patterns are preset in the simulated combat platform, and when a countermeasure pattern needs to be added to a first party virtual object, the target pattern is selected from these candidates.
S220, the simulated combat platform imports the at least one target countermeasure pattern into at least one first party target virtual object, where the first party target virtual object is a first party virtual object within the second monitoring range, and the second monitoring range is the monitoring range corresponding to the second party virtual object.
It should be noted that the countermeasure pattern in this embodiment may be added or adjusted either through code or by dragging on the interface, so that the position and size of the countermeasure pattern on the virtual object can be adjusted flexibly and its attack effect is more stable.
The steps for adding or adjusting the countermeasure pattern by dragging on the user interface are as follows:
The display interface of the simulated combat platform includes a first area and a second area; the first area displays a function icon for each preset candidate countermeasure pattern, and the second area displays the current picture of the simulated combat platform. Determining at least one target countermeasure pattern to be added from the preset candidate countermeasure patterns and importing it into at least one first party target virtual object then comprises: receiving a user's first operation instruction directed at a first function icon in the first area; in response to the first operation instruction, selecting the countermeasure pattern corresponding to the first function icon from the candidate countermeasure patterns as the target countermeasure pattern; and displaying the target countermeasure pattern on the first party target virtual object.
The first operation instruction corresponds to the user clicking the first function icon displayed in the first area with the mouse and then directly dragging the target countermeasure pattern onto the first party target virtual object currently displayed in the second area.
The steps for adding or adjusting the countermeasure pattern by code input are as follows:
in this case, determining at least one target countermeasure pattern to be added currently from a plurality of preset candidate countermeasure patterns and importing at least one target countermeasure pattern into at least one first party target virtual object comprises: determining at least one target countermeasure pattern from the plurality of candidate countermeasure patterns according to a preset countermeasure script; and importing at least one target countermeasure pattern onto at least one first party target virtual object.
Specifically, in some embodiments, the preset countermeasure script records the target countermeasure pattern corresponding to each first party virtual object. After the first party target virtual object is determined, the target countermeasure pattern corresponding to it is determined according to the correspondence between objects and countermeasure patterns in the preset countermeasure script, and the target countermeasure pattern is imported onto the corresponding virtual object.
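A minimal sketch of the script-driven variant, under the assumption that the preset countermeasure script can be modeled as a mapping from object identifiers to pattern identifiers (all names here are hypothetical):

```python
# Hypothetical preset countermeasure script: object id -> pattern id.
countermeasure_script = {
    "equipment_a": "camouflage_pattern_03",
    "equipment_b": "camouflage_pattern_07",
}

def import_patterns(first_party_targets, pattern_library, script):
    """Attach each object's scripted target countermeasure pattern, if any."""
    for obj in first_party_targets:  # objects inside the second monitoring range
        pattern_id = script.get(obj.object_id)
        if pattern_id is not None:
            obj.apply_pattern(pattern_library[pattern_id])
```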
The step of adding the target countermeasure pattern is specifically as follows:
in some embodiments, after the first party virtual object enters the enemy combat area, if a second party virtual object is detected while the first party virtual object travels, the target countermeasure pattern is automatically added.
In some embodiments, when adding the target countermeasure pattern, the pattern is added according to the current environment in order to improve its attack effect. In some embodiments, to disguise purposefully, the countermeasure pattern is added according to the object type of the identified second party virtual object, where the object type includes aerial virtual objects and ground virtual objects. As shown in fig. 7, if the detected second party virtual object is a ground virtual object, the first party target virtual object is locally disguised according to the position of the ground virtual object relative to the first party target virtual object; that is, the countermeasure pattern is added locally (on the side facing the second party virtual object), and the position of the second party virtual object is detected in real time so that the position of the countermeasure pattern can be adjusted in real time. As shown in fig. 8, if the detected second party virtual object is an aerial virtual object, the target countermeasure pattern disguises the whole vehicle of the first party target virtual object, for example by covering the whole vehicle with the camouflage countermeasure pattern.
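The object-type-dependent camouflage choice can be sketched as follows; the object types, method names, and bearing computation are assumptions for illustration:

```python
def add_countermeasure_pattern(target, detected_enemy, pattern):
    """Choose local or full-vehicle camouflage from the enemy's object type."""
    if detected_enemy.object_type == "aerial":
        # Aerial observer (fig. 8): camouflage the whole vehicle.
        target.apply_pattern(pattern, region="full_vehicle")
    elif detected_enemy.object_type == "ground":
        # Ground observer (fig. 7): camouflage only the side facing it and
        # track its position so the patch can be repositioned in real time.
        bearing = target.bearing_to(detected_enemy.position)
        target.apply_pattern(pattern, region="local", facing=bearing)
```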
The step of adjusting the target countermeasure pattern is specifically as follows:
as shown in fig. 9, the intelligent combat equipment in a my-party (first party) virtual object scans the battlefield environment in real time during combat. A countermeasure pattern may become ineffective when the environment changes, so the countermeasure attack algorithm can generate a new countermeasure pattern to disguise the intelligent combat equipment according to the dynamic changes of the battlefield environment. The countermeasure attack algorithm can also redefine the pattern to improve the concealment of the intelligent equipment, for example by defining a pattern that is more similar to the combat scene. The method thus comprises: adjusting the target countermeasure pattern on the first party target virtual object according to the environmental characteristic.
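A sketch of this environment-driven update loop is given below; `generate_pattern` stands in for whatever countermeasure attack algorithm is used, and the change metric and threshold are assumptions:

```python
def keep_camouflage_current(target, env_sensor, generate_pattern, threshold=0.3):
    """Regenerate the countermeasure pattern when the battlefield environment drifts."""
    last_env = env_sensor.read()
    while target.in_combat:
        env = env_sensor.read()
        if env.distance_to(last_env) > threshold:  # significant environment change
            # Produce a pattern closer to the current scene to restore concealment.
            target.apply_pattern(generate_pattern(env))
            last_env = env
```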
While the intelligent combat equipment in the first party target virtual object travels, the countermeasure pattern on the first party target virtual object also needs to be adjusted in real time according to the detected second party virtual object in order to keep the countermeasure pattern stable, specifically as follows: if a ground virtual object is detected while the first party target virtual object travels, local camouflage adjustment is performed on the target countermeasure pattern on the first party target virtual object according to the bearing of the ground virtual object relative to the first party target virtual object;
and if an aerial virtual object is detected while the first party target virtual object travels, full-vehicle camouflage adjustment is performed on the target countermeasure pattern on the first party target virtual object.
S230, the simulated combat platform acquires second scanning data in the second monitoring range.
In this embodiment, since the first party target virtual object is within the second monitoring range and the target countermeasure pattern is attached to it, the second scanning data at this point includes the first party target virtual object with the target countermeasure pattern attached.
S240, the simulated combat platform inputs the second scanning data into the second party detection model.
After the simulated combat platform acquires the second scanning data, it inputs the second scanning data into the second party detection model, so that the second party detection model receives the second scanning data.
Specifically, the simulated combat platform scans data in the second monitoring range in real time and transmits the scanned data to the second party detection model in real time.
S250, the second party detection model obtains a second detection result according to the second scanning data.
In this embodiment, the second detection result is used to control the second party virtual controller to generate at least one combat instruction for the second party virtual object. If the second detection result is that a first party virtual object is scanned, it is determined that the second party detection model has defended successfully; if the second detection result is that no first party virtual object is scanned, it is determined that the second party detection model has failed to defend.
If it is determined that the second party detection model defended successfully, defense-success information is sent, and the corresponding countermeasure pattern on the first party target virtual object is optimized; this continues until a countermeasure pattern is added to the first party virtual object under which the second party detection model fails to defend. If the second party detection model fails to defend, a simulation result indicating a successful attack by the first party target virtual object is output.
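Steps S230-S250 together with this optimize-until-failure loop can be sketched as follows (the detection and optimization interfaces are assumptions, not the application's API):

```python
def run_attack_defense_round(platform, second_party_model, target, pattern,
                             optimize_pattern):
    """Iterate: scan, detect, and refine the pattern until the defense fails."""
    while True:
        scan = platform.scan(monitoring_range="second")   # S230: second scanning data
        result = second_party_model.detect(scan)          # S240/S250
        if result.found_first_party_object:
            platform.report("defense_success")            # defense succeeded
            pattern = optimize_pattern(pattern, result)   # refine the adversarial pattern
            target.apply_pattern(pattern)
        else:
            platform.report("first_party_attack_success") # defense failed
            return pattern
```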
S260, the second party detection model sends the second detection result to the second party virtual controller.
After the second party detection model obtains the second detection result, it sends the second detection result to the second party virtual controller.
In some embodiments, to reduce the waste of resources, the second detection result is sent to the second party virtual controller only when the second detection result is that a first party virtual object has been scanned.
S270, the second party virtual controller generates at least one combat instruction of the second party virtual object according to the second detection result.
In this embodiment, if the second detection result is that a first party virtual object has been scanned, the second party virtual controller schedules the combat equipment in the second party virtual object to the position where the first party virtual object was scanned and performs attack processing, such as blasting, on it. For example, as shown in fig. 10a, the simulated combat platform contains an enemy combat area and a my-party combat area; equipment A in the first party virtual object is camouflaged (i.e., a countermeasure pattern is added) and then enters the second monitoring range of the enemy combat area, and if the enemy recognizes equipment A, equipment A is blasted.
If equipment A cannot be identified, it is not blasted and its disguise has succeeded. Specifically, as shown in fig. 10b, the first party virtual object is disguised using the countermeasure pattern, so that the second party detection model cannot identify the first party virtual object within the second monitoring range and thus fails to defend.
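Step S270's dispatch logic might look like the following sketch; the instruction format and scheduling call are illustrative assumptions:

```python
def on_second_detection(controller, detection):
    """Turn a positive detection into combat instructions (e.g., blasting)."""
    if detection.found_first_party_object:
        position = detection.position  # where the first party object was scanned
        for unit in controller.schedule_units(position):
            unit.execute({"action": "blast", "target": position})
```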
In some embodiments, in order to provide a richer environmental basis for the simulated combat training and to test the detection model more comprehensively, the simulated combat platform is preset with a plurality of weather parameters. The simulated combat platform may then receive a weather parameter selection instruction (either user-triggered or code-triggered) for a target weather parameter, where the target weather parameter is one of the plurality of weather parameters; in response to the weather parameter selection instruction, the platform sets the weather parameter of the simulated combat platform display interface to the target weather parameter.
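A sketch of the weather-parameter selection flow, assuming a simple preset set and a display interface with a settable weather state (all names are hypothetical):

```python
PRESET_WEATHER = {"clear", "rain", "fog", "snow", "sandstorm"}

def on_weather_select(platform, instruction):
    """Handle a user- or code-triggered weather parameter selection instruction."""
    target = instruction.weather_parameter
    if target in PRESET_WEATHER:  # must be one of the preset weather parameters
        platform.display.set_weather(target)
```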
Fig. 11 is a schematic block diagram of a simulated combat attack and defense training device according to an embodiment of the present application. As shown in fig. 11, corresponding to the above simulated combat attack and defense training method, the application also provides a simulated combat attack and defense training device. The device comprises units for executing the above method and can be configured in a simulated combat attack and defense training system, where the system comprises a simulated combat platform, a first party combat system and a second party combat system; the simulated combat platform comprises at least one first party virtual object and at least one second party virtual object; the first party combat system comprises a first party detection model and a first party virtual controller; and the image and point cloud data within the first monitoring range corresponding to the first party virtual object in the simulated combat platform are transmitted to the first party detection model through a port. Specifically, referring to fig. 11, the simulated combat attack and defense training device 110 includes a transceiver module 1101 and a processing module 1102.
The transceiver module 1101 is configured to acquire first scanning data within the first monitoring range, where the first scanning data includes data of at least one second party target virtual object entering the first monitoring range, and the second party target virtual object is a second party virtual object to which a countermeasure pattern has been added;
the processing module 1102 is configured to input the first scanning data into the first party detection model to obtain a first detection result, where the first detection result is used to control the first party virtual controller to generate at least one combat instruction of the first party virtual object.
In some embodiments, after performing the step of inputting the first scan data into the first party detection model, the processing module 1102 is further configured to:
if the first detection result is that the second party virtual object is scanned, determining that the first party detection model is successfully defended;
and if the first detection result is that the second party virtual object is not scanned, determining that the first party detection model fails in defense.
In some embodiments, a preset location is further provided in the simulated combat platform, the first party virtual object includes unmanned combat equipment, and the processing module 1102 is further configured to:
When it is detected that a second party virtual object has entered the preset range of the first party, display the unmanned combat equipment within the second monitoring range of the second party virtual object, and control the unmanned combat equipment to move to the preset location so as to attract the second party virtual object to follow the unmanned combat equipment to the preset location.
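The decoy behavior described above might be sketched as follows; the range test, display call, and movement API are assumptions:

```python
def run_decoy(platform, unmanned_equipment, enemy, preset_location):
    """Show unmanned combat equipment to a detected enemy and lure it away."""
    if platform.in_first_party_preset_range(enemy.position):
        # Make the decoy visible inside the enemy's second monitoring range.
        platform.show(unmanned_equipment, within=enemy.monitoring_range)
        unmanned_equipment.move_to(preset_location)  # the enemy follows it
```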
In some embodiments, the processing module 1102 is further configured to:
determining at least one target countermeasure pattern to be added currently from a plurality of preset candidate countermeasure patterns;
at least one target countermeasure pattern is imported into at least one first party target virtual object, wherein the first party target virtual object is the first party virtual object in a second monitoring range, and the second monitoring range is a monitoring range corresponding to the second party virtual object.
In some embodiments, the processing module 1102 is further configured to, after performing the step of importing at least one of the target countermeasure patterns into at least one first party target virtual object:
acquiring, by the transceiver module 1101, an environmental characteristic in which the first party target virtual object is currently located;
the target countermeasure pattern on the first party target virtual object is adjusted according to the environmental characteristic.
In some embodiments, the second party virtual object comprises a ground virtual object and an aerial virtual object; the processing module 1102 is further configured to, after performing the step of importing at least one of the target countermeasure patterns into at least one first party target virtual object:
if the ground virtual object is detected while the first party target virtual object travels, perform local camouflage adjustment on the target countermeasure pattern on the first party target virtual object according to the bearing of the ground virtual object relative to the first party target virtual object;
and if the aerial virtual object is detected while the first party target virtual object travels, perform full-vehicle camouflage adjustment on the target countermeasure pattern on the first party target virtual object.
In some embodiments, the second party combat system comprises a second party detection model and a second party virtual controller, wherein the image and point cloud data in the second monitoring range in the simulated combat platform are transmitted to the second party detection model through a port; the processing module 1102 is further configured to, after performing the step of importing at least one of the target countermeasure patterns into at least one first party target virtual object:
Acquiring second scanning data in the second monitoring range through the transceiver module 1101;
and inputting the second scanning data into the second party detection model to obtain a second detection result, wherein the second detection result is used for controlling the second party virtual controller to generate at least one combat instruction of the second party virtual object.
In some embodiments, the simulated combat platform is preset with a plurality of weather parameters, and the transceiver module 1101 is further configured to receive a weather parameter selection instruction for a target weather parameter, where the target weather parameter is one of the plurality of weather parameters;
the processing module 1102 is further configured to set a weather parameter of the simulated combat platform display interface to the target weather parameter in response to the weather parameter selection instruction.
In some embodiments, the display interface of the simulated combat platform includes a first area and a second area, where the first area displays a functional icon corresponding to each preset candidate countermeasure pattern, and the second area currently displays the picture of the simulated combat platform. When performing the steps of determining at least one target countermeasure pattern to be added currently from the plurality of preset candidate countermeasure patterns and importing at least one target countermeasure pattern into at least one first party target virtual object, the processing module 1102 is specifically configured for:
receiving, through the transceiver module 1101, a first operation instruction of a user for a first function icon in the first area;
selecting, in response to the first operation instruction, the countermeasure pattern corresponding to the first function icon from the plurality of candidate countermeasure patterns as the target countermeasure pattern; and
displaying the target countermeasure pattern on the first party target virtual object.
In some embodiments, when performing the steps of determining at least one target countermeasure pattern to be added currently from the plurality of preset candidate countermeasure patterns and importing at least one target countermeasure pattern into at least one first party target virtual object, the processing module 1102 is specifically configured for:
determining at least one target countermeasure pattern from a plurality of candidate countermeasure patterns according to a preset countermeasure script;
at least one of the target countermeasure patterns is imported onto at least one of the first party target virtual objects.
It should be noted that, as those skilled in the art can clearly understand, for the specific implementation process of the above simulated combat attack and defense training device and each of its units, reference may be made to the corresponding description in the foregoing method embodiments; for convenience and brevity of description, it is not repeated here.
The simulated combat attack and defense training device in the embodiment of the present application is described above from the point of view of modularized functional entities; the same device is described below from the point of view of hardware processing.
It should be noted that, in each embodiment of the present application (including the embodiment shown in fig. 11), the entity device corresponding to each transceiver module may be a transceiver, and the entity device corresponding to each processing module may be a processor. When one of these devices has the structure shown in fig. 12, the processor, the transceiver and the memory implement functions the same as or similar to those of the transceiver module and the processing module provided by the corresponding device embodiment, and the memory in fig. 12 stores a computer program that the processor invokes when executing the above simulated combat attack and defense training method.
The apparatus shown in fig. 11 may have the structure shown in fig. 12. When it does, the processor in fig. 12 can implement functions the same as or similar to those of the processing module provided by the corresponding apparatus embodiment, the transceiver in fig. 12 can implement functions the same as or similar to those of the transceiver module provided by the corresponding apparatus embodiment, and the memory in fig. 12 stores the computer program that the processor invokes when executing the above simulated combat attack and defense training method. In the embodiment shown in fig. 11, the entity device corresponding to the transceiver module may alternatively be an input/output interface, and the entity device corresponding to the processing module may be a processor.
The embodiment of the present application further provides another terminal device. As shown in fig. 13, for convenience of explanation, only the portion relevant to the embodiment of the present application is shown; for specific technical details that are not disclosed, please refer to the method portion of the embodiments of the present application. The terminal device may be any terminal device, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point of sale (POS) terminal, a vehicle-mounted computer, and the like. Taking a mobile phone as an example:
fig. 13 is a block diagram showing a part of the structure of a mobile phone related to the terminal device provided in an embodiment of the present application. Referring to fig. 13, the mobile phone includes: radio frequency (RF) circuit 1310, memory 1320, input unit 1330, display unit 1340, sensor 1350, audio circuit 1360, wireless fidelity (Wi-Fi) module 1370, processor 1380, and power supply 1390. It will be appreciated by those skilled in the art that the handset structure shown in fig. 13 does not limit the handset, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes the components of the mobile phone in detail with reference to fig. 13:
the RF circuit 1310 may be used for receiving and transmitting signals during a message or a call; in particular, after receiving downlink information from a base station, the RF circuit passes it to the processor 1380 for processing, and it sends uplink data to the base station. Generally, the RF circuit 1310 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1310 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), email, the short message service (SMS), and the like.
The memory 1320 may be used to store software programs and modules, and the processor 1380 performs the various functional applications and data processing of the handset by running the software programs and modules stored in the memory 1320. The memory 1320 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to the use of the handset (such as audio data or a phonebook). Further, the memory 1320 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 1330 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the handset. In particular, the input unit 1330 may include a touch panel 1331 and other input devices 1332. The touch panel 1331, also referred to as a touch screen, can collect touch operations on or near it (such as operations performed by the user on or near the touch panel 1331 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 1331 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 1380; it can also receive commands from the processor 1380 and execute them. In addition, the touch panel 1331 may be implemented as a resistive, capacitive, infrared, or surface acoustic wave panel, among other types. Besides the touch panel 1331, the input unit 1330 may include other input devices 1332, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 1340 may be used to display information input by the user, information provided to the user, and the various menus of the mobile phone. The display unit 1340 may include a display panel 1341, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 1331 may overlay the display panel 1341; when the touch panel 1331 detects a touch operation on or near it, the operation is passed to the processor 1380 to determine the type of touch event, and the processor 1380 then provides a corresponding visual output on the display panel 1341 according to the type of touch event. Although in fig. 13 the touch panel 1331 and the display panel 1341 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1331 may be integrated with the display panel 1341 to implement both functions.
The handset may also include at least one sensor 1350, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which can adjust the brightness of the display panel 1341 according to the brightness of the ambient light, and a proximity sensor, which can turn off the display panel 1341 and/or the backlight when the phone is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the attitude of the mobile phone (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer or tap recognition). Other sensors that may also be configured in the handset, such as gyroscopes, barometers, hygrometers, thermometers, and infrared sensors, are not described in detail here.
The audio circuit 1360, the speaker 1361, and the microphone 1362 may provide an audio interface between the user and the handset. The audio circuit 1360 can transmit the electrical signal converted from received audio data to the speaker 1361, where it is converted into a sound signal and output; conversely, the microphone 1362 converts collected sound signals into electrical signals, which the audio circuit 1360 receives and converts into audio data. After the audio data are processed by the processor 1380, they are sent via the RF circuit 1310 to, for example, another mobile phone, or output to the memory 1320 for further processing.
Wi-Fi is a short-range wireless transmission technology; through the Wi-Fi module 1370, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media and the like, providing wireless broadband Internet access. Although fig. 13 shows the Wi-Fi module 1370, it is understood that it is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the application.
The processor 1380 is the control center of the handset; it connects the various parts of the entire handset using various interfaces and lines, and performs the various functions of the handset and processes data by running or executing the software programs and/or modules stored in the memory 1320 and invoking the data stored in the memory 1320, thereby monitoring the handset as a whole. Optionally, the processor 1380 may include one or more processing units; preferably, the processor 1380 may integrate an application processor, which mainly handles the operating system, user interfaces, and applications, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 1380.
The handset further includes a power supply 1390 (such as a battery) for powering the various components. Preferably, the power supply may be logically connected to the processor 1380 via a power management system, so that functions such as charging, discharging, and power-consumption management are performed through the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which will not be described herein.
In the embodiment of the present application, the processor 1380 included in the mobile phone also has the function of controlling the execution of the flow of the simulated combat attack and defense training method shown in fig. 3 or fig. 6 above.
Fig. 14 is a schematic diagram of a server structure provided in an embodiment of the present application. The server 1420 may vary considerably in configuration or performance and may include one or more central processing units (CPUs) 1422 (e.g., one or more processors), memory 1432, and one or more storage media 1430 (e.g., one or more mass storage devices) storing application programs 1442 or data 1444. The memory 1432 and the storage medium 1430 may provide transitory or persistent storage. The program stored in the storage medium 1430 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Further, the central processing unit 1422 may be configured to communicate with the storage medium 1430 and to execute, on the server 1420, the series of instruction operations in the storage medium 1430.
The server 1420 may also include one or more power supplies 1426, one or more wired or wireless network interfaces 1450, one or more input/output interfaces 1458, and/or one or more operating systems 1441, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like.
The steps performed by the server in the above embodiments may be based on the structure of the server 1420 shown in fig. 14. For example, the steps of the simulated combat attack and defense training method shown in fig. 3 or fig. 6 in the above embodiments can be implemented based on this server structure, with the processor 1422 performing the following operations by invoking instructions in the memory 1432:
acquiring first scanning data within the first monitoring range, wherein the first scanning data comprises data of at least one second party target virtual object entering the first monitoring range, and the second party target virtual object is a second party virtual object to which a countermeasure pattern has been added;
and inputting the first scanning data into the first party detection model to obtain a first detection result, wherein the first detection result is used for controlling the first party virtual controller to generate at least one combat instruction of the first party virtual object.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and modules described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the embodiments of the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program is loaded and executed on a computer, the flows or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer readable storage medium may be any available medium that a computer can access, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), etc.
The foregoing describes in detail the technical solutions provided by the embodiments of the present application. Specific examples are used herein to illustrate the principles and implementations of the embodiments, and the above description of the embodiments is only intended to help understand the methods and core ideas of the embodiments of the present application. Meanwhile, a person skilled in the art may make changes to the specific embodiments and the application scope according to the ideas of the embodiments of the present application; in view of the above, the content of this specification should not be construed as limiting the embodiments of the present application.

Claims (10)

1. A simulated combat attack and defense training method, characterized in that the method is applied to a simulated combat attack and defense training system, the simulated combat attack and defense training system comprising a simulated combat platform, a first party combat system and a second party combat system, the simulated combat platform comprising at least one first party virtual object and at least one second party virtual object, the first party combat system comprising a first party detection model and a first party virtual controller, wherein image and point cloud data within a first monitoring range corresponding to the first party virtual object in the simulated combat platform are transmitted to the first party detection model through a port, and the method comprises:
acquiring first scanning data within the first monitoring range, wherein the first scanning data comprises data of at least one second party target virtual object entering the first monitoring range, and the second party target virtual object is a second party virtual object to which a countermeasure pattern has been added;
and inputting the first scanning data into the first party detection model to obtain a first detection result, wherein the first detection result is used for controlling the first party virtual controller to generate at least one combat instruction of the first party virtual object.
2. The method of claim 1, wherein after inputting the first scan data into the first party detection model to obtain a first detection result, the method further comprises:
if the first detection result is that the second party virtual object is scanned, determining that the first party detection model is successfully defended;
and if the first detection result is that the second party virtual object is not scanned, determining that the first party detection model fails in defense.
3. The method of claim 1, wherein a preset location is further provided in the simulated combat platform, the first party virtual object comprises unmanned combat equipment, and the method further comprises:
when it is detected that the second party virtual object enters the preset range of the first party, displaying the unmanned combat equipment within the second monitoring range of the second party virtual object, and controlling the unmanned combat equipment to move to the preset location so as to attract the second party virtual object to follow the unmanned combat equipment to the preset location.
4. The method according to claim 1, wherein the method further comprises:
determining at least one target countermeasure pattern to be added currently from a plurality of preset candidate countermeasure patterns;
at least one target countermeasure pattern is imported into at least one first party target virtual object, wherein the first party target virtual object is the first party virtual object in a second monitoring range, and the second monitoring range is a monitoring range corresponding to the second party virtual object.
5. The method of claim 4, wherein after said importing at least one of said target countermeasure patterns into at least one first party target virtual object, said method further comprises:
acquiring the current environmental characteristics of the first party target virtual object;
The target countermeasure pattern on the first party target virtual object is adjusted according to the environmental characteristic.
6. The method of claim 4, wherein the second party virtual object comprises a ground virtual object and an aerial virtual object; and after the importing at least one of the target countermeasure patterns into at least one first party target virtual object, the method further comprises:
if the ground virtual object is detected while the first party target virtual object travels, performing local camouflage adjustment on the target countermeasure pattern on the first party target virtual object according to the bearing of the ground virtual object relative to the first party target virtual object;
and if the aerial virtual object is detected while the first party target virtual object travels, performing full-vehicle camouflage adjustment on the target countermeasure pattern on the first party target virtual object.
7. The method of claim 4, wherein the second party combat system comprises a second party detection model and a second party virtual controller, wherein the image and point cloud data within the second monitoring range in the simulated combat platform are transmitted to the second party detection model via a port; after the at least one target countermeasure pattern is imported into the at least one first party target virtual object, the method further includes:
Acquiring second scanning data in the second monitoring range;
and inputting the second scanning data into the second party detection model to obtain a second detection result, wherein the second detection result is used for controlling the second party virtual controller to generate at least one combat instruction of the second party virtual object.
8. A simulated combat attack and defense training device, characterized in that the simulated combat attack and defense training device is configured in a simulated combat attack and defense training system, the simulated combat attack and defense training system comprising a simulated combat platform, a first party combat system and a second party combat system, the simulated combat platform comprising at least one first party virtual object and at least one second party virtual object, the first party combat system comprising a first party detection model and a first party virtual controller, wherein image and point cloud data within a first monitoring range corresponding to the first party virtual object in the simulated combat platform are transmitted to the first party detection model through a port, and the device comprises:
a transceiver module, configured to acquire first scanning data within the first monitoring range, wherein the first scanning data comprises data of at least one second party target virtual object entering the first monitoring range, and the second party target virtual object is a second party virtual object to which a countermeasure pattern has been added;
and a processing module, configured to input the first scanning data into the first party detection model to obtain a first detection result, wherein the first detection result is used for controlling the first party virtual controller to generate at least one combat instruction of the first party virtual object.
9. A computer device, characterized in that it comprises a memory storing a computer program and a processor which, when executing the computer program, implements the method according to any one of claims 1-7.
10. A computer readable storage medium, characterized in that the storage medium stores a computer program comprising program instructions which, when executed by a processor, implement the method according to any one of claims 1-7.
CN202211529886.XA 2022-09-30 2022-11-30 Simulation combat attack and defense training method, related device and storage medium Pending CN116167274A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2022112057950 2022-09-30
CN202211205795 2022-09-30

Publications (1)

Publication Number Publication Date
CN116167274A true CN116167274A (en) 2023-05-26

Family

ID=86420865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211529886.XA Pending CN116167274A (en) 2022-09-30 2022-11-30 Simulation combat attack and defense training method, related device and storage medium

Country Status (1)

Country Link
CN (1) CN116167274A (en)

Similar Documents

Publication Publication Date Title
CN114297730A (en) Countermeasure image generation method, device and storage medium
CN114387647B (en) Anti-disturbance generation method, device and storage medium
CN115588131B (en) Model robustness detection method, related device and storage medium
CN115239941B (en) Countermeasure image generation method, related device and storage medium
CN116310745B (en) Image processing method, data processing method, related device and storage medium
CN115081643B (en) Confrontation sample generation method, related device and storage medium
CN114444579A (en) General disturbance acquisition method and device, storage medium and computer equipment
CN115022098A (en) Artificial intelligence safety target range content recommendation method, device and storage medium
CN116486463B (en) Image processing method, related device and storage medium
CN115471495B (en) Model robustness detection method, related device and storage medium
CN115526055B (en) Model robustness detection method, related device and storage medium
CN115376192B (en) User abnormal behavior determination method, device, computer equipment and storage medium
CN116778306A (en) Fake object detection method, related device and storage medium
CN115392405A (en) Model training method, related device and storage medium
CN116167274A (en) Simulation combat attack and defense training method, related device and storage medium
CN115909186B (en) Image information identification method, device, computer equipment and storage medium
CN115909020B (en) Model robustness detection method, related device and storage medium
CN115412726B (en) Video authenticity detection method, device and storage medium
CN116308978B (en) Video processing method, related device and storage medium
CN117765349A (en) Method for generating challenge sample, related device and storage medium
CN115525554B (en) Automatic test method, system and storage medium for model
CN116704567A (en) Face picture processing method, related equipment and storage medium
CN116363490A (en) Fake object detection method, related device and storage medium
CN117853859A (en) Image processing method, related device and storage medium
CN117132851A (en) Anti-patch processing method, related device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination