CN114911340B - Intelligent police command exercise system and method based on meta-universe system - Google Patents


Info

Publication number
CN114911340B
CN114911340B (application CN202210147882.9A)
Authority
CN
China
Prior art keywords
virtual
exercise
scene
character
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210147882.9A
Other languages
Chinese (zh)
Other versions
CN114911340A (en)
Inventor
李首峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guozheng Xintong (Beijing) Technology Co.,Ltd.
Original Assignee
Guozhengtong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guozhengtong Technology Co ltd filed Critical Guozhengtong Technology Co ltd
Priority to CN202210147882.9A priority Critical patent/CN114911340B/en
Publication of CN114911340A publication Critical patent/CN114911340A/en
Application granted granted Critical
Publication of CN114911340B publication Critical patent/CN114911340B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Alarm Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of virtual reality and discloses an intelligent police command exercise system and method based on a meta-universe system. A virtual reality scene corresponding to a real scene is constructed in the metaverse from real-scene data, a virtual exercise scene is constructed according to preset exercise content, and the two are fused into a virtual exercise live-action in which the personnel participating in the exercise drill through interaction technology, with the final result scored. Freely combining virtual reality scenes and virtual exercise scenes gives the virtual police being trained a multi-scene, multi-content exercise environment, and the selection of virtual props lets real personnel master more ways of using the props and their skill structures, and intuitively understand the props' shortcomings.

Description

Intelligent police command exercise system and method based on meta-universe system
Technical Field
The invention relates to the technical field of virtual reality, in particular to an intelligent police command exercise system and method based on a meta-universe system.
Background
Police exercises deepen police practice and are a concentrated embodiment of daily education, training, and cultivation. Their purpose is to uncover problems in practice through drills and correct deficiencies immediately; together with planned drills, daily training, and business training, they are a common means of continuously improving the practical ability of police officers. Existing police exercises take two forms. First, drills in real scenes with symbolic settings and plots waste manpower and material resources, obstruct the normal operation of the real scene, and lack realism, so a good exercise effect cannot be achieved. Second, drills in fully virtual, video-game-like scenes lack contrast with the real world and strong operability, so a good exercise effect likewise cannot be achieved.
Disclosure of Invention
The invention mainly provides an intelligent police command exercise system and method based on a meta-universe system.
In order to solve the technical problems, the invention adopts the following technical scheme:
An intelligent police command exercise method based on a meta-universe system comprises the following steps:
the method comprises the steps of collecting real scene data, constructing a virtual reality scene based on the real scene data, establishing a virtual exercise scene, extracting a corresponding virtual exercise scene according to preset police exercise content, and fusing the corresponding virtual exercise scene with the selected virtual reality scene to obtain a virtual exercise live-action;
constructing virtual characters and virtual props, and constructing virtual police based on data information of training personnel; extracting a task character and a prop character based on the virtual character, putting the prop character into the virtual exercise live-action, arranging the task character and the virtual police according to preset police exercise content, and selecting a corresponding virtual prop;
and acquiring training information of a virtual police in the virtual training live-action, and acquiring result information, and acquiring a training score based on the training information and the result information.
Further, collecting real scene data, constructing a virtual reality scene based on the real scene data, establishing a virtual exercise scene, extracting the corresponding virtual exercise scene according to the preset police-situation exercise content and fusing it with the selected virtual reality scene to obtain a virtual exercise live-action, includes:
acquiring corresponding parameters of a real scene, and constructing a virtual reality scene through the corresponding parameters;
parameterizing sudden accidents, acquiring standard accident models, establishing virtual exercise scenes with a plurality of standard accident models, selecting corresponding standard accident models through preset warning condition exercise contents, selecting corresponding virtual reality scene fusion points, combining the standard accident models with corresponding fusion points, thus constructing accident response real scene models, setting triggering characteristics of the corresponding accident response real scene models, and generating virtual exercise real scenes corresponding to the preset exercise contents;
and initializing the virtual exercise live-action through a virtual reality editor.
Further, parameterizing sudden accidents, obtaining a standard accident model, establishing a virtual exercise scene with a plurality of standard accident models, selecting a corresponding standard accident model through preset warning condition exercise content, selecting a corresponding virtual reality scene fusion point, combining the standard accident model with the corresponding fusion point, thereby constructing an accident response real scene model, setting initiation characteristics of the corresponding accident response real scene model, and generating a virtual exercise real scene corresponding to the preset exercise content, wherein the method comprises the following steps:
performing numerical simulation on the sudden accident, acquiring an influence area from the sudden accident to the end according to the time sequence and the triggering condition, dividing the influence area according to the time sequence to obtain different standard accident models, and establishing a virtual exercise scene with a plurality of standard accident models;
selecting a corresponding standard accident model through preset police condition exercise content, selecting a corresponding virtual reality scene fusion point, combining the standard accident model with the fusion point to obtain an accident response real model with an influence area, and setting triggering characteristics of the corresponding accident response real model;
and developing a corresponding accident response live-action model based on the triggering characteristics, and generating an event in the virtual reality scene so as to acquire a virtual exercise live-action.
Further, constructing virtual characters and virtual props, and constructing a virtual police based on the data information of training personnel; extracting a task character and a prop character based on the virtual character, putting the prop character into the virtual exercise live-action, arranging the task character and the virtual police according to preset police exercise content, and selecting a corresponding virtual prop, wherein the method comprises the following steps of:
constructing a virtual character, extracting a task character and a prop character, and putting the prop character into the virtual exercise live-action, wherein the task character comprises an AI character with a preset behavior mode and a player character played by collecting real person information;
collecting a plurality of real prop parameters, and constructing a virtual prop;
selecting the AI character or the player character according to the police-situation exercise content;
collecting data information of training personnel to construct a virtual police;
and controlling the player character and the virtual police through an interaction technology, and selecting and using the virtual prop.
Further, collecting training information of a virtual police in a virtual training live-action and result information, and obtaining a training score based on the training information and the result information, including:
acquiring virtual prop use and personnel action information and the like of a virtual police in a virtual training live-action so as to acquire training information;
acquiring exercise time information, casualty information and scene damage information in the virtual exercise live-action, thereby acquiring result information;
and acquiring the exercise score of the training personnel based on the training information and the result information.
An intelligent police command exercise system based on the meta-universe system includes:
the virtual exercise live-action construction module is used for acquiring real scene data, constructing a virtual reality scene based on the real scene data, establishing a virtual exercise scene, extracting a corresponding virtual exercise scene according to preset warning condition exercise content and fusing the corresponding virtual exercise scene with the selected virtual reality scene to acquire a virtual exercise live-action;
the virtual character prop construction module is used for constructing virtual characters and virtual props and constructing virtual police based on data information of training personnel; extracting a task character and a prop character based on the virtual character, putting the prop character into the virtual exercise live-action, arranging the task character and the virtual police according to preset police exercise content, and selecting a corresponding virtual prop;
and the exercise information evaluation module is used for acquiring training information of a virtual police in the virtual exercise live-action and acquiring result information, and acquiring exercise scores based on the training information and the result information.
Further, the virtual exercise live-action construction module includes:
the real scene acquisition and construction sub-module is used for acquiring corresponding parameters of the real scene and constructing a virtual reality scene through the corresponding parameters;
the training live-action fusion construction sub-module is used for parameterizing sudden accidents, acquiring standard accident models, establishing virtual training scenes with a plurality of standard accident models, selecting corresponding standard accident models through preset warning condition training contents, selecting corresponding virtual reality scene fusion points, combining the standard accident models with the corresponding fusion points, thus constructing an accident response live-action model, setting triggering characteristics of the corresponding accident response live-action model, and generating virtual training live-actions corresponding to the preset training contents;
and the scene initialization sub-module is used for initializing the virtual exercise live-action through the virtual reality editor.
Further, the drilling live-action fusion construction submodule comprises:
the virtual exercise scene construction unit is used for carrying out numerical simulation on the sudden accident, acquiring an influence area from the sudden accident to the end according to the time sequence and the triggering condition, dividing the influence area according to the time sequence to obtain different standard accident models, and establishing a virtual exercise scene with a plurality of standard accident models;
the accident response real model fusion unit is used for selecting a corresponding standard accident model through preset warning condition exercise content, selecting a corresponding virtual reality scene fusion point, combining the standard accident model with the fusion point to obtain an accident response real model with an influence area, and setting the initiation characteristics of the corresponding accident response real model;
and the virtual exercise live-action construction unit is used for developing a corresponding accident response live-action model based on the triggering characteristics and generating an event in the virtual reality scene so as to acquire the virtual exercise live-action.
Further, the virtual character prop construction module includes:
the character construction submodule is used for constructing a virtual character, extracting a task character and a prop character, and putting the prop character into the virtual exercise live-action, wherein the task character comprises an AI character with a preset behavior mode and a player character played by collecting real person information;
the prop collecting and constructing sub-module is used for collecting a plurality of real prop parameters and constructing virtual props;
the character selection construction submodule is used for selecting the AI character or the player character according to the police-situation exercise content;
the police construction sub-module is used for collecting data information of training personnel and constructing a virtual police;
and the interaction control sub-module is used for controlling the player character and the virtual police through an interaction technology, and selecting and using the virtual prop.
Further, the exercise information evaluation module includes:
the training information acquisition sub-module acquires virtual prop use and personnel action information and the like of a virtual police in the virtual training live-action so as to acquire training information;
the result information acquisition sub-module acquires exercise time information, casualty information and scene damage information in the virtual exercise live-action, so as to acquire result information;
and the exercise score calculation sub-module is used for acquiring exercise scores of the training personnel based on the training information and the result information.
The beneficial effects are that: a virtual reality scene corresponding to the real scene is constructed in the metaverse from real-scene data, a virtual exercise scene can be constructed according to preset exercise content, and the two are fused into a virtual exercise live-action in which the personnel participating in the exercise drill through interaction technology, with the final result scored. Freely combining virtual reality scenes and virtual exercise scenes gives the virtual police being trained a multi-scene, multi-content exercise environment, and the selection of virtual props lets real personnel master more ways of using the props and their skill structures, and intuitively understand the props' shortcomings.
Drawings
FIG. 1 is a flow chart of an intelligent police command exercise method based on a meta-universe system;
FIG. 2 is a flowchart of step S101;
fig. 3 is a flowchart of step S1012;
FIG. 4 is a flowchart of step S102;
fig. 5 is a flowchart of step S103;
fig. 6 is a block diagram of the intelligent police command exercise system based on the meta-universe system.
Detailed Description
The technical scheme of the intelligent police command exercise system and method based on the meta-universe system related by the invention is further described in detail below by combining the embodiment.
As shown in fig. 1, the intelligent police command exercise method based on the meta-universe system according to the embodiment of the invention includes steps S101 to S103:
s101, acquiring real scene data, constructing a virtual reality scene based on the real scene data, establishing a virtual exercise scene, extracting a corresponding virtual exercise scene according to preset warning condition exercise content, and fusing the corresponding virtual exercise scene with the selected virtual reality scene to obtain a virtual exercise live-action;
the virtual exercise scene refers to exercise content according to preset alert conditions, for example: disaster such as explosion, fire, typhoon, crowd trampling are difficult to exercise in reality to virtual content is constructed, and virtual exercise scene and virtual reality scene that constructs are fused, thereby form the virtual exercise practice that can advance the exercise of alert condition exercise content of predetermineeing.
S102, constructing virtual characters and virtual props, and constructing a virtual police based on data information of training personnel; extracting a task character and a prop character based on the virtual character, putting the prop character into the virtual exercise live-action, arranging the task character and the virtual police according to preset police exercise content, and selecting a corresponding virtual prop;
Here, prop characters are character models driven by a preset program to act as props in the virtual exercise live-action; the models can be moved by the program, for example performing an activity in the virtual exercise live-action that moves them from one location to another. They can also switch to another preset program according to trigger conditions in the virtual exercise, for example moving to, or limiting their movement range to, one or more fixed points under the direction of the virtual police. Task characters are targets arranged according to the preset exercise content; they can be a simple mode driven by a program or a harder mode played by real people, selected according to the actual situation of the virtual police being trained.
Training personnel can control the constructed virtual characters and virtual police through interaction technology and wearable devices, for example realizing sensory experiences such as smell and taste through brain-computer interface technology while interacting freely with the virtual exercise live-action, sensing the pain of the body under attack in cooperation with somatosensory equipment, and feeling the sensation of a falling or flying body during the exercise through a fully automatic haptic chair.
S103, training information of a virtual police in the virtual exercise live-action is collected, result information is collected, and exercise scores are obtained based on the training information and the result information.
In this embodiment, a virtual reality scene corresponding to the real scene is constructed in the metaverse from real-scene data, a virtual exercise scene can be constructed according to the preset exercise content, and the two are fused into a virtual exercise live-action in which the personnel participating in the exercise drill through interaction technology, with the final result scored. Freely combining virtual reality scenes and virtual exercise scenes gives the virtual police being trained a multi-scene, multi-content exercise environment, and the selection of virtual props lets real personnel master more ways of using the props and their skill structures, and intuitively understand the props' shortcomings.
Further, as shown in fig. 2, the step S101 of collecting real scene data, constructing a virtual reality scene based on the real scene data, establishing a virtual exercise scene, extracting the corresponding virtual exercise scene according to the preset police-situation exercise content and fusing it with the selected virtual reality scene to obtain a virtual exercise live-action includes:
s1011, acquiring corresponding parameters of a real scene, and constructing a virtual reality scene through the corresponding parameters;
s1012, parameterizing sudden accidents to obtain standard accident models, establishing virtual exercise scenes with a plurality of standard accident models, selecting corresponding standard accident models through preset warning condition exercise contents, selecting corresponding virtual reality scene fusion points, combining the standard accident models with corresponding fusion points, thus constructing accident response real scene models, setting triggering characteristics of the corresponding accident response real scene models, and generating virtual exercise real scenes corresponding to the preset exercise contents;
the parameterizing of the accident refers to basically confirming actual parameters of the accident event which may occur in reality, and performing numerical simulation on the accident event to obtain an influence range thereof, for example: and confirming and numerically simulating actual parameters of accidents such as fire or explosion and the like to obtain the generated influence range.
S1013, initializing the virtual exercise live-action through a virtual reality editor.
After initialization, the virtual exercise can be restarted from the beginning.
Further, as shown in fig. 3, in step S1012, the sudden accident is parameterized to obtain a standard accident model, and a virtual exercise scene with a plurality of standard accident models is established, a corresponding standard accident model is selected through preset alert exercise content, a corresponding virtual reality scene fusion point is selected, the standard accident model is combined with a corresponding fusion point, so as to construct an accident response real scene model, and initiation features of the corresponding accident response real scene model are set, so as to generate a virtual exercise real scene corresponding to the preset exercise content, including:
s10121, carrying out numerical simulation on the sudden accident, acquiring an influence area from the sudden accident to the end according to the time sequence and the triggering condition, dividing the influence area according to the time sequence to obtain different standard accident models, and establishing a virtual exercise scene with a plurality of standard accident models;
wherein the time sequence is assembled into $T = \{T_1, T_2, T_3, \ldots, T_n\}$ and the trigger conditions are set to $R = \{R_1, R_2, R_3, \ldots, R_m\}$.

The trigger conditions are the set of influencing factors that enlarge or reduce the incident, such as walls affecting the explosion range or fire-extinguishing agents affecting the fire range. When there is no trigger factor, $R_1 = 1$ is selected, i.e. the range is unaffected and develops with time alone. The influence range is obtained and the range matrix is constructed:

$$S = \begin{bmatrix} S_{11} & \cdots & S_{1n} \\ \vdots & \ddots & \vdots \\ S_{m1} & \cdots & S_{mn} \end{bmatrix}, \qquad S_{mn} = R_m \cdot T_n$$

Each range $S$ is a preset range obtained by numerically decomposing the size of the accident's influence range: the normal evolution of the range without trigger conditions is numerically simulated and recorded, then decomposed into different fixed ranges, and the selected range develops with the time sequence and the trigger conditions. Three-dimensional coordinates are then determined for each range $S$ in the matrix:

$$S_{mn} = xP + yQ + zU$$

where the fixed ranges $P$, $Q$, $U$ obtained by decomposition are defined as range unit vectors pointing in the $+x$, $+y$, and $+z$ directions, giving the range linear-transformation formula above. Taking $P$, $Q$, $U$ as range base vectors, the range basis vector matrix is constructed:

$$L = \begin{bmatrix} P & 0 & 0 \\ 0 & Q & 0 \\ 0 & 0 & U \end{bmatrix}$$
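The time-sequence and trigger-condition construction above amounts to an outer product of the trigger-factor set R and the time set T. A minimal sketch, assuming NumPy and illustrative sample values (the actual parameters would come from the numerical simulation of the incident):

```python
import numpy as np

# Illustrative values only: T_1..T_n are time steps, R_1..R_m are trigger
# factors; R_1 = 1 means no trigger influence, the range develops with time alone.
T = np.array([1.0, 2.0, 3.0, 4.0])
R = np.array([1.0, 0.8, 1.5])

# Range matrix with elements S_mn = R_m * T_n: row m scales the time
# evolution of the affected range by trigger factor R_m.
S = np.outer(R, T)

print(S.shape)  # (3, 4)
print(S[0])     # [1. 2. 3. 4.] (with R_1 = 1 the range follows time alone)
```

Each row of `S` is then one candidate standard accident model, indexed by trigger factor and time step.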
s10122, selecting a corresponding standard accident model through preset warning condition exercise content, selecting a corresponding virtual reality scene fusion point, combining the standard accident model with the fusion point to obtain an accident response real model with an influence area, and setting the triggering characteristics of the corresponding accident response real model;
The scene vector $[X\ Y\ Z]$ corresponding to a fusion point in the virtual reality scene is collected and combined with the range basis vector matrix:

$$[X\ Y\ Z] \cdot L = [X\ Y\ Z] \begin{bmatrix} P & 0 & 0 \\ 0 & Q & 0 \\ 0 & 0 & U \end{bmatrix} = [XP\ YQ\ ZU]$$

That is, the range basis vector matrix $L$ converts the scene vector $[X\ Y\ Z]$ of a fusion point in the virtual reality scene into an accident response live-action model with influence areas $[XP\ YQ\ ZU]$ in the virtual reality scene. The three-dimensional coordinates of the accident response live-action model can be influenced and limited by the three-dimensional coordinates of the trigger conditions, for example limiting the three-dimensional coordinates of a burning house by the three-dimensional coordinates of its walls.
And selecting different accident response live-action models in different time periods according to the time sequence T and the triggering condition R, and implanting the different accident response live-action models into the virtual reality scene in real time.
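The fusion-point transformation can be sketched as multiplying the scene vector by a diagonal range-basis matrix, with the trigger-condition limit (e.g. a wall bounding a fire) applied as a coordinate clamp. All values below are illustrative assumptions:

```python
import numpy as np

# Range base vectors P, Q, U along +x, +y, +z (illustrative values).
P, Q, U = 2.0, 0.5, 1.5
L = np.diag([P, Q, U])  # range basis vector matrix

# Fusion-point scene vector [X, Y, Z] in the virtual reality scene (assumed).
scene_vector = np.array([10.0, 20.0, 4.0])
influence_area = scene_vector @ L  # [X*P, Y*Q, Z*U] = [20, 10, 6]

# Trigger-condition coordinates (e.g. a wall) bound the model's extent.
wall_limit = np.array([18.0, 12.0, 8.0])
limited = np.minimum(influence_area, wall_limit)  # [18, 10, 6]
```

The clamp step is one plausible reading of "limiting the three-dimensional coordinates of the fire house by the wall"; the patent does not specify the exact limiting operation.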
S10123, developing a corresponding accident response live-action model based on the triggering characteristics, and generating an event in the virtual reality scene so as to acquire a virtual exercise live-action.
The triggering characteristics are the trigger conditions of the sudden accidents whose standard accident models are included in the virtual exercise scene, for example the time point and conditions of an explosion, or the ignition point and ignition factors of a fire. They serve to start the corresponding sudden accident so that the accident response live-action model evolves and the virtual exercise live-action is obtained.
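A triggering characteristic can be modeled as a gate that starts the incident's evolution through the time-indexed range matrix. The function below is a hypothetical sketch; the names, sample values, and the clamping of late time steps to the last model are assumptions:

```python
import numpy as np

# Range matrix (illustrative values): rows are trigger factors,
# columns are time steps of the simulated incident.
S = np.outer([1.0, 0.8], [1.0, 2.0, 3.0])

def active_ranges(step: int, trigger_step: int, triggered: bool):
    """Select the column of S for the current exercise step, if triggered.

    Before the trigger fires (or before its time point), no accident
    response model is implanted into the scene, so None is returned.
    """
    if not triggered or step < trigger_step:
        return None
    idx = min(step - trigger_step, S.shape[1] - 1)  # clamp to last model
    return S[:, idx]

print(active_ranges(0, 2, True))  # None: trigger time point not yet reached
print(active_ranges(3, 2, True))  # second column of S: values 2.0 and 1.6
```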
Further, as shown in fig. 4, in the step S102, a virtual character and a virtual prop are constructed, and a virtual police is constructed based on the data information of the training person; extracting a task character and a prop character based on the virtual character, putting the prop character into the virtual exercise live-action, arranging the task character and the virtual police according to preset police exercise content, and selecting a corresponding virtual prop, wherein the method comprises the following steps of:
s1021, constructing a virtual character, extracting a task character and a prop character, and putting the prop character into the virtual exercise live-action, wherein the task character comprises an AI character with a preset behavior mode and a player character played by collecting real person information:
s1022, collecting a plurality of real prop parameters, and constructing a virtual prop;
s1023, selecting the AI character or the player character according to the police exercise content;
s1024, acquiring data information of training personnel to construct a virtual police;
s1025, controlling the player character and the virtual police through an interaction technology, and selecting and using the virtual prop.
Further, as shown in fig. 5, the step S103 of collecting training information of the virtual police in the virtual training scene and result information, and obtaining the training score based on the training information and the result information includes:
s1031, acquiring virtual prop use and personnel action information and the like of a virtual police in a virtual training live-action so as to acquire training information;
and establishing a prop use standard model and a personnel action standard model, performing coincidence comparison with the corresponding models by utilizing the acquired corresponding prop use information and personnel action information of the virtual police, and calculating scores according to the coincidence ratio.
$$F_i = \lambda A + \mu B$$
where $\lambda + \mu = 1$, $F_i$ represents the training information score, $\lambda A$ represents the prop-use coincidence score, and $\mu B$ represents the action-information coincidence score.
S1032, acquiring exercise time information, casualty information and scene damage information in the virtual exercise live-action, thereby acquiring result information;
$$F_j = \alpha N + \beta M + \gamma K$$
where $\alpha + \beta + \gamma = 1$, $F_j$ represents the result information score, $\alpha N$ the exercise-time score, $\beta M$ the casualty score, and $\gamma K$ the scene-damage score.
S1033, acquiring exercise scores of the training personnel based on the training information and the result information.
$$F = \sigma F_i + \tau F_j$$
where $\sigma + \tau = 1$ and $F$ represents the exercise score.
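The scoring chain is three weighted sums whose weights each total 1. A direct sketch; the default weights and the sample sub-scores are illustrative assumptions, not values from the patent:

```python
def training_score(A, B, lam=0.6, mu=0.4):
    """F_i = lambda*A + mu*B, lambda + mu = 1 (prop-use and action scores)."""
    assert abs(lam + mu - 1.0) < 1e-9
    return lam * A + mu * B

def result_score(N, M, K, alpha=0.4, beta=0.4, gamma=0.2):
    """F_j = alpha*N + beta*M + gamma*K (time, casualties, scene damage)."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return alpha * N + beta * M + gamma * K

def exercise_score(F_i, F_j, sigma=0.5, tau=0.5):
    """F = sigma*F_i + tau*F_j, sigma + tau = 1."""
    assert abs(sigma + tau - 1.0) < 1e-9
    return sigma * F_i + tau * F_j

F_i = training_score(90, 80)    # 0.6*90 + 0.4*80 = 86.0
F_j = result_score(70, 95, 85)  # 0.4*70 + 0.4*95 + 0.2*85 = 83.0
print(exercise_score(F_i, F_j)) # 84.5 (up to floating-point rounding)
```

Keeping each weight set on its own sub-score allows a single skill (e.g. prop use alone) to be inspected separately, which is what the targeted-improvement remark below relies on.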
This scoring system observes the virtual police's exercise as a whole while also allowing individual skills to be examined, so targeted improvement can be made quickly.
As shown in fig. 6, the intelligent police command exercise system based on the meta-universe system according to the embodiment of the invention includes:
the virtual exercise live-action construction module 61 is used for acquiring real scene data, constructing a virtual reality scene based on the real scene data, establishing a virtual exercise scene, extracting a corresponding virtual exercise scene according to preset warning condition exercise content, and fusing the corresponding virtual exercise scene with the selected virtual reality scene to acquire a virtual exercise live-action;
virtual character prop construction module 62, which is used for constructing virtual characters and virtual props and constructing virtual police based on the data information of training personnel; extracting a task character and a prop character based on the virtual character, putting the prop character into the virtual exercise live-action, arranging the task character and the virtual police according to preset police exercise content, and selecting a corresponding virtual prop;
and the exercise information evaluation module 63 is used for acquiring training information of the virtual police in the virtual exercise live-action and acquiring result information, and acquiring exercise scores based on the training information and the result information.
Further, the virtual exercise live-action construction module 61 includes:
the real scene acquisition and construction sub-module 611 is configured to acquire corresponding parameters of a real scene, and construct a virtual reality scene through the corresponding parameters;
the training live-action fusion construction sub-module 612 is configured to parameterize a sudden accident, obtain a standard accident model, establish a virtual training scene with a plurality of standard accident models, select a corresponding standard accident model through preset alert training content, select a corresponding virtual reality scene fusion point, combine the standard accident model with a corresponding fusion point, thereby constructing an accident response live-action model, and set triggering characteristics of the corresponding accident response live-action model, thereby generating a virtual training live-action corresponding to the preset training content;
the scene initialization submodule 613 is configured to initialize the virtual exercise live-action through the virtual reality editor.
Further, the training live-action fusion construction sub-module 612 includes:
the virtual exercise scene construction unit 6121 performs numerical simulation on the sudden accident, acquires an influence area from the sudden accident to the end according to the time sequence and the triggering condition, divides the influence area according to the time sequence to obtain different standard accident models, and establishes a virtual exercise scene with a plurality of standard accident models;
the accident response live-action model fusion unit 6122 selects a corresponding standard accident model through the preset police exercise content, selects a corresponding virtual reality scene fusion point, combines the standard accident model with the fusion point to obtain an accident response live-action model with an influence area, and sets the triggering characteristics of the corresponding accident response live-action model;
the virtual exercise live-action construction unit 6123 develops a corresponding accident response live-action model based on the triggering characteristics, and generates an event in the virtual reality scene, thereby acquiring the virtual exercise live-action.
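A minimal sketch of the time-sequenced standard accident model and its trigger-based deployment at a fusion point, as described by units 6121–6123. All class names, field names, and values are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class StandardAccidentModel:
    """A sudden accident discretized into time-ordered influence areas."""
    name: str
    timeline: list       # (elapsed_seconds, influence_radius_m) pairs, onset to end
    trigger: str         # triggering characteristic, e.g. "alarm_pulled"

    def influence_at(self, t):
        # Return the most recent influence radius whose timestamp <= t.
        radius = 0.0
        for ts, r in self.timeline:
            if ts <= t:
                radius = r
        return radius

@dataclass
class VirtualExerciseScene:
    """One accident model fused into the virtual reality scene."""
    fusion_point: tuple              # (x, y, z) location in the VR scene
    model: StandardAccidentModel
    triggered_at: float = None       # simulation time when the trigger fired

    def on_event(self, event, now):
        # Deploy the accident model when its triggering characteristic fires.
        if event == self.model.trigger and self.triggered_at is None:
            self.triggered_at = now

    def current_influence(self, now):
        # Influence area is zero until triggered, then follows the timeline.
        if self.triggered_at is None:
            return 0.0
        return self.model.influence_at(now - self.triggered_at)
```

The design choice here mirrors the text: the influence area is precomputed by numerical simulation and stored as a timeline, so at exercise time the scene only needs a cheap lookup keyed on elapsed time since the trigger.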
Further, virtual character prop construction module 62 includes:
a character construction sub-module 621, configured to construct a virtual character, extract a task character and a prop character, and put the prop character into the virtual exercise live-action, where the task character includes an AI character with a preset behavior pattern and a player character played by collecting real person information;
prop collection construction sub-module 622, configured to collect a plurality of real prop parameters and construct a virtual prop;
a character selection construction sub-module 623 for selecting the AI character or the player character according to the alert exercise content;
the police building sub-module 624 is used for collecting the data information of the training personnel and building a virtual police;
an interaction control sub-module 625, configured to control the player character and the virtual police through interaction technology, and select and use the virtual prop.
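The character-selection step of sub-modules 621 and 623 can be sketched as a filter over the constructed task characters. The `required_kinds` field on the exercise content is an assumed representation for illustration only.

```python
from dataclasses import dataclass

@dataclass
class TaskCharacter:
    name: str
    kind: str  # "ai" (preset behavior pattern) or "player" (captured real person)

def select_task_characters(characters, exercise_content):
    """Pick AI or player characters according to the police exercise content.

    `exercise_content` is assumed to carry a `required_kinds` set, e.g.
    {"ai"} for scripted bystanders or {"player"} for role-played suspects.
    """
    return [c for c in characters if c.kind in exercise_content["required_kinds"]]
```

Usage: for a hostage scenario one might request both kinds, `select_task_characters(chars, {"required_kinds": {"ai", "player"}})`, while a crowd-control drill could use AI characters only.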
Further, the exercise information evaluation module 63 includes:
the training information acquisition sub-module 631 acquires virtual prop use and personnel action information of the virtual police in the virtual exercise live-action, thereby acquiring training information;
the result information acquisition sub-module 632 acquires exercise time information, casualty information, and scene damage information in the virtual exercise live-action, thereby acquiring result information;
and the exercise score calculation sub-module 633 is used for acquiring the exercise score of the training personnel based on the training information and the result information.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (8)

1. The intelligent police command exercise method based on the meta-universe system is characterized by comprising the following steps of:
the method comprises the steps of collecting real scene data, constructing a virtual reality scene based on the real scene data, establishing a virtual exercise scene, extracting a corresponding virtual exercise scene according to preset police exercise content, and fusing the corresponding virtual exercise scene with the selected virtual reality scene to obtain a virtual exercise live-action;
constructing virtual characters and virtual props, and constructing virtual police based on data information of training personnel; extracting a task character and a prop character based on the virtual character, putting the prop character into the virtual exercise live-action, arranging the task character and the virtual police according to preset police exercise content, and selecting a corresponding virtual prop;
acquiring training information of a virtual police in a virtual training live-action, acquiring result information, and acquiring a training score based on the training information and the result information;
wherein acquiring real scene data, constructing a virtual reality scene based on the real scene data, establishing a virtual exercise scene, extracting a corresponding virtual exercise scene according to the preset police exercise content and fusing it with the selected virtual reality scene to obtain the virtual exercise live-action comprises:
acquiring corresponding parameters of a real scene, and constructing a virtual reality scene through the corresponding parameters;
parameterizing sudden accidents, acquiring standard accident models, establishing virtual exercise scenes with a plurality of standard accident models, selecting corresponding standard accident models through preset warning condition exercise contents, selecting corresponding virtual reality scene fusion points, combining the standard accident models with corresponding fusion points, thus constructing accident response real scene models, setting triggering characteristics of the corresponding accident response real scene models, and generating virtual exercise real scenes corresponding to the preset exercise contents;
and initializing the virtual exercise live-action through a virtual reality editor.
2. The method of claim 1, wherein parameterizing the sudden accident, acquiring a standard accident model, establishing a virtual exercise scene with a plurality of standard accident models, selecting a corresponding standard accident model through the preset police exercise content, selecting a corresponding virtual reality scene fusion point, combining the standard accident model with the corresponding fusion point to construct an accident response live-action model, and setting triggering characteristics of the corresponding accident response live-action model to generate a virtual exercise live-action corresponding to the preset exercise content, comprises:
performing numerical simulation on the sudden accident, acquiring an influence area from the sudden accident to the end according to the time sequence and the triggering condition, dividing the influence area according to the time sequence to obtain different standard accident models, and establishing a virtual exercise scene with a plurality of standard accident models;
selecting a corresponding standard accident model through preset police condition exercise content, selecting a corresponding virtual reality scene fusion point, combining the standard accident model with the fusion point to obtain an accident response real model with an influence area, and setting triggering characteristics of the corresponding accident response real model;
and developing a corresponding accident response live-action model based on the triggering characteristics, and generating an event in the virtual reality scene so as to acquire a virtual exercise live-action.
3. The method of claim 1, wherein virtual characters and virtual props are constructed, and virtual police are constructed based on data information of training personnel; extracting a task character and a prop character based on the virtual character, putting the prop character into the virtual exercise live-action, arranging the task character and the virtual police according to preset police exercise content, and selecting a corresponding virtual prop, wherein the method comprises the following steps of:
constructing a virtual character, extracting a task character and a prop character, and putting the prop character into the virtual exercise live-action, wherein the task character comprises an AI character with a preset behavior mode and a player character played by collecting real person information;
collecting a plurality of real prop parameters, and constructing a virtual prop;
selecting the AI character or the player character according to the police exercise content;
collecting data information of training personnel to construct a virtual police;
controlling the player character and the virtual police through interaction technology, and selecting and using the virtual prop.
4. The method according to claim 3, wherein acquiring training information of the virtual police in the virtual exercise live-action, acquiring result information, and obtaining an exercise score based on the training information and the result information comprises:
acquiring virtual prop use and personnel action information of a virtual police in a virtual training live-action so as to acquire training information;
acquiring exercise time information, casualty information and scene damage information in the virtual exercise live-action, thereby acquiring result information;
and acquiring the exercise score of the training personnel based on the training information and the result information.
5. An intelligent police command exercise system based on a meta-universe system, characterized by comprising:
the virtual exercise live-action construction module is used for acquiring real scene data, constructing a virtual reality scene based on the real scene data, establishing a virtual exercise scene, extracting a corresponding virtual exercise scene according to preset warning condition exercise content and fusing the corresponding virtual exercise scene with the selected virtual reality scene to acquire a virtual exercise live-action;
the virtual character prop construction module is used for constructing virtual characters and virtual props and constructing virtual police based on data information of training personnel; extracting a task character and a prop character based on the virtual character, putting the prop character into the virtual exercise live-action, arranging the task character and the virtual police according to preset police exercise content, and selecting a corresponding virtual prop;
the training information evaluation module is used for acquiring training information of a virtual police in the virtual training live-action and acquiring result information, and acquiring a training score based on the training information and the result information;
the virtual exercise live-action construction module comprises:
the real scene acquisition and construction sub-module is used for acquiring corresponding parameters of the real scene and constructing a virtual reality scene through the corresponding parameters;
the training live-action fusion construction sub-module is used for parameterizing sudden accidents, acquiring standard accident models, establishing virtual training scenes with a plurality of standard accident models, selecting corresponding standard accident models through preset warning condition training contents, selecting corresponding virtual reality scene fusion points, combining the standard accident models with the corresponding fusion points, thus constructing an accident response live-action model, setting triggering characteristics of the corresponding accident response live-action model, and generating virtual training live-actions corresponding to the preset training contents;
and the scene initialization sub-module is used for initializing the virtual exercise live-action through the virtual reality editor.
6. The system of claim 5, wherein the training live-action fusion construction sub-module comprises:
the virtual exercise scene construction unit is used for carrying out numerical simulation on the sudden accident, acquiring an influence area from the sudden accident to the end according to the time sequence and the triggering condition, dividing the influence area according to the time sequence to obtain different standard accident models, and establishing a virtual exercise scene with a plurality of standard accident models;
the accident response live-action model fusion unit is used for selecting a corresponding standard accident model through the preset police exercise content, selecting a corresponding virtual reality scene fusion point, combining the standard accident model with the fusion point to obtain an accident response live-action model with an influence area, and setting the triggering characteristics of the corresponding accident response live-action model;
and the virtual exercise live-action construction unit is used for developing a corresponding accident response live-action model based on the triggering characteristics and generating an event in the virtual reality scene so as to acquire the virtual exercise live-action.
7. The system of claim 5, wherein the virtual character prop construction module comprises:
the character construction submodule is used for constructing a virtual character, extracting a task character and a prop character, and putting the prop character into the virtual exercise live-action, wherein the task character comprises an AI character with a preset behavior pattern and a player character played by collecting real person information;
the prop collecting and constructing sub-module is used for collecting a plurality of real prop parameters and constructing virtual props;
the character selection construction submodule is used for selecting the AI character or the player character according to the police exercise content;
the police construction sub-module is used for collecting data information of training personnel and constructing a virtual police;
and the interaction control sub-module is used for controlling the player character and the virtual police through an interaction technology, and selecting and using the virtual prop.
8. The system of claim 7, wherein the exercise information evaluation module comprises:
the training information acquisition sub-module acquires virtual prop use and personnel action information of a virtual police in the virtual training live-action so as to acquire training information;
the result information acquisition sub-module acquires exercise time information, casualty information and scene damage information in the virtual exercise live-action, so as to acquire result information; and the exercise score calculation sub-module is used for acquiring exercise scores of the training personnel based on the training information and the result information.
CN202210147882.9A 2022-02-17 2022-02-17 Intelligent police command exercise system and method based on meta-universe system Active CN114911340B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210147882.9A CN114911340B (en) 2022-02-17 2022-02-17 Intelligent police command exercise system and method based on meta-universe system


Publications (2)

Publication Number Publication Date
CN114911340A CN114911340A (en) 2022-08-16
CN114911340B true CN114911340B (en) 2023-05-05

Family

ID=82763592

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210147882.9A Active CN114911340B (en) 2022-02-17 2022-02-17 Intelligent police command exercise system and method based on meta-universe system

Country Status (1)

Country Link
CN (1) CN114911340B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115456416A (en) * 2022-09-16 2022-12-09 国网新源控股有限公司北京十三陵蓄能电厂 Simulation drilling method and system for virtual-real combination

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108847081A (en) * 2018-07-09 2018-11-20 天维尔信息科技股份有限公司 A kind of fire-fighting simulated training method based on virtual reality technology

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654806A (en) * 2016-03-22 2016-06-08 中国特种设备检测研究院 Simulation training and checking system and method for pipe leakage accidents
CN107754212A (en) * 2016-08-16 2018-03-06 上海掌门科技有限公司 Road-work equipment and its virtual reality exchange method
CN108268128A (en) * 2017-01-03 2018-07-10 天津港焦炭码头有限公司 A kind of safety in production emergency preplan 3DVR virtual reality drilling systems
CN108665754A (en) * 2017-03-31 2018-10-16 深圳市掌网科技股份有限公司 Outdoor safety drilling method based on virtual reality and system
KR20180135761A (en) * 2017-06-13 2018-12-21 금오공과대학교 산학협력단 Methods and apparatus for military training based on mixed reality
CN108154741A (en) * 2017-12-29 2018-06-12 广州点构数码科技有限公司 A kind of policeman's real training drilling system and method based on vr
CN109243233A (en) * 2018-08-31 2019-01-18 苏州竹原信息科技有限公司 A kind of defensive combat drilling system and method based on virtual reality
WO2021025660A1 (en) * 2018-11-20 2021-02-11 Transocean Sedco Forex Ventures Limited Proximity-based personnel safety system and method
CN110335359B (en) * 2019-04-22 2023-01-03 国家电网有限公司 Distribution board fire accident emergency drilling simulation method based on virtual reality technology
CN110706543A (en) * 2019-10-24 2020-01-17 中国计量大学 Chemical industry safety live-action simulation drilling system based on VR technique
CN110794968B (en) * 2019-10-30 2024-04-16 深圳市城市公共安全技术研究院有限公司 Emergency drilling interaction system and method based on scene construction

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108847081A (en) * 2018-07-09 2018-11-20 天维尔信息科技股份有限公司 A kind of fire-fighting simulated training method based on virtual reality technology

Also Published As

Publication number Publication date
CN114911340A (en) 2022-08-16

Similar Documents

Publication Publication Date Title
CN106530887B (en) Fire scene simulating escape method and device
CN110339569B (en) Method and device for controlling virtual role in game scene
US10918955B2 (en) Techniques for displaying character play records on a game map
CN109658516B (en) VR training scene creation method, VR training system and computer-readable storage medium
CN114911340B (en) Intelligent police command exercise system and method based on meta-universe system
CN110335359A (en) Distribution board firing accident emergency drilling analogy method based on virtual reality technology
CN108376198B (en) Crowd simulation method and system based on virtual reality
CN108399815A (en) A kind of security risk based on VR looks into the method and its system except rehearsal
CN107145223A (en) Multi-point interaction control system and method based on Unity d engines and the VR helmets
CN112435348A (en) Method and device for browsing event activity virtual venue
CN107469315A (en) A kind of fighting training system
CN113559510A (en) Virtual skill control method, device, equipment and computer readable storage medium
CN111383642A (en) Voice response method based on neural network, storage medium and terminal equipment
CN106325524A (en) Method and device for acquiring instruction
CN108444076B (en) Push method, air conditioning equipment, mobile terminal and storage medium
CN111773669B (en) Method and device for generating virtual object in virtual environment
KR101872000B1 (en) Method for applying interaction in Virtual Reality
CN115061570B (en) High-fidelity simulation training system and method based on real countermeasure data
CN111124125A (en) Police affair training method and system based on virtual reality
CN113268626B (en) Data processing method, device, electronic equipment and storage medium
CN111369861A (en) Virtual reality technology-based simulated fighter plane driving system and method
CN116189516A (en) Physical laboratory system based on meta universe
CN114186696A (en) Visual system and method for AI training teaching
CN113867532A (en) Evaluation system and evaluation method based on virtual reality skill training
CN113534961A (en) Secret education training method and system based on VR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230726

Address after: 100029 Third Floor of Yansha Shengshi Building, 23 North Third Ring Road, Xicheng District, Beijing

Patentee after: Guozheng Xintong (Beijing) Technology Co.,Ltd.

Address before: 100029 Third Floor of Yansha Shengshi Building, 23 North Third Ring Road, Xicheng District, Beijing

Patentee before: GUOZHENGTONG TECHNOLOGY Co.,Ltd.