CN113720202A - Immersive 3D image shooting training target range software system and method

Info

Publication number: CN113720202A
Application number: CN202010399145.9A
Authority: CN (China)
Prior art keywords: shooting; image; shooting point; instruction; information
Prior art date: 2020-05-12
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 谢家满, 俞田, 王海洪
Current Assignee: Beijing Shichuang Culture Development Co ltd; Guangdong Renguang Technology Co ltd
Original Assignee: Beijing Shichuang Culture Development Co ltd; Guangdong Renguang Technology Co ltd
Filing date: 2020-05-12
Publication date: 2021-11-30
Application filed by Beijing Shichuang Culture Development Co ltd and Guangdong Renguang Technology Co ltd
Priority to CN202010399145.9A

Classifications

    • F: MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41: WEAPONS
    • F41A: FUNCTIONAL FEATURES OR DETAILS COMMON TO BOTH SMALLARMS AND ORDNANCE, e.g. CANNONS; MOUNTINGS FOR SMALLARMS OR ORDNANCE
    • F41A33/00: Adaptations for training; Gun simulators
    • G: PHYSICS
    • G03: PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B: APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B37/00: Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • G03B37/04: Panoramic or wide-screen photography; Photographing extended surfaces; Photographing internal surfaces with cameras or projectors providing touching or overlapping fields of view
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/12: Picture reproducers
    • H04N9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141: Constructional details thereof
    • H04N9/3147: Multi-projection systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/12: Picture reproducers
    • H04N9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179: Video signal processing therefor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10048: Infrared image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an immersive 3D image shooting training shooting range software system in which an infrared camera module and a thermal sensing camera module acquire shooting point image information. The server comprises a storage unit, an information processing unit, a computing unit and an image real-time rendering engine unit. The storage unit is configured to store audio data and video data; the video data comprise a plurality of preset virtual scene data packets, and each virtual scene data packet comprises a current virtual scene and a plurality of events to be triggered. The current virtual scene comprises a plurality of pieces of preset space coordinate information, and each piece of preset space coordinate information corresponds, within its area range, to at least one event to be triggered. The information processing unit, the computing unit and the image real-time rendering engine unit convert and match the shooting point image data, so that the event change caused by a shooting point is triggered and the virtual scene changes with the shooting point, restoring the case vividly.

Description

Immersive 3D image shooting training target range software system and method
Technical Field
The invention relates to the technical field of shooting training, in particular to an immersive 3D image shooting training shooting range software system and method.
Background
Most existing shooting training ranges use two-dimensional plane projection: the range projects pictures onto a target wall through a projector, so the display lacks stereoscopic and spatial depth, and a participant cannot merge into the played case scene. When image content is played in such a range, the participant can only watch the case scene information and passively imagine taking part; he cannot be placed inside the scene to personally experience what the protagonist of the case actually experienced, so training of the logical judgment and corresponding actions required in complex situations is poor.
In a two-dimensional plane-projection shooting training field, the shooting targets are monotonous, environmental constraints are large, and trainees get no realistic on-scene experience during case-restoration training. The judgment result after shooting is binary, only success or failure, so only the participants' hit accuracy can be trained; real case conditions, and the logical judgment and action responses that participants need when handling complex cases, cannot be simulated.
Therefore, to solve these problems in the prior art, it is important to provide an immersive 3D image shooting training shooting range software system technology that can restore a scene, feed back results and shooting accuracy in real time, and does not need a large shooting field.
Disclosure of Invention
The invention aims to avoid the defects of the prior art and provides an immersive 3D image shooting training shooting range software system that creates an immersive 3D shooting training space, solving the problems that traditional shooting training systems are monotonous and outdated and place heavy demands on space, training venues and equipment. The case is restored in real time and the case scene is reproduced, so that trainees train while immersed in the case. By capturing the shooting points, the trainees' shooting scores are fed back and displayed in real time, training both the on-scene adaptability and the shooting accuracy of the participants.
In order to solve the problems, the invention aims to realize the following technical scheme:
an immersive 3D image shooting training shooting range software system comprises a shooting area and protection areas arranged on two sides of the shooting area; further comprising:
the projectors, which respectively project onto the shooting area and the protection area; the single display areas of the projectors are distributed in a matrix form, and the shooting area and the protection area are spliced to form an integral display area;
the infrared camera module is used for collecting shooting point image information of the infrared shooting gun and sending the shooting point image information to the switch;
the thermal sensing camera module is used for collecting shooting point image information of the live ammunition gun and sending the shooting point image information to the switch;
the switch is used for acquiring shooting point image information of the infrared camera module or the thermal camera module and sending the shooting point image information to the server;
the server comprises a storage unit, an information processing unit, a calculation unit and an image real-time rendering engine unit;
wherein the storage unit is configured to store audio data and video data; the video data comprises a plurality of preset virtual scene data packets; the virtual scene data packet comprises a current virtual scene and a plurality of events to be triggered; the current virtual scene comprises a plurality of pieces of preset space coordinate information, and each preset space coordinate information of the current virtual scene corresponds to at least one event to be triggered in an area range;
the information processing unit is configured to receive shooting point image information from the switch, convert the shooting point image information into shooting point coordinate information and send the shooting point coordinate information to the computing unit;
the computing unit is configured to convert the shooting point coordinate information into shooting point space coordinate information and match it against each piece of preset space coordinate information of the current virtual scene; if the shooting point space coordinate information falls into the area where one or more pieces of preset space coordinate information of the current virtual scene are located, the computing unit sends a trigger instruction to the image real-time rendering engine unit (a minimal sketch of this matching-and-trigger flow is given after this component list);
the image real-time rendering engine unit is configured to receive the trigger instruction and, according to it, trigger all events to be triggered in the area where the shooting point space coordinate information is located; the image real-time rendering engine unit drives each event to be triggered to run according to its event logic, compiles and renders the images in the event in real time, and sends the compiled and rendered video data and audio data to the video splicing fusion device and the sound system respectively;
the sound system is used for receiving and outputting audio data;
the video splicing fusion device is used for dividing a frame image in the video data into a plurality of image blocks according to the single display area corresponding to each projector, matching the characteristics of each image block, searching key points for splicing the image blocks, performing edge fusion on the associated image blocks, and sending the fused image blocks to the video processor;
and the video processor, which is used for reintegrating the image blocks corresponding to the same single display area into sub-video data and then sending each sub-video data to the corresponding projector, forming a complete picture played on the shooting area and the protection area.
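For illustration only, the following is a minimal Python sketch of the matching-and-trigger flow described above. The names (TriggerRegion, match_regions, dispatch, render_engine.trigger) are hypothetical, and the circular-region test is an assumed simplification, since the patent does not fix the shape of the area around each preset space coordinate.

```python
from dataclasses import dataclass

@dataclass
class TriggerRegion:
    """A preset space coordinate together with the area it governs."""
    center: tuple      # preset space coordinate (x, y)
    radius: float      # assumed circular extent of the region
    event_ids: list    # events to be triggered by a hit in this region

def match_regions(shot_xy, regions):
    """Return every region whose area contains the shot point."""
    sx, sy = shot_xy
    hits = []
    for region in regions:
        cx, cy = region.center
        if (sx - cx) ** 2 + (sy - cy) ** 2 <= region.radius ** 2:
            hits.append(region)
    return hits

def dispatch(shot_xy, regions, render_engine):
    """Send a trigger instruction for each matched region."""
    for region in match_regions(shot_xy, regions):
        for event_id in region.event_ids:
            render_engine.trigger(event_id)  # engine then runs event logic
```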
The shooting point image information collected by the infrared camera module or the thermal sensing camera module is a grayscale image; the computing unit optimizes the difference comparison of the grayscale images through an algorithm and eliminates interference items in them, thereby obtaining the shooting point coordinate information in the grayscale image.
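The patent does not disclose the specific algorithm. The following Python sketch shows one conventional way such a detection could work, using background differencing with a threshold and a minimum-area check to suppress interference, and taking a centroid as the shot-point pixel coordinate; all names and parameter values are assumptions.

```python
import numpy as np

def locate_shot_point(frame, background, threshold=40, min_area=3):
    """Difference a grayscale frame against a reference background,
    suppress small interference blobs, and return the centroid of the
    remaining bright region as the shot-point pixel coordinate."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > threshold            # keep only strong changes
    if mask.sum() < min_area:          # reject noise / interference items
        return None
    ys, xs = np.nonzero(mask)
    return int(xs.mean()), int(ys.mean())   # (column, row) centroid
```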
As described above, the event to be triggered comprises a plurality of virtual scenes and virtual characters; an event logic running track is pre-stored for these virtual scenes or virtual characters, and when the event to be triggered is triggered, they are played frame by frame along a time axis according to the event logic.
The image real-time rendering engine unit obtains one or more events to be triggered according to the received triggering instruction, and arranges, superposes, compiles and renders the events to be triggered to form video data.
The server performs grid division on the shooting point image information acquired by the infrared camera module or the thermal sensing camera module according to the camera resolution M × N, forming M × N grid points, where M ≥ 1 and N ≥ 1. The M × N grid points are uniformly distributed in the display area, where the display area is either a single display area or the entire display area.
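As a worked illustration of this grid division (hypothetical function names, not the patent's implementation), the sketch below maps a camera pixel to its grid point and spreads the M × N grid uniformly over a display area; with M × N = 640 × 480 it yields the 307200 grid points mentioned in the description below.

```python
def pixel_to_grid_index(px, py, cam_w=640, cam_h=480):
    """Row-major index of the grid point a camera pixel falls on.
    With M x N = 640 x 480 there are 307200 grid points in total."""
    assert 0 <= px < cam_w and 0 <= py < cam_h
    return py * cam_w + px

def grid_to_display(px, py, cam_w, cam_h, disp_w, disp_h):
    """Spread the M x N grid points uniformly over a display area
    (a single display area or the entire display area)."""
    return px * disp_w / cam_w, py * disp_h / cam_h

# Example: the centre pixel of a 640 x 480 camera
# pixel_to_grid_index(320, 240) -> 153920
```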
In the above, the infrared camera module includes an infrared signal emitter and an infrared sensor.
In the above, the thermal camera module includes a thermal imaging sensor and a thermal signal transmitter.
Preferably, the system further comprises a main control display for displaying the control interface.
Specifically, shooting points of an infrared shooting gun are collected through the infrared camera module, and live ammunition shooting points are collected through the thermal camera module. The information processing unit divides the picture collected by the thermal camera module (or the infrared camera module) into grid points according to the camera resolution; for example, a resolution of 640 × 480 yields 307200 grid points, distributed uniformly over the projection display area. The optimal projection range is determined by controlling the projection distance and lens focal length, ensuring that the mapped grid spacing meets the precision requirement; since a single camera that meets the precision requirement cannot cover the whole shooting area, several cameras must be spliced to complete the picture projection work. When a shot point is captured by the camera, the frame is converted into a grayscale image; the difference comparison of the grayscale image is optimized through an algorithm and interference items are eliminated, so that the coordinate of the gray point within the preset grid points is obtained. Because the coordinate information and the picture content have a preset mapping relation, when a shooting point is captured its coordinates in the camera can be mapped into coordinates of the picture content in the image engine. The acquired coordinate data are calibrated and calculated in real time, the corresponding visuals and case logic are triggered according to the result, and the picture content is computed in real time from the received data by the computer graphics real-time rendering engine; combined with multi-channel visual synchronization and visual correction technologies, the visuals are projected in real time onto the walls of the space by the projection equipment, creating an immersive shooting space with high-resolution three-dimensional audio-visual images and a multi-degree-of-freedom interactive experience.
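The preset mapping between camera coordinates and picture-content coordinates is not specified in the patent. A planar homography calibrated from a few known correspondences (for example, projected markers shot during setup) is one common realization; the sketch below is written under that assumption.

```python
import numpy as np

def fit_homography(cam_pts, scene_pts):
    """Solve the 3x3 matrix H with scene ~ H @ cam from 4+ point pairs
    (standard direct linear transform)."""
    A = []
    for (x, y), (u, v) in zip(cam_pts, scene_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)          # null vector of A, reshaped

def map_point(H, x, y):
    """Map one camera coordinate into picture-content coordinates."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```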
In this technical scheme, the case restoration can be iteratively upgraded: events to be triggered are authored for the many shooting points that can be hit, so that hitting a shooting point triggers the corresponding event and the story line of the case becomes richer. Shooting points are collected continuously and the data transmitted promptly, so the displayed images change accordingly; the current case process and its materials are restored and simulated more truthfully, and trainees can train fully immersed in the 3D case environment. The system can show different actions and various reaction scenes when different positions are hit, making training more realistic and scientific. In its initial stage, the case restoration mainly follows preset story line logic, and case development can be authored on the basis of the police training system as a whole and the specific course of real cases. Subsequent iteration will, on the one hand, keep enriching the possible developments of a case by authoring active programs, so that the influence of every action a trainee may take is integrated into the system; on the other hand, an AI algorithm and a computer deep-learning module will be added, so that the system keeps learning from trainees' behavior during their participation, enriching the randomness of case restoration and bringing the situations infinitely close to reality.
The working principle of this technical scheme is as follows: a participant uses an infrared gun or a live-ammunition gun to shoot at a specified area (such as the shooting area); the shooting point in the area is recorded by the infrared or thermal imaging sensor and transmitted to the server through the switch; the information processing unit, the computing unit and the image real-time rendering engine unit in the server analyze the corresponding shooting point coordinate information into real-time data (i.e., space coordinate information) that controls the sound system and the video system; the computer graphics real-time rendering engine unit computes the picture content in real time from the received data and, combined with multi-channel visual synchronization and visual correction technologies, projects the visuals in real time onto the walls of the space with the projection equipment, creating an immersive shooting space, while the program updates the main control display in real time to control the presentation of interface data.
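Put together, the working principle amounts to the following loop, shown here as a Python sketch with hypothetical interfaces for the units described above; none of these method names come from the patent.

```python
def range_loop(camera, info_processor, compute_unit, render_engine,
               splicer, sound_system):
    """One pass of the capture-convert-match-render pipeline."""
    frame = camera.capture()                        # infrared / thermal image
    shot_px = info_processor.to_coordinates(frame)  # image -> pixel coords
    if shot_px is None:
        return                                      # no shot in this frame
    shot_xy = compute_unit.to_space(shot_px)        # pixel -> space coords
    for event in compute_unit.match(shot_xy):       # preset region matching
        video, audio = render_engine.run(event)     # compile and render
        splicer.blend_and_project(video)            # tile, fuse, project
        sound_system.play(audio)                    # synchronized audio
```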
Preferably, the firing zone is located on an inner side of the firing chamber.
Preferably, the projector is arranged at the top in the shooting room.
Preferably, the surface of the shooting area is provided with a bullet collector.
As described above, the bullet collector comprises, from outside to inside, a first bulletproof steel plate, a PU board and a rubber curtain arranged in sequence; the first bulletproof steel plate is fixedly connected to the shooting area through a first vertical frame, and the PU board and the rubber curtain are fixedly connected to the inner top of the shooting room through a horizontal frame.
Preferably, the surface of the protection zone is provided with a bulletproof layer.
The bulletproof layer comprises, glued together in sequence from outside to inside, a second bulletproof steel plate, a solid wood keel frame, a multilayer board and a wood wool board; the second bulletproof steel plate is fixedly connected to the protection area through a second vertical frame.
The invention also provides a shooting training range generation method applied to the above shooting training range software system, comprising the following steps:
Step S1: capturing a shooting point image information acquisition instruction and transmitting the instruction to the system; according to the instruction information, acquiring the shooting point images formed in the shooting area by the infrared camera module or the thermal camera module;
Step S2: capturing a shooting point coordinate information generation instruction and transmitting the instruction to the system; according to the instruction information, acquiring the shooting point image information and processing and converting it into shooting point coordinate information;
Step S3: capturing a secondary conversion instruction for the shooting point coordinate information and transmitting the instruction to the system; according to the instruction information, converting the shooting point coordinate information into shooting point space coordinate information;
Step S4: capturing a space coordinate information matching instruction and transmitting the instruction to the system; according to the instruction information, matching the shooting point space coordinate information against each piece of preset space coordinate information of the current virtual scene; if the shooting point space coordinate information falls in the area where one or more pieces of preset space coordinate information of the current virtual scene are located, executing step S5;
Step S5: capturing a trigger instruction for the events to be triggered and transmitting the instruction to the system; according to the instruction information, triggering all events to be triggered in the area where the shooting point space coordinate information is located; driving the events to be triggered to run according to event logic, compiling and rendering the images in the events in real time, and playing the virtual scenes or virtual characters in the events frame by frame along a time axis according to the event logic to form video data;
Step S6: capturing a video splicing and fusion instruction and transmitting the instruction to the system; according to the instruction information, dividing each frame image of the video data into a plurality of image blocks according to the single display area corresponding to each projector, matching the features of the image blocks, searching for key points for splicing them, and then performing edge fusion on the associated image blocks to form a fused image (a minimal sketch of this edge-fusion step follows these steps);
Step S7: capturing a video processing instruction and transmitting the instruction to the system; according to the instruction information, reintegrating the image blocks corresponding to the same single display area into sub-video data and sending each sub-video data to the corresponding projector;
Step S8: capturing an audio playing instruction and transmitting the instruction to the system; according to the instruction information, playing the audio data and video data synchronously to form a complete picture on the shooting area and the protection area;
Step S9: repeatedly executing steps S1 to S8 and playing iteratively until the preset event to be triggered corresponding to the shooting point is triggered.
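The splicing and edge fusion of steps S6 and S7 can be illustrated by the following sketch, which cuts a frame into overlapping projector tiles and applies a linear alpha ramp over the shared borders so that two overlapping projections add up to full brightness. It assumes a single-channel frame and, for simplicity, feathers all four edges of every tile; the feature matching and key-point search of step S6 are not reproduced here.

```python
import numpy as np

def split_with_overlap(frame, rows, cols, overlap):
    """Cut a frame into rows x cols projector tiles whose adjoining
    edges share `overlap` pixels (the image blocks of step S6)."""
    h, w = frame.shape[:2]
    tile_h, tile_w = h // rows, w // cols
    tiles = {}
    for r in range(rows):
        for c in range(cols):
            y0 = max(r * tile_h - overlap, 0)
            y1 = min((r + 1) * tile_h + overlap, h)
            x0 = max(c * tile_w - overlap, 0)
            x1 = min((c + 1) * tile_w + overlap, w)
            tiles[(r, c)] = frame[y0:y1, x0:x1].copy()
    return tiles

def feather_edges(tile, overlap):
    """Linear alpha ramp over the borders, so that where two tiles
    overlap their projected intensities sum to full brightness."""
    ramp = np.linspace(0.0, 1.0, overlap)
    out = tile.astype(float)
    out[:, :overlap] *= ramp                  # fade in from the left edge
    out[:, -overlap:] *= ramp[::-1]           # fade out toward the right
    out[:overlap, :] *= ramp[:, None]         # top edge
    out[-overlap:, :] *= ramp[::-1][:, None]  # bottom edge
    return out.astype(tile.dtype)
```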
The invention further provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor; when the program is executed by the processor, it implements the shooting training range generation method described above.
The invention further provides a computer-readable medium on which a computer program is stored; when the program is executed by a processor, it carries out the shooting training range generation method of the invention described above.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
The invention has the beneficial effects that:
the shooting training shooting range software system provided by the invention is 3D projection and is not limited by a field; the system combines the infrared shooting targets or the live ammunition shooting targets to generate a virtual simulation target range surrounded by a three-dimensional stereo projection picture; shooting point location data are collected in real time through a camera module, and conversion matching of shooting point location image data is achieved through an information processing unit, a computing unit and an image real-time rendering engine unit, so that event change caused by shooting point locations is triggered, a virtual scene is changed along with the shooting point locations, and the effect of restoring cases to be vivid is achieved; the wireless approach is to the real medium-bounce feedback reaction, and the event result formed by the shooting result is fed back to the participant by combining the event logic of the event to be triggered, which is stored in advance by the system, so that the timeliness of the feedback obtained by the participant after shooting is ensured. Accuracy and immersion.
Drawings
FIG. 1 is a schematic structural diagram of a shooting training range software system provided by the present invention;
FIG. 2 is a schematic diagram of the operational principle of the shooting training range software system provided by the present invention;
FIG. 3 is a schematic diagram of a server structure of a shooting training range software system provided by the present invention;
fig. 4 is a schematic flow chart of the shooting training range generation method provided by the invention.
Detailed Description
The following further describes embodiments of the present invention with reference to the drawings.
As shown in fig. 1 to 3, the embodiment provides an immersive 3D image shooting training shooting range software system, which includes a shooting area and protection areas disposed on two sides of the shooting area; further comprising:
the projectors, which respectively project onto the shooting area and the protection area; the single display areas of the projectors are distributed in a matrix form, and the shooting area and the protection area are spliced to form an integral display area;
the infrared camera module is used for collecting shooting point image information of the infrared shooting gun and sending the shooting point image information to the switch;
the thermal sensing camera module is used for collecting shooting point image information of the live ammunition gun and sending the shooting point image information to the switch;
the switch is used for acquiring shooting point image information of the infrared camera module or the thermal camera module and sending the shooting point image information to the server;
the server comprises a storage unit, an information processing unit, a calculation unit and an image real-time rendering engine unit;
wherein the storage unit is configured to store audio data and video data; the video data comprises a plurality of preset virtual scene data packets; the virtual scene data packet comprises a current virtual scene and a plurality of events to be triggered; the current virtual scene comprises a plurality of pieces of preset space coordinate information, and each preset space coordinate information of the current virtual scene corresponds to at least one event to be triggered in an area range;
the information processing unit is configured to receive shooting point image information from the switch, convert the shooting point image information into shooting point coordinate information and send the shooting point coordinate information to the computing unit;
the computing unit is configured to convert shooting point location coordinate information into shooting point location space coordinate information and match the shooting point location space coordinate information with each piece of preset space coordinate information of the current virtual scene; if the shooting point space coordinate information falls into the area where one or more preset space coordinate information of the current virtual scene is located, the computing unit sends a trigger instruction to the image real-time rendering engine unit;
the image real-time rendering engine unit is configured to receive a trigger instruction, and trigger all events to be triggered in an area where shooting point space coordinate information is located according to the trigger instruction; the image real-time rendering engine unit drives the event to be triggered to run according to event logic, images in the event to be triggered are compiled and rendered in real time, and the video data and the audio data which are compiled and rendered are respectively sent to the video splicing fusion device and the sound system;
the sound system is used for receiving and outputting audio data;
the video splicing fusion device is used for dividing a frame image in the video data into a plurality of image blocks according to the single display area corresponding to each projector, matching the characteristics of each image block, searching key points for splicing the image blocks, performing edge fusion on the associated image blocks, and sending the fused image blocks to the video processor;
the video processor, which is used for reintegrating the image blocks corresponding to the same single display area into sub-video data and then sending each sub-video data to the corresponding projector, forming a complete picture played on the shooting area and the protection area;
and the main control display is used for displaying a control interface.
In this embodiment, suppose the virtual scene currently projected is a glass bottle factory containing many glass bottles, one of which, bottle X, is set as the hit target. Bottle X is divided into two regions, the bottle mouth X1 (corresponding to space coordinates (x1, y1)) and the bottle body X2 (corresponding to space coordinates (x2, y2)), so a bottle-mouth region and a bottle-body region are defined around the two coordinates. A participant fires a live round at the mouth of bottle X and hits the bottle; the thermal sensing camera module captures an image containing the shooting point and transmits it to the server through the switch. The information processing unit of the server processes it into coordinate information, which the computing unit secondarily processes into space coordinate information (X, Y). The computing unit then matches the shot's space coordinates (X, Y) against the regions of (x1, y1) and (x2, y2): if (X, Y) is within the range of the bottle-mouth coordinates (x1, y1), the event to be triggered corresponding to (x1, y1) is triggered, producing the visual-effect animation and sound effect of the bottle mouth bursting; if (X, Y) is within the range of the bottle-body coordinates (x2, y2), the event corresponding to (x2, y2) is triggered, producing the animation and sound of the bottle body bursting. The whole glass factory scene then changes and iterates after bottle X bursts, realizing a shooting training range with real-time feedback and a better experience.
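In terms of the hypothetical TriggerRegion/match_regions sketch given after the component list above, this embodiment could be modelled as follows; the coordinates, radii and event names are invented for illustration only.

```python
# Reuses the hypothetical TriggerRegion / match_regions helpers from
# the earlier sketch; all values below are made up.
mouth = TriggerRegion(center=(120.0, 310.0), radius=8.0,
                      event_ids=["mouth_burst_animation"])
body = TriggerRegion(center=(120.0, 270.0), radius=20.0,
                     event_ids=["body_burst_animation"])

shot_xy = (118.0, 306.0)                  # converted space coordinate (X, Y)
hit = match_regions(shot_xy, [mouth, body])
# -> [mouth]: the bottle-mouth burst animation and sound effect play
```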
If the participant's shooting target is a character, hitting different parts of the character produces different knock-down effects; a preset event to be triggered is set at each specific space coordinate, and when an event is triggered, the shooting effect is fed back in real time through continuously updated, iterated animation, improving timeliness, accuracy and the sense of immersion.
In this embodiment, the shooting point image information collected by the infrared camera module or the thermal camera module is a gray image; the calculation unit optimizes the difference comparison of the gray level images through an algorithm, eliminates interference items in the gray level images, and obtains shooting point coordinate information in the gray level images.
In this embodiment, the event to be triggered includes a plurality of virtual scenes and virtual characters, where event logic running tracks are stored in advance in the plurality of virtual scenes or virtual characters, and when the event to be triggered is triggered, the plurality of virtual scenes or virtual characters are played in a frame image manner along a time axis according to event logic.
In this embodiment, the image real-time rendering engine unit obtains one or more events to be triggered according to the received trigger instruction, and performs arrangement, superposition, compilation and rendering on the multiple events to be triggered to form video data.
In this embodiment, the server performs grid division on the shooting point image information acquired by the infrared camera module or the thermal camera module according to the camera resolution M × N, forming M × N grid points, where M ≥ 1 and N ≥ 1. The M × N grid points are uniformly distributed in the display area, where the display area is either a single display area or the entire display area.
As shown in fig. 4, this embodiment also provides a shooting training range generation method applied to the shooting training range software system, comprising:
Step S1: capturing a shooting point image information acquisition instruction and transmitting the instruction to the system; according to the instruction information, acquiring the shooting point images formed in the shooting area by the infrared camera module or the thermal camera module;
Step S2: capturing a shooting point coordinate information generation instruction and transmitting the instruction to the system; according to the instruction information, acquiring the shooting point image information and processing and converting it into shooting point coordinate information;
Step S3: capturing a secondary conversion instruction for the shooting point coordinate information and transmitting the instruction to the system; according to the instruction information, converting the shooting point coordinate information into shooting point space coordinate information;
Step S4: capturing a space coordinate information matching instruction and transmitting the instruction to the system; according to the instruction information, matching the shooting point space coordinate information against each piece of preset space coordinate information of the current virtual scene; if the shooting point space coordinate information falls in the area where one or more pieces of preset space coordinate information of the current virtual scene are located, executing step S5;
Step S5: capturing a trigger instruction for the events to be triggered and transmitting the instruction to the system; according to the instruction information, triggering all events to be triggered in the area where the shooting point space coordinate information is located; driving the events to be triggered to run according to event logic, compiling and rendering the images in the events in real time, and playing the virtual scenes or virtual characters in the events frame by frame along a time axis according to the event logic to form video data;
Step S6: capturing a video splicing and fusion instruction and transmitting the instruction to the system; according to the instruction information, dividing each frame image of the video data into a plurality of image blocks according to the single display area corresponding to each projector, matching the features of the image blocks, searching for key points for splicing them, and then performing edge fusion on the associated image blocks to form a fused image;
Step S7: capturing a video processing instruction and transmitting the instruction to the system; according to the instruction information, reintegrating the image blocks corresponding to the same single display area into sub-video data and sending each sub-video data to the corresponding projector;
Step S8: capturing an audio playing instruction and transmitting the instruction to the system; according to the instruction information, playing the audio data and video data synchronously to form a complete picture on the shooting area and the protection area;
Step S9: repeatedly executing steps S1 to S8 and playing iteratively until the preset event to be triggered corresponding to the shooting point is triggered.
Variations and modifications of the above-described embodiments may occur to those skilled in the art within the scope and spirit of the above description. Therefore, the present invention is not limited to the specific embodiments disclosed and described above, and such modifications and variations should fall within the scope of the claims of the present invention. Furthermore, although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims (10)

1. An immersive 3D image shooting training shooting range software system comprises a shooting area and protection areas arranged on two sides of the shooting area; it is characterized by also comprising:
the projectors, which respectively project onto the shooting area and the protection area; the single display areas of the projectors are distributed in a matrix form, and the shooting area and the protection area are spliced to form an integral display area;
the infrared camera module is used for collecting shooting point image information of the infrared shooting gun and sending the shooting point image information to the switch;
the thermal sensing camera module is used for collecting shooting point image information of the live ammunition gun and sending the shooting point image information to the switch;
the switch is used for acquiring shooting point image information of the infrared camera module or the thermal camera module and sending the shooting point image information to the server;
the server comprises a storage unit, an information processing unit, a calculation unit and an image real-time rendering engine unit;
wherein the storage unit is configured to store audio data and video data; the video data comprises a plurality of preset virtual scene data packets; the virtual scene data packet comprises a current virtual scene and a plurality of events to be triggered; the current virtual scene comprises a plurality of pieces of preset space coordinate information, and each preset space coordinate information of the current virtual scene corresponds to at least one event to be triggered in an area range;
the information processing unit is configured to receive shooting point image information from the switch, convert the shooting point image information into shooting point coordinate information and send the shooting point coordinate information to the computing unit;
the computing unit is configured to convert shooting point location coordinate information into shooting point location space coordinate information and match the shooting point location space coordinate information with each piece of preset space coordinate information of the current virtual scene; if the shooting point space coordinate information falls into the area where one or more preset space coordinate information of the current virtual scene is located, the computing unit sends a trigger instruction to the image real-time rendering engine unit;
the image real-time rendering engine unit is configured to receive a trigger instruction, and trigger all events to be triggered in an area where shooting point space coordinate information is located according to the trigger instruction; the image real-time rendering engine unit drives the event to be triggered to run according to event logic, images in the event to be triggered are compiled and rendered in real time, and the video data and the audio data which are compiled and rendered are respectively sent to the video splicing fusion device and the sound system;
the sound system is used for receiving and outputting audio data;
the video splicing fusion device is used for dividing a frame image in the video data into a plurality of image blocks according to the single display area corresponding to each projector, matching the characteristics of each image block, searching key points for splicing the image blocks, performing edge fusion on the associated image blocks, and sending the fused image blocks to the video processor;
and the video processor, which is used for reintegrating the image blocks corresponding to the same single display area into sub-video data and then sending each sub-video data to the corresponding projector, forming a complete picture played on the shooting area and the protection area.
2. The immersive 3D image shooting training shooting range software system of claim 1, wherein shooting point image information collected by the infrared camera module or the thermal camera module is a grayscale image; the calculation unit optimizes the difference comparison of the gray level images through an algorithm, eliminates interference items in the gray level images, and obtains shooting point coordinate information in the gray level images.
3. The immersive 3D image shooting training shooting range software system according to claim 1, wherein the event to be triggered comprises a plurality of virtual scenes and virtual characters, the virtual scenes or the virtual characters are stored with event logic running tracks in advance, and when the event to be triggered is triggered, the virtual scenes or the virtual characters are played in a frame mode along a time axis according to the event logic.
4. The immersive 3D video shooting training shooting range software system of claim 1, wherein the image real-time rendering engine unit obtains one or more events to be triggered according to the received trigger instruction, and arranges, overlaps, compiles, and renders the plurality of events to be triggered to form video data.
5. The immersive 3D video shooting training shooting range software system of claim 1, wherein the server grids shooting point image information collected by the infrared camera module or the thermal camera module according to a camera resolution M × N to form M × N grid points; wherein M is more than or equal to 1, and N is more than or equal to 1.
6. The immersive 3D image shooting training shooting range software system of claim 5, wherein the M x N grid points are evenly distributed in the display area.
7. The immersive 3D image shooting training range software system of claim 1, wherein the system further comprises a master control display for displaying a control interface.
8. An immersive 3D image shooting training range generation method applied to the immersive 3D image shooting training range software system according to any one of claims 1 to 7, wherein the generation method comprises the following steps:
Step S1: capturing a shooting point image information acquisition instruction and transmitting the instruction to the system; according to the instruction information, acquiring the shooting point images formed in the shooting area by the infrared camera module or the thermal camera module;
Step S2: capturing a shooting point coordinate information generation instruction and transmitting the instruction to the system; according to the instruction information, acquiring the shooting point image information and processing and converting it into shooting point coordinate information;
Step S3: capturing a secondary conversion instruction for the shooting point coordinate information and transmitting the instruction to the system; according to the instruction information, converting the shooting point coordinate information into shooting point space coordinate information;
Step S4: capturing a space coordinate information matching instruction and transmitting the instruction to the system; according to the instruction information, matching the shooting point space coordinate information against each piece of preset space coordinate information of the current virtual scene; if the shooting point space coordinate information falls in the area where one or more pieces of preset space coordinate information of the current virtual scene are located, executing step S5;
Step S5: capturing a trigger instruction for the events to be triggered and transmitting the instruction to the system; according to the instruction information, triggering all events to be triggered in the area where the shooting point space coordinate information is located; driving the events to be triggered to run according to event logic, compiling and rendering the images in the events in real time, and playing the virtual scenes or virtual characters in the events frame by frame along a time axis according to the event logic to form video data;
Step S6: capturing a video splicing and fusion instruction and transmitting the instruction to the system; according to the instruction information, dividing each frame image of the video data into a plurality of image blocks according to the single display area corresponding to each projector, matching the features of the image blocks, searching for key points for splicing them, and then performing edge fusion on the associated image blocks to form a fused image;
Step S7: capturing a video processing instruction and transmitting the instruction to the system; according to the instruction information, reintegrating the image blocks corresponding to the same single display area into sub-video data and sending each sub-video data to the corresponding projector;
Step S8: capturing an audio playing instruction and transmitting the instruction to the system; according to the instruction information, playing the audio data and video data synchronously to form a complete picture on the shooting area and the protection area;
Step S9: repeatedly executing steps S1 to S8 and playing iteratively until the preset event to be triggered corresponding to the shooting point is triggered.
9. An electronic device, characterized by comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the shooting training range generation method of claim 8.
10. A computer-readable medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the shooting training range generation method of claim 8.
CN202010399145.9A (filed 2020-05-12, priority 2020-05-12): Immersive 3D image shooting training target range software system and method. Status: Pending. Publication: CN113720202A.

Priority Applications (1)

Application Number: CN202010399145.9A · Priority Date: 2020-05-12 · Filing Date: 2020-05-12 · Title: Immersive 3D image shooting training target range software system and method

Applications Claiming Priority (1)

Application Number: CN202010399145.9A · Priority Date: 2020-05-12 · Filing Date: 2020-05-12 · Title: Immersive 3D image shooting training target range software system and method

Publications (1)

Publication Number: CN113720202A · Publication Date: 2021-11-30

Family

ID=78671144

Family Applications (1)

Application Number: CN202010399145.9A · Priority Date: 2020-05-12 · Filing Date: 2020-05-12 · Publication: CN113720202A (Pending)

Country Status (1)

CN: CN113720202A



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060105299A1 (en) * 2004-03-15 2006-05-18 Virtra Systems, Inc. Method and program for scenario provision in a simulation system
CN2816734Y (en) * 2005-08-05 2006-09-13 北京神州凯业系统工程技术研究中心 Full-scence-simulation shooting training-apparatus
CN103644764A (en) * 2013-12-20 2014-03-19 南京理工大学连云港研究院 Virtual-shooting simulative training system for police
CN106060493A (en) * 2016-07-07 2016-10-26 广东技术师范学院 Multi-source projection seamless edge stitching method and system
CN206019465U (en) * 2016-08-30 2017-03-15 江门市前卫匹特搏供应有限公司 A kind of safe dual training system based on true gun
CN107789826A (en) * 2016-08-31 2018-03-13 福建泉城特种装备科技有限公司 Image shooting training system
CN207366930U (en) * 2017-08-24 2018-05-15 南京才华科技集团有限公司 A kind of 3D stereopsis training system
CN209541534U (en) * 2018-12-05 2019-10-25 南京润景丰创信息技术有限公司 A kind of image dual training system of compatible analog bullet and live shell

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114199075A (en) * 2021-12-21 2022-03-18 北京华如科技股份有限公司 Chest ring target simulation laser training system
CN117610794A (en) * 2024-01-22 2024-02-27 南昌菱形信息技术有限公司 Scene simulation training evaluation system and method for emergency
CN117610794B (en) * 2024-01-22 2024-04-19 南昌菱形信息技术有限公司 Scene simulation training evaluation system and method for emergency


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination