CN201155944Y - Ball firing system - Google Patents

Ball firing system

Info

Publication number
CN201155944Y
CN201155944Y (application CNU2008200922875U)
Authority
CN
China
Prior art keywords
module
data
scene
hits
video
Prior art date
Legal status
Expired - Fee Related
Application number
CNU2008200922875U
Other languages
Chinese (zh)
Inventor
邓秉忠
刘联华
江志添
郑威波
黄桃益
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University
Priority to CNU2008200922875U
Application granted
Publication of CN201155944Y
Status: Expired - Fee Related

Landscapes

  • Controls And Circuits For Display Device (AREA)
  • Projection Apparatus (AREA)

Abstract

The utility model relates to a ball firing (live-fire shooting) system comprising a target screen, a projector, a central control unit, a network, and a hit-analysis unit connected to the central control unit via the network. The hit-analysis unit includes a data acquisition module, a mapping-relationship module, and a hit-resolving module. The data acquisition module projects a grid sample image onto the target screen, photographs it, and collects the position data of each grid endpoint to obtain a sampled image. The mapping-relationship module analyzes the positions of the corresponding grid endpoints in the two images and establishes a nonlinear mapping between them. The hit-resolving module detects hits on the target screen, captures an image, repairs the hit positions according to the mapping, and resolves the precise position of each hit relative to the sample image. The hit-analysis unit then controls the projector to mark the hits on the target screen at the resolved positions. The system effectively corrects the error introduced by image distortion during capture and can detect hits projected on the screen, resolving their positions quickly and in real time.

Description

A ball firing system
Technical field
The utility model relates to techniques for recovering distortion in captured images, and in particular to a virtual-scene ball firing system that uses a hit-point calculation method based on topology grid learning.
Background technology
Guided by effects-based operational doctrine, warfare is increasingly moving into cities, and the complex streets and densely packed buildings of urban combat pose unprecedented challenges to armies. Traditional live-fire training can no longer satisfy the needs of future military conflict, nor can it effectively meet garrisons' defense requirements. Virtual-scene live-fire training has therefore become a new training tool, able to simulate special environments such as city streets, buildings, and hostage abductions. In simulated-scene live firing, a scene or simulated target is generally projected by a projector onto a projection screen that simultaneously serves as the bullet-receiving surface; a camera installed alongside the projector detects the hit points, and the detected hits drive automatic target scoring or scene control. This places new technical requirements on bullet-hole identification and localization.
Existing virtual-scene ball firing systems fall roughly into two types:
In the first type, the projector projects a specific target figure for shooting; the camera captures the image, bullet holes are found by image analysis, the scoring rings of the target figure are analyzed at the same time, and the ring count of each bullet hole is derived from its position. The drawback of this method is that it can only analyze a specific target figure and cannot support shooting training in arbitrary simulated scenes; it also places high demands on captured-image quality, and when the target figure images poorly, the ring count cannot be determined correctly.
In the second type, laser or infrared light produces bright spots at several positions on the screen (one at the center and four at the corners, five points in total); the relationship between the camera-image and screen coordinates of these points is derived and averaged to compute other positions. The drawback of this method is low precision: it cannot eliminate the distortion error of the captured image, nor the projection distortion error of the projector, and it places very high requirements on lens quality and mounting position.
Summary of the invention
The purpose of the utility model is to solve the technical problems in existing virtual-scene ball firing systems that the projection distortion error of the projector cannot be eliminated and that hit-point positioning accuracy is low, by proposing a virtual-scene ball firing system that uses a hit-point calculation method based on topology grid learning.
To solve the above problems, the utility model discloses a ball firing system comprising: a target screen; a projector; a central control unit for generating virtual combat-scene images and controlling the projector to project the virtual-scene images onto the target screen; a network; and a hit-analysis unit connected to the central control unit via the network, which identifies and processes the bullet-impact image data collected on the target screen by a data acquisition camera, the hit-analysis unit comprising:
a data acquisition module, which projects a grid sample image onto the target screen through the projector, photographs the sample image on the target screen with a camera, collects the position data of each grid endpoint, and obtains a sampled image;
a mapping-relationship module, which analyzes the positions of the corresponding grid endpoints in the sample image and the sampled image, and establishes a nonlinear topology grid mapping between the two images;
a hit-point resolving module, which detects hit points on the target screen through the camera, obtains a captured image, repairs the camera distortion of each hit position in the captured image according to the topology grid mapping, and calculates the precise position of the hit on the target screen relative to the sample image;
wherein the hit-analysis unit controls the projector to project the hit points onto the target screen according to the calculated precise position information.
Preferably, the ball firing system of the utility model further comprises: a scene acquisition camera; and a scene management unit connected to the hit-analysis unit and the central control unit via the network, the scene management unit comprising:
a network control interface;
a scene-data acquisition module, which, in confrontation live-fire training mode, controls the scene camera to capture video of the shooter and passes the data through the network control interface to the adversary's scene management unit, where it is projected as the shooting scene for engaging the adversary; this video data is simultaneously transferred to the central control unit for scheduling;
a target generation module, which, in simulated-shooting training mode, controls the generation of virtual-scene image data or simulated target images for shooting;
a video processing module, which in confrontation live-fire training mode decompresses the adversary's scene video and compresses our shooter's video, and in virtual-scene shooting handles the playback and control of scene video files;
an image synthesis unit for combining the data output by the hit-indication module, the target generation module, and the video processing module into a composite image.
Preferably, the network is a fiber-optic communication network.
Preferably, the ball firing system of the utility model further comprises a system electrical control unit for controlling lighting, display, and noise-elimination processing.
Compared with the prior art, the utility model has the following beneficial effects:
The utility model effectively corrects the distortion error between the captured image and the projected image, thereby establishing a high-accuracy model for resolving captured-image coordinates into projected-image coordinates. It also significantly lowers the requirements on the camera, which effectively reduces system cost and simplifies camera installation. Meanwhile, through the established mapping, positions can be identified quickly and in real time.
Description of drawings
Fig. 1 is a structural schematic of a preferred embodiment of the utility model;
Fig. 2A and Fig. 2B are schematic diagrams of the grid sample image M and the sampled image N;
Fig. 3A and Fig. 3B are schematic diagrams of the captured image and the projected image, respectively;
Fig. 4 is a flow diagram of a preferred embodiment of the utility model;
Fig. 5 is a flow diagram of a preferred embodiment of establishing the mapping relationship in the utility model;
Fig. 6 is a structural schematic of a preferred embodiment of the ball firing system of the utility model.
Specific embodiments
The utility model projects a grid sample image as output, photographs the sample image on the projection screen to obtain a sampled image, and analyzes the sample-image data and the corresponding sampled-image data row by row to establish the mapping between the projected image and the captured image. This mapping is used to correct the error caused by image distortion in the captured image, so that hit points on the projection screen can be located and computed quickly in real time.
As shown in Fig. 1, the present embodiment comprises: a projection screen 110; a projector 120, controlled by a projection control module 141, that outputs a projected image onto the projection screen 110; a camera 130 that acquires video data of the projected image on the projection screen 110 and feeds the captured-image data to a data acquisition module 142; a mapping-relationship module 143, comprising an image analysis module 1431 and a learning control module 1432, that establishes the nonlinear topology grid mapping between the projected image and the captured image and uses it to correct the error introduced by image distortion; and a hit-point resolving module 144, which queries the mapping in the mapping-relationship module 143 against the captured image collected by the data acquisition module 142, corrects the distortion error between the projected and captured images, and thereby accurately identifies the precise position in the projected image of each hit-point pixel in the captured image.
To establish the topology grid mapping, the projection control module 141 first divides the projection range output by the projector 120 into several grids equidistant both horizontally and vertically, each grid having four endpoints. The projector 120 then outputs a grid sample image M, for example one with 64 x 48 or 128 x 96 grid endpoints, projected onto the projection screen 110 as shown in Fig. 2A. The camera 130 photographs the sample image M on the projection screen 110, collects the data of each grid endpoint, and obtains a sampled image N, shown in Fig. 2B. Because the camera 130 introduces image distortion when capturing, in Fig. 2B the solid lines represent the distorted sampled image N and the dotted lines represent the outermost extent of the sample image M. In Figs. 2A and 2B, the space between solid lines represents one grid cell, and each intersection of solid lines represents a grid endpoint.
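For illustration, the sample image M can be generated programmatically. The following sketch is a hypothetical implementation, not one prescribed by the patent; the 1024 x 768 resolution is an assumed example, and the 64 x 48 endpoint lattice is the example count given above:

```python
# Illustrative sketch only: generating a grid sample image M whose bright
# pixels mark equidistant grid endpoints. Resolution and lattice size are
# assumed example values, not patent requirements.
import numpy as np

def make_sample_image(width=1024, height=768, cols=64, rows=48):
    """Return (M, xs, ys): the image and the endpoint coordinates per axis."""
    m = np.zeros((height, width), dtype=np.uint8)
    xs = np.linspace(0, width - 1, cols).round().astype(int)   # horizontal endpoints
    ys = np.linspace(0, height - 1, rows).round().astype(int)  # vertical endpoints
    for y in ys:
        for x in xs:
            m[y, x] = 255  # one bright dot per grid endpoint
    return m, xs, ys
```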
To improve data-sampling efficiency and the accuracy of the topology grid mapping, the projector 120 outputs the sample image M row by row, and the camera 130 scans the position of each endpoint in every row of M, finally obtaining the sampled image N and exporting the collected endpoint data to the data acquisition module 142. The image analysis module 1431 analyzes the sample-image data M and the corresponding sampled-image data N row by row to find the correlation between them, and the learning control module 1432 learns this correlation, establishing the nonlinear topological mapping M = f(N) between the projected image and the captured image.
After the topological mapping has been learned, the projected-image position corresponding to every captured-image coordinate is precomputed pointwise using four-point interpolation, and the mapping between captured and projected images is stored in two two-dimensional map arrays A_X[x, y] and A_Y[x, y] for fast lookup, where x ranges from 0 to the maximum horizontal resolution of the captured image and y from 0 to its maximum vertical resolution. When camera-distortion repair is required, the position of a hit point in the captured image is simply looked up in the two map arrays to obtain the precise position of the hit on the projection screen relative to the projected image. For example, if the coordinate of a hit point in the captured image is P(50, 100), accessing A_X[50, 100] and A_Y[50, 100] directly yields the precise position of the hit relative to the sample image.
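A minimal sketch of this precomputation and lookup follows, assuming a hypothetical map_point(x, y) function that evaluates the learned mapping f (for example by four-point interpolation between calibrated endpoints); the names are illustrative, not the patent's:

```python
# Sketch of the two 2-D map arrays A_X and A_Y described above. map_point
# is a hypothetical callable returning the projected-image coordinates of
# one camera pixel under the learned mapping M = f(N).
import numpy as np

def build_map_arrays(cam_w, cam_h, map_point):
    """Precompute projected coordinates for every captured-image pixel."""
    a_x = np.empty((cam_w, cam_h), dtype=np.float32)
    a_y = np.empty((cam_w, cam_h), dtype=np.float32)
    for x in range(cam_w):       # 0 .. maximum horizontal resolution
        for y in range(cam_h):   # 0 .. maximum vertical resolution
            a_x[x, y], a_y[x, y] = map_point(x, y)
    return a_x, a_y

# Run-time distortion repair is then a constant-time lookup; for the
# example hit P(50, 100) in the text:
#   proj_x, proj_y = a_x[50, 100], a_y[50, 100]
```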
As shown in Figs. 3A and 3B, A', B', C', and D' are the four endpoints of one grid cell in the captured image, and H' lies inside that cell. After the camera 130 acquires an image of the projection screen 110, the precise positions of A', B', C', and D' on the projection screen 110 relative to the sample image are obtained by looking their coordinates up in the topology grid mapping for distortion repair, yielding the corrected positions A, B, C, and D. Since H' lies inside the cell, once A, B, C, and D have been obtained from the mapping via A', B', C', and D', interpolating among the position data of A, B, C, and D also yields the precise position H of H' relative to the sample image.
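This interpolation step can be sketched as follows, under two stated assumptions: the corners are ordered A' top-left, B' top-right, C' bottom-right, D' bottom-left (the patent text does not fix an ordering), and a plain bilinear blend stands in for the four-point interpolation it names:

```python
# Hypothetical four-point (bilinear) interpolation of a point h inside one
# grid cell. cell_cam holds the cell corners A', B', C', D' in the captured
# image; cell_proj holds their corrected positions A, B, C, D.
def interpolate_hit(h, cell_cam, cell_proj):
    (ax, ay), (bx, by), (_, _), (dx, dy) = cell_cam
    # Normalized position of h inside the camera cell, approximated against
    # the top edge A'B' and the left edge A'D' of the (possibly skewed) cell.
    u = (h[0] - ax) / max(bx - ax, 1e-6)
    v = (h[1] - ay) / max(dy - ay, 1e-6)
    (pax, pay), (pbx, pby), (pcx, pcy), (pdx, pdy) = cell_proj
    # Standard bilinear blend of the four corrected corner positions.
    x = (1-u)*(1-v)*pax + u*(1-v)*pbx + u*v*pcx + (1-u)*v*pdx
    y = (1-u)*(1-v)*pay + u*(1-v)*pby + u*v*pcy + (1-u)*v*pdy
    return x, y
```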
Note that the denser the grid, the higher the degree of distortion repair. If each grid endpoint were a single minimum pixel of the projected image, every hit point would fall exactly on a grid endpoint; however, given the complexity of data processing and practical constraints such as the resolution of the camera 130 and single-pixel projection brightness, grid precision currently falls far short of pixel-level precision.
With reference to Fig. 4, the present embodiment comprises the following steps:
Step S410: project the grid sample image M onto the projection screen row by row; photograph the sample image M on the projection screen, collect the position data of each grid endpoint, and obtain the sampled image N.
Step S420: analyze the positions of the corresponding grid endpoints in the sample image M and the sampled image N, and establish the nonlinear mapping M = f(N) between them.
Step S430: detect hit points on the projection screen with the camera and obtain a captured image.
Step S440: repair the camera distortion of each hit position in the captured image according to the topology grid mapping, and calculate the precise position of the hit on the projection screen relative to the projected image.
Step S450: control the projector to project the hit point onto the projection screen according to the calculated precise position.
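Steps S410-S450 can be condensed into the following runtime loop. Every object and method name here (projector, camera, mapping and their methods) is an illustrative assumption; the patent specifies the steps, not an API:

```python
# Condensed sketch of steps S410-S450; all interfaces are assumed.
def run_shooting_system(projector, camera, mapping):
    sample = projector.project_grid_sample()  # S410: project grid image M
    sampled = camera.capture()                #       and photograph it as N
    mapping.learn(sample, sampled)            # S420: establish M = f(N)
    while True:
        hit = camera.detect_hit()             # S430: detect a hit point
        if hit is None:
            continue
        pos = mapping.correct(hit)            # S440: repair camera distortion
        projector.mark(pos)                   # S450: project the hit marker
```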
Fig. 5 is a flow diagram of a specific embodiment of step S420 in Fig. 4. In this embodiment, the projected image output by the projector 120 is likewise output at the row/column display precision of the projection screen 110. The embodiment comprises the following steps:
Step S510: the projection control module 141 controls the projector 120 to output one row of the sample image M and project it onto the projection screen 110.
Step S520: the data acquisition module 142 controls the camera 130 to scan the row of image pixel data displayed on the projection screen 110 and preprocesses the collected data.
Step S530: analyze the collected data.
Step S540: judge whether the grid-endpoint positions of this row of the sample image have been identified correctly; if so, go to step S550, otherwise go to step S560.
Step S560: adjust image acquisition parameters such as brightness so that the data collected in step S520 become more accurate and can be identified correctly in step S540.
Step S550: establish the mapping between the grid-endpoint positions of this row of the sample image M and of the corresponding row of the sampled image N.
Step S570: judge whether the current output row is the last row of the sample image; if not, go to step S580, otherwise go to step S590.
Step S580: continue with the output, data acquisition, and data analysis of the next row of the sample image.
Step S590: assemble the mappings established for the grid endpoints of each row of the sample image and the corresponding row of the sampled image into the overall mapping between the sample image and the sampled image, i.e., the mapping between the projected image and the captured image, and save it.
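The row-by-row calibration of steps S510-S590 might look like the loop below; the device objects and the find_endpoints helper are hypothetical stand-ins for the modules named above, not the patent's implementation:

```python
# Sketch of the S510-S590 calibration flow with assumed interfaces.
def calibrate(projector, camera, find_endpoints, num_rows):
    mapping = {}
    for row in range(num_rows):                # S570/S580: row after row
        while True:
            projector.project_row(row)         # S510: output one row of M
            data = camera.scan_row()           # S520: scan and preprocess
            endpoints = find_endpoints(data)   # S530: analyze the row data
            if endpoints is not None:          # S540: endpoints recognized?
                break
            camera.adjust_brightness()         # S560: tune acquisition params
        mapping[row] = endpoints               # S550: row-of-M <-> row-of-N map
    return mapping                             # S590: assemble the full mapping
```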
The utility model is mainly applicable to virtual-scene shooting systems, where it accurately calibrates hit-point or bullet-hole positions within the projected scene and triggers the corresponding response.
Fig. 6 shows a specific embodiment in which the utility model is applied to a virtual-scene shooting system. This embodiment addresses live-fire confrontation in which two opposing sides, acting indoors and outdoors in real scenes, hit each other's targets; the hits are resolved with high precision and displayed in real time for both sides, with the aim of improving the army's live-fire confrontation capability. In anti-terrorism live-fire training, trainees can shoot at computer-generated anti-hijacking targets or shoot interactively against video footage. Modified Type 92 pistols and Type 95 assault rifles can also be used here for laser-simulated firing. The embodiment further supports live-fire confrontation among multiple people: a video interaction system is set up with cameras and projectors in two rooms, one upstairs and one downstairs; hit points on the paper screens in the two rooms are located quickly and accurately, and computer calculation judges the hit positions in real time and tallies each side's score.
The present embodiment mainly comprises: a projector 120, a central control unit 20, a data acquisition camera 131 and a scene acquisition camera 132, a hit-analysis unit 30, a scene management unit 40, a network 50, and a system electrical control unit 60. The central control unit 20, the hit-analysis unit 30, and the scene management unit 40 are connected by the network 50, which may take various forms, such as a wired network, a wireless network, a LAN, or Ethernet; to achieve high-speed data transmission, the network 50 in this embodiment is a fiber-optic communication network.
The central control unit 20 comprises a scene generation module 201 for generating the virtual scene. After the generated virtual-scene data are decompressed by a video decompression module 202, they are output to the projector 120 under the control of the projection control module 141, and the projector 120 projects the virtual-scene image onto the target screen 111. The central control unit 20 also exchanges data with the network 50 through a network interface 203.
The data acquisition camera 131 collects image data of laser-simulated or live-fire impacts on the target screen 111 and passes the collected data to the data acquisition module 142 in the hit-analysis unit 30. The hit-point resolving module 144 processes the collected image data according to the mapping in the mapping-relationship module 143 to obtain the position of each impact on the simulated-scene image, that is, the position and coordinates of the impact on the target screen 111; via the network control interface 146 and the network 50, the projector 120 is then controlled to project the precise impact position onto the target screen 111.
Of course, if live firing is used, the target screen 111 must be a paper roll that records the impacts, and a roll-paper control module 145 for rotating the roll must be provided in the hit-analysis unit 30. The paper roll serves to cover bullet holes: when too many impacts accumulate on the screen, hit-point identification is impaired, so the bullet-receiving surface must be advanced periodically to cover old bullet holes.
The scene management unit 40 comprises: a scene-data acquisition module 401 for controlling the scene acquisition camera 132 to collect the virtual-scene image data projected by the projector 120, where the collected data can be passed via a network control interface 402 to the hit-point resolving module 144 in the hit-analysis unit 30 for image recognition in combination with the virtual-scene image; a hit-indication module 403, which obtains from the hit-analysis unit 30, via the network control interface 402, the position and coordinates of impacts on the target screen 111 and performs hit-indication processing; a target generation module 405 for generating target images for shooting in combat-training simulation; a video processing module 404 for video processing, such as decompression, of the image data obtained from the hit-analysis unit 30; and an image synthesis unit 406, which combines the data output by the hit-indication module 403, the target generation module 405, and the video processing module 404 into a composite image that can be projected onto the trainee's target screen by a shooting-scene projector 407.
Detecting a hit point not only gives the shooter or trainer direct feedback; in simulated-target shooting it is the prerequisite for automatic ring-count reporting, and hit results can be saved and replayed. In video-simulated-scene shooting, it allows the scene management unit 40 to make the target hit in the video react accordingly, for example by falling to the ground dead. Detecting and obtaining hit-point positions is therefore the foundation of the shooting system.
In addition, the system electrical control unit 60 is mainly used to control lighting, image display, and noise-suppression devices in the environments of both sides of the virtual combat training; it can be implemented with PLC control and is not enumerated here.
In this application example, therefore, the mapping relationship that corrects the image distortion error arising between the projector and the camera allows the position and coordinates of an impact on the target screen to be accurately recognized, providing accurate virtual combat-training simulation.
In summary, the utility model has the following beneficial technical effects:
The utility model effectively corrects the distortion error between the captured image and the projected image, thereby establishing a high-accuracy model for computing projected-image coordinates from captured-image coordinates. The requirements on the camera are significantly lowered, which effectively reduces system cost and simplifies camera installation. Meanwhile, through the established mapping, specified coordinate positions can be identified quickly and in real time.

Claims (4)

1. A ball firing system, characterized by comprising: a target screen; a projector; a central control unit for generating virtual combat-scene images and controlling the projector to project the virtual-scene images onto the target screen; a network; and a hit-analysis unit connected to the central control unit via the network, which identifies and processes the bullet-impact image data collected on the target screen by a data acquisition camera, the hit-analysis unit comprising:
a data acquisition module, which projects a grid sample image onto the target screen through the projector, photographs the sample image on the target screen with a camera, collects the position data of each grid endpoint, and obtains a sampled image;
a mapping-relationship module, which analyzes the positions of the corresponding grid endpoints in the sample image and the sampled image, and establishes a nonlinear topology grid mapping between the two images;
a hit-point resolving module, which detects hit points on the target screen through the camera, obtains a captured image, repairs the camera distortion of each hit position in the captured image according to the topology grid mapping, and calculates the precise position of the hit on the target screen relative to the sample image;
wherein the hit-analysis unit controls the projector to project the hit points onto the target screen according to the calculated precise position information.
2. The ball firing system according to claim 1, characterized by further comprising: a scene acquisition camera; and a scene management unit connected to the hit-analysis unit and the central control unit via the network, the scene management unit comprising:
a network control interface;
a scene-data acquisition module, which, in confrontation live-fire training mode, controls the scene camera to capture video of the shooter and passes the data through the network control interface to the adversary's scene management unit, where it is projected as the shooting scene for engaging the adversary, this video data simultaneously being transferred to the central control unit for scheduling;
a target generation module, which, in simulated-shooting training mode, controls the generation of virtual-scene image data or simulated target images for shooting;
a video processing module, which in confrontation live-fire training mode decompresses the adversary's scene video and compresses our shooter's video, and in virtual-scene shooting handles the playback and control of scene video files;
an image synthesis unit for combining the data output by the hit-indication module, the target generation module, and the video processing module into a composite image.
3. The ball firing system according to claim 1, characterized in that the network is a fiber-optic communication network.
4. The ball firing system according to claim 1, characterized by further comprising a system electrical control unit for controlling lighting, display, and noise-elimination processing.
CNU2008200922875U 2008-02-19 2008-02-19 Ball firing system Expired - Fee Related CN201155944Y (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNU2008200922875U CN201155944Y (en) 2008-02-19 2008-02-19 Ball firing system


Publications (1)

Publication Number Publication Date
CN201155944Y 2008-11-26

Family

ID=40103726

Family Applications (1)

Application Number Title Priority Date Filing Date
CNU2008200922875U Expired - Fee Related CN201155944Y (en) 2008-02-19 2008-02-19 Ball firing system

Country Status (1)

Country Link
CN (1) CN201155944Y (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105823374A (en) * 2016-04-13 2016-08-03 威海吉利达智能装备有限公司 Red and blue force system for live ammunition rivalry drilling
CN105823374B (en) * 2016-04-13 2018-08-21 山东吉利达智能装备集团有限公司 Red blue force's system of live shell rivalry-drilling
CN110193206A (en) * 2019-05-15 2019-09-03 深圳弘江军科技有限公司 A kind of shooting interactive system and method
CN111954060A (en) * 2019-05-17 2020-11-17 上海哔哩哔哩科技有限公司 Barrage mask rendering method, computer device and readable storage medium
CN111954060B (en) * 2019-05-17 2022-05-10 上海哔哩哔哩科技有限公司 Barrage mask rendering method, computer device and readable storage medium
CN114858013A (en) * 2022-04-13 2022-08-05 莆田市军源特种装备科技有限公司 Projectile body throwing distance measuring method based on intelligent visual identification
CN114858013B (en) * 2022-04-13 2024-05-17 莆田市军源特种装备科技有限公司 Projectile body distance casting and measuring method based on intelligent visual recognition


Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081126

Termination date: 20100219