CN112765125A - Database configuration for a glasses-based ammunition identification system - Google Patents

Database configuration for a glasses-based ammunition identification system

Info

Publication number
CN112765125A
CN112765125A (application CN202011620068.1A)
Authority
CN
China
Prior art keywords
ammunition
database
data
dimensional code
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011620068.1A
Other languages
Chinese (zh)
Inventor
王彬 (Wang Bin)
贾昊楠 (Jia Haonan)
陈明华 (Chen Minghua)
姜志宝 (Jiang Zhibao)
王韶光 (Wang Shaoguang)
尹会进 (Yin Huijin)
张洋洋 (Zhang Yangyang)
闫媛媛 (Yan Yuanyuan)
王维娜 (Wang Weina)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
32181 Troops of PLA
Original Assignee
32181 Troops of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 32181 Troops of PLA
Priority to CN202011620068.1A
Publication of CN112765125A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Navigation by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/18: Stabilised platforms, e.g. by gyroscope
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00-G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G02B27/017: Head mounted
    • G02B2027/0178: Eyeglass type
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval of structured data, e.g. relational data
    • G06F16/21: Design, administration or maintenance of databases
    • G06F16/23: Updating
    • G06F16/28: Databases characterised by their database models, e.g. relational or object models
    • G06F16/284: Relational databases
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/602: Providing cryptographic facilities or services
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10: Sensing by electromagnetic radiation, e.g. optical sensing, or by corpuscular radiation
    • G06K7/10544: Sensing by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10712: Fixed beam scanning
    • G06K7/10722: Photodetector array or CCD scanning
    • G06K7/14: Sensing using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404: Methods for optical code recognition
    • G06K7/1408: Methods for optical code recognition specifically adapted for the type of code
    • G06K7/1417: 2D bar codes
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/70
    • G06T5/80
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/28: Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V10/40: Extraction of image or video features
    • G06V10/50: Feature extraction by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/56: Extraction of features relating to colour
    • G06V10/70: Recognition using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Selection of dictionaries
    • G06V10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/20: Scenes; Scene-specific elements in augmented reality scenes
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00

Abstract

The invention discloses a database configuration for a glasses-based ammunition identification system. The database comprises the following data configuration modules: a two-dimensional code data generation module, a two-dimensional code data encryption module, a two-dimensional code data mapping module, a picture data module, a video data module, a model data module and a storage data module; the database is connected to the two-dimensional code scanning port and to the AR display port through open data links. The database system can interface directly with the software and hardware of an augmented reality ammunition identification glasses system based on visual perception technology. By scanning the two-dimensional code on an ammunition packing box, ammunition support personnel, users and non-professionals can learn the basic information of the ammunition in the shortest possible time, perceive the structure of the ammunition and its components intuitively, and quickly master its operation, use and accident handling, greatly improving the level of information-based ammunition management in the armed forces.

Description

Database configuration for a glasses-based ammunition identification system
Technical Field
The invention relates to an ammunition identification system, in particular to a glasses-based ammunition identification system founded on data interaction and data simulation.
Background
Augmented reality (AR) based on visual perception technology, developed on the foundation of virtual reality (VR), is a cross-disciplinary field built on many disciplines. It dynamically fuses computer-generated virtual images into the real environment seen by the user in real time. AR supplements the real scene: through virtual-real fusion it strengthens the user's understanding and perception of the real environment, so that the user can not only experience things that exist in the objective world, but also break through the limits of space and time and experience in person what cannot be experienced in the real world. The main advantage of augmented reality is that, by combining the real world with the virtual world, it augments and reinforces the real world, greatly improving people's ability to recognize and reshape it in a new way, shifting from humans adapting to the world toward the world adapting to humans.
In recent years, augmented reality technology has entered many aspects of the military field and has begun to play an important role. Countries around the world attach great importance to its military applications and have carried out extensive research and exploration in weapon and equipment manufacturing, battlefield environment display, troop exercise and training, comprehensive logistics support, and so on, achieving a series of results and showing broad application prospects. Augmented reality based on visual perception technology is applied to the military field mainly in four respects. First, information such as physical objects, models and design drawings can be displayed and shared in real time; using multi-channel natural human-computer interaction, multiple people in different places can interact in real time, communicate design ideas, and modify and improve schemes. Second, a model of a weapon system and its candidate design schemes can be fused and displayed together, so that the user can compare the schemes comprehensively through the augmented reality system, and modification suggestions can be reflected directly on the development model of the equipment. Third, advance demonstrations can be offered to users: developers and users can enter an operational environment combining virtuality and reality to operate the weapon system and verify the rationality of the design scheme, the tactical and technical performance indexes, and the operation. Fourth, standard workflow guides for assembly and maintenance can be displayed accurately to the user, greatly improving development efficiency and equipment practicability.
Currently, the use and management of ammunition face the following problems. (1) Ammunition in service is of many varieties with differing operation and use, so the cost of learning ammunition knowledge is extremely high. (2) Ammunition support personnel and users lack basic ammunition knowledge, so their ability to operate and use ammunition is insufficient. (3) Owing to the characteristics of ammunition, there are considerable potential safety hazards in actual learning and use. These problems are expected to be fundamentally solved through the research, development and application of augmented reality technology.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a database configuration for a glasses-based ammunition identification system.
To solve this technical problem, the invention adopts the following technical solution.
A database configuration for a glasses-based ammunition identification system, the hardware of which comprises AR glasses and a database server, and the software of which comprises augmented reality ammunition identification glasses software and a database. The database includes the following data configuration modules: a two-dimensional code data generation module, a two-dimensional code data encryption module, a two-dimensional code data mapping module, a picture data module, a video data module, a model data module and a storage data module; the database as a whole is given a module-oriented expandable architecture, and each data module is given an open framework of data addition and deletion rights. The database is connected upward to the two-dimensional code scanning port and downward to the AR display port through open readable data links. The database comprises the following sub-databases: a two-dimensional code dictionary sub-database, a two-dimensional code encryption sub-database, an ammunition basic information sub-database, an ammunition three-dimensional vector sub-database, an ammunition warehouse digital management sub-database, an ammunition quality digital management sub-database and an ammunition destruction automatic warning prompt sub-database. The database server is configured to satisfy at least the following conditions: a simulation computing CPU not below an Intel i7, memory of not less than 8 GB, a hard disk of not less than 500 GB, and dedicated video memory of not less than 1 GB.
In a preferred embodiment of the invention, the database comprises a two-dimensional code dictionary sub-database whose storage capacity accommodates at least 100 kinds of ammunition information and the corresponding two-dimensional code dictionaries.
In a preferred embodiment of the invention, the database comprises a two-dimensional code dictionary sub-database and a two-dimensional code encryption sub-database, with a category mapping architecture provided between the two.
In a preferred embodiment of the invention, the database comprises an ammunition basic information sub-database that adopts a standard exchangeable text data format.
In a preferred embodiment of the invention, the ammunition basic information includes at least ammunition kind data, ammunition name data, ammunition assembly information data and ammunition specification data.
In a preferred embodiment of the invention, the database comprises an ammunition three-dimensional vector sub-database compatible with ammunition appearance data, ammunition component structure and feature data, ammunition operation text data, ammunition operation graphic data, ammunition operation video data and ammunition accident handling data.
In a preferred embodiment of the invention, the database comprises an ammunition warehouse digital management sub-database compatible with warehouse ammunition position guidance data and automatically logged ammunition allocation statistics.
In a preferred embodiment of the invention, the database comprises an ammunition quality digital management sub-database compatible with automatically entered routine inspection data and automatically identified quality state data.
In a preferred embodiment of the invention, the ammunition destruction automatic warning prompt sub-database is compatible with automatic warning prompt data for the destruction of dangerous goods.
In a preferred embodiment of the invention, the database is provided with two software communication ports. The first port connects externally to the see-through optical waveguide display engine software and provides data configuration, timed/manual sleep and manual wake-up functions; it identifies the ammunition two-dimensional code rapidly, within no more than 2 seconds; it identifies the ammunition marking on the cylindrical surface of the ammunition body with a misjudgment rate of no more than 40%; basic information, pictures, videos and the three-dimensional model of the ammunition are displayed and switched through touch panel operation, with model rotation supported; and voice interaction and/or a voice assistant are supported. The second port supports packaging and importing database content to the AR display device for communication and storage.
The beneficial effects produced by the above technical solution are as follows. In view of the problems that ammunition is of many varieties with differing operation and use, that the basic ammunition knowledge and operating proficiency of support and user personnel are insufficient, and that the potential safety hazards in ammunition use are therefore increased, research on virtual and augmented reality based on visual perception of ammunition was carried out to form the augmented reality ammunition identification glasses system, comprising the augmented reality glasses hardware, two-dimensional codes, an ammunition database server, the augmented reality ammunition identification glasses software and the database software. The system realizes rapid and accurate identification of basic ammunition information, three-dimensional display of ammunition structure, and intuitive demonstration of ammunition operating procedures and requirements, so that ammunition support personnel and users quickly master ammunition knowledge, visually perceive the structural composition and operating procedures of ammunition, and reduce the potential safety hazards in ammunition use, laying a foundation for accurate, digital ammunition management and for popularizing three-dimensional, visualized knowledge. The special AR technology and augmented reality algorithms developed by the invention, through research on two-dimensional code recognition, image recognition, the augmented reality processing center, the human-computer interaction center, ultra-low power consumption design and intelligent energy management, and on the basis of fully collecting all kinds of ammunition information, provide an identification, analysis and data display platform for rapid identification and visual display of basic ammunition information. This lays the groundwork for ammunition technical support, for operators to grasp the basic performance of the ammunition they hold quickly and intuitively, for familiarization with operation and use, for reducing the waiting time of ammunition use preparation, and for reducing the potential safety hazards of ammunition use.
The database system developed on the basis of the invention can directly support the software and hardware of an augmented reality ammunition glasses system based on visual perception technology, abandoning the traditional learning mode: the equipment is fully reproduced through the augmented reality head-mounted device, and interaction between the person and the equipment is realized through the data device. A highly lifelike learning and training environment is established; through the portable augmented reality system the user sees not only real ammunition and the real environment but also the virtual objects added to the scene, so that trainees learn immersively and further improve their skill level. With this data system and the glasses-based ammunition identification system applying it, ammunition support personnel, users and non-specialists can, by scanning the two-dimensional code on an ammunition packing box or on the ammunition body, learn the basic information of the ammunition (type, name, assembly information, specification information, etc.) in the shortest possible time, perceive the (three-dimensional) structure of the ammunition and its components intuitively, and quickly master ammunition operation and use (by video) and accident handling. Through subsequent functional expansion, digital management of the ammunition warehouse (guidance to warehouse ammunition positions, automatic statistics of ammunition allocation, etc.), digital management of ammunition quality (automatic entry of routine inspections, automatic identification of quality state, etc.) and automatic ammunition destruction warnings (automatic warning prompts for the destruction of dangerous goods) can also be realized, greatly improving the level of information-based ammunition management in the armed forces.
The configuration and content interaction mode of the database has at least the following technical value and advantages. Virtual-real combinability: by integrating the virtual environment with the actual environment, the user scarcely feels any dissonance caused by the fusion of real and virtual. Real-time interactivity: the user can interact directly with virtual objects or the virtual environment through the interactive devices, enhancing the user's perception of the environment. 3D positioning: in a video see-through augmented reality system, the video shot by the real camera is shown on the display so that the user sees the real scene, while the virtual scene rendered by the virtual camera is sent to the same display; through full alignment of the virtual and real cameras the virtual and real scenes are integrated, and virtual objects can be freely added and positioned in three-dimensional space. The data system meets the application requirements of ammunition specialists: through simulation model demonstration, augmented reality interaction and other advanced methods, ammunition users obtain rapid and accurate identification of basic ammunition information, three-dimensional display of ammunition structure, and intuitive demonstration of ammunition operating procedures and requirements; ammunition technical support and the operators' grasp of the basic performance of their ammunition are achieved quickly and intuitively, operation and use become familiar, the waiting time of ammunition use preparation falls, and the potential safety hazards of ammunition use are reduced. The data system improves the efficiency and effect of learning ammunition knowledge: a visual virtual system built with AR and computer simulation technology lets ammunition support personnel and users quickly master ammunition knowledge and visually perceive ammunition structure and operating procedures. More importantly, many kinds of ammunition cannot be disassembled and their internal working mechanisms cannot be observed; with augmented reality visual simulation, the structural characteristics and working process of the displayed system become clear and vivid, providing a large amount of intuitive ammunition structure information. The data system reduces potential safety hazards: it contains multimedia data such as pictures, videos and three-dimensional models of ammunition operation and accident handling, and through virtual learning and operation the knowledge of ammunition use and handling can be mastered rapidly, further reducing the probability of safety accidents. The data system improves the level of information-based ammunition management: through functional expansion it realizes digital management of the ammunition warehouse (guidance to warehouse ammunition positions, automatic statistics of ammunition allocation, etc.), digital management of ammunition quality (automatic entry of routine inspections, automatic identification of quality state, etc.) and automatic ammunition destruction warnings (automatic warning prompts for the destruction of dangerous goods).
Aimed at improving users' cognitive and operational abilities with ammunition, the database system takes "distinctive features, lifelike simulation, advanced technology, stability, reliability and easy expansion" as its design principles, and fully covers basic ammunition information, visual perception, operation and use, maintenance, technical inspection, accident handling and other contents. It offers technical advantages such as lifelike simulation, rich functions, advanced technology, convenient use and easy expansion.
Drawings
FIG. 1 is a schematic diagram of the system configuration of the present invention.
Fig. 2 is a schematic view of the perspective transformation effect in two-dimensional code recognition according to the present invention.
Fig. 3 is a schematic illustration of an ammunition tag of the present invention.
FIG. 4 is a schematic diagram of a picture segmentation histogram according to the present invention.
Fig. 5 is a schematic view of the principle of the visual features of the present invention.
FIG. 6 is a schematic diagram of the speech recognition principle of the present invention.
FIG. 7 is a schematic diagram of a three-dimensional modeling scheme of the present invention.
FIG. 8 is a schematic diagram of a computer animation technology implementation approach of the present invention.
Detailed Description
The following examples illustrate the invention in detail. The raw materials and devices used in the invention are conventional commercially available products that can be obtained directly on the market.
In the following description of embodiments, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Embodiment 1: Overall system architecture
Referring to fig. 1, the research on ammunition identification and use based on visual perception technology aims at developing an augmented reality ammunition application platform. The system hardware comprises customized AR glasses and a database server; the software comprises the augmented reality ammunition recognition glasses software and the database software.
Embodiment 2: AR glasses assembly
Product design form: split type. Basic functions:
    • supports adjustment of model structure size, angle, etc.;
    • supports editing modes such as moving, splitting and combining the model;
    • supports adding multimedia information and overlaying models;
    • supports virtual-real superposition of physical and virtual objects;
    • supports synchronous display of the first and third visual angles;
    • supports synchronous display and synchronous editing on multiple terminals (tablet, glasses, display, etc.);
    • supports obtaining cloud data via 4G/WiFi;
    • supports remote voice communication and interaction;
    • supports import of common digital model formats;
    • supports gesture recognition and voice interaction;
    • supports fingerprint identification;
    • supports NFC;
    • supports integration and application of various AI algorithms;
    • supports customized and adapted application software.
Positioning function: supports GPS; supports BeiDou satellite positioning. Remote assistance function: supported. The glasses operating system must provide an open application programming interface to meet the requirements of subsequent secondary development.
The main specifications are as follows:
    • Processor: Qualcomm Snapdragon 820; memory: 2 GB; built-in storage: 16 GB
    • Connectivity: Wi-Fi, Bluetooth, USB 2.0
    • Display: Micro-OLED, dual screens; monocular resolution: not less than 1024 x 768; field angle: 35°; contrast: 10000:1
    • Video: 720p @ 30 fps, 1080p @ 30 fps; automatic focusing: supported
    • Operating system: Android (provides an open application programming interface); gesture interaction: supported
    • Sensors (glasses): accelerometer, gyroscope, magnetometer, light sensor
    • Audio: stereo headphones/microphone; storage card: expandable 32 GB Micro SD; myopia lenses: supported
    • Battery capacity: 3700 mAh; full-load service time: greater than 2.5 hours
    • Gesture recognition: field angle 130° horizontal, 110° vertical; frame rate 30 fps; working distance 20-60 cm; image resolution 2 x 640 x 480 pixels; latency 20 ms
    • SLAM: 6DOF tracking; positioning accuracy 95%; CPU occupancy 10%; initialization time less than 1 s; monocular/binocular modes supported; closed-loop relocation speed less than 2 s; offline maps with multi-device cloud synchronization and sharing; motion prediction less than 25 ms; 3D mapping refresh rate 10 Hz; mesh precision 80%; scanned space size 10 m
Embodiment 3: Augmented reality ammunition recognition glasses software
The software comprises the following modules. A. Two-dimensional code generation module. A mainstream industrial two-dimensional code is adopted as the ammunition identification mark, for example Lobebel codes, Hamming codes and the like; the two-dimensional code uses a DES and RSA double encryption algorithm so that plaintext data cannot be stolen by an unauthorized party; the codes are generated fully automatically and can be exported, printed, renamed and so on, which facilitates their management (a code sketch of this generation step is given at the end of this embodiment). B. Image recognition module. The ammunition two-dimensional code can be identified rapidly, within no more than 2 seconds; the ammunition markings on the ammunition body and the packing box can be identified with a misjudgment rate of no more than 40%; for two-dimensional codes on a cylindrical surface, the image distortion is corrected by division into 8 equal segments to improve the recognition rate; during recognition the image is captured by a high-resolution camera, preprocessed, and its features extracted and matched against the model to output the result; the module suits a variety of real environments and recognizes images accurately under over-bright, over-dark, low-visibility, rainy and snowy conditions. C. Front-end display module. The basic information, pictures, videos, three-dimensional model and so on of the ammunition are switched and displayed through touch panel operation, and the model can be rotated; combining video with the three-dimensional model allows the user to quickly master ammunition operation and accident handling; a reasonably designed UI optimizes the user experience; the hardware information of the AR glasses in use is displayed so that hardware performance is grasped in real time; reserved interfaces enable later functions such as digital ammunition warehouse management, digital ammunition quality management and automatic ammunition destruction warnings. D. Interaction module. The behavior of operating personnel is identified and tracked through the interactive devices and the positioning system; touch interaction implements a mouse function for the AR glasses through the touch panel, providing basic click and drag operations and control of the three-dimensional model; voice interaction supports intelligent interaction with the AR glasses through short spoken instructions; speech recognition includes a preprocessing stage that flattens the signal spectrum and improves recognition accuracy; multiple interaction modes are fused, with physical buttons as an auxiliary means, improving the naturalness and accuracy of interaction. A unified, collaborative interaction system needs to be designed.
The design solves the problem of fusing multiple interaction modes, considering effective communication and operation under the constraints of operating speed and bandwidth while meeting complex and changeable environmental conditions. The operator-machine interface is made more reasonable: the display is kept uncluttered, information is organized according to the properties of the target object in use and a definite hierarchical structure, and intelligent processing is performed through a single information filtering mechanism. E. Low-power-consumption management module. When the system has no recognition or operation task, it enters a sleep countdown; when the countdown expires it reaches an ultra-low-power state, and when a touch event is triggered it wakes immediately and returns to the normal recognition state.
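As an illustration of the generation and export step of module A, the following is a minimal sketch assuming the third-party Python package qrcode (installed with Pillow support); the record fields and the file naming scheme are illustrative assumptions, not the format prescribed by the invention.

```python
# Minimal sketch of the two-dimensional code generation described in module A,
# assuming the third-party "qrcode" package (pip install qrcode[pil]).
# The payload layout and file naming below are illustrative assumptions.
import qrcode

def export_ammunition_code(ammo_id: str, payload: str, out_dir: str = ".") -> str:
    """Generate a QR image for one ammunition record and export it as a PNG."""
    img = qrcode.make(payload)           # fully automatic generation
    path = f"{out_dir}/{ammo_id}.png"    # exported, printable, renamable file
    img.save(path)
    return path

if __name__ == "__main__":
    print(export_ammunition_code("AMMO-0001", "type=HE;name=Example;lot=2020-12"))
```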
Embodiment 4: Database software
The database software comprises the following parts. A. Basic information database. It is implemented on lightweight SQLite and contains the two-dimensional code dictionary and the two-dimensional code encrypted data; the text data include the basic information of the ammunition (kind, name, assembly information, specification information, etc.); the multimedia data include ammunition pictures, three-dimensional models, exploded three-dimensional views, ammunition structural characteristics and related operation videos, so that the user quickly masters ammunition operation, use and accident handling; add, delete, modify and search operations on the existing database are performed through the ammunition identification database software, and as the system is gradually improved, ammunition information and model data are continuously refined in the database; at a later stage, the ammunition data in the glasses are updated and upgraded through a wired connection. B. Production information database. A data reservation function for digital management of ammunition quality (automatic entry of routine inspections, automatic identification of quality state, etc.) and automatic ammunition destruction warnings (automatic warning prompts for the destruction of dangerous goods); the existing data can be added, deleted, modified and searched. C. Logistics and storage database. A data reservation function holding the information for digital management of the ammunition warehouse (guidance to warehouse ammunition positions, automatic logging and statistics of ammunition allocation, etc.), providing data support for organization, command, scheduling and supervision; the existing data can be added, deleted, modified and searched.
As for the indexes: the ammunition two-dimensional code can be identified rapidly, within no more than 2 seconds; the ammunition markings on the cylindrical surface of the ammunition body and on the packing box can be identified with a misjudgment rate of no more than 40%; voice interaction and a voice assistant are supported; and the ammunition augmented reality interactive platform has the storage capacity for the visual display of information on not less than 100 kinds of ammunition.
Embodiment 5: Three-dimensional engine (key technology)
The three-dimensional engine employs Unity 3D. The engine is a mature, comprehensive development engine with many successful products of the same type in augmented reality application development.
The engine supports multiple platforms (PC, Android, iOS, Linux, etc.) and offers very good compatibility with the PC platform and the mobile Android platform used by this system; its performance is good and can be tuned as needed with respect to CPU, GPU, memory and so on.
The system therefore runs smoothly and stably on the augmented reality glasses, and development efficiency is extremely high. The engine supports augmented reality, touch and voice interaction technologies well, matching the functions required by the system; it supports more draw calls and mature optimization; it supports advanced AI; it supports physical simulation; it supports hot updating, serving system function and data updates well; it provides an extensible editor (animation, scenes, special effects, UI, particles, etc.); it supports third-party plug-ins and libraries well; it has good documentation and technical support; and it comes with complete development tools such as performance analysis and packaging.
Embodiment 6: Database configuration and data architecture (key technology)
The database includes the following data configuration modules: a two-dimensional code data generation module, a two-dimensional code data encryption module, a two-dimensional code data mapping module, a picture data module, a video data module, a model data module and a storage data module; the database as a whole is given a module-oriented expandable architecture, and each data module is given an open framework of data addition and deletion rights. The database is connected upward to the two-dimensional code scanning port and downward to the AR display port through open readable data links. The database comprises the following sub-databases: a two-dimensional code dictionary sub-database, a two-dimensional code encryption sub-database, an ammunition basic information sub-database, an ammunition three-dimensional vector sub-database, an ammunition warehouse digital management sub-database, an ammunition quality digital management sub-database and an ammunition destruction automatic warning prompt sub-database. The database server is configured to satisfy at least the following conditions: a simulation computing CPU not below an Intel i7, memory of not less than 8 GB, a hard disk of not less than 500 GB, and dedicated video memory of not less than 1 GB.
The database comprises a two-dimensional code dictionary sub-database whose storage capacity accommodates at least 100 kinds of ammunition information and the corresponding two-dimensional code dictionaries. The database comprises a two-dimensional code dictionary sub-database and a two-dimensional code encryption sub-database, with a category mapping architecture provided between the two. The database comprises an ammunition basic information sub-database adopting a standard exchangeable text data format. The ammunition basic information includes at least ammunition kind data, ammunition name data, ammunition assembly information data and ammunition specification data. The database comprises an ammunition three-dimensional vector sub-database compatible with ammunition appearance data, ammunition component structure and feature data, ammunition operation text data, ammunition operation graphic data, ammunition operation video data and ammunition accident handling data. The database comprises an ammunition warehouse digital management sub-database compatible with warehouse ammunition position guidance data and automatically logged ammunition allocation statistics. The database comprises an ammunition quality digital management sub-database compatible with automatically entered routine inspection data and automatically identified quality state data. The ammunition destruction automatic warning prompt sub-database is compatible with automatic warning prompt data for the destruction of dangerous goods. The database is provided with two software communication ports. The first port connects externally to the see-through optical waveguide display engine software and provides data configuration, timed/manual sleep and manual wake-up functions; it identifies the ammunition two-dimensional code within no more than 2 seconds; it identifies the ammunition marking on the cylindrical surface of the ammunition body with a misjudgment rate of no more than 40%; basic information, pictures, videos and the three-dimensional model of the ammunition are displayed and switched through touch panel operation, with model rotation supported; voice interaction and/or a voice assistant are supported. The second port supports packaging and importing database content to the AR display device for communication and storage.
SQLite is selected as the database, carrying all the ammunition data of the system. Through the ammunition identification database software the user can add, delete, modify and search the existing database, and as the system is gradually improved, ammunition information and model data are continuously refined in the database. SQLite is a lightweight relational database management system: it occupies very few resources, supports the Windows and Android operating systems required by this project, can be combined with many programming languages, processes data faster than MySQL, PostgreSQL and the like, and is an open source database. A schema and data-operation sketch follows.
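The following is a minimal sketch, using Python's built-in sqlite3 module, of how the sub-databases above might be laid out and how the add/delete/modify/search operations map onto SQL; all table and column names are assumptions made for illustration, since the patent does not publish the actual schema.

```python
# Illustrative sketch of the SQLite-backed basic information database.
# Table and column names are illustrative assumptions, not the patented schema.
import sqlite3

conn = sqlite3.connect("ammunition.db")
cur = conn.cursor()

# One table per sub-database, mirroring the configuration described above.
cur.executescript("""
CREATE TABLE IF NOT EXISTS qr_dictionary (
    qr_code   TEXT PRIMARY KEY,          -- two-dimensional code dictionary entry
    ammo_id   TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS ammo_basic_info (
    ammo_id   TEXT PRIMARY KEY,
    kind      TEXT, name TEXT,
    assembly  TEXT, specification TEXT   -- standard exchangeable text format
);
CREATE TABLE IF NOT EXISTS warehouse_mgmt (
    ammo_id   TEXT, position TEXT, allocated_at TEXT
);
""")

# Add / modify / search / delete ("add, delete, modify and search").
cur.execute("INSERT OR REPLACE INTO ammo_basic_info VALUES (?,?,?,?,?)",
            ("AMMO-0001", "HE", "Example round", "fuze+body", "cal. 122 mm"))
cur.execute("UPDATE ammo_basic_info SET name=? WHERE ammo_id=?",
            ("Example round v2", "AMMO-0001"))
print(cur.execute("SELECT * FROM ammo_basic_info").fetchall())
cur.execute("DELETE FROM ammo_basic_info WHERE ammo_id=?", ("AMMO-0001",))
conn.commit()
conn.close()
```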
Embodiment 7: Two-dimensional code identification and encryption (key technology)
The image recognition in this project involves two parts: ammunition two-dimensional code image recognition and ammunition marking recognition. Both the two-dimensional code and the ammunition marking must be recognized well, with mature and stable performance.
Image recognition
In principle, computer image recognition does not differ essentially from human image recognition. Humans recognize images by classifying their features and then identifying each category through those features; on seeing a picture, the brain quickly senses whether it has seen that picture, or one similar to it, before.
In this process the brain identifies the classified category in memory and checks whether a memory with the same or similar features exists, thereby recognizing whether the image has been seen before.
Image recognition technology is based on the main features of an image. Every image has its features: the letter A has a tip, P has a circle, and the center of Y has an acute angle. Studies of eye movement in image recognition show that the gaze always concentrates on the main features of the image, namely the places where the curvature of the contour is greatest or the contour direction changes suddenly, which carry the most information, and that the scan path of the eye always moves from one feature to the next in turn. Therefore, in the image recognition process, the perception mechanism must exclude redundant input and extract the key information. At the same time, there must be a mechanism in the brain responsible for integrating information, organizing the information obtained in stages into a complete perceptual map.
The process of image recognition technology comprises: information acquisition, preprocessing, feature extraction and selection, classifier design and classification decision. Information acquisition means converting information such as light or sound into electrical information through sensors, that is, obtaining the basic information of the object under study and converting it into information the machine can recognize. Preprocessing mainly refers to operations such as denoising, smoothing and transformation in image processing, which enhance the important features of the image. Feature extraction and selection means that, since the images under study are of various types, they must be distinguished by their own features, and the process of obtaining those features is feature extraction. A schematic skeleton of these stages follows.
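The stages can be pictured with the NumPy skeleton below; the grey-level-histogram feature and the nearest-template classifier are deliberately simple stand-ins chosen for illustration, not the recognition algorithms of the system.

```python
# Schematic skeleton of the recognition stages named above (acquisition is
# assumed to deliver a grayscale array): preprocessing, feature extraction,
# classification decision. All concrete choices here are illustrative.
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    # Smooth with a 3x3 mean filter implemented via simple shifts.
    img = image.astype(np.float32)
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / 9.0

def extract_feature(image: np.ndarray) -> np.ndarray:
    # A normalized grey-level histogram as the "key information".
    hist, _ = np.histogram(image, bins=32, range=(0, 255))
    return hist / max(hist.sum(), 1)

def classify(feature: np.ndarray, templates: dict) -> str:
    # Classification decision: the nearest stored feature template wins.
    return min(templates, key=lambda k: np.linalg.norm(feature - templates[k]))
```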
Ammunition two-dimensional code image recognition
The common processing flow for the two-dimensional code mainly comprises image preprocessing, locating the position detection patterns, locating the alignment pattern, perspective transformation, and decoding with error correction.
A. Image preprocessing: graying, denoising, distortion correction and binarization. The recognition process is easily affected by the environment, making codes hard to read; preprocessing improves image quality and the recognition conditions.
① Graying. Cameras output many data formats: a monochrome camera outputs a grayscale image directly, while color cameras output YUV422, YUV410, RGB565, RGB888 and so on. Two-dimensional code recognition needs only a single-channel grayscale image, so a conversion is required. Taking RGB888 as an example, the conversion formula is:
Gray = 0.2989·R + 0.5870·G + 0.1140·B
② Denoising. Noise can make feature location inaccurate and cause wrong decoding at the data stage. The common kinds are mainly Gaussian noise and salt-and-pepper noise; Gaussian filtering, median filtering or mean filtering can be used to improve image quality.
③ Distortion correction. Wide-angle and fisheye cameras have large distortion, and image deformation grows toward the edges of the field of view. In a strongly distorted image, not only does the 1:1:3:1:1 ratio of the position detection pattern no longer hold, but the modules in the data area have no standard size, which may make decoding inaccurate. In this case the image is corrected to an undistorted image by means of a distortion model.
④ Binarization. Under normal conditions the background and the QR code target are clearly distinguished and the illumination is uniform, so a simple global binarization method suffices; common methods include the fixed threshold method, the Otsu method and the histogram bimodal threshold method. Under uneven illumination these methods are unsuitable, since the global brightness imbalance prevents normal recognition; an adaptive local thresholding method is then needed, which can be realized by computing thresholds block by block and then equalizing them. A minimal sketch of this preprocessing chain follows.
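A minimal sketch of the four preprocessing steps, assuming the OpenCV Python bindings (opencv-python); the file name and parameter values are illustrative.

```python
# Minimal preprocessing chain for steps ①-④ above, assuming OpenCV.
import cv2

img = cv2.imread("qr_frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # ① graying: single channel
gray = cv2.medianBlur(gray, 3)                    # ② remove salt-and-pepper noise
# ③ distortion correction would call cv2.undistort(gray, K, dist_coeffs)
#   with the camera's calibrated intrinsic matrix K and distortion coefficients.
# ④ binarization: global Otsu thresholding when the illumination is even ...
_, bw_global = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# ... or adaptive local thresholding when the illumination is uneven.
bw_local = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 31, 10)
```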
B. Locating the position detection patterns: the characteristic runs of the position detection patterns are searched by scanning in the horizontal and vertical directions; scan lines that cross a position detection pattern several times yield candidate patterns, false candidates are removed by a screening strategy to determine the real patterns, and their orientation is determined.
C. Locating the alignment pattern: the alignment pattern is estimated from the detected position detection patterns.
D. Perspective transformation: a homography matrix is obtained from the located finder and alignment points, and a standard square image is obtained through perspective transformation. The perspective transformation formulas are:
(x, y, z)ᵀ = A·(u, v, 1)ᵀ, where A = (aij), i, j = 1, 2, 3, is the homography matrix, i.e.
x = a11·u + a12·v + a13
y = a21·u + a22·v + a23
z = a31·u + a32·v + a33
and the corrected image coordinates are
x' = x/z = (a11·u + a12·v + a13)/(a31·u + a32·v + a33)
y' = y/z = (a21·u + a22·v + a23)/(a31·u + a32·v + a33)
the perspective transformation effect is shown in fig. 2.
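An OpenCV sketch of step D is given below: assuming the four corner points have already been located by steps B and C (the coordinates shown are invented example detections), cv2.getPerspectiveTransform computes the a11..a33 matrix above and cv2.warpPerspective produces the standard square image.

```python
# Hedged sketch of the perspective transformation step, assuming OpenCV.
import cv2
import numpy as np

corners = np.float32([[105, 92], [410, 120], [388, 430], [80, 402]])  # example detections
side = 300                                                            # output size in pixels
square = np.float32([[0, 0], [side, 0], [side, side], [0, side]])

H = cv2.getPerspectiveTransform(corners, square)   # the a11..a33 matrix above
img = cv2.imread("qr_frame.png", cv2.IMREAD_GRAYSCALE)
rectified = cv2.warpPerspective(img, H, (side, side))
```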
E. Decoding and error correction: the two-dimensional code version information, format information, data and error correction codewords are decoded and compared. The data area is converted into a bit stream of 0s and 1s, which is checked and corrected with the error correction algorithm; after the encoding format is determined, decoding is performed and the data contained in the two-dimensional code are obtained.
F. Cylindrical-surface two-dimensional code recognition: the research object of this project includes two-dimensional code images on cylindrical surfaces. For these, the acquired two-dimensional code image is divided into 8 equal parts to correct the cylindrical distortion and improve the recognition rate; one possible implementation is sketched below.
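One plausible reading of the 8-equal-division correction, offered as an assumption-laden illustration only, since the patent does not spell out the geometry: the code region is cut into 8 vertical strips whose projected widths follow the cylinder's sine foreshortening, and each strip is stretched back to uniform width.

```python
# Hypothetical 8-strip cylinder correction (an assumption, not the patented
# procedure): strips near the cylinder's silhouette edges appear compressed,
# so each projected strip is resized back to equal width.
import cv2
import numpy as np

def unwrap_cylinder(code_region: np.ndarray, n: int = 8) -> np.ndarray:
    h, w = code_region.shape[:2]
    # Angular strip boundaries over the visible half-cylinder.
    theta = np.linspace(-np.pi / 2, np.pi / 2, n + 1)
    bounds = ((np.sin(theta) + 1.0) / 2.0 * w).astype(int)  # projected edges
    target_w = w // n
    strips = [cv2.resize(code_region[:, bounds[i]:max(bounds[i + 1], bounds[i] + 1)],
                         (target_w, h)) for i in range(n)]
    return np.hstack(strips)
```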
G. Two-dimensional code encryption: when the two-dimensional code is generated, the plaintext data are encrypted with a DES and RSA double encryption algorithm, and the corresponding decryption is performed during decoding, so that the plaintext data cannot be stolen by an unauthorized party.
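A minimal sketch of such DES plus RSA double encryption, assuming the pycryptodome package; the key sizes, ECB mode, padding and payload layout are illustrative choices, not the patented scheme.

```python
# Illustrative DES + RSA hybrid encryption (pip install pycryptodome).
# ECB mode and the byte layout are simplifications chosen for brevity.
import base64
from Crypto.Cipher import DES, PKCS1_OAEP
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

rsa_key = RSA.generate(2048)          # the authorized party holds the private key

def encrypt_payload(plaintext: bytes) -> bytes:
    des_key = get_random_bytes(8)     # fresh DES session key
    ct = DES.new(des_key, DES.MODE_ECB).encrypt(pad(plaintext, DES.block_size))
    wrapped = PKCS1_OAEP.new(rsa_key.publickey()).encrypt(des_key)  # RSA-wrap key
    return base64.b64encode(wrapped + ct)     # text-safe content for the QR symbol

def decrypt_payload(token: bytes) -> bytes:
    raw = base64.b64decode(token)
    wrapped, ct = raw[:256], raw[256:]        # 256 = RSA-2048 ciphertext length
    des_key = PKCS1_OAEP.new(rsa_key).decrypt(wrapped)
    return unpad(DES.new(des_key, DES.MODE_ECB).decrypt(ct), DES.block_size)
```

In this hybrid arrangement a fresh DES session key encrypts the payload and the RSA public key wraps that session key, so only the holder of the RSA private key can recover the plaintext.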
Ammunition identification recognition
The ammunition marking is the serial number data on the ammunition body and the packing box, composed of Chinese characters, English letters and digits. It appears in several places on the box or the ammunition, marking the several components of the ammunition respectively. As shown in fig. 3, the green boxes are the regions of the ammunition marking and the red ones are the data to be identified.
For such data, text recognition techniques are used: the image is captured by a camera, the shapes are determined by detecting the patterns of dark and light, and the shapes are then translated into computer text using character recognition methods.
A. Preprocessing: mainly graying, binarization, noise removal, tilt correction and the like.
Graying: a grayscale image contains only luminance information and no color information. In the RGB model, a pixel is a gray shade when R = G = B, and that common value is called the gray value. The conversion generally uses the formula Gray = 0.299R + 0.587G + 0.114B, whose weights reflect the physiological sensitivity of the human eye to the three channels.
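The formula as a one-function sketch (NumPy assumed):

```python
import numpy as np

def to_gray(rgb):
    """Luminance conversion with the perceptual weights quoted above:
    Gray = 0.299 R + 0.587 G + 0.114 B (rgb is an HxWx3 uint8 array)."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb @ weights).astype(np.uint8)
```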
B. Binarization: what is not black is white. Most pictures shot by a camera are color images carrying a huge amount of information, yet their content can simply be divided into foreground and background. To let the computer recognize characters faster and better, the color picture is first reduced so that only foreground and background information remain: the foreground is defined as black and the background as white, giving a binary image. The grayed image therefore needs a further binarization step to separate the text from the background. Binarization revolves around the concept of a threshold: find a suitable limiting value, and map everything above or below it to white or black, i.e. 255 or 0.
A histogram method (also called the two-peak method) can be used to find the binarization threshold; the histogram is an important feature of the image. The method assumes the image consists of foreground and background, each forming a peak on the gray-level histogram, and takes the lowest valley between the two peaks as the threshold.
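A minimal sketch of the two-peak method (the smoothing width and the fallback value are illustrative assumptions):

```python
import numpy as np

def bimodal_threshold(gray, smooth=5):
    """Two-peak (valley) threshold sketch: smooth the gray-level histogram,
    take the two highest local peaks, and return the lowest valley between
    them. Assumes the histogram really is bimodal."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    hist = np.convolve(hist, np.ones(smooth) / smooth, mode="same")  # de-jitter
    peaks = [i for i in range(1, 255)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]
    if len(peaks) < 2:
        return 128                       # fallback: image is not bimodal
    p1, p2 = sorted(sorted(peaks, key=lambda i: hist[i])[-2:])
    return p1 + int(np.argmin(hist[p1:p2 + 1]))   # valley = threshold
```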
C. Image denoising: in reality, digital images are often degraded during digitization and transmission by the imaging equipment and external environmental noise; such images are called noisy images. The process of reducing noise in a digital image is called image denoising. In practice, the binarized picture can show many small black dots that carry no useful information and strongly interfere with the later contour segmentation and recognition, so denoising is a very important stage: the quality of the noise reduction directly affects the accuracy of picture recognition.
D. Tilt correction: the most common method for correcting a rotated, tilted picture is the Hough transform. Its principle is to dilate the picture so that broken characters connect into a straight line, which makes line detection easy; once the line's angle has been computed, a rotation algorithm restores the tilted picture to the horizontal.
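A minimal OpenCV sketch of Hough-based deskewing (the dilation kernel and Hough parameters are illustrative):

```python
import cv2
import numpy as np

def deskew(bw):
    """Tilt-correction sketch: dilate so broken characters merge into
    line-like blobs, detect straight lines with the probabilistic Hough
    transform, then rotate by the median line angle."""
    thick = cv2.dilate(bw, np.ones((3, 15), np.uint8))    # connect characters
    lines = cv2.HoughLinesP(thick, 1, np.pi / 180, threshold=100,
                            minLineLength=bw.shape[1] // 3, maxLineGap=20)
    if lines is None:
        return bw                                         # nothing detected
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1))
              for x1, y1, x2, y2 in lines[:, 0]]
    h, w = bw.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), np.median(angles), 1.0)
    return cv2.warpAffine(bw, M, (w, h), flags=cv2.INTER_NEAREST)
```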
E. Picture segmentation: for multi-line text, character segmentation consists of line segmentation followed by character segmentation, and tilt correction is its precondition. The tilt-corrected text is projected onto the Y axis and all values are accumulated, giving a histogram on the Y axis whose valleys separate the lines; see fig. 4.
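A projection-profile sketch of the line-segmentation step (NumPy assumed; applying the same idea along the X axis then splits each line into characters):

```python
import numpy as np

def split_lines(bw):
    """Project a binary image (text pixels nonzero) onto the Y axis and
    cut at empty rows, returning one sub-image per text line."""
    profile = (bw > 0).sum(axis=1)           # histogram on the Y axis
    rows = np.where(profile > 0)[0]
    if rows.size == 0:
        return []
    breaks = np.where(np.diff(rows) > 1)[0]  # gaps between text lines
    starts = np.r_[rows[0], rows[breaks + 1]]
    ends = np.r_[rows[breaks], rows[-1]]
    return [bw[s:e + 1] for s, e in zip(starts, ends)]
```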
F. Character recognition: image slices are obtained, feature vectors are extracted from each scanned sub-image, coarse template classification and fine template matching are performed against the feature templates, and the characters are thereby recognized.
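A minimal matching sketch (OpenCV assumed; `templates` is a hypothetical dict of same-sized glyph images, and the coarse-classification pass is omitted):

```python
import cv2

def match_char(glyph, templates):
    """Compare a segmented character image against labelled templates and
    return the best-scoring label (normalised cross-correlation)."""
    scores = {label: cv2.matchTemplate(glyph, tpl, cv2.TM_CCOEFF_NORMED).max()
              for label, tpl in templates.items()}
    return max(scores, key=scores.get)
```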
Example 8 Key technology-spatial instantaneous positioning technology
The spatial instantaneous positioning technology, i.e. SLAM (simultaneous localization and mapping), models the scene and localizes the camera at the same time; its purpose is to estimate the position and relative motion trajectory of a device within the scene. SLAM can be understood simply as: in a completely unfamiliar environment, the machine builds data about its surroundings on the fly by means of vision and sensors.
The principle of visual feature formation is shown in fig. 5: the camera observes a 3D point in the scene, and imaging that point with the camera yields its projection in the image.
The SLAM method mainly calibrates these points, then fuses the visually identified feature points with the sensor signals in the algorithm to reconstruct the scene.
Spatial positioning technology was first used militarily, for example in guided missiles and aircraft that need to localize themselves in the air, and was later optimized for intelligent robots, unmanned vehicles and sweeping robots. VR/AR likewise depends on spatial positioning technology, and companies have been developing and mastering it: Google's Project Tango (the Tango tablet of a few years ago), Microsoft HoloLens and Qualcomm, with some hardware vendors already offering reference designs that work well. Apple's ARKit, as well as Facebook and Snapchat, are also rolling out related applications.
SLAM spatial localization technique/Vision method
SLAM spatial localization needs to combine visual and sensor information. On the vision side several setups are possible: laser radar, binocular cameras, monocular cameras, RGB-D and so on. A visual SLAM method comprises two modules: Tracking, which estimates the camera pose given known 3D point positions, and Mapping, which updates the positions of the 3D points. Two kinds of visual front end also need to be distinguished: methods based on image feature points, such as PTAM and ORB-SLAM, and direct methods that compare pixel gray-level differences, such as LSD-SLAM and DSO.
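A minimal sketch of the feature-point front end (OpenCV's ORB; the frame file names are illustrative):

```python
import cv2

# ORB keypoints matched between consecutive frames: the raw material that
# Tracking uses for pose estimation and Mapping uses for triangulation.
orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
kp1, des1 = orb.detectAndCompute(prev, None)
kp2, des2 = orb.detectAndCompute(curr, None)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
```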
SLAM spatial localization technique/sensor
The sensor is one of the two basic elements for realizing SLAM. Several inertial sensors are common in the VR/AR field, above all gyroscopes, which come in three kinds: mechanical, laser and micromechanical. A mechanical gyroscope contains a rotor spinning at high speed whose direction stays unchanged when the whole device rotates, so the device's rotation can be read off; gyroscopes of this kind were applied to ships hundreds of years ago. Today's high-precision gyroscope is the laser gyroscope: a laser source in the middle emits light along two opposite paths; if the object is still, the two path lengths are the same and their phase difference is zero, while if the object rotates, the two light paths change minutely, and identifying the resulting phase difference yields the rotation rate of the whole device. The precision is such that a missile flying for several hours in the air accumulates an error of only a few hundred meters, or even within 100 meters. The micromechanical gyroscope, the kind common in mobile phones, instead senses rotation with two vibrating structures: while the object rotates, the induced angle can be identified to obtain the rotation rate. This miniaturized gyroscope is far less accurate than a laser gyroscope and cannot be used alone, so it must be combined with other sensors and with visual information.
Example 9 Key technology-three-dimensional registration technology
The three-dimensional registration technology is a basic technology for realizing mobile augmented reality applications and the key determinant of a mobile augmented reality system's performance, so it has always been a focus and a difficulty of mobile augmented reality research. Its main tasks are: detect in real time the pose of the camera relative to the real scene, determine the position in the projection plane where the virtual information should be superimposed, and display the virtual information at the correct screen position in real time, completing three-dimensional registration. Three criteria determine registration performance: real-time capability, stability and robustness.
The AR glasses integrate a hybrid registration algorithm that combines computer vision with hardware sensors, achieving high quality and precision. Registration algorithms based on computer vision: after computer vision acquires information about the real scene, image processing techniques identify, track and localize it. These algorithms divide further into marker-based registration algorithms and markerless registration algorithms based on natural feature points. Registration algorithms based on hardware sensors: the sensor tracking technologies of traditional augmented reality systems mainly include inertial navigation systems, the Global Positioning System (GPS), and electromagnetic, optical or ultrasonic position trackers. The main problems of inertial navigation are that the angular and positional tracking errors grow continuously over time, drift is large, and the equipment is bulky and heavy; GPS positioning error is large, and GPS signals cannot be received normally indoors, in canyons or in other complex terrain; electromagnetic, optical or ultrasonic position trackers work by transmitting and receiving, so their deployment sites are fixed and their range is limited.
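A minimal sketch of the vision-based branch, recovering the camera pose from four marker correspondences via PnP (OpenCV assumed; the marker corners, detected pixels and intrinsics are all illustrative):

```python
import cv2
import numpy as np

# 3D marker corners in the object frame (meters) and their detected 2D
# projections in the image; the recovered pose places virtual content.
object_pts = np.float32([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]])
image_pts = np.float32([[322, 241], [402, 238], [405, 318], [325, 322]])
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)   # camera rotation; tvec is the translation
```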
Example 10 Key technique-virtual-real fusion display technique
Research on virtual-real fused scene display faces two main problems: first, how to fuse and superimpose virtual object information onto the real scene; second, how to handle the delay of virtual object information during fusion. With an optical see-through head-mounted display, the user sees the real environment in real time, while the virtual object information that augments it reaches the display only after a series of system delays. When the user's head, the surrounding scene or objects move, this delay makes the augmenting information 'drift' within the real environment. A video see-through display can mitigate the problem to some extent: because the video stream and the display rate of the virtual object information are both controlled by the program, real-time requirements can be met and the 'drift' phenomenon is reduced or even eliminated.
Example 11 Key technology-Speech recognition technology
Speech recognition technology enables an intelligent device to understand human speech. It is an interdisciplinary science spanning digital signal processing, artificial intelligence, linguistics, mathematical statistics, acoustics, affective science and psychology. The technology supports many applications such as automated customer service, automatic speech translation, command and control, and voice verification codes. In recent years, with the rise of artificial intelligence, speech recognition has made major breakthroughs in both theory and application, moving from the laboratory to the market and into practical use.
The essence of speech recognition is pattern recognition based on speech characteristic parameters, i.e. through learning, the system can classify the input speech according to a certain pattern, and then find out the best matching result according to the judgment criterion. Currently, the pattern matching principle has been applied in most speech recognition systems. Fig. 6 is a block diagram of a speech recognition system based on the principle of pattern matching.
General pattern recognition comprises basic modules such as preprocessing, feature extraction and pattern matching. Input speech is first preprocessed, including framing, windowing and pre-emphasis. Feature extraction follows, making the choice of suitable feature parameters especially important. Commonly used feature parameters include: pitch period, formants, short-time average energy or amplitude, linear prediction coefficients (LPC), perceptual linear prediction coefficients (PLP), short-time average zero-crossing rate, linear prediction cepstral coefficients (LPCC), autocorrelation functions, mel-frequency cepstral coefficients (MFCC), wavelet transform coefficients, empirical mode decomposition coefficients (EMD), gammatone filter cepstral coefficients (GFCC), and so on. At recognition time, a template is generated for the test speech following the training procedure, and recognition is finally performed according to a distortion decision criterion. Common distortion decision criteria include Euclidean distance, covariance matrix and Bayesian distance.
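A minimal feature-extraction sketch (librosa assumed; the file names and the plain Euclidean frame distance are illustrative stand-ins for the template training and distortion criterion described above):

```python
import numpy as np
import librosa

# MFCCs, one of the feature parameters listed above, plus a Euclidean
# frame-distance score against a stored template.
y, sr = librosa.load("command.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, n_frames)

template = np.load("template_mfcc.npy")              # same shape by assumption
n = min(mfcc.shape[1], template.shape[1])
distortion = np.linalg.norm(mfcc[:, :n] - template[:, :n], axis=0).mean()
# real systems would add DTW alignment or an HMM/DNN decoder here
```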
Example 12 Key technique-three-dimensional ammunition solid modeling technique
Ammunition solid model modeling is the process of constructing a solid model with three-dimensional data in a virtual three-dimensional space, reproducing in that space attributes of the real object such as structure, appearance, material and motion. Three-dimensional solid models fall into two types: surface models, which suit quantitative production, and polygonal models, which are used for the visual presentation of three-dimensional simulation, games, films and the like. The mainstream three-dimensional solid modeling methods are:

A. Three-dimensional software modeling. Many excellent modeling packages are on the market, such as 3DMAX, MAYA, UG and AUTOCAD. They start from basic geometric elements (cubes, spheres and so on) and build complex geometric scenes through a series of geometric operations such as translation, rotation and stretching. This covers geometric modeling, behavior modeling, physical modeling, object-characteristic modeling, model segmentation and the like.

B. Modeling with instruments and equipment. Three-dimensional scanners are one of the important tools currently used for three-dimensional modeling of real objects. They quickly and conveniently convert the three-dimensional color information of the real world into digital signals a computer can process directly, providing an effective means of digitizing real objects. Scanning yields the three-dimensional spatial coordinates of every sampled point on the object's surface; color scanning additionally yields each point's color and can output the object's surface color texture map.

C. Modeling from images or video. Image-based modeling and rendering is an extremely active research area in computer graphics. Compared with traditional geometry-based modeling and rendering, it offers the most natural route to photorealism: modeling becomes faster and more convenient, and high rendering speed and high realism are attainable. Its main objective is to recover the three-dimensional geometry of a scene from two-dimensional images, a problem originally posed in computer graphics and computer vision.
According to the three-dimensional simulation needs of the solid model, the principles of three-dimensional solid modeling are: first, the model structure and proportions are highly accurate; second, the model's polygon count is highly controllable, meeting real-time rendering performance requirements; third, the model has multiple LOD levels of detail, satisfying different viewing distances and rendering performance optimization; fourth, the model has both intact and damaged appearance states; fifth, the model's materials and textures are highly realistic. Three-dimensional software modeling is used as the main method, combined with the other modeling methods to improve modeling efficiency and effect. The modeling workflow adopted in the project is shown in figure 7 below.
We divide solid model modeling into two stages, a data acquisition stage and a model generation stage, and mainly adopt the three-dimensional software modeling method.
Data acquisition stage. Physical appearance parameters and motion-effect parameters are acquired as the parameter basis for constructing a high-precision, high-fidelity solid model. A reference model is then generated: instrument scanning or image-based generation quickly produces a model that serves as the reference and basis for three-dimensional production. Model generation stage. Original high-precision model: the physical object is restored at maximum precision in three-dimensional software, including the main structural appearance, fine textures, surface relief and other details. The original high-precision model cannot be used directly; its details are transcribed in various ways at a later stage and fully presented at the application stage. Application-level high-precision model: model retopology turns the original high-precision model into a high-precision model suitable for the virtual simulation's runtime level, while baking records the detail of the original model, chiefly as a normal map representing its fine relief. Material and texture generation: physically based rendering (PBR) is adopted, with surface materials authored directly from physical parameters, so that the model's appearance obeys physical laws and lighting is computed more realistically.
The core algorithm is the physically based rendering lighting equation (given only as a formula image in the original).
The results are stored as maps: albedo map (base color), normal map, metallic map, roughness map, ambient occlusion (AO) map and the like. Action and effect generation: joints, parent-child relationships, skeleton binding and physical-behavior binding are added to the solid model; advance, deploy and withdraw actions are produced for the model, and smoke, fire and similar effects are made with the particle system. Multi-appearance model generation: damaged and destroyed appearance models are generated from the application-level high-precision model. Application-level multi-precision model generation: medium- and low-precision models are generated from the application-level high-precision model, providing multiple LOD levels of detail. Parameter-driven binding: the multi-precision, multi-appearance models are synthesized and bound to their actions and effects, so that the model can be driven by parameters.
The software modeling approach adopted by the project makes precision and quality highly controllable and restores the appearance and motion effects of the real object with high fidelity; physically based rendering (PBR) gives a superior display effect; multiple appearance states enrich the display and satisfy its requirements; multi-level LOD meets the rendering-effect and performance-optimization needs of different viewing distances; and the parameterized model allows parameter-driven control from the simulation system.
Example 13 Key technique-computer animation technique
Regarding computer animation technology, the related research divides into three levels: bottom-level motion control methods, motion editing and synthesis methods, and high-level motion control methods. Bottom-level motion control mainly covers key-frame methods, physics-based methods, kinematic methods, motion-capture-data methods and procedural methods. Motion editing mainly performs retargeting, offset mapping, deformation and smoothing on a single motion, while motion synthesis mainly performs interpolation, concatenation and blending across multiple motions. High-level motion control mainly covers path planning, behavior planning, emotion assignment, path following and the like. The animation module of the virtual simulation engine must encapsulate these technologies, ultimately supporting both procedural simulation and key-frame-based simulation. The computer animation technology implementation is shown in figure 8 below.
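A minimal key-frame sketch for the bottom-level motion-control layer (NumPy assumed; production engines add easing curves, quaternion interpolation and blending):

```python
import numpy as np

def sample_keyframes(times, values, t):
    """Linear interpolation of one animation channel (e.g. a joint angle)
    between key frames, sampled at arbitrary times t."""
    return np.interp(t, times, values)

# usage: 0 -> 90 degrees over one second, sampled at 60 fps
angles = sample_keyframes([0.0, 1.0], [0.0, 90.0], np.linspace(0, 1, 60))
```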
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
In various embodiments, the hardware may directly employ existing intelligent devices, including but not limited to industrial personal computers, PCs, smartphones, handheld all-in-one machines, floor-standing all-in-one machines and the like. The input device preferably adopts an on-screen keyboard; the data storage and computation module adopts existing memory, processors and controllers; internal communication adopts existing communication ports and protocols; and remote communication adopts the existing GPRS network, the Web and the like.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative: the division into modules or units is merely a division by logical function, and in actual implementation there may be other divisions; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through interfaces, devices or units, and may be electrical, mechanical or in other forms. Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Each functional unit in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in the form of hardware or of a software functional unit. The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the embodiments may also be implemented by a computer program instructing related hardware; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form and the like. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a U-disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in each jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A database configuration for an eyeglass-handle-carried ammunition identification system, characterized by: hardware of the glasses-handle continuous ammunition carrying identification system comprises AR glasses and a database server, and software of the glasses-handle continuous ammunition carrying identification system comprises augmented reality ammunition identifying glasses software and a database;
the database comprises the following data configuration modules: a two-dimensional code data generating module, a two-dimensional code data encrypting module, a two-dimensional code data mapping module, a picture data module, a video data module, a model data module and a storage data module; the database as a whole is provided with a module-oriented expandable architecture, and each data module is provided with an open framework of data addition and deletion permissions; the database is connected upward to the two-dimensional code scanning port and downward to the AR display port through open, readable data links;
the database comprises the following sub-databases: the system comprises a two-dimensional code dictionary sub-database, a two-dimensional code encryption sub-database, an ammunition basic information sub-database, an ammunition three-dimensional vector sub-database, an ammunition warehouse digital management sub-database, an ammunition quality digital management sub-database and an ammunition destruction automatic warning prompt sub-database;
the server of the database is set to at least satisfy the following conditions: the simulation calculation CPU is not lower than I7, the memory is not less than 8G, the hard disk is not less than 500G, and the independent video memory is not less than 1G.
2. A database configuration for an eyeglass-handle-carried ammunition identification system according to claim 1, characterized in that: the database comprises a two-dimensional code dictionary sub-database, and the storage capacity of the two-dimensional code dictionary sub-database is compatible with at least 100 kinds of ammunition information and corresponding two-dimensional code dictionaries.
3. A database configuration for an eyeglass-handle-carried ammunition identification system according to claim 1, characterized in that: the database comprises a two-dimensional code dictionary sub-database and a two-dimensional code encryption sub-database, and the two-dimensional code encryption sub-database and the two-dimensional code dictionary sub-database are provided with category mapping function architectures.
4. A database configuration for an eyeglass-handle-carried ammunition identification system according to claim 1, characterized in that: the database comprises an ammunition basic information sub-database, and a standard exchangeable text data format is set in the ammunition basic information sub-database.
5. A database configuration for an eyeglass-handle-carried ammunition identification system according to claim 1, characterized in that: the ammunition basic information at least comprises ammunition species data, ammunition name data, ammunition assembly information data and ammunition metadata information data.
6. A database configuration for an eyeglass-handle-carried ammunition identification system according to claim 1, characterized in that: the database comprises an ammunition three-dimensional vector sub-database, compatible ammunition appearance data, ammunition component structure sign data, ammunition operation use text data, ammunition operation use graphic data, ammunition operation use video data and ammunition accident handling data.
7. A database configuration for an eyeglass-handle-carried ammunition identification system according to claim 1, characterized in that: the database comprises an ammunition warehouse digital management sub-database and is compatible with warehouse ammunition position leading data and ammunition allocation automatic logging statistical data.
8. A database configuration for an eyeglass-handle-carried ammunition identification system according to claim 1, characterized in that: the database comprises an ammunition quality digital management sub-database and is compatible with conventional detection automatic entry data and quality state automatic identification data.
9. A database configuration for an eyeglass-handle-carried ammunition identification system according to claim 1, characterized in that: the database ammunition destruction automatic warning prompt sub-database is compatible with the automatic warning prompt data for destroying dangerous goods.
10. A database configuration for an eyeglass-handle-carried ammunition identification system according to claim 1, characterized in that: the database is provided with two software communication ports; the first port is externally connected with the see-through optical waveguide display engine software and is configured with the following functions: data display, timed/manual sleep setting and manual wake-up; rapid identification of the ammunition two-dimensional code, with an identification time of no more than 2 seconds; identification of ammunition markings on the cylindrical surface of an ammunition body, with a misjudgment rate of no more than 40%; display of identified ammunition basic information, pictures, videos and three-dimensional models, switched by touch-panel operation, with support for rotating the models; support for voice interaction and/or a voice assistant; the second port supports packaging and importing database content to the AR display device for communication and storage.
Priority application CN202011620068.1A, filed 2020-12-30: Database configuration for glasses-handle continuous ammunition identification system. Publication CN112765125A, published 2021-05-07 (status: pending). Family ID: 75698124. Country: China (CN).



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20210507)