CN112633442B - Ammunition identification system based on visual perception technology - Google Patents

Ammunition identification system based on visual perception technology

Info

Publication number
CN112633442B
CN112633442B (application CN202011621699.5A)
Authority
CN
China
Prior art keywords
ammunition
model
information
dimensional
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011621699.5A
Other languages
Chinese (zh)
Other versions
CN112633442A (en)
Inventor
王彬
贾昊楠
陈明华
姜志宝
王韶光
尹会进
张洋洋
闫媛媛
王维娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
32181 Troops of PLA
Original Assignee
32181 Troops of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 32181 Troops of PLA filed Critical 32181 Troops of PLA
Priority to CN202011621699.5A priority Critical patent/CN112633442B/en
Publication of CN112633442A publication Critical patent/CN112633442A/en
Application granted granted Critical
Publication of CN112633442B publication Critical patent/CN112633442B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an ammunition identification system based on visual perception technology, comprising an AR glasses system and a database system. The AR glasses system includes: an image recognition unit for recognizing the two-dimensional codes and ammunition markings of ammunition; a virtual scene generation unit for matching the information recognized by the image recognition unit against the ammunition information stored in the database system, processing the interaction data and tracking data generated by the interaction unit, and generating a virtual picture matched with the real environment, which is transmitted to the head display unit; a head display unit for displaying the basic information, pictures, videos and three-dimensional model of the ammunition; and a low-power management unit which, when no operation occurs, starts a sleep countdown and, once it expires, places the system in an ultra-low-power state. The invention enables rapid and accurate identification of basic ammunition information, three-dimensional display of the ammunition structure, and visual demonstration of ammunition operating procedures and requirements.

Description

Ammunition identification system based on visual perception technology
Technical Field
The invention relates to the field of ammunition management, in particular to an ammunition identification system based on a visual perception technology.
Background
The use and management of ammunition currently present the following problems: (1) ammunition comes in many varieties with diverse operating procedures, so the cost of learning ammunition knowledge is extremely high; (2) ammunition support personnel and users lack basic ammunition knowledge, and therefore lack the ability to operate and use ammunition; (3) owing to the nature of ammunition, actual learning and use carry significant safety hazards.
Disclosure of Invention
To solve these problems, the invention provides an ammunition identification system based on visual perception technology that enables rapid and accurate identification of basic ammunition information, three-dimensional display of the ammunition structure, and visual demonstration of ammunition operating procedures and requirements, so that ammunition support personnel and users quickly master ammunition knowledge, visually perceive ammunition structure and operating procedures, and face fewer safety hazards during ammunition use.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
An ammunition identification system based on visual perception technology comprises an AR glasses system and a database system. The AR glasses system comprises an AR glasses main body, an image recognition unit, a head display unit, an interaction unit, a virtual scene generation unit and a low-power management unit. The image recognition unit recognizes the two-dimensional codes and ammunition markings of ammunition. The virtual scene generation unit matches the information recognized by the image recognition unit against the ammunition information stored in the database system, processes the interaction data and tracking data generated by the interaction unit, generates a virtual picture matched with the real environment, and transmits it to the head display unit. The head display unit displays the basic information, pictures, videos and three-dimensional model of the ammunition. When no operation occurs, the low-power management unit starts a sleep countdown and, once it expires, the system enters an ultra-low-power state.
Optionally, the two-dimensional code is placed on the cylindrical surface of the ammunition body or on the ammunition packaging box, and the two-dimensional code includes Longbei codes and Hamming codes.
Optionally, the ammunition marking is placed on the cylindrical surface of the ammunition body or on the ammunition packing box, and the marking consists of Chinese characters, English letters and numerals.
Optionally, the head display unit is an optical see-through display.
Optionally, the interaction unit includes a handle interaction module, a voice interaction module and a gesture interaction module; the handle interaction module is connected to the AR glasses main body and includes a touch pad used for switching the display interface of the head display unit; the voice interaction module realizes interaction with the AR glasses system through spoken instructions; and the gesture interaction module realizes interaction with the AR glasses system by capturing gestures and interpreting them as instructions.
Optionally, the basic information of the ammunition includes the type, name, assembly information and specification data of the ammunition.
Optionally, the video of ammunition includes an operational use video, an accident maintenance video and an ammunition dismantling video.
Optionally, the picture of the ammunition comprises a picture of an ammunition body and a manual of ammunition.
Optionally, the three-dimensional model of the ammunition is produced by three-dimensional modeling and virtual assembly of its parts in SolidWorks.
Optionally, the head display unit is further configured to display warehouse information and production information, where the warehouse information includes warehouse ammunition location guidance and ammunition allocation statistics, and the production information includes ammunition quality status and destruction prompts.
Compared with the prior art, the invention offers the following technical advances:
Through advanced means such as simulation model demonstration and augmented-reality interaction, the system enables ammunition users to rapidly and accurately identify basic ammunition information, view the ammunition structure in three dimensions, and follow visual demonstrations of ammunition operating procedures and requirements. Support technicians and operators quickly and intuitively master the basic performance of the ammunition and become familiar with its operation, which reduces the preparation time before ammunition use and the safety hazards of ammunition use.
A visual virtual system is constructed using AR augmented-reality and computer simulation technology, so that ammunition support personnel and users can quickly master ammunition knowledge and visually perceive ammunition structure and operating procedures. More importantly, the internal working mechanism of much ammunition cannot be observed even after disassembly. Augmented-reality visual simulation can display the structural characteristics and working process of a system clearly and vividly, provide a large amount of intuitive ammunition structural information, and greatly improve the efficiency of learning and mastering skills.
The system contains multimedia data such as pictures, videos and three-dimensional models of ammunition operation and accident handling; mastering ammunition operation and handling knowledge quickly through virtual learning and practice further reduces the probability of safety accidents.
Through subsequent function expansion, the system realizes digital management of ammunition warehouses (location guidance of ammunition in warehouses, automatic ledger statistics of ammunition allocation, and so on), digital management of ammunition quality (automatic entry of routine inspections, automatic identification of quality status, and so on) and automatic warning prompts for ammunition destruction (automatic warnings for dangerous goods to be destroyed), laying a foundation for improving the informatized management of army ammunition.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
In the drawings:
fig. 1 is a schematic structural view of the present invention.
FIG. 2 is a block diagram of a speech recognition system based on the pattern matching principle of the present invention.
FIG. 3 is a schematic diagram of the operation of the video viewing process according to the present invention.
Fig. 4 is a schematic diagram of the operation procedure of checking pictures according to the present invention.
FIG. 5 is a schematic diagram of the operation process of viewing the three-dimensional model according to the present invention.
FIG. 6 is a schematic diagram of layering of a three-dimensional modeling assembly.
Fig. 7 is a schematic diagram of three-dimensional modeling of ammunition.
Detailed Description
The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
As shown in fig. 1, the invention discloses an ammunition identification system based on a visual perception technology, which comprises an AR glasses system and a database system, wherein the AR glasses system specifically comprises an AR glasses main body, an image identification unit, a head display unit, an interaction unit, a virtual scene generation unit and a low-power consumption management unit.
The AR glasses main body is a split (tethered) design. It supports adjustment of a model's structural dimensions and angles; multiple editing modes for the ammunition three-dimensional model such as moving, disassembly and combination; addition of multimedia information and model overlays; virtual-real superposition of physical and virtual objects; synchronous display of the first and third viewing angles; synchronous display and editing on multiple terminals (tablets, glasses, monitors, and so on); retrieval of cloud data over 4G/5G/Wi-Fi; remote voice communication and interaction; import of common digital model formats; gesture recognition and voice interaction; fingerprint recognition; NFC (near field communication); integration and application of various AI (artificial intelligence) algorithms; customization and adaptation of application software; GPS and BeiDou satellite positioning; and a remote assistance function. The AR glasses main body also provides an open application programming interface to meet the requirements of subsequent secondary development.
In this embodiment, the performance parameters of the AR glasses body are:
Processor: Qualcomm Snapdragon 845;
Memory: 6 GB;
Built-in storage: 64 GB;
Connectivity: Wi-Fi, Bluetooth, USB Type-C;
Display: Micro-OLED;
Number of displays: dual;
Monocular resolution: not less than 1920 x 1080;
Field of view: 43 degrees;
Contrast ratio: 10000:1;
Video: 720P@30fps, 1080P@30fps;
Autofocus: supported;
Operating system: Android (with an open application programming interface);
Gesture interaction: supported;
Sensors (glasses): accelerometer, gyroscope, magnetometer, light sensor;
Audio: stereo headphones/microphone;
Memory card: expandable 256 GB Micro SD;
Battery capacity: 6300 mAh;
Runtime at full load: more than 3 hours;
Gesture recognition: supported;
SLAM: 6DOF tracking;
Positioning accuracy: 99%;
CPU occupancy: 10%;
Initialization time: less than 1 s;
Monocular/binocular modes: supported;
Closed-loop relocalization speed: less than 2 s;
Offline map: supports multi-device cloud-side sharing of a unified coordinate system;
Motion prediction: < 25 ms;
3D mapping: refresh rate 10 Hz; mesh precision 80%; scanning space 10 m x 10 m.
The interaction unit comprises a handle interaction module, a voice interaction module and a gesture interaction module. The handle interaction module is connected to the AR glasses main body and comprises a return key, a home key, a menu key, a system indicator light, a power key, volume keys, a storage expansion slot, a USB interface and a touch pad. The touch pad senses the movement of the user's hand to control the pointer, while the physical keys provide selection and confirmation operations, together serving as a mouse substitute and giving the user a good interactive experience. Integrating the handle device makes interactive input on virtual objects convenient; for example, the touch pad is used to switch the display interface of the head display unit.
As shown in fig. 2, the voice interaction module realizes interaction with the AR glasses system through spoken instructions. First, the input speech is preprocessed, including framing, windowing and pre-emphasis. Second, features are extracted, so the choice of suitable feature parameters is particularly important. Common feature parameters include: pitch period, formants, short-time average energy or amplitude, linear prediction coefficients (LPC), perceptual linear prediction coefficients (PLP), short-time average zero-crossing rate, linear prediction cepstral coefficients (LPCC), autocorrelation functions, Mel-frequency cepstral coefficients (MFCC), wavelet transform coefficients, empirical mode decomposition coefficients (EMD), Gammatone filter coefficients (GFCC), and so on. During actual recognition, templates are generated for the test speech following the same procedure used in training, and recognition is performed according to a distortion decision criterion; common criteria include Euclidean distance, covariance matrix and Bayesian distance. The AR glasses main body integrates a microphone module and a loudspeaker module and, combined with a speech recognition SDK, realizes accurate offline speech recognition and good voice interaction. Speech recognition runs locally without a network or a companion app; the response is fast, the footprint small and the cost low; it recognizes numbers and phrases of various lengths, requires no back-end server and carries no later maintenance burden.
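As a concrete illustration of the preprocessing chain, a minimal numpy sketch of pre-emphasis, framing and windowing follows; the 400-sample frame (25 ms at 16 kHz), the 160-sample hop and the 0.97 pre-emphasis coefficient are illustrative assumptions rather than values specified by this embodiment:

import numpy as np

def preprocess_speech(signal: np.ndarray, frame_len: int = 400,
                      hop: int = 160, alpha: float = 0.97) -> np.ndarray:
    """Pre-emphasis, framing and Hamming windowing of a 1-D speech signal
    (assumed to be at least frame_len samples long)."""
    # Pre-emphasis boosts high frequencies: y[n] = x[n] - alpha * x[n-1]
    y = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Split into overlapping frames
    n_frames = 1 + (len(y) - frame_len) // hop
    frames = np.stack([y[i * hop:i * hop + frame_len] for i in range(n_frames)])
    # A Hamming window on each frame reduces spectral leakage
    return frames * np.hamming(frame_len)

Feature parameters such as the MFCCs listed above are then computed frame by frame on this windowed output.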
The gesture interaction module realizes interaction with the AR glasses system by capturing gestures and interpreting them as instructions; gesture recognition is a perceptual user interface that allows a computer to capture and interpret human gestures as commands. Gestures are highly varied, from a simple double tap to a complex sequence of sign language, and current gesture recognition divides mainly into touch-screen-based recognition and touchless recognition. This system integrates touchless gesture recognition based on a depth-sensing camera: the depth camera acquires image depth information, the hand pose is computed, and the pose is matched against a gesture model library to recognize the gesture.
The image recognition unit recognizes the two-dimensional codes and ammunition markings of ammunition. The two-dimensional code is placed on the cylindrical surface of the ammunition body or on the ammunition packaging box, and includes Longbei codes and Hamming codes. The two-dimensional code uses a double encryption algorithm combining DES (Data Encryption Standard) and RSA (Rivest-Shamir-Adleman), ensuring that plaintext data cannot be stolen by unauthorized parties; the codes can be exported, printed and renamed, facilitating two-dimensional code management. The ammunition marking is placed on the cylindrical surface of the ammunition body or on the ammunition packing box and consists of Chinese characters, English letters and numerals. For cylindrical two-dimensional codes, an 8-equal-division segmentation method corrects the cylindrical image distortion and improves the recognition rate. In the image recognition process, an image is captured by a high-resolution camera, preprocessed, and its features are extracted and matched against the model to produce the output. The image recognition unit adapts to a variety of real environments and can recognize images accurately under over-bright, over-dark, low-visibility, rain and snow conditions.
The two-dimensional code identification process mainly comprises: image preprocessing, locating the position detection (finder) patterns, locating the alignment patterns, perspective transformation, and decoding with error correction. Specifically:
A. Image preprocessing: graying, denoising, distortion correction and binarization. Two-dimensional code recognition is easily disturbed by the capture environment, so preprocessing is used to improve image quality and robustness to the environment (steps ① to ④ are sketched in code after step ④).
① Image graying: cameras output many data formats; a black-and-white camera outputs a gray-scale image directly, while color cameras output formats such as YUV422, YUV410, RGB565 and RGB888. Two-dimensional code identification needs only a single-channel gray-scale image, so conversion is required. Taking RGB888 as an example, the conversion formula is:
Gray = 0.2989R + 0.5870G + 0.1140B
② Denoising: noise causes inaccurate feature positioning and decoding errors in the data stage. The common noise types are Gaussian noise and salt-and-pepper noise; Gaussian filtering, median filtering or mean filtering can be used to improve image quality.
③ Distortion correction: wide-angle and fisheye cameras exhibit strong distortion, and the deformation grows toward the edge of the field of view. In a badly distorted image the 1:1:3:1:1 ratio of the finder patterns is destroyed and the modules of the data area no longer have a standard size, so the code cannot be decoded accurately. In this case a distortion model is used to correct the image and obtain an undistorted one.
④ Binarization: under normal conditions the background and the QR code target are clearly distinguishable and the illumination is uniform, so a global binarization method suffices; common methods include fixed thresholding, the Otsu method and histogram bimodal thresholding. Under uneven illumination a global method fails because of the global brightness imbalance, so an adaptive local thresholding method is needed, which can be realized by block thresholding followed by re-equalization.
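A combined minimal sketch of steps ① to ④ in Python with OpenCV follows; the file name, filter kernel sizes and adaptive block size are illustrative assumptions, and the distortion-correction call appears only as a comment because it needs calibrated camera parameters:

import cv2
import numpy as np

img = cv2.imread("qr_photo.png")                       # hypothetical captured image
# Step 1: graying, Gray = 0.2989R + 0.5870G + 0.1140B (OpenCV loads BGR)
b, g, r = img[..., 0], img[..., 1], img[..., 2]
gray = (0.2989 * r + 0.5870 * g + 0.1140 * b).astype(np.uint8)
# Step 2: denoising; Gaussian filtering suppresses Gaussian noise,
# median filtering suppresses salt-and-pepper noise
gray = cv2.medianBlur(cv2.GaussianBlur(gray, (5, 5), 0), 5)
# Step 3: distortion correction would call cv2.undistort(gray, K, dist)
# given the calibrated camera matrix K and distortion coefficients dist
# Step 4: global Otsu thresholding for uniform illumination ...
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# ... and adaptive local thresholding for uneven illumination
bw_local = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 31, 2)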
B. Locating the position detection patterns: the image is scanned horizontally and vertically for the characteristic ratio of the position detection patterns; the candidate patterns are traversed several times, false detections are removed by a screening strategy to determine the true patterns, and the orientation of the true patterns is then determined.
C. Locating the alignment patterns: the alignment patterns are estimated from the detected image.
D. Perspective transformation: a homography matrix is obtained from the located anchor points and alignment patterns, and a standard square image is obtained by perspective transformation. The transformation formulas are:
x = a11·u + a12·v + a13
y = a21·u + a22·v + a23
z = a31·u + a32·v + a33
where (u, v) are source-image coordinates, (x, y, z) are homogeneous coordinates, and the corrected image coordinates are (x/z, y/z).
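A minimal OpenCV sketch of this step follows; the four anchor-point coordinates are hypothetical values standing in for the points located in steps B and C:

import cv2
import numpy as np

bw = cv2.imread("qr_binarized.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
src = np.float32([[41, 38], [310, 52], [298, 325], [35, 310]])  # located points (u, v)
dst = np.float32([[0, 0], [300, 0], [300, 300], [0, 300]])      # standard square
H = cv2.getPerspectiveTransform(src, dst)    # the 3x3 matrix (a11 ... a33)
square = cv2.warpPerspective(bw, H, (300, 300))
# warpPerspective evaluates the three formulas above and divides by z,
# i.e. each output pixel position is (x/z, y/z)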
E. Decoding and error correction: decoding reads and checks the version information, format information, data and error-correction codewords of the two-dimensional code. The data area is converted into a bit stream of 0s and 1s, which is checked and corrected by the error-correction algorithm; after the encoding format is determined, decoding yields the data contained in the two-dimensional code.
F. Cylindrical two-dimensional code identification: the research object of this project requires recognizing two-dimensional code images on cylindrical surfaces. The acquired two-dimensional code image is therefore corrected for cylindrical distortion by an 8-equal-division segmentation method, which improves the recognition rate.
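The embodiment does not detail the 8-equal-division method; the sketch below shows one plausible reading, assuming for illustration that the code region spans the visible half of the cylinder: the region is split into eight equal angular segments whose image widths shrink toward the edges, and each segment is stretched back to equal width:

import cv2
import numpy as np

def unwrap_cylinder(roi: np.ndarray, n: int = 8) -> np.ndarray:
    """Approximate cylindrical-distortion correction by n-segment stretching."""
    h, w = roi.shape[:2]
    R = w / 2.0                                        # apparent cylinder radius
    theta = np.linspace(-np.pi / 2, np.pi / 2, n + 1)  # segment boundaries (angles)
    xs = (R + R * np.sin(theta)).astype(int)           # boundaries in image x
    seg_w = w // n
    segments = [cv2.resize(roi[:, xs[i]:max(xs[i] + 1, xs[i + 1])], (seg_w, h))
                for i in range(n)]
    return np.hstack(segments)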
G. Two-dimensional code encryption: when the two-dimensional code is generated, the plaintext data is encrypted with the DES and RSA double encryption algorithm, and the corresponding decryption is performed during decoding, so that the plaintext data cannot be stolen by an unauthorized party.
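One common way to combine the two ciphers is a hybrid scheme: encrypt the payload with DES and protect the DES key with RSA. The sketch below, using the pycryptodome package, assumes this hybrid reading together with ECB mode, OAEP padding and a 2048-bit key; the embodiment does not specify these details:

from Crypto.Cipher import DES, PKCS1_OAEP
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad

plaintext = b"ammunition record ..."   # hypothetical payload for the QR code
des_key = get_random_bytes(8)          # one-time DES session key

# 1. Encrypt the payload with the fast symmetric cipher (DES)
ciphertext = DES.new(des_key, DES.MODE_ECB).encrypt(pad(plaintext, 8))

# 2. Wrap the DES key with the recipient's RSA public key
rsa_key = RSA.generate(2048)
wrapped_key = PKCS1_OAEP.new(rsa_key.publickey()).encrypt(des_key)
# The QR code then carries (wrapped_key, ciphertext); only the holder of the
# RSA private key can unwrap des_key and decrypt the payload.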
The ammunition marking recognition process: the ammunition marking is serial data on the ammunition cylinder and packing box, consisting of Chinese characters, English letters and numerals. Several parts of the marking appear at several positions on the case or the ammunition itself. For this kind of data, character recognition technology is used: an image is acquired by the camera, the character shapes are determined by detecting patterns of dark and light, and the shapes are then translated into computer characters by a character recognition method.
A. Preprocessing: mainly graying, binarization, noise removal and tilt correction.
Graying: a gray-scale image contains only luminance information and no color information. In the RGB model, if R = G = B the color is a gray, and the common value of R, G and B is called the gray value. The conversion generally satisfies Gray = 0.299R + 0.587G + 0.114B, where the coefficients account for the physiological characteristics of the human eye.
B. Binarization: everything that is not black becomes white. Most pictures taken by a camera are color images, which carry a huge amount of information; their content can be divided simply into foreground and background. To let the computer recognize characters faster and better, the color image is first processed so that it contains only foreground (black) and background (white) information, which is the binary image. Binarization further separates the gray-scaled characters from the background. It involves the concept of a "threshold": finding a suitable value as the boundary, above or below which a pixel becomes white or black, i.e. 255 or 0.
The histogram method (also called the bimodal method) is used to find the binarization threshold; the histogram is an important feature of the image. The method assumes the image consists of a foreground and a background that form two peaks on the gray-level histogram, and the lowest valley between the peaks is the threshold.
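A minimal numpy/OpenCV sketch of this bimodal search follows; suppressing 32 bins around the first peak to find the second is a simplifying assumption, not part of the method as described:

import cv2
import numpy as np

gray = cv2.imread("label.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()

p1 = int(np.argmax(hist))                   # first (highest) peak
masked = hist.copy()
masked[max(0, p1 - 32):p1 + 32] = 0         # crude suppression of the first peak
p2 = int(np.argmax(masked))                 # second peak
lo, hi = sorted((p1, p2))
threshold = lo + int(np.argmin(hist[lo:hi + 1]))  # lowest valley between peaks
_, bw = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)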
C. Image noise reduction: in practice, digital images are often degraded during digitization and transmission by interference from the imaging equipment and external environmental noise; such images are called noisy images. The process of reducing the noise in a digital image is called image denoising (Image Denoising). After binarization many small black dots can be seen; they are unwanted information and strongly disturb contour extraction and recognition, so denoising is a very important stage whose quality directly affects the accuracy of recognition.
D. Tilt correction: a photographed or selected picture is rarely perfectly horizontal, and tilt affects the subsequent segmentation, so the picture must be corrected by rotation. The most common method is the Hough transform: the picture is dilated so that broken characters join into straight lines that are easy to detect; once the angle of the line is computed, a rotation algorithm corrects the tilted picture to the horizontal position.
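A minimal OpenCV sketch of this deskewing step follows; it assumes white text on a black background, takes the first detected line as representative, and the kernel and Hough parameters are illustrative:

import cv2
import numpy as np

bw = cv2.imread("text.png", cv2.IMREAD_GRAYSCALE)      # hypothetical binary page
dilated = cv2.dilate(bw, np.ones((3, 15), np.uint8))   # join characters into lines
lines = cv2.HoughLinesP(dilated, 1, np.pi / 180, 200,
                        minLineLength=100, maxLineGap=20)
x1, y1, x2, y2 = lines[0][0]                           # assumes one line was found
angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))       # skew of the text line

h, w = bw.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
deskewed = cv2.warpAffine(bw, M, (w, h))               # rotate back to horizontal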
E. Picture segmentation: for a passage of multi-line text, segmentation comprises line segmentation and character segmentation, and tilt correction is its prerequisite. The tilt-corrected text is projected onto the Y axis and all values are accumulated, giving a histogram along the Y axis whose valleys separate adjacent text lines.
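Continuing the sketch, line segmentation by Y-axis projection can look like this (assuming text pixels are 255 in the deskewed binary image):

import numpy as np

def split_lines(bw: np.ndarray) -> list:
    """Split a deskewed binary text image into per-line slices."""
    profile = (bw > 0).sum(axis=1)                    # projection onto the Y axis
    in_text = np.concatenate(([0], (profile > 0).astype(np.int8), [0]))
    edges = np.flatnonzero(np.diff(in_text))          # rises = tops, falls = bottoms
    return [bw[t:b] for t, b in zip(edges[::2], edges[1::2])]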
F. Character recognition: each segmented image slice is scanned, feature vectors are extracted, and coarse template classification followed by fine template matching against the feature templates identifies the character.
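A minimal template-matching sketch follows; the file names are hypothetical, and normalizing the glyph to the template size is a simplifying assumption:

import cv2

glyph = cv2.imread("char_slice.png", cv2.IMREAD_GRAYSCALE)     # segmented character
template = cv2.imread("template_A.png", cv2.IMREAD_GRAYSCALE)  # one library template

glyph = cv2.resize(glyph, template.shape[::-1])    # resize to (w, h) of the template
score = cv2.matchTemplate(glyph, template, cv2.TM_CCOEFF_NORMED)[0][0]
# Coarse classification keeps the best-scoring template family; fine matching
# then compares the remaining candidates and picks the highest score.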
The virtual scene generation unit consists of processors (CPU and GPU), memory and storage; it matches the information identified by the image recognition unit against the ammunition information stored in the database system, processes the interaction and tracking data generated by the interaction unit, and generates a virtual picture matched with the real environment that is transmitted to the head display unit.
The display interface of the head display unit presents the basic information, pictures, videos and three-dimensional model of the ammunition. The basic information comprises an ID (the two-dimensional code ID), name, country and type, plus a description used for database search and for the outline shown in the application, together with assembly information and specification data such as range, length, diameter, launch mode and guidance mode.
The ammunition videos comprise operation and use videos, accident maintenance videos and ammunition dismantling videos; the operation flow for viewing a video is shown in fig. 3. The ammunition pictures comprise pictures of the ammunition body and the ammunition manual; the operation flow for viewing a picture is shown in fig. 4. The three-dimensional model of the ammunition is produced in SolidWorks by three-dimensional modeling and virtual assembly of the parts; the operation flow for viewing the three-dimensional model is shown in fig. 5.
In this embodiment, three-dimensional modeling of a physical object is divided into two stages, a data acquisition stage and a model generation stage, completed mainly in SolidWorks. The ammunition 3D model is built with the 3D modeling software SolidWorks and Pro/E; the physical characteristics of the virtual three-dimensional model must match the live ammunition at 1:1 scale to guarantee the fidelity of the three-dimensional virtual model, while less important dimensional structures are simplified as far as possible without affecting the display and simulation effect.
The whole modeling process starts from drawing a part library for the ammunition, then assembles the ammunition bottom-up according to the mating relations among the parts, and finally yields a complete three-dimensional model of the equipment.
Internally, the assembly model is divided by function or mechanical composition into sub-assemblies and parts, and a sub-assembly can be decomposed further into sub-assemblies and parts at lower levels. This parent-child relationship is typically described by an assembly tree, as shown in FIG. 6. When modeling bottom-up, this structure is realized through the assembly function of the modeling software.
Solid model modeling is generally divided into two phases: a data acquisition phase and a model generation phase, as shown in fig. 7.
Data acquisition stage: obtain the physical appearance parameters and action/effect parameters that form the parameter basis for constructing a high-precision, high-fidelity solid model.
Reference model generation: instrument scanning or image-based generation methods rapidly produce a model that serves as the reference and basis for three-dimensional production.
Model generation: an original high-precision model is generated first, reproducing the physical object with maximum precision in three-dimensional software, including the appearance of the main structure, fine textures and surface relief. The original high-precision model cannot be used directly; its details are transferred by several methods in the subsequent stages and fully presented at the application stage.
Application-level high-precision model generation: model topology (retopology) techniques turn the original high-precision model into a high-precision model suitable for interactive virtual simulation. Baking records the detail information of the original high-precision model, chiefly as a normal map representing its fine relief.
Material and texture generation: physically based rendering (PBR) is adopted, and surface materials are authored directly with physical parameters, so that the model's appearance obeys physical rules and the lighting computation matches reality.
The core algorithm is as follows:
The results are stored as maps: albedo (base color) map, normal map, metallic map, roughness map, AO (ambient occlusion) map, and so on.
Action and effect generation: joints, parent-child bindings, skeleton bindings and physical behavior bindings are added to the solid model to produce action effects such as advancing, deploying and withdrawing, and a particle system produces effects such as smoke, dust and fire.
Multi-appearance model generation: appearance-effect models such as damaged and destroyed states are generated from the application-level high-precision model.
Application-level multi-precision model generation: medium- and low-precision models are generated from the application-level precision model to provide LOD (level of detail) variants.
Parameter-driven binding: multi-precision, multi-effect models are combined, actions and effects are bound, and the model is driven by parameters.
The software modeling approach adopted by the invention offers controllable precision and quality, reproducing physical appearance and action effects with high fidelity; physically based rendering (PBR) gives a superior display effect; rich multi-appearance states meet the display requirements; multi-level LOD balances rendering quality and performance at different viewing distances; and the parametric model can be driven by the simulation system's parameters.
The head display unit also displays warehouse information and production information: the warehouse information comprises warehouse ammunition location guidance and ammunition allocation statistics, and the production information comprises ammunition quality status prompts and ammunition destruction prompts.
Important attributes of the display unit include: field of view, window (eyebox) size, brightness, transparency and duty cycle, contrast, uniformity and color quality, resolution, real-world distortion, virtual-image distortion, eye safety, eye relief, chromatic aberration, depth perception, volume, weight and shape parameters, optical efficiency, latency, and stray light. In this embodiment a larger field of view increases immersion. Field of view, window size and eye relief are closely related, as shown in the simplified equation below:
s=b+2r tan(v/2)
where s is the size (e.g., width) of the optical surface, b is the window size, r is the eye relief, and v is the field of view.
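For example, assuming a hypothetical window size b = 20 mm and eye relief r = 18 mm with the 43-degree field of view of this embodiment, s = 20 + 2 × 18 × tan(21.5°) ≈ 34 mm, which shows how strongly eye relief and field of view drive the required size of the optical surface.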
The display brightness defines whether the brightness of the display is sufficient to support the user to clearly perceive the virtual content in a particular situation. Transparency is about how much real world light can reach the eye.
Display brightness is very challenging, so most AR glasses are limited to indoor use and become unusable outdoors, especially in direct sunlight. To alleviate this problem, the headset reduces its transparency and hence the amount of ambient light reaching the user's eyes, making the display relatively brighter.
Contrast describes the ability of a display to produce brighter and darker pixels simultaneously. The OLED adopted by the hardware of this embodiment has a high contrast ratio, reaching 1,000,000:1 or higher.
Color quality defines how accurately the display reproduces colors; proper color reproduction requires calibration (including gamma). Since AR displays are typically additive, the perceived color also depends on the scene over which the virtual content is superimposed. The hardware achieves good uniformity and color quality by optimizing the display position and the image recognition algorithm.
The AR glasses main body uses distortion calibration to reduce and eliminate the distortion of virtual content, processing it as part of the rendering pipeline so that the display matches the real world.
AR glasses must ensure both that the eyes are not harmed by the AR display and that the AR display protects the eyes from external injury. This is achieved by integrating all glass elements into an unbreakable protective cover that meets the ANSI Z87.1 eye-protection safety standard.
Eye relief refers to the supported distance from the pupil to the nearest point of the AR display. Since not all users have the same head shape, a range of eye relief must be supported, which defines the thickness (along the viewing direction) of the eyebox.
Preferably, enough eye relief is provided to accommodate conventional eyewear, so that users with vision problems do not need to buy custom lenses to wear the glasses.
AR glasses involve two fields of view: the visually enhanced area in which virtual content is displayed, commonly just called the field of view, and the larger area that humans can perceive beyond it, which we call the peripheral field of view. The peripheral field of view must not be excessively occluded.
The human field of view is about 150 degrees by 120 degrees for a single eye and about 220 degrees by 120 degrees for both eyes combined. Placing a display in front of the eye creates additional occlusion, so an important design goal is to keep this occlusion to a minimum.
The refractive index of a lens varies with the wavelength of light, which results in color-dependent focal lengths. Chromatic aberration is a thorny problem in AR displays; certain aberrations must be corrected in software (by proper calibration) to reduce the artifacts.
For AR displays, the two most important depth cues are vergence (the eyes rotating to fixate the same object) and accommodation (the pupils focusing on the object). The two are neurally coupled, and mismatched vergence and accommodation cause user discomfort, the so-called vergence-accommodation conflict (VAC). This AR display uses a single focal plane, whose placement must be chosen; about 2 meters appears most suitable for most scenes. The focal plane should be essentially flat and the same for all colors.
Display size and eyewear size are among the most challenging design parameters of today's AR devices. A large field of view and a large eyebox make it hard to shrink the display, and larger displays typically mean heavier optical elements; size and weight are not independent of the other properties. The human head can comfortably bear more than 70 grams if the weight is evenly distributed: even a light load quickly hurts the bridge of the nose, whereas the ears can carry more and the crown of the head is sturdier. The weight distribution matters more than the weight itself.
Optical efficiency refers to how much of the light emitted by the light-emitting element actually reaches the user's eyes. AR glasses employing micro-LEDs can achieve higher brightness levels.
When the user turns the head to the right, the displayed content must correspondingly "move to the left". AR therefore requires a system with sufficiently low latency, below 5 milliseconds.
The more open the AR glasses, the more light from additional directions and light sources can enter the system. While AR displays generally handle ambient light from the front well, light from the sides or from behind the user can cause serious problems; good design must reduce this stray light.
Vergence and accommodation are a well-known problem of stereoscopic displays, but other binocular-vision issues also strongly affect comfort. One of them is binocular vertical disparity, which arises when there is vertical parallax or relative tilt between the two displays. The human visual system cannot tolerate it; it causes dizziness, nausea and even vomiting, and it too must be avoided by good design.
The database system uses SQLite to hold the system's ammunition data. Users can add, delete, modify and query the ammunition data in the existing database, and as the system matures, ammunition information and model data are continuously added to it. SQLite is a lightweight relational database management system with a very small resource footprint; it supports the Windows and Android operating systems this project requires, binds to many programming languages, processes faster than MySQL, PostgreSQL and the like, and is open source.
The database system stores: two-dimensional codes and their encrypted data; two-dimensional code mapping data; basic ammunition information (type, name, assembly information, specification data, and so on); visual-perception (three-dimensional) data on the ammunition and the structural markings of its parts; operation, use and accident-handling videos; photographs of ammunition bodies; ammunition manual pictures; three-dimensional model data; and data for warehouse digital management (warehouse ammunition location guidance, automatic ledger statistics of ammunition allocation, and so on), ammunition quality digital management (automatic entry of routine inspections, automatic identification of quality status, and so on) and automatic ammunition destruction warnings (automatic warnings for dangerous goods to be destroyed).
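A minimal sketch with Python's standard sqlite3 module follows; the file name and table layout are illustrative assumptions, not the patent's actual schema:

import sqlite3

conn = sqlite3.connect("ammo.db")            # hypothetical database file
conn.execute("""CREATE TABLE IF NOT EXISTS ammunition (
    id TEXT PRIMARY KEY,                     -- two-dimensional code ID
    name TEXT, country TEXT, type TEXT, description TEXT,
    model_path TEXT                          -- path to three-dimensional model data
)""")
# Add (or replace) one record, then query it back
conn.execute("INSERT OR REPLACE INTO ammunition VALUES (?, ?, ?, ?, ?, ?)",
             ("QR-0001", "example round", "CN", "artillery", "demo entry",
              "models/example.glb"))
row = conn.execute("SELECT name, type FROM ammunition WHERE id = ?",
                   ("QR-0001",)).fetchone()
conn.commit()
conn.close()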
Data upgrading: the database software supports importing data packages into the AR glasses system directory, so the AR glasses database can be updated through the database software.
When no operation occurs, the low-power management unit starts a sleep countdown; when it expires the system enters an ultra-low-power state, and on a touch trigger it wakes from sleep immediately and returns to the normal recognition state.
Through advanced means such as simulation model demonstration and augmented-reality interaction, the system enables ammunition users to rapidly and accurately identify basic ammunition information, view the ammunition structure in three dimensions, and follow visual demonstrations of ammunition operating procedures and requirements. Support technicians and operators quickly and intuitively master the basic performance of the ammunition and become familiar with its operation, which reduces the preparation time before ammunition use and the safety hazards of ammunition use.
The system improves the efficiency and effect of learning ammunition knowledge. A visual virtual system built with AR augmented-reality and computer simulation technology lets ammunition support personnel and users quickly master ammunition knowledge and visually perceive ammunition structure and operating procedures. More importantly, the internal working mechanism of much ammunition cannot be observed even after disassembly; augmented-reality visual simulation displays the structural characteristics and working process of a system clearly and vividly, provides a large amount of intuitive ammunition structural information, and greatly improves the efficiency of learning and mastering skills.
The system reduces safety hazards: it contains multimedia data such as pictures, videos and three-dimensional models of ammunition operation and accident handling, and mastering ammunition operation and handling knowledge quickly through virtual learning and practice further reduces the probability of safety accidents.
The system raises the informatized management of ammunition: through subsequent function expansion it realizes digital management of ammunition warehouses (location guidance of ammunition in warehouses, automatic ledger statistics of ammunition allocation, and so on), digital management of ammunition quality (automatic entry of routine inspections, automatic identification of quality status, and so on) and automatic warning prompts for ammunition destruction (automatic warnings for dangerous goods to be destroyed), laying a foundation for improving the informatized management of army ammunition.
Research on ammunition identification and use based on visual perception technology has produced an augmented-reality ammunition identification glasses system that fuses the real and the virtual environment. Enhancing the display of ammunition with computer-generated data deepens the user's understanding of the ammunition, and spatial positioning together with three-dimensional rendering technology achieves seamless fusion and display of physical and virtual objects.
The system integrates multiple interaction modes, including voice, gestures, touch and physical keys, layering advanced techniques on top of the traditional interaction modes to achieve more natural interaction.
Through research on image recognition algorithms, the system integrates multiple ammunition recognition methods and, for the specific attributes of ammunition, develops dedicated algorithms such as the two-dimensional code recognition algorithm, the two-dimensional code encryption algorithm and the ammunition marking recognition algorithm, improving ammunition recognition efficiency.
The system integrates multiple kinds of ammunition information data, including parameter and attribute data, picture and text data, video multimedia data and three-dimensional model data, forming a complete and comprehensive ammunition data platform.
The system is highly extensible: it supports storage of data for many ammunition types and openly reserves interfaces for ammunition quality management and ammunition storage data management, meeting the application requirements of ammunition management. In view of future business development, the augmented-reality ammunition recognition glasses system is designed to be as concise as possible, the coupling between functional modules is reduced, and compatibility is fully considered.
Finally, it should be noted that the foregoing is only a preferred embodiment of the invention, and the invention is not limited thereto. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in them or substitute equivalents for some of their technical features. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall fall within the scope of the claims of the invention.

Claims (9)

1. An ammunition identification system based on visual perception technology, comprising an AR glasses system and a database system, wherein the AR glasses system comprises an AR glasses main body, an image recognition unit, a head display unit, an interaction unit, a virtual scene generation unit and a low-power management unit; the image recognition unit is used for recognizing two-dimensional codes and ammunition markings of ammunition; the virtual scene generation unit is used for matching the information recognized by the image recognition unit against the ammunition information data stored in the database system, processing the interaction data and tracking data generated by the interaction unit, generating a virtual picture matched with the real environment data and transmitting it to the head display unit; the head display unit is used for displaying basic information, pictures, videos and a three-dimensional model of the ammunition; and when no operation occurs, the low-power management unit starts a sleep countdown, on whose expiry the system enters an ultra-low-power state;
the three-dimensional model of the ammunition is produced in SolidWorks by three-dimensional modeling and virtual assembly of the parts, the three-dimensional modeling comprising:
a data acquisition stage: obtaining the physical appearance parameters and action/effect parameters that form the basis for constructing a high-precision, high-fidelity solid model;
reference model generation: producing a model by instrument scanning or image-based generation as the reference and basis for three-dimensional production;
model generation: generating an original high-precision model, the physical object being reproduced in three-dimensional software including the appearance of the main structure, fine textures and surface relief; the original high-precision model cannot be used directly, its details being transferred by several methods in subsequent stages and fully presented at the application stage;
application-level high-precision model generation: generating from the original high-precision model, by model topology techniques, a high-precision model suitable for virtual simulation; recording the detail information of the original high-precision model by baking, the detail information comprising a normal map representing the model's fine relief;
material and texture generation: adopting physically based rendering (PBR) and authoring surface materials directly with physical parameters, so that the model's appearance obeys physical rules and the lighting computation matches reality;
the core algorithm being as follows:
the results are stored as maps: albedo (base color) map, normal map, metallic map, roughness map and AO (ambient occlusion) map;
action and effect generation: adding joints, parent-child bindings, skeleton bindings and physical behavior bindings to the solid model, producing action effects such as advancing, deploying and withdrawing of the model, and producing smoke and fire effects of the model with a particle system;
multi-appearance model generation: generating damaged and destroyed appearance-effect models based on the application-level high-precision model;
application-level multi-precision model generation: generating medium- and low-precision models based on the application-level precision model so as to provide LOD levels of detail;
parameter-driven binding: combining multi-precision, multi-effect models, binding actions and effects, and driving the model by parameters.
2. The visual perception technology-based ammunition identification system of claim 1, wherein: the two-dimensional code is placed on the cylindrical surface of the ammunition body or on the ammunition packaging box and includes Longbei codes and Hamming codes.
3. The visual perception technology-based ammunition identification system of claim 1, wherein: the ammunition marking is placed on the cylindrical surface of the ammunition body or on the ammunition packing box, and the marking consists of Chinese characters, English letters and numerals.
4. The visual perception technology-based ammunition identification system of claim 1, wherein: the head display unit is an optical see-through display.
5. The visual perception technology-based ammunition identification system of claim 1, wherein: the interaction unit comprises a handle interaction module, a voice interaction module and a gesture interaction module; the handle interaction module is connected to the AR glasses main body and comprises a touch pad used for switching the display interface of the head display unit; the voice interaction module realizes interaction with the AR glasses system through spoken instructions; and the gesture interaction module realizes interaction with the AR glasses system by capturing gestures and interpreting them as instructions.
6. The visual perception technology-based ammunition identification system of claim 1, wherein: the basic information of the ammunition comprises the type, name, assembly information and specification data of the ammunition.
7. The visual perception technology-based ammunition identification system of claim 1, wherein: the video of ammunition includes operation usage video, accident maintenance video, and ammunition dismantling video.
8. The visual perception technology-based ammunition identification system of claim 1, wherein: the picture of the ammunition comprises a picture of an ammunition body and a manual of ammunition.
9. The visual perception technology-based ammunition identification system of claim 1, wherein: the head display unit is further used for displaying warehouse information and production information, wherein the warehouse information comprises warehouse ammunition location guidance and ammunition allocation statistics, and the production information comprises ammunition quality status prompts and ammunition destruction prompts.
CN202011621699.5A 2020-12-30 2020-12-30 Ammunition identification system based on visual perception technology Active CN112633442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011621699.5A CN112633442B (en) 2020-12-30 2020-12-30 Ammunition identification system based on visual perception technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011621699.5A CN112633442B (en) 2020-12-30 2020-12-30 Ammunition identification system based on visual perception technology

Publications (2)

Publication Number Publication Date
CN112633442A CN112633442A (en) 2021-04-09
CN112633442B true CN112633442B (en) 2024-05-14

Family

ID=75287684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011621699.5A Active CN112633442B (en) 2020-12-30 2020-12-30 Ammunition identification system based on visual perception technology

Country Status (1)

Country Link
CN (1) CN112633442B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359396B (en) * 2022-03-18 2022-05-17 成都工业学院 Stereo image acquisition and display method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203520087U (en) * 2013-10-23 2014-04-02 深圳市唯特视科技有限公司 Active 3D glasses power supply control system and active 3D glasses
CN205507687U (en) * 2016-04-12 2016-08-24 京东方科技集团股份有限公司 Wear -type virtual reality equipment and virtual reality system
CN107331220A (en) * 2017-09-01 2017-11-07 国网辽宁省电力有限公司锦州供电公司 Transformer O&M simulation training system and method based on augmented reality
CN108205618A (en) * 2016-12-20 2018-06-26 亿航智能设备(广州)有限公司 VR glasses and connection method, the apparatus and system of control VR glasses and unmanned plane
CN109243233A (en) * 2018-08-31 2019-01-18 苏州竹原信息科技有限公司 A kind of defensive combat drilling system and method based on virtual reality
CN110299033A (en) * 2019-04-02 2019-10-01 郑州铁路职业技术学院 A kind of English study training device based on the dialogue of VR real scene
CN212032113U (en) * 2020-04-22 2020-11-27 Oppo(重庆)智能科技有限公司 Intelligent glasses

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170115742A1 (en) * 2015-08-01 2017-04-27 Zhou Tian Xing Wearable augmented reality eyeglass communication device including mobile phone and mobile computing via virtual touch screen gesture control and neuron command

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203520087U (en) * 2013-10-23 2014-04-02 深圳市唯特视科技有限公司 Active 3D glasses power supply control system and active 3D glasses
CN205507687U (en) * 2016-04-12 2016-08-24 京东方科技集团股份有限公司 Wear -type virtual reality equipment and virtual reality system
CN108205618A (en) * 2016-12-20 2018-06-26 亿航智能设备(广州)有限公司 VR glasses and connection method, the apparatus and system of control VR glasses and unmanned plane
CN107331220A (en) * 2017-09-01 2017-11-07 国网辽宁省电力有限公司锦州供电公司 Transformer O&M simulation training system and method based on augmented reality
CN109243233A (en) * 2018-08-31 2019-01-18 苏州竹原信息科技有限公司 A kind of defensive combat drilling system and method based on virtual reality
CN110299033A (en) * 2019-04-02 2019-10-01 郑州铁路职业技术学院 A kind of English study training device based on the dialogue of VR real scene
CN212032113U (en) * 2020-04-22 2020-11-27 Oppo(重庆)智能科技有限公司 Intelligent glasses

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Wang Yu et al., "Application research of two-dimensional code technology in a mobile metro ticketing system", Computer Knowledge and Technology, vol. 12, no. 33 (full text). *
Huang Hongbo et al., "Mobile-phone two-dimensional code recognition under complex conditions based on image processing", Journal of Beijing Information Science and Technology University, vol. 26, no. 5 (full text). *
Yang Lingxiao et al., "Application of an improved binarization algorithm in QR code recognition", Software Guide, vol. 19, no. 3 (full text). *
Si Guodong et al., "Design and implementation of a cylindrical two-dimensional code recognition algorithm", Graphics and Image, no. 8 (full text). *
Article Numbering Center of China, Two-Dimensional Barcode Technology and Applications, China Metrology Publishing House, 2007, pp. 103-108. *
Tian Jingxi et al., Introduction to the Internet of Things, Southeast University Press, 2017, pp. 53-56. *

Also Published As

Publication number Publication date
CN112633442A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
CN112684892B (en) Augmented reality ammunition recognition glasses-handle carrying system
CN111415422B (en) Virtual object adjustment method and device, storage medium and augmented reality equipment
KR102417177B1 (en) Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking
US9165381B2 (en) Augmented books in a mixed reality environment
CN104798370B (en) System and method for generating 3-D plenoptic video images
EP3798801A1 (en) Image processing method and apparatus, storage medium, and computer device
CN102566756B (en) Comprehension and intent-based content for augmented reality displays
CN103472909B (en) Realistic occlusion for a head mounted augmented reality display
Cruz et al. Kinect and rgbd images: Challenges and applications
US11710287B2 (en) Generative latent textured proxies for object category modeling
KR20190051028A (en) Sensory eyewear
US20240282149A1 (en) Liveness detection method and apparatus, and training method and apparatus for liveness detection system
KR20230062802A (en) Image-based detection of surfaces that provide specular reflections and reflection modification
WO2016122973A1 (en) Real time texture mapping
US11508130B2 (en) Augmented reality environment enhancement
CN109784128A (en) Mixed reality intelligent glasses with text and language process function
EP3385915A1 (en) Method and device for processing multimedia information
CN117274383A (en) Viewpoint prediction method and device, electronic equipment and storage medium
CN112633442B (en) Ammunition identification system based on visual perception technology
CN112764530A (en) Ammunition identification method based on touch handle and augmented reality glasses
US11823433B1 (en) Shadow removal for local feature detector and descriptor learning using a camera sensor sensitivity model
US20230396750A1 (en) Dynamic resolution of depth conflicts in telepresence
Soares et al. Designing a highly immersive interactive environment: The virtual mine
CN112764531A (en) Augmented reality ammunition identification method
US11544910B2 (en) System and method for positioning image elements in augmented reality system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant