CN107331220A - Transformer O&M simulation training system and method based on augmented reality - Google Patents

Transformer O&M simulation training system and method based on augmented reality

Info

Publication number
CN107331220A
Authority
CN
China
Prior art keywords
training
transformer
scene
augmented reality
trainee
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710779761.5A
Other languages
Chinese (zh)
Inventor
雷振江
林昌年
刘国忠
王国平
陈硕
李钊
王拓
王兰香
王磊
崔吉生
唐志
赵守忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Beijing Kedong Electric Power Control System Co Ltd
State Grid Liaoning Electric Power Co Ltd
Beijing Information Science and Technology University
Original Assignee
State Grid Corp of China SGCC
Beijing Kedong Electric Power Control System Co Ltd
State Grid Liaoning Electric Power Co Ltd
Beijing Information Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Beijing Kedong Electric Power Control System Co Ltd, State Grid Liaoning Electric Power Co Ltd, Beijing Information Science and Technology University filed Critical State Grid Corp of China SGCC
Priority to CN201710779761.5A priority Critical patent/CN107331220A/en
Publication of CN107331220A publication Critical patent/CN107331220A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G09B 5/06: Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B 5/065: Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00: Simulators for teaching or training purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides an augmented-reality-based transformer O&M simulation training system and method, comprising a training management server, an AR subsystem and a training-site visual monitoring subsystem. The AR subsystem comprises a binocular stereo vision 3D perception unit for recognizing, tracking and registering target equipment, a display unit that superimposes the virtual scene to be displayed on the real scene, a gesture recognition unit for recognizing user actions, and a speech recognition unit for recognizing user voice commands. The training-site visual monitoring subsystem comprises multi-view stereo vision measurement and tracking cameras and equipment status monitoring cameras arranged in the real scene. The equipment status monitoring cameras are used to identify the working state of the transformer equipment; the multi-view stereo vision measurement and tracking cameras are used to obtain the 3D spatial coordinates of the transformer equipment in the real scene, measure the 3D spatial coordinates of the trainee at the training site, and track and measure the trainee's movement trajectory.

Description

Transformer O&M simulation training system and method based on augmented reality
Technical field
The present invention relates to the field of simulation technology, and in particular to an augmented-reality-based transformer O&M simulation training system and method.
Background technology
With the comprehensive and in-depth advancement and implementation of the "large operation" and "large overhaul" initiatives under the State Grid Corporation of China "three centralizations, five large systems" strategy, intensive use of resources, flat organization, specialized business and lean management have been achieved for power grid dispatching operation and grid equipment operation and maintenance, effectively improving the security, reliability and economy of power grid production and operation. However, the in-depth advancement of the "large operation" and "large overhaul" systems has brought profound changes to grid operation and maintenance management: equipment monitoring at the dispatching and control centers and transformer equipment maintenance face new opportunities and challenges, higher requirements are placed on the knowledge, technology and skills of grid dispatching and transformer O&M personnel, and great challenges are posed to the normal operation and personnel training of power grid enterprises.
In recent years, State Grid has attached great importance to the practical training of grid dispatching and equipment O&M personnel, mainly carrying out skills training, assessment and certification through traditional integrated dispatching-and-control digital simulation training systems, substation digital simulation training systems, practical training systems based on real equipment, and simulation training systems based on multimedia and desktop virtual reality. These approaches have been effective, but still need further improvement in practicality, interactivity, situational realism, sense of experience, real-time performance, scalability and flexibility. The main problems are as follows:
For the training of transformer O&M personnel, power training institutions mainly use theoretical training, master-apprentice instruction, practical training based on real equipment, and simulation training based on multimedia and desktop virtual reality. Although these methods can meet the training requirements of equipment O&M personnel to a certain extent, they still have shortcomings and need further improvement. 1) Classroom and theoretical training is dull and relies on rote memorization; practice is neglected, learning is disconnected from doing, and the training effect is unsatisfactory. 2) Master-apprentice training relies on a master passing skills to an apprentice according to regulations and personal experience during real work; it is difficult to standardize, the training effect depends on the master's ability and sense of responsibility, the equipment abnormality and fault types encountered in real work are limited in number, the training cycle is long, and it cannot support a large number of trainees at the same time. 3) Practical training based on real equipment occupies a large area and requires high investment; some equipment is difficult or impossible to disassemble, so trainees cannot study its internal structure and operating mechanism in depth; abnormalities and faults are difficult to superimpose on the equipment, the equipment is easily damaged, large numbers of trainees cannot train simultaneously, scalability is low, and the cost is high. 4) Training based on multimedia and desktop virtual reality simulation offers standardized training content, high reusability and support for many trainees at the same time, but its fidelity and sense of reality still fall short of physical equipment, and operation with mouse and keyboard is cumbersome, so the interactive experience is poor. In summary, the existing practical training methods for transformer O&M personnel lack professional guidance and supporting auxiliary information, and their training effect cannot meet actual needs.
A review of the state of research at home and abroad shows the following:
1. Current status of transformer O&M simulation training:
In recent years the power system has developed rapidly, the scale of the power grid keeps growing, and the grid equipment administered by each grid company has increased sharply. As the replacement of transformer equipment accelerates and high and new technologies are widely applied, the difficulty of equipment operation management keeps rising, which inevitably creates an urgent demand for personnel with rich O&M experience.
Transformer O&M integration combines equipment inspection, on-site operation, maintenance (class C and D overhaul) and the corresponding O&M personnel. At present, training for grid equipment O&M personnel mainly includes theoretical training, master-apprentice training, training based on real equipment, and training based on multimedia and desktop virtual reality simulation.
Theoretical learning mainly takes the form of lectures by a teacher, introducing the procedures and precautions of equipment inspection, operation and class C and D overhaul. In the master-apprentice model, a master is assigned to the trainee and gives guidance at any time during work. Training systems based on physical equipment set up a real substation, including primary, secondary and integrated systems; trainees perform O&M operations on real equipment and experience the work process. Some of these systems use a hybrid simulation mode in which secondary equipment is driven by information such as current and voltage from the primary equipment, while others are not driven; the driven mode can simulate the reaction of actual substation equipment when the grid has a fault or abnormality, giving trainees a better sense of real equipment under operating conditions. Training based on multimedia and virtual reality simulation uses computer technology to simulate the process and effect of actual O&M and displays or prompts information such as O&M knowledge points and hazard points; this approach supports repeated and large-scale training.
2. Current status of O&M simulation systems based on augmented reality:
The SUARMATE project carried out in Spain realizes interaction with the virtual scene through a speech recognition system. After studying cognitive learning theory and augmented reality, Neumann and Majoros et al. designed an augmented reality prototype system for assisted maintenance training and maintenance task execution; they argue that augmented reality can effectively enhance human information processing during maintenance tasks, is very beneficial for improving cognition and memory, and can be widely applied to training for various maintenance tasks. These applications have preliminarily demonstrated the prospects of augmented reality in maintenance training and show that foreign augmented reality virtual maintenance training systems are gradually moving toward practical use, while still requiring further research and refinement. EADS used the ARVIKA system to solve the wiring efficiency and quality problems of a European fighter aircraft: assemblers can call up virtual instructions by voice and, following the step-by-step prompts, smoothly complete high-density installation work on 1 m x 6 m panels. The European Community launched the STARMATE project, whose main functions are to assist users in assembly and maintenance and to train new users; EADS has applied the STARMATE system on a European aircraft wing assembly line, using three-dimensional information in video and virtual scenes to guide the execution of maintenance tasks, so it can be used for assisted maintenance and maintenance training. Although augmented reality has been used in the maintenance industry, the technology has not yet taken shape for transformer O&M and still needs to be adapted and improved for the actual conditions of the power industry.
Summary of the invention
In view of the problem in the prior art that the technology for transformer O&M is not mature enough, so that suitable simulation training equipment is lacking, the purpose of the embodiments of the present invention is to provide an augmented-reality-based transformer O&M simulation training system and method, realizing an O&M simulation training system by means of augmented reality.
To solve the above problems, an embodiment of the present invention proposes an augmented-reality-based transformer O&M simulation training system, comprising a training management server, an AR subsystem and a training-site visual monitoring subsystem.
The AR subsystem comprises an AR helmet; the AR helmet comprises a binocular stereo vision 3D perception unit for recognizing, tracking and registering target equipment, a display unit for superimposing the virtual scene to be displayed on the real scene, a gesture recognition unit for recognizing user actions, and a speech recognition unit for recognizing user voice commands.
The training-site visual monitoring subsystem comprises multi-view stereo vision measurement and tracking cameras and equipment status monitoring cameras arranged in the real scene; the equipment status monitoring cameras are used to identify the working state of the transformer equipment, and the multi-view stereo vision measurement and tracking cameras are used to obtain the 3D spatial coordinates of the transformer equipment in the real scene, measure the 3D spatial coordinates of the trainee at the training site, and track and measure the trainee's movement trajectory.
The training management server is connected to the AR subsystem and the training-site visual monitoring subsystem, and uses the equipment status information and the trainee's movement trajectory to evaluate how well the trainee's actual operation of the transformer equipment conforms to the regulations.
The training management server communicates wirelessly with at least one AR subsystem and with the training-site visual monitoring subsystem through a wireless router.
The AR subsystem establishes the superposition of the virtual scene and the real scene by the following method:
Step 1: establish a 3D model of the transformer equipment using the binocular stereo vision measurement unit on the AR helmet, or directly import an existing 3D model. Establishing the 3D model with the binocular stereo vision measurement unit on the AR helmet specifically comprises: extracting the 3D features of the equipment, extracting the 2D features of the equipment nameplate from 2D images, and building an equipment feature database.
Step 2: combine the multi-view stereo vision 3D measurement and tracking system with the binocular stereo vision 3D measurement system on the AR helmet to realize spatial positioning of the transformer equipment, spatial positioning and tracking of the AR helmet, and rapid recognition of equipment, components and tools.
Step 3: based on binocular stereo vision, detect in real time the relationship between the camera coordinate system and the equipment coordinate system, determine the position of the virtual content to be added in the camera coordinate system, and realize rapid tracking, positioning and 3D registration of the equipment.
Step 4: use LCOS projection to realize the fused display of the virtual scene and the real scene of the transformer equipment.
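As a minimal illustration of Step 3, the sketch below (Python with NumPy; the function and variable names are illustrative assumptions, not part of the patent) composes the detected pose of the equipment in the camera frame with an offset defined in the equipment frame to place a piece of virtual content:

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def virtual_content_in_camera(R_cam_dev, t_cam_dev, p_dev):
    """Map a point defined in the equipment frame into the camera frame.

    R_cam_dev, t_cam_dev: pose of the equipment relative to the camera,
    as produced by the real-time detection in Step 3.
    p_dev: 3D position of the virtual content expressed in the equipment frame.
    """
    T = pose_to_matrix(R_cam_dev, t_cam_dev)
    p = np.append(p_dev, 1.0)           # homogeneous coordinates
    return (T @ p)[:3]                   # position of the virtual content in the camera frame

# Example: an annotation 0.2 m above the equipment origin, equipment 1.5 m in front of the camera
R = np.eye(3)                            # placeholder rotation from the tracker
t = np.array([0.0, 0.0, 1.5])
print(virtual_content_in_camera(R, t, np.array([0.0, 0.2, 0.0])))
```

Once expressed in the camera frame, the content can be projected through the display unit so that it appears attached to the real equipment.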
Meanwhile, an embodiment of the present invention also proposes a training method using the augmented-reality-based transformer O&M simulation training system described in any of the preceding paragraphs, comprising:
displaying the superimposed virtual scene and real scene through the AR subsystem, while using the AR subsystem for human-computer interaction to determine the trainee's actions and voice commands;
obtaining the working state of the transformer equipment and its 3D spatial coordinates through the training-site visual monitoring subsystem, measuring the trainee's 3D spatial coordinates at the training site, and tracking and measuring the trainee's movement trajectory;
according to the data obtained by the AR subsystem and the training-site visual monitoring subsystem, using the training management server to evaluate how well the trainee's actual operation of the transformer equipment conforms to the regulations.
The method further comprises:
when the trainee's actual operation of the transformer equipment in the real scene, or virtual operation in the virtual scene, does not meet the requirements of the regulations, the training system generates feedback: the fault phenomenon corresponding to the faulty operation is displayed on the AR helmet display, and the loudspeaker of the AR helmet plays the corresponding sound. The fault phenomenon includes at least one of fire, smoke, sparking, electric arc and explosion.
The beneficial effects of the above technical solution are as follows: the solution proposes an augmented-reality-based transformer O&M simulation training system and method, which builds an augmented-reality-based learning environment, training environment and assessment environment for transformer O&M by means of augmented reality, optical measurement and network technology, realizes training modes such as theoretical learning, theoretical assessment, virtual operation training, actual operation training, virtual operation assessment and actual operation assessment, and covers training content such as primary O&M, secondary O&M, live detection and substation operation.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the augmented-reality-based transformer O&M simulation training system proposed by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the system of the embodiment of the present invention in use;
Fig. 3 is a composition diagram of the learning environment for transformer O&M simulation training;
Fig. 4 is a schematic diagram of the learning tools of the augmented-reality-based transformer O&M simulation training system in the embodiment of the present invention;
Fig. 5 is a hardware architecture diagram of the proposed augmented-reality-based transformer O&M simulation training system;
Fig. 6 is a software architecture diagram of the proposed augmented-reality-based transformer O&M simulation training system;
Fig. 7 is a construction mode diagram of the augmented-reality-based transformer O&M simulation training system;
Fig. 8 is a training mode diagram of augmented-reality-based transformer O&M training;
Fig. 9 is an assessment model diagram of augmented-reality-based transformer O&M training;
Fig. 10 is a schematic diagram of the principle of binocular stereo vision 3D measurement;
Fig. 11 is a schematic diagram of the extraction of gesture appearance features;
Fig. 12 is a schematic diagram of a decision tree whose classification conditions are established through test training on a small sample;
Fig. 13 is a comparison diagram of an optical see-through display device and a video see-through display device;
Fig. 14 is a schematic diagram of the construction system of the simulation training system.
Detailed description of the embodiments
To make the technical problems to be solved, the technical solutions and the advantages of the present invention clearer, the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
An embodiment of the present invention proposes an augmented-reality-based transformer O&M simulation training system and method. The system superimposes the virtual training scene on the real training scene, so that the trainee can carry out O&M training and assessment in a lifelike virtual environment fused with reality. The working principle of the system is shown in Fig. 1 and Fig. 2: its core is the supporting technology shown in Fig. 1, in which the virtual scene is realized by virtual reality technology; the virtual scene is then superimposed on the real scene as shown in Fig. 2 to realize augmented-reality-based transformer O&M simulation training.
As shown in Fig. 1, the supporting part of the augmented-reality-based transformer O&M simulation training includes a training management server, an AR subsystem, a training-site visual monitoring subsystem and a data transmission subsystem.
The AR subsystem may include an AR helmet; the AR helmet includes a binocular stereo vision 3D perception unit for recognizing, tracking and registering target equipment, a display unit for superimposing the virtual scene to be displayed on the real scene, a gesture recognition unit for recognizing user actions, and a speech recognition unit for recognizing user voice commands.
The training-site visual monitoring subsystem includes multi-view stereo vision measurement and tracking cameras and equipment status monitoring cameras arranged in the real scene; the equipment status monitoring cameras identify the working state of the transformer equipment, and the multi-view stereo vision measurement and tracking cameras obtain the 3D spatial coordinates of the transformer equipment in the real scene, measure the trainee's 3D spatial coordinates at the training site, and track and measure the trainee's movement trajectory.
The training management server is connected to the AR subsystem and the training-site visual monitoring subsystem through the data transmission subsystem, and uses the equipment status information and the trainee's movement trajectory to evaluate how well the trainee's actual operation of the transformer equipment conforms to the regulations. The training management server communicates wirelessly with each AR helmet and with the training-site visual monitoring system through a wireless router.
(1) As shown in Fig. 1 and Fig. 2, through the lenses of the AR helmet the trainee sees not only the real scene of the training site but also the virtual scene, establishing a lifelike augmented reality training scene. The AR helmet establishes the superposition of the virtual scene and the real scene by the following method:
Step 1: establish a 3D model of the transformer equipment using the binocular stereo vision measurement unit on the AR helmet, or directly import an existing 3D model. Establishing the 3D model with the binocular stereo vision measurement unit specifically comprises: extracting the 3D features of the equipment, extracting the 2D features of the equipment nameplate from 2D images, and building an equipment feature database.
Step 2: combine the multi-view stereo vision 3D measurement and tracking system with the binocular stereo vision 3D measurement system on the AR helmet to realize spatial positioning of the transformer equipment, spatial positioning and tracking of the AR helmet, and rapid recognition of equipment, components and tools.
Step 3: based on binocular stereo vision, detect in real time the relationship between the camera coordinate system and the equipment coordinate system, determine the position of the virtual content to be added in the camera coordinate system, and realize rapid tracking, positioning and 3D registration of the equipment.
Step 4: use LCOS projection to realize the fused display of the virtual scene and the real scene of the transformer equipment.
Human-computer interaction is one of the important features of the augmented-reality-based transformer O&M simulation training of the embodiment of the present invention. In this project, sound signals are collected by the stereo microphone installed on the AR helmet, and speech recognition is used to recognize commands commonly used in training, for example "confirm", "start", "move left", "move right", "move down", "move up", "open switch", "close switch", "rotate left", "rotate right" and keywords for retrieval. In addition, the depth camera of the binocular stereo vision 3D perception unit on the AR helmet is used with a light-coding (structured light) gesture recognition and tracking technique to recognize common gestures such as click, slide, open switch, close switch, rotate left and rotate right.
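To illustrate how recognized voice commands could be mapped to training actions, the following sketch (Python; the command table, handler names and demo scene are illustrative assumptions, not part of the patent) dispatches recognized keywords to scene operations:

```python
# Hypothetical mapping from recognized speech keywords to training-scene actions.
class DemoScene:
    """Stand-in for the AR scene controller; real handlers would drive the helmet display."""
    def confirm(self):           print("selection confirmed")
    def start(self):             print("training started")
    def move(self, dx=0, dy=0):  print(f"move selected object by ({dx}, {dy})")
    def switch(self, closed):    print("switch closed" if closed else "switch opened")
    def rotate(self, deg):       print(f"rotate selected object by {deg} degrees")

def make_dispatcher(scene):
    return {
        "confirm":      scene.confirm,
        "start":        scene.start,
        "move left":    lambda: scene.move(dx=-0.1),
        "move right":   lambda: scene.move(dx=+0.1),
        "move down":    lambda: scene.move(dy=-0.1),
        "move up":      lambda: scene.move(dy=+0.1),
        "open switch":  lambda: scene.switch(False),
        "close switch": lambda: scene.switch(True),
        "rotate left":  lambda: scene.rotate(-15),
        "rotate right": lambda: scene.rotate(+15),
    }

def handle(dispatcher, keyword):
    """Run the action for a recognized keyword; return False if the keyword is unknown."""
    action = dispatcher.get(keyword)
    if action is None:
        return False
    action()
    return True

handle(make_dispatcher(DemoScene()), "open switch")   # prints "switch opened"
```

The same dispatch table could be shared by the gesture recognition unit, since the recognized gestures map onto the same set of scene operations.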
(2) Automatic training feedback is another important feature of the augmented-reality-based transformer O&M simulation training. When the trainee's actual operation of the transformer equipment in the real scene, or virtual operation in the virtual scene, does not meet the requirements of the regulations, the training system generates feedback: the phenomenon corresponding to the faulty operation, such as fire, smoke, sparking, electric arc or explosion, is displayed on the AR helmet display, and the corresponding sound is played on the AR helmet loudspeaker, forming a complete feedback system. This training feedback significantly deepens the trainee's impression of faulty operations, standardizes operating behavior and enhances the training effect.
By means of augmented reality, optical measurement and network technology, the augmented-reality-based transformer O&M training system establishes an augmented-reality-based learning environment, training environment and assessment environment for transformer O&M, realizes training modes such as theoretical learning, theoretical assessment, virtual operation training, actual operation training, virtual operation assessment and actual operation assessment, and covers training content such as primary O&M, secondary O&M, live detection and substation operation.
The key technical points and difficulties of the augmented-reality-based transformer O&M simulation training system and method in the embodiment of the present invention are as follows:
(1) 3D modeling of transformer equipment and feature point extraction:
The AR scene is built by adding virtual 3D equipment models to the real training scene. The accuracy of the model directly affects how well the virtual and real content coincide, and the extraction accuracy of the feature points in the model directly affects the precision of tracking and registration. Therefore, 3D modeling of transformer equipment and feature point extraction is one of the key technical points.
Intended strategy: acquire equipment images from multiple viewing angles with the binocular stereo vision 3D imaging system on the AR helmet, establish the 3D model of the equipment through stereo vision 3D reconstruction and 3D stitching, and extract the feature points. In general, feature points can be provided by holes, corner points or artificially placed markers; for equipment that is difficult to model, artificial marker feature points are pasted on. In this project a mixed scheme of coded markers and polygonal markers is chosen: polygonal corners allow feature points to be located accurately, while the codes identify the individual feature points. Ready-made 3D equipment models can also be imported directly.
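A minimal sketch of corner-based feature point extraction, assuming OpenCV's Shi-Tomasi detector (the specific detector and parameters are illustrative assumptions, not those of the patent):

```python
import cv2
import numpy as np

def extract_corner_features(image_path: str, max_corners: int = 200) -> np.ndarray:
    """Detect corner-like feature points (e.g. polygonal marker corners) in an equipment image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    corners = cv2.goodFeaturesToTrack(
        img, maxCorners=max_corners, qualityLevel=0.01, minDistance=10)
    # Return an (N, 2) array of pixel coordinates, empty if nothing was found.
    return corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))
```

In practice the coded part of each marker would then be decoded to attach an identity to every detected corner before it is stored in the feature database.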
(2) Hardware and software architecture of the augmented-reality-based transformer O&M simulation training:
The hardware and software architecture of the project requires a comprehensive understanding of the functions of the system hardware and software. The hardware architecture should save costs as far as possible while guaranteeing performance and robustness; the software architecture should cover as many functions as possible and consider a variety of situations thoroughly.
Because the hardware and software architecture has a significant impact on the subsequent construction of the system and the expansion of its functions, it is one of the key points.
(3) Rapid tracking, positioning and 3D registration for transformer equipment:
Target tracking, positioning and 3D registration are key links of AR: the position and parameters of the real-scene camera must be calculated accurately so that virtual objects are added to the appropriate position in the scene at the appropriate size, and this accuracy directly affects the user's intuitive perception of the system. The overall performance of an augmented reality system is mainly determined by its registration precision and real-time performance, so making tracking, positioning and 3D registration more accurate and faster is a key point and difficulty of this project.
Intended strategy: combine binocular stereo vision 3D reconstruction with Kanade-Lucas-Tomasi (KLT) feature point tracking to realize real-time, stable and accurate 3D registration for augmented reality. A pair of key frames taken from the binocular stereo vision video stream at regular intervals is processed to build the 3D model of the target object and extract the feature points to be tracked; the feature points of all intermediate frames between key frames are tracked, and the real-time positions of the feature points are used to obtain the real-time position and attitude of the camera, realizing real-time tracking and registration. The 3D reconstruction of key frames and the feature point tracking of intermediate frames run in two threads, and the tracked feature points are continuously updated and the 3D registration corrected using the key-frame reconstruction. This solves the drift accumulation of feature-point tracking and overcomes the drawback that 3D reconstruction is computationally expensive and cannot register in real time.
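A minimal sketch of the intermediate-frame tracking step, assuming OpenCV's pyramidal Lucas-Kanade tracker and a calibrated camera; the 3D feature positions are taken as given from the key-frame reconstruction, and all names are illustrative:

```python
import cv2
import numpy as np

def track_and_register(prev_gray, gray, prev_pts, object_pts, K, dist):
    """Track 2D feature points between frames (KLT) and recover the camera pose.

    prev_pts:   (N, 1, 2) float32 image points from the previous frame.
    object_pts: (N, 3) float32 3D positions of the same features from key-frame reconstruction.
    K, dist:    camera intrinsic matrix and distortion coefficients.
    """
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    img_pts = next_pts[good]
    obj_pts = object_pts[good]
    if img_pts.shape[0] < 6:                 # too few tracked points for a reliable pose
        return None, None, img_pts
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    # rvec/tvec give the pose of the equipment in the camera frame, used for 3D registration.
    return (rvec, tvec, img_pts) if ok else (None, None, img_pts)
```

In the dual-thread scheme described above, the key-frame reconstruction thread would periodically replace `prev_pts` and `object_pts` with a freshly triangulated set to remove accumulated drift.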
(4) Evaluation of trainee operation behavior based on multi-sensor information fusion:
Actual operation training on the transformer equipment at the training site is one of the main modes of simulation training. To realize automatic assessment of the trainee's actual operations, and to generate virtual training feedback such as smoke or fire in time when an operation is non-compliant, the actual operation behavior must be evaluated automatically. Establishing an evaluation method for trainee operation behavior based on multi-sensor information fusion is therefore a key technology of the project.
Intended strategy: obtain the trainee's 3D spatial position and movement trajectory in real time using multi-view stereo vision 3D measurement and tracking, and monitor the working state of the equipment using sensing technologies such as optical imaging. The trainee's operation behavior is analyzed by fusing the multi-sensor information, and whether the behavior conforms to the O&M regulations and operating procedures is evaluated, thus assessing the trainee's actual operation.
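The following sketch shows one simple way such a fusion check could be organized (Python; the rule structure, thresholds and field names are assumptions for illustration): each procedure step requires the trainee to be within a working zone and the monitored equipment to be in an expected state before the step is credited.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class StepRule:
    name: str                 # e.g. "verify breaker is open"
    zone_center: np.ndarray   # expected trainee position at this step (site frame, metres)
    zone_radius: float        # allowed distance from the zone centre
    expected_state: str       # equipment state reported by the monitoring camera

def check_step(rule: StepRule, trainee_xyz: np.ndarray, equipment_state: str):
    """Fuse trajectory and equipment-state observations into a pass/fail for one step."""
    in_zone = np.linalg.norm(trainee_xyz - rule.zone_center) <= rule.zone_radius
    state_ok = equipment_state == rule.expected_state
    return in_zone and state_ok, {"in_zone": in_zone, "state_ok": state_ok}

# Example
rule = StepRule("verify breaker is open", np.array([2.0, 0.0, 1.5]), 0.8, "open")
print(check_step(rule, np.array([2.3, 0.1, 1.4]), "open"))   # (True, {...})
```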
Research content and implementation plan of the project and the present invention
1. Research content:
Research and application of key technologies of augmented-reality-based transformer O&M simulation training:
1.1 Research the learning environment model, hardware and software architecture, and construction and application modes of augmented-reality-based transformer O&M simulation training:
(1) Research the learning environment model of augmented-reality-based transformer O&M simulation training
Study transformer O&M simulation training and the application of augmented reality in the training industry, clarify the training process and the achievable effect, and build the learning environment model of augmented-reality-based transformer O&M simulation training.
A learning environment is a complex system closely tied to knowledge construction, and generally comprises elements such as the learning situation, learning resources, supporting tools, the learning community, learning activities and evaluation activities. The augmented-reality-based transformer O&M simulation training learning environment infuses augmented reality, network technology, sensing technology and optical measurement into the construction of these six elements; the composition of the learning environment is shown in Fig. 3.
This project will design the six elements of the learning environment starting from the characteristics of an augmented reality learning environment: strong immersion, multi-dimensional virtual information, good interactivity, personalization and high efficiency.
1) The learning situation provides a complete, real problem background for the trainee. Therefore, the design methods of the basic knowledge situation, the practice situation and the evaluation situation of augmented-reality-based transformer O&M simulation training are studied. The main purpose of the basic knowledge situation is to educate the trainee in the initial stage of training: declarative and procedural knowledge related to transformer O&M training is presented on the trainee's helmet display by means of text, pictures, audio, video, models and animation, including the transformer O&M system and regulations, the transformer O&M manual, equipment functions, principles and structure, and demonstrations of operation and overhaul steps. Through voice or gesture interaction, the trainee uses the catalogue or search function shown on the display to find the content to be studied. The main function of the practice situation is to guide and check the trainee's operating process after the basic knowledge has been learned, improving the trainee's theoretical level and practical operating ability through actual or virtual operation of the equipment. The purpose of the evaluation situation is to evaluate how well the trainee's O&M operations conform to the standards and to give an assessment score.
2) Learning resources refer to the various objective forms that support learning activities and realize teaching objectives; they can generally be divided into physical resources and information resources. Therefore, the use of augmented reality, network technology and optical measurement to organize and design information resources and complete learning tasks is studied.
3) Learning tools support the learner's knowledge construction. The learning tools of the augmented-reality-based transformer O&M simulation training system are shown in Fig. 4. Accordingly, the supporting tools between the trainer, the trainee, the server side, the client side and the transformer equipment need to be studied.
4) The learning community refers to a group composed of learners (trainees) and those who support and assess them (trainers).
5) Learning activities refer to the sum of the operations the learner performs to achieve specific learning objectives. The learning activities of augmented-reality-based transformer O&M simulation training include basic theoretical knowledge learning, actual operation and virtual operation; their objectives, tasks, basic procedures and steps therefore need to be designed.
6) Evaluation activities refer to the sum of the operations performed to examine how well the learner has achieved specific learning objectives. The evaluation activities of augmented-reality-based transformer O&M simulation training include basic theoretical knowledge evaluation, actual operation evaluation and virtual operation evaluation; the evaluation subjects, tasks, methods, basic procedures and steps therefore need to be designed.
(2) Research the hardware and software architecture of augmented-reality-based transformer O&M simulation training
Study the basic technology of augmented reality, design the hardware and software architecture of transformer O&M simulation training, clarify the system structure and functions, and find efficient, low-cost designs.
The project studies how to build the training system hardware architecture with augmented reality, network technology, sensing technology and optical measurement on the basis of the existing transformer practical training system. Fig. 5 shows the proposed hardware architecture of the augmented-reality-based transformer O&M simulation training system, including N physical transformer devices at the training site, M AR helmets, the training-site visual monitoring system, the training management server and a wireless router. The AR helmet is the essential device of augmented reality: it recognizes, tracks and registers target equipment with its binocular stereo vision 3D perception unit, merges the virtual space into the real space with its display unit, and recognizes the trainee's operating gestures and voice commands with its gesture recognition and speech recognition units, forming an augmented reality environment with good human-computer interaction. The training-site visual monitoring system consists of cameras, a monitoring computer and a wireless communication unit. The multi-view stereo vision measurement and tracking cameras arranged at the training site (the number of cameras depends on the site area, generally four) measure the 3D spatial coordinates of each transformer device, measure the trainee's 3D spatial coordinates at the training site and track the trainee's movement trajectory; the equipment status monitoring cameras identify the working state of the transformer equipment; the equipment status information and the trainee's movement trajectory are used to evaluate how well the trainee's actual operation of the equipment conforms to the regulations. The training management server communicates wirelessly with each AR helmet and with the training-site visual monitoring system through the wireless router.
Fig. 6 shows the proposed software architecture of the augmented-reality-based transformer O&M simulation training system; the software adopts a client/server (C/S) model. Supported by the client model library and client database, the client presents the training content or generates the virtual training environment, including theoretical knowledge training, theoretical knowledge assessment, virtual operation training, virtual operation assessment and training feedback. The training management information system on the server side is responsible for controlling the operation of the whole system and for supervising, recording and coordinating the training process.
(3) Research the construction mode and application mode of the augmented-reality-based transformer O&M simulation training system
1) Research the construction mode of the augmented-reality-based transformer O&M simulation training system
As shown in Fig. 7, the augmented-reality-based transformer O&M simulation training system will be developed jointly with the organization operating the existing intelligent substation practical training system; the work involves training platform construction, training subject construction, establishment and planning of the training system, and the training operation workflow. The construction system of the simulation training system is shown in Fig. 14.
The basic principle of training platform construction is to add the augmented reality training system on the basis of the existing intelligent substation practical training system, organically unifying the practical training system and augmented reality training in one training platform. Therefore, hardware is added and the augmented reality platform is built on the premise of not affecting the normal operation of the original practical training system.
Training subject construction is the top priority of training system construction; the subjects should be knowledge-rich, diverse in presentation, easy to extend and highly interactive. The planned training subjects are theoretical knowledge training, theoretical knowledge assessment, actual operation training, actual operation assessment, virtual operation training and virtual operation assessment.
Establishing and planning the training system requires setting up a trainee management system, a training discipline system, training archive management and a training evaluation system.
The training operation workflow comprises the specific steps needed to complete one training session, including training needs analysis, training program design, training implementation management and control, and training effect evaluation.
2) Research the application mode of the augmented-reality-based transformer O&M simulation training system
To give full play to the augmented-reality-based transformer O&M simulation training system, its application mode must be studied. Taking standardized operating procedures and electric power safety regulations as the main thread, the operating location is set as the main environment of the operation, the electrical work items as the targets of the operation, the operators as the persons in the operation, the work tools and materials as the equipment of the operation, and the hazard points, precautions and safety measures as the activities of the operation. Guidance and process assistance are provided for the selected work flows, achieving seamless integration with hardware such as augmented reality (AR) and optical measurement, so that the trainee gains information enhancement and scene perception in vision and hearing. Functions such as interactive demonstration, scene reproduction, data and information enhancement, guided induction, intelligent retrieval and scene perception are realized, further strengthening trainees' safety awareness, helping to inspire work thinking, improving the training effect, overcoming fear, strengthening hands-on ability and improving safety at the production site.
Training subjects include a theoretical knowledge training module, a virtual operation training module, an actual operation training module, theoretical knowledge assessment, virtual operation assessment and actual operation assessment. Actual operation assessment can use automatic assessment by the training system or manual assessment; training only, assessment only, or training plus assessment can be selected. During training, the trainee operates according to guidance and receives real-time instruction, enhancing the training experience; during assessment, no prompt information is given.
Before training, the personal information and training subjects of each trainee must be determined, and the personnel information and training subject information are sent from the server side to each client (AR helmet) over the wireless network. After the trainer's command is received, training or assessment of the selected subject is carried out for the trainee. In actual operation training or assessment, the trainee's behavior is evaluated through multi-sensor fusion; in virtual operation training or assessment, the trainee's interactive behavior is evaluated through voice and gesture actions. If a faulty operation occurs in virtual or actual operation, virtual feedback phenomena such as explosion and smoke are generated. Trainees can carry out their own personalized applications, and personalized training services are provided.
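A small sketch of what the server-to-helmet session setup described above might look like as a message exchange (Python; the JSON field names, port and message type are purely illustrative assumptions, not a protocol defined by the patent):

```python
import json
import socket

def send_session(helmet_addr, trainee, subjects, mode="training"):
    """Push trainee information and selected training subjects to one AR-helmet client."""
    message = {
        "type": "session_setup",
        "trainee": trainee,        # e.g. {"id": "T001", "name": "..."}
        "subjects": subjects,      # e.g. ["virtual operation training"]
        "mode": mode,              # "training" shows guidance, "assessment" hides prompts
    }
    with socket.create_connection(helmet_addr, timeout=5) as sock:
        sock.sendall(json.dumps(message).encode("utf-8") + b"\n")

# Example (hypothetical helmet address on the training LAN):
# send_session(("192.168.1.21", 9000), {"id": "T001"}, ["actual operation assessment"], mode="assessment")
```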
1.2 Research the establishment of the transformer equipment feature database, spatial positioning, rapid recognition and 3D registration technology, and the virtual-real fused training environment for transformer equipment
(1) Research methods for building the target recognition feature database
Recognition of target substation equipment or components is the first task of augmented reality, and target recognition first requires establishing a target feature database. Considering the complex conditions of the transformer equipment at the training site, a combination of structural feature recognition, equipment nameplate recognition and manual identification is planned. Accordingly, the features and feature extraction methods appropriate to the characteristics of the targets, and the method of building the database, need to be studied.
(2) Research optical-measurement-based spatial positioning of transformer equipment and rapid recognition of equipment based on spatial position
The equipment at the training site is varied, so rapid recognition and positioning of transformer equipment is relatively difficult and time-consuming. It is planned to combine the multi-view stereo vision 3D measurement and tracking system at the training site with the binocular stereo vision 3D measurement system on the AR helmet to realize spatial positioning of the transformer equipment, spatial positioning and tracking of the AR helmet, and rapid recognition of equipment, components and tools. First, the spatial position of each transformer device is obtained with the training-site multi-view stereo vision 3D measurement and tracking system; second, the 3D spatial coordinates of the AR helmet worn by the trainee are obtained in real time during training, and the equipment information near the AR helmet is retrieved; finally, the binocular stereo vision 3D measurement system on the AR helmet acquires the marker image, nameplate image or structural image of the recognition object, extracts features and matches them against the feature database to identify equipment, components and tools. By tracking the trainee (helmet) position, the number of devices or components that need to be matched is greatly reduced, reducing the computation and the computing time. For equipment with nameplates, or components and tools on which markers can conveniently be pasted, nameplate and marker recognition is used as far as possible.
The research content includes: calibration methods, 3D coordinate measurement methods and human body tracking methods for the multi-view stereo vision measurement and tracking system; nameplate and marker feature extraction and matching methods based on 2D image processing; and calibration methods, 3D model construction, 3D feature extraction and 3D matching methods for the binocular stereo vision measurement system.
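As a minimal sketch of the candidate-reduction step described above (Python/NumPy; the device list and distance threshold are illustrative assumptions), only devices within a radius of the tracked helmet position are passed on to feature matching:

```python
import numpy as np

def nearby_candidates(helmet_xyz, device_positions, device_ids, radius=3.0):
    """Return the ids of devices within `radius` metres of the AR helmet, nearest first.

    helmet_xyz:       (3,) helmet position from the multi-view tracking system.
    device_positions: (N, 3) device positions measured at the training site.
    """
    d = np.linalg.norm(device_positions - helmet_xyz, axis=1)
    order = np.argsort(d)
    return [device_ids[i] for i in order if d[i] <= radius]

# Example
devices = np.array([[1.0, 0.0, 0.5], [6.0, 2.0, 0.5], [2.0, -1.0, 0.8]])
print(nearby_candidates(np.array([1.5, 0.0, 1.6]), devices, ["CB-101", "DS-202", "PT-303"]))
```

Only the returned candidates would then be matched against nameplate, marker or structural features, which is what keeps the recognition step fast on a cluttered training site.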
(3) Research and develop rapid tracking, positioning and 3D registration technology for transformer equipment
Target tracking, positioning and 3D registration are key links of AR and determine the coincidence accuracy of the real scene and the virtual scene. The 3D registration process detects in real time the relationship between the camera coordinate system and the equipment coordinate system and determines the position of the virtual content to be added in the camera coordinate system. The project proposes a method that combines binocular stereo vision 3D reconstruction with Kanade-Lucas-Tomasi (KLT) feature point tracking to realize real-time, stable and accurate 3D registration for augmented reality.
(4) Research and develop the virtual-real fused training environment for transformer equipment: LCOS projection is used in the project, so that the real scene can be seen through the AR lenses with the virtual scene superimposed on it, achieving a virtual-real fused training environment.
1.3 Research natural human-computer interaction technology and training feedback technology for the augmented-reality-based transformer O&M simulation training environment:
(1) Research natural human-computer interaction through speech recognition for augmented-reality-based transformer O&M simulation training
During augmented-reality-based transformer O&M simulation training, selecting training subjects, querying information, guided maintenance and virtual operation of equipment all involve commands that can use speech recognition, for example "confirm", "start", "move left", "move right", "move down", "move up", "open switch", "close switch", "rotate left", "rotate right" and keywords to be retrieved.
(2) Research natural human-computer interaction through gesture recognition for augmented-reality-based transformer O&M simulation training
During augmented-reality-based transformer O&M simulation training, selecting training subjects, querying information, guided maintenance and virtual operation of equipment involve commands that can use gesture recognition, for example click, slide, open switch, close switch, rotate left and rotate right. In the project, gesture recognition and tracking uses light-coding (structured light) technology based on infrared emission and reception.
(3) Research training feedback technology
When the trainee's actual operation of the transformer equipment in the real scene, or virtual operation in the virtual scene, does not meet the requirements of the regulations, the training system generates feedback: the phenomenon corresponding to the faulty operation, such as fire, smoke, sparking, electric arc or explosion, is displayed on the AR helmet display, and the corresponding sound is played on the AR helmet loudspeaker. The virtual phenomenon needs to be aligned with the position on the transformer equipment where it would actually occur.
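A small sketch of how such feedback could be triggered (Python; the violation names, effect table and data structure are illustrative assumptions): a detected violation selects a fault phenomenon and anchors it at the equipment position so that the visual effect and sound stay aligned with the real device.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical mapping from violation type to the fault phenomenon to present.
FAULT_EFFECTS = {
    "operated_energized_equipment": ("electric arc", "arc.wav"),
    "skipped_verification_step":    ("sparking", "spark.wav"),
    "wrong_switching_order":        ("smoke", "smoke.wav"),
    "severe_misoperation":          ("explosion", "explosion.wav"),
}

@dataclass
class FeedbackEvent:
    effect: str                  # visual effect rendered on the AR helmet display
    sound: str                   # sound played on the helmet loudspeaker
    anchor_xyz: Tuple[float, float, float]   # equipment position (site frame) for the effect

def feedback_for(violation: str, equipment_xyz) -> Optional[FeedbackEvent]:
    """Translate a detected rule violation into an AR feedback event, or None if unknown."""
    entry = FAULT_EFFECTS.get(violation)
    if entry is None:
        return None
    effect, sound = entry
    return FeedbackEvent(effect, sound, tuple(equipment_xyz))

print(feedback_for("wrong_switching_order", (2.0, -1.0, 0.8)))
```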
1.4 Research and establish an integrated environment for augmented-reality-based transformer O&M learning, training and assessment
(1) Research and establish the augmented-reality-based transformer O&M learning environment
Research modeling technology for the augmented-reality-based transformer O&M learning model:
Establish the augmented-reality-based transformer O&M learning environment: the virtual scene is added to the real scene of the transformer practical training system and displayed in a fused manner, and knowledge such as transformer O&M general rules, manuals, equipment operating principles, technical parameters, overhaul, detection and operation is presented on the trainee's helmet display through text, pictures, video, audio, animation and models. The learning modes are as follows:
● Self-planned learning: determine the learning objective, select the course, plan the learning strategy and carry out the learning activities.
● Information query: when a difficult problem or obstacle is encountered during training, relevant information can be queried and retrieved at any time and any place to help solve the problem, merging the learning scene with the training scene.
● Knowledge push: the scene is perceived in real time and the matching knowledge structure and key points of the current session are automatically pushed to the trainee in a targeted way, resolving the disconnect between theory and practice in traditional online learning (content cannot be used during class and cannot be studied when needed) and the predicament of the forgetting curve.
Research virtual fusion and interactive display technology for augmented-reality-based transformer O&M learning:
The augmented-reality-based O&M learning process requires that virtual overhaul, detection and operation scenes related to the learning content be added onto the real scene and displayed in a fused manner, producing a sense of reality and immersion; by capturing the trainee's voice and gesture actions, the interaction between the person and the system is completed, and the displayed content can be changed according to the user's commands.
(2) Research and establish the augmented-reality-based transformer O&M training environment
Research modeling technology for the augmented reality training model of transformer O&M processes:
The augmented-reality-based transformer O&M training environment includes the original practical training system and the added virtual-real combined training environment. Transformer O&M process training therefore includes actual operation training on the transformer equipment and virtual operation training in the AR environment. Guided by the process information of the virtual scene, the trainee carries out actual operation training on the transformer equipment, or completes virtual operation training through voice or gesture interaction.
Under the actual operation and virtual operation training modes of augmented-reality-based transformer O&M training shown in Fig. 8, the learning activity is a process of continuous, repeated training.
Research augmented reality training interaction and operation guidance technology for transformer O&M processes:
The augmented-reality-based O&M training process requires that virtual overhaul, detection and operation scenes related to the training content be added onto the real scene and displayed in a fused manner; the trainee follows the text or voice prompts of the virtual scene and performs virtual operations through gestures or voice commands. Accordingly, gesture recognition for virtual operations related to overhaul, detection and operation, and speech recognition of the corresponding virtual operation commands, need to be studied, for example voice commands such as "open switch", "close switch", "rotate left" and "rotate right", and operating gestures such as click, slide, open switch, close switch, rotate left and rotate right.
(3) Research evaluation and assessment methods for trainee operation behavior combining augmented reality and virtual-real fusion, based on multi-sensor information fusion:
Research the augmented-reality-based transformer O&M training and assessment model
In addition to theoretical assessment, the augmented-reality-based transformer O&M training system has two further assessment modes: actual operation assessment and virtual operation assessment. Trainees can be assessed on actual operation, on virtual operation, or both. Actual operation assessment can use either automatic assessment based on multi-sensor information fusion or manual assessment based on video recording. The augmented-reality-based transformer O&M training and assessment model is shown in Fig. 9. Assessing O&M operating behavior requires not only a qualitative result (pass or fail) but also a quantitative result, i.e. a specific score. Each assessment item therefore needs to be divided into a number of steps and rules; scoring criteria are set per step and rule, and if an operation error occurs, points are deducted according to the severity of its consequences, giving a quantified assessment score.
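A minimal sketch of such a step-and-rule scoring scheme (Python; the assessment item, steps and deduction values are invented for illustration and are not taken from the patent):

```python
# Hypothetical rubric: each step carries a weight, each rule a deduction.
RUBRIC = {
    "item": "disconnector switching operation",
    "steps": [
        {"name": "receive and repeat the switching order", "max": 20,
         "deductions": {"order not repeated": 10, "wrong equipment named": 20}},
        {"name": "verify equipment position before operating", "max": 40,
         "deductions": {"verification skipped": 40, "verified wrong phase": 20}},
        {"name": "operate and confirm final position", "max": 40,
         "deductions": {"wrong switching order": 30, "final position not confirmed": 10}},
    ],
}

def score(rubric, observed_errors):
    """Compute a quantified assessment score from the observed errors per step."""
    total = 0
    for step in rubric["steps"]:
        lost = sum(step["deductions"].get(e, 0) for e in observed_errors.get(step["name"], []))
        total += max(step["max"] - lost, 0)
    return total

errors = {"verify equipment position before operating": ["verified wrong phase"]}
print(score(RUBRIC, errors))   # 80 out of 100
```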
Study trainee practical-operation behavior analysis and evaluation methods based on multi-sensor information fusion technology
The trainee's three-dimensional spatial coordinates and movement trajectory during training are obtained in real time by the on-site multi-view stereo vision measurement and tracking system; images of the transformer are collected by the equipment condition monitoring camera to analyze its working state; and the fused information from multiple image sensors is used to judge whether the trainee's practical operation behavior meets code requirements. This requires research on equipment state recognition based on two-dimensional image sensing, on human body capture and tracking based on multi-view stereo vision measurement and tracking, and on multi-sensor information fusion methods for judging trainee practical operation behavior.
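A minimal sketch of one such fusion rule is given below: the trainee's measured 3-D trajectory is checked against an assumed minimum approach distance whenever the monitored equipment state is "energized". The distance threshold and state labels are illustrative assumptions.

```python
# Minimal sketch of fusing two sensing channels: the trainee's 3D trajectory from the
# multi-view stereo measurement and the equipment state from the condition-monitoring
# camera. The safety-distance rule and state labels are illustrative assumptions.

import numpy as np

SAFETY_DISTANCE_M = 1.5   # assumed minimum approach distance while energized


def check_operation(trajectory_xyz, equipment_states, transformer_xyz):
    """trajectory_xyz: (N, 3) array of trainee positions over time.
    equipment_states: length-N list, e.g. "energized" or "de-energized",
    synchronized with the trajectory samples.
    Returns the sample indices where the code requirement is violated."""
    trajectory_xyz = np.asarray(trajectory_xyz, dtype=float)
    dists = np.linalg.norm(trajectory_xyz - np.asarray(transformer_xyz), axis=1)
    violations = [
        i for i, (d, state) in enumerate(zip(dists, equipment_states))
        if state == "energized" and d < SAFETY_DISTANCE_M
    ]
    return violations


if __name__ == "__main__":
    traj = [[3.0, 0.0, 0.0], [1.2, 0.0, 0.0], [1.2, 0.0, 0.0]]
    states = ["energized", "energized", "de-energized"]
    print(check_operation(traj, states, [0.0, 0.0, 0.0]))   # [1]
```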
Study trainee virtual-operation behavior analysis and evaluation methods based on augmented reality
In the augmented-reality environment the trainee performs virtual operations on the virtual equipment by gesture and voice; whether a virtual operation is correct can be judged by comparing it with the correct operating procedure.
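A minimal sketch of such a comparison, aligning the trainee's recorded virtual actions against an assumed correct procedure, is shown below; the step names are illustrative.

```python
# Minimal sketch: judging a virtual operation by comparing the trainee's recorded
# action sequence against the correct operating procedure. Step names are illustrative.
import difflib

CORRECT_PROCEDURE = ["open breaker", "open disconnector", "verify de-energized",
                     "attach ground leads"]


def evaluate_virtual_operation(trainee_actions):
    """Return (is_correct, report); report lists missing / extra / misordered steps."""
    matcher = difflib.SequenceMatcher(a=CORRECT_PROCEDURE, b=trainee_actions)
    report = []
    for op, a0, a1, b0, b1 in matcher.get_opcodes():
        if op == "delete":
            report.append(f"missing step(s): {CORRECT_PROCEDURE[a0:a1]}")
        elif op == "insert":
            report.append(f"extra step(s): {trainee_actions[b0:b1]}")
        elif op == "replace":
            report.append(f"expected {CORRECT_PROCEDURE[a0:a1]}, got {trainee_actions[b0:b1]}")
    return (not report), report


if __name__ == "__main__":
    ok, rep = evaluate_virtual_operation(
        ["open breaker", "verify de-energized", "attach ground leads"])
    print(ok, rep)   # False, reports the missing "open disconnector" step
```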
1.5 Research and development of a transformer O&M simulation training demonstration and verification system based on augmented reality
The augmented-reality transformer O&M simulation training demonstration and verification system is built on the basis of the existing intelligent substation experience system.
The augmented-reality transformer O&M simulation training demonstration and verification system covers the following training contents:
(1) primary equipment
Identification of primary equipment such as circuit breakers, disconnecting switches and transformers and of auxiliary equipment; status data display; and detection and maintenance knowledge and skills.
(2) secondary device
Identification of secondary equipment, status data display, relay protection fundamentals, secondary equipment patrol, equipment cleaning, protection function enabling/disabling, and professional knowledge and skills such as specialized secondary inspection patrols.
(3) live detection
Guidance for infrared testing, main transformer core grounding current testing, SF6 electrical equipment gas leak detection, etc.
(4) power transformation operation
Guidance for switching operations, equipment patrol navigation, reproduction of accidents and abnormalities, and remote assistance handling.
Embodiment
Based on the research contents and targets of this project, the general approach of the embodiment is: on the basis of a comprehensive survey of the domestic and international state of research, formulate corresponding research methods and routes; after thorough study and comparison of the various schemes, methods and theories, propose the most effective solution and strategy for each problem, so as to reach or exceed the targets. The embodiments adopted by this project are as follows:
(1) Research and analyze the domestic and international state of research for each research content of this project, summarize the latest achievements already obtained at home and abroad, analyze how these achievements can inform the project research and where current achievements fall short of solving the project's problems, and clarify the research contents and technical route.
(2) On the basis of the comprehensive survey, establish a correct and accurate model of the training and learning system and design the software and hardware architecture according to the model, as guidance for realizing the system functions. Thoroughly investigate the needs of the training unit and of trainees, and fully base the system model on these needs.
(3) Analyze the target identification technology of AR, first determining the identification, confirmation and modeling methods for different objects: study feature-based recognition methods and the variation of object features under various conditions; then, based on the above models, build a database and perform target identification and matching.
(4) Study motion capture and tracking-registration technology, analyze the merits of the various algorithms, and select a suitable tracking algorithm to track the target and perform three-dimensional registration.
(5) Study fusion display and interaction technology: thoroughly investigate current fusion display techniques, establish a realistic display model, develop human-computer interaction in coordination with accurate positioning technology, and study semantic accuracy, using a speech recognition module to make the interaction more efficient.
(6) Develop and realize the transformer monitoring simulation training demonstration system based on immersive virtual reality technology, and carry out application deployment and verification.
(7) Summarize the above research, write papers, patents and R&D reports, and pass acceptance.
The embodiment of the present invention proposes a real-time, stable and accurate augmented-reality three-dimensional registration method that combines binocular stereo vision three-dimensional reconstruction with Kanade-Lucas-Tomasi (KLT) feature point tracking. A pair of key frames extracted from the binocular stereo vision video stream at regular intervals is solved to establish the three-dimensional model of the target object and to extract the feature points to be tracked; the feature points of all intermediate frames between key frames are tracked, and the real-time positions of the feature points yield the real-time position and pose of the camera, realizing real-time tracking and registration. The three-dimensional reconstruction of key frames and the feature point tracking of intermediate frames run in two threads, and the key-frame reconstruction is used to continually update the tracked feature points and correct the three-dimensional registration, solving the problem of accumulated drift in feature-point tracking and the shortcoming that three-dimensional reconstruction alone is too time-consuming to register in real time.
It is further proposed to obtain the trainee's three-dimensional spatial position and movement trajectory in real time using multi-view stereo vision three-dimensional measurement and tracking technology, to monitor the working state of equipment using optical imaging and other sensing technologies, and to evaluate whether the trainee's operating behavior conforms to the code through fusion analysis of multi-sensor information; this method can be used to examine the trainee's practical operation behavior.
The realization principles and theory of the embodiment of the present invention are explained below:
1st, binocular stereo vision three-dimensional measurement principle: As shown in Fig. 10, binocular stereo vision observes the same object from two viewpoints to obtain perceptual images at different viewing angles, and the positional deviation between image pixels (i.e. the parallax) is calculated by triangulation to obtain the three-dimensional information of the scene. A spatial point is the most basic unit of a three-dimensional spatial structure: in theory, points form lines, lines form surfaces, and surfaces constitute a three-dimensional solid structure. Measuring the coordinates of spatial points is therefore the most fundamental task of binocular stereo vision.
The imaging model of a three-dimensional spatial point is shown in Fig. 10. For any point P on the surface of a spatial object, if two cameras C1 and C2 observe P simultaneously, the projections of P in the two cameras are P1 and P2 respectively. Once P1 and P2 have been determined to be matching points of the binocular image pair, it follows that P lies both on the line O1P1 and on the line O2P2; P is therefore the intersection of these two lines and its three-dimensional position is uniquely determined.
Let the projections of the spatial point P(X, Y, Z), given in the world coordinate system, onto the two camera image planes be P1(u1, v1) and P2(u2, v2), with the coordinates of P1 and P2 expressed in pixels. According to the pinhole imaging model, the relation between pixel coordinates and world coordinates can be derived as s·(u, v, 1)^T = K·[R | t]·(X, Y, Z, 1)^T, where K = [[ax, 0, u0], [0, ay, v0], [0, 0, 1]] and (u0, v0) is the principal point.
In the above formula, s is a proportionality coefficient; ax = f/dx and ay = f/dy; the matrix R is the rotation matrix and t the translation vector; f is the camera focal length; dx and dy are the distances between adjacent pixels along the X and Y axes of the image coordinate system, respectively.
In the measuring system, the cameras are configured in a parallel binocular spatial relationship. To obtain the coordinates of a spatial point P, the projection matrices of the two cameras are first obtained by camera calibration with a calibration target; substituting the two projection matrices into the above formula then yields an over-determined system of four linear equations in X, Y and Z, whose least-squares solution gives the world coordinates of the point.
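A minimal numerical sketch of this least-squares triangulation is given below, assuming the two 3x4 projection matrices have already been obtained by calibration; the camera parameters in the example are illustrative.

```python
# Minimal sketch of binocular triangulation: given the two calibrated projection
# matrices P1, P2 (3x4) and a matched pixel pair (u1,v1), (u2,v2), build the
# over-determined linear system of four equations in (X, Y, Z) described above and
# solve it in the least-squares sense. Numeric values are illustrative.

import numpy as np


def triangulate(P1, P2, uv1, uv2):
    """Return the world coordinates (X, Y, Z) of the matched point."""
    u1, v1 = uv1
    u2, v2 = uv2
    # Each image contributes two equations: u*(P[2]·Xh) = P[0]·Xh and v*(P[2]·Xh) = P[1]·Xh
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Split into A_xyz * [X,Y,Z]^T = -b  (homogeneous coordinate fixed to 1)
    A_xyz, b = A[:, :3], A[:, 3]
    X, *_ = np.linalg.lstsq(A_xyz, -b, rcond=None)
    return X


if __name__ == "__main__":
    # Two illustrative cameras: identity pose and a 0.5 m baseline along X.
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])
    Xw = np.array([0.2, 0.1, 2.0, 1.0])
    uv1 = (P1 @ Xw)[:2] / (P1 @ Xw)[2]
    uv2 = (P2 @ Xw)[:2] / (P2 @ Xw)[2]
    print(triangulate(P1, P2, uv1, uv2))   # ~[0.2, 0.1, 2.0]
```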
2nd, target identification principle:
1) Static object recognition
Feature extraction is a key technology in image target recognition and has a decisive influence on the final recognition result. In the narrow sense, feature extraction is feature formation, i.e. producing a group of essential features from the object to be identified. Image feature extraction is the process by which the computer extracts, for recognition, the pixel information that constitutes the image and analyzes pixels to determine their feature membership; it is a description of the image regions containing the image's salient structural information, for example the image's edges, corners and other image characteristics.
From the viewpoint of transformation or mapping, feature extraction is a method of transforming a group of measured values of a certain pattern so as to highlight the pattern's representative features. Through image analysis and transformation, feature points meeting the requirements are extracted from sub-regions as input for further recognition, so image features are the starting point of subsequent processing. Image features embody the most basic attributes of the image itself and can be quantified in combination with vision.
Many kinds of image features can be extracted from a digital image; typical ones are corner features, edge features and blob features.
Once an interest region of the image has been detected, the next step is to describe this region quantitatively. The quantitative description of image features obtained in this way is called the feature descriptor of the image.
Through image feature extraction and the construction of feature descriptors, each image is abstracted into a series of local features. To perform image matching, the features extracted from the two images must be compared using query techniques, and a matching strategy then decides whether the images match. The image feature matching process mainly includes three parts: similarity measurement, matching strategy and query technique.
Pattern matching using features is currently the most common and most efficient method in object matching and recognition; concretely, it means matching the target's features in the image against the models in a model library. In many image target recognition tasks the number of targets to be identified is large and each target possesses many features; therefore, when building the recognition system, the validity of the features and the efficiency of the matching algorithm must be considered.
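As an illustration of feature extraction and matching of this kind, the following sketch uses OpenCV's ORB features with brute-force Hamming matching; the file names are placeholders, and ORB is one possible choice of feature rather than the specific features used in the patent.

```python
# Minimal sketch of feature-based matching: ORB corner-like features and brute-force
# Hamming matching from OpenCV. File names are illustrative placeholders.

import cv2


def match_features(img_path_a, img_path_b, max_matches=50):
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)          # detector + binary descriptor
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Brute-force matcher with Hamming distance suits ORB's binary descriptors;
    # crossCheck keeps only mutually best matches (a simple matching strategy).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return kp_a, kp_b, matches[:max_matches]


if __name__ == "__main__":
    # e.g. a stored template of an equipment nameplate vs. a live camera frame
    kp_a, kp_b, matches = match_features("nameplate_template.png", "camera_frame.png")
    print(f"{len(matches)} candidate correspondences")
```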
Assume that each feature class is represented by its features, and that the j-th feature value of objects of class i is denoted f_ij. For an unknown object, its feature representation is u_j. The similarity of the object to class i is then given by

S_i = Σ_j w_j · s_j

where w_j is the weight of the j-th feature, chosen according to the relative importance of the features, and s_j is the similarity value of the j-th feature; s_j can be an absolute difference, a normalized difference or another distance measure. The most common approach uses the following formula together with normalized weights:

s_j = |u_j − f_ij| (3-3)
An object can be represented not only by its features but also by the relations between features, which may be spatial or of other forms. In this case the object may be represented as a graph: each node of the graph represents an object, and the arcs connecting nodes represent the relations between objects.
Therefore, the object recognition problem may be regarded as a graph matching problem.
A graph matching problem can be defined as follows: there are two graphs G1 and G2, containing N_ij nodes, where i denotes the graph number and j the node number, and the relation between node j and node k is denoted R_jk. A similarity measure is defined on the graphs; this measure incorporates the similarity of all nodes and relations.
In most target recognition applications, the object to be identified may be only partially visible; a recognition system must therefore be able to recognize objects from partial views.
2) Gesture recognition
This project acquires scene depth images using depth sensing technology to recognize gestures.
During human-computer interaction, gesture motions are usually made in front of the body, so the gesture region can be segmented using the difference in depth values between the gesture region and the background region. Pixels at the same depth have the same gray value in the depth image, but the distance between the person and the depth camera differs from session to session, so region segmentation cannot be achieved with a fixed depth threshold. This project therefore uses a gray-level histogram based method to find the segmentation threshold between the gesture region and the background.
The gray-level histogram records the number of pixels at each gray level in the image and reflects the frequency with which each gray level occurs. The histogram is computed for the gray image corresponding to the depth values. The hand is usually the region nearest to the depth camera and its area is small relative to the background, so the histogram is scanned from high gray values downward, and the gray value at which the pixel count changes markedly is taken as the segmentation threshold.
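A minimal sketch of this histogram-based threshold search is shown below, assuming the depth image has been mapped to an 8-bit gray image in which nearer pixels are brighter; the valley criterion (a minimum bin count) is an illustrative simplification.

```python
# Minimal sketch of the histogram-based split described above: the hand is closer to
# the depth camera than the background, so we scan the gray-level histogram from the
# high end and cut at the first pronounced drop after the hand's peak.

import numpy as np


def gesture_threshold(gray_depth_img, min_bin_count=50):
    """gray_depth_img: 2-D uint8 image where nearer pixels have larger gray values.
    Returns a gray level separating the hand region from the background."""
    hist, _ = np.histogram(gray_depth_img, bins=256, range=(0, 256))
    in_hand_peak = False
    for g in range(255, -1, -1):              # scan from bright (near) to dark (far)
        if hist[g] >= min_bin_count:
            in_hand_peak = True
        elif in_hand_peak:                    # first drop after the hand's peak
            return g
    return 0


if __name__ == "__main__":
    img = np.full((120, 160), 60, np.uint8)   # background (far)
    img[40:80, 60:100] = 220                  # hand region (near)
    t = gesture_threshold(img)
    mask = (img > t).astype(np.uint8) * 255
    print(t, mask.sum() // 255)               # threshold and hand-pixel count (1600)
```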
By extracting the appearance features of the gesture, gestures are classified according to the number of fingers and the angles between fingers that characterize the gesture, realizing fast recognition of gestures under rotation and scaling. Compared with other gesture feature extraction methods, gesture appearance features are more intuitive, require no training samples, are highly adaptable, and are computed fast enough for real-time operation.
As shown in Fig. 11, the extraction steps of the gesture appearance features are as follows:
(1) The center point of the gesture region is obtained by erosion operations from mathematical morphology. Since the palm is the main component of the gesture appearance, it occupies the largest area of the gesture region and its points are concentrated. Repeated erosion removes the boundary points of the gesture region and gradually shrinks it, finally yielding the center point C0 of the gesture region.
(2) The maximum distance l between the center point and the edge of the gesture region is calculated and divided into 10 equal parts, with d = l/10. Circular traces are drawn around the gesture region center point, with radii increasing from d to l in steps of d, giving 10 circular traces, as shown in the figure.
(3) The position coordinates of the pixel-value change points on each circular trace are recorded clockwise: Pij (0→1, i.e. from the black region to the white region) and Qij (1→0, i.e. from the white region to the black region), where i is the number of the trace circle and j the number of the P or Q point on that circle. Isolated Pij and Qij points are deleted.
(4) From the position coordinates of Pij and Qij, the distance Dij between each pair Pij and Qij is calculated. When a trace circle intersects only the fingertip region, a smaller Dij is obtained that cannot represent the actual finger width; therefore, when Dij is smaller than a threshold δ, the corresponding Pij and Qij points are deleted, and Pij and Qij points located between fingers are likewise removed. The threshold is set empirically to δ = d/4.
(5) The maximum value of j obtained over the trace circles is the total number of branches connected to the palm, N = max(j). Since the branches include the fingers and the wrist, the number of fingers is Nf = N − 1.
(6) The mean width Wj of each branch is obtained from the mean of its Dij values. Since the wrist is wider than the fingers, the wrist corresponds to the branch of maximum width. For each branch other than the wrist, the midpoint of Pij and Qij on the largest trace circle it intersects is connected to the center point C0, and the inter-finger angles Aj−1 between these lines are obtained.
Natural gesture motion is required in human-computer interaction, without restriction to particular gesture poses or a particular gesture region size. The distance and pose of the hand cause the gesture in the image to vary in size and rotation. In the feature extraction, using equally spaced trace circles eliminates the influence of the gesture region size, and the features computed from the change points on the traces, namely the finger count Nf and the inter-finger angles Aj−1, are invariant to rotation and scaling and are not affected by the distance or rotation of the gesture.
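The following sketch illustrates the circular-trace idea on a binary gesture mask: pixels are sampled along concentric circles around the region centre and 0-to-1 transitions are counted to estimate the number of branches. The sampling density and the use of the per-circle maximum are simplifications of the procedure described above.

```python
# Minimal sketch: sample the binary gesture mask along circles around the region centre
# and count 0->1 transitions; the largest count over the circles approximates the
# number of branches (fingers + wrist). Parameters are illustrative simplifications.

import numpy as np


def branch_count(mask, centre, max_radius, n_circles=10, samples=360):
    """mask: 2-D 0/1 array; centre: (row, col); returns max transitions over circles."""
    cy, cx = centre
    angles = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    best = 0
    for i in range(1, n_circles + 1):
        r = max_radius * i / n_circles
        ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, mask.shape[0] - 1)
        xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, mask.shape[1] - 1)
        ring = mask[ys, xs]
        rolled = np.roll(ring, 1)
        transitions = int(np.sum((rolled == 0) & (ring == 1)))   # 0 -> 1 changes
        best = max(best, transitions)
    return best           # fingers ~= best - 1 (one branch is the wrist)


if __name__ == "__main__":
    yy, xx = np.mgrid[0:200, 0:200]
    m = ((yy - 100) ** 2 + (xx - 100) ** 2 <= 40 ** 2).astype(np.uint8)  # palm (disc)
    m[20:60, 95:105] = 1                        # one raised "finger"
    m[140:199, 90:110] = 1                      # wrist
    print(branch_count(m, (100, 100), 90))      # 2 branches -> 1 finger + wrist
```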
From the extracted appearance features of the gesture, a decision-tree model is established to classify the gestures. A decision tree is a mathematical method that performs inductive learning on training samples to generate a decision tree or decision rules, which are then used to classify new data. Classification proceeds by sorting an instance from the root node down to a leaf node, which gives the class of the instance. The key to constructing the decision tree is choosing appropriate logical tests or attributes.
The extracted gesture features are mainly the finger count Nf and the inter-finger angles Aj−1, which serve as the classification nodes of the decision tree. Since the finger count differs markedly between gestures, the decision tree first uses Nf as the root node; for gestures with the same finger count, child nodes are added to distinguish them by their inter-finger angles Aj−1. The classification conditions on the child nodes, based on the inter-finger angles Aj−1, are obtained by testing and training on a small sample; the resulting decision tree is shown in Fig. 12.
Gestures 1, 3, 4 and 5 have unique finger counts and can be classified directly at the root node. Gestures 2 and 6 both have Nf = 2 and are distinguished at the next level of child nodes by the size of the inter-finger angle A1. Gestures 7, 8 and 9 have Nf = 3 and require two levels of child nodes, judging the inter-finger angles A1 and A2, to be distinguished.
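A minimal sketch of such a decision tree is given below; the assignment of finger counts to gesture labels and the 30-degree angle thresholds are illustrative assumptions, not the trained classification conditions of Fig. 12.

```python
# Minimal sketch of the decision-tree idea: classify first by finger count Nf, then,
# for gestures sharing the same Nf, by the inter-finger angles A1 (and A2).
# The thresholds and the finger-count-to-label mapping are illustrative assumptions.

def classify_gesture(nf, angles):
    """nf: number of fingers; angles: list of inter-finger angles in degrees."""
    if nf == 0:
        return "gesture 1"
    if nf == 1:
        return "gesture 3"
    if nf == 4:
        return "gesture 4"
    if nf == 5:
        return "gesture 5"
    if nf == 2:                     # gestures 2 and 6 share Nf = 2, split on A1
        return "gesture 2" if angles[0] < 30.0 else "gesture 6"
    if nf == 3:                     # gestures 7, 8, 9 share Nf = 3, split on A1 and A2
        if angles[0] < 30.0 and angles[1] < 30.0:
            return "gesture 7"
        if angles[0] < 30.0:
            return "gesture 8"
        return "gesture 9"
    return "unknown"


if __name__ == "__main__":
    print(classify_gesture(2, [20.0]))        # gesture 2
    print(classify_gesture(3, [45.0, 25.0]))  # gesture 9
```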
3) Target tracking and three-dimensional registration method
Binocular stereo vision three-dimensional reconstruction is combined with Kanade-Lucas-Tomasi (KLT) optical-flow feature point tracking to realize augmented-reality target tracking and three-dimensional registration. A pair of key frames extracted from the binocular stereo vision video stream at regular intervals is solved to establish the three-dimensional model of the target object and to extract the feature points to be tracked; the KLT method tracks these feature points through all intermediate frames between key frames, and the real-time positions of the feature points yield the real-time position and pose of the camera, realizing real-time tracking and registration; the key-frame three-dimensional reconstruction is used to continually update the tracked feature points and to correct the three-dimensional registration.
A, stereo vision three-dimensional reconstruction principle
Three-dimensional reconstruction based on stereo vision recovers the three-dimensional geometry of an object from two or more images; the image sequences used for reconstruction are captured either by a single moving camera or by multiple cameras at different viewpoints. A camera obtains a two-dimensional image of a three-dimensional object through perspective transformation, and there is a definite correspondence between points in the image and points on the actual object. Just like our eyes, two CCD cameras photograph a point in space from different directions to obtain two images, and the position coordinates of the point in real space are then deduced from the correspondence; this is the process of binocular stereo vision three-dimensional reconstruction.
After camera calibration, image matching and spatial point reconstruction have been completed, these data can be used to reconstruct the object in three dimensions.
Spatial point reconstruction algorithm: Under the convergent stereo model, assume that the image points P1 and P2 of an arbitrary spatial point P on the two cameras C1 and C2 have already been detected in the two images, i.e. P1 and P2 are known to be corresponding points of the same spatial point P. Assume also that the cameras C1 and C2 have been calibrated, with projection matrices M1 and M2 respectively. Then

Zc1·(u1, v1, 1)^T = M1·(X, Y, Z, 1)^T,  Zc2·(u2, v2, 1)^T = M2·(X, Y, Z, 1)^T

where (u1, v1, 1) and (u2, v2, 1) are the homogeneous image coordinates of P1 and P2 in their respective images, (X, Y, Z, 1) are the homogeneous coordinates of P in the world coordinate system, and m^k_ij (k = 1, 2; i = 1, …, 3; j = 1, …, 4) denotes the element in row i, column j of Mk.
According to the camera linear model, formula (3-6), Zc1 and Zc2 can be eliminated from the above equations, giving four linear equations in X, Y and Z:
Formulas (3-7) and (3-8) have the geometric meaning of the lines O1P1 and O2P2. Since the spatial point (X, Y, Z) is the intersection of O1P1 and O2P2, it must satisfy both sets of equations simultaneously; solving them together therefore yields the coordinates (X, Y, Z) of the spatial point P. In practice, because the data are always noisy, the three-dimensional coordinates of the point are generally obtained by the least-squares method. Surface fitting of these points then yields the three-dimensional map of the space.
B, optical flow method for target tracking
The optical flow method is an effective target detection approach based on the assumption that the gray gradient, or brightness, remains essentially constant. Optical flow is the velocity at which gray-level patterns move in the image; it is the projection onto the imaging plane of the three-dimensional velocity vectors of visible points in the scene and describes the temporal change of the positions of scene surface points in the image. The motion field obtained by optical flow computation serves as an important recognition feature for judging moving targets. The basic principle of detecting moving targets with the optical flow method is as follows: a velocity vector is assigned to every pixel in the image, forming an image motion field; at a particular moment, points in the image correspond one-to-one with points on the three-dimensional object, the correspondence being given by the projection relation, and the image can be analyzed dynamically according to the velocity vector of each pixel. If there is no moving target in the image, the optical flow vectors vary continuously over the whole image region; when a moving target is present, there is relative motion between the target and the background, so the velocity vectors of the moving target necessarily differ from those of the background, allowing the moving target and its position to be detected.
The principle of using the optical flow method for target tracking is as follows (a minimal sketch is given after the list):
● Process a continuous sequence of video frames;
● For each video sequence, detect possible foreground targets using some object detection method;
● If a foreground target appears in some frame, find its representative key feature points (generated randomly, or using corner points as feature points);
● For every pair of subsequent adjacent video frames, find the optimal positions in the current frame of the key feature points that appeared in the previous frame, thereby obtaining the position coordinates of the foreground target in the current frame;
● Iterating in this way realizes tracking of the target.
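A minimal sketch of this tracking loop, using OpenCV's Shi-Tomasi corners and pyramidal Lucas-Kanade (KLT) optical flow, is given below; the video file name is a placeholder.

```python
# Minimal sketch of KLT-style tracking between a key frame and subsequent frames using
# OpenCV: corner features are extracted once, then tracked frame to frame with the
# pyramidal Lucas-Kanade optical flow. The video source is an illustrative assumption.

import cv2

cap = cv2.VideoCapture("training_scene.mp4")          # assumed video source
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Key-frame features to track (Shi-Tomasi corners)
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=7)

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

while True:
    ok, frame = cap.read()
    if not ok or prev_pts is None or len(prev_pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track the key-frame features into the current (intermediate) frame
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None,
                                                   **lk_params)
    good = status.ravel() == 1
    prev_pts = next_pts[good].reshape(-1, 1, 2)
    prev_gray = gray
    # prev_pts now hold the real-time 2-D feature positions used for pose estimation

cap.release()
```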
C, three-dimensional registration method
Three-dimensional registration involves the translation and projection transformations between the image plane coordinate system, the camera coordinate system and the world coordinate system. Under the pinhole camera imaging model, a point in the world coordinate system is represented in homogeneous coordinates as
Xw = (xw, yw, zw, 1)^T,  X = (x, y, 1)^T  (3-9)
where x and y are its projection on the image plane. The relation between them can be expressed as formula (3-10): s·X = K·[R1 R2 R3 T]·Xw  (3-10)
When Xw lies in the plane Zw = 0, formula (3-10) reduces to formula (3-11): s·X = K·[R1 R2 T]·(xw, yw, 1)^T = Hw·(xw, yw, 1)^T  (3-11)
In formulas (3-10) and (3-11), K denotes the internal parameters of the camera and M the external parameters. Three-dimensional registration mainly consists in obtaining these internal and external parameters. The internal parameter matrix K can be determined by camera calibration and remains constant while the system runs. M comprises three rotation components (R1 R2 R3) and a translation component (T), and the key problem during registration is obtaining these four components. From formula (3-11), if the internal parameters K and the homography matrix Hw are known, and R1, R2 and R3 satisfy orthogonality, then (R1 R2 R3) and (T) can be obtained, giving the external parameters M of the camera. As long as M, which contains both the rotation matrix and the translation, is obtained for every frame, the virtual object can be registered correctly into the real world, realizing virtual-real fusion.
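The following sketch illustrates recovering (R1 R2 R3) and T from a known K and plane homography Hw as in formula (3-11); the normalization and the example numbers are illustrative, and a real system would additionally orthogonalize the rotation and resolve sign ambiguities.

```python
# Minimal sketch of recovering the external parameters from a plane-induced homography:
# with K known, [r1 r2 t] is proportional to K^-1 Hw and r3 = r1 x r2.

import numpy as np


def pose_from_homography(K, Hw):
    """Return rotation matrix R (3x3) and translation t (3,) from homography Hw."""
    A = np.linalg.inv(K) @ Hw
    # Scale so that the rotation columns have unit length (average of both norms)
    scale = 2.0 / (np.linalg.norm(A[:, 0]) + np.linalg.norm(A[:, 1]))
    A = A * scale
    r1, r2, t = A[:, 0], A[:, 1], A[:, 2]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    return R, t


if __name__ == "__main__":
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    R_true = np.eye(3)
    t_true = np.array([0.1, -0.2, 1.5])
    Hw = K @ np.column_stack([R_true[:, 0], R_true[:, 1], t_true])
    R, t = pose_from_homography(K, 3.0 * Hw)   # arbitrary scale on Hw is removed
    print(np.round(R, 3), np.round(t, 3))
```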
3rd, virtual-real fusion display principle
Current mainstream head-mounted display devices fall into two classes by their realization principle: optical see-through display devices and video see-through display devices; their principles are contrasted on the left and right of Fig. 13.
The characteristic of an optical see-through display device is that the user sees the real environment directly, and the image of the virtual object is then superimposed on it by a projection device. Because the user views the real environment directly, the field of view is wide, dizziness is unlikely, and user comfort is high. However, since the virtual object image of an optical see-through device is projected directly into the user's eyes, the virtual object can usually only be superimposed on top of the real object, and it is difficult to handle the occlusion between virtual and real objects correctly. In addition, because the real environment and the virtual object are completely independent of each other, insufficient drawing or update speed of the virtual object easily causes the changes of the real environment and of the virtual object to fall out of sync; in the region where the two eyes' views partly overlap, the difference between the binocular images produces binocular rivalry, which in serious cases can split the image, cause visual fatigue and increase the user's workload. The real-time responsiveness required of the augmented reality system is therefore very high.
A video see-through display device separates the user completely from the real environment; the user perceives the real environment only indirectly, through the video images of the cameras mounted on the device. After the augmented reality system superimposes the virtual objects on the video signal, the fused virtual-real video signal is output to the display screen, allowing the user to view the mixed virtual-real environment. Because the user is completely separated from the real environment and perceives it second-hand through the cameras, the field of view is smaller and viewing comfort lower than with an optical see-through device. However, video see-through display devices have advantages that optical see-through devices cannot match.
1. Good virtual-real fusion. Since the real environment is captured by cameras and processed by the computer before being presented to the user, this type of display device provides the basis for solving the illumination consistency problem between the real environment and virtual objects. In addition, by appropriately processing the video frames, the computer can correctly display the occlusion relations between virtual and real objects.
2. Simple realization and low manufacturing cost. The core components of a video see-through display device, the camera and the micro display, are highly mature products at moderate prices; no core development is needed, only simple adaptation of the camera and the micro display.
3. Good virtual-real image synchronization. Since the user perceives the real world through the video captured by the cameras, when the virtual object and the real environment are output out of sync, methods such as delaying the video output can keep the virtual object synchronized with the video image of the real environment.
A video see-through helmet is selected. Video see-through helmets are broadly divided into binocular and monocular types. Because a monocular helmet has only one camera, the user cannot obtain complete depth information of the real environment from a single video frame and thus cannot judge the distance of real objects accurately; in augmented reality systems this often leads to failing to grasp a real object at the first attempt. The helmet is therefore fitted with two observation cameras, forming left and right video channels that are transmitted through stereoscopic display glasses to the user's left and right eyes respectively, so that the user obtains complete depth information of the real environment. On the other hand, since interpupillary distance differs between people, a mismatch between the two observation cameras and the user's interpupillary distance not only weakens the stereoscopic effect but in serious cases also causes dizziness and other discomfort. To avoid the discomfort produced when the interpupillary distance does not match the preset value, a stepper motor is designed to fine-tune the distance between the two observation cameras, giving the user a stereoscopic display device with an optimal viewing experience.
(LCoS technology) At the present level of technology, for a person to see a virtual image there must be an image source, and since the light must finally enter the human eye, the display part must not only have display capability but also the ability to project light directionally. A liquid crystal display alone is therefore not sufficient: its emitted brightness and light efficiency are far from enough. In an AR head-mounted display, what is actually required is, strictly speaking, a micro-projector (Micro-Projector).
The structure of a traditional projector includes a whole stack of lenses, which is far from "miniature"; such a system is difficult to make small and its power consumption is very high.
Liquid crystal on silicon (English: Liquid Crystal on Silicon, abbreviated LCoS) is one of the key technologies for miniaturizing AR head-mounted displays.
In a three-chip LCoS imaging system, the white light emitted by the projector lamp is first split by a beam-splitting system into red, green and blue light; each primary-color beam is directed onto a reflective LCoS chip, and the system controls the state of the liquid crystal molecules on the LCoS panel to modulate the intensity of the light reflected by each pixel of that chip. The light reflected by the LCoS chips is then recombined by the necessary refractive optics into a single beam and projected onto the screen through the projection lens, forming a color image.
In HoloLens, the two light-emitting points at the bridge of the nose are where the LCoS micro-projectors are located. This end-to-end optical structure is in fact very small yet ingenious, unlike Meta 2, which simply and crudely places such a large display directly in front of the eyes.
Liquid crystal on silicon has the advantages of high light-utilization efficiency, small size, high aperture ratio, relatively mature manufacturing technology and low cost, and can easily achieve high resolution and full color reproduction.
The above describes the preferred embodiments of the present invention. It should be noted that, for those skilled in the art, several improvements and modifications can be made without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (5)

1. A transformer O&M simulation training system based on augmented reality, characterized in that it comprises: a Training Management server, an AR subsystem, and a training outdoor scene visual monitoring subsystem;
wherein the AR subsystem includes an AR helmet; the AR helmet includes: a binocular stereo vision three-dimensional perception unit for recognizing, tracking and registering the target equipment, a display unit that superimposes the displayed virtual scene on the actual scene, a gesture recognition unit for recognizing user actions, and a voice recognition unit for recognizing the user's speech;
the training outdoor scene visual monitoring subsystem includes multi-view stereo vision measurement and tracking cameras and an equipment condition monitoring camera arranged in the actual scene; wherein the equipment condition monitoring camera is used to recognize the working condition of the transformer, and the multi-view stereo vision measurement and tracking cameras are used to obtain the three-dimensional spatial coordinates of the transformer in the actual scene, to measure the three-dimensional spatial coordinates of the trainee at the training scene, and to track and measure the trainee's movement trajectory;
the Training Management server is connected to the AR subsystem and the training outdoor scene visual monitoring subsystem, so as to evaluate the conformity of the trainee's practical operation of the transformer using the state information of the transformer and the trainee's movement trajectory.
2. The transformer O&M simulation training system based on augmented reality according to claim 1, characterized in that the Training Management server communicates wirelessly with at least one AR subsystem and with the training outdoor scene visual monitoring subsystem through a wireless router.
3. The transformer O&M simulation training system based on augmented reality according to claim 1, characterized in that the AR subsystem establishes the superposition of the virtual scene and the real scene by the following method:
Step 1, establishing a transformer 3D model using the binocular stereo vision measuring unit on the AR helmet, or directly importing an existing power transformation equipment 3D model; wherein establishing the transformer 3D model using the binocular stereo vision measuring unit on the AR helmet specifically comprises: extracting the 3D features of the transformer, extracting two-dimensional features of the equipment nameplate from two-dimensional images, and establishing an equipment feature database;
Step 2, combining the multi-view stereo vision three-dimensional measurement and tracking system with the binocular stereo vision three-dimensional measurement system in the AR helmet to realize spatial positioning of the transformer, spatial positioning and tracking of the AR helmet, and rapid identification of the transformer, its parts and tools;
Step 3, based on binocular stereo vision technology, detecting in real time the relation between the camera coordinate system and the transformer coordinate system, determining the position of the virtual content to be added in the camera coordinate system, and realizing rapid tracking, localization and three-dimensional registration of the transformer;
Step 4, using LCoS projection technology to realize the fused display of the transformer virtual scene and the actual scene.
4. A method of training using the transformer O&M simulation training system based on augmented reality according to any one of claims 1 to 3, characterized by comprising:
displaying the superimposed virtual scene and actual scene through the AR subsystem, while using the AR subsystem to determine the trainee's human-computer interaction actions and voice;
obtaining the working condition of the transformer and the three-dimensional spatial coordinates of the transformer through the training outdoor scene visual monitoring subsystem, measuring the trainee's three-dimensional spatial coordinates at the training scene, and tracking and measuring the trainee's movement trajectory;
evaluating, with the Training Management server, the conformity of the trainee's practical operation of the transformer according to the data obtained by the AR subsystem and the training outdoor scene visual monitoring subsystem.
5. The transformer O&M simulation training method based on augmented reality according to claim 4, characterized in that the method further includes:
when the trainee's practical operation behavior on the transformer in the actual scene, or virtual operation behavior in the virtual scene, does not meet code requirements, the training system produces feedback: a failure phenomenon corresponding to the erroneous operation is displayed on the AR helmet display, and the loudspeaker of the AR helmet plays the corresponding sound; wherein the failure phenomenon includes at least one of the following: fire, smoke, sparking, electric arc, explosion.
CN201710779761.5A 2017-09-01 2017-09-01 Transformer O&M simulation training system and method based on augmented reality Pending CN107331220A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710779761.5A CN107331220A (en) 2017-09-01 2017-09-01 Transformer O&M simulation training system and method based on augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710779761.5A CN107331220A (en) 2017-09-01 2017-09-01 Transformer O&M simulation training system and method based on augmented reality

Publications (1)

Publication Number Publication Date
CN107331220A true CN107331220A (en) 2017-11-07

Family

ID=60204311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710779761.5A Pending CN107331220A (en) 2017-09-01 2017-09-01 Transformer O&M simulation training system and method based on augmented reality

Country Status (1)

Country Link
CN (1) CN107331220A (en)

Cited By (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833503A (en) * 2017-11-10 2018-03-23 广东电网有限责任公司教育培训评价中心 Distribution core job augmented reality simulation training system
CN107888934A (en) * 2017-11-22 2018-04-06 广东电网有限责任公司教育培训评价中心 A kind of power transformation technical ability live broadcast system based on AR technologies
CN108037822A (en) * 2017-11-23 2018-05-15 国网山东省电力公司 A kind of 3D training systems based on virtual reality
CN108090572A (en) * 2017-12-01 2018-05-29 大唐国信滨海海上风力发电有限公司 A kind of marine wind electric field augmented reality system and its control method
CN108154739A (en) * 2017-12-19 2018-06-12 兰州安信铁路科技有限公司 A kind of railway station sending and receiving vehicle operation comprehensive training system
CN108171817A (en) * 2018-01-10 2018-06-15 上海市地下空间设计研究总院有限公司 Method for inspecting based on MR or AR, MR or AR equipment and cruising inspection system
CN108287483A (en) * 2018-01-17 2018-07-17 北京航空航天大学 A kind of immersion Virtual Maintenance Simulation method and system towards Product maintenance verification
CN108510866A (en) * 2018-04-04 2018-09-07 广东机电职业技术学院 A kind of integrated practice tutoring system of intelligent Manufacturing Technology
CN108711327A (en) * 2018-05-15 2018-10-26 国网河北省电力有限公司保定供电分公司 Protection simulation training platform construction method based on VR technologies
CN108831232A (en) * 2018-05-28 2018-11-16 中南民族大学 A kind of CT virtual simulated training system and method
CN108922305A (en) * 2018-05-24 2018-11-30 陈荣 A kind of guidance system and method guiding standard operation by augmented reality
CN109102731A (en) * 2018-08-09 2018-12-28 深圳供电局有限公司 VR-based transformer substation simulation platform and method
CN109147448A (en) * 2018-08-09 2019-01-04 国网浙江省电力有限公司 A kind of transmission line high-altitude walking training system and its method
CN109308448A (en) * 2018-07-29 2019-02-05 国网上海市电力公司 A method of it prevents from becoming distribution maloperation using image processing techniques
CN109379580A (en) * 2018-10-25 2019-02-22 国网福建省电力有限公司厦门供电公司 A method of with no paper grid switching operation is carried out based on AR glasses
CN109448126A (en) * 2018-09-06 2019-03-08 国营芜湖机械厂 A kind of aircraft equipment repairing auxiliary system and its application method based on mixed reality
CN109598998A (en) * 2018-11-30 2019-04-09 深圳供电局有限公司 Power grid training wearable device based on gesture recognition and interaction method thereof
CN109828658A (en) * 2018-12-17 2019-05-31 彭晓东 A kind of man-machine co-melting long-range situation intelligent perception system
CN109976519A (en) * 2019-03-14 2019-07-05 浙江工业大学 A kind of interactive display unit and its interactive display method based on augmented reality
CN110047150A (en) * 2019-04-24 2019-07-23 大唐环境产业集团股份有限公司 It is a kind of based on augmented reality complex device operation operate in bit emulator system
CN110070776A (en) * 2019-04-26 2019-07-30 中国南方电网有限责任公司超高压输电公司大理局 A kind of breaker protection virtual framework training system based on virtual reality technology
CN110084336A (en) * 2019-04-08 2019-08-02 山东瀚岳智能科技股份有限公司 A kind of prison institute's article management system and method based on wireless location
CN110211240A (en) * 2019-05-31 2019-09-06 中北大学 A kind of augmented reality method for exempting from sign-on ID
CN110599832A (en) * 2019-08-29 2019-12-20 广东电网有限责任公司培训与评价中心 Transformer substation safety training system based on virtual reality
CN110689220A (en) * 2019-08-20 2020-01-14 国网山东省电力公司莱芜供电公司 Automatic counter-point machine for realizing dispatching automation
CN110688772A (en) * 2019-10-14 2020-01-14 深圳供电局有限公司 Transformer substation exception handling simulation system based on VR local area network online system
CN110706566A (en) * 2019-10-14 2020-01-17 国网安徽省电力有限公司蚌埠供电公司 Virtual display training navigation integrated system of substation
CN110728869A (en) * 2019-09-04 2020-01-24 北京信息科技大学 Real interactive system of instructing of distribution lines live working safety
CN110992760A (en) * 2019-12-17 2020-04-10 武汉伊莱维特电力科技有限公司 Substation main equipment overhaul visual training method based on augmented reality technology
CN111028603A (en) * 2019-12-27 2020-04-17 广东电网有限责任公司培训与评价中心 Live-line work training method and system for transformer substation based on dynamic capture and virtual reality
CN111063033A (en) * 2019-11-30 2020-04-24 国网辽宁省电力有限公司葫芦岛供电公司 Electric power material goods arrival acceptance method based on augmented reality technology
CN111081108A (en) * 2019-12-26 2020-04-28 中国航空工业集团公司西安飞机设计研究所 Disassembly and assembly training method and device based on augmented reality technology
CN111127974A (en) * 2019-12-16 2020-05-08 广东电网有限责任公司 Virtual reality and augmented reality three-dimensional application service platform for transformer substation operation
CN111161422A (en) * 2019-12-13 2020-05-15 广东电网有限责任公司 Model display method for enhancing virtual scene implementation
CN111355922A (en) * 2018-12-24 2020-06-30 台达电子工业股份有限公司 Camera deployment and scheduling method, monitoring system and non-transitory computer readable medium
CN111381679A (en) * 2020-03-19 2020-07-07 三一筑工科技有限公司 AR-based assembly type building construction training method and device and computing equipment
CN111553974A (en) * 2020-04-21 2020-08-18 北京金恒博远科技股份有限公司 Data visualization remote assistance method and system based on mixed reality
CN111728697A (en) * 2020-07-21 2020-10-02 中国科学技术大学 Operation training and navigation device based on head-mounted three-dimensional augmented reality equipment
CN112037090A (en) * 2020-08-07 2020-12-04 湖南翰坤实业有限公司 Knowledge education system based on VR technology and 6DOF posture tracking
CN112102917A (en) * 2020-08-11 2020-12-18 东南大学 Method and system for visualizing amount of exercise for active rehabilitation training
EP3637330A4 (en) * 2018-06-29 2020-12-30 Hitachi Systems, Ltd. Content creation system
CN112201101A (en) * 2020-09-29 2021-01-08 北京科东电力控制系统有限责任公司 Education training system and training method based on augmented reality technology
CN112241477A (en) * 2019-07-18 2021-01-19 国网河北省电力有限公司邢台供电分公司 Multidimensional data visualization method for assisting transformer maintenance operation site
CN112262421A (en) * 2018-06-07 2021-01-22 微软技术许可有限责任公司 Programmable interface for automatic learning and reviewing
CN112416120A (en) * 2020-10-13 2021-02-26 深圳供电局有限公司 Intelligent multimedia interaction system based on wearable equipment
CN112435522A (en) * 2020-11-11 2021-03-02 郑州捷安高科股份有限公司 State adjustment method and device based on protection measurement and control system
CN112633442A (en) * 2020-12-30 2021-04-09 中国人民解放军32181部队 Ammunition identification system based on visual perception technology
CN112700424A (en) * 2021-01-07 2021-04-23 国网山东省电力公司电力科学研究院 Infrared detection quality evaluation method for live detection of power transformation equipment
CN112767766A (en) * 2021-01-22 2021-05-07 郑州捷安高科股份有限公司 Augmented reality interface training method, device, equipment and storage medium
CN113066322A (en) * 2021-03-29 2021-07-02 广东电网有限责任公司 Building fire-fighting training method and device
CN113066326A (en) * 2021-05-08 2021-07-02 沈阳工程学院 Power station equipment auxiliary overhaul training method based on augmented reality
CN113223182A (en) * 2021-04-28 2021-08-06 深圳市思麦云科技有限公司 Learning terminal applied to automobile industry based on MR (magnetic resonance) glasses technology
CN113269832A (en) * 2021-05-31 2021-08-17 长春工程学院 Electric power operation augmented reality navigation system and method for extreme weather environment
CN113271848A (en) * 2019-02-05 2021-08-17 株式会社日立制作所 Body health state image analysis device, method and system
CN113380088A (en) * 2021-04-07 2021-09-10 上海中船船舶设计技术国家工程研究中心有限公司 Interactive simulation training support system
CN113450612A (en) * 2021-05-17 2021-09-28 云南电网有限责任公司 Development method of complete teaching device applied to relay protection training
CN113470471A (en) * 2021-07-19 2021-10-01 国网电力科学研究院武汉南瑞有限责任公司 Extra-high voltage GIS equipment detection training immersive interaction system and training method thereof
CN114143220A (en) * 2021-11-09 2022-03-04 北京银盾泰安网络科技有限公司 Real-time data visualization platform
CN114527868A (en) * 2021-12-29 2022-05-24 武汉理工大学 Auxiliary maintenance method, system, storage medium and computing equipment
CN114550525A (en) * 2020-11-24 2022-05-27 郑州畅想高科股份有限公司 Locomotive component overhauls real standard system based on mix reality technique
CN114821848A (en) * 2022-03-09 2022-07-29 国网浙江省电力有限公司杭州供电公司 Metering device inspection system and method based on intelligent wearable equipment and dynamic identification
CN114937393A (en) * 2022-03-30 2022-08-23 中国石油化工股份有限公司 Petrochemical enterprise high-altitude operation simulation training system based on augmented reality
CN115546449A (en) * 2022-10-18 2022-12-30 甘肃省气象信息与技术装备保障中心 Meteorological equipment training system based on augmented reality technology
CN117519487A (en) * 2024-01-05 2024-02-06 安徽建筑大学 Development machine control teaching auxiliary training system based on vision dynamic capture

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101515198A (en) * 2009-03-11 2009-08-26 上海大学 Human-computer interaction method for grapping and throwing dummy object and system thereof
CN102592124A (en) * 2011-01-13 2012-07-18 汉王科技股份有限公司 Geometrical correction method, device and binocular stereoscopic vision system of text image
CN102142055A (en) * 2011-04-07 2011-08-03 上海大学 True three-dimensional design method based on augmented reality interactive technology
CN103544857A (en) * 2013-10-30 2014-01-29 国家电网公司 Integrated power transformation operation and maintenance training system
CN104408766A (en) * 2014-11-18 2015-03-11 国家电网公司 Method for displaying and controlling equipment dismantling process of power system simulation training
CN104464431A (en) * 2014-12-15 2015-03-25 北京科东电力控制系统有限责任公司 Electric power security training entity simulation system
CN105938249A (en) * 2016-05-12 2016-09-14 深圳增强现实技术有限公司 Integrated binocular augmented reality intelligent glasses
CN106910244A (en) * 2017-02-20 2017-06-30 广东电网有限责任公司教育培训评价中心 Power equipment internal structure situated cognition method and apparatus
CN106981243A (en) * 2017-04-18 2017-07-25 国网山东省电力公司济宁供电公司 Distribution uninterrupted operation simulation training system and method based on augmented reality

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
尚倩, 阮秋琦, 李小利: "双目立体视觉的目标识别与定位" [Target recognition and localization with binocular stereo vision], 《智能系统学报》 [CAAI Transactions on Intelligent Systems] *
葛动元, 姚锡凡, 李凯南: "双目立体视觉系统的标定" [Calibration of a binocular stereo vision system], 《机械设计与制造》 [Machinery Design & Manufacture] *

Cited By (88)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833503A (en) * 2017-11-10 2018-03-23 广东电网有限责任公司教育培训评价中心 Distribution core job augmented reality simulation training system
CN107833503B (en) * 2017-11-10 2019-10-29 广东电网有限责任公司教育培训评价中心 Distribution core job augmented reality simulation training system
CN107888934A (en) * 2017-11-22 2018-04-06 广东电网有限责任公司教育培训评价中心 A kind of power transformation technical ability live broadcast system based on AR technologies
CN108037822A (en) * 2017-11-23 2018-05-15 国网山东省电力公司 A kind of 3D training systems based on virtual reality
CN108037822B (en) * 2017-11-23 2023-05-26 国网山东省电力公司 3D training system based on virtual reality
CN108090572A (en) * 2017-12-01 2018-05-29 大唐国信滨海海上风力发电有限公司 A kind of marine wind electric field augmented reality system and its control method
CN108154739A (en) * 2017-12-19 2018-06-12 兰州安信铁路科技有限公司 A kind of railway station sending and receiving vehicle operation comprehensive training system
CN108171817A (en) * 2018-01-10 2018-06-15 上海市地下空间设计研究总院有限公司 Method for inspecting based on MR or AR, MR or AR equipment and cruising inspection system
CN108287483A (en) * 2018-01-17 2018-07-17 北京航空航天大学 A kind of immersion Virtual Maintenance Simulation method and system towards Product maintenance verification
CN108510866A (en) * 2018-04-04 2018-09-07 广东机电职业技术学院 A kind of integrated practice tutoring system of intelligent Manufacturing Technology
CN108510866B (en) * 2018-04-04 2024-03-15 广东机电职业技术学院 Comprehensive practice teaching system of intelligent manufacturing technology
CN108711327A (en) * 2018-05-15 2018-10-26 国网河北省电力有限公司保定供电分公司 Protection simulation training platform construction method based on VR technologies
CN108922305A (en) * 2018-05-24 2018-11-30 陈荣 A kind of guidance system and method guiding standard operation by augmented reality
CN108831232A (en) * 2018-05-28 2018-11-16 中南民族大学 A kind of CT virtual simulated training system and method
CN112262421A (en) * 2018-06-07 2021-01-22 微软技术许可有限责任公司 Programmable interface for automatic learning and reviewing
CN112262421B (en) * 2018-06-07 2023-01-24 微软技术许可有限责任公司 Programmable interface for automatic learning and reviewing
EP4138006A1 (en) * 2018-06-29 2023-02-22 Hitachi Systems, Ltd. Content creation system
US12051340B2 (en) 2018-06-29 2024-07-30 Hitachi Systems, Ltd. Content creation system
EP3637330A4 (en) * 2018-06-29 2020-12-30 Hitachi Systems, Ltd. Content creation system
CN109308448A (en) * 2018-07-29 2019-02-05 国网上海市电力公司 A method of it prevents from becoming distribution maloperation using image processing techniques
CN109102731A (en) * 2018-08-09 2018-12-28 深圳供电局有限公司 VR-based transformer substation simulation platform and method
CN109147448A (en) * 2018-08-09 2019-01-04 国网浙江省电力有限公司 A kind of transmission line high-altitude walking training system and its method
CN109448126B (en) * 2018-09-06 2023-01-31 国营芜湖机械厂 Mixed reality-based aviation equipment repair auxiliary system and use method thereof
CN109448126A (en) * 2018-09-06 2019-03-08 国营芜湖机械厂 A kind of aircraft equipment repairing auxiliary system and its application method based on mixed reality
CN109379580A (en) * 2018-10-25 2019-02-22 国网福建省电力有限公司厦门供电公司 A method of with no paper grid switching operation is carried out based on AR glasses
CN109598998A (en) * 2018-11-30 2019-04-09 深圳供电局有限公司 Power grid training wearable device based on gesture recognition and interaction method thereof
CN109828658A (en) * 2018-12-17 2019-05-31 彭晓东 A kind of man-machine co-melting long-range situation intelligent perception system
CN109828658B (en) * 2018-12-17 2022-03-08 彭晓东 Man-machine co-fusion remote situation intelligent sensing system
CN111355922A (en) * 2018-12-24 2020-06-30 台达电子工业股份有限公司 Camera deployment and scheduling method, monitoring system and non-transitory computer readable medium
CN111355922B (en) * 2018-12-24 2021-08-17 台达电子工业股份有限公司 Camera deployment and scheduling method, monitoring system and non-transitory computer readable medium
CN113271848B (en) * 2019-02-05 2024-01-02 株式会社日立制作所 Body health state image analysis device, method and system
CN113271848A (en) * 2019-02-05 2021-08-17 株式会社日立制作所 Body health state image analysis device, method and system
CN109976519A (en) * 2019-03-14 2019-07-05 浙江工业大学 Interactive display device based on augmented reality and interactive display method thereof
CN109976519B (en) * 2019-03-14 2022-05-03 浙江工业大学 Interactive display device based on augmented reality and interactive display method thereof
CN110084336B (en) * 2019-04-08 2023-07-18 山东瀚岳智能科技股份有限公司 Monitoring object management system and method based on wireless positioning
CN110084336A (en) * 2019-04-08 2019-08-02 山东瀚岳智能科技股份有限公司 Prison article management system and method based on wireless positioning
CN110047150B (en) * 2019-04-24 2023-06-16 大唐环境产业集团股份有限公司 Complex equipment operation on-site simulation system based on augmented reality
CN110047150A (en) * 2019-04-24 2019-07-23 大唐环境产业集团股份有限公司 Complex equipment operation on-site simulation system based on augmented reality
CN110070776A (en) * 2019-04-26 2019-07-30 中国南方电网有限责任公司超高压输电公司大理局 Breaker protection virtual framework training system based on virtual reality technology
CN110211240A (en) * 2019-05-31 2019-09-06 中北大学 Augmented reality method free of registration markers
CN112241477A (en) * 2019-07-18 2021-01-19 国网河北省电力有限公司邢台供电分公司 Multidimensional data visualization method for assisting on-site transformer maintenance operations
CN110689220A (en) * 2019-08-20 2020-01-14 国网山东省电力公司莱芜供电公司 Automatic counter-point machine for realizing dispatching automation
CN110599832A (en) * 2019-08-29 2019-12-20 广东电网有限责任公司培训与评价中心 Transformer substation safety training system based on virtual reality
CN110728869A (en) * 2019-09-04 2020-01-24 北京信息科技大学 Interactive practical training system for distribution line live-working safety
CN110706566A (en) * 2019-10-14 2020-01-17 国网安徽省电力有限公司蚌埠供电公司 Integrated virtual display, training and navigation system for substations
CN110688772A (en) * 2019-10-14 2020-01-14 深圳供电局有限公司 Transformer substation exception handling simulation system based on VR local area network online system
CN111063033A (en) * 2019-11-30 2020-04-24 国网辽宁省电力有限公司葫芦岛供电公司 Electric power material delivery acceptance method based on augmented reality technology
CN111161422A (en) * 2019-12-13 2020-05-15 广东电网有限责任公司 Model display method for enhancing virtual scene implementation
CN111127974A (en) * 2019-12-16 2020-05-08 广东电网有限责任公司 Virtual reality and augmented reality three-dimensional application service platform for transformer substation operation
CN110992760A (en) * 2019-12-17 2020-04-10 武汉伊莱维特电力科技有限公司 Substation main equipment overhaul visual training method based on augmented reality technology
CN111081108A (en) * 2019-12-26 2020-04-28 中国航空工业集团公司西安飞机设计研究所 Disassembly and assembly training method and device based on augmented reality technology
CN111028603A (en) * 2019-12-27 2020-04-17 广东电网有限责任公司培训与评价中心 Live-line work training method and system for transformer substation based on motion capture and virtual reality
CN111381679A (en) * 2020-03-19 2020-07-07 三一筑工科技有限公司 AR-based assembly type building construction training method and device and computing equipment
CN111553974A (en) * 2020-04-21 2020-08-18 北京金恒博远科技股份有限公司 Data visualization remote assistance method and system based on mixed reality
CN111728697A (en) * 2020-07-21 2020-10-02 中国科学技术大学 Operation training and navigation device based on head-mounted three-dimensional augmented reality equipment
CN112037090B (en) * 2020-08-07 2024-05-03 湖南翰坤实业有限公司 Knowledge education system based on VR technology and 6DOF pose tracking
CN112037090A (en) * 2020-08-07 2020-12-04 湖南翰坤实业有限公司 Knowledge education system based on VR technology and 6DOF pose tracking
CN112102917A (en) * 2020-08-11 2020-12-18 东南大学 Method and system for visualizing amount of exercise for active rehabilitation training
CN112102917B (en) * 2020-08-11 2022-11-04 东南大学 Exercise amount visualization method and system for active rehabilitation training
CN112201101A (en) * 2020-09-29 2021-01-08 北京科东电力控制系统有限责任公司 Education training system and training method based on augmented reality technology
CN112416120B (en) * 2020-10-13 2023-08-25 深圳供电局有限公司 Intelligent multimedia interaction system based on wearable equipment
CN112416120A (en) * 2020-10-13 2021-02-26 深圳供电局有限公司 Intelligent multimedia interaction system based on wearable equipment
CN112435522A (en) * 2020-11-11 2021-03-02 郑州捷安高科股份有限公司 State adjustment method and device based on protection measurement and control system
CN114550525A (en) * 2020-11-24 2022-05-27 郑州畅想高科股份有限公司 Locomotive component overhauling practical training system based on mixed reality technology
CN114550525B (en) * 2020-11-24 2023-09-29 郑州畅想高科股份有限公司 Locomotive component overhauling practical training system based on mixed reality technology
CN112633442A (en) * 2020-12-30 2021-04-09 中国人民解放军32181部队 Ammunition identification system based on visual perception technology
CN112633442B (en) * 2020-12-30 2024-05-14 中国人民解放军32181部队 Ammunition identification system based on visual perception technology
CN112700424A (en) * 2021-01-07 2021-04-23 国网山东省电力公司电力科学研究院 Infrared detection quality evaluation method for live detection of power transformation equipment
CN112700424B (en) * 2021-01-07 2022-11-11 国网山东省电力公司电力科学研究院 Infrared detection quality evaluation method for live detection of power transformation equipment
CN112767766A (en) * 2021-01-22 2021-05-07 郑州捷安高科股份有限公司 Augmented reality interface training method, device, equipment and storage medium
CN113066322A (en) * 2021-03-29 2021-07-02 广东电网有限责任公司 Building fire-fighting training method and device
CN113380088A (en) * 2021-04-07 2021-09-10 上海中船船舶设计技术国家工程研究中心有限公司 Interactive simulation training support system
CN113223182A (en) * 2021-04-28 2021-08-06 深圳市思麦云科技有限公司 Learning terminal for the automotive industry based on MR (mixed reality) glasses technology
CN113223182B (en) * 2021-04-28 2024-05-14 深圳市思麦云科技有限公司 Learning terminal for the automotive industry based on MR (mixed reality) glasses technology
CN113066326A (en) * 2021-05-08 2021-07-02 沈阳工程学院 Power station equipment auxiliary overhaul training method based on augmented reality
CN113450612A (en) * 2021-05-17 2021-09-28 云南电网有限责任公司 Development method of complete teaching device applied to relay protection training
CN113269832A (en) * 2021-05-31 2021-08-17 长春工程学院 Electric power operation augmented reality navigation system and method for extreme weather environment
CN113269832B (en) * 2021-05-31 2022-03-29 长春工程学院 Electric power operation augmented reality navigation system and method for extreme weather environment
CN113470471A (en) * 2021-07-19 2021-10-01 国网电力科学研究院武汉南瑞有限责任公司 Extra-high voltage GIS equipment detection training immersive interaction system and training method thereof
CN114143220B (en) * 2021-11-09 2023-10-31 北京银盾泰安网络科技有限公司 Real-time data visualization platform
CN114143220A (en) * 2021-11-09 2022-03-04 北京银盾泰安网络科技有限公司 Real-time data visualization platform
CN114527868A (en) * 2021-12-29 2022-05-24 武汉理工大学 Auxiliary maintenance method, system, storage medium and computing equipment
CN114821848A (en) * 2022-03-09 2022-07-29 国网浙江省电力有限公司杭州供电公司 Metering device inspection system and method based on intelligent wearable equipment and dynamic identification
CN114937393B (en) * 2022-03-30 2023-10-13 中国石油化工股份有限公司 Petrochemical enterprise high-altitude operation simulation training system based on augmented reality
CN114937393A (en) * 2022-03-30 2022-08-23 中国石油化工股份有限公司 Petrochemical enterprise high-altitude operation simulation training system based on augmented reality
CN115546449A (en) * 2022-10-18 2022-12-30 甘肃省气象信息与技术装备保障中心 Meteorological equipment training system based on augmented reality technology
CN117519487A (en) * 2024-01-05 2024-02-06 安徽建筑大学 Development machine control teaching auxiliary training system based on visual motion capture
CN117519487B (en) * 2024-01-05 2024-03-22 安徽建筑大学 Development machine control teaching auxiliary training system based on visual motion capture

Similar Documents

Publication Publication Date Title
CN107331220A (en) Transformer O&M simulation training system and method based on augmented reality
CN109671142B (en) Intelligent cosmetic method and intelligent cosmetic mirror
RU2617972C1 (en) Simulator for operational and maintenance staff on the basis of virtual reality models of transformer substation
CN102968915B (en) Chemical device training management device, device knowledge base and training management system
CN111899587B (en) Semiconductor micro-nano processing technology training system based on VR and AR and application thereof
CN110599842A (en) Virtual reality technology-based distribution network uninterrupted operation training system
CN113011723B (en) Remote equipment maintenance system based on augmented reality
CN110322564B (en) Three-dimensional model construction method suitable for VR/AR transformer substation operation environment
CN108122447A (en) Subway coupler fault simulation system based on VR technology
CN110827602A (en) Cable joint manufacturing and operation and maintenance skill training device and method based on VR + AR technology
CN107705636A (en) Ship experiment teaching system based on augmented reality
CN109446618A (en) VR-based ancient building component construction simulation method
CN113299139A (en) Nuclear power station main pump maintenance virtual simulation platform and construction method thereof
CN107833503A (en) Distribution core job augmented reality simulation training system
CN116109455B (en) Language teaching auxiliary system based on artificial intelligence
CN110738881A (en) Power transformation operation simulation training system and method
CN115311437A (en) Equipment simulation operating system based on mixed reality
CN109670249A (en) Mechanical design adjustment method based on maintenance visual accessibility
CN107730134A (en) Interactive Auto-Evaluation System based on VR technologies
Mao et al. An off-axis flight vision display system design using machine learning
CN110992758A (en) Virtual reality technology-based distribution network uninterrupted operation training system
CN114360329A (en) Interactive multifunctional studio for art education
CN113823129A (en) Method and device for guiding disassembly and assembly of turning wheel equipment based on mixed reality
CN113283384A (en) Taiji interaction system based on limb recognition technology
KR20220030760A (en) Customized Pilot Training System and Method with Collaborative Deep Learning in Virtual Reality and Augmented Reality Environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171107