CN110045832A - Immersive safety education experience system and method based on AR interaction - Google Patents

Immersive safety education experience system and method based on AR interaction

Info

Publication number
CN110045832A
CN110045832A (application CN201910329826.5A)
Authority
CN
China
Prior art keywords
interaction
graphics workstation
interactive media
instructor
safety education
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910329826.5A
Other languages
Chinese (zh)
Other versions
CN110045832B (en)
Inventor
张明乐
庄淑芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sanshuyun (xiamen) Technology Co Ltd
Original Assignee
Sanshuyun (xiamen) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sanshuyun (xiamen) Technology Co Ltd filed Critical Sanshuyun (xiamen) Technology Co Ltd
Priority to CN201910329826.5A priority Critical patent/CN110045832B/en
Publication of CN110045832A publication Critical patent/CN110045832A/en
Application granted granted Critical
Publication of CN110045832B publication Critical patent/CN110045832B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/02 Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an immersive safety education experience system and method based on AR interaction. In the method, a graphics workstation outputs a virtual safety scene to a 3D projector, which displays it on given faces of a training space; the graphics workstation controls a single-chip microcontroller to present a preset interactive media pattern at a first preset position; when an Android terminal scans the interactive media pattern, it connects to a cloud server, obtains the three-dimensional model corresponding to the pattern, and AR-renders a stereoscopic image of the interactive medium at a second preset position in the training space; several motion-capture cameras capture the instructor's movements and output them to the graphics workstation; and the graphics workstation judges whether the 6DoF pose information of the instructor intersects the coordinate system of preset interaction points of the AR stereoscopic image. By presenting the immersive scene and the human-computer interaction points separately, the invention keeps the virtual and the real intelligently synchronized, makes interaction more natural, and reduces the cost of practical safety education teaching.

Description

Immersive safety education experience system and method based on AR interaction
Technical field
The present invention relates to the AR technical field, and in particular to an immersive safety education experience system and method based on AR interaction.
Background technique
At present, immersion-based safety education training rooms rely mainly on head-mounted VR. Although VR can deliver safety education through a flow experience, it cannot satisfy practical safety education teaching: the existing single-player experience cannot teach in real time, the human-computer interaction is too monotonous, and the experiencer and the instructor cannot interact in real time. All of these are points in need of improvement.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by proposing an immersive safety education experience system and method based on AR interaction which, by presenting the immersive scene and the human-computer interaction points separately, keeps the virtual and the real intelligently synchronized, makes interaction more natural, and reduces the cost of practical safety education teaching.
The present invention adopts the following technical scheme:
In one aspect, the immersive safety education experience system based on AR interaction of the present invention comprises: a 3D projector, a graphics workstation, a single-chip microcontroller, an interactive media pattern, an Android terminal, and several cameras;
the graphics workstation is connected to the 3D projector and outputs a virtual safety scene to the 3D projector, which displays the virtual safety scene on given faces of a training space;
the graphics workstation is connected to the single-chip microcontroller and controls it to present a preset interactive media pattern at a first preset position; the interactive medium is arranged on the bottom surface of the training space;
an Android terminal arranged above the interactive media pattern scans the interactive media pattern, connects to a cloud server to obtain the three-dimensional model corresponding to the pattern, and AR-renders a stereoscopic image of the interactive medium at a second preset position in the training space;
several motion-capture cameras arranged above the training space capture the instructor's movements and output them to the graphics workstation; the graphics workstation judges whether the 6DoF pose information of the three-dimensional space curve traced by preset joint positions of the instructor's body during human-computer interaction intersects the coordinate system of preset interaction points of the AR stereoscopic image, and if so, triggers the corresponding action.
Preferably, there are three 3D projectors, projecting onto the left, right, and front faces of the training space respectively; a matrix splicer is further arranged between the graphics workstation and the projectors to eliminate gaps or overlapping regions between different projectors.
Preferably, the graphics workstation controlling the single-chip microcontroller to present the preset interactive media pattern at the first preset position specifically comprises:
the graphics workstation sends a message over the serial port instructing the single-chip microcontroller to present the preset interactive media pattern;
the single-chip microcontroller drives the corresponding motor through a relay and moves the preset interactive media pattern to the first preset position.
Preferably, the several motion-capture cameras capturing the instructor's movements and outputting them to the graphics workstation, and the graphics workstation judging whether the 6DoF pose information of the three-dimensional space curve traced by preset joint positions of the instructor's body during human-computer interaction intersects the coordinate system of preset interaction points of the AR stereoscopic image and, if so, triggering the corresponding action, specifically comprises:
the several motion-capture cameras shoot from several angles; a data collector and pose sensors sample the captured picture-sequence frames in turn, and the collected data are transmitted to the graphics workstation for processing; the graphics workstation extracts temporal and spatial feature sequences from the image-sequence frames and then uses a 3D convolutional neural network to compute the trajectory curves traced by the preset joint positions of the instructor's body during human-computer interaction; when a trajectory curve intersects the coordinate system of a preset interaction point, the corresponding action is triggered.
Preferably, the system further comprises tempered glass arranged above the screen of the Android terminal.
In another aspect, the immersive safety education practical-training method based on AR interaction of the present invention comprises:
a graphics workstation outputs a virtual safety scene to a 3D projector for immersive projection, and the 3D projector displays the virtual safety scene on given faces of the training space;
the graphics workstation controls a single-chip microcontroller to present a preset interactive media pattern at a first preset position;
an Android terminal scans the interactive media pattern, connects to a cloud server to obtain the three-dimensional model corresponding to the pattern, and AR-renders a stereoscopic image of the interactive medium at a second preset position in the training space;
several motion-capture cameras capture the instructor's movements and output them to the graphics workstation; the graphics workstation judges whether the 6DoF pose information of the three-dimensional space curve traced by preset joint positions of the instructor's body during human-computer interaction intersects the coordinate system of preset interaction points of the AR stereoscopic image, and if so, triggers the corresponding action.
Preferably, there are three 3D projectors, projecting onto the left, right, and front faces of the training space respectively.
Preferably, a matrix splicer is further arranged between the graphics workstation and the projectors to eliminate gaps or overlapping regions between different projectors.
Preferably, the graphics workstation controlling the single-chip microcontroller to present the preset interactive media pattern at the first preset position specifically comprises:
the graphics workstation sends a message over the serial port instructing the single-chip microcontroller to present the preset interactive media pattern;
the single-chip microcontroller drives the corresponding motor through a relay and moves the preset interactive media pattern to the first preset position.
Preferably, the several motion-capture cameras capturing the instructor's movements and outputting them to the graphics workstation, and the graphics workstation judging whether the 6DoF pose information of the three-dimensional space curve traced by preset joint positions of the instructor's body during human-computer interaction intersects the coordinate system of preset interaction points of the AR stereoscopic image and, if so, triggering the corresponding action, specifically comprises:
the several motion-capture cameras shoot from several angles; a data collector and pose sensors sample the captured picture-sequence frames in turn, and the collected data are transmitted to the graphics workstation for processing; the graphics workstation extracts temporal and spatial feature sequences from the image-sequence frames and then uses a 3D convolutional neural network to compute the trajectory curves traced by the preset joint positions of the instructor's body during human-computer interaction; when a trajectory curve intersects the coordinate system of a preset interaction point, the corresponding action is triggered.
Compared with the prior art, the beneficial effects of the present invention are as follows:
(1) The immersive safety education experience system and method based on AR interaction of the present invention can realize synchronized switching and imaging of AR stereoscopic media. Specifically, a control model in which a 51-series single-chip microcontroller is connected to the graphics workstation keeps the virtual scene synchronized with the human-computer interaction point even though the two are presented separately, and the Android terminal's access to the cloud server determines when the AR image appears.
(2) The immersive safety education experience system and method based on AR interaction of the present invention uses 3D convolutional BP neural-network learning to improve the precision of the three-dimensional coordinates used for human-computer interaction. Specifically, the twelve motion-capture cameras feed a model built with a 3D convolutional BP neural-network learning algorithm, improving the accuracy of the captured three-dimensional coordinate positions so that the state of coincidence with an AR image coordinate point, i.e. a movement, can be judged precisely.
(3) The immersive safety education experience system and method based on AR interaction of the present invention presents the immersive scene separately through three-face immersive 3D projection. Specifically, the separately presented immersive virtual scene is realized through three-face immersive 3D projection with fusion, giving an on-the-spot feeling and allowing several people to carry out practical safety education training in the training space at the same time.
(4) In the immersive safety education experience system and method based on AR interaction of the present invention, the safety education training materials, such as fire extinguishers and fire hydrants, are AR-imaged stereoscopic images, so the cost of practical safety education teaching can be reduced.
The above is only an overview of the technical solution of the present invention. In order to make the technical means of the invention clearer, so that it can be implemented according to the contents of the specification, and in order to make the above and other objects, features, and advantages of the invention more comprehensible, a specific embodiment of the invention is given below.
Those skilled in the art will better understand the above and other objects, advantages, and features of the invention from the following detailed description of specific embodiments taken in conjunction with the accompanying drawings.
Detailed description of the invention
Fig. 1 is the architecture diagram of the immersive safety education experience system based on AR interaction of the present invention;
Fig. 2 is the structural block diagram of the immersive safety education experience system based on AR interaction of the present invention;
Fig. 3 shows the preset interaction points and motion curves of the AR-imaged fire extinguisher in an embodiment of the present invention;
Fig. 4 is the flow chart of the immersive safety education practical-training method based on AR interaction of the present invention.
Specific embodiment
To make the objects, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below in conjunction with the accompanying drawings.
Referring to Fig. 1 and Fig. 2, the immersive safety education experience system based on AR interaction of the present invention comprises: a 3D projector, a graphics workstation, a single-chip microcontroller, an interactive media pattern, an Android terminal, and several cameras;
the graphics workstation is connected to the 3D projector and outputs a virtual safety scene to the 3D projector, which displays the virtual safety scene on given faces of the training space;
the graphics workstation is connected to the single-chip microcontroller and controls it to present a preset interactive media pattern at a first preset position; the interactive medium is arranged on the bottom surface of the training space;
an Android terminal arranged above the interactive media pattern scans the interactive media pattern, connects to a cloud server to obtain the three-dimensional model corresponding to the pattern, and AR-renders a stereoscopic image of the interactive medium at a second preset position in the training space;
several motion-capture cameras arranged above the training space capture the instructor's movements and output them to the graphics workstation; the graphics workstation judges whether the 6DoF pose information of the three-dimensional space curve traced by preset joint positions of the instructor's body during human-computer interaction intersects the coordinate system of preset interaction points of the AR stereoscopic image, and if so, triggers the corresponding action.
There are three 3D projectors: projector 1, projector 2, and projector 3, projecting onto the left, right, and front faces of the training space respectively. A matrix splicer is further arranged between the graphics workstation and the projectors to eliminate gaps or overlapping regions between different projectors.
Specifically, the graphics workstation, the matrix splicer, and projectors 1, 2, and 3 form the immersive 3D projection module; the graphics workstation, the 51-series single-chip microcontroller, the interactive medium, and the Android large screen form the AR stereoscopic imaging module; and the twelve motion-capture cameras together with the graphics workstation form the interactive motion-capture module. The twelve motion-capture cameras are arranged along the tops of the walls of the training space.
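The seam-hiding role of the matrix splicer can be pictured as edge blending between adjacent projectors. The following is a minimal sketch under stated assumptions: the patent does not specify the blending method, so the linear ramp, the face width, and the overlap width here are illustrative, not taken from the text.

```python
# Hypothetical sketch of the edge blending a matrix splicer performs to hide
# the overlap between two adjacent projectors; the linear ramp and the
# overlap width are illustrative assumptions, not from the patent.

def blend_weight(x: float, face_width: float, overlap: float) -> float:
    """Weight applied to the left projector's pixel at horizontal position x;
    the right projector receives 1 - weight inside the overlap, so the summed
    brightness across the seam stays constant."""
    blend_start = face_width - overlap
    if x <= blend_start:
        return 1.0          # left projector alone
    if x >= face_width:
        return 0.0          # right projector alone
    # linear ramp across the shared strip
    return (face_width - x) / overlap

# brightness across the seam always sums to 1
for x in (940.0, 960.0, 980.0):
    left = blend_weight(x, face_width=1000.0, overlap=100.0)
    print(round(left + (1.0 - left), 3))
```

With weights that sum to one everywhere in the strip, neither a dark gap nor a doubly bright band appears where the two projections meet.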
The immersive safety education experience system based on AR interaction of the present invention, by presenting the immersive scene and the human-computer interaction points separately, keeps the virtual and the real intelligently synchronized, makes interaction more natural, and reduces the cost of practical safety education teaching.
In Fig. 1, S1 is the bottom of the training space, which in the present invention is hollow so that the AR stereoscopic imaging medium can be placed there;
S2 is the Android large screen (Android terminal) of the training space; the Android terminal scans the AR stereoscopic imaging medium and accesses the cloud server to realize AR imaging;
S3 is the tempered glass laid on the Android large screen of the training space, which allows the safety education instructor to stand on the glass while demonstrating;
S4, S5, and S6 are the left, front, and right faces of the immersive 3D projection respectively; together, the left, front, and right faces construct an immersive virtual space that places the experiencers (the instructor and the students wearing AR devices) on the scene within the practical-teaching system;
S7 is the set of twelve motion-capture cameras, which capture the movements of the experiencer's whole body through motion-capture technology; the movements are used to judge the interaction process and realize interaction with the AR imaging.
It should be noted that the virtual safety scene of the graphics workstation of the present invention can be hosted on the cloud server; the graphics workstation then performs only control and computation, fetching the virtual safety scene from the cloud server whenever it needs to output it. Separating the virtual safety scene from control and computation reduces the load on the graphics workstation on the one hand, and speeds up its computation on the other.
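The fetch-on-demand division of labour between the workstation and the cloud server can be sketched as a small cache in front of a remote call. This is an illustrative sketch only; `SceneCache`, `fetch_scene`, and the scene name are hypothetical names, not API from the patent.

```python
# Minimal sketch of fetch-on-demand scene hosting: the workstation keeps only
# control logic locally and pulls scene data from the cloud server the first
# time a scene must be displayed. All names here are illustrative.

class SceneCache:
    def __init__(self, fetch_scene):
        self._fetch = fetch_scene      # callable standing in for the cloud API
        self._cache = {}
        self.remote_calls = 0

    def get(self, name: str):
        if name not in self._cache:    # only hit the cloud on a miss
            self.remote_calls += 1
            self._cache[name] = self._fetch(name)
        return self._cache[name]

cloud = {"campus_fire": "<campus fire scene data>"}
ws = SceneCache(cloud.__getitem__)
ws.get("campus_fire")
ws.get("campus_fire")                  # served locally the second time
print(ws.remote_calls)                 # → 1
```

Caching the fetched scene keeps repeated displays fast while still leaving storage and scene assembly on the server side.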
The graphics workstation controlling the single-chip microcontroller to present the preset interactive media pattern at the first preset position specifically comprises:
the graphics workstation sends a message over the serial port instructing the single-chip microcontroller to present the preset interactive media pattern;
the single-chip microcontroller drives the corresponding motor through a relay and moves the preset interactive media pattern to the first preset position.
Specifically, after receiving the control signal from the graphics workstation, the 51-series single-chip microcontroller outputs corresponding control information to the relays to switch different relays on and off, thereby controlling the rotation of the motors connected to them. The interactive media patterns (for a fire-fighting scene the interactive medium may be a fire extinguisher, a fire hydrant, etc.) can be fixed on small boards driven by the different motors.
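The workstation-to-relay control chain can be sketched end to end. The frame layout (header byte, relay index, checksum) and the pattern-to-relay table below are assumptions for illustration; the patent specifies only that a serial message selects a relay, which drives a motor carrying the pattern board.

```python
# Hedged sketch of the control chain: the workstation sends a short frame over
# the serial port; the 51-series MCU switches the matching relay on, which
# drives the motor carrying that pattern board. Frame layout and the
# pattern-to-relay table are illustrative assumptions.

PATTERN_RELAY = {"fire_extinguisher": 0, "fire_hydrant": 1}

def build_frame(pattern: str) -> bytes:
    """Workstation side: [0xAA header, relay index, checksum]."""
    idx = PATTERN_RELAY[pattern]
    return bytes([0xAA, idx, (0xAA + idx) & 0xFF])

def handle_frame(frame: bytes, relays: list) -> None:
    """MCU side: validate the checksum, then energise exactly one relay."""
    header, idx, checksum = frame
    assert header == 0xAA and checksum == (header + idx) & 0xFF
    for i in range(len(relays)):
        relays[i] = (i == idx)         # one board raised, the rest lowered

relays = [False, False]
handle_frame(build_frame("fire_hydrant"), relays)
print(relays)                          # → [False, True]
```

Energising exactly one relay per frame matches the idea that only the pattern needed by the current scene is presented at the first preset position.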
The several motion-capture cameras capture the instructor's movements and output them to the graphics workstation; the graphics workstation judges whether the 6DoF pose information of the three-dimensional space curve traced by preset joint positions of the instructor's body during human-computer interaction intersects the coordinate system of preset interaction points of the AR stereoscopic image and, if so, triggers the corresponding action, which specifically comprises:
the several motion-capture cameras shoot from several angles; a data collector and pose sensors sample the captured picture-sequence frames in turn, and the collected data are transmitted to the graphics workstation for processing; the graphics workstation extracts temporal and spatial feature sequences from the image-sequence frames and then uses a 3D convolutional neural network to compute the trajectory curves traced by the preset joint positions of the instructor's body during human-computer interaction; when a trajectory curve intersects the coordinate system of a preset interaction point, the corresponding action is triggered.
The following takes a fire-fighting virtual scene as an example.
In this embodiment, the three-dimensional-scene immersive 3D projection module displays a serialized fire-fighting virtual scene (such as a campus fire scene) on the left, right, and front faces around the experiencer, using the training space in combination with video fusion technology (the matrix splicer). When a fire breaks out in the virtual scene, the graphics workstation communicates with the 51-series single-chip microcontroller over the serial connection, and the microcontroller-controlled interactive medium presents the fire-extinguisher pattern. The Android large screen scans the fire-extinguisher pattern, accesses the cloud server to retrieve the cloud 3D resource library, and AR stereoscopic imaging presents the image of a fire extinguisher in front of the instructor. The instructor then operates the AR-imaged fire extinguisher with both hands, and the graphics workstation, applying BP neural-network learning to the data captured by the twelve motion-capture cameras, obtains the three-dimensional 6DoF information of the instructor's hands and checks whether it coincides with the coordinate system of the interaction points of the AR-imaged fire extinguisher. When the two sets of coordinate parameters coincide, it is judged that the human-computer interaction at that operating point is taking place, and virtual fire-extinguisher interactive training is carried out on this basis.
Specifically, referring to Fig. 3, the safety education training space is 500, 250, and 500 (unit: cm) in length, width, and height, with the origin of the three-dimensional coordinate system at (0, 0, 0). Point A (250, 125, 48) is set at the middle of the handle of the AR-imaged fire extinguisher, and point B (250, 125, 44) at the position of its safety pin. The three-dimensional data of the human-computer interaction points (such as A and B) are stored in advance on the cloud server. When the 6DoF pose information of the three-dimensional space curve of the instructor's right hand captured by the stereo cameras overlaps point B set on the cloud server, the instructor has begun touching the safety pin to pull it out; when the captured 6DoF pose information of the right hand (safety pin), i.e. X/Y/Z and the rotation angle of each dimension, varies regularly (the motion curve of point B is Q2), the safety pin of the fire extinguisher has been pulled out. When the 6DoF pose information of the three-dimensional space curve of the left hand overlaps point A, the instructor has begun touching the handle; when the captured 6DoF pose information of the left hand (handle), i.e. X/Y/Z and the rotation angle of each dimension, varies regularly (the motion curve of point A is Q1), the instructor has begun squeezing the handle of the fire extinguisher to perform the extinguishing interaction.
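The two-stage trigger around an interaction point (first coincide with the stored point, then show a regular change) can be sketched as follows. The 2 cm touch tolerance and the monotonic-withdrawal test for "regular variation" are assumptions for illustration; the patent gives only the point coordinates.

```python
# Illustrative sketch of the trigger logic around interaction point B: a hand
# pose first has to coincide with the stored point, and the following frames
# then have to change regularly before the "pin pulled" event fires. The 2 cm
# tolerance and the monotonicity test are assumptions, not from the patent.

import math

B = (250.0, 125.0, 44.0)               # safety-pin interaction point (cm)

def touching(hand, point, tol=2.0):
    return math.dist(hand, point) <= tol

def pin_pulled(track):
    """True if the hand touched B and then moved away monotonically."""
    if not touching(track[0], B):
        return False
    dists = [math.dist(p, B) for p in track]
    return all(b > a for a, b in zip(dists, dists[1:]))

# right hand grasps the pin at B, then withdraws steadily (curve Q2)
q2 = [(250.0, 125.0, 44.0), (251.0, 125.0, 44.0), (253.0, 125.0, 44.0)]
print(pin_pulled(q2))                  # → True
```

Requiring both the touch and the regular subsequent motion avoids firing the interaction when a hand merely passes through the point.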
In this embodiment, twelve stereo infrared cameras are used for 3D convolutional neural-network learning to precisely output the 6DoF pose information of the three-dimensional space curves traced by the preset joint positions of the instructor's body (here, the left and right hands) during human-computer interaction, so as to judge whether they intersect the AR stereoscopic imaging interaction points preset on the cloud server.
The twelve stereo infrared cameras shoot from twelve angles; a data collector and pose sensors sample the captured picture-sequence frames in turn, and the collected data are transmitted to the graphics workstation for analysis and processing. The workstation then extracts temporal and spatial feature sequences from the image-sequence frames and uses a 3D convolutional neural network to compute the trajectory curves of the experiencer's left and right hands. When a trajectory curve intersects the corresponding interaction point (A or B), a trigger has occurred, and one interaction can be judged to have taken place.
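The capture pipeline above can be sketched with the 3D CNN replaced by a stub. Everything here is an illustrative stand-in: the round-robin sampling, `stub_cnn`, and the per-axis tolerance are assumptions, and a real implementation would run an actual trained network on image frames.

```python
# Sketch of the capture pipeline with the 3D CNN replaced by a stub: frames
# from several cameras are sampled in turn, a (stubbed) model turns each frame
# into a joint position, and an interaction fires when the resulting
# trajectory passes through a preset interaction point.

from itertools import cycle, islice

def round_robin_sample(camera_frames, n):
    """Sample n frames, taking one from each camera in turn."""
    return list(islice(cycle(camera_frames), n))

def stub_cnn(frame):
    # stand-in for the 3D CNN: this "frame" already carries the hand position
    return frame["hand"]

def interaction_triggered(trajectory, point, tol=1.0):
    return any(max(abs(a - b) for a, b in zip(p, point)) <= tol
               for p in trajectory)

cams = [[{"hand": (250, 125, 48)}], [{"hand": (250, 125, 60)}]]
frames = round_robin_sample([c[0] for c in cams], 2)
trajectory = [stub_cnn(f) for f in frames]
print(interaction_triggered(trajectory, (250, 125, 48)))   # → True
```

The shape of the pipeline (sample in turn, map frames to joint positions, intersect the trajectory with stored points) is what carries over to the real system; the stub only fills the model's slot.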
It should be noted that, in addition to the fire-extinguishing example above, the safety education training of the present invention also includes serialized scenes such as fire-fighting rescue, earthquake safety, and traffic safety, together with the three-dimensional models of the interactive media involving human-computer interaction points in the corresponding scenes, forming a 3D safety resource model library of fire extinguishers, fire hydrants, electric welders, gas switches, and so on. These 3D models, together with their human-computer interaction points, are all stored in advance in the cloud server's database; when practical teaching of a specific safety topic is carried out, the related 3D model data can be retrieved on demand.
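The cloud-side resource library described above can be pictured as models stored per scene, each carrying its interaction points, retrieved together on demand. The structure and entries below are illustrative assumptions; the coordinates reuse the A and B points from the fire-extinguisher example.

```python
# Small sketch of the cloud-side 3D resource library: models and their
# human-computer interaction points are stored per scene in advance and
# retrieved together on demand. Structure and entries are illustrative.

MODEL_LIBRARY = {
    "fire_fighting": {
        "fire_extinguisher": {"interaction_points": {"A": (250, 125, 48),
                                                     "B": (250, 125, 44)}},
        "fire_hydrant":      {"interaction_points": {}},
    },
    "earthquake_safety": {},
}

def fetch_models(scene: str) -> dict:
    """Return every model (with its interaction points) for one scene."""
    return MODEL_LIBRARY.get(scene, {})

models = fetch_models("fire_fighting")
print(sorted(models))                  # → ['fire_extinguisher', 'fire_hydrant']
```

Keeping the interaction points alongside each model means a single fetch gives the workstation everything it needs to both render the medium and judge interactions with it.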
Referring to Fig. 4, in another aspect, the immersive safety education practical-training method based on AR interaction of the present invention comprises:
S101, a graphics workstation outputs a virtual safety scene to a 3D projector for immersive projection, and the 3D projector displays the virtual safety scene on given faces of the training space;
S102, the graphics workstation controls a single-chip microcontroller to present a preset interactive media pattern at a first preset position;
S103, an Android terminal scans the interactive media pattern, connects to a cloud server to obtain the three-dimensional model corresponding to the interactive media pattern, and AR-renders a stereoscopic image of the interactive medium at a second preset position in the training space;
S104, several motion-capture cameras capture the instructor's movements and output them to the graphics workstation; the graphics workstation judges whether the 6DoF pose information of the three-dimensional space curve traced by preset joint positions of the instructor's body during human-computer interaction intersects the coordinate system of preset interaction points of the AR stereoscopic image, and if so, triggers the corresponding action.
There are three 3D projectors, projecting onto the left, right, and front faces of the training space respectively.
A matrix splicer is further arranged between the graphics workstation and the projectors to eliminate gaps or overlapping regions between different projectors.
The graphics workstation controlling the single-chip microcontroller to present the preset interactive media pattern at the first preset position specifically comprises:
the graphics workstation sends a message over the serial port instructing the single-chip microcontroller to present the preset interactive media pattern;
the single-chip microcontroller drives the corresponding motor through a relay and moves the preset interactive media pattern to the first preset position.
The several motion-capture cameras capture the instructor's movements and output them to the graphics workstation; the graphics workstation judges whether the 6DoF pose information of the three-dimensional space curve traced by preset joint positions of the instructor's body during human-computer interaction intersects the coordinate system of preset interaction points of the AR stereoscopic image and, if so, triggers the corresponding action, which specifically comprises:
the several motion-capture cameras shoot from several angles; a data collector and pose sensors sample the captured picture-sequence frames in turn, and the collected data are transmitted to the graphics workstation for processing; the graphics workstation extracts temporal and spatial feature sequences from the image-sequence frames and then uses a 3D convolutional neural network to compute the trajectory curves traced by the preset joint positions of the instructor's body during human-computer interaction; when a trajectory curve intersects the coordinate system of a preset interaction point, the corresponding action is triggered.
The above is only a specific embodiment of the present invention, but the design concept of the present invention is not limited thereto; any insubstantial modification made to the present invention using this concept shall be regarded as an act infringing the protection scope of the present invention.

Claims (10)

1. An immersive AR-interaction-based safety education experience system, characterized by comprising: a 3D projector, a graphics workstation, a single-chip microcontroller, an interactive medium pattern, an Android terminal, and several cameras;
the graphics workstation is connected with the 3D projector to output a virtual safety scene to the 3D projector, and the 3D projector displays the virtual safety scene on given faces of a training space;
the graphics workstation is connected with the single-chip microcontroller to control the single-chip microcontroller to output a preset interactive medium pattern at a first preset position; the interactive medium pattern is arranged on the bottom surface of the training space;
the Android terminal, arranged above the interactive medium pattern, scans the interactive medium pattern, connects to a cloud server to obtain the three-dimensional model corresponding to the interactive medium pattern, and renders a stereoscopic AR image of the interactive medium at a second preset position in the training space;
several motion capture cameras arranged above the training space capture the movements of the instructor and output them to the graphics workstation; the graphics workstation judges whether the 6DoF pose information of the three-dimensional space curve traced by preset body joint positions of the instructor during human-computer interaction intersects the coordinate system of a preset interaction point in the stereoscopic AR image, and if so, triggers a corresponding action.
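The Android-terminal step of claim 1 can be sketched as follows. This is a hedged illustration only: the scanned interactive-medium pattern is reduced to a marker ID, the cloud server is modeled as a plain dictionary, and "AR imaging" as placing the fetched model at the second preset position. All identifiers, file names, and coordinates are assumptions.

```python
# Stands in for the cloud server's marker-to-model store (hypothetical data).
CLOUD_MODELS = {
    "fire_marker": "extinguisher.glb",
    "quake_marker": "table.glb",
}

# Illustrative coordinates for the "second preset position" in the training space.
SECOND_PRESET_POSITION = (0.0, 1.0, 2.5)

def scan_and_place(marker_id):
    """Resolve a scanned marker to its 3D model and an AR anchor pose."""
    model = CLOUD_MODELS.get(marker_id)
    if model is None:
        raise KeyError(f"no model registered for marker {marker_id!r}")
    return {"model": model, "position": SECOND_PRESET_POSITION}

print(scan_and_place("fire_marker"))
# → {'model': 'extinguisher.glb', 'position': (0.0, 1.0, 2.5)}
```

A real system would use an AR SDK's image-tracking API for the scan and a network request for the model fetch; the lookup-then-anchor shape of the step is what this sketch shows.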
2. The immersive AR-interaction-based safety education experience system according to claim 1, characterized in that three 3D projectors are provided, projecting respectively onto the left, right, and front faces of the training space; a matrix splicer is further arranged between the graphics workstation and the projectors to eliminate gaps or overlapping regions between the images of different projectors.
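Conceptually, the matrix splicer of claim 2 hides the seam where two projected images overlap. A minimal sketch, assuming a linear alpha ramp across an overlap band (the band width and resolution are made-up values; real edge-blending hardware also applies gamma correction and geometry warping, which are omitted here):

```python
PROJ_WIDTH = 1920   # pixels per projector (assumed)
OVERLAP = 200       # pixels shared by adjacent projectors (assumed)

def blend_weight(x):
    """Alpha for pixel column x of the right-hand projector inside an
    overlap band [0, OVERLAP): ramps 0 -> 1, so the left projector's
    complementary weight (1 - alpha) always sums with it to 1 and the
    band shows no brightness seam."""
    if x < 0:
        return 0.0
    if x >= OVERLAP:
        return 1.0
    return x / OVERLAP

print(blend_weight(100))  # → 0.5 (mid-band: each projector contributes half)
```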
3. The immersive AR-interaction-based safety education experience system according to claim 1, characterized in that the graphics workstation controlling the single-chip microcontroller to output the preset interactive medium pattern at the first preset position specifically comprises:
the graphics workstation sends, via a serial port, a message instructing the single-chip microcontroller to output the preset interactive medium pattern;
the single-chip microcontroller drives the corresponding motor through a relay to output the preset interactive medium pattern to the first preset position.
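The serial message of claim 3 could be framed as below. The frame layout (header byte, pattern ID, XOR checksum) is purely an assumption — the patent only says a message is sent over a serial port; the actual transmission (e.g. via a serial library) and the relay-driven motor on the MCU side are out of scope.

```python
FRAME_HEADER = 0xA5  # assumed start-of-frame marker

def build_pattern_command(pattern_id):
    """Frame an 'output preset interactive-medium pattern' command."""
    if not 0 <= pattern_id <= 0xFF:
        raise ValueError("pattern_id must fit in one byte")
    checksum = FRAME_HEADER ^ pattern_id
    return bytes([FRAME_HEADER, pattern_id, checksum])

def mcu_parse(frame):
    """MCU-side check: verify the checksum, return the pattern to output."""
    header, pattern_id, checksum = frame
    if header != FRAME_HEADER or header ^ pattern_id != checksum:
        raise ValueError("corrupt frame")
    return pattern_id  # the MCU would now drive the relay/motor for this pattern

cmd = build_pattern_command(3)
print(cmd.hex(), mcu_parse(cmd))  # → a503a6 3
```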
4. The immersive AR-interaction-based safety education experience system according to claim 1, characterized in that the several motion capture cameras capturing the movements of the instructor and outputting them to the graphics workstation, and the graphics workstation judging whether the 6DoF pose information of the three-dimensional space curve traced by the preset body joint positions of the instructor during human-computer interaction intersects the coordinate system of the preset interaction point in the stereoscopic AR image and, if so, triggering the corresponding action, specifically comprises:
the several motion capture cameras shoot from several angles; the captured sequence of picture frames is sampled in turn by a data collector and a position-and-attitude sensor, and the acquired data are transmitted to the graphics workstation for processing; the graphics workstation extracts temporal and spatial feature sequences from the image frames and then computes, through a 3D convolutional neural network, the trajectory curve traced by the preset body joint positions of the instructor during human-computer interaction; when the trajectory curve intersects the coordinate system of the preset interaction point, the corresponding action is triggered.
5. The immersive AR-interaction-based safety education experience system according to claim 1, characterized by further comprising a tempered glass arranged over the screen of the Android terminal.
6. An immersive AR-interaction-based safety education training method, characterized by comprising:
a graphics workstation outputs a virtual safety scene to a 3D projector for immersive projection, and the 3D projector displays the virtual safety scene on given faces of a training space;
the graphics workstation controls a single-chip microcontroller to output a preset interactive medium pattern at a first preset position;
an Android terminal scans the interactive medium pattern, connects to a cloud server to obtain the three-dimensional model corresponding to the interactive medium pattern, and renders a stereoscopic AR image of the interactive medium at a second preset position in the training space;
several motion capture cameras capture the movements of the instructor and output them to the graphics workstation; the graphics workstation judges whether the 6DoF pose information of the three-dimensional space curve traced by preset body joint positions of the instructor during human-computer interaction intersects the coordinate system of a preset interaction point in the stereoscopic AR image, and if so, triggers a corresponding action.
7. The immersive AR-interaction-based safety education training method according to claim 6, characterized in that three 3D projectors are provided, projecting respectively onto the left, right, and front faces of the training space.
8. The immersive AR-interaction-based safety education training method according to claim 7, characterized in that a matrix splicer is further arranged between the graphics workstation and the projectors to eliminate gaps or overlapping regions between the images of different projectors.
9. The immersive AR-interaction-based safety education training method according to claim 6, characterized in that the graphics workstation controlling the single-chip microcontroller to output the preset interactive medium pattern at the first preset position specifically comprises:
the graphics workstation sends, via a serial port, a message instructing the single-chip microcontroller to output the preset interactive medium pattern;
the single-chip microcontroller drives the corresponding motor through a relay to output the preset interactive medium pattern to the first preset position.
10. The immersive AR-interaction-based safety education training method according to claim 6, characterized in that the several motion capture cameras capturing the movements of the instructor and outputting them to the graphics workstation, and the graphics workstation judging whether the 6DoF pose information of the three-dimensional space curve traced by the preset body joint positions of the instructor during human-computer interaction intersects the coordinate system of the preset interaction point in the stereoscopic AR image and, if so, triggering the corresponding action, specifically comprises:
the several motion capture cameras shoot from several angles; the captured sequence of picture frames is sampled in turn by a data collector and a position-and-attitude sensor, and the acquired data are transmitted to the graphics workstation for processing; the graphics workstation extracts temporal and spatial feature sequences from the image frames and then computes, through a 3D convolutional neural network, the trajectory curve traced by the preset body joint positions of the instructor during human-computer interaction; when the trajectory curve intersects the coordinate system of the preset interaction point, the corresponding action is triggered.
CN201910329826.5A 2019-04-23 2019-04-23 AR interaction-based immersive safety education training system and method Active CN110045832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910329826.5A CN110045832B (en) 2019-04-23 2019-04-23 AR interaction-based immersive safety education training system and method


Publications (2)

Publication Number Publication Date
CN110045832A true CN110045832A (en) 2019-07-23
CN110045832B CN110045832B (en) 2022-03-11

Family

ID=67278724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910329826.5A Active CN110045832B (en) 2019-04-23 2019-04-23 AR interaction-based immersive safety education training system and method

Country Status (1)

Country Link
CN (1) CN110045832B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599604A (en) * 2019-09-20 2019-12-20 成都中科大旗软件股份有限公司 Multimedia AR view sharing method
CN110675666A (en) * 2019-09-20 2020-01-10 陈红梅 Education and training AR intelligent system
CN110928416A (en) * 2019-12-06 2020-03-27 上海工程技术大学 Immersive scene interactive experience simulation system
CN111477060A (en) * 2020-05-29 2020-07-31 安徽新视野科教文化股份有限公司 AR augmented reality technology-based earthquake real scene simulation method and system
CN112015268A (en) * 2020-07-21 2020-12-01 重庆非科智地科技有限公司 BIM-based virtual-real interaction bottom-crossing method, device and system and storage medium
CN112738010A (en) * 2019-10-28 2021-04-30 阿里巴巴集团控股有限公司 Data interaction method and system, interaction terminal and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102497572A (en) * 2011-09-07 2012-06-13 航天科工仿真技术有限责任公司 Desktop type space stereoscopic imaging apparatus
US9462262B1 (en) * 2011-08-29 2016-10-04 Amazon Technologies, Inc. Augmented reality environment with environmental condition control
CN106060497A (en) * 2016-07-19 2016-10-26 广西家之宝网络科技有限公司 Multi-channel three-dimensional laser imaging system
TW201715132A (en) * 2015-10-26 2017-05-01 Liang Kong Immersive all-in-one pc system
CN107209570A (en) * 2015-01-27 2017-09-26 微软技术许可有限责任公司 Dynamic self-adapting virtual list
CN107402633A (en) * 2017-07-25 2017-11-28 深圳市鹰硕技术有限公司 A kind of safety education system based on image simulation technology
CN107771342A (en) * 2016-06-20 2018-03-06 华为技术有限公司 A kind of augmented reality display methods and head-mounted display apparatus
CN109215505A (en) * 2018-10-22 2019-01-15 华东交通大学 A kind of shop VR for introducing sight spot for tourist district



Also Published As

Publication number Publication date
CN110045832B (en) 2022-03-11

Similar Documents

Publication Publication Date Title
CN110045832A Immersive safety education experience system and method based on AR interaction
US10701344B2 (en) Information processing device, information processing system, control method of an information processing device, and parameter setting method
CN110969905A (en) Remote teaching interaction and teaching aid interaction system for mixed reality and interaction method thereof
CN101489150B (en) Virtual and reality mixed remote collaboration working method
CN109358754B (en) Mixed reality head-mounted display system
WO2018107781A1 (en) Method and system for implementing virtual reality
CN205581784U (en) Can mix real platform alternately based on reality scene
CN105429989A (en) Simulative tourism method and system for virtual reality equipment
CN109901713B (en) Multi-person cooperative assembly system and method
CN106652590A (en) Teaching method, teaching recognizer and teaching system
CN109531566A (en) A kind of robot livewire work control method based on virtual reality system
CN108762508A (en) A kind of human body and virtual thermal system system and method for experiencing cabin based on VR
CN105183269B (en) Method for automatically identifying screen where cursor is located
CN106910251A (en) Model emulation method based on AR and mobile terminal
CN103986905B (en) Method for video space real-time roaming based on line characteristics in 3D environment
CN106204751B (en) Real object and virtual scene real-time integration method and integration system
Yu et al. Intelligent visual-IoT-enabled real-time 3D visualization for autonomous crowd management
CN110389664B (en) Fire scene simulation analysis device and method based on augmented reality
CN110427502A (en) Display methods, device, electronic equipment and the storage medium of virtual content
RU2606875C2 (en) Method and system for displaying scaled scenes in real time
CN113941138A (en) AR interaction control system, device and application
CN112558761A (en) Remote virtual reality interaction system and method for mobile terminal
CN112532963A (en) AR-based three-dimensional holographic real-time interaction system and method
CN112330753A (en) Target detection method of augmented reality system
CN108553889A (en) Dummy model exchange method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant