CN112102680A - Train driving teaching platform and method based on VR - Google Patents


Info

Publication number
CN112102680A
CN112102680A (application CN202010879547.9A)
Authority
CN
China
Prior art keywords: video, user, head, liquid crystal, train
Prior art date
Legal status: Pending
Application number
CN202010879547.9A
Other languages
Chinese (zh)
Inventor
胡军
徐振
刘燕德
李茂鹏
高云博
张云伟
Current Assignee: East China Jiaotong University
Original Assignee: East China Jiaotong University
Application filed by East China Jiaotong University filed Critical East China Jiaotong University
Priority to CN202010879547.9A priority Critical patent/CN112102680A/en
Publication of CN112102680A publication Critical patent/CN112102680A/en

Classifications

    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B — EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 — Simulators for teaching or training purposes
    • G09B9/02 — Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/04 — Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses a VR-based train driving teaching platform and method. A camera mounted in a train cab records the scene ahead of the running train, and the footage is transmitted to a computer to be made into a VR video. By wearing an integrated head display device, the user feels immersed in the virtual space. A high-definition camera captures images of the cab operation panel in real time, and the computer superimposes them onto the VR video, so that the relative position of the operation panel seen in the VR video matches the actual position of the panel relative to the user. Teaching explanations are added to the VR video to indicate the action required at each step of operation, and an alarm is raised when an operation error occurs, helping the user deepen understanding and memory and improving students' professional skill level.

Description

Train driving teaching platform and method based on VR
Technical Field
The invention relates to the field of VR (virtual reality), in particular to a train driving teaching platform and method based on VR.
Background
With the development of high-speed railways, the requirements on train driving safety grow ever higher in order to protect the life and property of passengers to the greatest extent. This demands drivers with high driving skill and quick reactions. Some university curricula therefore include a train driver training course to give students professional training. However, because teaching facilities are imperfect and training bases lack a realistic train driving environment, students acquire only theoretical knowledge and cannot develop practical operating ability; they need long additional training after employment before they are qualified. This hinders students' professional development, and railway departments find it difficult to recruit graduates with strong skills, having instead to spend large training budgets on them.
In the prior art, many schools offer majors related to train driving, but matched training facilities are so expensive that a complete railway transport system cannot be built for students to practice on; students only get to touch a train during internships arranged by the school. Such on-site training is short, so it is hard for students to become truly skilled. Simulated train driving therefore helps students deepen their understanding of professional knowledge and improve their practical ability.
Disclosure of Invention
In order to solve the above problems, the present invention provides a VR-based train driving teaching platform and method. A camera in the train cab records the scene in front of the train, and the footage is transmitted to a computer to produce a VR video; by wearing an integrated head display device, the user feels that he or she has entered a virtual space. A high-definition camera collects images of the cab operation panel in real time, and the computer superimposes them onto the VR video, so that the relative position of the operation panel seen in the VR video is exactly the actual position of the panel relative to the user. Teaching explanations are added to the VR video, reminding the user of the action required at each step of operation, and an alarm is raised on operation errors, so the user can further deepen understanding and memory, improving the student's professional skill level. Moreover, students can experience the feeling of driving a train without bearing the large expense of constructing a train driving training base, effectively saving teaching costs.
In order to achieve the purpose, the train driving teaching platform and the train driving teaching method based on VR provided by the invention are realized as follows:
A VR-based train driving teaching platform and method comprises a liquid crystal display screen, an integrated head display device, a high-definition camera, a metal frame, a voice player, a telescopic motor, an adjustable seat, a vibrator, a computer host and a driver console. The metal frame, the liquid crystal display screen and the driver console form the basic frame of a train cab, with the liquid crystal display screen replacing the front windshield of the train. The high-definition camera and the voice player are mounted at the top of the metal frame. The high-definition camera collects images of the operation panel on the driver console and transmits them to the computer host. The voice player sounds alarms to the user and simulates the external sounds a driver hears while the train is running. The vibrator is mounted below the adjustable seat to simulate train shaking. The telescopic motor is mounted behind the adjustable seat; it pushes and pulls the seat to simulate the tilting of the driver's body when the train starts or brakes suddenly. The computer host is installed below the driver console; it produces the VR video, fuses the operation-panel images collected by the high-definition camera into the VR video, and controls the voice player, telescopic motor and vibrator to react to scenes in the VR video. The liquid crystal display screen displays the VR video, and the displayed scene switches with the user's head position and eyeball position.
The integrated head display device of the invention comprises an elastic band, a sponge, a plastic shell, a head tracker, a control circuit board, a Bluetooth module, an LCD display screen, 3D liquid crystal glasses and a night vision camera. The elastic band is fixed on the upper, left and right sides of the plastic shell, and the sponge lines the inner side of the shell. The head tracker is mounted on top of the plastic shell to locate the user's head. The control circuit board, Bluetooth module, LCD display screen, 3D liquid crystal glasses and night vision camera are installed inside the plastic shell, with the LCD display screen in front of the 3D liquid crystal glasses and the night vision camera mounted on a lens of the glasses. The Bluetooth module establishes wireless communication between the control circuit board and the computer host: the computer host transmits the finished VR video to the control circuit board over Bluetooth, the control circuit board drives the LCD display screen to show the VR video, and it also controls the transparency and darkness of the left and right lenses of the 3D liquid crystal glasses so that the user watches the VR video on the LCD display screen through the glasses. The night vision camera tracks the user's eyeball position; its images are transmitted to the control circuit board for image recognition, the eyeball position is computed on the board and sent to the computer host through the Bluetooth module, and the computer host switches the displayed scene according to the eyeball movement and feeds it back to the LCD display screen through the Bluetooth module.
The head tracker of the invention comprises a tilt angle sensor, laser transmitters and laser receivers. Three identical laser transmitters paired with three identical laser receivers form three laser rangefinders, arranged as a triangle; the tilt angle sensor is mounted at the center of gravity of this triangle. The tilt angle sensor detects the spatial tilt of the user's head, while each transmitter-receiver pair measures the distance from the user's head to a different point on the metal frame. The information collected by the tilt angle sensor and the laser receivers is transmitted to the control circuit board, which computes the spatial coordinate position of the user's head from the tilt angle in each direction and the three measured distances, and transmits it to the computer host through the Bluetooth module. The computer host switches the VR video scene according to the head position, so the user sees the desired virtual scene simply by turning the head.
The invention thus uses three pairs of laser transmitters and receivers to measure the distance between the user's head and the metal frame, and locates the spatial position coordinates of the head by a three-point laser positioning method.
The method for calculating the head position of the user by the three-point laser method comprises the following steps:
1. The three laser rangefinders are located at points A, B and C respectively, and the points they illuminate on the metal frame are A1, B1 and C1. The distances AA1 = n, BB1 = p and CC1 = q are measured by the three rangefinders and are therefore known values.
2. The spacing between the three rangefinders is set as: AB = 2 cm, BC = 2 cm, AC = 3 cm.
3. Establish a spatial three-dimensional coordinate system with point A as the origin and AC as the x axis. The tilt angle of the user's head in the x-axis direction is α, in the y-axis direction β, and in the z-axis direction θ.
4. Calculate the coordinates of each point: A1 = (−n·cos(α), n·cos(β), n·cos(θ)), B1 = (p·cos(α), p·cos(β), p·cos(θ)), and C1 = (q·cos(α), q·cos(β), q·cos(θ)), thus obtaining the spatial position coordinates of points A1, B1 and C1.
5. Head tracking reference point calculation: the points A1, B1 and C1 illuminated on the metal frame form a triangle A1B1C1. The barycenter O of triangle A1B1C1 is taken as the reference position coordinate of the head, computed as the average of the three vertices:

O = ((x_A1 + x_B1 + x_C1)/3, (y_A1 + y_B1 + y_C1)/3, (z_A1 + z_B1 + z_C1)/3)

Substituting the coordinates from step 4, the spatial position coordinates of point O are therefore:

O = ((p + q − n)·cos(α)/3, (n + p + q)·cos(β)/3, (n + p + q)·cos(θ)/3)
6. As the barycentric coordinates of triangle A1B1C1 change, the scene in the VR video is switched accordingly, so the user can view the desired virtual scene simply by changing the head position.
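The six steps above can be sketched in a few lines. The sketch below assumes the measured distances n, p, q and the tilt angles α, β, θ are already available; the function name and argument order are illustrative, not part of the patent:

```python
import math

def head_position(n, p, q, alpha, beta, theta):
    """Three-point laser positioning sketch.

    n, p, q            : distances AA1, BB1, CC1 measured by the rangefinders (cm)
    alpha, beta, theta : head tilt angles about the x, y, z axes (radians)
    Returns the reference point O, the centroid of triangle A1B1C1.
    """
    # Step 4: coordinates of the three laser spots on the metal frame
    A1 = (-n * math.cos(alpha), n * math.cos(beta), n * math.cos(theta))
    B1 = ( p * math.cos(alpha), p * math.cos(beta), p * math.cos(theta))
    C1 = ( q * math.cos(alpha), q * math.cos(beta), q * math.cos(theta))
    # Step 5: reference point O = barycenter of triangle A1B1C1
    return tuple((a + b + c) / 3.0 for a, b, c in zip(A1, B1, C1))
```

With equal distances and zero tilt (head level, centered), the x component reduces to (p + q − n)/3 and the y and z components to (n + p + q)/3, matching the formula in step 5.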
The scheme for tracking the eyeball position with the night vision camera is as follows: the control circuit board computes on and recognizes the eyeball images collected by the night vision camera. The collected images are preprocessed by grayscale conversion, wavelet neural network denoising, histogram equalization and the like; a texture feature map of the eyeball is extracted by signal processing; a convolutional neural network fuses the feature maps of its convolution layers to obtain more salient feature information; predictions are made on the feature maps of several convolution layers; and the final eyeball position is obtained using training techniques such as bounding-box regression.
The train driving teaching platform scheme of the invention is as follows: a camera in the train cab records the scene in front of the running train, while the high-definition camera collects images of the cab operation panel in real time. The recorded video is transmitted to the computer host for editing and dubbing, and special effects are added so that the output is a panoramic video. The image information from the high-definition camera is also transmitted to the computer host, where the operation-panel images are superimposed onto the video. Teaching explanations are added, reminding the user what action each step of the operation requires, and an alarm is raised for operation errors. On the computer host, interactive animations are produced and the 2D interface is exported as an interactive png sequence; the 2D interface elements are cut, and a programmer writes the corresponding code to implement the interaction logic and output interactive VR content, followed by debugging and refinement. The scene is first expressed as a sketch, after which a 3D model is built from the sketch and output to form the VR video. The finished VR video is transmitted to the liquid crystal display screen and the integrated head display device for display. The control circuit board controls the transparency and darkness of the left and right lenses of the 3D liquid crystal glasses, so the user watches the 3D stereo VR video on the LCD display screen through the glasses. The spatial position coordinates of the user's head are located by the three-point laser positioning method; scenes in the VR video are switched according to the head rotation and according to the eyeball position information collected by the night vision camera; and the voice player, telescopic motor, adjustable seat and vibrator are controlled to react to the scenes in the video.
Because virtual reality technology is adopted to build a virtual education platform for training students to drive trains, the invention obtains the following beneficial effects:
A camera in the train cab records the scene in front of the running train, and the footage is transmitted to the computer to be made into a VR video. By wearing the integrated head display device, the user feels immersed in the virtual space. The high-definition camera collects images of the cab operation panel in real time, and the computer superimposes them into the VR video, so that the relative position of the operation panel seen in the VR video is exactly the actual position of the panel relative to the user. Teaching explanations are added to the VR video, reminding the user of the action required at each step of operation, and an alarm is raised on operation errors, which helps the user further deepen understanding and memory and improves students' professional skill level. Moreover, students can experience the feeling of driving a train without bearing the large expense of constructing a train driving training base, effectively saving teaching costs.
Drawings
FIG. 1 is an overall structural view of a VR-based train driving teaching platform and method of the present invention;
FIG. 2 is a schematic structural diagram of an integrated head display device of a VR-based train driving teaching platform and method according to the present invention;
FIG. 3 is a schematic structural diagram of a head tracker of a VR-based train driving teaching platform and method according to the present invention;
FIG. 4 is a schematic diagram of a three-point laser method for calculating the head position of a user based on a VR train driving teaching platform and method according to the invention;
FIG. 5 is a schematic diagram of a three-point laser method for calculating spatial coordinates of a user head position based on a VR train driving teaching platform and method according to the present invention;
FIG. 6 is a flowchart of an eye tracking scheme of a VR-based train driving teaching platform and method according to the present invention;
FIG. 7 is a flowchart of an overall scheme of a VR-based train driving teaching platform and method of the present invention;
FIG. 8 is a working schematic diagram of a VR-based train driving teaching platform and method of the present invention.
Reference numerals for the main elements:
1 — Liquid crystal display screen    2 — Integrated head display device
3 — High-definition camera           4 — Metal frame
5 — Voice player                     6 — Telescopic motor
7 — Adjustable seat                  8 — Vibrator
9 — Computer host                    10 — Driver console
11 — Elastic band                    12 — Sponge
13 — Plastic shell                   14 — Head tracker
15 — Control circuit board           16 — Bluetooth module
17 — LCD display screen              18 — 3D liquid crystal glasses
19 — Night vision camera             20 — Tilt angle sensor
21 — Laser transmitter               22 — Laser receiver
Detailed Description
The present invention will be described in further detail with reference to the following examples and drawings.
Referring to fig. 1 to 8, the VR-based train driving teaching platform and method of the present invention comprises a liquid crystal display screen 1, an integrated head display device 2, a high-definition camera 3, a metal frame 4, a voice player 5, a telescopic motor 6, an adjustable seat 7, a vibrator 8, a computer host 9 and a driver console 10.
As shown in fig. 1, the basic frame of the train cab is built from the metal frame 4, the liquid crystal display screen 1 and the driver console 10, with the liquid crystal display screen 1 replacing the front windshield of the train. The high-definition camera 3 and the voice player 5 are installed at the top of the metal frame 4. The high-definition camera 3 collects images of the operation panel on the driver console 10 and transmits them to the computer host 9. The voice player 5 sounds alarms to the user and simulates the external sounds a driver hears when the train runs: when the user makes an operation error during simulated driving, an alarm reminds the user, and since a real driver hears external sounds while the train runs, the voice player 5 is used to give sound to the corresponding scenes in the VR video. The adjustable seat 7 is installed behind the driver console 10; its height can be adjusted to suit users of different heights. The vibrator 8 is installed below the adjustable seat 7 to simulate train shaking: when the VR video shows a shaking scene, the computer host 9 drives the vibrator 8 so the trainee feels the shaking of an actual train. The telescopic motor 6 is installed behind the adjustable seat 7 to push and pull the seat and simulate the tilting of the driver's body when the train starts or brakes suddenly: on sudden braking, the computer host 9 makes the telescopic motor 6 extend quickly, pushing the adjustable seat 7 forward so the user feels pushed forward, then makes the motor contract and pull the seat back, simulating the inertia acting on the driver during emergency braking. The computer host 9 is installed below the driver console 10 and is electrically connected to the liquid crystal display screen 1, the integrated head display device 2, the high-definition camera 3, the voice player 5, the telescopic motor 6 and the vibrator 8. It produces the VR video, fuses the operation-panel images collected by the high-definition camera 3 into the VR video, and controls the voice player 5, telescopic motor 6 and vibrator 8 to react to scenes in the VR video. The liquid crystal display screen 1 displays the VR video, and the displayed scene switches with the user's head position and eyeball position.
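The scene-driven actuator behaviour just described — alarm on operation error, vibration on a shaking scene, extend-then-retract on emergency braking — can be sketched as a simple event dispatcher. The event names and device functions below are illustrative stand-ins recorded in a log, not part of the patent:

```python
# Hypothetical scene-event dispatcher for the computer host. Real device
# drivers would replace the logging stand-ins below.
log = []

def voice(msg):    log.append(("voice", msg))     # voice player 5
def vibrate(on):   log.append(("vibrate", on))    # vibrator 8
def motor(action): log.append(("motor", action))  # telescopic motor 6

def handle_scene_event(event):
    if event == "operation_error":
        voice("alarm")            # alert the trainee to the mistake
    elif event == "track_shake":
        vibrate(True)             # shake the seat like a real train
    elif event == "emergency_brake":
        motor("extend")           # push the seat forward quickly...
        motor("retract")          # ...then pull it back (inertia effect)

for e in ["track_shake", "emergency_brake", "operation_error"]:
    handle_scene_event(e)
```

The dispatch order mirrors the sequence in the paragraph above: vibration, then the extend/retract pair, then the alarm.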
As shown in fig. 2, the integrated head display device 2 includes an elastic band 11, a sponge 12, a plastic shell 13, a head tracker 14, a control circuit board 15, a Bluetooth module 16, an LCD display screen 17, 3D liquid crystal glasses 18 and a night vision camera 19. The elastic band 11 is fixed on the upper, left and right sides of the plastic shell 13, and the sponge 12 lines the inside of the shell to prevent it from scratching the user's face. The head tracker 14 is installed on top of the plastic shell 13 to locate the user's head; it transmits the head position information to the computer host 9, and the VR video scene switches as the user's head turns. The control circuit board 15, Bluetooth module 16, LCD display screen 17, 3D liquid crystal glasses 18 and night vision camera 19 are installed inside the plastic shell 13, with the LCD display screen 17 in front of the 3D liquid crystal glasses 18 and the night vision camera 19 mounted on a lens of the glasses. The Bluetooth module 16 establishes wireless communication between the control circuit board 15 and the computer host 9: the computer host 9 transmits the finished VR video to the control circuit board 15 over Bluetooth, the control circuit board 15 drives the LCD display screen 17 to show the VR video, and it controls the transparency and darkness of the left and right lenses of the 3D liquid crystal glasses 18, so the user watches the VR video on the LCD display screen 17 through the glasses and perceives it in stereo. The stereo effect arises because the two eyes view independently: the left and right eyes are separated by a distance, so the scenery seen by each eyeball is slightly displaced. By separating the images on the LCD display screen 17 seen by the left and right eyes and using the 3D liquid crystal glasses 18 to display the left- and right-eye images continuously and alternately, together with the persistence of vision of the human eye, a real 3D stereoscopic image is seen. The night vision camera 19 tracks the user's eyeball position and transmits the detected images to the control circuit board 15 for image recognition; the eyeball position is computed on the board and sent to the computer host 9 through the Bluetooth module 16; the computer host 9 switches the displayed scene according to the eyeball movement and feeds it back to the LCD display screen 17 through the Bluetooth module 16, so the user views the corresponding virtual scene.
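The frame-sequential stereo scheme described above — alternating left- and right-eye frames while the LC shutter glasses blank the opposite lens — can be sketched as a simple schedule. The function and tuple format are illustrative; real hardware would sync this to the display refresh:

```python
def shutter_schedule(n_frames):
    """Yield (frame_eye, left_lens_open, right_lens_open) per display refresh.

    Even refreshes show the left-eye frame with only the left shutter open;
    odd refreshes show the right-eye frame with only the right shutter open.
    Persistence of vision fuses the alternating frames into one stereo image.
    """
    out = []
    for i in range(n_frames):
        eye = "left" if i % 2 == 0 else "right"
        out.append((eye, eye == "left", eye == "right"))
    return out
```

At a 120 Hz refresh rate this schedule would give each eye 60 frames per second, which is a common choice for shutter-glasses systems (the patent does not specify a rate).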
As shown in fig. 3, the head tracker 14 includes a tilt angle sensor 20, laser transmitters 21 and laser receivers 22. Three identical laser transmitters 21 paired with three identical laser receivers 22 form three laser rangefinders arranged as a triangle, and the tilt angle sensor 20 is installed at the center of gravity of that triangle. The tilt angle sensor 20 detects the spatial tilt of the user's head, while each transmitter-receiver pair measures the distance between the user's head and a different point on the metal frame 4. The information collected by the tilt angle sensor 20 and the laser receivers 22 is transmitted to the control circuit board 15, which computes the spatial coordinate position of the user's head from the tilt in each direction and the three measured distances and transmits it to the computer host 9 through the Bluetooth module 16. The computer host 9 switches the VR video scene according to the head position, so the user views the desired virtual scene by turning the head.
As shown in fig. 4, the three pairs of laser transmitters 21 and laser receivers 22 measure the distances between the user's head and the metal frame 4, and the three-point laser positioning method locates the spatial position coordinates of the head: with the laser beam angles known and the distances to the metal frame 4 measured, the spatial coordinates of the user's head can be obtained.
As shown in fig. 5, the method for calculating the head position of the user by the three-point laser method includes:
1. The three laser rangefinders are located at points A, B and C respectively, and the points they illuminate on the metal frame 4 are A1, B1 and C1. The distances AA1 = n, BB1 = p and CC1 = q are measured by the three rangefinders and are therefore known values.
2. The spacing between the three rangefinders is set as: AB = 2 cm, BC = 2 cm, AC = 3 cm.
3. Establish a spatial three-dimensional coordinate system with point A as the origin and AC as the x axis. The tilt angle of the user's head in the x-axis direction is α, in the y-axis direction β, and in the z-axis direction θ.
4. Calculate the coordinates of each point: A1 = (−n·cos(α), n·cos(β), n·cos(θ)), B1 = (p·cos(α), p·cos(β), p·cos(θ)), and C1 = (q·cos(α), q·cos(β), q·cos(θ)), thus obtaining the spatial position coordinates of points A1, B1 and C1.
5. Head tracking reference point calculation: the points A1, B1 and C1 illuminated on the metal frame 4 form a triangle A1B1C1. The barycenter O of triangle A1B1C1 is taken as the reference position coordinate of the head, computed as the average of the three vertices:

O = ((x_A1 + x_B1 + x_C1)/3, (y_A1 + y_B1 + y_C1)/3, (z_A1 + z_B1 + z_C1)/3)

Substituting the coordinates from step 4, the spatial position coordinates of point O are therefore:

O = ((p + q − n)·cos(α)/3, (n + p + q)·cos(β)/3, (n + p + q)·cos(θ)/3)
6. As the barycentric coordinates of triangle A1B1C1 change, the scene in the VR video is switched accordingly, so the user can view the desired virtual scene simply by changing the head position.
As shown in fig. 6, the scheme by which the night vision camera 19 tracks the eyeball position is as follows: the control circuit board 15 computes on and recognizes the eyeball images collected by the night vision camera 19. The collected images are preprocessed by grayscale conversion, wavelet neural network denoising, histogram equalization and the like; a texture feature map of the eyeball is extracted by signal processing; a convolutional neural network fuses the feature maps of its convolution layers to obtain more salient feature information; predictions are made on the feature maps of several convolution layers; and the final eyeball position is obtained using training techniques such as bounding-box regression. The coordinate result of eyeball tracking is then associated with radial blur rendering and applied to the VR scene, improving the user's sense of immersion.
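The first two preprocessing stages named above — grayscale conversion and histogram equalization — can be sketched with NumPy alone. The wavelet denoising and CNN stages are omitted here, and the function name is illustrative, not from the patent:

```python
import numpy as np

def preprocess_eye_image(rgb):
    """Grayscale conversion + histogram equalization of an H x W x 3 uint8 image.

    This covers only the first two preprocessing steps of the eye-tracking
    pipeline described in the text (a minimal sketch, not the full method).
    """
    # Luminance-weighted grayscale (ITU-R BT.601 weights), 0..255 uint8
    gray = np.rint(0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
                   + 0.114 * rgb[..., 2]).astype(np.uint8)
    # Histogram equalization via the cumulative distribution function
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    lut = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
    return lut[gray].astype(np.uint8)
```

Equalization stretches the intensity histogram so that pupil/iris texture occupies the full dynamic range before feature extraction.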
As shown in fig. 7, the scheme of the train driving teaching platform is as follows: a camera mounted in a train cab records the scene in front of the running train as a video, while the high-definition camera 3 collects images of the cab operation panel in real time. The video shot by the camera is transmitted to the computer host 9, where it is edited and dubbed and corresponding special effects are added so that the output video is a panoramic video. The image information collected by the high-definition camera 3 is transmitted to the computer host 9, where the operation panel image is superimposed onto the video. Teaching explanations are added to remind the user what action each step of the operation requires, and an alarm is given when an operation error occurs; this helps the user to deepen understanding and memory and improves the professional skill of students. In the computer host 9, interactive animations and 2D interfaces are produced and exported as an interactive png sequence for VR, the 2D interface elements are cut out, and a programmer writes the corresponding code to implement the interaction logic of the interactive VR content, which is then debugged and refined. The scene is first expressed as a sketch, and a 3D model is built from the sketch and rendered to form the VR video. The finished VR video is transmitted to the liquid crystal display screen 1 and the integrated head display device 2 for display: the user enters the space of the virtual driving train through the integrated head display device 2, while a teacher can check on the liquid crystal display screen 1 whether the student's operation is standard, enabling more targeted teaching. The control circuit board 15 controls the transparency and darkness of the left and right lenses of the 3D liquid crystal glasses 18, so that the user can watch the 3D stereoscopic VR video on the LCD display screen 17 through the 3D liquid crystal glasses 18. The spatial position coordinates of the user's head are located by a three-point laser positioning method to obtain the position of the head, and the scene in the VR video is switched according to the rotation of the head. The night vision camera 19 collects eyeball position information to switch scenes in the VR video, and the voice player 5, the telescopic motor 6, the adjustable seat 7 and the vibrator 8 are controlled to make the corresponding reactions according to the scenes in the video.
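The panel-overlay step described above — superimposing the operation panel image captured by the high-definition camera 3 onto the panoramic video inside the computer host 9 — can be sketched as a per-region alpha blend. This is an illustrative sketch only; the function name, placement arguments and opacity value are assumptions, not part of the disclosure.

```python
import numpy as np

def overlay_panel(frame: np.ndarray, panel: np.ndarray,
                  top: int, left: int, alpha: float = 0.8) -> np.ndarray:
    """Blend the operation-panel image into one region of a video frame.

    `alpha` is the panel opacity (assumed value); pixels outside the
    panel region are left untouched.
    """
    out = frame.astype(np.float32).copy()
    h, w = panel.shape[:2]
    region = out[top:top + h, left:left + w]
    # weighted blend of panel over the underlying frame region
    out[top:top + h, left:left + w] = alpha * panel + (1.0 - alpha) * region
    return out.astype(frame.dtype)
```

In a real pipeline this blend would run per decoded frame before the composite video is re-encoded and sent to the displays.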
The working principle and the working process of the invention are as follows:
As shown in fig. 8, the VR video is produced on the computer host 9, and the operation panel image collected by the high-definition camera 3 is fused into the VR video. According to the scene in the VR video, the computer host 9 controls the voice player 5 to play an alarm sound when the user makes an error in the simulated train operation, and to play the external sounds a driver would hear during actual train operation; the voice player 5 thus reproduces the audible scenes of the VR video. When a scene in the VR video contains train shaking, the computer host 9 controls the vibrator 8 to shake, so that the user feels as if shaken on an actual train. The computer host 9 controls the telescopic motor 6 to extend quickly and push the adjustable seat 7 forward, giving the user the feeling of being pushed forward, and then controls the telescopic motor 6 to contract and pull the adjustable seat 7 backward, thereby simulating the inertia acting on a driver during emergency braking of the train. The head tracker 14, consisting of the tilt angle sensor 20, the laser emitter 21 and the laser receiver 22, transmits the acquired information to the control circuit board 15, where the head position is calculated; the calculated position information is transmitted to the computer host 9 through the Bluetooth module 16, and the scene displayed in the VR video is switched as the user's head turns. The computer host 9 transmits the prepared VR video to the control circuit board 15 through Bluetooth, the control circuit board 15 controls the LCD display screen 17 to display the VR video, and the control circuit board 15 controls the transparency and darkness of the left and right lenses of the 3D liquid crystal glasses 18, so that the user watching the VR video on the LCD display screen 17 through the 3D liquid crystal glasses 18 sees a stereoscopic video. The night vision camera 19 collects images of the user's eyeballs and transmits the image information to the control circuit board 15 for image recognition; the eyeball position is calculated in the control circuit board 15 and transmitted to the computer host 9 through the Bluetooth module 16. The computer host 9 switches the displayed scene according to the eyeball movement position and feeds it back to the LCD display screen 17 through the Bluetooth module 16, so that the user sees the corresponding virtual scene.
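The actuator logic described above — an alarm on operation error, vibration for shaking scenes, and a fast extend-then-contract of the telescopic motor 6 for emergency braking — amounts to an event-to-command mapping. The event labels and command strings below are hypothetical names chosen for illustration, not identifiers from the disclosure.

```python
# Hypothetical mapping from VR-scene events to actuator commands,
# summarising the motion/audio cues described in the working process.
def cue_commands(event: str) -> list[str]:
    if event == "operation_error":
        return ["voice_player: alarm"]
    if event == "train_shaking":
        return ["vibrator: on"]
    if event == "emergency_braking":
        # extend fast to tip the seat forward, then contract to pull it
        # back, mimicking the inertia a driver feels while braking
        return ["telescopic_motor: extend_fast",
                "telescopic_motor: contract"]
    return []  # scenes with no physical cue
```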

Claims (6)

1. A VR-based train driving teaching platform and method, characterized in that: the platform comprises a liquid crystal display screen, an integrated head display device, a high-definition camera, a metal frame, a voice player, a telescopic motor, an adjustable seat, a vibrator, a computer host and a driver operating platform; a basic frame of a train cab is built from the metal frame, the liquid crystal display screen and the driver operating platform, with the liquid crystal display screen replacing the front windshield of the train; the high-definition camera and the voice player are installed at the top end of the metal frame; the high-definition camera collects images of the operation panel on the driver operating platform and transmits the collected images to the computer host; the voice player gives an alarm sound to the user and simulates the external sounds the driver hears when the train operates; the vibrator is installed below the adjustable seat and simulates the shaking of the train; the telescopic motor is installed behind the adjustable seat and pushes and pulls the adjustable seat to simulate the tilting of the driver's body; the computer host is installed below the driver operating platform and is used to produce the VR video, to fuse the operation panel image collected by the high-definition camera into the VR video, and to control the voice player, the telescopic motor and the vibrator to make the corresponding reactions according to the scenes in the VR video; the liquid crystal display screen displays the VR video, and the displayed video scene switches with the position of the user's head and eyeballs.
2. The VR-based train driving teaching platform and method of claim 1, wherein: the integrated head display device comprises an elastic band, sponge, a plastic shell, a head tracker, a control circuit board, a Bluetooth module, an LCD display screen, 3D liquid crystal glasses and a night vision camera; the elastic band is fixed to the upper, left and right sides of the plastic shell, and the sponge is arranged on the inner side of the plastic shell; the head tracker is arranged at the top end of the plastic shell and positions the head of the user; the control circuit board, the Bluetooth module, the LCD display screen, the 3D liquid crystal glasses and the night vision camera are arranged inside the plastic shell, with the LCD display screen in front of the 3D liquid crystal glasses and the night vision camera on a lens of the 3D liquid crystal glasses; the Bluetooth module establishes wireless communication between the control circuit board and the computer host; the computer host transmits the prepared VR video to the control circuit board through Bluetooth, the control circuit board controls the LCD display screen to display the VR video, and the control circuit board controls the transparency and darkness of the left and right lenses of the 3D liquid crystal glasses, so that the user can watch the VR video on the LCD display screen through the 3D liquid crystal glasses; the night vision camera tracks the eyeball position of the user and transmits the image information to the control circuit board for image recognition; the eyeball position is calculated in the control circuit board and transmitted to the computer host through the Bluetooth module, and the computer host switches the displayed scene according to the eyeball movement position and feeds it back to the LCD display screen through the Bluetooth module.
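One plausible reading of the lens-transparency control in this claim is frame-sequential (active-shutter) stereo: the control circuit board darkens one lens per displayed frame so that each eye sees only its own view. The claim does not specify the exact scheme, so the alternation below is a hedged sketch under that assumption.

```python
# Frame-sequential shutter alternation: on even frames the left lens is
# clear and the right lens is dark; on odd frames the reverse. Real
# shutter glasses are driven by a sync signal from the display controller.
def lens_state(frame_index: int) -> tuple[str, str]:
    """Return (left_lens, right_lens), each 'clear' or 'dark'."""
    if frame_index % 2 == 0:
        return ("clear", "dark")
    return ("dark", "clear")
```

At a 60 Hz panel this gives each eye an effective 30 Hz view, which is why such systems usually prefer high-refresh displays.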
3. The VR-based train driving teaching platform and method of claim 2, wherein: the head tracker comprises a tilt angle sensor, laser transmitters and laser receivers; three groups of laser range finders are formed, each from one of three identical laser transmitters and one of three identical laser receivers, and the three groups of laser range finders are arranged in a triangle; the tilt angle sensor is arranged at the barycentre of the triangle formed by the three groups of laser range finders and detects the spatial tilt angle of the user's head; each laser transmitter and laser receiver together measure the distance from the user's head to a different point on the metal frame; the information collected by the tilt angle sensor and the laser receivers is transmitted to the control circuit board, where the spatial coordinate position of the user's head is calculated from the tilt angle in each direction and the distances measured by the three groups of laser range finders; the spatial coordinate position is transmitted to the computer host through the Bluetooth module, and the displayed virtual scene follows the rotation of the user's head, so that the user sees the virtual scene corresponding to the direction in which the head is turned.
4. The VR-based train driving teaching platform and method of claim 3, wherein: the three-point laser method for calculating the position of the user's head comprises the following steps:
1. the three groups of laser range finders are located at points A, B and C respectively, and the points their beams illuminate on the metal frame are A₁, B₁ and C₁ respectively; the lengths of AA₁, BB₁ and CC₁ are determined by the three groups of laser range finders and are known values;
2. the distances between the three groups of laser range finders are set as: AB = 2 cm, BC = 2 cm and AC = 3 cm;
3. a three-dimensional spatial coordinate system is established with point A as the origin and AC as the x axis; the tilt angles of the user's head in the x, y and z axis directions are α, β and θ respectively;
4. the coordinates of each point are calculated: with n, p and q denoting the measured lengths of AA₁, BB₁ and CC₁, the coordinates of A₁ are (−n·cos(α), n·cos(β), n·cos(θ)), the coordinates of B₁ are (p·cos(α), p·cos(β), p·cos(θ)), and the coordinates of C₁ are (q·cos(α), q·cos(β), q·cos(θ)); the spatial position coordinates of points A₁, B₁ and C₁ are thus obtained;
5. head tracking reference point calculation: the points A₁, B₁ and C₁ illuminated on the metal frame form a triangle A₁B₁C₁, and the barycentre O of triangle A₁B₁C₁ is calculated and taken as the reference position coordinate of the head; the coordinates of point O are the coordinate-wise means of A₁, B₁ and C₁:
O = ((x(A₁) + x(B₁) + x(C₁))/3, (y(A₁) + y(B₁) + y(C₁))/3, (z(A₁) + z(B₁) + z(C₁))/3),
so the spatial position coordinates of point O are:
O = (((−n + p + q)/3)·cos(α), ((n + p + q)/3)·cos(β), ((n + p + q)/3)·cos(θ));
6. as the barycentric coordinates of triangle A₁B₁C₁ change, the scene in the VR video switches with the change of the user's head position, so that the user always views the corresponding scene.
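The steps above can be transcribed directly into code. This is an unofficial sketch of the claimed formulas, assuming the tilt angles are given in radians and that n, p and q are the measured beam lengths AA₁, BB₁ and CC₁:

```python
import math

def head_reference_point(n: float, p: float, q: float,
                         alpha: float, beta: float,
                         theta: float) -> tuple:
    """Barycentre O of triangle A1B1C1 per steps 4-5 of the claim.

    n, p, q: measured beam lengths AA1, BB1, CC1 (same units).
    alpha, beta, theta: head tilt angles about x, y, z, in radians.
    """
    a1 = (-n * math.cos(alpha), n * math.cos(beta), n * math.cos(theta))
    b1 = ( p * math.cos(alpha), p * math.cos(beta), p * math.cos(theta))
    c1 = ( q * math.cos(alpha), q * math.cos(beta), q * math.cos(theta))
    # barycentre = coordinate-wise mean of the three illuminated points
    return tuple((u + v + w) / 3.0 for u, v, w in zip(a1, b1, c1))
```

With zero tilt and equal beam lengths n = p = q = L, the formula reduces to O = (L/3, L, L), consistent with the closed form in step 5.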
5. The VR-based train driving teaching platform and method of claim 2, wherein: the scheme by which the night vision camera tracks the eyeball position is as follows: the control circuit board computes on and recognises the eyeball images collected by the night vision camera; the collected images are preprocessed by image graying, wavelet neural network denoising, histogram equalization and similar steps; texture feature maps of the eyeball are extracted by signal processing methods; a convolutional neural network fuses the feature maps of each convolution layer to obtain more salient feature information; predictions are made on the feature maps of several convolution layers, and the final eyeball position information is obtained using training techniques such as bounding-box regression.
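Of the preprocessing chain named in this claim, the graying and histogram-equalization steps are standard operations and can be sketched in a few lines; the wavelet denoising and convolutional-network stages are omitted here. The function names and the 8-bit list-of-rows image format are assumptions for illustration only.

```python
# Sketch of two preprocessing steps from the claim: graying and
# histogram equalization on an 8-bit image stored as a list of rows.

def to_gray(rgb_rows):
    """ITU-R BT.601 luma from (R, G, B) triples, clamped to 0-255."""
    return [[min(255, round(0.299 * r + 0.587 * g + 0.114 * b))
             for r, g, b in row]
            for row in rgb_rows]

def equalize(gray_rows, levels=256):
    """Classic histogram equalization via the cumulative distribution."""
    flat = [v for row in gray_rows for v in row]
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:                 # running sum -> cumulative histogram
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)
    # map each grey level through the normalised CDF
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [[lut[v] for v in row] for row in gray_rows]
```

In practice these two steps would be a single call each in an image library; the point here is only to make the claimed pipeline concrete.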
6. The VR-based train driving teaching platform and method of claim 1, wherein: the scheme of the train driving teaching platform is as follows: a camera mounted in a train cab records the scene in front of the running train as a video, while the high-definition camera collects images of the cab operation panel in real time; the video shot by the camera is transmitted to the computer host for editing and dubbing, and corresponding special effects are added so that the output video is a panoramic video; the image information collected by the high-definition camera is transmitted to the computer host, where the operation panel image is superimposed onto the video; teaching explanations are added to remind the user what action each step of the operation requires, and an alarm is given for errors in the operation; in the computer host, interactive animations and 2D interfaces are produced and exported as an interactive png sequence for VR, the 2D interface elements are cut out, and a programmer writes the corresponding code to implement the interaction logic of the interactive VR content, which is then debugged and refined; the scene is expressed as a sketch, and a 3D model is then built from the sketch and rendered to form the VR video; the finished VR video is transmitted to the liquid crystal display screen and the integrated head display device for display; the control circuit board controls the transparency and darkness of the left and right lenses of the 3D liquid crystal glasses, so that the user can watch the 3D stereoscopic VR video on the LCD display screen through the 3D liquid crystal glasses; the spatial position coordinates of the user's head are located by the three-point laser positioning method to obtain the position of the user's head, and the scene in the VR video is switched according to the rotation of the head; the scene in the VR video is also switched according to the eyeball position information collected by the night vision camera; and the voice player, the telescopic motor, the adjustable seat and the vibrator are controlled to make the corresponding reactions according to the scenes in the video.
CN202010879547.9A 2020-08-27 2020-08-27 Train driving teaching platform and method based on VR Pending CN112102680A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010879547.9A CN112102680A (en) 2020-08-27 2020-08-27 Train driving teaching platform and method based on VR


Publications (1)

Publication Number Publication Date
CN112102680A true CN112102680A (en) 2020-12-18

Family

ID=73758038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010879547.9A Pending CN112102680A (en) 2020-08-27 2020-08-27 Train driving teaching platform and method based on VR

Country Status (1)

Country Link
CN (1) CN112102680A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801526A (en) * 2021-02-05 2021-05-14 福建慧舟信息科技有限公司 Driving training cheating supervision system and supervision method based on AI technology and biological identification cross authentication
CN113377028A (en) * 2021-06-15 2021-09-10 湖南汽车工程职业学院 Power storage battery testing teaching system based on VR and 5D
CN113838356A (en) * 2021-10-20 2021-12-24 广东阿马托科技有限公司 Distribution network automation display system
CN114913732A (en) * 2021-12-23 2022-08-16 国网宁夏电力有限公司超高压公司 Virtual fire drill method and system
IT202100008477A1 (en) * 2021-04-07 2022-10-07 Medilx S R L WEARABLE SYSTEM TO ASSIST THE ACTIVITIES OF PLANNING INTERVENTIONS IN THE BUILDING FIELD
TWI818613B (en) * 2022-07-01 2023-10-11 國立臺北科技大學 Asymmetric VR remote medical collaboration guidance system and training method

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6873406B1 (en) * 2002-01-11 2005-03-29 Opti-Logic Corporation Tilt-compensated laser rangefinder
CN102353329A (en) * 2011-08-24 2012-02-15 吉林大学 Method for measuring non-contact three-dimensional coordinate of simulation test site and device used in same
CN103808256A (en) * 2012-11-15 2014-05-21 中国科学院沈阳自动化研究所 Non-contact type object planar motion measuring device and implementation method thereof
CN105652280A (en) * 2015-11-26 2016-06-08 广东雷洋智能科技股份有限公司 Laser radar triangulation ranging method
CN107633728A (en) * 2017-09-29 2018-01-26 广州云友网络科技有限公司 A kind of virtual driving scene method synchronous with the action of body-sensing seat
CN109740466A (en) * 2018-12-24 2019-05-10 中国科学院苏州纳米技术与纳米仿生研究所 Acquisition methods, the computer readable storage medium of advertisement serving policy
CN109991613A (en) * 2017-12-29 2019-07-09 长城汽车股份有限公司 Localization method, positioning device, vehicle and readable storage medium storing program for executing
CN110097799A (en) * 2019-05-23 2019-08-06 重庆大学 Virtual driving system based on real scene modeling
CN110244308A (en) * 2019-06-13 2019-09-17 南京拓曼思电气科技有限公司 A kind of laser sensor and its working method for surveying Gao Dingzi suitable for unmanned plane
CN111028552A (en) * 2019-12-24 2020-04-17 江西拓荒者科技有限公司 Red education platform based on VR technique
CN111209811A (en) * 2019-12-26 2020-05-29 的卢技术有限公司 Method and system for detecting eyeball attention position in real time



Similar Documents

Publication Publication Date Title
CN112102680A (en) Train driving teaching platform and method based on VR
US11484790B2 (en) Reality vs virtual reality racing
US8368721B2 (en) Apparatus and method for on-field virtual reality simulation of US football and other sports
US5584696A (en) Hang gliding simulation system with a stereoscopic display and method of simulating hang gliding
US10529248B2 (en) Aircraft pilot training system, method and apparatus for theory, practice and evaluation
US4984179A (en) Method and apparatus for the perception of computer-generated imagery
US20210283496A1 (en) Realistic Virtual/Augmented/Mixed Reality Viewing and Interactions
CN209044930U (en) Special vehicle drive training simulator system based on mixed reality and multi-degree-of-freedom motion platform
CN114706483A (en) Immersive virtual display
US3283418A (en) Vehicle trainers
CN107247511A (en) A kind of across object exchange method and device based on the dynamic seizure of eye in virtual reality
KR101507014B1 (en) Vehicle simulation system and method to control thereof
WO2022133219A1 (en) Mixed-reality visor for in-situ vehicular operations training
US6149435A (en) Simulation method of a radio-controlled model airplane and its system
JPH09138637A (en) Pseudo visibility device
Riecke et al. Perceiving simulated ego-motions in virtual reality: comparing large screen displays with HMDs
Papa et al. A new interactive railway virtual simulator for testing preventive safety
CN113112888A (en) AR real scene interactive simulation driving method
KR20130117627A (en) Simulator system for micro-nano robot using real-time characteristic data
JP2000112334A (en) Driving operation training device
JP6717516B2 (en) Image generation system, image generation method and program
DE19906244A1 (en) Virtual reality transmission system, relaying scene perceived by e.g. robot or telechir to operator, includes stereoscopic camera pair turning in sympathy with operators head movement
Bouchner Driving simulators for HMI Research
WO2024095356A1 (en) Graphics generation device, graphics generation method, and program
JPH09269723A (en) Motional perception controller

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201218