CN110610547B - Cabin practical training method, system and storage medium based on virtual reality

Cabin practical training method, system and storage medium based on virtual reality

Info

Publication number
CN110610547B
CN110610547B (application number CN201910879257.1A)
Authority
CN
China
Prior art keywords
data
cabin
trainer
training
virtual reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910879257.1A
Other languages
Chinese (zh)
Other versions
CN110610547A (en)
Inventor
师润乔
刘爽
许秋子
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruilishi Multimedia Technology Beijing Co ltd
Original Assignee
Ruilishi Multimedia Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ruilishi Multimedia Technology Beijing Co ltd
Priority to CN201910879257.1A
Publication of CN110610547A
Application granted
Publication of CN110610547B
Legal status: Active

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014Hand-worn input/output arrangements, e.g. data gloves
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00Simulators for teaching or training purposes
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00Simulators for teaching or training purposes
    • G09B9/02Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/04Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00Simulators for teaching or training purposes
    • G09B9/02Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/08Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00Simulators for teaching or training purposes
    • G09B9/02Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/08Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
    • G09B9/16Ambient or aircraft conditions simulated or indicated by instrument or alarm
    • G09B9/165Condition of cabin, cockpit or pilot's accessories

Abstract

The invention discloses a cabin practical training method based on virtual reality, which comprises the following steps: acquiring mark point two-dimensional image data and hand motion data of a trainer during practical training; preprocessing the mark point two-dimensional image data to obtain two-dimensional coordinate data of the mark points, and solving the two-dimensional coordinate data of the mark points to obtain point cloud coordinates and directions in the three-dimensional capture space; obtaining, according to the point cloud coordinates and directions, spatial position positioning data corresponding to the rigid body actions; determining the hand position in the cabin virtual scene according to the spatial position positioning data, and determining the finger gestures in the cabin virtual scene according to the motion data of the trainer's hands; and determining, according to a preset mapping relation, the virtual button operation in the cabin virtual scene that corresponds to the finger position and finger gesture, and responding accordingly. The invention also discloses a cabin practical training system based on virtual reality and a storage medium. The invention improves the immersion and effect of virtual reality cabin training.

Description

Cabin practical training method, system and storage medium based on virtual reality
Technical Field
The invention relates to the technical field of virtual reality, and in particular to a cabin practical training method, system and storage medium based on virtual reality.
Background
Existing pilot or driver training is mainly carried out in a "physical simulation cockpit", which simulates the actual operation of an aircraft or vehicle but has several drawbacks. One simulation cabin can only simulate one aircraft type and cannot support training across multiple types: because the cabin door opening methods, landing gear structures and other equipment differ between types, simulating another type requires configuring another physical simulation cabin, which multiplies the practical training cost. Likewise, some aviation training, such as simulated water evacuation or maintenance of major aircraft components, is very expensive per session because the equipment changes involved are irreversible. Training for fire, attack, emergency forced landing and the like cannot be carried out in a physical training cabin at all. Although many ways of using VR/AR to replace physical training have appeared (such as wearing a head-mounted display and inertial motion capture gloves to present the cabin as a virtual scene), existing virtual reality approaches suffer from insufficient immersion, high cost and slow system response.
Disclosure of Invention
The invention mainly aims to provide a cabin practical training method, system and storage medium based on virtual reality, so as to solve the technical problem of how to improve the immersion of virtual reality cabin training.
In order to achieve the above object, the present invention provides a cabin training method based on virtual reality, which comprises the following steps:
continuously shooting, through a plurality of dynamic capture cameras in the dynamic capture space, the actions of a trainer wearing reflective mark points during practical training to obtain synchronized mark point two-dimensional image data, and obtaining motion data of the trainer's hands through inertial motion capture gloves worn by the trainer, wherein the trainer uses real objects to simulate driving training, the real objects comprising at least one of a seat, a control rod, a throttle and a rudder;
preprocessing the mark point two-dimensional image data to obtain two-dimensional coordinate data of the mark points, and solving the two-dimensional coordinate data of the mark points using multi-view computer vision technology to obtain point cloud coordinates and directions in the three-dimensional capture space;
identifying rigid body structures bound to different parts of the trainer according to the point cloud coordinates and directions, and solving the positions and orientations of the rigid body structures in the capture space to obtain spatial position positioning data corresponding to the rigid body actions of the trainer during practical training;
determining the hand position of the trainer in the virtual scene of the cabin according to the spatial position positioning data corresponding to the rigid body action of the trainer in practical training, and determining the finger position and the gesture of the trainer in the virtual scene of the cabin according to the action data provided by the inertial action capturing glove;
and determining, according to the preset mapping relation from the trainer's dynamic capture data to the cabin virtual scene, the virtual button operation in the cabin virtual scene that corresponds to the trainer's finger position and finger gesture, and responding accordingly.
Optionally, the inertial motion capture glove provides motion data comprising: real-time angular velocity data for each finger joint.
Optionally, an infrared narrow-band pass filtering technology is adopted to filter redundant background information out of the captured image data, and a field-programmable gate array (FPGA) is adopted to preprocess the captured mark point image information.
Optionally, calculating various types of data by adopting heterogeneous processing modes of the CPU, the GPU and the APU, wherein the data at least comprises: the mark point two-dimensional image data, the action data, the two-dimensional coordinate data of the mark point, the point cloud coordinates and directions in the three-dimensional capturing space and the space position positioning data.
Further, in order to achieve the above object, the present invention further provides a cabin training system based on virtual reality, the cabin training system based on virtual reality includes: a motion capture server end and a content presentation end;
the motion capture server end at least comprises the following components:
the dynamic capturing camera is used for collecting image data of a trainer in practical training and filtering redundant background information in the shot image data by adopting an infrared narrow-band filtering technology;
the dynamic capture data processing server comprises an electronic computer, corresponding input and output equipment and dynamic capture data analysis and processing software running on the computer, wherein the input and output equipment includes, but is not limited to, a display, a keyboard and a mouse, the dynamic capture data analysis and processing software performs computation on the dynamic capture data transmitted by the dynamic capture cameras, and the display shows the running state of the dynamic capture software;
the data switch is used for realizing data exchange between the server-side components and the client-side components, among the client-side components, and among the server-side components;
the calibration rod is used for calibrating the dynamic capturing cameras so as to obtain the relative position relation among the dynamic capturing cameras in the dynamic capturing space;
the content presentation end at least comprises the following components:
the virtual environment rendering and synchronization server comprises an electronic computer and corresponding input and output equipment, wherein the electronic computer is used for rendering the virtual scene of the virtual reality cabin and synchronizing data in the virtual environment to a plurality of virtual reality head display clients so that multiple persons can train simultaneously, and the input and output equipment comprises a display, a keyboard and a mouse, the display being used to show a God's-eye (overview) view of the trainer's training; the virtual reality head display host comprises an electronic computer and corresponding input and output equipment, and is used for rendering the control keys and out-of-cabin scenery in the cabin virtual scene and transmitting them to the virtual reality head display for display; a plurality of virtual reality head display hosts can be added to the cabin training system so that multiple persons can train at the same time;
the virtual reality head display is connected with the virtual reality head display host and used for displaying the cabin virtual scene rendered by the virtual reality head display host to a trainer;
the inertial motion capturing glove is used for collecting motion data of hands of a trainer;
the simulation training object comprises at least one of a seat, a control rod, a throttle and a rudder and is used for simulating a cockpit.
Optionally, the dynamic capture space may be a large space or a small space, and is formed by a plurality of dynamic capture cameras surrounding the content presentation end.
Optionally, a rigid structure is bound on the virtual reality head display and the inertial motion capturing glove, and a plurality of reflective marker points are configured on the rigid structure.
Optionally, the dynamic capture camera is specifically configured to:
continuously shooting the training actions of a trainer wearing the reflective mark points, generating mark point two-dimensional image data synchronized with the other dynamic capture cameras, preprocessing the mark point two-dimensional image data to obtain two-dimensional coordinate data of the mark points, and sending the two-dimensional coordinate data to the dynamic capture data processing server through the data switch.
Optionally, the dynamic capture data processing server is specifically configured to:
receiving the two-dimensional coordinate data of the mark points sent by the dynamic capture camera, and solving the two-dimensional coordinate data of the mark points using multi-view computer vision technology to obtain point cloud coordinates and directions in the three-dimensional capture space;
identifying rigid body structures bound to different parts of the trainer according to the point cloud coordinates and directions, solving the positions and orientations of the rigid body structures in the capture space to obtain spatial position positioning data corresponding to the trainer's rigid bodies during practical training, and sending the spatial position positioning data to the virtual environment rendering and synchronization server;
the virtual environment rendering and synchronization server is specifically configured to:
receiving spatial position positioning data corresponding to the rigid body action of a trainer in practical training sent by the dynamic capture data processing server, and receiving action data of the hands of the trainer sent by the inertial action capture glove;
determining the hand position of the trainer in the virtual scene of the cabin according to the spatial position positioning data corresponding to the rigid body action of the trainer in practical training, and determining the finger position and the gesture of the trainer in the virtual scene of the cabin according to the action data provided by the inertial action capturing glove;
and determining, according to the preset mapping relation from the trainer's dynamic capture data to the cabin virtual scene, the virtual button operation in the cabin virtual scene that corresponds to the trainer's finger position and finger gesture, and responding accordingly.
Further, in order to achieve the above object, the present invention also provides a computer readable storage medium storing a virtual reality-based cabin practical training program which, when executed by a processor, implements the steps of the virtual reality-based cabin practical training method described in any of the above.
The invention uses a plurality of dynamic capture cameras to build the dynamic capture space for cabin training, which suits both large-space and small-space applications; the training space is flexible and can be freely extended from small-scale single-person training to large-scale multi-person training. The dynamic capture camera adopted by the invention uses an advanced high-resolution image sensor combined with a high-power infrared stroboscopic light source, greatly expanding the capture range. At the same time, redundant background information is filtered out with an infrared narrow-band pass filtering technology, and the captured mark point image information is preprocessed by an FPGA (field-programmable gate array), so that the camera can rapidly and accurately output clean two-dimensional coordinates of the captured mark points; this reduces the solving time on the server, greatly reduces system delay, and greatly improves motion capture precision. In addition, the invention adopts a heterogeneous CPU+GPU+APU processing mode to further improve the system's data processing capability. The virtual scene is lifelike and the operation feels real, improving the immersion and effect of virtual reality cabin training.
Drawings
FIG. 1 is a flow chart of an embodiment of a cabin training method based on virtual reality according to the present invention;
FIG. 2 is a schematic diagram of a training scenario of an embodiment of a virtual reality-based cabin training system of the present invention;
fig. 3 is a schematic diagram of functional modules of an embodiment of the cabin training system based on virtual reality according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The invention provides a cabin practical training method based on virtual reality.
Referring to fig. 1, fig. 1 is a flow chart of an embodiment of a cabin training method based on virtual reality according to the present invention. In this embodiment, the cabin training method based on virtual reality includes the following steps:
Step S10, a plurality of dynamic capture cameras in the dynamic capture space simultaneously and continuously shoot the actions of a trainer wearing reflective mark points during practical training, obtaining synchronized mark point two-dimensional image data, and motion data of the trainer's hands is obtained through the inertial motion capture gloves worn by the trainer;
In this embodiment, the trainer trains in a pre-built dynamic capture space. Referring to fig. 2, a plurality of dynamic capture cameras 10 are installed in the dynamic capture space; the cameras 10 surround the trainer 20's training position and simultaneously and continuously shoot the various actions of the trainer 20, who wears reflective mark points during training.
In this embodiment, the dynamic capture cameras capture the various actions of the trainer, generating two-dimensional image data of the reflective mark points worn by the trainer. The trainer actions to be captured are not limited in this embodiment; they may be, for example, the actions of the trainer's head and hands, or of the head, body and hands.
In addition, in this embodiment, in order to obtain more accurate data about the trainer's hand movements during training, the trainer wears inertial motion capture gloves, through which the motion data of the trainer's hands is obtained.
Optionally, the motion data of the trainer's hands includes real-time angular velocity data of each finger joint, among other readings; these data can be used to measure the trainer's hand gestures during training.
Step S20, preprocessing the mark point two-dimensional image data to obtain two-dimensional coordinate data of the mark points, and solving the two-dimensional coordinate data of the mark points using multi-view computer vision technology to obtain point cloud coordinates and directions in the three-dimensional capture space;
In this embodiment, after the mark point two-dimensional image data generated by the trainer's actions is captured by the dynamic capture cameras, it is preprocessed: first, the key points (namely, the reflective mark points) in the images collected by each dynamic capture camera at the same moment are identified, and then the coordinates of each reflective mark point within the same image are calculated, yielding two-dimensional coordinate data for all reflective mark points.
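As an illustration of this camera-side preprocessing, the following is a minimal sketch (an assumption for illustration, not the patent's FPGA implementation; the function name and threshold are hypothetical) of how bright reflective mark points can be reduced to two-dimensional centroid coordinates in one infrared frame:

```python
# Illustrative sketch only: reduce one IR camera frame to 2D mark point
# centroids by thresholding and blob labelling.
import numpy as np
from scipy import ndimage

def extract_marker_centroids(frame: np.ndarray, threshold: int = 200) -> np.ndarray:
    """Return an (N, 2) array of (x, y) sub-pixel centroids of bright blobs.

    `frame` is a single-channel 8-bit image; after infrared narrow-band
    filtering, the reflective mark points appear as near-saturated blobs.
    """
    mask = frame >= threshold                    # keep only bright marker pixels
    labels, n = ndimage.label(mask)              # connected-component labelling
    if n == 0:
        return np.empty((0, 2))
    # Intensity-weighted centroids give sub-pixel accuracy per blob.
    centroids = ndimage.center_of_mass(frame, labels, list(range(1, n + 1)))
    return np.array([(x, y) for (y, x) in centroids])
```

Because the infrared narrow-band pass filter already suppresses the background, a simple global threshold is typically enough to isolate the markers.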
In this embodiment, in order to recognize the trainer's actions, the two-dimensional coordinate data is further converted into three-dimensional coordinate data. First, the image key points collected by the dynamic capture cameras at the same moment are matched to identify each reflective mark point; then multi-view computer vision is used to solve the two-dimensional coordinate data of the mark points. Specifically, the coordinates and directions of the point cloud in the three-dimensional capture space are computed from the matching relationship between the two-dimensional point clouds in the images and the relative positions and orientations of the dynamic capture cameras, giving the point cloud coordinates and directions corresponding to each reflective mark point in the three-dimensional capture space.
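The multi-view solve can be pictured with a standard linear triangulation step; the sketch below is an assumption for illustration (the function name and calling convention are hypothetical), not the patent's solver. Each calibrated camera contributes a 3x4 projection matrix and one matched 2D centroid of the same marker:

```python
# Illustrative sketch only: linear (DLT) triangulation of one matched mark
# point observed by two or more calibrated cameras.
import numpy as np

def triangulate_marker(projections, points_2d):
    """Solve A.X = 0 for the homogeneous 3D point X seen by all cameras."""
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        rows.append(u * P[2] - P[0])   # each view adds two linear
        rows.append(v * P[2] - P[1])   # constraints on X
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]                         # right singular vector of smallest value
    return X[:3] / X[3]                # de-homogenize to (x, y, z)
```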
Step S30, identifying rigid body structures bound to different parts of a trainer according to the point cloud coordinates and the directions, and resolving the positions and the orientations of the rigid body structures in a capturing space to obtain spatial position positioning data corresponding to rigid body actions of the trainer in practical training;
In this embodiment, the two-dimensional coordinate data of the mark points includes a rigid body name or identification number and rigid body (coordinate) data. Therefore, from the solved point cloud coordinates and directions of each reflective mark point in the three-dimensional capture space, the rigid body structures bound to different parts of the trainer can be identified, and the position and orientation of each rigid body structure in the capture space can be solved. This determines the trainer's motion track in the dynamic capture space, locating the trainer's actions within it and yielding the spatial position positioning data corresponding to the trainer's rigid body actions during practical training.
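One common way to solve a rigid body's position and orientation from matched marker points is the Kabsch algorithm; the sketch below is illustrative only (the marker layout `model` is hypothetical data), and the patent does not specify this particular solver:

```python
# Illustrative sketch only: solving one rigid body's position and
# orientation with the Kabsch algorithm. `model` is the known marker
# layout of the rigid body in its local frame; `observed` holds the
# matched reconstructed 3D points in capture space.
import numpy as np

def solve_rigid_pose(model: np.ndarray, observed: np.ndarray):
    """Return (R, t) such that observed ~= model @ R.T + t."""
    cm, co = model.mean(axis=0), observed.mean(axis=0)
    H = (model - cm).T @ (observed - co)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t
```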
Step S40, determining the hand position of the trainer in the cabin virtual scene according to the spatial position positioning data corresponding to the rigid body action of the trainer in practical training, and determining the finger position and the gesture of the trainer in the cabin virtual scene according to the action data provided by the inertial action capturing glove;
In this embodiment, after the spatial position positioning data corresponding to the trainer's rigid body actions (including finger operations) is obtained, the position of the trainer's fingers in the cabin virtual scene can be further calculated. Because a rigid body structure carrying reflective mark points is mounted on the trainer's hand or glove, the finger position can be located by solving the spatial position positioning data corresponding to the trainer's rigid body actions during practical training.
In addition, in this embodiment, the finger gesture information of the trainer in the cabin virtual scene can be further obtained from the hand motion data collected by the gloves (including the real-time angular velocity data of each finger joint). This embodiment combines inertial and optical tracking, which further improves the precision of finger positioning.
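As a rough picture of this inertial-plus-optical combination, the sketch below integrates the glove's per-joint angular velocity into joint angles while anchoring the hand with the optically tracked rigid-body position; the class name, integration scheme and joint limits are assumptions for illustration:

```python
# Illustrative sketch only: fusing glove inertial data with the optical
# rigid-body solve. Per-joint angular velocities are integrated into
# joint angles while the optically tracked wrist anchors the hand in
# capture space.
import numpy as np

class FingerPoseEstimator:
    def __init__(self, n_joints: int):
        self.angles = np.zeros(n_joints)       # accumulated joint angles (rad)

    def update(self, gyro: np.ndarray, dt: float, wrist_pos: np.ndarray):
        """gyro: per-joint angular velocity (rad/s) from the inertial glove;
        wrist_pos: wrist position (m) from the optical rigid-body solve."""
        self.angles += gyro * dt                            # dead reckoning
        self.angles = np.clip(self.angles, 0.0, np.pi / 2)  # anatomical limits
        return wrist_pos, self.angles          # hand pose fed to the VR scene
```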
Step S50, determining, according to the preset mapping relation from the trainer's dynamic capture data to the cabin virtual scene, the virtual button operation in the cabin virtual scene that corresponds to the trainer's finger position and finger gesture, and responding accordingly.
In this embodiment, virtual reality technology maps the trainer's hand motion into the cabin virtual scene. Through the virtual reality head display, the trainer sees the cabin virtual scene and the motion of the virtual hands within it, and carries out simulation training by adjusting the motion of those virtual hands.
In this embodiment, a mapping relation from the trainer's dynamic capture data (covering the trainer's various actions) to the cabin virtual scene is preset. Once the trainer's finger position and finger gesture have been calculated, the virtual button operation corresponding to the trainer's action in the cabin virtual scene can be determined from this mapping relation, and the corresponding response performed, such as accelerating, decelerating or turning.
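A simple form of such a preset mapping is a hit test between the fingertip and virtual button volumes; in the sketch below the button names, positions and radii are hypothetical, and the patent only describes the mapping as preset:

```python
# Illustrative sketch only: hit-testing the fingertip against virtual
# buttons modelled as small spheres in cabin-scene coordinates.
import numpy as np

BUTTONS = {
    "throttle_up": {"center": np.array([0.42, 1.05, 0.30]), "radius": 0.02},
    "gear_toggle": {"center": np.array([0.55, 1.00, 0.28]), "radius": 0.02},
}

def match_button(fingertip: np.ndarray, finger_flexed: bool):
    """Return the name of the pressed virtual button, or None."""
    if not finger_flexed:                      # gesture gate: finger must press
        return None
    for name, button in BUTTONS.items():
        if np.linalg.norm(fingertip - button["center"]) <= button["radius"]:
            return name                        # the scene plays the mapped response
    return None
```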
As shown in fig. 2, in this embodiment, to give the trainer better immersion and training effect, a physical simulation cockpit is adopted, comprising a seat 30, a control rod 40 and a throttle 50, all of which are physical objects; operating these physical objects gives the trainer a stronger sense of actually being in the cockpit.
In addition, to improve motion capture precision, this embodiment preferably adopts an infrared narrow-band pass filtering technology to filter redundant background information out of the captured image data, and a field-programmable gate array (FPGA) to preprocess the captured mark point image information.
The invention uses a plurality of dynamic capture cameras to build the dynamic capture space for cabin training, which suits both large-space and small-space applications; the training space is flexible and can be freely extended from small-scale single-person training to large-scale multi-person training. The dynamic capture camera adopted by the invention uses an advanced high-resolution image sensor combined with a high-power infrared stroboscopic light source, greatly expanding the capture range. At the same time, redundant background information is filtered out with an infrared narrow-band pass filtering technology, and the captured mark point image information is preprocessed by an FPGA, so that the dynamic capture camera can rapidly and accurately output clean two-dimensional coordinates of the captured mark points; this reduces the solving time on the server, greatly reduces system delay, and greatly improves motion capture precision. In addition, the invention adopts a heterogeneous CPU+GPU+APU processing mode to further improve the system's data processing capability. The virtual scene is lifelike and the operation feels real, improving the immersion and effect of virtual reality cabin training.
The invention provides a cabin practical training system based on virtual reality.
Referring to fig. 3, fig. 3 is a schematic diagram of functional modules of an embodiment of a cabin training system based on virtual reality according to the present invention. In this embodiment, the cabin training system based on virtual reality includes:
(1) The motion capture server end at least comprises the following components:
The dynamic capture camera 101 is used for collecting image data of the trainer during practical training and filtering redundant background information out of the captured image data with an infrared narrow-band filtering technology. Optionally, in a specific embodiment, the motion capture server end further comprises a three-dimensional pan-tilt mount, which uses a high-strength clamp and a bevel-mouth tip to fix the dynamic capture camera at a specific installation position.
The dynamic capture data processing server 102 comprises an electronic computer and corresponding input and output equipment, the input and output equipment comprising a display, a keyboard and a mouse. The dynamic capture data processing server 102 also includes dynamic capture data analysis and processing software running on the computer, which performs computation on the dynamic capture data transmitted by the dynamic capture cameras; the display shows the running state of the dynamic capture software.
the dynamic capture data processing server in this embodiment includes a computer and dynamic capture data analysis processing software, and the input/output device refers to all devices capable of realizing computer control, which may be a pan-tilt (cloud server) or other remote control devices, or may be devices such as a display, a keyboard, and a mouse.
A data switch 103, configured to implement data exchange between the server-side component and the client-side component, between the client-side related components, and between the server-side related components;
the calibration rod 104 is used for calibrating the dynamic capturing cameras to obtain the relative position relation among the dynamic capturing cameras in the dynamic capturing space;
In this embodiment, positioning may be carried out by passive optical tracking: the dynamic capture camera 101 acquires image data of the infrared light reflected by the reflective mark points bound to each part of the trainer's body, and transmits the data to the dynamic capture data processing server 102 for processing. Alternatively, positioning may be carried out by active optical tracking: the dynamic capture camera 101 captures image data of the LED infrared light emitted by active optical rigid bodies (mark points); without depending on reflection, this continuously and stably outputs high-precision positioning data and achieves a longer capture distance. In addition, stable and reliable active optical rigid bodies can be fixed directly on the surfaces of objects such as head displays and VR props.
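For the calibration step, the relative position relation between two cameras can be recovered from matched wand-marker observations; the sketch below, using OpenCV's essential-matrix routines, is an assumption for illustration rather than the patent's calibration method (the translation comes out at unit scale, and the wand's known marker spacing would then fix the metric scale):

```python
# Illustrative sketch only: recovering the relative pose of camera B with
# respect to camera A from matched 2D observations of the calibration
# wand's markers. `K` is the shared 3x3 intrinsic matrix.
import numpy as np
import cv2

def relative_camera_pose(pts_a: np.ndarray, pts_b: np.ndarray, K: np.ndarray):
    """pts_a, pts_b: (N, 2) float arrays of matched wand-marker centroids."""
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
    return R, t   # rotation and unit-scale translation of B w.r.t. A
```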
(2) The content presentation end at least comprises the following components:
The virtual environment rendering and synchronization server 201 comprises an electronic computer and corresponding input and output equipment, including but not limited to a display, a keyboard and a mouse; the display shows a God's-eye (overview) view of the trainer's training. The virtual environment rendering and synchronization server 201 is used for rendering the virtual scene of the virtual reality cabin and synchronizing data in the virtual environment to a plurality of virtual reality head displays, so that multiple persons can train at the same time.
In this embodiment, during cabin training, the wearers of the head displays can see one another's operating states through synchronization; the single-machine state becomes a multi-person interactive state. For example, several trainers can each see the others' positions in the virtual scene and whether they are ready to attack, and others in the same virtual scene can use this information to avoid collisions, attacks and the like, which benefits training such as attack and emergency forced landing.
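A minimal sketch of this synchronization might broadcast each trainer's pose to the other head display hosts; the packet layout, port and transport below are assumptions for illustration only:

```python
# Illustrative sketch only: the synchronization server broadcasting one
# trainer's pose to the other head display hosts over UDP so that everyone
# sees each other in the shared cabin scene.
import json
import socket

SYNC_PORT = 47000  # hypothetical port

def broadcast_pose(sock: socket.socket, client_hosts, trainer_id: str,
                   position, orientation):
    packet = json.dumps({
        "id": trainer_id,
        "pos": list(position),      # cabin-scene coordinates (x, y, z)
        "rot": list(orientation),   # orientation quaternion (x, y, z, w)
    }).encode("utf-8")
    for host in client_hosts:       # address of each head display host
        sock.sendto(packet, (host, SYNC_PORT))
```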
The virtual reality head display host 202 comprises an electronic computer and corresponding input/output equipment; the electronic computer is used for rendering the control keys and out-of-cabin scenery in the cabin virtual scene and transmitting them to the virtual reality head display for display.
In this embodiment, the virtual reality head display host likewise comprises a computer and corresponding input/output devices, including a display that can show the virtual image seen by the trainer. The head display host may also be built into a virtual reality helmet device, in which case the helmet device contains both the virtual reality head display host and the virtual reality head display.
The virtual reality head display 203 is connected with the virtual reality head display host 202 and is used for displaying the cabin virtual scene rendered by the virtual reality head display host 202 to a trainer;
an inertial motion capture glove 204 for collecting motion data of a trainer's hand;
the simulated training object 205 comprises at least one of a seat, a joystick, a throttle and a rudder for simulating a cockpit.
In this embodiment, the trainer wears the inertial motion capture gloves 204 on the hands and the virtual reality head display 203 on the head; a rigid structure is bound to the gloves 204 and to the head display 203, and a plurality of reflective balls are mounted on each rigid structure as reflective mark points.
The cabin practical training system of this embodiment takes the optical spatial-positioning motion capture system as its core, supplemented with customized immersive scene content, realizing virtual reality applications for one or more persons in the same or different scenes within a large space. The virtual scene is lifelike, and the simulated training object 205 replicates the genuine articles (seats, control rods, throttles, rudders and the like) at 1:1 scale; the operation and body feel are realistic, the immersion is strong, and the effect is good.
In this embodiment, the dynamic capture space is formed by a plurality of dynamic capture cameras 10 surrounding the simulated training object. As shown in fig. 2, the trainer 20 sits on the training seat 30 and performs driving simulation training by manually operating the control rod 40, the throttle 50 and so on. During training, the dynamic capture cameras 10 simultaneously and continuously shoot the trainer 20's training actions and map them synchronously into the cabin virtual scene, realizing the combination of and interaction between the virtual and the real. The training space required by the invention is flexible in size: it can be built indoors or outdoors and freely expanded from a small space to a large one, and an outdoor large-space training scene can also be built to support simultaneous online training of many people.
The cabin training system of this embodiment specifically takes an optical spatial-positioning motion capture system as its core: a plurality of high-frame-rate industrial cameras installed above the space capture the two-dimensional positions of the optical mark points on the target object and output them to algorithm software, which then solves the target object's three-dimensional position and motion gesture in space. This information is the bridge connecting the virtual and the real, and the entry point for human-machine interaction.
In addition, this embodiment can solve and output the target's three-dimensional spatial position data in real time from the synchronized two-dimensional data captured at high speed by the optical dynamic capture cameras. By identifying the optical mark point structures bound to different parts of the moving object, the position and orientation of each target object in space are solved and its motion track in the space is determined, and the result is imported in real time into 3D software or a VR real-time engine, realizing the combination of and interaction between the virtual and the real.
Further, in one embodiment, the dynamic camera 101 is specifically configured to:
continuously shooting the training actions of a trainer wearing the reflective mark points, generating mark point two-dimensional image data synchronized with the other dynamic capture cameras, preprocessing the mark point two-dimensional image data to obtain two-dimensional coordinate data of the mark points, and sending the two-dimensional coordinate data to the dynamic capture data processing server 102 through the data switch 103;
further, in one embodiment, the dynamic capture data processing server 102 is specifically configured to:
receiving the two-dimensional coordinate data of the mark points sent by the dynamic capture camera 101, and solving the two-dimensional coordinate data of the mark points using multi-view computer vision technology to obtain point cloud coordinates and directions in the three-dimensional capture space;
according to the point cloud coordinates and the direction, identifying rigid body structures bound at different positions of a trainer, resolving the positions and the orientations of the rigid body structures in a capturing space, obtaining spatial position positioning data corresponding to rigid body actions of the trainer in practical training, and sending the spatial position positioning data to a virtual environment rendering and synchronization server 201;
further, in one embodiment, the virtual environment rendering and synchronization server 201 is specifically configured to:
receiving spatial position positioning data corresponding to the rigid body motion of the trainer during practical training sent by the dynamic capture data processing server 102, and motion data of the hands of the trainer sent by the inertial motion capture glove 204;
determining the finger position of the trainer in the cabin virtual scene according to the spatial position positioning data corresponding to the rigid body action of the trainer during practical training, and determining the finger gesture of the trainer in the cabin virtual scene according to the action data of the hand of the trainer;
and determining the corresponding virtual button operation in the cabin virtual scene of the finger position and the finger gesture of the trainer according to the preset mapping relation from the dynamic capturing data of the trainer to the cabin virtual scene, and performing corresponding response.
Since this embodiment is substantially the same as the embodiments of the virtual reality-based cabin practical training method described above, the description of the virtual reality-based cabin practical training system is not repeated here.
The invention uses a plurality of dynamic capture cameras to build the dynamic capture space for cabin training, which suits both large-space and small-space applications; the training space is flexible and can be freely extended from small-scale single-person training to large-scale multi-person training. The dynamic capture camera adopted by the invention uses an advanced high-resolution image sensor combined with a high-power infrared stroboscopic light source, greatly expanding the capture range. At the same time, redundant background information is filtered out with an infrared narrow-band pass filtering technology, and the captured mark point image information is preprocessed by an FPGA, so that the dynamic capture camera can rapidly and accurately output clean two-dimensional coordinates of the captured mark points; this reduces the solving time on the server, greatly reduces system delay, and greatly improves motion capture precision. In addition, the invention adopts a heterogeneous CPU+GPU+APU processing mode to further improve the system's data processing capability. The virtual scene is lifelike and the operation feels real, improving the immersion and effect of virtual reality cabin training.
The invention also provides a non-volatile computer readable storage medium.
The computer readable storage medium of this embodiment stores a virtual reality-based cabin practical training program which, when executed by a processor, implements the steps of the virtual reality-based cabin practical training method described in any of the above. Since the content implemented by the program is substantially the same as the method embodiments described above, it is not repeated here.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM) comprising several instructions for causing a terminal (which may be a computer, a server or a network device, etc.) to perform the method according to the embodiments of the present invention.
While the embodiments of the present invention have been described above with reference to the drawings, the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Those of ordinary skill in the art may make many modifications without departing from the spirit of the present invention and the scope of the appended claims; any equivalent structures or equivalent process changes made using the description and drawings of the present invention, or any direct or indirect application to other relevant technical fields, fall within the scope of protection of the present invention.

Claims (10)

1. The cabin practical training method based on the virtual reality is characterized by comprising the following steps of:
continuously shooting, through a plurality of dynamic capture cameras in the dynamic capture space, the actions of a trainer wearing reflective mark points during practical training to obtain synchronized mark point two-dimensional image data, and obtaining motion data of the trainer's hands through inertial motion capture gloves worn by the trainer, wherein the trainer uses real objects to simulate driving training, the real objects comprising at least one of a seat, a control rod, a throttle and a rudder;
preprocessing the mark point two-dimensional image data to obtain two-dimensional coordinate data of the mark points, and solving the two-dimensional coordinate data of the mark points using multi-view computer vision technology to obtain point cloud coordinates and directions in the three-dimensional capture space; wherein preprocessing the mark point two-dimensional image data to obtain two-dimensional coordinate data of the mark points comprises: identifying key points in the images collected by each dynamic capture camera at the same moment, and then calculating the coordinates of the reflective mark points within the same image; and solving the two-dimensional coordinate data of the mark points using multi-view computer vision technology to obtain point cloud coordinates and directions in the three-dimensional capture space comprises: computing the coordinates and directions of the point cloud in the three-dimensional capture space from the matching relationship between the two-dimensional point clouds in the images and the relative positions and orientations of the dynamic capture cameras;
identifying rigid body structures bound to different parts of the trainer according to the point cloud coordinates and directions, and solving the positions and orientations of the rigid body structures in the capture space to obtain spatial position positioning data corresponding to the rigid bodies of the trainer during practical training;
determining the hand position of the trainer in the cabin virtual scene according to the spatial position positioning data corresponding to the rigid body action of the trainer in practical training, and determining the finger position and the gesture of the trainer in the cabin virtual scene according to the action data provided by the inertial action capturing glove;
and determining, according to the preset mapping relation from the trainer's dynamic capture data to the cabin virtual scene, the virtual button operation in the cabin virtual scene that corresponds to the trainer's finger position and finger gesture, and responding accordingly.
2. The virtual reality-based cabin training method of claim 1, wherein the inertial motion capture glove provides motion data comprising: real-time angular velocity data for each finger joint.
3. The virtual reality-based cabin practical training method of claim 1, wherein an infrared narrow-band pass filtering technology is adopted to filter redundant background information out of the captured image data, and a field-programmable gate array (FPGA) is adopted to preprocess the captured mark point image information.
4. A virtual reality based cabin training method according to any one of claims 1-3, characterized in that heterogeneous processing modes of CPU, GPU and APU are used to calculate various types of data, wherein the data at least comprises: the mark point two-dimensional image data, the action data, the two-dimensional coordinate data of the mark point, the point cloud coordinates and directions in the three-dimensional capturing space and the space position positioning data.
5. Cabin practical training system based on virtual reality, characterized in that, cabin practical training system based on virtual reality includes:
a motion capture server end and a content presentation end;
the motion capture server end at least comprises the following components:
the dynamic capturing camera is used for collecting image data of a trainer in practical training and filtering redundant background information in the shot image data by adopting an infrared narrow-band filtering technology;
the dynamic capture data processing server comprises an electronic computer, corresponding input and output equipment and dynamic capture data analysis processing software running on the computer, wherein the input and output equipment comprises a display, a keyboard and a mouse, the dynamic capture data analysis processing software is used for carrying out operation processing on dynamic capture data transmitted by a dynamic capture camera, and the display is used for displaying the running condition of the dynamic capture software;
the calibration rod is used for calibrating the dynamic capturing cameras so as to obtain the relative position relation among the dynamic capturing cameras in the dynamic capturing space;
the motion capture server end further comprises: a three-dimensional pan-tilt mount, which uses a high-strength clamp and a bevel-mouth tip and is used for fixing the dynamic capture camera at a specific installation position;
the content presentation end at least comprises the following components:
the virtual environment rendering and synchronization server comprises an electronic computer and corresponding input and output equipment, wherein the electronic computer is used for rendering the virtual scene of the virtual reality cabin and synchronizing data in the virtual environment to a plurality of virtual reality head displays so that multiple persons can train simultaneously, the input and output equipment comprises a display, a keyboard and a mouse, and the display is used to show a God's-eye (overview) view of the trainer's training;
the virtual reality head display host comprises an electronic computer and corresponding input and output equipment, and is used for rendering the control keys and out-of-cabin scenery in the cabin virtual scene and transmitting them to the virtual reality head display for display;
the virtual reality head display is connected with the virtual reality head display host and used for displaying the cabin virtual scene rendered by the virtual reality head display host to a trainer;
the inertial motion capturing glove is used for collecting motion data of hands of a trainer;
the simulation training object comprises at least one of a seat, a control rod, a throttle and a rudder and is used for simulating a cockpit.
6. The virtual reality based cabin training system of claim 5, wherein the dynamic capture space may be a large space or a small space formed by a plurality of dynamic capture cameras surrounding the content presentation end.
7. The virtual reality based cabin training system of claim 5, wherein a rigid body structure is bound to the virtual reality head display and the inertial motion capture glove, the rigid body structure being configured with a plurality of reflective marker points.
8. The virtual reality based cabin training system of any one of claims 5-7, wherein the dynamic capture camera is specifically configured to:
continuously shooting the training actions of a trainer wearing the reflective mark points, generating mark point two-dimensional image data synchronized with the other dynamic capture cameras, preprocessing the mark point two-dimensional image data to obtain two-dimensional coordinate data of the mark points, and sending the two-dimensional coordinate data to the dynamic capture data processing server through the data switch.
9. The virtual reality based cabin training system of claim 8, wherein the dynamic capture data processing server is specifically configured to:
receiving the two-dimensional coordinate data of the mark points sent by the dynamic capture camera, and solving the two-dimensional coordinate data of the mark points using multi-view computer vision technology to obtain point cloud coordinates and directions in the three-dimensional capture space;
identifying rigid body structures bound to different parts of a trainer according to the point cloud coordinates and the directions, resolving the positions and the orientations of the rigid body structures in a capturing space, obtaining spatial position positioning data corresponding to the rigid bodies of the trainer in practical training, and sending the spatial position positioning data to the virtual environment rendering and synchronization server;
the virtual environment rendering and synchronization server is specifically configured to:
receiving spatial position positioning data corresponding to the rigid body action of a trainer in practical training sent by the dynamic capture data processing server, and receiving action data of the hands of the trainer sent by the inertial action capture glove;
determining the hand position of the trainer in the virtual scene of the cabin according to the spatial position positioning data corresponding to the rigid body action of the trainer in practical training, and determining the finger position and the gesture of the trainer in the virtual scene of the cabin according to the action data provided by the inertial action capturing glove;
and determining, according to the preset mapping relation from the trainer's dynamic capture data to the cabin virtual scene, the virtual button operation in the cabin virtual scene that corresponds to the trainer's finger position and finger gesture, and responding accordingly.
10. Computer-readable storage medium, characterized in that it has stored thereon a virtual reality-based cabin training program, which, when executed by a processor, implements the steps of the virtual reality-based cabin training method according to any one of claims 1-4.
CN201910879257.1A 2019-09-18 2019-09-18 Cabin practical training method, system and storage medium based on virtual reality Active CN110610547B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910879257.1A CN110610547B (en) 2019-09-18 2019-09-18 Cabin practical training method, system and storage medium based on virtual reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910879257.1A CN110610547B (en) 2019-09-18 2019-09-18 Cabin practical training method, system and storage medium based on virtual reality

Publications (2)

Publication Number Publication Date
CN110610547A CN110610547A (en) 2019-12-24
CN110610547B 2024-02-13

Family

ID=68891558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910879257.1A Active CN110610547B (en) 2019-09-18 2019-09-18 Cabin practical training method, system and storage medium based on virtual reality

Country Status (1)

Country Link
CN (1) CN110610547B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111338481B (en) * 2020-02-28 2023-06-23 武汉灏存科技有限公司 Data interaction system and method based on whole body dynamic capture
CN111988431A (en) * 2020-03-06 2020-11-24 王春花 Data processing method and device based on VR interaction
CN112308983B (en) * 2020-10-30 2024-03-29 北京虚拟动点科技有限公司 Virtual scene arrangement method and device, electronic equipment and storage medium
CN112781589B (en) * 2021-01-05 2021-12-28 北京诺亦腾科技有限公司 Position tracking equipment and method based on optical data and inertial data
CN112908084A (en) * 2021-02-04 2021-06-04 三一汽车起重机械有限公司 Simulation training system, method and device for working machine and electronic equipment
CN113192382A (en) * 2021-03-19 2021-07-30 徐州九鼎机电总厂 Vehicle mobility simulation system and method based on immersive human-computer interaction
CN113398578B (en) * 2021-06-03 2023-03-24 Oppo广东移动通信有限公司 Game data processing method, system, device, electronic equipment and storage medium
CN113552950B (en) * 2021-08-06 2022-09-20 上海炫伍科技股份有限公司 Virtual and real interaction method for virtual cockpit
CN114360312A (en) * 2021-12-17 2022-04-15 江西洪都航空工业集团有限责任公司 Ground service maintenance training system and method based on augmented reality technology
CN114840079B (en) * 2022-04-27 2023-03-10 西南交通大学 High-speed rail driving action simulation virtual-real interaction method based on gesture recognition
CN116661600A (en) * 2023-06-02 2023-08-29 南开大学 Multi-person collaborative surgical virtual training system based on multi-view behavior identification

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105374251A (en) * 2015-11-12 2016-03-02 中国矿业大学(北京) Mine virtual reality training system based on immersion type input and output equipment
CN106710362A (en) * 2016-11-30 2017-05-24 中航华东光电(上海)有限公司 Flight training method implemented by using virtual reality equipment
CN107221223A (en) * 2017-06-01 2017-09-29 北京航空航天大学 A virtual reality aircraft cockpit system with force/haptic feedback
CN107341832A (en) * 2017-04-27 2017-11-10 北京德火新媒体技术有限公司 A multi-view switching camera system and method based on an infrared positioning system
US9868449B1 (en) * 2014-05-30 2018-01-16 Leap Motion, Inc. Recognizing in-air gestures of a control object to control a vehicular control system
CN109313484A (en) * 2017-08-25 2019-02-05 深圳市瑞立视多媒体科技有限公司 Virtual reality interactive system, method and computer storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160225188A1 (en) * 2015-01-16 2016-08-04 VRstudios, Inc. Virtual-reality presentation volume within which human participants freely move while experiencing a virtual environment
US10410089B2 (en) * 2018-01-19 2019-09-10 Seiko Epson Corporation Training assistance using synthetic images

Similar Documents

Publication Publication Date Title
CN110610547B (en) Cabin practical training method, system and storage medium based on virtual reality
CN107221223B (en) Virtual reality cockpit system with force/tactile feedback
CN106527177B (en) The multi-functional one-stop remote operating control design case of one kind and analogue system and method
Higuchi et al. Flying head: a head motion synchronization mechanism for unmanned aerial vehicle control
US20130063560A1 (en) Combined stereo camera and stereo display interaction
CN105252532A (en) Method of cooperative flexible attitude control for motion capture robot
KR101671320B1 (en) Virtual network training processing unit included client system of immersive virtual training system that enables recognition of respective virtual training space and collective and organizational cooperative training in shared virtual workspace of number of trainees through multiple access and immersive virtual training method using thereof
CN110515455B (en) Virtual assembly method based on Leap Motion and cooperation in local area network
CN104133378A (en) Real-time simulation platform for airport activity area monitoring guidance system
CN112198959A (en) Virtual reality interaction method, device and system
KR102188313B1 (en) Multi-model flight training simulator using VR
Wang et al. Augmented reality in maintenance training for military equipment
WO2013111146A4 (en) System and method of providing virtual human on human combat training operations
CN103543827A (en) Immersive outdoor activity interactive platform implement method based on single camera
CN105183161A (en) Synchronized moving method for user in real environment and virtual environment
CN105632271B (en) A kind of low-speed wind tunnel model flight tests ground simulation training system
Viertler et al. Requirements and design challenges in rotorcraft flight simulations for research applications
CN110148330A (en) Around machine check training system before a kind of Aircraft based on virtual reality
Su et al. Development of an effective 3D VR-based manipulation system for industrial robot manipulators
Patrão et al. A virtual reality system for training operators
CN103680248A (en) Ship cabin virtual reality simulation system
CN111369861A (en) Virtual reality technology-based simulated fighter plane driving system and method
KR101831364B1 (en) Flight training apparatus using flight simulators linked to exercise data
CN113467502A (en) Unmanned aerial vehicle driving examination system
RU136618U1 (en) SYSTEM OF IMITATION OF THE EXTERNAL VISUAL SITUATION IN ON-BOARD MEANS FOR OBSERVING THE EARTH SURFACE OF THE SPACE SIMULATOR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240109

Address after: 909-175, 9th Floor, Building 17, No. 30 Shixing Street, Shijingshan District, Beijing, 100043 (Cluster Registration)

Applicant after: Ruilishi Multimedia Technology (Beijing) Co.,Ltd.

Address before: Room 9-12, 10th floor, block B, building 7, Shenzhen Bay science and technology ecological park, 1819 Shahe West Road, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN REALIS MULTIMEDIA TECHNOLOGY Co.,Ltd.

GR01 Patent grant