CN107134194A - Immersion vehicle simulator - Google Patents

Immersion vehicle simulator

Info

Publication number
CN107134194A
CN107134194A (application CN201710370927.8A)
Authority
CN
China
Prior art keywords
virtual reality
module
video
vehicle simulator
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710370927.8A
Other languages
Chinese (zh)
Inventor
钟秋发
锡泊
黄煦
李晓阳
高晓光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Zhongke Hengyun Software Technology Co Ltd
Original Assignee
Hebei Zhongke Hengyun Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Zhongke Hengyun Software Technology Co Ltd filed Critical Hebei Zhongke Hengyun Software Technology Co Ltd
Priority to CN201710370927.8A priority Critical patent/CN107134194A/en
Publication of CN107134194A publication Critical patent/CN107134194A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00Simulators for teaching or training purposes
    • G09B9/02Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/04Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G09B9/05Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles the view from a vehicle being simulated

Abstract

The present invention is an immersive vehicle simulator and belongs to the technical field of vehicle simulators. The immersive vehicle simulator includes a head-mounted virtual reality device with a 3D binocular camera, a tracking and positioning device, a video processing device, a scene processing device and a display device, wherein the 3D binocular camera is connected with the video processing device, and the scene processing device is connected with the virtual reality device, the tracking and positioning device and the display device respectively. The immersive vehicle simulator of the present invention uses mixed reality technology: the system creates a vehicle-cabin scene in which virtual and real imagery are fused, and through the camera natural interaction between the cockpit and visible body parts such as the trainee's hands can be realised, strengthening the trainee's perception of the terrain and of the cabin environment while simulating vehicle operation.

Description

Immersive vehicle simulator
Technical field
The present invention relates to the technical field of vehicle simulators, and in particular to an immersive vehicle simulator.
Background technology
With the continuing development of computer technology, virtual reality technology is increasingly used for war simulation in military exercises and training. Current vehicle simulators based on virtual reality technology have several shortcomings: although the human hand can be materialised in the virtual scene, other equipment or other parts of the body cannot; illumination is simulated only to a certain degree and differs considerably from real lighting; head-motion tracking is not sufficiently advanced; and video shot from several fixed viewpoints must be stitched together, so the stitched video contains errors in angle and depth relative to naked-eye viewing that are difficult to eliminate, leading to problems such as an inability to look downward and a limited field of view. These problems cause the interaction between the person and the cockpit to differ greatly from the real situation, the quality of immersion is poor, and the practicality of the vehicle simulator is reduced.
Summary of the invention
In order to solve the problems that, in prior-art vehicle simulators for military exercises or training, the interaction between the person and the cockpit differs greatly from the real situation and the quality of immersion is poor, the present invention provides an immersive vehicle simulator that solves the above problems.
The technical solution for achieving the object of the invention is as follows:
An immersive vehicle simulator, including a virtual reality device, a tracking and positioning device, a video processing device, a scene processing device and a display device, wherein the virtual reality device is connected with the video processing device, and the scene processing device is connected with the virtual reality device, the tracking and positioning device and the display device respectively.
In a further technical solution, the virtual reality device is head-mounted and has a 3D binocular camera.
In a further technical solution, the virtual reality device with the 3D binocular camera and the tracking and positioning device are mounted at the top and in the middle of the seat.
In a further technical solution, a magnetometer, a gyroscope and an accelerometer are arranged in the head-mounted virtual reality device.
In a further technical solution, the scene processing device is provided with:
A virtual scene driving module: generates the whole training ground containing battlefield information, or a simulated battlefield scene;
A stereoscopic vision generation module: renders the simulated scene generated by the virtual scene driving module stereoscopically;
A virtual reality scene tracking module: sets the position and attitude of the camera in the stereoscopic vision generation module according to six-degree-of-freedom information, simulating the movement trajectory of the operator's head;
A real-scene acquisition module: calls the camera software development kit, creates a virtual camera object, sets the required parameters and performs real-scene acquisition;
A foreground extraction module: extracts the foreground;
A video compression module: compresses JPEG images;
A virtual-real fusion module: fuses the processed video into the virtual reality scene, ensuring that the frame rate, size and position of the video are adapted to the virtual scene.
The foreground extraction module includes: a 2D four-point texture module, for obtaining the positional information of the foreground video image other than depth; a contour extraction module based on Hough-transformed depth-image connected domains, for obtaining the depth information of the foreground video image; an extraction module based on markers and skin-colour detection, for obtaining the limb images in the non-intercepted connected region; and a foreground extraction module based on model masks and spatial multiplexing positioning. The foreground image is extracted from the information obtained by the 2D four-point texture module and the contour extraction module, and is then merged with the image from the module based on model masks and spatial multiplexing positioning to complete the foreground extraction.
In a further technical solution, the foreground extraction module based on model masks and spatial multiplexing positioning uses the six-degree-of-freedom information of the real camera and of the virtual camera to apply an equal-proportion model mask to the scene.
In a further technical solution, the video compression module is optimised on the basis of the standard JPEG image compression framework, ensuring the compatibility of the algorithm.
In a further technical solution, the virtual-real fusion module fuses the video into the virtual reality scene after three-dimensional geometric registration together with occlusion handling, edge-smoothing transitions, frame-interpolation compensation, illumination consistency and shadow synthesis, ensuring that the frame rate, size and position of the video are adapted to the virtual scene.
In a further technical solution, the six-degree-of-freedom information means that, within the trackable range of the tracking and positioning device, the six-degree-of-freedom information of the head position is mainly generated by the tracking and positioning device capturing video of the operator's head; otherwise the six-degree-of-freedom information of the head position is obtained from the magnetometer, gyroscope and accelerometer built into the head-mounted virtual reality device.
The beneficial effects of the present invention are as follows:
Through mixed-reality cockpit technology, in which the camera is fixed on the virtual reality helmet itself, the present invention accurately transfers the spatial positions of the human body and of the operating platform into the virtual scene, and adds the real tactile experience of the body manipulating the operating platform. The viewpoint follows head rotation, largely reproducing true human vision and in particular solving the downward-view problem of some vehicles, and technical difficulties such as virtual-real illumination consistency and distortion elimination are overcome. This compensates for the deficiencies of current large vehicle simulator systems and improves the trainee's sense of immersion and realism, bringing simulated training closer to real training and enhancing the practicality of the vehicle simulation system. Theoretical knowledge is consolidated at minimum cost, operating and handling ability is improved, single-vehicle and formation training is turned into an engaging, highly interactive mode with a strong sense of immersion and presence, the limitations of the training ground can be overcome and indoor space used to the maximum, and training safety and quality are improved while training costs are reduced.
The software system of the present invention relies on a three-dimensional geographic information platform and makes comprehensive use of technologies such as three-dimensional modelling, virtual simulation, massive data management and three-dimensional spatial analysis. Taking globally shared training information under an immersive experience, unified allocation of training resources and scientific support for operational command as its goals, it integrates knowledge from multiple high-technology fields such as information processing and automatic control based on the three-dimensional geographic information platform. Together with weapon technology simulation, weapon system simulation and combat simulation, it plays a significant role in military training, weapon systems-of-systems, operational command and programme planning.
The present invention uses mixed reality technology to create a vehicle-cabin scene in which virtual and real imagery are fused; through the camera, natural interaction between the cockpit and visible body parts such as the trainee's hands can be realised, strengthening the trainee's perception of the terrain and of the cabin environment while simulating vehicle operation. The system architecture of the present invention is also simple and cost-effective: besides single-vehicle subjects such as driving and shooting, realistic cooperative subject training can be simulated, reproducing a real battlefield environment and providing the trainee with true visual, auditory and tactile experience. In the military field this effectively reduces personnel and materiel losses and lowers training costs, and the invention therefore has important application and promotional value.
Brief description of the drawings
Fig. 1 is a structural representation of the immersive vehicle simulator in an embodiment of the present invention.
Embodiment
Fig. 1 explains the present invention, but the invention is not restricted to the scope shown in Fig. 1.
To solve the prior-art problems that the interaction with the cockpit differs greatly from the real situation and that the quality of immersion is poor, the present invention uses mixed reality technology to create a vehicle-cabin scene in which virtual and real imagery are fused; through the camera, natural interaction between the cockpit and visible body parts such as the trainee's hands can be realised, strengthening the trainee's perception of the terrain and of the cabin environment while simulating vehicle operation.
As shown in Fig. 1, the immersive vehicle simulator includes a head-mounted virtual reality device 2 with a 3D binocular camera 1, a tracking and positioning device 3, a video processing device 4, a scene processing device 5 and a display device, wherein the 3D binocular camera is connected with the video processing device, and the scene processing device is connected with the virtual reality device, the tracking and positioning device and the display device respectively.
A magnetometer, a gyroscope and an accelerometer are arranged in the head-mounted virtual reality device.
Specifically, the scene processing device is provided with:
A virtual scene driving module: generates the whole training ground containing battlefield information, or a simulated battlefield scene;
A stereoscopic vision generation module: renders the simulated scene generated by the virtual scene driving module stereoscopically;
A virtual reality scene tracking module: sets the position and attitude of the camera in the stereoscopic vision generation module according to six-degree-of-freedom information, simulating the movement trajectory of the operator's head;
A real-scene acquisition module: calls the camera software development kit, creates a virtual camera object, sets the required parameters and performs real-scene acquisition;
A foreground extraction module: extracts the foreground;
A video compression module: compresses JPEG images;
A virtual-real fusion module: fuses the processed video into the virtual reality scene, ensuring that the frame rate, size and position of the video are adapted to the virtual scene.
The foreground extraction module includes: a 2D four-point texture module, for obtaining the positional information of the foreground video image other than depth; a contour extraction module based on Hough-transformed depth-image connected domains, for obtaining the depth information of the foreground video image; an extraction module based on markers and skin-colour detection, for obtaining the limb images in the non-intercepted connected region; and a foreground extraction module based on model masks and spatial multiplexing positioning. The foreground image is extracted from the information obtained by the 2D four-point texture module and the contour extraction module, and is then merged with the image from the module based on model masks and spatial multiplexing positioning to complete the foreground extraction.
Further, the foreground extraction module based on model masks and spatial multiplexing positioning uses the six-degree-of-freedom information of the real camera and of the virtual camera to apply an equal-proportion model mask to the scene.
Further, the video compression module is optimised on the basis of the standard JPEG image compression framework, ensuring the compatibility of the algorithm.
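A per-frame JPEG encode/decode of this kind can be sketched with OpenCV as follows; the quality value and the function names are illustrative assumptions, not the patent's actual implementation:

```python
import cv2
import numpy as np

def compress_frame_jpeg(frame_bgr, quality=80):
    """Encode one BGR video frame as a JPEG byte buffer (illustrative only)."""
    ok, buf = cv2.imencode(".jpg", frame_bgr,
                           [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    return buf.tobytes()

def decompress_frame_jpeg(jpeg_bytes):
    """Decode the JPEG buffer back into a BGR frame."""
    data = np.frombuffer(jpeg_bytes, dtype=np.uint8)
    return cv2.imdecode(data, cv2.IMREAD_COLOR)
```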
Further, the virtual-real fusion module fuses the video into the virtual reality scene after three-dimensional geometric registration together with occlusion handling, edge-smoothing transitions, frame-interpolation compensation, illumination consistency and shadow synthesis, ensuring that the frame rate, size and position of the video are adapted to the virtual scene.
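Assuming the foreground extraction yields a binary mask, the fusion step could be approximated by a simple feathered composite such as the sketch below; the patent's occlusion handling, frame interpolation and illumination matching are not reproduced here:

```python
import cv2
import numpy as np

def fuse_foreground_into_virtual(virtual_frame, camera_frame, fg_mask, feather_px=5):
    """Overlay the extracted real cockpit/hand foreground onto the rendered virtual frame.

    virtual_frame, camera_frame: HxWx3 uint8 BGR images of identical size.
    fg_mask: HxW uint8 mask, 255 where the real foreground should be kept.
    """
    # Soften the mask edge to approximate the edge-smoothing transition.
    k = 2 * feather_px + 1
    mask = cv2.GaussianBlur(fg_mask, (k, k), 0)
    alpha = mask.astype(np.float32)[..., None] / 255.0
    out = alpha * camera_frame.astype(np.float32) + (1.0 - alpha) * virtual_frame.astype(np.float32)
    return out.astype(np.uint8)
```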
Preferably, the six-degree-of-freedom information means that, within the trackable range of the tracking and positioning device, the six-degree-of-freedom information of the head position is mainly generated by the tracking and positioning device capturing video of the operator's head; otherwise the six-degree-of-freedom information of the head position is obtained from the magnetometer, gyroscope and accelerometer built into the head-mounted virtual reality device.
Specifically, the foreground extraction steps are:
1. Calibration of the multiple cameras, including acquisition of the intrinsic parameters, extrinsic parameters and distortion parameters;
2. Marker detection to locate the central field of view;
3. The foreground cabin-interior part is extracted algorithmically from the colour video collected by the real-scene acquisition module, using contour extraction on Hough-transformed depth-image connected domains (a simplified sketch of this contour extraction is given after this list):
A 3D model of the entire foreground cockpit is designed in advance and saved in a file; on initialisation the module loads the foreground-cockpit 3D model;
The depth image is pre-processed on the graphics processor (GPU), the CUDA API is called to produce a video mask, and pixels whose depth is too large are filtered out;
The processed colour image is copied from the GPU to the CPU;
The connected domains of the depth image are analysed with OpenCV and the edge of each connected domain is obtained;
The correspondence between the 3D model and the three-dimensional coordinates of the trainee's headset is determined from the position of the tracking and positioning device, and the relative position of the current 3D binocular camera within the 3D model is then calculated;
The edge pixels of each connected domain of the depth image are extracted with an improved Canny operator;
Straight line segments are obtained with the Hough transform, reducing edge raggedness and the complexity of subsequent edge decisions;
The coordinates of the edge points, with the camera as origin, are obtained;
Whether the edge points of a connected domain lie inside the 3D model in space is then judged: if so, the corresponding points of that connected domain in the colour image are output; if not, the points of the connected domain clipped by the 3D model boundary are output;
4. An equal-proportion model mask and spatial multiplexing registration locate the video interception range;
5. Skin outside the interception range is detected with an improved elliptical skin-colour detection method;
6. Improvements to the multi-video stitching algorithm.
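The contour extraction on depth-image connected domains referred to in step 3 can be sketched roughly as follows with OpenCV; the depth cut-off and Hough parameters are assumed values, and the patent's improved Canny operator is replaced here by the standard one:

```python
import cv2
import numpy as np

def extract_foreground_contours(depth_mm, max_depth_mm=1500):
    """Edge segments of near-range connected domains in a depth image (illustrative)."""
    # 1. Filter out pixels whose depth is too large (background outside the cockpit).
    near = np.logical_and(depth_mm > 0, depth_mm < max_depth_mm).astype(np.uint8) * 255

    # 2. Connected-domain analysis of the thresholded depth image.
    n_labels, labels = cv2.connectedComponents(near, connectivity=8)

    segments = []
    for label in range(1, n_labels):
        region = (labels == label).astype(np.uint8) * 255
        # 3. Edge pixels of this connected domain (standard Canny as a stand-in).
        edges = cv2.Canny(region, 50, 150)
        # 4. Straight segments via the Hough transform to smooth ragged edges.
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                                minLineLength=20, maxLineGap=5)
        if lines is not None:
            segments.extend(line[0] for line in lines)
    return segments  # each entry is (x1, y1, x2, y2) in image coordinates
```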
More specifically, the stereoscopic vision generation module: the Unigine engine is intended to be used to render stereoscopically the whole training scene or simulated battlefield scene generated by the virtual scene driving subsystem, including red-force command information, blue-force and threat-generation information, the situational picture and mission information.
VR scene tracking module: within the trackable range of the tracker, the six-degree-of-freedom information of the head position is mainly generated by the tracker capturing video of the head; otherwise the magnetometer, gyroscope and accelerometer inside the headset provide the six-degree-of-freedom information of the head. When the six-degree-of-freedom information is collected by the scene processing computer, the software sets the position and attitude of the camera (Camera) inside Unigine, simulating the trainee's head movement trajectory. Because the observable field of view of the camera is fixed, the scene content produced on the lenses changes accordingly.
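A schematic of this pose-source selection and camera update might look like the sketch below; the tracker/IMU interfaces and the set_pose call are placeholders, since the actual tracker SDK and Unigine integration are not specified in this text:

```python
def update_view_camera(tracker, imu, scene_camera):
    """Drive the virtual camera from the best available head-pose source (illustrative)."""
    pose = tracker.read_head_pose()        # hypothetical: (position, quaternion) or None
    if pose is None:
        # Outside the trackable range: fall back to the headset's built-in
        # magnetometer/gyroscope/accelerometer orientation, keeping the last position.
        orientation = imu.read_fused_orientation()   # hypothetical quaternion
        pose = (scene_camera.last_position, orientation)
    position, orientation = pose
    scene_camera.set_pose(position, orientation)      # placeholder for the engine camera update
```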
Real-scene acquisition module: based on the open-source OpenCV library and Nvidia CUDA, the real-scene acquisition module calls the camera SDK and performs the following operations: it creates a virtual camera object and sets the required parameters such as resolution and frame rate, while using a composite-screen acquisition technique (the composite screen consists of a large piece of smooth transparent material; the reflections and flood-light effects produced on it by outdoor sunlight and indoor lighting are captured by the camera and enhanced).
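A minimal acquisition loop of this kind, sketched with plain OpenCV (the device index, resolution and frame rate are assumed values; the vendor camera SDK and the CUDA-accelerated path are not reproduced):

```python
import cv2

def open_binocular_camera(device_index=0, width=1920, height=1080, fps=60):
    """Open the camera and set resolution and frame rate (illustrative parameters)."""
    cap = cv2.VideoCapture(device_index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    cap.set(cv2.CAP_PROP_FPS, fps)
    if not cap.isOpened():
        raise RuntimeError("camera could not be opened")
    return cap

cap = open_binocular_camera()
ok, frame = cap.read()   # one BGR frame of the real cockpit scene
```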
When the trainee puts on the VR headset, within the virtual training or operation scene run by the scene computer, the HD video shot by the 3D binocular camera is processed by the video computer to generate the simulated cabin parts, the avionics instrument section, the two hand-operated handles (control stick and collective pitch lever, left and right weapon grips) and the two foot-operated rudder pedals. The trainee's spatial orientation in the cabin (including spatial translation and spatial rotation) is captured by the VR locator and collected by the scene computer, and the display-system software updates the scene in real time accordingly, giving the trainee the sense of immersion of driving a real vehicle.
The present invention uses the 3D binocular camera to remedy the camera's inaccurate near-range depth detection:
Depth and disparity maps are produced from the original left- and right-eye video; both maps are thresholded to extract the foreground at the poorly identified near-range depths, and an algorithm then analyses and predicts the near-range depth accurately.
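For illustration, a disparity map from the left/right frames and a near-range foreground mask could be computed with OpenCV's semi-global matcher roughly as follows; the block size, disparity range and near-range cut-off are assumed values, and the patent's own near-depth prediction algorithm is not detailed here:

```python
import cv2
import numpy as np

def near_foreground_from_stereo(left_gray, right_gray, focal_px, baseline_m, near_m=0.8):
    """Estimate disparity, convert to depth and threshold the near-range foreground."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = focal_px * baseline_m / disparity[valid]   # z = f * B / d
    near_mask = np.logical_and(valid, depth_m < near_m).astype(np.uint8) * 255
    return disparity, depth_m, near_mask
```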
Real-time correction of the camera's radial and tangential distortion:
A real-time algorithm detects the range of tangential distortion after camera calibration, and vanishing points are used to automatically correct the radial distortion, which is mainly barrel distortion.
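The vanishing-point detection itself is not reproduced here; assuming the intrinsics and distortion coefficients from the multi-camera calibration in step 1, a real-time correction can be precomputed once and applied per frame, for example:

```python
import cv2

def build_undistorter(camera_matrix, dist_coeffs, size):
    """Precompute undistortion maps once, then remap every frame (illustrative)."""
    w, h = size
    new_K, _ = cv2.getOptimalNewCameraMatrix(camera_matrix, dist_coeffs, (w, h), 0)
    map1, map2 = cv2.initUndistortRectifyMap(camera_matrix, dist_coeffs, None,
                                             new_K, (w, h), cv2.CV_16SC2)
    def undistort(frame):
        return cv2.remap(frame, map1, map2, cv2.INTER_LINEAR)
    return undistort
```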
Left- and right-eye fusion emulating human binocular vision:
A red-blue image is generated from the disparity map, distortion post-processing is then applied, and the content observable by both eyes is cropped out.
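A red-blue (anaglyph) composition of the two eye views can be sketched as below; the channel assignment is the conventional red-cyan scheme and is an assumption, and the distortion post-processing and cropping steps are omitted:

```python
import numpy as np

def make_anaglyph(left_bgr, right_bgr):
    """Combine left/right views into one red-blue image (left -> red, right -> blue/green)."""
    anaglyph = np.zeros_like(left_bgr)
    anaglyph[..., 2] = left_bgr[..., 2]    # red channel from the left eye
    anaglyph[..., 1] = right_bgr[..., 1]   # green channel from the right eye
    anaglyph[..., 0] = right_bgr[..., 0]   # blue channel from the right eye
    return anaglyph
```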
A skin-colour detection algorithm is used so that the human body inside the interception area is not stripped out during extraction.
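A conventional elliptical skin-colour test in the Cr/Cb plane, on which the improved method of step 5 presumably builds, can be sketched like this; the ellipse centre and axes are commonly used literature-style values and are assumptions:

```python
import cv2
import numpy as np

def skin_mask_ycrcb(frame_bgr, centre=(155.0, 115.0), axes=(24.0, 16.0)):
    """Mark pixels whose (Cr, Cb) falls inside an ellipse as skin (illustrative parameters)."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr = ycrcb[..., 1] - centre[0]
    cb = ycrcb[..., 2] - centre[1]
    inside = (cr / axes[0]) ** 2 + (cb / axes[1]) ** 2 <= 1.0
    return inside.astype(np.uint8) * 255
```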
Virtual reality technology and video fusion
Unified handling of illumination and shadow on the joining edges/surfaces between virtual and real content:
The edge/surface normals and the various lighting effects of the rendering engine are obtained, and the virtual and real edges and surfaces are given two different treatments to unify the whole scene.
Synchronising head motion with changes in the video image:
Head-tracking interpolation processes position and attitude, and a frame-interpolation algorithm adapted to high-frequency vibration and large-amplitude motion finally achieves overall synchronisation.
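As an illustration of such pose interpolation, position can be interpolated linearly and orientation spherically between two tracked samples; this is a textbook lerp/slerp sketch, not the patent's adaptive interpolation algorithm:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to normalised lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def interpolate_head_pose(p0, q0, p1, q1, t):
    """Interpolate position linearly and orientation via slerp for an in-between frame."""
    position = (1 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)
    return position, slerp(q0, q1, t)
```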
Enhancing the immersive experience and reducing motion sickness:
Building on the two functions above, the immersive visual and audio experience is strengthened: the picture is post-processed to remove a certain amount of fisheye effect and to blur/sharpen pixels locally in imitation of human-eye focusing, and effective audiovisual motion blur and distance-dependent intensity changes are added to reduce the motion sickness caused by inconsistent motion between the virtual scene and the real scene.
Through mixed-reality cockpit technology, in which the camera is fixed on the virtual reality helmet itself, the present invention accurately transfers the spatial positions of the human body and of the operating platform into the virtual scene, and adds the real tactile experience of the body manipulating the operating platform. The viewpoint follows head rotation, largely reproducing true human vision and in particular solving the downward-view problem of some vehicles, and technical difficulties such as virtual-real illumination consistency and distortion elimination are overcome. This compensates for the deficiencies of current large vehicle simulator systems and improves the trainee's sense of immersion and realism, bringing simulated training closer to real training and enhancing the practicality of the vehicle simulation system. Theoretical knowledge is consolidated at minimum cost, operating and handling ability is improved, single-vehicle and formation training is turned into an engaging, highly interactive mode with a strong sense of immersion and presence, the limitations of the training ground can be overcome and indoor space used to the maximum, and training safety and quality are improved while training costs are reduced.
The software system of the present invention relies on a three-dimensional geographic information platform and makes comprehensive use of technologies such as three-dimensional modelling, virtual simulation, massive data management and three-dimensional spatial analysis. Taking globally shared training information under an immersive experience, unified allocation of training resources and scientific support for operational command as its goals, it integrates knowledge from multiple high-technology fields such as information processing and automatic control based on the three-dimensional geographic information platform. Together with weapon technology simulation, weapon system simulation and combat simulation, it plays a significant role in military training, weapon systems-of-systems, operational command and programme planning.
The present invention uses mixed reality technology to create a vehicle-cabin scene in which virtual and real imagery are fused; through the camera, natural interaction between the cockpit and visible body parts such as the trainee's hands can be realised, strengthening the trainee's perception of the terrain and of the cabin environment while simulating vehicle operation. The system architecture of the present invention is also simple and cost-effective: besides single-vehicle subjects such as driving and shooting, realistic cooperative subject training can be simulated, reproducing a real battlefield environment and providing the trainee with true visual, auditory and tactile experience. In the military field this effectively reduces personnel and materiel losses and lowers training costs, and the invention therefore has important application and promotional value.
The above embodiment is only a specific embodiment of the invention, to which the invention is not limited; any equivalent modification made according to this application, or addition of prior art, that does not depart from the concept of the present invention shall be regarded as falling within the technical scope of the present invention.

Claims (10)

1. An immersive vehicle simulator, characterised in that it comprises a virtual reality device, a tracking and positioning device, a video processing device, a scene processing device and a display device, wherein the virtual reality device is connected with the video processing device, and the scene processing device is connected with the virtual reality device, the tracking and positioning device and the display device respectively.
2. The immersive vehicle simulator according to claim 1, characterised in that the virtual reality device is head-mounted and has a 3D binocular camera.
3. The immersive vehicle simulator according to claim 2, characterised in that the virtual reality device with the 3D binocular camera and the tracking and positioning device are mounted at the top and in the middle of the seat.
4. The immersive vehicle simulator according to claim 2, characterised in that a magnetometer, a gyroscope and an accelerometer are arranged in the head-mounted virtual reality device.
5. The immersive vehicle simulator according to claim 1, characterised in that the scene processing device is provided with:
a virtual scene driving module: generates the whole training ground containing battlefield information, or a simulated battlefield scene;
a stereoscopic vision generation module: renders the simulated scene generated by the virtual scene driving module stereoscopically;
a virtual reality scene tracking module: sets the position and attitude of the camera in the stereoscopic vision generation module according to six-degree-of-freedom information, simulating the movement trajectory of the operator's head;
a real-scene acquisition module: calls the camera software development kit, creates a virtual camera object, sets the required parameters and performs real-scene acquisition;
a foreground extraction module: extracts the foreground;
a video compression module: compresses JPEG images;
a virtual-real fusion module: fuses the processed video into the virtual reality scene, ensuring that the frame rate, size and position of the video are adapted to the virtual scene.
6. The immersive vehicle simulator according to claim 5, characterised in that the virtual-real fusion module fuses the video into the virtual reality scene after three-dimensional geometric registration together with occlusion handling, edge-smoothing transitions, frame-interpolation compensation, illumination consistency and shadow synthesis, ensuring that the frame rate, size and position of the video are adapted to the virtual scene.
7. The immersive vehicle simulator according to claim 5, characterised in that the video compression module is optimised on the basis of the standard JPEG image compression framework, ensuring the compatibility of the algorithm.
8. The immersive vehicle simulator according to claim 5, characterised in that the foreground extraction module includes: a 2D four-point texture module, for obtaining the positional information of the foreground video image other than depth; a contour extraction module based on Hough-transformed depth-image connected domains, for obtaining the depth information of the foreground video image; an extraction module based on markers and skin-colour detection, for obtaining the limb images in the non-intercepted connected region; and a foreground extraction module based on model masks and spatial multiplexing positioning, wherein the foreground image is extracted from the information obtained by the 2D four-point texture module and the contour extraction module and is then merged with the image from the module based on model masks and spatial multiplexing positioning to complete the foreground extraction.
9. The immersive vehicle simulator according to claim 6, 7 or 8, characterised in that the foreground extraction module based on model masks and spatial multiplexing positioning uses the six-degree-of-freedom information of the real camera and of the virtual camera to apply an equal-proportion model mask to the scene.
10. The immersive vehicle simulator according to claim 9, characterised in that the six-degree-of-freedom information means that, within the trackable range of the tracking and positioning device, the six-degree-of-freedom information of the head position is mainly generated by the tracking and positioning device capturing video of the operator's head, or the six-degree-of-freedom information of the head position is obtained from the magnetometer, gyroscope and accelerometer built into the head-mounted virtual reality device.
CN201710370927.8A 2017-05-18 2017-05-18 Immersion vehicle simulator Pending CN107134194A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710370927.8A CN107134194A (en) 2017-05-18 2017-05-18 Immersion vehicle simulator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710370927.8A CN107134194A (en) 2017-05-18 2017-05-18 Immersion vehicle simulator

Publications (1)

Publication Number Publication Date
CN107134194A true CN107134194A (en) 2017-09-05

Family

ID=59732621

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710370927.8A Pending CN107134194A (en) 2017-05-18 2017-05-18 Immersion vehicle simulator

Country Status (1)

Country Link
CN (1) CN107134194A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107993290A (en) * 2017-12-18 2018-05-04 快创科技(大连)有限公司 It is a kind of that demo system is assembled based on AR and the mechanical part of cloud storage technology
CN108039084A (en) * 2017-12-15 2018-05-15 郑州日产汽车有限公司 Automotive visibility evaluation method and system based on virtual reality
CN108388350A (en) * 2018-03-25 2018-08-10 东莞市华睿电子科技有限公司 A kind of mixing scene generating method based on Intelligent seat
CN110136535A (en) * 2018-02-09 2019-08-16 深圳市掌网科技股份有限公司 Examination of driver simulation system and method
CN110246227A (en) * 2019-05-21 2019-09-17 佛山科学技术学院 A kind of virtual reality fusion emulation experiment image data acquiring method and system
WO2019201224A1 (en) * 2018-04-16 2019-10-24 Formula Square Holdings Ltd Method to enhance first-person-view experience
CN112150885A (en) * 2019-06-27 2020-12-29 统域机器人(深圳)有限公司 Cockpit system based on mixed reality and scene construction method
CN112289123A (en) * 2020-11-03 2021-01-29 成都合纵连横数字科技有限公司 Mixed reality scene generation method and system for automobile driving simulator
CN112289125A (en) * 2020-11-16 2021-01-29 株洲壹星科技股份有限公司 Vehicle MR simulation driving practical training method and practical training device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101098475A (en) * 2007-07-10 2008-01-02 浙江大学 Interactive time-space accordant video matting method in digital video processing
CN102368810A (en) * 2011-09-19 2012-03-07 长安大学 Semi-automatic aligning video fusion system and method thereof
CN102568026A (en) * 2011-12-12 2012-07-11 浙江大学 Three-dimensional enhancing realizing method for multi-viewpoint free stereo display
WO2013111145A1 (en) * 2011-12-14 2013-08-01 Virtual Logic Systems Private Ltd System and method of generating perspective corrected imagery for use in virtual combat training
CN103578113A (en) * 2013-11-19 2014-02-12 汕头大学 Method for extracting foreground images
CN104463250A (en) * 2014-12-12 2015-03-25 广东工业大学 Sign language recognition translation method based on Davinci technology
CN106055113A (en) * 2016-07-06 2016-10-26 北京华如科技股份有限公司 Reality-mixed helmet display system and control method
CN106390454A (en) * 2016-08-31 2017-02-15 广州麦驰网络科技有限公司 Reality scene virtual game system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101098475A (en) * 2007-07-10 2008-01-02 浙江大学 Interactive time-space accordant video matting method in digital video processing
CN102368810A (en) * 2011-09-19 2012-03-07 长安大学 Semi-automatic aligning video fusion system and method thereof
CN102568026A (en) * 2011-12-12 2012-07-11 浙江大学 Three-dimensional enhancing realizing method for multi-viewpoint free stereo display
WO2013111145A1 (en) * 2011-12-14 2013-08-01 Virtual Logic Systems Private Ltd System and method of generating perspective corrected imagery for use in virtual combat training
CN103578113A (en) * 2013-11-19 2014-02-12 汕头大学 Method for extracting foreground images
CN104463250A (en) * 2014-12-12 2015-03-25 广东工业大学 Sign language recognition translation method based on Davinci technology
CN106055113A (en) * 2016-07-06 2016-10-26 北京华如科技股份有限公司 Reality-mixed helmet display system and control method
CN106390454A (en) * 2016-08-31 2017-02-15 广州麦驰网络科技有限公司 Reality scene virtual game system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
罗斌; 姚鹏; 翁冬冬; 刘越; 王涌天: "基于混合现实的新型轻量级飞行模拟器系统" (A novel lightweight flight simulator system based on mixed reality) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108039084A (en) * 2017-12-15 2018-05-15 郑州日产汽车有限公司 Automotive visibility evaluation method and system based on virtual reality
CN107993290A (en) * 2017-12-18 2018-05-04 快创科技(大连)有限公司 It is a kind of that demo system is assembled based on AR and the mechanical part of cloud storage technology
CN110136535A (en) * 2018-02-09 2019-08-16 深圳市掌网科技股份有限公司 Examination of driver simulation system and method
CN108388350B (en) * 2018-03-25 2021-02-19 东莞市华睿电子科技有限公司 Hybrid scene generation method based on intelligent seat
CN108388350A (en) * 2018-03-25 2018-08-10 东莞市华睿电子科技有限公司 A kind of mixing scene generating method based on Intelligent seat
WO2019201224A1 (en) * 2018-04-16 2019-10-24 Formula Square Holdings Ltd Method to enhance first-person-view experience
US11107364B2 (en) 2018-04-16 2021-08-31 Formula Square Holdings Ltd Method to enhance first-person-view experience
CN110246227A (en) * 2019-05-21 2019-09-17 佛山科学技术学院 A kind of virtual reality fusion emulation experiment image data acquiring method and system
CN110246227B (en) * 2019-05-21 2023-12-29 佛山科学技术学院 Virtual-real fusion simulation experiment image data collection method and system
CN112150885A (en) * 2019-06-27 2020-12-29 统域机器人(深圳)有限公司 Cockpit system based on mixed reality and scene construction method
CN112150885B (en) * 2019-06-27 2022-05-17 统域机器人(深圳)有限公司 Cockpit system based on mixed reality and scene construction method
CN112289123A (en) * 2020-11-03 2021-01-29 成都合纵连横数字科技有限公司 Mixed reality scene generation method and system for automobile driving simulator
CN112289125A (en) * 2020-11-16 2021-01-29 株洲壹星科技股份有限公司 Vehicle MR simulation driving practical training method and practical training device

Similar Documents

Publication Publication Date Title
CN107154197A (en) Immersion flight simulator
CN107134194A (en) Immersion vehicle simulator
US8040361B2 (en) Systems and methods for combining virtual and real-time physical environments
US5796991A (en) Image synthesis and display apparatus and simulation system using same
US7479967B2 (en) System for combining virtual and real-time environments
CN106162137B (en) Virtual visual point synthesizing method and device
US20100182340A1 (en) Systems and methods for combining virtual and real-time physical environments
CN106601060B (en) Fire-fighting scene of a fire experiencing virtual reality border system
CN106710362A (en) Flight training method implemented by using virtual reality equipment
US20100091036A1 (en) Method and System for Integrating Virtual Entities Within Live Video
WO2019140945A1 (en) Mixed reality method applied to flight simulator
DE102009049849A1 (en) Method for determining the pose of a camera and for detecting an object of a real environment
CN110850977B (en) Stereoscopic image interaction method based on 6DOF head-mounted display
CN106408515A (en) Augmented reality-based vision synthesis system
CN106791778A (en) A kind of interior decoration design system based on AR virtual reality technologies
CN207883156U (en) A kind of scenic spot simulated flight experience apparatus
CN103543827A (en) Immersive outdoor activity interactive platform implement method based on single camera
CN109920000B (en) Multi-camera cooperation-based dead-corner-free augmented reality method
CN112446939A (en) Three-dimensional model dynamic rendering method and device, electronic equipment and storage medium
CN104463956B (en) Construction method and device for virtual scene of lunar surface
CN116110270A (en) Multi-degree-of-freedom driving simulator based on mixed reality
EP3057316B1 (en) Generation of three-dimensional imagery to supplement existing content
CN114139370A (en) Synchronous simulation method and system for optical engine and electromagnetic imaging dual-mode moving target
CN110134247A (en) A kind of Ship Motion Attitude augmented reality interaction systems and method based on VR
Segura et al. Interaction and ergonomics issues in the development of a mixed reality construction machinery simulator for safety training

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room 1001-1020, North building, Internet of things building, 368 Xinshi North Road, Shijiazhuang, Hebei 050000

Applicant after: ZHONGKE HENGYUN Co.,Ltd.

Address before: 050000 10th floor, IOT building, 377 xinshizhong Road, Shijiazhuang City, Hebei Province

Applicant before: HEBEI ZHONGKE HENGYUN SOFTWARE TECHNOLOGY Co.,Ltd.

WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170905