CN108305326A - A method of mixing virtual reality - Google Patents
A mixed virtual reality method
- Publication number
- CN108305326A (application number CN201810060883.3A)
- Authority
- CN
- China
- Prior art keywords
- simulator cockpit
- eyes
- range
- outside
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0093—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B9/00—Simulators for teaching or training purposes
- G09B9/02—Simulators for teaching or training purposes for teaching control of vehicles or other craft
- G09B9/08—Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
- G09B9/30—Simulation of view from aircraft
- G09B9/301—Simulation of view from aircraft by computer-processed or -generated image
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B9/00—Simulators for teaching or training purposes
- G09B9/02—Simulators for teaching or training purposes for teaching control of vehicles or other craft
- G09B9/08—Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
- G09B9/30—Simulation of view from aircraft
- G09B9/307—Simulation of view from aircraft by helmet-mounted projector or display
Abstract
A mixed virtual reality method: two cameras mounted at the position of the human eyes perform adaptive spatial mapping of all equipment and personnel, including the wearer, within the depth range of the simulator cockpit; three-dimensional reconstruction is performed within the visual range of the eyes, with fine three-dimensional reconstruction in the central region of eye focus and coarse three-dimensional reconstruction outside the focus range. Cameras fixed at known positions and angles around the simulator cockpit capture real-time information about the feature points on the cockpit exterior, and the VR headset and a locator provide the relative spatial position of the human body; the virtual scene is compensated and corrected so that the depth information in the virtual scene matches the actual distances, yielding the relative positional relationship between the virtual scene and the VR headset. The virtual picture is rendered in layers by tracking the pupil and head movement: rendering proceeds outward from the pupil position as the center point, with the rendering resolution decreasing from the inside outward.
Description
Technical field
The present invention relates to the field of virtual reality, and in particular to a mixed virtual reality method.
Background technology
As mixed reality technology continues to develop, it is gradually being adopted in various simulation fields. Current flight simulators based on mixed reality technology have several shortcomings. When the simulator is in different attitudes or is moving or vibrating, the picture shot by the cameras or sensing equipment shakes and the positioning is inaccurate. Optical or inertial sensing equipment can embody the human body in the virtual scene but cannot embody the various pieces of equipment. Because the human skin and clothing in the virtual scene lack realism, the user cannot be given a true sense of immersion. The visual field is limited by the head-mounted display or the camera's shooting angle, and the system cannot track the eyeball or focus adaptively. The audiovisual, tactile, and spatial perception of the user in the real world is inconsistent with the virtual world, producing a strong sense of unreality and vertigo. The system cannot quickly integrate flight simulators from different manufacturers for rapid upgrading and secondary development. These problems prevent the real and virtual worlds from merging into a genuinely believable visual environment in which physical and digital objects coexist and interact in real time, and they reduce the practicality of flight simulators.
Summary of the invention
To overcome the problems of the prior art, the present invention provides a mixed virtual reality method.
A mixed virtual reality method comprises the following steps.
Step S1: two cameras mounted at the position of the human eyes perform adaptive spatial mapping of all equipment and personnel, including the wearer, within the depth range of the simulator cockpit; three-dimensional reconstruction is performed within the visual range of the eyes, with fine three-dimensional reconstruction in the central region of eye focus and coarse three-dimensional reconstruction outside the focus range.
Step S2: cameras fixed at known positions and angles around the simulator cockpit capture real-time information about the feature points on the cockpit exterior, and the VR headset and a locator provide the relative spatial position of the human body; the virtual scene is compensated and corrected so that the depth information in the virtual scene matches the actual distances, yielding the relative positional relationship between the virtual scene and the VR headset.
Step S3: the virtual picture is rendered in layers by tracking the pupil and head movement; rendering proceeds outward from the pupil position as the center point, with the rendering resolution decreasing from the inside outward.
Preferably, in step S1, when the two cameras mounted at the eye position perform adaptive spatial mapping of all equipment and personnel, including the wearer, within the depth range of the simulator cockpit: the spatial region captured by the two cameras is segmented, using three voxel sizes in ascending order in turn, so that the same spatial region is partitioned into three data-independent voxel sets.
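The three-scale segmentation just described can be sketched as follows. This is an illustrative fragment, not the patent's implementation; the region extent and the three voxel sizes are assumed values chosen only for the example.

```python
def partition_into_voxels(extent, voxel_size):
    """Partition a cubic region [0, extent)^3 into voxels of the given size.

    Returns a dict mapping each voxel's integer grid index to its minimum
    corner, so that each scale forms an independent data set.
    """
    n = round(extent / voxel_size)
    return {
        (i, j, k): (i * voxel_size, j * voxel_size, k * voxel_size)
        for i in range(n) for j in range(n) for k in range(n)
    }

# The same spatial region is segmented three times with ascending voxel
# sizes, yielding three voxel sets with data independence.
EXTENT = 2.4  # depth of the mapped cockpit region in metres (assumed value)
fine, medium, coarse = (partition_into_voxels(EXTENT, s) for s in (0.1, 0.2, 0.4))
```

Because the three sets are built independently over the same region, each can be stored and read on its own, as the later table-lookup step requires.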
Preferably, when performing three-dimensional reconstruction within the visual range of the eyes, fine reconstruction within the central region of eye focus, and coarse reconstruction outside the focus range: the state of each voxel is determined and the set of non-empty voxels is obtained; the eye focus range is tracked in real time, the voxel set of the smallest size is chosen for the region within the central focus area, a voxel set of a larger size is chosen for the part outside the central focus area but within the visible region, and the voxel set of the largest size is chosen for the remainder of the visual range, so that the reconstruction quality grades from fine to coarse outward from the eye focus range; a pixelated texture mapping is then applied to obtain the three-dimensional texture of the virtual scene.
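Choosing a voxel set by angular distance from the gaze direction could look like the following sketch. The angle thresholds are assumptions for illustration; the patent describes three zones but gives no numeric limits.

```python
import math

# Assumed angular thresholds in degrees: the patent describes three zones
# (central focus area, rest of the visible area, everything else) but
# specifies no numbers, so these values are illustrative only.
FOCUS_DEG = 5.0
VISIBLE_DEG = 60.0

def voxel_level_for(gaze_dir, point_dir):
    """Return 'fine', 'medium' or 'coarse' for a scene direction, based on
    its angular distance from the gaze direction (both unit vectors)."""
    dot = sum(g * p for g, p in zip(gaze_dir, point_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if angle <= FOCUS_DEG:
        return "fine"      # smallest voxels: central focus area
    if angle <= VISIBLE_DEG:
        return "medium"    # mid-sized voxels: visible but unfocused
    return "coarse"        # largest voxels: remainder of the visual range
```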
Preferably, in step S2, when the cameras fixed at known positions and angles around the simulator cockpit capture real-time information about the feature points on the cockpit exterior: the cameras are calibrated by the least squares method, the initial positions of the feature points on the static cockpit surface are recorded, the initial pose of the cockpit is obtained by stereo imaging principles and coordinate transformation, and an initial coordinate system is established.
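Establishing the initial coordinate system from the recorded feature-point positions amounts to a least-squares rigid alignment between a reference point set and an observed one. One standard way to solve it is the SVD-based (Kabsch) method, sketched here with numpy as an illustration rather than the patent's actual procedure:

```python
import numpy as np

def rigid_pose_lstsq(ref_pts, obs_pts):
    """Least-squares rigid transform (R, t) with obs ~= R @ ref + t.

    ref_pts, obs_pts: (N, 3) arrays of matched feature-point coordinates.
    """
    ref_c, obs_c = ref_pts.mean(axis=0), obs_pts.mean(axis=0)
    H = (ref_pts - ref_c).T @ (obs_pts - obs_c)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = obs_c - R @ ref_c
    return R, t
```

Given the static feature-point positions as the reference set, the recovered (R, t) defines the cockpit's initial pose in the camera frame.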
Preferably, while the simulator cockpit is moving, the cameras fixed at known positions and angles around it capture the dynamically changing feature points in real time; feature points are extracted by the SURF algorithm and matched by the KLT algorithm, so that the feature points can still be tracked and matched when the cockpit moves rapidly and its displacement changes.
Preferably, the world coordinates of the feature points during cockpit motion are obtained by the cameras fixed around the cockpit, and the change of the feature points at each moment relative to the initial coordinate system is computed to obtain the positional relationship of the cockpit; the relative spatial position of the human body is obtained from the VR headset and the locator; according to the positions of the cockpit and the human body in the initial coordinate system, the virtual picture is dynamically corrected by the inverse transformation, matching the sense of space and distance of the internal virtual scene with those of the external real scene.
Preferably, the feature points used are circular black-and-white contrast markers with a white inner ring and a black outer ring. When extracting feature points with the SURF algorithm, a difference-of-Gaussians (DoG) pyramid is built from the camera images and the first image of each octave is extracted; each image is adaptively thresholded by the maximum between-class variance (Otsu) method to obtain a binary map, and feature point detection is performed with this binary map as a constraint, so that SURF detection is confined to the inner edge region of each feature point.
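The adaptive thresholding step can be illustrated with a compact maximum between-class variance (Otsu) routine over gray levels; a real system would apply it to each camera image rather than the flat pixel list used here for illustration.

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level maximizing the between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]              # background weight up to level t
        if w_bg == 0:
            continue
        w_fg = total - w_bg          # foreground weight
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(pixels, t):
    """Binary map used to constrain SURF detection to the marker region."""
    return [1 if p > t else 0 for p in pixels]
```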
Preferably, when matching feature points with the KLT algorithm, the optical flow is computed at the top level of the difference-of-Gaussians pyramid, and the motion estimate obtained is used as the initial point for the calculation at the next level, iterating repeatedly until the bottom level is reached.
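The coarse-to-fine iteration (estimate motion at the top of the pyramid, then use the doubled estimate as the starting point one level down) can be illustrated in one dimension. This toy sketch substitutes plain block matching for the real KLT gradient solve; the signals and shift are synthetic.

```python
import math

def downsample(signal):
    """Halve the resolution by averaging adjacent samples."""
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]

def shift_estimate(a, b, init=0):
    """Integer shift d minimizing sum((a[i] - b[i+d])^2), searched near init.
    The small search window mirrors KLT's small-motion assumption."""
    best_d, best_err = init, float("inf")
    for d in range(init - 2, init + 3):
        err = sum((a[i] - b[i + d]) ** 2
                  for i in range(len(a)) if 0 <= i + d < len(b))
        if err < best_err:
            best_err, best_d = err, d
    return best_d

def pyramidal_shift(a, b, levels=3):
    """Coarse-to-fine estimation: solve at the coarsest level first, then
    refine the doubled estimate at each finer level down to the bottom."""
    pyramid = [(a, b)]
    for _ in range(levels - 1):
        a, b = downsample(a), downsample(b)
        pyramid.append((a, b))
    d = 0
    for la, lb in reversed(pyramid):          # coarsest level first
        d = shift_estimate(la, lb, init=2 * d)
    return d
```

A single-level estimate with the same small window misses a large shift entirely, which is exactly the failure mode the pyramid compensates for.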
Preferably, when rendering the virtual picture in layers by tracking the pupil and head movement: different regions and different levels of the scene are rendered separately in the UNIGINE engine, and high-, medium-, and low-level rendering is applied in turn from the inside outward, centered on the gaze position given by the pupil center.
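The three-layer rendering around the pupil position can be sketched as a resolution map. The layer radii and resolution scales below are assumptions, since the patent specifies three layers but no numeric values, and a real implementation would live inside the engine's render pipeline.

```python
# Assumed layer radii (pixels from the pupil position) and resolution scales.
LAYERS = [
    (100, 1.0),    # inner layer: full resolution, sharpest
    (300, 0.5),    # middle layer: half resolution
    (None, 0.25),  # outer layer: quarter resolution
]

def resolution_scale(px, py, pupil_x, pupil_y):
    """Resolution scale to render pixel (px, py) at, given the pupil position."""
    dist = ((px - pupil_x) ** 2 + (py - pupil_y) ** 2) ** 0.5
    for radius, scale in LAYERS:
        if radius is None or dist <= radius:
            return scale
```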
Preferably, the image is enhanced by a piecewise linear gray-level transformation to improve its recognizability, median filtering is then applied to reduce noise, the feature region is rapidly segmented by an iterative method, candidate pupil regions are screened by their area and circularity characteristics, and finally an ellipse fit based on the least squares method yields the pupil center, from which the eye focus range is obtained.
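The final center-extraction step can be sketched with numpy by fitting the general conic A x^2 + B xy + C y^2 + D x + E y = 1 to candidate edge points and reading the center off the fitted conic. This unconstrained fit is only an illustration; a robust implementation would use a constrained ellipse fit such as Fitzgibbon's.

```python
import numpy as np

def ellipse_center_lstsq(pts):
    """Fit the conic A x^2 + B xy + C y^2 + D x + E y = 1 to edge points
    by least squares and return the center of the fitted ellipse."""
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x * x, x * y, y * y, x, y])
    A, B, C, D, E = np.linalg.lstsq(M, np.ones(len(pts)), rcond=None)[0]
    # The center is where the conic's gradient vanishes:
    #   2A xc + B yc + D = 0  and  B xc + 2C yc + E = 0
    xc, yc = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    return xc, yc
```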
Beneficial effects of the present invention:
The mixed virtual reality method provided by the invention performs real-time three-dimensional reconstruction of the spatial scene, improves the realism of human skin and clothing in the virtual scene, enhances the sense of space and distance of the human body, and gives the user true immersion; region-based three-dimensional reconstruction with graded precision, driven by eye-movement tracking, improves scene rendering speed and efficiency.
Through position tracking and computer vision algorithms, the simulator cockpit of the present solution synchronizes the real scene inside the cockpit into the virtual scene before the user's eyes, so that the user perceives the real equipment in the cockpit directly through the limbs, vision, touch, and hearing. This better solves the consistency problem between virtual and real space, enhances the practicality of flight training simulation systems, reduces the difficulty of upgrading the immersion of existing flight simulators, and improves the trainee's flight experience.
Eye tracking and adaptive focusing not only ensure the realism and immersion of the scene seen by the human eye, but also enable local rendering that matches the imaging characteristics of the eyes, so the eye need not actively adapt to the screen; this avoids eye fatigue caused by overuse, reduces the occupancy of hardware resources, and at the same time improves rendering quality and processing speed.
Description of the drawings
Fig. 1 is a schematic diagram of the cameras on the VR headset in the mixed virtual reality method of the present invention;
Fig. 2 is a schematic diagram of voxel three-dimensional reconstruction at three precision levels in the mixed virtual reality method of the present invention;
Fig. 3 is a schematic diagram of the feature points in the mixed virtual reality method of the present invention;
Fig. 4 is a schematic diagram of the mixed reality simulator cockpit in the mixed virtual reality method of the present invention;
Fig. 5 is a flow diagram of the pupil position calculation in the mixed virtual reality method of the present invention.
Detailed description of the embodiments
A mixed virtual reality method:
Step S1: two cameras mounted at the position of the human eyes (as shown in Fig. 1, the two cameras on the VR headset) perform adaptive spatial mapping of all equipment and personnel, including the wearer, within the depth range of the simulator cockpit; three-dimensional reconstruction is performed within the visual range of the eyes, with fine three-dimensional reconstruction in the central region of eye focus and coarse three-dimensional reconstruction outside the focus range.
Step S2: cameras fixed at known positions and angles around the simulator cockpit capture real-time information about the feature points on the cockpit exterior, and the VR headset and a locator provide the relative spatial position of the human body; the virtual scene is compensated and corrected so that the depth information in the virtual scene matches the actual distances, yielding the relative positional relationship between the virtual scene and the VR headset.
Step S3: the virtual picture is rendered in layers by tracking the pupil and head movement; rendering proceeds outward from the pupil position as the center point, with the rendering resolution decreasing from the inside outward.
Further, in step S1, the three-dimensional reconstruction of the objects the eyes see can be likened to playing with building blocks: reconstruction is equivalent to assembling an object out of blocks, and the assembly is divided into fine and coarse parts. The part the eyes focus on uses the fine process with smaller blocks, while the unfocused part uses the coarse process with larger blocks; this saves blocks and thus accelerates the three-dimensional reconstruction. When reconstructing with the cameras, the process includes texture mapping, so that the wearer's own skin and clothing textures are displayed on the reconstructed three-dimensional model.
When the two cameras mounted at the eye position perform adaptive spatial mapping of all equipment and personnel, including the wearer, within the depth range of the simulator cockpit: the spatial region captured by the two cameras is segmented, using three voxel sizes in ascending order in turn, so that the same spatial region is partitioned into three data-independent voxel sets.
As shown in Fig. 2, when performing three-dimensional reconstruction within the visual range of the eyes, fine reconstruction within the central region of eye focus, and coarse reconstruction outside the focus range: the two cameras on the VR headset, combined with the visual hull reconstruction method, determine the state of each voxel and obtain the set of non-empty voxels. The eye focus range is tracked, and a different voxel set is chosen for each visual zone: the voxel set of the smallest size for the region within the central focus area, a voxel set of a larger size for the part outside the central focus area but within the visible region, and the voxel set of the largest size for the remainder of the visual range, so that the reconstruction quality grades from fine to coarse outward from the eye focus range. A pixelated texture mapping is then applied to obtain the three-dimensional texture of the virtual scene. This fine-to-coarse three-dimensional modeling gives the trainee a sense of the presence of real objects in the virtual world.
The voxels produced by the spatial segmentation carry independent data, which can be stored in a table in advance; during later three-dimensional reconstruction the data can be read directly by table lookup, improving reconstruction speed.
During voxel-based three-dimensional reconstruction, the gaze position is determined from the eyeball-tracking data and the reconstruction precision is graded accordingly: the precision is highest in the gazed region and decreases outward in turn. This matches the way the human eye views real objects, improves the speed of three-dimensional reconstruction, and reduces GPU occupancy.
In step S2, in order to track accurately the body parts and cabin equipment observed by the trainee, so that the real scene observed by the human eye can fuse accurately with the virtual environment whether the body is standing or seated and whether the six-degree-of-freedom motion platform is in motion, the position of the two cameras at the eye position (that is, the position of the VR headset) must be located precisely. Two cameras at fixed positions and fixed angles around the simulator cockpit collect feature point information in real time; the feature points are fixed on the outside of the cockpit, and the position and attitude of the cockpit are obtained by the method of binocular stereo vision. The relative spatial position of the human body is obtained from the VR headset and the locator; the virtual scene is compensated and corrected so that the depth information in the virtual scene matches the actual distances, yielding the relative positional relationship between the virtual scene and the VR headset. The locator is fixed at an independent external fixed position, and the relative position information is obtained by computing the relative coordinates of that fixed position and the VR headset. In this way, the effects caused by the movement of the six-degree-of-freedom platform are compensated and corrected in the virtual scene, the internal virtual scene is matched to the external physical model, the depth information in the scene is matched to the actual distances, and the errors introduced by the cameras and the locator during movement are avoided.
As shown in Fig. 4, when the cameras fixed at known positions and angles around the simulator cockpit capture real-time feature point information: the cameras are calibrated by the least squares method using the OpenCV library, the calibrated parameters including the intrinsic and extrinsic parameters; the initial positions of the feature points on the static cockpit surface are recorded, the initial pose of the cockpit is obtained by stereo imaging principles and coordinate transformation, and an initial coordinate system is established.
While the simulator cockpit is moving, the cameras fixed at known positions and angles around it capture the dynamically changing feature points in real time; feature points are extracted by the SURF algorithm and matched by the KLT algorithm, so that the feature points can still be tracked and matched when the cockpit moves rapidly and its displacement changes.
The world coordinates of the feature points during cockpit motion are obtained by the cameras fixed around the cockpit, and the change of the feature points at each moment relative to the initial coordinate system is computed to obtain the positional relationship of the cockpit; the relative spatial position of the human body is obtained from the VR headset and the locator; according to the positions of the cockpit and the human body in the initial coordinate system, the virtual picture is dynamically corrected by the inverse transformation, matching the sense of space and distance of the internal virtual scene with those of the external real scene.
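The dynamic correction by the inverse transformation can be sketched with homogeneous 4x4 matrices. This is an illustration with numpy; in practice the cockpit pose would come from the tracked feature points rather than being supplied directly.

```python
import numpy as np

def pose_matrix(R, t):
    """4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def correct_virtual_scene(cockpit_pose, scene_points):
    """Apply the inverse of the cockpit's motion to virtual scene points,
    keeping the virtual scene registered with the real cockpit interior."""
    inv = np.linalg.inv(cockpit_pose)
    homo = np.column_stack([scene_points, np.ones(len(scene_points))])
    return (homo @ inv.T)[:, :3]
```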
When extracting feature points (the schematic diagram of a feature point is shown in Fig. 3), the feature points used are circular black-and-white contrast markers with a white inner ring and a black outer ring; each marker consists of three circles forming an isosceles triangle, so a very strong edge structure is obtained. When extracting feature points with the SURF algorithm, a difference-of-Gaussians (DoG) pyramid is built from the camera images and the first image of each octave is extracted; each image is adaptively thresholded by the maximum between-class variance (Otsu) method to obtain a binary map, and feature point detection is performed with this binary map as a constraint, so that SURF detection is confined to the inner edge region of each feature point. That is, a threshold is obtained by the maximum between-class variance method, the binary map is obtained from the threshold, the region given by the binary map is limited in extent, and the SURF algorithm detects only inside this region, improving the accuracy of feature point extraction.
The traditional KLT algorithm assumes small and coherent motion within the feature window; when the baseline between the two cameras or the angle between them is too large, the keypoint displacement between the left and right camera images becomes too large and the matching precision is poor. When the present invention matches feature points with the KLT algorithm, the optical flow is computed at the top level of the difference-of-Gaussians pyramid, and the motion estimate obtained is used as the initial point for the calculation at the next level, iterating repeatedly until the bottom level is reached; this improves the accuracy of feature point matching and realizes tracking and matching of faster and longer movements.
In step S3, the scene is rendered region by region according to the gaze position of the eyeball, and the display quality of the viewed part of the image is adjusted, which is equivalent to a two-dimensional pixel adjustment. An infrared camera mounted inside the VR headset can clearly photograph the features of the eye. When rendering the virtual picture in layers by tracking the pupil and head movement: eye tracking is realized through image gray-level transformation, image filtering, binarization, edge detection, and pupil center localization. Based on the eyeball position and the principle of human gaze, different regions and different levels of the scene are rendered separately in the UNIGINE engine; centered on the part the pupil gazes at, high-, medium-, and low-level rendering is applied in turn from the inside outward, lowering the edge resolution, reducing GPU occupancy, raising the operating frame rate, and substantially improving rendering efficiency. During UNIGINE rendering, the position the eye gazes at uses a high image resolution, so the scene detail there is clear. The natural environment and terrain information in the UNIGINE engine is like a picture whose detail changes with distance: from very far away the land is khaki mixed with some green; somewhat closer, more detail becomes visible, such as where a forest is; closer still, the system additionally loads pictures of individual trees; and at close range, details such as leaves, trunks, and textures appear. On this basis, the gazed region here shows richer detail while the region outside the gaze shows a coarse scene. These rendered parts lie on the same image, for example a tree in the foreground and the mountains behind it, which are in different levels; so the rendering here proceeds over different regions and different levels, ultimately manifested as different resolutions in the scene displayed in the VR headset.
The located pupil center is mapped to the field of view the human eye gazes at in the three-dimensional scene, and layered rendering proceeds outward from this region in three layers, applying high-, medium-, and low-level rendering in turn; the innermost layer is the clearest, and the resolution decreases and the image becomes gradually blurrier toward the outside, saving pixels and improving rendering speed.
As shown in Figure 5, rendering layer by layer from the inside out, centered on the pupil position, includes an image preprocessing procedure: a piecewise-linear gray-scale transformation enhances the distinguishability of the image; median filtering reduces noise; an iterative method rapidly segments candidate regions; the pupil region is screened by its area and circularity characteristics; finally, an ellipse fit based on the least-squares method yields the center of the pupil, from which the focus range of the eyes is computed.
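The preprocessing chain above can be illustrated with a minimal sketch: a piecewise-linear gray-scale stretch, an iterative threshold, and a centroid of the dark region standing in for the least-squares ellipse fit. The median filter and circularity screening are omitted, and the breakpoints and synthetic image are assumptions, not values from the patent.

```python
import numpy as np

def piecewise_linear_stretch(img, r1=80, s1=30, r2=160, s2=220):
    """Piecewise-linear gray transform: compress [0, r1] and [r2, 255],
    stretch the middle band to raise distinguishability.
    Breakpoints are illustrative."""
    out = np.empty(img.shape, dtype=float)
    lo, hi = img < r1, img > r2
    mid = ~lo & ~hi
    out[lo] = img[lo] * (s1 / r1)
    out[mid] = s1 + (img[mid] - r1) * ((s2 - s1) / (r2 - r1))
    out[hi] = s2 + (img[hi] - r2) * ((255 - s2) / (255 - r2))
    return out

def iterative_threshold(img, eps=0.5):
    """Iterative threshold selection: split at the running threshold,
    move it to the mean of the two class means, repeat to convergence."""
    t = img.mean()
    while True:
        t_new = 0.5 * (img[img <= t].mean() + img[img > t].mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

# Synthetic eye image: bright background, dark circular "pupil" at (100, 60).
h, w = 120, 160
ys, xs = np.mgrid[0:h, 0:w]
img = np.full((h, w), 200.0)
img[np.hypot(xs - 100, ys - 60) <= 15] = 30.0

enhanced = piecewise_linear_stretch(img)
t = iterative_threshold(enhanced)
mask = enhanced < t                       # dark candidate region
cy, cx = np.argwhere(mask).mean(axis=0)   # centroid stands in for the ellipse fit
```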
The relationship between the two cameras, the infrared camera, and the VR helmet in the present invention is as follows. Once the VR helmet is worn, the wearer cannot see the real world and sees only the images in the two lenses inside the helmet. Two cameras are mounted on the outside of the helmet at the positions of the eyes; they directly capture the real scene in the direction of gaze, and real-time three-dimensional reconstruction is performed from these two views. The infrared camera is mounted inside the helmet, facing the eyes; it captures eye activity and is used to locate the pupil position of the eyeball. From the eye position the lens position is readily obtained, so the image shown on each lens can be rendered region by region at different resolutions. Mixing a real scene into virtual reality means reconstructing the real scene captured by the cameras inside the virtual scene, blending the virtual and the real.
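The real-time reconstruction from the two eye-position cameras rests on stereo triangulation: for a rectified pair, a point's depth follows from its disparity as Z = f·B/d. The focal length and baseline below are illustrative assumptions (the baseline roughly matches a typical interpupillary distance).

```python
import numpy as np

def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / np.asarray(disparity_px, dtype=float)

# Cameras ~6.5 cm apart with a 700 px focal length:
# a feature with 35 px of disparity lies 1.3 m away.
z = stereo_depth(35.0, focal_px=700.0, baseline_m=0.065)
```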
The two cameras on the front of the VR helmet capture the interior of the simulated cockpit, which is reconstructed in three dimensions in real time and displayed within the virtual scene shown in the helmet. This is not limited to fusing the instrument panel and the view outside the window: the virtual scene can also display the aircraft's interior cabin environment and exterior environment, a three-dimensional reconstruction of the instrument display, and the wearer's own hands, legs, and so on.
The embodiments described above merely illustrate preferred implementations of the present invention and do not limit its scope. Without departing from the spirit of the invention's design, any modifications and improvements made by those of ordinary skill in the art to the technical solution of the present invention shall fall within the protection scope determined by the claims of the present invention.
Claims (10)
1. A method of mixing virtual reality, characterized in that it comprises:
Step S1: two cameras arranged at the positions of the human eyes perform adaptive spatial mapping of all the equipment, and of the occupants themselves, within the range and depth of the simulated cockpit; three-dimensional reconstruction is carried out over the visual range of the eyes, with fine reconstruction within the central range on which the eyes focus and coarse reconstruction outside the focus range;
Step S2: cameras fixed at set positions and angles around the simulated cockpit acquire real-time information on the feature points outside the cockpit, while the VR helmet and a locator provide the relative spatial position of the human body; the virtual scene is compensated and corrected so that depth information in the virtual scene matches real distances, yielding the relative positional relationship between the virtual scene and the VR helmet;
Step S3: the virtual image is rendered in layers by tracking the pupils and head movement; rendering proceeds layer by layer from the inside out, centered on the pupil position, with the rendering resolution decreasing from the inside out.
2. The method of mixing virtual reality according to claim 1, characterized in that:
In step S1, when the two cameras arranged at the positions of the human eyes perform adaptive spatial mapping of the equipment and occupants within the range and depth of the simulated cockpit: the spatial region captured by the two cameras is segmented, using three voxel sizes in ascending order, so that the same spatial region is partitioned into three mutually independent sets of voxels.
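The three-scale partition described above can be sketched by voxelizing the same captured region at three ascending voxel sizes, giving three independent occupied-voxel sets. The random point cloud and the sizes here are stand-ins for the cameras' actual output.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Return the set of occupied voxels (integer grid indices) for a
    point cloud partitioned at the given voxel size."""
    return {tuple(v) for v in np.floor(points / voxel_size).astype(int)}

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 2.0, size=(500, 3))  # stand-in for the scanned cabin region

# Three ascending voxel sizes -> three independent voxel sets over the same region.
voxel_sets = [voxelize(points, s) for s in (0.1, 0.25, 0.5)]
```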
3. The method of mixing virtual reality according to claim 2, characterized in that:
When performing three-dimensional reconstruction over the visual range of the eyes, fine within the central focus range and coarse outside it: the state of each voxel is determined and the non-empty voxel sets are obtained; the focus range of the eyes is tracked in real time; for the region within the central focus range, the voxel set of the smallest size is chosen; for the part outside the central focus range but within the visible area, a voxel set of a size larger than that used in the central focus range is selected; and for the rest of the eyes' visual range the voxel set of the largest size is selected, so that the reconstruction effect goes from fine to coarse outward from the focus range. Pixel-level texture mapping is then performed to obtain the three-dimensional texture effect of the virtual scene.
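The selection rule above — smallest voxels inside the focus range, intermediate voxels within the visible area, largest voxels elsewhere — reduces to thresholds on the angular distance from the gaze direction. The angles and sizes below are illustrative assumptions, not values given in the patent.

```python
def voxel_size_for(angle_from_gaze_deg,
                   focus_half_angle=5.0, visible_half_angle=20.0):
    """Choose the reconstruction voxel size for a point by its angular
    distance from the gaze direction (all numbers illustrative)."""
    if angle_from_gaze_deg <= focus_half_angle:
        return 0.1    # smallest voxels: fine reconstruction
    if angle_from_gaze_deg <= visible_half_angle:
        return 0.25   # intermediate voxels
    return 0.5        # largest voxels: coarse reconstruction
```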
4. The method of mixing virtual reality according to claim 1, characterized in that:
In step S2, when the cameras fixed at set positions and angles around the simulated cockpit acquire real-time information on the feature points outside the cockpit: the cameras are parameter-calibrated by the least-squares method; the initial positions of the feature points on the surface of the static cockpit are recorded; and the initial pose of the cockpit is obtained through stereo imaging principles and coordinate transformation, establishing an initial coordinate system.
5. The method of mixing virtual reality according to claim 4, characterized in that:
While the simulated cockpit is moving, the cameras fixed at set positions and angles around it capture the dynamically changing feature points in real time; feature points are extracted with the SURF algorithm and matched with the KLT algorithm, so that the feature points remain tracked and matched even when the cockpit moves quickly and its displacement changes.
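SURF requires the non-free opencv-contrib build, so it is not reproduced here; the KLT side of the claim can be illustrated with a single Lucas-Kanade step, which estimates a patch's translation by solving the 2×2 normal equations of the optical-flow constraint. The synthetic Gaussian patch is an assumed test input.

```python
import numpy as np

def lk_translation(patch0, patch1):
    """One Lucas-Kanade (KLT) step: solve Ix*dx + Iy*dy = -It in least
    squares over the whole patch to estimate its (dx, dy) translation."""
    Iy, Ix = np.gradient(patch0)          # np.gradient returns d/dy first
    It = patch1 - patch0
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)

# Smooth synthetic feature shifted right by one pixel between frames.
ys, xs = np.mgrid[0:32, 0:32].astype(float)
patch0 = np.exp(-((xs - 16.0) ** 2 + (ys - 16.0) ** 2) / 50.0)
patch1 = np.exp(-((xs - 17.0) ** 2 + (ys - 16.0) ** 2) / 50.0)
dx, dy = lk_translation(patch0, patch1)   # dx should be close to +1
```

In the pyramidal form of claim 8, this step would run first on the coarsest pyramid level, its estimate seeding the next level down.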
6. The method of mixing virtual reality according to claim 5, characterized in that:
The world coordinates of the feature points while the cockpit moves are obtained from the cameras fixed at set positions and angles around it, and the change of the feature points at each moment relative to the initial coordinates is computed to obtain the positional relationship of the cockpit; the VR helmet and the locator provide the relative spatial position of the human body; according to the position information of the cockpit and of the human body in the initial coordinate system, the virtual image is dynamically corrected by the inverse transformation, matching the sense of space and distance between the internal virtual scene and the external real scene.
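The dynamic correction above amounts to estimating the cockpit's rigid motion from its feature points and applying the inverse to the virtual image. Below is a minimal 2-D sketch using the SVD-based Kabsch method — an assumed concrete choice, since the claim does not name an estimator.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid motion (R, t) with dst ≈ src @ R.T + t,
    estimated by the SVD-based Kabsch method."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - c_src).T @ (dst - c_dst))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:     # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - c_src @ R.T

# Feature points before and after a 10-degree cockpit rotation plus a shift.
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.3, -0.1])
initial = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
moved = initial @ R_true.T + t_true

R, t = rigid_transform(initial, moved)
corrected = (moved - t) @ R       # inverse transform restores initial coordinates
```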
7. The method of mixing virtual reality according to claim 5, characterized in that:
The feature points used are circular, black-and-white high-contrast marks with a white outer ring and a black inner ring; when extracting feature points with the SURF algorithm, the difference-of-Gaussians (DoG) pyramid of the camera image is built and the first layer of each group of images is taken; each group of images is adaptively thresholded by the maximum between-class variance (Otsu) method to obtain a binary map, which serves as a constraint for feature-point detection, so that SURF detection is performed only within the inner edge regions of the feature points.
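The "maximum between-class variance" adaptive threshold named above is Otsu's method; a compact histogram-based implementation follows. The synthetic bimodal test image is an assumption standing in for a DoG-pyramid layer.

```python
import numpy as np

def otsu_threshold(img):
    """Maximum between-class variance (Otsu) threshold on an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Bimodal test image: dark marks (~40) on a bright background (~210) plus noise.
rng = np.random.default_rng(1)
img = np.where(rng.random((64, 64)) < 0.3, 40, 210)
img = np.clip(img + rng.integers(-10, 11, img.shape), 0, 255).astype(np.uint8)
t = otsu_threshold(img)
binary = img > t                            # constraint map for feature detection
```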
8. The method of mixing virtual reality according to claim 6, characterized in that:
When matching feature points with the KLT algorithm, the optical flow is first computed at the top level of the difference-of-Gaussians pyramid; the estimated motion then serves as the initial point for the next level down, iterating level by level until the bottom is reached.
9. The method of mixing virtual reality according to claim 1, characterized in that:
When rendering the virtual image in layers by tracking the pupils and head movement: different regions and different levels of the scene are rendered separately in the Unigine engine, taking the gaze position at the pupil center of the eye as the starting point and performing high-, medium-, and low-level rendering in turn from the inside out.
10. The method of mixing virtual reality according to claim 1, characterized in that:
The image-enhancement method of piecewise-linear gray-scale transformation is used to increase the distinguishability of the image; median filtering is then applied to reduce noise; candidate regions are rapidly segmented by an iterative method, and the pupil region is screened by its area and circularity characteristics; finally, an ellipse fit based on the least-squares method yields the center of the pupil, from which the focus range of the eyes is obtained.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810060883.3A CN108305326A (en) | 2018-01-22 | 2018-01-22 | A method of mixing virtual reality |
PCT/CN2018/107555 WO2019140945A1 (en) | 2018-01-22 | 2018-09-26 | Mixed reality method applied to flight simulator |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810060883.3A CN108305326A (en) | 2018-01-22 | 2018-01-22 | A method of mixing virtual reality |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108305326A true CN108305326A (en) | 2018-07-20 |
Family
ID=62866294
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810060883.3A Pending CN108305326A (en) | 2018-01-22 | 2018-01-22 | A method of mixing virtual reality |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108305326A (en) |
WO (1) | WO2019140945A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102568026A (en) * | 2011-12-12 | 2012-07-11 | 浙江大学 | Three-dimensional enhancing realizing method for multi-viewpoint free stereo display |
CN106055113A (en) * | 2016-07-06 | 2016-10-26 | 北京华如科技股份有限公司 | Reality-mixed helmet display system and control method |
CN106843456A (en) * | 2016-08-16 | 2017-06-13 | 深圳超多维光电子有限公司 | A kind of display methods, device and virtual reality device followed the trail of based on attitude |
US20170301137A1 (en) * | 2016-04-15 | 2017-10-19 | Superd Co., Ltd. | Method, apparatus, and smart wearable device for fusing augmented reality and virtual reality |
US20180008141A1 (en) * | 2014-07-08 | 2018-01-11 | Krueger Wesley W O | Systems and methods for using virtual reality, augmented reality, and/or a synthetic 3-dimensional information for the measurement of human ocular performance |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101231790A (en) * | 2007-12-20 | 2008-07-30 | 北京理工大学 | Enhancing reality flight simulator based on a plurality of fixed cameras |
US10176639B2 (en) * | 2014-11-27 | 2019-01-08 | Magic Leap, Inc. | Virtual/augmented reality system having dynamic region resolution |
CN107154197A (en) * | 2017-05-18 | 2017-09-12 | 河北中科恒运软件科技股份有限公司 | Immersion flight simulator |
CN108305326A (en) * | 2018-01-22 | 2018-07-20 | 中国人民解放军陆军航空兵学院 | A method of mixing virtual reality |
2018
- 2018-01-22 CN CN201810060883.3A patent/CN108305326A/en active Pending
- 2018-09-26 WO PCT/CN2018/107555 patent/WO2019140945A1/en active Application Filing
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019140945A1 (en) * | 2018-01-22 | 2019-07-25 | 中国人民解放军陆军航空兵学院 | Mixed reality method applied to flight simulator |
CN109712224A (en) * | 2018-12-29 | 2019-05-03 | 青岛海信电器股份有限公司 | Rendering method, device and the smart machine of virtual scene |
CN110460831B (en) * | 2019-08-22 | 2021-12-03 | 京东方科技集团股份有限公司 | Display method, device, equipment and computer readable storage medium |
CN110460831A (en) * | 2019-08-22 | 2019-11-15 | 京东方科技集团股份有限公司 | Display methods, device, equipment and computer readable storage medium |
US11263767B2 (en) | 2019-08-22 | 2022-03-01 | Beijing Boe Optoelectronics Technology Co., Ltd. | Method for processing image in virtual reality display device and related virtual reality display device |
CN114450942A (en) * | 2019-09-30 | 2022-05-06 | 京瓷株式会社 | Camera, head-up display system, and moving object |
CN111459274A (en) * | 2020-03-30 | 2020-07-28 | 华南理工大学 | 5G + AR-based remote operation method for unstructured environment |
CN111459274B (en) * | 2020-03-30 | 2021-09-21 | 华南理工大学 | 5G + AR-based remote operation method for unstructured environment |
CN111882608A (en) * | 2020-07-14 | 2020-11-03 | 中国人民解放军军事科学院国防科技创新研究院 | Pose estimation method between augmented reality glasses tracking camera and human eyes |
CN112308982A (en) * | 2020-11-11 | 2021-02-02 | 安徽山水空间装饰有限责任公司 | Decoration effect display method and device |
CN112562065A (en) * | 2020-12-17 | 2021-03-26 | 深圳市大富网络技术有限公司 | Rendering method, system and device of virtual object in virtual world |
CN112669671A (en) * | 2020-12-28 | 2021-04-16 | 北京航空航天大学江西研究院 | Mixed reality flight simulation system based on physical interaction |
CN116205952A (en) * | 2023-04-19 | 2023-06-02 | 齐鲁空天信息研究院 | Face recognition and tracking method and device, electronic equipment and storage medium |
CN116205952B (en) * | 2023-04-19 | 2023-08-04 | 齐鲁空天信息研究院 | Face recognition and tracking method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2019140945A1 (en) | 2019-07-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108305326A (en) | A method of mixing virtual reality | |
US10846903B2 (en) | Single shot capture to animated VR avatar | |
CN105427385B (en) | A kind of high-fidelity face three-dimensional rebuilding method based on multilayer deformation model | |
US20190200003A1 (en) | System and method for 3d space-dimension based image processing | |
US8218825B2 (en) | Capturing and processing facial motion data | |
JP2021525431A (en) | Image processing methods and devices, image devices and storage media | |
CN106896925A (en) | The device that a kind of virtual reality is merged with real scene | |
CN105869160A (en) | Method and system for implementing 3D modeling and holographic display by using Kinect | |
CN113366491B (en) | Eyeball tracking method, device and storage medium | |
CN109377557A (en) | Real-time three-dimensional facial reconstruction method based on single frames facial image | |
CN106997618A (en) | A kind of method that virtual reality is merged with real scene | |
CN107145224B (en) | Human eye sight tracking and device based on three-dimensional sphere Taylor expansion | |
CN107134194A (en) | Immersion vehicle simulator | |
US20230154101A1 (en) | Techniques for multi-view neural object modeling | |
JP2022501732A (en) | Image processing methods and devices, image devices and storage media | |
CN106981100A (en) | The device that a kind of virtual reality is merged with real scene | |
US10755476B2 (en) | Image processing method and image processing device | |
Beacco et al. | Automatic 3d character reconstruction from frontal and lateral monocular 2d rgb views | |
JP5362357B2 (en) | Capture and process facial movement data | |
CN107016730A (en) | The device that a kind of virtual reality is merged with real scene | |
Aso et al. | Generating synthetic humans for learning 3D pose estimation | |
CA3215411A1 (en) | Surface texturing from multiple cameras | |
Jędrasiak et al. | Interactive application using augmented reality and photogrammetric scanning | |
CN116844239A (en) | Amputation rehabilitation training system based on mixed reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
AD01 | Patent right deemed abandoned | Effective date of abandoning: 20240112 |