CN109564748A - Mixed photonic system - Google Patents
Mixed photonic system
- Publication number
- CN109564748A (application CN201780030255.4A)
- Authority
- CN
- China
- Prior art keywords
- signal
- image
- group
- channelizing
- real world
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Theoretical Computer Science (AREA)
- Optics & Photonics (AREA)
- Processing Or Creating Images (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
A VR/AR system, method, and architecture include an enhancer that simultaneously receives and processes real-world image-forming signals while generating synthetic-world image-forming signals, then interleaves/enhances these signals for further processing. In some embodiments, the real-world signal (possibly processed by the enhancer) is converted to IR (using, for example, a pseudocolor map) and interleaved with the synthetic-world signal (generated in IR) for continued processing, including visualization (conversion to the visible spectrum), amplitude/bandwidth processing, and output shaping, to produce a set of display-image precursors for the HVS (human visual system).
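The abstract's signal chain (real-world capture to an IR-domain pseudocolor representation, interleaving with a synthetic IR signal, then visualization back to the visible spectrum) can be illustrated with a minimal numerical sketch. All function names, the row-wise interleave, and the luminance-style pseudocolor weighting are illustrative assumptions, not the patented method.

```python
import numpy as np

def to_ir_pseudocolor(rgb_frame):
    """Map a real-world RGB frame into a single-channel IR-domain
    representation via a luminance-style pseudocolor weighting (assumed)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb_frame @ weights

def interleave(real_ir, synth_ir):
    """Interleave real-world and synthetic-world IR frames row by row
    into one combined signal for downstream processing."""
    combined = np.empty((real_ir.shape[0] * 2, real_ir.shape[1]))
    combined[0::2] = real_ir   # even rows: processed real-world signal
    combined[1::2] = synth_ir  # odd rows: synthetic-world signal
    return combined

def visualize(ir_signal):
    """Convert the IR-domain signal back to a visible-spectrum image
    (here: normalize and replicate to three channels)."""
    lo, hi = ir_signal.min(), ir_signal.max()
    norm = (ir_signal - lo) / (hi - lo + 1e-9)
    return np.stack([norm] * 3, axis=-1)

# A 4x4 real-world frame and a 4x4 synthetic IR frame
real = np.random.rand(4, 4, 3)
synth_ir = np.random.rand(4, 4)

combined = interleave(to_ir_pseudocolor(real), synth_ir)
display_precursor = visualize(combined)
print(display_precursor.shape)  # (8, 4, 3)
```

Amplitude/bandwidth processing and output shaping would be further stages operating on `combined` before visualization; they are omitted here for brevity.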
Description
Cross reference to related applications
This application claims the benefit of U.S. Patent Application Nos. 15/457,967, 15/457,980, 15/457,991, and 15/458,009, filed March 13, 2017, and claims the benefit of U.S. Patent Application Nos. 62/308,825, 62/308,361, and 62/…, all filed March 15, 2016. This application is also related to U.S. Patent Application Nos. 12/371,461, 62/181,143, and 62/234,942, the entire contents of which are expressly incorporated herein by reference in their entirety for all purposes.
Technical field
The present invention relates generally to video and digital image and data processing devices and networks that generate, transmit, switch, distribute, store, and display such data, and to non-video and non-pixel data processing in arrays, such as sensing; to the application and use of arrays and spatial light modulators and their data; and more specifically, but not exclusively, to digital video image display, whether on flat screens or flexible screens, in 2D or 3D, or as projected images, and to non-display data processed by device arrays, and to the spatial form and positioning of these processes, including compact devices such as flat-panel televisions and consumer mobile devices, and to data networks that provide the capture, transmission, distribution, organization, storage, aggregation, display, and projection of pixel signals or data signals or collections thereof.
Background art
The subject matter discussed in this Background section should not be assumed to be prior art merely as a result of its mention here. Similarly, it should not be assumed that any problem mentioned in this section, or associated with its subject matter, has previously been recognized in the prior art. The subject matter in this section merely represents different approaches, which in and of themselves may also be inventions.
The field of the invention is not a single field but a combination of two related fields, augmented reality and virtual reality; it seeks to position and provide an integrated mobile-device solution that resolves the critical problems and limitations of the prior art in both fields. A brief review of the background of these related fields will make clear the problems and limitations to be solved, and will set the stage for the design of the solutions proposed in this disclosure.
The two standard dictionary definitions of these terms are as follows (source: Dictionary.com):
Virtual reality: "a realistic simulation of an environment, including three-dimensional graphics, by a computer system using interactive software and hardware. Abbreviation: VR."
Augmented reality: "an enhanced image or environment as viewed on a screen or other display, produced by overlaying computer-generated images, sounds, or other data on a real-world environment"; "a system or technology used to produce such an enhanced environment. Abbreviation: AR."
From these definitions, non-technical though they are, it is apparent, including to those skilled in the relevant arts, that the essential distinction is whether the simulated elements constitute a complete, immersive simulation, or whether the simulated elements are instead superimposed over a clear, directly observed, otherwise unobstructed view of reality.
Somewhat more technical definitions are provided in the Wikipedia entries on these topics, which, given the depth and breadth of the contributions of the page editors, may be considered to represent the field adequately.
Virtual reality (VR), sometimes referred to as immersive multimedia, is a computer-simulated environment that can simulate physical presence in the real world or in an imagined world. Virtual reality can recreate sensory experiences, including virtual taste, sight, smell, sound, touch, and so on.
Augmented reality (AR) is a live, direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics, or GPS data.
Inherent in, but only implicit in, these definitions is the essential attribute of the mobile viewpoint. What distinguishes virtual or augmented reality from the more general class of computer simulation, whatever combination, merging, or synthesis with "real-time," "direct" imaging of reality (local or remote) may be present, is that the "simultaneously real" images of the simulated or mixed (augmented or "hybrid") reality move with the observer: as the viewer moves through the real world, the viewpoint moves with the observer.
This disclosure proposes that this more precise definition is needed to distinguish immersive display with static navigation of a simulated world (a simulator) from mobile navigation of a simulated world (virtual reality). A subclass of the simulator would then be the "personal simulator," or at most "partial virtual reality," in which a stationary user equipped with an immersive HMD (head-mounted display) and a haptic interface (for example, motion-tracking gloves) can achieve partial "virtual reality" navigation of the simulated world.
A CAVE system, on the other hand, would properly be classified as a limited virtual-reality system, because navigation is possible only within the dimensions of the CAVE, by means of a movable floor, and once the limits of the CAVE itself are reached, what remains is another form of "partial virtual reality."
Note the difference between a "moving" viewpoint and a "movable" viewpoint. A computer simulation, such as a video game, simulates a world or "reality," but unless the explorer moves through the simulated world in person, or directs the movement of another person or a robot, it can only be said to be "navigable" (and indeed, one of the main achievements of computer graphics over the past 40 years has been simply to "build" in software simulated environments that can be explored, that is, simulated worlds that are "navigable").
For virtual or hybrid (the author's preferred term) reality simulation, one important, defining characteristic is the mapping of the simulation (whether fully synthetic or mixed) onto real space. Such a real space can be as basic as a room in a laboratory or a sound stage, being simply a grid mapped at some scale to which the simulated world is calibrated.
This distinction is not evaluative. A partial-VR system that provides natural interfaces (head tracking, haptics, audio, and the like) in real time, but without locomotion or mapping to an actual real terrain, whether natural, artificial, or hybrid, offers far less value than one that also provides simulated physical interaction and sensory immersion. But without a sufficient feedback system, or more generally a whole-body, full-range-of-motion mechanical interface/feedback system with interaction surfaces and/or dynamically variable shape, one that supports the user's simulated body movement while remaining (perceptually) in place, over any terrain, in any static posture, whether standing, seated, or swaying, a VR system is by definition "partial."
Without such an ideal whole-body physical interface/feedback system, however, "complete," fully mobile VR is limited: the terrain of the VR world is confined to places that can be found in the real world, modified, or constructed from scratch. This limitation would severely restrict the range and capability of virtual-reality experiences.
But as will become apparent from this disclosure, this distinction makes a difference, because it draws a "bright line" around how existing VR and AR systems differ and what their limitations are, and provides the background that informs the teachings of this disclosure.
Having identified the missing but essential feature and requirement of a complete "virtual reality," the next step is to determine how to realize the implied problem of the "mobile viewpoint." The answer is that providing a moving view of the simulation requires two components, themselves realized by a combination of hardware and software: a dynamic image-display device through which the simulation can be viewed, and a motion tracker that can track the movement of the device containing the display across three axes of motion. This means measuring, from a minimum of three tracking points, the position of the three-dimensional viewing device over time (from two points, if the measuring device is mapped, the position on the third axis may be inferred), relative to a three-axis reference frame, which may be any arbitrary 3D coordinate system mapped to real space; although for the practical purpose of mechanically navigating the space, two axes will form a plane, the ground plane, level with respect to gravity, with the third axis, Z, perpendicular to that ground plane.
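The tracking requirement described above, measuring the position of the viewing device from a minimum of three tracking points relative to a gravity-leveled, ground-plane-aligned frame, can be sketched as follows. The three visor markers and the centroid/normal convention are illustrative assumptions, not a prescribed method.

```python
import numpy as np

def device_pose(p1, p2, p3):
    """Estimate the viewing device's position and heading from three
    tracked points on the device, in a frame where X/Y span the
    gravity-level ground plane and Z is perpendicular to it."""
    pts = np.array([p1, p2, p3], dtype=float)
    position = pts.mean(axis=0)  # device origin: centroid of the markers
    # The normal of the plane through the three points gives the facing direction.
    normal = np.cross(pts[1] - pts[0], pts[2] - pts[0])
    normal /= np.linalg.norm(normal)
    # Heading (yaw) about the vertical Z axis, from the normal's X/Y components.
    yaw_deg = np.degrees(np.arctan2(normal[1], normal[0]))
    return position, yaw_deg

# Three markers on a visor, in meters, in the room's mapped grid.
pos, yaw = device_pose([0.0, 0.0, 1.7], [0.1, 0.0, 1.7], [0.05, -0.02, 1.8])
print(pos)  # centroid of the three marker positions
print(yaw)  # -90.0
```

Sampling this pose repeatedly over time yields the position-and-orientation-versus-time function the passage describes; full 3-axis orientation (pitch and roll as well as yaw) follows from the same three points.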
In practice, solutions that determine this position and orientation accurately and frequently as a function of time require a combination of sensors and software, and progress on such solutions has been a main vector of development in the field of VR and AR hardware and software: mobile viewing devices and systems.
These are relatively new fields; given the span between the earliest experiments and today's practical technologies and products, it suffices to record the origins and then the current state of the art of the two classes of moving-visual simulation systems, together with those specific prior-art innovations that are significant to the development of this disclosure, or that help explain the significant points of difference or similarity between the prior art and the solutions of this disclosure.
Span associated analog and simulator from nineteen sixty-eight to the nineties latter stage, many innovation periods in the field VR and AR,
The middle many critical issues for realizing VR and AR have found initial or partial solution.
Initiative experiment and experimental wear-type of the Ivan Sutherland and its assistant Bob Sprouell since nineteen sixty-eight
Display system is typically considered the origin for marking these related fieldss, although the work of early stage, substantially concept development are first
In this, experimentally implements any type of AR/VR realization for the first time and immerse and navigate.
The birth of the fixed-simulator system can be traced to the addition of computer-generated imagery to flight simulators, generally considered to have begun in the mid-to-late 1960s. This was at first limited to the use of CRTs, with a full-focus image displayed directly between the CRT and the user, until 1972, when the Singer-Link company released a collimated projection system that projected a far-focus image through a beam-splitting system, improving the field of view to about 25-35 degrees per unit (100 degrees with three units, as used in a single-pilot simulator).
That benchmark was improved upon only in 1982, when the Rediffusion company introduced a wide-field system, the Wide-Angle Infinity Display System, which achieved 150 and ultimately 240 degrees of FOV by using multiple projectors and one large, curved collimating screen. It was at this stage that the fixed simulator could finally be described as achieving true immersion to a significant degree in virtual reality, without using an HMD to isolate the person and eliminate interference from peripheral visual cues.
But while the Singer-Link company was releasing its collimated screen systems for simulators, a stepping stone toward the VR-type experience, the first very limited commercial helmet-mounted displays were being developed, first for military use, integrating a visor-based electronic targeting system with motion tracking of the helmet itself. These initial developments are generally credited to the South African Air Force in the mid-1970s (followed by the Israeli Air Force of that period), and can be said to be a beginning: a basic AR, or mediated/mixed-reality, system. These early helmet systems, minimal in graphics but still pioneering, realized a limited synthesis of position-coordinated targeting information overlaid on the motion-tracked visor and actuated by the user. Later came the invention of Steve Mann: the first "mediated reality" mobile viewing system, the first-generation "EyeTap," which superimposed graphics on glasses.
Mann's later versions used an optical recombination system, based on beam-splitter/combiner optics, to merge the actual and the processed image. This later work precedes that of Chunyu Gao and Augmented Vision Inc.; the latter essentially proposes a doubled Mann system, in which a processed real image is combined with an optically generated image, where Mann's system accomplishes its processed reality electronically. In Mann's system a live view through the optics is retained, but in Gao's system all viewed imagery is processed, and as an option any direct-view image can be eliminated entirely (Chunyu Gao, U.S. Patent Application 20140177023, filed April 13, 2013). The "light-path folding optics" structures and methods specified in Gao's system can be found in other optical HMD systems.
By 1985, Jaron Lanier and VPL Research had been established to develop HMDs and the "DataGlove." Thus by the 1980s the three main VR and AR development paths, those of Mann, of Lanier, and of the Rediffusion simulators, were all highly active fields of development, with some of the most important advances and some of the basic solution types that, in most cases, continue at the state of the art to the present day.
The growing sophistication of computer-generated imagery (CGI), the continuous improvement of game machines (hardware and software) with real-time interactive CG technology, the extension of AR toward greater system integration across multiple systems, and more limited VR mobility were among the main development trends of the 1990s.
The CAVE system, developed at the Electronic Visualization Laboratory of the University of Illinois at Chicago, made its worldwide debut in 1992, proposing a limited form of mobile VR and a novel simulator (Carolina Cruz-Neira, Daniel J. Sandin, Thomas A. DeFanti, Robert V. Kenyon, and John C. Hart, "The CAVE: Audio Visual Experience Automatic Virtual Environment," Communications of the ACM, Vol. 35(6), 1992, pp. 64-72). Beyond Lanier's HMD/DataGlove combination, the CAVE combined a WFOV multi-wall simulator "stage" with a haptic interface.
Meanwhile Louis's Rosenberg (Arms Rosenberg) is developed in Armstrong's the Air Force Research Laboratory
A kind of fixation " virtual " VR system of stationary part AR, " virtual clamp " system (1992) and Jonathan's Walden, early in
To during nineteen ninety, it is regarded as initial exploration project within 1985, also will commercially make one's first appearance in 1992.
Mobile AR integrated into a multi-unit mobile-vehicle "war game" system, combining real and virtual vehicles in "augmented simulation" ("AUGSIM"), saw its next major advance in the work of Loral WDL, circa 1993. Writing later, in 1999, in "Experiences and Observations in Applying Augmented Reality to Live Training," project participant Jon Barrilleaux of Peculiar Technologies reviewed the results of the 1995 SBIR Final Report and identified what remain, to the present day, continuing problems facing mobile VR and (mobile) AR:
AR vs. VR tracking
In general, commercial products developed for VR have good resolution but lack the absolute accuracy and wide-area coverage needed for AR, much less for use in AUGSIM.
VR applications, in which the user is immersed in a synthetic environment, are more concerned with relative tracking than absolute accuracy. Since the user's world is completely synthetic and self-consistent, the fact that his/her head has just turned 0.1 degree is much more significant than knowing that it now points within 10 degrees of true north.
AR systems, such as AUGSIM, do not have this luxury. AR tracking must have good resolution, so that virtual elements appear to move smoothly in the real world as the user's head turns or the vehicle moves, and it must have good accuracy, so that virtual elements are correctly overlaid on, and occluded by, objects in the real world.
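Barrilleaux's contrast between relative and absolute accuracy can be made concrete with simple geometry: an absolute heading error displaces a world-anchored overlay laterally by roughly distance times tan(error). The 50 m range below is an illustrative assumption.

```python
import math

def overlay_offset_m(distance_m, heading_error_deg):
    """Lateral misregistration of a virtual element anchored in the real
    world, given the range to the element and the absolute heading error."""
    return distance_m * math.tan(math.radians(heading_error_deg))

# In a self-consistent VR world a 10-degree absolute error is invisible;
# in AR, anchoring a virtual vehicle 50 m away, it shifts the overlay by:
print(round(overlay_offset_m(50, 10), 2))   # 8.82 m -- obviously wrong
# whereas a 0.1-degree error shifts it by under 9 cm:
print(round(overlay_offset_m(50, 0.1), 3))  # 0.087 m
```

This is why the passage demands both fine resolution (for smooth apparent motion) and absolute accuracy (for correct overlay and occlusion) from AR tracking.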
As computing and network speeds continued to improve through the 1990s, new projects in outdoor augmented-reality systems were launched, including the United States Naval Research Laboratory's BARS system: "BARS: Battlefield Augmented Reality System," Simon Julier, Yohan Baillot, Marco Lanzagorta, Dennis Brown, Lawrence Rosenblum; NATO Symposium on Information Processing Techniques for Military Systems, 2000. Abstract: "The system consists of a wearable computer, a wireless network system, and a tracked see-through head-mounted display (HMD). The user's perception of the environment is enhanced by superimposing graphics onto the user's field of view. The graphics are registered (aligned) with the actual environment."
Non-military-specific development was also under way, including the work of Hirokazu Kato of the Nara Institute of Science and Technology on ARToolKit, later released and further developed at HITLab, which introduced software development kits and protocols for viewpoint tracking and virtual-object tracking.
During this period, these milestones are generally cited as the most important, although other researchers and companies were very active in the field.
While military funding of large-scale development and testing of AR for training simulation is well documented, research addressing the evident need for other system-level designs and system demonstrations was proceeding in parallel with the militarily funded work.
The most important non-military experiment was the AR version of the video game Quake, ARQuake, initiated and led by Bruce Thomas at the Wearable Computer Laboratory of the University of South Australia and published as "ARQuake: An Outdoor/Indoor Augmented Reality First Person Application," Fourth International Symposium on Wearable Computers, pp. 139-146, Atlanta, Georgia, October 2000 (Thomas, B., Close, B., Donoghue, J., Squires, J., De Bondi, P., Morris, M., and Piekarski, W.). Abstract: "We present an architecture for a low-cost, moderately accurate six-degrees-of-freedom tracking system based on GPS, a digital compass, and vision-based tracking of fiducial markers."
Another system, whose design and development began in 1995, is the system developed by the author of this disclosure. Its initial purpose was to realize a hybrid of outdoor augmented reality and television programming, referred to as "endless live broadcast," which was further developed in the later 1990s and essentially completed in 1999. Commercial efforts at the time included a funded effort to launch an original video-game/TV hybrid power vehicle, followed by another version developed for high-end themed resorts. By 2001 it had been disclosed in confidence to companies including those of Ridley and Tony Scott, particularly their joint venture Airtightplanet (whose other affiliates included Renny Harlin, Jean Giraud, and the European publisher Heavy Metal), as a proposed joint venture with ATP, with the author of the disclosure as supervising business principal, around the then-current "OtherWorld" and "OtherWorld Industries" project and venture investment.
Below is the system design as finalized in 1999/2000, with an abstract of its components.
Abstract (historical file version, 2003) from the "OtherWorld Industries business proposal file":
Technical background: a proprietary, state-of-the-art integrated "open field" simulation and mobile virtual reality: tools, facilities, and technology. What follows is only a partial list and overview of the relevant technologies, which together form the backbone of the proprietary system. Some technology components are proprietary; some come from outside suppliers. But the unique system combining these proven components will be absolutely proprietary, and revolutionary:
Interaction with the VR-ALTERED WORLD:
1) Mobile, military-grade VR equipment, to immerse guests/participants in the OTHERWORLD-enhanced landscape. Throughout their "adventure" (that is, each move they make exploring the OTHERWORLD around the resort), guests/players and employees/performers are captured in real time by motion-capture sensors and digital cameras (with automatic matting technology), and through their visors they can see each other overlaid with computer-simulated imagery. The visors are binocular: either translucent flat-panel displays, or opaque flat-panel displays placed behind forward-mounted binocular cameras.
The "synthetic elements" superimposed on the field of view by the flat-panel displays may include changed parts of the landscape (or the entire landscape, digitally altered). In fact, those "synthetic" landscape parts that replace actually existing ones are generated from original 3D photographic "captures" of the various parts of the resort (see #7 below). As photographically based, geometrically exact "virtual spaces" in the computer, they can be digitally altered in any way while retaining the photo-real quality and the geometric/spatial precision of the original capture. This permits precise combination of real-time digital photography with digitally altered portions of the same space.
Other "synthetic elements" superimposed by the flat-panel displays include people, creatures, atmospheric FX, and computer-generated or computer-altered "magic." These are shown by the display (transparent or opaque) as if they were real elements in the field of view. By using position data and motion-capture data for guests/players and employees/performers, and by matting them in real time with multiple digital cameras, all calibrated to the previously "captured" version of each region of the resort (see #4 and #5 below), the synthesized elements can be matched in real time, with absolute precision, to the real elements shown through the display.
Thus a photo-real computer-generated dragon can appear to fly behind a real tree, around it, then up and over, and land on the real castle of the resort, after which the dragon can "burn" it with computer-generated fire. In the flat-panel display (translucent or opaque), the flames appear to "blacken" the top of the castle. The effect works because, through the visor, the upper part of the castle is "covered" by a computer-altered version of the 3D "capture" of the castle held in the system files.
2) Physical electro-optical-mechanical gear for combat between real people and virtual people, creatures, and characters. "Haptic" interfaces provide motion sensing and other data, together with vibration and resistance feedback, allowing real-time interaction of real people with virtual people, creatures, and magic. For example, a haptic device in the form of a sword hilt provides data as the guest/player swings it, and physical feedback when the guest/player appears to "strike" a virtual ogre, creating the illusion of combat. All of this is combined in real time and shown through the binocular flat-panel displays.
3) Open-field motion-capture equipment. Mobile and fixed motion-capture equipment (similar to that used for The Matrix films) is deployed throughout the resort. Data points on the themed "gear" worn by guests/players and employees/performers are tracked by cameras and/or sensors to provide motion data for interaction with the virtual elements shown in the field of view on the binocular flat panels of the VR visors. The motion-capture data output makes possible (given sufficient computing/rendering capability and the use of motion editing and move libraries) CGI-altered versions of the guests/players and employees/performers, along the lines of the Gollum character of the second and third "The Lord of the Rings" films.
4) Use of LAAS and GPS data, real-time laser-ranging data, and triangulation techniques (including from the Moller Aerobot UAV) to augment the motion-capture data. The additional "position data" permits more effective (and error-corrected) real-time integration with the synthesized elements.
From a press release of the UAV manufacturer:
July 17. A week earlier, Honeywell signed a contract to establish the initial network of Local Area Augmentation System (LAAS) stations, and some test stations are already operating. The system can guide aircraft precisely to landings at airports (and heliports) with accuracy measured in inches. The LAAS system is estimated to enter service in 2006.
5) Automatic real-time open-field "broadcast" matting. In combination with the motion-capture data that permits interaction with simulated elements, guests/participants will be digitally imaged with P24 (or equivalent) digital cameras, using proprietary Automatte software to automatically isolate (matte) the appropriate elements and integrate the view with the synthesized elements. This technology will become part of the suite used to ensure correct foreground/background separation when digital elements are superimposed.
6) Military-grade simulation hardware and technology combined with state-of-the-art game-engine software. Data from the motion-capture systems is integrated by military simulation software and game engines for interaction between "synthetic" elements and real-time elements such as prop swords and the haptic devices. These software components provide the AI code for animating synthetic people and creatures (AI, or artificial-intelligence, software such as the Massive software used to animate the armies in the Lord of the Rings films), generate realistic water, clouds, fire, and the like, and otherwise integrate all the elements just as military simulation software and computer games do.
7) Image-based capture of actual locations, creating realistic digital virtual sets with the image-based techniques begun by Dr. Paul Debevec (the basis of the "bullet time" FX of The Matrix). The "base" virtual locations (inside and outside the resort) cannot be distinguished from the real world, because they derive from photographs and from the true illumination of the location at the time of "capture." A series of high-quality digital images, combined with light-probe and laser-ranging data and the appropriate image-based graphics software, is all that is needed to reconstruct in a computer the actual and virtual 3D spaces, exactly matching the original versions.
Whether the "virtual sets" are captured from rural exterior locations or inside and around a real castle, once these "base" or default versions are digitized, with lighting parameters and all other data from the exact initial time of capture, they can be altered: the lighting changed, elements added that do not exist in the real world, and existing elements changed and "dressed" to create the illusion version of our world.
When guests/players and employees/performers pass through the "portals" at different resort locations ("gateways" are the effective "crossing points" from "our world" to the "OtherWorld"), a calibration procedure occurs. At that point, data taken from the guest/player or employee/performer positioned at the "gateway" is used to "lock" the virtual space in the computer to the coordinates of the "gateway." The computer "knows" the coordinates of that gateway point within the virtual version of the entire resort obtained through the image-based "capture" process described above. The computer can therefore "line up" its virtual resort with what the guest/player or employee/performer sees in front of the VR goggles. Thus, through the translucent version of the binocular flat-panel display, the virtual version superimposed on the real village matches it with high precision.
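The "gateway" calibration described above amounts to computing, once, the transform between the gateway's known coordinates in the captured virtual resort and its measured coordinates in the real grid, then applying that transform to every virtual element. The sketch below, with assumed names and a 2D ground-plane frame, handles translation plus heading; it is an illustration, not the proposal's actual procedure.

```python
import math

def calibrate(gateway_virtual, gateway_real, virtual_heading_deg, real_heading_deg):
    """Return a function mapping virtual-resort coordinates into the
    real-space grid, locked at the gateway crossing point."""
    dtheta = math.radians(real_heading_deg - virtual_heading_deg)

    def to_real(vx, vy):
        # Rotate about the gateway in the virtual map, then translate.
        ox, oy = vx - gateway_virtual[0], vy - gateway_virtual[1]
        rx = ox * math.cos(dtheta) - oy * math.sin(dtheta)
        ry = ox * math.sin(dtheta) + oy * math.cos(dtheta)
        return gateway_real[0] + rx, gateway_real[1] + ry

    return to_real

# Gateway at (100, 200) on the captured virtual map, measured at (3, 4)
# in the resort's real grid; headings already aligned, so rotation is zero.
to_real = calibrate((100, 200), (3, 4), 0.0, 0.0)
print(to_real(110, 200))  # a point 10 m east of the gateway -> (13.0, 4.0)
```

Once locked, every captured virtual element (the altered castle top, the dragon's flight path) can be placed through `to_real`, which is why the superimposed virtual village lines up with the real one.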
Alternatively, using "opaque" binocular flat-panel display goggles or a helmet, the wearer can walk confidently while seeing only the virtual version of the holiday village before him, because the landscape of the virtual world will exactly match the landscape of the path he is actually walking.
Of course, what the goggles show him can be altered: a changed red sky, boiling storm clouds that do not actually exist, and, atop the castle ramparts, a dragon "setting fire" to the castle battlements.
Not to mention an army charging on a mountain a thousand meters away!
8) Supercomputer rendering and simulation facility for the holiday village. A key resource will be a supercomputer rendering and simulating each complex holiday-village scene, making very high-quality, near-film-quality simulation possible.
The improvements in graphics and gameplay of standalone game consoles (PlayStation 2, Xbox, GameCube) and desktop computers are well known. However, such anticipated improvements in the gaming experience are based on improvements to the processor and supporting systems of a single console or personal computer. Imagine, instead, the capacity of a supercomputing center placed behind the gaming experience. This alone would be a huge leap forward in graphics and gameplay quality. This aspect of the mobile-VR venture, by itself, would be an other-worldly experience.
From a review of the foregoing it is apparent, and will be apparent to those skilled in the relevant arts (VR, AR, and the wider simulation field), that any proposed personal hardware or software system improving the prior art must consider wider system parameters, and must be assessed against those explicit system-level assumptions.
The substance of this proposal, therefore, emphasizes hardware technology systems belonging to the portable AR and VR categories, and indeed a fusion of the two; their most robust version is wearable technology, and the preferred wearable version is a kind of HMD technology that can only become a complete solution if the whole system to which it belongs is considered, or reconsidered. It is therefore necessary to provide a broader history of VR, AR, and simulation systems, because proposals for new HMD technology and trends in commercial products are, for example, too narrow, failing to consider or examine system-level assumptions, requirements, and new possibilities.
A similarly detailed historical review of the major development milestones of HMD technology is unnecessary, because a broader, system-level history will suffice to provide a framework for explaining the limitations of the prior art in HMDs, its current state, the reasons for the proposed solution, and why that solution addresses the problems identified.
To understand and identify the limitations of the prior art in HMDs, it is sufficient to begin with the following.
Within the category of head-mounted displays (which, for the purposes of this disclosure, includes helmet-mounted displays), two principal subtypes have been identified to date: VR HMDs and AR HMDs, following the meanings of the definitions already given. Here, and within the category of AR HMDs, two classes are distinguished: "video see-through" and "optical see-through" (the latter more typically referred to as "optical HMDs").
In a VR HMD, the user views a single panel or two individual displays. The typical form of such an HMD is that of goggles or a mask, although many VR HMDs have the appearance of a welding helmet, with a bulky enclosing visor. To ensure optimal video quality and immersion without distraction, such systems are completely enclosed, with a light-absorbing material around the periphery of the display.
In U.S. Provisional Application No. 60/544,591, "SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR MAGNETO-OPTIC DEVICE DISPLAY", filed February 12, 2004 and incorporated herein, the author of this disclosure previously proposed two types of VR HMD. One simply proposed replacing the conventional direct-view LCD with a chip-type embodiment of the principal subject of that application, the first practical magneto-optical display, whose superior performance characteristics include high frame rate, among further advantages: an improvement to display technology as a whole, used in that embodiment for an improved VR HMD.
The second version, anticipating the teachings of the present disclosure, was a new type of remotely generated image display: the image would be generated, for example, in a vehicle cab, then transmitted via a fiber-optic bundle and distributed by a special fiber-array structure (the structures and methods disclosed in that application), building on the established practice of fiber-optic faceplates and using new methods and structures for remote video transmission through optical fiber.
Although the core MO technology was not initially commercialized for HMDs but rather for projection systems, these developments are relevant to some aspects of the present proposal, and moreover are not generally known in this field. In particular, the second version disclosed a method of transmitting video images over optical fiber from an image engine that is not integrated into, or adjacent to, the HMD optics, prior to other more recent proposals.
A key consideration regarding the practicality of fully enclosed VR HMDs for mobility, even in a strictly controlled stage environment with a uniform floor, is that, for movement to be safe, the virtual world being navigated must map 1:1 to human motion, with no deviation from the real surface topography or from safe motion paths.
Moreover, as Barrilleaux of Loral WDL, the developer of BARS, and other researchers in the field have observed and summarized over nearly the past quarter century, for an AR system to be practical, a very close correspondence must be obtained between the virtual (synthetic, CG-generated imagery) and the real-world terrain and built environment, including (unsurprisingly for a system developed by the army for urban warfare) the geometry of moving vehicles.
More generally, therefore, for mobile VR or AR of any form, there must be a 1:1 positional correspondence between any "virtual" or synthetic element and any real-world element.
Within the category of AR HMDs, the difference between "video see-through" and "optical see-through" is the difference between a user viewing a transparent or semi-transparent pixel array directly, on a display set immediately before the observer as part of the eyeglass optics themselves, and a user viewing a semi-transparently projected image on an optical element likewise set directly before the observer, that image being generated by a (usually directly adjacent) micro-display and relayed through optics of some form to the facing optical element.
The principal, and perhaps only, practical type of direct see-through display (historically) of the transparent or semi-transparent kind is configured as an LCD without an illuminated backplane. Specifically, AR video see-through glasses have the wearer view one or more optical elements comprising a transparent optical substrate on which an LCD light-modulator pixel array has been fabricated.
For applications similar to Mann's original "EyeTap", in which text/data is displayed or projected directly on the facing optic, calibration to real-world terrain and objects is not needed, although some degree of positional correlation is helpful for contextually "tagging" items in the field of view with informational text. This is the primary use of the Google Glass product, although, as of the drafting of this disclosure, many developers are focused on developing AR-type applications that apply more than text to the live scene.
Beyond loose, approximate positional correlation in a rough 2D plane or coarse cone, the main problem of such "calibration" to the terrain or objects in the user's field of view, for either video or optical see-through systems, is determining the relative positions of objects in the observer's environment. Without reference to substantially real-time spatial positioning data and/or a 3D map of the local environment, the calculations of consistent perspective and relative size cannot be performed.
Critical aspects of perspective, beyond relative size, include realistic lighting/shading from any point of observation, with shadows dependent on the direction of illumination. Finally, the occlusion of objects from any given viewing position is the critical optical cue for perceiving perspective and relative distance, and for localization.
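Given real-time 3D positions and depths, the two cues just named, relative size and occlusion, reduce to simple computations. The following is a hedged sketch under a pinhole-camera simplification (the focal length and all names are illustrative assumptions, not taken from the disclosure):

```python
def apparent_size(true_size_m, distance_m, focal_px=800.0):
    """On-image size of an object under a pinhole-camera model:
    proportional to its true size, inversely proportional to distance."""
    return focal_px * true_size_m / distance_m

def occludes(depth_a_m, depth_b_m):
    """Object A occludes object B along a shared line of sight
    exactly when A is nearer to the observer."""
    return depth_a_m < depth_b_m

# A 2 m object at 4 m appears twice as large as the same object at 8 m,
# and the nearer object occludes the farther along a shared sight line.
print(apparent_size(2.0, 4.0))   # 400.0
print(apparent_size(2.0, 8.0))   # 200.0
print(occludes(4.0, 8.0))        # True
```

Without the positioning data and environment map discussed above, neither function has valid inputs, which is the point the text is making.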
No video see-through or optical see-through HMD can be designed independently of the question of how such data is to be provided, whether in a video or an optical see-through system, for practical mobile VR: for the wearer's observation of the surrounding space, safe movement, and pathfinding. Are these data provided externally, locally, or from multiple sources? If partly external and partly on the HMD, how does this affect the design and performance of the entire HMD system? Does this question bear on the choice between video and optical see-through, given weight, balance, volume, data-processing requirements, lag between components, other influences and affected parameters, and the details of display choice and optics?
Among the technical parameters and problems addressed during the evolution of VR HMDs, the principal ones include enlarging the field of view; reducing latency (lag, and the discrepancy between motion-tracking sensors and changes of virtual perspective); and improving resolution, frame rate, dynamic range/contrast, and other general display qualities, along with weight, balance, volume, and general ergonomics. The details of image collimation and other display optics have been improved, effectively solving the problem of "simulator sickness", a major issue of the early period.
Along with improvements in these general technical categories and in weight, size/volume, and balance, the weight and volume of displays, optics, and other electronics have tended to decrease.
Fixed VR devices are commonly used for night-vision systems in vehicles, including aircraft; mobile night-vision goggles, however, may be considered a form of intermediated viewing similar to mobile VR, because what the wearer essentially watches is the real scene in real time (IR imaging), but through a video screen rather than in "see-through" form.
This subtype is similar to what Barrilleaux, in the same 1999 reference reviewed above, defined as "indirect view display". He proposed this definition for a class of AR HMD with no actual "see-through", in which what is seen on the display is exclusively a merged/processed real/virtual image; it could encompass any VR-type or night-vision system.
A night-vision system, however, is not a fusion or blending of a virtual synthetic landscape with the real; it is a direct transmitted video image of IR sensor data, interpreted by video signal processing as a monochrome image of varying intensity according to the IR signature. As a video image, it is indeed suitable for real-time text/graphics overlay, in the same simple form as the original EyeTap concept, and as Google has stated to be the main purpose of its Glass product.
How and what data are provided, and from what reference source or sources, to a mobile VR or mobile AR system, or to the now-similar "indirect view display" with its real-time-processed blended video feed, is a question common to both categories. To achieve effective fusion of the virtual and real landscape into a consistent combined view, these are design parameters and problems that must be considered in designing any new and improved mobile HMD system, whatever its type.
AR software and data processing have evolved to address these problems, building on the early work of the system developers already cited. An example is the work of Matsui and Suzuki of Canon Corporation, in their U.S. patent application "Mixed reality space image generation method and mixed reality system", U.S. Patent Application No. 10/951,684, filed September 29, 2004 (US Publication No. 20050179617; U.S. Patent No. 7,589,747). Their abstract:
"A mixed reality space image generating apparatus for generating a mixed reality space image formed by superimposing a virtual space image on a real space image obtained by capturing a real space includes an image composition unit (109), which superimposes the virtual space image in consideration of occlusion by objects in the real space that shield the virtual space image, and an annotation generation unit (108), which further superimposes images to be displayed without considering any occlusion. In this way, a mixed reality space image that displays naturally and facilitates display can be generated."
The purpose of the system is to superimpose a fully rendered industrial product (such as a camera) on a mockup (a stand-in pillar); a pair of optical see-through HMD glasses and the mockup are provided with position sensors. A real-time, pixel-by-pixel lookup-and-comparison procedure is used to simulate the pixels from the mockup, and the CG-generated virtual model is superimposed on the composited video feed (buffer-delayed to achieve slightly lagged layering). The system also adds annotation graphics (computer imagery). The basic sources of correct, error-free occlusion data in the composite, used to determine matting, are the motion sensors and a predetermined lookup table on the mockup, with pixel comparison used to pull a hand mask and a mockup mask.
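The per-pixel compositing logic described in the Matsui/Suzuki abstract can be sketched as follows: the virtual image overwrites the real video frame except where a real occluder (here, the hand mask) must remain in front, and annotations are drawn last, ignoring occlusion. This is a simplified illustration, not the patented implementation; the frame and mask names are assumptions:

```python
def composite(real_px, virtual_px, virtual_valid, hand_mask, annotation_px):
    """Per-pixel mixed-reality composite.
    real_px, virtual_px, annotation_px: pixel values (annotation_px may be
    None where no annotation covers this pixel).
    virtual_valid: True where the virtual model covers this pixel.
    hand_mask: True where a real occluder (the hand) is in front."""
    out = real_px
    if virtual_valid and not hand_mask:   # virtual object, unless occluded
        out = virtual_px
    if annotation_px is not None:         # annotations ignore occlusion
        out = annotation_px
    return out

# The virtual pixel shows through except under the hand; the annotation
# is drawn regardless of occlusion.
print(composite(10, 200, True, False, None))  # 200 (virtual)
print(composite(10, 200, True, True, None))   # 10  (hand occludes)
print(composite(10, 200, True, True, 255))    # 255 (annotation on top)
```

In the actual system this decision runs for every pixel of the buffered video frame, which is why the mask-pulling comparison must itself run in real time.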
Although this system is not suitable for mobile AR, VR, or any hybrid of the two, it is an example of a simple, though not fully automatic, attempt at a system for analyzing real 3D space and positioning virtual objects correctly in perspective.
In the field of video and optical see-through HMDs, very little progress has been made on the display, or on the optics-and-display system, capable of delivering, even under ideal assumptions about the HMD design, a perfectly computed mixed-reality perspective view: a satisfactory, convincing, and accurate merged perspective, including handling the correct perspective ordering of merged elements and proper occlusion for any given observer position in real space.
One claimed solution, perhaps the most effective at even partially solving this problem, is a uniquely integrated HMD system (as opposed to attempting to solve these problems in some generic way with software/photogrammetry/data-processing and transmission systems independent of the HMD, referenced earlier). This is the proposal of Gao in U.S. Patent Application No. 13/857,656 (US Publication 20140177023), "Apparatus for optical see-through head mounted display with mutual occlusion and opaqueness control capability."
Gao begins his survey of the AR HMD field with the following observation:
"There are two types of ST-HMDs: optical and video (J. Rolland and H. Fuchs, 'Optical versus video see-through head-mounted displays,' in Fundamentals of Wearable Computers and Augmented Reality, pp. 113-157, 2001). The main drawbacks of the video see-through approach include: degradation of the image quality of the see-through view; image lag due to processing of the incoming video stream; potential loss of the see-through view due to hardware/software failure. In contrast, the optical see-through HMD (OST-HMD) provides a direct view of the real world through a beamsplitter and thus has minimal effect on the view of the real world. It is highly preferred for demanding applications where a user's awareness of the live environment is critical."
Gao's observations on the problems of video see-through are, however, unqualified. First, by identifying prior-art video see-through with a single device type, the LCD, he asserts (without comparison, and omitting any standard of verification) that the degraded see-through image of an LCD is inevitable. Those skilled in the art will recognize that this view of low image quality derives from results obtained with early see-through LCD systems, before the recent acceleration of progress in the field. Nor is it at all obvious that an optical see-through system fares better: comparing the reprocessing or mediation of the "true" "see-through image" by its many optical elements against a state-of-the-art LCD or other video-viewing display technology, the final result may be comparatively degraded, or no better than Gao's own proposal.
Another problem with this unsupported summary is the lag assumed for such see-through systems relative to other systems that must also process incoming real-time imagery. In such a case, any comparison of speed must be the result of a detailed analysis of the competing systems' components and their performance. Finally, the conjecture of "potential loss of the see-through view due to hardware/software failure" is essentially gratuitous and arbitrary, unverified by any exact analysis of the comparative robustness or stability of video versus optical see-through schemes, or of their particular versions, component technologies, and system designs.
Beyond these initial problems of erroneous and biased comparison within the field, there are qualitative problems with the proposed solution itself, including omissions and failures to consider the proposed HMD as a complete HMD system, and as a component in a wider AR system containing the previously cited data-acquisition, analysis, and distribution problems. However important the HMD itself and its design problems may be, the HMD cannot be permitted to treat as a "given" the data, or the processing capability of a certain level and quality, needed to generate the altered or mixed imagery; whether that capability is facilitated or hindered cannot be treated as a given at all. In addition, the specification of the problem solution omits the full dimensions of the problem of visually integrating the real and the virtual on a mobile platform.
Turning to that disclosure and its system of teachings, specifically:
As already described in the background, Gao proposes using two display-type devices, since the specification of a spatial light modulator that selectively reflects or transmits the real-time image is essentially the specification of an SLM, operationally identical to its purpose in any display application.
The output images from the two devices are then combined in a beamsplitter/combiner, presumably registered on a pixel-by-pixel basis, although no specific explanation is given beyond a statement about the precision of this device.
However, to achieve this merging of two pixelated arrays, Gao specifies a duplication of what he calls "folded optics", which is nothing other than a doubled version of the Mann EyeTap scheme: in total, two "folded optics" elements (e.g., planar gratings/HOEs or other compact prisms or "flat" optical elements, one per light source), plus two objective lenses (one for the wavefront of the real view, the other for the focused image of the combined output at the beamsplitter/combiner).
Multiple optical elements are therefore required (he offers various conventional optical variations): 1) the light passes through a first reflective/folding optical element (planar grating/mirror, HOE, TIR prism, or other "flat" optical device), and thence to an objective lens, which passes it to the next planar grating/mirror, HOE, TIR prism, or other "planar" optical device "folding" the optical path again, all to keep the entire optical system relatively compact and contained within the envelope of two rectangular optical relay areas; from the folded optics, the beam reaches the SLM via the beamsplitter/combiner; there it is reflected or transmitted on a pixelated (sampled) basis and thereby variably modulated (altering the contrast and intensity of the real image, modifying gray levels, etc.), and the now-pixelated real image returns to the splitter/combiner. Meanwhile the display synchronously generates the virtual or synthetic/CG image, presumably also calibrated to ensure easy integration with the modified, pixelated/sampled real wavefront; the two are integrated at the beamsplitter, pixel by pixel, and the multi-step modified and pixelated sample of the real scene then passes through an eyepiece objective and thence to another "folded optics" element, to be reflected out of the optical system into the observer's eye.
In total, the modified, pixelated sample portion of the real-image wavefront passes through seven optical elements, not counting the SLM, before reaching the viewer's eye; the generated synthetic image passes through only two.
The problem of precisely aligning optical image combiners, down to the pixel level, whether for reflected light from a laser-interrogated image-sampling system or for the small functional SLM/display devices combined in image generation, and of keeping them aligned, especially under mechanical vibration and thermal stress, is considered non-trivial in the art.
Digital-projection free-space beam-combining systems, which combine the outputs of high-resolution (2k or 4k) red, green, and blue image engines (in general, images generated by DMD or LCoS SLMs), are expensive to realize, and maintaining their alignment is non-trivial; and some of those designs are simpler than the seven-element format.
Moreover, these complex multi-engine, multi-component optical combiner systems are not as compact as an HMD requires.
Monolithic prisms have been developed, such as the T-Rhomboid combiner developed and sold for the life-science market by Agilent, dedicated to solving the problems that free-space combiners exhibit in existing applications.
And although companies such as Microvision have successfully deployed their SLM-based technology, originally developed for micro-projection, into HMD platforms, those optical setups are generally far less complex than what Gao proposes.
Additionally, it is difficult to determine the rationale for two image-processing stages and computational iterations across two platforms, and why they are needed to achieve a smooth, integrated fusion of the real and virtual wavefront inputs realizing correct occlusion/obscuration of the combined scene elements.
The greatest problem Gao appears to address is that the synthetic image struggles to compete with the brightness of the real image; the main task of the SLM therefore appears to be to selectively reduce the luminance of part, or all, of the real scene. It may further be inferred that, in reducing the intensity of the occluded real-scene elements, for example by minimizing, in a time-division multiplexing scheme, the duration for which a DMD mirror remains in the reflecting position, an occluded pixel could simply be left "off", although this is not specified by Gao; nor are the details of how the SLM accomplishes its image-modification function.
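The inferred SLM strategy, attenuating the real scene per pixel by limiting how long each DMD mirror stays in the reflecting position within a frame, amounts to duty-cycle (pulse-width) modulation. The following sketch shows only that arithmetic; Gao does not specify this mechanism, and the function name and frame timing here are assumptions:

```python
def mirror_on_time_us(attenuation, frame_period_us=8333.0):
    """Time (microseconds) a DMD mirror spends in the reflecting state
    per frame in order to pass the requested fraction of real-scene
    luminance.  attenuation = 1.0 passes the scene unchanged; 0.0 leaves
    the pixel 'off' (fully occluded), as inferred in the text above."""
    if not 0.0 <= attenuation <= 1.0:
        raise ValueError("attenuation must lie in [0, 1]")
    return attenuation * frame_period_us

# At roughly 120 Hz (~8333 us per frame): half luminance means the mirror
# reflects for about half the frame, and an occluded pixel never reflects.
print(mirror_on_time_us(0.5))   # 4166.5
print(mirror_on_time_us(0.0))   # 0.0
```

The scheme's cost is visible in the arithmetic: any per-pixel gray level consumes temporal resolution within the frame, which is one reason high frame rates matter so much for this class of design.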
Among the many parameters that must be simultaneously computed, calibrated, and aligned is determining exactly which pixel from the real field is the calibration pixel for which resulting pixel. Without exact matching, misalignment and erroneous occlusion will multiply ghosting and overlap, especially in a mobile context. The reflective optics that pass the real-scene wavefront portion to the objective lens occupy a true perspective position relative to the scene which, first, differs from the observer's perspective position in the scene; it is not flat, nor located at a fixed point; it is only a wavefront sample, not a position. Moreover, it shifts as the wearer moves, and the synthetic-image processing unit cannot know this in advance. Because of these facts, the number of variables in this system is very large.
If these are addressed, and the goals of the solution become more specific, it becomes clear that there may be simpler ways to accomplish this than using a second display (in a binocular system, adding two displays, in addition to the two specified SLMs).
Secondly, it is evident on inspection of the scheme that the durability of this complex system is in doubt: it carries the defect of multiple accumulated alignment tolerances, with component wear accumulating over time. A multi-element path, with misalignment of the combined beams accumulating through thermal and mechanical-vibration effects, plus the further complications introduced by the complexity of a seven-element optical system, means the system itself is likely to degrade over time, especially for the external real-time image wavefront.
In addition, as already noted, the problem of computing the spatial relationships between real and virtual elements is non-trivial. To design such a system, these computations must drive two (in a binocular system, four) display-type devices, most likely of different types (hence with different gamuts, frame rates, etc.), adding a very demanding system-design parameter to the complexity.
Furthermore, to provide high-performance imagery without ghosting or lag, and without causing eye strain and fatigue in the visual system, a high frame rate is essential. Only if a transmissive rather than reflective SLM were used could the system design be slightly simplified; yet even with the faster ferroelectric LCoS (FLCoS) micro-displays, frame rates and image speed remain far below those of TI's DLP (DMD) devices.
However, since higher HMD resolution is also needed, at least to realize a wider FOV, using high-resolution DMDs (such as 2k or 4k devices) means resorting to a very expensive solution: DMD yields at the known feature sizes and counts are low, with defect rates higher than consumer or enterprise production and cost can usually tolerate, so systems using them today are very expensive, such as the d-cinema projectors on the market, commercialized by TI's OEMs Barco, Christie, and NEC.
Although the planar-waveguide projection technology used for optical see-through HMDs (e.g., Lumus, BAE, etc.) would intuitively seem an easy step, occlusion is neither a design goal of, nor within the scope and capability limits of, those systems. To approach it, essentially replicating Gao's method of modulating the real image and then combining the two images with conventional optics, would likewise rely on a large number of planar optical elements to realize the combination within a relatively compact space.
To conclude this background review and return to HMDs, the current leaders in the two major categories, optical see-through HMDs and classic VR HMDs, and the prior art can be summarized as follows, noting that other variants of both optical see-through HMDs and VR HMDs are either commercially available or under substantial research and development, with a large body of commercial and academic work, including product announcements, publications, and patent applications, since Google's Glass and Oculus VR's Rift:
Google, with Glass, the commercially leading mobile AR optical HMD: as of this writing, Glass has established breakthrough public visibility and a leading market position for the optical see-through HMD class.
They followed others to market, however: others had developed and deployed products in the major defense/industrial sectors, including Lumus and BAE (Q-Sight holographic waveguide technology). Among other recent market-stage and concept-stage competitors, for example, UK National Physical Laboratory research is being commercialized by TruLife Optics, also in the holographic-waveguide field, where they claim comparative advantages.
Many military helmet-mounted display applications, and Google's official primary use case for Glass, superimpose, as previously noted, text and symbolic graphic elements on the viewing space, requiring only coarse positional association; this may be sufficient for many initial, simple mobile AR applications.
However, even for information-display applications, it is evident that the greater the density of tag information for the items and terrain in the space the viewer faces (and, ultimately, is surrounded by), the greater the demands on that space: ordering/layering of tags according to the perspective/relative position of the tagged elements, and overlap, that is, partial occlusion of tags by real elements in the field of view, and not merely overlap among the tags themselves, necessarily become requirements of even "basic" optical-view information-display systems, in order to manage visual clutter.
Moreover, tags must reflect not only the relative positions of the tagged elements in the perspective view of real space but also automated (computed) priority, whether predetermined or software-derived, and user-specified priority and currency, with tag size and transparency being the two principal visual cues by which the graphics system reflects levels of the information structure; all of this must be managed and implemented.
The question that follows immediately is how, in detail, the translucency and overlap/occlusion problems of tags and strong graphic elements, and the relative-luminance problem of powered elements, are to be handled through these basic optical see-through HMDs (whether of the monocular-reticle type or the binocular full-glasses type) and their strong, generated video display components, especially under bright outdoor lighting and under very dim outdoor conditions. Night use, if the practicality of these display types is to be fully extended, is clearly the extreme case of the low-light problem.
Therefore, once we pass beyond the most limited use-case conditions of the passive optical see-through HMD type, with increasing information density (to be expected as such systems succeed commercially, with typically dense urban or suburban tagging and information from commercial enterprises) and with usage parameters under bright and dim conditions adding constraints, it is evident that the "passive" optical see-through HMD can neither escape nor cope with the problems and demands of any practical, real implementation of mobile augmented reality.
Then, passive optical leads directly to HMD and must be considered as realizing the endless integral mould of mobile AR HMD, and is recalling
Get up, will be considered merely as being the transition stepping-stone to activity system.
Oculus Rift VR (Facebook) HMD: Somewhat like the impact of the Google Glass product-marketing campaign, but with Oculus actually leading the field by solving, or making significant progress on, certain major threshold obstacles to a practical VR HMD (rather than following, as Google followed Lumus and BAE), the Oculus Rift VR HMD is, at this writing, the leading pre-release VR HMD product, entering and creating broad market acceptance for consumer and business/industrial VR.
The basic threshold advances of the Oculus Rift VR HMD can be summarized in the following list of product features:
o A significantly widened field of view, achieved by using a single 7-inch-diagonal 1080p-resolution display positioned a few inches from the user's eyes and divided into binocular viewing zones on the single display device. The current FOV at this writing is 100 degrees (improving on its original 90 degrees), compared with the roughly 45 degrees in total that was the common size of pre-existing HMDs. Separate binocular optics realize the stereoscopic visual effect.
o Significantly improved head tracking, resulting in low lag. This is an advance in motion sensors and software, drawing on the small motion-sensor technology (accelerometers, MEMS gyroscopes, etc., for 3-D position tracking) that migrated from the Nintendo Wii to mobile-phone sensors via Apple and other fast followers, and that now appears in the Playstation PSP and Vita, the Nintendo DS and current 3DS, the Xbox Kinect system, and other hand-held devices with built-in motion sensors. The current head tracking implements a multi-point infrared optical system with a cooperating external sensor.
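The sensor-fusion approach behind such low-lag head tracking can be illustrated with a minimal sketch. This is a generic complementary filter, not code from the patent or from any named product; the sample rate, gyro bias, and blend factor are illustrative assumptions.

```python
def complementary_filter(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Blend the integrated gyro rate (smooth but drifting) with the
    accelerometer-derived angle (noisy but drift-free)."""
    gyro_estimate = pitch_prev + gyro_rate * dt
    return alpha * gyro_estimate + (1.0 - alpha) * accel_pitch

# Head tilting at a constant 10 deg/s, tracked with a biased gyro.
pitch = 0.0
dt = 1.0 / 250.0                    # assumed 250 Hz IMU sample rate
for step in range(1, 251):          # one second of samples
    true_pitch = 10.0 * step * dt
    gyro_rate = 10.0 + 0.5          # constant 0.5 deg/s gyro bias
    pitch = complementary_filter(pitch, gyro_rate, true_pitch, dt)

print(round(pitch, 2))  # close to the true 10 deg despite the bias
```

The accelerometer term keeps the gyro bias from accumulating, which is the basic reason phone-class MEMS parts became usable for head tracking.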
o Low latency, the combined result of improved head tracking and fast software updates to the interactive game software system, although limited by the intrinsic response time of the display technology used; the initial LCD was replaced by a faster OLED. "Low persistence" is a form of buffering that helps keep the video stream smooth, working in combination with the higher switching speed of the OLED display.
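Why low persistence reduces perceived motion blur can be shown with simple arithmetic: while the eye tracks a world-fixed point during a head turn, a frame held on the panel slides across the retina for as long as it is lit. This sketch is illustrative only; the 10 pixels/degree figure is a rough assumption for an HMD of this class, not a number from the patent.

```python
def smear_pixels(head_deg_per_s, persistence_ms, pixels_per_degree):
    """Retinal smear, in display pixels, while the eye tracks a
    world-fixed point: the frame is held for the persistence time and
    slides across the retina during that hold."""
    return head_deg_per_s * (persistence_ms / 1000.0) * pixels_per_degree

full = smear_pixels(100.0, 16.7, 10.0)  # full-persistence 60 Hz hold
low = smear_pixels(100.0, 2.0, 10.0)    # ~2 ms low-persistence strobe
print(full, low)
```

Cutting the illumination window from a full 16.7 ms frame to ~2 ms cuts the smear by the same factor, which is what fast-switching OLEDs make possible.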
o Lighter weight, smaller volume, better balance, and overall improved ergonomics, obtained by using a ski-goggle form factor and its materials and mechanical platform.
To summarize, the net benefit of combining these improvements is that, although such a system may contain nothing structurally or operationally new, the net effect of improved components, a particularly effective design (design patent US D701,206), and proprietary software has produced a validated breakthrough in performance and a general-market VR HMD.
Following their lead and, in many cases, adopting their methods, and in light of the success of the Oculus VR Rift configuration, many VR HMD product developers, brand-name companies, and venture companies have altered contemporaneous product designs or announced product plans since the initial demonstrations at the 2012 electronics shows and the Oculus VR Kickstarter funding campaign.
Among these fast followers, and others who strategically retooled to follow the Oculus VR template, Samsung (whose development pattern this document notes) and Sony, with its Morpheus, have designs very similar to the Oculus VR Rift. Start-ups that have drawn attention in this field include Vrvana (previously True Player Gear), GameFace, InfiniteEye, and Avegant.
None of these system configurations appears identical to Oculus VR's, although some use two panels and others four; InfiniteEye uses a four-panel system to stretch the FOV to a claimed 200+ degrees. Some use LCDs, others OLEDs. Optical sensors are used to improve the precision and update rate of the head-tracking system.
All of these systems realize essentially stationary, or highly constrained, mobility. The motion-tracking systems, whether on-board or based on active optical markers, are designed for enclosed spaces such as a living room, operating room, or simulator stage.
The systems differing most from the Oculus VR scheme are Avegant's Glyph and the Vrvana Totem.
The Glyph in effect implements a display solution that follows previously established optical see-through HMD solutions and structures, using a Texas Instruments DLP DMD to generate a projected micro-image on a reflective planar optical element, configured and operated much like the planar optical elements of optical see-through HMDs, with the difference that a high-contrast, light-absorbing backplane structure realizes a reflective/indirect micro-projector display type, and the video image is of the general opaque, non-see-through kind.
Here, however, as disclosed and discussed at length above, when a DLP DMD or other MEMS component is used, the established limitations on increasing display resolution and other system performance beyond 1080p/2k are those of cost, manufacturing yield and defect rate, durability, and reliability of such systems.
In addition, the limits on the expansion/magnification factor, image size, and FOV of planar optical elements (grating structures, HOEs, or others) constrain any extension of SLM image size, while the interaction with, and strain on, the human visual system (HVS), particularly the focusing system, imposes limits of safety and comfort for the viewer. User reactions in the Google Glass trials, which used a similarly sized but lower-resolution image, suggest that a higher-resolution, brighter but equally small image area would be more taxing, posing a challenge to the HVS. Dr. Eli Peli, an official consultant to Google, issued a caution to Google Glass users in an interview with the online site BetaBeat (May 19, 2014), predicting some degree of eye strain and discomfort, followed by a revised statement (May 29, 2014) attempting to limit the cases and scope of the potential problem. At issue was a mode of eye-muscle use for which the muscles are not designed, nor intended for prolonged use; the approximate reason given in the revised statement was the small, off-axis position of the displayed image, which forces the user's eyes upward.
However, the specific combination of eye-muscle actions required to focus on a small fraction of the true FOV cannot be assumed to be the same as that required for eye movement over the entire true FOV. In fact, the small, minor adjustments of the focusing muscles are more constrained and limited than the range of motion involved in scanning the natural FOV. Therefore, as is known in the art, repetitive motion confined within a restricted range of motion, here in the focusing direction, can be expected, given the nature of the HVS, to increase strain beyond the normal operating range, while also imposing requirements of very small, controlled fine adjustments.
An added complexity is that the level of detail in the controlled eye-movement region may escalate quickly: in scenes with complex, detailed motion, the resolution demands can raise eye fatigue beyond that of precision-instrument work. No developer of optical viewing systems has treated this problem rigorously, and these problems parallel the eye fatigue, headaches, and dizziness that Steve Mann reported over many years of using his EyeTap system (reportedly improved in part in the current digital EyeTap by moving the image into the central field of view, but never systematically studied; the matter has received only limited commentary, focused on parts of the problem and on the eye fatigue that can develop from close work and "computer vision syndrome").
However, the limited public comments provided by Google's doctor repeatedly assert that, in general, Glass as an optical viewing system is deliberately intended for occasional use, not for extended or high-frequency viewing.
Another way of understanding the Glyph scheme is that, at the highest level, it follows the Mann digital EyeTap system and structural arrangement, with variations arranged for optically isolated VR operation and using lateral-projection planar deflection optics, through which the current optical view of the system passes.
The Vrvana Totem departs from the Oculus VR Rift by adopting the "indirect view display" scheme of Jon Barrilleaux, adding a pair of conventional video cameras to allow forward-facing image capture and switching between captured video and generated imagery, merged on the same in-goggle OLED display screen. Vrvana states in its marketing materials that it can implement this very basic "indirect view display" fully in accordance with the AR schema and modes that Barrilleaux identified. Of course, virtually any other VR HMD of the Oculus VR generation could be fitted with such conventional cameras, albeit with some impact on the weight and balance of the HMD, but at least to that extent.
From the discussion above, it is apparent that in the "video see-through HMD" category, or the "indirect view display" field, almost no substantial progress has been made outside the night-vision-goggle category, which is well developed as a subtype; beyond that, apart from adding text or other simple graphics to live imagery by video-processor methods known in the art, any AR capability has been lacking.
Further, regarding existing limitations of VR HMDs: all such systems using OLED and LCD panels suffer from relatively low frame rates, which cause motion lag and latency and have negative physiological effects on some users, falling into the broad category of "simulator sickness." It may also be noted that in the digital stereoscopic projection systems of cinema, realized commercially with systems such as RealD, the frame rate is likewise not high enough for projectors based on the Texas Instruments DLP DMD or on Sony LCoS. In some studies a fraction of the audience, reportedly up to 10% of those participating, experienced headaches and related symptoms. Some of these are specific to those individuals, but much of the effect can be traced to the limitation of frame rate. Moreover, as described above, Oculus VR has implemented a "low persistence" buffering system to compensate for the pixel-switching/frame rates of the OLED displays in use, which at this writing are still not high enough.
A further influence on the performance of existing VR HMDs is the resolution limit of existing OLED and LCD panel displays, which partly motivates the use of 5-7-inch-diagonal displays mounted at a distance from the viewing optics (and the viewer's eyes) to achieve sufficient effective resolution, and which contributes to the bulk, size, and balance of existing and planned products, which are much bigger, bulkier, and heavier than most other optical headwear products.
Some partial improvement can be expected from the use of curved OLED displays, which may further improve FOV without adding volume. But bringing them to market in sufficient quantity, with acceptable yields, requires substantial additional investment in factory capacity, making this prospect impractical in the near term; and it would only partially solve the volume and size problems.
For completeness, mention must be made of video HMDs intended for watching video content but lacking interactivity or any motion-sensing capability, and therefore lacking the ability to navigate a virtual or mixed (mixed-reality/AR) world. Over the past fifteen years such video HMDs have improved substantially in effective FOV, resolution, and viewing comfort/ergonomics, and they provided the development path that current VR HMDs have been able to build upon. But they, too, are limited by the core capabilities of the display technologies used, following the limitations observed for OLED, LCD, and DMD-based reflection/deflection systems.
Other important variations on the projected-image, transparent-glasses optics paradigm include those from Osterhout Design Group, Magic Leap, and Microsoft (HoloLens).
Although these variations have some relative advantages and disadvantages with respect to one another and to the other prior art detailed above, they all retain the limitations of the basic approach.
On more basic and universal common ground, they are also limited by the fundamental type of display/pixel technology used, namely the frame rate/refresh of the existing core display technologies, whether fast LC, OLED, or MEMS, and regardless of whether mechanically scanned fiber input or other disclosed optical systems are used to relay the display image to the viewing optics; all of these remain insufficient for the high-quality, easy-to-view (for the HVS), low-power, high-resolution, high-dynamic-range, and other display performance parameters that, individually and jointly, would help realize general-market, high-quality, comfortable AR and VR.
To summarize the state of the prior art with respect to the details discussed above:
" high visual acuity " VR is at many aspects substantially from FOV, incubation period, head/motion tracking, light-weight, size
Improved with volume.
But frame rate/waiting time and resolution ratio and significant inference degree, weight, size and volume are by available core
The limitation of display technology.
Modern VR is limited to the static or height-limited and limited mobile use in small controlled space.
Closed VR versions based on optical see-through systems are configured as lateral-projection/deflection systems in which an SLM projects an image to the eye through a series of optical elements; their limited performance is reflected in an image expansion that, compared with the gross area of a standard spectacle lens, is no greater than the output of the SLM (DLP DMD, other MEMS, or FLCoS/LCoS). The eye-strain risk and eye-muscle demands of extended viewing of "feature-length" in-depth versions are a further limit on practical acceptance. SLM type and display size also foreclose any practical path to scaling the cited technologies to higher-resolution SLMs to improve resolution and overall performance.
Optical see-through systems, by generally restricting eye-muscle use to a relatively small region, share the same potential for eye fatigue, requiring relatively small and frequent eye-tracking adjustments within those constraints, and for more than brief periods of use. The Google Glass design attempts to reflect the expectation of finite-duration use by positioning the optical element upward, away from the direct resting position of eyes looking straight ahead. But users have reported eye fatigue, as widely documented by the media in texts from, and interviews with, Google Glass Explorers.
Because labels must be organized in perspective relative to real-world objects, optical see-through systems are restricted in the density of overlaid semi-transparent information. Even for graphical information-display applications, the requirements of mobility and information density leave the passive optical view restricted.
The "indirect view display" aspect is realized in the form of night-vision goggles, and by the Oculus VR competitor Vrvana, whose proposed Totem, fitted with binocular video cameras, is merely suggested as adaptable to AR.
One heavily promoted proposal, although claimed to be an optical see-through display, is actually more of an "indirect view display" with quasi-see-through aspects, operating as an improved projection display by using an SLM device to sample a portion of the true wavefront and digitally alter that portion.
The number of optical elements in the optical path of part of the original wavefront (and, it should be added here, an optical region much smaller than that of the conventional lens in traditional eyewear), a number that is seven or thereabouts, introduces opportunities for image aberration, artifacts, and loss, while requiring a complex optical alignment system; free-space alignment of this complexity across so many elements is uncommon in the field and, where required, is expensive, hard to maintain, and not robust. The contemplated method by which the SLM manages alteration of the real-scene wavefront for the particular requirements has likewise not been specified or validated. Coordinating signal processing among 2-4 display devices (depending on monocular versus binocular systems) is also non-trivial, including definitively determining which pixel from the real field is the calibration pixel for the pixel being synthesized. Pre-computing in perspective to establish the proper registration between synthesized elements and the real scene is very demanding, especially as the individual moves through information-dense terrain of varied topography; vehicle mounting can only further exacerbate this problem.
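The registration problem described above, deciding which display pixel corresponds to a given real-world point, reduces in its simplest form to a pinhole projection re-run with the head pose every frame. The following sketch uses hypothetical intrinsics and is not taken from the patent:

```python
def project_point(fx, fy, cx, cy, x, y, z):
    """Pinhole projection of a head-frame point (x, y, z), z forward,
    into pixel coordinates, given focal lengths fx/fy and principal
    point (cx, cy) in pixels."""
    return (fx * x / z + cx, fy * y / z + cy)

# Hypothetical intrinsics for a 1920x1080 eye buffer.
u, v = project_point(1000.0, 1000.0, 960.0, 540.0, 0.5, 0.0, 2.0)
print(u, v)  # (1210.0, 540.0): anchor for a point 0.5 m right, 2 m ahead
```

The difficulty the text points to is not this arithmetic but keeping the pose (and therefore every projected label) correct at display frame rates while the wearer moves.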
Compared with the task of building the proposed optics, or even of reducing them to a relatively compact form factor, there are countless additional problems in developing the complete system. Size, balance, and weight are only one of many consequences of the number and required placement of the various processing and optical-array elements; yet compared with the other problems and limitations referenced, they are relatively small, however serious for actual field deployment of such a system, whether for military use, hardened industrial use, or consumer use.
A 100% "indirect view display" shares similar requirements in critical respects with the above proposal, apart from the details of display-unit count and alignment; the optical-system, pixel-matching, and perspective problems therefore call into question the degree to which all key parameters of such a system would demand real-time, per-eye "brute force" computation against a synthetic CG 3-D mapping space, with the required storage, coordinated with real-time perspective imagery. The problem only grows: all the computation must be performed on the video imagery captured by the forward-facing cameras (the basic Barrilleaux, and now possibly Vrvana, designs), forwarded to a processor non-local to the HMD and/or the wearer, together with the synthesized elements, for compositing.
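The scale of this computation-and-forwarding burden can be bounded from below by raw video bandwidth alone. The resolutions and frame rates below are illustrative assumptions, not figures from the patent:

```python
def raw_gbps(width, height, bits_per_pixel, fps, eyes=2):
    """Uncompressed data rate, in Gbit/s, for forwarding stereo
    camera/display video frames off the headset."""
    return width * height * bits_per_pixel * fps * eyes / 1e9

print(round(raw_gbps(1920, 1080, 24, 60), 2))  # stereo 1080p, 60 fps
print(round(raw_gbps(7680, 4320, 24, 60), 2))  # stereo 8k-class, 60 fps
```

Roughly 6 Gbit/s for stereo 1080p and nearly 100 Gbit/s for stereo 8k-class streams, before any compositing work, which illustrates why forwarding raw frames to a non-local processor is so punishing for a mobile system.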
What a true mobile system, whether VR or AR, needs in order to realize immersion and registration to the real environment is as follows:
Ergonomic optics and viewing systems that minimize any undue demands on the human visual system. This is required to realize the more extended use that mobility implies.
A wide field of view, ideally including peripheral vision: 120-150 degrees.
A high frame rate, preferably 60 fps per eye, to minimize the latency and other artifacts typically caused by the display.
Effective resolution at a comfortable unit-to-face distance. A usable benchmark for the maximum effective resolution is effective 8k, or "retinal display." The distance should be similar to that of traditional eyewear, which typically uses the bridge of the nose as the balance point. Collimation and optical-path optics establish the appropriate virtual focal plane, and this virtual focal plane, together with the actual distance of the optical elements from the eyes, realizes this effective display resolution.
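The "effective 8k ≈ retinal display" benchmark can be sanity-checked in pixels per degree, taking roughly 60 pixels/degree (one pixel per arcminute) as the conventional retinal-acuity criterion; the FOV figures below are illustrative assumptions:

```python
def pixels_per_degree(horizontal_pixels, horizontal_fov_deg):
    """Angular pixel density across a display spread over a given FOV."""
    return horizontal_pixels / horizontal_fov_deg

print(pixels_per_degree(1080, 100))  # 1080p-class panel over 100 deg FOV
print(pixels_per_degree(7680, 128))  # 8k-class panel over a wide FOV
```

At ~10.8 px/deg, a 1080p-class panel spread over a 100-degree FOV is an order of magnitude short of the ~60 px/deg criterion, while an 8k-class panel over a wide FOV just reaches it, which is consistent with the document's choice of 8k as the benchmark.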
High dynamic range, matching as closely as possible the dynamic range of the real-time, real view.
On-board motion tracking that determines the orientation of both head and body, whether against a pattern known in advance or within the wearer's field of view in real time; this can be supplemented by external systems in a hybrid scheme.
A display optical system that can perform rapid compositing between the real-scene wavefront and any synthetic components within the envelope of the human visual system. Passive devices should be used as much as possible, to minimize the burden on on-board (on the HMD and wearer) and/or external processing systems.
A display optical system that is relatively simple and rugged, with few optical elements, few active device elements, active devices of simple design, very low weight and thickness, and robustness under mechanical and thermal stress.
Light weight, small volume, balanced center of gravity, and form factors suited to design configurations acceptable to professional users, including military and hardened-environment industrial users, rugged sports applications, and general consumer and business use. These factors apply equally from eyewear makers such as Oakley, Wiley, Nike, and Adidas to professional sport-goggle makers such as Oakley, Adidas, Smith, and Zeal.
A system that can switch variably between a VR experience, while retaining full mobility, and a variably occluding, see-through, integrated hybrid-viewing AR system. A system that can not only manage the wavelengths incident on the HVS but also, through sensors and combinations thereof, extract useful information from the wavelengths of interest; IR, visible light, and UV are typical wavelengths of interest.
Summary of the invention
Systems and methods are disclosed for re-conceiving, in a manner liberated from legacy equipment and system designs, the processes of capturing, distributing, organizing, transmitting, storing, and presenting to the human visual system (or to non-display data-array output functions). These processes are impaired by the non-optimized functioning of their operational stages; but decomposing the operational stages into photonic signal-processing and array signal-processing stages allows the equipment of each stage to function optimally, which in practice means designing and operating the equipment of these stages and processes at the frequencies at which it works most efficiently, then performing efficient frequency/wavelength modulation/shifting to move back and forth between those "convenient frequencies," with the further net effect of more efficient all-optical signal processing, both local and long-distance.
The following summary of the invention is provided to aid understanding of some technical features related to the signal processing, and is not intended to be a complete description of the invention. A full appreciation of the various aspects of the invention can be gained by taking the entire specification, claims, drawings, and abstract as a whole.
Embodiments of the present invention may involve decomposing the components of the aggregate pixel-signal "modulator" into discrete signal-processing stages, and thus into a telecom-type network, which may be compact or spatially remote. The most basic version proposes, in operation, a three-stage "pixel signal processing" sequence comprising: pixel logic "state" encoding, usually performed in an integrated pixel modulator, separated from the color-modulation stage, which in turn is separated from the intensity-modulation stage. Further detailed and more elaborate pixel-signal-processing systems are illustrated, including sub-stages and options, particularly suited to efficient realization of magneto-optic subsystems, and including: 1) an efficient illumination source stage, in which bulk light, preferably invisible near-infrared light, is converted into an appropriate mode and launched into a channelized array, feeding stage 2), pixel logic processing and encoding; followed by 3) an optional non-visible-energy filtering and recovery stage; 4) an optional signal-modification stage, for improving/modifying such attributes as signal division and mode; 5) frequency/wavelength modulation/shifting, with added bandwidth and peak-intensity management; 6) optional signal amplification/gain; 7) an optional analyzer, for completing certain MO-type light-valve switching operations; and 8) certain wireless (stage) arrangements for pixel-signal processing and distribution. DWDM-type variants are also provided.
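The staged decomposition described above can be caricatured in software as a chain of independent per-pixel stages. This is only a conceptual sketch of the separation of state, color, and intensity modulation; the stage functions and wavelength values are invented for illustration and do not model the optical hardware:

```python
def state_encode(p):
    # Stage: pixel logic on/off coding, decoupled from everything else.
    p["on"] = p["value"] > 0.0
    return p

def color_modulate(p):
    # Stage: wavelength selection as a separate step (values invented).
    p["wavelength_nm"] = {"R": 630, "G": 532, "B": 465}[p["channel"]]
    return p

def intensity_modulate(p):
    # Stage: brightness applied last, decoupled from state and color.
    p["intensity"] = p["value"] if p["on"] else 0.0
    return p

PIPELINE = [state_encode, color_modulate, intensity_modulate]

def process(pixel, stages=PIPELINE):
    for stage in stages:  # in the disclosure, stages may live in
        pixel = stage(pixel)  # separate, even remote, devices
    return pixel

out = process({"value": 0.8, "channel": "G"})
print(out["on"], out["wavelength_nm"], out["intensity"])  # True 532 0.8
```

The point of the analogy is that, once the stages are decoupled, each can be optimized (or relocated across a network) independently, which is the telecom-architecture claim the text makes.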
Configurations of the system are proposed that provide versions of, and a path toward, all-optical networks, with major gains in cost and efficiency: in particular, more efficient excitation and processing of image information at the scene and in recording. Finally, new hybrid magneto-photonic devices and structures are proposed, along with other devices previously impractical for the systems and structures of this disclosure, which can make maximal use of the pixel-signal-processing system and around which such systems are best configured, including new and/or modified versions of devices hybridizing magneto-optic and non-magneto-optic effects (such as slow light and inverse magneto-optic effects), realizing new basic switches and new hybrid 2-D and 3-D photonic-crystal structure types that can improve devices for many (if not most) MPC types and applications.
In a copending application of the inventor of this disclosure, a new class of display system is proposed that decomposes the components of the normally integrated pixel-signal "modulator" into discrete signal-processing stages: the basic logic "state" encoding usually realized in an integrated pixel modulator is separated from the color-modulation stage, which is separated from the intensity-modulation stage. This may be regarded as applying a telecom signal-processing architecture to the problem of visual-image pixel modulation. In general, three signal-processing stages and three separate device assemblies are proposed for operation, but additional signal-effect operations can be added and considered, including polarization properties, conversion from conventional signals to other forms such as polaritons and surface plasmons, signal superposition (e.g., the basic pixel on/off state superimposed on other signal data), and the like. One principal result is a highly distributed video-signal-processing architecture across a broadband network, serving relatively "dumb" display devices composed essentially of passive materials at the final stages; another is compact photonic-integrated-circuit devices that realize the concatenated discrete signal-processing steps on identical units in close contact within the same device or devices, and in large-scale arrays.
In this disclosure, improved and detailed versions of the hybrid telecom-type, pixel-signal-processing display system, combining magneto-optic/magneto-photonic stages and devices with the other pixel-signal-processing stages and devices, and especially including the frequency/wavelength modulation/shifting stages and devices, can be realized in a series of robust embodiments. These further include improved and novel hybrid magneto-optic/photonic devices, not limited to the classical or nonlinear Faraday MO effect but broadly encompassing non-reciprocal MO effects and phenomena and combinations thereof, including hybrid Faraday/slow-light effects, devices based on the Kerr effect, and hybrids of the Faraday, MO Kerr, and other MO effects; improved "light barrier" structures, in which the path of the modulated signal is folded in the plane of the device surface to reduce the overall feature size of the device; quasi-2-D and 3-D photonic-crystal structures and hybrids of multilayer thin-film PCs and surface-grating/polarizing PCs; and hybrids of MO and Mach-Zehnder interferometer arrangements.
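For orientation on the magneto-optic stages mentioned here, the classical (linear) Faraday effect rotates polarization by θ = V·B·d, where V is the Verdet constant, B the field component along the path, and d the path length. The numbers below are illustrative only, chosen to land near the 45-degree rotation typical of MO light valves and isolators; they are not parameters from the disclosure:

```python
import math

def faraday_rotation_deg(verdet_rad_per_T_m, field_T, path_m):
    """Classical Faraday rotation: theta = V * B * d, converted to degrees."""
    return math.degrees(verdet_rad_per_T_m * field_T * path_m)

# Illustrative garnet-class values (assumed, not from the disclosure).
theta = faraday_rotation_deg(1570.0, 0.1, 5e-3)
print(round(theta, 1))  # ~45 deg, the working rotation of MO light valves
```

Folding the signal path in the device plane, as in the "light barrier" structures above, is one way to accumulate the needed V·B·d product without a long straight-through device.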
Therefore, encompassing both the previous MO-based devices and the improved devices disclosed herein, the present disclosure proposes a pixel-signal-processing system of telecom type or telecom architecture, with the following pixel-signal-processing (or, analogously, PIC, sensor, or telecom signal-processing) process-flow stages, which accordingly characterize the system architecture of the disclosure (and its variants):
Any embodiment described herein may be used alone or together with any other in any combination. Inventions encompassed within this specification may also include embodiments that are only partially mentioned or alluded to, or are not mentioned or alluded to at all, in this brief summary or in the abstract. Although various embodiments of the invention may have been motivated by various deficiencies of the prior art, which may be discussed or alluded to in one or more places in the specification, the embodiments of the invention do not necessarily address any of these deficiencies. In other words, different embodiments of the invention may address different deficiencies that may be discussed in the specification; some embodiments may only partially address some deficiencies, or just one deficiency, that may be discussed in the specification, and some embodiments may not address any of these deficiencies.
Other features, benefits, and advantages of the present invention will be apparent upon a reading of the present disclosure, including the specification, drawings, and claims.
Brief description of the drawings
In the accompanying drawings, like reference numerals refer to identical or functionally similar elements throughout the several views; the drawings are incorporated in and form a part of the specification, further illustrate the present invention, and, together with the detailed description of the invention, serve to explain the principles of the invention.
Fig. 1 shows an imager architecture that can be used to implement an embodiment of the present invention;
Fig. 2 shows an embodiment of a photonic converter for a version of the imager architecture of Fig. 1, using the photonic converter as a signal processor;
Fig. 3 shows the general structure of the photonic converter of Fig. 1;
Fig. 4 shows a specific embodiment of the photonic converter;
Fig. 5 shows a generic architecture for a hybrid photonic VR/AR system; and
Fig. 6 shows an embodiment architecture of the hybrid photonic VR/AR system.
Detailed description
Embodiments of the present invention provide systems and methods for re-conceiving, in a device-liberated manner, the processes of capturing, distributing, organizing, transmitting, storing, and presenting to the human visual system (or to non-display data-array output functions). The system design proceeds from the impaired functioning of the non-optimized operational stages of these processes, decomposing the operational stages into pixel-signal-processing and array-signal-processing stages so as to allow the equipment best suited to each stage to function optimally; in practice this means designing and operating the equipment of these stages and processes at the frequencies at which they work most efficiently, then performing efficient frequency/wavelength modulation/shifting to move back and forth between those "convenient frequencies," with the further net effect of increasing the efficiency of all-optical signal processing, whether local or long-distance. The following description is presented to enable one of ordinary skill in the art to make and use the invention, and is provided in the context of a patent application and its requirements.
Various modifications to the preferred embodiments, and to the generic principles and features described herein, will be readily apparent to those skilled in the art. The present invention is therefore not limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features described herein.
Definitions
Unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning consistent with their meaning in the context of the relevant art and of this disclosure, and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein. The following definitions apply to some aspects described with respect to some embodiments of the invention, and may likewise be extended herein.
As used herein, term "or" includes "and/or", and term "and/or" includes one or more related listed items
Any and all combinations.The expression of such as "at least one" etc modifies entire element list when before element list
Without each element of modification list.
As used herein, the singular forms "a", "an", and "the" include plural referents unless the context clearly dictates otherwise. Thus, for example, a reference to an object may include multiple objects unless the context clearly dictates otherwise. Further, as used in the description herein and in the claims that follow, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise. It will be understood that when an element is referred to as being "on" another element, it can be directly on the other element, or intervening elements may be present between them. In contrast, when an element is referred to as being "directly on" another element, there are no intervening elements.
As used herein, term " group " refers to the set of one or more objects.Thus, for example, a group objects may include list
A object or multiple objects.The object of set is referred to as the member of set.The object of set can be identical or different.At certain
In a little situations, the object of set can share one or more public attributes.
As used herein, term " adjacent " refers to close or adjacent.Adjacent object can be separated from each other, or can be each other
Practical or directly contact.In some cases, adjacent object can be connected to each other or can be formed integrally with each other.
As it is used herein, term " connection ", " connection " and " connection " refers to direct attachment or link.It is as above shown below,
The object of connection is not with or without substantive medium object or object set.
As it is used herein, term " coupling ", " coupling " and " coupling " refers to operation connection or link.Coupling object can be with that
This is directly connected to or can be connected to each other indirectly, such as passes through medium object collection.
As used herein, term " substantially " and " substantial " refer to certain degree or degree.Make when together with event or situation
Used time, these terms can with self-explanatory characters' part or situation just there is a situation where and event or the approximate situation happened, such as
Consider typical Tolerance level or changeability.The embodiments described herein.
As used herein, term " optional " and " optionally " refer to that the event then described or situation may occur or may not
Occur, and the description includes the example that event or the example and event or situation that happen do not occur.
As used herein, term " functional device " broadly refers to provide the energy-dissipating structures that structure receives energy from energy.
Term function device includes unidirectional and bi-directional configuration.In some implementations, function device can be the component or element of display.
As used herein, term " display " broadly refers to the structure or method for generating display component.Display component be by
Show that picture element precursor generates the set for the display image component that treated image composition signal generates.Before picture element
Body is referred to as pixel or sub-pixel in other contexts sometimes.Unfortunately, term " pixel " has produced many differences
Meaning, including the output from pixel/sub-pixel, and the component part of display image.Some embodiments of the present invention packet
The realization for separating these elements and forming additional intermediate structure and element is included, some to be used for independent process, this can be by by institute
There are these elements/structures to be known as pixel and further obscure, therefore used here as various terms.Clearly refer to specific group
Part/element.Show that picture element precursor transmitting image forms signal, image composition signal can be connect by intermediate treatment system
It receives, generates one group of display picture element to form signal from image.The set for showing picture element is directly seen when by display
When examining or being reflected by optical projection system, image is presented to human visual system under the conditions of expected viewing.In this context
Signal means the output of signal generator, the signal generator be or be equal to display picture element precursor.Importantly, only
It needs to handle, these signals are just used as signal to be stored in the various propagation channels for keeping signal, without being transferred to freedom
Space, free space signal generates extension wave surface herein, which also combines with the extension wave surface in other sources.
Available space.Signal is without chiral and (do not invert, the signal be inverted or overturn, and image and image portion without mirror image
Dividing has different mirror images).In addition, image section is directly added (if it would be possible, being difficult to an image section
Overlap on another image section with prediction result) and handle image section and can be very difficult.There are many different skills
Art can be used as signal generator, and different technologies provides signal and the different disadvantages with different characteristics or benefit.This hair
The advantages of bright some embodiments allow electric hybrid module/system, can be by technical combinations, while minimizing any specific skill
The shortcomings that art.The U.S. Patent application No.12/371,461 being incorporated to describe can be advantageously combined these technologies system and
Method, therefore term shows therefore picture element precursor covers the dot structure of pixel technique and the sub-pixel knot of sub-pixel technology
Structure.
As it is used herein, term " signal " refers to coming the output of automatic signal generator, such as display picture element precursor,
It conveys the information of the state about signal generator when generating signal.In imaging systems, each signal is display image
A part of primitive generates image or image section when being perceived under anticipated conditions by human visual system.In this meaning
On, signal is coded message, that is, the status switch of the display picture element precursor in the communication channel of coded message.From one
The set of the synchronization signal of group display picture element precursor can define the frame (or a part of frame) of image.Each signal can be with
With the feature that can be combined with the one or more features from other one or more signals, (color, frequency, amplitude are fixed
When, but do not have chirality).
As it is used herein, term " human visual system " (HVS) refers to from multiple discrete display picture elements
The perception of the image of (direct view or projection) and visual biology and mental process.In this way, HVS is receiving the display propagated
The synthesis of picture element and based on receiving and when those of processing concept of the primitive to formulate image implies human eye, optic nerve and
Human brain.For everyone, HVS is not fully identical, but the significant percentage of population has general similarity.
Fig. 1 shows an imager architecture 100 that can be used to implement embodiments of the present invention. Some embodiments of the present invention contemplate forming a human-perceivable image for the human visual system (HVS) from a large number of signal-generating structures using architecture 100. Architecture 100 includes an image engine 105 containing a plurality of display image primitive precursors (DIPP) 110i, i = 1 to N (N can be any integer from one, to tens, to hundreds, to thousands of DIPPs). Each DIPP 110i, suitably operated and modulated, produces one of a plurality of image-forming signals 115i, i = 1 to N (a single image-forming signal 115i from each DIPP 110i). These image-forming signals 115i are processed to form a plurality of display image primitives (DIP) 120j, j = 1 to M, M an integer less than, equal to, or greater than N. A set of DIPs 120j (e.g., one or more image-forming signals occupying the same space and cross-section) will, when perceived by the HVS, form a display image 125 (e.g., an animated/motion-effect series of display images). When presented in a suitable format, the HVS reconstructs display image 125 from the DIPs 120j, for example on a display array or as an image projected on a screen, wall, or other surface. This is the familiar phenomenon of the HVS perceiving an image from an array of small shapes (e.g., "dots") of different colors or gray shades that are sufficiently small relative to the distance to the viewer (and the HVS). A display image primitive precursor 110i thus corresponds to the structure commonly known as a pixel when referring to a device producing image-forming signals from a non-composite color system, and corresponds to what is commonly known as a sub-pixel when referring to a device producing image-forming signals from a composite color system. Many known systems use a composite color system, such as RGB image-forming signals, with one image-forming signal from each RGB element (e.g., an LCD cell, and the like). Unfortunately, imaging systems use the terms pixel and sub-pixel for many different concepts, such as the hardware LCD cell (sub-pixel), the light emitted from the cell (sub-pixel), and the signal as perceived by the HVS (these sub-pixels typically having been blended together and configured to be imperceptible to the user under a set of viewing conditions). Architecture 100 distinguishes these different "pixels or sub-pixels" and therefore uses different terms to refer to these different constituent elements.
Architecture 100 may include a hybrid structure in which image engine 105 includes different technologies for one or more subsets of the DIPPs 110. That is, a first subset of DIPPs can use a first color technology (e.g., a composite color technology) to produce a first subset of the image-forming signals, and a second subset of DIPPs can use a second color technology different from the first (e.g., a different composite color technology, or a non-composite color technology) to produce a second subset of the image-forming signals. This allows a set of display image primitives, and display image 125, to be produced using a combination of technologies, which can be better than when produced from any single technology.
Architecture 100 further includes a signal processing matrix 130 that receives the image-forming signals 115i as inputs and produces the display image primitives 120j as outputs. Depending on the suitability and purpose of any particular implementation of an embodiment of the present invention, many arrangements of matrix 130 are possible (some embodiments may include a one-dimensional array). In general, matrix 130 includes a plurality of signal pathways, such as channels 135-160, and there are many different possible layouts for each channel of matrix 130. Each channel is sufficiently isolated from the other channels, for example by the optical isolation produced by discrete optical fibers, so that, for the implementation/embodiment, a signal in one channel does not interfere with signals in other channels beyond a crosstalk threshold. Each channel includes one or more inputs and one or more outputs. Each input receives an image-forming signal 115 from a DIPP 110; each output produces a display image primitive 120. From input to output, each channel carries pure signal information, and the pure signal information at any point in the channel may include an original image-forming signal 115, a decomposition of a set of one or more processed original image-forming signals, and/or an aggregation of a set of one or more processed original image-forming signals, where each "processing" may include one or more aggregations or one or more decompositions of signals. In this context, aggregation refers to combining the signals from a source number SA (SA > 1) of channels (these aggregated signals may themselves be original image-forming signals, processed signals, or combinations) into a target number TA (1 <= TA < SA) of channels, and decomposition refers to dividing the signals from a source number SD (SD >= 1) of channels (which again may be original image-forming signals, processed signals, or combinations) into a target number TD (TD > SD) of channels. SA can exceed N, for example due to an earlier decomposition without any aggregation, and SD may exceed M due to a subsequent aggregation. Some embodiments have SA = 2, SD = 1, and TD = 2. However, architecture 100 allows many signals to be aggregated, which can produce a signal sufficiently strong that it can be decomposed into many channels, each channel having sufficient strength for the implementation. Aggregation of signals comes from an aggregation of channels (e.g., joining, merging, combining, and the like) or another arrangement of adjacent channels that allows the signals propagated by those adjacent channels to be joined, merged, combined, and the like; decomposition of signals comes from a disaggregation of a channel or another channel configuration (e.g., dividing, separating, splitting, and the like) that allows the signals propagated by the channel to be divided, separated, split, and the like. In some embodiments, there may be channel-specific structures or elements that aggregate signals in multiple channels into one or more signals (or decompose a signal in a channel into multiple signals in multiple channels) while maintaining the signal state of the content propagated through matrix 130.
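Purely as an illustrative model, and not part of the patent disclosure, the aggregation and decomposition bookkeeping described above can be sketched in a few lines of Python. The function names and the simple additive amplitude model are assumptions made for illustration only; real channels would also carry color, frequency, and timing characteristics.

```python
# Toy model of matrix 130's channel operations: aggregation sums the
# amplitudes of SA input signals into one channel; decomposition splits
# one channel's amplitude across TD output channels (equally by default,
# or per a weight vector, as some decomposing structures permit).

def aggregate(amplitudes):
    """Combine SA input-signal amplitudes (SA > 1) into one channel."""
    return sum(amplitudes)

def decompose(amplitude, n_outputs, weights=None):
    """Split one channel's amplitude into n_outputs channels."""
    if weights is None:
        weights = [1.0 / n_outputs] * n_outputs   # equal split
    assert abs(sum(weights) - 1.0) < 1e-9         # conserve total amplitude
    return [amplitude * w for w in weights]

# Channel-160-style behavior: aggregate three inputs, then decompose
# the strengthened signal into a pair of outputs.
strong = aggregate([0.4, 0.4, 0.4])
pair = decompose(strong, 2)
```

Aggregating before decomposing, as in the example, mirrors the text's point that a sufficiently strengthened signal can be split into many channels while each output retains usable intensity.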
Several representative channels are depicted in Fig. 1. Channel 135 is shown as a channel with a single input and a single output. Channel 135 receives a single original image-forming signal 115k and produces a single display image primitive 120k. This is not to say that channel 135 cannot perform any processing. For example, processing may include a transformation of physical characteristics. The physical size of the input of channel 135 is designed to match/complement the active area of its corresponding/associated DIPP 110 producing image-forming signal 115k. The physical size of the output need not match the physical size of the input; that is, the output can be relatively tapered or expanded, or an input with a curvilinear perimeter can become an output with a rectilinear perimeter. Other transformations include repositioning of the signal: although image-forming signal 115₁ may originate near image-forming signal 115₂, the display image primitive 120₁ produced by channel 135 can be located beside a display image primitive 120x produced from a previously "remote" image-forming signal 115x. This permits flexible production of the interleaving techniques used to separate signals/primitives. This possibility of individual or collective physical transformation is a choice for each channel of matrix 130.
Channel 140 is shown as a channel with a pair of inputs and a single output (aggregating the pair of inputs). For example, channel 140 receives two original image-forming signals, signal 115₃ and signal 115₄, and produces a single display image primitive 120₂. Channel 140 allows the two amplitudes to be added, so that primitive 120₂ has a larger amplitude than either constituent signal. Channel 140 also allows timing to be improved by interleaving/multiplexing the constituent signals. For example, each constituent signal can operate at 30 Hz, while the resulting primitive can operate at 60 Hz.
Channel 145 shows a channel with a single input and a pair of outputs (decomposing the input). For example, channel 145 receives a single original image-forming signal, signal 115₅, and produces a pair of display image primitives, primitives 120₃ and 120₄. Channel 145 allows an individual signal to be reproduced, for example divided into two parallel channels carrying like signals. Characteristics of the decomposed signals other than amplitude are preserved. When the reduced amplitude is undesirable, the amplitude can, as noted above, first be increased by aggregation, so that the subsequent decomposition produces sufficiently strong signals, as illustrated by the other representative channels shown in Fig. 1.
Channel 150 shows a channel with three inputs and a single output. Channel 150 is included to emphasize that any number of independent inputs can be aggregated into a processed signal in a single channel, to produce, for example, a single primitive 120₅.
Channel 155 shows a channel with a single input and three outputs. Channel 155 is included to emphasize that a single channel (and the signal within it) can be decomposed into substantially any number of separate but related outputs and primitives. In one respect, channel 155 differs from channel 145: the amplitudes of the primitives 120 produced at the outputs. In channel 145, the amplitude can be divided into equal amplitudes (although some decomposing structures can permit variable amplitude division). In channel 155, primitive 120₆ need not equal primitives 120₇ and 120₈ in amplitude (for example, primitive 120₆ can have an amplitude roughly twice the amplitude of primitive 120₇ and of primitive 120₈, because not all signals need be divided at the same node). A first division can route half of the signal to primitive 120₆, and the resulting half-signal can be further divided into two halves, one for each of primitive 120₇ and primitive 120₈.
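The cascaded division described for channel 155 can be sketched as a two-step split. This is a minimal illustrative model under assumed lossless splitting; the function names and the 1:2 power ratio are taken from the example in the text, not from any claimed structure.

```python
# Cascaded split for a channel-155-style decomposition: the first node
# takes half of the signal for one primitive; the remaining half is split
# again, so the first primitive carries twice the amplitude of the others.

def halve(x):
    """One lossless splitting node: divide a signal amplitude in two."""
    return x / 2.0, x / 2.0

def channel_155(signal):
    p_a, rest = halve(signal)        # first division -> primitive "120-6"
    p_b, p_c = halve(rest)           # second division -> "120-7", "120-8"
    return p_a, p_b, p_c

a, b, c = channel_155(1.0)
```

With a unit input, the three outputs are 0.5, 0.25, and 0.25, matching the roughly-twice-the-amplitude relationship the text describes.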
Channel 160 shows a channel that includes an aggregation of three inputs and a decomposition into a pair of outputs. Channel 160 is included to emphasize that a single channel may include both aggregation of signals and decomposition of signals. A channel can therefore have multiple aggregation regions and multiple decomposition regions, as necessary or desirable. Matrix 130 thus becomes a signal processor through the manipulation of physical and signal characteristics (including aggregation and decomposition) in a processing stage 170.
In some embodiments, matrix 130 can be produced by precision weaving of the physical structures defining the channels, such as a loom-based weaving technique for a set of optical fibers that collectively define thousands to millions of channels.
In summary, embodiments of the present invention may include an image generation stage (e.g., image engine 105) coupled to a primitive-generating system (e.g., matrix 130). The image generation stage includes N display image primitive precursors 110. Each display image primitive precursor 110i produces a corresponding image-forming signal 115i. These image-forming signals 115i are input into the primitive-generating system. The primitive-generating system includes an input stage 165 having M input channels (M can equal N but need not match; in Fig. 1, for example, some signals are not input into matrix 130). The input of an input channel receives an image-forming signal 115x from a single display image primitive precursor 110x. As shown in Fig. 1, each input channel has an input and an output; each input channel directs its single original image-forming signal from its input to its output, and input stage 165 has M inputs and M outputs. The primitive-generating system also includes a distribution stage 170 having P distribution channels, each distribution channel including an input and an output. Typically M = N, and P can vary by embodiment. For some embodiments, P is less than N, for example P = N/2. In some of those embodiments, each distribution channel input is coupled to a unique pair of outputs from the input channels. For some embodiments, P is greater than N, for example P = N*2; in those embodiments, each input channel output is coupled to a unique pair of distribution channel inputs. The primitive-generating system thus scales the image-forming signals from the display image primitive precursors: in some cases, multiple image-forming signals are combined into a signal in a distribution channel, and in others, a single image-forming signal is divided and presented to multiple distribution channels. Many variations of matrix 130, input stage 165, and distribution stage 170 are possible.
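The two scaling regimes of the distribution stage (P = N/2 pairing, P = N*2 fan-out) can be sketched as follows. This is a hypothetical model for illustration; the function name, the even-N assumption, and the equal-halves fan-out are all assumptions, not claimed structure.

```python
# Toy model of distribution stage 170's two example geometries:
#   "combine": P = N/2 -- each distribution channel aggregates a unique
#              pair of input-channel outputs.
#   "split":   P = N*2 -- each input-channel output feeds a unique pair
#              of distribution channels (here, as two equal halves).

def distribute(signals, mode):
    if mode == "combine":                     # assumes len(signals) is even
        it = iter(signals)
        return [a + b for a, b in zip(it, it)]
    if mode == "split":
        out = []
        for s in signals:
            out.extend([s / 2.0, s / 2.0])
        return out
    raise ValueError(f"unknown mode: {mode}")

combined = distribute([1.0, 1.0, 2.0, 2.0], "combine")   # N=4 -> P=2
fanned = distribute([1.0, 2.0], "split")                 # N=2 -> P=4
```

Both modes conserve total signal amplitude, reflecting the text's point that the same matrix can either strengthen signals by combining them or multiply them by dividing them.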
Fig. 2 shows an embodiment of an imaging system 200 implementing a version of the imager architecture of Fig. 1. System 200 includes a set 205 of encoded signals, such as a plurality of image-forming signals (at IR/near-IR frequencies), that is provided to a photonic signal converter 215. Photonic signal converter 215 produces a set 220 of display image primitives 225, preferably at visible frequencies, particularly real-world visible imaging frequencies.
Fig. 3 shows the general structure of the photonic signal converter 215 of Fig. 2. Converter 215 receives one or more input photonic signals and produces one or more output photonic signals. Converter 215 adjusts various characteristics of the input photonic signals, for example a signal logic state (e.g., ON/OFF), a signal color state (IR to visible), and/or a signal intensity state.
Fig. 4 shows a specific embodiment of a photonic converter 400. Converter 400 includes an efficient light source 405. Source 405 can, for example, include an IR and/or near-IR source to achieve the best modulator performance in subsequent stages (e.g., an LED array emitting IR and/or near-IR light). Converter 400 includes an optional bulk-optics energy source homogenizer 410. Homogenizer 410 provides a structure that homogenizes the polarization of the light from source 405 when necessary or desirable. Homogenizer 410 can be arranged for active and/or passive homogenization.
Next in the order of light propagation from source 405, converter 400 includes an encoder 415. Encoder 415 provides logical encoding of the (possibly homogenized) light from source 405 to produce encoded signals. Encoder 415 may include hybrid magnetophotonic crystals (MPC), Mach-Zehnder interferometers, transmissive valves, and the like. Encoder 415 may include an array or matrix of modulators that set the states of a set of image-forming signals. In this respect, each encoder structure can be equivalent to a display image primitive precursor (e.g., a pixel and/or sub-pixel and/or other display light-energy signal generator).
Converter 400 includes an optional filter 420, for example a polarization filter/analyzer (e.g., a photonic crystal dielectric mirror) combined with a planar deflection mechanism (e.g., a prism array/grating structure).
Converter 400 includes an optional energy recapture device 425 that recaptures energy originating from source 405 (e.g., deflected IR/near-IR energy) that is deflected by elements of filter 420.
Converter 400 includes an adjuster 430 that modulates/shifts the wavelength or frequency of the encoded signals produced by encoder 415 (and possibly filtered via filter 420). Adjuster 430 may include phosphors, periodically poled materials, oscillating crystals, and the like. Adjuster 430 takes the generated/switched IR/near-IR frequencies and converts them to one or more desired frequencies (e.g., visible frequencies). Adjuster 430 need not shift/modulate all input frequencies to the same frequency, and can shift/modulate different IR/near-IR input frequencies to the same output frequency. Other adjustments are possible.
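The many-to-one frequency mapping that adjuster 430 permits can be pictured as a simple lookup table. All numeric wavelengths below are invented for illustration only; the patent requires no particular values, only that distinct IR/near-IR inputs may be shifted to the same visible output.

```python
# Hypothetical shift table for an adjuster-430-style stage: encoded
# IR/near-IR carrier wavelengths (nm) map to visible output wavelengths.
# Note the deliberate many-to-one mapping: two IR carriers -> one red.

SHIFT_MAP_NM = {
    850: 620,    # near-IR carrier -> red
    940: 620,    # second near-IR carrier, also -> red (many-to-one)
    1050: 530,   # -> green
    1310: 465,   # -> blue
}

def shift_wavelength(ir_nm):
    """Return the visible output wavelength for an IR input carrier."""
    return SHIFT_MAP_NM[ir_nm]

red_a = shift_wavelength(850)
red_b = shift_wavelength(940)
```

Here both `red_a` and `red_b` come out as the same visible wavelength, illustrating that the adjuster is a mapping stage rather than a uniform frequency offset.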
Converter 400 optionally includes a second filter 435, for example for IR/near-IR energy, followed optionally by a second energy recapture device 440. Filter 435 may include a photonic crystal dielectric mirror with a planar deflection structure (e.g., an associated prism array/grating structure).
Converter 400 can also include an optional amplifier/gain adjuster 445 for adjusting one or more parameters (e.g., increasing the signal amplitude of the encoded, optionally filtered, frequency-shifted signals). Other or additional signal parameters can be adjusted by adjuster 445.
Fig. 5 shows a general architecture 500 for a hybrid photonic VR/AR system 505. Architecture 500 exposes system 505 to the composite electromagnetic wavefront of the real-world environment and produces a set of display image primitives 510 for the human visual system (HVS). The set of display image primitives 510 may include or use information from the real world (AR mode), or the set of display image primitives may include information produced entirely from a synthetic world (VR mode). System 505 can be configured to operate selectively in either mode, or in both. Further, system 505 can be configured to selectively vary the amount of real-world information used in AR mode. System 505 is robust and versatile.
System 505 can be realized in many different ways. One embodiment generates image-forming signals from the synthetic world and, in AR mode, interleaves those synthetic signals with image-forming signals generated from the real world ("real-world signals"). As described in incorporated patent application 12/371,461, a signal processing matrix of isolated optical channels can be used to channelize, process, and distribute these signals. System 505 includes a signal processing matrix which, in addition to any distribution, aggregation, decomposition, and/or physical-feature reshaping, may include various passive and active signal-manipulation structures.
These signal-manipulation structures can also vary based on the specific arrangement and design objectives of system 505. For example, these manipulation structures may include a real-world interface 515, a booster 520, a visualizer 525, and/or an output constructor.
Interface 515 includes functions similar to those of display image primitive precursors: it converts the composite electromagnetic wavefront of the real world into a set of real-world image-forming signals 535, which are channelized, distributed, and presented to booster 520.
As described herein, system 505 is very versatile, and there are many different embodiments. The features and functions of the manipulation structures may be influenced by various considerations and design objectives. Not all of these can be explicitly detailed herein, but some representative embodiments are set forth. As in the incorporated patent applications and as described herein, architecture 500 can employ a combination of technologies (e.g., a hybrid), each of which may be particularly advantageous for producing a portion of the set of DIPs 510, to produce a superior overall result, rather than relying on a single technology to produce all portions.
For example, the composite electromagnetic wavefront of the real world includes visible and non-visible wavelengths. Since the set of DIPs 510 also includes visible wavelengths, it might be assumed that signals 535 must also be visible. As will be explained herein, however, not all embodiments achieve superior results when signals 535 are in the visible spectrum.
System 505 can be configured to operate with visible signals 535. However, an advantage of some embodiments is the use of HVS-invisible wavelengths to provide signals 535. As used herein, the following ranges of the electromagnetic spectrum are relevant:
a) visible radiation (light) is electromagnetic radiation with a wavelength between 380 nm and 760 nm (400-790 terahertz), which is detected by the HVS and perceived as visible light;
b) infrared (IR) radiation is electromagnetic radiation, invisible to the HVS, with a wavelength between 1 mm and 760 nm (300 GHz-400 THz), including far infrared (1 mm-10 μm), mid infrared (10-2.5 μm), and near infrared (2.5 μm-750 nm); and
c) ultraviolet (UV) radiation is electromagnetic radiation, invisible to the HVS, with a wavelength between 380 nm and 10 nm (790 THz-30 PHz).
In an invisible real-world-signal embodiment, interface 515 produces signals 535 in the infrared/near-infrared spectrum. For some embodiments, it is desirable to produce the invisible signals 535 using a spectral map that maps specific wavelengths or wavelength bands of the visible spectrum to predetermined specific wavelengths or wavelength bands in the infrared spectrum. This provides the advantage of allowing signals 535 to be processed efficiently within system 505 as infrared wavelengths, and the advantage of allowing system 505 to revert signals 535 to their real-world colors.
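A minimal sketch of such a spectral map, under assumed band values, is shown below. The specific nanometer figures are invented for illustration; the only property taken from the text is that the visible-to-IR map is predetermined and invertible, so the real-world color can be recovered after in-system IR processing.

```python
# Hypothetical invertible spectral map for an interface-515-style stage:
# visible wavelengths (nm) are mapped to predetermined IR wavelengths for
# processing, then mapped back to recover the real-world color.

VISIBLE_TO_IR = {465: 1310, 530: 1050, 620: 850}   # nm; values invented
IR_TO_VISIBLE = {ir: vis for vis, ir in VISIBLE_TO_IR.items()}

def to_ir(visible_nm):
    """Map a visible wavelength into the IR processing band."""
    return VISIBLE_TO_IR[visible_nm]

def to_visible(ir_nm):
    """Invert the map: recover the real-world visible wavelength."""
    return IR_TO_VISIBLE[ir_nm]

# Round trip: color information survives processing at IR wavelengths.
round_trip_ok = all(to_visible(to_ir(v)) == v for v in VISIBLE_TO_IR)
```

The round trip succeeding for every entry is exactly the "revert to real-world color" property the text attributes to a predetermined spectral map; a non-injective map would lose that ability.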
Interface 515 may include other functional and/or structural elements, such as filters that remove IR and/or UV components from the received real-world radiation. In some applications, such as a night-vision mode using IR radiation, interface 515 will omit the IR filter, or will have an IR filter that allows some IR radiation of the received real-world radiation to be sampled and processed. Interface 515 also includes a real-world sampling structure that converts the filtered received real-world radiation into a matrix of processed real-world image-forming signals (analogous to a matrix of display image primitive precursors), and channelizes these processed real-world image-forming signals into a signal distribution and processing array.
The signal distribution and processing array can also include frequency/wavelength conversion structures to provide the processed real-world image-forming signals in the IR spectrum (when needed). Depending on the additional signal operations performed later in system 505 and on which encoding/switching technologies are implemented, interface 515 can also pre-process selected characteristics of the filtered real-world image-forming signals, for example by applying a polarization filtering function (e.g., polarization filters) to the IR/UV-filtered real-world image-forming signals, as well as polarization sorting, polarization homogenization, and the like.
For example, where system 505 includes structures or processes that modify signal amplitude based on polarization, interface 515 can prepare signals 535 appropriately. In some embodiments, it may be necessary for the default signal amplitude to be at a maximum (e.g., default "ON"); in other embodiments, it may be desirable for the default signal amplitude to be at a minimum (e.g., default "OFF"); and still other embodiments may have some channels providing default values under different conditions, rather than all defaulting to ON or OFF. Setting the polarization state of signals 535, whether visible or not, is one role of interface 515. Other signal characteristics, for all signals 535 or for a selected subset of signals 535, can also be set by interface 515, as determined by design objectives, technology, and implementation details.
The channelized real-world image component signals 535 are input into booster 520. Booster 520 is a special structure in system 505 for further signal processing. That signal processing can be multifunctional in its operation on signals 535; depending on how booster 520 operates on them, some or all signals can be treated as "pass-through" signals. These multiple functions can include: a) manipulating signals 535, for example individual amplitude control of each individual real-world image component signal, setting/modifying frequency/wavelength and/or logic state, and the like; b) generating a set of individually synthesized-world image component signals having desired characteristics; and c) interleaving the generated synthesized-world image component signals, in the desired proportion, with some or all of the "pass-through" real-world image component signals to produce a set of interleaved image component signals 540.
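The three functions (a)-(c) above can be sketched as follows. This is a hedged, schematic model: the channel-record layout, gain dictionary, and amplitude values are illustrative assumptions, not the optical implementation the patent describes.

```python
# Schematic sketch of booster 520: (a) per-channel manipulation of
# pass-through real-world signals, (b) generation of synthesized-world
# signals, (c) interleaving both into one channelized set 540.

def boost(real_signals, synth_specs, gains):
    """real_signals: {channel_id: amplitude} of real-world signals 535.
    synth_specs:    {channel_id: amplitude} of signals to synthesize.
    gains:          {channel_id: gain}, individual amplitude control (a).
    Returns an ordered list of (channel_id, kind, amplitude) records,
    the interleaved set 540 (c)."""
    passed = {ch: amp * gains.get(ch, 1.0)      # (a) manipulate
              for ch, amp in real_signals.items()}
    synthesized = dict(synth_specs)              # (b) generate
    interleaved = []
    for ch in sorted(set(passed) | set(synthesized)):
        if ch in passed:
            interleaved.append((ch, "real", passed[ch]))
        if ch in synthesized:
            interleaved.append((ch, "synthetic", synthesized[ch]))
    return interleaved

# Mixed AR-style output; a pure-VR mode simply passes an empty
# real-world set, leaving only synthesized channels.
out = boost({1: 0.8, 2: 0.5}, {2: 0.9, 3: 1.0}, {1: 0.5})
```

Note that, consistent with the description below, the output channel count need not equal the input channel count: channels may carry real signals, synthetic signals, or both.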
In addition to being a processor of received (e.g., real-world) image component signals, booster 520 is also a producer of synthesized-world image component signals. System 505 is configured so that all signals can be handled by booster 520. Booster 520 can be implemented in many different ways. For example, when booster 520 is a multilayer optical device defining a composite of multiple radiation gates (each gate associated with one signal), some gates are configured as pass-through gates that individually receive real-world signals for controllable pass-through, and other gates, configured to produce synthesized-world signals, receive background radiation, isolated from the pass-through signals, for generating synthesized-world image component signals. In this embodiment, therefore, the synthesized-world-producing gates generate synthesized-world signals from background radiation.
As illustrated, architecture 500 includes multiple (e.g., two) independent sets of display pixel precursors that are selectively and controllably processed and merged. Interface 515 serves as one set of display pixel precursors, and booster 520 serves as a second set of display pixel precursors. The first set produces image component signals from the real world; the second set produces image component signals from the synthesized world. In principle, architecture 500 allows one or more additional sets of display pixel precursors to be available in system 505 (for a total of three or more sets of display pixel precursors), making additional channelized sets of image component signals available to booster 520 for processing.
One way of thinking about architecture 500 is that booster 520 defines a set of display pixel precursors that produce the interleaved signals 540, where some of the interleaved signals are initially produced by one or more sets of preliminary display image precursors (for example, interface 515 produces the real-world image component signals) and some are produced directly by booster 520. Architecture 500 does not require that all display image precursors use the same or complementary technologies. By providing all component signals in an organized, predetermined format (for example, in independent channels and within a common frequency range compatible with the signal manipulations, such as the signal amplitude modulation performed by booster 520), architecture 500 can offer a powerful, robust, general solution to one or more of the drawbacks, limitations, and disadvantages of current AR/VR systems.
As described herein, the channelized signal processing and distribution arrangement may aggregate, decompose, and/or otherwise process each image component signal as the signals propagate through system 505. As a result, the number of signal channels in signals 540 can differ from the sum of the number of pass-through signals and the number of generated signals. Booster 520 interleaves a first number of real-world pass-through signals with a second number of synthesized signals (for a pure-VR mode of system 505, the first number is zero). Interleaving in this context is meant broadly, as including both types of signal; it does not mean that each real-world pass-through signal must occupy a channel physically adjacent to a channel containing a synthesized-world signal. Routing can be controlled independently through the channel distribution attributes of system 505.
Visualizer 525 receives the interleaved signals 540 and outputs a set of visible signals 545. In system 505, the synthesized-world image component signals of signals 540 are generated in an invisible range of the electromagnetic spectrum (e.g., IR or near IR). In some embodiments, some or all of the real-world signals 535 passed through booster 520 have been translated into an invisible range of the electromagnetic spectrum (which may also overlap with, or be wholly or partly included within, the range of the synthesized-world signals). Visualizer 525 performs frequency/wavelength modulation and/or conversion of the non-visible signals. When an invisible pseudocolor map is used in defining and generating the synthesized and real-world signals, the appropriate color is restored to the frequency-modified real-world signals, and the synthesized world can be visualized according to real-world color.
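The pseudocolor round trip described above can be sketched as a simple invertible mapping. This is an assumption-laden illustration only: the fixed wavelength offset and the specific values are invented, and a real system would use whatever encoding its optics define.

```python
# Illustrative sketch of an invisible pseudocolor scheme for
# visualizer 525: each visible color is represented during processing
# by an assumed near-IR carrier; the visualizer inverts the map to
# restore real-world color. The offset value is invented.

IR_SHIFT_NM = 500.0  # assumed fixed offset into the near-IR

def to_pseudocolor(visible_nm):
    """Encode a visible wavelength as an invisible near-IR carrier."""
    return visible_nm + IR_SHIFT_NM

def to_visible(pseudo_nm):
    """Visualizer 525's role: convert the carrier back to color."""
    return pseudo_nm - IR_SHIFT_NM

# A real-world signal translated into the invisible range, and a
# synthesized-world signal defined directly in that range, both emerge
# with the intended visible color.
red_carrier = to_pseudocolor(650.0)   # 1150.0 nm, outside the visible band
restored = to_visible(red_carrier)    # back to 650.0 nm (red)
```

Because the map is invertible, synthesized-world content defined in pseudocolor is visualized "according to real-world color," as the text puts it.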
Output constructor 530 produces a set of display pixels 510 from visible signals 545 for HVS perception, whether, for example, by direct view or by projection. Output constructor 530 may include merging, aggregation, decomposition, channel rearrangement/relocation, physical characteristic definition, ray shaping, and possibly other functions. Constructor 530 can also include amplification of some or all visible signals 545, bandwidth modification (for example, aggregation and time multiplexing of multiple channels of signals with preconfigured timing relationships: that is, streams can be generated out of phase with one another and combined into a single signal stream at a multiple of the frequency of any one stream), and other image component signal manipulations. Two streams in a 180-degree phase relationship can double the frequency of each stream; three streams in a 120-degree phase relationship can triple the frequency; and so on for N multiplexed streams. Merging streams that are in phase with one another can instead increase signal amplitude (for example, two in-phase streams can double the signal amplitude, and so on).
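The multiplexing arithmetic above can be checked with a small sketch: N pulse streams of equal frequency, offset in phase by 360/N degrees, merge into a stream with N times the pulse rate, while in-phase streams coincide and add in amplitude. Discrete pulse times are an illustrative simplification of the optical signals.

```python
# Sketch of the time-multiplexing described for output constructor 530.
# Pulse streams are modeled as lists of pulse times; units are arbitrary.

def pulse_times(period, phase_deg, n_pulses):
    """Times of the first n_pulses of a stream with the given phase."""
    offset = period * (phase_deg / 360.0)
    return [offset + k * period for k in range(n_pulses)]

period = 1.0  # frequency = 1/period

# Two streams 180 degrees apart: combined pulse rate doubles.
a = pulse_times(period, 0.0, 4)
b = pulse_times(period, 180.0, 4)
merged = sorted(a + b)
gaps = [t2 - t1 for t1, t2 in zip(merged, merged[1:])]
# every gap is period/2, i.e. the merged stream runs at double frequency

# Two in-phase streams: pulses coincide, so amplitude doubles instead.
amp = {}
for t in pulse_times(period, 0.0, 4) + pulse_times(period, 0.0, 4):
    amp[t] = amp.get(t, 0.0) + 1.0
```

With three streams at 0, 120, and 240 degrees the same construction yields gaps of period/3, matching the tripling stated in the text.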
Fig. 6 shows a hybrid photonic VR/AR system 600 implementing an embodiment of architecture 500. System 600 includes dashed boxes mapping the correspondence between the structures of system 600 and those of system 505.
System 600 includes an optional filter 605, a "signalizer" 610, a real-world signal processor 615, a radiation diffuser 620 fed by a radiation source 625 (e.g., of IR radiation), a magneto-photonic encoder 630, a frequency/wavelength converter 635, a signal processor 640, a signal combiner 645, and output shaper optics 650. As described herein, there are many different implementations and embodiments, some involving different technologies with different requirements. For example, some embodiments can use radiation in the visible spectrum and do not need elements for wavelength/frequency conversion. A pure-VR implementation does not need the real-world signal-processing structures. In some cases, merging and shaping after visualization is needed, or desirably minimal. Architecture 500 is very flexible and can accommodate the preferred suite of technologies.
Filter 605 removes unwanted wavelengths from the ambient real-world illumination incident on interface 515. What is unwanted depends on the application and the design goals (for example, night-vision goggles may need some or all IR radiation, while other AR systems may want to exclude UV/IR radiation).
Signalizer 610 serves as the display pixel precursors, converting the filtered incident real-world radiation into real-world image component signals and inserting each signal into an optically isolated channel of the signal distribution stage. These signals can be based on a composite or non-composite imaging model.
Processor 615 may include polarization structures for filtering, classifying, and homogenizing polarization, and wavelength/frequency converters for when some or all of the real-world pass-through image component signals are to be converted to a different frequency (e.g., IR).
Diffuser 620 takes rays from the radiation source and establishes a background radiation environment for encoder 630 to use in generating the synthesized-world image component signals. Diffuser 620 keeps the background radiation isolated from the real-world pass-through channels.
Encoder 630 both receives and processes the real-world pass-through signals (for example, it can modulate them) and produces the synthesized-world signals. Encoder 630 interleaves/alternates the signals from the real world and from the synthesized world, and maintains them in optically isolated channels. In Fig. 6, real-world signals are depicted as filled arrows and synthesized-world signals as unfilled arrows, to illustrate the interleaving/alternation. Fig. 6 is not meant to imply that encoder 630 must reject a significant portion of the real-world signals. Encoder 630 may include a matrix of many display-pixel-precursor-type structures, so as to handle all the real-world signals and all the synthesized-world signals.
When present, converter 635 converts invisible signals into visible signals. Converter 635 can therefore process the synthesized-world signals, the real-world signals, or both. In other words, the conversion can be enabled on a per-signal, per-channel basis across the channel distribution.
When present, signal processor 640 can modify signal amplitude/gain or bandwidth, or apply other signal modifications/modulations.
When present, signal combiner 645 can organize (e.g., aggregate, decompose, route, group, cluster, duplicate, etc.) the signals from visualizer 525.
When present, output shaper optics 650 performs any necessary or desired signal shaping or other signal manipulation to produce the required display pixels perceivable by the HVS. This may include direct view, projection, reflection, combinations thereof, and so on. Routing/grouping can enable 3D imaging or other visual effects.
System 600 can be implemented as a stack, sometimes integrated, of functional photonic components that receive, process, and transmit the discrete signals in optically isolated channels from the time they are generated until they are included in a display image precursor that travels to the HVS, alongside the other signals in other display image precursors.
The field of the invention is not a single field but a combination of two related fields, augmented reality and virtual reality; the invention addresses and provides an integrated mobile-device solution that resolves critical problems and limitations of the prior art in both fields. A brief review of the background of these related fields will clarify those problems and limitations and set the stage for the solution proposed in this disclosure.
Two standard dictionary definitions of these terms are as follows (source: Dictionary.com):
Virtual reality: "a realistic simulation of an environment, including three-dimensional graphics, by a computer system using interactive software and hardware. Abbreviation: VR."
Augmented reality: "an enhanced image or environment as viewed on a screen or other display, produced by overlaying computer-generated images, sounds, or other data on a real-world environment." And: "a system or technology used to produce such an enhanced environment. Abbreviation: AR."
From these definitions it is apparent, nontechnical though they are, that for those skilled in the relevant arts the essential distinction is whether the simulated elements constitute a complete, fully immersive simulation, or whether the simulated elements are overlaid on a clear, accessible view of reality, with portions of reality even directly observable.
Slightly more technical definitions are provided by the Wikipedia entries on these subjects which, given the depth and breadth of the contributions of the pages' editors, may be considered to represent the fields adequately:
Virtual reality (VR), sometimes referred to as immersive multimedia, is a computer-simulated environment that can simulate physical presence in the real world or in an imagined world. Virtual reality can recreate sensory experiences, including virtual taste, sight, smell, sound, touch, and the like.
Augmented reality (AR) is a live, direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics, or GPS data.
Inherent in, but only implicit in, these definitions is the essential attribute of the movable viewpoint. What distinguishes virtual or augmented reality from computer simulation more generally, regardless of any combination, fusion, or synthesis with "real-time," "direct" imaging of reality (local or remote), is that the simulated or mixed ("augmented" or "hybrid") reality image is "true in the moment": the viewpoint of the observer moves as the observer moves in the real world.
This disclosure proposes that such a more precise definition is needed to distinguish static navigation of an immersive display and simulated-world experience (the simulator) from mobile navigation of a simulated world (virtual reality). A subclass of the simulator would then be the "personal simulator," or at most "partial virtual reality," in which a stationary user is equipped with an immersive HMD (head-mounted display) and haptic interfaces (e.g., motion-tracking gloves). It can realize partial "virtual reality" navigation of a simulated world.
A CAVE system, on the other hand, would under this schema be classed as a limited virtual reality system, because navigation beyond the dimensions of the CAVE can be achieved only with a moving floor, and the CAVE itself is thereby limiting. At best, it too can reach another form of "partial virtual reality."
Note the difference between a "moving" viewpoint and a "movable" viewpoint. A computer simulation, such as a video game, simulates a world or a "reality"; but unless the simulated world is explored by a person in motion, or by the directed motion of another person or a robot, all that can be said (and this has been a principal achievement of computer graphics over the past 40 years) is that a simulated environment has simply been "built" in software and can be explored: the simulated world is "navigable."
For virtual or hybrid (the author's preferred term) reality simulation, one important, defining characteristic is the mapping of the simulation (whether fully synthetic or mixed) to a real space. Such a real space can be as basic as a room in a laboratory or a sound stage, and be merely a grid mapped, at some ratio, to a calibrated simulated world. This distinction is not evaluative; a partial VR system that provides real-time natural interfaces (head tracking, haptics, audio, etc.) but lacks motion, or lacks mapping to actual real terrain, whether natural, artificial, or hybrid, that drives the simulated physical interaction and supplies sensory immersion, is of far lower value. But absent an adequate feedback system or, more generally, a whole-body, full-range-of-motion feedback system and/or dynamically shape-changing mechanical interface/interaction surfaces, supporting what is (to the user's senses) full-body movement over any terrain while the user remains in fact stationary, whether standing, seated, or swaying, a VR system is by definition "partial."
But without such an ideal whole-body physical interface/feedback system, "full," fully mobile VR can be limited to terrains for the VR world that can be found in the real world, whether used as-is, modified, or built from scratch. This limitation would severely restrict the range and capability of the virtual reality experience.
But, as will become apparent from this disclosure, this distinction makes a difference, because it provides a "bright line" for how existing VR and AR systems differ and are limited, and provides background for the teachings herein.
Having identified the missing but necessary feature and requirement for a simulation to be fully "virtual reality," the next step is to identify the problems implied by realizing a "movable viewpoint." The answer is that providing a moving view of the simulation requires two components, themselves realized by combinations of hardware and software: a dynamic image display device through which the simulation can be viewed, and a motion-tracking apparatus that can track the motion of the equipment including that display about 3 axes of motion. That means measuring, over time and from a minimum of three tracked points, the position of the three-dimensional viewing equipment (two points, if the measuring device is mapped, from which the third position on the third axis may be inferred) relative to a 3-axis reference system, which can be any arbitrary 3D coordinate system mapped to real space, although for the practical purposes of mechanically navigating the space, 2 axes will form a ground plane, level with respect to gravity, with the third axis, Z, perpendicular to that ground plane. In practice, solutions that realize position and orientation accurately and continuously as a function of time require a combination of sensors and software, and the progress of these solutions has been a principal vector of development in the VR and AR fields: hardware/software motion-tracked viewing devices and systems.
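The minimum tracking requirement described above can be sketched as follows: from three tracked points rigidly attached to the viewing equipment, both the position and a full 3-axis orientation frame of the rig can be recovered relative to a reference system whose X/Y axes form the gravity-level ground plane and whose Z axis is vertical. The point layout and frame convention are invented for illustration; real trackers fuse many sensors.

```python
# Hedged sketch: pose (position + orientation frame) of a tracked
# viewing rig from a minimum of three tracked points. Pure-Python
# vector math; the frame convention is an assumption for illustration.

def sub(u, v): return [a - b for a, b in zip(u, v)]

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def norm(u):
    m = sum(a*a for a in u) ** 0.5
    return [a / m for a in u]

def pose_from_points(p0, p1, p2):
    """Return (position, (x_axis, y_axis, z_axis)) of the rig.

    position: centroid of the three tracked points.
    axes:     an orthonormal frame built from the points, fixing
              orientation about all 3 motion axes."""
    position = [(a + b + c) / 3.0 for a, b, c in zip(p0, p1, p2)]
    x_axis = norm(sub(p1, p0))                 # along the first edge
    z_axis = norm(cross(x_axis, sub(p2, p0)))  # normal to the rig plane
    y_axis = cross(z_axis, x_axis)             # completes the frame
    return position, (x_axis, y_axis, z_axis)

# Three markers on a level headset at eye height (1.7 in the example units):
pos, (x, y, z) = pose_from_points([0, 0, 1.7], [1, 0, 1.7], [0, 1, 1.7])
```

Repeating this computation over time yields the position and orientation "as a function of the time" that the text identifies as the core tracking problem.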
These are relatively new fields; given the time frame between the earliest experiments and today's practical technologies and products, it is sufficient to record the origins and current state of the art of the two categories of moving-visual simulation system, with the exception of specific prior-art innovations that are of particular significance for the development of this disclosure, or that are relevant as significant points of difference or similarity for better explaining the current problems of the field, or for distinguishing the disclosure's solutions from the prior art.
The period from 1968 to the late 1990s spans the era of the relevant simulations, simulators, and many innovations in the VR and AR fields, during which many of the critical problems of realizing practical VR and AR found initial or partial solutions.
The pioneering experiments of Ivan Sutherland and his assistant Bob Sproull, beginning in 1968 with experimental head-mounted display systems, are generally considered to mark the origin of these related fields, although earlier, essentially conceptual development work had preceded them; theirs were the first experiments implementing any form of AR/VR realization with immersion and navigation.
The birth of the fixed-simulator system can be traced to the addition of computer-generated imaging to flight simulators, generally considered to have begun in the mid-to-late 1960s. This was limited to the use of CRTs, with full-focus images displayed between the CRT and the user, until 1972, when the Singer-Link company introduced a collimated optical projection system, a beam-splitter/far-focus projected-image system that improved the field of view to about 25-35 degrees per unit (100 degrees using three units in a single-pilot simulator). That benchmark was improved only in 1982, when the Rediffusion company introduced a wide-field system, the Wide-Angle Infinity Display System, achieving 150 degrees through the use of multiple projectors and one large, curved collimating screen, and ultimately reaching a 240-degree FOV. It was at this stage that fixed simulators could be said to have finally achieved a meaningful degree of true immersion in virtual reality, of the kind otherwise obtained by using an HMD to isolate the person and eliminate interfering peripheral visual cues.
But while Singer-Link was introducing its collimated-screen systems for simulators, a stepping stone toward the VR type of experience, the first very limited commercial helmet-mounted displays were being developed, initially for military use, integrating a visor-based electronic aiming system with motion tracking of the helmet itself. These initial developments are generally considered to have been realized in preliminary form by the South African Air Force in the mid-1970s (followed soon after by the Israeli Air Force), and they may be said to be a beginning of basic AR, or mediated/mixed-reality, systems.
These early, graphically minimal but still pioneering helmet-mounted systems, which realized a limited synthesis of targeting information overlaid on the visor and coordinated with a target position tracked through the user's own head motion, were followed later by the inventions of Steve Mann: the first "mediated reality" mobile viewing system, the first-generation "EyeTap," which overlaid graphics on eyeglasses.
Mann's later versions used an optical recombination system: optics based on a beam splitter/combiner that merged reality with processed imagery. This later work preceded that of Chunyu Gao and Augmented Vision Inc., which essentially proposed a doubled Mann system in which the processed real image and the generated image are optically combined, whereas Mann's system performed the processing, and generated its imagery, electronically. In Mann's system the live pass-through view of the scene is retained; in Gao's system all field-of-view imagery is processed, and as an option any direct-view imagery can be eliminated entirely. (Chunyu Gao, U.S. Patent Application 20140177023, filed April 13, 2013.) The "optical path folding optics" structures and methods specified by Gao's system can be found in other optical HMD systems.
By 1985, Jaron Lanier and VPL Research had been established to develop HMDs and the "DataGlove." Thus, by the 1980s, the three main development paths of simulation, VR, and AR, carried by Mann, Lanier, and the Rediffusion company, constituted a very active field of development, credited with some of the most critical advances and with establishing some of the basic solution types which, in most cases, have continued through to the present state of the art.
Growing sophistication of computer-generated imaging (CGI), continuous improvement of game systems (hardware and software) with real-time interactive CG technology, greater system integration and the extension of AR, and to a more limited degree VR, mobility across multiple systems were among the main development trends of the 1990s.
The CAVE system, developed by the Electronic Visualization Laboratory at the University of Illinois at Chicago, made its worldwide debut in 1992, proposing a limited form of mobile VR and a novel simulator. (Carolina Cruz-Neira, Daniel J. Sandin, Thomas A. DeFanti, Robert V. Kenyon and John C. Hart, "The CAVE: Audio Visual Experience Automatic Virtual Environment," Communications of the ACM, vol. 35(6), 1992, pp. 64-72.) Beyond Lanier's HMD/DataGlove combination, the CAVE also combined a WFOV multi-wall simulator "stage" with haptic interfaces.
Meanwhile Louis's Rosenberg develops a kind of stationary part AR in Armstrong's the Air Force Research Laboratory,
Fixation " virtual " VR system of " virtual clamp " system (1992) and Jonathan Waldern, early in 1985 to 1990
During year, it is regarded as initial exploration project, also will commercially make one's first appearance in 1992.
Mobile AR integrated into a multi-unit moving-vehicle "war game" system, combining real and virtual vehicles in "augmented simulation" ("AUGSIM"), saw its next major advance in the work of Loral WDL, demonstrated to industry in 1993. Writing later, in 1999, in "Experiences and Observations in Applying Augmented Reality to Live Training," project participant Jon Barrilleaux of Peculiar Technologies commented on the results of the 1995 SBIR Final Report and identified what remain, even to the present, the continuing problems facing mobile VR and (mobile) AR:
AR Versus VR Tracking
In general, the commercial products developed for VR have good resolution but lack the absolute accuracy and wide-area coverage needed for AR, making them of less use in AUGSIM.
VR applications, in which the user is immersed in a synthetic environment, are more concerned with relative tracking than absolute accuracy. Since the user's world is completely synthetic and self-consistent, the fact that his/her head has just turned 0.1 degrees is much more significant than knowing that it is now pointing within 10 degrees of true north.
AR systems, such as AUGSIM, do not have this luxury. AR tracking must have good resolution, so that virtual elements appear to move smoothly in the real world as the user's head turns or the vehicle moves, and it must have good accuracy, so that virtual elements are correctly positioned over, and occluded by, objects in the real world.
As network speeds and computing continued to improve through the 1990s, new projects in outdoor AR systems were launched, including, at the U.S. Naval Research Laboratory, the BARS system: "BARS: Battlefield Augmented Reality System," Simon Julier, Yohan Baillot, Marco Lanzagorta, Dennis Brown, Lawrence Rosenblum; NATO Symposium on Information Processing Techniques for Military Systems, 2000. Abstract: "The system consists of a wearable computer, a wireless network system and a tracked see-through head-mounted display (HMD). The user's perception of the environment is enhanced by superimposing graphics onto the user's field of view. The graphics are registered (aligned) with the actual environment."
Non-military-specific development was also under way, including the work of Hirokazu Kato of the Nara Institute of Science and Technology on ARToolKit, later released and further developed at HITLab, which introduced software development kits and protocols for viewpoint tracking and virtual-object tracking.
Although other researchers and companies were active in the field, these milestones are often cited as the most important of the period.
Although military funding of large-scale development of AR for training simulation is well documented, research work addressing this evident demand with other system-level designs and system demonstrations proceeded in parallel with the military-funded efforts.
One of the most important non-military experiments was the AR version of the video game Quake, ARQuake, an initiative led by Bruce Thomas and developed at the Wearable Computer Laboratory of the University of South Australia, published as "ARQuake: An Outdoor/Indoor Augmented Reality First Person Application," 4th International Symposium on Wearable Computers, pp. 139-146, Atlanta, Ga., October 2000 (Thomas, B., Close, B., Donoghue, J., Squires, J., De Bondi, P., Morris, M., and Piekarski, W.). Abstract: "We present an architecture for a low cost, moderately accurate six degrees of freedom tracking system based on GPS, digital compass, and vision-based tracking of fiducial markers."
Another system, whose design and development began in 1995, was developed by the author of this disclosure. Initially intended to realize a hybrid of outdoor augmented reality and television programming, referred to as "endless live broadcast," the system was further developed through the late 1990s, its fundamentals completed by 1999; the commercial effort at that time was to fund an original video game/TV hybrid vehicle for launch, and it subsequently included another version developed for high-end theme resorts. By 2001 it had been disclosed, under confidentiality, to companies including those of Ridley and Tony Scott, in particular their joint venture Airtightplanet (whose other affiliates included Renny Harlin, Jean Giraud, and the European "Heavy Metal"), with the disclosure made in the author's capacity as a supervising business executive, and with the then "OtherWorld" and "OtherWorld Industries" project and venture proposed as a joint venture with ATP and its investors.
Below is a summary of the system design and components as finalized in 1999/2000:
Selections from the "OtherWorld Industries Business Proposal Document" (historical document version, 2003); Technical background: the state of the art in proprietary integrated "open" simulation and mobile virtual reality: tools, facilities, and technologies:
This is only a partial list and overview of the relevant technologies, which together form the backbone of the proprietary system. Some technology components are proprietary; some come from outside vendors. But the unique system combining the proven components will be absolutely proprietary, and revolutionary:
Interacting with the VR-ALTERED WORLD:
1) Mobile, military-grade VR equipment, for immersing guests/participants in the VR-enhanced landscape of OTHERWORLD. Throughout their "ventures" (that is, each excursion exploring the OTHERWORLD around the resort), motion-capture sensors and digital cameras (with automatic matting technology) capture guests/players and employees/performers in real time, and through their visors they can see one another together with superimposed computer-simulated imagery. The visors are binocular, semi-transparent flat-panel displays, or binocular but opaque flat-panel displays fed by forward-mounted binocular cameras.
The "synthetic elements" superimposed on the field of view by the flat-panel displays may include altered portions of the landscape (or the entire landscape, digitally altered). In fact, those "synthetic" landscape portions that replace physically existing ones are "captured" and generated from original 3D photography of the various parts of the resort (see #7 below). Existing in the computer as photo-based, geometrically accurate "virtual spaces," they can be digitally altered in any way while preserving the photo-real quality and the geometric/spatial precision of the original capture. This enables the precise combination of real-time digital photography with digitally altered portions of the same space.
Other "synthetic elements" superimposed by the flat-panel displays include computer-generated or computer-altered people, creatures, atmospheric FX, and "magic." These are presented by the displays (transparent or opaque) as real elements in the field of view.
By using location data and the motion-capture data for guests/players and employees/performers, and by matting them in real time with multiple digital cameras — all calibrated to the previously "captured" versions of each region of the resort (see #4 and #5 below) — the synthetic elements can be matched in real time, with great precision, to the real elements shown by the display.
Thus a photo-realistic computer-generated dragon can appear to land behind real trees and then fly to the real castle of the resort; the dragon can then "burn" it with computer-generated fire. In the flat-panel display (translucent or opaque), the flames appear to "blacken" the top of the castle. The effect is achieved because, through the visor, the top of the castle is "covered" by a digitally altered version of the 3D "capture" of the castle held in the system's files.
2) Physical electro-optical-mechanical gear for combat and other interaction between real people and virtual people, creatures and magic. "Haptic" interfaces provide motion-sensor and other data, and their vibration and resistance feedback allow real-time interaction between real people and virtual people, creatures and magic. For example, a haptic device in the form of a prop-sword hilt provides data when a guest/player swings it, and provides physical feedback when the guest/player appears to "strike" a virtual ogre, creating the illusion of combat. All of this is combined in real time and shown through the binocular flat-panel display.
3) Open-field motion-capture equipment. Mobile and fixed motion-capture devices (similar to the equipment used for The Matrix films) are deployed throughout the resort. Data points on the themed "gear" worn by guests/players and employees/performers are tracked by cameras and/or sensors to provide motion data for interaction with the virtual elements shown in the field of view on the binocular panels of the VR visor.
The motion-capture data output makes CGI-altered versions of the guests/players and employees/performers possible (given sufficient computing/rendering capability and motion-editing and move libraries), along the lines of the Gollum character of the second and third "The Lord of the Rings" films.
4) Use of LAAS and GPS data, real-time laser-ranging data, and triangulation techniques (including enhancement of the motion-capture data from the Moller Aerobot UAV). The additional "location data" allows more effective (and error-corrected) real-time integration with the synthetic elements.
From a press release by the UAV manufacturer:
July 17. One week ago, Honeywell signed a contract to build the initial network of Local Area Augmentation System (LAAS) stations; some test stations are already operating. The system can guide aircraft precisely to landings at airports (and heliports) with a precision of inches. The LAAS system is estimated to enter service in 2006.
5) Automatic real-time matting of the open-field "play." In conjunction with the motion-capture data that allows interaction with the simulated elements, resort guests/participants are digitally imaged with P24 (or equivalent) digital cameras, and proprietary Automatte software automatically isolates (mattes) the appropriate elements and integrates the view with the synthetic elements. This technology will become one of the suite of tools used to ensure correct foreground/background separation when digital elements are superimposed.
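Automatte itself is proprietary and its method is not disclosed. A generic difference matte against a pre-captured "clean plate" of the empty scene is one common way such automatic foreground/background separation can be sketched — the threshold and pixel values here are illustrative assumptions, not the product's algorithm:

```python
def difference_matte(frame, clean_plate, threshold=0.1):
    """Mark a pixel as foreground (the guest/player to be isolated) where
    the live frame departs from a pre-captured 'clean plate' of the empty
    scene by more than a threshold. frame and clean_plate are lists of
    rows of grayscale values in [0, 1]; the result is a binary matte."""
    return [[1 if abs(p - c) > threshold else 0
             for p, c in zip(frow, crow)]
            for frow, crow in zip(frame, clean_plate)]

plate = [[0.2, 0.2], [0.2, 0.2]]      # empty scene, captured in advance
frame = [[0.9, 0.2], [0.2, 0.21]]     # a guest now occupies one pixel
mask = difference_matte(frame, plate)  # foreground only where they differ
```

The resulting matte is exactly the alpha channel a compositing step needs to keep real foreground subjects in front of superimposed digital elements.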
6) Military-grade simulation hardware and technology combined with state-of-the-art game-engine software. Military simulation software and game engines jointly integrate the data from the motion-capture systems for use with haptic devices — such as the prop sword — for interaction with "synthetic" elements, both synthesized and real-time (partial or complete).
These software components provide AI code for animating synthetic people and creatures (AI — artificial-intelligence — software, such as the Massive software used to animate the armies in the "Lord of the Rings" films), generate realistic water, clouds, fire, and the like, and otherwise integrate all the elements just as military simulation software and computer games do.
7) Image-based capture of real locations, creating realistic digital virtual sets with the image-based techniques pioneered by Dr. Paul Debevec (the basis of the "bullet time" FX of The Matrix).
" basis " virtual location (holiday village inside and outside) and real world cannot be distinguished because they be from photo and
What the true illumination of position when " capture " obtained.A series of digital picture of high quality, in conjunction with light probe and laser ranging number
According to data and " be based on image " appropriate graphics software, be all to rebuild needed for actual and virtual 3d space in a computer
All.It is exactly matched with original version.
Consider a "virtual set" captured inside and around a real castle at a rural location. Once these "base" or default versions are digitized — complete with the lighting parameters and all other data from the exact time of the original capture — they can be altered: the illumination can be changed, elements not present in the real world can be added, and existing elements can be altered and "dressed" to create the fantastical version of our world.
When " gateway " at guest/player and employee/performer pass through each point in holiday village, (" gateway " is from " we
The world " arrives effective " crosspoint " in " other worlds "), calibration process needs place.At this point, positioning is from visit at " gateway "
Visitor/player or employee/performer data, by the Virtual Space in computer " locked " to the coordinate of " gateway ".Computer " is known
The seat of the gateway point for the virtual version about its entire holiday village that road " is obtained by above-mentioned " capture " process based on image
Mark.
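The "locking" of the virtual space to the gateway coordinates can be sketched as fitting a rigid transform (translation plus heading rotation, with scale held at 1:1, as safe mobile VR requires) between the real and virtual frames. All names and figures below are illustrative, not from the disclosure:

```python
import math

def gateway_lock(gate_real_xy, real_heading, gate_virtual_xy,
                 virtual_heading=0.0):
    """Given the tracked real-world pose of a visitor at a gateway whose
    virtual coordinates are known, return a function that maps later
    real-world positions 1:1 into the virtual resort's coordinate frame
    (rotation + translation only; no scaling)."""
    dtheta = virtual_heading - real_heading
    cos_t, sin_t = math.cos(dtheta), math.sin(dtheta)
    def to_virtual(xy):
        dx = xy[0] - gate_real_xy[0]
        dy = xy[1] - gate_real_xy[1]
        return (gate_virtual_xy[0] + cos_t * dx - sin_t * dy,
                gate_virtual_xy[1] + sin_t * dx + cos_t * dy)
    return to_virtual

# Gateway at real (10, 5) maps to virtual (100, 200); headings aligned.
to_virtual = gateway_lock((10.0, 5.0), 0.0, (100.0, 200.0))
step_east = to_virtual((11.0, 5.0))   # one unit east of the gateway
```

Because the mapping is rigid and 1:1, every later step the visitor takes in the real resort moves them by exactly the same displacement through the virtual one.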
The computer can therefore "line up" its virtual resort with what the guest/player or employee/performer sees before his VR goggles. Thus, through the translucent version of the binocular flat-panel display, the virtual version superimposed on the real resort matches it with high precision.
Alternatively, with the "opaque" binocular flat-panel goggles or helmet, the wearer can walk with confidence while seeing only the virtual version of the resort in front of him, because the landscape of the virtual world will exactly match the real landscape he is passing through.
Of course, what the goggles show him can be altered: a reddened sky, boiling storm clouds, castle battlements that do not actually exist with a dragon on top — one that has just "set fire" to the castle's ramparts.
And, in the distance, an army of 1,000 orcs charging down the mountain!
8) Supercomputer rendering of the resort and the simulation facility. One key resource will make very high-quality, near-feature-film-quality simulation possible: a supercomputer rendering and simulating the complexity of each resort scene.
The improvements in graphics and gameplay of stand-alone game consoles (PlayStation 2, Xbox, GameCube) and of desktop computer games are well known. To date, however, improvement of the gaming experience has been premised on improvements to the processor and supporting systems of a single console or personal computer. Imagine, instead, the capacity of a supercomputing center placed behind the gaming experience. That alone would be a huge leap in graphics and game quality — and it is only one aspect of what the mobile VR adventure, the experience of the other world, will be.
From a review of the foregoing it is apparent — and will be apparent to those skilled in the relevant arts, namely VR, AR and the broader field of simulation — that any proposed individual hardware or software system improving on the prior art must take account of the wider system parameters, and that those system parameters must be made explicit if its assumptions are to be properly assessed.
The substance of this proposal is a hardware technology system belonging to the categories of portable AR and VR technology — in practice a fusion of both. Its most preferred version is wearable, and the preferred wearable version is an HMD technology; but only by considering, or reconsidering, the whole system to which it belongs can it become a complete solution. It is therefore necessary to present the history of the larger VR, AR and simulation systems, because proposals for, e.g., new HMD technology, like the trend of commercial products, have been too narrow: they have neither considered nor examined the system-level assumptions, requirements and new possibilities.
A comparable historical review of the major milestones in the development of HMD technology is unnecessary; rather, the broader system-level history is what is needed to provide a framework that helps explain the limitations of the prior art in HMDs — the existing technology and its status, the reasons for the proposed solution, and how the proposed solution addresses the problems identified.
To understand and identify the limitations of the prior art in HMDs, the following is a sufficient starting point.
In the category of head-mounted displays (which, for purposes of this disclosure, includes helmet-mounted displays), two main subtypes have been identified to date: the VR HMD and the AR HMD, following the accepted meanings of those terms. Within the category of the AR HMD, a further two-way distinction is drawn between the "video see-through" and the "optical see-through" type (the latter more often called the "optical HMD" for short).
In a VR HMD, the user views a single panel or two individual displays. The typical form of such an HMD is that of goggles or a mask, although many VR HMDs have the appearance of a welder's helmet, with a bulky, enclosing visor. To ensure optimal video quality, immersion and freedom from distraction, such systems are completely enclosed, with a light-absorbing material around the periphery of the display.
The author of the present disclosure previously proposed two types of VR HMD in the incorporated U.S. provisional application "System, method and computer program product for magneto-optic device display." One of them simply proposed replacing the conventional direct-view LCD with a chip-type embodiment of the first practical magneto-optical display, whose superior performance characteristics include high frame rate, among further advantages — an improvement of display technology generally, employed in that embodiment for an improved VR HMD.
The second version contemplated, according to the teachings of that disclosure, was a new type of remotely generated image display: the image would be generated, for example, in a vehicle cab, then transmitted via a fiber-optic bundle and distributed by a special fiber-array structure (the structures and methods disclosed in that application), building on the experience base of fiber-optic faceplates and using new methods and structures for remote video transmission over optical fiber.
Although the core MO technology was not initially commercialized for HMDs but rather for projection systems, those developments are relevant to aspects of the present proposal, and the field is, moreover, not well known. In particular, the second version disclosed a method of transmitting the video image over optical fiber from an image engine not integrated into, or adjacent to, the HMD optics — prior to other, more recent proposals of the same kind.
A key consideration regarding the practicality of a fully enclosed VR HMD for mobility, outside a strictly controlled stage environment with a uniform floor, is that for movement to be safe, the virtual world being navigated must map 1:1 to human motion; any deviation from the real surface topography or motion path compromises safety.
Indeed, as researchers such as Barrilleaux of Loral WDL observed and concluded, and as other researchers in the field — including the developers of BARS — have held through nearly a quarter century of development, an AR system must in fact maintain a close correspondence between the virtual (synthetic, CG-generated imagery) and the terrain and architectural environment of the real world, including the geometry of moving vehicles (a requirement that has, unsurprisingly, driven the development of military systems for urban warfare).
The more general case, therefore, is that for VR or AR enabled in mobile form, there must be a 1:1 positional correspondence between any "virtual" or synthetic element and any real-world element.
Within the category of the AR HMD, the difference between "video see-through" and "optical see-through" is the difference between a user viewing the world directly through a transparent or semi-transparent pixel array placed before the observer, as part of the eyeglass optics themselves, and a user viewing a semi-transparent projected image on an optical element also set directly before the observer — the image generated by a (usually directly adjacent) micro-display and conveyed through some form of relay optics to the facing optical element.
The main — and in practice, historically, the only — type of transparent or semi-transparent direct see-through display is the LCD configured without an illuminated backplane. Specifically, AR video see-through glasses have the viewer look through one or more optical elements comprising a transparent optical substrate on which an array of LCD light-modulating pixels has been fabricated.
For applications similar to Mann's original "EyeTap," in which text/data is displayed or projected directly on the facing optic, no calibration to real-world terrain and objects is needed, although some degree of positional correlation is helpful for contextually "labeling" items in the field of view with informational text. This is the primary use of the Google Glass product, although, as of the drafting of this disclosure, many developers are focused on developing AR-type applications for it that place more than text in the live scene.
Beyond loose, approximate positional correlation in a 2D plane or coarse cone, the main problem in "calibrating" either a video or an optical see-through system to the terrain or objects in the user's field of view is determining the relative positions of objects in the observer's environment. Without reference to substantially real-time spatial-positioning data and/or a 3D map of the local environment, consistent calculations of perspective and relative size cannot be performed.
Beyond relative size, critical aspects of perspective from any viewpoint are realistic illumination and shadow, including shadows consistent with the direction of illumination. Finally, the occlusion of one object by another is a key optical cue for perceiving perspective, relative distance and position from any given viewing location.
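Why relative size and occlusion both require per-object depth can be seen from the pinhole projection itself. A sketch (standard geometry, not any particular system's method):

```python
def project(point, viewer, focal=1.0):
    """Project a world point into the observer's image plane (pinhole
    model). Apparent size scales as 1/depth, which is why neither relative
    size nor occlusion order can be computed without knowing each object's
    distance from the observer."""
    dx, dy, dz = (p - v for p, v in zip(point, viewer))
    if dz <= 0:
        return None                                 # behind the observer
    return (focal * dx / dz, focal * dy / dz, dz)   # image x, y and depth

# Two objects on the same line of sight from the origin: both land on the
# same image point, so only the depth term can decide which occludes which.
near = project((0.0, 1.0, 2.0), (0.0, 0.0, 0.0))
far  = project((0.0, 2.0, 4.0), (0.0, 0.0, 0.0))
near_occludes_far = near[2] < far[2]                # depth test
```

This is the computation the text says cannot be performed without real-time 3D positioning data: the image coordinates alone are identical for the two objects; depth alone resolves size and occlusion.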
No video or optical see-through HMD exists — nor can one be designed independently of the question of how such data is to be provided in either type of system — that is practically usable for mobile VR: for three-dimensional observation of the wearer's surroundings, or for necessarily safe movement and pathfinding. Is this data provided externally, locally, or from multiple sources? If partly external and partly in the HMD, does that affect the design and performance of the overall HMD system? If so, does the problem bear on the choice between video and optical see-through, given weight, balance, volume, data-processing requirements, lag between components, other affecting and affected parameters, and the details of display choice and optics?
Over the evolution and progress of the VR HMD, the principal technical parameters and problems to be solved have included increasing the field of view; reducing latency (the lag between the motion-tracking sensors and the change of the virtual perspective); and improving resolution, frame rate, dynamic range/contrast and other general display-quality properties, along with weight, balance, volume and general ergonomics. The details of image collimation and the rest of the display optics have been improved, effectively solving the problem of "simulator sickness," which was a major problem of the early period.
With improvement across these general technical categories, the weight, size/volume and balance of the displays, optics and other electronics have tended to decrease.
Fixed VR gear is commonly used in night-vision systems in vehicles, including aircraft. Mobile night-vision goggles, however, may be considered a form of mediated viewing similar to mobile VR, because what the wearer is watching is essentially the real scene in real time (IR imaging), but through a video screen rather than in "see-through" form.
This is similar to the subtype Barrilleaux defined in the same 1999 review as "indirect view displays." He proposed that definition relative to the proposed AR HMDs: there is no actual "seeing through"; rather, what is seen on the display is exclusively a merged/processed real/virtual image — a definition that could take in any VR-type or night-vision system.
A night-vision system, however, is not a fusion or mixing of a virtual synthetic landscape with the real one; it is the direct transmission of a video image derived, by video signal processing, from IR sensor data, its intensity depending on the IR signature. As a video image it is certainly suitable for real-time text/graphics overlay — the same as the simple form of the original EyeTap concept, and as Google has stated is the main purpose of its Glass product.
How and what data is extracted from, referenced against, or supplied to a mobile VR or mobile AR system — or to an "indirect view display" that mixes a real-time processed video feed with synthetic elements, which presents similar problems — are, for both categories, design parameters and problems that any new and improved mobile HMD system must address, regardless of type, in order to achieve effective integration of the virtual and real landscapes into a consistent combined view.
Prior efforts by the cited system developers have proposed software and data-processing methods for AR to handle these problems. An example is the work of Matsui and Suzuki of Canon Inc., as disclosed in their then-pending published U.S. patent application, "Mixed reality space image generation method and mixed reality system" (U.S. Patent Application 20050179617, filed September 29, 2004). From their abstract:
" for generating through shape on the real space image obtained by capture real space that virtual space image is added to
At mixed reality spatial image mixed reality spatial image generating device, including be superimposed virtual space image image synthesis
Unit (109).In view of blocking for the object on the real space of virtual space image, it will be shown in real space images
On, and further apply the annotation generation unit (108) of image to be shown, without considering any block.Pass through this side
Formula can be generated and the mixed reality spatial image that nature shows and facilitates display may be implemented.
The purpose of the system was to allow a fully rendered model of an industrial product (such as a camera) to be superimposed on a hand-held mock-up (substituting for the product); a pair of optical see-through HMD glasses and the mock-up are each fitted with position sensors. A real-time, pixel-by-pixel lookup-and-comparison process matches pixels against the model, and the CG-generated virtual model is superimposed on the composite video feed (buffered with a slight delay to achieve the layering). The system also adds annotation graphics (computer-generated imagery). The basic sources of the data for determining the matting — and thereby ensuring correct, error-free occlusion in the composite — are the motion sensors on the mock-up and a predetermined lookup table; pixels are compared to pull a hand matte and a model matte.
Although this system is not suited to mobile AR, VR or any hybrid of the two, it is an example of an attempt at a simple — though not fully automatic — system for analyzing a real 3D space and correctly positioning a virtual object in the view.
In the field of video and optical see-through HMDs, even on the assumption that an ideal mixed-reality perspective view has been computed for delivery to the HMD, there has been almost no progress in designing the display, or the optics-and-display system, that could realize it: a satisfactory, lifelike and accurate merged perspective view, including handling of correct perspective ordering and proper occlusion among the merged elements and the real space for any given observer position.
One system that claims to propose the most effective solution to the problem, even a partial solution — and that may be the only integrated HMD system to do so (as opposed to software/photogrammetry/data-processing-and-transmission systems that address these problems in some generic way, independently of the HMD) — has been cited above: it is the one proposed by Chunyu Gao in U.S. Patent Application 20140177023, "Apparatus for optical see-through head mounted display with mutual occlusion and opaqueness control capability."
Gao begins his survey of the field of AR HMDs with the following observation:
"There are two types of ST-HMD: optical and video (J. Rolland and H. Fuchs, 'Optical versus video see-through head-mounted displays,' in Fundamentals of Wearable Computers and Augmented Reality, pp. 113-157, 2001). The major drawbacks of the video see-through approach include: degradation of the image quality of the see-through view; image lag due to processing of the incoming video stream; potential loss of the see-through view due to hardware/software failure. In contrast, the optical see-through HMD (OST-HMD) provides a direct view of the real world through a beam splitter and thus has minimal effect on the view of the real world. It is highly preferred in demanding applications where the user's awareness of the live environment is critical."
However, on the first point — by treating the video see-through prior art as if it meant a single-purpose LCD — Gao's observation about the problems of video see-through is unqualified and unverified, as is his implicit assertion that the LCD necessarily degrades the see-through image (degraded compared with what, and by what standard, is omitted). Those reasonably skilled in the art will appreciate that this view of low image quality derives from early see-through LCD systems, before the recent acceleration of progress in the field. Nor is it obvious, or demonstrated by comparing the reprocessing and mediating effects of the many optical elements and other display technologies on the "true" "see-through image," that the end result of an optical see-through system is relatively better than a state-of-the-art LCD or other video see-through display technology, rather than worse, as Gao et al. propose.
Another problem with this unfounded generalization, as against other systems that must likewise process incoming real-time imagery, is its assumption of lag in this class of see-through system. Here, any comparison of speed must be the result of detailed analysis of the competing systems' components and their performance. Finally, the conjecture of "potential loss of the see-through view due to hardware/software failure" is essentially gratuitous and arbitrary, unverified by any exact analysis comparing the robustness or stability of the video versus the optical see-through scheme, or of their particular versions, component technologies and system designs.
Beyond these initial problems of an erroneous and biased characterization of the field, there are qualitative problems with the proposed solution itself, including omissions and failures to consider the proposed HMD both as a complete HMD system and as one component in a wider AR system — a system encompassing the previously cited problems of data acquisition, analysis and distribution. Important as the HMD and its design are in their own right, the HMD cannot be treated as a "given" supplied with data, or processing capacity, of some assumed level and quality for generating the altered or mixed imagery; such assistance cannot simply be assumed as provided. In addition, the entire dimension of the problem of integrating the real and virtual views on a mobile platform is omitted from the statement of the problem and solution.
Turning to the disclosure and the system of its teachings, specifically:
As described in the background above, Gao proposes using two display-type devices, since the specification of a spatial light modulator that selectively reflects or transmits a real-time image is essentially the specification of an SLM — operationally identical to its purpose in any display application.
The output images from the two devices are then to be combined in a beam splitter/combiner, presumably aligned pixel by pixel, although no specific explanation of this is given beyond statements about the precision of the device.
To achieve this merging of two pixelated arrays, however, Gao specifies a duplicated set of what he calls "folded optics" — essentially nothing other than a doubled version of the Mann EyeTap scheme: two "folded optics" elements in total (e.g., planar gratings/HOEs or other compact prism or planar devices), one for each light source, plus two objective lenses (one for the wavefront of the real view, one to focus the image from the other end), and the beam splitter/combiner.
Multiple optical elements are therefore required (he offers various conventional optical variations): the light of the real scene is collected through a first reflective/folding optical element (planar grating/mirror, HOE, TIR prism or other "flat" optic), passed from there to an objective lens, then passed to the next planar grating/mirror, HOE, TIR prism or other "planar" optic to "fold" the optical path again — all of this provided so that the whole optical system remains relatively compact and fits within the two rectangular optical relay regions of the schematic. From the folded optical system the beam reaches the SLM via the beam splitter/combiner; there it is reflected or transmitted on a pixelated (sampled) basis and thereby variably modulated (varying the contrast and intensity of the real image, modifying gray scale, etc.); the now-pixelated real image then returns to the beam splitter/combiner.
Meanwhile, the display synchronously generates the virtual or synthetic/CG image — presumably also calibrated to ensure ready integration with the modified pixelated/sampled real wavefront — and the beam splitter integrates them pixel by pixel: the multi-step, pixelated, modified sample of the real scene, from there through the eyepiece objective, and back to another "folded optics" element to be reflected out of the optical system into the observer's eyes.
In total, the modified, pixelated sample of the real-image wavefront passes through seven optical elements, not counting the SLM, before reaching the observer's eyes; the generated composite image passes through only two.
The problem of precisely aligning the optical image combiner, down to the pixel level — whether the small functional SLM/display device generates the combined image from reflected light or from laser-interrogated image sampling — and of maintaining that alignment, especially under conditions of mechanical vibration and thermal stress, is regarded in the art as non-trivial.
In digital projection, free-space beam-combining systems that combine the output of high-resolution (2k or 4k) red, green and blue image engines (images generated, in general, by DMD or LCoS SLMs) are expensive to implement, and maintaining their alignment is non-trivial — and some of those designs are simpler than the seven-element scheme at issue.
Moreover, these complex multi-engine, multi-component optical combiner systems are not as compact as an HMD requires. Monolithic prisms, such as the T-Rhomboid combiner developed and sold for the life-sciences market by Agilent, are dedicated precisely to solving the problems free-space combiners have shown in existing applications.
Although the SLM-based micro-projection technology originally developed successfully by companies such as Microvision has been deployed in HMD platforms, those optical setups are generally substantially less complex than what Gao proposes.
Further, it is difficult to determine the rationale for two image-processing stages on two platforms and for the computational iteration, or why they are needed to achieve smooth, integrated combination of the real and virtual wavefront inputs and to implement correct occlusion/hiding of the combined scene elements. The greatest problem Gao appears to address, and the one to be solved, is that the composite image competes with, and struggles to match, the brightness of the real image; the main task of the SLM therefore appears to be to selectively reduce the luminance of part of the real scene, or of the entire real scene. It may further be inferred that, to reduce the intensity of the real-scene elements being occluded, a time-division-multiplexed system could minimize the duration for which the DMD mirrors are in the reflecting position — simply leaving the occluded pixels "off" — although this is not specified by Gao, nor are the details of how the SLM performs its image-modification function.
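The inferred time-division scheme — attenuating an occluded real-scene pixel by shortening the DMD mirror's "on" time — reduces to simple duty-cycle arithmetic. A sketch (the 16-sub-frame field is an assumed figure, not from Gao's application):

```python
def mirror_on_frames(target_intensity, frames_per_field=16):
    """A binary DMD mirror approximates a gray level by the fraction of the
    field time it spends in the 'on' (reflecting) position. An occluded
    real-scene pixel is simply left 'off' for the whole field; an
    unoccluded one reflects for every sub-frame."""
    clamped = max(0.0, min(1.0, target_intensity))
    return round(clamped * frames_per_field)  # sub-frames spent reflecting

full    = mirror_on_frames(1.0)    # real scene passes at full intensity
dimmed  = mirror_on_frames(0.25)   # scene element attenuated to ~25%
blocked = mirror_on_frames(0.0)    # pixel occluded: mirror stays 'off'
```

This also makes the frame-rate stakes concrete: every gray level of real-scene attenuation multiplies the sub-frame rate the mirrors must sustain within each displayed frame.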
Among the many parameters that must be simultaneously computed, calibrated and aligned is the exact determination of which pixel of the real field is calibrated to which resulting pixel. Without exact matching, ghosting, overlap and errors of alignment and occlusion will multiply, especially in a mobile context. The reflective optic that conveys part of the real-scene wavefront to the objective lens has a perspective position relative to the scene that differs, first of all, from the observer's perspective position in the scene; and if it is not flat, and not located dead center, what it delivers is a wavefront sample, not a position. Moreover, when the wearer moves, it moves too, in ways not known in advance to the composite-image processing unit. Because of these facts, the number of variables in the system is very large.
If these are indeed the targets of the solution, and as those targets become more specific, it becomes clear that there may be simpler ways to achieve the result than adding a second display (in a binocular system, two added displays in total, in addition to the specified SLMs).
Secondly, an inspection of the scheme makes clear that, because of the multiple accumulated alignment tolerances of such a complicated system, the accumulating defects and wear of component parts over time along a multi-element path, misalignment of the combined beams, the accumulated effects of heat and mechanical vibration, and the added complexity of a seven-element optical system, it is this system itself that may degrade, particularly over time, the external real-time image wavefront.
In addition, as already noted, the problem of computing the spatial relationships between real and virtual elements is non-trivial. Designing a system that must drive two (and, in binocular systems, four) displays from these computing devices, displays most likely of different types (and therefore with different color gamuts, frame rates, etc.), adds further complexity to a system design whose parameters are already demanding.
In addition, a high frame rate is necessary in order to deliver high-performance imagery without ghosting or lag and without causing eye strain and fatigue in the visual system. Using a transmissive rather than reflective SLM only slightly simplifies the system design; even with the faster FeLCoS micro-displays, frame rate and image speed remain far below those of TI's DLP (DMD) devices.
However, since higher HMD resolution is also needed, if only to realize a wider FOV, using a high-resolution DMD (such as a 2k or 4k device) means resorting to a very expensive solution: at the feature sizes and counts involved, DMD yields are known to be low, with defect rates higher than consumer or enterprise production costs can usually tolerate, and systems using them today are priced very high, for example the d-cinema projectors on the market commercialized by the TI OEMs Barco, Christie and NEC.
Although planar-waveguide projection is an intuitively easy step for optical see-through HMDs (such as those of Lumus, BAE, etc.), occlusion is neither a design goal of such systems nor possible within their scope and capabilities. Gao comes close to replicating this approach while also modulating the real image, and then proposes combining the two images with conventional optics, relying on numerous planar optical elements to achieve the combination in a relatively compact space.
To conclude this background review and return to the current leaders in the two general categories of HMD, optical see-through HMDs and classic VR HMDs, the prior art can be summarized as follows, noting that other variants of both categories are commercially available and the subject of extensive research and development, with a large body of commercial and academic work, including products from Google (Glass) and Oculus (VR HMDs); since the Rift breakthrough, announcements, publications and patent filings have escalated substantially:
Google Glass is, at this writing, the commercially leading mobile AR optical HMD, and has established breakthrough public visibility and a leading marketing position for the optical see-through HMD category.
However, Google followed others into the market, companies that developed and deployed products mainly in the defense/industrial sector, including Lumus and BAE (Q-Sight holographic-waveguide technology). Competing work, in recent markets and at the concept stage, is being commercialized by companies such as TruLife Optics.
The major use cases of many military helmet-mounted display applications, and of Glass as officially positioned by Google, are, as noted above, the superimposition of text and symbolic graphic elements on the view space, requiring only rough positional association; this may be sufficient for many initial, simple mobile AR applications.
However, even for information-display applications, it is obvious that the greater the density of label information for the items and terrain in the space the viewer faces (and, ultimately, is surrounded by), the greater the demands on that space: ordering and layering of labels, and perspective/relative positioning of labels with the elements they tag.
Overlap, that is, partial occlusion of labels by real elements in the field of view, and not merely overlap among the labels themselves, therefore necessarily becomes a requirement even of an essentially "optical view" information-display system, in order to manage visual clutter.
Furthermore, labels must reflect not only the relative position of the tagged element in the perspective view of real space, but also automated (computed) priorities based on predetermined rules or software, user-specified priorities, and real-time currency; label size and transparency, beyond the two principal visual cues the graphics system uses to reflect the level of information structure, must also be managed and enforced.
The question that immediately follows is how, in detail, the translucency and overlap/occlusion of labels and superimposed graphic elements, and the problem of their brightness relative to the transmitted live elements, are to be handled by the basic optics of see-through HMDs (whether of the simple monocular reticle type or the binocular full-glasses type) and their superimposed, generated video display components, especially under bright outdoor lighting conditions and under very dim outdoor conditions. If the practicality of these display types is to be meaningfully extended, night use is clearly the extreme case of the low-light problem. Thus, as the most constrained use cases of passive optical see-through HMDs are passed, with increasing information density (to be expected as such systems succeed commercially, typically in dense urban or suburban areas where label information is obtained from commercial enterprises) and with usage under both bright and dim conditions adding constraints, a very evident problem emerges that "passive" optical see-through HMDs can neither escape nor cope with in any realistic implementation of a mobile AR HMD that meets demand.
Passive optical see-through HMDs must therefore be regarded as an incomplete model for realizing a mobile AR HMD, and in retrospect will be seen merely as a transitional stepping-stone toward active systems.
Oculus Rift VR (Facebook) HMD: somewhat similar to Google Glass in its product-marketing impact, but Oculus also actually solved, or began to significantly solve, certain major threshold obstacles to a practical VR HMD in this leading field (rather than following others, as Google followed Lumus and BAE). The Oculus Rift VR HMD is, at this writing, the leading pre-release VR HMD product, entering and creating broad consumer and business/industrial acceptance of VR.
The threshold advances of the Oculus Rift VR HMD can be summarized in the following list of product features:
o A significantly widened field of view, achieved by using a single current 7-inch-diagonal display of 1080p resolution, placed several inches from the user's eyes and divided into binocular viewing zones on the single display device. The current FOV, as of this writing, is 100 degrees (improving on its original 90 degrees), compared with the 45 degrees total that was the common size of pre-existing HMDs. Separate binocular optics realize the stereoscopic effect.
o Significantly improved head tracking, resulting in low lag. This is an advance in motion sensors and software, leveraging the miniature motion-sensor technology for 3D position tracking (accelerometers, MEMS gyroscopes, etc.) that Nintendo, Apple and other fast followers carried to mobile phones, the Playstation PSP and now Vita, the Nintendo DS and now 3DS, the Xbox Kinect system, and other handheld devices with built-in motion sensors. The current head tracking implements a multi-point infrared optical system cooperating with an external sensor.
o Low latency, the combined result of improved head tracking and fast software update processing for interactive game software systems, although still limited by the intrinsic response time of the display technology used, originally LCD and since replaced by the somewhat faster OLED.
o Low persistence, a form of buffering that helps keep the video stream smooth, working in combination with the higher switching speed of the OLED display.
o Lighter weight, reduced volume, better balance and overall improved ergonomics, using a ski-goggle form factor, materials and mechanical platform.
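As a rough check of the first bullet above (assumed arithmetic only, not figures published by Oculus), splitting one 1080p panel into binocular halves gives each eye well under the panel's nominal resolution, which over a 100-degree FOV is an order of magnitude below the roughly 60 pixels per degree commonly cited as "retinal" angular resolution:

```python
# Per-eye resolution of a single shared 1080p panel (assumed arithmetic,
# not figures published by Oculus).
PANEL_W, PANEL_H = 1920, 1080
per_eye_w = PANEL_W // 2              # 960 px of width per eye
FOV_DEG = 100                         # FOV stated above
px_per_degree = per_eye_w / FOV_DEG   # 9.6 px/degree across that FOV
```

This trade of angular resolution for FOV is exactly the core-display-technology constraint discussed throughout this background section.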
To summarize the net benefit of combining these improvements: although such a system may be new neither in structure nor in operation, the net effect of improved components, a particularly effective design (design patent US D701,206) and proprietary software has produced a breakthrough in performance and the validation of a general-market VR HMD.
Following this lead, and in many cases adopting these methods or revising contemporaneous product plans of their own design in light of the success of the Oculus VR Rift configuration, many VR HMD product developers, brand-name companies and ventures alike, have announced product plans since the initial demonstrations at the 2012 electronics fairs and the Oculus VR Kickstarter funding campaign.
Among those who substantially changed strategy to follow the Oculus VR template, and the other fast followers, are Samsung, whose development model at this writing is very similar to the Oculus VR Rift design, and Sony with Morpheus. Start-ups focused on this field include Vrvana (formerly True Gear Player), GameFace, InfiniteEye and Avegant.
None of these system configurations appears to be absolutely identical to that of Oculus VR, although some use 2 panels and others 4; InfiniteEye uses a 4-panel system to widen the FOV to a claimed 200+ degrees. Some use LCD, others OLED. Optical sensors are used to improve the precision and update speed of the head-tracking systems.
All of these systems realize essentially in-place or highly constrained mobility, using on-board and active-optical-marker motion-tracking systems designed for enclosed spaces such as a living room, operating room or simulator stage.
The systems differing most from the Oculus VR scheme are Avegant's Glyph and Vrvana's Totem. The Glyph does achieve a display solution that follows previously established optical see-through HMD solutions and structures, employing a Texas Instruments DLP DMD micro-image generator projecting onto planar reflective optical elements, configured and operated like the planar optical elements of existing optical see-through HMDs; the difference is a high-contrast, light-absorbing backplane structure realizing a reflective/indirect micro-projector display type, whose video image belongs to the general category of opaque display images.
Here, as previously noted in the discussion of Gao's disclosure, it has been established that when DLP DMDs or other MEMS components are used, the limits on increasing display resolution and other system performance beyond 1080p/2k are those of cost, manufacturing yield and defect rate, and durability and reliability.
In addition, the limited spreading/magnification factor of planar optical elements (grating structures, HOEs or others) limits image size and FOV, while enlarging the SLM image size imposes limits of viewer safety and comfort through its interaction with, and strain on, the human visual system (HVS), especially the focusing system.
Regarding the existing limitations of VR HMDs: all such systems using OLED and LCD panels suffer from relatively low frame rates, which leads to motion lag and latency and to negative physiological effects on some users, falling under the broad heading of "simulator sickness". It is also noted that in digital stereoscopic cinema projection, realized commercially with stereo systems such as RealD, frame rates are likewise not high enough, whether for projectors based on Texas Instruments DLP DMDs or on Sony LCoS. Some studies report that a fraction of the participating audience, up to 10% in some studies, experiences headaches and related symptoms. Some of this is specific to those individuals, but much of it can be traced to the frame-rate limitation.
Moreover, as noted, Oculus VR has implemented a "low persistence" buffering system to compensate for the still insufficient pixel-switching/frame rate of the OLED display in use at this writing.
A further influence on the performance of existing VR HMDs is the resolution limit of existing OLED and LCD panel displays. This partly accounts for the use of 5-7 inch diagonal displays mounted away from the viewing optics (and the observer's eyes) to obtain sufficient effective resolution, and it contributes to the volume, size and balance of existing and planned products, which are much bulkier and heavier than most other optical headwear products.
Some improvement can be expected from the use of curved OLED displays, which should further improve FOV without increasing volume. But bringing such displays to market in sufficient quantity requires substantial additional investment in factory capacity at acceptable yields, which makes this prospect impractical in the short term; and it would only partially solve the volume and size problems.
For completeness, video HMDs must also be mentioned: HMDs for watching video content, optionally or not equipped with motion sensing, and therefore without the ability to navigate a virtual or mixed (mixed-reality/AR) world. Over the past 15 years such video HMDs have improved substantially, with increases in effective FOV, resolution and viewing comfort/ergonomics, and they provided a development path that current VR HMDs have been able to exploit and build upon. But these too are limited by the core capabilities of the display technologies used, following the pattern of limitations observed for OLED, LCD, and DMD-based reflective/deflection systems.
Other important variants of image projection on transparent-glasses optics include those from Osterhoudt Design Group, Magic Leap and Microsoft (Hololens).
Although these variants have certain advantages or disadvantages relative to one another and to the other prior art detailed above, they all retain the limitations of the basic approach.
More fundamentally and universally, they are also limited by the basic type of display/pixel technology used, namely the frame rate/refresh of the existing core display technologies, whether fast LC, OLED or MEMS; and whether mechanically scanned fiber input or other disclosed optical systems are used to relay the display image to the viewing optics, all of these remain insufficient for the high-quality, easy-to-view (HVS-friendly), low-power, high-resolution, high-dynamic-range and other display performance parameters that, individually and jointly, would enable general-market, high-quality, enjoyable AR and VR.
To summarize the state of the prior art with respect to the details above: "high visual acuity" VR has improved substantially in many respects, including FOV, latency, head/motion tracking, weight, size and volume.
But frame rate/latency and resolution, and by direct implication weight, size and volume, remain limited by the constraints of the available core display technologies.
Modern VR is limited to static or highly constrained, limited-mobility use in small controlled spaces.
A closed version of VR based on an optical see-through system, but configured as a lateral projection-deflection system in which the SLM projects the image to the eye through a series of three optical elements, is limited in performance: relative to the total area of a standard eyeglass lens, the reflected image expands but cannot exceed the output of the SLM (DLP DMD, other MEMS, or FeLCoS/LCoS). The eye-strain risk of extended viewing of detail work in depth, and its demands on the eye muscles, is a further limitation on practical acceptance. SLM type and display size also limit the practicality of scaling the cited technologies to higher-resolution SLMs to improve resolution and overall performance.
Optical see-through systems generally restrict eye-muscle use to a relatively small region, with the attendant potential for eye fatigue, and require relatively small and frequent eye-tracking adjustments within those constraints, over more than brief periods of use. The Google Glass design, which positions the optical element upward and away from the direct resting position of forward-looking eyes, is intended to reflect an expectation of limited-duration use. Users have nonetheless reported eye fatigue, as widely documented in the media through the writings of, and interviews with, Google Glass Explorers.
Because labels must be organized in perspective with respect to real-world objects, optical see-through systems are limited in the density of overlaid semi-transparent information. Even for graphical information-display applications, the requirements of mobility and information density leave the passive optical view constrained.
Aspects of the "indirect-view display" have been realized in the form of night-vision goggles, and Oculus VR competitor Vrvana has proposed that its Totem, equipped with binocular video cameras, is suitable for AR.
Gao's proposal, although claimed to be an optical see-through display, is in fact more of an "indirect-view display" with quasi-see-through aspects, operating as an improved projection display that uses an SLM device to sample a portion of the real wavefront and digitally alter that portion.
The number of optical elements in the optical path of the partially processed original wavefront, seven or close to that number (and, it should be added here, covering an optical region much smaller than the lenses of a traditional pair of glasses), introduces opportunities for image aberration, artifacts and loss, and demands a complicated optical alignment system; such complicated free-space alignment of many elements is not common in the field and, where it is needed, is expensive, difficult to maintain and fragile. The method by which the SLM is expected to manage the alteration of the real-scene wavefront for a particular requirement is likewise neither specified nor validated. Coordinating the signal processing among the 2-4 display-type devices (depending on monocular or binocular configuration) is also not problem-free, including determining exactly which pixel from the real field is the calibrated pixel for the appropriately synthesized pixel. Performing the computations to establish proper perspective relationships between real and synthesized elements is very demanding, especially as the individual moves through information-dense environments of varied topography; mounted on a vehicle, the problem can only be further exacerbated.
Beyond the task of constructing optics such as Gao proposes, even reduced to a relatively compact form factor, the development of the complete system raises countless ancillary problems. Size, balance and weight are among the many consequences of the number and required positions of the various processing and optical array units; compared with the other problems and limitations cited, however, they are relatively minor, however serious for the actual field deployment of such a system in military, hardened-industrial or consumer use.
A 100% "indirect-view display" has requirements similar to Gao's in the critical respects, apart from the number and alignment of display-type units, the optical system, the details of pixel matching, and the perspective problem; all key parameters of such a system are therefore in question as to the degree of "brute-force" computation needed to coordinate the stored synthetic CG 3D mapping space with the real-time, individual-perspective runtime view, image by image. The problem only grows, since all computation must be performed on the video imagery captured by the forward-looking cameras.
What a truly mobile system, whether VR or AR, needs in order to achieve immersion and calibration to the real environment is as follows:
An optics and viewing system that meets ergonomic requirements, minimizing any undue demands on the human visual system. This is implied by the goal of extended, mobile use.
A wide field of view, ideally including peripheral vision of 120-150 degrees.
A high frame rate, preferably 60 fps per eye, to minimize the latency and other artifacts typically caused by the display.
Effective resolution at a comfortable distance between unit and face. A usable standard of effective resolution at the upper end is effective 8k, or a "retinal display". The distance should be similar to that of traditional eyewear, which typically uses the bridge of the nose as the balance point. Collimation and optical-path optics are necessary to establish an appropriate virtual focal plane that realizes this effective display resolution at the actual optical-element-to-eye distance.
High dynamic range, matching as closely as possible the real-time dynamic range of the real view.
On-board motion tracking that determines the orientation of both head and body, whether in a known pattern or within the wearer's field of view as known over time. This can be supplemented by external systems in a hybrid scheme.
A display optical system that can perform rapid synthesis processing between the real-scene wavefront and any synthetic components within the envelope of the human visual system. Passive devices should be used wherever possible, to minimize the burden on on-board (HMD- and wearer-carried) and/or external processing systems.
A display optical system that is relatively simple and robust, with few optical elements, few active device elements, and simple active-device designs, robust under mechanical and thermal stress, with minimal weight and thickness.
Light weight, small volume, gravity balance, and a form factor suited to design configurations acceptable to professional users, such as military and hardened-environment industrial users, rugged sports applications, and general consumer and business use. These factors hold from eyewear makers such as Oakley, Wiley, Nike and Adidas to specialist sports-goggle manufacturers such as Oakley, Adidas, Smith, Zeal and others.
A system that can switch variably between a VR experience, while retaining full mobility, and a variable-occlusion, see-through, integrated hybrid-viewing AR system.
A system that can not only manage the wavelengths incident on the HVS, but also, through sensors and combinations thereof, obtain useful information from the wavelengths of interest; IR, visible light and UV are the typical wavelengths of interest.
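Two of the targets listed above can be sanity-checked with simple arithmetic. The sketch below uses stated figures from the list (60 fps per eye, a 120-150 degree FOV) together with an assumed acuity benchmark of roughly 60 pixels per degree, a commonly used figure that is not itself taken from this disclosure:

```python
FPS = 60
frame_budget_ms = 1000.0 / FPS         # ~16.7 ms available per frame per eye

RETINAL_PPD = 60                       # assumed pixels/degree for 20/20 acuity
FOV_DEG = 150                          # upper end of the stated 120-150 degrees
horizontal_px = RETINAL_PPD * FOV_DEG  # 9000 px: an effective-8k-class width
```

The 9000-pixel result shows why the list pairs a 150-degree peripheral FOV with an "effective 8k / retinal display" resolution target, and the roughly 16.7 ms frame budget is the ceiling under which all display and synthesis processing must complete.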
The system proposed in this disclosure solves these problems and, where the prior art is fundamentally limited and insufficient, meets the final goals (tasks and criteria) of functional augmented and virtual reality.
The present disclosure incorporates and implements the telecommunications framework and pixel-signal processing system with hybrid magneto-photonic features (see the co-pending U.S. patent applications [2008] and Photonic Encoder, by the same inventor), with hybrid MPC pixel-signal processing as the preferred pixel-signal-processing subtype (co-pending U.S. patent application on displays and networks, by the same inventor). Addressing and powering of the devices (especially arrays) are preferably as in the co-pending U.S. patent applications on wireless addressing and powering of devices and arrays and on 3D fabrication and systems, in which preferred embodiments of the hybrid MPC-type system are also found. The present application incorporates these co-pending applications by reference in their entirety.
However, in establishing the classes of system types and key subsystems, and the preferred versions of subsystems and embodiments, this is not to say that the details of the present proposal are contained in the cited applications, nor is the present application merely a combination of those systems, structures and methods.
On the contrary, this proposal presents new and improved systems and subsystems, in most if not all cases belonging to the referenced (and frequently new) categories and classes, disclosing in detail components, systems, subsystems, structures, processes and methods which, as unique combinations of these and other classes of constituent elements, also realize a unique and novel mobile AR and VR system, with preferred embodiments as wearable systems, of which head-mounted is the most preferred.
The specification of the proposed system is organized by first listing the major subsystems, preferably grouped by overall structure and operating structure, and then providing the details of those subsystems in the form of a layered outline.
Major subsystems:
I. The telecom-system-type framework for displays with a pixel-signal processing platform, preferably hybrid MPC pixel-signal processing, including photonic encoding systems and devices.
II. Sensor systems for mobile AR and VR
III. Structures and basic systems
Together, these major subsystems realize a novel, integrated duplex system: a "generated" and variably directly transmitting, direct-view hybrid display system:
I. The telecom-system-type framework for displays with a pixel-signal processing platform, preferably hybrid MPC pixel-signal processing, including photonic encoding systems and devices:
An aim of the present disclosure is to use passive optics and components as much as possible, to help minimize the demand on active device systems for processing sensor data, especially in real time, and for the computation of computer-generated image and 3D content and its perspective-correct integration with the actual scene and with synthetic/digital or stored digital image information.
The following subdivision of the structural/operational stages of the image-processing and pixel-image display-generation system, its subsystems, components and elements specifies how this goal is realized, from the external image wavefront intercepted by the system through the structural, component and operational stages that deliver the final intermediate image to the HVS (for simplicity, in an arbitrary left-to-right ordering; see Fig. 1):
A. General case, essential elements of the system:
1. IR/near-IR and UV filtering stage and structure (IR and near-IR filtering is omitted in versions of the system intended for night vision).
2. Polarization filtering, to reduce the intensity of the incoming pass-through illumination, an option with certain benefits and advantages; or polarization filtering/separation into channels, polarization rotation and channel recombination, to retain the maximum input or pass-through illumination phase, an option with other benefits and advantages.
3. Pixelation or sub-pixelation of the real pass-through illumination, realized per channel.
4. Integration of the per-channel, internally generated sub-pixel arrays into one unified array, to present the optimal augmented/mixed-reality or virtual-reality image display.
i. The two preferred overall schemes and structures for processing and handling the (real-world) pass-through illumination: although other arrangements and versions are realized by the general features of this disclosure, the main difference between the two preferred versions is a basic difference in processing. The channels of the entrance and characteristic optics relay light, through the subsequent processing stages, to the inward, observer-facing composite optical output surface. In one case, all real-world pass-through illumination is down-converted to IR and/or near-IR "false color" for efficient processing; in the other case, the pass-through visible-frequency real-world illumination is processed/controlled directly, without frequency/wavelength shifting.
ii. The generated/"artificial" merged sub-pixel array: this is preferably the hybrid magneto-optic sub-pixel signal-processing and photonic encoding system. In the version in which the pass-through light is down-converted to IR and/or near-IR, the same overall methods, sequences and processes are applied to the pass-through optical channel.
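The difference between the two preferred schemes can be sketched as a trivial routing decision. The Python below is illustrative only; the `Scheme` names, the 850 nm working band and the `process_channel` helper are hypothetical and are not part of this disclosure:

```python
from enum import Enum, auto

class Scheme(Enum):
    FALSE_COLOR_NIR = auto()  # down-convert visible light to near-IR for processing
    DIRECT_VISIBLE = auto()   # modulate the visible pass-through light in place

def process_channel(sample_nm, scheme):
    """Return (processing_wavelength_nm, output_wavelength_nm) for one channel."""
    if scheme is Scheme.FALSE_COLOR_NIR:
        return 850, sample_nm          # processed in a near-IR band, output visible
    return sample_nm, sample_nm        # processed and passed at the native band

# Green pass-through light (~532 nm) under the two schemes:
nir_path = process_channel(532, Scheme.FALSE_COLOR_NIR)    # (850, 532)
direct_path = process_channel(532, Scheme.DIRECT_VISIBLE)  # (532, 532)
```

In both cases the channel's output remains visible-band light delivered to the observer-facing surface; only the wavelength at which modulation is performed differs.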
Specific embodiment
1. IR/near-IR and UV filtering stage and structure: the wearable HMD "glasses" or "visor" has a first optical element, preferably a binocular element (joined left and right separate elements, or one similar visor element), which intercepts, from the perspective of the observer/wearer, the light arriving from the relatively forward external world: the wavefront of the real world.
This first element is a composite or structured material (for example, a substrate/structured optical element on which material/film layers are deposited, or a material that is itself periodically or aperiodically, but complexly, 2D- or 3D-structured, or a compounded and directly structured mixture) realizing IR and/or near-IR filtering.
The same applies to UV filtering. More specifically, these may be gratings/structures (photonic-crystal structures) and/or bulk films whose chemical composition realizes reflection and/or absorption of the undesired frequencies. Such choices of material structure are known in the related fields, with many commercial options available.
In some embodiments, especially for night-vision applications, the IR filtering is eliminated, and, following the pattern and structure of this disclosure, some elements of the sequence of functional stages are eliminated or modified in order. The details of this class of embodiments are described below.
2. Polarization filtering — or polarization filtering/sorting into channels, polarization rotation, and channel recombination — to retain maximum input or pass-through illumination (rather than simply knocking down the pass-through intensity at the entrance): the next filter, which most preferably follows the first filter in the optical array sequence, is a polarization filter or polarization-sorting stage. This may be a bulk polarizer or polarizing film, a deposited material, and/or a polarization-grating structure, or any other polarization-filtering structure and/or material that offers the best combination of practical features and benefits for a given embodiment — i.e., efficiency, cost of manufacture, weight, durability, and other parameters that may need to be traded off.
3. Polarization filtering option, result: after this train of optical elements arranged across the full extent of the optical/structural element, the incident wavefront is bracketed by frequency, and bracketed and/or sorted into compartments by polarization mode. For the visible frequencies, the net brightness of each mode channel is reduced by the polarization filter; for simplicity, given the present efficiency of reflective periodic-grating structures and materials — in practice approaching 100% filtering efficiency — each channel loses about 50% of the light.
4. Polarization filtering, sorting, and recombination into a single channel, result: when the two separated/sorted channels are brought back together, for example, their combined intensity is close to — though not exactly — the intensity of the original incident light before filtering/separation/sorting.
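The intensity bookkeeping of items 3 and 4 above can be sketched as follows — a minimal model in which an unpolarized beam is sorted into two orthogonal linear modes (about 50% each) and then merged back after rotating one mode. The 98% filter efficiency and 99% rotator efficiency are assumed placeholder figures, not values from the disclosure.

```python
def sort_polarization(incident_intensity, filter_efficiency=0.98):
    """Split unpolarized light into two orthogonal linear modes.

    An ideal sorter sends ~50% of an unpolarized beam into each mode;
    a real grating/film loses a few percent on top of that.
    """
    per_mode = 0.5 * incident_intensity * filter_efficiency
    return per_mode, per_mode  # (mode_s, mode_p)


def recombine(mode_s, mode_p, rotator_efficiency=0.99):
    """Rotate one mode into the other's plane and merge the channels.

    Compared with simply discarding one mode (a ~50% loss), merging
    recovers nearly the full incident intensity.
    """
    return mode_s * rotator_efficiency + mode_p


s, p = sort_polarization(100.0)
merged = recombine(s, p)       # close to, but not exactly, 100
single_channel = s             # what one sorted channel alone passes
```

Under these assumed efficiencies the merged channel retains roughly 97–98% of the incident intensity, versus roughly 49% for a single sorted channel — the "close but not exactly original" result described above.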
5. Benefits and implications: since these filters can be implemented in the same layer/material structure, or sequentially in separate layer/material structures, the results for the HVS are: 1) harmful UV is blocked; 2) brightness is reduced; and 3) IR and near-IR are removed (except in night-vision applications, where the visible spectrum will be minimal and visible-light filtering is unnecessary). Advantages/features 2 and 3 are of major significance for the next stage of the system and for the system as a whole, and are explained further below.
6. Pixelation of the real pass-through illumination — realizing sub-pixelated channels: subdivision of the incident wavefront into sub-pixels, a passive-optical or active structural or operational stage, is preferably implemented together with, and following, the preceding stages, since this will tend to reduce manufacturing cost. The subdivision can be realized by a variety of methods known in the art, and by other methods not yet devised, including: deposition of blocks of material of differing refractive index; fabrication using photochemical resist–mask–etch processes or nanoparticle materials; electrostatic/van der Waals force-based colloidal-solution methods and other self-assembly methods; focused-ion-beam etching or embossing; and etching, dicing, and embossing methods — in particular the fabrication of capillary micro-well arrays; waveguides realized through a modified overall refractive-index profile; fabrication of Bragg-grating-type structures or other periodic structures that realize photonic crystals; or other structures fabricated from bulk material. Alternatively, or in combination with known or yet-to-be-devised methods, the sub-pixel subdivision/guiding material structure of the array may be formed over the area of a macro optical/structural element by assembling components such as optical fibers and other optical-element precursors — including by methods disclosed elsewhere by the author of this disclosure, and by the preform-assembly or molten-glass/composite assembly methods for fiber device structures proposed by Fink and Bayindir.
Certain specific details and requirements of this system — the structural/operational stages suited to different embodiments and versions — are covered at the appropriate later points in the structural/operational sections below.
7. Integrating pass-through and internally generated sub-pixels in the channel array: beyond providing the means of dividing the incident wavefront arriving from the forward field of view into subdivisions suited to controlled light-path routing — and, further, to passive and/or active filtering and/or modification — it is specified here that two critically important types of pixel/sub-pixel component make up the total field of view that the proposed system supplies to the viewer, with two different "branch" processing sequences and operational structures on the way to final pixel presentation to the observer. This is one of the first stages and requirements of the sequence for the present composite structure and operational process, implemented pixel-by-pixel, and by sub-pixel-by-sub-pixel light-path routing, at the appropriate stages.
8. Two pixel-signal component types — pass-through and generated ("artificial"): in the pixel-signal-processing and pixel-logic-state-encoding stages, as in the referenced disclosure, we now work with two pixel types or, more precisely, two pixel-signal component types.
9. The two preferred overall schemes and structures for processing and handling the pass-through (real-world) illumination: although other arrangements and versions are realized by the general features of this disclosure, the two preferred embodiments differ essentially in how the entering natural light is processed in the channels of the structured optical system that, through subsequent processing stages, delivers light to the output surface on the complex optical surface facing inward toward the observer. In one case, all real-world pass-through illumination is down-converted to IR and/or near-IR "false color" for efficient processing; in the other, the real-world pass-through illumination is processed/controlled directly at visible frequencies, with no frequency/wavelength shift.
a. In one preferred version, the visible optical channel is filtered for UV and IR (and, optionally, polarization-mode sorted and filtered to reduce the overall pass-through illumination intensity) and then frequency-shifted to IR or near-IR — in either case to invisible frequencies — realizing a "false color" range with proportionally identical band positions, widths, and intensities. After the frequency/wavelength-shifting and down-converting photonic pixel-signal-processing stages, the HVS would detect nothing. The subsequent sub-pixel signal processing in these channels is then substantially identical to that proposed for the generated pixel-signal channels, as disclosed in the sections below.
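The proportional "false color" shift described in item a. can be sketched as a simple linear remapping of each visible wavelength onto an invisible band, preserving its relative position, width, and intensity within the band. The 400–700 nm visible range and the 800–1000 nm near-IR target band here are assumed illustrative values; the disclosure requires only that the shifted band keep the same proportions as the visible range.

```python
def to_false_color_nm(visible_nm,
                      visible_band=(400.0, 700.0),
                      nir_band=(800.0, 1000.0)):
    """Map a visible wavelength onto a near-IR 'false color' band,
    preserving its relative position within the band.

    Band edges are illustrative placeholders, not specified values.
    """
    v_lo, v_hi = visible_band
    n_lo, n_hi = nir_band
    frac = (visible_nm - v_lo) / (v_hi - v_lo)  # relative band position
    return n_lo + frac * (n_hi - n_lo)


# Mid-band green (550 nm) lands mid-band in the near-IR (900 nm),
# and the band edges map to the band edges.
green_ir = to_false_color_nm(550.0)
```

Because the map is linear, relative spacing between any two visible wavelengths is preserved in the shifted domain, which is what allows the later up-conversion stage to reconstruct proportionally correct color.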
b. In another preferred embodiment, the pass-through channel is not frequency/wavelength-shifted — not down-converted to invisible IR and/or near-IR. In this configuration, the preferred default configuration and pixel logic state of the pass-through channel is "on": for example, where a conventional linear Faraday-rotation switching scheme is used for pixel-state encoding/modulation, including input and output polarization filters, the analyzer (output polarizer) for the sub-channel of any given sorted polarization mode will be essentially identical in orientation to the input polarizer, so that when an operational linear-Faraday-effect pixel-logic-state encoder is addressed and activated, its operation reduces the intensity passed by the channel. Some features and requirements of this embodiment are detailed after the operational functions and structures of the generated channels are presented in the sections that follow.
If polarization filtering is combined with this preferred embodiment and its variants — implemented not as mode sorting into separate mode channels but recombined into a merged channel, the sorting being followed by polarization-rotation devices so as to preserve more of the original per-pixel pass-through illumination — this is possible, for example, via passive elements (e.g., half-wave plates) and/or active magneto-optic or other polarization-mode/angle modulators. Simple filtering would typically cut the overall pass-through brightness by about 50%; considering the relative visible-range performance of magneto-optic materials as of this writing, recombination will in some cases be the more preferred sorting-and-recombination method.
With the background pass-through illumination brightness thus proportionally reduced from its maximum, the subsystem — and the associated methods and apparatus — that supplies the "generated" (artificial, non-pass-through) sub-pixel channels can correspondingly be made more easily, matching, integrating, and coordinating the generated picture elements within the typically comfortable and lifelike overall illumination of an "augmented reality" image and view.
Alternatively, the pass-through channel may be configured with an "off" default, such that — in a typical linear Faraday-rotator scheme — the input polarizer and the output polarizer (analyzer) are opposed, or "crossed." As frequency-dependent MO materials (or other photonic modulation modes, to the extent their frequency-dependent material performance allows) continue to improve, this default configuration may become advantageous: the pass-through illumination intensity is then raised from a default ground state of "off, at or near zero effective intensity," and managed upward by the subsequent sub-pixel signal-processing steps and methods.
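The normally-on and normally-off configurations just described can be modeled with Malus's law for an idealized polarizer–rotator–analyzer stack: with the input polarizer at 0°, the transmitted fraction is cos² of the angle between the Faraday-rotated plane and the analyzer. This is a lossless sketch (no absorption or reflection terms), not a device model from the disclosure.

```python
import math


def faraday_pixel(i_in, rotation_deg, analyzer_deg):
    """Idealized linear-Faraday pixel: input polarizer at 0 degrees,
    MO rotator turns the plane by rotation_deg, analyzer transmission
    per Malus's law. No absorption/reflection losses modeled.
    """
    return i_in * math.cos(math.radians(rotation_deg - analyzer_deg)) ** 2


# Normally-ON channel: analyzer parallel to the input polarizer.
on_default = faraday_pixel(1.0, 0.0, 0.0)    # undriven -> full pass
on_driven = faraday_pixel(1.0, 90.0, 0.0)    # 90 deg rotation -> dark

# Normally-OFF channel: analyzer crossed with the input polarizer.
off_default = faraday_pixel(1.0, 0.0, 90.0)  # undriven -> dark
off_driven = faraday_pixel(1.0, 90.0, 90.0)  # 90 deg rotation -> full pass
```

Intermediate rotation angles yield intermediate gray levels under the same cos² law, which is how analog intensity management from the "off, near-zero" ground state would proceed.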
c. Although down-conversion to IR is proposed as preferable — given the IR and near-IR performance optimization of the materials commonly used by photonic modulation devices and methods, a system dependency — UV is also an included option, and may in the future be, in some cases, the more convenient invisible spectral domain into which to shift the input visible radiation for intermediate processing before final output.
10. The merged generated/"artificial" sub-pixel array: first, considering the image-generating pixel-signal component — in other words, the pixel-signal-processing structure and operational sequence — this preferably combines magneto-optic sub-pixel signal processing with a photonic encoding system.
a. In the most common configuration of the proposed image collection/processing/display subsystem of the overall system for fully mobile AR under daylight conditions, the next structure, process, and element in the sequence is the optical IR and/or near-IR planar illumination-dispersion structure and pixel-signal-processing stage.
b. For this structure and operational process, an optical surface and structure (a film deposited or mechanically laminated on the structure/substrate, or a combination of patterning and deposition of material directly on the substrate, or other methods known in the art) distributes IR and/or near-IR illumination evenly across the entire optical area of the 100°+ FOV binocular eyepieces or continuous-visor form factor. The IR and/or near-IR illumination may be distributed uniformly by: 1) an arrangement of light-leaking fibers in the X–Y plane of the structure, running in X, in Y, or in a grid — such leaky fibers have been developed and are commercially available (e.g., from Physical Optics Corporation), the core by particular design leaking the transmitted illumination in a substantially uniform manner over distance, in combination with a diffusion layer (for instance an aperiodic one); 3D-projection structured films (embossed aperiodic micro-surfaces) are commercially available from Luminit, Inc., as are other diffusion materials and structures known in the art; or 2) side illumination from an IR and/or near-IR LED edge array, or an IR and/or near-IR edge laser array such as a VCSEL array, projected so as to be intercepted as volumetric illumination — such in-plane sequential beam-expander optics include holographic optical element (HOE) structures such as planar periodic grating structures, for example the commercially available structures from Lumus, BAE, and the other commercial suppliers cited here and in the previously cited pending applications, and other backplane diffusion structures, materials, and devices. In general, other display-backplane illumination methods, devices, and structures known in the art, or developed in the future, may be used.
c. The purpose of this stage/structure in the sequence of operations and pixel-signal processing is to emit IR and/or near-IR backplane illumination confined within the relative interior of the composite optical/material structure proposed so far, the IR and/or near-IR filters reflecting the injected IR and/or near-IR illumination back toward the illumination layer/structure.
d. It is important to note the fact — obvious though it may be — that IR and/or near-IR light is invisible to the HVS.
e. The IR and/or near-IR illumination source may be LEDs, lasers (such as VCSEL arrays), a mixture of both, or other devices known in the art or developed in the future.
f. The injected IR and/or near-IR illumination is also of a single polarization mode, preferably plane-polarized light.
g. This can be implemented by a polarization-conditioning unit, separate from the IR and/or near-IR LEDs and/or lasers and/or other sources, using a polarizing beam splitter or a filter/reflector sequence (for instance in optical fiber) as the separator, and passing the plane-polarized component through passive and/or active polarization-rotation devices — such as bulk magneto-optic or magneto-optic sub-rotators, or a series of passive devices such as half-wave plates, or mixtures of these. A polarizing filter, such as a high-efficiency grating or a 2D or 3D periodic photonic-crystal structure set at an angle to the incident light, can reflect the rejected light into a polarization-rotating optical sequence and channel that afterward recombines with the unchanged portion of the original illumination. Alternatively, in planar or fiber form, the polarization modes (plane polarizations) are separated, one branch then passes through the polarization-conditioning unit, and the two branches are rejoined.
h. The source illumination may also be constrained within its own structure so as to generate only plane polarization at a given angle or within a given range.
i. The light may be generated and/or conditioned locally in the HMD, or apart from the HMD (for instance in a wearable vest with electrical storage) and delivered to the HMD via optical fiber. Within the HMD, the illumination and/or conditioning stages and structures/devices may sit adjacent to the composite optical structure described elsewhere, with delivery within the HMD by optical fiber if farther away, and/or by planar waveguide if closer, to the optical delivery point.
j. The structures described so far, and the operations and processing below, are examples of the pixel-signal processing disclosed in the referenced applications, characterized by decomposition of the pixel-signal characteristics: the signal is generated and conveyed through optimized processing stages using the best method for each, usually operating at the wavelengths for which each type of process is optimized — with particular reference to the pixel-state logic-encoding stage and process. Many MO, EO, and other opto-interactive phenomena are optimal in the IR or near-IR band for most material systems. The overall system, methods, structures, operations, and processes, and their details — including essential and optional elements — are disclosed in the cited applications.
k. Pixel-signal processing, pixel-logic-state encoding stage — the modulator array:
l. After the illumination and conditioning stages, the IR and/or near-IR illumination passes through the pixel-signal-stage logic-encoding processing, operations, structures, and devices. Preferably, for this disclosure, the modulating device belongs to the class of magneto-optic modulation methods, of which one preferred method is based on the Faraday effect. Details of these devices and methods are disclosed in the referenced U.S. patent application, "Hybrid MPC pixel-signal processing."
m. In a binary pixel-signal logic-state system, the "on" state is encoded by rotating the polarization angle of the incident linearly polarized light so that, when that light passes the later stage of the pixel-signal-processing system with its relative polarization filter (termed the "analyzer"), the light passes through the analyzer.
n. In such an MO (or, as a subtype, MPC) pixel-signal logic-state encoding system, the light passes through a medium or structure subjected to a magnetic field — a uniform/bulk or structured photonic-crystal or element material, usually solid (though it may also include encapsulated cavities of gas, dilute vapor, or liquid) — with an effective figure of merit measuring the efficiency with which the medium or material/structure rotates the polarization angle.
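The rotation efficiency in item n. can be sketched with the standard bulk Faraday relation θ = V·B·L (Verdet constant × field × path length), and one common figure of merit: rotation achieved per unit of optical absorption. The numerical values below are illustrative assumptions, not measured properties of any material named in the disclosure.

```python
def faraday_rotation_deg(verdet_deg_per_t_cm, field_t, path_cm):
    """Bulk Faraday rotation: theta = V * B * L."""
    return verdet_deg_per_t_cm * field_t * path_cm


def figure_of_merit(rotation_deg_per_cm, absorption_db_per_cm):
    """A common MO figure of merit: rotation per unit absorption
    (deg/dB). Higher means more rotation for less light lost.
    """
    return rotation_deg_per_cm / absorption_db_per_cm


# Illustrative (assumed) numbers for a Bi:YIG-like film at near-IR:
# a 90-degree switch in 9 mm of material at 0.1 T ...
theta = faraday_rotation_deg(1000.0, 0.1, 0.9)
# ... and high rotation relative to absorption.
fom = figure_of_merit(100.0, 0.1)
```

The figure of merit is why the disclosure repeatedly favors operating the MO stage at IR/near-IR: for most candidate material systems, absorption drops sharply there while useful rotation remains, so the ratio improves.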
o. The details of the preferred types and options for this preferred class of pixel-signal-processing logic-state-encoding stage and device are found in the referenced pending applications; further variations may be found in the prior art or developed in the future.
p. Other aspects of the hybrid MPC pixel-signal-processing system of the preferred referenced class that merit emphasis include the following:
q. The hybrid MPC pixel-signal-processing system implements memory, or "latching": no power is drawn until the pixel logic state needs to change. This is achieved by adapting magnetic "remanence" methods known in the art, in which the magnetic material is fabricated in a batch process (for example, the commercially available latching LPE thick MO Bi-YIG films from Integrated Photonics [see our other disclosures]); and/or permanent-domain implementations such as Levy's latching 1D gratings [see our other disclosures]; or composite magnetic materials, combining the MO material with a juxtaposed/intermixed, optimized, relatively "harder" magnetic material, so that the applied field latches the low-coercivity, square-hysteresis-loop material, which in turn holds the magnetization of the MO/MPC material as an intermediary. The intermediary material may surround the MO/MPC material, or may be mixed in or structured as a periodic structure transparent to the transmitted frequency [here, IR or near-IR]. This third, composite method was first proposed by the author of this disclosure in a 2004 U.S. provisional application, later incorporated in U.S. patents/patent applications. Subsequently, Belotelov et al., on the basis of a subsidized company established in 2004, published this composite method as "spin-exchange-coupled" structures, to be implemented in that company's designs for specific 1D multilayer magneto-photonic crystals, in which MO materials of differing relative hardness are used in a less efficient variant of the 2004 composite process.
r. Combinations of these methods are also possible design choices.
s. The benefit of this "memory pixel" in the hybrid MPC scheme is the same as that of the bistable pixel switches of electrophoretic or "E-Ink" monochrome displays: as non-volatile memory (relatively speaking, at least, depending on the hysteresis curve and material selection of the design), the image remains formed in the pixel-signal-processing channel and system — "transmission" and "processing" — for as long as the IR or near-IR illumination source is present.
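The latched "memory pixel" behavior of items q. and s. can be sketched as a simple bistable state machine: only a drive pulse exceeding the coercive threshold flips the remanent state, the state persists with zero hold power, and the pixel displays it only while the IR/near-IR backplane illumination is present. The coercivity value and field units here are arbitrary placeholders.

```python
class LatchingPixel:
    """Sketch of a remanent (latched) MO pixel: the magnetization
    holds the logic state with no drive power, like a bistable
    E-Ink cell. Field values and the coercive threshold are
    arbitrary illustrative units, not material data.
    """

    COERCIVITY = 0.5  # assumed switching threshold

    def __init__(self):
        self.state = 0  # latched logic state: 0 = off, 1 = on

    def pulse(self, field):
        # Only a pulse strong enough to overcome coercivity flips
        # the state; it then persists after the field is removed.
        if field >= self.COERCIVITY:
            self.state = 1
        elif field <= -self.COERCIVITY:
            self.state = 0

    def output(self, illumination_on=True):
        # The latched state is visible only while the IR/near-IR
        # backplane illumination is present; no hold power is drawn.
        return self.state if illumination_on else 0


px = LatchingPixel()
px.pulse(+1.0)   # write "on"
px.pulse(+0.1)   # sub-coercive disturbance: latched state unchanged
```

The design consequence stated in the text follows directly: power is spent only on state *changes*, so a static image costs essentially nothing beyond the illumination source.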
t. The second fundamental aspect and element of the preferred pixel-signal-processing, pixel-logic-encoding stage and method is the efficient generation of the magnetic field that switches the magnetic state of a sub-pixel (the basis, for instance, of a color system such as RGB — a naming convention retained broadly for convenience in discussing the conventional components of a final color pixel, with distinctions drawn where needed). To ensure the absence of magnetic crosstalk, the field-generating structure (e.g., a "coil") is preferably placed in the path of the pixel transmission axis rather than at its side. This reduces the required field strength; and, with no field-generating device at the edges, the magnetic flux lines are managed by implementing magnetically impermeable materials or periodic structures in the adjacent material/matrix — as in the domain-continuation methods of Levy et al. — confining the flux lines to the modulated region. Transparent materials may include materials such as ITO and other new and forthcoming conductive materials transparent at the relevant frequencies; and/or other materials not necessarily transparent in bulk — such as metals — which, in periodic structures of appropriate element size, geometry, and periodicity, can also be deposited or formed in the modulated region/sub-region of the pixel transmission path.
u. This method was first proposed by the author of this disclosure in 2004, in internal design documents of the same U.S. company and in a 2004 provisional application later disclosed in a U.S. patent application. Later, in the 2010s, researchers at NHK used his method — generally applicable to MO and MPC devices — for Kerr-rotation devices, using ITO in the pixel path [reference SE TO UP UP].
v. The third important element of the preferred hybrid MPC pixel-signal-processing solution for the pixel-signal-processing subsystem is the method of addressing the sub-pixel array. As noted, the preferred methods can be found in the pending U.S. patent application on wireless addressing and powering of device arrays. For the present application, given the low power requirements, wireless addressing may suffice to power the elements of the wireless (sub-pixel) array by low-frequency magnetic-resonance wireless power methods — although micro-ring resonators, depending on material selection and design details, may be more efficient than powering via miniature antennas. Wireless powering of the entire HMD or wearable device, however — powering the whole unit while reducing the weight and volume worn on the person — is the preferred method, particularly when combined with a local high-power-density capacitor system or other capacitive technology; the unit can then be powered over a wireless low-frequency link. Basic low-frequency magnetic-resonance solutions are available from WiTricity, Inc. For more complex systems, see the referenced U.S. patent application on wireless power relay devices.
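The low-frequency magnetic-resonance link in item v. depends on tuning the transmit and receive coils to a common resonance, f = 1/(2π·√(L·C)). A minimal sketch, with assumed illustrative component values (not figures from the disclosure or from WiTricity):

```python
import math


def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of a coil/capacitor
    tank, as used in magnetic-resonance wireless power links.
    """
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))


# Assumed values: a 24 uH receiver coil with a 10 nF tank capacitor
# resonates near 325 kHz, i.e. in the low-frequency band such
# magnetic-resonance links typically occupy.
f = resonant_frequency_hz(24e-6, 10e-9)
```

Efficient transfer requires both sides of the link to share this frequency; detuning (from nearby conductors, for example) shifts L and degrades coupling, which is one reason the design details matter for a head-worn device.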
w. Another preferred method of addressing and powering the array/matrix is voltage-based spin-wave addressing — a variant not specified in the cited applications, and therefore novel to this proposal, though applicable to the hybrid MPC pixel-signal processing of the original references and to other form factors and use cases. Backplanes based on the high-speed current/active-matrix solutions developed for other display technologies (such as OLED) are also alternatives.
x. Depending on other particular design choices, other, less preferred pixel-signal-processing and pixel-logic-encoding techniques and methods would also benefit from the wireless addressing and powering methods, and from the voltage-based spin-wave method.
y. Other such pixel-signal-processing, pixel-logic-encoding devices include modulators based on the Mach–Zehnder interferometer, whose efficiency is likewise frequency- and material-system-dependent and greatest in the IR and/or near-IR. Any number of other, less preferred pixel-signal logic-encoding devices may also be used — all, following the teachings of the referenced applications, in designs, configurations, and/or material systems optimized for the frequencies at which such devices are most efficient.
z. Following the pixel-signal-processing methods of the cited [2008] U.S. patent application on telecommunications architectures, the preferred embodiment of the proposed system is also necessarily identified as a dual-sub-pixel-array system, in the specific modified and optimized version disclosed herein for the present application — and for other non-HMD and non-wearable display-system applications with similar operating requirements or desired benefits.
aa. After the pixel-signal-processing, pixel-logic-state-encoding stage of the operational structure and process, an optional signal-gain stage can be beneficial. The situations in which this option is relevant are explicitly noted in the sections that follow.
bb.Wavelength/frequency displacement stage: the current particular version for preferably mixing MPC picture element signal processing system is followed by frequency
The stage is converted in rate, the phosphor color system enhanced using preferred nano-phosphor and/or quantum dot (for example, QD vision)
(although periodic polarized equipment/material system is also designated as with reference to disclosed option).Commercially available basic skill
Art includes known various other suppliers in GE, the suppliers such as Cree and business practice.
cc. It will now be readily apparent to those skilled in the art that what is being done is a separation, or decoupling, of the up-conversion process from the illumination stage where it would usually occur, deferring it until after several other stages — for the sake of the IR and/or near-IR operating-frequency optimizations, among other reasons.
dd. The color system is thus fully realized by nano-phosphor/quantum-dot-enhanced phosphor material/structure formulations optimized and tuned to the sub-pixel colors of a color system such as RGB. Here again, these re-thinkings of display-system concepts and operations are found in the referenced applications, where they are disclosed in more detail.
ee. One advantage of the hybrid MPC pixel-signal-processing method is the high native MPC modulation speed — demonstrated below 10 ns, with sub-ns the relevant benchmark for quite some time. The phosphor excitation–emission response is comparatively fast, if not that fast; but in total and net terms the overall full-color modulation speed is below 15 ns, and in theory will be optimized to lower net durations.
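The timing claim in item ee. can be checked with simple arithmetic: treating the MO/MPC switch and the phosphor response as serial delays, the quoted figures (<10 ns switching, <15 ns total full-color) bound the per-pixel update rate. The 5 ns phosphor term below is an assumed split consistent with those totals, not a stated value.

```python
def max_update_rate_hz(mo_switch_ns, phosphor_response_ns):
    """Upper bound on the per-pixel update rate from two serial
    stages: MO/MPC switching plus phosphor excitation-emission.
    Inputs are worst-case stage delays in nanoseconds.
    """
    total_ns = mo_switch_ns + phosphor_response_ns
    return 1e9 / total_ns, total_ns


# 10 ns switch + assumed 5 ns phosphor = the quoted <15 ns total.
rate_hz, total_ns = max_update_rate_hz(10.0, 5.0)
```

A 15 ns cycle corresponds to roughly a 66 MHz per-pixel toggle ceiling — orders of magnitude beyond video frame rates, which is what leaves headroom for the sequential/field-based color and addressing schemes of the referenced applications.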
ff. A variant of the proposed structure adds a band-pass filter on each IR and/or near-IR sub-pixel channel at the end of the processing sequence, gating "on" or "off" the light up-converted to R, G, or B. Although this variant adds complexity through the filter elements, it may be preferred if: 1) the hybrid MPC stage is itself a series of customized material compositions that respond more efficiently to different sub-bands of the IR and/or near-IR domain — though this is unlikely to be the case, given the nearly 100% transmission efficiency and extremely low-power polarization rotation in this wavelength domain of commercially available, even bulk LPE, MO films; or, more likely, 2) the different nano-phosphor and/or quantum-dot-enhanced phosphor formulations are efficient enough that a more precise band-bracketing of the R, G, and B sub-pixel constituents of each final product pixel in the IR and/or near-IR is worthwhile. The design trade-off will come down to a cost/performance analysis of the added complexity of the band-bracketing extra layers/structures/deposition channels against the efficiency gains from frequency/wavelength-shifting materials "tuned to" narrower portions of the invisible input illumination spectrum.
gg. After the color-processing stage, the sub-pixel set realized from the initial IR and/or near-IR illumination source continues through the merged optical pixel channel. Where no other constituent is added to the final pixel, the output pixel then undergoes — depending on the design choices for modulator and color-stage component sizes — optional pixel expansion, preferably by a disperser, including those referenced, as needed. As disclosed in the cited applications, this may be necessary: the pixel spot size may be much smaller, requiring optical focusing or other methods, as known in the relevant fields and as disclosed in certain of the referenced applications, especially [2008].
To realize a virtual focal plane at a suitable apparent distance from the observer's eyes, collimation optics are provided, including: lenslet arrays; fiber arrays embedded in a woven composite, with the fibers set parallel to the optical transmission axis; "flat" or planar negative-refractive-index metamaterial structures; and other optical means known in the art. Preferably, all such elements are fabricated or realized within the macro optical elements/structures of the composite layers, rather than requiring additional bulk eyepiece lens elements/structures. The further questions of multilayer structures for the fiber-type method, of laminated-composite or deposition fabrication, and of combinations/hybrids of more than one, are treated in the structural/mechanical sections below.
ii. As noted above, the functional/optical/structural elements that realize the disclosed pixel-signal-processing, pixel-logic structural and operational stages — including the preferred hybrid MO/MPC methods and operational structures — are not bulk devices operating on the entire incident field, as the filtering front end is, but pixelated arrays (as those skilled in the art would expect).
jj. Each final pixel may comprise at least two pixel components (beyond the previously described color-system RGB sub-pixels). One, arranged as a component in an array, genuinely generates the video image de novo — which may be as simple as text and numeric graphics but which, for the overall purposes of this system, can generate high-definition imagery from CGI, from digital images of relatively remote or archived scenes, or from composites and blends thereof — as previously described.
11. Further frequency-specific detail on the visible case of the pass-through real-world illumination and pixelated array (i.e., without down-conversion to IR/near-IR): the structured and operable optical devices and photonic structures, stages, and processing through which the true, non-generated light from the field of view is passed and conveyed;
a. Co-located on the addressing array with these IR- and/or near-IR-driven sub-pixel clusters is another set of pixel or sub-pixel components: final-pixel channel components actually derived from the real-time field of view ahead of the viewer and HMD wearer. These are the "pass-through," fully addressable components of the final pixel.
b. These channels originate at the front composite optical element/structure which, as described above, is subdivided into pixels.
c. These optical channels convey portions of the wavefront with low wavefront loss, using the effective subdivision methods available. Surface lenslet arrays or mirrored funnel arrays can be combined with the proposed subdivision methods to achieve very nearly edge-to-edge light-capture efficiency, so that the captured wavefront portions are efficiently coupled into the cores of the facing subdivision/pixelation guiding optics/array structure. Thus — whether by conventional stepped-refractive-index coupling, MTIR micro-well arrays, true photonic-crystal structures, or a hybrid of more than one method — the fraction of the received wavefront lost over the pixelated array area of the coupling device is minimized.
d. For certain versions and operating modes of this system, efficient wavefront capture, routing, and guiding/pixelating subdivision require reflective optical elements that focus and/or reflect both visible and IR and/or near-IR frequencies — although it is recommended that IR and/or near-IR filtering be realized as the initial, first optical filtering structure in the optical array and sequence.
a. In most configurations, the IR and near-IR illumination stage passes through a distributed guiding structure that "injects" the captured illumination; the structure is transparent to IR and/or near-IR but provides light guidance/path confinement at visible frequencies, so that the IR and/or near-IR can be distributed uniformly without interfering with the "pass-through" pixel components of the channels.
b. Once a guided portion of the incident wavefront reaches the pixel-signal-processing, pixel-state-encoding stage, then — if the efficiency of the material or structured material, whether a monolithic bulk MO or multilayer MPC film or a periodically structured grating (1D, 2D, or 3D periodic), has been optimized for IR and/or near-IR — the pixel-logic state structures of the parallel pixel-signal processing will be "bulk" film structures. These are realized in the same manner, but at much lower efficiency.
c. However, broadband MO materials, whether bulk formulations or structured photonic-crystal materials, are manufactured in a variety of ways, although their efficiency does not at present equal that of the IR- and near-IR-optimized MO/MPC materials/structures; improvement will continue. In prior work led by the author of this disclosure, new MO and MPC materials were modeled and fabricated in 2005 which for the first time not only exhibited significantly improved transmission/Faraday-rotation pairing in the green band, but also demonstrated — for display applications, genuinely significant — acceptable and competitive performance in the blue band for the first time. However, such materials are often more expensive to manufacture, and depositing distinct materials as "films," pixel component by pixel component for the "generated" components, adds complexity and expense to the manufacturing process. But such a configuration would bring the pixel-logic state encoding of the final merged pixel's "pass-through" component to the same efficiency under all conditions.
d. Where no "custom" MO-class material is deposited or formed (and this logic also applies to less-preferred modulation systems), maximum efficiency for MO/MPC remains frequency-dependent; with a single formulation, all else being equal, the intensity passing through the final pixel component will be lower, to the degree that the modulating device's efficiency is lower at that frequency.
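The frequency dependence just described can be illustrated numerically. In a polarizer/Faraday-rotator/analyzer light valve of the kind this modulation scheme implies, relative output intensity follows a Malus-type law of the achieved rotation angle, so a single-formulation material whose rotation at an off-band wavelength falls short of 90° cannot fully open the valve. A minimal sketch — the rotation values and loss figure are illustrative assumptions, not measurements from this disclosure:

```python
import math

def valve_transmission(rotation_deg, insertion_loss=0.0):
    """Relative intensity through a crossed polarizer/analyzer pair with a
    Faraday rotator between them (Malus-type law). rotation_deg is the
    rotation achieved by the MO/MPC element at the given wavelength;
    90 degrees fully opens the crossed-polarizer valve."""
    t = math.sin(math.radians(rotation_deg)) ** 2
    return t * (1.0 - insertion_loss)

# Illustrative (hypothetical) rotations for a single-formulation film:
full_open = valve_transmission(90.0)  # optimized band: valve fully opens
partial = valve_transmission(45.0)    # off-band: only half the intensity passes
```

The sketch shows why a per-band-optimized formulation (full rotation in every sub-pixel channel) outperforms one formulation shared across bands.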
e. Typically, for a direct-view system, it is assumed that no phosphor-type or other wavelength/frequency-shifting components are used. However, to the lesser degree of efficiency possible with natural MO/MPC materials, the material formulation can in this case be optimized for the different bands, partially compensating for material-performance shortfalls in the pixel-logic, state-encoding stage.
f. In addition, an optional "gain" stage is proposed for low-light or night-vision operation, as proposed as an option for certain applications in the referenced applications (the "pixel signal processing" U.S. patent application and the "hybrid MPC pixel signal" U.S. patent application).
g. In addition, once a guided portion of the incident wavefront reaches the pixel-signal-processing, pixel-state-encoding stage, there is, as illustrated, an optional configuration of the entire pixel-signal-processing and optical-channel-management system that is valuable for low-light and night-vision applications.
h. In this variant, the IR filter is removable; the goal is to pass IR and/or near-IR light from the incident real wavefront to the active modulation array sequence, so that "real" IR is passed directly through the pixel-signal-processing modulators and, to the degree that IR is present in the field of view, produces for the viewer a color-like (monochrome or pseudo-color IR-gradient) image, without an intermediary sensor array.
i. Also, as indicated, a gain stage may be implemented to boost the intensity of the IR (plus near-IR, if beneficial) that will reach the wavelength/frequency-shifting stage.
j. Furthermore, a base IR and/or near-IR background illumination can be switched on in the normal full-color operating mode, its intensity modulated to set an appropriate reference level, so that input IR radiation which by itself does not reach the threshold of the wavelength/frequency-shifting stage and medium is brought up to activation.
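The reference-level scheme just described — a modulated background bias lifting weak incident IR over the shifter's activation threshold — reduces to simple arithmetic. A sketch under stated assumptions: the hard threshold, unit-less intensities, and the linear shifter response are all hypothetical simplifications, not parameters from this disclosure:

```python
def baseline_bias(threshold, min_incident):
    """Background IR level such that even the weakest expected incident IR,
    superimposed on the bias, reaches the wavelength-shifter threshold."""
    return max(0.0, threshold - min_incident)

def shifter_output(incident, bias, threshold, gain=1.0):
    """Hypothetical hard-threshold shifter: emits in proportion to the total
    drive (incident + bias) once that total reaches threshold, else nothing."""
    total = incident + bias
    return gain * total if total >= threshold else 0.0

bias = baseline_bias(threshold=10.0, min_incident=2.0)  # bias of 8.0 units
weak = shifter_output(incident=2.0, bias=bias, threshold=10.0)   # now visible
unbiased = shifter_output(incident=2.0, bias=0.0, threshold=10.0)  # lost
```

Note that the superimposed incident intensity still varies the output above the bias, preserving the scene's IR gradient.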
k. If the passive optical components are deployed on a hinged or cantilevered articulated mount that can "flip up," removal/deactivation of the IR filter can be achieved mechanically; or the filter can be deactivated as an active component, for example in an electrophoretic-type active bulk encapsulation layer in which (as presented elsewhere herein) electrostatics mechanically rotate many relatively flat filtering micro-elements, so that light passes at a minimal angle of incidence and the rotated elements no longer filter the IR. Other active or passive activation/removal methods may also be employed.
l. The IR filter, and the polarizing filter, can be removed for low-light or night-vision operation, depending on whether the generation system is being used "actively," rather than merely providing a threshold level with data superimposed on portions of the incident real IR wavefront in the pixelated array. If it is used actively, the preferred digital pixel-signal-processing system requires the initial polarizing filter in order to realize the optical switches/modulators that encode the pixel-logic states in the signal, so as to maximize the efficiency of the generation source.
m. A drawback of the direct-view system is that it reduces the intensity of the input IR and/or near-IR.
n. Another embodiment of this system, designed to address this problem, places a gain stage before the pixel-signal-processing, pixel-logic-state-encoding stage, to boost the input signal.
o. The efficiency of a gain medium operating on incoherent, non-collimated "natural" light must be considered in the design parameters of any version of this system that uses an energized gain medium with "natural" incident-light input.
p. In a second alternative, a three-component system is realized, comprising the component sub-channels for the generating means, an incident visible-light component, and an incident IR component that is not polarization-filtered. To realize this variant, pixelated polarizing-filter elements must be implemented, leaving the third sub-channel/component free of polarizing-filter elements.
q. For this optional system type, paired with the more basic collection components required by this low-light night-vision operating mode, additional optical elements are needed ahead of the incident wavefront in the initial input and channelizing/pixelating stage.
r. The input IR (and near-IR, if needed) can be divided between the sub-channels directed to the usual "generation" source components of the final visible pixels and the incident channels guiding the entire visible-light portion; however, there is no particular efficiency gain in the wavefront sending any IR and/or near-IR to the source components of the optical photonic channels and final pixels.
s. Rather, the stage following the lenslets or alternative optical-capture devices — whether separate or integrated with the lenslets — is a frequency splitter, serving to maximize capture of the actual incoming wavefront. One method is to realize opposed filters: a band-pass filter for visible light, and an adjacent filter that passes only IR and/or near-IR. The various geometrical arrangements of these opposed filters offer different advantages, including placing both in a plane, or both at opposed 45-degree angles offset from the focal center of the incident-wavefront capture structure, enabling the focused composite visible/IR-near-IR beam (from a lenslet or other optical element, including a negative-refractive-index metamaterial "flat" lens) to have one band range separated out first while the other is reflected onto the opposed filter surface — and vice versa for the portion of the focused beam that first strikes the filter structure farther from the focal center. Grating structures are the preferred method of realizing this dual filter/beam-splitter means, but other methods are known in the art, based on bulk material formulations that can be deposited by various methods known in the art or yet to be developed, realizing the two filtering surfaces in successive stages. (Note that ultraviolet is filtered before this stage, but preferably after the infrared. In some versions, infrared and polarization are first and second and the ultraviolet filter is third; in others, infrared is followed by ultraviolet, followed by the polarizer. Different use cases value these orderings differently, with differing implications for manufacturing cost and the particular process sequence.)
12. Combining the pass-through and generated/artificial pixel/sub-pixel arrays:
As already noted, the two component optical channels are co-located and preferably output together into a pixel conditioning unit (by diffusion and/or other mixing methods, obtainable by other methods known in the art or to be designed in the future), so that the generated source and the transmitted source combine — like the RGB sub-pixels of a traditional additive-color display system for human color vision — to form the final composite pixel. Then, as already noted, and as detailed in the cited applications, the pixel beams are further shaped, and in particular collimated and optically directed to form an image at a virtual focal plane, which is most effective and easiest given the ergonomic design goal of placing the system close to the face of the HVS; this is also part of the purpose of this disclosure.
a. Operation of the basic paired-component system, with its "generated" component (itself composed of RGB sub-pixels) and its variable "pass-through" component — first in its main operating mode, and second as configured in the optional low-light night-vision mode:
On a bright, sun-drenched day outdoors, the wearer of the proposed HMD form — whether integrated binoculars (two individual lens-form-factor device structures) or a connected visor — is presented with a unitary image. The pixel array itself is formed by the integration of two input components: generated high-performance pixels, and the variable-intensity wavefront portions passing through the "window on the world" toward the observer:
b. The composite color component of the final merged pixel is formed by the "generated" pixel components: invisible IR and/or near-IR light, "injected" from "inside" or behind, is switched on (or off) for each sub-pixel at sub-10 ns speeds (currently, below 1 ns). The IR and/or near-IR sub-pixels then activate composite phosphor materials/structures, using the best current materials and systems available for generating the widest possible color gamut.
c. Once a sub-pixel's state has been set using such a very short pulse, a "memory" switch holds its on-state until its state is changed, without applying constant power to the switch.
d. The resulting component is therefore a high-frame-rate, high-dynamic-range, low-power, wide-gamut pixel-switching technology.
e. The second component of the composite pixel is the pass-through component, beginning as the effectively high percentage of the subdivided portions of the entire wavefront illuminating the front optical surface of the HMD. In the normal mode, before these wavefront portions reach the wearer, they undergo UV and IR filtering, and polarization sorting or filtering (the choice depending on the selected layout strategy: reducing the actual illumination base, or maximizing it). With the reduced base — i.e., polarization filtering — the overall brightness of the visible field of view is substantially reduced (by roughly 1/3 to 1/2, depending on the polarization composition of the incident light and the efficiency of the polarizer).
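The "about 1/3 to 1/2" figure above is consistent with simple accounting: an ideal polarizer passes half of unpolarized light, and a real polarizer's transmittance along its pass axis is below unity. A sketch — the 0.7 transmittance is an assumed illustrative value, not a figure from this disclosure:

```python
def passthrough_brightness(unpolarized_fraction, polarizer_transmittance):
    """Fraction of incident intensity surviving the polarization filter.
    The unpolarized part loses half its power at an ideal polarizer; any
    pre-polarized part is treated here (a simplification) as pass-aligned."""
    unpol = unpolarized_fraction * 0.5
    pol = (1.0 - unpolarized_fraction) * 1.0
    return (unpol + pol) * polarizer_transmittance

ideal = passthrough_brightness(1.0, 1.0)  # 0.5 -> the "about 1/2" bound
lossy = passthrough_brightness(1.0, 0.7)  # 0.35 -> near the "about 1/3" bound
```

Incident light that is already partially polarized (e.g., sky light, specular glare) shifts the surviving fraction up or down depending on its orientation relative to the pass axis, which is why the text gives a range rather than a single number.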
f. Especially under bright daylight, but in general under all lighting conditions except extremely low light, this intensity reduction makes it easier for the generation system to "compete" with — to match or exceed — the illumination level of the incident wavefront. A dual role is thus performed, or a dual benefit produced, by passive optical components of the system: the polarizer is a required component of the preferred realization of pixel-logic state encoding (based on polarization modulation), and it also reduces power requirements and simplifies the process of calibrating, coordinating, and compositing the pass-through and generated system values.
g. This system design feature exploits the fact that, for most people, bright outdoor lighting conditions are managed by wearing polarized sunglasses. Indoors, it is known that overly bright emissive or transmissive displays can cause eye strain, so reducing the overall room-equivalent illumination level yields a simpler problem: raising the illumination level with the generation system, relatively little, without again creating a "competing lighting environment" in the field of view. The combination of the naturally reduced pass-through illumination (optionally boosted by a gain stage, though at lower efficiency than using LEDs or, certainly, lasers) and the generation system allows graphical or synthetic elements to be added to portions of the scene against a more coordinated, lower-intensity baseline. (The generation system — the integrated array portion — need not generate the entire FOV in AR mode, although it can under full VR mode.)
h. Assuming the sensing and computing systems handle the computation of synthetic elements and the coordination of real and synthetic elements in the user's perspective view — the next aspect to be addressed — the mixing of generated and transmitted sources can easily and quickly produce blended, mobile AR/mixed-reality views at the display level, without apparent lag or noticeable delay.
i. With the pass-through pixel-component sub-channels designed with a default "off" strategy (that is, with the polarizer and the analyzer of the preferred polarization-modulation form "crossed" rather than aligned), so that no wavefront portion is transmitted, the mobile HMD — calibrated to the real landscape by motion tracking — can operate in a mobile VR mode. As can be seen, combined with the proposed sensors and associated processing systems, the HMD can then serve as Barrilleaux's "indirect view display," with the pass-through switched off.
j. With the generation system omitted — especially if visible-frequency-optimized MO/MPC material structures would add expense and complexity — a variable direct-view system can also be implemented, one that adds no generated/enhanced pixel illumination or pixel information to the channels. In the inverse of the "indirect view display" configuration, as will be seen in the specification of the proposed sensors and associated processing systems, another variant of this system subdivides the "pass-through" channel mode (following the IR/near-IR and visible spectral filter-splitter scheme) into RGB sub-pixel channels, each channel with its own pixel-signal-logic-state-encoding modulator, extending the variable-transmission system into a direct-view display. Its drawbacks are dynamic range and, lacking the supplement of a generation means, a comparatively low light limit; moreover, this variant (which simply eliminates the generation structures or systems) forgoes the dual-array benefit addressable by parallel processing systems, which eases the bottleneck of performing scene integration-synthesis and perspective computation. In addition, such a system, based on differently tuned MO/MPC materials/structures optimal for the visible spectrum, would be more expensive and less efficient in operation than a generation system based on IR/near-IR.
k. The optimized system combines an efficient generated component with a variable-intensity, but generally lower-brightness, pass-through component.
l. Preferred wireless addressing and powering further reduce the power, heat, weight, and volume of the functional-device portion of the smart-structure system.
m. In the extremely-low-light or night-vision mode, the system: removes or switches off the IR filter, so that the IR (and near-IR, if needed) passes through the pixel-state system without loss; optionally boosts the IR signal strength with a gain stage, and/or raises the threshold/base intensity with an internally injected IR/near-IR illumination component, onto which the incident pixelated IR intensity is added/superimposed; passes the IR/near-IR through the wavelength/frequency-shifting component (preferably a phosphor system); and, whether the system is set to monochrome or false color, realizes a direct-view low-light or night-vision system. With the polarizing filter in place, the generation system can operate and add graphics and full images, compensating the reduced intensity of the input IR with signals from the auxiliary sensor system (see below), or simply adding a reference level, as proposed in other configurations, sufficient to ensure that the energy input to the wavelength/frequency shifter produces adequate output.
II. Sensing systems for mobile AR and VR:
In the general case of this proposal, none of the foregoing display-image structuring can be done without such a sensing system, which optimizes and coordinates synthesis, the generated imagery, and the general interior (and, in some cases, exterior) lighting conditions — which may, per the various cases of the cited disclosures, be needed or wanted for efficiency reasons; nor, without it, are the user's position, direction of gaze, and general motion tracking accounted for.
1. In the preferred versions of this system, at least some device components serve dual functions as structural elements; but where that is simply not possible to any considerable degree, the sensing elements are combined with other elements serving other functional purposes, integrating the device into a unified overall system.
2. All motion-tracking sensors known in the art are realized integrally, in optimized form, in the system of this disclosure — including accelerometers, digital gyroscope sensors, optical tracking, and other systems — preferably realized not as a single small macroscopic camera system but as multiple distributed sensor arrays, to gain the benefits of distributed, local capture and processing, plus the additional specific advantages of real-time image-based capture/photogrammetry: real-time extraction of "global" lighting conditions and geometric data, enabling local updating of stored position/geo/terrain data, and accelerating the calibration of synthetic image elements and their efficient perspective rendering, integration, and compositing into the blended/mixed-view scene.
3. As disclosed in the cited applications, and briefly extended here, among "image-based" and photogrammetric methods, of particular proven real-time information-gathering value are light-field methods, such as the commercially available Lytro system. From a multiply-sampled space (and, optimally, a distributed sensor array), the space can be image-sampled in real time; then, after sufficient initial data has been input/captured, a view-morphable 3D space is generated. A virtual camera can then be positioned in real time, at a given resolution, at changing locations within the 3D space extracted from the photogrammetric data.
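The virtual-camera step just described — repositioning a viewpoint inside a photogrammetrically reconstructed space — reduces, at its core, to pinhole projection of the recovered 3D points through an arbitrary camera pose. A minimal sketch, not the Lytro pipeline itself; the point coordinates and yaw-only rotation are simplifying assumptions for illustration:

```python
import math

def project(point, cam_pos, yaw_deg, focal=1.0):
    """Project a world-space 3D point through a virtual pinhole camera at
    cam_pos, rotated yaw_deg about the vertical (y) axis. Returns (u, v)
    image-plane coordinates, or None if the point is behind the camera."""
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    a = math.radians(yaw_deg)
    # Rotate into the camera frame (yaw only, for brevity).
    xc = x * math.cos(a) - z * math.sin(a)
    zc = x * math.sin(a) + z * math.cos(a)
    if zc <= 0:
        return None  # behind the virtual camera: not rendered
    return (focal * xc / zc, focal * y / zc)

# A recovered point 5 m straight ahead of a camera at the origin
# projects to the image center:
uv = project((0.0, 0.0, 5.0), (0.0, 0.0, 0.0), yaw_deg=0.0)
```

Re-running the projection for every recovered point at a new `cam_pos`/`yaw_deg` is what "positioning the virtual camera in real time" amounts to geometrically; the light-field data additionally supplies per-ray radiance for shading.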
4. Other image-based methods can be used together with the Lytro light-field method, in combination with additional local geometry/terrain data, to realize calibrated perspective-image synthesis — including occlusion and opacity (using the integrated twin imaging) with the pass-through component of the preferred proposed display subsystem. Such methods provide real-time sampling of the entire FOV to obtain matched lighting parameters for the shading/illumination of CGI or even simple graphic/text elements, with real-time updating while navigating real 3D terrain space — rather than relying simply on GPS, conventional motion sensing, and disconnected per-pixel computation against stored files alone. In general, corrections can be applied by parameter sampling of illumination and relative position/geometry, significantly reducing the computational burden.
5. "Absolute" positioning of the user by GPS and other mobile-network signal-triangulation methods is combined with the motion-sensor tracking of the HMD and any haptic interfaces, and with the image-based photogrammetric system updating from real-time image capture; the system then maps the user's body from the relative position and terrain parameters obtained rapidly by the image-based methods, relying on multiple miniature sensors and cameras.
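The fusion described in item 5 — slow absolute fixes (GPS/network triangulation) correcting fast relative motion tracking — is commonly sketched as a complementary filter. A one-dimensional toy version; the blend weight and the sample trajectory are illustrative assumptions, not values from this disclosure:

```python
def fuse(prev_estimate, relative_step, gps_fix=None, alpha=0.9):
    """One update of a 1-D complementary filter: dead-reckon with the fast
    relative-motion step, then blend toward the slow absolute fix when one
    is available. alpha weights the dead-reckoned estimate."""
    predicted = prev_estimate + relative_step
    if gps_fix is None:
        return predicted
    return alpha * predicted + (1.0 - alpha) * gps_fix

x = 0.0
for step, fix in [(1.0, None), (1.0, None), (1.0, 2.7)]:
    x = fuse(x, step, fix)
# Dead reckoning alone would read 3.0; the absolute fix (2.7)
# pulls the estimate back toward it.
```

In a full system the same idea runs per axis (and per rotation angle), with the image-based photogrammetric relative poses playing the role of `relative_step` between absolute fixes.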
6. Related to this, the Bayindir/Fink "optical fabric" camera developed at MIT is an example of a verified, specific physical method of realizing a distributed array. Whether following the present inventor's proposed woven fiber-device and smart-composite methods, or the simpler MIT fiber-device fabrication methods and optical-fabric embodiments, or other fiber-device smart/active/photonic-textile methods, a woven distributed compound camera array arranged in the structure of the HMD's mechanical frame — and, as described below, performing a dual service by contributing to the structural system rather than being a non-contributing load on it — is the preferred version for realizing an advantageous multi-device array system, providing parallel, distributed data capture.
7. The multi-point micro-sensor array may comprise multiple micro-camera optical-sensor-array devices, another preferred embodiment of a multi-viewpoint system.
8. A more basic integrated commercial Lytro system, combined in a small array with some other cameras/sensors, is a less preferred but still superior combination enabling a variety of image-based methods.
9. Auxiliary IR sensors — again preferably arranged, as already noted, in an array of multiple low-resolution devices — can provide an overriding low-light/night-vision feed to the display system, or provide correction and supplemental data, working in coordination with the generation system and with the real IR pass-through.
10. Lytro-type light-field systems of the same kind, usable for the visible spectrum, can likewise employ sensors in other frequency bands, depending on the application — not only for low-light/night vision but for other applications and use cases, such as field analysis in the ultraviolet or microwave bands. Given the resolution limits at longer wavelengths, visible-space reconstructions can be generated from invisible or poorly visible bands, supplemented by GPS/LIDAR reference data, with other dimensional data collected and correlated while the sensors operate, to scan complex environments. With continuing miniaturization, compact mass spectrometry is now realized at ever-smaller form factors and may also be contemplated for integration into the HMD.
11. Finally, among the image-based methods for rapid data sampling of lighting parameters — and what they teach about the local environment, geometry, and atmospheric conditions — one or more miniature "light probes" can provide a greatly accelerated method. Such a probe is a reflective sphere whose surface can be imaged to extract a compact global reflectance map — for example, located at key vertices of the HMD (the left and right corners, or singly at the center), paired with multiple imagers to capture the entire reflective surface; or, alternatively or preferably in combination with the sphere, a part-spherical concave-reflecting "well," perhaps held in place magnetically, or mounted on a strong spindle, or largely concealed, to extract illumination data from a compact, condensed reflective surface. Combined with the other relevant photogrammetric techniques, this parameterization of the space's illumination, materials, and geometry serves not only fast graphical integration of vivid, generated CGI/digital imagery (shading, lighting, perspective rendering, including occlusion), but also rapid analysis of possible risk factors for sensitive operations in complex, fast-changing environments.
III. Mechanical and substrate systems:
It will be apparent from the foregoing that the proposed image-display subsystem and the proposed distributed, image-based sensing and auxiliary imaging systems, with emphasis on the preferred embodiments, provide substantive benefit and value to the structural, mechanical, and ergonomic goals of this disclosure.
1. One preferred embodiment of structure-function integration — with weight, volume, size, balance, ergonomic, and cost benefits — is a combined implementation of tensioned film-fabric composite construction and flexible optical structural substrates. Particularly preferred is an HMD frame formed of Corning Willow Glass, folded and (preferably hermetically) sealed, with all the processing and functional electronics that must be integrated into the HMD — possibly including the power supply, in the less-preferred non-wireless versions — fabricated on the folded frame. To protect the glass and the wearer, and for comfort and ergonomics, a protective coating is applied to, or otherwise packaged over, the functional optical structural elements, such as a shock-hardening system like D3O: normally soft and elastic, but curing on impact, it provides a protective barrier for the less durable (though remarkably durable) Willow Glass structural/functional system. The folded Willow Glass, whose inner surface carries the system electronics-on-glass, is shaped cylindrically or semi-cylindrically to add strength and better protect the electronics from impact, thereby also enabling a thinner substrate.
Fiber data and illumination lines run in flexible, textile-wrapped and protected cables (preferably with D3O or another shock-absorbing composite as the outer composite layer) from the illumination, power (preferably wireless), and data sources. The processing unit is carried in a pocket, or integrated into a smart-textile composite wearable on the user's body, for streamlining and for weight distribution and balance.
2. The optical fibers then serve as composite fiber (data, light, and optionally power) cables integrated with the composite Willow Glass frame; material bonding is preferred over more expensive and unnecessary thermal fusion at the input points for E-O data transmission and the insertion points for illumination on the display surface.
3. In this version, the display frame structural elements are also Willow Glass, or a Willow Glass-based molded material system with optional additional composite components; but instead of the optical shapes being formed as solid-glass or polymer lens sub-components (binocular lens tubes), these are thin-film composite layers following lens-type preforms, which help establish the required morphology; compression ribs can also be used to realize the appropriate curvature.
4. Since the sequence of functional optical elements includes, after the initial filters and at its most complex stage, light-guiding/channel-confining elements, the prevailing preferred option in the proposed structures and subsystems is to realize the optical channel elements, such as optical fibers, as part of an aerogel-tensioned membrane matrix. Alternatively, a hollow rigid shell can be used, with solid (or semi-flexible) optical channels — generated channels for the IR, pass-through channels for the visible — permeating the cavities and spaces between aerogel, including positive-pressure aerogel, realizing an extremely strong, low-density, lightweight reinforcing structural system. Aerogel-filament composite materials are commercially developed, and progress in such composite aerogel systems continues, offering a broad selection of silica and other aerogel materials now manufactured by low-cost methods (Cabot, Aspen, etc.).
5. Another option, usable alone and/or in hybrid form with Willow Glass, is a graphene-CNT (carbon nanotube) functional-structural system, alone or preferably composited with aerogel.
6. With the further development of graphene, and of functional electronic and photonic features formed in graphene layers or multilayers on thin Willow Glass substrates or in sandwich systems with aerogel — graphene and graphene-CNT mixtures for electronic interconnects, optical fibers and planar waveguides on glass for optical interconnects — combined with other SOG system elements, and going beyond the SOG case toward increasingly heterogeneous material systems (heterogeneous CMOS-plus systems, then post-"pure"-CMOS), the preferred structures will be realized.
7. In the nearer term, graphene, CNT, and preferably graphene-CNT combinations serve as compression members, alone or in combination with rolled Willow Glass and optional aerogel cell sandwiches, providing a preferred lightweight integrated structural system with superior substrate qualities. Thus, for the onboard processors, sensor deployment, and dense pixel-signal-processing array layers, semi-flexible Willow Glass — or similar glass products that Asahi, Schott, and others may develop — and other, less preferred, recent polymers or polymer-glass hybrids can also serve as deposition substrates.
IV. Other mobile or semi-wearable form factors (such as tablet computers) can also implement many of the mobile AR and VR solutions that find their fullest application in the preferred HMD form factor.
Although specific embodiments have been disclosed herein, based on the combinations of solutions and the individually optimized pixel-modulation stages required for operation, they are not to be construed as limiting the application and scope of the novel image display and projection proposed.
The systems and methods have been described in general terms as an aid to understanding details of preferred embodiments of the present invention. In the description herein, numerous specific details are provided, such as examples of components and/or methods, to provide a thorough understanding of embodiments of the present invention. Some features and benefits of the present invention are realized in such modes, and are not required in every case. One skilled in the relevant art will recognize, however, that an embodiment of the invention can be practiced without one or more of the specific details, or with other apparatus, systems, assemblies, methods, components, materials, parts, and/or the like. In other instances, well-known structures, materials, or operations are not specifically shown or described in detail to avoid obscuring aspects of embodiments of the present invention.
Reference throughout this specification to "one embodiment", "an embodiment", or "a specific embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention and not necessarily in all embodiments. Thus, respective appearances of the phrases "in one embodiment", "in an embodiment", or "in a specific embodiment" in various places throughout this specification are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics of any specific embodiment of the present invention may be combined in any suitable manner with one or more other embodiments. It is to be understood that other variations and modifications of the embodiments of the present invention described and illustrated herein are possible in light of the teachings herein, and are to be considered as part of the spirit and scope of the present invention.
It will also be appreciated that one or more of the elements depicted in the drawings/figures can be implemented in a more separated or integrated manner, or even removed or rendered as inoperable in certain cases, as is useful in accordance with a particular application. Additionally, any signal arrows in the drawings/figures should be considered only as exemplary, and not limiting, unless otherwise specifically noted. Furthermore, the term "or" as used herein is generally intended to mean "and/or" unless otherwise indicated. Combinations of components or steps will also be considered as being noted, where terminology is foreseen as rendering the ability to separate or combine unclear.
As used in the description herein and throughout the claims that follow, "a", "an", and "the" include plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
The foregoing description of illustrated embodiments of the present invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed herein. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes only, various equivalent modifications are possible within the spirit and scope of the present invention, as those skilled in the relevant art will recognize and appreciate. As indicated, these modifications may be made to the present invention in light of the foregoing description of illustrated embodiments and are to be included within the spirit and scope of the present invention.
Thus, while the present invention has been described herein with reference to particular embodiments thereof, a latitude of modification, various changes, and substitutions are intended in the foregoing disclosures, and it will be appreciated that in some instances some features of embodiments of the invention will be employed without a corresponding use of other features, without departing from the scope and spirit of the invention as set forth. Therefore, many modifications may be made to adapt a particular situation or material to the essential scope and spirit of the present invention. It is intended that the invention not be limited to the particular terms used in the following claims and/or to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but that the invention will include any and all embodiments and equivalents falling within the scope of the appended claims. Accordingly, the scope of the invention is to be determined solely by the appended claims.
Claims (10)
1. having novelty and it is desirable that being by United States Patent (USP) certificate protection:
A kind of photon augmented reality system, comprising:
First interface generates first group of channelizing image from real world environments and forms signal;
Second interface forms signal from second group of channelizing image of synthetic world environment generation;
The signal processing matrix of optical channel is isolated, is coupled to the interface, the signal processing matrix configuration is by the letter
Road image composition signal channellization, processing, alternation sum are assigned as the channelizing image handled composition signal group;And
Signal operation structure group, is coupled to the signal processing matrix, and the signal operation structure is configured to from described through handling
Channelizing image composition signal group generate be used for human visual system display picture element group.
2. a kind of photonic system for the visualized operation world, the operation world includes the synthesis generation in virtual real mode
Boundary, comprising:
The booster of one group of channelizing synthetic world image composition signal is generated from the synthetic world, the channel is combined to generation
Attribute needed for image composition signal group in boundary's respectively has booster group, wherein the booster is formed in channelizing booster image
It include that the channelizing synthetic world image forms signal group in signal output group;
It is coupled to the visualizer of the booster, the output group of the channelizing booster image composition signal is handled, with needle
Modification of signal is formed to each of the channelizing visualizer image composition signal output group channelizing booster image is generated
The frequency/wavelength of attribute needed for the booster group is modulated or frequency/wavelength converting attribute, and each signal has visualizer group
Required attribute;And
Output constructor is coupled to the visualizer, from the output group life of channelizing visualizer image composition signal
At display picture element group.
3. The photonic system of claim 2, wherein each set of booster-required attributes includes a frequency/wavelength attribute of each channelized synthetic-world image-composition signal, wherein the frequency/wavelength attributes of the booster-required attribute sets all lie in a portion of the electromagnetic spectrum that is non-visible with reference to the human visual system, and wherein the frequency/wavelength modulation or frequency/wavelength conversion produces the visualizer-required attribute sets with frequency/wavelength attributes that all lie in a portion of the electromagnetic spectrum that is visible with reference to the human visual system.
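The conversion recited in claim 3 can be read as a mapping from non-visible carrier channels to visible display wavelengths. The sketch below illustrates that reading only; the channel names, wavelengths, and signal representation are assumptions for illustration, not values from the specification:

```python
# Illustrative sketch of claim 3: each channelized synthetic-world signal is
# carried on a non-visible (here, near-infrared) wavelength and converted to a
# visible wavelength before display. All wavelengths (nm) are assumed values.

VISIBLE_NM = (380.0, 750.0)  # approximate visible band of the human visual system

# Hypothetical channel plan: NIR carrier wavelength -> visible display wavelength
CHANNEL_PLAN = {
    "ch_red":   {"carrier_nm": 1550.0, "display_nm": 630.0},
    "ch_green": {"carrier_nm": 1310.0, "display_nm": 532.0},
    "ch_blue":  {"carrier_nm": 980.0,  "display_nm": 465.0},
}

def is_visible(wavelength_nm: float) -> bool:
    lo, hi = VISIBLE_NM
    return lo <= wavelength_nm <= hi

def convert_channel(name: str, amplitude: float) -> dict:
    """Map one channelized signal from its non-visible carrier to its visible
    display wavelength, preserving the image-composition amplitude."""
    plan = CHANNEL_PLAN[name]
    assert not is_visible(plan["carrier_nm"]), "carrier must be non-visible"
    out = {"channel": name, "wavelength_nm": plan["display_nm"], "amplitude": amplitude}
    assert is_visible(out["wavelength_nm"]), "converted signal must be visible"
    return out
```

The conversion leaves the image content (the amplitude) untouched and changes only the frequency/wavelength attribute, matching the claim's distinction between the signal and its required attributes.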
4. The photonic system of claim 2, wherein the operating world further includes a real world in an augmented-reality mode, further comprising:
a real-world interface generating from the real world a channelized real-world image-composition signal set, each signal having a respective set of real-world-required attributes; and
wherein the booster receives the channelized real-world image-composition signal set and selectively includes the channelized real-world image-composition signal set in the channelized booster image-composition signal output set.
5. The photonic system of claim 4, wherein each set of real-world-required attributes includes a frequency/wavelength attribute of each channelized real-world image-composition signal, wherein each set of booster-required attributes includes a frequency/wavelength attribute of each channelized synthetic-world image-composition signal, wherein the frequency/wavelength attributes of the booster-required attribute sets all lie in a portion of the electromagnetic spectrum that is non-visible with reference to the human visual system, and wherein the frequency/wavelength modulation or frequency/wavelength conversion produces the visualizer-required attribute sets with frequency/wavelength attributes that are visible with reference to the human visual system.
6. The photonic system of claim 5, wherein the real-world interface converts complex composite electromagnetic wavefront array sets of the real world into the channelized real-world image-composition signal set, wherein the complex composite electromagnetic wavefront array set includes wavefronts having frequencies/wavelengths in both the visible portion and the non-visible portion of the electromagnetic spectrum, and wherein the real-world interface includes an input structure that suppresses the input of wavefronts in the visible portion of the electromagnetic spectrum from contributing to the channelized real-world image-composition signal set.
7. The photonic system of claim 5, wherein the real-world interface converts complex composite electromagnetic wavefront array sets of the real world into the channelized real-world image-composition signal set, wherein the complex composite electromagnetic wavefront array set includes wavefronts having frequencies/wavelengths in both the visible portion and the non-visible portion of the electromagnetic spectrum, and wherein the real-world interface includes an input structure that suppresses the input of wavefronts in the non-visible portion of the electromagnetic spectrum from contributing to the channelized real-world image-composition signal set, and wherein the real-world interface converts and maps wavefronts in the visible portion of the electromagnetic spectrum into signals in the non-visible portion of the electromagnetic spectrum.
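Claims 6 and 7 describe complementary input structures: one suppresses visible wavefronts, the other suppresses non-visible wavefronts and remaps the surviving visible wavefronts into non-visible signal channels. A minimal sketch of the claim-7 behavior follows; the band edges and the linear remapping rule are assumptions for illustration only:

```python
# Sketch of the claim-7 input structure: visible wavefronts pass, non-visible
# wavefronts are suppressed, and each passed visible wavelength is mapped into
# an assumed non-visible (NIR) signal band. Band edges and the linear mapping
# are illustrative assumptions, not the patented conversion.
from typing import Optional

VISIBLE_NM = (380.0, 750.0)    # approximate visible band
NIR_BAND_NM = (900.0, 1600.0)  # assumed non-visible signal band

def map_visible_to_nir(wavelength_nm: float) -> Optional[float]:
    """Suppress non-visible input; linearly remap visible input into the NIR band."""
    v_lo, v_hi = VISIBLE_NM
    if not (v_lo <= wavelength_nm <= v_hi):
        return None  # suppressed by the input structure
    n_lo, n_hi = NIR_BAND_NM
    frac = (wavelength_nm - v_lo) / (v_hi - v_lo)
    return n_lo + frac * (n_hi - n_lo)
```

Under this reading, the visible scene survives as information while no visible light propagates through the signal-processing channels, which is consistent with the claim's separation of visible capture from non-visible channelized processing.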
8. A method, comprising:
generating a first set of channelized image-composition signals from a real-world environment;
generating a second set of channelized image-composition signals from a synthetic-world environment;
processing the channelized image-composition signals with a signal-processing matrix of isolated optical channels to form a processed channelized image-composition signal set; and
generating, from the processed channelized image-composition signal set, a set of display pixels for the human visual system.
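The four steps of the method in claim 8 can be read as a small signal pipeline. In the sketch below, channelized signals are modeled as plain dictionaries and the "signal-processing matrix" as per-channel isolated processing; all function names, the normalization step, and the 50/50 blending rule are hypothetical stand-ins, not the patented implementation:

```python
# Illustrative pipeline for the method of claim 8. Each channelized
# image-composition signal is a dict; processing keeps channels isolated,
# then display pixels are composed from the real/synthetic channel pairs.

def channelize(environment: list, source: str) -> list:
    """Steps 1 and 2: generate one channelized image-composition signal per sample."""
    return [{"channel": i, "source": source, "value": v}
            for i, v in enumerate(environment)]

def process_matrix(signals: list) -> list:
    """Step 3: process each optical channel in isolation (here, clamp to [0, 1])."""
    return [{**s, "value": max(0.0, min(1.0, s["value"]))} for s in signals]

def compose_pixels(real: list, synthetic: list) -> list:
    """Step 4: combine real and synthetic channels into display pixel values
    (a simple 50/50 blend as a stand-in for the actual composition rule)."""
    return [0.5 * r["value"] + 0.5 * s["value"] for r, s in zip(real, synthetic)]

if __name__ == "__main__":
    real_world = channelize([0.2, 0.9, 1.4], source="real")
    synthetic = channelize([0.6, 0.1, 0.3], source="synthetic")
    pixels = compose_pixels(process_matrix(real_world), process_matrix(synthetic))
    print(pixels)  # three blended pixel values in [0, 1]
```

The point of the sketch is the ordering: both worlds are channelized first, each channel is processed in isolation, and only the final composition step merges them into pixels for the human visual system.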
9. An apparatus substantially as disclosed herein.
10. A method substantially as disclosed herein.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211274817.9A CN115547275A (en) | 2016-03-15 | 2017-03-15 | Mixed photon VR/AR system |
Applications Claiming Priority (17)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662308585P | 2016-03-15 | 2016-03-15 | |
US201662308825P | 2016-03-15 | 2016-03-15 | |
US201662308687P | 2016-03-15 | 2016-03-15 | |
US201662308361P | 2016-03-15 | 2016-03-15 | |
US62/308,825 | 2016-03-15 | ||
US62/308,585 | 2016-03-15 | ||
US62/308,361 | 2016-03-15 | ||
US62/308,687 | 2016-03-15 | ||
US15/457,991 | 2017-03-13 | ||
US15/457,991 US9986217B2 (en) | 2016-03-15 | 2017-03-13 | Magneto photonic encoder |
US15/458,009 US20180122143A1 (en) | 2016-03-15 | 2017-03-13 | Hybrid photonic vr/ar systems |
US15/458,009 | 2017-03-13 | ||
US15/457,980 US20180031763A1 (en) | 2016-03-15 | 2017-03-13 | Multi-tiered photonic structures |
US15/457,967 US20180035090A1 (en) | 2016-03-15 | 2017-03-13 | Photonic signal converter |
US15/457,967 | 2017-03-13 | ||
US15/457,980 | 2017-03-13 | ||
PCT/US2017/022459 WO2017209829A2 (en) | 2016-03-15 | 2017-03-15 | Hybrid photonic vr/ar systems |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211274817.9A Division CN115547275A (en) | 2016-03-15 | 2017-03-15 | Mixed photon VR/AR system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109564748A true CN109564748A (en) | 2019-04-02 |
CN109564748B CN109564748B (en) | 2022-11-04 |
Family
ID=65863567
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201780030255.4A Active CN109564748B (en) | 2016-03-15 | 2017-03-15 | Mixed photon VR/AR system |
CN202211274817.9A Pending CN115547275A (en) | 2016-03-15 | 2017-03-15 | Mixed photon VR/AR system |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211274817.9A Pending CN115547275A (en) | 2016-03-15 | 2017-03-15 | Mixed photon VR/AR system |
Country Status (2)
Country | Link |
---|---|
JP (2) | JP2019521387A (en) |
CN (2) | CN109564748B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1629916A (en) * | 2003-12-16 | 2005-06-22 | 精工爱普生株式会社 | Light propagation characteristic control apparatus, optical display apparatus, light propagation characteristic control method and optical display apparatus control method |
US20090231358A1 (en) * | 2008-02-14 | 2009-09-17 | Photonica, Inc. | Hybrid Telecom Network-structured Architecture and System for Digital Image Distribution, Display and Projection |
CN103443846A (en) * | 2011-03-09 | 2013-12-11 | 杜比实验室特许公司 | High contrast grayscale and color displays |
US20140036044A1 (en) * | 2012-08-03 | 2014-02-06 | Samsung Electronics Co., Ltd. | Display apparatus which displays a plurality of content views, glasses apparatus which synchronizes with one of the content views, and methods thereof |
CN103853913A (en) * | 2012-12-03 | 2014-06-11 | 三星电子株式会社 | Method for operating augmented reality contents and device and system for supporting the same |
US20160026253A1 (en) * | 2014-03-11 | 2016-01-28 | Magic Leap, Inc. | Methods and systems for creating virtual and augmented reality |
2017
- 2017-03-15 CN CN201780030255.4A patent/CN109564748B/en Active
- 2017-03-15 CN CN202211274817.9A patent/CN115547275A/en Pending
- 2017-03-15 JP JP2018568159A patent/JP2019521387A/en Pending

2022
- 2022-03-02 JP JP2022031648A patent/JP2022081556A/en Pending
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110518976A (en) * | 2019-07-31 | 2019-11-29 | 同济大学 | A kind of communication device based on distributed optical resonance system |
CN110706322A (en) * | 2019-10-17 | 2020-01-17 | 网易(杭州)网络有限公司 | Image display method and device, electronic equipment and readable storage medium |
CN110706322B (en) * | 2019-10-17 | 2023-08-11 | 网易(杭州)网络有限公司 | Image display method, device, electronic equipment and readable storage medium |
CN112891946A (en) * | 2021-03-15 | 2021-06-04 | 网易(杭州)网络有限公司 | Game scene generation method and device, readable storage medium and electronic equipment |
CN112891946B (en) * | 2021-03-15 | 2024-05-28 | 网易(杭州)网络有限公司 | Game scene generation method and device, readable storage medium and electronic equipment |
CN116027270A (en) * | 2023-03-30 | 2023-04-28 | 烟台恒研光电有限公司 | Positioning method and positioning system based on averaging processing technology |
Also Published As
Publication number | Publication date |
---|---|
JP2019521387A (en) | 2019-07-25 |
CN109564748B (en) | 2022-11-04 |
CN115547275A (en) | 2022-12-30 |
JP2022081556A (en) | 2022-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20180122143A1 (en) | Hybrid photonic vr/ar systems | |
US11107288B2 (en) | Systems and methods for mixed reality | |
CN101414425B (en) | Display device and display method | |
CN107024775B (en) | Stereoscopic display device | |
CN103562802B (en) | Holographic wide angle display | |
CN106371218B (en) | A kind of wear-type three-dimensional display apparatus | |
CN106537220B (en) | Wearable 3D augmented reality display with variable-focus and/or Object identifying | |
US10593092B2 (en) | Integrated 3D-D2 visual effects display | |
CN109564748A (en) | Mix photonic system | |
CN101361023B (en) | Three-dimensional internal back-projection system and method for using the same | |
CN101568889A (en) | Holographic display device | |
CN107390380A (en) | A kind of display device, light guide panel and multilayer suspension display device | |
CN101568888A (en) | Mobile tlephony system comprising holographic display | |
CN204156999U (en) | A kind of bore hole 3D display system based on Unity3D game engine | |
CN210835313U (en) | Holographic diffraction waveguide lens, waveguide lens group and augmented reality color display device | |
CN112415656A (en) | Holographic diffraction waveguide lens and augmented reality color display device | |
US20200018960A1 (en) | Ultra light-weight see-through display glasses | |
Cheng | Metaverse and immersive interaction technology | |
Hua | Past and future of wearable augmented reality displays and their applications | |
WO2017209829A2 (en) | Hybrid photonic vr/ar systems | |
US20200018961A1 (en) | Optical image generators using miniature display panels | |
Lu | Research on optical display technology of virtual reality technology based on optical image | |
US7292384B2 (en) | Method and apparatus for providing a three-dimensional moving image display | |
Zhong et al. | Application of Three-Dimensional Vision Technology in Dance | |
Rath et al. | Modern Development and Challenges in Virtual Reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||