CN108139803A - Method and system for automatic calibration of dynamic display configurations - Google Patents

Method and system for automatic calibration of dynamic display configurations

Info

Publication number
CN108139803A
Authority
CN
China
Prior art keywords
display
hmd
image
further comprise
equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201680058637.3A
Other languages
Chinese (zh)
Other versions
CN108139803B (en)
Inventor
Tatu V. J. Harviainen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
InterDigital VC Holdings Inc
Original Assignee
Pcms Holdings Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pcms Holdings Inc
Priority to CN202110366934.7A (published as CN113190111A)
Publication of CN108139803A
Application granted
Publication of CN108139803B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/147Digital output to display device ; Cooperation and interconnection of the display device with other functional units using display panels
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/156Mixing image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/361Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/363Image reproducers using image projection screens
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0132Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0134Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/06Adjustment of display parameters
    • G09G2320/0693Calibration of display systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Computer Hardware Design (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Systems and methods are described for performing the following operations: capturing, with a forward-facing camera associated with an augmented reality (AR) head-mounted display (HMD), an image of portions of first and second display devices in the environment, the first and second display devices showing first and second portions of content related to an AR presentation; and displaying, on the AR HMD, a third portion of the content related to the AR presentation, the third portion being determined based on the captured image of the portions of the first and second display devices. In addition, the first and second display devices may be active stereoscopic displays, and the AR HMD may simultaneously serve as shutter glasses.

Description

Method and system for automatic calibration of dynamic display configurations
Cross-reference to related applications
This application claims priority to U.S. Provisional Application No. 62/239,143, entitled "METHODS AND SYSTEMS OF AUTOMATIC CALIBRATION FOR DYNAMIC DISPLAY CONFIGURATIONS," filed October 8, 2015; U.S. Provisional Application No. 62/260,069, entitled "METHODS AND SYSTEMS FOR OPTIMIZING DISPLAY DEVICE OUTPUT USING A SECONDARY DISPLAY DEVICE," filed November 25, 2015; and U.S. Provisional Application No. 62/261,029, entitled "METHODS AND SYSTEMS FOR OPTIMIZING DISPLAY DEVICE OUTPUT USING A SECONDARY DISPLAY DEVICE," filed November 30, 2015.
Technical field
The present application relates generally to immersive 3D content that combines optical see-through augmented reality (AR) head-mounted displays (HMDs) with traditional displays.
Background
Interest and activity in the fields of virtual reality (VR) and AR are continuously increasing. Major industry players are working on consumer-oriented AR/VR HMD devices, while various stakeholders in the Hollywood entertainment industry appear to be actively developing AR and VR content. If consumer AR/VR takes off, it will create enormous demand for solutions that enable content consumption on devices beyond the AR and VR HMDs that currently drive the development.
One of the illusions that VR seeks to create is that the user experiences being present in a synthetic virtual world rather than in physical reality. This illusion can be achieved by replacing the user's audiovisual perception of the real world with synthetic images and sounds generated by a computer simulation. To achieve a convincing illusion, the images and sounds generated by the computer simulation should maintain a spatial relationship consistent with the user and respond to the changes caused by the user's actions, so that the user can explore the virtual world just as they explore the real physical world. Similar to VR, AR seeks to generate an illusion, creating an alternative version of the physical reality around the user in which virtual elements are attached to the physical environment.
While AR and VR HMDs are being readied for everyday immersive experiences in the home, consumer display sizes keep growing. Users want larger displays in their living rooms. However, although sizes keep increasing, current flat or slightly curved displays cannot fully support immersive experiences such as those created for AR/VR HMDs. With the evolution of display technology, users may in the near future have even more display area in their living rooms; displays may become non-planar and cover several walls, and/or displays may be deformable, so that the shape and configuration of the display can be changed to match the content being consumed and the context of use.
Due to the translucent nature of optical see-through displays and their limited field of view, AR and VR HMDs are also not well suited for displaying fully immersive content. The transparency of the display can cause virtual content to appear as a ghost image on top of the user's view of the world, and the limited field of view, with its sharp cutoff toward the edges of vision, can further break the illusion.
Currently, several manufacturers (including major industry players such as Microsoft, Sony, and Epson) are marketing optical see-through AR HMDs to consumers. These first-generation devices aimed at the consumer market have some shortcomings, especially regarding the limited field of view, the brightness, and the transparency of the images they produce. However, it is quite realistic to envision that next-generation devices will at least partly remedy these shortcomings and provide high-resolution AR visualization with excellent image quality in a comfortable form factor.
Although future improvements to AR HMDs are highly likely, matching the resolution and color/brightness reproduction of traditional displays with an optical see-through HMD will remain extremely difficult. In addition, current AR HMD technology provides only a fixed eye accommodation distance, which introduces yet another shortcoming to be considered.
AR HMDs also have characteristics that can make them surpass traditional displays in some respects. An AR HMD can allow the user to move freely in the environment without degrading overall image quality. Further, the latest smart-glasses HMDs can track user movement and construct an estimate of the environment's geometry, which in turn makes environment augmentation and rich interaction with the environment possible. In addition, because the HMD display is close to the user's eyes, the user can have a completely unobstructed line of sight to the display.
In the field of shape-changing displays, the most relevant examples are mass-produced display devices that can be adjusted between curved and flat shapes. There is also much speculation about tiled display technology. In addition to displays composed of multiple flat or slightly curved panels, display manufacturers have for years been researching truly flexible displays, and mass-production schemes are becoming a reality. In the academic research community, some related prototype installations exist. For example, the following documents describe, by way of example, robotic displays and small tiled displays: TAKASHIMA, Kazuki, et al. A Shape-Shifting Wall Display that Supports Individual and Group Activities. 2015; and ALEXANDER, Jason; LUCERO, Andrés; SUBRAMANIAN, Sriram. Tilt displays: Designing display surfaces with multi-axis tilting and actuation. Proceedings of the 14th international conference on Human-computer interaction with mobile devices and services. ACM, 2012. p. 161-170. Some related devices and methods can be found in the following patent documents: U.S. Patent No. 8,988,343; U.S. Patent No. 8,711,091; and U.S. Patent Application Publication No. US 2010/0321275 A1. However, there appear to be few examples of deformable or tiled display systems that allow dynamic adjustment during the experience.
Summary of the invention
Methods and systems described herein implement a process that receives sensor-based data about display configuration and user information (e.g., user location, user eye position) as input and, based on the received input, modifies the graphics output stream produced by the currently active application, adjusting the graphics output to match the current display configuration and context of use. An overview of the components involved in the process is shown in Figure 2.
The described embodiments can automatically adjust an application's rendering output to match a display configuration that may change dynamically during application run time, for example the shape of the display, the orientation of the display, or the relative positions of two or more displays.
Some embodiments analyze sensor data from sensors observing the user and the display configuration. The rendering changes required to match the display configuration can be injected into the intercepted graphics API calls produced by the application. The goal is to achieve highly immersive rendering of unmodified applications, while taking the viewed content and context into account to improve the result.
In some embodiments, the display setup is equipped with one or more sensors configured to detect the current display layout configuration. In addition to the display configuration sensors, a sensor capable of detecting users in front of the displays can observe the area around the display configuration. Embodiments described herein include a process in which graphics API calls produced by an unmodified application are intercepted, view transformations are computed based on the display configuration information and head tracking, and the view transformations are then injected into the graphics API calls sent to the displays via the display driver. This approach enables immersive rendering of unmodified applications and dynamic changes of the display configuration (i.e., during application run time). In the methods described below, changes in the display configuration can be continuously monitored, and the output parameters can be adjusted to account for configuration changes detected during application run time, without user involvement.
As a result of this process, the user is provided with immersive rendering of the currently active application adapted to the current display configuration. For example, a user playing a 3D first-person shooter in a setup with multiple displays can rearrange the screens around themselves during game play. As the orientations of the displays change, the user experiences correct immersive rendering of the game play, as if viewing the game world through the combination of multiple displays. Similarly, with a flexible display, the user can change the display shape during application run time simply by moving the flexible display into a corner, changing its shape from flat on one wall to a more immersive shape that bends 90 degrees between two walls, extending from one wall to the other.
In methods according to some examples, the quality and immersiveness of the experience achieved can exceed what either an optical see-through AR HMD or a traditional display can achieve alone. In some embodiments, auxiliary display devices can be used to fill gaps that exist when rendering on a main device or set of main devices. For example, where multiple large displays are available in a space, an AR HMD can be used to bridge any optical gaps between those displays. Where the AR HMD is the main display device for the content, traditional displays can be used to fill regions that fall outside the limited field of view of the AR HMD. The main device (or devices) can be one or more devices that are judged or selected to be best suited for displaying the content or a portion of the content.
Some embodiments described here use a combined approach that selectively partitions content across different displays while also enabling immersive rendering based on head tracking. In addition, this approach can continuously monitor the output and correct unwanted occlusions and shadows caused by physical objects located between the displays and the viewer. When multiple display devices are available, one or more of them can be used in combination with an AR HMD device to fill gaps that exist when rendering. For example, where multiple large displays are available in a space, an AR HMD can be used to bridge the optical gaps between the display devices. Where the AR HMD is the main display device for the content, traditional displays can fill the regions that fall outside the AR HMD's limited field of view.
Description of the drawings
A more detailed understanding can be obtained from the following description, given by way of example in conjunction with the accompanying drawings.
Figure 1A depicts an example communications system in which one or more disclosed embodiments may be implemented.
Figure 1B depicts an example client device that may be used within the communications system of Figure 1A.
Figure 2 depicts an overview of components that may be used in a system, according to at least one embodiment.
Figure 3 depicts a flowchart of a process according to at least one embodiment.
Figure 4 depicts a flowchart of a process according to at least one embodiment.
Figures 5A and 5B depict example configurations of two flat displays, according to at least one embodiment.
Figures 6A, 6B, and 6C depict various projection computations for two flat displays, according to at least one embodiment.
Figures 7A and 7B depict example configurations of a flexible display, according to at least one embodiment.
Figure 8 depicts an example projection computation for a flexible display, according to at least one embodiment.
Figure 9 depicts a user in an immersive environment that includes two external display devices, according to some embodiments.
Figure 10 depicts a shadow-correction example, according to some embodiments.
Figure 11 depicts an occlusion-correction example, according to some embodiments.
Figure 12 depicts an example of distributing virtual elements according to eye accommodation distance, according to some embodiments.
Figure 13 depicts an example of a user in an immersive environment, according to some embodiments.
Figure 14 depicts an example of artifact detection, according to some embodiments.
Figure 15 depicts an example of repairing detected artifacts, according to some embodiments.
Figure 16 depicts an example of an AR HMD rendering content outside the borders of a display, according to some embodiments.
Figure 17 depicts an example of rendering virtual elements selected to be displayed on an AR HMD, according to some embodiments.
Figure 18 depicts an embodiment using an AR HMD as shutter glasses, according to some embodiments.
Figure 19 depicts a flowchart of a process according to some embodiments.
Detailed description
A detailed description of illustrative embodiments is provided with reference to the various figures. Although this description provides detailed examples of possible implementations, it should be noted that the details provided are intended as examples and in no way limit the scope of the application. The systems and methods related to immersive augmented reality may be used with the wired and wireless communication systems described with reference to Figures 1A and 1B. First, these wired and wireless systems will be described.
Figure 1A is a diagram of an example communications system 100 in which one or more disclosed embodiments may be implemented. The communications system 100 may be a multiple-access system that provides content, such as voice, data, video, messaging, and broadcast, to multiple wireless users. The communications system 100 may enable multiple wired and wireless users to access such content through the sharing of system resources, including wired and wireless bandwidth. For example, the communications system 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), and the like. The communications system 100 may also employ one or more wired communications standards (e.g., Ethernet, DSL, radio frequency (RF) over coaxial cable, fiber optics, etc.).
As shown in Figure 1A, the communications system 100 may include client devices 102a, 102b, 102c, 102d, and/or 102e, a personal area network (PAN) 106, and communication links 114/115/116/117/118, though it will be appreciated that the disclosed embodiments contemplate any number of client devices, base stations, networks, and/or network elements. Each of the client devices 102a, 102b, 102c, 102d, 102e may be any type of device configured to operate and/or communicate in a wired or wireless environment. By way of example, the client device 102a is depicted as a tablet computer/touchscreen smartphone, the client device 102b is depicted as a speaker, the client device 102c is depicted as a lighting fixture, the client device 102d is depicted as a television, and the client device 102e is depicted as an HMD (AR or VR, in some embodiments).
Some or all of the client devices 102a, 102b, 102c, 102d, 102e in the communications system 100 may include multi-mode capabilities, i.e., the client devices 102a, 102b, 102c, 102d, and 102e may include multiple transceivers for communicating with different wired or wireless networks over different communication links.
In some embodiments, the client devices 102a, 102b, 102c, 102d, and 102e may communicate with one another via the PAN 106 using local communication protocols. For example, the client devices may communicate using Bluetooth, Wi-Fi, wireless LAN (WLAN), or other forms of local wireless communication protocols.
Figure 1B depicts an example client device that may be used within the communications system of Figure 1A. In particular, Figure 1B is a system diagram of an example client device 102. As shown in Figure 1B, the client device 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and other peripherals 138. It will be appreciated that the client device 102 may represent any of the client devices 102a, 102b, 102c, 102d, and 102e, and may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
The processor 118 may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) circuit, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the client device 102 to operate in a wired or wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While Figure 1B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, the PAN 106 over the communication links 114/115/116/117/118. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In another embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, as examples. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and receive both RF and light signals. In another embodiment, the transmit/receive element may be a wired communication port, such as an Ethernet port. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wired or wireless signals.
In addition, although the transmit/receive element 122 is depicted in Figure 1B as a single element, the client device 102 may include any number of transmit/receive elements 122. More specifically, the client device 102 may employ MIMO technology. Thus, in one embodiment, the client device 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the communication links 114/115/116/117/118.
The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the client device 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the client device 102 to communicate via multiple RATs, such as UTRA and IEEE 802.11, as examples.
The processor 118 of the client device 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or an organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the client device 102, such as on a server or a home computer (not shown).
The processor 118 may receive power from the power source 134 and may be configured to distribute and/or control the power to the other components in the client device 102. The power source 134 may be any suitable device for powering the client device 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, a wall outlet, and the like.
The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the client device 102. In addition to, or in lieu of, the information from the GPS chipset 136, the client device 102 may receive location information over the communication links 114/115/116/117/118 and/or determine its location based on the timing of signals received from two or more nearby base stations. It will be appreciated that the client device 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment. According to one embodiment, the client device 102 may not include a GPS chipset and may not require location information.
The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality, and/or wired or wireless connectivity. For example, the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands-free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.
Although the displays currently found in people's homes keep growing in size, they still cannot provide a truly immersive experience. Currently, VR HMDs suffer most from limited resolution and from problems related to head tracking and the associated motion sickness. However, these problems may be alleviated by next-generation VR HMDs. Another problem with VR HMDs is the complete visual isolation they impose on the user. This spatial and social isolation severely limits the experience.
Many research institutions have built VR setups comprising several display walls, referred to as VR caves. Such cave-like installations can make natural exploration of VR environments possible, but at the cost of high space and equipment requirements. Nonetheless, these cave systems allow instinctive exploration of virtual content, strong immersion, and even limited support for multiple simultaneous users.
It would be highly advantageous if the user could have displays whose shape and behavior change depending on content and context, supporting immersive virtual experiences in a 3D mode in which the displays can surround the user much like a VR cave. For conventional content (e.g., movies), the same displays could provide a maximally sized flat or slightly curved screen.
To improve the usability of such deformable displays, the system generating the output to the displays should automatically detect changes in the display configuration and adjust the output characteristics accordingly. The graphics output configurations currently supported by operating systems generally assume a static display configuration, where the configuration process is performed as a separate step. Display surfaces are typically treated as mere 2D planes whose position and orientation are defined manually by the user.
Such dynamic display behavior requires changes in how displays and output arrangements are managed at the operating system and application software layers. The present disclosure provides a scheme for real-time management of dynamically changing display configurations during application execution.
Automatic calibration for dynamic display configurations
Embodiments described herein can enable a display setup comprising multiple display blocks to be used to full effect, with the configuration and shape of the display changing, automatically or manually, during operation. This makes the display of more diverse content possible and can provide an improved user experience with respect to the viewed content and the context of use.
Figure 2 depicts an overview of the components of a system configured to receive sensor-based data about the display configuration and the user location as input, and to modify the graphics output stream produced by the currently active application based on the received input, thereby adjusting the graphics output to match the current display configuration and context of use. As shown, Figure 2 depicts a user-tracking sensor 202, displays 204a and 204b, a display configuration sensor 206, a graphics processing unit (GPU) 208, a graphics driver 210, a dynamic display configuration manager module 212, and an application module 214.
The displays 204a and 204b may be video screens, computer displays, smartphone/tablet screens, LCD displays, LED displays, or any other class of display module known to those of skill in the art. In some embodiments, at least one display is a tablet. In some embodiments, at least one display is a flexible display.
The user-tracking sensor 202 may be a video camera, a camera with a depth sensor (RGB-D), or any type of sensor that can observe the area around the displays and provide data that can be analyzed to detect and track users and to estimate the approximate location of the users' eyes. When multiple users are detected watching the displays, the user detection and tracking module can be instructed to determine a primary user using some heuristic, or to find a best average eye position (a position representing all acceptable viewing locations for all users), rather than using a static, predefined default eye position.
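For illustration, a minimal sketch of such a viewpoint-selection fallback is shown below (Python/NumPy). The function name, the coordinate conventions, and the nearest-to-screen-center heuristic are illustrative assumptions, not details from the disclosure.

```python
import numpy as np

def viewing_eye_position(eye_positions, screen_center, prefer_primary=True):
    """Choose a single eye position to render for.

    eye_positions: (N, 3) detected per-user eye positions. One simple
    heuristic treats the viewer closest to the screen center as the
    primary user; alternatively, the mean of all eye positions serves
    as a best average viewing position.
    """
    eyes = np.asarray(eye_positions, float)
    if prefer_primary:
        dists = np.linalg.norm(eyes - np.asarray(screen_center, float), axis=1)
        return eyes[np.argmin(dists)]
    return eyes.mean(axis=0)
```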
The display configuration sensors may be simple sensors embedded in the display structure that measure the connection angles between display elements, or they may be optical, acoustic, or magnetic sensors that observe the display setup from outside its mechanical structure to detect the relative orientations and positions of the displays.
In some embodiments, the process of managing the dynamic display configuration is implemented as a software module. This software module may be installed as part of the operating system in the manner of a driver, incorporated as an additional feature of the graphics driver, or integrated with the actual application producing the graphics output. In the component overview of Figure 2, the software module implementing the process described below, the dynamic display configuration manager 212, is illustrated as a standalone driver module at the operating system layer.
Data from these sensors is fed to the dynamic display configuration manager module 212, which can implement a process with the following steps (a code sketch follows the list):
1. Receive sensor data from the sensors monitoring the display configuration.
2. Identify the current display configuration.
3. Receive sensor data about the users in the area.
4. Detect the number of users and the primary user's eye position.
5. Using the current display configuration, solve the 3D projections and transformations that will be used to render content correctly for the user.
6. Inject the solved projections and transformations into the captured stream of graphics calls produced by the currently running application.
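The sketch below illustrates how steps 1-6 might be organized in code. It is a conceptual outline only: the sensor callables, the call-record layout, and `solve_view_projection` are hypothetical stand-ins for the interfaces of Figure 2, not APIs defined by the disclosure.

```python
import numpy as np

def manager_step(read_display_config, read_user_sensor,
                 intercepted_calls, solve_view_projection):
    """One iteration of the dynamic display configuration manager loop.

    All four arguments are hypothetical stand-ins for the interfaces of
    Figure 2 (sensor readers, the captured graphics-call stream, and a
    projection solver such as the one sketched later); only the control
    flow mirrors steps 1-6 above.
    """
    displays = read_display_config()                 # steps 1-2
    eyes = np.asarray(read_user_sensor(), float)     # step 3
    eye = eyes.mean(axis=0)                          # step 4 (average viewer)
    transforms = [solve_view_projection(d, eye)      # step 5
                  for d in displays]
    for call in intercepted_calls:                   # step 6: injection
        if call.get("slot") == "view_projection":
            call["value"] = transforms[call["display_index"]]
    return transforms
```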
A more detailed explanation of the process is given below.
In some embodiments, the task of the process is to obtain data from the user sensors 202 and the display configuration sensors 206, respectively, and to modify the rendering of the application 214 to match the display configuration (and, when needed, the user's eye position). Figure 3 illustrates the process at a general level. Based on the sensor data, the display configuration manager 212 can identify the shape and layout of the current display configuration, i.e., the display shapes, positions, and orientations. The user sensor 202 can provide the display configuration manager with information about the number of users watching the displays and their eye positions.
Figure 4 shows in detail the steps of the continuous process performed by the dynamic display configuration manager 212 in one embodiment. The first step of the process is to capture and analyze sensor data. The process can track the display shapes, positions, and orientations, as well as the users watching the displays and their eye positions. When only the relative positions between display elements can change, using an angle detection sensor (e.g., an electronic potentiometer) is sufficient. When individual display screens can move with more degrees of freedom, other tracking approaches, such as acoustic or magnetic tracking, can be used.
When the display configuration is observed with a camera or depth sensor (RGB-D), computer-vision-based methods can be used to detect changes in the shape, position, or orientation of each individual display element. Because the visual data sent to the displays is known, any image detection and tracking method (e.g., the image detection and tracking used in markerless AR tracking schemes) can detect the display positions and orientations from RGB data alone. Changes in the display shape can also be detected from RGB data, for example using the convex surface optimization method proposed in SALZMANN, Mathieu; HARTLEY, Richard; FUA, Pascal. Convex optimization for deformable surface 3-d tracking. Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on. IEEE, 2007. p. 1-8 (hereinafter "Salzmann et al."), or a similar approach. When RGB-D data is available, the depth component can be used together with the RGB data to further refine the detection of shape, position, and orientation.
In addition to receiving sensor data about the display configuration and user locations, the process intercepts all graphics API calls produced by the currently active application. Intercepting the graphics API calls can be accomplished by routing the graphics call processing through the module, for example by replacing the default graphics driver shared object with the module and chaining the module's output into the input of the default graphics driver shared object. The interception of graphics API calls can operate on principles similar to those used, for example, in the NVidia 3D Vision and NVidia 3DTV Play modules. These NVidia modules capture all graphics API calls produced by an unmodified application, cache the calls, and inject modified projection and model-view matrices into the calls to perform stereoscopic rendering. Stereoscopic rendering involves rendering the scene twice, once for each eye, with the viewpoints offset to approximately match the user's interocular distance.
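As a rough illustration of this caching-and-injection idea, the sketch below models the captured call stream as plain records and produces one modified stream per eye. The record layout and the default interocular distance are assumptions; a real implementation would operate at the graphics driver level rather than on Python objects.

```python
import numpy as np

def translation(offset):
    m = np.eye(4)
    m[:3, 3] = offset
    return m

def inject_stereo(calls, projection, interocular=0.064):
    """Produce per-eye variants of a cached stream of graphics calls.

    calls: list of records like {"slot": "projection" | "modelview",
    "value": 4x4 ndarray, ...}. For each eye, the projection is replaced
    with the solved one and the modelview is offset by half the
    interocular distance, mimicking how a driver-level module re-issues
    an unmodified application's calls for stereoscopic rendering.
    """
    streams = {}
    for eye, sign in (("left", -0.5), ("right", 0.5)):
        offset = translation([sign * interocular, 0.0, 0.0])
        stream = []
        for call in calls:
            call = dict(call)                    # do not mutate the cache
            if call.get("slot") == "projection":
                call["value"] = projection
            elif call.get("slot") == "modelview":
                call["value"] = offset @ call["value"]
            stream.append(call)
        streams[eye] = stream
    return streams
```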
In addition to intercepting and modifying the graphics API calls produced by the application, the dynamic display configuration manager 212 can also inspect the graphics API calls to identify the type of content being displayed. The content type can be used to determine how the rendering should be modified to best adapt to the display configuration and the context of use. For example, 3D content can be transformed for immersive rendering that takes the user's eye position into account, whereas for video content it may be sufficient to correctly crop and transform the clipping region of the video for each display so as to produce a seamlessly combined display area.
A viewing parameter calculator can compute, for each distinct eye position, the correct viewport settings, projections, and model-view transformations used to render 3D content to each display. Examples of computing the projection and model-view transformations are presented below for the simple cases of two non-coplanar flat displays and a single flexible display. The same approach can be extended to compute the parameters for any number of displays with arbitrary orientations and shapes.
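For a flat display whose corner positions and viewer eye position are known, such a projection can be computed with a well-known off-axis ("generalized") perspective formulation, for example the one described by Kooima (2008); that formulation is not cited in the disclosure and is offered here as one standard option. The sketch below implements it in NumPy; the parameter names and clipping-plane defaults are illustrative.

```python
import numpy as np

def off_axis_projection(pa, pb, pc, pe, near=0.1, far=100.0):
    """World-to-clip matrix for a viewer at pe looking through a flat
    display whose lower-left, lower-right, and upper-left corners are
    pa, pb, pc (all in world coordinates).
    """
    pa, pb, pc, pe = (np.asarray(v, float) for v in (pa, pb, pc, pe))
    vr = pb - pa; vr /= np.linalg.norm(vr)           # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)           # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # screen normal

    va, vb, vc = pa - pe, pb - pe, pc - pe           # eye-to-corner vectors
    d = -va.dot(vn)                                  # eye-to-screen distance
    l, r = vr.dot(va) * near / d, vr.dot(vb) * near / d
    b, t = vu.dot(va) * near / d, vu.dot(vc) * near / d

    proj = np.array([                                # off-axis frustum
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])
    rot = np.eye(4)
    rot[0, :3], rot[1, :3], rot[2, :3] = vr, vu, vn  # align world to screen
    trans = np.eye(4)
    trans[:3, 3] = -pe                               # move the eye to origin
    return proj @ rot @ trans
```

Applying this computation per display, and per eye for stereoscopic rendering, yields the view-dependent projections referred to above.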
As the last step of the process, a view renderer can render the entire content separately for each display and, in the case of stereoscopic rendering, can render twice for each respective display. When rendering, the view renderer can send the view-dependent viewport settings and graphics API calls, and the correct projection and model view can be injected into the graphics driver. The graphics driver 210 then forwards these commands to the graphics processing unit (GPU) 208, which outputs the graphics to the displays 204a and 204b. When a display has a non-planar surface, the rendering step may require rendering the scene twice: the first pass renders the entire scene into a render cache using a standard perspective projection covering the whole area spanned by the display; the image data rendered into the cache is then warped to correct for the display's geometric deformation, and the warped image is sent to the display.
Two flat-panel screens
In one embodiment, a user views a display configuration formed by two displays. The angle between the two displays can be adjusted dynamically, and the joint between the displays can be fitted with a sensor whose angle readings are sent to the dynamic display configuration manager 212. Figure 5A shows the starting situation, in which the user is viewing 2D graphics on displays configured in a planar arrangement.
Figure 6A shows the projection computed by the dynamic display configuration manager for single-view rendering on the two displays when the screens are arranged in a plane. The user can then start an application that produces 3D rendering of a virtual world. By inspecting the graphics API calls produced by the application, the dynamic display configuration manager identifies the change to a 3D rendering mode and switches to an immersive output mode.
In the immersive rendering mode, the dynamic display configuration manager can compute projections for each of the user's two eyes, as shown in Figure 6B. The eye positions can be estimated based on the user sensor data, and the required viewpoint transformations and projections can be injected into the captured graphics API calls produced by the application.
Figure 5B shows the user viewing the display configuration after the angle between the displays has been changed. The new configuration resembles a VR cave in which the user is surrounded by display walls. In this case, the viewpoint transformations used for correctly rendering the two display views are different. Figure 6C shows the projections used in the non-planar configuration.
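The sketch below illustrates how a hinge-angle reading could be converted into world-space display corners, which can then be fed to the off-axis projection sketched earlier. The display dimensions, the placement of the hinge along a vertical edge, and the sign conventions are illustrative assumptions.

```python
import numpy as np

def hinged_display_corners(width, height, angle_deg):
    """Corner positions for two displays joined along a vertical hinge.

    Display 1 lies in the z=0 plane with its right edge on the hinge
    (the y-axis). Display 2 is rotated about the hinge by the sensor-
    reported angle (180 degrees = flat, coplanar side-by-side layout).
    Returns one (pa, pb, pc) corner triple per display, matching the
    convention of the projection sketch above.
    """
    a = np.radians(180.0 - angle_deg)
    # Display 1: lower-left, lower-right, upper-left.
    d1 = (np.array([-width, 0.0, 0.0]),
          np.array([0.0, 0.0, 0.0]),
          np.array([-width, height, 0.0]))
    # Display 2 swings out of the wall plane toward the viewer (+z).
    far_edge = np.array([width * np.cos(a), 0.0, width * np.sin(a)])
    d2 = (np.array([0.0, 0.0, 0.0]),
          far_edge,
          np.array([0.0, height, 0.0]))
    return d1, d2
```

Feeding these corner triples, together with the tracked eye position, to off_axis_projection above yields the two projections of the kind illustrated in Figures 6B and 6C.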
Flexible display
In some embodiments, the output can be calibrated for configurations formed by displays with non-planar display surfaces or by displays whose surface shape can change (for example, a display surface changing from flat to curved or to a non-uniform shape, as a flexible display allows). In an exemplary embodiment, the user is watching a flexible display that is initially placed flat against a wall as a planar surface, as shown in Figure 7A. In this case, the projection for 3D rendering is the same as for the flat display seen in the earlier example of Figure 6A. Rendering in this case requires no additional steps and can be performed by modifying the projection and transformation matrices based on the display orientation and position and on the user's head or eye position received from head tracking.
While the application is running, the user can change the display shape by bending the display to a certain angle. The deformed display shape resulting from the user's action is shown in Figure 7B. Correct immersive rendering on the display shape deformed by bending cannot be achieved merely by modifying the model-view and projection matrices. Instead, the rendering requires an additional geometric correction to compensate for the image distortion caused by the non-planar display surface.
The geometric correction can be achieved by first detecting the new shape of the display. When an RGB-D sensor is available for configuration detection, the display shape can be detected by depth reconstruction aided by optical matching; when only RGB data is available, the display shape can be detected using the techniques described herein (e.g., the technique proposed by Salzmann et al.). Optical detection of the display setup is only one available option. In some embodiments, the display configuration sensors may be simple sensors embedded in the display structure that measure the connection angles between display elements, or sensors, such as optical, acoustic, or magnetic sensors, that observe the setup from outside its mechanical structure to detect the relative shapes, orientations, and positions of the displays.
Once the deformed display's shape, position, and orientation have been determined, the rendered region can be computed so as to cover the whole display area. Figure 8 visualizes, from above, the projection used for covering the whole display region. For non-planar display surfaces, immersive 3D rendering can be performed in two passes. In the first rendering pass, the scene is rendered normally using the computed projection; the goal of the first pass is to render the image into a render cache. In the second rendering pass, the deformed geometry is used to define the warping of pixels from the image-plane coordinates produced by the first pass to the coordinates of the deformed display screen. This step can be thought of as texturing the deformed geometry by projecting the render-cache pixels onto it from the user's viewpoint, and then stretching the textured deformed geometry back into a planar image, which is then sent to the display. When the warped image is shown on the display, the user sees a correct, undistorted view of the virtual environment from their viewpoint.
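A minimal sketch of the warping computation in the second pass is shown below: each vertex of the deformed display mesh is projected through the viewer's first-pass frustum to obtain its lookup coordinate in the render cache. The mesh representation is an assumption; in practice this step would run on the GPU.

```python
import numpy as np

def warp_lookup_uvs(mesh_vertices, view_projection):
    """Texture coordinates for the second (distortion) rendering pass.

    mesh_vertices: (N, 3) world-space vertices of the deformed display
    surface. view_projection: 4x4 matrix of the first pass (the viewer's
    frustum covering the whole display). Each vertex is projected through
    the viewer's frustum; the resulting normalized device coordinates,
    remapped to [0, 1], give the vertex's lookup position in the render
    cache. Drawing the mesh flattened into the panel's own 2D coordinates
    with these UVs yields the pre-distorted image sent to the display.
    """
    verts = np.asarray(mesh_vertices, float)
    homo = np.hstack([verts, np.ones((len(verts), 1))])
    clip = homo @ np.asarray(view_projection, float).T
    ndc = clip[:, :2] / clip[:, 3:4]     # perspective divide
    return 0.5 * (ndc + 1.0)             # NDC [-1, 1] -> UV [0, 1]
```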
Additional embodiments
Several variations of the above embodiments for slightly different use cases are also feasible. These variations are briefly described below.
A first possible variation is a setup that does not include user tracking. This somewhat more limited approach can provide correct content rendering for a single predefined viewpoint on a dynamic display configuration. This approach may be sufficient when the application itself handles the head tracking needed for truly immersive rendering, or when truly immersive rendering is not needed (for example, for video content such as 360-degree video output, where the displays can be treated as a configuration that covers an extended field of view but does not support head-tracking-based viewpoint changes).
Beyond merely adjusting the rendering to fit the display configuration, another embodiment can integrate features for actively changing the display configuration. In this case, the example process includes an additional module that can identify a preferred display configuration based on the content type, the number of users, and the user locations, and then command the displays to change their configuration to the detected ideal arrangement. Some embodiments of this process use motorized display devices, and the process has an interface for communicating with the display setup that can be used to command the display configuration to change shape as desired.
Beyond embodiments in which the display configuration only changes shape, the process described here can also be used in situations where displays dynamically appear in and disappear from the environment. For example, with the dynamic output mapping described in International Patent Application PCT/US16/47370, entitled "Systems and Methods for Enhancing Augmented Reality Experience with Dynamic Output Mapping," filed August 17, 2016 (hereinafter [Harviainen]), displays can be discovered in the environment during run time and subsequently offered to the user for output. In this case, the display configuration sensors can be configured to handle displays that become available or unavailable during run time and to detect their shapes, positions, and orientations. In some embodiments, this can be handled by visual display tracking as described in [Harviainen]. If the display positions can be detected for each rendering step, some further embodiments can extend to displays with dynamic positions, such as head-mounted AR displays. Tracking based, for example, on an inertial measurement unit (IMU) or on vision can detect where the user is looking, e.g., where an AR screen is located, and that tracking information can be combined with the process so that the correct portion of the content is shown on the AR HMD in combination with the other displays.
As described above, module 212 can be implemented as a standalone module at the operating system layer or integrated as a feature of the graphics driver. In both of these embodiments, the process can run as a component that modifies the output produced by the application without the application being aware of the process, or it can provide the information the application needs to produce the correct rendering itself. The process has been described above in the mode in which the application 214 is unaware of dynamic display rendering. However, in some embodiments, module 212 may cooperate with the application, so that the application 214 is aware of the dynamic display configuration manager and can control the rendering. When immersive rendering is needed, the application 214 can request the current display configuration and the associated rendering settings, then choose how to use that information to control the rendering, and stream the graphics API calls directly to the default graphics driver. In this way, the application 214 has more control over the rendering, for example choosing the best rendering mode for UI elements depending on which kind of display configuration is currently in use.
Outputting immersive content on display combinations
By combining conventional display technology (for example, display screen and projecting apparatus) using AR HMD, output leaching may be such that The new paragon for entering the real-time 3D figures of formula is possibly realized.When using the neoteric method, some embodiments can be integrated and are added to The arbitrary following characteristics of general 3D images outputtings.In some embodiments, head tracking may be such that using motion parallax and pass System display carries out immersion rendering and is possibly realized.In some embodiments, stereos copic viewing can be integrated.It can stop being coupled to (block) during the AR HMD of viewing, active stereo (active stereo) mould can be used in the display with sufficiently high frame per second Formula other than being used as showing equipment, also acts as active shutter glasses.In some embodiments, by the way that tradition is shown In zone rendering to AR HMD displays on the outside of device, traditional monitor can be extended.In some embodiments, it can detect By viewer or the shade projected by object when the square projection display (for example, before use), and can be in the display of AR HMD Corresponding content (for example, being stopped by the shade or blocked the content of (occlude)) is shown on device.In some embodiments, Object caused by detectable object between viewer and display blocks, and can be shown on the display of AR HMD corresponding Content (for example, the content for being stopped by medium object or being distorted).In some embodiments, by being divided content (that is, can Virtual element is shown on the display for adapting to distance close to correct human eye), AR HMD can be provided for natural person's adaptation of eye Best match.
Merging the advantages of AR HMDs with those of conventional displays can not only improve the viewing experience, but can also enable completely new ways of consuming immersive content with conventional displays.
According to at least one embodiment, a method may combine multiple techniques to maximize the immersion that can be created with the combination of an optical see-through AR HMD and conventional display devices. In some embodiments, the method may perform a process comprising the following steps:
As pre-processing steps, the method may perform the following:
1. Connect to the external display devices available in the environment using a streaming protocol.
2. Create a rendering cache for each display, and choose whether to use a stereo mode and which stereo mode to use. When using stereo rendering, two rendering caches are created for each display (one cache per eye). When a display in the environment supports active stereo mode, the AR HMD can be used to block viewing (e.g., act as shutter glasses), synchronizing the rendering/blocking of the view seen by each eye on the AR HMD with the displays in the environment.
For each rendering step, the method may perform the following (a schematic sketch combining these steps appears after the list):
1. Divide the content into parts to be shown on the AR HMD or on the connected external displays.
2. Track the user's head position, and compute the additional model-view, projection and viewport transformations for rendering the virtual environment onto the external display devices in the environment from the point of view of the user.
3. Render the image to be shown on each external display in the environment, and stream the image to that display. Image regions where an external display and the AR HMD overlap are masked. This masking can minimize color spill from the external display onto the AR HMD display.
4. Inspect the image shown on the external displays using the AR HMD camera in order to detect artifacts caused by object occlusions, and render on the AR HMD the parts that are lost due to occlusion or that contain artifacts.
5. Render onto the AR HMD the regions that fall outside the surface areas of the external displays.
6. Render the elements that were selected to be shown on the AR HMD.
7. In the case of active stereo mode, repeat the process using the transformations for the other eye, and switch which eye is being blocked with the AR HMD.
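The sketch below shows one way the pre-processing and per-frame steps above might fit together. It is a minimal illustration only, not an implementation of this disclosure: all class and function names are hypothetical stand-ins, and the real tracking, rendering and streaming work is reduced to stubs.

```python
# Skeleton of the pre-processing and per-frame steps listed above.
# Every class, name and print statement here is a hypothetical stand-in.

class Display:
    def __init__(self, name, stereo=False):
        self.name, self.stereo = name, stereo
        self.caches = {}

    def create_caches(self):                       # pre-processing step 2
        eyes = ("left", "right") if self.stereo else ("mono",)
        self.caches = {eye: [] for eye in eyes}

    def stream(self, eye, image):                  # per-frame step 3 (stub)
        print(f"streaming {eye} image to {self.name}")

class ARHMD:
    def track_head(self):                          # per-frame step 2 (stub)
        return (0.0, 1.6, 0.0)                     # x, y, z of user's eyes

    def block_eye(self, eye):                      # per-frame step 7 (stub)
        print(f"shutter closed for {eye} eye")

    def render(self, elements, eye):               # per-frame steps 4-6 (stub)
        print(f"AR HMD renders {len(elements)} elements for {eye} eye")

def render_frame(hmd, displays, content):
    hmd_parts = [e for e in content if e["near"]]          # step 1: divide content
    ext_parts = [e for e in content if not e["near"]]
    head = hmd.track_head()                                # step 2
    for disp in displays:
        for eye in disp.caches:
            if eye != "mono":
                hmd.block_eye("left" if eye == "right" else "right")  # step 7
            image = (head, ext_parts)              # stand-in for a rendered, masked image
            disp.stream(eye, image)                # step 3
    hmd.render(hmd_parts, "both")                  # steps 4-6

displays = [Display("wall screen", stereo=True)]
for d in displays:                                 # pre-processing steps 1-2
    d.create_caches()
render_frame(ARHMD(), displays, [{"near": True}, {"near": False}])
```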
In some embodiments, the AR HMD uses output display devices in the environment. The AR HMD device may connect to existing output display devices in the environment using various available technologies. In some embodiments, the AR HMD may automatically poll for output display devices in the environment and connect to any detected output display device. In some embodiments, a list of available output display devices may be shown to the user, and the user may select one or more output display devices to connect to. The actual connection between an output display device and the AR HMD may be formed using any connection protocol, such as Bluetooth, Apple AirPlay, Miracast, Chromecast, or any other known method capable of wirelessly displaying content on an external display device.
In some embodiments, the AR HMD may use sensors to construct a virtual model of the environment and the output devices. The AR HMD may poll for available output devices (e.g., using wireless networking technologies such as WiFi or Bluetooth) and may connect to any detected output device. A test signal generator in the AR HMD may construct test signals and content signals and send them to the connected output devices. A camera embedded on the AR HMD may be used to observe the output of the connected output devices, and, for each connected output device, a determination module may be configured to discard any output device that does not produce the expected result.
In some embodiments, in addition to discovering and connecting to the displays available in the environment, a "virtual environment model" may also be constructed. The "virtual environment model" is a model that can describe the structure of the environment, map the locations of output devices in the environment, and may also record device characteristics. The virtual environment model can be used in the runtime steps of the process to track the user's movement within the environment, and the positions of the external displays relative to the user (e.g., in some embodiments, relative to the user's eyes) can also be tracked. In some embodiments, the virtual environment model can be updated in real time to reflect user movement, and the virtual environment model can also be used to compute the view transformations to be applied to the original 3D content. In some embodiments, computing the virtual environment model includes using one or more RGB-D sensors, cameras, depth sensors, or any combination thereof, to determine the relative positions of objects and displays in the user's environment. In some embodiments, the placement of the sensors/cameras on the AR HMD relative to the user's eyes is known (e.g., known from some calibration process). In some embodiments, distances to objects and displays can be determined by comparing images captured by the cameras on the AR HMD. In some embodiments, the orientation of a detected external display in the environment can be determined by analyzing the known image sent to that display.
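As one illustration of what such a model might hold, the sketch below defines a minimal data structure for a "virtual environment model". The field names and example values are assumptions made for illustration, not terms defined by this disclosure.

```python
# Minimal sketch of a "virtual environment model": output device poses and
# characteristics plus the tracked user eye position. Field names are
# illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class DisplayEntry:
    name: str
    position: tuple          # display center in world coordinates (meters)
    orientation: tuple       # unit normal of the display surface
    size: tuple              # physical width, height (meters)
    characteristics: dict    # e.g. {"stereo": "active", "max_hz": 120}

@dataclass
class VirtualEnvironmentModel:
    displays: list = field(default_factory=list)
    eye_position: tuple = (0.0, 1.6, 0.0)   # updated in real time from head tracking

    def display_distance(self, entry: DisplayEntry) -> float:
        # Distance from the user's eyes to a display, used later when choosing
        # which display best matches a virtual element's distance.
        return sum((a - b) ** 2 for a, b in zip(self.eye_position, entry.position)) ** 0.5

model = VirtualEnvironmentModel()
model.displays.append(DisplayEntry("wall screen", (0.0, 1.5, -2.0), (0.0, 0.0, 1.0),
                                   (1.2, 0.7), {"stereo": "active", "max_hz": 120}))
print(round(model.display_distance(model.displays[0]), 2))  # ~2.0 m in this example
```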
In some embodiments, the next step in the process is to create a rendering cache for each connected external display device. In embodiments using stereo rendering, the process may generate two rendering caches for each connected display, one cache per eye. If supported by the AR HMD and the display, active stereo imaging (e.g., active filtering) may be used to produce a stereo image on the connected display. In some embodiments, an external display may support autostereoscopic imaging, for example being equipped with lenticular lenses or similar beam-splitting technologies that automatically separate the images for the user's eyes. For active stereo, the AR HMD may be configured to switch sequentially between views synchronized to different eyes, blocking the view of each eye at a very high rate. In embodiments using active stereo viewing, the AR HMD may block the view of the eye for which content is not currently being rendered; for example, the process may block the second eye while rendering content for the first eye, and switch to blocking the first eye while rendering content for the second eye.
For each rendering cache, the 3D content is divided between the connected external display devices and the AR HMD. To divide the 3D content onto the external display devices and render the content correctly, a processing module may intercept the 3D graphics API calls of an otherwise unmodified application. Intercepting the graphics API calls allows the process to analyze and modify the 3D content without making any modification to the application that originally produced the 3D content.
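The interception idea can be shown in miniature: wrap the graphics API entry point so every call passes through a processing module that the application knows nothing about. In the sketch below, a plain Python function stands in for a real 3D graphics API call; it is not tied to any actual graphics driver, and all names are hypothetical.

```python
# Miniature illustration of intercepting graphics API calls from an
# unmodified application. draw_triangles() stands in for a real 3D API
# entry point; the wrapper can inspect and modify every call transparently.
import functools

def draw_triangles(vertices, modelview):           # stand-in "graphics API"
    print(f"drawing {len(vertices)} vertices with modelview {modelview}")

def intercept(api_call, extra_scale):
    @functools.wraps(api_call)
    def wrapper(vertices, modelview):
        # The processing module sees every call and could re-route content
        # to external displays or inject per-display view transforms here.
        adjusted = [m * extra_scale for m in modelview]
        return api_call(vertices, adjusted)
    return wrapper

# The application keeps calling "draw_triangles" unchanged; the module
# has silently replaced it with the intercepting wrapper.
draw_triangles = intercept(draw_triangles, extra_scale=2.0)
draw_triangles(vertices=[(0, 0), (1, 0), (0, 1)], modelview=[1.0, 0.0, 0.0])
```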
In some embodiments, the virtual environment can respond naturally to user movement to create the illusion that the user is inside the virtual environment. User head tracking can enhance the feeling of immersion, so that virtual elements in the virtual world respond to the movement of the user's head in a natural way, for example allowing the user to inspect an element by moving around it, just as the user would view a real object in the physical environment. Motion parallax is a phenomenon that can trigger this feeling of immersion, meaning that when the user moves his or her head, objects move in a natural manner. In some embodiments, the process may perform tracking based on RGB-D data captured by cameras or sensors embedded in the AR HMD (e.g., gyro sensors, infrared (IR) sensors, or various other types and/or combinations of sensors). Based on the head-tracking sensor data, the process can compute how the model-view matrix, projection matrix and/or viewport of the original application should be changed so that the virtual environment is rendered correctly on the external display as seen from the user's new viewpoint. In some embodiments, the view transformations may be computed based on sensor data representing display shape, position and orientation. To compute the view transformations, the process may receive the spatial relationship between the user's eye position and the displays in the environment from the virtual environment model described above. OpenGL can provide various functions that can be used to compute the view transformation matrices (such as the model-view matrix, the projection matrix, and the viewport settings mentioned above). In some embodiments, some such functions can accept the processed sensor data while updating existing transformation matrices, settings, and so on. In some embodiments, an initialization process may determine a first transformation matrix by comparing images captured with a camera. In further embodiments, the first transformation matrix may be continuously updated based on user/head tracking sensor data, so as to provide correct motion parallax from the user's viewpoint.
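A sketch of such a view-transformation computation is shown below, following the widely used off-axis ("generalized perspective") projection formulation for rendering onto a fixed physical screen from a tracked eye position. The eye and corner coordinates are illustrative assumptions; a complete pipeline would also build the accompanying model-view transformation that aligns the screen axes with the eye.

```python
# Sketch of computing an off-axis (asymmetric-frustum) projection matrix
# for an external display from the tracked eye position, in the spirit of
# the view transformations described above.
import numpy as np

def off_axis_projection(eye, pa, pb, pc, near=0.1, far=100.0):
    """eye: viewer position; pa, pb, pc: lower-left, lower-right and
    upper-left corners of the display surface, all in world coordinates."""
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal
    va, vb, vc = pa - eye, pb - eye, pc - eye         # eye-to-corner vectors
    d = -np.dot(va, vn)                               # eye-to-screen distance
    l = np.dot(vr, va) * near / d                     # frustum extents at near plane
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    return np.array([[2*near/(r-l), 0, (r+l)/(r-l), 0],
                     [0, 2*near/(t-b), (t+b)/(t-b), 0],
                     [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                     [0, 0, -1, 0]])

eye = np.array([0.2, 1.6, 1.5])     # tracked head position (illustrative)
pa = np.array([-0.6, 1.1, 0.0])     # display lower-left corner
pb = np.array([ 0.6, 1.1, 0.0])     # display lower-right corner
pc = np.array([-0.6, 1.9, 0.0])     # display upper-left corner
print(np.round(off_axis_projection(eye, pa, pb, pc), 3))
```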
Once the 3D content has been intercepted, the process can divide the content by selecting the parts of the content to be shown on the external display devices and the parts of the content to be shown on the AR HMD. The purpose of this step is to divide the content into: (i) elements that will be displayed on the external displays available in the environment; and (ii) elements that will be displayed on the AR HMD, so that the natural eye accommodation distance of the AR HMD display can be matched to the distances from the user to the external display devices, minimizing the error between the virtual element distance and the natural eye focal distance needed to view the external display devices. For each selected element, this can be done by first determining the virtual distance of the virtual 3D element, comparing that distance with (i) the eye accommodation distance of each available external display device and (ii) the natural eye accommodation distance of the AR HMD, and selecting for the element the display whose accommodation distance best matches the given virtual element distance (e.g., an external display or the AR HMD). Figure 12 depicts an example of selectively rendering virtual content on the AR HMD and on external display devices to provide the best natural eye accommodation for viewing the virtual content. As shown in the figure, virtual elements whose virtual distances match display locations well can be rendered on the external display devices. On the other hand, virtual elements close to the AR HMD eye accommodation distance can be rendered using the AR HMD. Various techniques can be used to determine which display is the best match for which virtual distance. For example, the best match can be determined using a ratio, where the best match is the display for which the ratio between display distance and virtual distance is closest to 1. The best match can also be determined based on, for example, a fitness function m(d_d, d_v) of the display distance d_d and the virtual distance d_v, where the display is selected so as to maximize the value of the fitness function m(). The fitness function m() can be determined empirically based on user feedback regarding comfort and/or the realism of virtual objects at different virtual distances on displays at different physical distances.
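The ratio-based matching rule can be written down in a few lines. The sketch below is a minimal illustration of that rule, assuming example accommodation distances; in practice the fitness function m(d_d, d_v) described above would be substituted where available.

```python
# Sketch of choosing the output display for a virtual element by comparing
# its virtual distance against each display's accommodation distance, using
# the simple "ratio closest to 1" rule mentioned above.

def best_display(virtual_distance, display_distances):
    """display_distances: mapping of display name -> accommodation distance.
    Picks the display whose distance ratio to the element is closest to 1."""
    def ratio_score(d_display):
        return abs(d_display / virtual_distance - 1.0)
    return min(display_distances, key=lambda name: ratio_score(display_distances[name]))

displays = {"AR HMD": 1.5, "wall display": 3.0, "projection screen": 5.0}
for d_v in (1.2, 2.8, 6.0):                       # virtual element distances (meters)
    print(d_v, "->", best_display(d_v, displays))
# 1.2 -> AR HMD, 2.8 -> wall display, 6.0 -> projection screen
```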
In some embodiments, an illustrative method of dividing data between external displays and the AR HMD may include: for each virtual object among a plurality of virtual objects, determining the virtual distance at which the object appears from the user's perspective. Objects that would appear at virtual distances within the acceptable range of the user's AR HMD may be rendered on the AR HMD. Where the virtual distance at which an object would appear exceeds the maximum acceptable distance that can be rendered on the user's AR HMD, the object may be rendered on an external display whose actual distance from the user exceeds that maximum acceptable distance.
Figure 9 depicts a user wearing an AR HMD in a room with two external display devices. As shown in the figure, the content is divided between display 1 and display 2, while the AR HMD renders the portion of the content between the displays. In some embodiments, the process may provide a first part of a video presentation on a first external display, provide a second part of the video presentation on a second external display, and show a third part of the video presentation on the AR HMD, where the third part is displayed in the space between the first external display and the second external display. In some embodiments, the virtual environment model may be used to determine what content to show on the external display devices and what content to show on the AR HMD. For example, if the system knows the physical locations, orientations and shapes of the external displays, as well as the aspect ratio and desired size of the video presentation to be shown, the process can compute how much (and which part) of the video presentation needs to be displayed on the first and second external displays, and then render the remainder of the video presentation on the AR HMD. In some embodiments, the cameras and/or sensors on the AR HMD may analyze the parts displayed on the first and second displays, and use this information to determine the third part needed to fill the gap between the displays.
The next step in the process is to feed the 3D content for each external display into its dedicated rendering cache. The dedicated rendering cache can render the 3D content by applying the model-view transformation, projection and viewport settings to the original 3D graphics API calls produced by the default graphics application. The transformations can be used to change the viewpoint and projection of the original 3D content, compensating the viewpoint as though the user were looking toward the external display.
Once the rendering cache has rendered the selected display elements, the overlapping regions between the external display devices and the AR HMD can be masked, to avoid unwanted color spill from the external display onto the image shown on the AR HMD. In some embodiments, masking can be performed by the following steps: projecting the elements rendered on the AR HMD onto the external display image plane, and then replacing the corresponding regions of the image plane, for example rendering those regions of the cache with a masking color (such as black). In some embodiments, the virtual environment model can be used to determine the potential regions to be masked by virtual objects shown on the AR HMD. Figure 14 depicts an example of masking part of the first external display, and Figure 17 depicts an object rendered on the AR HMD that is shown over the masked region. When the regions of the rendering cache that overlap the elements to be rendered on the AR HMD have been masked, the image cache can be streamed to the external display device for output.
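A minimal sketch of the masking step is given below. Axis-aligned rectangles stand in for the projected footprints of HMD-rendered elements; a real system would project the actual element geometry onto the display image plane as described above.

```python
# Sketch of masking: regions of the external display image that sit behind
# elements rendered on the AR HMD are filled with a masking color (black)
# before the cache is streamed out, to limit color spill.
import numpy as np

def mask_overlaps(display_image, hmd_element_rects, mask_color=0):
    """hmd_element_rects: (x, y, w, h) footprints of HMD elements,
    already projected onto the external display's image plane."""
    out = display_image.copy()
    for x, y, w, h in hmd_element_rects:
        out[y:y + h, x:x + w] = mask_color
    return out

frame = np.full((240, 320, 3), 200, dtype=np.uint8)    # stand-in rendered cache
masked = mask_overlaps(frame, [(100, 60, 80, 50)])     # one HMD element footprint
print(masked[85, 140], masked[10, 10])                 # [0 0 0] vs [200 200 200]
```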
After the rendering caches have been streamed to the external displays, the AR HMD can inspect the output of each external display by analyzing images captured with the camera sensor on the AR HMD. By analyzing the camera view of an external display, unwanted anomalies (e.g., artifacts) can be detected. In some embodiments, the captured camera view of the external display can be compared with a version of the original data (which may, for example, have been transformed). In some embodiments, if part of the content is missing in the captured camera view, an artifact can be detected, which may indicate: (i) the presence of an occluding object; and (ii) the position of the occluded region. Examples of such occluded regions are shadows and occluded areas on the display caused by objects located between the viewer and the display, or between the projector and the projection area. In some embodiments, the artifact/occluded portions of the image can be rendered on the AR HMD, to repair the erroneous regions in the output image. Figure 10 depicts an example of a user wearing an AR HMD in a room with a projector. As shown in the figure, because the user stands between the screen and the projector, a shadow is cast on the screen. In some embodiments, the AR HMD can be configured to detect, using the forward-facing camera, the region of the screen covered by the user's shadow, and to render on the AR HMD the part of the video content falling on the region of the screen covered by the user's shadow. Figure 11 depicts an example of occlusion, in which an object (shown as a chair) blocks part of the user's view of the display. In some embodiments, the AR HMD can be configured to detect such occlusions and render the occluded part on the AR HMD.
In some embodiments, the system may know the image cache data sent to each external display in use, and may detect artifacts in the expected output of a display by comparing the actual display output with the original image cache sent to the external display device. In some embodiments, this can be achieved by the following steps: capturing images with the camera sensor embedded in the AR HMD, applying a view transformation to the original image cache data according to the user tracking data, and then comparing the output of the display device in the images captured by the AR HMD camera with the transformed image cache data. If regions of the captured image differ significantly from the corresponding regions of the transformed image cache data, those regions in the output of the external display device can be considered artifacts (e.g., shadows or occlusions caused by objects in the environment). In some embodiments, regions affected by artifacts can be corrected by isolating the artifact regions in the transformed output image cache and displaying the transformed output image cache using the AR HMD.
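The comparison step reduces to image differencing once the expected and observed images are aligned. The sketch below illustrates this with synthetic grayscale images; the thresholds are illustrative assumptions, and alignment via the view transformation is assumed to have happened already.

```python
# Sketch of artifact detection: compare the transformed image cache (what
# the display should show, warped to the camera's viewpoint) against the
# camera's view of the display, and flag regions that differ strongly.
import numpy as np

def detect_artifacts(expected, observed, pixel_thresh=40, area_thresh=50):
    """expected/observed: grayscale uint8 images of the display region,
    already aligned by the view transformation. Returns a boolean mask."""
    diff = np.abs(expected.astype(np.int16) - observed.astype(np.int16))
    mask = diff > pixel_thresh                # per-pixel significant difference
    return mask if mask.sum() >= area_thresh else np.zeros_like(mask)

expected = np.full((120, 160), 180, dtype=np.uint8)
observed = expected.copy()
observed[40:80, 60:120] = 30                  # a shadow falls on the display
artifacts = detect_artifacts(expected, observed)
ys, xs = np.nonzero(artifacts)
print(f"artifact bounding box: x {xs.min()}-{xs.max()}, y {ys.min()}-{ys.max()}")
```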
Because shadows and occlusions must be detected from the display output before the artifacts can be corrected, users may occasionally perceive a latency issue. When an artifact is first detected from the display output, there may be an initial delay between detection and repair. In some embodiments, this delay depends on the performance of the following: AR HMD camera capture, artifact detection, transformation, and output of the corrected image cache data rendered on the AR HMD. The delay between artifact detection and correction can be minimized through efficient computing performance, the use of high-frame-rate cameras, and memory bandwidth optimization. Furthermore, since real-time rendering above 20 Hz can be expected on the various display devices (including AR HMDs), artifacts caused by object occlusions and shadows in the display output can be expected to show relatively low spatial differences between consecutive output frames. Therefore, once an artifact region has been detected, the rendering can assume that the artifact region remains the same as in the previous frame, improving artifact correction by automatically rendering the expected anomalous region on the AR HMD. Changes in the artifact region can be detected in the same way as the initial artifact region detection, with the updated artifact region used for subsequent frames. In this way, the artifacts and anomalous regions visible to the user can be minimized. In some embodiments, more sophisticated methods are used to estimate how the artifact region will change from frame to frame. This can be achieved by modeling and estimating the motion of the anomalous region, for example in a manner similar to Kalman filtering.
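As a concrete illustration of the Kalman-style estimation mentioned above, the sketch below tracks an artifact region's centroid with a constant-velocity Kalman filter, so the correction could be pre-rendered before the next camera frame confirms the new position. The noise values and measurements are illustrative assumptions.

```python
# Sketch of predicting how an artifact region moves between frames with a
# constant-velocity Kalman filter (state: position and velocity in pixels).
import numpy as np

dt = 1 / 30                                        # camera frame interval (s)
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],        # state: x, y, vx, vy
              [0, 0, 1, 0], [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])         # only position is measured
Q = np.eye(4) * 1e-3                               # process noise (assumption)
R = np.eye(2) * 4.0                                # measurement noise (pixels^2)

x = np.array([100.0, 60.0, 0.0, 0.0])              # initial artifact centroid
P = np.eye(4) * 10.0

for measured in ([103, 61], [106, 62], [109, 63]): # detected centroids per frame
    x, P = F @ x, F @ P @ F.T + Q                  # predict next state
    y = np.asarray(measured) - H @ x               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x, P = x + K @ y, (np.eye(4) - K @ H) @ P      # update with the measurement
print("predicted next centroid:", np.round((F @ x)[:2], 1))
```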
In some embodiments, a method may include the following steps: receiving information about a plurality of objects to be rendered into an augmented reality experience, wherein the objects are to be rendered so as to appear at augmented-environment positions at different distances from the user; determining the range of virtual distances at which objects can be rendered to the user with the head-mounted display (HMD) worn by the user; rendering, for the augmented reality experience, those of the plurality of objects whose distances to their augmented-environment positions fall within the range renderable with the user's HMD; rendering at least some of the plurality of objects whose distances exceed the maximum of that range by using a display at a real-world location whose actual distance from the user exceeds the maximum distance of the range renderable with the user's HMD; and rendering, on the user's HMD, parts of the plurality of objects that are occluded, poorly rendered or interpolated, to improve the quality and completeness of the overall experience.
In some embodiments, when the AR HMD features an RGB-D sensor, depth data from the RGB-D sensor can be used to predict occlusions caused by objects in the environment. In some embodiments, the solid geometry of objects can be modeled and object positions estimated (e.g., using the virtual environment model), so that occlusions caused by objects can be estimated without first inspecting the display output.
In some embodiments, virtual elements in regions not covered by an external display can be rendered on the AR HMD. In such embodiments, the AR HMD can be used to extend the overall display area. In some embodiments, content can be rendered on the AR HMD for regions that appear within the field of view of the user wearing the AR HMD but have no available external display surface area. Although the image quality may not match the image quality of the external displays, this approach may help improve how the user perceives the virtual world, and can act as a bridge between different external displays, together creating a better sense of immersion. In some embodiments, as shown in Figure 9, the AR HMD can bridge spatial gaps between two or more external displays, or can simply increase the effective display area of one or more displays. Such embodiments can use the virtual environment model and image analysis techniques to determine what content to show on the AR HMD and where on the AR HMD to show it. In some embodiments, where the physical distance of the gap between the user and the displays is known, along with the virtual distances of the objects to appear, the process can produce stereoscopic output using eye parallax, providing an accurate presentation of the 3D content.
In some embodiments, while stereoscopic output is used, the rendering process described in the preceding paragraphs can be performed for the first eye and then repeated for the second eye. In some embodiments, the AR HMD can also block the eye for which the rendering process is not currently being performed.
In some embodiments, an AR HMD capable of blocking the view (e.g., by using a liquid crystal display layer, or a similar device, whose pixels can be switched between an opaque state and a transparent state at a sufficiently high frame rate) can act as active shutter glasses in addition to serving as the AR display device. Furthermore, the AR HMD can be configured to synchronize the blocking of the view (e.g., by adjusting the shutter frame rate to be synchronized with the display), so as to correctly support the 3D stereo output of an active stereo display. In some embodiments, the active stereo display may provide the timing signal for shutter glasses via a wired signal or via an infrared or radio-frequency (e.g., Bluetooth, DLP-Link) transmitter. For correct synchronization, the AR HMD can detect the timing signal provided by the display, or detect the eye switching frequency by other means.
In some embodiments, the AR HMD can use an IR camera to detect an infrared (IR) synchronization signal. Many AR HMDs may already have IR cameras for 3D tracking, which can be used for detecting synchronization signals. For example, the RGB-D sensor provided on the AR HMD may have an RGB camera sensitive to infrared light, or a separate camera operating in the infrared range.
In some embodiments, Bluetooth connectivity may be a feature of the AR HMD. Hardware configured to detect synchronization signals in the Bluetooth frequency range (typically around 2.4 GHz) is therefore usually already embedded in the AR HMD.
In addition to detecting synchronization signals, an AR HMD with a camera operating at a sufficiently high frame rate can detect the switching frequency of an active stereo display by analyzing the video frames captured by the AR HMD camera viewing the active stereo display. Such embodiments may enable the AR HMD to synchronize its shutter operation with any active stereo display, regardless of the synchronization signal transmission technology used.
Detecting the eye view switching frequency, by analyzing the video frames captured by the AR HMD camera viewing the active stereo display in order to synchronize the AR HMD shutter function, can be split into multiple steps.
In some embodiments, no information indicating whether a display is outputting in active stereo mode is provided to the system. The system can detect whether a display is outputting in active stereo mode. If a display with active stereo output is detected, the next step can be to detect the frequency and timing used by the display to switch between left/right eye views.
In some embodiments, the system may know what media is being shown, and can determine the eye order of the active stereo output (i.e., which image is for the left eye and which image is for the right eye) by comparing the images output by the external display device with the media content. However, in some embodiments, the system may not always have access to the media content. In such embodiments, the system can perform additional content analysis to determine the correct eye order. In some embodiments, the system can compare occlusions occurring in front of display elements. In some embodiments, the system can use recognition of known objects (e.g., faces, human bodies, etc.) in the images output by the external display. If a known object is detected, knowledge of the geometry of that object can help the system determine which image order produces the more correct depth cues. In some embodiments, less heuristic approaches, such as a neural network trained with a large number of samples to detect the correct left/right eye order, can be used to estimate the correct eye order.
In some embodiments, in order to solve the sub-problems listed above, the captured display images need to be stored and analyzed as a time series, timestamped according to when they were observed. One example process includes: (i) detecting whether the display is outputting in active stereo mode; and (ii) setting up synchronization if active stereo output is detected.
Using the AR HMD as shutter glasses together with an active stereo display can provide several advantages. One benefit is that the consumption of various types of content can be simplified. In some embodiments, the AR HMD can display AR content in the environment and, when needed, switch to acting as active shutter glasses. There is no need to use different glasses, or to switch glasses, depending on which kind of content is being consumed.
The AR HMD can detect and track the surface area of an active stereo display by analyzing images captured with the sensors embedded in the AR HMD. This allows the AR HMD itself to block only the part of the user's view that overlaps the active stereo display image while the active stereo display is playing stereo content. This can reduce view flicker, because only part of the image flickers at the high frame rate. It can also reduce the loss of light reaching the user's eyes because, unlike current active shutter glasses (which effectively block the entire viewing area of one eye at a time), only the part of the view overlapping the display is closed. In some embodiments, the AR HMD can provide the shutter glasses functionality using a liquid crystal (LC) layer (which can be used to block light from reaching the user's eyes). In some embodiments, the LC layer can be a thin layer or film arranged on the AR HMD display, which the system can control to selectively block the region overlapping the display, based on the display location and orientation information obtained using the methods described above.
In some embodiments, using the AR HMD as shutter glasses can make it possible to view multiple active stereo displays at the same time. Different active stereo displays could potentially even operate at different frequencies. Because the AR HMD can detect and track several display areas, limiting the shutter effect to the regions where the various active stereo displays overlap the view, each shutter area can operate at its own individual frequency, as long as the frame rate of the AR HMD display is synchronized with the combined frequency of all displays. In addition, the AR HMD can detect multiple synchronization signals across multiple different transmission technologies, enabling simultaneous synchronization with multiple active stereo displays that use different synchronization signal transmission technologies.
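One way to make the "combined frequency" constraint concrete is sketched below, treating the least common multiple of the per-display eye-switch rates as the refresh rate at which every tracked shutter region can be served exactly. That interpretation, like the rates themselves, is an assumption made for illustration.

```python
# Sketch of the frame-rate constraint for shuttering several active stereo
# displays at once: each tracked display region toggles at its own rate,
# which works if the LC layer refresh is a common multiple of all of them.
from math import lcm

display_switch_hz = [120, 100, 144]            # per-display eye-switch rates (assumed)
required_hz = lcm(*display_switch_hz)
print(f"LC layer refresh needed to serve all regions exactly: {required_hz} Hz")

def shutter_state(t, hz):
    # Left eye open during even half-periods, right eye during odd ones.
    return "left" if int(t * hz) % 2 == 0 else "right"

t = 0.0125                                     # a sample time instant (s)
for hz in display_switch_hz:
    print(f"display @ {hz} Hz -> open eye in that region: {shutter_state(t, hz)}")
```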
In some embodiments, the AR HMD can be used as an active shutter while the RGB display capability of the AR HMD enhances the main image shown on the active stereo display. In some embodiments, the luminosity of the active stereo display can be boosted by combining rendering on the AR HMD RGB display with the image rendered on the active stereo display. In some embodiments, the AR HMD can selectively block regions of the active stereo display so that image regions to be shown as black can appear blacker. Such embodiments can extend the natural dynamic range of the active stereo display and increase the maximum luminosity beyond what the active stereo display alone could produce.
In some embodiments, the crosstalk between the active stereo display and the AR HMD can be reduced relative to the crosstalk between an active stereo display and traditional active shutter glasses. Due to slow pixel turn-off times, some active stereo displays may suffer from crosstalk when switching eye views, as an afterimage intended for the other eye still remains on the display. In some embodiments, this can be reduced by using the AR HMD to detect potential crosstalk areas and compensating for potential crosstalk errors by showing RGB images on the AR HMD, thereby minimizing the crosstalk (e.g., smoothing high contrast in potential crosstalk areas).
Some embodiments can minimize flicker by rendering an RGB image for the eye currently blocked from viewing the image on the active stereo display. The AR HMD can be configured to render, for the currently blocked eye, the content shown on the active stereo display at a lower resolution or with a reduced color gamut using the AR HMD RGB display, thereby reducing the flicker introduced by active shutter glasses and reducing the overall drop in view luminosity.
In the main embodiment, the process can be performed using an optical see-through AR HMD that continuously inspects the environment with an embedded camera or RGB-D sensor. However, in alternative embodiments, the computation, the communication between components, the content management and the rendering can be arranged in alternative ways. In such alternative embodiments, the above process may need some changes and may be performed on devices of various other types beyond the AR HMD described above.
In some embodiments, occlusions caused by objects can be detected using the virtual environment model, rather than by analyzing the images output by the displays to detect objects occluding the external display as seen from the user's viewpoint. In such embodiments, the virtual environment model may be created in an initialization step, and can have sufficiently high resolution while also covering the whole operating area. In this alternative method, regions occluded by objects in front of an external display can be automatically selected for rendering on the AR HMD rather than on the external display. In such embodiments, the first runtime processing step that divides the content for display on the various displays can detect the external display regions blocked by other objects and select them to be rendered on the AR HMD.
In some embodiments, rather than using the AR HMD sensors for head tracking and display artifact detection, alternative embodiments can receive sensor data from sensors embedded in the environment. In such embodiments, the mobile application performing the process can request data from the external sensors, perform user head tracking based on the external sensor data, and determine the head tracking based on the known external sensor locations. In some embodiments, the external sensor can be an RGB-D sensor observing the user. In some embodiments, the external sensor can be an optical sensor, a magnetic sensor, a depth sensor, a sound tracking sensor, or the like.
In some embodiments, shadows and occlusions caused by objects and/or the viewer can also be detected by analyzing external sensor data. In such embodiments, problem areas (e.g., occluded regions) can be anticipated during the content division step of the process, so that erroneous images can be prevented from being displayed in the first place rather than detected afterwards. In some embodiments, shadow areas can be detected based on an approximate estimate of the user's geometry from an RGB-D sensor observing the user, rather than being detected from the output images.
In some embodiments, an external server can be configured to create the immersive experience and perform the above process. In some embodiments, the server can receive sensor data from the AR HMD and perform the entire rendering process based on the received sensor data. In some embodiments, when the sensor data is streamed from the AR HMD to the external server performing the process, the network interface may introduce some level of latency. In embodiments experiencing sufficiently high network latency, some features described herein may not be implemented (e.g., runtime image error detection and correction based on camera data analysis may not be implemented).
In alternative embodiments, in addition to performing the above rendering process for the combined displays, the application that generates and shows the content can also be executed on a dedicated server. In such embodiments, the server knows all the displays in the environment. Based on the known positions of these displays relative to the user's location, the server can stream the content to each display (including the AR HMD device worn by the user).
Figure 13 depicts a plan view of a user viewing virtual 3D content in a room with a large front-projection projector and a regular display. Figure 13 includes the user, various virtual 3D elements, a projector, a projection screen, physical objects in the environment, and an external display device. As shown in the figure, Figure 13 includes the following occlusions: the viewer blocking the projector, and a physical object blocking the display screen. In some embodiments, after the user launches an application producing 3D content (e.g., a virtual world viewer), a module implementing the above rendering process can poll the environment for available output devices (e.g., external display devices and/or projectors) and connect to any available output display device. In some embodiments, the rendering process can create the required rendering caches for each external display.
Once the output peripherals and their positions relative to the user's eye position are known, the mobile application can divide the intercepted graphics API calls into the 3D content to be sent to the different display devices. In some embodiments, the 3D content can be divided based on matching eye accommodation distances; in some embodiments, this can result in virtual elements closest to the user generally being selected for display on the AR HMD, while other elements at larger virtual distances are mainly shown on the available external displays. In some embodiments, for each rendering step, the external display positions relative to the user's eye position can be updated based on 3D tracking of the user's movement in the space (e.g., user head tracking), using sensor data received from the sensors embedded in the AR HMD.
In some embodiments, the elements selected to be rendered on an external display can be rendered by injecting the required model-view, projection and/or viewport settings into the intercepted graphics API calls. Once the view has been rendered into the rendering cache, the regions overlapping objects to be rendered on the AR HMD can be masked, and the rendering cache can then be streamed to the external display. Figure 14 shows the output on an external display after masking the regions selected to be rendered on the AR HMD. As shown in the figure, because of the different 3D elements that will be shown on the AR HMD, parts of the 3D elements to be shown on the projection screen can be masked.
In some embodiments, after the display images have been streamed to and shown on the external displays, the mobile application can analyze the camera sensor data captured by the camera embedded in the AR HMD for artifacts in the output images. In some embodiments, the shadow cast by the viewer onto the projected image, and the occlusion caused by a chair in front of the display screen, can be detected. As shown in Figure 15, for the erroneous image regions, the correct parts of the view can be rendered and shown on the AR HMD.
In some embodiments, the AR HMD can render 3D content regions falling outside the areas covered by the external display devices. Figure 16 shows an exemplary embodiment. As shown in Figure 16, the 3D content rendered on the AR HMD can bridge the region between the two displays.
In some embodiments, as shown in Figure 17, the 3D content elements selected to be shown on the AR HMD can be rendered and shown on the AR HMD.
Shutter glasses variation
Presenting a full VR/AR experience using an HMD may produce eyestrain, headache, nausea and fatigue, partly due to the vergence-accommodation conflict. Further, using stereo projection alone limits opportunities for personalization and privacy for the user. In some embodiments, the above process can produce a combined AR HMD / stereo projection, where the AR HMD can simultaneously act as the active shutter glasses for the projection. Figure 18 shows an exemplary embodiment. As shown in the figure, Figure 18 includes a user wearing an AR HMD and two external display devices, each external display having a corresponding shutter timing synchronization signal transmitter. In some embodiments, the displays can enable a 3D effect while the active shutter method is in use.
In some embodiments, a method may include the following steps: capturing an image of a portion of a first projection screen using a forward-facing camera associated with an AR HMD; responsively determining that the captured image represents first partial content associated with an AR presentation; and configuring the AR HMD to generate images at a frequency associated with the first partial content, while the AR HMD also presents second partial content associated with the AR presentation. In some embodiments, the first partial content associated with the AR presentation can be displayed as alternating frames of a stereo projection. In some embodiments, the generated images can act as alternating left and right shutters. In some embodiments, the left-eye image of the second partial content associated with the AR presentation is shown on the part of the AR HMD associated with the left-eye display, while an image used to produce a shutter effect for the right eye is shown on the part of the AR HMD associated with the right-eye display, and vice versa.
In some embodiments, synchronization can be determined by analyzing the camera signal capturing the projection screen image. In some embodiments, the method further includes detecting the frequency of the left/right display. In some embodiments, the detection is accomplished by image analysis. In some embodiments, the detection is accomplished by masking analysis.
In some embodiments, a method for distributing content for display among the display devices in an AR system includes the following steps: capturing, with a forward-facing camera associated with a head-mounted AR display, images of portions of first and second display devices in an environment, the first and second display devices showing first and second partial content associated with an AR presentation; and displaying, on the head-mounted AR display, third partial content related to the AR presentation, the third partial content being determined based on the images of the portions of the first and second display devices captured with the forward-facing camera of the head-mounted AR display, wherein determining the third partial content related to the AR presentation includes determining partial content associated with the portions of the field of view not occupied by the first or second display devices.
In some embodiments, a method for improving a multi-device immersive virtual/augmented experience includes the following steps: identifying the available display devices with content streaming interfaces in the environment; allocating a rendering cache for each available display; dividing the content into parts to be shown on the AR HMD or on the connected external displays; tracking the user's head position and computing the additional view transformations (e.g., model-view, projection and viewport) used to render the virtual environment seen from the user's viewpoint onto the external displays in the environment; rendering the image to be shown on each external display in the environment and streaming the image to that display; and masking the image regions to be displayed on the external display that overlap elements rendered on the AR HMD, to minimize color spill from the external display onto the AR HMD display.
In some embodiments, the method further includes inspecting the images shown on the external displays using the AR HMD camera to detect artifacts caused by object occlusions. In some embodiments, the method also includes rendering the lost parts or artifact regions of the image on the AR HMD. In some embodiments, the method also includes rendering, on the AR HMD, the regions outside the surface areas of the external displays. In some embodiments, the method further includes rendering the elements selected to be shown on the AR HMD. In some embodiments, the AR HMD can use an active stereo mode, and the process can be repeated using the transformations for the other eye while blocking the second eye. In some embodiments, the method further includes switching which eye the AR HMD blocks.
In some embodiments, the method further includes creating two rendering caches for each display, one cache per eye, or, if a stereo mode is not used, creating a single rendering cache for each display.
Figure 19 depicts a flowchart of a process in accordance with some embodiments. As shown in the figure, Figure 19 includes an image capture process comprising the following steps: capturing frames from the AR HMD; detecting the display image from the captured frame; isolating and compensating for the perspective transformation of the image on the HMD, based on the corner points of the display area detected from the captured image; and adding the un-warped image visible on the display to a display image queue for analysis. Once the number of images in the display image queue exceeds a preset requirement, an image queue analysis process begins. As shown in Figure 19, the image queue analysis process includes the following steps: computing the differences between consecutive images in the queue, and detecting the frequency from the differences between frames (active stereo output should appear as a clear high-frequency band). In some embodiments, a Fourier transform can be used to detect the frequency distribution. The next step is to match images into stereo pairs based on the detected high-frequency band. Stereo pairs can be identified by comparing pixel or feature transformations between images and comparing the first and second elements of candidate pairs: when the differences between the first elements, or between the second elements, of successive pairs are compared, the offset should be small, and the offset between the images within a single stereo pair should be very similar from pair to pair. In a preferred embodiment, the offset between the images within a stereo pair will be small. If these conditions are satisfied, it can be determined that the output of the display is in active stereo mode at the given frequency. The next step can then be to detect which image of a stereo pair is for which eye. In some embodiments, this can be done by comparing the images with the original media, or by using object recognition (occlusions). Finally, the active shutter glasses settings can be synchronized.
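The frequency-detection step of this analysis can be illustrated compactly. In the sketch below, a synthetic sequence of consecutive-frame differences stands in for the measurements taken from the un-warped display images; the capture rate, alternation rate and threshold are illustrative assumptions.

```python
# Sketch of the image-queue analysis: active stereo output shows up as a
# strong high-frequency band in the frame-to-frame differences of the
# un-warped display images, detectable with a Fourier transform.
import numpy as np

rng = np.random.default_rng(0)
capture_hz = 240                                   # AR HMD camera frame rate
t = np.arange(240) / capture_hz                    # one second of captured frames
# A display alternating eye views at 60 Hz makes consecutive-frame
# differences swing as a 60 Hz square wave; mild noise added for realism.
frame_diffs = (np.sin(2 * np.pi * 60 * t) > 0) + 0.05 * rng.random(t.size)

spectrum = np.abs(np.fft.rfft(frame_diffs - frame_diffs.mean()))
freqs = np.fft.rfftfreq(frame_diffs.size, d=1 / capture_hz)
peak = freqs[np.argmax(spectrum)]
print(f"dominant flicker component: {peak:.0f} Hz")
if peak > 30:                                      # heuristic threshold (assumption)
    print("likely active stereo output; next, pair frames and detect eye order")
```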
In one embodiment, there is a method comprising: detecting the configuration of a convertible display; detecting the position of a user; and transforming, based on the position and the configuration, the graphics output to the convertible display.
In one embodiment, there is a system comprising: a first sensor operable to detect the configuration of a convertible display; a second sensor operable to detect the position of a user; and a display configuration manager operable to transform, based on the position and the configuration, the graphics output to the convertible display.
In one embodiment, there is a method comprising: receiving sensor data from multiple sensors; detecting the display configuration of at least one display; generating a projection transformation based on the display configuration; generating an output video stream by applying the projection transformation to an unmodified video stream; and sending the output video stream to the at least one display. In some embodiments, the method may further comprise wherein the multiple sensors include a user tracking sensor. In some embodiments, the method may further comprise wherein the user tracking sensor is an RGB-D sensor. In some embodiments, the method may further comprise wherein the user tracking sensor is a camera. In some embodiments, the method may further comprise determining a primary user from among a group of users using the user tracking sensor. In some embodiments, the method may further comprise determining a user eye position using the user tracking sensor. In some embodiments, the method may further comprise wherein the multiple sensors include a display configuration sensor. In some embodiments, the method may further comprise wherein the display configuration sensor is an angle detection sensor. In some embodiments, the method may further comprise wherein the display configuration sensor is an electronic potentiometer. In some embodiments, the method may further comprise wherein detecting the display configuration includes detecting the angle between two or more displays. In some embodiments, the method may further comprise wherein detecting the display configuration includes detecting the shape of the at least one display. In some embodiments, the method may further comprise wherein detecting the display configuration includes detecting the relative positions of two or more displays. In some embodiments, the method may further comprise wherein the at least one display includes a flat panel display. In some embodiments, the method may further comprise wherein the at least one display includes a flexible display. In some embodiments, the method may further comprise wherein generating the projection transformation further comprises determining the region rendered via the output video stream. In some embodiments, the method may further comprise wherein the display configuration is determined for a predetermined user position. In some embodiments, the method may further comprise wherein the display configuration is continuously updated. In some embodiments, the method may further comprise transmitting a configuration control to the at least one display. In some embodiments, the method may further comprise wherein the configuration control adjusts the display configuration. In some embodiments, the method may further comprise wherein the configuration control is entered via a touch screen. In some embodiments, the method may further comprise wherein the configuration control is entered via a keyboard. In some embodiments, the method may further comprise wherein the configuration control is entered via a voice command. In some embodiments, the method may further comprise wherein the configuration control is entered via a controller. In some embodiments, the method may further comprise wherein the configuration control is entered via a joystick. In some embodiments, the method may further comprise wherein a display dynamically appears. In some embodiments, the method may further comprise wherein a display dynamically disappears. In some embodiments, the method may further comprise wherein the output video stream is based on stereo rendering. In some embodiments, the method may further comprise detecting user head tracking. In some embodiments, the method may further comprise wherein detecting the display configuration includes detecting the orientation of the at least one display. In some embodiments, the method may further comprise wherein detecting the display configuration includes detecting the number of displays within the user's viewing range. In some embodiments, the method may further comprise wherein detecting the display configuration includes detecting the position of the at least one display. In some embodiments, the method may further comprise wherein detecting the display configuration includes detecting the shape of the at least one display.
In one embodiment, there is an apparatus comprising: a user sensor configured to detect user information; a display configuration sensor configured to detect the display configuration of at least one display; a dynamic display configuration manager configured to receive the user information, the display configuration, and graphics API calls, the dynamic display configuration manager being configured to compute projection transformations and to responsively render, by applying the projection transformation to each unmodified video stream, a corresponding video output for each display of the at least one display; and a graphics driver configured to output the corresponding rendered video output. The apparatus may further comprise wherein the user sensor is configured to detect a user eye position. The apparatus may further comprise wherein the user sensor is configured to determine a primary user. The apparatus may further comprise wherein the user sensor is configured to provide head-tracking information. The apparatus may further comprise wherein the user sensor includes an RGB-D sensor. The apparatus may further comprise wherein the user sensor includes an inertial measurement unit (IMU). The apparatus may further comprise wherein the user sensor includes a camera. The apparatus may further comprise wherein the display configuration sensor is configured to detect the number of available displays. The apparatus may further comprise wherein the display configuration sensor is configured to detect the position of the at least one display. The apparatus may further comprise wherein the display configuration sensor is configured to detect the relative positions of two or more displays. The apparatus may further comprise wherein the display configuration sensor is configured to detect the orientation of the at least one display. The apparatus may further comprise wherein the display configuration sensor is an angle detector configured to detect the angle between two displays.
The apparatus may further comprise wherein the display configuration sensor is configured to determine the shape of the at least one display. The apparatus may further comprise wherein the display configuration sensor is a depth sensor. The apparatus may further comprise wherein the display configuration sensor is a sound sensor. The apparatus may further comprise wherein the display configuration sensor is an optical sensor. The apparatus may further comprise wherein the display configuration sensor is a magnetic sensor. The apparatus may further comprise wherein the at least one display includes a flat panel display. The apparatus may further comprise wherein the at least one display includes a non-flat display. The apparatus may further comprise wherein the dynamic display configuration manager is configured to: read back a first rendered image based on a normal projection, responsively warp the first rendered image to correct for display geometry deformation, and send the warped image to the at least one display. The apparatus may further comprise wherein the at least one display includes a flexible display. The apparatus may further comprise wherein the at least one display includes an organic light-emitting diode (OLED) display. The apparatus may further comprise wherein the at least one display includes a liquid crystal display (LCD). The apparatus may further comprise wherein the display configuration sensor is configured to detect a display dynamically appearing. The apparatus may further comprise wherein the display configuration sensor is configured to detect a display dynamically disappearing. The apparatus may further comprise wherein the dynamic display configuration manager is configured to receive a user configuration control. The apparatus may further comprise wherein the display configuration is adjusted according to the user configuration control. The apparatus may further comprise wherein the user configuration control includes a display orientation change request. The apparatus may further comprise wherein the user configuration control includes a display on/off request. The apparatus may further comprise wherein the user configuration control includes a display deformation request.
In some embodiments, there is a method comprising: receiving captured frames from a forward-facing camera embedded on an AR HMD; identifying a warped display image in the captured frame; forming an unwarped display image from the warped display image; and comparing the unwarped display image with the originally displayed image to determine whether an artifact is present. In some embodiments, the method may further comprise wherein the artifact comprises a shadow occlusion. In some embodiments, the method may further comprise wherein the artifact comprises an object occlusion. In some embodiments, the method may further comprise wherein identifying the warped display image comprises identifying corners of the warped image. In some embodiments, the method may further comprise wherein forming the unwarped image comprises: identifying perspective information based on the identified corners; and applying, based on the perspective information, a first transform to the warped display image. In some embodiments, the method may further comprise wherein the first transform is based on a model-view transform/matrix. In some embodiments, the method may further comprise wherein the first transform is a viewport transform. In some embodiments, the method may further comprise: determining lost content covered by the artifact; applying a second transform to the lost content; and displaying the transformed lost content on the display of the AR HMD. In some embodiments, the method may further comprise wherein the second transform is the inverse of the first transform.
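One plausible realization of the identify-unwarp-compare pipeline above, sketched with OpenCV under the assumption that the four display corners have already been detected in the camera frame (corner detection itself is not shown):

```python
import cv2
import numpy as np

def find_artifacts(frame, corners, original, diff_thresh=40):
    """Rectify the display region seen in an AR-HMD camera frame and
    compare it with the originally rendered image to locate artifacts
    such as shadows or objects occluding the external display.

    corners: 4x2 float32 display corners detected in the frame, ordered
    top-left, top-right, bottom-right, bottom-left.
    """
    h, w = original.shape[:2]
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(corners, dst)   # first transform: un-warp
    rectified = cv2.warpPerspective(frame, H, (w, h))

    diff = cv2.absdiff(cv2.cvtColor(rectified, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(original, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask   # nonzero pixels mark candidate artifact regions
```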
In one embodiment, there is a method comprising: detecting an artifact in a virtual presentation, the artifact being at a first position; correcting the artifact appearing at the first position in the virtual presentation at a first time; and correcting, at a second time, the artifact appearing at a second position, the second position being determined based at least in part on the first position. In some embodiments, the method may further comprise estimating the second position based on the first position. In some embodiments, the method may further comprise wherein estimating the second position is further based on a third position, the third position having appeared at a third time before the first time. In some embodiments, the method may further comprise wherein estimating the second position is based at least in part on a Kalman filter.
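The Kalman-filter-based estimation mentioned above can be illustrated with a constant-velocity filter that predicts the artifact's second position from earlier observations, so the correction can be applied before the artifact is re-detected. The state layout and noise values below are illustrative assumptions, not the patented design:

```python
import numpy as np

class ArtifactTracker:
    """Constant-velocity Kalman filter over artifact position."""

    def __init__(self, dt=1/30, q=1e-2, r=1.0):
        self.x = np.zeros(4)                        # state: [px, py, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                       # we observe position only
        self.Q = q * np.eye(4)                      # process noise
        self.R = r * np.eye(2)                      # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                           # predicted next position

    def update(self, measured_pos):
        y = np.asarray(measured_pos, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Feeding the filter the first and third positions via update() and then calling predict() yields an estimate of the second position, matching the time-ordering described above.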
In one embodiment, there is a method comprising: receiving a captured frame from a forward-facing camera embedded on an AR HMD; identifying a warped display image in the captured frame; forming a warped original image based on a transform of the original image; and comparing the warped display image with the warped original image to identify one or more artifacts. In some embodiments, the method may further comprise wherein the transform is based on perspective information. In some embodiments, the method may further comprise wherein the perspective information is obtained by identifying corners of the warped display image. In some embodiments, the method may further comprise wherein the artifact is an occluded object. In some embodiments, the method may further comprise wherein the artifact is a shadow region. In some embodiments, the method may further comprise wherein the artifact is an overlap between a virtual object displayed on the AR HMD and the warped display image. In some embodiments, the method may further comprise masking the region of the overlap on the external display that is currently displaying the warped display image.
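This variant compares in the camera's image space rather than in display space: the original image is warped into the captured frame's perspective and then differenced. A minimal OpenCV sketch under the same assumption that the display corners are already detected; color (3-channel) images are assumed:

```python
import cv2
import numpy as np

def find_artifacts_in_camera_space(frame, corners, original, thresh=40):
    """Warp the original image into the camera's perspective using the
    detected display corners, then diff against the captured frame."""
    h, w = original.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, corners)   # original -> frame space
    warped_original = cv2.warpPerspective(
        original, H, (frame.shape[1], frame.shape[0]))

    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(warped_original, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask[warped_original.sum(axis=2) == 0] = 0      # ignore area outside display
    return mask
```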
In one embodiment, there is an apparatus comprising: a forward-facing camera embedded on an AR HMD and configured to capture frames; an image processor configured to receive the captured frames, to identify a warped display image in a captured frame, and to form an unwarped display image from the warped display image; and a comparison module configured to compare the unwarped display image with the original image and to determine whether an artifact is present. In some embodiments, the apparatus may further comprise wherein the artifact comprises a shadow occlusion. In some embodiments, the apparatus may further comprise wherein the artifact comprises an object occlusion. In some embodiments, the apparatus may further comprise wherein the image processor is configured to identify corners of the warped display image. In some embodiments, the apparatus may further comprise wherein the processor is configured to form the unwarped image by: identifying perspective information based on the identified corners, and applying, based on the perspective information, a first transform to the warped image. In some embodiments, the apparatus may further comprise wherein the transform is based on a model-view transform/matrix. In some embodiments, the apparatus may further comprise wherein the transform is a viewport transform. In some embodiments, the apparatus may further comprise wherein the image processor is configured to: determine lost content covered by the artifact, apply a second transform to the lost content, and display the transformed lost content on the display of the AR HMD. In some embodiments, the apparatus may further comprise wherein the second transform is the inverse of the first transform.
In one embodiment, there is an apparatus comprising: an image detection module configured to detect an artifact in a virtual presentation, the artifact being at a first position; and an image correction module configured to correct the artifact appearing at the first position in the virtual presentation at a first time, and to correct, at a second time, the artifact appearing at a second position, the second position being determined based at least in part on the first position. In some embodiments, the apparatus may further comprise wherein the image correction module is configured to estimate the second position based on the first position. In some embodiments, the apparatus may further comprise wherein estimating the second position is further based on a third position, the third position having appeared at a third time before the first time. In some embodiments, the apparatus may further comprise wherein estimating the second position is based at least in part on a Kalman filter.
In one embodiment, there is an apparatus comprising: a forward-facing camera embedded on an AR HMD and configured to capture frames; an image processor configured to receive the captured frames, to identify a warped display image in a captured frame, and to form a warped original image based on the original image; and a comparison module configured to compare the warped display image with the warped original image and to determine whether an artifact is present. In some embodiments, the apparatus may further comprise wherein the image processor is configured to generate the warped original image by applying a transform to the original image. In some embodiments, the apparatus may further comprise wherein the transform is based on perspective information. In some embodiments, the apparatus may further comprise wherein the image processor obtains the perspective information by identifying corners of the warped display image. In some embodiments, the apparatus may further comprise wherein the artifact is an occluding object. In some embodiments, the apparatus may further comprise wherein the artifact is a shadow region. In some embodiments, the apparatus may further comprise wherein the artifact is an overlap between a virtual object displayed on the AR HMD and the warped display image. In some embodiments, the apparatus may further comprise a masking module configured to mask the region of the external display corresponding to the overlap with the virtual object.
In one embodiment, there is a method comprising: providing a video presentation on an external display; determining a portion of the video presentation that is occluded for a user; and displaying the occluded portion of the video presentation on the user's head-mounted display. In some embodiments, the method may further comprise wherein the occluded portion of the video presentation displayed on the head-mounted display is aligned, from the user's point of view, with the video presentation on the external display. In some embodiments, the method may further comprise wherein the external display is a television screen. In some embodiments, the method may further comprise producing the video presentation by modifying an original video according to a view transform. In some embodiments, the method may further comprise wherein the view transform is a model-view transform. In some embodiments, the method may further comprise wherein the view transform is based on viewport settings. In some embodiments, the method may further comprise wherein the view transform is based on user head tracking. In some embodiments, the method may further comprise wherein the view transform is based on a virtual environment model. In some embodiments, the method may further comprise wherein determining the occluded portion is accomplished at least in part using a forward-facing camera on the head-mounted display. In some embodiments, the method may further comprise: generating a virtual environment model of the user's environment; and determining the occluded portion based on the environment model. In some embodiments, the method may further comprise wherein the virtual environment model is generated at least in part using sensors on the head-mounted display. In some embodiments, the method may further comprise wherein the sensors include a device selected from the set comprising: an optical sensor, a magnetic sensor, a depth sensor, and an acoustic tracking sensor.
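As a sketch of how the occluded portion could be cut from the source video and re-displayed in alignment on the HMD: assuming an occlusion mask in display coordinates and a display-to-HMD-view homography derived from head tracking or the virtual environment model (both taken as given here), the missing patch can be warped into the HMD's view space:

```python
import cv2
import numpy as np

def hmd_patch_for_occlusion(video_frame, occlusion_mask, H_display_to_hmd, hmd_size):
    """Cut the occluded part out of the source video and warp it so it
    lines up, from the user's viewpoint, with the external display.

    occlusion_mask: uint8 binary mask in display coordinates.
    H_display_to_hmd: 3x3 homography from display space to HMD view space.
    hmd_size: (width, height) of the HMD view.
    """
    patch = cv2.bitwise_and(video_frame, video_frame, mask=occlusion_mask)
    aligned = cv2.warpPerspective(patch, H_display_to_hmd, hmd_size)
    alpha = cv2.warpPerspective(occlusion_mask, H_display_to_hmd, hmd_size)
    return aligned, alpha   # composite onto the see-through HMD using alpha
```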
In one embodiment, there is a method comprising: projecting a video presentation onto a surface; determining that a portion of the projected video presentation is in shadow; and displaying the shadowed portion of the video presentation on a user's head-mounted display. In some embodiments, the method may further comprise wherein the shadowed portion displayed on the head-mounted display is aligned, from the user's point of view, with the projected video presentation. In some embodiments, the method may further comprise blanking the projector in the region of the video presentation corresponding to the shadowed portion of the projected video presentation.
In one embodiment, there is a method comprising: providing a first part of a video presentation on a first external display and a second part of the video presentation on a second external display; and displaying a third part of the video presentation on a head-mounted display, wherein the third part is displayed in the space between the first and second external displays. In some embodiments, the method may further comprise wherein the third part of the video presentation is aligned, from the point of view of the user of the head-mounted display, with the first and second parts. In some embodiments, the method may further comprise wherein the first and second external displays are active stereoscopic displays. In some embodiments, the method may further comprise wherein the first and second external displays have respective first and second display frequencies. In some embodiments, the method may further comprise wherein the first and second display frequencies are different. In some embodiments, the method may further comprise detecting the first and second display frequencies using a camera on the head-mounted display. In some embodiments, the method may further comprise using the head-mounted display to independently block the first and second external displays for a first eye. In some embodiments, the method may further comprise alternately blocking the first and second external displays for the first eye and a second eye at the respective detected display frequencies. In some embodiments, the method may further comprise receiving the first and second display frequencies from first and second shutter timing synchronization signal transmitters, respectively.
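To illustrate per-eye blocking at two independent display frequencies, a toy scheduling loop is sketched below; block_region() is a hypothetical stand-in for the HMD's per-region opacity control, and the phase logic is a simplification (a real system would lock to the shutter synchronization signals rather than free-run):

```python
import time

def block_region(display_id, eye, opaque):
    """Hypothetical stand-in for the HMD's per-region shutter control."""
    pass

def run_shutters(f1, f2, duration_s=1.0):
    """Alternately blank, per eye, the HMD regions covering two
    active-stereo displays at detected frequencies f1 and f2 (Hz)."""
    t0 = time.monotonic()
    while time.monotonic() - t0 < duration_s:
        t = time.monotonic() - t0
        left_frame_1 = int(t * 2 * f1) % 2 == 0   # display 1: left-eye frame?
        left_frame_2 = int(t * 2 * f2) % 2 == 0   # display 2 runs independently
        block_region("display1", "left",  not left_frame_1)
        block_region("display1", "right", left_frame_1)
        block_region("display2", "left",  not left_frame_2)
        block_region("display2", "right", left_frame_2)
```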
In one embodiment, there is a method comprising: providing a first part of a video presentation on a first external display and a second part of the video presentation on a second external display; and displaying a third part of the video presentation on a head-mounted display, wherein the third part at least partially overlaps a region on the first or second external display. In some embodiments, the method may further comprise masking the overlap region. In some embodiments, the method may further comprise inverting the pixels in the overlap region. In some embodiments, the method may further comprise wherein masking comprises displaying a masking color in the overlap region. In some embodiments, the method may further comprise wherein the masking color is black.
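The masking variants above (masking color versus pixel inversion) amount to a simple per-pixel operation on the HMD image wherever it overlaps the external display. A minimal NumPy sketch:

```python
import numpy as np

def mask_overlap(hmd_image, overlap_mask, mode="black"):
    """Handle the region where HMD content overlaps an external display:
    paint it with the masking color (black) or invert the pixels."""
    out = hmd_image.copy()
    if mode == "black":
        out[overlap_mask > 0] = 0                          # masking color: black
    elif mode == "invert":
        out[overlap_mask > 0] = 255 - out[overlap_mask > 0]
    return out
```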
In one embodiment, there is a method comprising: capturing, using a forward-facing camera associated with a wearable augmented reality (AR) head-mounted display (HMD), an image of portions of first and second display devices in the environment, the first and second display devices showing first and second partial content of a related AR presentation; and displaying, on the AR HMD, third partial content related to the AR presentation, the third partial content being determined based on the image of the portions of the first and second display devices captured using the forward-facing camera. In some embodiments, the method may further comprise wherein determining the third partial content related to the AR presentation comprises determining partial content of the AR presentation associated with field-of-view portions not occupied by the first or second display devices.
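Determining the HMD-rendered third partial content as the field-of-view portions not occupied by either display can be expressed as a mask complement. A minimal sketch, assuming binary occupancy masks for the two captured display devices in HMD view space:

```python
import numpy as np

def third_part_mask(display1_mask, display2_mask):
    """Where the AR HMD itself should render: field-of-view portions not
    occupied by either captured display device (masks are uint8, same size)."""
    occupied = (display1_mask > 0) | (display2_mask > 0)
    return np.where(occupied, 0, 255).astype(np.uint8)
```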
In one embodiment, there is a method comprising: connecting to an external display; intercepting a 3D content set; splitting the 3D content set into a first virtual element and a second virtual element; displaying the first virtual element on the external display; and displaying the second virtual element on a local display. In some embodiments, the method may further comprise wherein the local display is an optical see-through augmented reality (AR) head-mounted display (HMD). In some embodiments, the method may further comprise wherein the external display is a television set. In some embodiments, the method may further comprise wherein the external display is a projection screen. In some embodiments, the method may further comprise detecting an occlusion area on the external display. In some embodiments, the method may further comprise wherein the occlusion area is a shadow cast on the projection screen. In some embodiments, the method may further comprise wherein the occlusion area is a region of a television screen occluded by an object. In some embodiments, the method may further comprise rendering, on the local display, the content blocked by the occlusion area. In some embodiments, the method may further comprise applying a view transform to at least one of the first and second virtual elements. In some embodiments, the method may further comprise wherein the view transform is a model-view transform. In some embodiments, the method may further comprise wherein the view transform is based on viewport settings. In some embodiments, the method may further comprise wherein the view transform is determined based on user head tracking. In some embodiments, the method may further comprise distributing the first and second virtual elements to the external display and the local display, respectively, based on natural eye accommodation. In some embodiments, the method may further comprise generating a virtual environment model that includes a virtual layout of the external display relative to the user. In some embodiments, the method may further comprise wherein the virtual environment model includes a relative external display distance. In some embodiments, the method may further comprise wherein the virtual environment model includes location information of one or more objects relative to the external display and the user. In some embodiments, the method may further comprise wherein occlusions are determined based on the virtual environment model. In some embodiments, the method may further comprise wherein the virtual environment model is computed based on sensor data from a depth sensor. In some embodiments, the method may further comprise detecting an overlap region between the local display and the external display. In some embodiments, the method may further comprise displaying, on the local display, the external display's content in the overlap region. In some embodiments, the method may further comprise rendering the overlap region of the external display with a masking color. In some embodiments, the method may further comprise detecting image artifacts on the external and/or local display. In some embodiments, the method may further comprise wherein the image artifact is an occlusion. In some embodiments, the method may further comprise wherein the image artifact is an overlap between the local and external displays. In some embodiments, the method may further comprise wherein the local display extends the display area of the external display. In some embodiments, the method may further comprise displaying a third virtual element, split from the 3D content, on a second external display. In some embodiments, the method may further comprise wherein the local display bridges the gap between the external display and the second external display. In some embodiments, the method may further comprise wherein the external display and the second external display are active stereoscopic displays, each having a respective display frequency. In some embodiments, the method may further comprise detecting the respective display frequencies using a camera attached to the local display. In some embodiments, the method may further comprise wherein the respective display frequencies are different. In some embodiments, the method may further comprise wherein the local display serves as shutter glasses, blocking a first eye at a first display frequency of the respective display frequencies for the external display, and blocking the first eye at a second display frequency of the respective display frequencies for the second external display. In some embodiments, the method may further comprise wherein the external display is an active stereoscopic display. In some embodiments, the method may further comprise wherein the local display blocks the user's first eye and displays the first and second virtual elements for the user's second eye. In some embodiments, the method may further comprise wherein blocking the first eye comprises blanking the portion of the local display that, as seen from the first eye, is occupied by the external display. In some embodiments, the method may further comprise displaying an RGB image on the local display for the first eye.
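The natural-eye-accommodation distribution mentioned above can be sketched as a depth-based partition: virtual elements whose depth roughly matches the physical distance of the external display (taken from the virtual environment model) are sent to it, and the rest are rendered on the see-through HMD. The element structure and threshold below are illustrative assumptions:

```python
def split_by_accommodation(scene_elements, display_distance_m, tolerance_m=0.5):
    """Partition intercepted 3D scene elements between an external display
    and a see-through local display. scene_elements is assumed to be a
    list of objects exposing a .depth_m attribute (virtual depth from
    the user in meters)."""
    external, local = [], []
    for e in scene_elements:
        if abs(e.depth_m - display_distance_m) <= tolerance_m:
            external.append(e)     # depth matches the physical screen
        else:
            local.append(e)        # render on the HMD to avoid accommodation conflict
    return external, local
```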
In one embodiment, there is an apparatus comprising: a streaming unit configured to stream a video presentation to an external display; a camera on a head-mounted display configured to determine a portion of the video presentation that is occluded for the user; and a transparent screen on the head-mounted display configured to display the occluded portion of the video presentation. In some embodiments, the apparatus may further comprise wherein the occluded portion of the video presentation displayed on the head-mounted display is aligned, from the user's point of view, with the video presentation on the external display. In some embodiments, the apparatus may further comprise wherein the external display is a television screen. In some embodiments, the apparatus may further comprise a conversion module configured to generate the video presentation by modifying an original video according to a view transform. In some embodiments, the apparatus may further comprise wherein the view transform is a model-view transform. In some embodiments, the apparatus may further comprise wherein the view transform is based on viewport settings. In some embodiments, the apparatus may further comprise wherein the view transform is based on user head tracking. In some embodiments, the apparatus may further comprise wherein the view transform is based on a virtual environment model. In some embodiments, the apparatus may further comprise a processor configured to: generate a virtual environment model of the user's environment; and determine the occluded portion based on the virtual environment model. In some embodiments, the apparatus may further comprise wherein the processor generates the virtual environment model at least partially in response to receiving sensor data from sensors on the head-mounted display. In some embodiments, the apparatus may further comprise wherein the sensors include a device selected from the set consisting of: an optical sensor, a magnetic sensor, a depth sensor, and an acoustic tracking sensor.
In one embodiment, there is a system comprising: a projector configured to project a video presentation onto a surface; a sensor configured to determine that a portion of the projected video presentation is in shadow; and a head-mounted display configured to display the shadowed portion of the video presentation. In some embodiments, the system may further comprise wherein the shadowed portion of the video presentation displayed on the head-mounted display is aligned, from the user's point of view, with the projected video presentation. In some embodiments, the system may further comprise wherein the projector is configured to blank the region of the video presentation corresponding to the shadowed portion of the projected video presentation. In some embodiments, the system may further comprise wherein the projector is mounted on the head-mounted display. In some embodiments, the system may further comprise wherein the projector is external to the head-mounted display.
In one embodiment, there is an apparatus comprising: a streaming module configured to display a first part of a video presentation on a first external display and a second part of the video presentation on a second external display; and a head-mounted display configured to display a third part of the video presentation, wherein the third part is displayed in the space between the first and second external displays. In some embodiments, the apparatus may further comprise wherein the third part of the video presentation is aligned, from the point of view of the user of the head-mounted display, with the first and second parts. In some embodiments, the apparatus may further comprise wherein the first and second external displays are active stereoscopic displays. In some embodiments, the apparatus may further comprise wherein the first and second external displays have respective first and second display frequencies. In some embodiments, the apparatus may further comprise a camera mounted on the head-mounted display and configured to detect the first and second display frequencies. In some embodiments, the apparatus may further comprise wherein the head-mounted display is a semi-transparent display configured to independently block, for a first eye and at the first and second display frequencies respectively, the regions of the semi-transparent display corresponding to the first and second external displays. In some embodiments, the apparatus may further comprise wherein the head-mounted display is configured to alternately block, for the first eye and a second eye and at the respective detected display frequencies, the regions of the semi-transparent display corresponding to the first and second external displays. In some embodiments, the apparatus may further comprise first and second shutter timing synchronization signal transmitters configured to provide the first and second display frequencies, respectively.
In one embodiment, there is an apparatus comprising: a streaming module configured to display a first part of a video presentation on a first external display and a second part of the video presentation on a second external display; and a head-mounted display configured to display a third part of the video presentation, wherein the third part at least partially overlaps a region on the first or second external display. In some embodiments, the apparatus may further comprise wherein the streaming module is configured to mask the overlap region. In some embodiments, the apparatus may further comprise wherein masking comprises inverting the pixels in the overlap region. In some embodiments, the apparatus may further comprise wherein masking comprises displaying a masking color in the overlap region. In some embodiments, the apparatus may further comprise wherein the masking color is black.
In one embodiment, there is an apparatus comprising: a forward-facing camera configured to capture an image of portions of first and second display devices in the environment, the first and second display devices showing first and second partial content related to an augmented reality (AR) presentation; and a head-mounted display configured to display third partial content related to the AR presentation, the third partial content being determined based on the image of the portions of the first and second display devices captured using the forward-facing camera. In some embodiments, the apparatus may further comprise wherein determining the third partial content related to the AR presentation comprises determining partial content of the AR presentation associated with field-of-view portions not occupied by the first or second display devices.
In one embodiment, there is an apparatus comprising: a network module configured to connect to an external display; a processor running a mobile application, the mobile application executing instructions comprising intercepting a 3D content set and splitting the 3D content set into a first virtual element and a second virtual element; a streaming module configured to display the first virtual element on the external display; and a head-mounted display configured to display the second virtual element. In some embodiments, the apparatus may further comprise wherein the head-mounted display is an optical see-through augmented reality (AR) head-mounted display (HMD). In some embodiments, the apparatus may further comprise wherein the external display is a television set. In some embodiments, the apparatus may further comprise wherein the external display is a projection screen. In some embodiments, the apparatus may further comprise a camera configured to detect an occlusion area on the external display. In some embodiments, the apparatus may further comprise wherein the occlusion area is a shadow cast on the projection screen. In some embodiments, the apparatus may further comprise wherein the occlusion area is a region of a television screen occluded by an object. In some embodiments, the apparatus may further comprise rendering the content blocked by the occlusion area on the head-mounted display. In some embodiments, the apparatus may further comprise wherein the mobile application is further configured to apply a view transform to at least one of the first and second virtual elements. In some embodiments, the apparatus may further comprise wherein the view transform is a model-view transform. In some embodiments, the apparatus may further comprise wherein the view transform is based on viewport settings. In some embodiments, the apparatus may further comprise wherein the view transform is determined based on user head tracking. In some embodiments, the apparatus may further comprise wherein the mobile application is further configured to distribute the first and second virtual elements to the external display and a local display, respectively, based on natural eye accommodation. In some embodiments, the apparatus may further comprise wherein the processor is configured to generate a virtual environment model that includes a virtual layout of the external display relative to the user. In some embodiments, the apparatus may further comprise wherein the virtual environment model includes a relative external display distance. In some embodiments, the apparatus may further comprise wherein the virtual environment model includes location information of one or more objects relative to the external display and the user. In some embodiments, the apparatus may further comprise a sensor configured to collect sensor data for determining the virtual environment model. In some embodiments, the apparatus may further comprise wherein the sensor is a device selected from the set consisting of: an optical sensor, a magnetic sensor, a depth sensor, and an acoustic tracking sensor. In some embodiments, the apparatus may further comprise a camera configured to detect an overlap region between the local display and the external display. In some embodiments, the apparatus may further comprise wherein the head-mounted display is configured to display the external display's content in the overlap region. In some embodiments, the apparatus may further comprise wherein the mobile application renders the overlap region of the external display with a masking color. In some embodiments, the apparatus may further comprise a camera configured to detect image artifacts on the external display and/or the head-mounted display. In some embodiments, the apparatus may further comprise wherein the image artifact is an occlusion. In some embodiments, the apparatus may further comprise wherein the image artifact is an overlap between the local and external displays. In some embodiments, the apparatus may further comprise wherein the local display extends the display area of the external display. In some embodiments, the apparatus may further comprise wherein the streaming module is configured to display a third virtual element, split from the 3D content, on a second external display. In some embodiments, the apparatus may further comprise wherein the head-mounted display is configured to display the second virtual element in the space between the external display and the second external display. In some embodiments, the apparatus may further comprise wherein the external display and the second external display are active stereoscopic displays, each having a respective display frequency. In some embodiments, the apparatus may further comprise a camera attached to the local display and configured to detect the respective display frequencies. In some embodiments, the apparatus may further comprise wherein the respective display frequencies are different. In some embodiments, the apparatus may further comprise wherein the head-mounted display is configured to block a first eye for the external display at a first frequency of the respective display frequencies, and to block the first eye for the second external display at a second frequency of the respective display frequencies. In some embodiments, the apparatus may further comprise wherein the external display is an active stereoscopic display. In some embodiments, the apparatus may further comprise wherein the head-mounted display is configured to block the user's first eye and to display the first and second virtual elements for the user's second eye. In some embodiments, the apparatus may further comprise wherein blocking the first eye comprises blanking the portion of the head-mounted display that, as seen from the first eye, is occupied by the external display. In some embodiments, the apparatus may further comprise instructions for displaying an RGB image on the local display for the first eye.

Claims (15)

1. A method, comprising:
capturing, using a forward-facing camera associated with an augmented reality (AR) head-mounted display (HMD), an image of a portion of a first screen;
responsively determining that the captured image includes first partial content displayed on the first screen, the first partial content being related to an AR presentation; and
configuring the AR HMD to generate images at a frequency associated with the display of the first partial content on the first screen, while the AR HMD also displays second partial content related to the AR presentation.
2. The method of claim 1, wherein the first partial content related to the AR presentation is displayed as alternating frames of a stereoscopic projection.
3. The method of any of claims 1-2, wherein the generated images act as alternating left and right shutters.
4. The method of any of claims 1-3, wherein left-eye images of the second partial content related to the AR presentation are displayed on the portion of the AR HMD associated with left-eye display, and images used to generate a shutter effect for the right eye are displayed on the portion of the AR HMD associated with right-eye display.
5. The method of any of claims 1-4, wherein right-eye images of the second partial content related to the AR presentation are displayed on the portion of the AR HMD associated with right-eye display, and images used to generate a shutter effect for the left eye are displayed on the portion of the AR HMD associated with left-eye display.
6. The method of any of claims 1-5, wherein synchronization is determined by analyzing a signal from the camera capturing images of the first screen.
7. The method of any of claims 1-6, further comprising detecting a frequency of the images of the first screen.
8. The method of claim 7, wherein the detection is performed by image analysis.
9. The method of claim 7, wherein the detection is performed by masking analysis.
10. The method of any of claims 1-9, further comprising:
capturing, using the forward-facing camera associated with the AR HMD, a second image of a portion of a second screen; and
determining that the captured second image includes third partial content displayed on the second screen, the third partial content being related to the AR presentation,
wherein the AR HMD determines the second partial content to be displayed by the AR HMD based on the captured image of the portion of the first screen and the captured second image of the portion of the second screen.
11. The method of claim 10, wherein the AR HMD determines the second partial content based on the portions of the content related to the AR presentation that are associated with field-of-view portions not occupied by the first screen or the second screen.
12. An apparatus, comprising:
a network module configured to connect to an external display;
a processor running a mobile application, the mobile application executing instructions comprising:
intercepting a 3D content set;
splitting the 3D content set into a first virtual element and a second virtual element;
a streaming module configured to display the first virtual element on the external display; and
a head-mounted display configured to display the second virtual element.
13. The apparatus of claim 12, wherein the streaming module is configured to display a third virtual element on a second external display, the third virtual element being split from the 3D content.
14. The apparatus of claim 12, wherein the external display and the second external display are active stereoscopic displays, each having a respective display frequency.
15. The apparatus of claim 14, wherein the head-mounted display is configured to function as shutter glasses, and wherein the head-mounted display is configured to block a first eye for the external display at a first display frequency of the respective display frequencies, and to block the first eye for the second external display at a second display frequency of the respective display frequencies.
CN201680058637.3A 2015-10-08 2016-09-30 Method and system for automatic calibration of dynamic display configuration Active CN108139803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110366934.7A CN113190111A (en) 2015-10-08 2016-09-30 Method and device

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US201562239143P 2015-10-08 2015-10-08
US62/239,143 2015-10-08
US201562260069P 2015-11-25 2015-11-25
US62/260,069 2015-11-25
US201562261029P 2015-11-30 2015-11-30
US62/261,029 2015-11-30
PCT/US2016/054931 WO2017062289A1 (en) 2015-10-08 2016-09-30 Methods and systems of automatic calibration for dynamic display configurations

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110366934.7A Division CN113190111A (en) 2015-10-08 2016-09-30 Method and device

Publications (2)

Publication Number Publication Date
CN108139803A true CN108139803A (en) 2018-06-08
CN108139803B CN108139803B (en) 2021-04-20

Family

ID=57218980

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110366934.7A Pending CN113190111A (en) 2015-10-08 2016-09-30 Method and device
CN201680058637.3A Active CN108139803B (en) 2015-10-08 2016-09-30 Method and system for automatic calibration of dynamic display configuration

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110366934.7A Pending CN113190111A (en) 2015-10-08 2016-09-30 Method and device

Country Status (4)

Country Link
US (4) US10545717B2 (en)
EP (3) EP3360029B1 (en)
CN (2) CN113190111A (en)
WO (1) WO2017062289A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109194942A (en) * 2018-11-13 2019-01-11 宁波视睿迪光电有限公司 A kind of naked eye 3D video broadcasting method, terminal and server
CN109408128A (en) * 2018-11-10 2019-03-01 歌尔科技有限公司 Split type AR equipment communication means and AR equipment
CN109739353A (en) * 2018-12-27 2019-05-10 重庆上丞科技有限公司 A kind of virtual reality interactive system identified based on gesture, voice, Eye-controlling focus
CN109769111A (en) * 2018-11-22 2019-05-17 利亚德光电股份有限公司 Image display method, device, system, storage medium and processor
CN109814823A (en) * 2018-12-28 2019-05-28 努比亚技术有限公司 3D mode switching method, double-sided screen terminal and computer readable storage medium
CN112306353A (en) * 2020-10-27 2021-02-02 北京京东方光电科技有限公司 Augmented reality device and interaction method thereof
CN113226499A (en) * 2019-01-11 2021-08-06 环球城市电影有限责任公司 Wearable visualization system and method
CN113557463A (en) * 2019-03-08 2021-10-26 Pcms控股公司 Optical method and system for display based on light beam with extended focal depth
CN113711175A (en) * 2019-09-26 2021-11-26 苹果公司 Wearable electronic device presenting a computer-generated real-world environment
US11187823B2 (en) * 2019-04-02 2021-11-30 Ascension Technology Corporation Correcting distortions
CN114128303A (en) * 2019-06-28 2022-03-01 Pcms控股公司 System and method for mixed format spatial data distribution and presentation
CN114207507A (en) * 2019-06-28 2022-03-18 Pcms控股公司 Optical methods and systems for Light Field (LF) displays based on tunable Liquid Crystal (LC) diffusers
CN114128303B (en) * 2019-06-28 2024-06-11 交互数字Vc控股公司 Method for obtaining 3D scene element and client device

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9704298B2 (en) * 2015-06-23 2017-07-11 Paofit Holdings Pte Ltd. Systems and methods for generating 360 degree mixed reality environments
EP3577896A4 (en) * 2017-02-03 2020-11-25 Warner Bros. Entertainment Inc. Rendering extended video in virtual reality
US10514546B2 (en) 2017-03-27 2019-12-24 Avegant Corp. Steerable high-resolution display
IL301087A (en) 2017-05-01 2023-05-01 Magic Leap Inc Matching content to a spatial 3d environment
US10607399B2 (en) * 2017-05-22 2020-03-31 Htc Corporation Head-mounted display system, method for adaptively adjusting hidden area mask, and computer readable medium
EP3531244A1 (en) * 2018-02-26 2019-08-28 Thomson Licensing Method, apparatus and system providing alternative reality environment
US11347466B2 (en) * 2017-08-14 2022-05-31 Imax Theatres International Limited Wireless content delivery for a tiled LED display
EP4030753A1 (en) * 2017-08-23 2022-07-20 InterDigital Madison Patent Holdings, SAS Light field image engine method and apparatus for generating projected 3d light fields
EP3704531B1 (en) 2017-11-02 2023-12-06 InterDigital Madison Patent Holdings, SAS Method and system for aperture expansion in light field displays
KR102029906B1 (en) * 2017-11-10 2019-11-08 전자부품연구원 Apparatus and method for providing virtual reality contents of moving means
US11451881B2 (en) * 2017-12-15 2022-09-20 Interdigital Madison Patent Holdings, Sas Method for using viewing paths in navigation of 360 degree videos
JP7196179B2 (en) 2017-12-22 2022-12-26 マジック リープ, インコーポレイテッド Method and system for managing and displaying virtual content in a mixed reality system
US10585294B2 (en) 2018-02-19 2020-03-10 Microsoft Technology Licensing, Llc Curved display on content in mixed reality
IL301443A (en) 2018-02-22 2023-05-01 Magic Leap Inc Browser for mixed reality systems
JP7139436B2 (en) 2018-02-22 2022-09-20 マジック リープ, インコーポレイテッド Object creation using physical manipulation
CN110554770A (en) * 2018-06-01 2019-12-10 苹果公司 Static shelter
WO2020017327A1 (en) * 2018-07-17 2020-01-23 ソニー株式会社 Head mount display and control method of head mount display, information processing device, display device, and program
US11355019B2 (en) * 2018-08-24 2022-06-07 AeroCine Ventures, Inc. Motion tracking interface for planning travel path
US10627565B1 (en) * 2018-09-06 2020-04-21 Facebook Technologies, Llc Waveguide-based display for artificial reality
US11209650B1 (en) 2018-09-06 2021-12-28 Facebook Technologies, Llc Waveguide based display with multiple coupling elements for artificial reality
US10567743B1 (en) * 2018-09-24 2020-02-18 Cae Inc. See-through based display method and system for simulators
US10567744B1 (en) * 2018-09-24 2020-02-18 Cae Inc. Camera-based display method and system for simulators
EP3857534A4 (en) * 2018-09-24 2021-12-08 CAE Inc. Camera based display method and system for simulators
EP3891546A4 (en) 2018-12-07 2022-08-24 Avegant Corp. Steerable positioning element
KR20240042166A (en) 2019-01-07 2024-04-01 아브간트 코포레이션 Control system and rendering pipeline
EP3912561B1 (en) * 2019-01-15 2024-04-10 FUJIFILM Corporation Ultrasonic system and method for controlling ultrasonic system
US10789780B1 (en) * 2019-03-29 2020-09-29 Konica Minolta Laboratory U.S.A., Inc. Eliminating a projected augmented reality display from an image
WO2020205784A1 (en) 2019-03-29 2020-10-08 Avegant Corp. Steerable hybrid display using a waveguide
JP7440532B2 (en) 2019-04-03 2024-02-28 マジック リープ, インコーポレイテッド Managing and displaying web pages in a virtual three-dimensional space using a mixed reality system
US11265487B2 (en) * 2019-06-05 2022-03-01 Mediatek Inc. Camera view synthesis on head-mounted display for virtual reality and augmented reality
CN114175627B (en) 2019-06-07 2024-04-12 交互数字Vc控股公司 Optical methods and systems for distributed aperture-based light field displays
US11508131B1 (en) * 2019-11-08 2022-11-22 Tanzle, Inc. Generating composite stereoscopic images
FR3104743B1 (en) * 2019-12-17 2022-07-15 Orange Portable 3D content display device, system and method thereof.
CN113010125B (en) 2019-12-20 2024-03-19 托比股份公司 Method, computer program product, and binocular headset controller
KR20220120615A (en) 2020-01-06 2022-08-30 아브간트 코포레이션 Head-mounted system with color-specific modulation
WO2021182124A1 (en) * 2020-03-10 2021-09-16 ソニーグループ株式会社 Information processing device and information processing method
WO2021239223A1 (en) * 2020-05-27 2021-12-02 Telefonaktiebolaget Lm Ericsson (Publ) Method and device for controlling display of content
GB2598927B (en) * 2020-09-18 2024-02-28 Sony Interactive Entertainment Inc Apparatus and method for data aggregation
CN113189776B (en) * 2021-04-25 2022-09-20 歌尔股份有限公司 Calibration system, calibration method and calibration device for augmented reality equipment
DE102021206565A1 (en) 2021-06-24 2022-12-29 Siemens Healthcare Gmbh Display device for displaying a graphical representation of an augmented reality
US11580734B1 (en) * 2021-07-26 2023-02-14 At&T Intellectual Property I, L.P. Distinguishing real from virtual objects in immersive reality
US20240070959A1 (en) * 2022-08-25 2024-02-29 Acer Incorporated Method and computer device for 3d scene generation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120127284A1 (en) * 2010-11-18 2012-05-24 Avi Bar-Zeev Head-mounted display device which provides surround video
WO2012147702A1 (en) * 2011-04-28 2012-11-01 シャープ株式会社 Head-mounted display
JP2014170374A (en) * 2013-03-04 2014-09-18 Kddi Corp Ar system employing optical see-through type hmd
CN104238119A (en) * 2013-06-12 2014-12-24 精工爱普生株式会社 Head-mounted display device and control method of head-mounted display device
CN104380347A (en) * 2012-06-29 2015-02-25 索尼电脑娱乐公司 Video processing device, video processing method, and video processing system
CN104508538A (en) * 2012-07-24 2015-04-08 索尼公司 Image display device and image display method

Family Cites Families (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS60126945A (en) 1983-12-14 1985-07-06 Hitachi Ltd Polling system
US5086354A (en) 1989-02-27 1992-02-04 Bass Robert E Three dimensional optical viewing system
US5633993A (en) 1993-02-10 1997-05-27 The Walt Disney Company Method and apparatus for providing a virtual world sound system
US5956180A (en) 1996-12-31 1999-09-21 Bass; Robert Optical viewing system for asynchronous overlaid images
ATE500532T1 (en) 1998-02-20 2011-03-15 Puredepth Ltd MULTI-LAYER DISPLAY DEVICE AND METHOD FOR DISPLAYING IMAGES ON SUCH A DISPLAY DEVICE
US6525699B1 (en) 1998-05-21 2003-02-25 Nippon Telegraph And Telephone Corporation Three-dimensional representation method and an apparatus thereof
US20040004623A1 (en) 1998-12-11 2004-01-08 Intel Corporation Apparatus, systems, and methods to control image transparency
US7342721B2 (en) 1999-12-08 2008-03-11 Iz3D Llc Composite dual LCD panel display suitable for three dimensional imaging
US20010053996A1 (en) 2000-01-06 2001-12-20 Atkinson Paul D. System and method for distributing and controlling the output of media in public spaces
US7133083B2 (en) 2001-12-07 2006-11-07 University Of Kentucky Research Foundation Dynamic shadow removal from front projection displays
JP3880561B2 (en) 2002-09-05 2007-02-14 株式会社ソニー・コンピュータエンタテインメント Display system
SE530896C2 (en) * 2005-04-01 2008-10-14 Diaspect Medical Ab Device for determining a blood hemoglobin concentration
US7533349B2 (en) 2006-06-09 2009-05-12 Microsoft Corporation Dragging and dropping objects between local and remote modules
JP4270264B2 (en) 2006-11-01 2009-05-27 セイコーエプソン株式会社 Image correction apparatus, projection system, image correction method, image correction program, and recording medium
US8269822B2 (en) 2007-04-03 2012-09-18 Sony Computer Entertainment America, LLC Display viewing system and methods for optimizing display view based on active tracking
US9423995B2 (en) 2007-05-23 2016-08-23 Google Technology Holdings LLC Method and apparatus for re-sizing an active area of a flexible display
US20080320126A1 (en) 2007-06-25 2008-12-25 Microsoft Corporation Environment sensing for interactive entertainment
US8203577B2 (en) * 2007-09-25 2012-06-19 Microsoft Corporation Proximity based computer display
US7953462B2 (en) 2008-08-04 2011-05-31 Vartanian Harry Apparatus and method for providing an adaptively responsive flexible display device
US20100053151A1 (en) 2008-09-02 2010-03-04 Samsung Electronics Co., Ltd In-line mediation for manipulating three-dimensional content on a display device
US20100321275A1 (en) 2009-06-18 2010-12-23 Microsoft Corporation Multiple display computing device with position-based operating modes
US20100328447A1 (en) 2009-06-26 2010-12-30 Sony Computer Entertainment, Inc. Configuration of display and audio parameters for computer graphics rendering system having multiple displays
US8684531B2 (en) * 2009-12-28 2014-04-01 Vision3D Technologies, Llc Stereoscopic display device projecting parallax image and adjusting amount of parallax
US8384774B2 (en) * 2010-02-15 2013-02-26 Eastman Kodak Company Glasses for viewing stereo images
US20130038702A1 (en) 2010-03-09 2013-02-14 Imax Corporation System, method, and computer program product for performing actions based on received input in a theater environment
US20110221962A1 (en) 2010-03-10 2011-09-15 Microsoft Corporation Augmented reality via a secondary channel
US8730354B2 (en) 2010-07-13 2014-05-20 Sony Computer Entertainment Inc Overlay video content on a mobile device
EP2605413B1 (en) 2010-08-13 2018-10-10 LG Electronics Inc. Mobile terminal, system comprising the mobile terminal and a display device, and control method therefor
US9946076B2 (en) 2010-10-04 2018-04-17 Gerard Dirk Smits System and method for 3-D projection and enhancements for interactivity
US9035939B2 (en) 2010-10-04 2015-05-19 Qualcomm Incorporated 3D video control system to adjust 3D video rendering based on user preferences
US20120086630A1 (en) 2010-10-12 2012-04-12 Sony Computer Entertainment Inc. Using a portable gaming device to record or modify a game or application in real-time running on a home gaming system
KR20120064557A (en) * 2010-12-09 2012-06-19 한국전자통신연구원 Mixed reality display platform for presenting augmented 3d stereo image and operation method thereof
US9367224B2 (en) 2011-04-29 2016-06-14 Avaya Inc. Method and apparatus for allowing drag-and-drop operations across the shared borders of adjacent touch screen-equipped devices
WO2013048221A2 (en) * 2011-09-30 2013-04-04 Lee Moon Key Image processing system based on stereo image
US8711091B2 (en) 2011-10-14 2014-04-29 Lenovo (Singapore) Pte. Ltd. Automatic logical position adjustment of multiple screens
TW201328323A (en) * 2011-12-20 2013-07-01 Novatek Microelectronics Corp Shutter glasses, three-dimensional video system and shutter glasses control method
JP5832666B2 (en) 2011-12-20 2015-12-16 インテル・コーポレーション Augmented reality representation across multiple devices
US8479226B1 (en) * 2012-02-21 2013-07-02 The Nielsen Company (Us), Llc Methods and apparatus to identify exposure to 3D media presentations
JP5483761B2 (en) * 2012-06-29 2014-05-07 株式会社ソニー・コンピュータエンタテインメント Video output device, stereoscopic video observation device, video presentation system, and video output method
KR101861380B1 (en) 2012-07-16 2018-05-28 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 A Method of Providing Contents Using Head Mounted Display and a Head Mounted Display Thereof
US9833698B2 (en) 2012-09-19 2017-12-05 Disney Enterprises, Inc. Immersive storytelling environment
GB2506203B (en) 2012-09-25 2016-12-14 Jaguar Land Rover Ltd Method of interacting with a simulated object
US9323057B2 (en) 2012-12-07 2016-04-26 Blackberry Limited Mobile device, system and method for controlling a heads-up display
CN105103541B (en) 2013-02-19 2017-03-08 宜客斯股份有限公司 Pattern position detection method, pattern position detecting system and apply these image quality adjustment technology
US8988343B2 (en) 2013-03-29 2015-03-24 Panasonic Intellectual Property Management Co., Ltd. Method of automatically forming one three-dimensional space with multiple screens
KR20140130321A (en) * 2013-04-30 2014-11-10 (주)세이엔 Wearable electronic device and method for controlling the same
KR102077105B1 (en) * 2013-09-03 2020-02-13 한국전자통신연구원 Apparatus and method for designing display for user interaction in the near-body space
KR101510340B1 (en) 2013-10-14 2015-04-07 현대자동차 주식회사 Wearable computer
US9691181B2 (en) 2014-02-24 2017-06-27 Sony Interactive Entertainment Inc. Methods and systems for social sharing head mounted display (HMD) content with a second screen
WO2016003165A1 (en) 2014-07-01 2016-01-07 엘지전자 주식회사 Method and apparatus for processing broadcast data by using external device
CN104299249B (en) * 2014-08-20 2016-02-24 深圳大学 The monumented point coding/decoding method of high robust and system
US10445798B2 (en) 2014-09-12 2019-10-15 Onu, Llc Systems and computer-readable medium for configurable online 3D catalog
US9584915B2 (en) 2015-01-19 2017-02-28 Microsoft Technology Licensing, Llc Spatial audio with remote speakers
US10019849B2 (en) 2016-07-29 2018-07-10 Zspace, Inc. Personal electronic device with a display system
US10380762B2 (en) 2016-10-07 2019-08-13 Vangogh Imaging, Inc. Real-time remote collaboration and virtual presence using simultaneous localization and mapping to construct a 3D model and update a scene based on sparse data
US10522113B2 (en) 2017-12-29 2019-12-31 Intel Corporation Light field displays having synergistic data formatting, re-projection, foveation, tile binning and image warping technology

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120127284A1 (en) * 2010-11-18 2012-05-24 Avi Bar-Zeev Head-mounted display device which provides surround video
WO2012147702A1 (en) * 2011-04-28 2012-11-01 シャープ株式会社 Head-mounted display
CN104380347A (en) * 2012-06-29 2015-02-25 Sony Computer Entertainment Inc. Video processing device, video processing method, and video processing system
CN104508538A (en) * 2012-07-24 2015-04-08 Sony Corporation Image display device and image display method
JP2014170374A (en) * 2013-03-04 2014-09-18 KDDI Corp. AR system employing optical see-through type HMD
CN104238119A (en) * 2013-06-12 2014-12-24 Seiko Epson Corporation Head-mounted display device and control method of head-mounted display device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MICHAEL FIGL et al.: "The Control Unit for a Head Mounted Operating Microscope Used for Augmented Reality Visualization in Computer Aided Surgery", ISMAR '02: Proceedings of the 1st International Symposium on Mixed and Augmented Reality *
WANG Yongtian et al.: "An Outdoor Augmented Reality System Blending the Real and the Illusory: Digital Reconstruction of Yuanmingyuan", Bulletin of National Natural Science Foundation of China *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109408128A (en) * 2018-11-10 2019-03-01 Goertek Technology Co., Ltd. Split-type AR device communication method and AR device
CN109194942A (en) * 2018-11-13 2019-01-11 Ningbo Shiruidi Optoelectronics Co., Ltd. Naked-eye 3D video playing method, terminal and server
CN109194942B (en) * 2018-11-13 2020-08-11 Ningbo Shiruidi Optoelectronics Co., Ltd. Naked-eye 3D video playing method, terminal and server
CN109769111A (en) * 2018-11-22 2019-05-17 Leyard Optoelectronic Co., Ltd. Image display method, device, system, storage medium and processor
CN109739353A (en) * 2018-12-27 2019-05-10 Chongqing Shangcheng Technology Co., Ltd. Virtual reality interaction system based on gesture, voice, and gaze-tracking recognition
CN109814823A (en) * 2018-12-28 2019-05-28 Nubia Technology Co., Ltd. 3D mode switching method, double-sided screen terminal and computer readable storage medium
CN113226499B (en) * 2019-01-11 2023-09-26 Universal City Studios LLC Wearable visualization system and method
CN113226499A (en) * 2019-01-11 2021-08-06 Universal City Studios LLC Wearable visualization system and method
CN113557463A (en) * 2019-03-08 2021-10-26 PCMS Holdings, Inc. Optical method and system for displays based on light beams with extended depth of focus
US11187823B2 (en) * 2019-04-02 2021-11-30 Ascension Technology Corporation Correcting distortions
CN114128303A (en) * 2019-06-28 2022-03-01 PCMS Holdings, Inc. System and method for hybrid format spatial data distribution and rendering
CN114207507A (en) * 2019-06-28 2022-03-18 PCMS Holdings, Inc. Optical methods and systems for Light Field (LF) displays based on tunable Liquid Crystal (LC) diffusers
US11900532B2 (en) 2019-06-28 2024-02-13 Interdigital Vc Holdings, Inc. System and method for hybrid format spatial data distribution and rendering
CN114128303B (en) * 2019-06-28 2024-06-11 InterDigital VC Holdings, Inc. Method for obtaining 3D scene elements and client device
CN113711175A (en) * 2019-09-26 2021-11-26 Apple Inc. Wearable electronic device presenting a computer-generated reality environment
CN112306353B (en) * 2020-10-27 2022-06-24 Beijing BOE Optoelectronics Technology Co., Ltd. Augmented reality device and interaction method thereof
CN112306353A (en) * 2020-10-27 2021-02-02 Beijing BOE Optoelectronics Technology Co., Ltd. Augmented reality device and interaction method thereof

Also Published As

Publication number Publication date
EP3360029B1 (en) 2019-11-13
US20230119930A1 (en) 2023-04-20
US11544031B2 (en) 2023-01-03
CN108139803B (en) 2021-04-20
EP3629136B1 (en) 2024-04-17
CN113190111A (en) 2021-07-30
US20240103795A1 (en) 2024-03-28
WO2017062289A1 (en) 2017-04-13
US11868675B2 (en) 2024-01-09
US20200089458A1 (en) 2020-03-19
US10545717B2 (en) 2020-01-28
EP3360029A1 (en) 2018-08-15
US20180293041A1 (en) 2018-10-11
EP4380150A2 (en) 2024-06-05
EP3629136A1 (en) 2020-04-01

Similar Documents

Publication Publication Date Title
CN108139803A (en) Method and system for automatic calibration of a dynamic display configuration
US20230245395A1 (en) Re-creation of virtual environment through a video call
US10019831B2 (en) Integrating real world conditions into virtual imagery
EP3242274B1 (en) Method and device for displaying three-dimensional objects
CN108830939B (en) Scene roaming experience method and experience system based on mixed reality
CN109565567A (en) Three-dimensional telepresence system
CN108292489A (en) Information processing unit and image generating method
CN109863533A (en) Virtual, augmented, and mixed reality systems and methods
CN108351691A (en) Remote rendering for virtual images
JP7135123B2 (en) Virtual/augmented reality system with dynamic domain resolution
CN106569044B (en) Electromagnetic spectrum situation observation method based on an immersive virtual reality system
CN105432078A (en) Real-time registration of a stereo depth camera array
CN110537208A (en) Head-mounted display and method
CN108509173A (en) Image display system and method, storage medium, and processor
CN108205823A (en) MR holographic vacuum experience shop and experience method
CN109714588A (en) Multi-viewpoint stereoscopic image positioning output method, apparatus, device, and storage medium
CN102316333A (en) Display system and prompting system
CN112367516B (en) Three-dimensional display system based on spatial positioning
CN113875230A (en) Mixed-mode three-dimensional display system and method
US11310472B2 (en) Information processing device and image generation method for projecting a subject image onto a virtual screen
WO2024040430A1 (en) Method and apparatus to extend field of view of an augmented reality device
US20240169568A1 (en) Method, device, and computer program product for room layout
Zanaty et al. 3D visualization for Intelligent Space: Time-delay compensation in a remote controlled environment
CN116582661B (en) Mixed mode three-dimensional display system and method
CN111782063A (en) Real-time display method and system, computer-readable storage medium, and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230919

Address after: Delaware, USA

Patentee after: InterDigital VC Holdings, Inc.

Address before: Wilmington, Delaware, USA

Patentee before: PCMS HOLDINGS, Inc.