WO2019038520A1 - Compressing image data for transmission to a display of a wearable headset based on information on blinking of the eye - Google Patents
- Publication number
- WO2019038520A1 (PCT/GB2018/052325)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- blink
- blinking
- data
- display data
- wearable headset
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0093—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/325—Power saving in peripheral device
- G06F1/3265—Power saving in display device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/115—Selection of the code volume for a coding unit prior to coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/162—User input
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/167—Position within a video image, e.g. region of interest [ROI]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/172—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/4104—Peripherals receiving signals from specially adapted client devices
- H04N21/4122—Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
- H04N21/4363—Adapting the video stream to a specific local network, e.g. a Bluetooth® network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/466—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/4667—Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/637—Control signals issued by the client directed to the server or network components
- H04N21/6373—Control signals issued by the client directed to the server or network components for rate control, e.g. request to the server to modify its transmission rate
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0179—Display position adjusting means not related to the information to be displayed
- G02B2027/0187—Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
Definitions
- the invention provides a method for adapting display data forming an image for display on one or more displays of a wearable headset to a viewer, the method comprising: monitoring an eye of the viewer of the image to provide information enabling blinking of the eye to be determined; analysing the information regarding the monitored eye to determine blinking data comprising one or more of: an onset of a blink, a start of a blink, a duration of a blink, an end of a blink, frequency of blinking, and a repetition pattern of blinking; compressing at least some of the display data at a predetermined level based on the blinking data; and sending the display data compressed at the predetermined level for display on the one or more displays. Compressing at least some of the display data may, in some embodiments, comprise utilising different compression levels based on the blinking data.
- the monitoring and the analysing are carried out at the wearable headset and the compressing and sending are carried out by a host computer, the analysis being sent from the wearable headset to the host computer and the compressed display data being sent to the wearable headset.
- the monitoring is carried out at the wearable headset and the analysing, compressing and sending are carried out by a host computer, the information being sent from the wearable headset to the host computer and the compressed display data being sent to the wearable headset.
- the predetermined level of compression of the display data is a higher level of compression than a normal level of compression, whereby the quality of the image displayed on the one or more displays is reduced from a normal level.
- compressing the display data at the predetermined level of compression starts when the blinking data indicates: that an onset of a particular blink has commenced; or that a particular blink has started; or a prediction of an onset of a blink about to commence; or a prediction of a blink about to start and the compressing at the predetermined level of compression ends after: a first predetermined time after the onset or the predicted onset of the particular blink; or a second predetermined time after the start or the predicted start of the particular blink; or a third predetermined time after the particular blink has ended; or a fourth predetermined time after a predicted duration of the particular blink, wherein the predictions are based on the frequency and/or repetition pattern and/or durations of previous blinks.
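The start and end conditions for the elevated-compression interval described above can be sketched as a small timing function. This is a minimal Python sketch; the function name, the millisecond units and the post-blink allowance are illustrative assumptions, not figures taken from the patent.

```python
def compression_window(blink_start_ms, blink_duration_ms, post_blink_ms=100):
    """Return (start_ms, end_ms) of the interval during which display data
    is compressed at the elevated level: from the detected or predicted
    start of the blink until a predetermined time after it ends, covering
    the period in which the viewer cannot perceive the image clearly.
    post_blink_ms is an assumed figure for illustration only."""
    return blink_start_ms, blink_start_ms + blink_duration_ms + post_blink_ms
```

The same window shape applies whether the blink times are measured directly or predicted from the frequency, repetition pattern and durations of previous blinks.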
- the invention provides a method for adapting display data forming an image for display on one or more displays of a wearable headset to a viewer, the method comprising: monitoring an eye of the viewer of the image to provide information enabling blinking of the eye to be determined; analysing the information regarding the monitored eye to determine blinking data comprising one or more of: an onset of a blink, a start of a blink, a duration of a blink, an end of a blink, frequency of blinking, and a repetition pattern of blinking; freezing at least some of the display data being displayed on the one or more displays for a predetermined time based on the blinking data.
- the freezing at least some of the display data being displayed on the one or more displays comprises repeating a last frame that was displayed, without updating it.
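Repeating the last displayed frame without updating it can be sketched as a simple transmission loop. This Python sketch is illustrative only; the frame payloads and the per-frame freeze flags are assumptions.

```python
def frames_to_send(frames, frozen):
    """Decide, frame by frame, whether new display data must be sent.

    frames: list of frame payloads; frozen: list of booleans, one per
    frame, True while a blink freeze is in effect. While frozen, the
    headset repeats the last frame it received and nothing is sent.
    Returns (sent, displayed)."""
    sent, displayed, last = [], [], None
    for frame, freeze in zip(frames, frozen):
        if freeze and last is not None:
            displayed.append(last)      # repeat previous frame, no transmission
        else:
            sent.append(frame)          # transmit and display the new frame
            displayed.append(frame)
            last = frame
    return sent, displayed
```

The bandwidth saved during the frozen frames is what becomes available for the other data transfers discussed later in the description.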
- the monitoring and the analysing are carried out at the wearable headset and the freezing is carried out by a host computer, the blinking data being sent from the wearable headset to the host computer and the host computer sending instructions to the wearable headset so that at least some of the display data being displayed on the one or more displays is frozen.
- the monitoring is carried out at the wearable headset and the analysing and the freezing are carried out by a host computer, the information being sent from the wearable headset to the host computer and the host computer sending instructions to the wearable headset so that at least some of the display data being displayed on the one or more displays is frozen.
- the freezing of at least some of the display data on the one or more displays starts when the blinking data indicates: that an onset of a particular blink has commenced; or that a particular blink has started; or a prediction of an onset of a blink about to commence; or a prediction of a blink about to start and the freezing of at least some of the display data on the one or more displays ends after: a first predetermined time after the onset or the predicted onset of the particular blink; or a second predetermined time after the start or the predicted start of the particular blink; or a third predetermined time after the particular blink has ended; or a fourth predetermined time after a predicted duration of the particular blink, wherein the predictions are based on the frequency and/or repetition pattern and/or durations of previous blinks.
- the predetermined time is preferably based on a predetermined model of a human eye.
- the invention provides a system comprising a host computer and a wearable headset, the system configured to perform all the steps of the method described above.
- the monitoring of the eye of the viewer is performed by a detector in the wearable headset, optionally wherein the detector forms part of an eye-tracking mechanism.
- the wearable headset may be a virtual reality headset or an augmented reality set of glasses. The above-described methods are advantageous as they allow assumptions regarding user blinking to be used to increase compression, thereby allowing limited bandwidth to be used to send other data while the image data is more compressed than normal (or not sent at all).
- Figure 1 shows a basic overview of a system according to one embodiment of the invention;
- Figure 2 shows a schematic view of a VR headset in use in the system of Figure 1; and
- Figure 3 shows a flow diagram of a method used by the system of Figure 1.
- FIG. 1 is a block diagram showing a basic overview of a display system arranged according to the invention.
- a host [11] is connected by connection [16] to a virtual-reality headset [12].
- This connection [16] may be wired or wireless, and there may be multiple connection channels, or there may be a single bidirectional connection which is used for multiple purposes.
- the connection [16] is assumed to be a general-purpose wireless connection, such as one using the Universal Serial Bus (USB) protocol, although other appropriate protocols could, of course, be used.
- the host [11] incorporates, among other components, a processor [13] running an application which generates frames of display data using a graphics processing unit (GPU) on the host [11]. These frames are then transmitted to a compression engine [14], which carries out compression on the display data to reduce its volume prior to transmission.
- the transmission itself is carried out by an output engine [15], which controls the connection to the headset [12] and may include display and wireless driver software.
- the headset [12] incorporates an input engine [17] for receiving the transmitted display data, which also controls the connection [16] to the host [11] as appropriate.
- the input engine [17] is connected to a decompression engine [18], which decompresses the received display data as appropriate.
- the decompression engine [18] is in turn connected to two display panels [19], one of which is presented to each of a user's eyes when the headset [12] is in use.
- When the display data has been decompressed, it is transmitted to the display panels [19] for display, possibly via frame or flow buffers to account for any unevenness in the rate of decompression.
- the headset [12] also incorporates blinking sensors [110] which are used to monitor one or both eyes of the user of the headset [12] to enable detection of blinking of the eye or eyes.
- the sensors [110] could, for example, be a camera which monitors the eye or eyes so that a determination of whether it is open or closed, and for how long it is closed, may be made, or the sensor could be part of an eye tracking system that tracks movement of the eye and/or what the eye is focussed on.
- the sensor could be arranged next to each of the display panels [19], as shown, or could be separate. In any case, the or each sensor [110] is connected to an output engine [120] which is connected to the host [11] and transmits information back to the host [11] to control the operation of applications on the host [11].
- the sensed data can either be sent directly back to the host for analysis, or some analysis can be carried out at the headset [12] by the output engine [120] or a processor, which may also implement the decompression engine [18].
- the sensors [110] provide information to the compression engine [14], via the output engine [120].
- the sensors [110] provide information to a processor on the headset [12], such as decompression engine [18].
- Figure 2 shows a generalised view of the system of Figure 1 when it is in use.
- the headset [12] incorporates two display panels [19], each presented to one of the eyes [24] of a user, when in use. These two panels may in fact be a single display panel which shows two images, but this will not affect the operation of these embodiments of the invention. Only one display panel [19] and eye [24] are shown in Figure 2.
- Mounted on the display panel [19] is the sensor [110], which is used to monitor the eye [24] to enable blinking of the eye to be determined.
- the sensor [110] could be a camera that simply provides images of the eye [24] to enable another component to determine whether the eye is open or shut and other factors, such as how long the eye is closed, to determine the duration of a blink.
- the sensor [110] itself may have more processing capability, for example if it is part of an eye tracking system, and may be able to make these determinations itself.
- other blinking data may also be determined. For example, it is possible that an onset of a blink may be determined from other physiological changes at or adjacent to the eye just before a blink occurs. Frequency of blinking may also be determined, as well as patterns of repetition of blinking.
- the blinking data can then be used to predict when the user is next likely to blink, for example, from a determination of an onset of the blink or from a determination of the frequency of blinking or of a repetition pattern of blinking. This prediction is then used by the host [11] to determine a level of compression to reduce the amount of data that needs to be transmitted to the headset. This is based on the recognition that while the eye is blinking, and for a short period thereafter, a user of the headset has reduced visibility and cognisance of the image on the display panel, due to the eye being closed, initially, and then requiring refocusing and understanding of what is being seen. Thus, for this period, the data for all parts of the display panel [19] can be compressed to a greater or lesser extent, or even frozen altogether.
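One simple way to predict when the user is next likely to blink from the frequency or repetition pattern, as described above, is to extrapolate the mean inter-blink interval. This is a minimal Python sketch under that assumption; the function name and interface are inventions for illustration, not taken from the patent.

```python
from statistics import mean

def predict_next_blink(blink_start_times):
    """Predict the start time of the next blink from the mean
    inter-blink interval of the observed history.

    blink_start_times: timestamps (seconds) of previous blink starts,
    in ascending order. Returns None until two blinks have been seen,
    since no interval can be formed from a single blink."""
    if len(blink_start_times) < 2:
        return None
    intervals = [b - a for a, b in zip(blink_start_times, blink_start_times[1:])]
    return blink_start_times[-1] + mean(intervals)
```

A real implementation might weight recent intervals more heavily or detect onset cues directly, but the extrapolation above captures the frequency-based prediction the text describes.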
- the duration and level of compression may be determined based on the predicted blink and/or on the image data being displayed at the time. For example, if it is determined that a user has relatively long blinks, then the period during which the image data is compressed may be increased and the level of compression may be particularly high at the beginning and during the actual blink, but may decrease towards the end of the blink or during the period immediately thereafter. If it is predicted that the blink is likely to be relatively short, then a lower level of compression may be appropriate. Furthermore, if the image being displayed is relatively static or dark then an increased level of compression may be selected.
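The varying compression schedule described here, particularly high during the blink itself and lower in the period immediately afterwards, can be sketched as a piecewise function of time. The numeric levels and the recovery time in this Python sketch are illustrative assumptions only.

```python
def compression_level(t_ms, blink_start_ms, blink_duration_ms,
                      recovery_ms=100, normal=1, high=8, medium=4):
    """Compression level at time t_ms relative to one blink:
    `high` while the eye is closed, `medium` during the recovery
    period immediately afterwards while the viewer refocuses, and
    `normal` otherwise. Levels are arbitrary illustrative ratios."""
    blink_end = blink_start_ms + blink_duration_ms
    if blink_start_ms <= t_ms < blink_end:
        return high
    if blink_end <= t_ms < blink_end + recovery_ms:
        return medium
    return normal
```

A static or dark image, as the text notes, would justify raising these levels further for the same window.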
- the image may be completely frozen, whereby the same frame of the image is repeated, with no new image data needing to be sent. This may be determined at the headset, which can then repeat the previous frame and tell the host that it does not need to send any data for the particular duration, or it may be determined by the host, which can send sufficient instructions to enable the headset to repeat the image frame either for a particular period of time or until new data is received.
- If the image is fast moving, it may be inappropriate to freeze the image completely, as the judder when the viewer next sees the image and notices that a fast-moving object is at a different location may be undesirable.
- the level of compression, which can include "complete" compression where no new image data is sent, can be determined based on the prediction of the blink duration and the predicted period thereafter when the viewer cannot clearly view and understand the image. Since less image data is sent during this period, it will be apparent that other data may be sent at this time. This allows more data to be transmitted through a limited bandwidth connection to the headset, since constant good-quality images are not required for this period.
- the other data that is sent could be used to fill in 3D images, render more computer-generated images, or send data to a buffer in advance of a requirement for the data in case of connection problems, or to facilitate higher frame rates.
- FIG. 3 shows the process for determining compression. It will be described with reference to the example shown in Figure 2.
- the sensor [110] monitors an eye of the user.
- the sensed information is then sent, in step S102, to a processor.
- the processor may be on the host [11] or may be on the headset [12], for example, forming the decompression engine [18] or as part of the sensor [110] itself.
- the processor then analyses the information to determine blinking data.
- the analysis may include a determination of the times when a blink starts and stops, leading to a duration of the blink.
- the analysis may include a determination of an onset of a blink.
- This may be based on a model of the human eye and the physiological signs that a blink is about to start, and/or on a history of monitoring the eye of the viewer and recognizing one or more signs that occur just before a blink occurred.
- the analysis may further determine a frequency of blinking and/or a pattern of repetition of blinking.
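The analysis steps above, determining when a blink starts and stops and hence its duration, can be sketched as a pass over periodic eye-open samples. The sampling representation in this Python sketch is an assumption for illustration; a camera-based sensor would supply such samples after per-frame open/closed classification.

```python
def extract_blinks(samples, period_ms):
    """Derive blink intervals from a sequence of eye-open samples.

    samples: booleans, True = eye open, taken every period_ms
    milliseconds. Returns a list of (start_ms, duration_ms) tuples,
    one per detected blink; a blink still in progress at the end of
    the window is closed off at the window boundary."""
    blinks, start = [], None
    for i, eye_open in enumerate(samples):
        if not eye_open and start is None:
            start = i * period_ms               # eye just closed: blink begins
        elif eye_open and start is not None:
            blinks.append((start, i * period_ms - start))  # blink ended
            start = None
    if start is not None:
        blinks.append((start, len(samples) * period_ms - start))
    return blinks
```

Blink frequency and repetition patterns follow directly from the start times in the returned list.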
- the blinking data can then be used to predict when a blink is about to happen, so that image data can be appropriately compressed (or not sent at all) by the host for the predicted period of the blink and for a period of time thereafter, which may be predicted based on the model of the human eye, or on a history of the monitoring of the eye of the particular viewer.
- a history may be developed of how soon a user starts focussing on the image after a blink, so that a historical period for the particular viewer can be determined of when the viewer still does not focus on the image after the end of a blink.
- Either particular characteristics determined for the particular viewer can be used to determine the level or levels of compression, and the duration thereof, or they can be determined based on the predetermined model of the human eye.
- the frequency of blinking and the repetition of patterns of blinking may all be used to predict a blink.
- a determination of an onset of a blink may alternatively be used.
- the prediction of a blink is then used, as indicated in step S104, by the compression engine [14] (or by another component), to determine the level or levels of compression of the image data (or whether no image data is to be sent at all), based on the blinking data and, if desired, on the particular historical characteristics of the particular viewer or the predetermined model of the human eye, and, if desired, of the image data itself.
- the determined level of compression may be higher than a level of compression used for normal viewing.
- the duration of the determined level of compression may depend on the predicted duration of a blink and on the period thereafter when the viewer is not fully cognisant of the image, which may be based on historical data or on the predetermined model.
- the compression engine [14] compresses the image data received from the processor [13] using the determined compression level(s) for the particular duration(s).
- At step S105, the compressed data is sent to the output engine [15] for transmission to the headset [12], where it is decompressed and displayed on the display panels [19], as appropriate, at step S106.
- the level of compression encompasses a level where no image data is sent at all (i.e. an infinite level of compression).
- the determination of the level of compression can be carried out either on the headset or on the host. If a component on the headset determines that the image may be frozen, where the existing frame of the image is repeated on the display panel(s), then it can instruct the decompression engine [18] and/or the display panel(s) [19] to simply repeatedly display the previous frame's image data. The component also sends to the host [11] an indication that this is happening and that no image data is required from the host for the determined period. Alternatively, if the host [11] determines that the image may be frozen, then it can instruct the headset [12] appropriately to simply repeatedly display the previous frame's image data.
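The two arrangements just described, headset-decided and host-decided freezing, can be sketched as the messages each side would produce. This Python sketch is purely illustrative; the message names and tuple format are inventions, not part of the patent.

```python
def plan_freeze(decided_by, duration_ms):
    """Messages exchanged when the image is frozen.

    If the headset decides, it instructs its own display path to repeat
    the previous frame and tells the host that no image data is needed
    for the duration. If the host decides, it instructs the headset to
    repeat the previous frame for the duration."""
    if decided_by == "headset":
        return [("headset->host", "no_data_needed", duration_ms),
                ("headset", "repeat_last_frame", duration_ms)]
    if decided_by == "host":
        return [("host->headset", "repeat_last_frame", duration_ms)]
    raise ValueError("decided_by must be 'headset' or 'host'")
```

Either way, the host stops transmitting image data for the agreed period, freeing the connection for other traffic.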
Abstract
A method for adapting display data forming an image for display on one or more displays of a wearable headset to a viewer, based on recognition of blinking of an eye of the viewer. The method involves monitoring an eye of the viewer to provide information enabling blinking of the eye to be determined and analysing the information to determine blinking data comprising one or more of: an onset of a blink, a start of a blink, a duration of a blink, an end of a blink, a frequency of blinking, and a repetition pattern of blinking. Thereafter, at least some of the display data is compressed at a predetermined level based on the blinking data, and sent for display on the one or more displays.
Description
COMPRESSING IMAGE DATA FOR TRANSMISSION TO A DISPLAY OF A WEARABLE HEADSET BASED ON INFORMATION ON BLINKING OF THE EYE
Background
Virtual reality is becoming an increasingly popular display method, especially for computer gaming but also in other applications. This introduces new problems in the generation and display of image data as virtual reality devices must have extremely fast and high-resolution displays to create an illusion of reality. This means that a very large volume of data must be transmitted to the device from any connected host.
As virtual-reality display devices become more popular, it is also becoming desirable for them to be wirelessly connected to their hosts. This introduces considerable problems with the transmission of the large volume of display data required, as wireless connections commonly have very limited bandwidth. It is therefore desirable for as much compression to be applied to the display data as possible without affecting its quality, as reductions in quality are likely to be noticed by a user.
The invention aims to mitigate some of these problems.

Summary
Accordingly, in one aspect, the invention provides a method for adapting display data forming an image for display on one or more displays of a wearable headset to a viewer, the method comprising: monitoring an eye of the viewer of the image to provide information enabling blinking of the eye to be determined; analysing the information regarding the monitored eye to determine blinking data comprising one or more of: an onset of a blink, a start of a blink, a duration of a blink, an end of a blink, frequency of blinking, and a repetition pattern of blinking;
compressing at least some of the display data at a predetermined level based on the blinking data; and sending the display data compressed at the predetermined level for display on the one or more displays. Compressing at least some of the display data may, in some embodiments, comprise utilising different compression levels based on the blinking data.
In one embodiment, the monitoring and the analysing is carried out at the wearable headset and the compressing and sending is carried out by a host computer, the analysis being sent from the wearable headset to the host computer and the compressed display data being sent to the wearable headset.
In another embodiment, the monitoring is carried out at the wearable headset and the analysing, compressing and sending is carried out by a host computer, the information being sent from the wearable headset to the host computer and the compressed display data being sent to the wearable headset.

Preferably, the predetermined level of compression of the display data is a higher level of compression than a normal level of compression, whereby the quality of the image displayed on the one or more displays is reduced from a normal level.
In a preferred embodiment, compressing the display data at the predetermined level of compression starts when the blinking data indicates: that an onset of a particular blink has commenced; or that a particular blink has started; or a prediction of an onset of a blink about to commence; or a prediction of a blink about to start and the compressing at the predetermined level of compression ends after: a first predetermined time after the onset or the predicted onset of the particular blink; or a second predetermined time after the start or the predicted start of the particular blink; or a third predetermined time after the particular blink has ended; or a fourth predetermined time after a predicted duration of the particular blink, wherein the predictions are based on the frequency and/or repetition pattern and/or durations of previous blinks.
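The alternative start and end conditions above can be sketched as a small helper. This is only an illustration: the function name, the `blink` dictionary layout, and the time constants are invented for the example, not taken from the patent.

```python
def compression_window(blink, mode="after_end",
                       t_after_onset=0.4, t_after_end=0.15):
    """Sketch of two of the window rules above (times in seconds).

    `blink` maps 'onset', 'start' and 'end' to (possibly predicted)
    timestamps; all names and constants here are illustrative.
    """
    if mode == "after_onset":
        # compression runs from the (predicted) onset until a first
        # predetermined time after that onset
        return blink["onset"], blink["onset"] + t_after_onset
    # otherwise: from the (predicted) start until a predetermined
    # time after the blink has ended
    return blink["start"], blink["end"] + t_after_end
```

For a blink with onset at 0.9 s, start at 1.0 s and end at 1.2 s, the default mode yields a window from 1.0 s to about 1.35 s; the `"after_onset"` mode yields 0.9 s to about 1.3 s.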
In an embodiment, while the display data that is sent is compressed at the predetermined level of compression, other data is also sent to the wearable headset.
According to a second aspect, the invention provides a method for adapting display data forming an image for display on one or more displays of a wearable headset to a viewer, the method comprising: monitoring an eye of the viewer of the image to provide information enabling blinking of the eye to be determined; analysing the information regarding the monitored eye to determine blinking data comprising one or more of: an onset of a blink, a start of a blink, a duration of a blink, an end of a blink, frequency of blinking, and a repetition pattern of blinking; freezing at least some of the display data being displayed on the one or more displays for a predetermined time based on the blinking data.
Preferably, the freezing at least some of the display data being displayed on the one or more displays comprises repeating a last frame that was displayed, without updating it.
In one embodiment, the monitoring and the analysing is carried out at the wearable headset and the freezing is carried out by a host computer, the blinking data being sent from the wearable headset to the host computer and the host computer sending instructions to the wearable headset so that at least some of the display data being displayed on the one or more displays is frozen.
In another embodiment, the monitoring is carried out at the wearable headset and the analysing and the freezing is carried out by a host computer, the information being sent from the wearable headset to the host computer and the host computer sending instructions to the wearable headset so that at least some of the display data being displayed on the one or more displays is frozen.
In a preferred embodiment, the freezing of at least some of the display data on the one or more displays starts when the blinking data indicates: that an onset of a particular blink has commenced; or that a particular blink has started; or a prediction of an onset of a blink about to commence; or a prediction of a blink about to start and the freezing of at least some of the display data on the one or more displays ends after: a first predetermined time after the onset or the predicted onset of the particular blink; or a second predetermined time after the start or the predicted start of the particular blink; or a third predetermined time after the particular blink has ended; or a fourth predetermined time after a predicted duration of the particular blink, wherein the predictions are based on the frequency and/or repetition pattern and/or durations of previous blinks.
Preferably, while at least some of the display data being displayed on the one or more displays is frozen, other data is sent to the wearable headset.
The predetermined time is preferably based on a predetermined model of a human eye.
According to a third aspect, the invention provides a system comprising a host computer and a wearable headset, the system configured to perform all the steps of the method described above.
Preferably, the monitoring of the eye of the viewer is performed by a detector in the wearable headset, optionally wherein the detector forms part of an eye-tracking mechanism.
The wearable headset may be a virtual reality headset or an augmented reality set of glasses.
The above-described methods are advantageous as they allow assumptions regarding user blinking to be used to increase compression, thereby allowing limited bandwidth to be used to send other data while the image data is more compressed than normal (or not sent at all).
Brief Description of the Drawings
Embodiments of the invention will now be more fully described, by way of example, with reference to the drawings, of which:
Figure 1 shows a basic overview of a system according to one embodiment of the invention;
Figure 2 shows a schematic view of a VR headset in use in the system of Figure 1; and

Figure 3 shows a flow diagram of a method used by the system of Figure 1.
Detailed Description of the Drawings
Figure 1 is a block diagram showing a basic overview of a display system arranged according to the invention. In this system, a host [11] is connected by connection [16] to a virtual-reality headset [12]. This connection [16] may be wired or wireless, and there may be multiple connection channels, or there may be a single bidirectional connection which is used for multiple purposes. For the purposes of this description, the connection [16] is assumed to be a general-purpose wireless connection, such as one using the Universal Serial Bus (USB) protocol, although other appropriate protocols could, of course, be used.
The host [11] incorporates, among other components, a processor [13] running an application which generates frames of display data using a graphics processing unit (GPU) on the host [11]. These frames are then transmitted to a compression engine [14], which carries out compression on the display data to reduce its volume prior to transmission. The transmission itself is carried out by an output engine [15], which controls the connection to the headset [12] and may include display and wireless driver software. The headset [12] incorporates an input engine [17] for receiving the transmitted display data, which also controls the connection [16] to the host [11] as appropriate. The input engine [17] is connected to a decompression engine [18], which decompresses the received display data as appropriate. The decompression engine [18] is in turn connected to two display panels [19], one of which is presented to each of a user's eyes when the headset [12] is in use. When the display data has been decompressed, it is transmitted to the display panels [19] for display, possibly via frame or flow buffers to account for any unevenness in the rate of decompression.
The headset [12] also incorporates blinking sensors [110] which are used to monitor one or both eyes of the user of the headset [12] to enable detection of blinking of the eye or eyes. The sensors [110] could, for example, be a camera which monitors the eye or eyes so that a determination of whether an eye is open or closed, and for how long it is closed, may be made, or the sensor could be part of an eye tracking system that tracks movement of the eye and/or what the eye is focussed on. The sensor could be arranged next to each of the display panels [19], as shown, or could be separate. In any case, the or each sensor [110] is connected to an output engine [120] which is connected to the host [11] and transmits information back to the host [11] to control the operation of applications on the host [11]. As will be discussed further below, the sensed data can either be sent directly back to the host for analysis, or some analysis can be carried out at the headset [12] by the output engine [120] or a processor, which may also implement the decompression engine [18]. Thus, in one embodiment, the sensors [110] provide information to the compression engine [14], via the output engine [120]. In another embodiment, the sensors [110] provide information to a processor on the headset [12], such as the decompression engine [18].
Figure 2 shows a generalised view of the system of Figure 1 when it is in use. For simplicity, the internal workings of the host [11] and the headset [12] are not shown. As previously mentioned, the headset [12] incorporates two display panels [19], each presented to one of the eyes [24] of a user, when in use. These two panels may in fact be a single display panel which shows two images, but this will not affect the operation of these embodiments of the invention. Only one display panel [19] and eye [24] are shown in Figure 2. Mounted on the display panel [19] is the sensor [110], which is used to monitor the eye [24] to enable blinking of the eye to be determined. As mentioned previously, the sensor [110] could be a camera that simply provides images of the eye [24] to enable another component to determine whether the eye is open or shut and other factors, such as how long the eye is closed, to determine a duration of a blink. Alternatively, the sensor [110] itself may have more processing capability, for example if it is part of an eye tracking system, and may be able to make these determinations itself. Whether determined by the sensor [110] or some other component, other blinking data may also be determined. For example, it is possible that an onset of a blink may be determined from other physiological changes at or adjacent to the eye just before a blink occurs. Frequency of blinking may also be determined, as well as patterns of repetition of blinking.
The blinking data can then be used to predict when the user is next likely to blink, for example, from a determination of an onset of the blink or from a determination of the frequency of blinking or of a repetition pattern of blinking. This prediction is then used by the host [11] to determine a level of compression to reduce the amount of data that needs to be transmitted to the headset. This is based on the recognition that while the eye is blinking, and for a short period thereafter, a user of the headset has reduced visibility and cognisance of the image on the display panel, due to the eye being closed, initially, and then requiring refocusing and understanding of what is being seen. Thus, for this period, the data for all parts of the display panel [19] can be compressed to a greater or lesser extent, or even frozen altogether. This means that the amount of image data that needs to be sent may be greatly reduced (perhaps even to zero) for the period of time. The duration and level of compression may be determined based on the predicted blink and/or on the image data being displayed at the time. For example, if it is determined that a user has relatively long blinks, then the period during which the image data is compressed may be increased and the level of compression may be particularly high at the beginning and during the actual blink, but may decrease towards the end of the blink or during the period immediately thereafter. If it is predicted that the blink is likely to be relatively short, then a lower level of compression may be appropriate. Furthermore, if the image being displayed is relatively static or dark then an increased level of compression may be selected.
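The tiered scheme described here, with the heaviest compression while the eye is closed, an intermediate level while the viewer refocuses, and normal quality otherwise, can be sketched as a lookup on frame time. All thresholds, names and levels below are hypothetical placeholders, not values from the patent:

```python
def compression_level(t, blink_start, blink_duration,
                      recovery=0.1, high=0.9, mid=0.5, normal=0.2):
    """Return a compression ratio in [0, 1] (higher = more compressed)
    for a frame timestamped t seconds, given a predicted blink window.
    All constants are illustrative placeholders."""
    blink_end = blink_start + blink_duration
    if blink_start <= t < blink_end:
        return high    # eye closed: compress aggressively
    if blink_end <= t < blink_end + recovery:
        return mid     # viewer refocusing: intermediate quality
    return normal      # normal viewing
```

A real implementation would also scale these levels with scene content, as the text notes: a relatively static or dark image tolerates more compression than a fast-moving one.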
As mentioned above, in some circumstances, the image may be completely frozen, whereby the same frame of the image is repeated, with no new image data needing to be sent. This may be determined at the headset, which can then repeat the previous frame and tell the host that it does not need to send any data for the particular duration, or it may be determined by the host, which can send sufficient instructions to enable the headset to repeat the image frame either for a particular period of time or until new data is received. Of course, if the image is fast moving, it may be inappropriate to freeze the image completely, as the judder when the viewer next sees the image and notices that a fast-moving object is at a different location may be undesirable. However, it will be apparent that the level of compression, which can include "complete" compression, where no new image data is sent, can be determined based on the prediction of the blink duration and the predicted period thereafter when the viewer cannot clearly view and understand the image.
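A headset-side version of the freeze decision above, repeating the last frame during a predicted blink only when the scene is not fast-moving and notifying the host that no data is needed, might look like the following sketch. The message format, function names and motion threshold are invented for illustration:

```python
import json

def should_freeze(blink_predicted, scene_motion, motion_threshold=0.3):
    """Freeze only during a predicted blink AND when scene motion is
    low, to avoid visible judder when the viewer refocuses."""
    return blink_predicted and scene_motion < motion_threshold

def freeze_notice(hold_ms):
    """Headset-to-host message: no image data needed for hold_ms."""
    return json.dumps({"type": "freeze", "hold_ms": hold_ms})
```

The host-driven variant would invert the roles: the host computes the decision and sends an instruction telling the headset to repeat its previous frame for the given period or until new data arrives.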
Since less image data is sent during this period, it will be apparent that other data may be sent at this time. This allows more data to be transmitted through a limited bandwidth connection to the headset, since constant good-quality images are not required for this period. The other data that is sent could be used to fill in 3D images, render more computer-generated images, or send data to a buffer in advance of a requirement for the data in case of connection problems, or to facilitate higher frame rates.
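As a sketch of reusing the freed bandwidth, a per-frame link scheduler could fill whatever budget the heavily compressed frame leaves unused with queued auxiliary data (buffered frames ahead of need, extra 3D detail, and so on). Every name and number here is illustrative:

```python
def fill_spare_bandwidth(frame_bytes, link_budget, aux_queue):
    """Send queued auxiliary items that fit in the per-frame byte
    budget left over after the compressed frame. aux_queue holds
    (name, size_bytes) pairs; returns the names sent and the
    remaining spare bytes."""
    sent, spare = [], link_budget - frame_bytes
    while aux_queue and aux_queue[0][1] <= spare:
        name, size = aux_queue.pop(0)
        sent.append(name)
        spare -= size
    return sent, spare
```

For example, a 300-byte frame against a 1000-byte budget leaves 700 bytes, enough for a 400-byte prefetch item but not a further 500-byte one.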
Figure 3 shows the process for determining compression. It will be described with reference to the example shown in Figure 2. At Step S101 the sensor [110] monitors an eye of the user. The sensed information is then sent, at Step S102, to a processor. The processor may be on the host [11] or may be on the headset [12], for example, forming part of the decompression engine [18] or of the sensor [110] itself. The processor then analyses the information, at Step S103, to determine blinking data. The analysis may include a determination of the times when a blink starts and stops, leading to a duration of the blink. The analysis may include a determination of an onset of a blink. This may be based on a model of the human eye and the physiological signs that a blink is about to start, and/or on a history of monitoring the eye of the viewer and recognising one or more signs that occur just before a blink occurred. The analysis may further determine a frequency of blinking and/or a pattern of repetition of blinking. The blinking data can then be used to predict when a blink is about to happen, so that image data can be appropriately compressed (or not sent at all) by the host for the predicted period of the blink and for a period of time thereafter, which may be predicted based on the model of the human eye, or on a history of the monitoring of the eye of the particular viewer. For example, if the sensor is part of an eye tracking system, then a history may be developed of how soon a user starts focussing on the image after a blink, so that a historical period for the particular viewer can be determined of when the viewer still does not focus on the image after the end of a blink. Either particular characteristics determined for the particular viewer can be used to determine the level or levels of compression, and the duration thereof, or they can be determined based on the predetermined model of the human eye.
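The analysis step can be illustrated with a minimal sketch that turns (timestamp, eye_open) sensor samples into blink start/end/duration data and predicts the next blink from the mean inter-blink interval. The structure and names are assumptions for illustration, not the patent's implementation:

```python
def extract_blinks(samples):
    """samples: iterable of (timestamp_s, eye_open) pairs.
    Returns a list of (start, end) times for each detected blink."""
    blinks, closed_at = [], None
    for t, eye_open in samples:
        if not eye_open and closed_at is None:
            closed_at = t                      # eyelid just closed
        elif eye_open and closed_at is not None:
            blinks.append((closed_at, t))      # eyelid reopened
            closed_at = None
    return blinks

def predict_next_blink(blinks):
    """Predict (start_time, duration) of the next blink from the mean
    inter-blink interval and the mean duration of the history."""
    if len(blinks) < 2:
        return None
    starts = [s for s, _ in blinks]
    mean_interval = (starts[-1] - starts[0]) / (len(starts) - 1)
    mean_duration = sum(e - s for s, e in blinks) / len(blinks)
    return starts[-1] + mean_interval, mean_duration
```

With blinks observed at roughly 3-second intervals, the prediction simply extrapolates one interval past the last observed blink; a real system could also weight recent blinks more heavily or use the onset cues the text describes.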
The frequency of blinking and the repetition of patterns of blinking may all be used to predict a blink. A determination of an onset of a blink may alternatively be used. The prediction of a blink is then used, as indicated at Step S104, by the compression engine [14] (or by another component), to determine the level or levels of compression of the image data (or whether no image data is to be sent at all), based on the blinking data and, if desired, on the particular historical characteristics of the particular viewer or the predetermined model of the human eye, and, if desired, of the image data itself. The determined level of compression may be higher than a level of compression used for normal viewing. The duration of the determined level of compression may depend on the predicted duration of a blink and on the period thereafter when the viewer is not fully cognisant of the image, which may be based on historical data or on the predetermined model.
As mentioned above, different levels of compression may be used, so that higher compression may be used at the beginning of, or during, the blink, an intermediate level of compression may be used towards the end of the blink and/or during the short period thereafter, and a lower level of compression may be used thereafter during normal viewing. Once the level(s) of compression and the duration(s) of their application have been determined, the compression engine [14] compresses the image data received from the processor [13] using the determined compression level(s) for the particular duration(s).
At Step S105 the compressed data is sent to the output engine [15] for transmission to the headset [12], where it is decompressed and displayed on the display panels [19], as appropriate, at Step S106.
Although not shown separately, as indicated above, the level of compression encompasses a level where no image data is sent at all (i.e. an infinite level of compression). As with the above described embodiment, the determination of the level of compression can be carried out either on the headset or on the host. If a component on the headset determines that the image may be frozen, where the existing frame of the image is repeated on the display panel(s), then it can instruct the decompression engine [18] and/or the display panel(s) [19] to simply repeatedly display the previous frame's image data. The component also sends to the host [11] an indication that this is happening and that no image data is required from the host for the determined period. Alternatively, if the host [11] determines that the image may be frozen, then it can instruct the headset [12] appropriately to simply repeatedly display the previous frame's image data.
Although only one particular embodiment has been described in detail above, it will be appreciated that various changes, modifications and improvements can be made by a person skilled in the art without departing from the scope of the present invention as defined in the claims. For example, hardware aspects may be implemented as software where appropriate and vice versa.
Claims
1. A method for adapting display data forming an image for display on one or more displays of a wearable headset to a viewer, the method comprising: monitoring an eye of the viewer of the image to provide information enabling blinking of the eye to be determined; analysing the information regarding the monitored eye to determine blinking data comprising one or more of: an onset of a blink, a start of a blink, a duration of a blink, an end of a blink, frequency of blinking, and a repetition pattern of blinking; compressing at least some of the display data at a predetermined level based on the blinking data; and sending the display data compressed at the predetermined level for display on the one or more displays.
2. A method according to claim 1, wherein compressing at least some of the display data comprises utilising different compression levels based on the blinking data.
3. A method according to either claim 1 or claim 2, wherein the monitoring and the analysing is carried out at the wearable headset and the compressing and sending is carried out by a host computer, the analysis being sent from the wearable headset to the host computer and the compressed display data being sent to the wearable headset.
4. A method according to either claim 1 or claim 2, wherein the monitoring is carried out at the wearable headset and the analysing, compressing and sending is carried out by a host computer, the information being sent from the wearable headset to the host computer and the compressed display data being sent to the wearable headset.
5. A method according to any preceding claim, wherein the predetermined level of compression of the display data is a higher level of compression than a normal level of compression, whereby the quality of the image displayed on the one or more displays is reduced from a normal level.
6. A method according to any preceding claim, wherein compressing the display data at the predetermined level of compression starts when the blinking data indicates: that an onset of a particular blink has commenced; or that a particular blink has started; or a prediction of an onset of a blink about to commence; or a prediction of a blink about to start and the compressing at the predetermined level of compression ends after: a first predetermined time after the onset or the predicted onset of the particular blink; or a second predetermined time after the start or the predicted start of the particular blink; or a third predetermined time after the particular blink has ended; or a fourth predetermined time after a predicted duration of the particular blink, wherein the predictions are based on the frequency and/or repetition pattern and/or durations of previous blinks.
7. A method according to any preceding claim, wherein while the display data that is sent is compressed at the predetermined level of compression, other data is also sent to the wearable headset.
8. A method for adapting display data forming an image for display on one or more displays of a wearable headset to a viewer, the method comprising: monitoring an eye of the viewer of the image to provide information enabling blinking of the eye to be determined; analysing the information regarding the monitored eye to determine blinking data comprising one or more of: an onset of a blink, a start of a blink, a duration of a blink, an end of a blink, frequency of blinking, and a repetition pattern of blinking; freezing at least some of the display data being displayed on the one or more displays for a predetermined time based on the blinking data.
9. A method according to claim 8, wherein the freezing at least some of the display data being displayed on the one or more displays comprises repeating a last frame that was displayed, without updating it.
10. A method according to either claim 8 or claim 9, wherein the monitoring and the analysing is carried out at the wearable headset and the freezing is carried out by a host computer, the blinking data being sent from the wearable headset to the host computer and the host computer sending instructions to the wearable headset so that at least some of the display data being displayed on the one or more displays is frozen.
11. A method according to either claim 8 or claim 9, wherein the monitoring is carried out at the wearable headset and the analysing and the freezing is carried out by a host computer, the information being sent from the wearable headset to the host computer and the host computer sending instructions to the wearable headset so that at least some of the display data being displayed on the one or more displays is frozen.
12. A method according to any one of claims 8 to 11, wherein the freezing of at least some of the display data on the one or more displays starts when the blinking data indicates: that an onset of a particular blink has commenced; or that a particular blink has started; or a prediction of an onset of a blink about to commence; or a prediction of a blink about to start and the freezing of at least some of the display data on the one or more displays ends after: a first predetermined time after the onset or the predicted onset of the particular blink; or a second predetermined time after the start or the predicted start of the particular blink; or a third predetermined time after the particular blink has ended; or a fourth predetermined time after a predicted duration of the particular blink, wherein the predictions are based on the frequency and/or repetition pattern and/or durations of previous blinks.
13. A method according to any one of claims 8 to 12, wherein while at least some of the display data being displayed on the one or more displays is frozen, other data is sent to the wearable headset.
14. A method according to any preceding claim, wherein the predetermined time is based on a predetermined model of a human eye.
15. A system comprising a host computer and a wearable headset, the system configured to perform all the steps of a method according to any one of claims 1 to 14.
16. A system according to claim 15, wherein the monitoring of the eye of the viewer is performed by a detector in the wearable headset, optionally wherein the detector forms part of an eye-tracking mechanism.
17. A system according to either claim 15 or claim 16, wherein the wearable headset is a virtual reality headset or an augmented reality set of glasses.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18759691.1A EP3673663A1 (en) | 2017-08-24 | 2018-08-16 | Compressing image data for transmission to a display of a wearable headset based on information on blinking of the eye |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1713647.4A GB2566013B (en) | 2017-08-24 | 2017-08-24 | Compressing image data for transmission to a display |
GB1713647.4 | 2017-08-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019038520A1 true WO2019038520A1 (en) | 2019-02-28 |
Family
ID=60037081
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2018/052325 WO2019038520A1 (en) | 2017-08-24 | 2018-08-16 | Compressing image data for transmission to a display of a wearable headset based on information on blinking of the eye |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3673663A1 (en) |
GB (1) | GB2566013B (en) |
WO (1) | WO2019038520A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11568242B2 (en) | 2019-12-05 | 2023-01-31 | International Business Machines Corporation | Optimization framework for real-time rendering of media using machine learning techniques |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120242570A1 (en) * | 2011-03-24 | 2012-09-27 | Seiko Epson Corporation | Device, head mounted display, control method of device and control method of head mounted display |
US20160025971A1 (en) * | 2014-07-25 | 2016-01-28 | William M. Crow | Eyelid movement as user input |
US20160238852A1 (en) * | 2015-02-13 | 2016-08-18 | Castar, Inc. | Head mounted display performing post render processing |
EP3109689A1 (en) * | 2015-06-22 | 2016-12-28 | Nokia Technologies Oy | Transition from a display power mode to a different display power mode |
US20170084083A1 (en) * | 2015-09-18 | 2017-03-23 | Fove, Inc. | Video system, video generating method, video distribution method, video generating program, and video distribution program |
US20170178408A1 (en) * | 2015-12-22 | 2017-06-22 | Google Inc. | Adjusting video rendering rate of virtual reality content and processing of a stereoscopic image |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9317114B2 (en) * | 2013-05-07 | 2016-04-19 | Korea Advanced Institute Of Science And Technology | Display property determination |
KR102081933B1 (en) * | 2013-08-28 | 2020-04-14 | 엘지전자 주식회사 | Head mounted display and method for controlling the same |
NZ756561A (en) * | 2016-03-04 | 2023-04-28 | Magic Leap Inc | Current drain reduction in ar/vr display systems |
GB2548151B (en) * | 2016-03-11 | 2020-02-19 | Sony Interactive Entertainment Europe Ltd | Head-mountable display |
- 2017-08-24: GB GB1713647.4A (GB2566013B, active)
- 2018-08-16: WO PCT/GB2018/052325 (WO2019038520A1, status unknown)
- 2018-08-16: EP EP18759691.1A (EP3673663A1, withdrawn)
Also Published As
Publication number | Publication date |
---|---|
EP3673663A1 (en) | 2020-07-01 |
GB2566013B (en) | 2022-12-07 |
GB2566013A (en) | 2019-03-06 |
GB201713647D0 (en) | 2017-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3809706B1 (en) | Method and apparatus for transmitting scene image of virtual scene, computer device and computer readable storage medium | |
CN114449282B (en) | Video transmission control method and device, equipment and storage medium | |
CN113347405B (en) | Scaling related method and apparatus | |
JP4405419B2 (en) | Screen transmitter | |
US10255713B2 (en) | System and method for dynamically adjusting rendering parameters based on user movements | |
US6549641B2 (en) | Screen image observing device and method | |
US5808588A (en) | Shutter synchronization circuit for stereoscopic systems | |
US10706631B2 (en) | Image generation based on brain activity monitoring | |
GB2577024A (en) | Using headset movement for compression | |
US10916040B2 (en) | Processing image data using different data reduction rates | |
US20190235817A1 (en) | Method and device for processing display data | |
US20090276541A1 (en) | Graphical data processing | |
US10957020B2 (en) | Systems and methods for frame time smoothing based on modified animation advancement and use of post render queues | |
WO2019038520A1 (en) | Compressing image data for transmission to a display of a wearable headset based on information on blinking of the eye | |
GB2607455A (en) | Compressing image data for transmission to a display | |
EP3951476A1 (en) | Head-mountable display device and display method thereof | |
CN113407138A (en) | Application program picture processing method and device, electronic equipment and storage medium | |
US11876976B2 (en) | Data transmission to mobile devices | |
US20100049832A1 (en) | Computer program product, a system and a method for providing video content to a target system | |
US20090274379A1 (en) | Graphical data processing | |
EP4036690A1 (en) | Information processing device, information processing method, server device, and program | |
KR102103430B1 (en) | Method and system for measuring latency in cloud based virtual reallity services | |
EP3550824B1 (en) | Methods and apparatus for remotely controlling a camera in an environment with communication latency | |
US11750861B2 (en) | Compensating for interruptions in a wireless connection | |
EP3217256B1 (en) | Interactive display system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18759691; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2018759691; Country of ref document: EP; Effective date: 20200324 |