GB2607455A - Compressing image data for transmission to a display - Google Patents


Info

Publication number
GB2607455A
GB2607455A
Authority
GB
United Kingdom
Prior art keywords
blink
blinking
data
eye
wearable headset
Prior art date
Legal status
Granted
Application number
GB2209291.0A
Other versions
GB2607455B (en)
GB202209291D0 (en)
Inventor
Morse Douglas
Kenneth Hamaker Eric
Current Assignee
DisplayLink UK Ltd
Original Assignee
DisplayLink UK Ltd
Priority date
Filing date
Publication date
Application filed by DisplayLink UK Ltd filed Critical DisplayLink UK Ltd
Priority to GB2209291.0A (GB2607455B)
Priority claimed from GB1713647.4A (GB2566013B)
Publication of GB202209291D0
Publication of GB2607455A
Application granted
Publication of GB2607455B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/162User input
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41407Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/637Control signals issued by the client directed to the server or network components
    • H04N21/6373Control signals issued by the client directed to the server or network components for rate control, e.g. request to the server to modify its transmission rate
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

A method for adapting display data forming an image for display on one or more displays of a wearable headset to a viewer based on recognition of blinking of an eye of a viewer. The method involves monitoring an eye of the viewer to provide information enabling blinking of the eye to be determined and analysing the information to determine blinking data comprising one or more of: an onset of a blink, a start of a blink, a duration of a blink, an end of a blink, frequency of blinking, and a repetition pattern of blinking. Thereafter, at least some of the display data is frozen for a predetermined period of time based on the blinking data. The analysing may be carried out at the wearable headset or at a host computer. The freezing of the display data may be carried out by the host computer.

Description

Compressing Image Data for Transmission to a Display

Background

Virtual reality is becoming an increasingly popular display method, especially for computer gaming but also in other applications. This introduces new problems in the generation and display of image data, as virtual reality devices must have extremely fast and high-resolution displays to create an illusion of reality. This means that a very large volume of data must be transmitted to the device from any connected host.
As virtual-reality display devices become more popular, it is also becoming desirable for them to be wirelessly connected to their hosts. This introduces considerable problems with the transmission of the large volume of display data required, as wireless connections commonly have very limited bandwidth. It is therefore desirable for as much compression to be applied to the display data as possible without affecting its quality, as reductions in quality are likely to be noticed by a user.
The invention aims to mitigate some of these problems.

Summary

Accordingly, in one aspect, the invention provides a method for adapting display data forming an image for display on one or more displays of a wearable headset to a viewer, the method comprising: monitoring an eye of the viewer of the image to provide information enabling blinking of the eye to be determined; analysing the information regarding the monitored eye to determine blinking data comprising one or more of: an onset of a blink, a start of a blink, a duration of a blink, an end of a blink, frequency of blinking, and a repetition pattern of blinking; compressing at least some of the display data at a predetermined level based on the blinking data; and sending the display data compressed at the predetermined level for display on the one or more displays.

Compressing at least some of the display data may, in some embodiments, comprise utilising different compression levels based on the blinking data.
In one embodiment, the monitoring and the analysing is carried out at the wearable headset and the compressing and sending is carried out by a host computer, the analysis being sent from the wearable headset to the host computer and the compressed display data being sent to the wearable headset.
In another embodiment, the monitoring is carried out at the wearable headset and the analysing, compressing and sending is carried out by a host computer, the information being sent from the wearable headset to the host computer and the compressed display data being sent to the wearable headset.
Preferably, the predetermined level of compression of the display data is a higher level of compression than a normal level of compression, whereby the quality of the image displayed on the one or more displays is reduced from a normal level.

In a preferred embodiment, compressing the display data at the predetermined level of compression starts when the blinking data indicates: that an onset of a particular blink has commenced; or that a particular blink has started; or a prediction of an onset of a blink about to commence; or a prediction of a blink about to start; and the compressing at the predetermined level of compression ends after: a first predetermined time after the onset or the predicted onset of the particular blink; or a second predetermined time after the start or the predicted start of the particular blink; or a third predetermined time after the particular blink has ended; or a fourth predetermined time after a predicted duration of the particular blink, wherein the predictions are based on the frequency and/or repetition pattern and/or durations of previous blinks.
In an embodiment, while the display data that is sent is compressed at the predetermined level of compression, other data is also sent to the wearable headset.
According to a second aspect, the invention provides a method for adapting display data forming an image for display on one or more displays of a wearable headset to a viewer, the method comprising: monitoring an eye of the viewer of the image to provide information enabling blinking of the eye to be determined; analysing the information regarding the monitored eye to determine blinking data comprising one or more of: an onset of a blink, a start of a blink, a duration of a blink, an end of a blink, frequency of blinking, and a repetition pattern of blinking; freezing at least some of the display data being displayed on the one or more displays for a predetermined time based on the blinking data.
Preferably, the freezing at least some of the display data being displayed on the one or more displays comprises repeating a last frame that was displayed, without updating it.
In one embodiment, the monitoring and the analysing is carried out at the wearable headset and the freezing is carried out by a host computer, the blinking data being sent from the wearable headset to the host computer and the host computer sending instructions to the wearable headset so that at least some of the display data being displayed on the one or more displays is frozen.
In another embodiment, the monitoring is carried out at the wearable headset and the analysing and the freezing is carried out by a host computer, the information being sent from the wearable headset to the host computer and the host computer sending instructions to the wearable headset so that at least some of the display data being displayed on the one or more displays is frozen.
In a preferred embodiment, the freezing of at least some of the display data on the one or more displays starts when the blinking data indicates: that an onset of a particular blink has commenced; or that a particular blink has started; or a prediction of an onset of a blink about to commence; or a prediction of a blink about to start; and the freezing of at least some of the display data on the one or more displays ends after: a first predetermined time after the onset or the predicted onset of the particular blink; or a second predetermined time after the start or the predicted start of the particular blink; or a third predetermined time after the particular blink has ended; or a fourth predetermined time after a predicted duration of the particular blink, wherein the predictions are based on the frequency and/or repetition pattern and/or durations of previous blinks.

Preferably, while at least some of the display data being displayed on the one or more displays is frozen, other data is sent to the wearable headset. The predetermined time is preferably based on a predetermined model of a human eye.
According to a third aspect, the invention provides a system comprising a host computer and a wearable headset, the system configured to perform all the steps of the method described above.
Preferably, the monitoring of the eye of the viewer is performed by a detector in the wearable headset, optionally wherein the detector forms part of an eye-tracking mechanism.
The wearable headset may be a virtual reality headset or an augmented reality set of glasses.
The above-described methods are advantageous as they allow assumptions regarding user blinking to be used to increase compression, thereby allowing limited bandwidth to be used to send other data while the image data is more compressed than normal (or not sent at all).
Brief Description of the Drawings
Embodiments of the invention will now be more fully described, by way of example, with reference to the drawings, of which: Figure 1 shows a basic overview of a system according to one embodiment of the invention; Figure 2 shows a schematic view of a VR headset in use in the system of Figure 1; and Figure 3 shows a flow diagram of a method used by the system of Figure 1.
Detailed Description of the Drawings
Figure 1 is a block diagram showing a basic overview of a display system arranged according to the invention. In this system, a host [11] is connected by connection [16] to a virtual-reality headset [12]. This connection [16] may be wired or wireless, and there may be multiple connection channels, or there may be a single bidirectional connection which is used for multiple purposes. For the purposes of this description, the connection [16] is assumed to be a general-purpose wireless connection, such as one using the Universal Serial Bus (USB) protocol, although other appropriate protocols could, of course, be used.
The host [11] incorporates, among other components, a processor [13] running an application which generates frames of display data using a graphics processing unit (GPU) on the host [11]. These frames are then transmitted to a compression engine [14], which carries out compression on the display data to reduce its volume prior to transmission. The transmission itself is carried out by an output engine [15], which controls the connection to the headset [12] and may include display and wireless driver software.

The headset [12] incorporates an input engine [17] for receiving the transmitted display data, which also controls the connection [16] to the host [11] as appropriate. The input engine [17] is connected to a decompression engine [18], which decompresses the received display data as appropriate. The decompression engine [18] is in turn connected to two display panels [19], one of which is presented to each of a user's eyes when the headset [12] is in use. When the display data has been decompressed, it is transmitted to the display panels [19] for display, possibly via frame or flow buffers to account for any unevenness in the rate of decompression.

The headset [12] also incorporates blinking sensors [110] which are used to monitor one or both eyes of the user of the headset [12] to enable detection of blinking of the eye or eyes. The sensors [110] could, for example, be a camera which monitors the eye or eyes so that a determination of whether it is open or closed, and for how long it is closed, may be made, or the sensor could be part of an eye-tracking system that tracks movement of the eye and/or what the eye is focussed on. The sensor could be arranged next to each of the display panels [19], as shown, or could be separate. In any case, the or each sensor [110] is connected to an output engine [120] which is connected to the host [11] and transmits information back to the host [11] to control the operation of applications on the host [11].
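The division of labour between the compression engine [14] and the decompression engine [18] can be loosely illustrated as follows. This is a stand-in sketch only: it uses zlib as the codec purely for demonstration, whereas a real headset pipeline would use a display or video codec.

```python
import zlib

class CompressionEngine:
    """Host-side sketch: compresses a frame at a selectable level
    (1 = light, 9 = heavy), mirroring the compression engine [14]."""
    def compress(self, frame: bytes, level: int) -> bytes:
        return zlib.compress(frame, level)

class DecompressionEngine:
    """Headset-side sketch: restores received display data before it
    reaches the display panels, mirroring the decompression engine [18]."""
    def decompress(self, payload: bytes) -> bytes:
        return zlib.decompress(payload)

# A frame travels host -> connection [16] -> headset:
frame = bytes(1024)  # stand-in for one frame of display data
compressed = CompressionEngine().compress(frame, level=9)
restored = DecompressionEngine().decompress(compressed)
assert restored == frame and len(compressed) < len(frame)
```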
As will be discussed further below, the sensed data can either be sent directly back to the host for analysis, or some analysis can be carried out at the headset [12] by the output engine [120] or a processor, which may also implement the decompression engine [18].
Thus, in one embodiment, the sensors [110] provide information to the compression engine [14], via the output engine [120]. In another embodiment, the sensors [110] provide information to a processor on the headset [12], such as the decompression engine [18].

Figure 2 shows a generalised view of the system of Figure 1 when it is in use. For simplicity, the internal workings of the host [11] and the headset [12] are not shown. As previously mentioned, the headset [12] incorporates two display panels [19], each presented to one of the eyes [24] of a user when in use. These two panels may in fact be a single display panel which shows two images, but this will not affect the operation of these embodiments of the invention. Only one display panel [19] and eye [24] is shown in Figure 2. Mounted on the display panel [19] is the sensor [110], which is used to monitor the eye [24] to enable blinking of the eye to be determined. As mentioned previously, the sensor [110] could be a camera that simply provides images of the eye [24] to enable another component to determine whether the eye is open or shut and other factors, such as how long the eye is closed, to determine a duration of a blink. Alternatively, the sensor [110] itself may have more processing capability, for example if it is part of an eye-tracking system, and may be able to make these determinations itself.

Whether determined by the sensor [110] or some other component, other blinking data may also be determined. For example, it is possible that an onset of a blink may be determined from other physiological changes at or adjacent to the eye just before a blink occurs. Frequency of blinking may also be determined, as well as patterns of repetition of blinking. The blinking data can then be used to predict when the user is next likely to blink, for example, from a determination of an onset of the blink or from a determination of the frequency of blinking or of a repetition pattern of blinking.
This prediction is then used by the host [11] to determine a level of compression to reduce the amount of data that needs to be transmitted to the headset. This is based on the recognition that, while the eye is blinking, and for a short period thereafter, a user of the headset has reduced visibility and cognisance of the image on the display panel, due to the eye initially being closed and then requiring refocusing and understanding of what is being seen. Thus, for this period, the data for all parts of the display panel [19] can be compressed to a greater or lesser extent, or even frozen altogether. This means that the amount of image data that needs to be sent may be greatly reduced (perhaps even to zero) for the period of time. The duration and level of compression may be determined based on the predicted blink and/or on the image data being displayed at the time. For example, if it is determined that a user has relatively long blinks, then the period during which the image data is compressed may be increased, and the level of compression may be particularly high at the beginning and during the actual blink, but may decrease towards the end of the blink or during the period immediately thereafter. If it is predicted that the blink is likely to be relatively short, then a lower level of compression may be appropriate. Furthermore, if the image being displayed is relatively static or dark, then an increased level of compression may be selected.
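One way to picture this selection logic is a small decision function. The thresholds and level numbers below are illustrative assumptions, not values taken from the patent; they simply encode "longer blinks and static imagery tolerate heavier compression".

```python
def choose_level(predicted_blink_s: float, image_is_static: bool,
                 normal_level: int = 3, max_level: int = 9) -> int:
    """Pick a compression level for the blink period.

    Longer predicted blinks get heavier compression; short blinks get a
    milder increase; static (or dark) imagery nudges the level up further.
    All thresholds are illustrative."""
    if predicted_blink_s >= 0.3:        # relatively long blink
        level = max_level
    elif predicted_blink_s >= 0.1:      # typical blink
        level = (normal_level + max_level) // 2
    else:                               # very short blink: modest increase
        level = normal_level + 1
    if image_is_static:
        level = min(max_level, level + 1)
    return level
```

A fuller implementation would also decay the level towards the end of the blink, as the description notes.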
As mentioned above, in some circumstances, the image may be completely frozen, whereby the same frame of the image is repeated, with no new image data needing to be sent. This may be determined at the headset, which can then repeat the previous frame and tell the host that it does not need to send any data for the particular duration, or it may be determined by the host, which can send sufficient instructions to enable the headset to repeat the image frame either for a particular period of time or until new data is received. Of course, if the image is fast moving, it may be inappropriate to freeze the image completely, as the judder when the viewer next sees the image and notices that a fast-moving object is at a different location may be undesirable. However, it will be apparent that the level of compression, which can include "complete" compression, where no new image data is sent, can be determined based on the prediction of the blink duration and the predicted period thereafter when the viewer cannot clearly view and understand the image.
Since less image data is sent during this period, it will be apparent that other data may be sent at this time. This allows more data to be transmitted through a limited-bandwidth connection to the headset, since constant good-quality images are not required for this period. The other data that is sent could be used to fill in 3D images, render more computer-generated images, or send data to a buffer in advance of a requirement for the data in case of connection problems, or to facilitate higher frame rates.

Figure 3 shows the process for determining compression. It will be described with reference to the example shown in Figure 2. At Step S101 the sensor [110] monitors an eye of the user. The sensed information is then sent, in step S102, to a processor. The processor may be on the host [11] or may be on the headset [12], for example, forming the decompression engine [18] or as part of the sensor [110] itself. The processor then analyses the information to determine blinking data. The analysis may include a determination of the times when a blink starts and stops, leading to a duration of the blink. The analysis may include a determination of an onset of a blink. This may be based on a model of the human eye and the physiological signs that a blink is about to start, and/or on a history of monitoring the eye of the viewer and recognising one or more signs that occur just before a blink. The analysis may further determine a frequency of blinking and/or a pattern of repetition of blinking. The blinking data can then be used to predict when a blink is about to happen, so that image data can be appropriately compressed (or not sent at all) by the host for the predicted period of the blink and for a period of time thereafter, which may be predicted based on the model of the human eye, or on a history of the monitoring of the eye of the particular viewer.
For example, if the sensor is part of an eye tracking system, then a history may be developed of how soon a user starts focussing on the image after a blink, so that a historical period for the particular viewer can be determined of when the viewer still does not focus on the image after the end of a blink. Either particular characteristics determined for the particular viewer can be used to determine the level or levels of compression, and the duration thereof, or they can be determined based on the predetermined model of the human eye. The frequency of blinking and the repetition of patterns of blinking may all be used to predict a blink. A determination of an onset of a blink may alternatively be used.
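The per-viewer history described above can be sketched as two small estimators. Using the mean inter-blink interval as the "frequency" predictor is an assumption for illustration; a real implementation might fit a richer repetition-pattern model.

```python
from statistics import mean
from typing import List, Optional

def predict_next_blink(blink_start_times: List[float]) -> Optional[float]:
    """Predict the next blink start time from the mean inter-blink interval,
    a simple stand-in for the frequency/repetition-pattern analysis.
    Returns None until at least two blinks have been observed."""
    if len(blink_start_times) < 2:
        return None
    intervals = [b - a for a, b in zip(blink_start_times, blink_start_times[1:])]
    return blink_start_times[-1] + mean(intervals)

def refocus_period(refocus_history_s: List[float], default_s: float = 0.1) -> float:
    """Per-viewer estimate of how long after a blink the viewer is still not
    focused on the image; falls back to a model-derived default (the
    predetermined model of the human eye) when no history exists."""
    return mean(refocus_history_s) if refocus_history_s else default_s
```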
The prediction of a blink is then used, as indicated in step S104, by the compression engine [14] (or by another component), to determine the level or levels of compression of the image data (or whether no image data is to be sent at all), based on the blinking data and, if desired, on the particular historical characteristics of the particular viewer or the predetermined model of the human eye, and, if desired, on the image data itself. The determined level of compression may be higher than a level of compression used for normal viewing. The duration of the determined level of compression may depend on the predicted duration of a blink and on the period thereafter when the viewer is not fully cognisant of the image, which may be based on historical data or on the predetermined model.
As mentioned above, different levels of compression may be used, so that higher compression may be used at the beginning of, or during, the blink, an intermediate level of compression may be used towards the end of the blink and/or during the short period thereafter, and a lower level of compression may be used thereafter during normal viewing.
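The three regimes just described (high during the blink, intermediate towards its end and just after, normal otherwise) can be expressed as a phase lookup. The boundary offsets below are hypothetical placeholders.

```python
def level_for_phase(t: float, blink_start: float, blink_end: float,
                    refocus_s: float = 0.1) -> str:
    """Map a point in time to one of the three compression regimes:
    'high' during most of the blink, 'intermediate' towards the end of the
    blink and during the short refocus period after it, and 'normal' for
    ordinary viewing. The 0.05 s tail boundary is illustrative."""
    tail_start = blink_end - 0.05            # "towards the end of the blink"
    if blink_start <= t < tail_start:
        return "high"
    if tail_start <= t <= blink_end + refocus_s:
        return "intermediate"
    return "normal"
```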
Once the level(s) of compression has been determined and the duration of application of the particular level(s), the compression engine [14] compresses the image data received from the processor [13] using the determined compression level(s) for the particular duration(s).
At Step S105 the compressed data is sent to the output engine [15] for transmission to the headset [12], where it is decompressed and displayed on the display panels [19], as appropriate, at Step S106.
Although not shown separately, as indicated above, the level of compression encompasses a level where no image data is sent at all (i.e. an infinite level of compression). As with the above described embodiment, the determination of the level of compression can be carried out either on the headset or on the host. If a component on the headset determines that the image may be frozen, where the existing frame of the image is repeated on the display panel(s), then it can instruct the decompression engine [18] and/or the display panel(s) [19] to simply repeatedly display the previous frame's image data. The component also sends to the host [11] an indication that this is happening and that no image data is required from the host for the determined period. Alternatively, if the host [11] determines that the image may be frozen, then it can instruct the headset [12] appropriately to simply repeatedly display the previous frame's image data.
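A headset-side freeze of this kind amounts to holding the last decoded frame and telling the host nothing is needed. The class and message below are hypothetical names for that behaviour, not an interface defined by the patent.

```python
class PanelFeed:
    """Headset-side sketch: hold the last decoded frame and repeat it while
    the image is frozen, signalling the host that no data is required."""
    def __init__(self) -> None:
        self.last_frame = None
        self.frozen = False

    def receive(self, frame: bytes) -> bytes:
        """Normal operation: a newly decompressed frame replaces the old one."""
        self.last_frame = frame
        return frame

    def freeze(self) -> str:
        """Enter the frozen state and return the (stand-in) indication that
        would be sent to the host for the determined period."""
        self.frozen = True
        return "HOST: no display data required"

    def current(self) -> bytes:
        # While frozen, the previous frame's image data is simply displayed again.
        return self.last_frame
```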
Although only one particular embodiment has been described in detail above, it will be appreciated that various changes, modifications and improvements can be made by a person skilled in the art without departing from the scope of the present invention as defined in the claims. For example, hardware aspects may be implemented as software where appropriate and vice versa.
Aspects of the apparatus and methods described herein are further exemplified in the following numbered CLAUSES: CLAUSE 1. A method for adapting display data forming an image for display on one or more displays of a wearable headset to a viewer, the method comprising: monitoring an eye of the viewer of the image to provide information enabling blinking of the eye to be determined; analysing the information regarding the monitored eye to determine blinking data comprising one or more of: an onset of a blink, a start of a blink, a duration of a blink, an end of a blink, frequency of blinking, and a repetition pattern of blinking; compressing at least some of the display data at a predetermined level based on the blinking data; and sending the display data compressed at the predetermined level for display on the one or more displays.
CLAUSE 2. A method according to clause 1, wherein compressing at least some of the display data comprises utilising different compression levels based on the blinking data.
CLAUSE 3. A method according to either clause 1 or clause 2, wherein the monitoring and the analysing is carried out at the wearable headset and the compressing and sending is carried out by a host computer, the analysis being sent from the wearable headset to the host computer and the compressed display data being sent to the wearable headset.
CLAUSE 4. A method according to either clause 1 or clause 2, wherein the monitoring is carried out at the wearable headset and the analysing, compressing and sending is carried out by a host computer, the information being sent from the wearable headset to the host computer and the compressed display data being sent to the wearable headset.
CLAUSE 5. A method according to any preceding clause, wherein the predetermined level of compression of the display data is a higher level of compression than a normal level of compression, whereby the quality of the image displayed on the one or more displays is reduced from a normal level.
CLAUSE 6. A method according to any preceding clause, wherein compressing the display data at the predetermined level of compression starts when the blinking data indicates: that an onset of a particular blink has commenced; or that a particular blink has started; or a prediction of an onset of a blink about to commence; or a prediction of a blink about to start; and the compressing at the predetermined level of compression ends after: a first predetermined time after the onset or the predicted onset of the particular blink; or a second predetermined time after the start or the predicted start of the particular blink; or a third predetermined time after the particular blink has ended; or a fourth predetermined time after a predicted duration of the particular blink, wherein the predictions are based on the frequency and/or repetition pattern and/or durations of previous blinks.
CLAUSE 7. A method according to any preceding clause, wherein while the display data that is sent is compressed at the predetermined level of compression, other data is also sent to the wearable headset.
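The compression scheme of clauses 1 to 7 can be sketched in code. This is an illustrative assumption, not part of the patent: the function names, the encoder-quality mapping, and the specific timing constants are all hypothetical choices standing in for "a predetermined level of compression" and "a predetermined time" in the clauses.

```python
# Illustrative sketch (not from the patent): pick an encoder quality
# for each frame from the blinking data, per clauses 2, 5 and 6.
# All constants below are assumed values for the "predetermined"
# quantities the clauses leave open.

NORMAL_QUALITY = 90       # assumed baseline quality (lower compression)
BLINK_QUALITY = 30        # assumed quality during a blink (higher compression)
POST_BLINK_HOLD_S = 0.05  # assumed "predetermined time" after the blink ends


def compression_quality(now, blink_start, blink_end):
    """Return an encoder quality setting for the frame at time ``now``.

    ``blink_start`` / ``blink_end`` are the detected (or predicted) start
    and end times of the most recent blink, or ``None`` if unknown.
    """
    if blink_start is None:
        # No blink known: compress at the normal level.
        return NORMAL_QUALITY
    if blink_end is None:
        # Blink in progress (or predicted but not yet ended): compress
        # more aggressively, since the viewer's eye is closed.
        return BLINK_QUALITY if now >= blink_start else NORMAL_QUALITY
    # Keep the higher compression for a short predetermined time after
    # the blink ends, then restore the normal quality level.
    if blink_start <= now <= blink_end + POST_BLINK_HOLD_S:
        return BLINK_QUALITY
    return NORMAL_QUALITY
```

In this sketch the host computer would call `compression_quality` before encoding each frame, so that display data sent while the eye is closed carries less bandwidth, consistent with clause 7's point that other data can be sent in the freed-up capacity.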
CLAUSE 8. A method for adapting display data forming an image for display on one or more displays of a wearable headset to a viewer, the method comprising: monitoring an eye of the viewer of the image to provide information enabling blinking of the eye to be determined; analysing the information regarding the monitored eye to determine blinking data comprising one or more of: an onset of a blink, a start of a blink, a duration of a blink, an end of a blink, frequency of blinking, and a repetition pattern of blinking; freezing at least some of the display data being displayed on the one or more displays for a predetermined time based on the blinking data.
CLAUSE 9. A method according to clause 8, wherein the freezing at least some of the display data being displayed on the one or more displays comprises repeating a last frame that was displayed, without updating it.
CLAUSE 10. A method according to either clause 8 or clause 9, wherein the monitoring and the analysing is carried out at the wearable headset and the freezing is carried out by a host computer, the blinking data being sent from the wearable headset to the host computer and the host computer sending instructions to the wearable headset so that at least some of the display data being displayed on the one or more displays is frozen.
CLAUSE 11. A method according to either clause 8 or clause 9, wherein the monitoring is carried out at the wearable headset and the analysing and the freezing is carried out by a host computer, the information being sent from the wearable headset to the host computer and the host computer sending instructions to the wearable headset so that at least some of the display data being displayed on the one or more displays is frozen.
CLAUSE 12. A method according to any one of clauses 8 to 11, wherein the freezing of at least some of the display data on the one or more displays starts when the blinking data indicates: that an onset of a particular blink has commenced; or that a particular blink has started; or a prediction of an onset of a blink about to commence; or a prediction of a blink about to start; and the freezing of at least some of the display data on the one or more displays ends after: a first predetermined time after the onset or the predicted onset of the particular blink; or a second predetermined time after the start or the predicted start of the particular blink; or a third predetermined time after the particular blink has ended; or a fourth predetermined time after a predicted duration of the particular blink, wherein the predictions are based on the frequency and/or repetition pattern and/or durations of previous blinks.
CLAUSE 13. A method according to any one of clauses 8 to 12, wherein while at least some of the display data being displayed on the one or more displays is frozen, other data is sent to the wearable headset.
CLAUSE 14. A method according to any preceding clause, wherein the predetermined time is based on a predetermined model of a human eye.
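The freezing method of clauses 8 to 14 can likewise be sketched. Again this is a hypothetical illustration: the mean-interval predictor is only one possible reading of "predictions ... based on the frequency and/or repetition pattern ... of previous blinks" (clause 12), and the function names are invented for the sketch.

```python
# Illustrative sketch (not from the patent): predict the next blink from
# the intervals between previous blinks (clause 12), and "freeze" the
# display by repeating the last frame without updating it (clause 9).


def predict_next_blink(blink_starts):
    """Predict the next blink start time as the last observed start plus
    the mean interval between previous blink starts.

    Returns ``None`` if fewer than two blinks have been observed, since
    no inter-blink interval is available yet.
    """
    if len(blink_starts) < 2:
        return None
    intervals = [b - a for a, b in zip(blink_starts, blink_starts[1:])]
    return blink_starts[-1] + sum(intervals) / len(intervals)


def frame_to_display(now, new_frame, last_frame, freeze_until):
    """Select the frame to show at time ``now``.

    While frozen (``now`` before ``freeze_until``), repeat the last
    frame that was displayed, without updating it, per clause 9.
    """
    if freeze_until is not None and now < freeze_until:
        return last_frame
    return new_frame
```

A host computer implementing clauses 10 or 11 would run this logic and send the resulting freeze/unfreeze instructions to the headset, rather than selecting frames locally; the split of work is a design choice the clauses leave open.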
CLAUSE 15. A system comprising a host computer and a wearable headset, the system configured to perform all the steps of a method according to any one of clauses 1 to 14.
CLAUSE 16. A system according to clause 15, wherein the monitoring of the eye of the viewer is performed by a detector in the wearable headset, optionally wherein the detector forms part of an eye-tracking mechanism.
CLAUSE 17. A system according to either clause 15 or clause 16, wherein the wearable headset is a virtual reality headset or an augmented reality set of glasses.

Claims (10)

  1. A method for adapting display data forming an image for display on one or more displays of a wearable headset to a viewer, the method comprising: monitoring, by the wearable headset, an eye of the viewer of the image to provide information enabling blinking of the eye to be determined; analysing the information regarding the monitored eye to determine blinking data comprising one or more of: an onset of a blink, a start of a blink, a duration of a blink, an end of a blink, frequency of blinking, and a repetition pattern of blinking, wherein the analysing is carried out at the wearable headset and the analysis is sent from the wearable headset to a host computer remote from the wearable headset over a wireless connection or wherein the information is sent from the wearable headset to the host computer over the wireless connection and the analysing is carried out by the host computer; freezing at least some of the display data being displayed on the one or more displays for a predetermined time based on the blinking data.
  2. A method according to claim 1, wherein the freezing at least some of the display data being displayed on the one or more displays comprises repeating a last frame that was displayed, without updating it.
  3. A method according to either claim 1 or claim 2, wherein the freezing is carried out by a host computer, the host computer sending instructions to the wearable headset so that at least some of the display data being displayed on the one or more displays is frozen.
  4. A method according to any one of claims 1 to 3, wherein the freezing of at least some of the display data on the one or more displays starts when the blinking data indicates: that an onset of a particular blink has commenced; or that a particular blink has started; or a prediction of an onset of a blink about to commence; or a prediction of a blink about to start; and the freezing of at least some of the display data on the one or more displays ends after: a first predetermined time after the onset or the predicted onset of the particular blink; or a second predetermined time after the start or the predicted start of the particular blink; or a third predetermined time after the particular blink has ended; or a fourth predetermined time after a predicted duration of the particular blink, wherein the predictions are based on the frequency and/or repetition pattern and/or durations of previous blinks.
  5. A method according to any one of claims 1 to 4, wherein while at least some of the display data being displayed on the one or more displays is frozen, other data is sent to the wearable headset.
  6. A method according to any preceding claim, wherein the predetermined time is based on a predetermined model of a human eye.
  7. A system comprising a host computer and a wearable headset, the system configured to perform all the steps of a method according to any one of claims 1 to 6.
  8. A system according to claim 7, wherein the monitoring of the eye of the viewer is performed by a detector in the wearable headset.
  9. A system according to claim 8, wherein the detector forms part of an eye-tracking mechanism.
  10. A system according to any one of claims 7 to 9, wherein the wearable headset is a virtual reality headset or an augmented reality set of glasses.
GB2209291.0A 2017-08-24 2017-08-24 Compressing image data for transmission to a display Active GB2607455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2209291.0A GB2607455B (en) 2017-08-24 2017-08-24 Compressing image data for transmission to a display

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB2209291.0A GB2607455B (en) 2017-08-24 2017-08-24 Compressing image data for transmission to a display
GB1713647.4A GB2566013B (en) 2017-08-24 2017-08-24 Compressing image data for transmission to a display

Publications (3)

Publication Number Publication Date
GB202209291D0 GB202209291D0 (en) 2022-08-10
GB2607455A true GB2607455A (en) 2022-12-07
GB2607455B GB2607455B (en) 2023-02-15

Family

ID=83997747

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2209291.0A Active GB2607455B (en) 2017-08-24 2017-08-24 Compressing image data for transmission to a display

Country Status (1)

Country Link
GB (1) GB2607455B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8836641B1 (en) * 2013-08-28 2014-09-16 Lg Electronics Inc. Head mounted display and method of controlling therefor
WO2016014608A1 (en) * 2014-07-25 2016-01-28 Microsoft Technology Licensing, Llc Eyelid movement as user input
EP3109689A1 (en) * 2015-06-22 2016-12-28 Nokia Technologies Oy Transition from a display power mode to a different display power mode
US20170178408A1 (en) * 2015-12-22 2017-06-22 Google Inc. Adjusting video rendering rate of virtual reality content and processing of a stereoscopic image
WO2017151974A1 (en) * 2016-03-04 2017-09-08 Magic Leap, Inc. Current drain reduction in ar/vr display systems
GB2548151A (en) * 2016-03-11 2017-09-13 Sony Computer Entertainment Europe Ltd Head-mountable display

Also Published As

Publication number Publication date
GB2607455B (en) 2023-02-15
GB202209291D0 (en) 2022-08-10

Similar Documents

Publication Publication Date Title
CN113347405B (en) Scaling related method and apparatus
US7129981B2 (en) Rendering system and method for images having differing foveal area and peripheral view area resolutions
US6919907B2 (en) Anticipatory image capture for stereoscopic remote viewing with foveal priority
US11106422B2 (en) Method for processing display data
US5808588A (en) Shutter synchronization circuit for stereoscopic systems
US6917715B2 (en) Foveal priority in stereoscopic remote viewing system
US20020001397A1 (en) Screen image observing device and method
US10706631B2 (en) Image generation based on brain activity monitoring
US20090276541A1 (en) Graphical data processing
US10916040B2 (en) Processing image data using different data reduction rates
GB2577024A (en) Using headset movement for compression
JP2024038128A (en) Systems and methods for driving displays
WO2019038520A1 (en) Compressing image data for transmission to a display of a wearable headset based on information on blinking of the eye
GB2607455A (en) Compressing image data for transmission to a display
EP3951476A1 (en) Head-mountable display device and display method thereof
CN105468320A (en) Black and white screen display method and device based on Android platform and intelligent terminal
US11876976B2 (en) Data transmission to mobile devices
US20090274379A1 (en) Graphical data processing
EP4036690A1 (en) Information processing device, information processing method, server device, and program
EP3550824B1 (en) Methods and apparatus for remotely controlling a camera in an environment with communication latency
US20200359070A1 (en) Compensating for interruptions in a wireless connection
EP3217256B1 (en) Interactive display system and method
US10962773B2 (en) Method and apparatus for determining whether an eye of a user of a head mounted display is directed at a fixed point
GB2606889A (en) Using headset movement for compression
GB2606651A (en) Using headset movement for compression