WO2017136554A1 - System and method for timing input sensing, rendering, and display to minimize latency - Google Patents

System and method for timing input sensing, rendering, and display to minimize latency Download PDF

Info

Publication number
WO2017136554A1
WO2017136554A1 (PCT/US2017/016222)
Authority
WO
WIPO (PCT)
Prior art keywords
time
touch
input
application
display
Prior art date
Application number
PCT/US2017/016222
Other languages
French (fr)
Inventor
Bruno RODRIGUES DE ARAUJO
Ricardo Jorge Jota COSTA
Clifton Forlines
Original Assignee
Tactual Labs Co.
Priority date
Filing date
Publication date
Application filed by Tactual Labs Co. filed Critical Tactual Labs Co.
Priority to JP2018536510A priority Critical patent/JP2019505045A/en
Priority to DE112017000610.4T priority patent/DE112017000610T5/en
Priority to CN201780019312.9A priority patent/CN109074189A/en
Publication of WO2017136554A1 publication Critical patent/WO2017136554A1/en

Classifications

    • G06F3/04184 Synchronisation with the driving of the display or the backlighting unit to avoid interferences generated internally
    • G06F3/0416 Control or interface arrangements specially adapted for digitisers
    • G09G3/20 Control arrangements or circuits for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
    • G09G5/393 Arrangements for updating the contents of the bit-mapped memory
    • G09G2310/067 Special waveforms for scanning, where no circuit details of the gate driver are given
    • G09G2310/08 Details of timing specific for flat panels, other than clock recovery
    • G09G2360/127 Updating a frame memory using a transfer of data from a source area to a destination area

Definitions

  • FIG. 1 shows a prior double-buffered solution, in which the display is refreshed from one buffer while the next frame is rendered into the other.
  • FIGS. 2 and 3 show a prior method of rendering.
  • the GUI is rendered using the most recent input data and application state. Latency (lag) occurs when there is a difference between the application state (including input) at the time the GUI is rendered and at the time it is finally displayed on screen.
  • the lag of rendering A is shown as the time between the start of rendering A and the vertical refresh that displays the result of rendering A on the screen.
  • FIGS. 4 and 5 show embodiments of a solution to reduce lag by rendering faster and refreshing the screen faster.
  • a designer may include a faster GPU in the system or other such improvements, and may run the display at a faster refresh rate (say 100 Hz rather than 60 Hz).
  • the rendered graphics are both created faster and displayed sooner onscreen, thus reducing the time between the start of rendering and the display of the GUI to the user.
  • this approach reduces lag at the expense of requiring more capable rendering engines and faster, more expensive displays.
  • FIG. 6 shows an embodiment of the disclosed system and method wherein the rendering of the GUI is timed so that it finishes as close to the vertical refresh as possible. In this manner, the rendering can use the most recently available input from the user and most recently available application state to produce the rendered image, thus reducing lag.
  • a system and method are provided for decreasing latency between an acquisition of touch data and processing of an associated rendering task in a touch sensitive device having a touch sensing system capable of producing touch data at a touch sampling rate and having a display system that displays frames at a refresh rate.
  • the system estimates at least one of (a) a period of time for sampling touch data from the touch sensing system, (b) a period of time for computing touch event data from sampled touch data, and (c) a period of time for rendering of a frame to a frame buffer.
  • the system determines a period of time Tc for (a) sampling touch data from the touch sensing system, (b) computing touch event data from sampled touch data, and (c) rendering of a frame to a frame buffer, based at least in part on the estimate.
  • the system determines a point in time Tr at which the display system will be refreshed from the frame buffer.
  • a sampling start time is computed based at least in part upon Tr and Tc.
  • Sampling of the touch sensing system is initiated to obtain sampled touch data at the sampling start time.
  • Touch event data is computed from the sampled touch data, and a frame that reflects the touch event data is rendered to the frame buffer prior to the time Tr.
  • the display is then refreshed from the frame buffer.
  • the system determines a period of time Tc required to compute touch event data from sampled touch data and render a frame to a frame buffer and a point in time Tr at which the display system will be refreshed from the frame buffer.
  • the touch sensing system is sampled to create sampled touch data.
  • Touch event data is computed from the sampled touch data, and the beginning of this computing step is delayed to occur at a point in time that is at least as early as (Tr - Tc).
  • a frame is rendered to the frame buffer prior to the point in time Tr, and the display system is then refreshed from the frame buffer.
  • a method for decreasing latency between an acquisition of touch data and processing of an associated rendering task in a touch sensitive device having (a) a touch sensing system capable of producing touch data at a touch sampling rate and having a sampling sync, and (b) a display system that displays frames at a refresh rate having a refresh sync.
  • Sampling of touch sensor output is commenced on a sampling sync and sampled output is placed in a sampling buffer at a sampling rate.
  • Frame rendering to one of a plurality of display buffers is commenced on a refresh sync, and display images corresponding to a rendered frame are displayed on a refresh sync.
  • a period of time Tc corresponding to an estimated time for collecting the output in the sampling buffer is determined, a period of time Tm corresponding to an estimated time for computing touch event data from collected output is determined, and a period of time Tr corresponding to an estimated time for rendering of a frame corresponding to the touch event data is determined.
  • a start time is computed based upon the refresh sync, Tc, Tm and Tr, and collecting of the output in the sampling buffer is initiated at the start time. Thereafter, touch event data is computed from collected output and a frame corresponding to the touch event data is rendered.
  • the swap buffer duration may be used to estimate when an image rendered by the application is ready to be presented by the display.
  • alternative strategies for estimating the image availability may be employed; for example, without limitation, an alternative strategy for estimating the image availability may include monitoring of a buffer-based abstraction of the frame buffer (e.g., a queue, dual buffer, triple buffer or other multi-buffer strategy, stack, hash, heap, linked list, doubly-linked list, etc.).
  • an application can be provided with an API that allows it to specify the time required to perform a render, for example by indicating a render start and a render end.
  • a random period of time may be chosen as an estimate of the rendering period.
  • estimates of input sampling and/or output processing are used to adjust scheduling; thus, for example, the amount of time taken by various processes within a system may be monitored, and the scheduling of input sampling and/or output processing may be adjusted to compensate for that time.
  • multiple overlapping schedules may exist, with multiple candidate outputs generated and a correct one chosen given a particular amount of time taken to perform rendering or otherwise process an input.
  • one or more of the following may be monitored: sampling time, computing time (of one or more of the operating system, UI framework, application, or other process), rendering, the time required for the hardware to render a given input, and the time required to deliver an input to the application.
  • time taken by, e.g., the operating system for input and output processing can be monitored and fed to a scheduler.
  • the latency of the system is improved by scheduling the processing of input and the display of its effects on the screen. Such scheduling may be desirable when, e.g., an operating system is leveraged in native mode and no access is natively provided for insertion at the render. For a given duration of a processing step, improved latency may be achieved by scheduling that processing to take place after a particular input, or before a particular refresh of the display (or on a recurring, pre-determined or continuously adjusted schedule with respect to the input or display). In an embodiment, output and/or input events might be skipped. In an embodiment, input events from several input frames are packed together by the scheduling process to be delivered to the application.
  • scheduling is performed to determine when the rendering of an application's view should take place.
  • scheduling of when the input delivery to the application is conducted and which sampled input should be included is considered, to ensure that the most recent input is the one which is processed and whose effects are shown on the screen.
  • scheduling of input is conducted.
  • scheduling of output is conducted.
  • both are conducted.
  • an operating system forwards event data (e.g., touch event data) to a process that consolidates and schedules the delivery of the event data to the application.
  • the process may consolidate multiple events, including events from a plurality of sensors (which may or may not run at similar sampling rates).
  • scheduling the delivery to the application may be based on the input processing and output processing time for the consolidated events.
  • the delivery of the events may be deferred until a point prior to the subsequent frame buffer switch (e.g., just prior to the subsequent frame buffer switch), but will nonetheless include the most recent information when delivered.
  • if the consolidated events are determined to exceed the available time left until a frame buffer switch, one or more of the consolidated events may be skipped, and the input thus decimated to permit display on the frame buffer switch.
  • simultaneous (or near simultaneous) processing of multiple input samples may be performed. Such processing allows the decoupling of the input sampling from the scheduling process. Thus, in an embodiment, multiple samples of input are taken. When the time has come for an input to be processed to modify the application state, the most recent event is sent to the application to update its state. In an embodiment, to mitigate the cost of an application updating its state multiple times between output frames, the application may be apprised of input only when it is appropriate for it to perform an update (and render). While this strategy is effective in reducing latency by only rendering the most recent event, it is possible that the skipped events might contain needed or desired information.
  • input events are queued (or otherwise packed/grouped) and delivered as a group to the application (or otherwise processed simultaneously or near simultaneously), thus reducing the number of times an application updates its state.
  • Such delivery of input events reduces the number of updates occurring and increases the likelihood that the input events are delivered at the correct time for a particular output method/process.
  • applications may thus receive the same (or largely the same) input as they would otherwise without the benefit of this queueing.
  • touch may be used to describe events or periods of time in which a user's finger, a stylus, an object or a body part is detected by the sensor. In some embodiments, these detections occur only when the user is in physical contact with a sensor, or a device in which it is embodied. In other embodiments, the sensor may be tuned to allow the detection of "touches" that are hovering a distance above the touch surface or otherwise separated from the touch sensitive device.
  • the term "touch event" and the word "touch," when used as a noun, include a near touch and a near touch event, or any other gesture that can be identified using a sensor.
  • At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a special purpose or general purpose computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.
  • Routines executed to implement the embodiments may be implemented as part of an operating system, firmware, ROM, middleware, service delivery platform, SDK (Software Development Kit) component, web services, or other specific application, component, program, object, module or sequence of instructions referred to as "computer programs.” Invocation interfaces to these routines can be exposed to a software development community as an API (Application Programming Interface).
  • the computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects.
  • a machine-readable medium can be used to store software and data which when executed by a data processing system causes the system to perform various methods.
  • the executable software and data may be stored in various places including for example ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices.
  • the data and instructions can be obtained from centralized servers or peer-to-peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer-to-peer networks at different times and in different communication sessions or in a same communication session. The data and instructions can be obtained in their entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine-readable medium in entirety at a particular instance of time.
  • Examples of computer-readable media include but are not limited to recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD ROMS), Digital Versatile Disks (DVDs), etc.), among others.
  • a machine readable medium includes any mechanism that provides (e.g., stores) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.).
  • hardwired circuitry may be used in combination with software instructions to implement the techniques.
  • the techniques are neither limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by the data processing system.
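The consolidation-and-scheduling behavior described in the bullets above can be sketched in code. The following is an illustrative model only, not the patent's implementation; all names (`EventScheduler`, `due_for_delivery`, the 4 ms processing estimate, the timestamps) are assumptions chosen for the example.

```python
class EventScheduler:
    """Consolidates input events and delivers them as one batch just
    before the predicted frame-buffer switch (illustrative sketch)."""

    def __init__(self, processing_time_estimate):
        self.pending = []                          # consolidated, undelivered events
        self.t_process = processing_time_estimate  # est. input + output processing time

    def on_event(self, event):
        # Events from one or more sensors are consolidated rather than
        # delivered to the application immediately.
        self.pending.append(event)

    def due_for_delivery(self, now, next_buffer_switch):
        # Deliver as late as possible while still leaving enough time for
        # the application to process and render before the switch.
        return now >= next_buffer_switch - self.t_process

    def deliver(self):
        # The whole group is handed to the application at once, so the
        # application updates its state a single time per output frame.
        batch, self.pending = self.pending, []
        return batch

sched = EventScheduler(processing_time_estimate=0.004)  # 4 ms, assumed
for t, pos in [(0.001, (10, 10)), (0.005, (12, 11)), (0.009, (14, 13))]:
    sched.on_event({"t": t, "pos": pos})

# Frame-buffer switch predicted at t = 16.6 ms; at t = 13 ms delivery is due.
if sched.due_for_delivery(now=0.013, next_buffer_switch=0.0166):
    batch = sched.deliver()
```

Delivering the batch rather than each event separately mirrors the queueing bullet above: the application still sees all the input, but updates its state once per frame.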

Abstract

The disclosed systems and methods relate in general to the field of user input to a touch sensitive device, and in particular to user input systems and methods which can reduce the latency between a most recent input event and the displaying of a rendered frame reflecting such input. In an embodiment, a method for decreasing latency between an input touch event and the display of a frame reflecting the input touch event in a touch sensitive device includes estimating the time of a next frame refresh, receiving from the operating system touch data reflective of an input touch event, determining the application associated with the input touch event, estimating the time it will take the application to process and render the received touch data, determining a time at which delivery of the touch data to the application will permit the application to process and render the touch data prior to the time of the next frame refresh, based at least in part on the estimated time it will take the application to process and render the touch data, and the estimated time of the next frame refresh, and providing the touch data to the application just prior to the determined time.

Description

SYSTEM AND METHOD FOR TIMING INPUT SENSING, RENDERING, AND DISPLAY
TO MINIMIZE LATENCY
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is also a non-provisional of, and claims priority to, U.S.
Provisional Patent Application No. 62/290,347 filed February 2, 2016, the entire disclosure of which is incorporated herein by reference. This application is a continuation-in-part of, and claims priority to, U.S. Patent Application No. 14/945,083 filed November 18, 2015, the entire disclosure of which is incorporated herein by reference, which itself is a non-provisional of and claims priority to U.S. Patent Application No. 62/081,261 filed November 18, 2014. This application includes material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure, as it appears in the Patent and Trademark Office files or records, but otherwise reserves all copyright rights whatsoever.
FIELD
[0002] The disclosed systems and methods relate in general to the field of user input to a touch sensitive device, and in particular to user input systems and methods which can reduce the latency between a most recent input event and the displaying of a rendered frame reflecting such input.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] The foregoing and other objects, features, and advantages of the invention will be apparent from the following more particular description of preferred embodiments as illustrated in the accompanying drawings, in which reference characters refer to the same parts throughout the various views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating principles of the invention.
[0004] FIG. 1 shows a diagram illustrating a prior double-buffered solution.
[0005] FIGS. 2 and 3 show diagrams illustrating prior methods of rendering.
[0006] FIGS. 4 and 5 show diagrams illustrating embodiments of a solution to reduce lag by rendering faster and refreshing the screen faster.
[0007] FIG. 6 shows a diagram illustrating an embodiment of the presently disclosed system and method in which a system is provided that times the rendering of the GUI.
DETAILED DESCRIPTION
[0008] Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. The following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure are not necessarily references to the same embodiment; and, such references mean at least one.
[0009] Reference in this specification to "an embodiment" or "the embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least an embodiment of the disclosure. The appearances of the phrase "in an embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.
[0010] The present invention is described below with reference to block diagrams and operational illustrations of methods and devices for timing input sensing, rendering and display to minimize latency. It is understood that each block of the block diagrams or operational illustrations, and combinations of blocks in the block diagrams or operational illustrations, may be implemented by means of analog or digital hardware and computer program instructions. These computer program instructions may be stored on computer-readable media and provided to a processor of a general purpose computer, special purpose computer, ASIC, or other programmable data processing apparatus, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implements the functions/acts specified in the block diagrams or operational block or blocks. In some alternate implementations, the functions/acts noted in the blocks may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
[0011] Interactive devices can be seen as a composition of multiple parts, including input sensing, processing and rendering tasks, and output display. The pipeline required to convert sensed input (e.g., a finger touch) into a visual response is not immediate; each stage introduces latency for numerous reasons, such as the time required to process information or, in the case of displays, the frequency at which the display re-draws its entire screen, or parts thereof. As such, even if these stages work in parallel where possible, a system is only as fast as the slowest of them. For example, a display may refresh only 60 times per second (60 Hz), which means that, if everything else finishes (or is available) before the next refresh cycle, the system is forced to wait for the display to be ready to show the new information.
[0012] It is an object of the present invention to reduce the latency (that is, the time between the user's input to the system and the system's graphical response to that input). It is also an object to enable the most recent input information to be displayed, by synchronizing the input capture and processing with the display refresh. Currently, any input information is processed as soon as possible and often is left waiting for the next refresh cycle. This alone introduces a wait period and is reflected in the overall system latency. In accordance with an embodiment of the presently disclosed methods, the system considers when the next refresh will be and takes that into account to provide the most up-to-date events, just in time for rendering.
[0013] In an embodiment, the time to refresh the display and the time required to sample the input sensor and compute the touch point are known. Thus, input sensing is delayed so that it ends just as the refresh is about to start. In an embodiment, rendering is started at the last possible moment so that it completes and the output is ready for the next display refresh. This just-in-time rendering has access to the most recent input events; thus, the resulting display will include graphics with a minimum amount of latency.
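The just-in-time timing described in this paragraph can be sketched as a simple deadline computation; the function name and the millisecond values below are illustrative assumptions, not part of the disclosure:

```python
# Illustrative sketch of just-in-time input sampling: delay the start of
# input sensing so that sensing, touch computation, and rendering all
# finish just before the next display refresh.

def compute_sampling_start(next_refresh_time, sample_time, compute_time, render_time):
    """Latest moment sampling can begin and still meet the refresh deadline."""
    return next_refresh_time - (sample_time + compute_time + render_time)

# Example: refresh at t=16.6 ms; sensing takes 2 ms, touch computation 1 ms,
# rendering 5 ms, so sampling should begin no later than t=8.6 ms.
start = compute_sampling_start(16.6, 2.0, 1.0, 5.0)
```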
[0014] In another embodiment, the time to refresh or the time to sample the input sensor is not known a priori and requires measurement. Measurements are executed based on key timestamps, which ensures temporal ordering and sequencing. Through measuring, the input sample rate and display refresh rate become known, and the rendering of the graphics can be timed as outlined above in order to minimize latency.
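A minimal sketch of such a measurement, assuming refresh (or input) timestamps are available to software; the function name and values are hypothetical:

```python
# Illustrative measurement of the display refresh interval from a series
# of vsync timestamps: average the deltas between consecutive events,
# discarding any non-increasing pairs to preserve temporal ordering.

def measure_interval(timestamps):
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:]) if b > a]
    return sum(deltas) / len(deltas)

# Four vsync timestamps roughly 16.7 ms apart -> measured interval of
# about 16.7 ms, i.e., an approximately 60Hz display.
interval = measure_interval([0.0, 16.7, 33.4, 50.1])
```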
[0015] In another embodiment, such measurements are obtained using external measurement equipment and user input (or are defined as constants).
[0016] In another embodiment, these measures vary according to system workload and are not precise, but are estimates intended to reduce the wait for the display. The time required to render the output is a function of the complexity of the output and of competing system activities (more complex outputs generally require more time to render, competing activities on the CPU or GPU can slow down rendering, and so on). In this embodiment, the system estimates the rendering time based on some combination of: models of the output complexity; knowledge of competing activities on key system components; the time required to render previous frames; and the time required to render similar outputs in the past.
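One way to realize such an estimate is an exponential moving average over recent frame times with a safety margin for competing workload; the class, the smoothing constant, and the margin factor below are assumptions for illustration, not the patented method itself:

```python
# Illustrative render-time estimator: predict the next frame's rendering
# duration from a smoothed history of past frames, inflated by a margin
# to absorb workload variation on the CPU/GPU.

class RenderTimeEstimator:
    def __init__(self, alpha=0.2, margin=1.2):
        self.alpha = alpha      # weight given to the newest observation
        self.margin = margin    # headroom factor for competing activity
        self.estimate = None

    def observe(self, frame_time):
        if self.estimate is None:
            self.estimate = frame_time
        else:
            self.estimate = (self.alpha * frame_time
                             + (1 - self.alpha) * self.estimate)

    def predict(self):
        return self.estimate * self.margin

est = RenderTimeEstimator()
for t in [5.0, 5.0, 5.0]:       # three frames each took 5 ms to render
    est.observe(t)
budget = est.predict()          # smoothed 5 ms plus 20% margin
```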
[0017] In another embodiment, the system renders the output as soon as the display completes its vertical refresh and the appropriate buffer is available to render into. After rendering is complete, and until the next vertical refresh, the system updates this rendering based on additional input event samples by the input sensor. For example, the system might render a view of a GUI and then translate it based on additional input events until such a time as the rendering must be displayed.
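The post-render updating described here can be sketched as follows; the event format and function names are hypothetical:

```python
# Illustrative sketch of rendering early and then translating the result
# with late-arriving input: every translation event sampled before the
# display deadline is applied to the already-rendered view's origin.

def finalize_frame(base_origin, input_events, deadline):
    """Apply every translation event that arrived before the deadline."""
    x, y = base_origin
    for t, dx, dy in input_events:      # (timestamp, delta-x, delta-y)
        if t < deadline:
            x, y = x + dx, y + dy
    return (x, y)

# Two events arrive before the vsync deadline and shift the view; a third
# arrives too late and is held over for the next frame.
origin = finalize_frame((0, 0), [(1, 3, 0), (2, 0, 4), (9, 7, 7)], deadline=5)
```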
[0018] FIG. 1 shows a prior double-buffered solution. In such a solution, the display "flips" or "swaps" between two buffers (A and B) which are used for display and rendering. When Buffer A is visible, the device renders into Buffer B. At a pre-determined rate, the system flips/swaps the buffers so that B is now visible and A can be rendered into. When a buffer is offscreen, it is cleared and rendered into.
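The flip described for FIG. 1 can be sketched as a toy model, with Python lists standing in for pixel buffers (all names are illustrative):

```python
# Minimal sketch of double buffering: one buffer is visible while the
# other is cleared and rendered into; the two swap roles on each refresh.

class DoubleBuffer:
    def __init__(self):
        self.buffers = [[], []]   # Buffer A and Buffer B
        self.visible = 0          # index of the on-screen buffer

    def render(self, drawing):
        back = self.buffers[1 - self.visible]
        back.clear()              # the offscreen buffer is cleared first
        back.append(drawing)      # then rendered into

    def flip(self):
        self.visible = 1 - self.visible

fb = DoubleBuffer()
fb.render("frame-1")              # drawn offscreen into Buffer B
fb.flip()                         # Buffer B becomes visible
front = fb.buffers[fb.visible]
```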
[0019] FIGS. 2 and 3 show a prior method of rendering. The GUI is rendered using the most recent input data and application state. Latency (lag) occurs when there is a difference between the application state (including input) when the GUI is rendered and when it is finally displayed on screen. In FIG. 3, the lag of rendering A is shown as the time between the start of rendering A and the vertical refresh that displays the result of rendering A on the screen.
[0020] FIGS. 4 and 5 show embodiments of a solution that reduces lag by rendering faster and refreshing the screen faster. For example, one might include a faster GPU in the system or make other such improvements, and might run the display at a faster refresh rate (say, 100Hz rather than 60Hz). In this manner, the rendered graphics are both created faster and displayed sooner onscreen, thus reducing the time between the start of rendering and the display of the GUI to the user. As long as there is time to clear the buffer and render the GUI between vertical refreshes of the screen, this approach will reduce lag, at the expense of more capable rendering engines and faster, more expensive displays.
[0021] FIG. 6 shows an embodiment of the disclosed system and method wherein the rendering of the GUI is timed so that it finishes as close to the vertical refresh as possible. In this manner, the rendering can use the most recently available input from the user and most recently available application state to produce the rendered image, thus reducing lag.
[0022] In an embodiment, a system and method are provided for decreasing latency between an acquisition of touch data and processing of an associated rendering task in a touch sensitive device having a touch sensing system capable of producing touch data at a touch sampling rate and having a display system that displays frames at a refresh rate. The system estimates at least one of (a) a period of time for sampling touch data from the touch sensing system, (b) a period of time for computing touch event data from sampled touch data, and (c) a period of time for rendering of a frame to a frame buffer. The system determines a period of time Tc for (a) sampling touch data from the touch sensing system, (b) computing touch event data from sampled touch data, and (c) rendering of a frame to a frame buffer, based at least in part on the estimate. The system determines a point in time Tr at which the display system will be refreshed from the frame buffer. A sampling start time is computed based at least in part upon Tr and Tc. Sampling of the touch sensing system is initiated to obtain sampled touch data at the sampling start time. Touch event data is computed from the sampled touch data, and a frame that reflects the touch event data is rendered to the frame buffer prior to the time Tr. The display is then refreshed from the frame buffer.
[0023] In an embodiment, the system determines a period of time Tc required to compute touch event data from sampled touch data and render a frame to a frame buffer and a point in time Tr at which the display system will be refreshed from the frame buffer. The touch sensing system is sampled to create sampled touch data. Touch event data is computed from the sampled touch data, and the beginning of this computing step is delayed to occur at a point in time that is at least as early as (Tr - Tc). A frame is rendered to the frame buffer prior to the point in time Tr, and the display system is then refreshed from the frame buffer.
[0024] In an embodiment, a method is provided for decreasing latency between an acquisition of touch data and processing of an associated rendering task in a touch sensitive device having (a) a touch sensing system capable of producing touch data at a touch sampling rate and having a sampling sync, and (b) a display system that displays frames at a refresh rate having a refresh sync. Sampling of touch sensor output is commenced on a sampling sync and sampled output is placed in a sampling buffer at a sampling rate. Frame rendering to one of a plurality of display buffers is commenced on a refresh sync, and display images corresponding to a rendered frame are displayed on a refresh sync. A period of time Tc corresponding to an estimated time for collecting the output in the sampling buffer is determined, a period of time Tm corresponding to an estimated time for computing touch event data from collected output is determined, and a period of time Tr corresponding to an estimated time for rendering of a frame corresponding to the touch event data is determined. A start time is computed based upon the refresh sync, Tc, Tm and Tr, and collecting of the output in the sampling buffer is initiated at the start time. Thereafter, touch event data is computed from collected output and a frame corresponding to the touch event data is rendered.
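The start-time computation of this embodiment can be sketched as a small scheduling routine; the injectable clock, the callback names, and the example durations are all assumptions for illustration:

```python
import time

# Illustrative scheduling for the embodiment above: wait until
# refresh_sync - (Tc + Tm + Tr), then collect sampled output, compute
# touch events, and render, so the frame is ready at the sync.

def schedule_frame(next_sync, tc, tm, tr, collect, compute, render,
                   now=time.monotonic, sleep=time.sleep):
    start = next_sync - (tc + tm + tr)   # last safe moment to begin collecting
    delay = start - now()
    if delay > 0:
        sleep(delay)                     # defer collection until the start time
    return render(compute(collect()))

# Deterministic example with a fake clock: sync at t=10, Tc=1, Tm=2, Tr=3,
# so collection is deferred by 4 time units before the pipeline runs.
waits = []
frame = schedule_frame(10.0, 1.0, 2.0, 3.0,
                       collect=lambda: "samples",
                       compute=lambda s: s + "->events",
                       render=lambda e: e + "->frame",
                       now=lambda: 0.0, sleep=waits.append)
```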
[0025] In an embodiment, the swap buffer duration may be used to estimate when an image rendered by the application is ready to be presented by the display. In an embodiment, alternative strategies for estimating the image availability may be employed; for example, without limitation, an alternative strategy for estimating the image availability may include monitoring of a buffer-based abstraction of the frame buffer (e.g., a queue, a dual-, triple-, or other multi-buffer arrangement, a stack, hash, heap, linked list, doubly-linked list, etc.). In an embodiment, an application can be provided with an API that allows it to specify the time required to perform a render, for example by indicating a render start and a render end. In an embodiment, a random period of time may be chosen as an estimate of the rendering period.
[0026] In an embodiment, estimates of input sampling and/or output processing are used to adjust scheduling; thus, for example, the amount of time taken by various processes within a system may be monitored, and the scheduling of input sampling and/or output processing may be adjusted to compensate for that time. In an embodiment, multiple overlapping schedules may exist, with multiple candidate outputs generated and a correct one chosen given a particular amount of time taken to perform rendering or otherwise process an input. In an embodiment, one or more of the following may be monitored: sampling time, computing time (of one or more of the operating system, UI framework, application, or other process), rendering, the time required for the hardware to render a given input, and the time required to deliver an input to the application. In an embodiment, time taken by, e.g., the operating system for input and output processing can be monitored and fed to a scheduler.
[0027] In an embodiment, the latency of the system is improved by scheduling the processing of input and the display of its effects on the screen. Such scheduling may be desirable when, e.g., an operating system is leveraged in native mode and no access is natively provided for insertion at the render. For a given duration of a processing step, improved latency may be achieved by scheduling that processing to take place after a particular input, or before a particular refresh of the display (or on a recurring, pre-determined or continuously adjusted schedule with respect to the input or display). In an embodiment, output and/or input events might be skipped. In an embodiment, input events from several input frames are packed together by the scheduling process to be delivered to the application.
[0028] As discussed, scheduling is performed to determine when the rendering of an application's view should take place. In an embodiment, the timing of input delivery to the application, and which sampled inputs should be included, are considered in order to ensure that the most recent input is the one which is processed and whose effects are shown on the screen. In an embodiment, scheduling of input is conducted. In an embodiment, scheduling of output is conducted. In an embodiment, both are conducted. Thus, in an embodiment, an operating system forwards event data (e.g., touch event data) to a process that consolidates and schedules the delivery of the event data to the application. In an embodiment, the process may consolidate multiple events, including events from a plurality of sensors (which may or may not run at similar sampling rates). In an embodiment, scheduling the delivery to the application may be based on the input processing and output processing time for the consolidated events. In an embodiment, where the consolidated events are determined to exceed the available time left until a frame buffer switch, the delivery of the events may be deferred until a point prior to the subsequent frame buffer switch (e.g., just prior to the subsequent frame buffer switch), but will nonetheless include the most recent information when delivered. In an embodiment, where the consolidated events are determined to exceed the available time left until a frame buffer switch, one or more of the consolidated events may be skipped, and the input thus decimated to permit display on the frame buffer switch.
[0029] In an embodiment, simultaneous (or near simultaneous) processing of multiple input samples may be performed. Such processing allows the decoupling of the input sampling from the scheduling process. Thus, in an embodiment, multiple samples of input are taken. When the time has come for an input to be processed for modifying the application state, the most recent event is sent to the application to update its state. In an embodiment, to mitigate time spent by an application updating its state multiple times between output frames, the application may be apprised of input only when it is appropriate for it to perform an update (and render). While this strategy is effective in reducing latency by only rendering the most recent event, it is possible that the skipped events might contain needed or desired information. Thus, in an embodiment, input events are queued (or otherwise packed/grouped) and delivered as a group to the application (or otherwise processed simultaneously or near simultaneously), thus reducing the number of times an application updates its state. Such delivery of input events reduces the number of updates and increases the likelihood that the input events are delivered at the correct time for a particular output method/process. Further, applications may thus receive the same (or largely the same) input as they would otherwise receive without the benefit of this queueing.
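The queueing-and-grouped-delivery strategy above can be sketched as follows (the class and method names are hypothetical):

```python
# Illustrative sketch of input batching: samples accumulate between
# output frames and are delivered to the application as one group, so
# the application updates its state once per frame while still seeing
# every event rather than only the most recent one.

class InputBatcher:
    def __init__(self):
        self.pending = []
        self.delivered_batches = 0

    def sample(self, event):
        self.pending.append(event)           # queue events between frames

    def deliver(self, application):
        if self.pending:
            application(list(self.pending))  # one grouped delivery per frame
            self.delivered_batches += 1
            self.pending.clear()

received = []
batcher = InputBatcher()
for e in ["move-1", "move-2", "move-3"]:     # three samples in one frame period
    batcher.sample(e)
batcher.deliver(received.extend)             # application sees all three at once
```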
[0030] Throughout this disclosure, the terms "touch", "touches," or other descriptors may be used to describe events or periods of time in which a user's finger, a stylus, an object or a body part is detected by the sensor. In some embodiments, these detections occur only when the user is in physical contact with a sensor, or a device in which it is embodied. In other embodiments, the sensor may be tuned to allow the detection of "touches" that are hovering a distance above the touch surface or otherwise separated from the touch sensitive device. Therefore, the use of language within this description that implies reliance upon sensed physical contact should not be taken to mean that the techniques described apply only to those embodiments; indeed, nearly all, if not all, of what is described herein would apply equally to "touch" and "hover" sensors. As used herein, the phrase "touch event" and the word "touch" when used as a noun include a near touch and a near touch event, or any other gesture that can be identified using a sensor.
[0031] At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a special purpose or general purpose computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.
[0032] Routines executed to implement the embodiments may be implemented as part of an operating system, firmware, ROM, middleware, service delivery platform, SDK (Software Development Kit) component, web services, or other specific application, component, program, object, module or sequence of instructions referred to as "computer programs." Invocation interfaces to these routines can be exposed to a software development community as an API (Application Programming Interface). The computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects.
[0033] A machine-readable medium can be used to store software and data which when executed by a data processing system causes the system to perform various methods. The executable software and data may be stored in various places including for example ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices. Further, the data and instructions can be obtained from centralized servers or peer-to-peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer-to-peer networks at different times and in different communication sessions or in a same communication session. The data and instructions can be obtained in their entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine-readable medium in entirety at a particular instance of time.
[0034] Examples of computer-readable media include but are not limited to recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD ROMS), Digital Versatile Disks (DVDs), etc.), among others.
[0035] In general, a machine readable medium includes any mechanism that provides (e.g., stores) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.).
[0036] In various embodiments, hardwired circuitry may be used in combination with software instructions to implement the techniques. Thus, the techniques are neither limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by the data processing system.
[0037] The above embodiments and preferences are illustrative of the present invention. It is neither necessary nor intended for this patent to outline or define every possible combination or embodiment. The inventors have disclosed sufficient information to permit one skilled in the art to practice at least one embodiment of the invention. The above description and drawings are merely illustrative of the present invention, and changes in components, structure and procedure are possible without departing from the scope of the present invention as defined in the following claims. For example, elements and/or steps described above and/or in the following claims in a particular order may be practiced in a different order without departing from the invention. Thus, while the invention has been particularly shown and described with reference to embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims

CLAIMS
What is claimed is:
1. A method for decreasing latency between an input touch event and the display of a frame reflecting the input touch event in a touch sensitive device having an operating system operatively connected to a touch sensing system sensitive to an input touch event and a display system, the display system displaying frames at a refresh rate, the method comprising:
estimating the time of a next frame refresh;
receiving from the operating system touch data reflective of an input touch event;
determining the application associated with the input touch event;
estimating the time it will take the application to process and render the received touch data;
determining a time at which delivery of the touch data to the application will permit the application to process and render the touch data prior to the time of the next frame refresh, based at least in part on the estimated time it will take the application to process and render the touch data, and the estimated time of the next frame refresh; and
providing the touch data to the application just prior to the determined time.
2. A method for decreasing latency between a latest input touch event and the display of a frame reflecting the input touch event in a touch sensitive device having an operating system operatively connected to a touch sensing system sensitive to an input touch event and a display system, the display system displaying frames at a refresh rate, the method comprising:
estimating the time of a next frame refresh;
receiving from the operating system touch data reflective of a plurality of input touch events associated with a first application;
estimating the time it will take the application to process and render the plurality of input touch events in the received touch data;
determining a time at which delivery of the touch data to the application will permit the application to process and render the touch data prior to the time of the next frame refresh, based at least in part on the estimated time it will take the application to process and render the touch data, and the estimated time of the next frame refresh; and
providing the touch data to the application just prior to the determined time.