US20230065331A1 - Methods and systems for reducing latency on collaborative platform - Google Patents
Methods and systems for reducing latency on collaborative platform
- Publication number
- US20230065331A1 (application Ser. No. 18/049,243)
- Authority
- US
- United States
- Prior art keywords
- user input
- display
- overlay image
- receiver
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/462—Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
- H04N21/4622—Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/80—Responding to QoS
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1454—Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B29/00—Maps; Plans; Charts; Diagrams, e.g. route diagram
- G09B29/10—Map spot or coordinate position indicators; Map reading aids
- G09B29/106—Map spot or coordinate position indicators; Map reading aids using electronic means
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/37—Details of the operation on graphic patterns
- G09G5/377—Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/75—Media network packet handling
- H04L65/762—Media network packet handling at the source
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/131—Protocols for games, networked simulations or virtual reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4227—Providing Remote input by a user located remotely from the client device, e.g. at work
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2354/00—Aspects of interface with display user
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/02—Networking aspects
- G09G2370/025—LAN communication management
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/06—Consumer Electronics Control, i.e. control of another device by a display or vice versa
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/12—Use of DVI or HDMI protocol in interfaces along the display data pipeline
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/16—Use of wireless transmission of display information
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/20—Details of the management of multiple sources of image data
Definitions
- the present disclosure relates, in general, to methods and systems for generating an overlay image based on user input for at least temporary display to reduce latency on a collaborative platform.
- Miracast® is a wireless display standard certified by the Wi-Fi Alliance, which defines a protocol for displaying multimedia between devices using Wi-Fi CERTIFIED Wi-Fi Direct®.
- Wi-Fi CERTIFIED Miracast® provides operations for negotiating video capabilities, setting up content protection, streaming content, and maintaining a video session.
- Wi-Fi CERTIFIED Miracast® allows for sending up to 1080p HD video, or even higher resolutions, and thus is suitable for video streaming and screen-to-screen content projection.
- Miracast® makes it possible to wirelessly stream video content from a laptop computer to a television display.
- Undesirable latency of content projection systems arises during collaboration, for example, when making edits to the projected content from a device, e.g., the display, on which the original data file is not stored.
- a teacher's desktop may have an original data file stored thereon, which may be projected, using content projection systems known in the art, on a display at the front of the classroom visible to the students.
- a receiver is typically used to transmit data between the display and either the teacher's desktop or the student's tablet.
- the receiver may be coupled to the display via a USB cable for transferring user input data, and further coupled to the display via an HDMI cable for transferring image(s).
- the receiver may communicate with the teacher's desktop and the student's tablet wirelessly over a network (e.g., local network, corporate network, or internet).
- the student may attempt to answer the math problem by drawing directly on the display.
- input data representing the user input is transferred from the display via the USB cable to the receiver.
- the receiver transmits the user input data via WiFi to the teacher's desktop, where the original file is stored.
- a processor on the teacher's desktop modifies the original file based on the user input data, e.g., adding the number “3” to the math problem as the student draws it, thereby generating a new real image, which is transmitted via WiFi to the receiver.
- the receiver then transmits the new real image via the HDMI cable to the display so that the formation of number “3” is displayed on the display as the student draws it.
- the data flow from the display to the receiver to the teacher's desktop, back to the receiver, and then finally back to the display occurs continuously as the student draws on the display, and results in latency of the collaborative content projection system.
- the present invention is directed to systems and methods for generating an overlay image based on user input for at least temporary display to reduce latency on a collaborative platform. For example, in accordance with one aspect of the invention, a method for reducing latency on a collaborative platform is provided.
- the method includes receiving, by a first device, e.g., a receiver, a first real image from a third device, e.g., a moderator device; receiving, by the first device, user input data indicative of user input on the second device, e.g., a display via a USB cable; transmitting, by the first device, the user input data to the third device; determining, by the first device, an overlay image based on the user input data; determining, by the first device, an overlaid image based on the overlay image and the first real image; and transmitting, by the first device, the overlaid image to the second device, e.g., via an HDMI cable, to cause the overlaid image to be displayed on the second device, e.g., via a touchscreen display.
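The receiver-side method summarized above can be sketched as follows. This is a minimal, hypothetical illustration of the claimed steps, with images modeled as sets of pixel coordinates and the transport callbacks (`send_to_moderator`, `send_to_display`) standing in for the WiFi and HDMI links; none of these names come from the patent.

```python
# Hypothetical sketch of the receiver-side flow: the receiver forwards user
# input upstream, but immediately composites a locally generated overlay onto
# the last real image so the display is updated without waiting for the
# moderator's round trip. Images are modeled as sets of pixel coordinates.

def handle_user_input(last_real_image, user_points, send_to_moderator, send_to_display):
    """Return the overlaid image that is pushed to the display."""
    send_to_moderator(user_points)                 # transmit user input data upstream
    overlay = {tuple(p) for p in user_points}      # determine overlay from user input
    overlaid = set(last_real_image) | overlay      # merge overlay and first real image
    send_to_display(overlaid)                      # transmit overlaid image to display
    return overlaid
```

The key point is ordering: the overlaid image reaches the display before the moderator's updated real image can arrive.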
- a portion of the overlay image of the overlaid image may be displayed on the second device for a predetermined period of time.
- the predetermined period of time may be at least as long as the latency on the collaborative platform.
- the overlay image may include a leading end and a trailing end, such that, as the leading end extends on the second device at a rate, the trailing end is removed from the second device at the rate.
- a portion of spatial coordinates of the trailing end may be removed from the second device depending on the latency and/or the speed of the user input data.
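The leading/trailing-end behavior described above can be modeled with a fixed-capacity queue: sizing the capacity from the platform latency and the input sampling rate makes the trailing end retreat at the same rate the leading end advances. The latency and sample-rate constants below are assumed, illustrative values, not from the patent.

```python
from collections import deque

# Sketch of the leading/trailing-end behavior: the overlay stroke is a
# fixed-capacity queue of points. Capacity is sized from the (assumed)
# platform latency and input sampling rate, so each new leading-end point
# evicts one trailing-end point at the same rate.

LATENCY_S = 0.2        # assumed round-trip latency of the platform
INPUT_RATE_HZ = 60     # assumed touch-sample rate

def make_stroke():
    capacity = max(1, int(LATENCY_S * INPUT_RATE_HZ))  # points kept on screen
    return deque(maxlen=capacity)                      # old points auto-evicted

def extend(stroke, point):
    stroke.append(point)       # leading end grows; trailing end is removed
    return list(stroke)        # points currently drawn on the display
```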
- the overlay image determined by the first device may include a first portion of the overlay image indicative of the user input at the second device based on the user input data, and an extended, predicted portion of the overlay image based on the user input data.
- the first device may predict the extended portion of the overlay image based on at least one of spatial or time coordinates of the user input data, e.g., via at least one of extrapolation, machine learning, artificial intelligence, or a neural network.
- the first device may predict the extended portion of the overlay image based on a velocity of the user input data.
- the extended portion of the overlay image may include a curved portion formed of a plurality of finite line segments, such that predicting, by the first device, the extended portion of the overlay image includes predicting the curved portion based on an angle of each finite line segment of the plurality of finite line segments.
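A minimal sketch of the extrapolation-based prediction, assuming timed (x, y, t) samples: the last two samples give a velocity, and the extension is emitted as a chain of finite line segments whose heading is rotated each step by the most recent per-segment turn angle, approximating a curved portion. Function and parameter names are illustrative.

```python
import math

# Sketch of extrapolating the extended portion of the overlay image from the
# spatial and time coordinates of the user input. The last two timed samples
# give a velocity; predicted points are emitted as short finite line segments
# whose heading turns by the recent per-segment angle (curved portion).

def predict_extension(samples, n_segments=5):
    """samples: [(x, y, t), ...]; returns predicted (x, y) points."""
    (x0, y0, t0), (x1, y1, t1) = samples[-2], samples[-1]
    dt = (t1 - t0) or 1e-6
    speed = math.hypot(x1 - x0, y1 - y0) / dt       # velocity magnitude
    heading = math.atan2(y1 - y0, x1 - x0)          # current direction
    turn = 0.0
    if len(samples) >= 3:                           # angle between last two segments
        xp, yp, _ = samples[-3]
        turn = heading - math.atan2(y0 - yp, x0 - xp)
    step = speed * dt                               # length of each finite segment
    pts, x, y = [], x1, y1
    for _ in range(n_segments):
        heading += turn                             # rotate per-segment angle
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        pts.append((x, y))
    return pts
```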
- the user input data may include input type data indicative of at least one of thickness, color, or marker or eraser type.
- the method further includes determining, by the first device, the input type based on the user input data and machine learning.
- the input type may be determined by analyzing a pattern of spatial inputs of the user input data from the second device. Accordingly, the overlay image determined may be determined based on the determined input type.
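As a stand-in for the machine-learning classification described above, a simple heuristic over the pattern of spatial inputs might distinguish a wide eraser footprint from a narrow marker tip. The threshold and labels below are assumptions for illustration only.

```python
# Hedged sketch of inferring input type from the spatial pattern of contact
# points: an eraser is assumed to produce a wide contact footprint, a marker
# a narrow one. The spread threshold is illustrative, not from the patent.

def classify_input(contact_points, eraser_min_spread=20.0):
    xs = [x for x, _ in contact_points]
    ys = [y for _, y in contact_points]
    spread = max(max(xs) - min(xs), max(ys) - min(ys))   # footprint size
    return "eraser" if spread >= eraser_min_spread else "marker"
```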
- the method further may include receiving, by the first device, data indicative of the input type from the third device.
- the first device may receive data indicative of the input type from an application running on the third device via a defined TCP port.
- the first device may receive data indicative of the input type from an operating system running on the third device via a user input back channel (UIBC) extension.
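The patent does not specify a wire format for the input-type side channel. Below is a purely hypothetical sketch of parsing key=value messages received over the defined TCP port; the UIBC-extension path would carry equivalent fields inside the Miracast back channel.

```python
# Hypothetical parser for input-type metadata sent by the application on the
# moderator device over the defined TCP port. The newline-terminated
# key=value;key=value wire format is an assumption for illustration.

def parse_input_type_message(line: bytes) -> dict:
    fields = {}
    for pair in line.decode("ascii").strip().split(";"):
        key, _, value = pair.partition("=")
        fields[key] = value
    return fields
```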
- FIG. 1 A is a block diagram of a collaborative platform in accordance with an illustrative embodiment of the present invention.
- FIG. 1 B is a block diagram of the collaborative platform of FIG. 1 A illustrating various communication mechanisms in accordance with the principles of the present invention.
- FIG. 2 is a diagram of a collaborative platform in an exemplary setting in accordance with one aspect of the present invention.
- FIGS. 3 A- 3 D are schematic views of the exemplary hardware and software components of an exemplary display, receiver, moderator device, and member device, respectively.
- FIG. 4 A is a block diagram of the collaborative platform in accordance with one aspect of the present invention.
- FIG. 4 B is a sequence diagram for using the collaborative platform in accordance with the illustrative embodiment depicted in FIG. 4 A .
- FIG. 5 A is a flow chart illustrating exemplary steps of reducing latency on a collaborative platform in accordance with the principles of the present invention.
- FIG. 5 B is a flow chart illustrating the steps of overlaid image generation of FIG. 5 A .
- FIG. 5 C illustrates overlaid image generation in accordance with the principles of the present invention.
- FIGS. 6 A- 6 E illustrate the steps of reducing latency on a collaborative platform in accordance with the principles of the present invention.
- FIGS. 7 A- 7 D illustrate overlay image prediction generation in accordance with the principles of the present invention.
- FIGS. 8 A and 8 B illustrate user type data collection in accordance with one aspect of the present invention.
- FIG. 9 A is a block diagram of an alternative embodiment of the collaborative platform in accordance with another aspect of the present invention.
- FIG. 9 B is a sequence diagram for using the collaborative platform in accordance with the illustrative embodiment depicted in FIG. 9 A .
- FIG. 10 A is a block diagram of another alternative embodiment of the collaborative platform in accordance with yet another aspect of the present invention.
- FIG. 10 B is a sequence diagram for using the collaborative platform in accordance with the illustrative embodiment depicted in FIG. 10 A .
- FIG. 11 is a flow chart illustrating alternative exemplary steps of reducing latency on a collaborative platform in accordance with the principles of the present invention.
- FIG. 12 A is a block diagram of yet another alternative embodiment of the collaborative platform in accordance with yet another aspect of the present invention.
- FIG. 12 B is a sequence diagram for using the collaborative platform in accordance with the illustrative embodiment depicted in FIG. 12 A .
- a teacher may desire to display a problem to a classroom of students, and have a student solve the problem on the display, such that the student's efforts are visible to the entire classroom.
- the problem may be stored in a computer file on the teacher's computer, and displayed on a main display visible to the classroom of students. A selected student may then perform work on the main display directly, such that their work is visible to the classroom of students.
- it may be advantageous to quickly and easily display an overlay image illustrating the student's work over the original problem on the main display.
- the principles of the present invention described herein may be used in settings other than the classroom, e.g., remotely across a campus or other geographical locations via WiFi or the internet, for conducting other collaborative efforts such as meetings or presentations.
- the present invention is directed to a collaborative platform for use in, for example, a classroom setting or a product presentation meeting, to facilitate presenting materials in real time while reducing latency in the collaborative platform.
- the present invention permits a user to provide user input, such as a marking on an original image being displayed, such that the user input is illustratively overlaid on the original image on a main display almost immediately after the user input is provided, and before a real image is able to be generated by the collaborative platform.
- the collaborative platform involves a main display, a moderator device, one or more member devices, and a receiver in communication with the display, the moderator device, and the one or more member devices.
- the moderator device may be used by a teacher/administrator and may store an original data file, and an original image may be displayed on the main display based on the original data file such that a student may edit the original data file by providing user input via the main display.
- the receiver is configured to run an overlay image generation application which generates an overlay image based on the user input provided by the student via the display, and displays the overlaid image over the original image while the collaborative platform updates the original data file based on the user input data for display on the main display. By displaying the overlaid image before displaying an updated image generated using the real data, the receiver reduces latency in the collaborative platform.
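The compositing step of the overlay image generator can be sketched as a per-pixel merge, with `None` marking transparent overlay pixels that let the original image show through. The row-of-pixels image representation is an assumption for illustration.

```python
# Sketch of the overlay image generator's compositing step: the overlay is
# drawn over the most recent real image pixel by pixel; transparent overlay
# pixels (None) let the original image show through.

def composite(real_image, overlay):
    """Both images are equal-size lists of pixel rows; overlay pixels may be None."""
    return [
        [o if o is not None else r for r, o in zip(r_row, o_row)]
        for r_row, o_row in zip(real_image, overlay)
    ]
```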
- FIG. 1 A is a block diagram of an illustrative collaborative platform constructed in accordance with the principles of the present invention.
- Collaborative platform 100 includes display 105 , receiver 120 , network 101 in which receiver 120 serves as the hub, moderator device 130 to be used by the moderator client, e.g., a teacher, and optionally, one or more member devices 140 to be used by the one or more member clients, e.g., students.
- Receiver 120 may be a ScreenBeam® Wireless Display Kit, available from Actiontec Electronics, Inc., Sunnyvale, Calif. In one preferred embodiment, receiver 120 is Miracast® aware and compatible.
- Although three member devices 140 are depicted in FIG. 1A, as a person having ordinary skill in the art will understand, fewer or more than three member devices may be used in collaborative platform 100.
- network 101 may be based on wireless communication, such that moderator device 130 and member devices 140 interact with receiver 120 over WiFi or the internet.
- Network 101 may be a local peer-to-peer network, for example, a Wi-Fi peer-to-peer interface.
- Display 105 may be any suitable computing device, e.g., a touchscreen device, and provides an interface for presenting information received from receiver 120 to external systems, users, or memory, as well as for collecting user input directly via the interface of display 105 , e.g., via touch sensors embedded on the interface.
- display 105 may comprise multiple individual displays, and even may constitute the displays associated with each of member devices 140 and/or moderator device 130 .
- moderator device 130 and member devices 140 each may be any suitable computing device as described above, e.g., a touchscreen device.
- Receiver 120 may be coupled to display 105 , by one or more wired connections.
- receiver 120 and display 105 may connect using a universal serial bus (USB) cable for communicating user input data
- receiver 120 and display 105 may connect using a high-definition multimedia interface (HDMI) cable for communicating image(s).
- receiver 120 and display 105 may connect using a wireless connection such as Bluetooth. Accordingly, receiver 120 receives an original image, e.g., still images, from moderator device 130 via WiFi, and passes the original image along to display 105 via the HDMI cable, where it is illustratively displayed.
- the local display of moderator device 130 and display 105 may display the same information (e.g., the same graphics, video, image, chart, presentation, document, program, application, window, view, etc.).
- receiver 120 receives user input data indicative of user input, from display 105 via the USB cable and/or a wireless connection such as Bluetooth, and passes along the user input data provided by display 105 to moderator device 130 via WiFi for processing.
- Moderator device 130 processes the user input data provided by display 105 , and modifies the original image stored in its memory based on the user input data received to generate an image for redistribution to receiver 120 via WiFi, and ultimately to display 105 via receiver 120 .
- the path of data flow (user input data from display 105 to receiver 120 via USB and/or Bluetooth; user input data from receiver 120 to moderator device 130 via WiFi; generation of a real image based on the user input data by moderator device 130; the real image from moderator device 130 to receiver 120 via WiFi; and the real image from receiver 120 to display 105 via HDMI) will suffer from a time delay due to latency of the content projection system.
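Rough, assumed per-hop numbers make the benefit of the overlay path concrete: it skips both WiFi hops and the moderator's rendering, so the perceived delay is a fraction of the full round trip. All values below are illustrative, not from the patent.

```python
# Illustrative latency arithmetic comparing the full round trip through the
# moderator device against the receiver-local overlay path. All hop delays
# are assumed example values in milliseconds.

ROUND_TRIP_MS = {
    "display->receiver (USB)": 5,
    "receiver->moderator (WiFi)": 30,
    "moderator renders real image": 40,
    "moderator->receiver (WiFi)": 30,
    "receiver->display (HDMI)": 10,
}
OVERLAY_MS = {
    "display->receiver (USB)": 5,
    "receiver generates overlay": 8,
    "receiver->display (HDMI)": 10,
}

round_trip = sum(ROUND_TRIP_MS.values())   # perceived delay without overlay
overlay = sum(OVERLAY_MS.values())         # perceived delay with overlay
```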
- moderator device 130 may designate member device 140 as the moderator as described in U.S. patent application Ser. No. 14/986,468, the entire contents of which is incorporated by reference herein. Accordingly, moderator device 130 may elect to share the screen of member device 140 on display 105 , such that user input provided by a user on display 105 will be transmitted to member device 140 to modify the original file stored in the memory of member device 140 .
- receiver 120 may be incorporated into moderator device 130 .
- receiver 120 may be incorporated into a laptop serving as moderator device 130 .
- any suitable arrangement of receiver 120 and display 105 may be employed.
- receiver 120 and display 105 may be separate components or be combined into a single device.
- FIG. 2 depicts an embodiment of collaborative platform 100 constructed in accordance with the principles of the present invention for use in a classroom setting.
- main display 105 is visible to the classroom of students and includes input/output device(s) 110 , e.g., a touchscreen, such that a student can directly provide user input to display 105 in communication with receiver 120 .
- a student can directly provide user input to member device 140 via input/output device(s) 145 in communication with receiver 120 , which will then be displayed on display 105 .
- the teacher's desktop computer is designated as moderator device 130 having input/output device(s) 135 , e.g., a touchscreen, while wireless tablets located at each student's desk serve as member devices 140 having input/output device(s) 145 , e.g., a touchscreen.
- moderator device 130 and member devices 140 wirelessly communicate with receiver 120 .
- collaborative platform 100 may be used across multiple classrooms and/or other collaborative work environment settings.
- moderator device 130 may be in a first classroom having a first display and a first plurality of member devices, and moderator device 130 may communicate, e.g., via WiFi, with a second display and a second plurality of member devices in a second classroom.
- a student in the second classroom may modify an image displayed on the second display, thereby modifying the original file stored on moderator device 130 in the first classroom, such that the modification to the image is visible on the first and second displays in the first and second classrooms.
- FIGS. 3 A- 3 D exemplary functional blocks representing the hardware and software components of display 105 , receiver 120 , moderator device 130 , and member device 140 , respectively, are provided.
- hardware and software components of display 105 may include processing unit 106, memory 107, storage 111, communication unit 108, power source 109, and input/output (I/O) device(s) 110.
- Processing unit 106 may be one or more processors configured to run operating system 112 and perform the tasks and operations of display 105 set forth herein.
- Memory 107 may include, but is not limited to, volatile (e.g. random-access memory (RAM)), non-volatile (e.g. read-only memory (ROM)), flash memory, or any combination thereof.
- Communication unit 108 may be any well-known communication infrastructure facilitating communication over any well-known wired or wireless connection. For example, communication unit 108 may transmit information, e.g., user input data, to receiver 120 of collaborative platform 100 via a USB cable and/or a wireless connection such as Bluetooth, and may receive information, e.g., an image, from receiver 120 via an HDMI cable.
- Power source 109 may be a battery or may connect display 105 to a wall outlet or any other external source of power.
- Storage 111 may include, but is not limited to, removable and/or non-removable storage such as, for example, magnetic disks, optical disks, or tape.
- the input device of I/O device(s) 110 may be one or more devices coupled to or incorporated into display 105 for inputting data to display 105 .
- the input device of I/O device 110 may be a touch input device (e.g., touch pad or touch screen) or an array of location sensors, configured to receive user input from the user and generate user input data indicative of the user input.
- the input device of I/O device 110 may work in conjunction with a smart stylus that interacts with the array of location sensors.
- the output device of I/O device 110 may be any device coupled to or incorporated into display 105 for outputting or otherwise displaying images. Accordingly, I/O device(s) 110 may be a touchscreen for receiving and displaying images.
- Operating system 112 may be stored in storage 111 and executed on processing unit 106 . Operating system 112 may be suitable for controlling the general operation of display 105 to achieve the functionality of display 105 described herein. Display 105 may also optionally run a graphics library, other operating systems, and/or any other application programs. It of course is understood that display 105 may include additional or fewer components than those illustrated in FIG. 3 A and may include more than one of each type of component.
- receiver 120 may include processing unit 121, memory 122, storage 126, communication unit 123, power source 124, and input/output (I/O) device(s) 125.
- Processing unit 121 may be one or more processors configured to run operating system 127 , collaborative application 128 , and overlay image generator application 129 and perform the tasks and operations of receiver 120 set forth herein.
- Memory 122 may include, but is not limited to, volatile (e.g. random-access memory (RAM)), non-volatile (e.g. read-only memory (ROM)), flash memory, or any combination thereof.
- Communication unit 123 may be any well-known communication infrastructure facilitating communication over any well-known wired or wireless connection.
- communication unit 123 may receive information, e.g., user input data from display 105 via a USB cable and/or a wireless connection such as Bluetooth, and real images from moderator device 130 via WiFi, and may transmit information, e.g., image(s), to display 105 via an HDMI cable. Moreover, communication unit 123 may communicate both user input data and images to moderator device 130 and/or member devices 140 via network 101 , e.g., WiFi. In accordance with one aspect of the present invention, communication unit 123 may receive information, e.g., data indicative of one or more user types of the user input from moderator device 130 via, e.g., a defined TCP port or a UIBC extension.
- Power source 124 may be a battery or may connect receiver 120 to a wall outlet or any other external source of power.
- Storage 126 may include, but is not limited to, removable and/or non-removable storage such as, for example, magnetic disks, optical disks, or tape.
- the input device of I/O device(s) 125 may be one or more devices coupled to or incorporated into receiver 120 for inputting data to receiver 120 .
- the output device of I/O device(s) 125 may be any device coupled to or incorporated into receiver 120 for outputting or otherwise displaying images.
- Collaboration application 128 may be stored in storage 126 and executed on processing unit 121 .
- Collaboration application 128 may be a software application and/or software modules having one or more sets of instructions suitable for performing the operations of receiver 120 set forth herein, including facilitating the exchange of information with moderator device 130.
- collaboration application 128 may cause receiver 120 to receive user input data from display 105 via communication unit 123 , e.g., via a USB cable and/or a wireless connection such as Bluetooth, and to pass along the user input data to moderator device 130 via communication unit 123 , e.g., via WiFi.
- collaboration application 128 further may cause receiver 120 to receive real images from moderator device 130 via communication unit 123, e.g., via WiFi.
- collaboration application 128 may cause receiver 120 to receive data indicative of one or more user types from moderator device 130 via communication unit 123 , e.g., a defined TCP port or a modified user input back channel (UIBC), as described in further detail below.
- Overlay image generator application 129 may be stored in storage 126 and executed on processing unit 121 .
- Overlay image generator application 129 may be a software application and/or software modules having one or more sets of instructions suitable for performing the operations of receiver 120 set forth herein, including facilitating the exchange of information with display 105 , moderator device 130 , and member devices 140 .
- overlay image generator application 129 may cause processing unit 121 of receiver 120 to process and analyze the user input data received from display 105 via collaboration application 128 and generate an overlay image based on the user input data, and generate an overlaid image based on the overlay image, and to transmit the overlaid image to display 105 for display via communication unit 123 , e.g., via an HDMI cable.
- overlay image generator application 129 may cause receiver 120 to derive one or more user types based on the user input data received from display 105 via collaboration application 128 , such that the overlay image is also generated based on the user type, as described in further detail below.
- overlay image generator application 129 may cause receiver 120 to generate an overlay image based on the data indicative of one or more user types received from moderator device 130 via communication unit 123 , e.g., a defined TCP port, instead of deriving one or more user types based on the user input data received from display 105 , as described in further detail below.
- overlay image generator application 129 may cause receiver 120 to generate an overlay image based on the data indicative of one or more user types received from moderator device 130 via communication unit 123 , e.g., a modified user input back channel (UIBC), instead of deriving one or more user types based on the user input data received from display 105 , as described in further detail below.
- Operating system 127 may be stored in storage 126 and executed on processing unit 121 . Operating system 127 may be suitable for controlling the general operation of receiver 120 and may work in concert with overlay image generator application 129 to achieve the functionality of receiver 120 described herein. Receiver 120 may also optionally run a graphics library, other operating systems, and/or any other application programs. It of course is understood that receiver 120 may include additional or fewer components than those illustrated in FIG. 3 B and may include more than one of each type of component.
- moderator device 130 may include processing unit 131, memory 132, storage 136, communication unit 133, power source 134, and input/output (I/O) device(s) 135.
- Processing unit 131 may be one or more processors configured to run operating system 137 , collaboration application 138 , and optional overlay image application 139 and perform the tasks and operations of moderator device 130 set forth herein.
- Memory 132 may include, but is not limited to, volatile (e.g. random-access memory (RAM)), non-volatile (e.g. read-only memory (ROM)), flash memory, or any combination thereof.
- Communication unit 133 may be any well-known communication infrastructure facilitating communication over any well-known wired or wireless connection. For example, communication unit 133 may receive information, e.g., user input data, from receiver 120 via WiFi, and may transmit information, e.g., image(s), to receiver 120 via WiFi.
- Power source 134 may be a battery or may connect moderator device 130 to a wall outlet or any other external source of power.
- Storage 136 may include, but is not limited to, removable and/or non-removable storage such as, for example, magnetic disks, optical disks, or tape.
- the input device of I/O device(s) 135 may be one or more devices coupled to or incorporated into moderator device 130 for inputting data to moderator device 130 .
- the input device of I/O device 135 may be a touch input device (e.g., touch pad or touch screen) or an array of location sensors, configured to receive user input from the user and generate user input data indicative of the user input.
- the input device of I/O device 135 may work in conjunction with a smart stylet that interacts with the array of location sensors.
- the output device of I/O device 135 may be any device coupled to or incorporated into moderator device 130 for outputting or otherwise displaying images. Accordingly, I/O device(s) 135 may be a touchscreen for receiving and displaying images.
- Collaboration application 138 may be stored in storage 136 and executed on processing unit 131 .
- Collaboration application 138 may be a software application and/or software modules having one or more sets of instructions suitable for performing the operations of moderator device 130 set forth herein, including facilitating the exchange of information with receiver 120.
- collaboration application 138 may cause moderator device 130 to transmit a first real image from an original image file stored on storage 136 to receiver 120 via communication unit 133 , e.g., via WiFi, for display via display 105 .
- collaboration application 138 may cause moderator device 130 to receive user input data from receiver 120 via communication unit 133 , e.g., via WiFi.
- Collaboration application 138 further may cause processing unit 131 to process and analyze the user input data received from receiver 120 and to modify the original image file stored on storage 136 by generating a real image based on the user input data, and to store the real image on storage 136 . Additionally, collaboration application 138 may cause moderator device 130 to transmit the real image, e.g., the real image stored on storage 136 , to receiver 120 via communication unit 133 , e.g., via WiFi, for display via display 105 .
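The image-modification step above can be sketched as follows; the pixel-map representation and the `generate_real_image` helper are illustrative assumptions rather than the actual file format used by collaboration application 138:

```python
# Hypothetical sketch: the original image file as a pixel map, with the
# stroke from the user input data rendered onto it to form the real image.
def generate_real_image(original, stroke, color="red"):
    """original: dict mapping (x, y) -> color; stroke: list of (x, y)
    coordinates from the user input data. Returns a new image with the
    stroke superimposed on the original, leaving the original intact."""
    real = dict(original)
    for xy in stroke:
        real[xy] = color
    return real
```

In this sketch the real image stored on storage 136 would simply be the returned map; the actual application presumably re-encodes and streams the modified frame.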
- Optional overlay image application 139 may be stored in storage 136 and executed on processing unit 131 .
- Overlay image application 139 may be a software application and/or software modules having one or more sets of instructions suitable for performing the operations of moderator device 130 set forth herein, including facilitating the exchange of information with receiver 120 .
- overlay image application 139 may cause processing unit 131 of moderator device 130 to derive user type data indicative of one or more user types from the user input data received by moderator device 130 through collaboration application 138 , and to transmit the user type data to receiver 120 via communication unit 133 , e.g., via a defined TCP port.
- Operating system 137 may be stored in storage 136 and executed on processing unit 131 . Operating system 137 may be suitable for controlling the general operation of moderator device 130 and may work in concert with collaboration application 138 and optional overlay image application 139 to achieve the functionality of moderator device 130 described herein. Moderator device 130 may also optionally run a graphics library, other operating systems, and/or any other application programs. It of course is understood that moderator device 130 may include additional or fewer components than those illustrated in FIG. 3 C and may include more than one of each type of component.
- operating system 137 may cause processing unit 131 of moderator device 130 to derive user type data indicative of one or more user types from the user input received by moderator device 130 through collaboration application 138 , and to transmit the user type data to receiver 120 via communication unit 133 , e.g., via a modified user input back channel (UIBC).
- hardware and software components of one or more member devices 140 may include processing unit 141, memory 142, storage 146, communication unit 143, power source 144, and input/output (I/O) device(s) 145.
- Processing unit 141 may be one or more processors configured to run operating system 147 , collaboration application 148 , and optional overlay image application 149 and perform the tasks and operations of member device 140 set forth herein.
- Memory 142 may include, but is not limited to, volatile (e.g. random-access memory (RAM)), non-volatile (e.g. read-only memory (ROM)), flash memory, or any combination thereof.
- Communication unit 143 may be any well-known communication infrastructure facilitating communication over any well-known wired or wireless connection. For example, communication unit 143 may transmit information, e.g., user input data, to receiver 120 of collaborative platform 100 via WiFi, and may receive information, e.g., image(s), from receiver 120 via WiFi.
- Power source 144 may be a battery or may connect member device 140 to a wall outlet or any other external source of power.
- Storage 146 may include, but is not limited to, removable and/or non-removable storage such as, for example, magnetic disks, optical disks, or tape.
- the input device of I/O device(s) 145 may be one or more devices coupled to or incorporated into member device 140 for inputting data to member device 140 .
- the input device of I/O device 145 may be a touch input device (e.g., touch pad or touch screen) or an array of location sensors, configured to receive user input from the user and generate user input data indicative of the user input.
- the input device of I/O device 145 may work in conjunction with a smart stylet that interacts with the array of location sensors.
- the output device of I/O device 145 may be any device coupled to or incorporated into member device 140 for outputting or otherwise displaying images. Accordingly, I/O device(s) 145 may be a touchscreen for receiving and displaying images.
- Collaboration application 148 may be stored in storage 146 and executed on processing unit 141 .
- Collaboration application 148 may be a software application and/or software modules having one or more sets of instructions suitable for performing the operations of member device 140 set forth herein, including facilitating the exchange of information with receiver 120.
- collaboration application 148 may cause member device 140 to transmit user input data received via the input device of I/O device(s) 145 to receiver 120 via communication unit 143 , e.g., via WiFi, for further transmission to moderator device 130 .
- collaboration application 148 may cause member device 140 to receive image(s) from receiver 120 via communication unit 143, e.g., via WiFi, for display via the output device of I/O device(s) 145.
- Optional overlay image application 149 may be stored in storage 146 and executed on processing unit 141 .
- Overlay image application 149 may be a software application and/or software modules having one or more sets of instructions suitable for performing the operations of member device 140 set forth herein, including facilitating the exchange of information with receiver 120 .
- overlay image application 149 may operate similar to overlay image application 139 .
- Operating system 147 may be stored in storage 146 and executed on processing unit 141 . Operating system 147 may be suitable for controlling the general operation of member device 140 and may work in concert with collaboration application 148 and optional overlay image application 149 to achieve the functionality of member device 140 described herein. Member device 140 may also optionally run a graphics library, other operating systems, and/or any other application programs. It of course is understood that member device 140 may include additional or fewer components than those illustrated in FIG. 3 D and may include more than one of each type of component.
- user input data may be transmitted from display 105 to receiver 120 via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth.
- user input data and the real images may be communicated between receiver 120 and moderator device 130 across a wireless connection, e.g., WiFi.
- the overlaid image based on the real image and the overlay image may be transmitted from receiver 120 to display 105 via a wired connection, e.g., an HDMI cable.
- collaboration platform 100 may run a collaboration application, e.g., third party application such as Microsoft Whiteboard available from Microsoft, Redmond, Wash., or Google Drive available from Google LLC, Mountain View, Calif., for displaying a first real image based on an original image file stored on moderator device 130 , receiving user input, modifying the original image file stored on moderator device 130 based on the user input, and displaying a second real image based on the modified original image file.
- a user may provide user input directly to display 105 , e.g., a touchscreen.
- a first real image may already be displayed on display 105 , e.g., a math problem, from an original image file stored on moderator device 130 , or display 105 may initially be blank if the original image file stored on moderator device 130 is blank.
- the user input may be a pattern of interactions (e.g., clicks and drags) with the touchscreen of display 105 forming, e.g., a number “3” in the color red.
- the shape forming the number “3” is an example of the user input, while the color red is an example of a user type of the user input.
- Other possible user types may include, for example, different colors (e.g., gray, black, red, blue, etc.), thickness level (e.g., thin, normal, thick), or marker or eraser type, etc.
- User input data based on the user input received by display 105 is then transmitted via wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth, to receiver 120 , which then passes along the user input data to moderator device 130 via a wireless connection, e.g., WiFi.
- moderator device 130 modifies the original image file stored in memory therein based on the user input data, and generates a real image file corresponding to a real image, e.g., where the red “3” is superimposed on the math problem.
- the real image is then transmitted to receiver 120 via a wireless connection, e.g., WiFi, which then passes along the real image to display 105 via a wired connection, e.g., an HDMI cable, to be displayed.
- the collaborative platform does not wait for, e.g., the entire number “3,” to be drawn before generating the real image; instead, this process occurs continuously as the user draws the number “3.”
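The round trip described above can be sketched as a simple latency budget; the hop names follow the data path in the preceding paragraphs, but the millisecond figures below are illustrative assumptions, not measured values:

```python
# Hypothetical latency budget for the data path described above.
# Hop names follow the document; millisecond values are assumptions.
HOPS = [
    ("display->receiver (USB/Bluetooth)", 5),   # user input data out
    ("receiver->moderator (WiFi)", 40),         # user input data forwarded
    ("moderator processing", 30),               # modify image file, render
    ("moderator->receiver (WiFi)", 40),         # real image sent back
    ("receiver->display (HDMI)", 5),            # real image displayed
]

def round_trip_latency_ms(hops=HOPS):
    """Total delay before a stroke drawn on display 105 reappears as
    part of the real image generated by moderator device 130."""
    return sum(ms for _, ms in hops)

print(round_trip_latency_ms())  # 120 ms under these assumed figures
```

Under these assumptions the user would wait roughly 120 ms to see their own stroke, which is the delay the overlay image generator application is intended to mask.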
- collaboration platform 100 may run an overlay image generator application for generating an overlay image by receiver 120 based on the user input provided by the user, generating an overlaid image based on the overlay image and the real image received by moderator device 130 , and displaying the overlaid image on the original image on display 105 to reduce latency of collaboration platform 100 .
- receiver 120 may generate an overlay image based on the user input data, generate an overlaid image based on the overlay image and the real image received from moderator device 130, and transmit the overlaid image via a wired connection, e.g., an HDMI cable, to display 105 to be displayed over the original image displayed on display 105, thereby reducing latency of collaboration platform 100.
- receiver 120 may determine the user type of the user input by deriving data indicative of the user type from the user input data received from display 105 using, e.g., machine learning, artificial intelligence, or a neural network, as described in further detail below with regard to FIGS. 7 A and 7 B. Accordingly, receiver 120 may generate the overlay image based on both the user input data and the user type it determines.
- receiver 120 of collaborative platform 100 may be used to generate an overlay image based on user input, and generate an overlaid image based on the overlay image and the real image received from moderator device 130 such that the overlaid image is displayed, thereby reducing latency of collaborative platform 100 .
- an original image is received by receiver 120 .
- the original image may be received from moderator device 130 and may include, e.g., a blank screen, a math problem, a picture, etc.
- receiver 120 sets the original image received from moderator device 130 as a current image. This may involve decoding an original image and/or placing an original image in a buffer.
- user input data indicative of user input may be received by receiver 120 , e.g., via a USB cable and/or a wireless connection such as Bluetooth, from display 105 .
- the user type of the user input may be set to preprogrammed default settings, e.g., default color (gray), default thickness (normal), and default marker user type, until optionally changed by the user as described with regards to steps 504 to 506 .
- if receiver 120 receives user input data from display 105 at step 502, the process may proceed to step 503. If receiver 120 does not receive user input data from display 105 at step 502, the process may proceed directly to step 508, described in further detail below.
- receiver 120 running the collaboration application, transmits the user input data to the source of the original image, e.g., moderator device 130 , for further processing and analysis.
- moderator device 130 generates real image(s) based on the user input data received from receiver 120 .
- receiver 120 running the overlay image generation application generates an overlay image based on the user input data for immediate display.
- receiver 120 analyzes the user input data received from display 105 at step 502 to determine if the at least one user type changed. For example, receiver 120 may compare the user input's spatial location on display 105 as well as the physical contact with display 105 at various points of time to determine, using, e.g., machine learning, artificial intelligence, or a neural network, whether the user has selected a different user type. If receiver 120 determines that a different user type has not been selected, e.g., the user has not clicked on a different user type icon, at step 505, receiver 120 will continue using the previous user type, e.g., the color gray.
- if receiver 120 determines that a different user type has been selected, e.g., the user selected the color red, based on the spatial location of the user input and the fact that the user discontinued contact with display 105 and re-contacted display 105 at that specific spatial location on display 105, then at step 506, receiver 120 selects the new user type, e.g., the color red.
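The user-type determination at steps 505 and 506 might be approximated by a hit-test heuristic like the following sketch; the icon regions, coordinates, and `detect_user_type` helper are hypothetical, and the document contemplates machine learning, artificial intelligence, or a neural network rather than this fixed rule:

```python
# Hypothetical toolbar icon regions as (x0, y0, x1, y1) bounding boxes.
TYPE_ICONS = {
    "red": (0, 0, 50, 50),
    "blue": (0, 50, 50, 100),
    "thick": (0, 100, 50, 150),
}

def detect_user_type(tap_xy, contact_resumed, current_type="gray"):
    """If the user lifted the stylet and re-contacted the display
    (contact_resumed) inside an icon region, select that user type;
    otherwise keep the previous one, mirroring steps 505 and 506."""
    if contact_resumed:
        x, y = tap_xy
        for user_type, (x0, y0, x1, y1) in TYPE_ICONS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                return user_type
    return current_type
```

A tap inside the assumed red icon region would switch the user type to red, while a re-contact elsewhere on the canvas would leave the previous user type in effect.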
- receiver 120 generates a leading end of an overlay image based on the user input data received at step 502, as described in further detail with regard to FIG. 5 B, as well as the user type selected at step 505 or 506, if any.
- the overlay image may be generated based on the user input data and a default user type, e.g., a default color and/or default line thickness, and thus step 507 may be initiated after step 503 without steps 504 to 506 .
- the overlay image generated will be representative of the user's actual input provided by the user, and further may include predicted user input based on the user's actual input.
- receiver 120 to generate an overlay image based on the user input data and optionally the user type, at step 511 , receiver 120 generates a first portion of the overall overlay image which is representative of the user's actual input received by receiver 120 , e.g., via a USB cable and/or a wireless connection such as Bluetooth, from display 105 . Accordingly, the first portion of the overlay image, when displayed on display 105 as an overlaid image, will illustrate what the user actually inputted on display 105 .
- receiver 120 generates a second, extended portion of the overall overlay image, which may be a prediction of the user's intended input based on the user input data received by receiver 120 , e.g., via a USB cable and/or a wireless connection such as Bluetooth, from display 105 .
- receiver 120 may analyze the spatial coordinates and/or the time coordinates of the user's input from the user input data to predict the user's intended input, e.g., what the user's next input will be, as described in further detail below.
- receiver 120 generates an overlay image based on the first and second, extended portions of the overlay image, such that the overlay image will include what the user actually inputted on display 105 and what the user is predicted to input on display 105 .
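Steps 511 through 513 can be illustrated with a minimal linear-extrapolation sketch; the `build_overlay` helper and the extrapolation horizon are assumptions, as the document also contemplates machine-learning-based prediction:

```python
# Minimal sketch of steps 511-513: the first portion replays the actual
# touch points, and the extended portion linearly extrapolates the last
# observed segment to predict the user's intended input.
def build_overlay(points, horizon=3):
    """points: list of (x, y) actual input samples from the display.
    Returns (first_portion, extended_portion)."""
    first = list(points)
    if len(points) < 2:
        return first, []          # too little data to extrapolate
    (x0, y0), (x1, y1) = points[-2], points[-1]
    dx, dy = x1 - x0, y1 - y0     # last observed per-sample velocity
    extended = [(x1 + dx * k, y1 + dy * k) for k in range(1, horizon + 1)]
    return first, extended

first, ext = build_overlay([(0, 0), (1, 2), (2, 4)])
# ext continues the (1, 2) per-step trend: [(3, 6), (4, 8), (5, 10)]
```

The first portion corresponds to line 516 in FIG. 5 C and the extended portion to line 517, under this assumed constant-velocity model.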
- receiver 120 may remove a portion of the trailing end of the overlay image as receiver 120 generates the leading end of an overlay image.
- the portion of the overlay image of the overlaid image displayed on display 105 may be removed as a function of time, or as a function of the spatial amount of overlay image of the overlaid image displayed on display 105 at a given time.
- each spatial coordinate of the overlay image of the overlaid image displayed on display 105 may remain displayed for a predetermined amount of time, e.g., 100 to 300 milliseconds or more.
- each spatial coordinate that makes up the overlay image of the overlaid image on display 105 may remain on display 105 for the same amount of time, and may be removed after that time has lapsed.
- Each spatial coordinate of the overlay image is initially displayed on display 105 at the leading end of the overlay image of the overlaid image, and as time lapses and additional spatial coordinates are displayed, the initial leading spatial coordinate ends up being at the trailing end of the overlay image of the overlaid image before it is removed, e.g., after the predetermined amount of time has lapsed.
- the predetermined amount of time that each spatial coordinate is displayed may be at least as long as the latency period of the real image to be received by and appear on display 105. Accordingly, for a given amount of spatial coordinates displayed on display 105 within a predetermined time period, the same amount of spatial coordinates will be removed from display 105 within the same predetermined time period.
- the portion of the overlay image of the overlaid image displayed on display 105 may have a maximum spatial distribution, e.g., length between the leading end and the trailing end of the overlay image of the overlaid image and/or amount of spatial coordinates, for a given amount of time.
- the initial spatial coordinate of the overlay image of the overlaid image will be removed from display 105 when the amount of additional spatial coordinates displayed on display 105 exceeds the predetermined maximum amount of spatial coordinates permitted on display 105 .
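The time-based trailing-end removal described above can be sketched as follows; the per-coordinate hold time is an assumed value within the 100 to 300 millisecond range mentioned earlier:

```python
HOLD_MS = 200  # assumed per-coordinate display time (100-300 ms range)

def prune_overlay(stamped_points, now_ms, hold_ms=HOLD_MS):
    """stamped_points: list of ((x, y), t_ms) in drawing order. Keeps
    only coordinates younger than hold_ms, so the oldest (trailing)
    points are removed first while the leading end keeps growing."""
    return [(pt, t) for pt, t in stamped_points if now_ms - t < hold_ms]
```

Calling this on every frame would make each coordinate persist for the hold time and then fall off the trailing end, to be replaced on screen by the arriving real image.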
- if receiver 120 does not receive user input data from display 105 at step 502, then at step 508, no additional leading end will be added to the overlay image, e.g., when the user removes their stylet/finger from display 105 such that no additional user input is provided to display 105. Meanwhile, a portion of the trailing end of the overlay image will gradually be removed and replaced with the current real images received from moderator device 130 until, e.g., the overlay image of the overlaid image displayed on display 105 is completely replaced by the current image or additional user input is received by receiver 120 from display 105 at step 502.
- receiver 120 generates an overlaid image based on the overlay image generated at step 507 and the current image set at step 501 .
- the overlaid image generated will be representative of the user's actual input provided by the user, and further may include predicted user input based on the user's actual input, superimposed on the current image.
- the overlay image may be superimposed on the real image to form the overlaid image, as described with regard to FIG. 5 C below, which may then be sent by receiver 120 to display 105. Accordingly, no latency of collaborative platform 100 is perceived on display 105 as the predicted portion of the overlaid image is displayed seemingly simultaneously with the user's input.
- the current image may be periodically updated as receiver 120 receives additional images (e.g., real images) from moderator device 130 .
- a received additional image may be decoded and/or added to a buffer and may become the current image.
- the overlaid image generated by receiver 120 may be superimposed on the updated current image.
- the overlay image may be superimposed on the real image to form the overlaid image.
- the real image may include line 515 , generated by moderator device 130 based on user input data corresponding to user input received by receiver 120 from display 105 .
- Line 515 represents what the user actually draws on display 105 , but only includes that much which has been generated by moderator device 130 based on the user input data.
- the user's actual input in real-time may be at another point on display 105 as denoted by stylet 700 .
- the overlay image generated by receiver 120 includes first portion 516 , which is representative of the user's actual input received by receiver 120 , and second, extended portion 517 , which may be a prediction of the user's intended input based on the user input data received by receiver 120 .
- the overlay image may be superimposed on the real image, e.g., line 515 , to form the overlaid image, e.g., lines 515 , 516 , and 517 .
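The superimposition of the overlay image on the real image can be sketched as a simple composition in which the overlay wins on overlap; the pixel-map representation below is a hypothetical simplification of the actual rendering:

```python
def compose(real_pixels, overlay_pixels):
    """Each argument maps (x, y) -> color. The overlay is drawn on top,
    so the locally generated stroke (lines 516 and 517) is never hidden
    by the slower real image (line 515) from the moderator device."""
    overlaid = dict(real_pixels)
    overlaid.update(overlay_pixels)
    return overlaid
```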
- portions of the overlay image may be removed as a function of, e.g., time, and thus, as the real image grows, e.g., the “3” is being drawn, line 515 gets longer, while lines 516 and 517 of the overlay image may be displayed only toward the growing leading end of line 515 of the overlaid image, as shown in FIG. 5 C .
- because the overlay image may further be generated based on the speed of the user input, the overlay image, e.g., lines 516 and 517, may be displayed as longer lines when the user input is received faster by display 105, and as shorter lines when the user input is received more slowly by display 105.
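The speed-dependent overlay length might be modeled as the distance the stylet covers within a fixed display window; the window length and maximum below are assumptions:

```python
def overlay_length(speed_px_per_s, window_ms=200, max_len=40):
    """Length in pixels of the displayed overlay segment: the distance
    covered at the given input speed within an assumed display window,
    clamped to an assumed maximum length."""
    return min(max_len, speed_px_per_s * window_ms // 1000)
```

Under this assumed model, a fast stroke yields a longer overlay segment up to the clamp, while a slow stroke yields a short one.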
- receiver 120 transmits the overlaid image, e.g., the first and second, extended portions of the overlaid image superimposed on the current image, to display 105 , thereby reducing and/or eliminating latency of collaborative platform 100 .
- an additional real image corresponding to additional user input data from display 105 may be received by receiver 120 from moderator device 130 and set as an additional current image, and an additional overlaid image may be generated by receiver 120 based on the overlay image created from the additional user input data and superimposed on the additional current image.
- the user input provided by the user is illustrated in conjunction with the display of the overlaid image generated by receiver 120 to illustrate the latency of the real image.
- the original image displayed on display 105, e.g., a touchscreen, may be blank, and the user may use stylet 700 to interact with display 105 by pressing stylet 700 against display 105 at point 605.
- the user drags stylet 700 from point 605 to point 606 on display 105 .
- the dragging motion of stylet 700 by the user, i.e., the user input, is converted to user input data by display 105 and transmitted to receiver 120.
- An overlaid image is then generated by receiver 120 based on the user input data (and optionally the user type) and the real image received from moderator device 130 , and transmitted to display 105 and displayed.
- the overlaid image may be formed by an overlay image superimposed on the real image, whereas the overlay image includes a first portion representative of the user's actual input received by display 105 , and a second, extended portion, which may be a prediction of the user's intended input based on the user input data received by display 105 .
- the real image is still the blank original image, and thus, the overlaid image appears to only include overlay image 701 of the overlaid image. Accordingly, latency is reduced on collaborative platform 100 as the overlaid image is displayed almost immediately after the user drags stylet 700 from point 605 to 606 , and thus, is hardly noticeable by the user or other observers looking at display 105 .
- the latency of the collaboration application of collaboration platform 100 is illustrated in FIG. 6 C .
- the user continues to drag stylet 700 from point 606 to point 607 .
- the user input is continuously converted to user input data by display 105 and transmitted to receiver 120 , which is then continuously transmitted to moderator device 130 via a wireless connection, e.g., WiFi, for processing.
- moderator device 130 modifies the original image stored in memory thereof based on the user input data, and generates a real image representing the user input, e.g., the dragging motion of stylet 700 by the user on display 105 .
- when stylet 700 is at point 607, moderator device 130 has only processed the user input data representing the user's dragging motion of stylet 700 from point 605 to point 606, and accordingly generates a real image, e.g., real image 702, representing the user's input.
- the real image generated by moderator device 130 is then transmitted to receiver 120 .
- receiver 120 generates an overlaid image, which includes overlay image 701 , e.g., the first portion representative of the user's actual input received by display 105 , and the predicted second, extended portion representative of the user's intended input, superimposed on real image 702 .
- the overlaid image is then transmitted to display 105 via a wired connection, e.g., an HDMI cable, to be displayed.
- As shown in FIG. 6 C, because the data flow of the collaboration application requires the user input data to be transmitted via a wired connection from display 105 to receiver 120 and via a wireless connection from receiver 120 to moderator device 130, and the real image to be transmitted via a wireless connection from moderator device 130 to receiver 120 and ultimately via a wired connection from receiver 120 to display 105, undesirable latency of collaboration platform 100 is observed.
- This is illustrated in FIG. 6 C as real image 702 being displayed with a delay behind overlay image 701 .
- in FIG. 6C, overlay image 701 appears as a mark from point 605 to immediately adjacent point 607 , while real image 702 has only reached point 606 .
- As shown in FIG. 6D, when stylet 700 is at point 608 , overlay image 701 appears as a mark from point 605 to immediately adjacent point 608 , while real image 702 has only reached point 607 .
- FIG. 6E illustrates display 105 after a period of time corresponding to the latency of collaborative platform 100 has elapsed, such that both overlay image 701 and real image 702 extend from point 605 to point 608 .
- receiver 120 may derive and/or receive information indicative of one or more user types, such that the overlay image generated is also based on the one or more user types.
- the user may select one or more user types, e.g., thickness, color, or marker or eraser type, and provide user input in accordance with the selected user type. Accordingly, as the user begins to draw, e.g., a number “3” in the color red on display 105 , an overlay image will be generated by receiver 120 and transmitted to display 105 as an overlaid image such that an overlaid image of the number “3” in the color red will begin to be displayed on display 105 with reduced latency.
- the user input provided by the user is illustrated in conjunction with the display of the overlaid image generated by receiver 120 , such that the overlaid image includes the user's actual input in addition to the predicted user input generated by receiver 120 , superimposed on the real image.
- the user may use stylet 700 to interact with display 105 by pressing stylet 700 against display 105 at point 705 ( 5 , 6 ), and dragging stylet 700 from point 705 ( 5 , 6 ) to point 706 ( 5 , 7 ) to point 707 ( 5 , 8 ) to point 708 ( 5 , 9 ) to point 709 ( 5 , 10 ) on display 105 .
- the user's actual input is depicted as line 703 as shown in FIG. 7 A .
- the dragging motion of stylet 700 by the user, i.e., the user input, is converted to user input data by display 105 and transmitted to receiver 120 as described above.
- the user input data includes the user's actual input, e.g., spatial coordinates ( 5 , 6 ), ( 5 , 7 ), ( 5 , 8 ), ( 5 , 9 ), and ( 5 , 10 ).
- An overlay image is then generated by receiver 120 based on the user input data (and optionally the user type), and transmitted to display 105 to be displayed as an overlaid image.
- the overlay image includes the user's actual input, e.g., line 703 , as well as the predicted user input, e.g., line 704 , generated by receiver 120 , as shown in FIG. 7 B .
- line 704 may be predicted by receiver 120 based on spatial coordinates ( 5 , 6 ), ( 5 , 7 ), ( 5 , 8 ), ( 5 , 9 ), and ( 5 , 10 ) of the user input data using extrapolation, e.g., linear extrapolation, polynomial extrapolation, conic extrapolation, French curve extrapolation and/or any other well-known extrapolation techniques, machine learning, artificial intelligence, or a neural network.
- receiver 120 predicts that the user's next input will be to continue dragging stylet 700 from point 709 ( 5 , 10 ) to point 710 ( 5 , 11 ) to point 711 ( 5 , 12 ) to point 712 ( 5 , 13 ) to point 713 ( 5 , 14 ).
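The linear case above can be sketched in a few lines of Python. This is an illustrative sketch rather than the patent's implementation; the function name is hypothetical, and it simply extends the stroke by repeating the last observed per-sample step:

```python
def predict_next_points(points, n_predict):
    """Extrapolate n_predict future points from the last two observed
    points, assuming the per-sample step stays constant (linear
    extrapolation)."""
    (x0, y0), (x1, y1) = points[-2], points[-1]
    dx, dy = x1 - x0, y1 - y0
    return [(x1 + dx * k, y1 + dy * k) for k in range(1, n_predict + 1)]

# Observed drag from FIG. 7A: points 705-709
observed = [(5, 6), (5, 7), (5, 8), (5, 9), (5, 10)]
print(predict_next_points(observed, 4))
# → [(5, 11), (5, 12), (5, 13), (5, 14)]  (points 710-713 of line 704)
```

For the vertical stroke of FIG. 7A the constant step is (0, 1), so the sketch reproduces the predicted points 710-713 exactly; polynomial or learned predictors would replace the constant-step assumption.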
- line 704 may be predicted by receiver 120 based on the time coordinates of the user input data using extrapolation, machine learning, artificial intelligence, or a neural network.
- the user input data received by receiver 120 may include data indicating that point 705 was touched by stylet 700 at T 1 , point 706 at T 2 , point 707 at T 3 , point 708 at T 4 , and point 709 at T 5 , and receiver 120 may determine the velocity of stylet 700 based on T 1 -T 5 .
- receiver 120 will predict that point 710 will be touched by stylet 700 at T 6 , point 711 at T 7 , point 712 at T 8 , and point 713 at T 9 , such that the velocity between T 6 -T 9 corresponds with the velocity of T 1 -T 5 . Accordingly, points 710 - 713 of line 704 will be displayed on display 105 with a velocity corresponding to the velocity based on T 1 -T 5 , such that points 710 , 711 , 712 , and 713 of line 704 will appear on display 105 at the same time the user drags stylet 700 to point 710 , 711 , 712 , and 713 in real time, thereby eliminating any latency on collaborative platform 100 .
- receiver 120 may determine the acceleration of stylet 700 based on T 1 -T 5 , such that the acceleration between T 6 -T 9 corresponds with the acceleration of T 1 -T 5 . Accordingly, points 710 - 713 of line 704 will be displayed on display 105 with a modified velocity corresponding to the acceleration based on T 1 -T 5 , such that points 710 , 711 , 712 , and 713 of line 704 will appear on display 105 at the same time the user drags stylet 700 to point 710 , 711 , 712 , and 713 in real time, thereby eliminating any latency on collaborative platform 100 .
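The velocity- and acceleration-matching behavior described above can be sketched as follows (an assumed illustration, not the patent's code): velocity mode repeats the last inter-sample interval, while acceleration mode also carries forward the change in that interval.

```python
def predict_times(times, n_predict, use_acceleration=False):
    """Predict timestamps for future points so the predicted portion of
    the stroke is replayed at the pace of the observed stroke."""
    dt = times[-1] - times[-2]  # last inter-sample interval (velocity)
    # Change in interval between the last two steps (acceleration):
    ddt = (times[-1] - times[-2]) - (times[-2] - times[-3]) if use_acceleration else 0.0
    out, t = [], times[-1]
    for _ in range(n_predict):
        dt += ddt
        t += dt
        out.append(round(t, 6))
    return out

# T1-T5 at a steady 20 ms per point (analogous to points 705-709):
print(predict_times([0.00, 0.02, 0.04, 0.06, 0.08], 4))
# → [0.1, 0.12, 0.14, 0.16]
```

With `use_acceleration=True`, a stroke whose intervals are lengthening (or shortening) continues to lengthen (or shorten) at the same rate, matching the "modified velocity" behavior described above.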
- receiver 120 may predict complex curved lines by predicting finite line segments forming the curve as well as predicting the angle of each finite line segment and the change of angle between adjacent line segments. For example, receiver 120 may detect a first angle of a first line segment of the user's actual input, and detect a second angle of a second line segment of the user's actual input, and determine the change of angle between the first angle and the second angle. Based on the first angle, second angle, and change of angle of the user's actual input, receiver 120 may predict the curve of the user's next input of finite line segments.
- receiver 120 may detect a rate of change of the change of angle between adjacent finite line segments of the user's actual input and predict the user's next input based on the detected rate of change of the change of angle between adjacent finite line segments.
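One way to realize the angle-based detection described above is sketched below (hypothetical helper, not from the patent): compute the angle of each finite line segment with `atan2`, take the change of angle between adjacent segments, and assume the next segment continues the same turn.

```python
import math

def segment_angles(points):
    """Angle (degrees) of each finite line segment between consecutive points."""
    return [math.degrees(math.atan2(y1 - y0, x1 - x0))
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

# Actual input from FIG. 7C: points 716-718
pts = [(2, 5), (5, 6), (8, 8)]
a1, a2 = segment_angles(pts)          # first and second segment angles
change = a2 - a1                      # change of angle between adjacent segments
predicted_next_angle = a2 + change    # next segment assumed to continue the turn
```

Here `a1` is roughly 18.4° and `a2` roughly 33.7°, so the predicted next segment angle is roughly 48.9°; a rate-of-change-of-change term could be added in the same way for higher-order curvature.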
- the user may use stylet 700 to interact with display 105 by pressing stylet 700 against display 105 at point 716 ( 2 , 5 ), and dragging stylet 700 from point 716 ( 2 , 5 ) to point 717 ( 5 , 6 ) to point 718 ( 8 , 8 ) on display 105 .
- the user's actual input is depicted as line 714 as shown in FIG. 7 C .
- the dragging motion of stylet 700 by the user, i.e., the user input, is converted to user input data by display 105 and transmitted to receiver 120 as described above.
- the user input data includes the user's actual input, e.g., a first line segment from spatial coordinate ( 2 , 5 ) to spatial coordinate ( 5 , 6 ) having a first angle, and a second line segment from spatial coordinate ( 5 , 6 ) to spatial coordinate ( 8 , 8 ) having a second angle.
- An overlay image is then generated by receiver 120 based on the user input data (and optionally the user type), and transmitted to display 105 to be displayed as an overlaid image.
- the overlay image includes the user's actual input, e.g., line 714 , as well as the predicted user input, e.g., line 715 , generated by receiver 120 , as shown in FIG. 7 D .
- line 715 may be predicted by receiver 120 based on spatial coordinates ( 2 , 5 ), ( 5 , 6 ), and ( 8 , 8 ) of the user input data using extrapolation, machine learning, artificial intelligence, or a neural network. Based on the first angle of the first line segment from spatial coordinate ( 2 , 5 ) to spatial coordinate ( 5 , 6 ), and the second angle of the second line segment from spatial coordinate ( 5 , 6 ) to spatial coordinate ( 8 , 8 ), receiver 120 predicts that the user's next input will be to continue dragging stylet 700 from point 718 ( 8 , 8 ) to point 719 ( 11 , 11 ) to point 720 ( 14 , 15 ).
- the angle of the line segment from point 718 to point 719 and from point 719 to point 720 will correspond with the rate of change between the first angle of the line segment from point 716 to point 717 and the second angle of the line segment from point 717 to point 718 .
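The curved prediction above can also be reproduced numerically with a finite-difference extrapolation (a swapped-in technique consistent with the coordinates of FIGS. 7C-7D, stated here as an assumption rather than the patent's explicit angle arithmetic): assume the change between successive per-step deltas stays constant.

```python
def extrapolate_curve(points, n_predict):
    """Extend a polyline assuming the change between successive per-step
    deltas (the 'second difference') stays constant — a simple
    polynomial-style extrapolation over finite line segments."""
    (x0, y0), (x1, y1), (x2, y2) = points[-3], points[-2], points[-1]
    dx1, dy1 = x1 - x0, y1 - y0      # delta of first segment
    dx2, dy2 = x2 - x1, y2 - y1      # delta of second segment
    ddx, ddy = dx2 - dx1, dy2 - dy1  # second difference
    out, (x, y), (dx, dy) = [], (x2, y2), (dx2, dy2)
    for _ in range(n_predict):
        dx, dy = dx + ddx, dy + ddy
        x, y = x + dx, y + dy
        out.append((x, y))
    return out

# Points 716-718 of FIG. 7C:
print(extrapolate_curve([(2, 5), (5, 6), (8, 8)], 2))
# → [(11, 11), (14, 15)]  (predicted points 719 and 720 of line 715)
```

The observed deltas are (3, 1) then (3, 2); carrying the (0, 1) second difference forward yields exactly the predicted points 719 and 720.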
- line 715 may also be predicted by receiver 120 based on the time coordinates of the user input data using extrapolation, machine learning, artificial intelligence, or a neural network.
- the user input data received by receiver 120 may include data indicating that point 716 was touched by stylet 700 at T1, point 717 at T2, and point 718 at T3, and receiver 120 may determine the velocity of stylet 700 based on T1-T3.
- receiver 120 will predict that point 719 will be touched by stylet 700 at T 4 , and point 720 at T 5 , such that the velocity between T 3 -T 5 corresponds with the velocity of T 1 -T 3 .
- receiver 120 may determine the acceleration of stylet 700 based on T 1 -T 3 , such that the acceleration between T 3 -T 5 corresponds with the acceleration of T 1 -T 3 .
- points 719 and 720 of line 715 will be displayed on display 105 with a modified velocity corresponding to the acceleration based on T 1 -T 3 , such that points 719 and 720 of line 715 will appear on display 105 at the same time the user drags stylet 700 to point 719 and 720 in real time, thereby eliminating any latency on collaborative platform 100 .
- FIG. 8 A is a screenshot of display 105 at a first time
- FIG. 8 B is a screenshot of display 105 at a second time
- the interface displayed on display 105 may include user-friendly icons in a ribbon at the top of the screen representing selectable user types including, but not limited to, marker icon 601 , thickness icon 602 , eraser icon 603 , and color icon 604 .
- a drop down menu may appear with additional sub-icons for selecting between thickness levels such as “thin,” “normal,” and “thick.”
- a drop down menu may appear with additional sub-icons for selecting between different colors such as “gray,” “black,” “blue,” “yellow,” etc.
- the user type of the user input may be set to preprogrammed default settings, e.g., default color (gray), default thickness (normal), and default marker user type, until subsequently changed by the user.
- receiver 120 may receive user input data from display 105 via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth, and from the user input data, determine one or more user types of the user input. For example, using machine learning, artificial intelligence, and/or a neural network, receiver 120 may analyze and/or process the user input data to determine the user type, e.g., based on patterns of the user's movement with regard to display 105 and/or by observing the user's actions, e.g., what types of marks are drawn, that follow.
- the user drew a line extending from point 605 to point 606 to point 607 by, e.g., contacting display 105 and moving from point 605 to point 606 to point 607 without discontinuing contact with display 105 .
- marker icon 601 was previously selected, for example, by contacting any point within a perimeter of points on display 105 corresponding to marker icon 601 .
- receiver 120 can identify the interface of display 105 and correlate specific actions by the user (e.g., clicking on the point of display 105 where marker icon 601 resides) with specific user types.
- receiver 120 will learn that by clicking on the point of display 105 where marker icon 601 resides, the marker user type has been selected, which permits the user to draw lines. Thus, receiver 120 will associate the spatial region of marker icon 601 with the function of drawing solid lines. Using machine learning and comparing a plurality of user inputs taken at various time points, receiver 120 can deduce the various icons of any interface, and their respective functions. Accordingly, receiver 120 may include a database by which it compares actions of the user relative to display 105 , given a specific interface, to determine what user type has been selected.
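A minimal sketch of the learned icon-to-user-type database described above might look like the following. The region bounds, type names, and lookup function are illustrative assumptions, not values from the patent; in practice the regions would be deduced over time via machine learning rather than hard-coded:

```python
# Hypothetical learned table mapping interface regions to user types.
# Bounds are (x_min, y_min, x_max, y_max) in display coordinates.
ICON_REGIONS = {
    "marker": (0, 0, 40, 40),
    "eraser": (90, 0, 130, 40),
    "color":  (140, 0, 180, 40),
}

def user_type_for_touch(x, y, current_type="marker"):
    """Return the user type selected by a touch, or keep the current
    user type if the touch falls outside every known icon region
    (i.e., the touch is a drawing stroke, not an icon selection)."""
    for user_type, (x0, y0, x1, y1) in ICON_REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return user_type
    return current_type

print(user_type_for_touch(100, 20))             # → eraser (icon hit)
print(user_type_for_touch(400, 300, "marker"))  # → marker (drawing stroke)
```

Once a touch resolves to a user type, subsequent strokes are rendered in the overlay with that type until another icon region is hit.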
- receiver 120 receives user input data indicating that the user discontinued contact with display 105 , and then contacted display 105 at a point on display 105 associated with eraser icon 603 , which has been associated with the function of erasing through machine learning. Accordingly, upon clicking eraser icon 603 , receiver 120 determines that the eraser user type has been selected, and generates an overlay image of an eraser mark from point 607 to point 606 in response to the user contacting display 105 at point 607 and dragging the stylet from point 607 to point 606 as shown in FIG. 8 B .
- by analyzing the user input data received from display 105 to determine which user type is selected, receiver 120 generates the overlay image based on not only the user input, but also the user type of the user input, to accurately display the overlaid image corresponding to the user's selected user type and user input.
- Referring now to FIG. 9A, a block diagram of another exemplary embodiment of collaborative platform 100 ′ in accordance with the principles of the present invention is provided.
- user input data may be transmitted from display 105 ′ to receiver 120 ′ via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth.
- user input data and the real image may be communicated between receiver 120 ′ and moderator device 130 ′ across a wireless connection, e.g., WiFi.
- the overlaid image may be transmitted from receiver 120 ′ to display 105 ′ via a wired connection, e.g., an HDMI cable.
- data indicative of the user type of the user input may be transmitted from moderator device 130 ′ to receiver 120 ′ via a wireless connection, e.g., a defined TCP port.
- collaboration platform 100 ′ of FIG. 9 A also runs a collaboration application for displaying a first image based on an original image file stored on moderator device 130 ′, receiving user input, modifying the original image file stored on moderator device 130 ′ based on the user input, and displaying a second image based on the modified original image file.
- collaboration platform 100 ′ may run an overlay image generator application for generating an overlay image by receiver 120 ′ based on the user input provided by the user, generating an overlaid image based on the overlay image, and displaying the overlaid image on the original image on display 105 ′ to reduce latency of collaboration platform 100 ′.
- Collaboration platform 100 ′ differs from collaboration platform 100 in that receiver 120 ′ may receive data indicative of user type directly from moderator device 130 ′ via a wireless connection, e.g., a defined TCP port, in addition to user input data received from display 105 ′ via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth.
- receiver 120 ′ does not need to derive information regarding the selected user type of the user input from user input data received from display 105 ′.
- moderator device 130 ′ may include overlay image application 139 for processing and analyzing the user input data received from display 105 ′ through receiver 120 ′, determining the user type selected from the user input data, and transmitting the data indicative of the selected user type to receiver 120 ′ via the defined TCP port.
- receiver 120 ′ generates an overlay image based on both the user input data and the user type data, generates an overlaid image based on the overlay image, and transmits the overlaid image via a wired connection, e.g., an HDMI cable, to display 105 ′, thereby reducing latency of collaboration platform 100 ′.
- Referring now to FIG. 10A, a block diagram of another exemplary embodiment of collaborative platform 100 ′′ in accordance with the principles of the present invention is provided.
- user input data may be transmitted from display 105 ′′ to receiver 120 ′′ via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth.
- user input data and real image(s) may be communicated between receiver 120 ′′ and moderator device 130 ′′ across a wireless connection, e.g., WiFi.
- the overlaid image may be transmitted from receiver 120 ′′ to display 105 ′′ via a wired connection, e.g., an HDMI cable.
- data indicative of the user type of the user input may be transmitted from the operating system of moderator device 130 ′′ to receiver 120 ′′ via a modified user input back channel (UIBC) extension.
- a UIBC extension would generally be used to transmit user input data from the receiver to the moderator device; however, here the UIBC extension is modified to permit transmission of data from moderator device 130 ′′ to receiver 120 ′′.
- collaboration platform 100 ′′ of FIG. 10 A also runs a collaboration application for displaying a first image based on an original image file stored on moderator device 130 ′′, receiving user input, modifying the original image file stored on moderator device 130 ′′ based on the user input, and displaying a second image based on the modified original image file.
- collaboration platform 100 ′′ may run an overlay image generator application for generating an overlay image by receiver 120 ′′ based on the user input provided by the user, generating an overlaid image based on the overlay image, and displaying the overlaid image on the original image on display 105 ′′ to reduce latency of collaboration platform 100 ′′.
- Collaboration platform 100 ′′ differs from collaboration platform 100 in that receiver 120 ′′ may receive data indicative of user type directly from the operating system of moderator device 130 ′′ via the UIBC extension described above, in addition to user input data received from display 105 ′′ via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth.
- receiver 120 ′′ does not need to derive information regarding the selected user type of the user input from user input data received from display 105 ′′.
- operating system 137 of moderator device 130 ′′ may process and analyze the user input data received from display 105 ′′ through receiver 120 ′′, determine the user type selected from the user input data, and transmit the data indicative of the selected user type to receiver 120 ′′ via the UIBC extension.
- receiver 120 ′′ generates an overlay image based on both the user input data and the user type data, generates an overlaid image based on the overlay image, and transmits the overlaid image via a wired connection, e.g., an HDMI cable, to display 105 ′′ to be displayed over the original image displayed on display 105 ′′, thereby reducing latency of collaboration platform 100 ′′.
- receiver 120 ′ of collaborative platform 100 ′ and receiver 120 ′′ of collaborative platform 100 ′′ may be used to generate an overlay image based on user input, and generate an overlaid image based on the overlay image such that the overlaid image is displayed while a real image is being generated by moderator device 130 ′, 130 ′′, thereby reducing latency of collaborative platform 100 ′, 100 ′′.
- receiver 120 ′, 120 ′′ running the collaboration application, transmits the user input data to the source of the original image, e.g., moderator device 130 ′, 130 ′′, for further processing and analysis.
- moderator device 130 ′, 130 ′′ may derive data indicative of at least one user type of the user input.
- user type data is received by receiver 120 ′, e.g., via a defined TCP port, from an application of moderator device 130 ′, or by receiver 120 ′′, e.g., via a UIBC extension, from moderator device 130 ′′.
- receiver 120 ′, 120 ′′ generates an overlay image based on the user input data received at step 1101 as well as the user type data received at step 1103 .
- the overlay image may be generated based on the user input data and a default user type until a new user type is received at step 1103 .
- the user type of the user input may be set to preprogrammed default settings, e.g., default color (gray), default thickness (normal), and default marker user type, until subsequently changed by the user.
- receiver 120 ′, 120 ′′ receives the real image generated by moderator device 130 ′, 130 ′′ based on the user input data received from display 105 ′, 105 ′′.
- receiver 120 ′, 120 ′′ generates an overlaid image based on the overlay image and the real image.
- the overlaid image may be formed by an overlay image superimposed on the real image, wherein the overlay image includes a first portion representative of the user's actual input received by receiver 120 ′, 120 ′′, and a second, extended portion, which may be a prediction of the user's intended input based on the user input data received by receiver 120 ′, 120 ′′.
- receiver 120 ′, 120 ′′ transmits the overlaid image to, e.g., display 105 ′, 105 ′′, to be displayed on the original image, thereby reducing latency of collaborative platform 100 ′, 100 ′′.
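The compositing step above can be sketched as follows, modeling an image as the set of drawn grid coordinates. This is an illustrative simplification (the patent does not specify a pixel format, and the function name is hypothetical): the overlaid image is simply the overlay image, i.e., the actual portion plus the predicted, extended portion, superimposed on the real image.

```python
def build_overlaid_image(real_image, overlay_actual, overlay_predicted):
    """Superimpose the overlay image (actual portion + predicted,
    extended portion) on the real image received from the moderator
    device; images are modeled as sets of drawn coordinates."""
    return set(real_image) | set(overlay_actual) | set(overlay_predicted)

real = {(5, 6), (5, 7)}             # moderator device has processed this far
actual = {(5, 6), (5, 7), (5, 8)}   # user input actually received so far
predicted = {(5, 9), (5, 10)}       # receiver's extrapolated portion
print(sorted(build_overlaid_image(real, actual, predicted)))
# → [(5, 6), (5, 7), (5, 8), (5, 9), (5, 10)]
```

Even though the real image lags the stroke by a few points, the displayed overlaid image already spans the full (actual plus predicted) stroke, which is the mechanism by which the perceived latency is reduced.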
- Referring now to FIG. 12A, a block diagram of another exemplary embodiment of collaborative platform 100 ′′′ in accordance with the principles of the present invention is provided.
- user input data may be transmitted from display 105 ′′′ to receiver 120 ′′′ via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth.
- Receiver 120 ′′′ may be able to perform the functionalities of a moderator device described herein. For example, receiver 120 ′′′ may generate a real image based on the user input data received from display 105 ′′′, and further generate an overlay image including a predicted portion based on the user input data, as well as an overlaid image based on the overlay image and the real image.
- the overlaid image may be transmitted from receiver 120 ′′′ to display 105 ′′′ via a wired connection, e.g., an HDMI cable.
- Collaboration platform 100 ′′′ of FIG. 12 A may run a collaboration application for displaying a first image based on an original image file stored on receiver 120 ′′′, receiving user input, modifying the original image file stored on receiver 120 ′′′ based on the user input, and displaying a second image based on the modified original image file.
- collaboration platform 100 ′′′ may run an overlay image generator application for generating an overlay image by receiver 120 ′′′ based on the user input provided by the user, including a predicted portion based on the user input data, generating an overlaid image based on the overlay image, and displaying the overlaid image on the original image on display 105 ′′′ to reduce latency of collaboration platform 100 ′′′.
- Collaboration platform 100 ′′′ differs from collaboration platform 100 in that receiver 120 ′′′ may function as a moderator device described herein and generate modified real images based on the user input data received from display 105 ′′′, without having to transmit the user input data to an external moderator device.
- receiver 120 ′′′ generates a modified real image based on the user input data and optionally user type data, generates an overlay image based on the user input data and optionally user type data, generates an overlaid image based on the overlay image and the real image, and transmits the overlaid image via a wired connection, e.g., an HDMI cable, to display 105 ′′′ to be displayed over the original image displayed on display 105 ′′′, thereby reducing latency of collaboration platform 100 ′′′.
- the collaborative platforms described herein for generating overlaid images for display will reduce latency caused by the necessity of transmitting data across a wireless network, e.g., between the receiver and the moderator and member devices.
- additional sources of delay include processor and application delays.
- the computing device for receiving user input, e.g., a touchscreen display, itself requires processing time to convert the user input into user input data for transmission to the receiver.
- extrapolation, artificial intelligence, machine learning, and/or neural networks may be implemented to predict user input as the user interacts with the touchscreen, such that the overlay image generator application of the receiver may generate overlaid images based on the predicted user input rather than waiting for the user input data from the touchscreen and/or the moderator device (which may suffer from application delays in processing the user input data), thereby further reducing latency of the collaborative platform.
Abstract
Systems and methods for reducing latency on a collaborative platform are provided. The collaborative platform involves a display, a moderator device, one or more member devices, and a receiver in communication with the display, the moderator device, and the one or more member devices. To reduce latency of the collaborative platform, the receiver generates an overlay image based on user input received from the display, as well as user type of the user input, generates an overlaid image based on the overlay image, and transmits the overlaid image for display, while a collaboration application generates new real image(s) based on the user input for display. The overlaid image generated may be indicative of actual user input as well as predicted user input using extrapolation and/or machine learning.
Description
- This application is a continuation application of U.S. patent application Ser. No. 17/105,419, filed Nov. 25, 2020, now U.S. Pat. No. 11,483,367, which claims the benefit of the filing date of U.S. Provisional Patent Application No. 62/941,677, filed Nov. 27, 2019, the entire contents of each of which are incorporated herein by reference.
- The present disclosure relates, in general, to methods and systems for generating an overlay image based on user input for at least temporary display to reduce latency on a collaborative platform.
- Methods and products for projecting content both by wired connection and wirelessly over a network are well known in the art. One example is the Miracast® wireless display standard, certified by the Wi-Fi Alliance, which defines a protocol for displaying multimedia between devices using Wi-Fi CERTIFIED Wi-Fi Direct®. Implementing Wi-Fi Direct, Miracast® provides operations for negotiating video capabilities, setting up content protection, streaming content, and maintaining a video session. Unlike Bluetooth technology, Wi-Fi CERTIFIED Miracast® allows for sending up to 1080p HD, or even higher resolution video and thus is suitable for video streaming and screen to screen content projection. For example, Miracast® makes it possible to wirelessly stream video content from a laptop computer to a television display.
- Undesirable latency of content projection systems arises during collaboration, for example, when making edits to the content being projected on a display, e.g., computing device, where the original data file is not stored. For example, in a classroom setting, a teacher's desktop may have an original data file stored thereon, which may be projected on a display in front of the classroom visible to the classroom of students using content projection systems known in the art. A receiver is typically used to transmit data between the teacher's desktop or the student's tablet, and the display. For example, the receiver may be coupled to the display via a USB cable for transferring user input data, and further coupled to the display via an HDMI cable for transferring image(s). Moreover, the receiver may communicate with the teacher's desktop and the student's tablet wirelessly over a network (e.g., local network, corporate network, or internet).
- When the original file, e.g., a math problem, stored on the teacher's desktop is projected on the display, e.g., a touchscreen, the student may attempt to answer the math problem by drawing directly on the display. As the student begins to draw, e.g., the number “3,” on the display, in order for the formation of the number “3” to start appearing on the display, input data representing the user input is transferred from the display via the USB cable to the receiver. The receiver then transmits the user input data via WiFi to the teacher's desktop, where the original file is stored. A processor on the teacher's desktop then modifies the original file based on the user input data, e.g., adding the number “3” to the math problem as the student draws it, thereby generating a new real image, which is transmitted via WiFi to the receiver. The receiver then transmits the new real image via the HDMI cable to the display so that the formation of number “3” is displayed on the display as the student draws it. The data flow from the display to the receiver to the teacher's desktop, back to the receiver, and then finally back to the display occurs continuously as the student draws on the display, and results in latency of the collaborative content projection system.
- Therefore, it is desirable to provide systems and methods for reducing latency of the collaborative content projection system.
- The present invention is directed to systems and methods for generating an overlay image based on user input for at least temporary display to reduce latency on a collaborative platform. For example, in accordance with one aspect of the invention, a method for reducing latency on a collaborative platform is provided. The method includes receiving, by a first device, e.g., a receiver, a first real image from a third device, e.g., a moderator device; receiving, by the first device, user input data indicative of user input on the second device, e.g., a display via a USB cable; transmitting, by the first device, the user input data to the third device; determining, by the first device, an overlay image based on the user input data; determining, by the first device, an overlaid image based on the overlay image and the first real image; and transmitting, by the first device, the overlaid image to the second device, e.g., via an HDMI cable, to cause the overlaid image to be displayed on the second device, e.g., via a touchscreen display.
- Moreover, a portion of the overlay image of the overlaid image may be displayed on the second device for a predetermined period of time. For example, the predetermined period of time may be at least as long as the latency on the collaborative platform. For example, the overlay image may include a leading end and a trailing end, such that, as the leading end extends on the second device at a rate, the trailing end is removed from the second device at the rate. Alternatively, as a number of spatial coordinates of the leading end increases on the second device, a portion of spatial coordinates of the trailing end may be removed from the second device depending on the latency and/or the speed of the user input data.
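The leading-end/trailing-end behavior above can be sketched with a bounded trail (an assumed illustration; the function name and the grid-coordinate model are hypothetical): each step appends one point at the leading end, and once the trail reaches a length corresponding to the platform's latency, one point is dropped from the trailing end at the same rate.

```python
from collections import deque

def step_overlay_trail(trail, new_point, max_len):
    """Advance the overlay's leading end by one point; once the trail
    reaches max_len points, remove one point from the trailing end at
    the same rate. Each overlay point thus persists roughly as long as
    the platform's latency, by which time the real image beneath it has
    caught up and the overlay point is no longer needed."""
    trail.append(new_point)       # leading end extends
    if len(trail) > max_len:
        trail.popleft()           # trailing end removed at the same rate
    return trail

trail = deque()
for p in [(5, 6), (5, 7), (5, 8), (5, 9), (5, 10)]:
    step_overlay_trail(trail, p, max_len=3)
print(list(trail))
# → [(5, 8), (5, 9), (5, 10)]
```

Here `max_len` plays the role of the predetermined period of time; choosing it at least as long as the observed latency ensures the overlay never disappears before the real image replaces it.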
- In accordance with some aspects of the present invention, the overlay image determined by the first device may include a first portion of the overlay image indicative of the user input at the second device based on the user input data, and an extended, predicted portion of the overlay image based on the user input data. For example, the first device may predict the extended portion of the overlay image based on at least one of spatial or time coordinates of the user input data, e.g., via at least one of extrapolation, machine learning, artificial intelligence, or a neural network. For example, the first device may predict the extended portion of the overlay image based on a velocity of the user input data. The extended portion of the overlay image may include a curved portion formed of a plurality of finite line segments, such that predicting, by the first device, the extended portion of the overlay image includes predicting the curved portion based on an angle of each finite line segment of the plurality of finite line segments.
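Of the prediction techniques listed above, the simplest is velocity-based extrapolation from the spatial and time coordinates of the user input data. The sketch below shows only that linear case (the machine-learning and neural-network variants are out of scope here); the function name and parameters are hypothetical.

```python
def predict_extension(samples, horizon, steps):
    """Linearly extrapolate future stroke points from timestamped samples.

    samples: list of (t, x, y) user input samples, most recent last
    horizon: how far ahead to predict, in seconds
    steps:   number of predicted points to emit
    """
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt  # velocity of the user input
    return [
        (x1 + vx * horizon * k / steps, y1 + vy * horizon * k / steps)
        for k in range(1, steps + 1)
    ]

# Two samples 10 ms apart, moving +2 px in x: predict 10 ms ahead.
predicted = predict_extension(
    [(0.00, 0.0, 0.0), (0.01, 2.0, 0.0)], horizon=0.01, steps=2
)
```

A curved extension, as described above, could be approximated the same way by emitting short finite line segments and adjusting the angle of each successive segment, rather than assuming constant velocity.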
- In addition, the user input data may include input type data indicative of at least one of thickness, color, or marker or eraser type. In accordance with one aspect of the present invention, the method further includes determining, by the first device, the input type based on the user input data and machine learning. For example, the input type may be determined by analyzing a pattern of spatial inputs of the user input data from the second device. Accordingly, the overlay image determined may be determined based on the determined input type.
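The specification attributes input-type determination to machine learning; the sketch below substitutes a deliberately simple hand-written heuristic, with hypothetical thresholds, only to illustrate what "analyzing a pattern of spatial inputs" could mean in practice — here, frequent back-and-forth direction reversals are taken to suggest an eraser scrub rather than a marker stroke.

```python
def infer_input_type(points, scrub_threshold=3):
    """Guess marker vs. eraser from the spatial pattern of touch points:
    repeated horizontal direction reversals suggest an eraser scrub."""
    reversals, last_dx = 0, 0
    for (x0, _), (x1, _) in zip(points, points[1:]):
        dx = x1 - x0
        if dx and last_dx and (dx > 0) != (last_dx > 0):
            reversals += 1
        if dx:
            last_dx = dx
    return "eraser" if reversals >= scrub_threshold else "marker"

scrub = [(0, 0), (5, 0), (0, 1), (5, 1), (0, 2)]  # back-and-forth motion
line = [(0, 0), (1, 1), (2, 2), (3, 3)]           # steady drawing stroke
```

A trained classifier would replace the threshold rule, but the interface is the same: user input coordinates in, input type out, with the overlay image then rendered in that type.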
- In accordance with another aspect of the present invention, the method further may include receiving, by the first device, data indicative of the input type from the third device. For example, the first device may receive data indicative of the input type from an application running on the third device via a defined TCP port. Alternatively, the first device may receive data indicative of the input type from an operating system running on the third device via a user input back channel (UIBC) extension. The third device and the first device may communicate over a wireless connection.
-
FIG. 1A is a block diagram of a collaborative platform in accordance with an illustrative embodiment of the present invention. -
FIG. 1B is a block diagram of the collaborative platform of FIG. 1A illustrating various communication mechanisms in accordance with the principles of the present invention. -
FIG. 2 is a diagram of a collaborative platform in an exemplary setting in accordance with one aspect of the present invention. -
FIGS. 3A-3D are schematic views of the exemplary hardware and software components of an exemplary display, receiver, moderator device, and member device, respectively. -
FIG. 4A is a block diagram of the collaborative platform in accordance with one aspect of the present invention. -
FIG. 4B is a sequence diagram for using the collaborative platform in accordance with the illustrative embodiment depicted in FIG. 4A. -
FIG. 5A is a flow chart illustrating exemplary steps of reducing latency on a collaborative platform in accordance with the principles of the present invention. -
FIG. 5B is a flow chart illustrating the steps of overlaid image generation of FIG. 5A. -
FIG. 5C illustrates overlaid image generation in accordance with the principles of the present invention. -
FIGS. 6A-6E illustrate the steps of reducing latency on a collaborative platform in accordance with the principles of the present invention. -
FIGS. 7A-7D illustrate overlay image prediction generation in accordance with the principles of the present invention. -
FIGS. 8A and 8B illustrate user type data collection in accordance with one aspect of the present invention. -
FIG. 9A is a block diagram of an alternative embodiment of the collaborative platform in accordance with another aspect of the present invention. -
FIG. 9B is a sequence diagram for using the collaborative platform in accordance with the illustrative embodiment depicted in FIG. 9A. -
FIG. 10A is a block diagram of another alternative embodiment of the collaborative platform in accordance with yet another aspect of the present invention. -
FIG. 10B is a sequence diagram for using the collaborative platform in accordance with the illustrative embodiment depicted in FIG. 10A. -
FIG. 11 is a flow chart illustrating alternative exemplary steps of reducing latency on a collaborative platform in accordance with the principles of the present invention. -
FIG. 12A is a block diagram of yet another alternative embodiment of the collaborative platform in accordance with yet another aspect of the present invention. -
FIG. 12B is a sequence diagram for using the collaborative platform in accordance with the illustrative embodiment depicted in FIG. 12A. - The foregoing and other features of the present invention will become apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings.
- There are many instances in which a computer user may wish to share the display of his or her computer screen with others. For example, in connection with a classroom lesson being given in a classroom setting, a teacher may desire to display a problem to a classroom of students, and have a student solve the problem on the display, such that the student's efforts are visible to the entire classroom. For example, the problem may be stored in a computer file on the teacher's computer, and displayed on a main display visible to the classroom of students. A selected student may then perform work on the main display directly, such that their work is visible to the classroom of students. In such instances, it may be advantageous to quickly and easily display an overlay image illustrating the student's work over the original problem on the main display. As will be understood by a person of ordinary skill in the art, the principles of the present invention described herein may be used in settings other than the classroom, e.g., remotely across a campus or other geographical locations via WiFi or the internet, for conducting other collaborative efforts such as meetings or presentations.
- The present invention is directed to a collaborative platform for use in, for example, a classroom setting or a product presentation meeting, to facilitate presenting materials in real time while reducing latency in the collaborative platform. For example, the present invention permits a user to provide user input, such as a marking on an original image being displayed, such that the user input is illustratively overlaid on the original image on a main display almost immediately after the user input is provided, and before a real image is able to be generated by the collaborative platform. The collaborative platform involves a main display, a moderator device, one or more member devices, and a receiver in communication with the display, the moderator device, and the one or more member devices. The moderator device may be used by a teacher/administrator and may store an original data file, and an original image may be displayed on the main display based on the original data file such that a student may edit the original data file by providing user input via the main display. The receiver is configured to run an overlay image generation application which generates an overlay image based on the user input provided by the student via the display, and displays the overlaid image over the original image while the collaborative platform updates the original data file based on the user input data for display on the main display. By displaying the overlaid image before displaying an updated image generated using the real data, the receiver reduces latency in the collaborative platform.
-
FIG. 1A is a block diagram of an illustrative collaborative platform constructed in accordance with the principles of the present invention. Collaborative platform 100 includes display 105, receiver 120, network 101 in which receiver 120 serves as the hub, moderator device 130 to be used by the moderator client, e.g., a teacher, and optionally, one or more member devices 140 to be used by the one or more member clients, e.g., students. Receiver 120 may be a ScreenBeam® Wireless Display Kit, available from Actiontec Electronics, Inc., Sunnyvale, Calif. In one preferred embodiment, receiver 120 is Miracast® aware and compatible. Although in FIG. 1A, three member devices 140 are depicted, as a person having ordinary skill in the art will understand, fewer or more than three member devices may be used in collaborative platform 100. - As shown in
FIG. 1A, moderator device 130 and member devices 140 interact with receiver 120 wirelessly through network 101. As shown in FIG. 1B, network 101 may be based on wireless communication, such that moderator device 130 and member devices 140 interact with receiver 120 over WiFi or the internet. Network 101 may be a local peer-to-peer network, for example, a Wi-Fi peer-to-peer interface. Display 105 may be any suitable computing device, e.g., a touchscreen device, and provides an interface for presenting information received from receiver 120 to external systems, users, or memory, as well as for collecting user input directly via the interface of display 105, e.g., via touch sensors embedded on the interface. In an alternative embodiment, display 105 may comprise multiple individual displays, and even may constitute the displays associated with each of member devices 140 and/or moderator device 130. Similarly, when the user interacts directly on the screen of his or her member device 140 for making edits, member device 140 may be any suitable computing device as described above, e.g., a touchscreen device. -
Receiver 120 may be coupled to display 105 by one or more wired connections. For example, as shown in FIG. 1B, receiver 120 and display 105 may connect using a universal serial bus (USB) cable for communicating user input data, and receiver 120 and display 105 may connect using a high-definition multimedia interface (HDMI) cable for communicating image(s). Alternatively, receiver 120 and display 105 may connect using a wireless connection such as Bluetooth. Accordingly, receiver 120 receives an original image, e.g., still images, from moderator device 130 via WiFi, and passes along the original image provided by moderator device 130 to display 105 via the HDMI cable, which illustratively is shown on display 105. Thus, the local display of moderator device 130 and display 105 may display the same information (e.g., the same graphics, video, image, chart, presentation, document, program, application, window, view, etc.). In addition, receiver 120 receives user input data indicative of user input from display 105 via the USB cable and/or a wireless connection such as Bluetooth, and passes along the user input data provided by display 105 to moderator device 130 via WiFi for processing. -
Moderator device 130 processes the user input data provided by display 105, and modifies the original image stored in its memory based on the user input data received to generate an image for redistribution to receiver 120 via WiFi, and ultimately to display 105 via receiver 120. As will be understood by a person having ordinary skill in the art, the path of data flow—user input data from display 105 to receiver 120 via USB and/or Bluetooth, user input data from receiver 120 to moderator device 130 via WiFi, generation of the real image based on the user input data by moderator device 130, real image from moderator device 130 to receiver 120 via WiFi, and real image from receiver 120 to display 105 via HDMI—will suffer from a time delay due to latency of the content projection system. - In accordance with one aspect of the present invention,
moderator device 130 may designate member device 140 as the moderator as described in U.S. patent application Ser. No. 14/986,468, the entire contents of which are incorporated by reference herein. Accordingly, moderator device 130 may elect to share the screen of member device 140 on display 105, such that user input provided by a user on display 105 will be transmitted to member device 140 to modify the original file stored in the memory of member device 140. - In accordance with another aspect of the present invention,
receiver 120 may be incorporated into moderator device 130. For example, receiver 120 may be incorporated into a laptop serving as moderator device 130. In accordance with another aspect of the present invention, any suitable arrangement of receiver 120 and display 105 may be employed. For example, receiver 120 and display 105 may be separate components or be combined into a single device. -
FIG. 2 depicts an embodiment of collaborative platform 100 constructed in accordance with the principles of the present invention for use in a classroom setting. As shown in FIG. 2, main display 105 is visible to the classroom of students and includes input/output device(s) 110, e.g., a touchscreen, such that a student can directly provide user input to display 105 in communication with receiver 120. In accordance with another aspect of the present invention, a student can directly provide user input to member device 140 via input/output device(s) 145 in communication with receiver 120, which will then be displayed on display 105. As shown in FIG. 2, the teacher's desktop computer is designated as moderator device 130 having input/output device(s) 135, e.g., a touchscreen, while wireless tablets located at each student's desk serve as member devices 140 having input/output device(s) 145, e.g., a touchscreen. As described above, moderator device 130 and member devices 140 wirelessly communicate with receiver 120. - In accordance with another aspect of the present invention,
collaborative platform 100 may be used across multiple classrooms and/or other collaborative work environment settings. For example, moderator device 130 may be in a first classroom having a first display and a first plurality of member devices, and moderator device 130 may communicate, e.g., via WiFi, with a second display and a second plurality of member devices in a second classroom. Accordingly, a student in the second classroom may modify an image displayed on the second display, thereby modifying the original file stored on moderator device 130 in the first classroom, such that the modification to the image is visible on the first and second displays in the first and second classrooms. - Referring now to
FIGS. 3A-3D, exemplary functional blocks representing the hardware and software components of display 105, receiver 120, moderator device 130, and member device 140, respectively, are provided. Referring now to FIG. 3A, hardware and software components of display 105 may include processing unit 106, memory 107, storage 111, communication unit 108, power source 109, and input/output (I/O) device(s) 110. -
Processing unit 106 may be one or more processors configured to run operating system 112 and perform the tasks and operations of display 105 set forth herein. Memory 107 may include, but is not limited to, volatile (e.g., random-access memory (RAM)), non-volatile (e.g., read-only memory (ROM)), flash memory, or any combination thereof. Communication unit 108 may be any well-known communication infrastructure facilitating communication over any well-known wired or wireless connection. For example, communication unit 108 may transmit information, e.g., user input data, to receiver 120 of collaborative platform 100 via a USB cable and/or a wireless connection such as Bluetooth, and may receive information, e.g., an image, from receiver 120 via an HDMI cable. Power source 109 may be a battery or may connect display 105 to a wall outlet or any other external source of power. Storage 111 may include, but is not limited to, removable and/or non-removable storage such as, for example, magnetic disks, optical disks, or tape. -
display 105 for inputting data to display 105. For example, the input device of I/O device 110 may be a touch input device (e.g., touch pad or touch screen) or an array of location sensors, configured to receive user input from the user and generate user input data indicative of the user input. In addition, the input device of I/O device 110 may work in conjunction with a smart stylet that interacts with the array of location sensors. The output device of I/O device 110 may be any device coupled to or incorporated intodisplay 105 for outputting or otherwise displaying images. Accordingly, I/O device(s) 110 may be a touchscreen for receiving and displaying images. -
Operating system 112 may be stored in storage 111 and executed on processing unit 106. Operating system 112 may be suitable for controlling the general operation of display 105 to achieve the functionality of display 105 described herein. Display 105 may also optionally run a graphics library, other operating systems, and/or any other application programs. It of course is understood that display 105 may include additional or fewer components than those illustrated in FIG. 3A and may include more than one of each type of component. - Referring now to
FIG. 3B, hardware and software components of receiver 120 may include processing unit 121, memory 122, storage 126, communication unit 123, power source 124, and input/output (I/O) device(s) 125. -
Processing unit 121 may be one or more processors configured to run operating system 127, collaboration application 128, and overlay image generator application 129 and perform the tasks and operations of receiver 120 set forth herein. Memory 122 may include, but is not limited to, volatile (e.g., random-access memory (RAM)), non-volatile (e.g., read-only memory (ROM)), flash memory, or any combination thereof. Communication unit 123 may be any well-known communication infrastructure facilitating communication over any well-known wired or wireless connection. For example, communication unit 123 may receive information, e.g., user input data from display 105 via a USB cable and/or a wireless connection such as Bluetooth, and real images from moderator device 130 via WiFi, and may transmit information, e.g., image(s), to display 105 via an HDMI cable. Moreover, communication unit 123 may communicate both user input data and images to moderator device 130 and/or member devices 140 via network 101, e.g., WiFi. In accordance with one aspect of the present invention, communication unit 123 may receive information, e.g., data indicative of one or more user types of the user input, from moderator device 130 via, e.g., a defined TCP port or a UIBC extension. -
Power source 124 may be a battery or may connect receiver 120 to a wall outlet or any other external source of power. Storage 126 may include, but is not limited to, removable and/or non-removable storage such as, for example, magnetic disks, optical disks, or tape. The input device of I/O device(s) 125 may be one or more devices coupled to or incorporated into receiver 120 for inputting data to receiver 120. The output device of I/O device(s) 125 may be any device coupled to or incorporated into receiver 120 for outputting or otherwise displaying images. -
Collaboration application 128 may be stored in storage 126 and executed on processing unit 121. Collaboration application 128 may be a software application and/or software modules having one or more sets of instructions suitable for performing the operations of receiver 120 set forth herein, including facilitating the exchange of information with moderator device 130. For example, collaboration application 128 may cause receiver 120 to receive user input data from display 105 via communication unit 123, e.g., via a USB cable and/or a wireless connection such as Bluetooth, and to pass along the user input data to moderator device 130 via communication unit 123, e.g., via WiFi. In addition, collaboration application 128 further may cause receiver 120 to receive real images from moderator device 130 via communication unit 123, e.g., via WiFi, and to pass along an overlaid image based on the real image to display 105, e.g., via an HDMI cable. In accordance with another aspect of the present invention, collaboration application 128 may cause receiver 120 to receive data indicative of one or more user types from moderator device 130 via communication unit 123, e.g., via a defined TCP port or a modified user input back channel (UIBC), as described in further detail below. - Overlay
image generator application 129 may be stored in storage 126 and executed on processing unit 121. Overlay image generator application 129 may be a software application and/or software modules having one or more sets of instructions suitable for performing the operations of receiver 120 set forth herein, including facilitating the exchange of information with display 105, moderator device 130, and member devices 140. For example, overlay image generator application 129 may cause processing unit 121 of receiver 120 to process and analyze the user input data received from display 105 via collaboration application 128, to generate an overlay image based on the user input data, to generate an overlaid image based on the overlay image, and to transmit the overlaid image to display 105 for display via communication unit 123, e.g., via an HDMI cable. In addition, overlay image generator application 129 may cause receiver 120 to derive one or more user types based on the user input data received from display 105 via collaboration application 128, such that the overlay image is also generated based on the user type, as described in further detail below. - Alternatively, overlay
image generator application 129 may cause receiver 120 to generate an overlay image based on the data indicative of one or more user types received from moderator device 130 via communication unit 123, e.g., via a defined TCP port, instead of deriving one or more user types based on the user input data received from display 105, as described in further detail below. In accordance with another embodiment of the present invention, overlay image generator application 129 may cause receiver 120 to generate an overlay image based on the data indicative of one or more user types received from moderator device 130 via communication unit 123, e.g., via a modified user input back channel (UIBC), instead of deriving one or more user types based on the user input data received from display 105, as described in further detail below. -
Operating system 127 may be stored in storage 126 and executed on processing unit 121. Operating system 127 may be suitable for controlling the general operation of receiver 120 and may work in concert with overlay image generator application 129 to achieve the functionality of receiver 120 described herein. Receiver 120 may also optionally run a graphics library, other operating systems, and/or any other application programs. It of course is understood that receiver 120 may include additional or fewer components than those illustrated in FIG. 3B and may include more than one of each type of component. - Referring now to
FIG. 3C, hardware and software components of moderator device 130 may include processing unit 131, memory 132, storage 136, communication unit 133, power source 134, and input/output (I/O) device(s) 135. -
Processing unit 131 may be one or more processors configured to run operating system 137, collaboration application 138, and optional overlay image application 139 and perform the tasks and operations of moderator device 130 set forth herein. Memory 132 may include, but is not limited to, volatile (e.g., random-access memory (RAM)), non-volatile (e.g., read-only memory (ROM)), flash memory, or any combination thereof. Communication unit 133 may be any well-known communication infrastructure facilitating communication over any well-known wired or wireless connection. For example, communication unit 133 may receive information, e.g., user input data, from receiver 120 via WiFi, and may transmit information, e.g., image(s), to receiver 120 via WiFi. Power source 134 may be a battery or may connect moderator device 130 to a wall outlet or any other external source of power. Storage 136 may include, but is not limited to, removable and/or non-removable storage such as, for example, magnetic disks, optical disks, or tape. -
moderator device 130 for inputting data tomoderator device 130. For example, the input device of I/O device 135 may be a touch input device (e.g., touch pad or touch screen) or an array of location sensors, configured to receive user input from the user and generate user input data indicative of the user input. In addition, the input device of I/O device 135 may work in conjunction with a smart stylet that interacts with the array of location sensors. The output device of I/O device 135 may be any device coupled to or incorporated intomoderator device 130 for outputting or otherwise displaying images. Accordingly, I/O device(s) 135 may be a touchscreen for receiving and displaying images. -
Collaboration application 138 may be stored in storage 136 and executed on processing unit 131. Collaboration application 138 may be a software application and/or software modules having one or more sets of instructions suitable for performing the operations of moderator device 130 set forth herein, including facilitating the exchange of information with receiver 120. For example, collaboration application 138 may cause moderator device 130 to transmit a first real image from an original image file stored on storage 136 to receiver 120 via communication unit 133, e.g., via WiFi, for display via display 105. Further, collaboration application 138 may cause moderator device 130 to receive user input data from receiver 120 via communication unit 133, e.g., via WiFi. Collaboration application 138 further may cause processing unit 131 to process and analyze the user input data received from receiver 120 and to modify the original image file stored on storage 136 by generating a real image based on the user input data, and to store the real image on storage 136. Additionally, collaboration application 138 may cause moderator device 130 to transmit the real image, e.g., the real image stored on storage 136, to receiver 120 via communication unit 133, e.g., via WiFi, for display via display 105. - Optional
overlay image application 139 may be stored in storage 136 and executed on processing unit 131. Overlay image application 139 may be a software application and/or software modules having one or more sets of instructions suitable for performing the operations of moderator device 130 set forth herein, including facilitating the exchange of information with receiver 120. For example, overlay image application 139 may cause processing unit 131 of moderator device 130 to derive user type data indicative of one or more user types from the user input data received by moderator device 130 through collaboration application 138, and to transmit the user type data to receiver 120 via communication unit 133, e.g., via a defined TCP port. -
Operating system 137 may be stored in storage 136 and executed on processing unit 131. Operating system 137 may be suitable for controlling the general operation of moderator device 130 and may work in concert with collaboration application 138 and optional overlay image application 139 to achieve the functionality of moderator device 130 described herein. Moderator device 130 may also optionally run a graphics library, other operating systems, and/or any other application programs. It of course is understood that moderator device 130 may include additional or fewer components than those illustrated in FIG. 3C and may include more than one of each type of component. In accordance with one embodiment of the present invention, operating system 137 may cause processing unit 131 of moderator device 130 to derive user type data indicative of one or more user types from the user input received by moderator device 130 through collaboration application 138, and to transmit the user type data to receiver 120 via communication unit 133, e.g., via a modified user input back channel (UIBC). - Referring now to
FIG. 3D, hardware and software components of one or more member devices 140 may include processing unit 141, memory 142, storage 146, communication unit 143, power source 144, and input/output (I/O) device(s) 145. -
Processing unit 141 may be one or more processors configured to run operating system 147, collaboration application 148, and optional overlay image application 149 and perform the tasks and operations of member device 140 set forth herein. Memory 142 may include, but is not limited to, volatile (e.g., random-access memory (RAM)), non-volatile (e.g., read-only memory (ROM)), flash memory, or any combination thereof. Communication unit 143 may be any well-known communication infrastructure facilitating communication over any well-known wired or wireless connection. For example, communication unit 143 may transmit information, e.g., user input data, to receiver 120 of collaborative platform 100 via WiFi, and may receive information, e.g., image(s), from receiver 120 via WiFi. Power source 144 may be a battery or may connect member device 140 to a wall outlet or any other external source of power. Storage 146 may include, but is not limited to, removable and/or non-removable storage such as, for example, magnetic disks, optical disks, or tape. -
member device 140 for inputting data tomember device 140. For example, the input device of I/O device 145 may be a touch input device (e.g., touch pad or touch screen) or an array of location sensors, configured to receive user input from the user and generate user input data indicative of the user input. In addition, the input device of I/O device 145 may work in conjunction with a smart stylet that interacts with the array of location sensors. The output device of I/O device 145 may be any device coupled to or incorporated intomember device 140 for outputting or otherwise displaying images. Accordingly, I/O device(s) 145 may be a touchscreen for receiving and displaying images. -
Collaboration application 148 may be stored in storage 146 and executed on processing unit 141. Collaboration application 148 may be a software application and/or software modules having one or more sets of instructions suitable for performing the operations of member device 140 set forth herein, including facilitating the exchange of information with receiver 120. For example, collaboration application 148 may cause member device 140 to transmit user input data received via the input device of I/O device(s) 145 to receiver 120 via communication unit 143, e.g., via WiFi, for further transmission to moderator device 130. Further, collaboration application 148 may cause member device 140 to receive image(s) from receiver 120 via communication unit 143, e.g., via WiFi, for display via the output device of I/O device(s) 145. - Optional
overlay image application 149 may be stored in storage 146 and executed on processing unit 141. Overlay image application 149 may be a software application and/or software modules having one or more sets of instructions suitable for performing the operations of member device 140 set forth herein, including facilitating the exchange of information with receiver 120. When member device 140 is designated as the moderator by moderator device 130 as described above, overlay image application 149 may operate similarly to overlay image application 139. -
Operating system 147 may be stored in storage 146 and executed on processing unit 141. Operating system 147 may be suitable for controlling the general operation of member device 140 and may work in concert with collaboration application 148 and optional overlay image application 149 to achieve the functionality of member device 140 described herein. Member device 140 may also optionally run a graphics library, other operating systems, and/or any other application programs. It of course is understood that member device 140 may include additional or fewer components than those illustrated in FIG. 3D and may include more than one of each type of component. - Referring now to
FIG. 4A, a block diagram of an exemplary embodiment of collaborative platform 100 in accordance with the principles of the present invention is provided. As shown in FIG. 4A, user input data may be transmitted from display 105 to receiver 120 via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth. In addition, user input data and the real images may be communicated between receiver 120 and moderator device 130 across a wireless connection, e.g., WiFi. Further, the overlaid image based on the real image and the overlay image may be transmitted from receiver 120 to display 105 via a wired connection, e.g., an HDMI cable. - Referring now to
FIG. 4B, a sequence diagram for using collaborative platform 100 depicted in FIG. 4A is provided. As described above, collaborative platform 100 may run a collaboration application, e.g., a third party application such as Microsoft Whiteboard available from Microsoft, Redmond, Wash., or Google Drive available from Google LLC, Mountain View, Calif., for displaying a first real image based on an original image file stored on moderator device 130, receiving user input, modifying the original image file stored on moderator device 130 based on the user input, and displaying a second real image based on the modified original image file. Specifically, as shown in FIG. 4B, a user may provide user input directly to display 105, e.g., a touchscreen. A first real image may already be displayed on display 105, e.g., a math problem, from an original image file stored on moderator device 130, or display 105 may initially be blank if the original image file stored on moderator device 130 is blank. The user input may be a pattern of interactions (e.g., clicks and drags) with the touchscreen of display 105 forming, e.g., a number "3" in the color red. The shape forming the number "3" is an example of the user input, and the color red is an example of a user type of the user input. Other possible user types may include, for example, different colors (e.g., gray, black, red, blue, etc.), thickness levels (e.g., thin, normal, thick), or marker or eraser type, etc. - User input data based on the user input received by
display 105 is then transmitted via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth, to receiver 120, which then passes along the user input data to moderator device 130 via a wireless connection, e.g., WiFi. Running the collaboration application, moderator device 130 modifies the original image file stored in memory therein based on the user input data, and generates a real image file corresponding to a real image, e.g., where the red "3" is superimposed on the math problem. Typically, the real image is then transmitted to receiver 120 via a wireless connection, e.g., WiFi, which then passes along the real image to display 105 via a wired connection, e.g., an HDMI cable, to be displayed. Accordingly, there is an undesirable delay between the time the user provides the user input to display 105 and when the real image reaches display 105, i.e., when the red "3" begins to appear on display 105. As will be understood by a person having ordinary skill in the art, the collaborative platform does not wait for, e.g., the entire number "3" to be drawn before generating the real image; instead, this process occurs continuously as the user draws the number "3." - In accordance with the principles of the present invention,
collaborative platform 100 may run an overlay image generator application for generating an overlay image by receiver 120 based on the user input provided by the user, generating an overlaid image based on the overlay image and the real image received from moderator device 130, and displaying the overlaid image on the original image on display 105 to reduce latency of collaborative platform 100. - Specifically, as shown in
FIGS. 4A and 4B, receiver 120 may generate an overlay image based on the user input data, generate an overlaid image based on the overlay image and the real image received from moderator device 130, and transmit the overlaid image via a wired connection, e.g., an HDMI cable, to display 105 to be displayed over the original image displayed on display 105, thereby reducing latency of collaborative platform 100. In addition, receiver 120 may determine the user type of the user input by deriving data indicative of the user type from the user input data received from display 105 using, e.g., machine learning, artificial intelligence, or a neural network, as described in further detail below with regard to FIGS. 7A and 7B. Accordingly, receiver 120 may generate the overlay image based on both the user input data and the user type, as it determines the user type. - Referring now to
FIG. 5A, a flowchart is illustrated detailing the data flow and decisions made in implementing the overlaid image generation functionality of receiver 120 of collaborative platform 100. As mentioned above, receiver 120 of collaborative platform 100 may be used to generate an overlay image based on user input, and generate an overlaid image based on the overlay image and the real image received from moderator device 130 such that the overlaid image is displayed, thereby reducing latency of collaborative platform 100. - To initiate the process set forth in
FIG. 5A, at step 500, an original image is received by receiver 120. For example, the original image may be received from moderator device 130 and may include, e.g., a blank screen, a math problem, a picture, etc. At step 501, receiver 120 sets the original image received from moderator device 130 as a current image. This may involve decoding the original image and/or placing the original image in a buffer. At step 502, user input data indicative of user input may be received by receiver 120, e.g., via a USB cable and/or a wireless connection such as Bluetooth, from display 105. Preferably, the user type of the user input may be set to preprogrammed default settings, e.g., default color (gray), default thickness (normal), and default marker user type, until optionally changed by the user as described with regard to steps 504 to 506. If receiver 120 receives user input data from display 105 at step 502, the process may proceed to step 503. If receiver 120 does not receive user input data from display 105 at step 502, the process may proceed directly to step 508 described in further detail below. -
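The flow just described can be sketched as a single loop iteration; this is a hypothetical Python sketch, not the disclosed implementation. The injected callables (get_input, send_to_moderator, show) stand in for the USB/Bluetooth, WiFi, and HDMI paths, and the user-type steps 504 to 506 and current-image updates are omitted for brevity.

```python
# Hypothetical sketch of one pass through the FIG. 5A loop (simplified:
# steps 504-506 and periodic current-image updates are not modeled).

def run_one_iteration(state, get_input, send_to_moderator, show):
    data = get_input()                        # step 502: poll for user input data
    if data is not None:
        send_to_moderator(data)               # step 503: forward for real-image generation
        state["overlay"].extend(data)         # step 507: grow the leading end
    if len(state["overlay"]) > state["max_points"]:
        # step 508: trim the trailing end (count-based policy for illustration)
        state["overlay"] = state["overlay"][-state["max_points"]:]
    overlaid = (state["current_image"], list(state["overlay"]))  # step 509
    show(overlaid)                            # step 510: send to the display
    return overlaid
```

In this sketch the overlay is a simple list of spatial coordinates layered over the current image; the real receiver would composite pixels rather than pair a label with a point list.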
step 503, receiver 120, running the collaboration application, transmits the user input data to the source of the original image, e.g., moderator device 130, for further processing and analysis. As described above, moderator device 130 generates real image(s) based on the user input data received from receiver 120. In addition, receiver 120, running the overlay image generation application, generates an overlay image based on the user input data for immediate display. - Optionally, at
step 504, receiver 120 analyzes the user input data received from display 105 at step 502 to determine if the at least one user type has changed. For example, receiver 120 may compare the user input's spatial location on display 105 as well as the physical contact with display 105 at various points in time to determine, using, e.g., machine learning, artificial intelligence, or a neural network, whether the user has selected a different user type. If receiver 120 determines that a different user type has not been selected, e.g., the user has not clicked on a different user type icon, at step 505, receiver 120 will continue using the previous user type, e.g., the color gray. If receiver 120 determines that a different user type has been selected, e.g., the user selected the color red, based on the spatial location of the user input and the fact that the user discontinued contact with display 105 and re-contacted display 105 at that specific spatial location on display 105, at step 506, receiver 120 selects the new user type, e.g., the color red. - At
step 507, receiver 120 generates a leading end of an overlay image based on the user input data received at step 502 as described in further detail with regard to FIG. 5B, as well as the user type selected at step 505 or step 506, or the default user type if the process proceeded directly from step 503 without steps 504 to 506. The overlay image generated will be representative of the user's actual input, and further may include predicted user input based on the user's actual input. - For example, as shown in
FIG. 5B, to generate an overlay image based on the user input data and optionally the user type, at step 511, receiver 120 generates a first portion of the overall overlay image which is representative of the user's actual input received by receiver 120, e.g., via a USB cable and/or a wireless connection such as Bluetooth, from display 105. Accordingly, the first portion of the overlay image, when displayed on display 105 as an overlaid image, will illustrate what the user actually inputted on display 105. At step 512, receiver 120 generates a second, extended portion of the overall overlay image, which may be a prediction of the user's intended input based on the user input data received by receiver 120, e.g., via a USB cable and/or a wireless connection such as Bluetooth, from display 105. For example, using, e.g., extrapolation, machine learning, artificial intelligence, and/or a neural network, receiver 120 may analyze the spatial coordinates and/or the time coordinates of the user's input from the user input data to predict the user's intended input, e.g., what the user's next input will be, as described in further detail below. At step 513, receiver 120 generates an overlay image based on the first and second, extended portions of the overlay image, such that the overlay image will include what the user actually inputted on display 105 and what the user is predicted to input on display 105. - Referring again to
FIG. 5A, at step 508, receiver 120 may remove a portion of the trailing end of the overlay image as receiver 120 generates the leading end of the overlay image. For example, the portion of the overlay image of the overlaid image displayed on display 105 may be removed as a function of time, or as a function of the spatial amount of the overlay image of the overlaid image displayed on display 105 at a given time. For example, each spatial coordinate of the overlay image of the overlaid image displayed on display 105 may remain displayed for a predetermined amount of time, e.g., 100 to 300 milliseconds or more. Accordingly, each spatial coordinate that makes up the overlay image of the overlaid image on display 105 may remain on display 105 for the same amount of time, and may be removed after that time has lapsed. Each spatial coordinate of the overlay image is initially displayed on display 105 at the leading end of the overlay image of the overlaid image, and as time lapses and additional spatial coordinates are displayed, the initially leading spatial coordinate ends up at the trailing end of the overlay image of the overlaid image before it is removed, e.g., after the predetermined amount of time has lapsed. For example, the predetermined amount of time that each spatial coordinate is displayed may be at least as long as the latency period for the real image to be received by and appear on display 105. Accordingly, for a given amount of spatial coordinates displayed on display 105 within a predetermined time period, the same amount of spatial coordinates will be removed from display 105 within the same predetermined time period. - In accordance with another aspect of the present invention, the portion of the overlay image of the overlaid image displayed on
display 105 may have a maximum spatial distribution, e.g., length between the leading end and the trailing end of the overlay image of the overlaid image and/or amount of spatial coordinates, for a given amount of time. Thus, after a spatial coordinate of the overlay image of the overlaid image is initially displayed on display 105, and after a predetermined amount of additional spatial coordinates are displayed such that the initial spatial coordinate is now at the trailing end of the overlay image of the overlaid image, the initial spatial coordinate of the overlay image of the overlaid image will be removed from display 105 when the amount of additional spatial coordinates displayed on display 105 exceeds the predetermined maximum amount of spatial coordinates permitted on display 105. - Accordingly, if
receiver 120 does not receive user input data from display 105 at step 502, at step 508, no additional leading end will be added to the overlay image, e.g., when the user removes their stylet/finger from display 105 such that no additional user input is provided to display 105, while a portion of the trailing end of the overlay image will gradually be removed from the trailing end of the overlay image and replaced with the current real images received from moderator device 130 until, e.g., the overlay image of the overlaid image displayed on display 105 is completely replaced by the current image or additional user input is received by receiver 120 from display 105 at step 502. -
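The time-based and count-based removal policies described above can be sketched as follows. This is an illustrative Python sketch, not part of the disclosure; the hold time and point cap are assumed parameters.

```python
# Illustrative sketch of step 508's trailing-end removal.
# Policy 1: each overlay coordinate stays on screen for a fixed hold time,
# at least as long as the real-image latency. Policy 2: the overlay is capped
# at a maximum number of spatial coordinates (maximum spatial distribution).

HOLD_TIME_S = 0.3      # assumed per-coordinate display time (100-300 ms or more)
MAX_POINTS = 50        # assumed maximum spatial distribution of the overlay

def trim_by_time(points, now, hold=HOLD_TIME_S):
    """points: [(timestamp, (x, y)), ...] in arrival order; drop expired ones."""
    return [(t, p) for (t, p) in points if now - t <= hold]

def trim_by_count(points, max_points=MAX_POINTS):
    """Keep only the newest coordinates; older ones fall off the trailing end."""
    return points[-max_points:]
```

Under either policy the oldest coordinates, now at the trailing end, disappear first, exactly where the real image from moderator device 130 has had time to catch up.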
step 509, receiver 120 generates an overlaid image based on the overlay image generated at step 507 and the current image set at step 501. Thus, the overlaid image generated will be representative of the user's actual input, and further may include predicted user input based on the user's actual input, superimposed on the current image. For example, the overlay image may be superimposed on the real image to form the overlaid image, as described with regard to FIG. 5C below, which may then be sent by receiver 120 to display 105. Accordingly, no latency of collaborative platform 100 is perceived on display 105, as the predicted portion of the overlaid image is displayed seemingly simultaneously with the user's input. Moreover, the current image may be periodically updated as receiver 120 receives additional images (e.g., real images) from moderator device 130. For example, a received additional image may be decoded and/or added to a buffer and may become the current image. In this manner, the overlaid image generated by receiver 120 may be superimposed on the updated current image. - As shown in
FIG. 5C, the overlay image may be superimposed on the real image to form the overlaid image. For example, the real image may include line 515, generated by moderator device 130 based on user input data corresponding to user input received by receiver 120 from display 105. Line 515 represents what the user actually draws on display 105, but includes only as much as has been generated by moderator device 130 based on the user input data. For example, the user's actual input in real-time may be at another point on display 105, as denoted by stylet 700. As described above, the overlay image generated by receiver 120 includes first portion 516, which is representative of the user's actual input received by receiver 120, and second, extended portion 517, which may be a prediction of the user's intended input based on the user input data received by receiver 120. Moreover, the overlay image, e.g., lines 516 and 517, may be superimposed on the real image, e.g., line 515, to form the overlaid image. As the user continues to provide input, line 515 gets longer, while lines 516 and 517 appear ahead of line 515 of the overlaid image, as shown in FIG. 5C. Moreover, as the overlay image may further be generated based on the speed of the user input, the overlay image, e.g., lines 516 and 517, may appear as longer lines when the user input is received faster by display 105, and as shorter lines when the user input is received slower by display 105. -
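The superposition itself can be illustrated with a minimal sketch, assuming images are simple two-dimensional pixel grids and treating any drawn overlay pixel as opaque. The representation and function name are assumptions for illustration, not the disclosed implementation.

```python
# Illustrative sketch of forming the overlaid image: wherever the overlay has
# a drawn pixel, it takes precedence over the real image beneath. Images are
# modeled as equally sized 2D lists of pixel values; None marks an empty pixel.

def compose_overlaid(real, overlay):
    """Superimpose overlay on real, pixel by pixel."""
    return [
        [o if o is not None else r for r, o in zip(real_row, overlay_row)]
        for real_row, overlay_row in zip(real, overlay)
    ]
```

A production compositor would work on encoded video frames with alpha blending rather than nested lists, but the precedence rule is the same: overlay pixels win where present.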
FIG. 5A, at step 510, receiver 120 transmits the overlaid image, e.g., the first and second, extended portions of the overlay image superimposed on the current image, to display 105, thereby reducing and/or eliminating latency of collaborative platform 100. Moreover, an additional real image corresponding to additional user input data from display 105 may be received by receiver 120 from moderator device 130 and set as an additional current image, and an additional overlaid image may be generated by receiver 120 based on the overlay image created from the additional user input data and superimposed on the additional current image. - Referring now to
FIGS. 6A-6E, the user input provided by the user is illustrated in conjunction with the display of the overlaid image generated by receiver 120 to illustrate the latency of the real image. As shown in FIG. 6A, the original image displayed on display 105, e.g., a touchscreen, may be blank, and the user may use stylet 700 to interact with display 105 by pressing stylet 700 against display 105 at point 605. As shown in FIG. 6B, the user drags stylet 700 from point 605 to point 606 on display 105. The dragging motion of stylet 700 by the user, i.e., the user input, is converted to user input data by display 105 and transmitted to receiver 120, which then transmits the user input data to moderator device 130 to modify the original image and generate a real image based on the user input data as described above. An overlaid image is then generated by receiver 120 based on the user input data (and optionally the user type) and the real image received from moderator device 130, and transmitted to display 105 and displayed. As described above, the overlaid image may be formed by an overlay image superimposed on the real image, where the overlay image includes a first portion representative of the user's actual input received by display 105, and a second, extended portion, which may be a prediction of the user's intended input based on the user input data received by display 105. As shown in FIG. 6B, the real image is still the blank original image, and thus the overlaid image appears to include only overlay image 701. Accordingly, latency is reduced on collaborative platform 100 as the overlaid image is displayed almost immediately after the user drags stylet 700 from point 605 to point 606, and thus is hardly noticeable by the user or other observers looking at display 105. - The latency of the collaboration application of
collaborative platform 100 is illustrated in FIG. 6C. As shown in FIG. 6C, the user continues to drag stylet 700 from point 606 to point 607. Meanwhile, the user input is continuously converted to user input data by display 105 and transmitted to receiver 120, and then continuously transmitted to moderator device 130 via a wireless connection, e.g., WiFi, for processing. As described above, moderator device 130 modifies the original image stored in memory thereof based on the user input data, and generates a real image representing the user input, e.g., the dragging motion of stylet 700 by the user on display 105. As shown in FIG. 6C, when stylet 700 is at point 607, moderator device 130 has only processed the user input data representing the user's dragging motion of stylet 700 from point 605 to point 606, and accordingly generates a real image, e.g., real image 702, representing the user's input. The real image generated by moderator device 130 is then transmitted to receiver 120. As described above, receiver 120 generates an overlaid image, which includes overlay image 701, e.g., the first portion representative of the user's actual input received by display 105 and the predicted second, extended portion representative of the user's intended input, superimposed on real image 702. The overlaid image is then transmitted to display 105 via a wired connection, e.g., an HDMI cable, to be displayed. - As the data flow of the collaboration application requires the user input data to be transmitted via a wired connection from
display 105 to receiver 120 and via a wireless connection from receiver 120 to moderator device 130, and the real image via a wireless connection from moderator device 130 to receiver 120 and ultimately via a wired connection from receiver 120 to display 105, undesirable latency of collaborative platform 100 is observed. This is illustrated in FIG. 6C as real image 702 being displayed with a delay behind overlay image 701. As an illustrative example in FIG. 6C, when stylet 700 is at point 607, overlay image 701 appears as a mark from point 605 to immediately adjacent point 607, while real image 702 has only reached point 606. - Moreover, as shown in
FIG. 6D, when stylet 700 is at point 608, overlay image 701 appears as a mark from point 605 to immediately adjacent point 608, while real image 702 has only reached point 607. FIG. 6E illustrates display 105 some time after the latency of collaborative platform 100 has elapsed, such that overlay image 701 and real image 702 both extend from point 605 to point 608. - In addition,
receiver 120 may derive and/or receive information indicative of one or more user types, such that the overlay image generated is also based on the one or more user types. For example, the user may select one or more user types, e.g., thickness, color, or marker or eraser type, and provide user input in accordance with the selected user type. Accordingly, as the user begins to draw, e.g., a number "3" in the color red on display 105, an overlay image will be generated by receiver 120 and transmitted to display 105 as an overlaid image such that an overlaid image of the number "3" in the color red will begin to be displayed on display 105 with reduced latency. - Referring now to
FIGS. 7A-7D, the user input provided by the user is illustrated in conjunction with the display of the overlaid image generated by receiver 120, such that the overlaid image includes the user's actual input in addition to the predicted user input generated by receiver 120, superimposed on the real image. As shown in FIG. 7A, the user may use stylet 700 to interact with display 105 by pressing stylet 700 against display 105 at point 705 (5, 6), and dragging stylet 700 from point 705 (5, 6) to point 706 (5, 7) to point 707 (5, 8) to point 708 (5, 9) to point 709 (5, 10) on display 105. Accordingly, the user's actual input is depicted as line 703 as shown in FIG. 7A. The dragging motion of stylet 700 by the user, i.e., the user input, is converted to user input data by display 105 and transmitted to receiver 120 as described above. Thus, the user input data includes the user's actual input, e.g., spatial coordinates (5, 6), (5, 7), (5, 8), (5, 9), and (5, 10). An overlay image is then generated by receiver 120 based on the user input data (and optionally the user type), and transmitted to display 105 to be displayed as an overlaid image. As described above, the overlay image includes the user's actual input, e.g., line 703, as well as the predicted user input, e.g., line 704, generated by receiver 120, as shown in FIG. 7B. For example, line 704 may be predicted by receiver 120 based on spatial coordinates (5, 6), (5, 7), (5, 8), (5, 9), and (5, 10) of the user input data using extrapolation, e.g., linear extrapolation, polynomial extrapolation, conic extrapolation, French curve extrapolation, and/or any other well-known extrapolation techniques, machine learning, artificial intelligence, or a neural network. Based on spatial coordinates (5, 6), (5, 7), (5, 8), (5, 9), and (5, 10), receiver 120 predicts that the user's next input will be to continue dragging stylet 700 from point 709 (5, 10) to point 710 (5, 11) to point 711 (5, 12) to point 712 (5, 13) to point 713 (5, 14). -
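As one concrete possibility, the prediction from the spatial coordinates of FIG. 7A can be sketched with linear extrapolation, which is only one of the techniques named above; the function name is illustrative.

```python
# Hedged sketch: linear extrapolation of the user's next points from the last
# two actual points (one of several prediction techniques contemplated).

def predict_next_points(actual, n=4):
    """actual: [(x, y), ...] from the user input data; returns n predicted points."""
    (x0, y0), (x1, y1) = actual[-2], actual[-1]
    dx, dy = x1 - x0, y1 - y0                # most recent direction of travel
    return [(x1 + dx * k, y1 + dy * k) for k in range(1, n + 1)]

line_703 = [(5, 6), (5, 7), (5, 8), (5, 9), (5, 10)]   # actual input
line_704 = predict_next_points(line_703)
# line_704 == [(5, 11), (5, 12), (5, 13), (5, 14)], matching points 710-713
```

Polynomial or conic extrapolation would replace the constant step (dx, dy) with a fitted higher-order trend over more of the history.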
line 704 may further be predicted by receiver 120 based on the time coordinates of the user input data using extrapolation, machine learning, artificial intelligence, or a neural network. For example, the user input data received by receiver 120 may include data indicating that point 705 was touched by stylet 700 at T1, point 706 at T2, point 707 at T3, point 708 at T4, and point 709 at T5, and receiver 120 may determine the velocity of stylet 700 based on T1-T5. Thus, receiver 120 will predict that point 710 will be touched by stylet 700 at T6, point 711 at T7, point 712 at T8, and point 713 at T9, such that the velocity between T6-T9 corresponds with the velocity of T1-T5. Accordingly, points 710-713 of line 704 will be displayed on display 105 with a velocity corresponding to the velocity based on T1-T5, such that points 710 to 713 of line 704 will appear on display 105 at the same time the user drags stylet 700 to points 710, 711, 712, and 713 in real time, thereby eliminating any latency on collaborative platform 100. In addition, receiver 120 may determine the acceleration of stylet 700 based on T1-T5, such that the acceleration between T6-T9 corresponds with the acceleration of T1-T5. Accordingly, points 710-713 of line 704 will be displayed on display 105 with a modified velocity corresponding to the acceleration based on T1-T5, such that points 710 to 713 of line 704 will appear on display 105 at the same time the user drags stylet 700 to points 710, 711, 712, and 713 in real time, thereby eliminating any latency on collaborative platform 100. -
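A sketch of the velocity-based timing follows, assuming uniform sampling and simple averaging of the intervals; the acceleration variant described above would extrapolate the trend in the intervals rather than their mean. Names and parameters are illustrative assumptions.

```python
# Illustrative sketch: estimate the stylet's pace from timestamps T1..T5 and
# schedule the predicted points T6..T9 at the same average interval, so the
# predicted portion appears at the same speed as the user's actual input.

def predict_times(times, n=4):
    """times: observed touch timestamps; returns n future timestamps."""
    interval = (times[-1] - times[0]) / (len(times) - 1)   # mean sample interval
    return [times[-1] + interval * k for k in range(1, n + 1)]

t1_to_t5 = [0.0, 0.1, 0.2, 0.3, 0.4]    # seconds; assumed uniform sampling
t6_to_t9 = predict_times(t1_to_t5)      # approximately [0.5, 0.6, 0.7, 0.8]
```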
receiver 120 may predict complex curved lines by predicting finite line segments forming the curve as well as predicting the angle of each finite line segment and the change of angle between adjacent line segments. For example, receiver 120 may detect a first angle of a first line segment of the user's actual input, detect a second angle of a second line segment of the user's actual input, and determine the change of angle between the first angle and the second angle. Based on the first angle, the second angle, and the change of angle of the user's actual input, receiver 120 may predict the curve of the user's next input of finite line segments. For example, if receiver 120 detects that the user's actual input is a sequence of finite line segments that form a curve with known changes of angles between each pair of adjacent line segments, receiver 120 will generate an overlay image having a predicted extended portion with the same curvature. Moreover, receiver 120 may detect a rate of change of the change of angle between adjacent finite line segments of the user's actual input and predict the user's next input based on the detected rate of change of the change of angle between adjacent finite line segments. - As shown in
FIG. 7C, the user may use stylet 700 to interact with display 105 by pressing stylet 700 against display 105 at point 716 (2, 5), and dragging stylet 700 from point 716 (2, 5) to point 717 (5, 6) to point 718 (8, 8) on display 105. Accordingly, the user's actual input is depicted as line 714 as shown in FIG. 7C. The dragging motion of stylet 700 by the user, i.e., the user input, is converted to user input data by display 105 and transmitted to receiver 120 as described above. Thus, the user input data includes the user's actual input, e.g., a first line segment from spatial coordinate (2, 5) to spatial coordinate (5, 6) having a first angle, and a second line segment from spatial coordinate (5, 6) to spatial coordinate (8, 8) having a second angle. An overlay image is then generated by receiver 120 based on the user input data (and optionally the user type), and transmitted to display 105 to be displayed as an overlaid image. As described above, the overlay image includes the user's actual input, e.g., line 714, as well as the predicted user input, e.g., line 715, generated by receiver 120, as shown in FIG. 7D. For example, line 715 may be predicted by receiver 120 based on spatial coordinates (2, 5), (5, 6), and (8, 8) of the user input data using extrapolation, machine learning, artificial intelligence, or a neural network. Based on the first angle of the first line segment from spatial coordinate (2, 5) to spatial coordinate (5, 6), and the second angle of the second line segment from spatial coordinate (5, 6) to spatial coordinate (8, 8), receiver 120 predicts that the user's next input will be to continue dragging stylet 700 from point 718 (8, 8) to point 719 (11, 11) to point 720 (14, 15). The angles of the line segments from point 718 to point 719 and from point 719 to point 720 will correspond with the rate of change between the first angle of the line segment from point 716 to point 717 and the second angle of the line segment from point 717 to point 718. -
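Using the coordinates of FIG. 7C, the curve prediction can be sketched by extrapolating the change between successive segment vectors, a simple numeric stand-in for the angle and rate-of-change-of-angle analysis described above. The function name is illustrative.

```python
# Illustrative sketch of the FIG. 7D curve prediction: the difference between
# the last two segment vectors is assumed to continue at the same rate.

def predict_curve(points, n=2):
    """points: last three actual (x, y) points; returns n predicted points."""
    (ax, ay), (bx, by), (cx, cy) = points[-3:]
    d1 = (bx - ax, by - ay)                 # first segment vector (first angle)
    d2 = (cx - bx, cy - by)                 # second segment vector (second angle)
    dd = (d2[0] - d1[0], d2[1] - d1[1])     # change between adjacent segments
    predicted, (x, y), d = [], (cx, cy), d2
    for _ in range(n):
        d = (d[0] + dd[0], d[1] + dd[1])    # continue the same rate of change
        x, y = x + d[0], y + d[1]
        predicted.append((x, y))
    return predicted

line_714 = [(2, 5), (5, 6), (8, 8)]         # actual input, points 716-718
line_715 = predict_curve(line_714)
# line_715 == [(11, 11), (14, 15)], matching points 719 and 720
```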
line 715 may also be predicted by receiver 120 based on the time coordinates of the user input data using extrapolation, machine learning, artificial intelligence, or a neural network. For example, the user input data received by receiver 120 may include data indicating that point 716 was touched by stylet 700 at T1, point 717 at T2, and point 718 at T3, and receiver 120 may determine the velocity of stylet 700 based on T1-T3. Thus, receiver 120 will predict that point 719 will be touched by stylet 700 at T4, and point 720 at T5, such that the velocity between T3-T5 corresponds with the velocity of T1-T3. Accordingly, points 719 and 720 of line 715 will be displayed on display 105 with a velocity corresponding to the velocity based on T1-T3, such that points 719 and 720 of line 715 will appear on display 105 at the same time the user drags stylet 700 to points 719 and 720 in real time, thereby eliminating any latency on collaborative platform 100. In addition, receiver 120 may determine the acceleration of stylet 700 based on T1-T3, such that the acceleration between T3-T5 corresponds with the acceleration of T1-T3. Accordingly, points 719 and 720 of line 715 will be displayed on display 105 with a modified velocity corresponding to the acceleration based on T1-T3, such that points 719 and 720 of line 715 will appear on display 105 at the same time the user drags stylet 700 to points 719 and 720 in real time, thereby eliminating any latency on collaborative platform 100. - Referring now to
FIGS. 8A and 8B, an exemplary method of collecting user type data in accordance with one aspect of the present invention is provided. FIG. 8A is a screenshot of display 105 at a first time, and FIG. 8B is a screenshot of display 105 at a second time. As shown in FIGS. 8A and 8B, the interface displayed on display 105 may include user-friendly icons in a ribbon at the top of the screen representing selectable user types including, but not limited to, marker icon 601, thickness icon 602, eraser icon 603, and color icon 604. Upon clicking, for example, thickness icon 602, a drop-down menu may appear with additional sub-icons for selecting between thickness levels such as "thin," "normal," and "thick." Further, upon clicking, for example, color icon 604, a drop-down menu may appear with additional sub-icons for selecting between different colors such as "gray," "black," "blue," "yellow," etc. Preferably, the user type of the user input may be set to preprogrammed default settings, e.g., default color (gray), default thickness (normal), and default marker user type, until subsequently changed by the user. - As described above,
receiver 120 may receive user input data from display 105 via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth, and, from the user input data, determine one or more user types of the user input. For example, using, e.g., machine learning, artificial intelligence, and/or a neural network, receiver 120 may analyze and/or process user input data to determine the user type. Using machine learning, artificial intelligence, and/or neural networks, receiver 120 may determine the user type based on patterns of the user's movement with regard to display 105, and/or by observing the user's actions that follow, e.g., what types of marks are drawn. - Referring to
FIG. 8A, the user drew a line extending from point 605 to point 606 to point 607 by, e.g., contacting display 105 and moving from point 605 to point 606 to point 607 without discontinuing contact with display 105. As shown in FIG. 8A, marker icon 601 was previously selected, for example, by contacting any point within a perimeter of points on display 105 corresponding to marker icon 601. Based on machine learning, artificial intelligence, or a neural network, receiver 120 can identify the interface of display 105 and correlate specific actions by the user (e.g., clicking on the point of display 105 where marker icon 601 resides) with specific user types. For example, when marker icon 601 is observed to be clicked, and the immediately following user input data indicates that, following the clicking of marker icon 601, dragging motion of the stylet by the user from point 605 to point 606 to point 607 results in a mark extending from point 605 to point 606 to point 607, receiver 120 will learn that by clicking on the point of display 105 where marker icon 601 resides, the marker user type has been selected, which permits the user to draw lines. Thus, receiver 120 will associate the spatial region of marker icon 601 with the function of drawing solid lines. Using machine learning and comparing a plurality of user inputs taken at various time points, receiver 120 can deduce the various icons of any interface and their respective functions. Accordingly, receiver 120 may include a database by which it compares actions of the user relative to display 105, given a specific interface, to determine what user type has been selected. -
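The learned icon database can be sketched as a region-to-user-type lookup; this hypothetical Python sketch uses a plain table as a stand-in for the machine learning, artificial intelligence, or neural network approaches named above, and all names are illustrative.

```python
# Hypothetical sketch: the receiver associates screen regions (icon bounding
# boxes) with user types after observing which marks follow a click in each
# region, then switches the current user type when a touch lands on an icon.

class UserTypeTracker:
    def __init__(self):
        self.regions = {}          # (x0, y0, x1, y1) bounding box -> user type
        self.current = "marker"    # preprogrammed default user type

    def learn(self, box, user_type):
        """Record that clicks inside this region select the given user type."""
        self.regions[box] = user_type

    def on_contact(self, x, y):
        """When the user re-contacts the display, switch types on an icon hit."""
        for (x0, y0, x1, y1), user_type in self.regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                self.current = user_type
        return self.current
```

For example, after observing that clicks in eraser icon 603's region are followed by erasing marks, the tracker would learn that box as "eraser"; a later touch inside it switches the current user type, while touches on the canvas leave the selection unchanged.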
FIG. 8B, at the second time, receiver 120 receives user input data indicating that the user discontinued contact with display 105, and then contacted display 105 at a point on display 105 associated with eraser icon 603, which has been associated with the function of erasing through machine learning. Accordingly, upon clicking of eraser icon 603, receiver 120 determines that the eraser user type has been selected, and generates an overlay image of an eraser mark from point 607 to point 606 in response to the user contacting display 105 at point 607 and dragging the stylus from point 607 to point 606, as shown in FIG. 8B. As will be understood by a person having ordinary skill in the art, by analyzing the user input data received from display 105 to determine which user type is selected, receiver 120 generates the overlay image based on not only the user input, but also the user type of the user input, to accurately display the overlaid image corresponding to the user's selected user type and user input. - Referring now to
FIG. 9A, a block diagram of another exemplary embodiment of collaborative platform 100′ in accordance with the principles of the present invention is provided. As shown in FIG. 9A, user input data may be transmitted from display 105′ to receiver 120′ via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth. In addition, user input data and the real image may be communicated between receiver 120′ and moderator device 130′ across a wireless connection, e.g., WiFi. Further, the overlaid image may be transmitted from receiver 120′ to display 105′ via a wired connection, e.g., an HDMI cable. As shown in FIG. 9A, data indicative of the user type of the user input may be transmitted from moderator device 130′ to receiver 120′ via a wireless connection, e.g., a defined TCP port. - Referring now to
FIG. 9B, a sequence diagram for using collaborative platform 100′ depicted in FIG. 9A is provided. As described above with reference to FIG. 4B, collaboration platform 100′ of FIG. 9A also runs a collaboration application for displaying a first image based on an original image file stored on moderator device 130′, receiving user input, modifying the original image file stored on moderator device 130′ based on the user input, and displaying a second image based on the modified original image file. - Like
collaboration platform 100 of FIG. 4A, collaboration platform 100′ may run an overlay image generator application for generating an overlay image by receiver 120′ based on the user input provided by the user, generating an overlaid image based on the overlay image, and displaying the overlaid image on the original image on display 105′ to reduce latency of collaboration platform 100′. Collaboration platform 100′ differs from collaboration platform 100 in that receiver 120′ may receive data indicative of the user type directly from moderator device 130′ via a wireless connection, e.g., a defined TCP port, in addition to the user input data received from display 105′ via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth. In this embodiment, receiver 120′ does not need to derive information regarding the selected user type of the user input from the user input data received from display 105′. For example, as described above with regard to FIG. 3C, moderator device 130′ may include overlay image application 139 for processing and analyzing the user input data received from display 105′ through receiver 120′, determining the user type selected from the user input data, and transmitting the data indicative of the selected user type to receiver 120′ via the defined TCP port. Accordingly, receiver 120′ generates an overlay image based on both the user input data and the user type data, generates an overlaid image based on the overlay image, and transmits the overlaid image via a wired connection, e.g., an HDMI cable, to display 105′, thereby reducing latency of collaboration platform 100′. - Referring now to
FIG. 10A, a block diagram of another exemplary embodiment of collaborative platform 100″ in accordance with the principles of the present invention is provided. As shown in FIG. 10A, user input data may be transmitted from display 105″ to receiver 120″ via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth. In addition, user input data and real image(s) may be communicated between receiver 120″ and moderator device 130″ across a wireless connection, e.g., WiFi. Further, the overlaid image may be transmitted from receiver 120″ to display 105″ via a wired connection, e.g., an HDMI cable. As shown in FIG. 10A, data indicative of the user type of the user input may be transmitted from the operating system of moderator device 130″ to receiver 120″ via a modified user input back channel (UIBC) extension. A UIBC extension would generally be used to transmit user input data from the receiver to the moderator device; here, however, the UIBC extension is modified to permit transmission of data from moderator device 130″ to receiver 120″. - Referring now to
FIG. 10B, a sequence diagram for using collaborative platform 100″ depicted in FIG. 10A is provided. As described above with reference to FIGS. 4B and 9B, collaboration platform 100″ of FIG. 10A also runs a collaboration application for displaying a first image based on an original image file stored on moderator device 130″, receiving user input, modifying the original image file stored on moderator device 130″ based on the user input, and displaying a second image based on the modified original image file. - Like
collaboration platform 100 of FIG. 4A, collaboration platform 100″ may run an overlay image generator application for generating an overlay image by receiver 120″ based on the user input provided by the user, generating an overlaid image based on the overlay image, and displaying the overlaid image on the original image on display 105″ to reduce latency of collaboration platform 100″. Collaboration platform 100″ differs from collaboration platform 100 in that receiver 120″ may receive data indicative of the user type directly from the operating system of moderator device 130″ via the UIBC extension described above, in addition to the user input data received from display 105″ via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth. - In this embodiment,
receiver 120″ does not need to derive information regarding the selected user type of the user input from the user input data received from display 105″. For example, as described above with regard to FIG. 3C, operating system 137 of moderator device 130″ may process and analyze the user input data received from display 105″ through receiver 120″, determine the user type selected from the user input data, and transmit the data indicative of the selected user type to receiver 120″ via the UIBC extension. Accordingly, receiver 120″ generates an overlay image based on both the user input data and the user type data, generates an overlaid image based on the overlay image, and transmits the overlaid image via a wired connection, e.g., an HDMI cable, to display 105″ to be displayed over the original image displayed on display 105″, thereby reducing latency of collaboration platform 100″. - Referring now to
FIG. 11, a flowchart is illustrated detailing the data flow and decisions made in implementing the overlaid image generation functionality of receiver 120′ of collaborative platform 100′ or receiver 120″ of collaborative platform 100″. As mentioned above, receiver 120′ of collaborative platform 100′ and receiver 120″ of collaborative platform 100″ may be used to generate an overlay image based on user input, and generate an overlaid image based on the overlay image such that the overlaid image is displayed while a real image is being generated by moderator device 130′, 130″, thereby reducing latency of collaborative platform 100′, 100″. - To initiate the process set forth in
FIG. 11, at step 1101, user input data corresponding to user input is received by receiver 120′, 120″, e.g., via a USB cable and/or a wireless connection such as Bluetooth, from display 105′, 105″. At step 1102, receiver 120′, 120″, running the collaboration application, transmits the user input data to the source of the original image, e.g., moderator device 130′, 130″, for further processing and analysis. For example, moderator device 130′, 130″ may derive data indicative of at least one user type of the user input. Accordingly, at step 1103, user type data is received by receiver 120′, e.g., via a defined TCP port, from an application of moderator device 130′, or by receiver 120″, e.g., via a UIBC extension, from moderator device 130″. - At
step 1104, receiver 120′, 120″ generates an overlay image based on the user input data received at step 1101 as well as the user type data received at step 1103. The overlay image may be generated based on the user input data and a default user type until a new user type is received at step 1103. Preferably, the user type of the user input may be set to preprogrammed default settings, e.g., default color (gray), default thickness (normal), and default marker user type, until subsequently changed by the user. At step 1105, receiver 120′, 120″ receives the real image generated by moderator device 130′, 130″ based on the user input data received from display 105′, 105″. At step 1106, receiver 120′, 120″ generates an overlaid image based on the overlay image and the real image. As described above, the overlaid image may be formed by an overlay image superimposed on the real image, wherein the overlay image includes a first portion representative of the user's actual input received by receiver 120′, 120″, and a second, extended portion, which may be a prediction of the user's intended input based on the user input data received by receiver 120′, 120″. At step 1107, receiver 120′, 120″ transmits the overlaid image to, e.g., display 105′, 105″, to be displayed on the original image, thereby reducing latency of collaborative platform 100′, 100″. - Referring now to
FIG. 12A, a block diagram of another exemplary embodiment of collaborative platform 100′″ in accordance with the principles of the present invention is provided. As shown in FIG. 12A, user input data may be transmitted from display 105′″ to receiver 120′″ via a wired connection, e.g., a USB cable, and/or a wireless connection such as Bluetooth. Receiver 120′″ may be able to perform the functionalities of a moderator device described herein. For example, receiver 120′″ may generate a real image based on the user input data received from display 105′″, and further generate an overlay image including a predicted portion based on the user input data, as well as an overlaid image based on the overlay image and the real image. The overlaid image may be transmitted from receiver 120′″ to display 105′″ via a wired connection, e.g., an HDMI cable. - Referring now to
FIG. 12B, a sequence diagram for using collaborative platform 100′″ depicted in FIG. 12A is provided. Collaboration platform 100′″ of FIG. 12A may run a collaboration application for displaying a first image based on an original image file stored on receiver 120′″, receiving user input, modifying the original image file stored on receiver 120′″ based on the user input, and displaying a second image based on the modified original image file. - Like
collaboration platform 100 of FIG. 4A, collaboration platform 100′″ may run an overlay image generator application for generating an overlay image by receiver 120′″ based on the user input provided by the user, including a predicted portion based on the user input data, generating an overlaid image based on the overlay image, and displaying the overlaid image on the original image on display 105′″ to reduce latency of collaboration platform 100′″. Collaboration platform 100′″ differs from collaboration platform 100 in that receiver 120′″ may function as a moderator device described herein and generate modified real images based on the user input data received from display 105′″, without having to transmit the user input data to an external moderator device. - Accordingly,
receiver 120′″ generates a modified real image based on the user input data and optionally the user type data, generates an overlay image based on the user input data and optionally the user type data, generates an overlaid image based on the overlay image and the real image, and transmits the overlaid image via a wired connection, e.g., an HDMI cable, to display 105′″ to be displayed over the original image displayed on display 105′″, thereby reducing latency of collaboration platform 100′″. - The collaborative platforms described herein for generating overlaid images for display reduce latency attributable to the necessity of transmitting data across a wireless network, e.g., between the receiver and the moderator and member devices. As will be understood by a person having ordinary skill in the art, additional sources of delay include processor and application delays. For example, the computing device for receiving user input, e.g., a touchscreen display, will be limited by its processing time of the user input to generate user input data for transmission to the receiver. In accordance with the principles of the present invention, extrapolation, artificial intelligence, machine learning, and/or neural networks may be implemented to predict user input as the user interacts with the touchscreen, such that the overlay image generator application of the receiver may generate overlaid images based on the predicted user input rather than waiting for the user input data from the touchscreen and/or the moderator device (which may suffer from application delays in processing the user input data), thereby further reducing latency of the collaborative platform.
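For illustration only (and not as part of the claimed subject matter), the extrapolation-based prediction described above may be sketched as follows: the extended portion of a stroke is projected from the velocity of the most recent touch samples. The function name, the (x, y, t) sample format, and the prediction horizon are assumptions made for this sketch, not the patent's implementation; a deployed system might instead use machine learning or a neural network.

```python
# Hypothetical sketch: predict an "extended portion" of a stroke by
# linear extrapolation from the velocity of the last two touch samples.
# Sample format (x, y, t) and the horizon are illustrative assumptions.

def predict_extension(samples, horizon=2):
    """Given (x, y, t) samples, return `horizon` predicted future points,
    spaced at the same interval as the last two samples."""
    (x0, y0, t0), (x1, y1, t1) = samples[-2], samples[-1]
    dt = t1 - t0
    vx = (x1 - x0) / dt  # velocity from spatial and time coordinates
    vy = (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, horizon + 1)]

stroke = [(0.0, 0.0, 0.00), (1.0, 2.0, 0.05), (2.0, 4.0, 0.10)]
predicted = predict_extension(stroke)  # continues the stroke's direction
```

An overlay generator could append the predicted points to the drawn first portion as the extended portion, replacing them once the real input data arrives.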
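The user-type determination described with reference to FIGS. 8A and 8B may likewise be sketched in simplified form. In place of an actual machine-learning model, a plain lookup table associates learned icon regions with user types; all names, coordinates, and heuristics below are illustrative assumptions.

```python
# Illustrative sketch of learning icon regions and applying the selected
# user type to overlay generation (cf. FIGS. 8A-8B). A deployed system
# might use machine learning; here a dict of learned regions stands in.

class UserTypeModel:
    def __init__(self):
        self.regions = {}        # (x0, y0, x1, y1) -> user type
        self.current = "marker"  # preprogrammed default user type

    def learn(self, region, user_type):
        """Associate a screen region with a user type, e.g. after
        observing that taps there are followed by line-drawing drags."""
        self.regions[region] = user_type

    def tap(self, x, y):
        """A tap inside a learned region selects that user type."""
        for (x0, y0, x1, y1), user_type in self.regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                self.current = user_type
        return self.current

def apply_stroke(overlay, points, user_type):
    """Marker strokes add overlay points; eraser strokes remove them."""
    return overlay | set(points) if user_type == "marker" else overlay - set(points)

model = UserTypeModel()
model.learn((0, 0, 50, 50), "marker")    # learned marker icon region
model.learn((60, 0, 110, 50), "eraser")  # learned eraser icon region

overlay = set()
model.tap(25, 25)  # selects the marker user type
overlay = apply_stroke(overlay, [(605, 0), (606, 0), (607, 0)], model.current)
model.tap(85, 25)  # selects the eraser user type
overlay = apply_stroke(overlay, [(607, 0), (606, 0)], model.current)
```

After the eraser stroke, only the unerased portion of the drawn line remains in the overlay, mirroring the FIG. 8B example.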
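In the FIG. 9A embodiment, the moderator device pushes user-type data to the receiver over a defined TCP port. A minimal localhost sketch of such a side channel follows; the OS-assigned port, the JSON payload shape, and the field names are assumptions for illustration only.

```python
# Sketch of the "defined TCP port" side channel of FIG. 9A: the receiver
# listens for user-type updates pushed by the moderator device. The JSON
# payload shape is an illustrative assumption.

import json
import socket
import threading

received = []

def receiver_side(listener):
    conn, _ = listener.accept()
    with conn:
        data = b""
        while chunk := conn.recv(1024):  # read until the sender closes
            data += chunk
        received.append(json.loads(data.decode("utf-8")))

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=receiver_side, args=(listener,))
t.start()

# Moderator-device side: push the currently selected user type.
with socket.create_connection(("127.0.0.1", port)) as conn:
    conn.sendall(json.dumps({"user_type": "eraser", "color": "gray"}).encode("utf-8"))

t.join()
listener.close()
```

The FIG. 10A embodiment would instead frame such data in a modified UIBC extension message, but the receiver-side handling would be analogous.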
- It should be understood that any of the computer operations described herein above may be implemented at least in part as computer-readable instructions stored on a computer-readable memory. It will of course be understood that the embodiments described herein are illustrative, and components may be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are contemplated and fall within the scope of this disclosure.
- The foregoing description of illustrative embodiments has been presented for purposes of illustration and of description. It is not intended to be exhaustive or limiting with respect to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosed embodiments. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.
Claims (20)
1. A method for reducing latency on a collaborative platform, the method comprising:
receiving, by a first device, a first image from a third device;
receiving, by the first device, user input data indicative of user input on a second device;
transmitting, by the first device, the user input data to the third device;
determining, by the first device, an overlay image based on the user input data;
determining, by the first device, an overlaid image based on the overlay image and the first image; and
transmitting, by the first device, the overlaid image to the second device to cause the overlaid image to be displayed on the second device.
2. The method of claim 1 , wherein determining, by the first device, the overlay image comprises determining, by the first device, a first portion of the overlay image indicative of the user input on the second device based on the user input data.
3. The method of claim 2 , wherein determining, by the first device, the overlay image comprises predicting, by the first device, an extended portion of the overlay image based on the user input data.
4. The method of claim 3 , wherein predicting, by the first device, the extended portion of the overlay image based on the user input data comprises predicting, by the first device, the extended portion of the overlay image based on at least one of spatial or time coordinates of the user input data.
5. The method of claim 3 , wherein predicting, by the first device, the extended portion of the overlay image based on the user input data comprises predicting, by the first device, the extended portion of the overlay image based on a velocity of the user input data.
6. The method of claim 3, wherein the extended portion of the overlay image comprises a curved portion comprising a plurality of finite line segments, and wherein predicting, by the first device, the extended portion of the overlay image comprises predicting the curved portion based on an angle of each finite line segment of the plurality of finite line segments.
7. The method of claim 3 , wherein predicting, by the first device, the extended portion of the overlay image based on the user input data comprises predicting, by the first device, the extended portion of the overlay image based on at least one of extrapolation, machine learning, artificial intelligence, or a neural network.
8. The method of claim 3 , wherein determining, by the first device, the overlay image comprises determining, by the first device, the overlay image comprising the first and extended portions of the overlay image.
9. The method of claim 1 , wherein a portion of the overlay image is displayed on the second device for a predetermined period of time.
10. The method of claim 9 , wherein the predetermined period of time is at least as long as the latency on the collaborative platform.
11. The method of claim 1, wherein the overlay image comprises a leading end and a trailing end, such that, as a number of spatial coordinates of the leading end increases on the second device, a portion of spatial coordinates of the trailing end is removed from the second device depending on a length of the latency on the collaborative platform.
12. The method of claim 1 , wherein the overlay image comprises a maximum amount of spatial coordinates, such that, when an additional spatial coordinate is displayed that exceeds the maximum amount of spatial coordinates, an initial displayed spatial coordinate is removed from the overlay image.
13. The method of claim 1 , wherein the overlay image comprises a leading end, a trailing end, and a maximum spatial length, such that, as the leading end extends, the trailing end is removed to maintain the maximum spatial length of the overlay image of the overlaid image displayed on the second device.
14. The method of claim 1 , wherein the overlay image comprises a leading end and a trailing end, such that, as the leading end extends on the second device at a rate, the trailing end is removed from the second device at the rate.
15. The method of claim 1, wherein the overlay image comprises a leading end and a trailing end, such that, as a number of spatial coordinates of the leading end increases on the second device, a portion of spatial coordinates of the trailing end is removed from the second device depending on a speed of the increasing spatial coordinates.
16. The method of claim 1 , further comprising determining, by the first device, an input type corresponding to the user input on the second device, wherein the input type comprises at least one of thickness, color, or marker or eraser type.
17. The method of claim 16 , wherein determining, by the first device, the input type comprises determining, by the first device, the input type based on the user input data and machine learning.
18. The method of claim 1 , further comprising receiving, by the first device, data indicative of an input type corresponding to the user input from the third device.
19. The method of claim 18 , wherein receiving, by the first device, data indicative of the input type comprises receiving, by the first device, data indicative of the input type from an application running on the third device via a defined TCP port.
20. The method of claim 18 , wherein receiving, by the first device, data indicative of the input type comprises receiving, by the first device, data indicative of the input type from an operating system running on the third device via a user input back channel (UIBC) extension.
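For illustration only (and without limiting the claims), claims 11 through 15 describe an overlay whose trailing end is removed as its leading end extends. A minimal sketch of such a bounded overlay follows; the class name, the capacity value, and the use of a deque are assumptions made for this sketch, not the claimed implementation.

```python
# Illustrative sketch of claims 11-15: the overlay holds a maximum
# amount of spatial coordinates; appending to the leading end evicts
# coordinates from the trailing end at the same rate. The capacity of
# three coordinates is an assumed example value.

from collections import deque

class BoundedOverlay:
    def __init__(self, max_coords=3):
        # deque(maxlen=...) drops the trailing end automatically
        self.coords = deque(maxlen=max_coords)

    def extend_leading_end(self, coord):
        self.coords.append(coord)  # may push the oldest coordinate out

overlay = BoundedOverlay(max_coords=3)
for point in [(0, 0), (1, 1), (2, 2), (3, 3)]:
    overlay.extend_leading_end(point)
# (0, 0) has been removed from the trailing end to keep at most 3 coords
```

A latency-dependent variant could instead size the deque from the measured round-trip delay, so the overlay persists at least as long as the platform's latency.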
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/049,243 US20230065331A1 (en) | 2019-11-27 | 2022-10-24 | Methods and systems for reducing latency on collaborative platform |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962941677P | 2019-11-27 | 2019-11-27 | |
US17/105,419 US11483367B2 (en) | 2019-11-27 | 2020-11-25 | Methods and systems for reducing latency on a collaborative platform |
US18/049,243 US20230065331A1 (en) | 2019-11-27 | 2022-10-24 | Methods and systems for reducing latency on collaborative platform |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/105,419 Continuation US11483367B2 (en) | 2019-11-27 | 2020-11-25 | Methods and systems for reducing latency on a collaborative platform |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230065331A1 true US20230065331A1 (en) | 2023-03-02 |
Family
ID=73857256
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/105,419 Active US11483367B2 (en) | 2019-11-27 | 2020-11-25 | Methods and systems for reducing latency on a collaborative platform |
US18/049,243 Abandoned US20230065331A1 (en) | 2019-11-27 | 2022-10-24 | Methods and systems for reducing latency on collaborative platform |
Country Status (7)
Country | Link |
---|---|
US (2) | US11483367B2 (en) |
EP (1) | EP4066509A1 (en) |
JP (1) | JP2023503641A (en) |
CN (1) | CN115516867B (en) |
AU (1) | AU2020393923A1 (en) |
CA (1) | CA3163096A1 (en) |
WO (1) | WO2021108716A1 (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070176893A1 (en) * | 2006-01-31 | 2007-08-02 | Canon Kabushiki Kaisha | Method in an information processing apparatus, information processing apparatus, and computer-readable medium |
US20090027397A1 (en) * | 2007-07-26 | 2009-01-29 | Tufts University | Method for fitting a parametric representation to a set of objects generated by a digital sketching device |
US20140192058A1 (en) * | 2013-01-07 | 2014-07-10 | Yu KODAMA | Image processing apparatus, image processing method, and recording medium storing an image processing program |
US20150089452A1 (en) * | 2012-05-02 | 2015-03-26 | Office For Media And Arts International Gmbh | System and Method for Collaborative Computing |
US20150326654A1 (en) * | 2009-11-02 | 2015-11-12 | Samsung Electronics Co., Ltd. | Method and apparatus for providing user input back channel in audio/video system |
US20170195374A1 (en) * | 2015-12-31 | 2017-07-06 | Actiontec Electronics, Inc. | Displaying content from multiple devices |
US20190121599A1 (en) * | 2017-10-13 | 2019-04-25 | Slack Technologies, Inc. | Method, apparatus, and computer program product for sharing interface annotations among participating devices within a group-based communication system |
US20190155498A1 (en) * | 2013-11-19 | 2019-05-23 | Wacom Co., Ltd. | Method and system for ink data generation, ink data rendering, ink data manipulation and ink data communication |
US20200204606A1 (en) * | 2018-12-20 | 2020-06-25 | Cisco Technology, Inc. | Realtime communication architecture over hybrid icn and realtime information centric transport protocol |
US20210153801A1 (en) * | 2019-11-26 | 2021-05-27 | The Chinese University Of Hong Kong | Methods based on an analysis of drawing behavior changes for cognitive dysfunction screening |
Family Cites Families (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6343313B1 (en) | 1996-03-26 | 2002-01-29 | Pixion, Inc. | Computer conferencing system with real-time multipoint, multi-speed, multi-stream scalability |
US7434166B2 (en) | 2003-06-03 | 2008-10-07 | Harman International Industries Incorporated | Wireless presentation system |
US20050091359A1 (en) | 2003-10-24 | 2005-04-28 | Microsoft Corporation | Systems and methods for projecting content from computing devices |
US20060031779A1 (en) | 2004-04-15 | 2006-02-09 | Citrix Systems, Inc. | Selectively sharing screen data |
US7379968B2 (en) | 2004-06-03 | 2008-05-27 | International Business Machines Corporation | Multiple moderation for networked conferences |
US7895639B2 (en) | 2006-05-04 | 2011-02-22 | Citrix Online, Llc | Methods and systems for specifying and enforcing access control in a distributed system |
US20100257457A1 (en) * | 2009-04-07 | 2010-10-07 | De Goes John A | Real-time content collaboration |
US8904421B2 (en) | 2009-06-30 | 2014-12-02 | At&T Intellectual Property I, L.P. | Shared multimedia experience including user input |
US20110154192A1 (en) | 2009-06-30 | 2011-06-23 | Jinyu Yang | Multimedia Collaboration System |
WO2011014772A1 (en) * | 2009-07-31 | 2011-02-03 | Citizenglobal Inc. | Systems and methods for content aggregation, editing and delivery |
US8892628B2 (en) | 2010-04-01 | 2014-11-18 | Microsoft Corporation | Administrative interface for managing shared resources |
US8909704B2 (en) | 2010-04-29 | 2014-12-09 | Cisco Technology, Inc. | Network-attached display device as an attendee in an online collaborative computing session |
US20120030289A1 (en) * | 2010-07-30 | 2012-02-02 | Avaya Inc. | System and method for multi-model, context-sensitive, real-time collaboration |
WO2012088419A1 (en) | 2010-12-22 | 2012-06-28 | Via Response Technologies, LLC | Educational assessment system and associated methods |
US10739941B2 (en) * | 2011-03-29 | 2020-08-11 | Wevideo, Inc. | Multi-source journal content integration systems and methods and systems and methods for collaborative online content editing |
EP2509277A1 (en) | 2011-04-05 | 2012-10-10 | Research In Motion Limited | System and method for shared binding maintenance |
US9767195B2 (en) | 2011-04-21 | 2017-09-19 | Touchstream Technologies, Inc. | Virtualized hosting and displaying of content using a swappable media player |
US8904289B2 (en) | 2011-04-21 | 2014-12-02 | Touchstream Technologies, Inc. | Play control of content on a display device |
EP2737729A1 (en) | 2011-07-29 | 2014-06-04 | 3M Innovative Properties Company | Wireless presentation system allowing automatic association and connection |
US9774658B2 (en) | 2012-10-12 | 2017-09-26 | Citrix Systems, Inc. | Orchestration framework for connected devices |
US8915441B2 (en) | 2012-10-15 | 2014-12-23 | At&T Intellectual Property I, L.P. | Synchronizing mobile devices and displays |
US9106652B2 (en) | 2012-12-18 | 2015-08-11 | International Business Machines Corporation | Web conference overstay protection |
US9460474B2 (en) | 2013-05-03 | 2016-10-04 | Salesforce.Com, Inc. | Providing access to a private resource in an enterprise social networking system |
KR20140134088A (en) | 2013-05-13 | 2014-11-21 | 삼성전자주식회사 | Method and apparatus for using a electronic device |
US9398059B2 (en) | 2013-11-22 | 2016-07-19 | Dell Products, L.P. | Managing information and content sharing in a virtual collaboration session |
US20150188838A1 (en) | 2013-12-30 | 2015-07-02 | Texas Instruments Incorporated | Disabling Network Connectivity on Student Devices |
US10270819B2 (en) * | 2014-05-14 | 2019-04-23 | Microsoft Technology Licensing, Llc | System and method providing collaborative interaction |
JP6497184B2 (en) | 2014-06-23 | 2019-04-10 | 株式会社リコー | Terminal device, program, content sharing method, and information processing system |
EP3259480A4 (en) * | 2015-02-17 | 2019-02-20 | Dresser-Rand Company | Internally-cooled compressor diaphragm |
JP2016195304A (en) | 2015-03-31 | 2016-11-17 | ブラザー工業株式会社 | Management program, conference management method, and conference management server device |
CN105491414B (en) * | 2015-11-19 | 2017-05-17 | 深圳市鹰硕技术有限公司 | Synchronous display method and device of images |
US11036712B2 (en) * | 2016-01-12 | 2021-06-15 | Microsoft Technology Licensing, Llc. | Latency-reduced document change discovery |
US10063660B1 (en) * | 2018-02-09 | 2018-08-28 | Picmonkey, Llc | Collaborative editing of media in a mixed computing environment |
WO2020046843A1 (en) | 2018-08-25 | 2020-03-05 | Actiontec Electronics, Inc. | Classroom assistance system |
2020

- 2020-11-25: WO application PCT/US2020/062427 filed (published as WO2021108716A1)
- 2020-11-25: CA application CA3163096A filed (CA3163096A1), pending
- 2020-11-25: AU application AU2020393923A filed (AU2020393923A1), pending
- 2020-11-25: EP application EP20828712.8A filed (EP4066509A1), pending
- 2020-11-25: JP application JP2022531417A filed (JP2023503641A), pending
- 2020-11-25: US application US17/105,419 filed (US11483367B2), active
- 2020-11-25: CN application CN202080093360.4A filed (CN115516867B), active

2022

- 2022-10-24: US application US18/049,243 filed (US20230065331A1), abandoned
Also Published As
Publication number | Publication date |
---|---|
CA3163096A1 (en) | 2021-06-03 |
CN115516867B (en) | 2024-04-02 |
US20210160302A1 (en) | 2021-05-27 |
CN115516867A (en) | 2022-12-23 |
AU2020393923A1 (en) | 2022-06-09 |
US11483367B2 (en) | 2022-10-25 |
EP4066509A1 (en) | 2022-10-05 |
WO2021108716A1 (en) | 2021-06-03 |
JP2023503641A (en) | 2023-01-31 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| AS | Assignment | Owner name: SCREENBEAM INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VU, CHUONG;EHLENBERGER, MIKE;LI, WEI;AND OTHERS;SIGNING DATES FROM 20201011 TO 20201124;REEL/FRAME:061520/0720
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION