WO2016154660A1 - Improved systems and methods for sharing physical writing actions - Google Patents


Info

Publication number
WO2016154660A1
Authority
WO
WIPO (PCT)
Prior art keywords
writing
computing device
server
physical
actions
Application number
PCT/AU2016/000107
Other languages
French (fr)
Inventor
Vahid KOLAHDOUZAN
Original Assignee
Inkerz Pty Ltd
Priority claimed from AU2015901117A
Application filed by Inkerz Pty Ltd
Priority to US15/562,380 (US10915288B2)
Priority to AU2016240387A (AU2016240387B2)
Publication of WO2016154660A1
Priority to US17/247,656 (US11614913B2)
Priority to AU2022200055A (AU2022200055B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1454 Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/0304 Detection arrangements using opto-electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00 Electrically-operated educational appliances
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H04L 12/16 Arrangements for providing special services to substations
    • H04L 12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H04L 12/16 Arrangements for providing special services to substations
    • H04L 12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L 12/1827 Network arrangements for conference optimisation or adaptation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H04L 12/16 Arrangements for providing special services to substations
    • H04L 12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L 12/1831 Tracking arrangements for later retrieval, e.g. recording contents, participants activities or behavior, network status
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/048 Indexing scheme relating to G06F 3/048
    • G06F 2203/04803 Split screen, i.e. subdividing the display area or the window area into separate subareas

Definitions

  • the present invention relates generally to improved systems and methods for sharing physical writing actions.
  • a server and computer implemented method for sharing physical writing actions comprising the steps of: detecting, at each of a plurality of computing devices associated with a meeting, one or more physical writing actions being performed on physical writing surfaces; generating writing signals based on the physical writing actions; transmitting the generated writing signals to a server; forwarding, via the server, the writing signals for receipt at the plurality of computing devices associated with the meeting; and each computing device outputting a representation of the physical writing actions.
  • a server implemented method for sharing physical writing actions comprising the steps of: receiving, at a server, generated writing signals associated with a meeting from two or more computing devices, wherein the generated writing signals are associated with physical writing actions captured by the two or more computing devices; and forwarding, from the server to the computing devices, the generated writing signals associated with the meeting to enable each computing device to output a representation of the physical writing actions.
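The server-implemented steps above (receive generated writing signals for a meeting from several computing devices, then forward them so each device can output a representation) can be sketched as a minimal in-memory relay. All names here are illustrative; the claims do not prescribe any particular implementation, transport, or signal format:

```python
from collections import defaultdict


class WritingRelay:
    """Sketch of the claimed server role: signals received for a
    meeting are forwarded to every computing device registered to
    that meeting (illustrative only)."""

    def __init__(self):
        # meeting id -> list of device delivery callbacks
        self._meetings = defaultdict(list)

    def join(self, meeting_id, deliver):
        """Register a computing device's delivery callback for a meeting."""
        self._meetings[meeting_id].append(deliver)

    def receive(self, meeting_id, writing_signal):
        """Forward a generated writing signal to all devices in the
        meeting, so each can output a representation of the action."""
        for deliver in self._meetings[meeting_id]:
            deliver(writing_signal)


# usage: two devices join a meeting; one device's signal reaches both
relay = WritingRelay()
shown_a, shown_b = [], []
relay.join("maths-101", shown_a.append)
relay.join("maths-101", shown_b.append)
relay.receive("maths-101", {"device": "a", "stroke": [(0, 0), (1, 1)]})
```

Note that, as in the claim, the originating device also receives the forwarded signal, which lets every participant (including the writer) render the same representation.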
  • a computer implemented method for detecting a physical writing action comprising the steps of: accessing an image generated by a camera associated with a computing device; analysing the image to detect a first physical writing action; generating a first writing signal based on the analysis; and outputting the first writing signal.
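The detection steps above (access an image, analyse it for a writing action, generate and output a writing signal) could be illustrated as follows. The dark-pixel thresholding used here is an assumption made for the sketch, not the analysis method of the disclosure:

```python
def detect_writing_action(frame, threshold=50):
    """Sketch of the claimed steps: access an image (here a 2D list
    of greyscale values), analyse it for dark 'ink' pixels, and
    generate a writing signal from their coordinates. Returns None
    when no writing action is detected."""
    stroke = [(x, y)
              for y, row in enumerate(frame)
              for x, value in enumerate(row)
              if value < threshold]        # dark pixel = candidate ink
    if not stroke:
        return None                        # no writing action detected
    return {"type": "writing", "points": stroke}


# usage: a 3x3 frame with a single dark pixel at (x=1, y=1)
frame = [[255, 255, 255],
         [255,  10, 255],
         [255, 255, 255]]
signal = detect_writing_action(frame)
```

A real implementation would work on camera frames (e.g. via an imaging library) and track strokes over time, but the access / analyse / generate / output structure is the same.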
  • a computer program product including a computer readable medium having recorded thereon a computer program for implementing any one of the methods described above.
  • a server, computing device or electronic device arranged to implement any one of the methods described above.
  • FIGs. 1A and 1B form a schematic block diagram of a general purpose computer system upon which arrangements described can be practiced;
  • FIG. 2A and 2B collectively form a schematic block diagram representation of an electronic device upon which described arrangements can be practised;
  • FIG. 3A shows a system block diagram according to this disclosure
  • FIG. 3B shows a system block diagram according to this disclosure
  • FIG. 4A shows a system block diagram according to this disclosure
  • Fig. 4B shows a server-client block diagram according to this disclosure
  • Fig. 5 shows a process flow diagram according to this disclosure
  • FIG. 6 shows a process flow diagram according to this disclosure
  • FIG. 7A shows a system block diagram according to this disclosure
  • Fig. 7B shows a system block diagram according to this disclosure
  • FIG. 8 shows a process flow diagram according to this disclosure
  • FIGs. 9A to 9K show a user interface according to this disclosure
  • This disclosure describes methods and systems for combining electronic handwriting detection systems and methods with web conferencing-type systems and methods.
  • the methods and systems disclosed demonstrate the creation of a cloud based collaboration platform with the integration of a digital ink pen into the platform that allows for real-time collaboration, and the exchange of ideas through online remote meetings while users can still use traditional ink and paper (or any other suitable writing medium and surface).
  • the systems and method described allow users to collaborate and interact using their own handwriting.
  • the systems and methods described allow each user to have their own workspace accessible only to them, or to other users that they may select.
  • the systems and methods described allow users to collaborate with one or more other users of the system, such as users that are attending the same meeting space (such as a lecture or classroom, for example).
  • One or more of the users may provide instant feedback to one or more of the other users.
  • the collaboration may be 1-to-1, 1-to-many, few-to-many or many-to-many.
  • the systems and methods described provide the ability to transmit and record the interactions of two or more users together in a shared workspace and to provide those interactions in a real time manner via a live dashboard. Further, various systems and methods are disclosed for tracking writing actions using a camera. These tracking systems and methods may be incorporated into the other systems and methods described herein.
  • the systems and methods described also provide other key features and advantages as described herein.
  • FIGs. 1A and 1B depict a general-purpose computer system 100, upon which the various arrangements described can be practiced.
  • the computer system 100 includes: a computer module 101; input devices such as a keyboard 102, a mouse pointer device 103, a scanner 126, a camera 127, and a microphone 180; and output devices including a printer 115, a display device 114 and loudspeakers 117.
  • An external Modulator-Demodulator (Modem) transceiver device 116 may be used by the computer module 101 for communicating to and from a communications network 120 via a connection 121.
  • the communications network 120 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN.
  • the modem 116 may be a traditional "dial-up" modem.
  • the modem 116 may be a broadband modem.
  • a wireless modem may also be used for wireless connection to the communications network 120.
  • the computer module 101 typically includes at least one processor unit 105, and a memory unit 106.
  • the memory unit 106 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM).
  • the computer module 101 also includes a number of input/output (I/O) interfaces including: an audio-video interface 107 that couples to the video display 114, loudspeakers 117 and microphone 180; an I/O interface 113 that couples to the keyboard 102, mouse 103, scanner 126, camera 127 and optionally a joystick or other human interface device (not illustrated), or a projector; and an interface 108 for the external modem 116 and printer 115.
  • the modem 116 may be incorporated within the computer module 101 , for example within the interface 108.
  • the computer module 101 also has a local network interface 111, which permits coupling of the computer system 100 via a connection 123 to a local-area communications network 122, known as a Local Area Network (LAN).
  • the local communications network 122 may also couple to the wide-area network 120 via a connection 124, which would typically include a so-called "firewall" device or device of similar functionality.
  • the local network interface 111 may comprise an Ethernet circuit card, a Bluetooth™ wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 111.
  • the I/O interfaces 108 and 113 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated).
  • Storage devices 109 are provided and typically include a hard disk drive (HDD) 110.
  • Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used.
  • An optical disk drive 112 is typically provided to act as a non-volatile source of data.
  • Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 100.
  • the components 105 to 113 of the computer module 101 typically communicate via an interconnected bus 104 and in a manner that results in a conventional mode of operation of the computer system 100 known to those in the relevant art.
  • the processor 105 is coupled to the system bus 104 using a connection 118.
  • the memory 106 and optical disk drive 112 are coupled to the system bus 104 by connections 119.
  • Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple Mac™ or alike computer systems.
  • the methods as described herein may be implemented using the computer system 100 wherein the processes of Figs. 5, 6 and 8, to be described, may be implemented as one or more software application programs 133 executable within the computer system 100.
  • the steps of the methods described herein are effected by instructions 131 (see Fig. 1 B) in the software 133 that are carried out within the computer system 100.
  • the software instructions 131 may be formed as one or more code modules, each for performing one or more particular tasks.
  • the software may be stored in a computer readable medium, including the storage devices described below, for example.
  • the software is loaded into the computer system 100 from the computer readable medium, and then executed by the computer system 100.
  • a computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product.
  • the use of the computer program product in the computer system 100 preferably effects an advantageous apparatus for detecting and/or sharing writing actions.
  • the software 133 is typically stored in the HDD 1 10 or the memory 106.
  • the software is loaded into the computer system 100 from a computer readable medium, and executed by the computer system 100.
  • the software 133 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 125 that is read by the optical disk drive 112.
  • a computer readable medium having such software or computer program recorded on it is a computer program product.
  • the use of the computer program product in the computer system 100 preferably effects an apparatus for detecting and/or sharing writing actions.
  • the application programs 133 may be supplied to the user encoded on one or more CD-ROMs 125 and read via the corresponding drive 112, or alternatively may be read by the user from the networks 120 or 122. Still further, the software can also be loaded into the computer system 100 from other computer readable media.
  • Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 100 for execution and/or processing.
  • Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 101.
  • Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 101 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
  • the second part of the application programs 133 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 114.
  • a user of the computer system 100 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s).
  • Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 117 and user voice commands input via the microphone 180.
  • the memory 134 represents a logical aggregation of all the memory modules (including the HDD 109 and semiconductor memory 106) that can be accessed by the computer module 101 in Fig. 1A.
  • a power-on self-test (POST) program 150 executes.
  • the POST program 160 i typically stored in a ROM 149 of the semiconductor memory 106 of Fig. 1A.
  • a hardware device such as the ROM 149 storing software is sometimes referred to as firmware.
  • the POST program 150 examines hardware within the computer module 101 to ensure proper functioning and typically checks the processor 105, the memory 134 (109, 106), and a basic input-output systems software (BIOS) module 151, also typically stored in the ROM 149, for correct operation. Once the POST program 150 has run successfully, the BIOS 151 activates the hard disk drive 110 of Fig. 1A.
  • Activation of the hard disk drive 110 causes a bootstrap loader program 152 that is resident on the hard disk drive 110 to execute via the processor 105.
  • the operating system 153 is a system level application, executable by the processor 105, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
  • the operating system 153 manages the memory 134 (109, 106) to ensure that each process or application running on the computer module 101 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 100 of Fig. 1A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 134 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 100 and how such is used.
  • the processor 105 includes a number of functional modules including a control unit 139, an arithmetic logic unit (ALU) 140, and a local or internal memory 148, sometimes called a cache memory.
  • the cache memory 148 typically includes a number of storage registers 144 - 146 in a register section.
  • One or more internal busses 141 functionally interconnect these functional modules.
  • the processor 105 typically also has one or more interfaces 142 for communicating with external devices via the system bus 104, using a connection 118.
  • the memory 134 is coupled to the bus 104 using a connection 119.
  • the application program 133 includes a sequence of instructions 131 that may include conditional branch and loop instructions. The program 133 may also include data 132 which is used in execution of the program 133.
  • the instructions 131 and the data 132 are stored in memory locations 128, 129, 130 and 135, 136, 137, respectively.
  • a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 130.
  • an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 128 and 129.
  • the processor 105 is given a set of instructions which are executed therein.
  • the processor 105 waits for a subsequent input, to which the processor 105 reacts to by executing another set of instructions.
  • Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 102, 103, data received from an external source across one of the networks 120, 122, data retrieved from one of the storage devices 106, 109 or data retrieved from a storage medium 125 inserted into the corresponding reader 112, all depicted in Fig. 1A.
  • the execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 134.
  • the disclosed writing detection and sharing arrangements use input variables 154, which are stored in the memory 134 in corresponding memory locations 155, 156, 157.
  • the writing detection and sharing arrangements produce output variables 161 , which are stored in the memory 134 in corresponding memory locations 162, 163, 164.
  • Intermediate variables 158 may be stored in memory locations 159, 160, 168 and 167.
  • each fetch, decode, and execute cycle comprises: a fetch operation, which fetches or reads an instruction 131 from a memory location 128, 129, 130; a decode operation in which the control unit 139 determines which instruction has been fetched; and an execute operation in which the control unit 139 and/or the ALU 140 execute the instruction.
  • a further fetch, decode, and execute cycle for the next instruction may be executed.
  • a store cycle may be performed by which the control unit 139 stores or writes a value to a memory location 132.
  • Each step or sub-process in the processes of Figs. 5, 6 and 8 is associated with one or more segments of the program 133 and is performed by the register section 144, 145, 146, the ALU 140, and the control unit 139 in the processor 105 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 133.
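The fetch, decode, and execute cycle described above can be illustrated with a toy interpreter loop. This is purely illustrative of the general mechanism and is not part of the disclosure; the opcodes and memory model are invented for the sketch:

```python
def run(program, memory):
    """Toy fetch-decode-execute loop: each cycle fetches an
    instruction, decodes its opcode, and executes it against memory;
    a STORE ends the cycle with a store operation that writes a
    value back to a memory location."""
    pc = 0
    acc = 0
    while pc < len(program):
        instruction = program[pc]      # fetch the next instruction
        op, operand = instruction      # decode: opcode + operand
        if op == "LOAD":               # execute: read memory into acc
            acc = memory[operand]
        elif op == "ADD":              # execute: add memory to acc
            acc += memory[operand]
        elif op == "STORE":            # store cycle: write acc back
            memory[operand] = acc
        pc += 1
    return memory


# usage: compute x + y and store the result in "out"
mem = {"x": 2, "y": 3, "out": 0}
run([("LOAD", "x"), ("ADD", "y"), ("STORE", "out")], mem)
```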
  • the methods described herein may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub functions of the writing detection and sharing methods.
  • dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
  • Figs. 2A and 2B collectively form a schematic block diagram of a general purpose electronic device 201 including embedded components, upon which the writing detection and/or sharing methods to be described are desirably practiced.
  • the electronic device 201 may be, for example, a mobile phone, a portable media player, virtual reality glasses or a digital camera, in which processing resources are limited. Nevertheless, the methods to be described may also be performed on higher-level devices such as desktop computers, server computers, and other such devices with significantly larger processing resources.
  • the electronic device 201 comprises an embedded controller 202. Accordingly, the electronic device 201 may be referred to as an "embedded device."
  • the controller 202 has a processing unit (or processor) 205 which is bi- directionally coupled to an internal storage module 209.
  • the storage module 209 may be formed from non-volatile semiconductor read only memory (ROM) 260 and semiconductor random access memory (RAM) 270, as seen in Fig. 2B.
  • the RAM 270 may be volatile, nonvolatile or a combination of volatile and non-volatile memory.
  • the electronic device 201 includes a display controller 207, which is connected to a video display 214, such as a liquid crystal display (LCD) panel or the like.
  • the display controller 207 is configured for displaying graphical images on the video display 214 in accordance with instructions received from the embedded controller 202, to which the display controller 207 is connected.
  • the electronic device 201 also includes user input devices 213 which are typically formed by keys, a keypad or like controls. In some implementations, the user input devices 213 may include a touch sensitive panel physically associated with the display 214 to collectively form a touch-screen. Such a touch-screen may thus operate as one form of graphical user interface (GUI) as opposed to a prompt or menu driven GUI typically used with keypad-display combinations.
  • Other forms of user input devices may also be used, such as a microphone (not illustrated) for voice commands or a joystick/thumb wheel (not illustrated) for ease of navigation about menus.
  • the electronic device 201 also comprises a portable memory interface 206, which is coupled to the processor 205 via a connection 219.
  • the portable memory interface 206 allows a complementary portable memory device 225 to be coupled to the electronic device 201 to act as a source or destination of data or to supplement the internal storage module 209. Examples of such interfaces permit coupling with portable memory devices such as Universal Serial Bus (USB) memory devices, Secure Digital (SD) cards, Personal Computer Memory Card International Association (PCMCIA) cards, optical disks and magnetic disks.
  • the electronic device 201 also has a communications interface 208 to permit coupling of the device 201 to a computer or communications network 220 via a connection 221.
  • the connection 221 may be wired or wireless.
  • the connection 221 may be radio frequency or optical.
  • An example of a wired connection includes Ethernet.
  • examples of a wireless connection include Bluetooth™ type local interconnection, Wi-Fi (including protocols based on the standards of the IEEE 802.11 family), Infrared Data Association (IrDA) and the like.
  • the electronic device 201 is configured to perform some special function.
  • the embedded controller 202, possibly in conjunction with further special function components 210, is provided to perform that special function.
  • the components 210 may represent a lens, focus control and image sensor of the camera.
  • the special function component 210 is connected to the embedded controller 202.
  • the device 201 may be a mobile telephone handset.
  • the components 210 may represent those components required for communications in a cellular telephone environment.
  • the special function components 210 may represent a number of encoders and decoders of a type including Joint Photographic Experts Group (JPEG), Moving Picture Experts Group (MPEG), MPEG-1 Audio Layer 3 (MP3), and the like.
  • the methods described hereinafter may be implemented using the embedded controller 202, where the processes of Figs. 5, 6 and 8 may be implemented as one or more software application programs 233 executable within the embedded controller 202.
  • the electronic device 201 of Fig. 2A implements the described methods. In particular, with reference to Fig. 2B, the steps of the described methods are effected by instructions in the software 233 that are carried out within the controller 202.
  • the software instructions may be formed as one or more code modules, each for performing one or more particular tasks.
  • the software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the described methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
  • the software 233 of the embedded controller 202 is typically stored in the non-volatile ROM 260 of the internal storage module 209.
  • the software 233 stored in the ROM 260 can be updated when required from a computer readable medium.
  • the software 233 can be loaded into and executed by the processor 205. In some instances, the processor 205 may execute software instructions that are located in RAM 270. Software instructions may be loaded into the RAM 270 by the processor 205 initiating a copy of one or more code modules from ROM 260 into RAM 270. Alternatively, the software instructions of one or more code modules may be pre-installed in a non-volatile region of RAM 270 by a manufacturer. After one or more code modules have been located in RAM 270, the processor 205 may execute software instructions of the one or more code modules.
  • the application program 233 is typically pre-installed and stored in the ROM 260 by a manufacturer, prior to distribution of the electronic device 201.
  • the application programs 233 may be supplied to the user encoded on one or more CD-ROM (not shown) and read via the portable memory interface 206 of Fig. 2A prior to storage in the internal storage module 209 or in the portable memory 225.
  • the software application program 233 may be read by the processor 205 from the network 220, or loaded into the controller 202 or the portable storage medium 225 from other computer readable media.
  • Computer readable storage media refers to any non-transitory tangible storage medium that participates in providing instructions and/or data to the controller 202 for execution and/or processing.
  • Examples of such storage media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, flash memory, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external to the device 201.
  • Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the device 201 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
  • a computer readable medium having such software or computer program recorded on it is a computer program product.
  • the second part of the application programs 233 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 214 of Fig. 2A.
  • a user of the device 201 and the application programs 233 may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s).
  • Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via loudspeakers (not illustrated) and user voice commands input via the microphone (not illustrated).
  • Fig. 2B illustrates in detail the embedded controller 202 having the processor 205 for executing the application programs 233 and the internal storage 209.
  • the internal storage 209 comprises read only memory (ROM) 260 and random access memory (RAM) 270.
  • the processor 205 is able to execute the application programs 233 stored in one or both of the connected memories 260 and 270.
  • the application program 233 is executed.
  • software permanently stored in the ROM 260 is sometimes referred to as "firmware".
  • Execution of the firmware by the processor 205 may fulfil various functions, including processor management, memory management, device management, storage management and user interface.
  • the processor 205 typically includes a number of functional modules including a control unit (CU) 251, an arithmetic logic unit (ALU) 252, a digital signal processor (DSP) 253 and a local or internal memory comprising a set of registers 254 which typically contain atomic data elements 256, 257, along with internal buffer or cache memory 255. One or more internal buses 259 interconnect these functional modules.
  • the processor 205 typically also has one or more interfaces 258 for communicating with external devices via system bus 281, using a connection 261.
  • the application program 233 includes a sequence of instructions 262 through 263 that may include conditional branch and loop instructions. The program 233 may also include data, which is used in execution of the program 233. This data may be stored as part of the instruction or in a separate location 264 within the ROM 260 or RAM 270.
  • the processor 205 is given a set of instructions, which are executed therein. This set of instructions may be organised into blocks, which perform specific tasks or handle specific events that occur in the electronic device 201. Typically, the application program 233 waits for events and subsequently executes the block of code associated with that event.
  • Events may be triggered in response to input from a user, via the user input devices 213 of Fig. 2A, as detected by the processor 205. Events may also be triggered in response to other sensors and interfaces in the electronic device 201.
  • the execution of a set of the instructions may require numeric variables to be read and modified. Such numeric variables are stored in the RAM 270.
  • the disclosed method uses input variables 271 that are stored in known locations 272, 273 in the memory 270.
  • the input variables 271 are processed to produce output variables 277 that are stored in known locations 278, 279 in the memory 270.
  • Intermediate variables 274 may be stored in additional memory locations in locations 275, 276 of the memory 270. Alternatively, some intermediate variables may only exist in the registers 254 of the processor 205.
  • the execution of a sequence of instructions is achieved in the processor 205 by repeated application of a fetch-execute cycle.
  • the control unit 251 of the processor 205 maintains a register called the program counter, which contains the address in ROM 260 or RAM 270 of the next instruction to be executed.
  • the contents of the memory address indexed by the program counter are loaded into the control unit 251.
  • the instruction thus loaded controls the subsequent operation of the processor 205, causing, for example, data to be loaded from ROM memory 260 into processor registers 254, the contents of a register to be arithmetically combined with the contents of another register, the contents of a register to be written to the location stored in another register and so on.
  • the program counter is updated to point to the next instruction in the system program code. Depending on the instruction just executed this may involve incrementing the address contained in the program counter or loading the program counter with a new address in order to achieve a branch operation.
  • Each step or sub-process in the processes of the methods described below is associated with one or more segments of the application program 233, and is performed by repeated execution of a fetch-execute cycle in the processor 205 or similar programmatic operation of other independent processor blocks in the electronic device 201.
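The fetch-execute cycle described above can be sketched as a minimal interpreter. The three-instruction set (LOAD, ADD, JMP) and the memory layout are hypothetical, for illustration only; they are not the instruction set of any processor named in the text.

```java
// Minimal sketch of the fetch-execute cycle: the program counter indexes
// memory, the fetched instruction is decoded and executed, and the
// counter is then incremented or reloaded (for a branch).
public class FetchExecute {
    public static final int LOAD = 0; // LOAD reg, value
    public static final int ADD  = 1; // ADD  dst, src
    public static final int JMP  = 2; // JMP  addr (branch: reload the PC)
    public static final int HALT = 3;

    // Runs the program and returns the final contents of register 0.
    public static int run(int[] memory) {
        int[] registers = new int[4];
        int pc = 0; // program counter: address of the next instruction
        while (true) {
            int opcode = memory[pc]; // fetch: load the instruction indexed by the PC
            switch (opcode) {
                case LOAD: registers[memory[pc + 1]] = memory[pc + 2]; pc += 3; break;
                case ADD:  registers[memory[pc + 1]] += registers[memory[pc + 2]]; pc += 3; break;
                case JMP:  pc = memory[pc + 1]; break;
                case HALT: return registers[0];
                default:   throw new IllegalStateException("bad opcode " + opcode);
            }
        }
    }

    public static void main(String[] args) {
        // r0 = 2; r1 = 3; r0 += r1; halt  ->  r0 == 5
        int[] program = {LOAD, 0, 2, LOAD, 1, 3, ADD, 0, 1, HALT};
        System.out.println(run(program)); // prints 5
    }
}
```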
  • the herein described systems and methods fill the current gaps in traditional and virtual collaborations.
  • in classrooms only the teacher can create notes, present and record the entire session.
  • an in-person lesson may be replicated using the herein described highly interactive application.
  • This system and method can reduce the time wasted attempting to communicate paradigms using dated web conferencing tools. Students and the teacher can intuitively create and share their own materials utilising new ways of writing and drawing. The teacher can easily access students' activities and provide instant feedback. At the end, all the participants can archive their own personalized recorded videos of their work on their devices.
  • the herein described system and methods may provide a new and innovative technology (as well as build on existing technologies developed) to provide an online marketplace providing high quality online group learning via a real-time educational platform.
  • This provides the capability of creating an online group environment as close as possible to face-to-face group collaborations (such as classrooms or the like).
  • the system can be used in any environment where collaboration is required between two or more users.
  • the herein described system includes a group based collaboration platform combining hardware and software elements to allow for next level collaboration.
  • the system is a learning tool.
  • the hardware components of the pen system include a base unit and a digital ink pen that connects via Bluetooth to a user's computer and the disclosed desktop application.
  • a special driver has been developed for the base unit to enable transfer of the writing signals to the desktop application using Bluetooth.
  • IR (Infra-Red) readers capture the handwriting on paper and transmit the writing signals to the screen simultaneously.
  • Meetings can be set up amongst users of the system through a web based scheduling application. Once meetings are scheduled, the users are notified via email and can log into the system with their credentials to attend a meeting. Users can join at any time, as long as the meeting is live.
  • the platform allows teachers and students, for example, to intuitively create their own material by writing and drawing on their own notepad, with the base unit attached to it. It also allows them to share those notes and drawings in real-time, and remotely. In addition to that, they can create pages on screen and share those instantly.
  • the system provides all users with the ability to record their own notes and the general meeting conversations. All these are saved in the cloud on the server and the system provides users with the ability to access the notes and conversations on-demand.
  • the system emulates a physical classroom.
  • a teacher has the ability to write exercises for the class on a shared notepad.
  • the teacher may then monitor the progress of the students via a live dashboard accessible via the teacher's computing device.
  • the teacher's dashboard on the teacher's computer will show each of the students' live virtual notepad.
  • the teacher may see the status of each worksheet of each student by zooming in on each page.
  • the system provides teachers with the ability to put students' work side by side for comparison.
  • the system and method may also provide instant feedback.
  • the system enables students to "put their hands up" to speak, just like in a standard classroom.
  • the teacher can either communicate to the student's desktop via pre-populated messages or send them a personalised message.
  • the teacher may also use the system to notify a student that they are busy with someone else in class by putting up a virtual sign that can be viewed from the student's desktop.
  • the system and method allows for real-time collaboration. Users may allow several people to speak and write at the same time on the same virtual notepad, using a pen and paper.
  • Students may also instantly chat to their teacher.
  • the system enables a teacher to either chat to individuals or a group.
  • the integrated voice conferencing in the application may be used to discuss topics.
  • the host can allow students to speak one at a time. All voice conferencing can be recorded.
  • the host may control permissions to allow users to share notes, speak one at a time or even illustrate their notes.
  • All users, including the host, may share their notes with anyone in the meeting. They can also duplicate their handwriting for the purpose of annotation and editing.
  • All users may create their own unlimited number of pages that can also be shared with everyone (or a selected few) in the meeting.
  • All users may save their notes and handwriting as PDFs, or they may open multiple PDF documents for viewing and editing within the desktop application.
  • All notes and PDFs may also be saved as one document. Further, notes may be shared via the server with other users.
  • Users may also use a mouse instead of the digital pen, to make notes and annotate documents.
  • the application pen may allow the user to switch between different brushes, thicknesses and colours.
  • FIG. 3A shows a system block diagram according to an embodiment of the present invention.
  • a server 301 is provided which performs methods described herein.
  • the server may be a computing device as described with reference to Figs. 1A and 1B.
  • the server is connected to a network 303.
  • the network 303 may be the internet, for example. Alternatively, the network may be any other suitable network.
  • a first computing device 305A is also connected to the network 303. Again, the first computing device 305A may be a computing device as described with reference to Figs. 1A and 1B.
  • a second computing device 305B is also connected to the network 303. Again, the second computing device may be a computing device as described above with reference to Figs. 1A and 1B.
  • the first and second computing devices may be remote from each other.
  • the first computing device may be operated by a teacher or a student.
  • the second computing device may also be operated by a teacher or a student.
  • the computing devices may be operated by any other suitable entity.
  • the system described with reference to Fig. 3A shows a one-to-one relationship to enable one person to collaborate with another person.
  • the first and second computing devices (305A and 305B) are connected via a Bluetooth connection to a driver of a pen detection system. That is, the first computing device 305A is connected via a Bluetooth connection to an electronic pen driver 307A.
  • the second computing device 305B is connected via a Bluetooth connection to a pen driver 307B.
  • the electronic pen driver 307A emits infrared signals to detect the movement of an electronic pen 309A.
  • the electronic pen driver 307B emits infrared signals to an electronic pen 309B.
  • Each of the electronic pen drivers (307A, 307B) detects the movement of the pens (309A, 309B) to enable the detection of physical writing actions on a physical medium. That is, the electronic pen 309A is used to write words, symbols or images onto a physical medium 311A.
  • the physical medium 311A is a physical writing surface upon which a physical writing action may be performed.
  • the physical medium 311A may be a piece of paper or the like.
  • the physical medium 311B may also be a piece of paper.
  • an electronic pen driver communicates with the electronic pen via a standard infrared process.
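One way such an infrared reader can resolve the pen-tip position is by triangulation, which the module overview later attributes to the electronic pen systems. A minimal sketch, assuming two receivers on a shared baseline that each measure the angle (from the baseline) to the tip; the actual receiver geometry is not specified in the text.

```java
// Sketch of angle-based triangulation of a pen tip. Receiver A sits at
// (0,0), receiver B at (d,0); each measures the angle of its sight line
// to the tip. The tip is the intersection of the two sight lines:
//   y = x*tan(angleA)  and  y = (d - x)*tan(angleB).
public class Triangulate {
    public static double[] tipPosition(double d, double angleA, double angleB) {
        double tA = Math.tan(angleA);
        double tB = Math.tan(angleB);
        double x = d * tB / (tA + tB); // solve x*tA = (d - x)*tB for x
        double y = x * tA;
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        // Symmetric case: both receivers see the tip at 45 degrees on a
        // 10 cm baseline, so the tip sits at (5, 5).
        double[] p = tipPosition(10.0, Math.PI / 4, Math.PI / 4);
        System.out.printf("x=%.2f y=%.2f%n", p[0], p[1]);
    }
}
```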
  • the driver between the electronic pen driver 307A and the computing device 305A has been updated to enable Bluetooth connections to be made.
  • one or more of the electronic pen systems may be replaced with a camera system as described herein.
  • the herein described system and method enables a meeting to be set up via a desktop application running on each of the first and second computing devices (305A and 305B).
  • the meeting is controlled by the server 301. That is, upon joining the meeting at each of the first and second computing devices, the server is arranged to share physical writing actions that are performed by each of the electronic pen systems. That is, the desktop applications running on the first and second computing devices in association with the electronic pen systems are arranged to detect, for a particular meeting, one or more physical writing actions that are being performed on physical writing surfaces.
  • Writing signals are generated at the first and second computing devices via the pen systems.
  • the writing signals generated at the first computing device are forwarded to the server 301 via the network 303. Further, the writing signals generated at the second computing device 305B are also forwarded to the server 301 via the network 303.
  • the server then forwards these writing signals to the other computing device. That is, the server 301 forwards the writing signals received from the first computing device 305A to the second computing device 305B. Further, the server 301 forwards the writing signals generated at the second computing device 305B to the first computing device 305A. Therefore, the writing signals are forwarded by the server for receipt at each of the computing devices associated with the meeting. However, it will be understood that the server may forward all of the generated writing signals to all of the computing devices. Alternatively, the server may be arranged to forward only the writing signals generated by other computing devices to a particular computing device.
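The second forwarding rule, where the server relays each writing signal only to the other devices in the meeting, can be sketched as follows. The class and method names (SignalRelay, forward, inboxes) are illustrative and not taken from the actual server.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the server-side relay: every writing signal received from
// one computing device in a meeting is forwarded to each of the other
// devices in that meeting, but not echoed back to the sender.
public class SignalRelay {
    private final Map<String, List<String>> meetings = new HashMap<>();
    private final Map<String, List<String>> inboxes = new HashMap<>();

    public void join(String meetingId, String deviceId) {
        meetings.computeIfAbsent(meetingId, k -> new ArrayList<>()).add(deviceId);
        inboxes.put(deviceId, new ArrayList<>());
    }

    // Forward a writing signal from `sender` to the other devices in the meeting.
    public void forward(String meetingId, String sender, String writingSignal) {
        for (String device : meetings.getOrDefault(meetingId, List.of())) {
            if (!device.equals(sender)) { // sender already has its own strokes
                inboxes.get(device).add(writingSignal);
            }
        }
    }

    public List<String> received(String deviceId) { return inboxes.get(deviceId); }
}
```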
  • the first computing device may then output a representation of the physical writing actions.
  • the output may be in the form of a display on a connected screen.
  • the output may be any other suitable output such as storing the received signals and the internally generated signals into an internal or external memory.
  • the output may be in the form of forwarding the generated writing signals to a printer.
  • Other suitable outputs are also envisaged.
  • the first computing device may output a representation of the physical writing action generated at the second computing device based on the received writing signals from the server 301. Further, the representation of the physical writing actions also shows the actions performed by the electronic pen system connected to the first computing device. That is, the writing actions of both of the first and second pen systems are represented or output at the computing devices (305A and 305B).
  • Fig. 3B shows a system block diagram according to an alternative example.
  • the server 301 is still connected to the network 303 as described above in relation to Fig. 3A.
  • the first and second computing devices are also still connected to the network and the server 301.
  • further computing devices 305C and others may also be connected to the server via the network 303.
  • Each of the computing devices may be remote from each other.
  • a connection may be made from one computing device to many other computing devices. Further, it will be understood that a few computing devices may be connected to many other computing devices. Further, it will be understood that many computing devices may be connected to many other computing devices. For example, in this scenario, a first computing device may be operated by a teacher in a teaching
  • Figs. 4A and 4B show a system block diagram according to the herein described example.
  • one or more of the electronic pen systems may be replaced with a camera system as described herein.
  • the system is described in the context of an education-orientated, real-time online collaboration application. It will be understood that the system may be used in other environments besides education.
  • the system is designed to make online tutoring lessons easier.
  • this is well suited for interactive and visual lessons such as Maths and Science tutoring, where hand-written notes are necessary for proper communication.
  • the system is designed to be an 'all-in-one' application for a lesson, i.e. the only application required for the lesson. Therefore it provides many other features such as audio communication, text chat functionality and permission & administration capabilities for tutors.
  • Tutors or faculty staff can manage, plan, book and archive meetings using a web control panel.
  • the software of the system may be written in any suitable language. In particular, in this example, the software used was mainly Java.
  • Other components, such as drivers, web panel software and deployment tools (e.g. installers) were developed in a variety of other languages which were mostly C based.
  • the server manages meetings, handles all: connections, passes messages between clients, etc.
  • the client, which connects to the server, makes requests, joins meetings, sends drawings (via the server) etc.
  • in the system there are 3 core elements: the server, the desktop application and the control panel.
  • the control panel is also a client (and is based off the same network client module as the desktop application. See below for more details).
  • Client-server communication occurs over a TCP connection.
  • a specific protocol has been developed for structuring messages, basic logic and some constants, called Steeves Protocol.
  • Drivers for 3rd party hardware are provided as entirely separate applications that run in separate processes to the main application. They communicate with the main application using a very basic network protocol that operates only on the local machine. Communication is over TCP. There are designated ports for each driver that the driver opens and the application connects to. Communication is only one-way. Drivers simply send the state of all pens connected to them constantly, and the application extrapolates active pens and filters out useless/redundant messages.
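Because drivers send pen state constantly, the application must discard messages that carry no new information. A minimal sketch of that client-side filtering; the PenState fields (id, coordinates, pen-down flag) are an assumption for illustration, not the actual driver message format.

```java
import java.util.Objects;

// Sketch of redundant-message filtering: the driver streams pen states
// continuously over its local TCP port, and the application keeps a
// state only when it differs from the last one accepted.
public class PenStateFilter {
    // Hypothetical pen-state message; the real wire format is not specified.
    public record PenState(int penId, int x, int y, boolean down) {}

    private PenState last; // most recent state actually accepted

    // Returns true when the state carries new information and should be kept.
    public boolean accept(PenState state) {
        if (Objects.equals(state, last)) return false; // redundant: drop it
        last = state;
        return true;
    }
}
```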
  • the electronic pen systems use triangulation methods to determine the location of the writing tip.
Module overview
  • Figures 4A and 4B show various modules that are part of the software suite.
  • Figure 4A shows the clients and client control panel connected to the server.
  • Figure 4B shows the desktop connected to the electronic pen systems.
  • VMCLASS Server This is the main server module. It is a standalone Java application. Network IO is built off the VMCLASS project. The server is monolithic: It manages all meetings and handles all connections. TCP (main) connections begin in SteevesProtocolAdapter.java. UDP (audio) connections begin in VelvetProtocolAdapter.java. The actual application starts in VmclassServer.java.
  • Steeves Protocol This module contains objects and basic logic shared between client and server for the main TCP network protocol. Every object that is sent over the wire is a basic Java bean object. Messages are serialised into JSON objects.
  • Velvet Protocol This module contains objects and basic logic shared between client and server for the audio UDP network protocol. Every object that is sent over the wire is manually serialised and de-serialised. Objects are classified by their length.
  • Driver Protocol This module contains objects and basic logic shared between drivers and the application that connects to them.
  • VMCLASS Client Contains connection logic for the client only. Network IO is built off the VMCLASS project. VMCLASS Client provides simple functions for calling RPCs (Remote Procedure Calls) on the server and getting the result. An RPC effectively sends instructions to the server to enable the server to perform a process. If an exception is thrown on the server while processing a request it will be re-thrown on the client, on whatever called the function. Also provides listeners for events (unsolicited messages from the server).
  • Velvet Client Similar to VMCLASS Client, for audio. Network IO is built off the VMCLASS project. Also contains recording functionality (directly connecting to the microphone and processing audio), encoding/decoding functionality and mixing/playback functionality.
  • Email/Simple/Moodle user providers (Java Eclipse Project) These are plugins that authenticate users through various methods and decide permissions for them (tutor/student).
  • Email User Provider is for adding meeting participants by their email address (and letting them login with their email address).
  • Simple User Provider loads users, passwords and permissions from a flat configuration file (the main server config file, prop-properties).
  • Moodle User Provider connects to a Moodle server over HTTP to authenticate users and get their permissions. Both the Moodle and Email plugins run on both control panel and server.
  • the Simple User Provider runs on the server only.
  • Zigma Control Panel This is the control panel client, written as a servlet. Though possible to run on other servlet engines, it is designed to be run on the server. It serves the control panel pages and manages client connections to the server. It performs actions such as deleting, creating and managing meetings on the browser's behalf.
  • Driver Manager This module spawns separate driver processes depending on the current platform (see High Level Architecture) and connects to them. It passes messages from the individual drivers to the main application so they can eventually be rendered.
  • Driver USB Win This is the USB driver that only runs on Windows computers. There are actually two projects to this module: The Java loader, under DriverUSB Win (which saves the native exe file in a temporary folder and loads it) and the actual native application, written in C# and found under UsbPenSupport (which connects to the pen and forwards events over the network connection to the Driver Manager).
  • Driver BT Mac OSX Lin This single module connects to pen hardware over Bluetooth. It is built off the Bluecove library to handle the Bluetooth connections. It interprets the raw byte stream from the Bluetooth receiver, turns them into events and forwards said events over the network connection to the Driver Manager.
  • Installer This contains a number of various scripts, setups and tools to build the application for its various desktop target platforms. (See Build Process.)
  • Steeves Protocol is broken up into two main systems: the RPC system and the event system (built off the RPC system). When used together, they handle everything networked in the system, except audio. Meeting updates, drawings, login information etc. are all handled over Steeves Protocol.
  • every message in Steeves Protocol is a Java Bean. These Beans are serialised as JSON strings, and these strings are sent over a TCP stream, separated by newlines.
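The newline-delimited framing described above can be sketched as follows. The Message bean and the hand-rolled serialisation below are illustrative stand-ins; the real system serialises arbitrary beans with a JSON library, and the field names here are assumptions.

```java
// Sketch of the Steeves Protocol wire format: each message is one JSON
// object on its own line of the TCP stream, so the receiver can split
// the byte stream back into messages on newline boundaries.
public class SteevesFraming {
    // Hypothetical message bean for illustration.
    public record Message(String function, String argument) {}

    // Serialise a message as a single newline-terminated JSON frame.
    public static String toFrame(Message m) {
        return "{\"function\":\"" + m.function() + "\",\"argument\":\"" + m.argument() + "\"}\n";
    }

    // Split a chunk of the TCP stream back into individual JSON messages.
    public static String[] splitFrames(String streamChunk) {
        return streamChunk.split("\n");
    }
}
```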
  • SteevesProtocolAdapter for server
  • VMCLASSClient for client
  • ProtocolHandler This separation is on purpose. That is, there are two levels to handling a message: one level simply parses the message, the other applies state and other information to draw meaning from a message. A similar structure can be seen in the Velvet audio protocol structure.
  • RPC Steeves Protocol
  • Everything is considered an RPC.
  • the main application makes a call to the SteevesAdapter, for example to authenticate with a username and password.
  • the adapter first makes sure it is connected to the server through its VMCLASSClient (if it can't connect it will throw an exception).
  • SteevesAdapter then prepares a message bean to send to the server, fills it, and passes it onto the VMCLASSClient to send (and block until it receives a response).
  • the VMCLASSClient then writes the message (complete and in order) and a newline to the server over the VMCLASS TCP connection.
  • the message is first received by the SteevesProtocolAdapter, which de-serialises the message back into a bean. It then decides whether to send it to the plugin RPC handler for processing or the main protocol handler. If the message is bound for the main protocol handler, the SteevesProtocolAdapter then extracts the function name and parameter from the bean, and calls the function specified in the ProtocolHandler. It then expects a response.
  • the client does a similar thing to the response message once received. It immediately attempts to parse it, and if that fails closes the connection.
  • the response is matched to the request, and the calling function thread continues execution with the right result. If an exception was returned then the exception will be re-thrown in the calling function's thread.
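The blocking call-and-match flow described above can be sketched with a pending-request table: the calling thread registers a future, sends the request, and blocks until the reader thread matches the response to it. The class and method names are illustrative, and the in-process echo stands in for the real TCP connection.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a blocking RPC client: each request gets an id, the caller
// blocks on a future, and the network reader completes the future when
// the matching response arrives.
public class RpcClient {
    private final AtomicInteger nextId = new AtomicInteger();
    private final Map<Integer, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called by the application thread; blocks until the response arrives.
    public String call(String request) {
        int id = nextId.incrementAndGet();
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(id, future);
        send(id, request);   // in the real system: write JSON + newline over TCP
        return future.join(); // block the calling thread until matched
    }

    // Called by the network reader thread when a response frame is parsed.
    public void onResponse(int id, String result) {
        CompletableFuture<String> future = pending.remove(id);
        if (future != null) future.complete(result);
    }

    // Stand-in for the network: echo a response back on another thread.
    private void send(int id, String request) {
        new Thread(() -> onResponse(id, "ok:" + request)).start();
    }
}
```

A server exception would be delivered by completing the future exceptionally, so it re-throws in the calling thread, as the text describes.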
  • Events are built off the RPC system.
  • An event may be considered at least a portion of a physical writing action that has been recorded. It is best to think of an event as an 'unsolicited' RPC. They are response objects without a request, or a request object without expecting a response.
  • the server handles received events like it would most messages.
  • the client has a listener-publisher pattern for dealing with received events.
  • the method is extremely similar to sending RPCs, except the event itself is wrapped up in another bean called a MeetingEvent and sent to a constant handling function (updateMeetingChannel) on the server's ProtocolHandler.
  • the calling client thread does not block like it does for RPCs.
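The client-side listener-publisher pattern for events can be sketched as a small fan-out: the network layer publishes each unsolicited message, and every registered listener receives it, with no blocking of the publishing thread on a response. The EventBus name is illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the listener-publisher pattern for events: unsolicited
// messages from the server are fanned out to registered listeners.
public class EventBus {
    private final List<Consumer<String>> listeners = new ArrayList<>();

    public void addListener(Consumer<String> listener) { listeners.add(listener); }

    // Called by the network layer when an event arrives from the server;
    // unlike an RPC, no response is awaited.
    public void publish(String event) {
        for (Consumer<String> listener : listeners) listener.accept(event);
    }
}
```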
  • Velvet is the name given to the system's audio transmission protocol.
  • the protocol and logic is built off UDP. It has its own implementation for pinging, clock alignment, retransmission, missing packet resolution, packet reordering etc.
  • Clients are authenticated via an integer 'token' that they collect from the server.
  • the main application creates a VelvetClient object that will handle all networking on its end. It also creates an AudioRecorder for collecting audio from the microphone. After the main application receives its audio token from the server, it will initiate a connection by calling connect() in the VelvetClient object.
  • the client then creates a handshake packet (class: AuthRequest) and fills it with various connection parameters and other connection information (such as system time for clock alignment, etc.). It then calls write() in VMCLASS to send the message to the server.
  • the server immediately attempts to parse the message in the VelvetProtocolAdapter.
  • the VelvetProtocolAdapter is a parser, sending and receiving messages in a dumb, stateless way. Management of almost everything audio related happens in the AudioManager.
  • once the message is parsed, it is sent to the AudioManager for processing.
  • the AudioManager looks up the token and matches it with a Meeting and
  • the AudioManager will create an AudioMeeting and AudioConnection for the connection. For every meeting the server is handling, there should be an AudioMeeting that handles audio. For every audio connection the server is handling, there should be an AudioConnection object that handles sending, receiving and buffering of audio on the server. The AudioManager then prepares a response and writes it directly to the client. If the VelvetClient receives a success message, it will begin processing its buffer of audio and begin sending audio to the server.
  • Audio is first collected from the microphone in the AudioCapture class.
  • the main application then encodes and wraps the audio in a VoiceMessage packet and sends it to the VelvetClient via offerVoiceMessage().
  • the client then puts the message in the toSend queue and returns the current thread.
  • the VelvetClient has a thread running that waits for messages in the toSend queue. It receives the message some time later and writes it over the network.
  • the server initially receives the VoiceMessage in the VelvetProtocolAdapter, the main network entry point for audio in the server.
  • the VelvetProtocolAdapter quickly attempts to parse the message, and if successful passes it straight onto the AudioManager.
  • the AudioManager then quickly looks up which AudioMeeting this connection is in, and which AudioConnection is supposed to handle this connection. If none can be found, the server will close/block the connection. If found, the AudioManager passes the message directly to the AudioConnection object.
  • the AudioConnection object then puts it in its buffer/queue for reordering and buffering. It then returns the current thread.
  • when a client receives audio from the server, it performs almost exactly the same behaviour as the server. That is, it puts the audio in a queue, one for each client, and pops the audio from the queues every 20ms for playback on the speaker line.
  • a software audio mixer is also provided that mixes different user's audio.
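The mixing step, combining the frames popped from each client's queue into one playback frame, can be sketched as summing samples with clipping. The 16-bit PCM sample format is an assumption; the text does not specify the audio encoding.

```java
// Sketch of a software audio mixer: one frame per client (popped from
// its queue every 20 ms) is mixed into a single output frame by summing
// corresponding samples and clipping to the 16-bit PCM range.
public class AudioMixer {
    public static short[] mix(short[][] frames) {
        int length = frames[0].length;
        short[] out = new short[length];
        for (int i = 0; i < length; i++) {
            int sum = 0;
            for (short[] frame : frames) sum += frame[i];
            // clip rather than wrap, so overlapping loud speakers distort gracefully
            out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
        }
        return out;
    }
}
```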
  • Fig. 5 shows a flow diagram of a process for connecting one computing device to another computing device for a meeting. The process starts at step 501.
  • the first computing device detects a first writing action. That is, a first physical writing action being performed on a first physical writing surface, such as a portion of a document, is detected at the first computing device. According to this example, the detection is performed in association with an electronic pen system as described above. It will be understood that any other writing detection system may be used.
  • the first computing system generates a first writing signal based on that detected first physical writing action.
  • the first computing device transmits the first writing signal from the first computing device to the server.
  • a second computing device is also detecting a second writing action at step 509. That is, a second computing device detects a second physical writing action that is being performed on a second physical writing surface (e.g. a portion of the document).
  • the second computing device generates a second writing signal based on the detected second physical writing action at step 511.
  • the second computing device transmits the second writing signal from the second computing device to the server.
  • the server receives both the first writing signal from the first computing device and the second writing signal from the second computing device, as described in more detail below.
  • the server forwards the first writing signal to the second computing device and forwards the second writing signal to the first computing device. That is, the first computing device receives the second writing signal from the server at step 515. Further, the second computing device receives the first writing signal from the server at step 521.
  • the first computing device then generates an output, in this example a display, of the first and second writing signals at step 517.
  • the process then ends at step 519 for the first computing device.
  • the second computer generates an output, in this example in the form of a display, of the second and first writing signals at step 523.
  • Fig. 6 shows a flow diagram of a process according to an alternative example where the first computing device is connected to two or more other computing devices for a meeting.
  • one of the computing devices is associated with a host of the meeting (steps 603, 605, 607, 615, 617, 619).
  • one or more of the other computing devices are associated with attendees of the meeting (steps 609, 611, 613, 621, 623, 625).
  • the host may be a teacher and the attendees may be students.
  • the process starts at step 601.
  • the first computing device detects a first writing action. That is, a first physical writing action being performed on a first physical writing surface, such as a portion of a document, is detected at the first computing device. According to this example, the detection is performed in association with an electronic pen system as described above.
  • the first computing system generates a first writing signal based on that detected first physical writing action.
  • the first computing device transmits the first writing signal from the first computing device to the server.
  • a second computing device (attendee) is also detecting a second writing action at step 609. That is, the second computing device detects a second physical writing action that is being performed on a second physical writing surface (e.g. a portion of the document).
  • the second computing device generates a second writing signal based on the detected second physical writing action at step 611.
  • the second computing device transmits the second writing signal from the second computing device to the server.
  • Further computing devices may also be generating writing signals for transmission to the server, as indicated by the dots in Fig 6.
  • the server receives all the writing signals from all of the computing devices (host and attendees), as described in more detail below. Upon receipt, the server forwards the first writing signal (host) to all of the other computing devices (attendees) in that meeting. The server also forwards all of the other computing devices' writing signals (attendees) to the first computing device (host). That is, the first computing device receives all the writing signals for that meeting from the server at step 615. Further, the second computing device (and other computing devices in the meeting) receives the first writing signal from the server at step 621.
  • the first computing device then generates an output, in this example a display, of the first and further writing signals at step 617 according to a number of different options, which are explained with reference to the user interface shown in Figs. 9A to 9K. The process then ends for the first computing device at step 619.
  • an output is generated which in this example is in the form of a display.
  • the output at step 623 is of the second and first writing signals only. That is, the second computing device can only display the writing signals generated by the host's computing device and its own computing device.
  • the process then ends at step 625.
  • a writing action may be multiple strokes, a stroke or partial strokes of the pen.
  • It will be understood, as shown in Fig. 6, that further computing devices may perform the same actions as shown in steps 609, 611, 613, 621 and 623 to allow a one-to-many connection via the server for sharing a meeting. That is, the meeting sharing procedure enables users to share the physical writing actions being performed on a physical medium with other users who are also sharing their physical writing actions on physical mediums.
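The star-shaped routing described above (the host receives every attendee's signals, each attendee receives only the host's signals, and no signal is echoed back to its sender) can be modelled with a short sketch. `route_signals` and the ID values are hypothetical names used for illustration only, not part of the described implementation.

```python
def route_signals(host_id, signals):
    """Decide which writing signals each participant should receive.

    signals: list of (sender_id, signal) tuples received by the server,
    in arrival order.  The host gets all attendee signals; attendees get
    only the host's signals; nobody gets their own signal back.
    """
    participants = {sender for sender, _ in signals}
    outbox = {p: [] for p in participants}
    for sender, signal in signals:
        for p in participants:
            if p == sender:
                continue  # never echo a signal back to its origin
            if p == host_id or sender == host_id:
                outbox[p].append(signal)  # host<->attendee only
    return outbox
```

Note that attendee-to-attendee pairs never match either condition, which is exactly why one attendee's writing is invisible to the others.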
  • the first writing signal generated by the first computing device may be output at the first computing device prior to transmitting the generated first writing signal to the server.
  • the second or further writing signals being generated by the second or further computing devices may be output prior to transmitting the generated second (or further) writing signals to the server.
  • the representation of the physical writing actions being displayed on a screen connected to the computing devices may be shown in relation to a virtual writing space that corresponds with the physical writing surface on which the physical writing action is being performed. That is, the image shown on the screen may represent the piece of paper upon which the user is actually performing the physical writing actions. The user may select, via a dropdown menu, the type of paper upon which they are performing the physical writing actions. Further, the position of the physical writing actions in relation to the physical writing medium may be detected by the computing devices via the pen systems in order to display the representation of the physical writing actions on the virtual writing spaces in a position that corresponds with the position in which the physical writing actions were originally performed on the physical writing surfaces.
  • a first computing device receives all the writing signals of the remaining computing devices and outputs a representation of the physical writing actions based on its own generated writing signal and the received writing signals from the other computing devices. Further, the other computing devices (one or more) receive the writing signals from the first computing device only (via the server) and each outputs a representation of the physical writing actions based on its own generated writing signal and the writing signals received from the first computing device only.
  • the computing devices include a rendering process to render the writing signals on the user interface. Any suitable rendering process may be used.
  • any suitable compression and decompression process may be used when transmitting and receiving various data packets at various points throughout the system.
  • the server may be connected to a publicly accessible network.
  • the server may be located in the cloud to enable any user with a suitable user name and login to access the server functions.
  • the processor forwarding the writing signals from the computing devices involves the step of forwarding to each of the multiple computing devices all the writing signals that have been generated by all of the other computing devices. The writing signal generated by a particular computing device need not be forwarded back to that particular computing device, which avoids duplication: the server is arranged to only send a computing device writing signals that were not generated by that device, while the signals the device generated itself are stored locally and output locally in real time.
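The full-mesh forwarding rule in this paragraph, where every device receives all other devices' signals but never its own, amounts to a simple exclusion filter. A minimal sketch, with `forward_all` as a hypothetical name:

```python
def forward_all(signals):
    """Full-mesh forwarding: every device gets all signals except its own.

    signals: list of (sender_id, signal) tuples in arrival order.
    Returns a mapping of device ID -> signals the server should send it.
    """
    devices = {sender for sender, _ in signals}
    return {d: [sig for sender, sig in signals if sender != d]
            for d in devices}
```

Excluding the originator keeps bandwidth down and avoids each device drawing its own strokes twice, since they are already rendered locally in real time.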
  • the IR and/or ultrasound transmitters may not be located at the tip of the pen but may be a particular distance away, such as 1cm, for example. This can cause an error in the detected position in the order of a 1cm diameter circle. This error may vary between left and right handed people and also between different personal styles of writing.
  • the herein described system and method performed at the computing device provides a calibration option for the users of the system.
  • the calibration option includes the steps of the user first positioning the pen at an angle of 90° to the paper surface. The desktop application running on the computing device is notified of this position by the user.
  • the user selecting a calibration option on the desktop application.
  • the user then subsequently positions the pen in the user's normal writing position at the same location on the paper. This position is then sent to the desktop application.
  • the difference in the location readings is used as a calibration in order to minimize the variable writing error.
  • the error circle may be reduced from 10mm to 2mm.
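The two-reading calibration above can be illustrated with a short sketch: the offset between the 90° reading and the natural-grip reading taken at the same spot is computed once and then applied to every subsequent detected position. The function names and the (x, y) coordinate representation are assumptions for illustration.

```python
def compute_calibration(vertical_reading, natural_reading):
    """Offset between the pen held at 90 degrees and the user's normal grip,
    both sampled at the same location on the paper."""
    return (vertical_reading[0] - natural_reading[0],
            vertical_reading[1] - natural_reading[1])


def apply_calibration(point, offset):
    """Correct a detected position using the stored calibration offset."""
    return (point[0] + offset[0], point[1] + offset[1])
```

Because the offset is specific to one user's grip, it captures handedness and personal writing style, which is why recalibrating per user can shrink the error circle as described.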
  • the server enables the sharing of physical writing actions by receiving generated writing signals that are associated with a meeting from two or more computing devices. These generated writing signals are associated with physical writing actions captured by the two or more computing devices. The server then forwards to the computing devices the generated writing signals associated with the meeting to enable each computing device to output a representation of the physical writing actions.
  • the server receives a first writing signal from a first computing device.
  • the first writing signal is based on a first physical writing action being performed on a first physical writing surface detected at the first computing device.
  • the server also receives a second writing signal from a second computing device.
  • the second writing signal is based on a second physical writing action being performed on a second physical writing surface detected at the second computing device.
  • the server then forwards the first writing signal from the server to the second computing device and forwards the second writing signal from the server to the first computing device. This therefore enables the first computing device and the second computing device to output a representation of both of the detected first and second physical writing actions.
  • the server may transmit to the first computing device all the writing signals of the remaining computing devices to enable the first computing device to output a representation of the physical writing actions based on its own generated writing signal and the received writing signals.
  • the server may transmit to one or more of the plurality of computing devices the writing signals from the first computing device only. This enables the one or more of the plurality of computing devices to output a representation of the physical writing actions based on their own generated writing signal and the writing signals received from the first computing device only.
  • the writing signals may be forwarded by the server by forwarding to each of the plurality of computing devices all writing signals that have been generated by all of the other computing devices. The writing signal generated by a particular computing device may not be forwarded to that particular computing device.
  • the generated writing signals are received by the server in real time.
  • the server may store each of the writing signals at the server for retrieval by the plurality of computing devices after completion of the meeting.
  • the writing signals may be part of a personalised workspace associated with each computing device associated with the meeting.
  • the server may record the physical writing actions that occur in a meeting for a particular user.
  • the server may record all physical writing actions associated with a host of a meeting and attendees of the meeting. Further, the server may send to a computing device associated with the host of the meeting all the recorded physical writing actions.
  • the server may record all physical writing actions associated with a host of a meeting and attendees of the meeting.
  • the server may send to a computing device associated with a first attendee of the meeting a combination of the host's recorded physical writing actions and the physical writing actions of the first attendee, while excluding the physical writing actions of other attendees.
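The personalised recording rule above, where an attendee's copy contains the host's actions plus the attendee's own with everyone else excluded, amounts to a filter over the recorded action stream. A minimal sketch with hypothetical names:

```python
def personalised_recording(actions, host_id, attendee_id):
    """Select the host's and one attendee's recorded writing actions.

    actions: list of (author_id, action) tuples in meeting order.
    Other attendees' actions are excluded from the copy sent to
    attendee_id, preserving the original ordering of what remains.
    """
    return [action for author, action in actions
            if author in (host_id, attendee_id)]
```

Running the same filter once per attendee yields the per-user recordings, while the host's copy simply keeps the full stream.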
  • the server may record personalised audio signals in addition to the physical writing actions associated with a meeting.
  • Fig. 7A shows a system block diagram of a similar system to that shown and described in relation to Fig. 3A.
  • the first computing device 305A is not connected to a standard electronic pen system.
  • the standard electronic pen system of Fig. 3A is replaced with a camera, or an electronic device 701 including or incorporating a camera, as described with reference to Figs. 2A and 2B.
  • a camera incorporated into a computing device as described with reference to Figs. 1A and 1B may be used.
  • the electronic pen 309A is replaced with a standard non-electronic pen 703.
  • the second computing device 305B is using the writing detection system as described with reference to Fig. 3A.
  • Fig. 7B shows a system block diagram of a similar system to that shown and described in relation to Fig. 3B.
  • the first computing device 305A and second computing device 305B are not connected to standard electronic pen systems.
  • the standard electronic pen system of Fig. 3B is replaced with a camera, or an electronic device (701A, 701B) including or incorporating a camera as described with reference to Figs. 2A and 2B, or a computing device incorporating a camera as described with reference to Figs. 1A and 1B.
  • the device may be a smartphone or tablet device with an inbuilt camera.
  • the device may be positioned in or on a stand or base to hold the camera steady.
  • the further computing device 305C is using the writing detection system as described with reference to Fig. 3B.
  • the electronic pens (309A, 309B) are replaced with standard non-electronic pens (703A, 703B).
  • the pens (703A, 703B) may be the same or different. In this example, the third computing device 305C is using a standard electronic pen system as described with reference to Fig. 3B. It will be understood that any other combination of standard electronic pen systems and the herein described camera system for detecting physical writing actions may be connected to the computing devices.
  • this electronic (or computing) camera device is arranged to detect a physical writing action via the lens of the camera. That is, the device incorporates a software algorithm that performs the process of accessing an image generated by the camera associated with an electronic device. It will also be understood that the electronic device may also be a computing device as described with reference to Figs. 1A and 1B where a camera (e.g. a webcam) is connected to that computing device.
  • the electronic (or computing) device also performs the process of analysing the image to detect a first physical writing action.
  • the device generates a first writing signal based on the analysis and then outputs that generated first writing signal.
  • the generated writing signal may be stored either locally in local memory or externally.
  • the generated writing signal may be stored on a connected external memory or transferred to the server. The transfer to the server may be in real time.
  • the device is arranged to detect within the image a plurality of edges of the writing surface upon which the writing actions are being performed.
  • the device defines a boundary of the writing surface based on the detected edges, and then defines a number of distinct areas within the defined boundary.
  • the device then analyses the image to detect a first physical writing action in one or more of the defined areas based on the detected movement of the writing implement being used, and in particular the detected movement of the writing tip of the writing implement.
  • in order to determine whether the writing implement is actually making a mark on the physical writing surface and not just being moved in the air, the device is arranged to only record the movement being tracked if a determination is made that actual writing has occurred.
  • the device is arranged to analyse the image in order to determine whether writing has occurred on a physical writing surface, and, upon a positive determination, analyse the image to detect the writing tip of the writing implement performing the first physical writing action, detect movement of the writing tip in the area previously defined within the boundary, and then generate the first writing signal based on the detected movement. In this way, the detection of physical writing enables the device to accurately record the movement of the writing tip to record the writing action.
  • the camera lens is not merely recording the action of physical writing, but is recording the movement of the writing implement (and in particular its tip) only upon the detection that a physical writing action is occurring. That is, movement of the tip is constantly being monitored, but the recording of a writing action is only made upon detection within the image that a physical writing action is occurring (i.e. a mark is being made).
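The behaviour described here, where the tip is tracked continuously but positions are recorded only while a mark is being made, can be sketched as a stream filter. `extract_strokes` and the (x, y, is_marking) sample format are illustrative assumptions, not the described implementation.

```python
def extract_strokes(samples):
    """Turn a stream of tip observations into recorded strokes.

    samples: list of (x, y, is_marking) tuples, where is_marking is True
    only when the image analysis determines a mark is actually being made.
    The tip is tracked continuously, but positions become stroke points
    only while a physical writing action is detected; lifting the pen
    closes the current stroke.
    """
    strokes, current = [], []
    for x, y, marking in samples:
        if marking:
            current.append((x, y))
        elif current:
            strokes.append(current)  # pen lifted: close the current stroke
            current = []
    if current:
        strokes.append(current)
    return strokes
```

Mid-air movement between strokes is thus observed but discarded, which is what separates pen tracking from writing recording in the description above.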
  • the device is in communication with the computing device 305A and may utilise a database of writing implements either stored on the computing device 305A or in the server 301 to enable detection of the type of writing implement within the image.
  • a comparison of the sub-image of the detected writing implement may be made with images of writing implements in the database.
  • the server 301 or computing device 305A may use the data in the database entry for that writing tip to determine the physical location in space of the writing tip of that writing implement.
  • this writing detection system and method may be used in conjunction with the writing collaboration system also described herein.
  • the writing detection system provides reliable optical/camera based detection.
  • the following steps provide reliable writing detection.
  • the software operates by classifying and analysing the image content for characteristic (pen) features in the consecutive frames using the following steps:
  • Edge detection: this is used to identify points in an image at which the image changes sharply. An edge is a boundary between two regions with relatively distinct properties.
  • Detect/follow identified areas in the image: this allows detection of movement in particular areas within the image.
  • the contour detector combines areas based on spectral clustering.
  • Detect pen objects in the image: first, a classifier is developed. After a classifier is developed, it can be applied to input images collected from the camera. Cascade classifiers may be used as a machine learning method for training the classifier to detect an object in different images; in this case the image of the pen and the tip of the pen.
  • Detect new writing in the image: this allows automatic detection of actual writing actions occurring for the purpose of classifying the pen action as active or non-active. This allows classification of pen movements into write or non-write modes based on whether actual writing has occurred or not.
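As an illustration of the first step only (edge detection as points where the image changes sharply), here is a minimal forward-difference gradient sketch in pure Python. A real implementation would use an image-processing library (e.g. a Canny detector), and the threshold is an assumed parameter rather than a value from the system.

```python
def detect_edges(image, threshold):
    """Mark pixels where intensity changes sharply.

    image: 2D list of grayscale values.  A pixel is an edge when the
    squared magnitude of the forward-difference gradient (to the right
    and below) meets the squared threshold, i.e. it sits on a boundary
    between two regions with distinct properties.
    """
    h, w = len(image), len(image[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = image[y][x + 1] - image[y][x]   # horizontal change
            gy = image[y + 1][x] - image[y][x]   # vertical change
            if gx * gx + gy * gy >= threshold * threshold:
                edges[y][x] = True
    return edges
```

The edge map then feeds the later steps: edges bound the regions whose movement is followed frame to frame.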
  • the system can detect physical writing actions being performed using a writing implement (e.g. a pen) that is writing on any type of surface, such as a piece of paper or a whiteboard, which is visible to the camera.
  • the camera based solution may detect the pen and follow its movement in relation to defined areas within a defined boundary, and subsequently display the written images or text on a display connected to a computing device.
  • the pen can be moved in three-dimensional space and tracked by the camera relative to the areas and boundary based on the images captured by the camera. Corresponding images may be displayed on a display of the computing device.
  • the camera tracks the different positions of the pen while a user is writing or drawing and then either stores the data to be uploaded later or transmits the data simultaneously to a computing device.
  • the computer displays the images and text drawn on the surface.
  • the computer may be located anywhere, as long as it is able to communicate wirelessly with the camera and is able to display the written text.
  • Fig. 8 shows a process flow diagram. The process starts at step 801. At step 803, an image captured by the camera is accessed. At step 805, the image is analysed to detect a physical writing action. At step 807, a writing signal is generated. At step 809, the writing signal is output. The process then ends at step 811.
  • Figs. 9A to 9K show a number of images of a user interface that may be generated at one or more of the computing devices (305).
  • Fig. 9A shows a user interface screen displayed by the desktop application running on the computing device operated by a host of a meeting.
  • the host may be a teacher, for example.
  • the user must enter a user name and password to log in to a session. The user then has the option to host a particular session, such as a "math NSW" session. Alternatively, the user may attend a particular session that is being hosted by someone else.
  • the user may schedule a meeting with another user.
  • FIG. 9B shows a further user interface.
  • This user interface is displayed by a desktop application running on a computing device operated by another user.
  • the user is not a host but is an attendee of a meeting being run by a host.
  • the attendee must enter their username and password to log in to a session.
  • the user may then select a meeting that they have been invited to by a host in order to attend that meeting.
  • An indicator is provided in the dropdown list of available meetings for that particular user to show whether the meeting has already started or not.
  • upon selecting login and the system confirming their login details are correct, the user is then taken to a landing page upon which they can view a canvas of the meeting.
  • the canvas displays all the physical writing actions that have been performed by the host of the meeting. Further, the canvas also shows the physical writing actions that are being performed by the attendee. The physical writing actions of the other attendees are not shown on the attendee's screen.
  • Fig. 9C shows a user interface for the host of the meeting; a large canvas page (virtual writing space) is shown for this session.
  • the page indicates all physical writing actions that have been performed by the host.
  • a window is provided within the user interface indicating all the attendees attending the session.
  • a further window is also provided that enables the host to provide feedback to one or more of the attendees.
  • Fig. 9D shows a further user interface available to the host of the session. This interface indicates all the help information available to the host. For example, help information is provided to show the user that various views may be available to the host, as described with reference to Figs. 9G, 9H and 9I.
  • Fig. 9E shows a further user interface for the host.
  • this particular user interface allows the host to view the page of one particular attendee (Sam). While viewing this page, the host may perform physical writing actions that are detected by the host's computing device and forwarded by the server to the attendee's computing device for display on the attendee's page. That is, the host may perform writing actions that are then displayed on Sam's page.
  • Fig. 9F shows a further user interface available to the host and the attendees. According to this interface, the users may chat with the host to ask questions and provide feedback.
  • Fig. 9G provides a further user interface available to the host.
  • the host may switch between the different views to enable the host to view all of the pages being produced by the individual attendees in one screen. That is, all of the attendees' pages are displayed on the screen available to the host. Further, the host may select an individual's page in order to interact with that user's page.
  • the user interface shown in Fig. 9H shows the host selecting a particular page of a particular user to enable the host to interact with that user via their page.
  • Fig. 9I shows a user interface where the host may select two or more users to view the pages of those users side-by-side in order to make a comparison. That is, the physical writing actions of two or more attendees may be monitored by the host and the host may interact with each of those pages to generate physical writing actions that are then transmitted to those individual attendees.
  • Fig. 9J shows a user interface available to an attendee of the session.
  • the attendee's virtual writing space is displayed here. It shows the attendee's writing actions as well as those of the host. Other attendees' writing actions are not shown. According to this user interface, the attendee may receive instant feedback from the host.
  • Fig. 9K shows a further user interface available to the host (as well as to the attendee) wherein the attendees and hosts may record their session. Sessions that are recorded are personalised sessions in the form of a video that records all the physical writing actions performed by both the host and the attendee. Further, audio signals are also recorded alongside the images of the interaction on the screen.
  • each attendee receives a recording of their own interactions and those with the host. The attendees do not receive copies of other attendees' physical writing actions or audio. That is, any audio signals or voice signals that are generated and forwarded to each of the host and the attendees are personalised for those particular users: each attendee only receives audio that is generated by them or generated by the host, and does not receive any audio from other attendees. Therefore, a personalised voice recording of the meeting is provided to each attendee. Further, the host is able to receive voice recordings from all of the attendees, including themselves.
  • the server may incorporate an algorithm to track which attendee's page the host is viewing and to then link that determination with the ability to enable the host to interact with that attendee via the attendee's page. Also, the server can determine which of multiple pages multiple attendees are viewing in order to assist the host in determining further actions with the attendees. An indication may be provided on the user interface to indicate to the host which of the pages is the active page being viewed by the attendee to assist the host in collaborating and sharing information via that active page.
  • the server may provide the ability to buffer content between the server and client. These buffers may be used to transmit custom page backgrounds (PDF documents, JPG/PNG images etc.).
  • synchronisation between the server and the clients may be provided in any suitable form to enable users to leave and join meetings seamlessly.
  • the system enables a user to leave a meeting by choice or because of failure of communication or one of the other peer components.
  • Recovery and restart is the automated continuation of processing for the meeting after failure or restart.
  • the applications are made aware that recovery has taken place. The recovery allows the applications to continue the meeting process with only a short delay. The recovery process will provide for no loss of data in the meeting history.
  • the live dashboard available to the host enables the teacher to monitor active pages being used by the attendees.
  • the host may monitor the active pages in real time and interact with each attendee or two or more attendees via those active pages.
  • the various algorithms and software elements may be incorporated within a computer readable medium, which has recorded thereon the computer program for implementing any of the methods as described herein.
  • the desktop and the scheduling application may be combined into one web based interface.
  • the herein described systems and methods provide the following advantages: desktop applications are stable and work on the main operating systems; there are no lag or latency issues, providing real-time collaboration in an instant; the resolution of the displayed writing is very high; the server platform is cloud based, making it easy to access and secure; users can write on any notepad they prefer; pre-printed paper is not necessary; recordings of meeting collaborations are stored on the server; the group meetings are scalable; the servers are capable of running multiple meetings concurrently.
  • virtual glasses may be incorporated into the system.
  • the virtual glasses include a camera which is used with the pen writing system to pick up the writing actions of the user.
  • the virtual glasses may provide the user with an immersive experience. For example, a student studying at home may wear the virtual glasses and feel as if they are in a real classroom environment. This may assist in reducing distractions.
  • augmented reality images may be provided to introduce additional information as well as viewing writing actions from the pen system.
  • the glasses may communicate with a local computer system using Bluetooth, or may communicate with a remote computer system via any other medium such as Wi-Fi or the internet.
  • the physical elements of writing may be mixed with the virtual classroom, as well as the augmented information.
  • the remote computer may be a web server that is in communication with the user's computing device.
  • the web server may take control of the camera of the user's computer or electronic device.
  • the web server may then carry out the process of detecting and processing the hand writing.
  • the movement detection of the pen connected to the user's computer is carried out by a software plug-in executed on the user's computer.
  • the movement data is then transferred to the web server for analysis at the server.
  • the system may be used to provide online examinations.
  • the camera may be used to monitor the user taking the exam to ensure that the rules of the exam are followed.
  • the system may incorporate analytical tools to monitor when and for how long a user was using the system. For example, the system may monitor the length of time a user was in a virtual class by analysing when writing actions were being recorded.
  • the server may communicate with a CRM system in order to obtain and utilise corporate information.
  • data may be imported from and shared with the CRM system and associated database to enable access rights to be established.
  • the CRM system may enable student details to be retrieved and imported into the herein described system.
  • the system may support distributing, supervising and the collecting of test results in a secure and reliable manner using a cloud based connected service.
  • tests may be created via the described cloud-based system.
  • Optical Mark Recognition (OMR)
  • identification information for the user may be created automatically. Items and question numbers in the template may be numbered automatically. Written response areas may be created. Different grid layouts may be used to ensure optimal layout of the OMR elements.
  • the tests may be distributed using a secure VPN network. All tests are stored temporarily in an encrypted file system via a controller.
  • the cloud server acts as Certificate Authorit (CA) providing a PKi (public key infrastructure).
  • the PKI consists of i) a separate certificate (also known as a public key) and private key for the server and each client, and ii) a master Certificate Authority (CA) certificate and key that are used to sign each of the server and client certificates.
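The signing relationship described above can be modelled in miniature as follows. This is strictly a toy illustration of the structure (a master CA key signing per-entity certificates): a real PKI of the kind described would use X.509 certificates and asymmetric keys, not the HMAC stand-in used here, and every identifier below is an assumption of the sketch.

```python
import hmac
import hashlib

def ca_sign(master_key, cert_payload):
    """Toy 'CA signature': an HMAC over the certificate payload.
    Stands in for the master CA key signing a certificate."""
    return hmac.new(master_key, cert_payload, hashlib.sha256).hexdigest()

def ca_verify(master_key, cert_payload, signature):
    """Check that a certificate was signed by the master CA key."""
    return hmac.compare_digest(ca_sign(master_key, cert_payload), signature)

ca_key = b"master-ca-secret"            # item ii): the master CA key
server_cert = b"cert:server;pubkey:S"   # item i): per-entity certificate
client_cert = b"cert:client-1;pubkey:C"

server_sig = ca_sign(ca_key, server_cert)
client_sig = ca_sign(ca_key, client_cert)

# Both the server and each client hold a certificate signed by the same CA.
assert ca_verify(ca_key, server_cert, server_sig)
assert ca_verify(ca_key, client_cert, client_sig)
assert not ca_verify(ca_key, b"cert:forged", server_sig)
```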
  • the controller prints the tests out with a laser printer.
  • the forms are then placed in the student unit (i.e. a computing or electronic device that connects to the web server).
  • Identifier codes are included in the forms within the system, so that the system can automatically identify who owns the form and what test it belongs to.
  • the system collects test results automatically.
  • the system captures people's hand marked responses made in the checkboxes.
  • the system also reads a barcode or QR code data for form-page recognition and uses OMR to detect and capture pen marks made in the checkboxes. Capturing of handwritten text and drawings is also supported.
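The OMR step described above, detecting pen marks made in checkboxes, can be sketched as a simple fill-ratio test over a grayscale image. This is a minimal illustration only; the function name, the threshold and the list-of-lists image representation are assumptions, and a production system would first deskew and register the scanned form.

```python
def checkbox_filled(image, box, threshold=0.3):
    """Decide whether a checkbox region contains a pen mark.

    `image` is a 2-D list of pixel intensities (0 = black, 255 = white);
    `box` is (top, left, height, width). The box counts as marked when
    the fraction of dark pixels exceeds `threshold`.
    """
    top, left, h, w = box
    dark = sum(1 for r in range(top, top + h)
                 for c in range(left, left + w)
                 if image[r][c] < 128)
    return dark / (h * w) > threshold

# 4x4 page: a heavily marked 2x2 box (top-left) and an untouched one.
page = [[255] * 4 for _ in range(4)]
page[0][0] = page[0][1] = page[1][0] = 0   # pen strokes in the top-left box
assert checkbox_filled(page, (0, 0, 2, 2))      # 3/4 of pixels dark -> marked
assert not checkbox_filled(page, (2, 2, 2, 2))  # no dark pixels -> clear
```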
  • the forms that cannot be collected automatically can be scanned and stored via the controller.
  • the controller pushes completed test results over the secure VPN connection to the cloud server.
  • the cloud server automatically marks multiple-choice OMR tests with the services provided.
  • the system exports the captured data to a database/spreadsheet or other formats.
  • Tests which are uniquely identified as belonging to a specific respondent will connect the captured data with the respondent's record. Tests which are not bound to a specific respondent will connect with a data table record using a captured ID number, or an automatically generated numeric record will be created in the database.
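The automatic marking and record-binding just described could look like the following sketch. All names and the starting auto-ID value are assumptions made for illustration; the specification does not prescribe any particular data model.

```python
import itertools

_auto_ids = itertools.count(1000)   # assumed starting point for auto-generated records

def mark_test(captured, answer_key):
    """Auto-mark a multiple-choice OMR test: one point per question
    whose captured mark matches the answer key."""
    return sum(1 for q, ans in answer_key.items() if captured.get(q) == ans)

def bind_result(record_table, respondent_id, score):
    """Attach a score to the respondent's record, or create an
    automatically numbered record when no ID was captured."""
    if respondent_id is None:
        respondent_id = next(_auto_ids)
    record_table[respondent_id] = score
    return respondent_id

key = {1: "B", 2: "D", 3: "A"}
score = mark_test({1: "B", 2: "C", 3: "A"}, key)   # 2 of 3 answers correct
table = {}
bind_result(table, "S123", score)   # bound to a known respondent
bind_result(table, None, score)     # unbound: auto-generated numeric record
```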
  • the cloud based management interface may provide one or more of the following functions:
  • test name and template information is stored in the server.
  • An interface may be provided to a customer's IT-system.
  • the system provides a secure, private, reliable service and is able to collect at least 90% of results automatically without the requirement for manual scanning.
  • the controller and Student Units are connected using an encrypted communication link over a local secure Wi-Fi connection.
  • WPA2 mode with device MAC address filtering may also be used.
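The MAC-address filtering mentioned above amounts to an allow-list check at the controller, which can be sketched as follows. The allow-list contents and function name are illustrative assumptions; the actual filtering would be enforced by the Wi-Fi access point, not application code.

```python
# Assumed allow-list of enrolled Student Unit MAC addresses.
ALLOWED_MACS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}

def admit(mac_address):
    """Only devices whose MAC address is enrolled may join the
    controller's WPA2-protected network (case-insensitive match)."""
    return mac_address.lower() in ALLOWED_MACS

assert admit("AA:BB:CC:DD:EE:01")       # enrolled Student Unit
assert not admit("de:ad:be:ef:00:00")   # unknown device is refused
```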
  • the controller provides for:
  • the Student Unit provides for:
  • the cloud server provides functionality for:
  • the cloud server processes the incoming images. When the processor finds a barcode or QR code, the server will detect the ID and look at the ID information in the database, read the form and capture the results from the form.
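The cloud server's image pipeline described in the point above might be structured like this sketch. The record layout, status strings and stand-in barcode field are assumptions; real barcode/QR decoding and form reading are outside the scope of the sketch.

```python
def process_incoming_image(image_record, database):
    """Sketch of the cloud server pipeline: read the form's barcode/QR
    ID, look up the ID information in the database, and capture the
    results for that form."""
    form_id = image_record.get("barcode")   # stands in for real barcode decoding
    if form_id is None:
        return {"status": "needs-manual-review"}
    form_info = database.get(form_id)
    if form_info is None:
        return {"status": "unknown-form", "id": form_id}
    return {"status": "captured",
            "id": form_id,
            "test": form_info["test"],
            "owner": form_info["student"]}

db = {"F-001": {"test": "maths-midterm", "student": "S123"}}
result = process_incoming_image({"barcode": "F-001"}, db)
```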
  • the herein described systems and methods may be used in the following areas: Tutoring Companies locally and globally; Remote Private Tutoring; Distance Education; Remote Language Tutoring; Arts and Design Tutoring; In classroom Learning; Law-firm Document Annotation and Editing; Flipped classrooms; On-line collaborations across multiple users. Further, the herein described systems and methods may be used in mining,

Abstract

A server and computer implemented method for sharing physical writing actions, the method comprising the steps of: detecting, at each of a plurality of computing devices associated with a meeting, one or more physical writing actions being performed on physical writing surfaces; generating writing signals based on the physical writing actions; transmitting the generated writing signals to a server; forwarding, via the server, the writing signals for receipt at the plurality of computing devices associated with the meeting; and each computing device outputting a representation of the physical writing actions.

Description

IMPROVED SYSTEMS AND METHODS FOR SHARING PHYSICAL WRITING ACTIONS
Technical Field
[0001] The present invention relates generally to improved systems and methods for sharing physical writing actions.
Background
[0002] Existing internet collaboration platforms (and in particular those in the education industry) do not allow for use of pen and paper when it comes to collaborating with other users of the system.
[0003] Existing web-conferencing applications (such as WebEx, Adobe Connect etc.) are tools that are not specifically designed to replicate a face-to-face environment.
[0004] Existing writing detection systems in general are specialised systems that rely on Infra-Red and Ultrasound to detect writing movement.
Summary
[0005] It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.
[0006] According to a first aspect of the present disclosure, there is provided a server and computer implemented method for sharing physical writing actions, the method comprising the steps of: detecting, at each of a plurality of computing devices associated with a meeting, one or more physical writing actions being performed on physical writing surfaces; generating writing signals based on the physical writing actions; transmitting the generated writing signals to a server; forwarding, via the server, the writing signals for receipt at the plurality of computing devices associated with the meeting; and each computing device outputting a representation of the physical writing actions.
[0007] According to a second aspect of the present disclosure, there is provided a server implemented method for sharing physical writing actions, the method comprising the steps of: receiving, at a server, generated writing signals associated with a meeting from two or more computing devices, wherein the generated writing signals are associated with physical writing actions captured by the two or more computing devices; and forwarding, from the server to the computing devices, the generated writing signals associated with the meeting to enable each computing device to output a representation of the physical writing actions.
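The server-side forwarding of the second aspect can be pictured with the following in-memory sketch. It is illustrative only: the class name, per-device inboxes and echo-to-sender behaviour are assumptions of the sketch, and a real server would use persistent connections rather than lists.

```python
class MeetingRelay:
    """Sketch of the second aspect: the server receives writing signals
    for a meeting and forwards them to the meeting's computing devices,
    so each device can output a representation of the writing actions."""

    def __init__(self):
        self.meetings = {}   # meeting_id -> {device_id: inbox list}

    def join(self, meeting_id, device_id):
        self.meetings.setdefault(meeting_id, {})[device_id] = []

    def receive(self, meeting_id, sender_id, writing_signal):
        # Forward to every device in the meeting (including the sender,
        # which may simply ignore its own echo).
        for device_id, inbox in self.meetings.get(meeting_id, {}).items():
            inbox.append((sender_id, writing_signal))

relay = MeetingRelay()
relay.join("m1", "tablet-A")
relay.join("m1", "laptop-B")
relay.receive("m1", "tablet-A", "stroke:x=1,y=2")
```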
[0008] According to another aspect of the present disclosure, there is provided a computer implemented method for detecting a physical writing action, the method comprising the steps of: accessing an image generated by a camera associated with a computing device; analysing the image to detect a first physical writing action; generating a first writing signal based on the analysis; and outputting the first writing signal.
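One simple way to analyse camera images for a writing action is frame differencing, sketched below on toy grayscale frames. This is an assumption-laden illustration, not the claimed method: a fuller implementation would locate the pen tip and refine the changed pixels into stroke coordinates before emitting a writing signal.

```python
def detect_writing(prev_frame, frame, threshold=30, min_changed=3):
    """Compare two successive grayscale frames (2-D lists of 0-255
    intensities) and report pixels that changed, a crude proxy for a
    physical writing action appearing on the page."""
    changed = [(r, c)
               for r, row in enumerate(frame)
               for c, px in enumerate(row)
               if abs(px - prev_frame[r][c]) > threshold]
    if len(changed) >= min_changed:
        return {"writing": True, "points": changed}   # the "writing signal"
    return {"writing": False, "points": []}

before = [[200] * 4 for _ in range(4)]
after = [row[:] for row in before]
after[1][1] = after[1][2] = after[2][1] = 40   # dark ink appears on the page
```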
[0009] According to another aspect of the present disclosure, there is provided a computer program product including a computer readable medium having recorded thereon a computer program for implementing any one of the methods described above.
[0010] According to another aspect of the present disclosure, there is also provided a server, computing device or electronic device arranged to implement any one of the methods described above.
[0011] Other aspects are also disclosed.
Brief Description of the Drawings
[0012] At least one embodiment of the present invention will now be described with reference to the drawings and appendices, in which:
[0013] Figs. 1A and 1B form a schematic block diagram of a general purpose computer system upon which arrangements described can be practiced;
[0014] Figs. 2A and 2B collectively form a schematic block diagram representation of an electronic device upon which described arrangements can be practised;
[0015] Fig. 3A shows a system block diagram according to this disclosure;
[0016] Fig. 3B shows a system block diagram according to this disclosure;
[0017] Fig. 4A shows a system block diagram according to this disclosure;
[0018] Fig. 4B shows a server-client block diagram according to this disclosure;
[0019] Fig. 5 shows a process flow diagram according to this disclosure;
[0020] Fig. 6 shows a process flow diagram according to this disclosure;
[0021] Fig. 7A shows a system block diagram according to this disclosure;
[0022] Fig. 7B shows a system block diagram according to this disclosure;
[0023] Fig. 8 shows a process flow diagram according to this disclosure;
[0024] Figs. 9A to 9K show a user interface according to this disclosure.
Detailed Description including Best Mode
[0025] This disclosure describes methods and systems for combining electronic handwriting detection systems and methods with web conferencing type systems and methods. The methods and systems disclosed demonstrate the creation of a cloud based collaboration platform with the integration of a digital ink pen into the platform that allows for real-time collaboration, and the exchange of ideas through online remote meetings while users can still use traditional ink and paper (or any other suitable writing medium and surface). The systems and methods described allow users to collaborate and interact using their own handwriting. The systems and methods described allow each user to have their own workspace accessible only to them, or to other users that they may select. The systems and methods described allow users to collaborate with one or more other users of the system, such as users that are attending the same meeting space (such as a lecture or classroom, for example). One or more of the users may provide instant feedback to one or more of the other users. The collaboration may be 1 to 1, 1 to many, few to many or many to many. The systems and methods described provide the ability to transmit and record the interactions of two or more users together in a shared workspace and to provide those interactions in a real time manner via a live dashboard. Further, various systems and methods are disclosed for tracking writing actions using a camera. These tracking systems and methods may be incorporated into the other systems and methods described herein. The systems and methods described also provide other key features and advantages as described herein.
[0026] Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operations), unless the contrary intention appears.
[0027] It is to be noted that the discussions contained in the "Background" section and that above relating to prior art arrangements relate to discussions of documents or devices which form public knowledge through their respective publication and/or use. Such should not be interpreted as a representation by the present inventor(s) or the patent applicant that such documents or devices in any way form part of the common general knowledge in the art.
[0028] Figs. 1 A and 1 B depict a general-purpose computer system 100, upon which the various arrangements described can be practiced.
[0029] As seen in Fig. 1A, the computer system 100 includes: a computer module 101; input devices such as a keyboard 102, a mouse pointer device 103, a scanner 126, a camera 127, and a microphone 180; and output devices including a printer 115, a display device 114 and loudspeakers 117. An external Modulator-Demodulator (Modem) transceiver device 116 may be used by the computer module 101 for communicating to and from a communications network 120 via a connection 121. The communications network 120 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN. Where the connection 121 is a telephone line, the modem 116 may be a traditional "dial-up" modem. Alternatively, where the connection 121 is a high capacity (e.g., cable) connection, the modem 116 may be a broadband modem. A wireless modem may also be used for wireless connection to the communications network 120.
[0030] The computer module 101 typically includes at least one processor unit 105, and a memory unit 106. For example, the memory unit 106 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 101 also includes a number of input/output (I/O) interfaces including: an audio-video interface 107 that couples to the video display 114, loudspeakers 117 and microphone 180; an I/O interface 113 that couples to the keyboard 102, mouse 103, scanner 126, camera 127 and optionally a joystick or other human interface device (not illustrated), or a projector; and an interface 108 for the external modem 116 and printer 115. In some implementations, the modem 116 may be incorporated within the computer module 101, for example within the interface 108. The computer module 101 also has a local network interface 111, which permits coupling of the computer system 100 via a connection 123 to a local-area communications network 122, known as a Local Area Network (LAN). As illustrated in Fig. 1A, the local communications network 122 may also couple to the wide network 120 via a connection 124, which would typically include a so-called "firewall" device or device of similar functionality. The local network interface 111 may comprise an Ethernet circuit card, a Bluetooth™ wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 111.
[0031] The I/O interfaces 108 and 113 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 109 are provided and typically include a hard disk drive (HDD) 110. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 112 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable, external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 100.
[0032] The components 105 to 113 of the computer module 101 typically communicate via an interconnected bus 104 and in a manner that results in a conventional mode of operation of the computer system 100 known to those in the relevant art. For example, the processor 105 is coupled to the system bus 104 using a connection 118. Likewise, the memory 106 and optical disk drive 112 are coupled to the system bus 104 by connections 119. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple Mac™ or alike computer systems.
[0033] The methods as described herein may be implemented using the computer system 100 wherein the processes of Figs. 5, 6 and 8, to be described, may be implemented as one or more software application programs 133 executable within the computer system 100. In particular, the steps of the methods described herein are effected by instructions 131 (see Fig. 1B) in the software 133 that are carried out within the computer system 100. The software instructions 131 may be formed as one or more code modules, each for performing one or more particular tasks.
[0034] The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 100 from the computer readable medium, and then executed by the computer system 100. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the computer system 100 preferably effects an advantageous apparatus for detecting and/or sharing writing actions.
[0035] The software 133 is typically stored in the HDD 110 or the memory 106. The software is loaded into the computer system 100 from a computer readable medium, and executed by the computer system 100. Thus, for example, the software 133 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 125 that is read by the optical disk drive 112. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer system 100 preferably effects an apparatus for detecting and/or sharing writing actions.
[0036] In some instances, the application programs 133 may be supplied to the user encoded on one or more CD-ROMs 125 and read via the corresponding drive 112, or alternatively may be read by the user from the networks 120 or 122. Still further, the software can also be loaded into the computer system 100 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 100 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 101. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 101 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
[0037] The second part of the application programs 133 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 114. Through manipulation of typically the keyboard 102 and the mouse 103, a user of the computer system 100 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 117 and user voice commands input via the microphone 180.
[0038] Fig. 1B is a detailed schematic block diagram of the processor 105 and a "memory" 134. The memory 134 represents a logical aggregation of all the memory modules (including the HDD 109 and semiconductor memory 106) that can be accessed by the computer module 101 in Fig. 1A.
[0039] When the computer module 101 is initially powered up, a power-on self-test (POST) program 150 executes. The POST program 150 is typically stored in a ROM 149 of the semiconductor memory 106 of Fig. 1A. A hardware device such as the ROM 149 storing software is sometimes referred to as firmware. The POST program 150 examines hardware within the computer module 101 to ensure proper functioning and typically checks the processor 105, the memory 134 (109, 106), and a basic input-output systems software (BIOS) module 151, also typically stored in the ROM 149, for correct operation. Once the POST program 150 has run successfully, the BIOS 151 activates the hard disk drive 110 of Fig. 1A. Activation of the hard disk drive 110 causes a bootstrap loader program 152 that is resident on the hard disk drive 110 to execute via the processor 105. This loads an operating system 153 into the RAM memory 106, upon which the operating system 153 commences operation. The operating system 153 is a system level application, executable by the processor 105, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
[0040] The operating system 153 manages the memory 134 (109, 106) to ensure that each process or application running on the computer module 101 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 100 of Fig. 1A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 134 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 100 and how such is used.
[0041] As shown in Fig. 1B, the processor 105 includes a number of functional modules including a control unit 139, an arithmetic logic unit (ALU) 140, and a local or internal memory 148, sometimes called a cache memory. The cache memory 148 typically includes a number of storage registers 144 - 146 in a register section. One or more internal busses 141 functionally interconnect these functional modules. The processor 105 typically also has one or more interfaces 142 for communicating with external devices via the system bus 104, using a connection 118. The memory 134 is coupled to the bus 104 using a connection 119.
[0042] The application program 133 includes a sequence of instructions 131 that may include conditional branch and loop instructions. The program 133 may also include data 132 which is used in execution of the program 133. The instructions 131 and the data 132 are stored in memory locations 128, 129, 130 and 135, 136, 137, respectively. Depending upon the relative size of the instructions 131 and the memory locations 128-130, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 130. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 128 and 129.
[0043] In general, the processor 105 is given a set of instructions which are executed therein. The processor 105 waits for a subsequent input, to which the processor 105 reacts to by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 102, 103, data received from an external source across one of the networks 120, 122, data retrieved from one of the storage devices 106, 109 or data retrieved from a storage medium 125 inserted into the corresponding reader 112, all depicted in Fig. 1A. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 134.
[0044] The disclosed writing detection and sharing arrangements use input variables 154, which are stored in the memory 134 in corresponding memory locations 155, 156, 157. The writing detection and sharing arrangements produce output variables 161, which are stored in the memory 134 in corresponding memory locations 162, 163, 164. Intermediate variables 158 may be stored in memory locations 159, 160, 166 and 167.
[0045] Referring to the processor 105 of Fig. 1B, the registers 144, 145, 146, the arithmetic logic unit (ALU) 140, and the control unit 139 work together to perform sequences of micro-operations needed to perform "fetch, decode, and execute" cycles for every instruction in the instruction set making up the program 133. Each fetch, decode, and execute cycle comprises:
[0046] a fetch operation, which fetches or reads an instruction 131 from a memory
location 128, 129, 130;
[0047] a decode operation in which the control unit 139 determines which instruction has been fetched; and
[0048] an execute operation in which the control unit 139 and/or the ALU 140 execute the instruction.
[0049] Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 139 stores or writes a value to a memory location 132.
[0050] Each step or sub-process in the processes of Figs. 5, 6 and 8 is associated with one or more segments of the program 133 and is performed by the register section 144, 145, 147, the ALU 140, and the control unit 139 in the processor 105 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 133.
[0051] The methods described herein may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub functions of the writing detection and sharing methods. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
[0052] Figs. 2A and 2B collectively form a schematic block diagram of a general purpose electronic device 201 including embedded components, upon which the writing detection and/or sharing methods to be described are desirably practiced. The electronic device 201 may be, for example, a mobile phone, a portable media player, virtual reality glasses or a digital camera, in which processing resources are limited. Nevertheless, the methods to be described may also be performed on higher-level devices such as desktop computers, server computers, and other such devices with significantly larger processing resources.
[0053] As seen in Fig. 2A, the electronic device 201 comprises an embedded controller 202. Accordingly, the electronic device 201 may be referred to as an "embedded device." In the present example, the controller 202 has a processing unit (or processor) 205 which is bi-directionally coupled to an internal storage module 209. The storage module 209 may be formed from non-volatile semiconductor read only memory (ROM) 260 and semiconductor random access memory (RAM) 270, as seen in Fig. 2B. The RAM 270 may be volatile, non-volatile or a combination of volatile and non-volatile memory.
[0054] The electronic device 201 includes a display controller 207, which is connected to a video display 214, such as a liquid crystal display (LCD) panel or the like. The display controller 207 is configured for displaying graphical images on the video display 214 in accordance with instructions received from the embedded controller 202, to which the display controller 207 is connected.
[0055] The electronic device 201 also includes user input devices 213 which are typically formed by keys, a keypad or like controls. In some implementations, the user input devices 213 may include a touch sensitive panel physically associated with the display 214 to collectively form a touch-screen. Such a touch-screen may thus operate as one form of graphical user interface (GUI) as opposed to a prompt or menu driven GUI typically used with keypad-display combinations. Other forms of user input devices may also be used, such as a microphone (not illustrated) for voice commands or a joystick/thumb wheel (not illustrated) for ease of navigation about menus.
[0056] As seen in Fig. 2A, the electronic device 201 also comprises a portable memory interface 206, which is coupled to the processor 205 via a connection 219. The portable memory interface 206 allows a complementary portable memory device 225 to be coupled to the electronic device 201 to act as a source or destination of data or to supplement the internal storage module 209. Examples of such interfaces permit coupling with portable memory devices such as Universal Serial Bus (USB) memory devices, Secure Digital (SD) cards, Personal Computer Memory Card International Association (PCMCIA) cards, optical disks and magnetic disks.
[0057] The electronic device 201 also has a communications interface 208 to permit coupling of the device 201 to a computer or communications network 220 via a connection 221. The connection 221 may be wired or wireless. For example, the connection 221 may be radio frequency or optical. An example of a wired connection includes Ethernet. Further, an example of wireless connection includes Bluetooth™ type local interconnection, Wi-Fi (including protocols based on the standards of the IEEE 802.11 family), Infrared Data Association (IrDA) and the like.
[0058] Typically, the electronic device 201 is configured to perform some special function. The embedded controller 202, possibly in conjunction with further special function components 210, is provided to perform that special function. For example, where the device 201 is a digital camera, the components 210 may represent a lens, focus control and image sensor of the camera. The special function component 210 is connected to the embedded controller 202. As another example, the device 201 may be a mobile telephone handset. In this instance, the components 210 may represent those components required for communications in a cellular telephone environment. Where the device 201 is a portable device, the special function components 210 may represent a number of encoders and decoders of a type including Joint Photographic Experts Group (JPEG), (Moving Picture Experts Group) MPEG, MPEG-1 Audio Layer 3 (MP3), and the like.
[0059] The methods described hereinafter may be implemented using the embedded controller 202, where the processes of Figs. 5, 6 and 8 may be implemented as one or more software application programs 233 executable within the embedded controller 202. The electronic device 201 of Fig. 2A implements the described methods. In particular, with reference to Fig. 2B, the steps of the described methods are effected by instructions in the software 233 that are carried out within the controller 202. The software instructions may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules performs the described methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
[0060] The software 233 of the embedded controller 202 is typically stored in the non-volatile ROM 260 of the internal storage module 209. The software 233 stored in the ROM 260 can be updated when required from a computer readable medium. The software 233 can be loaded into and executed by the processor 205. In some instances, the processor 205 may execute software instructions that are located in RAM 270. Software instructions may be loaded into the RAM 270 by the processor 205 initiating a copy of one or more code modules from ROM 260 into RAM 270. Alternatively, the software instructions of one or more code modules may be pre-installed in a non-volatile region of RAM 270 by a manufacturer. After one or more code modules have been located in RAM 270, the processor 205 may execute software instructions of the one or more code modules.
[0061] The application program 233 is typically pre-installed and stored in the ROM 260 by a manufacturer, prior to distribution of the electronic device 201. However, in some instances, the application programs 233 may be supplied to the user encoded on one or more CD-ROM (not shown) and read via the portable memory interface 206 of Fig. 2A prior to storage in the internal storage module 209 or in the portable memory 225. In another alternative, the software application program 233 may be read by the processor 205 from the network 220, or loaded into the controller 202 or the portable storage medium 225 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that participates in providing instructions and/or data to the controller 202 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, flash memory, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the device 201. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the device 201 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like. A computer readable medium having such software or computer program recorded on it is a computer program product.
[0082] The second part of the application programs 233 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 214 of Fig. 2A. Through manipulation of the user input device 213 (e.g., the keypad), a user of the device 201 and the application programs 233 may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via loudspeakers (not illustrated) and user voice commands input via the microphone (not illustrated).
[0063] Fig. 2B illustrates in detail the embedded controller 202 having the processor 205 for executing the application programs 233 and the internal storage 209. The internal storage 209 comprises read only memory (ROM) 260 and random access memory (RAM) 270. The processor 205 is able to execute the application programs 233 stored in one or both of the connected memories 260 and 270. When the electronic device 201 is initially powered up, a system program resident in the ROM 260 is executed. The application program 233
permanently stored in the ROM 260 is sometimes referred to as "firmware". Execution of the firmware by the processor 205 may fulfil various functions, including processor management, memory management, device management, storage management and user interface.
[0064] The processor 205 typically includes a number of functional modules including a control unit (CU) 251, an arithmetic logic unit (ALU) 252, a digital signal processor (DSP) 253 and a local or internal memory comprising a set of registers 254 which typically contain atomic data elements 256, 257, along with internal buffer or cache memory 255. One or more internal buses 259 interconnect these functional modules. The processor 205 typically also has one or more interfaces 258 for communicating with external devices via system bus 281, using a connection 261. [0065] The application program 233 includes a sequence of instructions 262 through 263 that may include conditional branch and loop instructions. The program 233 may also include data, which is used in execution of the program 233. This data may be stored as part of the instruction or in a separate location 264 within the ROM 260 or RAM 270.
[0066] In general, the processor 205 is given a set of instructions, which are executed therein. This set of instructions may be organised into blocks, which perform specific tasks or handle specific events that occur in the electronic device 201. Typically, the application program 233 waits for events and subsequently executes the block of code associated with that event.
Events may be triggered in response to input from a user, via the user input devices 213 of Fig. 2A, as detected by the processor 205. Events may also be triggered in response to other sensors and interfaces in the electronic device 201.
[0067] The execution of a set of the instructions may require numeric variables to be read and modified. Such numeric variables are stored in the RAM 270. The disclosed method uses input variables 271 that are stored in known locations 272, 273 in the memory 270. The input variables 271 are processed to produce output variables 277 that are stored in known locations 278, 279 in the memory 270. Intermediate variables 274 may be stored in additional memory locations in locations 275, 276 of the memory 270. Alternatively, some intermediate variables may only exist in the registers 254 of the processor 205.
[0068] The execution of a sequence of instructions is achieved in the processor 205 by repeated application of a fetch-execute cycle. The control unit 251 of the processor 205 maintains a register called the program counter, which contains the address in ROM 260 or RAM 270 of the next instruction to be executed. At the start of the fetch-execute cycle, the contents of the memory address indexed by the program counter are loaded into the control unit 251. The instruction thus loaded controls the subsequent operation of the processor 205, causing, for example, data to be loaded from ROM memory 260 into processor registers 254, the contents of a register to be arithmetically combined with the contents of another register, the contents of a register to be written to the location stored in another register and so on. At the end of the fetch-execute cycle the program counter is updated to point to the next instruction in the system program code. Depending on the instruction just executed, this may involve incrementing the address contained in the program counter or loading the program counter with a new address in order to achieve a branch operation.
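By way of illustration only, the fetch-execute cycle described in paragraph [0068] may be sketched as a toy interpreter. The instruction encoding, register model and opcodes below are illustrative assumptions for the sketch, not the actual instruction set of the processor 205.

```java
// Toy fetch-execute loop: a program counter indexes the next instruction,
// which is fetched and executed, and the counter is then either incremented
// or loaded with a branch target (the branch operation of paragraph [0068]).
public class FetchExecuteSketch {
    // Hypothetical one-operand instruction set (not the real device's ISA).
    static final int LOAD = 0;   // acc = operand
    static final int ADD  = 1;   // acc += operand
    static final int JNZ  = 2;   // if (acc != 0) pc = operand
    static final int HALT = 3;   // stop and return acc

    public static int run(int[][] program) {
        int pc = 0;   // program counter (maintained by the control unit)
        int acc = 0;  // a single accumulator register
        while (true) {
            int[] insn = program[pc];            // fetch the instruction at pc
            int opcode = insn[0], operand = insn[1];
            pc++;                                 // default: advance to next instruction
            switch (opcode) {
                case LOAD: acc = operand; break;
                case ADD:  acc += operand; break;
                case JNZ:  if (acc != 0) pc = operand; break; // branch: reload pc
                case HALT: return acc;
            }
        }
    }

    public static void main(String[] args) {
        // Count down from 3 to 0: LOAD 3; ADD -1; JNZ 1; HALT
        int[][] program = { {LOAD, 3}, {ADD, -1}, {JNZ, 1}, {HALT, 0} };
        System.out.println(run(program)); // prints 0
    }
}
```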
[0069] Each step or sub-process in the processes of the methods described below is associated with one or more segments of the application program 233, and is performed by repeated execution of a fetch-execute cycle in the processor 205 or similar programmatic operation of other independent processor blocks in the electronic device 201.
[0070] The herein described systems and methods fill the current gaps in traditional and virtual collaborations. According to one example, in classrooms only the teacher can create notes, present and record the entire session. According to one example, an in-person lesson may be replicated using the herein described highly interactive application. This system and method can reduce the time spent attempting to communicate concepts using dated web conferencing tools. Students and the teacher can intuitively create and share their own materials utilising new ways of writing and drawing. The teacher can easily access students' activities and provide instant feedback. At the end, all the participants can archive their own personalised recorded videos of their work on their device.
[0071] According to one example, the herein described system and methods may provide a new and innovative technology (as well as build on existing technologies developed) to provide an online marketplace providing high quality online group learning via a real-time educational platform. This provides the capability of creating an online group environment as close as possible to face-to-face group collaborations (such as classrooms or the like).
[0072] It will be understood that the system may be used in environments other than teaching environments. Indeed, the system can be used in any environment where collaboration is required between two or more users.
[0073] In general, the herein described system includes a group based collaboration platform combining hardware and software elements to allow for next level collaboration. According to one particular example, the system is a learning tool.
Desktop Application
[0074] This works across several operating systems (Mac, Windows and Linux) and can be integrated into existing Learning Management Systems.
Digital Ink Pen
[0075] The hardware components of the pen system include a base unit and a digital ink pen that connects via Bluetooth to a user's computer and the disclosed desktop application. A special driver has been developed for the base unit to enable transfer of the writing signals to the desktop application using Bluetooth. IR (Infra-Red) readers capture the handwriting on paper and transmit the writing signals to the screen simultaneously.
Group Meetings
[0076] Users can connect to a host-meeting remotely via a desktop application. Meetings can be set up amongst users of the system through a web based scheduling application. Once meetings are scheduled, the users are notified via email and can log into the system with their credentials to attend a meeting. Users can join at any time, as long as the meeting is live.
Virtual Notepad
[0077] The platform allows teachers and students, for example, to intuitively create their own material by writing and drawing on their own notepad, with the base unit attached to it. It also allows them to share those notes and drawings in real-time, and remotely. In addition to that, they can create pages on screen and share those instantly.
Delivers a Personal Recording
[0078] The system provides all users with the ability to record their own notes and the general meeting conversations. All these are saved in the cloud on the server and the system provides users with the ability to access the notes and conversations on-demand.
[0079] According to one example, the system emulates a physical classroom. A teacher has the ability to write exercises for the class on a shared notepad. The teacher may then monitor the progress of the students via a live dashboard accessible via the teacher's computing device. The teacher's dashboard on the teacher's computer will show each of the students' live virtual notepad. The teacher may see the status of each worksheet of each student by zooming in on each page. The system provides teachers with the ability to put students' work side by side for comparison.
[0080] The system and method may also provide instant feedback. For example, the system enables students to "put their hands up" to speak, just like in a standard classroom. The teacher may either communicate to the student's desktop via pre-populated messages or send them a personalised message. The teacher may also use the system to notify a student that they are busy with someone else in class by putting up a virtual sign that can be viewed from the student's desktop. [0081] The system and method allows for real-time collaboration. Users may allow several people to speak and write at the same time on the same virtual notepad, using a pen and paper.
[0082] Students may also instantly chat to their teacher. The system enables a teacher to either chat to individuals or a group.
[0083] The integrated voice conferencing in the application may be used to discuss topics. The host can allow students to speak one at a time. All voice conferencing can be recorded.
[0084] The host may control permissions to allow users to share notes, speak one at a time or even illustrate their notes.
[0085] All users, including the host, may share their notes with anyone in the meeting. They can also duplicate their handwriting for the purpose of annotation and editing.
[0086] All users may create their own unlimited number of pages that can also be shared with everyone (or a selected few) in the meeting.
[0087] All notes, including PDFs can be edited and annotated using the system. Users can use the on screen tools to highlight text and mark documents and notes.
[0088] All users may save their notes and handwriting as PDFs, or they may open multiple PDF documents for viewing and editing within the desktop application. All notes and PDFs may also be saved as one document. Further, notes may be shared via the server with other users.
[0089] Users may also use a mouse instead of the digital pen to make notes and annotate documents. The application pen may allow the user to switch between different brushes, thicknesses and colours.
[0090] Fig. 3A shows a system block diagram according to an embodiment of the present invention. A server 301 is provided which performs methods described herein. The server may be a computing device as described with reference to Figs. 1A and 1B. The server is connected to a network 303. The network 303 may be the internet, for example. Alternatively, the network may be any other suitable network.
[0091] A first computing device 305A is also connected to the network 303. Again, the first computing device 305A may be a computing device as described with reference to Figs. 1A and 1B. [0092] A second computing device 305B is also connected to the network 303. Again, the second computing device may be a computing device as described above with reference to Figs. 1A and 1B.
[0093] The first and second computing devices may be remote from each other.
[0094] According to one example, the first computing device may be operated by a teacher or a student. The second computing device may also be operated by a teacher or a student. Alternatively, it will be understood that the computing devices may be operated by any other suitable entity. The system described with reference to Fig. 3A shows a one-to-one relationship to enable one person to collaborate with another person. The first and second computing devices (305A and 305B) are connected via a Bluetooth connection to a driver of a pen detection system. That is, the first computing device 305A is connected via a Bluetooth connection to an electronic pen driver 307A. The second computing device 305B is connected via a Bluetooth connection to a pen driver 307B. The electronic pen driver 307A emits infrared signals to detect the movement of an electronic pen 309A. The electronic pen driver 307B emits infrared signals to an electronic pen 309B. Each of the electronic pen drivers (307A, 307B) detects the movement of the pens (309A, 309B) to enable the detection of physical writing actions on a physical medium. That is, the electronic pen 309A is used to write words, symbols or images onto a physical medium 311A. That is, the physical medium 311A is a physical writing surface upon which a physical writing action may be performed. For example, the physical medium 311A may be a piece of paper or the like. Accordingly, the physical medium 311B may also be a piece of paper.
[0095] According to this example, an electronic pen driver communicates with the electronic pen via a standard infrared process. However, according to the herein disclosure, the driver between the electronic pen driver 307A and the computing device 305A has been updated to enable Bluetooth connections to be made.
[0096] The same type of Bluetooth connection has also been enabled on the electronic pen driver 307B.
[0097] As an alternative, one or more of the electronic pen systems may be replaced with a camera system as described herein.
[0098] As will be explained in more detail below, the herein described system and method enables a meeting to be set up via a desktop application running on each of the first and second computing devices (305A and 305B). The meeting is controlled by the server 301. That is, upon joining the meeting at each of the first and second computing devices, the server is arranged to share physical writing actions that are performed by each of the electronic pen systems. That is, the desktop applications running on the first and second computing devices in association with the electronic pen systems are arranged to detect for a particular meeting one or more physical writing actions that are being performed on physical writing surfaces. Writing signals are generated at the first and second computing devices via the pen systems. The writing signals generated at the first computing device are forwarded to the server 301 via the network 303. Further, the writing signals generated at the second computing device 305B are also forwarded to the server 301 by the network 303.
[0099] The server then forwards these writing signals to the other computing device. That is, the server 301 forwards the writing signals received from the first computing device 305A to the second computing device 305B. Further, the server 301 forwards the writing signals generated at the second computing device 305B to the first computing device 305A. Therefore, the writing signals are forwarded by the server for receipt at each of the computing devices associated with the meeting. However, it will be understood that the server may forward all of the generated writing signals to all of the computing devices. Alternatively, the server may be arranged to forward only the writing signals generated by other computing devices to a particular computing device.
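The forwarding rule of paragraphs [0098] and [0099] may be sketched as follows. This is a minimal in-memory illustration only: the class and method names are assumptions, and the real server performs this relay over TCP connections as described in the Steeves Protocol sections.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the server-side relay: writing signals received from one
// computing device in a meeting are forwarded to every other computing
// device in the same meeting, but not echoed back to the originator.
public class WritingSignalRelay {
    private final List<String> devices = new ArrayList<>();
    private final Map<String, List<String>> deliveredTo = new HashMap<>();

    // A computing device joins the meeting controlled by the server.
    public void join(String deviceId) {
        devices.add(deviceId);
        deliveredTo.put(deviceId, new ArrayList<>());
    }

    // Forward a writing signal from its originator to all other devices.
    public void relay(String fromDevice, String writingSignal) {
        for (String device : devices) {
            if (!device.equals(fromDevice)) {
                deliveredTo.get(device).add(writingSignal);
            }
        }
    }

    public List<String> received(String deviceId) {
        return deliveredTo.get(deviceId);
    }

    public static void main(String[] args) {
        WritingSignalRelay server = new WritingSignalRelay();
        server.join("305A");
        server.join("305B");
        server.relay("305A", "stroke(x=1,y=2)");
        System.out.println(server.received("305B")); // the other device receives the stroke
        System.out.println(server.received("305A")); // the sender gets nothing back
    }
}
```

The same rule generalises to the one-to-many arrangement of Fig. 3B: each additional device that joins simply becomes another forwarding target.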
[00100] The first computing device may then output a representation of the physical writing actions. For example, the output may be in the form of a display on a connected screen.
Alternatively, the output may be any other suitable output such as storing the received signals and the internally generated signals into an internal or external memory. Alternatively, the output may be in the form of forwarding the generated writing signals to a printer. Other suitable outputs are also envisaged.
[00101] It will be understood that the first computing device may output a representation of the physical writing action generated at the second computing device based on the received writing signals from the server 301. Further, the representation of the physical writing actions also shows the actions performed by the electronic pen system connected to the first computing device. That is, the writing actions of both of the first and second pen systems are represented or output at the computing devices (305A and 305B).
[00102] Fig. 3B shows a system block diagram according to an alternative example. As shown in Fig. 3B, the server 301 is still connected to the network 303 as described above in relation to Fig. 3A. Further, the first and second computing devices are also still connected to the network and the server 301. In addition however, further computing devices 305C and others (represented by the dots) may also be connected to the server via the network 303.
[00103] Each of the computing devices may be remote from each other.
[00104] In this scenario, it can be seen that a connection may be made from one computing device to many other computing devices. Further, it will be understood that a few computing devices may be connected to many other computing devices. Further, it will be understood that many computing devices may be connected to many other computing devices. For example, in this scenario, a first computing device may be operated by a teacher in a teaching establishment. Other computing devices, being two or more computing devices, may be operated by students of that same teaching establishment. Therefore, the first computing device 305A may be the host of a meeting (e.g., a lesson) that may be followed by the other computing devices connected to that same meeting. Figs. 4A and 4B show a system block diagram according to the herein described example.
[00105] As an alternative, one or more of the electronic pen systems may be replaced with a camera system as described herein.
[00106] A high level overview of the system and process is now described.
[00107] According to this particular description, the system is described in the context of an education-orientated, real-time online collaboration application. It will be understood that the system may be used in other environments besides education.
[00108] According to this example, the system is designed to make online tutoring lessons easier. In particular, this is well suited for interactive and visual lessons such as Maths and Science tutoring, where hand-written notes are necessary for proper communication.
[00109] The system is designed to be an 'all-in-one' application for a lesson, i.e. the only application required for the lesson. Therefore, it provides many other features such as audio communication, text chat functionality and permission & administration capabilities for tutors.
[00110] Some of the features the system provides include:
• Networked & shared virtual 'pads' that multiple users see exactly the same way, and can draw on and/or edit together.
• Support for drawing on real paper and virtual pads using a digital pen system. Alternatively, a digital writing system using a camera may also be utilized.
• The ability to communicate using text chat, audio & video chat (if a microphone is available) and basic 'raise hand / attention' notifications.
• The ability to save & load PDF documents which are all fully networked.
• The ability to download meeting recordings and play them back as if the user was in the meeting again.
• Tutors or faculty staff can manage, plan, book and archive meetings using a web control panel.
• A fully functioning plugin system on both client and server that may provide integration with other learning software.
• Integration with learning management software (LMS), e.g. Moodle, to provide user accounts, permissions and automatic meeting booking.
[00111] These features also include:
• Emulation of a physical classroom.
• A monitoring dashboard.
• A highly intuitive & interactive whiteboard.
• Instant feedback.
• Real-time collaboration.
• Instant chat.
• The ability to share handwriting that is physically performed on paper.
• Voice and/or video conferencing.
• Host permissions.
• Multiple attendees.
• Personal recordings.
• Bluetooth or USB connection to the electronic pen system.
• Attendees page creation.
• Creation of own personalised recordings.
• In-page editing & annotation.
• PDF file management.
• Notes sharing.
• Multiple page interaction.
• Side by side view & interaction.

[00112] The software of the system may be written in any suitable language. In particular, in this example, the software used was mainly Java. Other components, such as drivers, web panel software and deployment tools (e.g. installers), were developed in a variety of other languages which were mostly C based.
High Level Architecture (Network)
[00113] At its core, there are only two 'logical' elements to the system architecture: the server and the client. The server manages meetings, handles all connections, passes messages between clients, etc. The client, which connects to the server, makes requests, joins meetings, sends drawings (via the server), etc.
[00114] According to one example, there are three core elements: the server, the desktop application and the control panel. Technically, the control panel is also a client (and is based off the same network client module as the desktop application; see below for more details).
[00115] Client-server communication occurs over a TCP connection. A specific protocol has been developed for structuring messages, basic logic and some constants, called Steeves Protocol.
[00116] For the desktop application, there is also another connection over UDP that handles audio communication. There is a separate protocol for this, called Velvet Protocol. The audio connection is optional and entirely separate from the master TCP connection. The control panel does not make this audio connection.
[00117] On the desktop application, there are the hardware/driver communications. Drivers for 3rd party hardware are provided as entirely separate applications that run in separate processes to the main application. They communicate with the main application using a very basic network protocol that operates only on the local machine. Communication is over TCP. There are designated ports for each driver that the driver opens and the application connects to. Communication is only one-way. Drivers simply send the state of all pens connected to them constantly, and the application extrapolates active pens and filters out useless/redundant messages.
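The filtering step mentioned in paragraph [00117] may be sketched as follows. The pen-state representation and class name below are illustrative assumptions; the document only states that drivers stream pen state constantly and the application discards redundant messages.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of client-side filtering of the driver's constant pen-state stream:
// a message is kept only if the reported state differs from the last state
// seen for that pen, so unchanged repeats are dropped as redundant.
// Pen state is modelled here as an opaque string such as "x,y,down".
public class PenStateFilter {
    private final Map<String, String> lastState = new HashMap<>();

    // Returns true if the message carries new information and should be kept.
    public boolean accept(String penId, String state) {
        String previous = lastState.put(penId, state); // remember latest state
        return !state.equals(previous);                // drop exact repeats
    }

    public static void main(String[] args) {
        PenStateFilter filter = new PenStateFilter();
        System.out.println(filter.accept("pen1", "10,20,up"));   // true: first report
        System.out.println(filter.accept("pen1", "10,20,up"));   // false: redundant repeat
        System.out.println(filter.accept("pen1", "11,20,down")); // true: pen moved
    }
}
```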
[00118] The electronic pen systems use triangulation methods to determine the location of the writing tip.
Module overview
[00119] Figures 4A and 4B show various modules that are part of the software suite. Figure 4A shows the clients and client control panel connected to the server. Figure 4B shows the desktop connected to the electronic pen systems.
Modules
[00120] VMCLASS Server This is the main server module. It is a standalone Java application. Network IO is built off the VMCLASS project. The server is monolithic: it manages all meetings and handles all connections. TCP (main) connections begin in SteevesProtocolAdapter.java. UDP (audio) connections begin in VelvetProtocolAdapter.java. The actual application starts in VmclassServer.java.
[00121] Steeves Protocol This module contains objects and basic logic shared between client and server for the main TCP network protocol. Every object that is sent over the wire is a basic Java bean object. Messages are serialised into JSON objects.
[00122] Velvet Protocol This module contains objects and basic logic shared between client and server for the audio UDP network protocol. Every object that is sent over the wire is manually serialised and de-serialised. Objects are classified by their length.
[00123] Driver Protocol This module contains objects and basic logic shared between drivers and the application that connects to them.
[00124] VMCLASS Client Contains connection logic for the client only. Network IO is built off the VMCLASS project. VMCLASS Client provides simple functions for calling RPCs (Remote Procedure Calls) on the server and getting the result. An RPC effectively sends instructions to the server to enable the server to perform a process. If an exception is thrown on the server while processing a request, it will be re-thrown on the client, on whatever called the function. Also provides listeners for events (unsolicited messages from the server).
[00125] Velvet Client Similar to VMCLASS Client, for audio. Network IO is built off the VMCLASS project. Also contains recording functionality (directly connecting to the microphone and processing audio), encoding/decoding functionality and mixing/playback functionality.
[00126] Email/Simple/Moodle user providers (Java Eclipse Project) These are plugins that authenticate users through various methods and decide permissions for them (tutor/student). Email User Provider is for adding meeting participants by their email address (and letting them login with their email address). Simple User Provider loads users, passwords and permissions from a flat configuration file (the main server config file, prop-properties). Moodle User Provider connects to a Moodle server over HTTP to authenticate users and get their permissions. Both the Moodle and Email plugins run on both control panel and server. The Simple User Provider runs on the server only.
[00127] Zigma Control Panel This is the control panel client, written as a servlet. Though possible to run on other servlet engines, it is designed to be run on the server. It serves the control panel pages and manages client connections to the server. It performs actions such as deleting, creating and managing meetings on the browser's behalf.
[00128] New Jama This module is effectively the desktop application. It contains all the UI code, rendering drawings, managing the desktop application connection, managing playback recordings, as well as other functions.
[00129] Driver Manager This module spawns separate driver processes depending on the current platform (see High Level Architecture) and connects to them. It passes messages from the individual drivers to the main application so they can eventually be rendered.
[00130] Driver USB Win This is the USB driver that only runs on Windows computers. There are actually two projects to this module: the Java loader, under DriverUSB Win (which saves the native exe file in a temporary folder and loads it), and the actual native application, written in C# and found under UsbPenSupport (which connects to the pen and forwards events over the network connection to the Driver Manager).
[00131] Driver BT Mac OSX Lin This single module connects to pen hardware over Bluetooth. It is built off the Bluecove library to handle the Bluetooth connections. It interprets the raw byte stream from the Bluetooth receiver, turns it into events and forwards said events over the network connection to the Driver Manager.
[00132] Installer This contains a number of various scripts, setups and tools to build the application for its various desktop target platforms. (See Build Process.)
Steeves Protocol Walk-through
[00133] This section explains the life of a typical TCP connection using Steeves Protocol. At its core, Steeves Protocol is broken up into two main systems: the RPC system and the event system (built off the RPC system). When used together, they handle everything networked in the system, except audio. Meeting updates, drawings, login information, etc. are all handled over Steeves Protocol. According to this example, every message in Steeves Protocol is a Java Bean. These Beans are serialised as JSON strings, and these strings are sent over a TCP stream, separated by newlines. When receiving, each end (server or client) first tries deserialising the given message, and then processes it.
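The newline-delimited framing described in paragraph [00133] may be sketched as follows. This is an illustration of the framing rule only: the real system serialises full Java beans with a JSON library, whereas the message content here is a simplified stand-in string, and the class name is an assumption.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of Steeves Protocol framing: each message is one JSON string, and
// messages on the TCP stream are separated by newlines. The sender joins
// messages with '\n'; the receiver splits the buffered stream back into
// individual messages, each of which would then be deserialised.
public class NewlineJsonFraming {
    // Frame a batch of JSON messages for writing to the stream.
    public static String frame(List<String> jsonMessages) {
        return String.join("\n", jsonMessages) + "\n";
    }

    // Split the buffered stream back into individual JSON messages.
    public static List<String> deframe(String buffered) {
        List<String> out = new ArrayList<>();
        for (String line : buffered.split("\n")) {
            if (!line.isEmpty()) out.add(line);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> msgs = Arrays.asList(
            "{\"method\":\"authenticate\",\"params\":[\"user\"]}",
            "{\"method\":\"joinMeeting\",\"params\":[42]}");
        String wire = frame(msgs);
        System.out.println(deframe(wire).equals(msgs)); // prints true
    }
}
```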
[00134] The two main network 'entry points' for Steeves Protocol are SteevesProtocolAdapter (for server) and VMCLASSClient (for client). Messages first appear here on both ends. It is important to note that if an unhandled exception is thrown in any VMCLASS handling thread, the connection is immediately terminated.
[00135] It should be noted that there are two handlers on each side of the protocol. On the client side, a message always goes through SteevesClient & VmclassClient, and on the server side messages always go through SteevesProtocolAdapter before being handled by ProtocolHandler. This separation is on purpose. That is, there are two levels to handling a message: one level simply parses the message, the other applies state and other information to draw meaning from a message. A similar structure can be seen in the Velvet audio protocol structure.
RPCs
[00136] As mentioned above, at the core of Steeves Protocol is the RPC system. Everything is considered an RPC. There is a base message bean (RequestMessage) that is sent to the server that contains the function to call and its parameters, and a response (RequestResponse) that contains the return value of the call, or an exception (if an exception occurred). If there is an exception, it is re-thrown on the client. It is also important to note that during an RPC, the calling method blocks on the client until the server replies. If the request times out, a TimeoutException is thrown.
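The request/response shape of paragraph [00136] may be sketched as below. Only the names RequestMessage and RequestResponse come from the document; the fields, the stand-in server handler and the example "add" function are illustrative assumptions, and the real call blocks on a network round trip rather than an in-process method.

```java
// Sketch of the Steeves Protocol RPC objects: a RequestMessage names the
// function to call and carries its parameters; the RequestResponse carries
// either the return value or an exception, which is re-thrown on the client.
public class RpcSketch {
    static class RequestMessage {
        String method;
        Object[] params;
        RequestMessage(String method, Object... params) {
            this.method = method;
            this.params = params;
        }
    }

    static class RequestResponse {
        Object returnValue;
        RuntimeException exception; // non-null if the server-side call threw
    }

    // Stand-in for the server handling a request and filling the response.
    static RequestResponse serverHandle(RequestMessage req) {
        RequestResponse resp = new RequestResponse();
        try {
            if (req.method.equals("add")) {
                resp.returnValue = (Integer) req.params[0] + (Integer) req.params[1];
            } else {
                throw new IllegalArgumentException("unknown RPC: " + req.method);
            }
        } catch (RuntimeException e) {
            resp.exception = e; // sent back to the client instead of a value
        }
        return resp;
    }

    // Stand-in for the blocking client call: re-throws any server exception
    // on whatever called the function, as described in [00136].
    static Object call(String method, Object... params) {
        RequestResponse resp = serverHandle(new RequestMessage(method, params));
        if (resp.exception != null) throw resp.exception;
        return resp.returnValue;
    }

    public static void main(String[] args) {
        System.out.println(call("add", 2, 3)); // prints 5
        try {
            call("noSuchFunction");
        } catch (IllegalArgumentException e) {
            System.out.println("re-thrown: " + e.getMessage());
        }
    }
}
```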
[00137] On the server, there are actually two higher level handlers (for different levels of message handling, see above). If a message is detected to have come from a plugin, it will be sent to the plugin RPC handler, an IRPCMessageHandler. If not, it will be sent to the default message handler, ProtocolHandler.
[00138] First, the main application makes a call to the SteevesAdapter, for example to authenticate with a username and password. The adapter first makes sure it is connected to the server through its VMCLASSClient (if it can't connect it will throw an exception). The SteevesAdapter then prepares a message bean to send to the server, fills it, and passes it onto the VMCLASSClient to send (and block until it receives a response). The VMCLASSClient then writes the message (complete and in order) and a newline to the server over the VMCLASS TCP connection.
[00139] On the server side, the message is first received by the SteevesProtocolAdapter, which de-serialises the message back into a bean. It then decides whether to send it to the plugin RPC handler for processing or the main protocol handler. If the message is bound for the main protocol handler, the SteevesProtocolAdapter then extracts the function name and parameters from the bean, and calls the function specified in the ProtocolHandler. It then expects a ResponseMessage object to be returned, which it can write back to the client. If an exception is thrown while waiting for it, the Adapter will fill its own ResponseMessage with the exception and write it back. If not, then it will simply write the ResponseMessage it got back to the client.
[00140] The client does a similar thing to the response message once received. It immediately attempts to parse it, and if that fails, closes the connection. The response is matched to the request, and the calling function thread continues execution with the right result. If an exception was returned, then the exception will be re-thrown in the calling function's thread.
Events
[00141] Events are built off the RPC system. An event may be considered at least a portion of a physical writing action that has been recorded. It is best to think of an event as an 'unsolicited' RPC. They are response objects without a request, or request objects without expecting a response. The server handles received events like it would most messages. The client has a listener-publisher pattern for dealing with received events.
[00142] There is only basic handling logic on the server side of event processing. That is, the server simply verifies the user has permission to send that event type, then forwards the event to every other user that is associated with the meeting.
[00143] From client to server, the method is extremely similar to sending RPCs, except the event itself is wrapped up in another bean called a MeetingEvent and sent to a constant handling function (updateMeetingChannel) on the server's ProtocolHandler. The calling client thread does not block like it does for RPCs.
[00144] From server to client, something a little different happens. The server fills the response method name with an event prefix then the event name, and passes the event object as the parameter. For example, if the server wanted to send a ViewingPageEvent event it would set the method name to "event+ViewingPageEvent". The client checks for this "event+" prefix, and if it finds it when processing a message from the server, forwards the object to the main application for processing instead of treating it like an RPC response.
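The "event+" naming convention can be illustrated with a small sketch. Only the prefix itself comes from the description above; the helper names are assumptions:

```java
// Sketch of the "event+" method-name convention: the server prefixes the
// event name, and the client inspects each incoming message to decide
// whether it is an RPC response or an unsolicited event.
public class Main {
    private static final String EVENT_PREFIX = "event+";

    // Server side: wrap an event name in the prefixed method-name form,
    // e.g. ViewingPageEvent -> "event+ViewingPageEvent".
    public static String toEventMethodName(String eventName) {
        return EVENT_PREFIX + eventName;
    }

    // Client side: messages whose method name carries the prefix are routed
    // to the event publisher instead of the RPC response matcher.
    public static boolean isEvent(String methodName) {
        return methodName.startsWith(EVENT_PREFIX);
    }

    public static String eventName(String methodName) {
        if (!isEvent(methodName)) throw new IllegalArgumentException("not an event: " + methodName);
        return methodName.substring(EVENT_PREFIX.length());
    }

    public static void main(String[] args) {
        String method = toEventMethodName("ViewingPageEvent");
        System.out.println(method + " -> event? " + isEvent(method));
    }
}
```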
Velvet Protocol Walk-through
[00145] Velvet is the name given to the system's audio transmission protocol. The protocol and logic are built off UDP. It has its own implementations for pinging, clock alignment, retransmission, missing packet resolution, packet reordering, etc.
[00146] Clients are authenticated via an integer 'token' that they collect from the MeetingParticipant object sent over the Steeves protocol connection. The tokens are random and unique for each user in a meeting on the server. If a user is already connected when another joins, the new user may have their connection denied.
[00147] The following describes the steps a request (authentication) and audio messages take throughout the entire system, both on the client and server.
Joining an Audio Meeting
[00148] In order to connect to an 'audio meeting', first the main application creates a VelvetClient object that will handle all networking on its end. It also creates an AudioRecorder for collecting audio from the microphone. After the main application receives its audio token from the server, it will initiate a connection by calling connect() in the VelvetClient object.
[00149] The client then creates a handshake packet (class: AuthRequest) and fills it with various connection parameters and other connection information (such as system time for clock alignment, etc.). It then calls write() in VMCLASS to send the message to the server.
[00150] Once the packet is received on the server, the server immediately attempts to parse the message in the VelvetProtocolAdapter. The VelvetProtocolAdapter is a parser, sending and receiving messages in a dumb, stateless way. Management of almost everything audio related happens in the AudioManager. Once the message is parsed, it is sent to the AudioManager for processing. The AudioManager looks up the token and matches it with a Meeting and MeetingParticipant object. If not done already, the AudioManager will create an AudioMeeting and AudioConnection for the connection. For every meeting the server is handling, there should be an AudioMeeting that handles audio. For every audio connection the server is handling, there should be an AudioConnection object that handles sending, receiving and buffering of audio on the server. The AudioManager then prepares a response and writes it directly to the client. If the VelvetClient receives a success message, it will begin processing its buffer of audio and begin sending audio to the server.
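The token check performed when joining an audio meeting might look something like the following sketch. The data structures and method names are assumptions; only the behaviour — unknown tokens and already-connected tokens are denied — comes from the description above:

```java
// Sketch of audio-meeting token authentication. Tokens are handed out over
// the Steeves connection; the audio layer matches them against known
// participants and denies unknown or duplicate connections.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class Main {
    // token -> meeting id, as registered when the token is issued
    private final Map<Integer, String> tokenToMeeting = new HashMap<>();
    // tokens that already have a live audio connection
    private final Set<Integer> connected = new HashSet<>();

    public void registerParticipant(int token, String meetingId) {
        tokenToMeeting.put(token, meetingId);
    }

    // Returns the meeting id on success, or null if the connection is denied.
    public String authenticate(int token) {
        String meeting = tokenToMeeting.get(token);
        if (meeting == null) return null;          // unknown token
        if (!connected.add(token)) return null;    // already connected: deny
        return meeting;
    }

    public static void main(String[] args) {
        Main manager = new Main();
        manager.registerParticipant(42, "math-nsw");
        System.out.println(manager.authenticate(42)); // joins the meeting
        System.out.println(manager.authenticate(42)); // denied: already connected
    }
}
```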
Life of an Audio Message
[00151] The life or process of an audio message from client to server and back again is as follows. At multiple stages, packets are 'put away' or stored in a buffer for a certain amount of time, and then collected by another thread, upon which processing continues.
[00152] Audio is first collected from the microphone in the AudioCapture class. The main application then encodes and wraps the audio in a VoiceMessage packet and sends it to the VelvetClient via offerVoiceMessage(). The client then puts the message in the toSend queue and returns the current thread. The VelvetClient has a thread running that waits for messages in the toSend queue. It receives the message some time later and writes it over the network.
[00153] The server initially receives the VoiceMessage in the VelvetProtocolAdapter, the main network entry point for audio in the server. The VelvetProtocolAdapter quickly attempts to parse the message, and if successful passes it straight on to the AudioManager. The AudioManager then quickly looks up which AudioMeeting this connection is in, and which AudioConnection is supposed to handle this connection. If none can be found, the server will close/block the connection. If found, the AudioManager passes the message directly to the AudioConnection object. The AudioConnection object then puts it in its buffer/queue for re-ordering and buffering. It then returns the current thread.
[00154] While this is all happening in the server, there is a separate thread running constantly in the AudioMeeting, similar to the VelvetClient. This thread runs a function (doPumpWithoutMixing) every 20 milliseconds (the length of time an audio packet represents, defined in VoiceMessage.java) to push audio to every other client in the meeting. Each time the function is run, it first pops audio for every AudioConnection in the AudioMeeting (if possible), and sends it to every other connection in the AudioMeeting. Note that the AudioMeeting uses a basic interface for this, so that sending audio to be recorded will work correctly.
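The per-connection queueing and the 20 ms pump can be sketched as follows. Threading, timing and the network layer are omitted; a single call to pumpOnce() stands in for one tick of the doPumpWithoutMixing loop, and the types and names are illustrative assumptions:

```java
// Sketch of the pump step: each connection has its own incoming queue, and
// on every tick the meeting pops one packet per connection and fans it out
// to every *other* connection in the meeting.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class Main {
    public static class AudioConnection {
        public final String user;
        public final Deque<String> incoming = new ArrayDeque<>(); // packets from this user
        public final List<String> outgoing = new ArrayList<>();   // packets to play back
        public AudioConnection(String user) { this.user = user; }
    }

    // One tick of the pump: pop a packet from each connection (if any) and
    // send it to every other connection in the meeting.
    public static void pumpOnce(List<AudioConnection> meeting) {
        for (AudioConnection from : meeting) {
            String packet = from.incoming.poll();
            if (packet == null) continue; // nothing buffered for this user yet
            for (AudioConnection to : meeting) {
                if (to != from) to.outgoing.add(packet);
            }
        }
    }

    public static void main(String[] args) {
        AudioConnection a = new AudioConnection("host");
        AudioConnection b = new AudioConnection("attendee");
        a.incoming.add("host-packet-1");
        pumpOnce(List.of(a, b));
        System.out.println(b.outgoing); // [host-packet-1]
    }
}
```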
[00155] Once a client receives audio from the server, it performs almost exactly the same behaviour as the server. That is, it puts the audio in a queue, one for each client, and pops the audio from the queues every 20ms for playback on the speaker line. A software audio mixer is also provided that mixes different users' audio.

[00156] Fig. 5 shows a flow diagram of a process for connecting one computing device to another computing device for a meeting. The process starts at step 501. At step 503, the first computing device detects a first writing action. That is, a first physical writing action being performed on a first physical writing surface, such as a portion of a document, is detected at the first computing device. According to this example, the detection is performed in association with an electronic pen system as described above. It will be understood that any other writing detection system may be used.
[00157] At step 505, the first computing device generates a first writing signal based on that detected first physical writing action.
[00158] At step 507, the first computing device transmits the first writing signal from the first computing device to the server.
[00159] Subsequently or concurrently, a second computing device is also detecting a second writing action at step 509. That is, a second computing device detects a second physical writing action that is being performed on a second physical writing surface (e.g. a portion of the document).
[00160] The second computing device generates a second writing signal based on the detected second physical writing action at step 511.
[00161] At step 513, the second computing device transmits the second writing signal from the second computing device to the server.
[00162] Therefore, the server receives both the first writing signal from the first computing device and the second writing signal from the second computing device, as described in more detail below. Upon receipt, the server forwards the first writing signal to the second computing device and forwards the second writing signal to the first computing device. That is, the first computing device receives the second writing signal from the server at step 515. Further, the second computing device receives the first writing signal from the server at step 521. The first computing device then generates an output, in this example a display, of the first and second writing signals at step 517. The process then ends at step 519 for the first computing device. At the second computing device, the second computing device generates an output, in this example in the form of a display, of the second and first writing signals at step 523. The process then ends with step 525.

[00163] Fig. 6 shows a flow diagram of a process according to an alternative example where the first computing device is connected to two or more other computing devices for a meeting. According to this example, one of the computing devices is associated with a host of the meeting (steps 603, 605, 607, 615, 617, 619). One or more of the other computing devices are associated with attendees of the meeting (steps 609, 611, 613, 621, 623, 625). For example, the host may be a teacher and the attendees may be students. The process starts at step 601.
[00164] At step 603, the first computing device (host) detects a first writing action. That is, a first physical writing action being performed on a first physical writing surface, such as a portion of a document, is detected at the first computing device. The detection is performed in association with an electronic pen system as described above.
[00165] At step 605, the first computing device generates a first writing signal based on that detected first physical writing action.
[00166] At step 607, the first computing device transmits the first writing signal from the first computing device to the server.
[00167] Subsequently or concurrently, a second computing device (attendee) is also detecting a second writing action at step 609. That is, a second computing device detects a second physical writing action that is being performed on a second physical writing surface (e.g. a portion of the document).
[00168] The second computing device generates a second writing signal based on the detected second physical writing action at step 611.
[00169] At step 613, the second computing device transmits the second writing signal from the second computing device to the server.
[00170] Further computing devices (attendees) may also be generating writing signals for transmission to the server, as indicated by the dots in Fig 6.
[00171] Therefore, the server receives all the writing signals from all of the computing devices (host and attendees), as described in more detail below. Upon receipt, the server forwards the first writing signal (host) to all of the other computing devices (attendees) in that meeting. The server also forwards all of the other computing devices' writing signals (attendees) to the first computing device (host). That is, the first computing device receives all the writing signals for that meeting from the server at step 615. Further, the second computing device (and other computing devices in the meeting) receives the first writing signal from the server at step 621. The first computing device then generates an output, in this example a display, of the first and further writing signals at step 617 according to a number of different options, which are explained with reference to the user interface shown in Figs. 9A to 9K. The process then ends for the first computing device at step 619. At the second computing device (and further computing devices in the same meeting) an output is generated, which in this example is in the form of a display. The output at step 623 is of the second and first writing signals only. That is, the second computing device can only display the writing signals generated by the host's computing device and its own computing device. The process then ends at step 625.
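The forwarding rule of this one-to-many example reduces to a simple predicate, sketched below under the assumption that devices are identified by simple ids (the names are illustrative):

```java
// Sketch of the one-to-many forwarding rule: the host's device receives
// every attendee's writing signals, while each attendee receives only the
// host's. No signal is ever echoed back to its originator, which already
// renders it locally.
public class Main {
    public static boolean shouldForward(String origin, String target, String host) {
        if (origin.equals(target)) return false; // never echo back to the originator
        if (target.equals(host)) return true;    // the host sees all attendees' writing
        return origin.equals(host);              // attendees see only the host's writing
    }

    public static void main(String[] args) {
        System.out.println(shouldForward("host", "attendee-1", "host"));       // true
        System.out.println(shouldForward("attendee-1", "host", "host"));       // true
        System.out.println(shouldForward("attendee-1", "attendee-2", "host")); // false
    }
}
```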
[00172] The process is repeated for each writing action. A writing action may be multiple strokes, a stroke or partial strokes of the pen.
[00173] It will be understood, as shown in Fig. 6, that further computing devices may perform the same actions as shown in steps 609, 611, 613, 621 and 623 to allow a one-to-many connection via the server for sharing a meeting. That is, the meeting sharing procedure enables users to share the physical writing actions being performed on a physical medium with other users also sharing their physical writing actions on physical mediums.
[00174] Further, as explained above, it will be understood that the first writing signals that are being output from the first computing device may be output prior to transmitting the generated first writing signal to the server. Further, it will be understood that the second or further writing signals being generated by the second or further computing devices may be output prior to transmitting the generated second (or further) writing signals to the server.
[00175] Also, it will be understood that the representation of the physical writing actions being displayed on a screen connected to the computing devices may be shown in relation to a virtual writing space that corresponds with the physical writing surface on which the physical writing action is being performed. That is, the image shown on the screen may represent a piece of paper upon which the user is actually performing the physical writing actions. The user may select via a menu dropdown the type of paper upon which they are performing the physical writing actions. Further, the position of the physical writing actions in relation to the physical writing medium may be detected by the computing devices via the pen systems in order to display the representation of the physical writing actions on the virtual writing spaces in a position that corresponds with the position at which the physical writing actions were originally performed on the physical writing surfaces.

[00176] In accordance with the one-to-many representation process of Fig. 6, it is clear that a first computing device receives all the writing signals of the remaining computing devices and outputs a representation of the physical writing actions based on its own generated writing signal and the received writing signals from the other computing devices. Further, the other computing devices (one or more) receive the writing signals from the first computing device only (via the server) and output a representation of the physical writing actions based on their own generated writing signals and the writing signals received from the first computing device only.
[00177] It will be understood that the computing devices include a rendering process to render the writing signals on the user interface. Any suitable rendering process may be used.
[00178] Optionally, any suitable compression and decompression process may be used when transmitting and receiving various data packets at various points throughout the system.
[00179] It will be understood that the server may be connected to a publicly accessible network. For example, the server may be located in the cloud to enable any user with a suitable username and login to access the server functions.
[00180] Therefore, it can be seen that the process of forwarding the writing signals from the computing devices involves the step of forwarding to each of the multiple computing devices all the writing signals that have been generated by all of the other computing devices. Further, the writing signal generated by a particular computing device may not be forwarded to that particular computing device, to avoid duplication. That is, a writing signal generated by a particular computing device does not need to be sent from the server back to that computing device. That is, the server is arranged to only send a computing device writing signals that were not generated by that computing device. The writing signals generated by a computing device are stored locally and output locally in real-time.
[00181] It will be understood that certain available electronic pen systems may require calibration of the pen detection algorithms. That is, the IR and/or ultrasound transmitters may not be located at the tip of the pen but may be a particular distance away, such as 1cm, for example. This can cause an error in terms of the detected position of a 1cm diameter circle. This error may vary between left and right handed people and also between different personal styles of writing. In order to minimize the error, the herein described system and method performed at the computing device provides a calibration option for the users of the system.

[00182] The calibration option includes the steps of the user first positioning the pen at an angle of 90° to the paper surface. The desktop application running on the computing device is notified of this position by the user, for example by the user selecting a calibration option on the desktop application. The user then subsequently positions the pen at the user's normal writing position at the same location on the paper. This position is then sent to the desktop application. The difference in the location readings is used as a calibration in order to minimize the variable writing error. Typically the error circle may be reduced from 10mm to 2mm.
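Assuming pen positions arrive as (x, y) readings, the calibration described above reduces to computing a fixed offset from the two readings and subtracting it from subsequent positions. This is only a sketch of the idea; the reading format and method names are assumptions:

```java
// Sketch of grip-angle calibration: the difference between the vertical
// (90 degree) reading and the normal-grip reading at the same spot on the
// paper becomes a fixed offset removed from later readings.
public class Main {
    public static class Offset {
        public final double dx, dy;
        public Offset(double dx, double dy) { this.dx = dx; this.dy = dy; }
    }

    // Both readings are taken at the same physical location on the paper,
    // so any difference is the error introduced by the user's grip angle.
    public static Offset calibrate(double[] vertical, double[] normalGrip) {
        return new Offset(normalGrip[0] - vertical[0], normalGrip[1] - vertical[1]);
    }

    // Correct a subsequent reading by removing the grip-angle error.
    public static double[] correct(double[] reading, Offset offset) {
        return new double[] { reading[0] - offset.dx, reading[1] - offset.dy };
    }

    public static void main(String[] args) {
        Offset o = calibrate(new double[] {100.0, 50.0}, new double[] {108.0, 53.0});
        double[] corrected = correct(new double[] {208.0, 153.0}, o);
        System.out.println(corrected[0] + ", " + corrected[1]); // 200.0, 150.0
    }
}
```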
[00183] From the perspective of the server, the server enables the sharing of physical writing actions by receiving generated writing signals that are associated with a meeting from two or more computing devices. These generated writing signals are associated with physical writing actions captured by the two or more computing devices. The server then forwards to the computing devices the generated writing signals associated with the meeting to enable each computing device to output a representation of the physical writing actions.
[00184] For example, the server receives a first writing signal from a first computing device. The first writing signal is based on a first physical writing action being performed on a first physical writing surface detected at the first computing device. The server also receives a second writing signal from a second computing device. The second writing signal is based on a second physical writing action being performed on a second physical writing surface detected at the second computing device. The server then forwards the first writing signal from the server to the second computing device and forwards the second writing signal from the server to the first computing device. This therefore enables the first computing device and the second computing device to output a representation of both of the detected first and second physical writing actions.
[00185] The server may transmit to the first computing device all the writing signals of the remaining computing devices to enable the first computing device to output a representation of the physical writing actions based on its own generated writing signal and the received writing signals.
[00186] The server may transmit to one or more of the plurality of computing devices the writing signals from the first computing device only. This enables the one or more of the plurality of computing devices to output a representation of the physical writing actions based on their own generated writing signals and the writing signals received from the first computing device only.

[00187] The writing signals may be forwarded by the server by forwarding to each of the plurality of computing devices all writing signals that have been generated by all of the other computing devices. The writing signal generated by a particular computing device may not be forwarded to that particular computing device.
[00188] The generated writing signals are received by the server in real time.
[00189] The server may store each of the writing signals at the server for retrieval by the plurality of computing devices after completion of the meeting.
[00190] The writing signals may be part of a personalised workspace associated with each computing device associated with the meeting.
[00191] The server may record the physical writing actions that occur in a meeting for a particular user. The server may record all physical writing actions associated with a host of a meeting and attendees of the meeting. Further, the server may send to a computing device associated with the host of the meeting all the recorded physical writing actions.
[00192] The server may record all physical writing actions associated with a host of a meeting and attendees of the meeting. The server may send to a computing device associated with a first attendee of the meeting a combination of the host's recorded physical writing actions and the physical writing actions of the first attendee, while excluding the physical writing actions of other attendees.
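This personalisation rule can be sketched as a filter over the recorded actions. The record format (an author tag plus stroke data) and the method name are illustrative assumptions:

```java
// Sketch of the personalised-recording rule: each attendee's copy of a
// recorded meeting contains only the host's actions and that attendee's
// own, while the host's copy contains everything.
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Keep an action in `viewer`'s copy of the recording if the viewer is
    // the host, or the action was produced by the host or the viewer.
    public static List<String[]> recordingFor(List<String[]> actions, String viewer, String host) {
        List<String[]> copy = new ArrayList<>();
        for (String[] action : actions) {  // action[0] = author, action[1] = stroke data
            String author = action[0];
            if (viewer.equals(host) || author.equals(host) || author.equals(viewer)) {
                copy.add(action);
            }
        }
        return copy;
    }

    public static void main(String[] args) {
        List<String[]> actions = List.of(
            new String[] {"host", "h1"},
            new String[] {"sam", "s1"},
            new String[] {"alex", "a1"});
        System.out.println(recordingFor(actions, "sam", "host").size()); // 2: host + sam
    }
}
```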
[00193] Further, the server may record personalised audio signals in addition to the physical writing actions associated with a meeting.
[00194] Fig. 7A shows a system block diagram of a similar system to that shown and described in relation to Fig. 3A. According to this example, the first computing device 305A is not connected to a standard electronic pen system. The standard electronic pen system of Fig. 3A is replaced with a camera or an electronic device 701 including or incorporating a camera, as described with reference to Figs. 2A and 2B. Alternatively, a camera incorporated into a computing device as described with reference to Figs. 1A and 1B may be used. Further, the electronic pen 309A is replaced with a standard non-electronic pen 703.
[00195] In this example, the second computing device 305B is using the writing detection system as described with reference to Fig. 3A.

[00196] Further, Fig. 7B shows a system block diagram of a similar system to that shown and described in relation to Fig. 3B. According to this example, the first computing device 305A and second computing device 305B are not connected to standard electronic pen systems. The standard electronic pen system of Fig. 3B is replaced with a camera, or an electronic device (701A, 701B) including or incorporating a camera as described with reference to Figs. 2A and 2B, or a computing device incorporating a camera as described with reference to Figs. 1A and 1B. For example, the device may be a smartphone or tablet device with an inbuilt camera. The device may be positioned in or on a stand or base to hold the camera steady.
[00197] In this example, the further computing device 305C is using the writing detection system as described with reference to Fig. 3B.
[00198] Further, the electronic pens (309A, 309B) are replaced with standard non-electronic pens (703A, 703B). The pens (703A, 703B) may be the same or different. In this example, the third computing device 305C is using a standard electronic pen system as described with reference to Fig. 3B. It will be understood that any other combination of standard electronic pen systems and the herein described camera system for detecting physical writing actions may be connected to the computing devices.
[00199] This electronic (or computing) camera device is arranged to detect a physical writing action via the lens of the camera. That is, the device incorporates a software algorithm that performs the process of accessing an image generated by the camera associated with an electronic device. It will also be understood that the electronic device may also be a computing device as described with reference to Figs. 1A and 1B where a camera (e.g. a webcam) is connected to that computing device.
[00200] The electronic (or computing) device also performs the process of analysing the image to detect a first physical writing action. The device generates a first writing signal based on the analysis and then outputs that generated first writing signal. The generated writing signal may be stored either locally in local memory or externally. For example, the generated writing signal may be stored on a connected external memory or transferred to the server. The transfer to the server may be in real time.
[00201] The device is arranged to detect within the image a plurality of edges of the writing surface upon which the writing actions are being performed. The device defines a boundary of the writing surface based on the detected edges, and then defines a number of distinct areas within the defined boundary. The device then analyses the image to detect a first physical writing action in one or more of the defined areas based on the detected movement of the writing implement being used, and in particular the detected movement of the writing tip of the writing implement.
[00202] In order to determine whether the writing implement is actually making a mark on the physical writing surface and not just being moved in the air, the device is arranged to only record the movement being tracked if a determination is made that actual writing has occurred.
[00203] That is, the device is arranged to analyse the image in order to determine whether writing has occurred on a physical writing surface and, upon a positive determination, analyse the image to detect the writing tip of the writing implement performing the first physical writing action, detect movement of the writing tip in the area previously defined within the boundary, and then generate the first writing signal based on the detected movement. In this way, the detection of physical writing enables the device to accurately record the movement of the writing tip to record the writing action. As soon as the device detects that a mark is not being made on the writing medium, the recording of the physical writing action is paused. It should be understood that the camera lens is not merely recording the action of physical writing, but is recording the movement of the writing implement (and in particular its tip) only upon the detection that a physical writing action is occurring. That is, movement of the tip is constantly being monitored, but the recordal of a writing action is only made upon detection within the image that a physical writing action is occurring (i.e. a mark is being made).
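The gating behaviour described above, where tip movement is tracked continuously but positions are only recorded while a mark is detected, can be sketched as a small state machine. The image-analysis step is abstracted to a boolean per frame, and all names are assumptions:

```java
// Sketch of pen-down gating: positions are only recorded into the current
// stroke while the image analysis reports that a mark is being made; a
// lift (no mark) pauses recording, and the next mark starts a new stroke.
import java.util.ArrayList;
import java.util.List;

public class Main {
    private final List<List<String>> strokes = new ArrayList<>();
    private List<String> current = null;

    // Called once per frame with the tracked tip position and the result of
    // the "is a mark being made?" image analysis.
    public void onFrame(String tipPosition, boolean markDetected) {
        if (markDetected) {
            if (current == null) {              // pen has just touched the surface
                current = new ArrayList<>();
                strokes.add(current);
            }
            current.add(tipPosition);
        } else {
            current = null;                     // pen lifted: pause recording
        }
    }

    public List<List<String>> strokes() { return strokes; }

    public static void main(String[] args) {
        Main recorder = new Main();
        recorder.onFrame("a", true);
        recorder.onFrame("b", true);
        recorder.onFrame("c", false); // moving in the air: not recorded
        recorder.onFrame("d", true);  // a new stroke begins
        System.out.println(recorder.strokes()); // [[a, b], [d]]
    }
}
```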
[00204] The device is in communication with the computing device 305A and may utilise a database of writing implements either stored on the computing device 305A or in the server 301 to enable detection of the type of writing implement within the image. By accessing the database, a comparison of the sub-image of the detected writing implement may be made with images of writing implements in the database. Upon determining the type of writing implement based on the comparison, the server 301 or computing device 305A may use the data in the database entry for that writing tip to determine the physical location in space of the writing tip of that writing implement.
[00205] Therefore, this writing detection system and method may be used in conjunction with the writing collaboration system also described herein. The writing detection system provides reliable optical/camera based detection.

[00206] The following steps provide reliable writing detection. The software operates by classifying and analysing the image content for characteristic (pen) features in the consecutive frames using the following steps:
[00207] Edge detection: This is used to identify points in an image at which the image changes sharply. An edge is a boundary between two regions with relatively distinct properties.
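As a minimal illustration of this step, the sketch below scans a single row of grayscale pixel values and marks the positions where intensity changes sharply between neighbouring pixels. Real edge detectors operate in two dimensions (e.g. with gradient operators); this one-dimensional threshold version is only a sketch:

```java
// Minimal 1-D illustration of edge detection: an edge is reported wherever
// the intensity difference between adjacent pixels exceeds a threshold.
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Returns the indices i where |pixels[i+1] - pixels[i]| exceeds the threshold.
    public static List<Integer> edges(int[] pixels, int threshold) {
        List<Integer> result = new ArrayList<>();
        for (int i = 0; i + 1 < pixels.length; i++) {
            if (Math.abs(pixels[i + 1] - pixels[i]) > threshold) result.add(i);
        }
        return result;
    }

    public static void main(String[] args) {
        // A bright page (around 200) with a dark pen stroke (around 30) in the middle.
        int[] row = {200, 198, 201, 30, 32, 29, 199, 200};
        System.out.println(edges(row, 100)); // [2, 5]
    }
}
```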
[00208] Detect/Follow identified areas in image: This allows detection of movement in particular areas within the image. The contour detector combines areas based on spectral clustering.
[00209] Detect Pen objects in image: First, a classifier is developed. After a classifier is developed, the classifier can be applied to input images collected from the camera. Cascade classifiers may be used as a machine learning method for training the classifier to detect an object in different images; in this case the image of the pen and the tip of the pen.
[00210] Detect new writing in image: This allows automatic detection of actual writing actions occurring for the purpose of classifying the pen action as active or non-active. This allows classification of pen movements into write or non-write modes based on whether actual writing has occurred or not.
[00211] Therefore, the system can detect physical writing actions being performed using a writing implement (e.g. a pen) that is writing on any type of surface, such as a piece of paper or a whiteboard, which is visible to the camera. The camera based solution may detect the pen and follow its movement in relation to defined areas within a defined boundary, and subsequently display the written images or text on a display connected to a computing device. The pen can be moved in three-dimensional space and tracked by the camera relative to the areas and boundary based on the images captured by the camera. Corresponding images may be displayed on a display of the computing device. The camera tracks the different positions of the pen while a user is writing or drawing and then either stores the data to be uploaded later or transmits the data simultaneously to a computing device. The computer displays the images and text drawn on the surface. The computer may be located anywhere, as long as it is able to communicate wirelessly with the camera and display the written text or images.
[00212] Fig. 8 shows a process flow diagram. The process starts at step 801. At step 803, an image captured by the camera is accessed. At step 805, the image is analysed to detect a physical writing action. At step 807, a writing signal is generated. At step 809, the writing signal is output. The process then ends at step 811.

[00213] Figs. 9A to 9K show a number of images of a user interface that may be generated at one or more of the computing devices (305).
[00214] Fig. 9A shows a user interface screen displayed by the desktop application running on the computing device operated by a host of a meeting. The host may be a teacher, for example.
[00215] The user must enter a username and password to log in to a session. The user then has the option to host a particular session such as a "math NSW" session. Alternatively, the user may attend a particular session that is being hosted by someone else.
[00216] Alternatively, the user may schedule a meeting with another user.
[00217] Figure 9B shows a further user interface. This user interface is displayed by a desktop application running on a computing device operated by another user. In this case, the user is not a host but an attendee of a meeting being run by a host. The attendee must enter their username and password to log in to a session. The user may then select a meeting that they have been invited to by a host in order to attend that meeting. An indicator is provided in the dropdown list of available meetings for that particular user to show whether the meeting has already started or not. Upon selecting login and the system confirming their login details are correct, the user is taken to a landing page upon which they can view a canvas of the meeting. The canvas displays all the physical writing actions that have been performed by the host of the meeting. Further, the canvas also shows the physical writing actions that are being performed by the attendee. The physical writing actions of the other attendees are not shown on the attendee's screen.
[00218] Fig. 9C shows a user interface for the host of the meeting. A large canvas page (virtual writing space) is shown for this session. The page indicates all physical writing actions that have been performed by the host. A window is provided within the user interface indicating all the attendees attending the session. A further window is also provided that enables the host to provide feedback to one or more of the attendees.
[00219] An option is also provided to start recording the session. The recording of the session will record all the actions performed by the host as well as all the actions performed by all of the attendees to the session. However, each attendee will only be able to access the recording of their own actions and those of the host when the host collaborates with the attendee.

[00220] Fig. 9D shows a further user interface available to the host of the session. This interface indicates all the help information available to the host. For example, help information is provided to show the user that various views may be available to the host, as described with reference to Figs. 9G, 9H and 9I.
[00221] Fig. 9E shows a further user interface for the host. This particular user interface allows the host to view the page of one particular attendee (Sam). While viewing this page, the host may perform physical writing actions that are detected by the host's computing device and forwarded by the server to the attendee's computing device for display on the attendee's page. That is, the host may perform writing actions that are then displayed on Sam's page.
[00222] None of the other attendees will receive the same physical writing actions on their pages.
[00223] Fig. 9F shows a further user interface available to the host and the attendees. According to this interface, the users may chat with the host to ask questions and provide feedback.
[00224] Fig. 9G provides a further user interface available to the host. According to this interface, the host may switch between different views to enable the host to view all of the pages being produced by the individual attendees in one screen. That is, all of the attendees' pages are displayed on the screen available to the host. Further, the host may select an individual's page in order to interact with that user's page. The user interface shown in Fig. 9H shows the host selecting a particular page of a particular user to enable the host to interact with that user via their page.
[00225] Fig. 9I shows a user interface where the host may select two or more users to view the pages of those users side-by-side in order to make a comparison. That is, the physical writing actions of two or more attendees may be monitored by the host, and the host may interact with each of those pages to generate physical writing actions that are then transmitted to those individual attendees.
[00226] Fig. 9J shows a user interface available to an attendee of the session. The attendee's virtual writing space is displayed here. It shows the attendee's writing actions as well as those of the host. Other attendees' writing actions are not shown. According to this user interface, the attendee may receive instant feedback from the host. [00227] Fig. 9K shows a further user interface available to the host (as well as to the attendee) wherein the attendees and hosts may record their session. Sessions that are recorded are personalised sessions in the form of a video that records all the physical writing actions performed by both the host and the attendee. Further, audio signals are also recorded alongside the images of the interaction on the screen. Each attendee receives a recording of their own interactions and those with the host. The attendees do not receive copies of other attendees' physical writing actions or audio. That is, any audio signals or voice signals that are generated and forwarded to each of the host and the attendees are personalised for those particular users. That is, each attendee only receives audio that is generated by them or generated by the host. They do not receive any audio from other attendees. Therefore, a personalised voice recording of the meeting is provided to each attendee. Further, the host is able to receive voice recordings from all of the attendees including themselves.
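The personalisation rule described above can be sketched as a simple filter. This is an illustrative sketch only, not the patented implementation; the function and field names (`personalised_recording`, `author`, `kind`) are assumptions introduced for the example.

```python
# Hypothetical sketch: each attendee's recording contains only their own
# events (writing actions or audio) plus the host's, while the host's
# recording contains every event in the session.

def personalised_recording(events, user, host):
    """Filter a session's recorded events for one user."""
    if user == host:
        return list(events)                     # host receives every event
    return [e for e in events
            if e["author"] in (user, host)]     # attendee: self + host only

session = [
    {"author": "host", "kind": "stroke"},
    {"author": "sam", "kind": "stroke"},
    {"author": "alex", "kind": "audio"},
]
print(personalised_recording(session, "sam", "host"))
```

Applying the filter once per participant yields the personalised video and voice recordings described in paragraph [00227].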
[00228] It will be understood that the server may incorporate an algorithm to track which attendee's page the host is viewing and to then link that determination with the ability to enable the host to interact with that attendee via the attendee's page. Also, the server can determine which of multiple pages multiple attendees are viewing in order to assist the host in determining further actions with the attendees. An indication may be provided on the user interface to indicate to the host which of the pages is the active page being viewed by the attendee to assist the host in collaborating and sharing information via that active page.
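The tracking algorithm mentioned above could be realised with a small server-side mapping from each user to the page they are currently viewing. This is a minimal sketch under assumed names (`PageTracker`, `set_view`, etc.), not the specification's actual algorithm.

```python
# Sketch: the server records which attendee's page each user is viewing,
# so the host's writing actions can be routed to the right attendee and
# an "active page" indicator can be shown for each page.

class PageTracker:
    def __init__(self):
        self.viewing = {}                 # user -> owner of the viewed page

    def set_view(self, user, page_owner):
        self.viewing[user] = page_owner

    def route_host_action(self, host):
        """Return the attendee whose page should receive the host's action."""
        return self.viewing.get(host)

    def viewers_of(self, page_owner):
        """Users currently looking at a given page (for the indicator)."""
        return [u for u, p in self.viewing.items() if p == page_owner]

tracker = PageTracker()
tracker.set_view("host", "sam")           # host opens Sam's page
tracker.set_view("sam", "sam")            # Sam is on their own page
print(tracker.route_host_action("host"))  # host strokes go to Sam's page
```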
[00229] Further, the server may provide the ability to buffer content between the server and client. These buffers may be used to transmit custom page backgrounds (PDF documents, JPG/PNG images etc.).
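One way such buffering might work, shown purely as an assumption-laden sketch, is to split a large page background into fixed-size chunks for transmission and reassemble them on the receiving side.

```python
# Illustrative sketch: chunked transfer of a page background (e.g. PDF or
# image bytes) between server and client. CHUNK is tiny here for the demo;
# a real buffer would be kilobytes or larger.

CHUNK = 4                                  # bytes per chunk (demo size)

def to_chunks(data):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def reassemble(chunks):
    return b"".join(chunks)

background = b"%PDF-1.4 ...page background bytes..."
chunks = to_chunks(background)
assert reassemble(chunks) == background    # round-trips losslessly
```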
[00230] Further, synchronisation between the server and the clients may be provided in any suitable form to enable users to leave and join meetings seamlessly. For example, the system enables a user to leave a meeting by choice or because of failure of communication or one of the other peer components. Recovery and restart is the automated continuation of processing for the meeting after failure or restart. The applications are made aware that recovery has taken place. The recovery allows the applications to continue the meeting process with only a short delay. The recovery process will provide for no loss of data in the meeting history.
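The recovery behaviour described above can be sketched as replay from an ordered meeting history: a client that reconnects resumes from the last sequence number it saw, so no writing data is lost. All names here are illustrative assumptions, not the specification's implementation.

```python
# Sketch: the server keeps the full ordered meeting history; a rejoining
# client catches up by replaying every event after the last one it received.

class MeetingHistory:
    def __init__(self):
        self.events = []                  # ordered (seq, payload) pairs

    def append(self, payload):
        self.events.append((len(self.events), payload))

    def replay_from(self, last_seen_seq):
        """Events a rejoining client missed while disconnected."""
        return [e for e in self.events if e[0] > last_seen_seq]

history = MeetingHistory()
for stroke in ["stroke-0", "stroke-1", "stroke-2"]:
    history.append(stroke)

# Client disconnected after seeing seq 0; on rejoin it catches up:
missed = history.replay_from(0)
print(missed)   # the two strokes written while the client was away
```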
[00231] The live dashboard available to the host enables the teacher to monitor active pages being used by the attendees. The host may monitor the active pages in real time and interact with each attendee or two or more attendees via those active pages. [00232] The various algorithms and software elements (computer program product) may be incorporated within a computer readable medium, which has recorded thereon the computer program for implementing any of the methods as described herein.
[00233] It will be understood that the herein described systems and methods may be modified, such as, for example, using a web based application that can be accessed across multiple devices. A browser based plugin may be provided for detection of real-time hand-writing.
[00234] Further optical recognition of handwriting through mobile devices may be provided as described herein.
[00235] Further, the desktop and the scheduling application may be combined into one web based interface.
[00236] Further, it will be understood that users may use a variety of tools to write with, such as pens and lead pencils.
[00237] Further, it will be understood that other communication technologies besides Bluetooth may be used to communicate with the electronic base of the pen system and the computing device.
[00238] The herein described systems and methods provide the following advantages: Desktop applications are stable and work on main operating systems; There are no lag or latency issues, providing real-time collaboration in an instant; The resolution of the displayed writing is very high; The server platform is cloud based, making it easy to access and secure; Users can write on any notepad they prefer; Pre-printed paper is not necessary; Recordings of meeting collaborations are stored on the server; The group meetings are scalable; The servers are capable of running multiple meetings concurrently.
[00239] Various other enhancements may be made to the system to improve functionality. For example, virtual glasses may be incorporated into the system. The virtual glasses include a camera which is used with the pen writing system to pick up the writing actions of the user. Further, the virtual glasses may provide the user with an immersive experience. For example, a student studying at home may wear the virtual glasses and feel as if they are in a real classroom environment. This may assist in reducing distractions. As a further example, augmented reality images may be provided to introduce additional information as well as viewing writing actions from the pen system. According to these systems, the glasses may communicate with a local computer system using Bluetooth, or may communicate with a remote computer system via any other medium such as Wi-Fi or the internet. The physical elements of writing may be mixed with the virtual classroom, as well as the augmented information.
[00240] It will be understood that, as an alternative, a desktop application is not required. The remote computer may be a web server that is in communication with the user's computing device. The web server may take control of the camera of the user's computer or electronic device. The web server may then carry out the process of detecting and processing the hand writing. The movement detection of the pen connected to the user's computer is carried out by a software plug-in executed on the user's computer. The movement data is then transferred to the web server for analysis at the server.
[00241] It will be understood that the system may be used to provide online examinations. The camera may be used to monitor the user taking the exam to ensure that the rules of the exam are followed.
[00242] It will be understood that the system may incorporate analytical tools to monitor when and how long a user was using the system. For example, the system may monitor the length of time a user was in a virtual class by analysing when writing actions were being recorded.
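The analysis described above could, for example, estimate active time from the timestamps of recorded writing actions. The function below is a hedged sketch under an assumed inactivity threshold; it is not the specification's analytical tool.

```python
# Sketch: sum the time between consecutive writing actions, treating any
# gap longer than idle_gap seconds as inactivity (assumed threshold).

def active_minutes(timestamps, idle_gap=300):
    """Estimate minutes of activity from writing-action timestamps (seconds)."""
    ts = sorted(timestamps)
    total = 0
    for a, b in zip(ts, ts[1:]):
        if b - a <= idle_gap:
            total += b - a
    return total / 60

# Writing actions at t=0s, 60s, 120s, then a one-hour gap, then t=3720s:
print(active_minutes([0, 60, 120, 3720]))   # 2.0 (the long gap is ignored)
```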
[00243] It will be understood that the server may communicate with a CRM system in order to obtain and utilise corporate information. For example, data may be imported from and shared with the CRM system and associated database to enable access rights to be established. As another example, the CRM system may enable student details to be retrieved and imported into the herein described system.
[00244] According to a further example, the system may support distributing, supervising and collecting test results in a secure and reliable manner using a cloud based connected service. According to this example, tests may be created via the described cloud-based system. Further, Optical Mark Recognition (OMR) template elements may be supported.
[00245] According to the template, identification information for the user may be created automatically. Items and question numbers in the template may be numbered automatically. Written response areas may be created. Different grid layouts may be used to ensure optimal layout of the OMR elements. [00246] Once the form template is finished, it is stored as template information in the cloud database, so the collected forms can be recognized by the system. The form template can be published as a unique form for each recipient, or it can be published as a single unidentified form and copied or printed into as many forms as is needed.
[00247] The tests may be distributed using a secure VPN network. All tests are stored temporarily in an encrypted file system via a controller.
[00248] The cloud server acts as a Certificate Authority (CA) providing a PKI (public key infrastructure). The PKI consists of i) a separate certificate (also known as a public key) and private key for the server and each client, and ii) a master Certificate Authority (CA) certificate and key that is used to sign each of the server and client certificates. This supports bidirectional authentication based on certificates, meaning that the client must authenticate the server certificate and the server must authenticate the client certificate before mutual trust is established.
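The bidirectional certificate authentication described above corresponds to what is commonly called mutual TLS. The sketch below shows how such contexts could be configured using Python's standard `ssl` module; the file names are assumptions, and in the described system the cloud server itself acts as the CA that signed both sets of certificates.

```python
# Sketch of mutual (bidirectional) certificate authentication: both sides
# load their own certificate/key and verify the peer against the master CA.

import ssl

def make_server_context(ca_cert, server_cert, server_key):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(server_cert, server_key)   # server's own identity
    ctx.load_verify_locations(ca_cert)             # trust the master CA
    ctx.verify_mode = ssl.CERT_REQUIRED            # client MUST present a cert
    return ctx

def make_client_context(ca_cert, client_cert, client_key):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_cert_chain(client_cert, client_key)   # client's own identity
    ctx.load_verify_locations(ca_cert)             # verify the server's cert
    return ctx
```

With `CERT_REQUIRED` on the server side and the default certificate verification on the client side, neither party completes the handshake without validating the other, matching the mutual-trust requirement stated above.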
[00249] The controller prints the tests out with a laser printer. The forms are then placed in the student unit (i.e. a computing or electronic device that connects to the web server).
[00250] Identifier codes are included in the forms within the system, so that the system can automatically identify who owns the form and what test it belongs to.
[00251] The system collects test results automatically. The system captures people's hand marked responses made in the checkboxes. The system also reads barcode or QR code data for form-page recognition and uses OMR to detect and capture pen marks made in the checkboxes. Capturing of handwritten text and drawings is also supported. The forms that cannot be collected automatically can be scanned and stored via the controller.
[00252] The controller pushes completed test results over the secure VPN connection to the cloud server.
[00253] The cloud server automatically marks multiple-choice OMR tests with the services provided. The system exports the captured data to a database/spreadsheet or other management software for further analysis. The system also calculates and provides statistics for the tests performed. Tests which are uniquely identified as belonging to a specific respondent will connect the captured data with the respondent's record. Tests which are not bound to a specific respondent will connect with a data table record using a captured ID number, or an automatically generated numeric record will be created in the database.
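The automatic marking and statistics steps above can be sketched as scoring captured answers against an answer key. The function and field names are assumptions for illustration, not the cloud server's actual services.

```python
# Sketch: score each respondent's captured OMR answers against an answer
# key, then compute simple statistics over the resulting scores.

def mark_test(answer_key, responses):
    """responses: {student_id: {question: chosen_option}} -> {student_id: score}."""
    scores = {}
    for student, answers in responses.items():
        scores[student] = sum(
            1 for q, correct in answer_key.items()
            if answers.get(q) == correct)
    return scores

def statistics(scores):
    values = list(scores.values())
    return {"mean": sum(values) / len(values),
            "max": max(values), "min": min(values)}

key = {1: "A", 2: "C", 3: "B"}
scores = mark_test(key, {"s1": {1: "A", 2: "C", 3: "D"},
                         "s2": {1: "A", 2: "C", 3: "B"}})
print(scores, statistics(scores))
```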
[00254] The cloud based management interface may provide one or more of the following functions:
• Publishing a test, where the test name and template information is stored in the server.
• Managing test forms and pages.
• Managing test results.
• Providing functions for enhancing forms with recognition errors. [00255] An interface may be provided to a customer's IT-system.
[00256] The system provides a secure, private and reliable service and is able to collect at least 90% of results automatically without the requirement for manual scanning.
[00257] The controller and Student Units are connected using an encrypted communication link over a local secure Wi-Fi connection. WPA2 mode with device MAC address filtering may also be used.
[00258] The controller provides for:
• Temporary storing of tests and test results in an encrypted file system.
• Connection to the cloud server using VPN.
• Connection to Student Units over Wi-Fi.
• Acts as a WPA2 Wi-Fi access point. [00259] The Student Unit provides for:
• Storage for test forms.
• Real-time scanning of the test results using pen module and camera.
• The necessary battery units.
• A simple display unit.
• User interface.
[00260] The cloud server provides functionality for:
• Publishing a test, where the test name and the associated template information are stored in the server.
• Managing test forms and pages.
• Managing test results. The cloud server processes the incoming images. When the processor finds a barcode or QR code, the server will detect the ID and look up the ID information in the database, read the form and capture the results from the form.
[00261] Providing functionality for enhancing forms with recognition errors.

Industrial Applicability
[00262] The arrangements described are applicable to the computer and data processing industries and particularly for the knowledge sharing and teaching industries.
[00263] For example, the herein described systems and methods may be used in the following areas: Tutoring Companies locally and globally; Remote Private Tutoring; Distance Education; Remote Language Tutoring; Arts and Design Tutoring; In Classroom Learning; Law-firm Document Annotation and Editing; Flipped Classrooms; On-line collaborations across multiple users. Further, the herein described systems and methods may be used in mining, construction, engineering, legal, healthcare, market research and many other areas where collaboration between two or more entities is required.
[00264] The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive. [00265] In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.

CLAIMS:
1. A server and computer implemented method for sharing physical writing actions, the method comprising the steps of:
detecting, at each of a plurality of computing devices associated with a meeting, one or more physical writing actions being performed on physical writing surfaces;
generating writing signals based on the physical writing actions;
transmitting the generated writing signals to a server;
forwarding, via the server, the writing signals for receipt at the plurality of computing devices associated with the meeting; and
each computing device outputting a representation of the physical writing actions.
2. The method of claim 1 further comprising the steps of:
detecting, at a first computing device, a first physical writing action being performed on a first physical writing surface;
generating a first writing signal based on the detected first physical writing action;
transmitting the first writing signal from the first computing device to a server;
detecting, at a second computing device, a second physical writing action being performed on a second physical writing surface;
generating a second writing signal based on the detected second physical writing action; transmitting the second writing signal from the second computing device to the server; forwarding, via the server, the first writing signal to the second computing device and forwarding, via the server, the second writing signal to the first computing device;
receiving the first writing signal at the second computing device;
receiving the second writing signal at the first computing device;
outputting a representation of both of the detected first and second physical writing actions from the first computing device and from the second computing device.
3. The method of claim 2, further comprising the step of:
outputting the first writing signal from the first computing device prior to transmitting the generated first writing signal to the server.
4. The method of claim 2, further comprising the step of:
outputting the second writing signal from the second computing device prior to transmitting the generated second writing signal to the server.
5. The method of claim 2, further comprising the step of:
displaying the representation in relation to a first virtual writing space that corresponds with the first physical writing surface.
6. The method of claim 2, further comprising the step of:
displaying the representation in relation to a second virtual writing space that corresponds with the second physical writing surface.
7. The method of claims 5 or 6, wherein the representation is displayed on the first virtual writing space in a position that corresponds with the position that the first physical writing action was originally performed on the first physical writing surface.
8. The method of claims 5 or 6, wherein the representation is displayed on the second virtual writing space in a position that corresponds with the position that the second physical writing action was originally performed on the second physical writing surface.
9. The method of claim 1, further comprising the steps of:
a first computing device receiving all the writing signals of the remaining computing devices, and outputting a representation of the physical writing actions based on its own generated writing signal and the received writing signals, and
one or more of the plurality of computing devices receiving the writing signals from the first computing device only, and outputting a representation of the physical writing actions based on the one or more of the plurality of computing devices' own generated writing signal and the writing signals received from the first computing device only.
10. The method of claim 1, wherein the server is located on a publicly accessible network.
11. The method of claim 1 , wherein the step of forwarding the writing signals further comprises the step of:
forwarding to each of the plurality of computing devices all writing signals that have been generated by all of the other computing devices.
12. The method of claim 11, wherein the writing signal generated by a particular computing device is not forwarded to that particular computing device.
13. The method of claim 1 , wherein the generated writing signals are transmitted to the server in real time.
14. The method of claim 1 further comprising the step of:
storing each of the writing signals at the server for retrieval by the plurality of computing devices after completion of the meeting.
15. The method of claim 1 , wherein the writing signals are part of a personalised workspace associated with each computing device associated with the meeting.
16. The method of claim 1, wherein the step of detecting one or more physical writing actions is performed by:
accessing an image generated by a camera on a computing device;
analysing the image to detect a first physical writing action; and
generating a first writing signal based on the analysis.
17. The method of claim 1 further comprising the step of the server recording the physical writing actions that occur in a meeting for a particular user.
18. The method of claim 17, further comprising the steps of: recording, at the server, all physical writing actions associated with a host of a meeting and attendees of the meeting; and
sending, to a computing device associated with the host of the meeting, all the recorded physical writing actions.
19. The method of claim 17, further comprising the steps of:
recording, at the server, all physical writing actions associated with a host of a meeting and attendees of the meeting; and
sending, to a computing device associated with a first attendee of the meeting, a combination of the host's recorded physical writing actions and the physical writing actions of the first attendee of the meeting to the first attendee, while excluding the physical writing actions of other attendees.
20. The method of claim 17 further comprising the steps of recording personalised audio signals in addition to the physical writing actions.
21. The method of claim 1 further comprising at least one computing device actively displaying writing signals generated by one or more other computing devices.
22. The method of claim 1 further comprising the steps of:
a first computing device allowing selective activation of one or more virtual writing spaces associated with one or more other computing devices,
and, upon activation,
displaying the physical writing actions associated with the computing devices that are associated with the activated virtual writing spaces.
23. The method of claim 22, wherein the physical writing actions are displayed in real time.
24. A server and computer system arranged to perform the methods of any one of claims 1 to 23.
25. A computer program product including a computer readable medium having recorded thereon a computer program for implementing any one of the methods of claims 1 to 23.
26. A server implemented method for sharing physical writing actions, the method comprising the steps of:
receiving, at a server, generated writing signals associated with a meeting from two or more computing devices, wherein the generated writing signals are associated with physical writing actions captured by the two or more computing devices; and
forwarding, from the server to the computing devices, the generated writing signals associated with the meeting to enable each computing device to output a representation of the physical writing actions.
27. The method of claim 26 further comprising the steps of:
receiving a first writing signal from a first computing device at a server, wherein the first writing signal is based on a first physical writing action being performed on a first physical writing surface detected at the first computing device;
receiving a second writing signal from a second computing device at the server, wherein the second writing signal is based on a second physical writing action being performed on a second physical writing surface detected at the second computing device; and
forwarding the first writing signal from the server to the second computing device and forwarding the second writing signal from the server to the first computing device, to enable the first computing device and the second computing device to output a representation of both of the detected first and second physical writing actions.
28. The method of claim 26, further comprising the steps of:
transmitting to a first computing device all the writing signals of the remaining computing devices to enable the first computing device to output a representation of the physical writing actions based on its own generated writing signal and the received writing signals, and
transmitting to one or more of the plurality of computing devices the writing signals from the first computing device only, to enable the one or more of the plurality of computing devices to output a representation of the physical writing actions based on the one or more of the plurality of computing devices' own generated writing signal and the writing signals received from the first computing device only.
29. The method of claim 26, wherein the server is located on a publicly accessible network.
30. The method of claim 26, wherein the step of forwarding the writing signals further comprises the step of:
forwarding to each of the plurality of computing devices all writing signals that have been generated by all of the other computing devices.
31. The method of claim 30, wherein the writing signal generated by a particular computing device is not forwarded to that particular computing device.
32. The method of claim 26, wherein the generated writing signals are received by the server in real time.
33. The method of claim 26 further comprising the step of:
storing each of the writing signals at the server for retrieval by the plurality of computing devices after completion of the meeting.
34. The method of claim 26, wherein the writing signals are part of a personalised workspace associated with each computing device associated with the meeting.
35. The method of claim 26 further comprising the step of the server recording the physical writing actions that occur in a meeting for a particular user.
36. The method of claim 26, further comprising the steps of:
recording, at the server, all physical writing actions associated with a host of a meeting and attendees of the meeting; and
sending, to a computing device associated with the host of meeting, all the recorded physical writing actions.
37. The method of claim 26, further comprising the steps of:
recording, at the server, all physical writing actions associated with a host of a meeting and attendees of the meeting; and
sending, to a computing device associated with a first attendee of the meeting, a combination of the host's recorded physical writing actions and the physical writing actions of the first attendee of the meeting to the first attendee, while excluding the physical writing actions of other attendees.
38. The method of claim 26 further comprising the steps of recording personalised audio signals in addition to the physical writing actions.
39. A server arranged to perform the methods of any one of claims 26 to 38.
40. A computer program product including a computer readable medium having recorded thereon a computer program for implementing any one of the methods of claims 26 to 38.
41. A computer implemented method for detecting a physical writing action, the method comprising the steps of:
accessing an image generated by a camera associated with a computing device;
analysing the image to detect a first physical writing action;
generating a first writing signal based on the analysis; and
outputting the first writing signal.
42. The method of claim 41 further comprising the steps of:
detecting within the image a plurality of edges of a writing surface, and defining a boundary of the writing surface based on the detected edges;
defining a plurality of areas within the defined boundary; and
analysing the image to detect a first physical writing action in one or more of the defined areas.
43. The method of claim 41 further comprising the steps of:
analysing the image to determine whether writing has occurred on a physical writing surface, and, upon a positive determination:
analysing the image to detect the writing tip of the writing implement performing the first physical writing action, and
generating the first writing signal based on the detection of the writing tip.
44. The method of claim 41 further comprising the step of analysing the image to detect a type of writing implement within the image.
45. The method of claim 44 further comprising the steps of:
accessing a database of writing implements,
comparing a sub-image of a detected writing implement with images of writing implements in the database, and
determining the type of writing implement based on the comparison.
46. The method of claim 45 further comprising the step of determining the location of a writing tip of the detected writing implement based on a database entry of the writing implement.
47. The method of claim 41 further comprising the steps of:
detecting movement of a writing tip of a writing implement in the image, and
generating the first writing signal based on the detection.
48. The method of claim 41 further comprising the step of storing the first writing signal either locally or externally.
49. The method of claim 41 further comprising the step of transferring the first writing signal to a server in real time.
50. A computer program product including a computer readable medium having recorded thereon a computer program for implementing the method of any one of claims 41 to 49.
51. A computing or electronic device arranged to perform the method of any one of claims 41 to 49.
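The detection flow recited in claims 41 to 47 (find the writing surface boundary, divide it into areas, and emit a writing signal when ink is detected in an area) can be sketched as follows. This is an illustrative sketch only, not the claimed implementation: the "edge detection" is a stand-in, and all names and the tiny demo image are assumptions.

```python
# Sketch of the claimed detection steps on a toy grayscale image
# (0 = black ink, 255 = paper), stored as a 2D list of pixel intensities.

def detect_boundary(image):
    """Stand-in edge detection: the surface spans the whole demo image."""
    h, w = len(image), len(image[0])
    return (0, 0, w, h)                       # x, y, width, height

def define_areas(boundary, rows=2, cols=2):
    """Divide the detected boundary into a grid of areas (claim 42)."""
    x, y, w, h = boundary
    return [(x + c * w // cols, y + r * h // rows, w // cols, h // rows)
            for r in range(rows) for c in range(cols)]

def detect_writing(image, areas, dark=128):
    """Report the areas containing ink-dark pixels (the 'writing signal')."""
    hits = []
    for ax, ay, aw, ah in areas:
        if any(image[y][x] < dark
               for y in range(ay, ay + ah) for x in range(ax, ax + aw)):
            hits.append((ax, ay, aw, ah))
    return hits

image = [[255] * 4 for _ in range(4)]
image[3][3] = 20                              # ink in the bottom-right area
areas = define_areas(detect_boundary(image))
print(detect_writing(image, areas))           # only the bottom-right area fires
```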
PCT/AU2016/000107 2015-03-27 2016-03-29 Improved systems and methods for sharing physical writing actions WO2016154660A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US15/562,380 US10915288B2 (en) 2015-03-27 2016-03-29 Systems and methods for sharing physical writing actions
AU2016240387A AU2016240387B2 (en) 2015-03-27 2016-03-29 Improved systems and methods for sharing physical writing actions
US17/247,656 US11614913B2 (en) 2015-03-27 2020-12-18 Systems and methods for sharing physical writing actions
AU2022200055A AU2022200055B2 (en) 2015-03-27 2022-01-06 Improved systems and methods for sharing physical writing actions

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2015901117 2015-03-27
AU2015901117A AU2015901117A0 (en) 2015-03-27 Improved systems and methods for sharing physical writing actions

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US15/562,380 A-371-Of-International US10915288B2 (en) 2015-03-27 2016-03-29 Systems and methods for sharing physical writing actions
US17/247,656 Continuation US11614913B2 (en) 2015-03-27 2020-12-18 Systems and methods for sharing physical writing actions

Publications (1)

Publication Number Publication Date
WO2016154660A1 true WO2016154660A1 (en) 2016-10-06

Family

ID=57003671

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2016/000107 WO2016154660A1 (en) 2015-03-27 2016-03-29 Improved systems and methods for sharing physical writing actions

Country Status (3)

Country Link
US (2) US10915288B2 (en)
AU (2) AU2016240387B2 (en)
WO (1) WO2016154660A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107172052A (en) * 2017-05-25 2017-09-15 苏州科达科技股份有限公司 A kind of authentication method and device for conference system
CN107485176A (en) * 2017-08-09 2017-12-19 安徽状元郎电子科技有限公司 A kind of VR classrooms of high Experience Degree
CN109146744A (en) * 2018-10-18 2019-01-04 贵州民族大学 Overturning Teaching System based on SPOC
WO2019097258A1 (en) * 2017-11-17 2019-05-23 Light Blue Optics Ltd Device authorization systems
CN109920290A (en) * 2017-12-13 2019-06-21 讯飞幻境(北京)科技有限公司 A kind of educational system based on virtual reality

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10657694B2 (en) 2012-10-15 2020-05-19 Tangible Play, Inc. Activity surface detection, display and enhancement of a virtual scene
US9158389B1 (en) 2012-10-15 2015-10-13 Tangible Play, Inc. Virtualization of tangible interface objects
US10915288B2 (en) 2015-03-27 2021-02-09 Inkerz Pty Ltd. Systems and methods for sharing physical writing actions
US10650813B2 (en) * 2017-05-25 2020-05-12 International Business Machines Corporation Analysis of content written on a board
GB2591902B (en) 2018-09-17 2022-06-08 Tangible Play Inc Display positioning system
CN111338811A (en) * 2019-02-13 2020-06-26 鸿合科技股份有限公司 User writing behavior analysis method, server, terminal, system and electronic device
MX2021014869A (en) * 2019-06-04 2022-05-03 Tangible Play Inc Virtualization of physical activity surface.
US11165597B1 (en) * 2021-01-28 2021-11-02 International Business Machines Corporation Differentiating attendees in a conference call
EP4141766A1 (en) * 2021-08-31 2023-03-01 Ricoh Company, Ltd. Information processing apparatus, meeting system, method and carrier means
CN115328382A (en) * 2022-08-16 2022-11-11 北京有竹居网络技术有限公司 Method, apparatus, device and medium for managing written content

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060206564A1 (en) * 2005-03-08 2006-09-14 Burns Roland J System and method for sharing notes
US20060209051A1 (en) * 2005-03-18 2006-09-21 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Electronic acquisition of a hand formed expression and a context of the expression
US20110310066A1 (en) * 2009-03-02 2011-12-22 Anoto Ab Digital pen
US20120231441A1 (en) * 2009-09-03 2012-09-13 Coaxis Services Inc. System and method for virtual content collaboration
US8402391B1 (en) * 2008-09-25 2013-03-19 Apple, Inc. Collaboration system
US20140267081A1 (en) * 2013-03-15 2014-09-18 Research In Motion Limited Shared document editing and voting using active stylus based touch-sensitive displays

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3402391A (en) 1965-04-19 1968-09-17 A V Electronics Inc Continuous control using alternating or direct current via a single conductor of plural functions at a remote station
WO2001052230A1 (en) * 2000-01-10 2001-07-19 Ic Tech, Inc. Method and system for interacting with a display
US6661409B2 (en) * 2001-08-22 2003-12-09 Motorola, Inc. Automatically scrolling handwritten input user interface for personal digital assistants and the like
US20030163525A1 (en) * 2002-02-22 2003-08-28 International Business Machines Corporation Ink instant messaging with active message annotation
US7567239B2 (en) * 2003-06-26 2009-07-28 Motorola, Inc. Method and system for message and note composition on small screen devices
US20070100952A1 (en) * 2005-10-27 2007-05-03 Yen-Fu Chen Systems, methods, and media for playback of instant messaging session history
US20090193327A1 (en) * 2008-01-30 2009-07-30 Microsoft Corporation High-fidelity scalable annotations
JP4385169B1 (en) 2008-11-25 2009-12-16 健治 吉田 Handwriting input / output system, handwriting input sheet, information input system, information input auxiliary sheet
US8760416B2 (en) * 2009-10-02 2014-06-24 Dedo Interactive, Inc. Universal touch input driver
JP2012005107A (en) * 2010-05-17 2012-01-05 Ricoh Co Ltd Multi-base drawing image sharing apparatus, multi-base drawing image sharing system, method, program and recording medium
US10965480B2 (en) * 2011-09-14 2021-03-30 Barco N.V. Electronic tool and methods for recording a meeting
US10050800B2 (en) * 2011-09-14 2018-08-14 Barco N.V. Electronic tool and methods for meetings for providing connection to a communications network
JP6051549B2 (en) * 2012-03-16 2016-12-27 株式会社リコー Communication control system, control device, program and electronic information board
US9354725B2 (en) * 2012-06-01 2016-05-31 New York University Tracking movement of a writing instrument on a general surface
US9111258B2 (en) * 2012-10-25 2015-08-18 Microsoft Technology Licensing, Llc Connecting to meetings with barcodes or other watermarks on meeting content
US20140165152A1 (en) * 2012-12-11 2014-06-12 Microsoft Corporation Whiteboard records accessibility
JP6142580B2 (en) * 2013-03-07 2017-06-07 株式会社リコー Information processing system, information registration method, conference apparatus, and program
JP2015069284A (en) * 2013-09-27 2015-04-13 株式会社リコー Image processing apparatus
JP6451276B2 (en) * 2014-12-10 2019-01-16 株式会社リコー Image management system, communication terminal, communication system, image management method, and program
US10915288B2 (en) 2015-03-27 2021-02-09 Inkerz Pty Ltd. Systems and methods for sharing physical writing actions
CN105182676B (en) * 2015-09-08 2017-03-01 京东方科技集团股份有限公司 Projection screen, touch screen method for displaying projection and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060206564A1 (en) * 2005-03-08 2006-09-14 Burns Roland J System and method for sharing notes
US20060209051A1 (en) * 2005-03-18 2006-09-21 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Electronic acquisition of a hand formed expression and a context of the expression
US8402391B1 (en) * 2008-09-25 2013-03-19 Apple, Inc. Collaboration system
US20110310066A1 (en) * 2009-03-02 2011-12-22 Anoto Ab Digital pen
US20120231441A1 (en) * 2009-09-03 2012-09-13 Coaxis Services Inc. System and method for virtual content collaboration
US20140267081A1 (en) * 2013-03-15 2014-09-18 Research In Motion Limited Shared document editing and voting using active stylus based touch-sensitive displays

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107172052A (en) * 2017-05-25 2017-09-15 苏州科达科技股份有限公司 Authentication method and device for a conference system
CN107485176A (en) * 2017-08-09 2017-12-19 安徽状元郎电子科技有限公司 High-immersion VR classroom
WO2019097258A1 (en) * 2017-11-17 2019-05-23 Light Blue Optics Ltd Device authorization systems
US11729165B2 (en) 2017-11-17 2023-08-15 Plantronics, Inc. Device authorization systems
CN109920290A (en) * 2017-12-13 2019-06-21 讯飞幻境(北京)科技有限公司 Educational system based on virtual reality
CN109146744A (en) * 2018-10-18 2019-01-04 贵州民族大学 SPOC-based flipped classroom teaching system
CN109146744B (en) * 2018-10-18 2021-06-25 贵州民族大学 SPOC-based flipped classroom teaching system

Also Published As

Publication number Publication date
US20210240429A1 (en) 2021-08-05
AU2016240387A1 (en) 2017-11-16
US20180284907A1 (en) 2018-10-04
AU2022200055A1 (en) 2022-02-03
US11614913B2 (en) 2023-03-28
AU2016240387B2 (en) 2021-10-07
US10915288B2 (en) 2021-02-09
AU2022200055B2 (en) 2023-07-27

Similar Documents

Publication Publication Date Title
US11614913B2 (en) Systems and methods for sharing physical writing actions
US10908802B1 (en) Collaborative, social online education and whiteboard techniques
US10747418B2 (en) Frictionless interface for virtual collaboration, communication and cloud computing
US10033774B2 (en) Multi-user and multi-device collaboration
US10404943B1 (en) Bandwidth reduction in video conference group sessions
US20130091440A1 (en) Workspace Collaboration Via a Wall-Type Computing Device
US11288031B2 (en) Information processing apparatus, information processing method, and information processing system
US20120204120A1 (en) Systems and methods for conducting and replaying virtual meetings
US9992243B2 (en) Video conference application for detecting conference presenters by search parameters of facial or voice features, dynamically or manually configuring presentation templates based on the search parameters and altering the templates to a slideshow
US11562657B1 (en) Queuing for a video conference session
US20120042265A1 (en) Information Processing Device, Information Processing Method, Computer Program, and Content Display System
US11388173B2 (en) Meeting join for meeting device
CN104035565A (en) Input method, input device, auxiliary input method and auxiliary input system
US9024974B2 (en) Augmented reality system, apparatus and method
JP2020161118A (en) Information processing apparatus, information processing method, and information processing system
US11349888B2 (en) Text data transmission-reception system, shared terminal, and method of processing information
WO2012109006A2 (en) Systems and methods for conducting and replaying virtual meetings
JP2020198078A (en) Information processing apparatus, information processing system, and information processing method
CN110720179A (en) Method and system for watermarking video media to track video distribution
TWI809604B (en) Video conference device and operation method thereof
Bill Hundson Web-based image-Augmented Reality (AR) matching generator
JP2021039617A (en) Information processing system, information processing device, image display method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16771081

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15562380

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2016240387

Country of ref document: AU

Date of ref document: 20160329

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 16771081

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.04.2018)

122 Ep: pct application non-entry in european phase

Ref document number: 16771081

Country of ref document: EP

Kind code of ref document: A1