WO2019055952A1 - Integrated document editor - Google Patents
Integrated document editor
- Publication number
- WO2019055952A1 (PCT/US2018/051400)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- command
- graphic object
- computing device
- memory
- parameter
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/166—Editing, e.g. inserting or deleting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/32—Digital ink
- G06V30/333—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/32—Digital ink
- G06V30/36—Matching; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04806—Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
Definitions
- the disclosed embodiments relate to document creation and editing. More specifically, the disclosed embodiments relate to integration of recognition of information entry with document creation. Handwritten data entry into computer programs is known. The most widespread use has been in personal digital assistant devices. Handwritten input to devices using keyboards is not widespread for various reasons. For example, character transcription and recognition are relatively slow, and there are as yet no widely accepted standards for character or command input.
- a digitizing recognizer such as a digitizing pad, a touch screen or other positional input receiving mechanism as part of a display.
- a unit of data is inserted by means of a writing pen or like scribing tool and accepted for placement at a designated location, correlating x-y location of the writing pen to the actual location in the document, or accessing locations in the document memory by emulating keyboard keystrokes (or by the running of code/programs).
- the entered data is recognized as legible text with optionally embedded edit or other commands, and it is converted to machine-readable format. Otherwise, the data is recognized as graphics (for applications that accommodate graphics) and accepted into an associated image frame. Combinations of data, in text or in graphics form, may be concurrently recognized.
- there is a window of error in location of the writing tool after initial invocation of the data entry mode so that actual placement of the tool is not critical, since the input of data is correlated by the initial x-y location of the writing pen to the actual location in the document.
- there is an allowed error as a function of the pen's location within the document (i.e., with respect to the surrounding data).
- handwritten symbols selected from a basic set common to various application programs may be entered and the corresponding commands may be executed.
- a basic set of handwritten symbols and/or commands that are not application-dependent and that may be user-intuitive are applied. This handwritten command set allows for the making of revisions and creating documents without having prior knowledge of commands for a specific application.
- the disclosed embodiments may be implemented when the user invokes a Comments Mode at a designated location in a document; the handwritten information may then be entered via the input device into the native Comments field, whereupon it is converted either to text or an image, or to command data to be executed, with a handwriting recognizer operating either concurrently or after completion of entry of a unit of the handwritten information.
- Information recognized as text is then converted to ciphers and imported into the main body of the text, either automatically or upon a separate command.
- Information recognized as graphics is then converted to image data, such as a native graphics format or a JPEG image, and imported into the main body of the text at the designated point, either automatically or upon a separate command.
- Information interpreted as commands can be executed, such as editing commands, which control addition, deletion or movement of text within the document, as well as font type or size change or color change.
- the disclosed embodiments may be incorporated as a plug-in module for the word processor program and invoked as part of the system, such as the use of a macro or as invoked through the Track Changes feature.
- the user may manually indicate, prior to invoking the recognition mode, the nature of the input, i.e., whether the input is text, graphics or a command. Recognition can be further improved by providing a step-by-step protocol prompted by the program for setting up preferred symbols and for learning the handwriting patterns of the user.
- a computing device includes a memory and a touch screen including a display medium for displaying a representation of at least one graphic object stored in the memory, the graphic object having at least one parameter stored in the memory, and a surface for determining an indication of a change to the at least one parameter, wherein, in response to indicating the change, the computing device is configured to automatically change the at least one parameter in the memory and automatically change the representation of the at least one graphic object in the memory, and wherein the display medium is configured to display the changed representation of the at least one graphic object with the changed parameter.
- a method includes displaying, on a display medium of a computing device, a representation of at least one graphic object stored in a memory, each graphic object having at least one parameter stored in the memory, indicating a change to the at least one parameter, and in response to indicating the change, automatically changing the at least one parameter in the memory and automatically changing the representation of the at least one graphic object in the memory, and displaying the changed representation of the at least one graphic object on the display medium.
- Figure 1 is a block schematic diagram illustrating basic functional blocks and data flow according to one embodiment of the disclosed embodiments.
- Figure 2 is a flow chart of an interrupt handler that reads handwritten information in response to writing pen taps on a writing surface.
- Figure 3 is a flow chart of a polling technique for reading handwritten information.
- Figure 4 is a flow chart of operation according to a representative embodiment of the disclosed embodiments wherein handwritten information is incorporated into the document after all handwritten information is concluded.
- Figure 5 is a flow chart of operation according to a representative embodiment of the disclosed embodiments, wherein handwritten information is incorporated into the document concurrently during input.
- Figure 6 is an illustration example of options available for displaying handwritten information during various steps in the process according to the disclosed embodiments.
- Figure 7 is an illustration of samples of handwritten symbols / commands and their associated meanings.
- Figure 8 is a listing that provides generic routines for each of the first 3 symbol operations illustrated in Figure 7.
- Figure 9 is an illustration of data flow for data received from a recognition functionality element processed and defined in an RHI memory.
- Figure 10 is an example of a memory block format of the RHI memory suitable for storing data associated with one handwritten command.
- Figure 11 is an example of data flow of the embedded element of Figure 1 and Figure 38 according to the first embodiment illustrating the emulating of keyboard keystrokes.
- Figure 12 is a flow chart representing subroutine D of Figure 4 and Figure 5 according to the first embodiment using techniques to emulate keyboard keystrokes.
- Figure 13 is an example of data flow of the embedded element of Figure 1 and Figure 38 according to the second embodiment illustrating the running of programs.
- Figure 14 is a flow chart representing subroutine D of Figure 4 and Figure 5 according to the second embodiment illustrating the running of programs.
- Figure 15 through Figure 20 are flow charts of subroutine H referenced in Figure 12 for the first three symbol operations illustrated in Figure 7 and according to the generic routines illustrated in Figure 8.
- Figure 21 is a flow chart of subroutine L referenced in Figure 4 and Figure 5 for concluding the embedding of revisions for a Microsoft® Word type document, according to the first embodiment using techniques to emulate keyboard keystrokes.
- Figure 22 is a flow chart of an alternative to subroutine L of Figure 21 for concluding revisions for MS Word type document.
- Figure 23 is a sample flow chart of the subroutine I referenced in Figure 12 for copying a recognized image from the RHI memory and placing it in the document memory via a clipboard.
- Figure 24 is a sample of code for subroutine N referenced in Figure 23 and Figure 37, for copying an image from the RHI memory into the clipboard.
- Figure 25 is a sample of translated Visual Basic code for built-in macros referenced in the flow charts of Figure 26 to Figure 32 and Figure 37.
- Figure 26 through Figure 32 are flow charts of subroutine J referenced in Figure 14 for the first three symbol operations illustrated in Figure 7 and according to the generic routines illustrated in Figure 8 for MS Word.
- Figure 33 is a sample of code in Visual Basic for the subroutine M referenced in Figure 4 and Figure 5, for concluding embedding of the revisions for MS Word, according to the second embodiment using the running of programs.
- Figure 34 is a sample of translated Visual Basic code for useful built-in macros in comment mode for MS Word.
- Figure 35 provides examples of recorded macros translated into Visual Basic code that emulates some keyboard keys for MS Word.
- Figure 36 is a flow chart of a process for checking if a handwritten character to be emulated as a keyboard keystroke exists in a table and thus can be emulated and, if so, for executing the relevant line of code that emulates the keystroke.
- Figure 37 is a flow chart of an example for subroutine K in Figure 14 for copying a recognized image from RHI memory and placing it in the document memory via the clipboard.
- Figure 38 is an alternate block schematic diagram to the one illustrated in Figure 1, illustrating basic functional blocks and data flow according to another embodiment of the disclosed embodiments, using a touch screen.
- Figure 39 is a schematic diagram of an integrated edited document made with the use of a wireless pad.
- Figures 40A-40D illustrate an example of user interaction with the touch screen to Insert a line.
- Figures 41A-41C illustrate an example of use of the command to delete an object.
- Figures 42A-42D illustrate an example of user interaction with the touch screen to change line length.
- Figures 43A-43D illustrate an example of user interaction with the touch screen to change line angle.
- Figures 44A-44D illustrate an example of user interaction with the touch screen to apply a radius to a line or to change the radius of an arc.
- Figures 45A-45C illustrate an example of user interaction with the touch screen to make a line parallel to another line.
- Figures 46A-46D illustrate an example of user interaction with the touch screen to add a fillet or an arc to an object.
- Figures 47A-47D illustrate an example of user interaction with the touch screen to add a chamfer.
- Figures 48A-48F illustrate an example of use of the command to trim an object.
- Figures 49A-49D illustrate an example of user interaction with the touch screen to move an arced object.
- Figures 50A-50D illustrate an example of use of the "no snap" command.
- Figures 51A-51D illustrate another example of use of the 'No Snap' command.
- Figures 52A-52D illustrate another example of use of the command to trim an object.
- Figure 53 is an example of a user interface with icons.
- Figures 54A-54B illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a cube on the touch screen.
- Figures 54C-54D illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a sphere on the touch screen.
- Figures 54E-54F illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a ramp on the touch screen.
- Figures 55A-55B illustrate examples of user interface menus for text editing in selection mode.
- Figure 56 illustrates an example of a gesture to mark text in command mode.
- Figure 57 illustrates another example of a gesture to mark text in command mode.
- Figures 58A-58B illustrate an example of automatically zooming a text while drawing the gesture to mark text.
- referring to FIG. 1, there is a block schematic diagram of an integrated document editor 10 according to a first embodiment, which illustrates the basic functional blocks and data flow according to that first embodiment.
- a digitizing pad 12 is used, with its writing area (e.g., within the margins of an 8-1/2" x 11" sheet) accommodating standard-sized papers and corresponding to the x-y locations of the edited page.
- Pad 12 receives data from a writing pen 10 (e.g., magnetically, or mechanically by way of pressure with a standard pen).
- Data from the digitizing pad 12 is read by a data receiver 14 as bitmap and/or vector data and then stored corresponding to or referencing the appropriate x-y location in a data receiving memory 16.
- this information can be displayed on the screen of a display 25 on a real-time basis to provide the writer with real-time feedback.
- a touch screen 11 (or other positional input receiving mechanism as part of a display), with its receiving and displaying mechanisms integrated, receives data from the writing pen 10, whereby the original document is displayed on the touch screen as it would have been displayed on a printed page placed on the digitizing pad 12, and the writing by the pen 10 occurs on the touch screen at the same locations as it would have been written on a printed page.
- the display 25, pad 12 and data receiver 14 of Figure 1 are replaced with element 11, the touch screen and associated electronics of Figure 38, and elements 16, 18, 20, 22, and 24 are discussed hereunder with reference to Figure 1.
- writing paper is eliminated.
- the touch screen 11 may generate a signal, such as a beeping sound, requesting the user to tap closer to the point where handwritten information needs to be inserted. If the ambiguity is still not resolved (when the digitizing pad 12 is used), the user may be requested to follow an adjustment procedure.
- the writing area on the digitizing pad 12 will be set to correspond to a specific active window (for example, in a multi-window screen), or to a portion of a window (i.e., when the active portion of a window covers part of the screen, e.g., an invoice or a bill of the accounting program QuickBooks), such that the writing area of the digitizing pad 12 is efficiently utilized.
- when a document is a form (e.g., an order form), the paper document can be pre-set to the specific format of the form, such that the handwritten information can be entered at specific fields of the form (that correspond to these fields in the document memory 22).
- handwritten information on the digitizing pad 12 may be deleted after it is integrated into the document memory 22.
- multi-use media that allow multiple deletions can be used, although the touch screen alternative would be preferred over this alternative.
- recognition functionality element 18 reads information from the data receiving memory 16 and writes the recognition results, or recognized handwritten elements, into the recognized handwritten information (RHI) memory 20. Recognized handwritten information (RHI) elements, such as characters, words, and symbols, are stored in the RHI memory 20.
- each RHI element in the RHI memory 20 correlates to its location in the data receiving memory 16 and in the document memory 22.
- symbols may be stored as images or icons in, for example, JPEG format (or they can be emulated as if they were keyboard keys, a technique discussed hereafter). Since the symbols are intended to be intuitive, they can be useful for reviewing and interpreting revisions in the document.
- the recognized handwritten information may be displayed prior to final incorporation, e.g., as revisions for review.
- embedded criteria and functionality element 24 reads the information from the RHI memory 20 and embeds it into the document memory 22.
- Information in the document memory 22 is displayed on the display 25, which is for example a computer monitor or a display of a touch screen.
- the embedded functionality determines what to display and what is to be embedded into the document memory 22 based on the stage of the revision and selected user criteria/preferences. Embedding the recognized information into the document memory 22 can be applied either concurrently or after input of all handwritten information (such as revisions) has been concluded. Incorporation of the handwritten information concurrently can occur with or without user involvement.
- the document memory 22 contains, for example, one of the following files: 1) a word processing file, such as an MS Word file or a WordPerfect file, 2) a spreadsheet, such as an Excel file, 3) a form, such as a sales order, an invoice or a bill in accounting software (e.g., QuickBooks), 4) a table or a database, 5) a desktop publishing file, such as a QuarkXPress or a PageMaker file, or 6) a presentation file, such as an MS PowerPoint file.
- the document could be any kind of electronic file, word processing document, spreadsheet, web page, form, e-mail, database, table, template, chart, graph, image, object, or any portion of these types of documents, such as a block of text or a unit of data.
- the document memory 22, the data receiving memory 16 and the RHI memory 20 could be any kind of memory or memory device or a portion of a memory device, e.g., any type of RAM, magnetic disk, CD-ROM, DVD-ROM, optical disk or any other type of storage.
- the elements/components discussed herein may be implemented in any combination of electronic or computer hardware and/or software.
- the disclosed embodiments could be implemented in software operating on a general-purpose computer or other types of computing/communication devices, such as hand-held computers, personal digital assistants (PDAs), cell phones, etc.
- a general-purpose computer may be interfaced with specialized hardware such as an Application Specific Integrated Circuit (ASIC) or some other electronic components to implement the disclosed embodiments.
- the disclosed embodiments may be carried out using various codes of one or more software modules forming a program and executed as instructions/data by, e.g., a central processing unit, or using hardware modules specifically configured and dedicated to perform the disclosed embodiments.
- the disclosed embodiments may be carried out using a combination of software and hardware modules.
- the recognition functionality element 18 encompasses one or more of the following recognition approaches:
- Units that could not be recognized as a character, word or symbol are interpreted as images, if the application accommodates graphics (and, optionally, if approved by the user as graphics), and stored into the RHI memory 20 as graphics. It should be noted that units that could not be recognized as a character, word or symbol may not be interpreted as graphics in applications that do not accommodate graphics (e.g., Excel); in this scenario, user involvement may be required.
- data may be read from the document memory 22 by the recognition element 18 to verify that the recognized handwritten information does not conflict with data in the original document and to resolve or minimize, as much as possible, ambiguity retained in the recognized information.
- the user may also resolve ambiguity by approving/disapproving recognized handwritten information (e.g., revisions) shown on the display 25.
- adaptive algorithms (beyond the scope of this disclosure) may be employed. Thereunder, user involvement may be relatively significant at first, but as the adaptive algorithms learn the specific handwritten patterns and store them as historical patterns, future ambiguities should be minimized as recognition becomes more robust.
- Figure 2 through Figure 5 are flow charts of operation according to an exemplary embodiment and are briefly explained herein below. The text in all of the drawings is herewith explicitly incorporated into this written description for the purposes of claim support.
- Figure 2 illustrates a program that reads the output of the digitizing pad 12 (or of the touch screen 11) each time the writing pen 10 taps on and/or leaves the writing surface of the pad 12 (or of the touch screen 11). Thereafter, data is stored in the data receiving memory 16 (Step E). Both the recognition element and the data receiver (or the touch screen) access the data receiving memory. Therefore, during a read/write cycle by one element, access by the other element should be disabled.
- the program checks every few milliseconds to see if there is new data to read from the digitizing pad 12 (or from the touch screen 11). If so, data is received from the digitizing recognizer and stored in the data receiving memory 16 (E). This process continues until the user indicates that the revisions are concluded, or until there is a timeout.
- Embedding of the handwritten information may be executed either all at once according to procedures explained with Figure 4, or concurrently according to procedures explained with Figure 5.
- the recognition element 18 recognizes one unit at a time, e.g., a character, a word, a graphic or a symbol, and makes it available to the RHI processor and memory 20 (C).
- This processor and the way in which it stores recognized units into the RHI memory will be discussed hereafter with reference to Figure 9.
- Units that are not recognized immediately are either dealt with at the end as graphics, or the user may indicate otherwise manually by other means, such as a selection table or keyboard input (F).
- graphics are interpreted as graphics if the user indicates when the writing of graphics begins and when it is concluded.
- each memory block contains all (as in Figure 4) or possibly partial (as in Figure 5) recognized information that is related to one handwritten command, e.g., a revision.
- the embedded function (D) then embeds the recognized handwritten information (e.g., revisions) in "for review" mode.
- once the user approves/disapproves revisions, they are embedded in final mode (L) according to the preferences set up (A) by the user.
- revisions in MS Word are embedded in Track Changes mode all at once.
- embedding revisions in MS Word all at once may, for example, be useful when the digitizing pad 12 is separate from the rest of the system, whereby handwritten information from the digitizing pad's internal memory may be downloaded into the data receiving memory 16 after the revisions are concluded, via a USB or other IEEE or ANSI standard port.
- FIG. 4 is a flow chart of the various steps, whereby embedding "all" recognized handwritten information (such as revisions) into the document memory 22 is executed once "all" handwritten information is concluded.
- the Document Type is set up (e.g., Microsoft® Word or QuarkXPress), along with the software version and user preferences (e.g., whether to incorporate revisions as they are available or one at a time upon user approval/disapproval), and the various symbols preferred by the user for the various commands (such as for inserting text, for deleting text and for moving text around) (A).
- the handwritten information is read from the data receiving memory 16 and stored in the memory of the recognition element 18 (B). Information that is read from the receiving memory 16 is marked/flagged as read, or it is erased after it is read by the recognition element 18 and stored in its memory; this will ensure that only new data is read by the recognition element 18.
- FIG. 5 is a flow chart of the various steps whereby embedding recognized handwritten information (e.g., revisions) into the document memory 22 is executed concurrently (e.g., with the making of the revisions). Steps 1 - 3 are identical to the steps of the flow chart in Figure 4 (discussed above). Once a unit, such as a character, a symbol or a word is recognized, it is processed by the RHI processor 20 and stored in the RHI memory. A processor (GMB functionality 30 referenced in Figure 9) identifies it as either a unit that can be embedded immediately or not.
- it is checked whether the unit can be embedded (step 4.3); if it can be (step 5), it is embedded (D) and then (step 6) deleted or marked/updated as embedded (G). If it cannot be embedded (step 4.1), more information is read from the digitizing pad 12 (or from the touch screen 11). This process of steps 4 - 6 repeats and continues so long as handwritten information is forthcoming. Once all data is embedded (indicated by an End command or a simple timeout), units that could not be recognized are dealt with (F) in the same manner discussed for the flow chart of Figure 4. Finally, once the user approves/disapproves revisions, they are embedded in final mode (L) according to the preferences chosen by the user.
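- by way of illustration only, the concurrent flow of steps 4 - 6 can be pictured with the following Visual Basic sketch; every routine called here is a hypothetical stand-in for the corresponding block (B, C, D, G) of the flow chart, not code from the disclosure:

```vb
' Sketch only: embed recognized units concurrently as handwriting is forthcoming.
Sub ProcessHandwritingConcurrently()
    Do
        ReadFromDataReceivingMemory        ' (B) read new data from the pad / touch screen
        RecognizeNextUnit                  ' (C) recognize a character, word or symbol
        If BlockReadyToEmbed() Then        ' step 4.3: "Identity" flag set to "One"
            EmbedMemoryBlock               ' (D) embed into the document memory 22
            MarkBlockAsEmbedded            ' (G) step 6: delete or mark as embedded
        End If
    Loop Until EndCommandReceived() Or TimedOut()
End Sub
```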
- Figure 6 is an example of the various options and preferences available to the user to display the handwritten information in the various steps for MS Word.
- in "For Review" mode, the revisions are displayed as "For Review" pending approval for "Final" incorporation.
- Revisions for example, can be embedded in a "Track Changes” mode, and once approved/disapproved (as in "Accept/Reject changes"), they are embedded into the document memory 22 as "Final”.
- symbols may be also displayed on the display 25. The symbols are selectively chosen to be intuitive, and, therefore, can be useful for quick review of revisions.
- text revisions may be displayed either in handwriting as is, or as machine-encoded text for improved readability; in "Final" mode, all the symbols are erased, and the revisions are incorporated as an integral part of the document.
- Embodiment One: Emulating Keyboard Entries
- Command information in the RHI memory 20 is used to insert or revise data, such as text or images in designated locations in the document memory 22, wherein the execution mechanisms emulate keyboard keystrokes, and when available, operate in conjunction with running pre-recorded and/or built-in macros assigned to sequences of keystrokes (i.e., shortcut keys).
- Data such as text can be copied from the RHI memory 20 to the clipboard and then pasted into designated locations in the document memory 22, or it can be emulated as keyboard keystrokes. This embodiment will be discussed hereafter.
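- as an illustrative sketch only (not the disclosed implementation), this step could look as follows in Visual Basic for MS Word, assuming a reference to the Microsoft Forms 2.0 Object Library for clipboard access; the routine name is hypothetical:

```vb
' Sketch only: insert recognized text either via the clipboard or as emulated keystrokes.
Sub InsertRecognizedText(ByVal recognizedText As String)
    ' Option A: place the text on the clipboard and paste it at the insertion point
    ' (equivalent to emulating the keystroke sequence Cntrl-V).
    Dim clip As MSForms.DataObject
    Set clip = New MSForms.DataObject
    clip.SetText recognizedText
    clip.PutInClipboard
    Selection.Paste

    ' Option B: emulate the text as keyboard keystrokes typed at the insertion point.
    ' Selection.TypeText Text:=recognizedText
End Sub
```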
- Embodiment Two: Running Programs
- the commands and their associated data stored in the RHI memory 20 are translated to programs that embed them into the document memory 22 as intended.
- the operating system clipboard can be used as a buffer for data (e.g., text and images). This embodiment will also be discussed hereafter.
- In both Embodiment One and Embodiment Two, the information associated with a handwritten command is either text or graphics (an image), although it could be a combination of text and graphics.
- the clipboard can be used as a buffer.
- Embodiment One is useful in a large array of applications, with or without programming capabilities, to execute commands, relying merely on control keys and, when available, built-in or pre-recorded macros.
- once a control key (such as Arrow Up) or a simultaneous combination of keys (such as Cntrl-C) is emulated, a command is executed.
- macros used in Embodiment One cannot be run in Embodiment Two unless translated to actual low-level programming code (e.g., Visual Basic code).
- running a macro in a control language native to the application (recorded and/or built-in) in Embodiment One is simply achieved by emulating its assigned shortcut key(s).
- Embodiment Two may be preferred over Embodiment One, for example in MS Word, if a Visual Basic Editor is used to create code that includes Visual Basic instructions that cannot be recorded as macros.
- Embodiment Two may be used in conjunction with Embodiment One, whereby, for example, instead of moving text from the RHI memory 20 to the clipboard and then placing it in a designated location in the document memory 22, text is emulated as keyboard keystrokes. If desired, the keyboard keys can be emulated in Embodiment Two by writing code for each key that, when executed, emulates a keystroke.
- Embodiment One may be implemented for applications with no programming capabilities, such as QuarkXPress, and Embodiment Two may be implemented for some of the applications that do have programming capabilities.
- x-y locations in the data receiving memory 16 can be identified on a printout or on the display 25, and if desired, on the touch screen 11, based on: 1) recognition/identification of a unique text and/or image representation around the writing pen, and 2) searching for and matching the recognized/identified data around the pen with data in the original document, which may be converted into the bitmap and/or vector format identical to the format in which handwritten information is stored in the data receiving memory 16. Then the handwritten information, along with its x-y locations correspondingly indexed in the document memory 22, is transmitted to a remote platform for recognition, embedding and displaying.
- the data representation around the writing pen and the handwritten information are read by a miniature camera with attached circuitry built into the pen.
- the data representing the original data in the document memory 22 is downloaded into the pen's internal memory prior to the commencement of handwriting, either via a wireless connection (e.g., Bluetooth) or via a physical connection (e.g., a USB port).
- the handwritten information along with its identified x-y locations is either downloaded into the data receiving memory 16 of the remote platform after the handwritten information is concluded (via physical or wireless link), or it can be transmitted to the remote platform via wireless link as the x-y location of the handwritten information is identified. Then, the handwritten information is embedded into the document memory 22 all at once (i.e., according to the flow chart illustrated in Figure 4), or concurrently (i.e., according to the flow chart illustrated in Figure 5).
- the display 25 may include pre-set patterns (e.g., engraved or silk-screened) throughout the display or at selected location of the display, such that when read by the camera of the pen, the exact x-y location on the display 25 can be determined.
- the pre-set patterns on the display 25 can be useful to resolve ambiguities, for example when the identical information around locations in the document memory 22 exists multiple times within the document.
- the tapping of the pen in selected locations of the touch screen 11 can be used to determine the x-y location in the document memory (e.g., when the user makes yes-no type selections within a form displayed on the touch screen). This, for example, can be performed on a tablet that can accept input from a pen or any other pointing device that functions as a mouse and a writing instrument.
- the writing pen can emit a focused laser/IR beam to a screen with thermal or optical sensing, and the location of the sensed beam may be used to identify the x-y location on the screen.
- the use of a pen with a built-in miniature camera is not needed.
- the designated x-y location in the document memory 22 can be determined based on: 1) the detected x-y location of the pen 10 on the screen, and 2) parameters that correlate between the displayed data and the data in the document memory 22 (e.g., application name, cursor location on the screen and zoom percent).
- the mouse could be emulated to place the insertion point at designated locations in the document memory 22 based on the x-y locations indicated in the data receiving memory 16. Then information from the RHI memory 20 can be embedded into the document memory 22 according to Embodiment One or Embodiment Two. Further, once the insertion point is at a designated location in the document memory 22, selection of text or an image within the document memory 22 may also be achieved by emulating the mouse pointer click operation.
- the document type is identified and user preferences are set (A).
- the user may select to display revisions using the Track Changes feature.
- the Track Changes Mode of Microsoft® Word (or similar features in other applications) can be invoked by the user or automatically in conjunction with either or both Embodiment One and Embodiment Two, and then handwritten information from the RHI memory 20 can be embedded into the document memory 22.
- the insertion mechanism may also be a plug-in that emulates the Track Changes feature.
- the Track Changes Feature may be invoked after the Comments Feature is invoked such that revisions in the Comments fields are displayed as revisions, i.e., "For Review". This could in particular be useful for large documents reviewed/revised by multiple parties.
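- as a simplified illustration in the MS Word object model (routine names are hypothetical, not the disclosed code), Track Changes can be turned on before recognized revisions are embedded and the revisions later accepted for "Final" incorporation:

```vb
' Sketch only: embed recognized text "For Review" and later conclude it as "Final".
Sub EmbedForReview(ByVal revisionText As String)
    ActiveDocument.TrackRevisions = True      ' invoke the Track Changes feature
    Selection.TypeText Text:=revisionText     ' the revision is shown as a tracked change
End Sub

Sub ConcludeAsFinal()
    ActiveDocument.AcceptAllRevisions         ' equivalent to "Accept All Changes"
    ActiveDocument.TrackRevisions = False
End Sub
```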
- the original document is read and converted into a document with known accessible format (e.g., ASCII for text and JPEG for graphics) and stored into an intermediate memory location. All read/write operations are performed directly on it. Once revisions are completed, or before transmitting to another platform, it can be converted back into the original format and stored into the document memory 22.
- the revisions can be made on blank paper (or on another document), whereby the handwritten information, for example, is a command (or a set of commands) to write or revise a value/number in a cell of a spreadsheet, or to update new information in a specific location of a database; this can be useful, for example, in cases where an action to update a spreadsheet, a table or a database is needed after reviewing a document (or a set of documents).
- in this case, the x-y location in the data receiving memory 16 is immaterial.
- the Embed function (D) referenced in Figure 4 reads data from memory blocks in the RHI memory 20 one at a time, which corresponds to one handwritten command and its associated text data or image data.
- the Embed function (D) referenced in Figure 5 reads data from memory blocks and embeds recognized units concurrently.
- Memory blocks: An example of how a handwritten command and its associated text or image is defined in the memory block 32 is illustrated in Figure 10. This format may be expanded, for example, if additional commands are added, i.e., in addition to the commands specified in the Command field.
- the parameters defining the x-y location of recognized units (i.e., InsertionPoint1 and InsertionPoint2 in Figure 10) vary as a function of the application. For example, the x-y locations/insertion points of text or an image in MS Word can be defined with the parameters Page#, Line# and Column# (as illustrated in Figure 10).
- for a spreadsheet, the x-y locations can be translated into the cell location in the spreadsheet, i.e., Sheet#, Row# and Column#. Therefore, different formats for x-y InsertionPoint1 and x-y InsertionPoint2 need to be defined to accommodate a variety of applications.
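- purely by way of illustration, the memory block format of Figure 10 can be pictured as the following Visual Basic user-defined types; the field names are hypothetical, and the x-y location format would vary per application as noted above:

```vb
' Sketch only: one possible in-memory layout for a memory block 32.
Public Type WordLocation           ' x-y insertion point for a word-processing document
    PageNum As Long
    LineNum As Long
    ColumnNum As Long
End Type

Public Type SheetLocation          ' x-y insertion point for a spreadsheet
    SheetNum As Long
    RowNum As Long
    ColumnNum As Long
End Type

Public Type MemoryBlock
    Command As String              ' e.g., "InsertText", "DeleteText", "MoveText"
    InsertionPoint1 As WordLocation
    InsertionPoint2 As WordLocation
    TextData As String             ' recognized text associated with the command
    ImagePath As String            ' recognized graphics, e.g., stored as a JPEG file
    IdentityFlag As Boolean        ' True once the block is ready to be embedded
End Type
```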
- Figure 9 is a chart of data flow of recognized units. These are discussed below.
- FIFO (First In, First Out) protocol: Once a unit is recognized, it is stored in a queue, awaiting processing by the processor of element 20 and, more specifically, by the GMB functionality 30.
- the "New Recog" flag (set to "One" by the recognition element 18 when a unit is available) indicates to the RU receiver 29 that a recognized unit (i.e., the next in the queue) is available.
- the "New Recog" flag is reset back to "Zero" after the recognized unit is read and stored in the memory elements 26 and 28 of Figure 9 (e.g., as in step 3.2 of the subroutines illustrated in Figure 4 and Figure 5).
- the recognition element 18 then: 1) makes the next recognized unit available to be read by the RU receiver 29, and 2) sets the "New Recog" flag back to "One" to indicate to the RU receiver 29 that the next unit is ready. This process continues so long as recognized units are forthcoming.
- this protocol ensures that the recognition element 18 stays in sync with the speed at which recognized units are read from the recognition element and stored in the RHI memory (i.e., in memory elements 26 and 28 of Figure 9). For example, when handwritten information is processed concurrently, there may be more than one memory block available before the previous memory block is embedded into the document memory 22.
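- by way of illustration only, the receiving side of this "New Recog" handshake could be sketched in Visual Basic as follows, assuming module-level state (a flag and a queue) shared with the recognition element and a hypothetical StoreInRHIMemory routine:

```vb
' Sketch only: consumer side of the FIFO handshake (RU receiver 29).
Public NewRecog As Boolean            ' set to True ("One") by the recognition element 18
Public RecogQueue As New Collection   ' recognized units awaiting processing

Sub ReadNextRecognizedUnit()
    If NewRecog Then
        Dim unit As String
        unit = RecogQueue(1)          ' read the next unit in the queue
        RecogQueue.Remove 1
        StoreInRHIMemory unit         ' hypothetical: store in memory elements 26 and 28
        NewRecog = False              ' reset to "Zero" to acknowledge receipt
    End If
End Sub
```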
- this FIFO technique may also be employed between elements 24 and 22 and between elements 16 and 18 of Figure 1 and Figure 38, and between elements 14 and 12 of Figure 1, to ensure that independent processes are well synchronized, regardless of the speed by which data is made available by one element and the speed by which data is read and processed by the other element.
- the "New Recog" flag could be implemented in hardware (such as within an IC), for example, by setting a line to "High" when a recognized unit is available and to "Low" after the unit is read and stored, i.e., to acknowledge receipt.
- Process 1: As a unit, such as a character, a symbol or a word, is recognized: 1) it is stored in the Recognized Units (RU) memory 28, and 2) its location in the RU memory 28, along with its x-y location as indicated in the data receiving memory 16, is stored in the XY-RU Location to Address in RU table 26. This process continues so long as handwritten units are recognized and forthcoming.
- Process 2: In parallel to Process 1, the grouping into memory blocks (GMB) functionality 30 identifies each recognized unit, such as a character, a word or a handwritten command (symbols or words), and stores them in the appropriate locations of the memory blocks 32. In operations such as "moving text around", "increasing font size" or "changing color", an entire handwritten command must be concluded before it can be embedded into the document memory 22.
- deleting or embedding the text can begin as soon as the command has been identified, and the deletion (or insertion of text) operation can then continue concurrently as the user continues to write on the digitizing pad 12 (or on the touch screen 11).
- Process 3: As unit(s) are grouped into memory blocks, 1) the identity of the recognized units (whether they can be immediately incorporated or not) and 2) the locations of the units that can be incorporated in the RHI memory are continuously updated.
- a flag, i.e., the "Identity-Flag", is set to "One" to indicate when unit(s) can be embedded. It should be noted that this flag is defined for each memory block and that it could be set more than one time for the same memory block (for example, when the user strikes through a line of text). This flag is checked in steps 4.1 - 4.3 of Figure 5 and is reset to "Zero" after the recognized unit(s) is embedded, i.e., in step 6.1 of the subroutine in Figure 5, and at initialization.
- a pointer for the memory block, i.e., the "Next memory block pointer" 31, is updated every time a new memory block is introduced (i.e., when a recognized unit(s) that is not yet ready to be embedded is introduced; when the "Identity" flag is "Zero"), and every time a memory block is embedded into the document memory 22, such that the pointer will always point to the location of the memory block that is ready (when it is ready) to be embedded.
- this pointer indicates to the subroutines Embedd1 (of Figure 12) and Embedd2 (of Figure 14) the exact location of the relevant memory block with the recognized unit(s) that is ready to be embedded (as in step 1.2 of these subroutines).
- the memory block counter is relevant when the handwritten information is embedded all at once after its conclusion, i.e., when the subroutines of Figure 12 and Figure 14 are called from the subroutine illustrated in Figure 4 (it is not relevant when they are called from the subroutine in Figure 5; its value then is set to "One", since in this embodiment, memory blocks are embedded one at a time).
- Figure 11 is a block schematic diagram illustrating the basic functional blocks and data flow according to Embodiment One.
- the text of these and all other figures is largely self-explanatory and need not be repeated herein. Nevertheless, the text thereof may be the basis of claim language used in this document.
- FIG. 12 is a flow chart example of the Embed subroutine D referenced in Figure 4 and Figure 5 according to Embodiment One. The following is to be noted.
- the memory block pointer is set to the location of the first memory block to be embedded.
- the memory block counter is set to the value in the "# of memory blocks" element (33) of Figure 9.
- memory blocks 32 are fetched one at a time from the RHI memory 20 (G) and processed as follows:
- Commands are converted to keystrokes (35) in the same sequence as the operation is performed via the keyboard and then stored in sequence in the keystrokes memory 34.
- the emulate keyboard element 36 uses this data to emulate the keyboard, such that the application reads the data as if it was received from the keyboard (although this element may include additional keys not available via a keyboard, such as the symbols illustrated in Figure 7, e.g., for insertion of new text in an MS Word document).
- the clipboard 38 can handle insertion of text, or text can be emulated as keyboard keystrokes.
- the lookup tables 40 determine the appropriate control key(s) and keystroke sequences for pre-recorded and built-in macros that, when emulated, execute the desired command.
- keyboard keys are application-dependent and are a function of parameters such as application name, software version and platform.
- some control keys, such as the arrow keys, execute the same commands in a large array of applications; however, this assumption is excluded from the design in Figure 11, i.e., by the inclusion of the lookup table command-keystrokes in element 40 of Figure 11.
- Element 40 may include lookup tables for a large array of applications, although it could include tables for one or any desired number of applications.
- the image (graphic) is first copied from the RHI memory 20, more specifically based on information in the memory block 32, into the clipboard 38. Its designated location is located in the document memory 22 via a sequence of keystrokes (e.g., via the arrow keys). It is stored (i.e., pasted from the clipboard 38 by the keystroke sequence Cntrl-V) into the document memory 22. If the command involves another operation, such as "Reduce Image Size" or "Move Image", the image is first identified in the document memory 22 and selected. Then the operation is applied by the appropriate sequences of keystrokes.
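- a minimal MS Word sketch of this image path (assuming, for illustration, that the pasted image arrives as the last inline shape in the document) might be:

```vb
' Sketch only: paste an image already copied to the clipboard and apply a
' "Reduce Image Size" type operation to it.
Sub PasteAndReduceImage()
    Selection.Paste                                    ' paste at the designated location
    Dim shp As InlineShape
    Set shp = ActiveDocument.InlineShapes(ActiveDocument.InlineShapes.Count)
    shp.ScaleWidth = 50                                ' reduce to 50% of the original size
    shp.ScaleHeight = 50
End Sub
```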
- a basic set of keystrokes sequences can be used to execute a basic set of commands for creation and revision of a document in a large array of applications.
- the arrow keys can be used for jumping to a designated location in the document.
- a desired text/graphic object can be selected.
- clipboard operations, i.e., the typical combined keystroke sequences Cntrl-X (Cut), Cntrl-C (Copy) and Cntrl-V (Paste), can be used for basic edit/revision operations in many applications.
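- for example, a basic "move text" revision could be carried out purely through emulated keystrokes; an illustrative Visual Basic sketch using the SendKeys statement (timing and focus caveats apply, so this is a sketch only) might be:

```vb
' Sketch only: cut the next word and paste it one line below, using only keystrokes.
Sub MoveNextWordDownOneLine()
    SendKeys "^+{RIGHT}", True    ' Cntrl-Shift-Right: select the next word
    SendKeys "^x", True           ' Cntrl-X: cut the selection to the clipboard
    SendKeys "{DOWN}", True       ' Arrow Down: jump to the designated location
    SendKeys "^v", True           ' Cntrl-V: paste from the clipboard
End Sub
```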
- examples of relevant built-in macros include InsertFrame, InsertObject, InsertPicture, EditCopyPicture, EditCopyAsPicture, EditObject, InsertDrawing and InsertHorizontalLine.
- Combinations of macros can be recorded as a new macro; the new macro runs whenever the sequence of keystrokes that is assigned to it is emulated.
- a macro in combination with keystrokes (e.g., of arrow keys) can also be recorded as a new macro; however, recording of some sequences as a macro may not be permitted.
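- as an illustrative sketch, a recorded or combined macro (here the hypothetical "EmbedRevisionBlock") can be assigned a shortcut key in MS Word so that emulating that shortcut runs the macro:

```vb
' Sketch only: bind a macro to Cntrl-Alt-E so it runs whenever that shortcut is emulated.
Sub AssignShortcutToMacro()
    CustomizationContext = NormalTemplate
    KeyBindings.Add KeyCategory:=wdKeyCategoryMacro, _
                    Command:="EmbedRevisionBlock", _
                    KeyCode:=BuildKeyCode(wdKeyControl, wdKeyAlt, wdKeyE)
End Sub
```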
- Emulating a keyboard key 36 in applications with built-in programming capability, such as Microsoft® Word, can be achieved by running code that is equivalent to pressing that keyboard key. Referring to Figure 35 and Figure 36, details of this operation are presented. The text thereof is incorporated herein by reference. Otherwise, emulating the keyboard is a function that can be performed in conjunction with Windows or other computer operating systems.
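- in the spirit of Figures 35 and 36 (not the actual listing), an illustrative per-key mapping from key names to equivalent MS Word object-model calls could look like:

```vb
' Sketch only: run the code equivalent of pressing a single keyboard key.
Sub EmulateKey(ByVal keyName As String)
    Select Case keyName
        Case "DownArrow":  Selection.MoveDown Unit:=wdLine, Count:=1
        Case "UpArrow":    Selection.MoveUp Unit:=wdLine, Count:=1
        Case "RightArrow": Selection.MoveRight Unit:=wdCharacter, Count:=1
        Case "LeftArrow":  Selection.MoveLeft Unit:=wdCharacter, Count:=1
        Case "Delete":     Selection.Delete Unit:=wdCharacter, Count:=1
        Case "Enter":      Selection.TypeParagraph
        Case Else
            Debug.Print "No emulation defined for key: " & keyName   ' compare Figure 36
    End Select
End Sub
```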
- Figure 13 is a block schematic diagram illustrating the basic functional blocks and data flow according to Embodiment Two.
- Figure 14 is a flow chart example of the Embed function D referenced in Figure 4 and in Figure 5 according to Embodiment Two. Memory blocks are fetched from the RHI memory 20 (G) and processed. Text of these figures is incorporated herein by reference. The following should be noted with Figure 14:
- Figure 33 is the code in Visual Basic that embeds the information in Final Mode, i.e., "Accept All Changes" of the Track Changes feature, which embeds all revisions as an integral part of the document.
- the clipboard 38 can handle the insertion of text into the document memory 22, or text can be emulated as keyboard keystrokes. (Refer to Figures 35-36 for details).
- an image operation (K), such as copying an image from the RHI memory 20 to the document memory 22, is executed as follows: an image is first copied from the RHI memory 20 into the clipboard 38. Its designated location is located in the document memory 22. Then it is pasted via the clipboard 38 into the document memory 22.
- the selection of a program by the program selection and execution element 42 is a function of the command, the application, software version, platform, and the like. Therefore, the ConvertText2 element (J) selects a specific program for command data that are stored in the RHI memory 20 by accessing the lookup command-programs table 44. Programs may also be initiated by events, e.g., when opening or closing a file, or by a key entry, e.g., when bringing the insertion point to a specific cell of a spreadsheet by pressing the Tab key.
- the Visual Basic Editor can be used to create very flexible, powerful macros that include Visual Basic instructions that cannot be recorded from the keyboard.
- the Visual Basic Editor provides additional assistance, such as reference information about objects and properties or an aspect of its behavior.
- Insert Annotation can be achieved by emulating the keystroke sequence Alt+Cntrl+M.
- revisions in the RHI memory 20 can be incorporated into the document memory 22 as comments. If the text includes revisions, the Track Changes mode can be invoked prior to insertion of text into a comment pane.
- Embedding handwritten information in a cell of a spreadsheet or a field in a form or a table can either be for new information or for revising existing data (e.g., deletion, moving data between cells, or adding new data in a field).
- after the handwritten information is embedded in the document memory 22, it can cause the application (e.g., Excel) to change parameters within the document memory 22, e.g., when the embedded information in a cell is a parameter of a formula in a spreadsheet, which when embedded changes the output of the formula, or when it is the price of an item in a Sales Order, which when embedded changes the subtotal of the Sales Order; if desired, these new parameters may be read by the embed functionality 24 and displayed on the display 25 to provide the user with useful information, such as new subtotals, spell check output, or stock status of an item (e.g., as a sales order is filled in).
- the x-y location in the document memory 22 for word processing type documents can, for example, be defined by page #, line # and character # (see Figure 10, x-y locations for InsertionPoint1 and InsertionPoint2).
- the x-y location in the document memory 22 for a form, table or a spreadsheet can for example be defined based on the location of a cell / field within the document (e.g., column #, Row # and Page # for a spreadsheet).
- it can be defined based on the number of Tabs and/or Arrow keys from a given known location.
- a field in a Sales Order in the accounting application QuickBooks can, for example, be defined based on the number of Tabs from the first field (i.e., "customer:job") in the form.
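- For illustration only, a field location defined by a Tab count from a known field might be captured in a small table such as the sketch below; the field names and Tab counts are hypothetical and are not taken from QuickBooks.

    # Sketch: number of Tab presses from the first field of a Sales Order form
    # to other fields. All field names and counts are illustrative assumptions.
    TABS_FROM_FIRST_FIELD = {
        "customer_job": 0,
        "date": 3,
        "order_number": 4,
        "item": 7,
    }

    def keystrokes_to_field(field_name):
        """Return the Tab keystrokes that move the insertion point from the
        first field of the form to the requested field."""
        return ["tab"] * TABS_FROM_FIRST_FIELD[field_name]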
- the embed functionality can read the x-y information (see step 2 in flow charts referenced in Figures 12 and 14), and then bring the insertion point to the desired location according to Embodiment One (see example flow charts referenced in Figures 15-16), or according to Embodiment Two (see example flow charts for MS Word referenced in Figure 26). Then the handwritten information can be embedded.
- the software application QuickBooks has no macros or programming capabilities.
- Forms, e.g., a Sales Order, a Bill, or a Purchase Order
- Lists, e.g., the Chart of Accounts and the customer:job list
- Embodiment One could be used to emulate keyboard keystrokes to invoke a specific form or a specific list. For example, invoking a new invoice can be achieved by emulating the keyboard key combination "Cntrl+N" and invoking the chart of accounts list can be achieved by emulating the keyboard key combination "Cntrl+A".
- Invoking a Sales Order, which has no associated shortcut key defined, can be achieved by emulating the following keyboard keystrokes:
- the insertion point can be brought to the specified x-y location, and then the recognized handwritten information (i.e., command(s) and associated text) can be embedded.
- As far as the user is concerned, he can either write the information (e.g., for posting a bill) on a pre-set form (e.g., in conjunction with the digitizing pad 12 or touch screen 11) or specify commands related to the operation desired. Parameters, such as the type of entry (a form, or a command), the order for entering commands, and the setup of the form are selected by the user in step 1 "Document Type and Preferences Setup" (A) illustrated in Figure 4 and in Figure 5. For example, the following sequence of handwritten commands will post a bill for a purchase of office supplies at OfficeMax on 03/02/05, for a total of $45.
- the parameter "office supply", which is the account associated with the purchase, may be omitted if the vendor OfficeMax has already been set up in QuickBooks.
- Information can be read from the document memory 22, and based on this information the embed functionality 24 can determine whether the account has previously been set up or not, and report the result on the display 25. This, for example, can be achieved by attempting to cut information from the "Account" field (i.e., via the clipboard), assuming the account is already set up. The data in the clipboard can be compared with the expected result and, based on that, output can be generated for the display.
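- A minimal sketch of this read-back check follows; the helper that selects the field and emulates Cntrl-X is assumed to exist elsewhere and is passed in as a callable.

    # Sketch: decide whether the Account field was already set up by cutting its
    # contents via the clipboard and comparing them with the expected account name.
    def account_already_set_up(cut_account_field, expected_account):
        # cut_account_field() is assumed to select the Account field, emulate
        # Cntrl-X, and return the resulting clipboard text as a string.
        clipboard_text = cut_account_field()
        return clipboard_text.strip().lower() == expected_account.strip().lower()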
- Embodiment One and Embodiment Two can be used to bring the insertion point to the desired location and to embed recognized handwritten information.
- a wireless pad can be used for transmission of an integrated document to a computer and optionally receiving back information that is related to the transmitted information. It can be used, for example, in the following scenarios:
- Handwritten information can be inserted in designated locations in a pre-designed document, such as an order form, an application, a table or an invoice, on top of a digitizing pad 12 or using a touch screen 11 or the like.
- the pre-designed form is stored in a remote or nearby computer.
- the handwritten information can be transmitted via a wireless link concurrently to a receiving computer.
- the receiving computer will recognize the handwritten information, interpret it and store it in machine code in the pre-designed document.
- the receiving computer will prepare a response and transmit it back to the transmitting pad (or touch screen), e.g., to assist the user.
- the Wireless Pad comprises a digitizing pad 12, display 25, data receiver 48, processing circuitry 60, transmission circuitry I 50, and receiving circuitry II 58.
- the digitizing pad receives tactile positional input from a writing pen 10.
- the transmission circuitry I 50 takes data from the digitizing pad 12 via the data receiver 48 and supplies it to receiving circuitry I 52 of a remote processing unit.
- the receiving circuitry II 58 captures information from display processing 54 via transmission circuitry II 56 of the remote circuitry and supplies it to processing circuitry 60 for the display 25.
- the receiving memory I 52 communicates with the data receiving memory 16, which interacts with the recognition module 18 as previously explained, which in turn interacts with the RHI processor and memory 20 and the document memory 22 as previously explained.
- the embedded criteria and functionality element 24 interacts with the elements 20 and 22 to modify the subject electronic document and communicate output to the display processing unit 54.
- handwritten information can be incorporated into a document: the information can be recognized, converted into machine-readable text and image, and incorporated into the document as "For Review".
- "For review” information can be displayed in a number of ways.
- the "For Review” document can then be sent to one or more receiving parties (e.g. , via email).
- the receiving party may approve portions or all of the revisions and/or revise further in handwriting (as the sender has done) via the digitized pad 1 2, via the touch screen 1 1 or via a wireless pad.
- the document can then be sent again "for review”. This process may continue until all revisions are incorporated/concluded.
- Handwritten information on a page can be sent via fax, and the receiving facsimile machine enhanced as a Multiple Function Device (printer/fax, character recognizing scanner) can convert the document into a machine-readable text/image for a designated application (e.g., Microsoft® Word).
- Revisions vs. original information can be distinguished and converted accordingly based on designated revision areas marked on the page (e.g., by underlining or circling the revisions). Then it can be sent (e.g., via email) "For Review" (as discussed above, under "Remote Communication").
- Handwritten information can be entered on a digitizing pad 12 whereby locations on the digitizing pad 12 correspond to locations on the cell phone display.
- handwritten information can be entered on a touch screen that is used as a digitizing pad as well as a display (i.e., similar to the touch screen 11 referenced in Figure 38).
- Handwritten information can either be new information, or a revision of existing stored information (e.g., a phone number, contact name, to-do list, calendar events, an image photo, etc.).
- Handwritten information can be recognized by the recognition element 18, processed by the RHI element 20 and then embedded into the document memory 22 (e.g., in a specific memory location of a specific contact's information). Embedding the handwritten information can, for example, be achieved by directly accessing locations in the document memory (e.g., a specific contact name); however, the method by which recognized handwritten information is embedded can be determined at the OEM level by the manufacturer of the phone.
- a unique representation, such as a signature, a stamp, a fingerprint or any other drawing pattern, can be pre-set and fed into the recognition element 18 as units that are part of a vocabulary or as a new character.
- if handwritten information is recognized as one of these pre-set units placed at, e.g., a specific expected x-y location of the digitizing pad 12 (Figure 1) or touch screen 11 (Figure 38), an authentication or part of an authentication will pass. The authentication will fail if there is no match between the recognized unit and the pre-set expected unit.
- the unique pre-set patterns can be either or both: 1) stored in a specific platform belonging to the user and/or 2) stored in a remote database location. It should be noted that the unique pre-set patterns (e.g., a signature) do not have to be disclosed in the document. For example, when an authentication of a signature passes, the embed functionality 24 will, for example, embed the word "OK" in the signature line/field of the document.
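- As an illustrative sketch only (the tolerance value and the data shapes are assumptions), the authentication step may be pictured as a comparison of the recognized unit and its x-y location against the pre-set expected unit and location:

    # Sketch: pass authentication only if the recognized unit matches the pre-set
    # unit and was written close enough to the expected x-y location.
    def authenticate(recognized_unit, recognized_xy, expected_unit, expected_xy,
                     tolerance=20.0):
        dx = recognized_xy[0] - expected_xy[0]
        dy = recognized_xy[1] - expected_xy[1]
        within_region = (dx * dx + dy * dy) ** 0.5 <= tolerance
        return within_region and recognized_unit == expected_unit

    # On success, the embed functionality could write "OK" into the signature
    # field rather than disclosing the pre-set pattern itself.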
- the parameters may include one or more of a line length, a line angle or arc radius, a size, surface area, or any other parameter of a graphic object, stored in memory of the computing device or computed by functions of the computing device. Changes in these one or more parameters are computed by functions of the computing device based on the user interaction on the touch screen, and these computed changes may be used by other functions of the computing device to compute changes in other graphic objects.
- the document could be any kind of electronic file, word processing document, spreadsheet, web page, form, e-mail, database, table, template, chart, graph, image, objects, or any portion of these types of documents, such as a block of text or a unit of data. It should be understood that the document or file may be utilized in any suitable application, including but not limited to, computer aided design, gaming, and educational materials.
- the disclosed embodiments may provide significant time savings by providing simpler and faster user interaction, while avoiding revision iterations with professionals.
- Typical users may include, but are not limited to, construction builders and contractors, architects, interior designers, patent attorneys, inventors, and manufacturing plant managers.
- Figures 40A-52D, Figures 54A-54F, and Figures 56-58A may be viewed as a portion of a tutorial of an app to familiarize users with the use of the gestures discussed in these drawings.
- the user selects a command (e.g., a command to change line length, discussed in Figures 42A-42D), by drawing a letter or by selecting an icon which represents the desired command.
- the computing device identifies the command.
- responsive to user interaction with a displayed representation of a graphic object on the touch screen to indicate a desired change in one or more parameters (such as in line length), the computing device automatically causes the desired change in the indicated parameter and, when applicable, also automatically effects changes in the locations of the graphic object and further, as a result, in other graphic objects in the memory in which the drawing is stored.
- a desired (gradual or single) change in a parameter of a graphic object, being an increase or a decrease in its value (and/or in its shape, when the shape of the graphic object is the parameter, such as a change from a straight-line object to a segmented-line object, or a gradual change from one shape to another, such as from a circle/sphere to an ellipse and vice versa), may be indicated by changes in positional locations along a gesture being drawn on the touch screen (as illustrated, for example, in Figures 42A-42B), during which the computing device gradually and automatically applies the desired changes as the user continues to draw the gesture. From the user's perspective, the value of the parameter appears to change at the same time as the gesture is being drawn.
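- A minimal sketch of this gradual update loop follows; the scale factor relating gesture travel to parameter change is an assumption, and the calls that would redraw the object and refresh the on-screen readout are only indicated by comments.

    # Sketch: update a parameter (e.g., line length) at each positional sample
    # received along a gesture, proportionally to the horizontal travel so far.
    def apply_gesture_samples(start_value, samples, pixels_per_unit=10.0):
        value = start_value
        samples = list(samples)
        if not samples:
            return value
        x0, _ = samples[0]
        for x, _ in samples[1:]:
            value = start_value + (x - x0) / pixels_per_unit
            # here the stored graphic object and the displayed value box would be
            # updated, so the change appears while the gesture is being drawn
        return value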
- the subject drawing or a portion thereof stored in the device memory may be displayed on the touch screen as a two-dimensional representation (herein defined as “vector image”), with which the user may interact in order to communicate desired changes in one or more parameters of a graphic object, such as in line length, line angle, or arc radius.
- the computing device automatically causes these desired changes in the graphic object and, when applicable, also in its locations, and further in parameters and locations of other graphic objects within the graphics vector which may result from the changes in the graphic object indicated by the user.
- the graphics vector may alternatively be represented on the touch screen as a three-dimensional vector image, so as to allow the user to view/review the effects of a change in a parameter of a graphic object in an actual three-dimensional representation of the graphics vector, rather than attempting to visualize the effects while viewing a two-dimensional representation.
- the user may interact with a three-dimensional vector image on the touch screen to indicate desired changes in one or more parameters of one or more graphic objects, for example, by pointing/touching or tapping at geometrical features of the three-dimensional representation, such as on surfaces or at corners, which will cause the computing device to automatically change one or more parameters of one or more graphic objects of the graphics vector.
- Such user interaction with geometrical features may, for example, be along a surface length, width or height, along edges of two connecting surfaces (e.g., along an edge connecting the top surface and one of the side surfaces), within surface(s) inside or outside a beveled/trimmed corner, on a sloped surface (e.g., of a ramp), or within an arced surface inside or outside an arced corner.
- the correlation between user interaction with a geometrical feature of the three-dimensional vector image on the touch screen and changes in size and/or geometry of the vector graphics stored in the device memory may be achieved by first using one or more points/locations in the vector graphics stored (and defined in the xyz coordinate axis system) in the device memory (referred to herein as "locations"), and correlating them with the geometrical features of the vector image with which the user may interact to communicate desired changes in graphic objects.
- a location herein is defined such that changes in that location, or in a stored or computed parameter of a line (straight, arced, or segmented) extending/branching from that location, such as length, radius or angle (herein defined as a "variable"), can be used as the variable (or as one of the variables) in function(s) capable of computing changes in size and/or geometry of the vector graphics as a result of changes in that variable.
- User interaction may be defined within a region of interest, being the area of the geometrical feature on the touch screen within which the user may gesture/interact; this region may, for example, be an entire surface of a cube, or the entire cube surface with an area proximate to the center excluded.
- responsive to detecting finger movements in a predefined/expected direction (or in one of the predefined/expected directions), or predefined/expected touching and/or tapping within this region, the computing device automatically determines/identifies the relevant variable and automatically carries out its associated function(s) to automatically effect the desired change(s) communicated by the user.
- a position of either of the edges/corners of a rectangle or of a cube is a location that may be used as a variable in a function (or in one of the functions) capable of computing a change in the geometry of the rectangle or of the cube as a result of a change in that variable.
- the length of a line between two edges/corners (i.e., between two locations) of the cube or the angle between two connected surfaces of the cube may be used as the variable.
- the center point of a circle or of a sphere may be used as the "location" from which the radius of the circle or of the sphere extends; the radius in this example may be a variable of a function capable of computing the circumference and surface area of the circle, or the circumference, surface area and volume of the sphere, as the user interacts with (e.g., touches) the sphere.
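- For illustration, the functions of the radius variable mentioned here may be written as in the following sketch (not part of the original disclosure):

    # Sketch: quantities recomputed from the radius variable of a circle/sphere.
    import math

    def circle_properties(radius):
        return {"circumference": 2 * math.pi * radius,
                "area": math.pi * radius ** 2}

    def sphere_properties(radius):
        return {"circumference": 2 * math.pi * radius,
                "surface_area": 4 * math.pi * radius ** 2,
                "volume": (4.0 / 3.0) * math.pi * radius ** 3}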
- a length of a line extending from the center point of a vector graphics having a symmetrical geometry may be used as a variable (or one of the variables) of a function (or of one of the functions) capable of computing changes in the size of the symmetrical vector graphics or changes in its geometry, as the user interacts with the symmetrical vector image.
- for a cone, two locations may be defined, the first at the center point of the surface at the base, and the second being the edge of the line extending from that location to the top of the cone; the variables in this example may be the first location and the length of the line extending from the first location to the top of the cone, which can be used in function(s) capable of computing changes in the size and geometry of the cone, as the user interacts with the vector image representing the cone.
- a complex or non-symmetrical graphics vector represented on the touch screen as a three-dimensional vector image, with which the user may interact to communicate changes in the graphics vector, may be divided into a plurality of partial graphics vectors in the device memory (represented as one vector image on the touch screen), each represented by one or more functions capable of computing changes in its size and geometry, whereby the size and geometry of the graphics vector may be computed by the computing device based on the sum of the partial graphics vectors.
- responsive to a user "pushing" (i.e., in effect touching) or tapping at a geometrical feature of a displayed representation of a graphics vector (i.e., at the vector image), the computing device automatically increases or decreases the size of the graphics vector or of one or more parameters represented on the geometrical feature. For example, touching or tapping at a displayed representation of a corner of a cube or at a surface of a ramp will cause the computing device to automatically decrease or increase the size of the cube (Figures 54A-54B) or the decline/incline angle of the ramp, respectively.
- responsive to touching or tapping anywhere at a displayed representation of a sphere, the computing device automatically decreases or increases the radius of the sphere, respectively, which in turn decreases or increases, respectively, the circumference, surface area and volume of the sphere.
- responsive to continued "squeezing" (i.e., holding/touching) a geometrical feature of a vector image representing a feature in a graphics vector, such as the side edges of a top of a tube or of a cube, the computing device automatically brings the outside edge(s) of that graphics vector together gradually as the user continues squeezing/holding the geometrical feature of the vector image.
- responsive to the user tapping at or holding/touching the top surface of the geometrical feature, the computing device automatically and gradually brings the outside edges of the geometrical feature outward or inward, respectively, as the user continues tapping at or touching the top surface of the vector image.
- responsive to touching at or in proximity to a center point of a top surface (note that the region of interest here is proximate to the center, which is excluded from the region of interest in the prior example), the computing device automatically creates a wale (or other predetermined shape) with a radius centered at that center point, and continued touching or tapping (anywhere on the touch screen) will cause the computing device to automatically and gradually decrease or increase the radius of the wale, respectively.
- the computing device identifies the command. Then, the user may gesture at a displayed geometrical feature of a vector image to indicate desired changes in the vector graphics.
- responsive to continued 'pushing' (i.e., touching) or tapping at a displayed representation of a surface of a corner, after the user has indicated a command to add a fillet (at the surface of the inside corner) or an arc (at the surface of the outside corner) and the computing device has identified the command, the computing device automatically rounds the corner (if the corner is not yet rounded), and then causes an increase or a decrease in the value of the radius of the fillet/arc (as well as in the locations of the adjacent line objects), as the user continues touching or tapping, respectively, at the fillet/arc surface (or anywhere on the touch screen).
- after the computing device identifies a command to change line length (e.g., after the user touches a distinct icon representing the command), responsive to a finger movement to the right or to the left (indicative of a desired change in width from the right edge or from the left edge of the surface of the cube, respectively) anywhere on a surface of the displayed cube, followed by continued touching or tapping (anywhere on the touch screen), the computing device automatically decreases or increases the width of the cube, respectively, from the right edge or from the left edge of the surface, as the user continues touching or tapping.
- responsive to a finger movement up or down on the surface of the cube, followed by continued touching or tapping anywhere on the touch screen, the computing device automatically decreases or increases the height of the cube, respectively, from the top edge or from the bottom edge of the surface, as the user continues touching or tapping. Further, responsive to tapping or touching a point proximate to an edge along two connected surfaces of a graphic image of a cube, the computing device automatically increases or decreases the angle between the two connected surfaces.
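- A sketch of how these swipe-then-tap interactions might map onto the stored cube parameters follows; the per-tap step size and the width/height orientation conventions are assumptions.

    # Sketch: interpret an initial swipe direction on a cube surface, then shrink
    # or grow the corresponding dimension on each subsequent tap.
    def resize_cube(cube, swipe_direction, tap_count, step=1.0):
        resized = dict(cube)  # cube holds "width" and "height"
        if swipe_direction == "right":
            resized["width"] -= tap_count * step   # decrease width from the right edge
        elif swipe_direction == "left":
            resized["width"] += tap_count * step   # increase width from the left edge
        elif swipe_direction == "up":
            resized["height"] -= tap_count * step  # decrease height from the top edge
        elif swipe_direction == "down":
            resized["height"] += tap_count * step  # increase height from the bottom edge
        resized["width"] = max(0.0, resized["width"])
        resized["height"] = max(0.0, resized["height"])
        return resized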
- after the computing device identifies a command to insert a blind hole and a point on a surface of the graphic image at which to insert the blind hole (e.g., after detecting a long press at that point, indicating the point on the surface at which to drill the hole), responsive to continued tapping or touching (anywhere on the touch screen), the computing device gradually and automatically increases or decreases the depth of the hole, respectively, in the graphics vector and updates the vector image. Similarly, responsive to identifying a command to drill a through hole at a user indicated point on a surface of the vector image, the computing device automatically inserts the through hole in the vector graphics and updates the vector image with the inserted through hole.
- responsive to tapping or touching at a point along the circumference of the hole, the computing device automatically increases or decreases the radius of the hole. Or, responsive to touching the inside surface of the hole, the computing device automatically invokes a selection table/menu of standard threads, from which the user may select a desired thread to apply to the outside surface of the hole.
- Figures 40A-40D relate to a command to insert a line. They illustrate the interaction between a user and a touch screen, whereby a user draws a line 3705 free-hand between two points A and B (Figure 40B). In some embodiments, an estimated distance of the line 3710 is displayed while the line is being drawn. Responsive to the user's finger being lifted from the touch screen (Figure 40C), the computing device automatically inserts a straight-line object in the device memory, at memory locations represented by points A and B on the touch screen, where the drawing is stored, and displays the straight-line object 3715 along with its actual distance 3720 on the touch screen.
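- A minimal sketch of the insertion step follows; the data structure used for the stored drawing is an assumption.

    # Sketch: insert a straight-line object between two touch points and compute
    # the distance that would be shown on the display.
    import math

    def insert_line(drawing, point_a, point_b):
        (ax, ay), (bx, by) = point_a, point_b
        line = {"type": "line", "a": point_a, "b": point_b,
                "length": math.hypot(bx - ax, by - ay)}
        drawing.append(line)  # store the straight-line object in the drawing memory
        return line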
- Figures 41 A-41 C relate to a command to delete an object.
- the user selects the desired object 3725 by touching it (Figure 41A) and then may draw a command indicator 3730, for example, the letter 'd' to indicate the command 'Delete' (Figure 41B).
- the computing device identifies the command and deletes the object (Figure 41C). It should be noted that the user may indicate the command by selecting an icon representing the command, by an audible signal, and the like.
- Figures 42A-42D relate to a command to change line length.
- the user selects the line 3735 by touching it (Figure 42A) and then may draw a command indicator 3740, for example, the letter 'L' to indicate the desired command (Figure 42B).
- selecting line 3735 prior to drawing the command indicator 3740 is optional, for example, to view its distance or to copy or cut it.
- responsive to each of gradual changes in user selected positional locations on the touch screen starting from point 3745 of line 3735, the computing device automatically causes each of respective gradual changes in line length stored in the device memory and updates the length on display box 3750 (Figures 42B-42C).
- Figures 43A-43D relate to a command to change line angle.
- the user may optionally first select line 3755 (Figure 43A) and then may draw a command indicator 3760, for example, the letter 'a' to indicate the desired command (Figure 43B).
- responsive to each of gradual changes in user selected positional locations (up or down) on the touch screen starting from the edge 3765 of line 3755, the computing device automatically causes each of respective gradual changes in line angle stored in the device memory, updates the angle of the line, for example, relative to the x-axis, in the device memory, and also updates the angle on display box 3770 (Figures 43B-43C).
- the computing device will automatically cause gradual changes in the length and/or angle of the line based on the direction of movement of the gesture, and accordingly will update the values of either or both the length and the angle on the display box at each gradual change in user selected positional locations on the touch screen.
- Figures 44A-44D relate to a command to apply a radius to a line or to change the radius of an arc between A and B.
- the user may optionally first select the displayed line or arc, being line 3775 in this example ( Figure 44A) and then may draw a command indicator 3780, for example, the letter 'R' to indicate the desired command ( Figure 44B).
- the computing device automatically causes each of respective gradual changes in the radius of the line/arc in the drawing stored in the device memory and updates the radius of the arc on display box 3790 (Figure 44C).
- Figures 45A-45C relate to a command to make a line parallel to another line.
- the user may draw a command indicator 3795, for example, the letter 'N' to indicate the desired command and then touch a reference line 3800 (Figure 45A).
- the user selects target line 3805 (Figure 45B) and lifts finger (Figure 45C).
- the computing device automatically alters the target line 3805 in the device memory to be parallel to the reference line 3800 and updates the displayed target line on the touch screen (Figure 45C).
- Figures 46A-46D relate to a command to add a fillet (at a 2D representation of a corner or at a 3D representation of an inside surface of a corner) or an arc (at a 3D representation of an outside surface of a corner).
- the user may draw a command indicator 3810 to indicate the desired command and then touch corner 3815 to which to apply a fillet (Figure 46A).
- the computing device converts the sharp corner 3815 into rounded corner 3820 (having a default radius value) and zooms in that corner (Figure 46B).
- responsive to each of gradual changes in user selected positional locations on the touch screen across the displayed arc 3825 at a position along it, the computing device causes each of respective gradual changes in the radius of the arc stored in the device memory and in its locations in memory represented by A and B, such that the arc is tangent to the adjacent lines 3830 and 3835 (Figure 46C).
- the user touches the screen and in response the computing device zooms out the drawing to its original zoom percentage ( Figure 46D). Otherwise, the user may indicate additional changes in the radius, even after the finger is lifted.
- Figures 47A-47D relate to a command to add a chamfer.
- the user may draw a command indicator 3840 to indicate the desired command and then touches the desired corner 3845 to which to apply a chamfer/bevel (Figure 47A).
- the computing device trims the corner between two locations represented by A and B on the touch screen, and sets the height H and width W at default values, and as a result also the angle a ( Figure 47B).
- responsive to each of gradual changes in user selected positional locations on the touch screen (in motions parallel to line 3850 and/or line 3855), the computing device causes gradual changes in the width W and/or height H, respectively, as stored in the device memory, as well as in locations A and B as stored in memory, and updates their displayed representation (Figure 47C).
- the user touches the screen and in response the computing device zooms out the drawing to its original zoom percentage ( Figure 47D). Otherwise, the user may indicate additional changes in parameters W and/or H, even after the finger is lifted.
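- As a small illustrative calculation (a sketch, not the patent's code), the chamfer angle follows from the current width W and height H of the trimmed corner:

    # Sketch: recompute the chamfer angle from the trimmed width and height.
    import math

    def chamfer_angle_degrees(width, height):
        # angle of the chamfer line relative to the horizontal edge
        return math.degrees(math.atan2(height, width))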
- Figures 48A-48F relate to the command to trim an object.
- the user may draw a command indicator 3860 to indicate the desired command (Figure 48A).
- the user touches target object 3865 ( Figure 48B) and then reference object 3870 ( Figure 48C) ; it should be noted that these steps are optional.
- the user then moves reference object 3870 to indicate the desired trim in target object 3865 ( Figures 48D-48E).
- the computing device automatically applies the desired trim 3875 to target object 3865 ( Figure 48F).
- Figures 49A-49D relate to a command to move an arced object.
- the user may optionally select object 3885 (Figure 49A) and then draw a command indicator 3880 to indicate the desired command, and then touches the displayed target object 3885 (Figure 49B) (at this point the object is selected), and moves it until edge 3890 of the arc 3885 is at or proximate to edge 3895 of line 3897 ( Figure 49C).
- the computing device automatically moves the arc 3885 such that it is tangent to line 3897 where the edges meet (Figure 49D).
- Figures 50A-50D relate to the 'No Snap' command.
- the user may touch command indicator 3900 to indicate the desired command (Figure 50A), and then the user may touch the desired intersection 3905 to unsnap ( Figure 50B).
- responsive to the finger being lifted from the touch screen, the computing device automatically applies the no-snap 3910 at intersection 3905 and zooms in on the intersection (Figure 50C). Touching again causes the computing device to zoom out the drawing to its original zoom percentage (Figure 50D).
- Figures 51A-51D illustrate another example of use of the 'No Snap' command.
- the user may touch command indicator 3915 to indicate the desired command (Figure 51A).
- the user may draw a command indicator 3920, for example, the letter 'L', to indicate the desired command to change line length (Figure 51B).
- responsive to each of gradual changes in user selected positional locations on the touch screen, starting from the edge 3925 of line 3930 and ending at position 3935 on the touch screen, across line 3940, the computing device automatically unsnaps intersection 3945, or prevents intersection 3945 from being snapped if the snap operation is set as a default operation by the computing device.
- Figures 52A-52D illustrate another example of use of the command to trim an object.
- the user may draw a command indicator 3950 to indicate the desired command ( Figure 52A).
- the user moves reference object 3955 to indicate the desired trim in target object 3960 ( Figures 52B-52C).
- the computing device automatically applies the desired trim 3965 to target object 3960 ( Figure 52D).
- Commands to copy and cut graphic objects may be added to the set of gestures discussed above, and carried out for example by selecting one or more graphic objects (as shown for example in Figure 42A), and then the user may draw a command indicator or touch an associated distinct icon on the touch screen to indicate the desired command, to copy or cut.
- the command to paste may also be added, and may be carried out for example by drawing a command indicator, such as the letter 'P' (or by touching a distinct icon representing the command), and then pointing at a position on the touch screen, which represents a location in memory at which to paste the clipboard content.
- the copy, cut and paste commands may be useful, for example, in copying a portion of a CAD drawing representing a feature such as a bathtub and pasting it at another location of the drawing representing a second bathroom of a renovation site.
- Figure 53 is an example of a user interface with icons corresponding to the available user commands discussed in the Figures above, and a 'Gesture Help' by each distinct icon indicating the letter/symbol which may be drawn to indicate the command instead of selecting the icon representing it.
- Figures 54A-54B illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a cube.
- Responsive to a user touching corner 3970 of vector image 3975, representing a graphics vector of a cube (Figure 54A), for a predetermined period of time, the computing device interprets/identifies the touching at corner 3970 as a command to proportionally decrease the dimensions of the cube. Then, responsive to continued touching at corner 3970, the computing device automatically and gradually decreases the length, width and height of the cube in the vector graphics, displayed at 3977, 3980 and 3985, respectively, at the same rate, and updates the displayed length 3990, width 3950 and height 4000 in vector image 4005 (Figure 54B).
- Figures 54C-54D illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a sphere. Responsive to continued touching at point 4010 or anywhere on the vector image 4015 of a sphere (Figure 54C), representing a graphics vector of the sphere, for a predetermined period of time, the computing device interprets/identifies the touching at point 4010 as a command to decrease the radius of the sphere. Then, responsive to continued touching at point 4010, the computing device automatically and gradually decreases the radius of the vector graphics of the sphere, and updates the vector image 4017 (Figure 54D) on the touch screen.
- Figures 54E-54F illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a ramp. Responsive to a user touching at point 4020 or any point along edge 4025 of base 4030 of the vector image 4035 of a ramp ( Figure 54E), representing a graphics vector of the ramp, for a predetermined period of time, the computing device interprets/identifies the touching as a command to increase incline angle 4040 and decrease distance 4045 of base 4030 in the graphic object, such that distance 4050 along the ramp remains unchanged.
- the computing device automatically and gradually increases incline angle 4040 and decreases distance 4045 of base 4030 in the graphics vector, such that distance 4050 along the height of the ramp remains unchanged, and updates displayed incline angle 4040 and distance 4045 to incline angle 4055 and distance 4060 in vector image 4065 ( Figure 54F).
- the computing device may be configured to automatically and gradually decrease incline angle 4040 and increase distance 4045, such that distance 4050 along the ramp will remain unchanged.
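- For illustration, keeping the distance along the ramp fixed while the incline angle changes amounts to the following relationship (a sketch, not part of the original disclosure):

    # Sketch: with a fixed slope length, the base and height of the ramp follow
    # directly from the incline angle.
    import math

    def ramp_base_and_height(slope_length, incline_degrees):
        base = slope_length * math.cos(math.radians(incline_degrees))
        height = slope_length * math.sin(math.radians(incline_degrees))
        return base, height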
- Figures 55A-55B illustrate examples of user interface menus for the text editing, selection mode, discussed below.
- Figure 56 is an example of a gesture to mark text in command mode.
- the user indicates a desired command, such as a command to underline, for example by touching icon 4055 representing the command.
- responsive to the user drawing line 4060 free-hand between A and B, from the right to the left or from the left to the right, to indicate the locations in memory at which to underline text, the computing device automatically underlines the text at the indicated locations and displays a representation of the underlined text on the touch screen as the user continues drawing the gesture or when the finger is lifted from the touch screen, depending on the user's predefined preference.
- Figure 57 is another example of a gesture to mark text in command mode.
- the user indicates a desired command, such as a command to move text, for example by touching icon 4065 representing the command.
- responsive to the user drawing a zigzagged line 4070 free-hand between A and B, from the right to the left or from the left to the right, to indicate the locations in memory at which to select the text to be moved, the computing device automatically selects the text at the indicated locations in memory and highlights it on the touch screen as the user continues drawing the gesture or when the finger is lifted from the touch screen, depending on the user's predefined preference. At this point, the computing device automatically switches to data entry mode.
- responsive to the user pointing at a position on the touch screen, indicative of a location in memory at which to paste the selected text, the computing device automatically pastes the selected text, starting from that indicated location. Once the text is pasted, the computing device will automatically revert back to command mode.
- the computing device invokes command mode or data entry mode; command mode is invoked when a command intended to be applied to text or graphics already stored in memory and displayed on the touch screen is identified, and data entry mode is invoked when a command to insert or paste text or graphics is identified.
- in command mode, data entry mode is disabled to allow for unrestricted/unconfined user input on the touch screen of the computing device, in order to indicate locations of displayed text/graphics at which to apply user pre-defined command(s); in data entry mode, command mode is disabled to enable pointing at positions on the touch screen indicative of locations in memory at which to insert text, insert a drawn shape such as a line, or paste text or graphics.
- Command mode may be set to be a default mode.
- when in data entry mode, the computing device will interpret such a position as indicative of an insertion location in memory only after the finger is lifted from the touch screen, to further improve robustness/user friendliness; the benefit of this feature with respect to control over a zooming functionality is further discussed below.
- the user may draw the marking gesture free-hand on displayed text on the touch screen to indicate desired locations of text characters in memory where a desired command, such as bold, underline, move or delete, should be applied, or on displayed graphics (i.e., on the vector image) to indicate desired locations of graphic objects in memory where a desired command, such as select, delete, replace, or change object color, color shade, size, style, or line thickness, should be applied.
- the user may define a command, by selecting a distinct icon representing the command from a bar menu on the touch screen, illustrated for example in Figure 53.
- the user may define a desired command by drawing a letter/symbol which represents the command; under this scenario, however, both command mode and data entry mode may be disabled while drawing the letter/symbol, to allow for unconfined free-hand drawing of the letter/symbol anywhere on the touch screen, such that the drawing of a letter/symbol will not be interpreted as the marking gesture, or as a drawn feature, such as a drawn line, to be inserted, and a finger being lifted from the touch screen will not be interpreted as inserting or pasting data.
- the drawing of the marking gesture on displayed text/graphics to indicate the desired locations in memory at which to apply user indicated commands to text/graphics can be achieved in a single step, and if desired, in one or more time interval breaks, if for example the user lifts his/her finger from the touch screen up to a predetermined period of time, or under other predetermined conditions, such as between double taps, during which the user may, for example, wish to review a portion in another document before deciding whether to continue marking additional displayed text/graphics from the last indicated location prior to the time break or on other displayed text/graphics, or to simply conclude the marking.
- the marking gesture may be drawn free-hand in any shape, such as in zigzag ( Figure 57), a line across ( Figure 56), or a line above or below displayed text/graphics.
- the user may also choose to display the marking gesture as it is being drawn, and to draw back along the gesture (or anywhere along it) to undo applied command(s) to text/graphics indicated by previously marked area(s) of displayed text/graphics.
- responsive to a gesture being drawn on the touch screen to mark displayed text or graphics while in command mode, when no command was selected prior to drawing the gesture, the computing device automatically invokes selection mode, selects the marked/indicated text/graphics on the touch screen as the finger is lifted from the touch screen, and automatically invokes a set of icons, each representing a distinct command, arranged in menus and/or tooltips by the selected text/graphics (Figures 55A-55B).
- when the user selects one or more of the displayed icons, the computing device automatically applies the corresponding command(s) to the selected text.
- the user may exit selection mode by simply dismissing the screen, in response to which the computing device will automatically revert back to command mode.
- the computing device will also automatically revert back to command mode after the selected text is moved (if the user has indicated a command to move text, pointed at a position on the touch screen representing the location in memory to which to move the selected text, and then lifted his/her finger).
- both command mode and data entry mode are disabled while in selection mode to allow for unrestricted/unconfined drawing of the marking gesture to mark displayed text or graphics.
- Selection mode may be useful, for example, when the user wishes to focus on a specific portion of text and perform some trial and error prior to concluding the edits on that portion of text.
- the user may for example indicate a command to suggest a synonym, capitalize the word, or change its fonts to all caps.
- Figures 58A-58B illustrate an example of automatically zooming a text while drawing the gesture to mark text, as discussed below.
- while in command mode or in data entry mode, or while drawing the marking gesture during selection mode (prior to the finger being lifted from the touch screen), responsive to detecting a decrease or an increase in speed between two positions on the touch screen while the marking gesture, or a shape such as a line to be inserted, is being drawn, the computing device automatically zooms in or zooms out, respectively, a portion of the displayed text/graphics on the touch screen which is proximate to the current position along the marking gesture or the drawn line.
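- A minimal sketch of such speed-driven zoom control follows; the speed thresholds, zoom step and zoom limits are assumptions.

    # Sketch: zoom in when the gesture slows down, zoom out when it speeds up.
    import math
    import time

    class SpeedZoom:
        def __init__(self, slow_px_per_s=50.0, fast_px_per_s=300.0):
            self.slow = slow_px_per_s
            self.fast = fast_px_per_s
            self.last = None  # (x, y, timestamp) of the previous sample

        def update(self, x, y, zoom, step=0.1):
            now = time.monotonic()
            if self.last is not None:
                lx, ly, lt = self.last
                dt = max(now - lt, 1e-6)
                speed = math.hypot(x - lx, y - ly) / dt
                if speed < self.slow:
                    zoom = min(zoom + step, 4.0)   # slowing down near a point of interest
                elif speed > self.fast:
                    zoom = max(zoom - step, 1.0)   # moving quickly across the screen
            self.last = (x, y, now)
            return zoom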
- responsive to detecting a user selected position on the touch screen with no movement for a predetermined period of time while in either command mode or data entry mode, the computing device automatically zooms in on a portion of the displayed text/graphics on the touch screen which is proximate to the selected position, and further continues to gradually zoom in, up to a maximal predetermined zoom percentage, as the user continues to point at that selected position; this feature may be useful especially near or at the start and end points along the gesture or along the drawn line, as the user may need to see more details in their proximity so as to point closer to the desired displayed text character/graphic object or its location; naturally, the finger is at rest at the starting point (prior to drawing the gesture or the line) as well as at a potential end point.
- the finger (or writing tool) being at rest on the touch screen will not be interpreted as the insertion location in memory at which to insert text/graphics, until after the finger (or writing tool) is lifted from the touch screen, and therefore, the user may have his/her finger be periodically at rest (to zoom in) while approaching the intended position.
- the computing device may be configured to automatically zoom out as the user continues tapping.
- the disclosed embodiments may further provide a facility that allows a user to specify customized gestures for interacting with the displayed representations of the graphic objects.
- the user may be prompted to select one or more parameters to be associated with a desired gesture.
- the user may be presented with a list of available parameters, or may be provided with a facility to input custom parameters.
- the user may be prompted to associate desired gesture(s), indicative of change(s) in the specified parameter, with a geometrical feature within the vector image;
- the user may be prompted to input a desired gesture indicative of an increase in the value of the specified parameter and then to input another desired gesture indicative of a decrease in the value of the specified parameter. In other aspects, the user may be prompted to associate desired gesture(s) indicative of change(s) in its shape (when the shape/geometry of the graphic object(s) is the specified parameter), and in other aspects, the user may be prompted to associate direction(s) of movement of a drawn gesture with a feature within the geometrical feature, and the like.
- the computing device may associate the custom parameter(s) with one or more functions, or the user may be presented with a list of available functions, or the user may be provided with a facility to specify custom function(s), such that when the user inputs the specified gesture(s) within other, similar geometrical features within the same vector image or within another vector image, the computing device will automatically affect the indicated changes in the vector graphics, represented by the vector image, in memory of the computing device.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
A computing device includes a memory and a touch screen including a display medium for displaying a representation of at least one graphic object stored in the memory, the graphic object having at least one parameter stored in the memory, and a surface for determining an indication of a change to the at least one parameter. In response to indicating the change, the computing device is configured to automatically change the at least one parameter in the memory and automatically change the representation of the one or more graphic objects in the memory, and the display medium is configured to display the changed representation of the one or more graphic objects with the changed parameter.
Description
INTEGRATED DOCUMENT EDITOR
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This Application claims the benefit of U.S. Provisional Patent
Application 62/559,269, filed September 15, 2017, the contents of which are herein incorporated by reference.
BACKGROUND
[0002] The disclosed embodiments relate to document creation and editing. More specifically, the disclosed embodiments relate to integration of recognition of information entry with document creation. Handwritten data entry into computer programs is known. The most widespread use has been in personal digital assistant devices. Handwritten input to devices using keyboards is not widespread for various reasons. For example, character transcription and recognition are relatively slow, and there are as yet no widely accepted standards for character or command input.
SUMMARY
[0003] According to the disclosed embodiments, methods and systems are provided for incorporating handwritten information, particularly corrective information, into a previously created revisable text or graphics document, for example text data, image data or command cues, by use of a digitizing recognizer, such as a digitizing pad, a touch screen or other positional input receiving mechanism as part of a display. In a data entry mode, a unit of data is inserted by means of a writing pen or like scribing tool and accepted for placement at a designated location, correlating x-y location of the writing pen to the actual location in the document, or accessing locations in the document memory by emulating keyboard keystrokes (or by the running of code/programs). In a recognition mode, the entered data is recognized as legible text with optionally embedded edit or other commands, and it is converted to machine-readable format. Otherwise, the data is recognized as graphics (for applications that accommodate graphics) and accepted into an associated image frame. Combinations of data, in text or in graphics form, may be concurrently recognized. In a specific embodiment, there is a window of error in location of the writing tool after initial invocation of the data entry mode, so that actual placement of the tool is not critical, since the input of data is correlated by the initial x-y location of the writing pen to the actual location in the document. In addition, there is an allowed error as a function of the pen's location within the document (i.e., with
respect to the surrounding data). In a command entry mode, handwritten symbols selected from a basic set common to various application programs may be entered and the corresponding commands may be executed. In specific embodiments, a basic set of handwritten symbols and/or commands that are not application- dependent and that may be user-intuitive are applied. This handwritten command set allows for the making of revisions and creating documents without having prior knowledge of commands for a specific application.
[0004] In a specific embodiment, such as in use with a word processor, the disclosed embodiments may be implemented when the user invokes a Comments Mode at a designated location in a document and then the handwritten information may be entered via the input device into the native Comments field, whereupon it is either converted to text or image or to the command data to be executed, with a handwriting recognizer operating either concurrently or after completion of entry of a unit of the handwritten information. Information recognized as text is then converted to ciphers and imported into the main body of the text, either automatically or upon a separate command. Information recognized as graphics is then converted to image data, such as a native graphics format or as a JPEG image, and imported into the main body of the text at the designated point, either automatically or upon a separate command. Information interpreted as commands can be executed, such as editing commands, which control addition, deletion or movement of text within the document, as well as font type or size change or color change. In a further specific embodiment, the disclosed embodiments may be incorporated as a plug-in module for the word processor program and invoked as part of the system, such as the use of a macro or as invoked through the Track Changes feature.
[0005] In an alternative embodiment, the user may manually indicate, prior to invoking the recognition mode, the nature of the input, whether the input is text, graphics or command. Recognition can be further improved by providing a step-by-step protocol prompted by the program for setting up preferred symbols and for learning the handwriting patterns of the user.
[0006] In at least one aspect of the disclosed embodiments, a computing device includes a memory and a touch screen including a display medium for displaying a representation of at least one graphic object stored in the memory, the graphic object having at least one parameter stored in the memory, a surface for determining an indication of a change to the at least one parameter, wherein, in
response to indicating the change, the computing device is configured to automatically change the at least one parameter in the memory and automatically change the representation of the one or more graphic objects in the memory, and wherein the display medium is configured to display the changed representation of one or more graphic objects with the changed parameter.
[0007] In another aspect of the disclosed embodiments, a method includes displaying, on a display medium of a computing device, a representation of at least one graphic object stored in a memory, each graphic object having at least one parameter stored in the memory, indicating a change to the at least one parameter, and in response to indicating the change, automatically changing the at least one parameter in the memory and automatically changing the representation of the at least one graphic object in the memory, and displaying the changed representation of the at least one graphic object on the display medium.
[0008] These and other features of the disclosed embodiments will be better understood by reference to the following detailed description in connection with the accompanying drawings, which should be taken as illustrative and not limiting.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a block schematic diagram illustrating basic functional blocks and data flow according to one embodiment of the disclosed embodiments.
Figure 2 is a flow chart of an interrupt handler that reads handwritten information in response to writing pen taps on a writing surface.
Figure 3 is a flow chart of a polling technique for reading handwritten information.
Figure 4 is a flow chart of operation according to a representative embodiment of the disclosed embodiments wherein handwritten information is incorporated into the document after all handwritten information is concluded.
Figure 5 is a flow chart of operation according to a representative embodiment of the disclosed embodiments, wherein handwritten information is incorporated into the document concurrently during input.
Figure 6 is an illustration of example options available for displaying handwritten information during various steps in the process according to the disclosed embodiments.
Figure 7 is an illustration of samples of handwritten symbols/commands and their associated meanings.
Figure 8 is a listing that provides generic routines for each of the first three symbol operations illustrated in Figure 7.
Figure 9 is an illustration of data flow for data received from a recognition functionality element, processed and defined in an RHI memory.
Figure 10 is an example of a memory block format of the RHI memory suitable for storing data associated with one handwritten command.
Figure 11 is an example of data flow of the embedded element of Figure 1 and Figure 38 according to the first embodiment, illustrating the emulating of keyboard keystrokes.
Figure 12 is a flow chart representing subroutine D of Figure 4 and Figure 5 according to the first embodiment, using techniques to emulate keyboard keystrokes.
Figure 13 is an example of data flow of the embedded element of Figure 1 and Figure 38 according to the second embodiment, illustrating the running of programs.
Figure 14 is a flow chart representing subroutine D of Figure 4 and Figure 5 according to the second embodiment, illustrating the running of programs.
Figure 15 through Figure 20 are flow charts of subroutine H referenced in Figure 12 for the first three symbol operations illustrated in Figure 7 and according to the generic routines illustrated in Figure 8.
Figure 21 is a flow chart of subroutine L referenced in Figure 4 and Figure 5 for concluding the embedding of revisions for a Microsoft® Word type document, according to the first embodiment using techniques to emulate keyboard keystrokes.
Figure 22 is a flow chart of an alternative to subroutine L of Figure 21 for concluding revisions for an MS Word type document.
Figure 23 is a sample flow chart of the subroutine I referenced in Figure 12 for copying a recognized image from the RHI memory and placing it in the document memory via a clipboard.
Figure 24 is a sample of code for subroutine N referenced in Figure 23 and Figure 37, for copying an image from the RHI memory into the clipboard.
Figure 25 is a sample of translated Visual Basic code for built-in macros referenced in the flow charts of Figure 26 to Figure 32 and Figure 37.
Figure 26 through Figure 32 are flow charts of subroutine J referenced in Figure 14 for the first three symbol operations illustrated in Figure 7 and according to the generic routines illustrated in Figure 8 for MS Word.
Figure 33 is a sample of code in Visual Basic for the subroutine M referenced in Figure 4 and Figure 5, for concluding embedding of the revisions for MS Word, according to the second embodiment using the running of programs.
Figure 34 is a sample of translated Visual Basic code for useful built-in macros in comment mode for MS Word.
Figure 35 provides examples of recorded macros translated into Visual Basic code that emulates some keyboard keys for MS Word.
Figure 36 is a flow chart of a process for checking if a handwritten character to be emulated as a keyboard keystroke exists in a table and thus can be emulated and, if so, for executing the relevant line of code that emulates the keystroke.
Figure 37 is a flow chart of an example for subroutine K in Figure 14 for copying a recognized image from the RHI memory and placing it in the document memory via the clipboard.
Figure 38 is an alternate block schematic diagram to the one illustrated in Figure 1, illustrating basic functional blocks and data flow according to another embodiment of the disclosed embodiments, using a touch screen.
Figure 39 is a schematic diagram of an integrated edited document made with the use of a wireless pad.
Figures 40A-40D illustrate an example of user interaction with the touch screen to insert a line.
Figures 41A-41C illustrate an example of use of the command to delete an object.
Figures 42A-42D illustrate an example of user interaction with the touch screen to change line length.
Figures 43A-43D illustrate an example of user interaction with the touch screen to change line angle.
Figures 44A-44D illustrate an example of user interaction with the touch screen to apply a radius to a line or to change the radius of an arc.
Figures 45A-45C illustrate an example of user interaction with the touch screen to make a line parallel to another line.
Figures 46A-46D illustrate an example of user interaction with the touch screen to add a fillet or an arc to an object.
Figures 47A-47D illustrate an example of user interaction with the touch screen to add a chamfer.
Figures 48A-48F illustrate an example of use of the command to trim an object.
Figures 49A-49D illustrate an example of user interaction with the touch screen to move an arced object.
Figures 50A-50D illustrate an example of use of the "no snap" command.
Figures 51A-51D illustrate another example of use of the "No Snap" command.
Figures 52A-52D illustrate another example of use of the command to trim an object.
Figure 53 is an example of a user interface with icons.
Figures 54A-54B illustrate an example of before and after interacting with a three-dimensional vector graphics representation of a cube on the touch screen.
Figures 54C-54D illustrate an example of before and after interacting with a three-dimensional vector graphics representation of a sphere on the touch screen.
Figures 54E-54F illustrate an example of before and after interacting with a three-dimensional vector graphics representation of a ramp on the touch screen.
Figures 55A-55B illustrate examples of user interface menus for text editing in selection mode.
Figure 56 illustrates an example of a gesture to mark text in command mode.
Figure 57 illustrates another example of a gesture to mark text in command mode.
Figures 58A-58B illustrate an example of automatically zooming text while drawing the gesture to mark text.
DETAILED DESCRIPTION
[0047] Referring to Figure 1, there is a block schematic diagram of an integrated document editor 10 according to a first embodiment, which illustrates the basic functional blocks and data flow according to that first embodiment. A digitizing pad 12 is used, with its writing area (e.g., within the margins of an 8-1/2" x 11" sheet) sized to accommodate standard sized papers and corresponding to the x-y locations of the edited page. Pad 12 receives data from a writing pen 10 (e.g., magnetically, or mechanically by way of pressure with a standard pen). Data from the digitizing pad 12 is read by a data receiver 14 as bitmap and/or vector data and is then stored corresponding to or referencing the appropriate x-y location in a data receiving memory 16. Optionally, this information can be displayed on the screen of a display 25 on a real-time basis to provide the writer with real-time feedback.
[0048] Alternatively, and as illustrated in Figure 38, a touch screen 11 (or other positional input receiving mechanism as part of a display), with its receiving and displaying mechanisms integrated, receives data from the writing pen 10, whereby the original document is displayed on the touch screen as it would have been displayed on a printed page placed on the digitizing pad 12, and the writing by the pen 10 occurs on the touch screen at the same locations as it would have been written on a printed page. Under this scenario, the display 25, pad 12 and data receiver 14 of Figure 1 are replaced with element 11, the touch screen and associated electronics of Figure 38, and elements 16, 18, 20, 22, and 24 are discussed hereunder with reference to Figure 1. Under the touch screen display alternative, writing paper is eliminated.
[0049] When a printed page is used with the digitizing pad 12, adjustments in registration of location may be required such that locations on the printed page correlate to the correct x-y locations for data stored in the data receiving memory 16.
[0050] The correlation between locations of the writing pen 10 (on the touch screen 11 or on the digitizing pad 12) and the actual x-y locations in the document memory 22 need not be perfectly accurate, since the location of the pen 10 is with reference to existing machine code data. In other words, there is a window of error around the writing point that can be allowed without loss of useful information, because it is assumed that the new handwritten information (e.g., revisions) must always correspond to a specific location of the pen, e.g., near text, a drawing or an image. This is similar to, but not always the same as, placing a cursor at an insertion point in a document and changing from command mode to data input mode. For example, the writing point may be between two lines of text but closer to one line of text than to the other. This window of error could be continuously computed as a function of the pen tapping point and the data surrounding the tapping point. In case of ambiguity as to the exact location where the new data are intended to be inserted (e.g., when the writing point overlaps multiple possible locations in the document memory 22), the touch screen 11 (or the pad 12) may generate a signal, such as a beeping sound, requesting the user to tap closer to the point where handwritten information needs to be inserted. If the ambiguity is still not resolved (when the digitizing pad 12 is used), the user may be requested to follow an adjustment procedure.
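By way of illustration only, the window-of-error computation might be sketched in Visual Basic roughly as follows; the routine name, the tolerance value, and the assumption that the candidate line positions are available as an array are hypothetical and are not taken from the figures.

' Illustrative sketch only: attribute a pen-tap y-coordinate to the nearest line
' of existing data, and signal ambiguity with a beep. Tolerance is assumed.
Function ResolveTapToLine(tapY As Single, lineY() As Single) As Long
    Const TOLERANCE As Single = 6            ' assumed window of error, in points
    Dim i As Long, best As Long
    Dim bestDist As Single, secondDist As Single
    best = -1: bestDist = 1E+30: secondDist = 1E+30
    For i = LBound(lineY) To UBound(lineY)
        If Abs(tapY - lineY(i)) < bestDist Then
            secondDist = bestDist
            bestDist = Abs(tapY - lineY(i))
            best = i
        ElseIf Abs(tapY - lineY(i)) < secondDist Then
            secondDist = Abs(tapY - lineY(i))
        End If
    Next i
    If bestDist > TOLERANCE Or (secondDist - bestDist) < 1 Then
        Beep                                 ' request that the user tap closer to the intended point
        ResolveTapToLine = -1
    Else
        ResolveTapToLine = best              ' index of the line the tap is attributed to
    End If
End Function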
[0051] If desired, adjustments may be made such that the writing area on the digitizing pad 12 is set to correspond to a specific active window (for example, in a multi-window screen), or to a portion of a window (i.e., when the active portion of a window covers a partial screen, e.g., an invoice or a bill of the accounting program QuickBooks), such that the writing area of the digitizing pad 12 is efficiently utilized. In situations where a document is a form (e.g., an order form), the paper document can be pre-set to the specific format of the form, such that the handwritten information can be entered at specific fields of the form (that correspond to these fields in the document memory 22). In addition, in operations that do not require archiving of the handwritten paper documents, handwritten information on the digitizing pad 12 may be deleted after it is integrated into the document memory 22. Alternatively, multi-use media that allow multiple deletions (that clear the handwritten information) can be used, although the touch screen alternative would be preferred over this alternative.
[0052] A recognition functionality element 18 reads information from the data receiving memory 16 and writes the recognition results, or recognized handwritten elements, into the recognized handwritten information (RHI) memory 20. Recognized handwritten information elements (RHI elements), such as characters, words, and symbols, are stored in the RHI memory 20. The location of an RHI element in the RHI memory 20 correlates to its location in the data receiving memory 16 and in the document memory 22. After symbols are recognized and interpreted as commands, they may be stored as images or icons in, for example, JPEG format (or they can be emulated as if they were keyboard keys; this technique will be discussed hereafter), since the symbols are intended to be intuitive. They can be useful for reviewing and interpreting revisions in the document. In addition, the recognized handwritten information prior to final incorporation (e.g., revisions for review) may be displayed either in handwriting (as is, or as revised machine code handwriting for improved readability) or in standard text.
[0053] An embedded criteria and functionality element 24 reads the information from the RHI memory 20 and embeds it into the document memory 22. Information in the document memory 22 is displayed on the display 25, which is, for example, a computer monitor or a display of a touch screen. The embedded functionality determines what to display and what is to be embedded into the document memory 22 based on the stage of the revision and selected user criteria/preferences. [0054] Embedding the recognized information into the document memory 22 can be applied either concurrently or after input of all handwritten information, such as revisions, has been concluded. Incorporation of the handwritten information concurrently can occur with or without user involvement. The user can indicate each time a handwritten command and its associated text and/or image has been concluded, and then it can be incorporated into the document memory 22 one at a time. (Incorporation of handwritten information concurrently without user involvement will be discussed hereafter.) The document memory 22 contains, for example, one of the following files: 1) a word processing file, such as an MS Word file or a WordPerfect file, 2) a spreadsheet, such as an Excel file, 3) a form such as a sales order, an invoice or a bill in accounting software (e.g., QuickBooks), 4) a table or a database, 5) a desktop publishing file, such as a QuarkXPress or a PageMaker file, or 6) a presentation file, such as an MS PowerPoint file.
[0055] It should be noted that the document could be any kind of electronic file, word processing document, spreadsheet, web page, form, e-mail, database, table, template, chart, graph, image, object, or any portion of these types of documents, such as a block of text or a unit of data. In addition, the document memory 22, the data receiving memory 16 and the RHI memory 20 could be any kind of memory or memory device or a portion of a memory device, e.g., any type of RAM, magnetic disk, CD-ROM, DVD-ROM, optical disk or any other type of storage. It should be further noted that one skilled in the art will recognize that the elements/components discussed herein (e.g., in Figures 1, 38, 9, 11, 13), such as the RHI element, may be implemented in any combination of electronic or computer hardware and/or software. For example, the disclosed embodiments could be implemented in software operating on a general-purpose computer or other types of computing/communication devices, such as hand-held computers, personal digital assistants (PDAs), cell phones, etc. Alternatively, a general-purpose computer may be interfaced with specialized hardware such as an Application Specific Integrated Circuit (ASIC) or some other electronic components to implement the disclosed embodiments. Therefore, it is understood that the disclosed embodiments may be carried out using various codes of one or more software modules forming a program and executed as instructions/data by, e.g., a central processing unit, or using hardware modules specifically configured and dedicated to perform the disclosed embodiments. Alternatively, the disclosed embodiments may be carried out using a combination of software and hardware modules.
[0056] The recognition functionality element 18 encompasses one or more of the following recognition approaches:
1) Character recognition, which can, for example, be used in cases where the user clearly writes each character in capital letters in an effort to minimize recognition errors;
2) A holistic approach, where recognition is globally performed on the whole representation of the words and there is no attempt to identify characters individually. (The main advantage of holistic methods is that they avoid word segmentation. Their main drawback is that they are tied to a fixed lexicon of word descriptions: since these methods do not rely on letters, words are directly described by means of features. Adding new words to the lexicon typically requires human training or the automatic generation of a word description from ASCII words.)
3) Analytical strategies that deal with several levels of representation corresponding to increasing levels of abstraction. (Words are not considered as a whole, but as sequences of smaller size units, which must be easily related to characters in order to make recognition independent from a specific vocabulary.)
[0057] Strings of words or symbols, such as those described in connection with Figure 7 and discussed hereafter, can be recognized by either the holistic approach or by the analytical strategies, although character recognition may be preferred. Units recognized as characters, words or symbols are stored into the RHI memory 20, for example in ASCII format. Units that are graphics are stored into the RHI memory as graphics, for example as a JPEG file. Units that could not be recognized as a character, word or symbol are interpreted as images if the application accommodates graphics and, optionally, if approved by the user as graphics, and are stored into the RHI memory 20 as graphics. It should be noted that units that could not be recognized as a character, word or symbol may not be interpreted as graphics in applications that do not accommodate graphics (e.g., Excel); in this scenario, user involvement may be required.
[0058] To improve the recognition functionality, data may be read from the document memory 22 by the recognition element 18 to verify that the recognized handwritten information does not conflict with data in the original document and to resolve or minimize, as much as possible, ambiguity retained in the recognized information. The user may also resolve ambiguity by approving/disapproving recognized handwritten information (e.g., revisions) shown on the display 25. In addition, adaptive algorithms (beyond the scope of this disclosure) may be employed. Thereunder, user involvement may be relatively significant at first, but as the adaptive algorithms learn the specific handwritten patterns and store them as historical patterns, future ambiguities should be minimized as recognition becomes more robust.
[0059] Figure 2 through Figure 5 are flow charts of operation according to an exemplary embodiment and are briefly explained herein below. The text in all of the drawings is herewith explicitly incorporated into this written description for the purposes of claim support. Figure 2 illustrates a program that reads the output of the digitizing pad 12 (or of the touch screen 11) each time the writing pen 10 taps on and/or leaves the writing surface of the pad 12 (or of the touch screen 11). Thereafter data is stored in the data receiving memory 16 (Step E). Both the recognition element and the data receiver (or the touch screen) access the data receiving memory. Therefore, during a read/write cycle by one element, access by the other element should be disabled.
[0060] Optionally, as illustrated in Figure 3, the program checks every few milliseconds to see if there is new data to read from the digitizing pad 12 (or from the touch screen 11). If so, data is received from the digitizing recognizer and stored in the data receiving memory 16 (E). This process continues until the user indicates that the revisions are concluded, or until there is a timeout.
[0061] Embedding of the handwritten information may be executed either all at once, according to procedures explained with reference to Figure 4, or concurrently, according to procedures explained with reference to Figure 5.
[0062] The recognition element 18 recognizes one unit at a time, e.g., a character, a word, a graphic or a symbol, and makes it available to the RHI processor and memory 20 (C). The functionality of this processor and the way in which it stores recognized units into the RHI memory will be discussed hereafter with reference to Figure 9. Units that are not recognized immediately are either dealt with at the end as graphics, or the user may indicate otherwise manually by other means, such as a selection table or keyboard input (F). Alternatively, graphics are interpreted as graphics if the user indicates when the writing of graphics begins and when it is concluded. Once the handwritten information is concluded, it is grouped into memory blocks, whereby each memory block contains all (as in Figure 4) or possibly partial (as in Figure 5) recognized information that is related to one handwritten command, e.g., a revision. The embedded function (D) then embeds the recognized handwritten information (e.g., revisions) in "for review" mode. Once the user approves/disapproves revisions, they are embedded in final mode (L) according to the preferences set up (A) by the user. In the examples illustrated hereafter, revisions in MS Word are embedded in Track Changes mode all at once. Also, in the examples illustrated hereafter, revisions in MS Word made according to Figure 4 may, for example, be useful when the digitizing pad 12 is separate from the rest of the system, whereby handwritten information from the digitizing pad internal memory may be downloaded into the data receiving memory 16 after the revisions are concluded, via a USB or other IEEE or ANSI standard port.
[0063] Figure 4 is a flow chart of the various steps whereby embedding "all" recognized handwritten information (such as revisions) into the document memory 22 is executed once "all" handwritten information is concluded. First, the Document Type is set up (e.g., Microsoft® Word or QuarkXPress), with software version and user preferences (e.g., whether to incorporate revisions as they are available or one at a time upon user approval/disapproval), and the various symbols preferred by the user for the various commands (such as for inserting text, for deleting text and for moving text around) (A). The handwritten information is read from the data receiving memory 16 and stored in the memory of the recognition element 18 (B). Information that is read from the receiving memory 16 is marked/flagged as read, or it is erased after it is read by the recognition element 18 and stored in its memory; this will insure that only new data is read by the recognition element 18.
[0064] Figure 5 is a flow chart of the various steps whereby embedding recognized handwritten information (e.g., revisions) into the document memory 22 is executed concurrently (e.g., with the making of the revisions). Steps 1-3 are identical to the steps of the flow chart in Figure 4 (discussed above). Once a unit, such as a character, a symbol or a word, is recognized, it is processed by the RHI processor 20 and stored in the RHI memory. A processor (GMB functionality 30 referenced in Figure 9) identifies it as either a unit that can be embedded immediately or not. It is checked whether it can be embedded (step 4.3); if it can be (step 5), it is embedded (D) and then (step 6) deleted or marked/updated as embedded (G). If it cannot be embedded (step 4.1), more information is read from the digitizing pad 12 (or from the touch screen 11). This process of steps 4-6 repeats and continues so long as handwritten information is forthcoming. Once all data is embedded (indicated by an End command or a simple timeout), units that could not be recognized are dealt with (F) in the same manner discussed for the flow chart of Figure 4. Finally, once the user approves/disapproves revisions, they are embedded in final mode (L) according to the preferences chosen by the user.
[0065] Figure 6 is an example of the various options and preferences available to the user to display the handwritten information in the various steps for MS Word. In "For Review" mode the revisions are displayed as "For Review" pending approval for "Final" incorporation. Revisions, for example, can be embedded in a "Track Changes" mode, and once approved/disapproved (as in "Accept/Reject changes"), they are embedded into the document memory 22 as "Final". Alternatively, symbols may be also displayed on the display 25. The
symbols are selectively chosen to be intuitive, and, therefore, can be useful for quick review of revisions. For the same reason, text revisions may be displayed either in handwriting as is, or as revised machine code handwriting for improved readability; in "Final" mode, all the symbols are erased, and the revisions are incorporated as an integral part of the document.
[0066] An example of a basic set of handwritten commands/symbols and their interpretation with respect to their associated data for making revisions in various types of documents is illustrated in Figure 7.
[0067] Direct access to specific locations is needed in the document memory 22 for read/write operations. Embedding recognized handwritten information from the RHI memory 20 into the document memory 22 (e.g., for incorporating revisions) may not be possible (or may be limited) for after-market applications. Each of the embodiments discussed below provides an alternate "back door" solution to overcome this obstacle.
Embodiment One: Emulating Keyboard Entries:
[0068] Command information in the RHI memory 20 is used to insert or revise data, such as text or images, in designated locations in the document memory 22, wherein the execution mechanisms emulate keyboard keystrokes and, when available, operate in conjunction with running pre-recorded and/or built-in macros assigned to sequences of keystrokes (i.e., shortcut keys). Data such as text can be copied from the RHI memory 20 to the clipboard and then pasted into designated locations in the document memory 22, or it can be emulated as keyboard keystrokes. This embodiment will be discussed hereafter.
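Purely as an illustrative sketch of this keystroke-emulation idea (not the code of the figures), the VBA SendKeys statement can stand in for an operating-system level key emulator; the routine below and its key sequences are assumptions.

' Illustrative sketch of Embodiment One: drive the active application entirely
' through emulated keystrokes (Cntrl+Home, Page Down, Arrow Down, then text).
Sub EmulateInsertText(pagesDown As Integer, linesDown As Integer, newText As String)
    Dim i As Integer
    SendKeys "^{HOME}", True                 ' go to the start of the document
    For i = 1 To pagesDown
        SendKeys "{PGDN}", True              ' Page Down to the target page
    Next i
    For i = 1 To linesDown
        SendKeys "{DOWN}", True              ' Arrow Down to the target line
    Next i
    SendKeys newText, True                   ' type the recognized text itself
End Sub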
Embodiment Two: Running Programs:
[0069] In applications such as Microsoft® Word, Excel and WordPerfect, where programming capabilities, such as VB Scripts and Visual Basic, are available, the commands and their associated data stored in the RHI memory 20 are translated to programs that embed them into the document memory 22 as intended. In this embodiment, the operating system clipboard can be used as a buffer for data (e.g., text and images). This embodiment will also be discussed hereafter.
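As a hedged illustration of Embodiment Two (the routine name and the page/line/column addressing are assumptions patterned on the memory block format described later, not the disclosed programs), a revision can be executed directly through the word processor's object model:

' Illustrative sketch of Embodiment Two: run a program that performs an
' "insert text" command through Word's object model at a page/line/column location.
Sub RunInsertTextCommand(pageNo As Long, lineNo As Long, colNo As Long, newText As String)
    Selection.GoTo What:=wdGoToPage, Which:=wdGoToAbsolute, Count:=pageNo
    Selection.GoTo What:=wdGoToLine, Which:=wdGoToRelative, Count:=lineNo - 1
    Selection.MoveRight Unit:=wdCharacter, Count:=colNo - 1
    Selection.TypeText Text:=newText         ' the recognized handwritten text
End Sub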
[0070] Information associated with a handwritten command as discussed in Embodiment One and Embodiment Two is either text or graphics
(image), although it could be a combination of text and graphics. In either embodiment, the clipboard can be used as a buffer.
For copy operations in the RHI memory:
A unit of text or an image is copied from a specific location indicated in the memory block in the RHI memory 20, to be inserted in a designated location in the document memory 22.
For Cut/Paste and for Paste operations within the document memory:
For moving text or an image around within the document memory 22, and for pasting text or an image copied from the RHI memory 20.
[0071] A key benefit of Embodiment One is its usefulness in a large array of applications, with or without programming capabilities, to execute commands relying merely on control keys and, when available, built-in or pre-recorded macros. When a control key, such as Arrow Up, or a simultaneous combination of keys, such as Cntrl-C, is emulated, a command is executed.
[0072] Macros cannot be run in Embodiment Two unless translated to actual low-level programming code (e.g., Visual Basic code). In contrast, running a macro in a control language native to the application (recorded and/or built-in) in Embodiment One is simply achieved by emulating its assigned shortcut key(s). Embodiment Two may be preferred over Embodiment One, for example in MS Word, if the Visual Basic Editor is used to create code that includes Visual Basic instructions that cannot be recorded as macros.
[0073] Alternatively, Embodiment Two may be used in conjunction with Embodiment One, whereby, for example, instead of moving text from the RHI memory 20 to the clipboard and then placing it in a designated location in the document memory 22, text is emulated as keyboard keystrokes. If desired, the keyboard keys can be emulated in Embodiment Two by writing a code for each key that, when executed, emulates a keystroke. Alternatively, Embodiment One may be implemented for applications with no programming capabilities, such as QuarkXPress, and Embodiment Two may be implemented for some of the applications that do have programming capabilities. Under this scenario, some applications with programming capabilities may still be implemented in Embodiment One or in both Embodiment One and Embodiment Two.
[0074] Alternatively, x-y locations in the data receiving memory 16 (as well as designated locations in the document memory 22) can be identified on a printout or on the display 25, and if desired, on the touch screen 11, based on: 1) recognition/identification of a unique text and/or image representation around the writing pen, and 2) searching for and matching the recognized/identified data around the pen with data in the original document, which may be converted into the bitmap and/or vector format that is identical to the format in which handwritten information is stored in the data receiving memory 16. Then handwritten information, along with its x-y locations correspondingly indexed in the document memory 22, is transmitted to a remote platform for recognition, embedding and displaying.
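By way of illustration only, in a Word-hosted implementation the matching step could be approximated with the object model's Find facility; the routine name and the assumption that the recognized context is available as plain text are hypothetical.

' Illustrative sketch: position the insertion point by searching the document
' for the text recognized around the writing pen.
Sub PlaceInsertionPointByContext(contextText As String)
    Dim rng As Range
    Set rng = ActiveDocument.Content
    With rng.Find
        .ClearFormatting
        .Text = contextText
        .Forward = True
        .Wrap = wdFindStop
        If .Execute Then
            rng.Collapse Direction:=wdCollapseEnd   ' insertion point just after the matched context
            rng.Select
        Else
            Beep                                    ' context not found; fall back to other cues
        End If
    End With
End Sub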
[0075] The data representation around the writing pen and the handwritten information are read by a miniature camera with attached circuitry that is built into the pen. The data representing the original data in the document memory 22 is downloaded into the pen internal memory prior to the commencement of handwriting, either via a wireless connection (e.g., Bluetooth) or via a physical connection (e.g., USB port).
[0076] The handwritten information along with its identified x-y locations is either downloaded into the data receiving memory 16 of the remote platform after the handwritten information is concluded (via a physical or wireless link), or it can be transmitted to the remote platform via a wireless link as the x-y location of the handwritten information is identified. Then, the handwritten information is embedded into the document memory 22 all at once (i.e., according to the flow chart illustrated in Figure 4), or concurrently (i.e., according to the flow chart illustrated in Figure 5). [0077] If desired, the display 25 may include pre-set patterns (e.g., engraved or silk-screened) throughout the display or at selected locations of the display, such that when read by the camera of the pen, the exact x-y location on the display 25 can be determined. The pre-set patterns on the display 25 can be useful to resolve ambiguities, for example when identical information around locations in the document memory 22 exists multiple times within the document.
[0078] Further, the tapping of the pen in selected locations of the touch screen 11 can be used to determine the x-y location in the document memory (e.g., when the user makes yes-no type selections within a form displayed on the touch screen). This, for example, can be performed on a tablet that can accept input from a pen or any other pointing device that functions as a mouse and writing instrument.
[0079] Alternatively (or in addition to a touch screen), the writing pen can emit a focused laser/IR beam to a screen with thermal or optical sensing, and the location of the sensed beam may be used to identify the x-y location on the screen. Under this scenario, the use of a pen with a built-in miniature camera is not needed. When a touch screen or a display with thermal/optical sensing is used (or when preset patterns on an ordinary display are used) to detect x-y locations on the screen, the designated x-y location in the document memory 22 can be determined based on: 1) the detected x-y location of the pen 10 on the screen, and 2) parameters that correlate the displayed data with the data in the document memory 22 (e.g., application name, cursor location on the screen and zoom percent).
[0080] Alternatively, the mouse could be emulated to place the insertion point at designated locations in the document memory 22 based on the x-y locations indicated in the data receiving memory 16. Then information from the RHI memory 20 can be embedded into the document memory 22 according to Embodiment One or Embodiment Two. Further, once the insertion point is at a designated location in the document memory 22, selection of text or an image within the document memory 22 may also be achieved by emulating the mouse pointer click operation.
Use of the Comments insertion feature:
[0081] The Comments feature of Microsoft® Word (or a similar comment-inserting feature in other program applications) may be employed by the user or automatically in conjunction with either of the approaches discussed above, and then handwritten information from the RHI memory 20 can be embedded into designated Comments fields of the document memory 22. This approach will be discussed further hereafter.
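For instance (an illustrative assumption rather than a quotation of any disclosed code), a recognized unit could be placed into a native Comments field through Word's object model:

' Illustrative sketch: drop recognized handwritten text into a Word comment
' anchored at the current insertion point.
Sub AddRecognizedComment(recognizedText As String)
    ActiveDocument.Comments.Add Range:=Selection.Range, Text:=recognizedText
End Sub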
Use of the Track Changes Feature:
[0082] Before embedding information into the document memory 22, the document type is identified and user preferences are set (A). The user may select to display revisions in the Track Changes feature. The Track Changes Mode of Microsoft® Word (or similar features in other applications) can be invoked by the user or automatically in conjunction with either or both of Embodiment One and Embodiment Two, and then handwritten information from the RHI memory 20 can be embedded into the document memory 22. After all revisions are incorporated into the document memory 22, they can be accepted for the entire document, or they can be accepted/rejected one at a time upon user command. Alternatively, they can be accepted/rejected at the making of the revisions.
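The following fragment is a minimal, hedged sketch of driving the Track Changes mode programmatically; it is not the code of the figures, and the routine names are assumptions.

' Illustrative sketch: embed a recognized revision under Track Changes, and
' later accept every revision to produce the "Final" document.
Sub EmbedRevisionForReview(newText As String)
    ActiveDocument.TrackRevisions = True     ' subsequent edits are shown "For Review"
    Selection.TypeText Text:=newText
End Sub

Sub AcceptAllRevisionsFinal()
    ActiveDocument.AcceptAllRevisions        ' incorporate all revisions as "Final"
End Sub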
[0083] The insertion mechanism may also be a plug-in that emulates the Track Changes feature. Alternatively, the Track Changes feature may be invoked after the Comments feature is invoked, such that revisions in the Comments fields are displayed as revisions, i.e., "For Review". This could be particularly useful for large documents reviewed/revised by multiple parties.
[0084] In another embodiment, the original document is read and converted into a document with a known accessible format (e.g., ASCII for text and JPEG for graphics) and stored into an intermediate memory location. All read/write operations are performed directly on it. Once revisions are completed, or before transmitting to another platform, it can be converted back into the original format and stored into the document memory 22.
[0085] As discussed, revisions are written on a paper document placed on the digitizing pad 12, whereby the paper document contains/resembles the machine code information stored in the document memory 22, and the x-y locations on the paper document correspond to the x-y locations in the document memory 22. In an alternative embodiment, the revisions can be made on a blank paper (or on another document), whereby the handwritten information, for example, is a command (or a set of commands) to write or revise a value/number in a cell of a spreadsheet, or to update new information in a specific location of a database; this can be useful, for example, in cases where an action to update a spreadsheet, a table or a database is needed after reviewing a document (or a set of documents). In this embodiment, the x-y location in the data receiving memory 16 is immaterial.
RHI processor and memory blocks
[0086] Before discussing the way in which information is embedded into the document memory 22 in greater detail with reference to the flow charts, it is necessary to define how recognized data is stored in memory and how it correlates to locations in the document memory 22. As previously explained, embedding the recognized information into the document memory 22 can be applied either concurrently or after all handwritten information has been concluded. The Embed function (D) referenced in Figure 4 reads data from memory blocks in the RHI memory 20 one at a time, each of which corresponds to one handwritten command and its associated text data or image data. The Embed function (D) referenced in Figure 5 reads data from memory blocks and embeds recognized units concurrently.
[0087] Memory blocks: An example of how a handwritten command and its associated text or image is defined in the memory block 32 is illustrated in Figure 10. This format may be expanded, for example, if additional commands are added, i.e., in addition to the commands specified in the Command field. The parameters defining the x-y location of recognized units (i.e., InsertionPoint1 and InsertionPoint2 in Figure 10) vary as a function of the application. For example, the x-y locations/insertion points of text or an image in MS Word can be defined with the parameters Page#, Line# and Column# (as illustrated in Figure 10). In the application Excel, the x-y locations can be translated into the cell location in the spreadsheet, i.e., Sheet#, Row# and Column#. Therefore, different formats for x-y InsertionPoint1 and x-y InsertionPoint2 need to be defined to accommodate a variety of applications.
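Purely as an illustration of the kind of record Figure 10 describes (the field names below are assumptions, not the figure's actual layout), a memory block for a Word-type document might be declared as:

' Illustrative sketch of a memory block for one handwritten command in a
' Word-type document; an Excel variant would carry Sheet#/Row#/Column# instead.
Private Type InsertionPoint
    PageNo As Long
    LineNo As Long
    ColumnNo As Long
End Type

Private Type MemoryBlock
    CommandName As String        ' e.g., "InsertText", "DeleteText", "MoveText"
    Point1 As InsertionPoint     ' x-y location translated to Page#/Line#/Column#
    Point2 As InsertionPoint     ' second location, used by commands such as "move text"
    TextData As String           ' associated recognized text, if any
    ImageFile As String          ' path or key of associated image data, if any
End Type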
[0088] Figure 9 is a chart of data flow of recognized units. These are discussed below.
[0089] FIFO (First In First Out) Protocol: Once a unit is recognized, it is stored in a queue, awaiting processing by the processor of element 20, and more specifically, by the GMB functionality 30. The "New Recog" flag (set to "One" by the recognition element 18 when a unit is available) indicates to the RU receiver 29 that a recognized unit (i.e., the next in the queue) is available. The "New Recog" flag is reset back to "Zero" after the recognized unit is read and stored in the memory elements 26 and 28 of Figure 9 (e.g., as in step 3.2 of the subroutines illustrated in Figure 4 and Figure 5). In response, the recognition element 18: 1) makes the next recognized unit available to be read by the RU receiver 29, and 2) sets the "New Recog" flag back to "One" to indicate to the RU receiver 29 that the next unit is ready. This process continues so long as recognized units are forthcoming. This protocol insures that the recognition element 18 is in synch with the speed with which recognized units are read from the recognition element and stored in the RHI memory (i.e., in memory elements 26 and 28 of Figure 9). For example, when handwritten information is processed concurrently, there may be more than one memory block available before the previous memory block is embedded into the document memory 22.
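A toy rendering of this handshake is given below; the shared flag, the loop structure and the ReadNextRecognizedUnit helper are assumptions introduced only to make the protocol concrete.

' Illustrative sketch of the "New Recog" handshake on the receiver side.
Public NewRecog As Boolean       ' set to True by the recognizer when a unit is ready

Sub RUReceiverLoop()
    Do
        If NewRecog Then
            ReadNextRecognizedUnit   ' assumed helper: store the unit in elements 26 and 28
            NewRecog = False         ' acknowledge receipt; the recognizer may publish the next unit
        End If
        DoEvents                     ' yield so the recognizer can run
    Loop
End Sub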
[0090] In a similar manner, this FIFO technique may also be employed between elements 24 and 22 and between elements 16 and 18 of Figure 1 and Figure 38, and between elements 14 and 12 of Figure 1, to ensure that independent processes are well synchronized, regardless of the speed by which data is made available by one element and the speed by which data is read and processed by the other element.
[0091] Optionally, the "New Recog" flag could be implemented in h/w (such as within an IC), for example, by setting a line to "High" when a recognized unit is available and to "Low" after the unit is read and stored, i.e., to acknowledge receipt.
[0092] Process 1: As a unit, such as a character, a symbol or a word, is recognized: 1) it is stored in the Recognized Units (RU) Memory 28, and 2) its location in the RU memory 28, along with its x-y location as indicated in the data receiving memory 16, is stored in the XY-RU Location to Address in RU table 26. This process continues so long as handwritten units are recognized and forthcoming. [0093] Process 2: In parallel to Process 1, the grouping into memory blocks (GMB) functionality 30 identifies each recognized unit, such as a character, a word or a handwritten command (symbols or words), and stores them in the appropriate locations of the memory blocks 32. In operations such as "moving text around", "increasing font size" or "changing color", an entire handwritten command must be concluded before it can be embedded into the document memory 22. In operations such as "deleting text" or "inserting new text", deleting or embedding the text can begin as soon as the command has been identified, and the deletion (or insertion of text) operation can then continue concurrently as the user continues to write on the digitizing pad 12 (or on the touch screen 11).
[0094] In this last scenario, as soon as the recognized unit(s) is incorporated into (or deleted from) the document memory 22, it is deleted from the RHI memory 20, i.e., from the memory elements 26, 28 and 32 of Figure 9. If deletion is not desired, embedded units may be flagged as "incorporated/embedded" or moved to another memory location (as illustrated in step 6.2 of the flow chart in Figure 5). This should insure that information in the memory blocks is continuously current with new unincorporated information.
[0095] Process 3: As unit(s) are grouped into memory blocks, 1) the identity of the recognized units (whether they can be immediately incorporated or not) and 2) the locations of the units that can be incorporated in the RHI memory are continuously updated.
[0096] 1. As units are grouped into memory blocks, a flag (i.e., the "Identity" flag) is set to "One" to indicate when unit(s) can be embedded. It should be noted that this flag is defined for each memory block and that it could be set more than one time for the same memory block (for example, when the user strikes through a line of text). This flag is checked in steps 4.1-4.3 of Figure 5 and is reset to "Zero" after the recognized unit(s) is embedded, i.e., in step 6.1 of the subroutine in Figure 5, and at initialization. It should be noted that the "Identity" flag discussed above is irrelevant when all recognized units associated with a memory block are embedded all at once; under this scenario, and after the handwritten information is concluded, recognized, grouped and stored in the proper locations of the RHI memory, the "All Units" flag in step 6.1 of Figure 4 will be set to "One" by the GMB functionality 30 of Figure 9, to indicate that all units can be embedded.
[0097] 2. As units are grouped into memory blocks, a pointer for the memory block, i.e., the "Next memory block pointer" 31, is updated every time a new memory block is introduced (i.e., when a recognized unit(s) that is not yet ready to be embedded is introduced; when the "Identity" flag is "Zero"), and every time a memory block is embedded into the document memory 22, such that the pointer will always point to the location of the memory block that is ready (when it is ready) to be embedded. This pointer indicates to the subroutines Embedd1 (of Figure 12) and Embedd2 (of Figure 14) the exact location of the relevant memory block with the recognized unit(s) that is ready to be embedded (as in step 1.2 of these subroutines).
[0098] An example of a scenario under which the "Next memory block pointer" 31 is updated is when a handwritten input related to changing font size has begun, then another handwritten input related to changing colors has begun (note that these two commands cannot be incorporated until after they are concluded), and then another handwritten input for deleting text has begun (note that this command may be embedded as soon as the GMB functionality identifies it). [0099] The value in the "# of memory blocks" 33 indicates the number of memory blocks to be embedded. This element is set by the GMB functionality 30 and used in step 1.1 of the subroutines illustrated in Figure 12 and Figure 14. This counter is relevant when the handwritten information is embedded all at once after its conclusion, i.e., when the subroutines of Figure 12 and Figure 14 are called from the subroutine illustrated in Figure 4 (it is not relevant when they are called from the subroutine in Figure 5; its value then is set to "One", since in this embodiment memory blocks are embedded one at a time).
Embodiment One
[0100] Figure 11 is a block schematic diagram illustrating the basic functional blocks and data flow according to Embodiment One. The text of these and all other figures is largely self-explanatory and need not be repeated herein. Nevertheless, the text thereof may be the basis of claim language used in this document.
[0101] Figure 12 is a flow chart example of the Embed subroutine D referenced in Figure 4 and Figure 5 according to Embodiment One. The following is to be noted.
[0102] 1. When this subroutine is called by the routine illustrated in Figure 5 (i.e., when handwritten information is embedded concurrently): 1) the memory block counter (in step 1.1) is set to 1, and 2) the memory block pointer is set to the location in which the current memory block to be embedded is located; this value is defined in the memory block pointers element (31) of Figure 9.
[0103] 2. When this subroutine is called by the subroutine illustrated in Figure 4 (i.e., when all handwritten information is embedded after all handwritten information is concluded): 1) the memory block pointer is set to the location of the first memory block to be embedded, and 2) the memory block counter is set to the value in the # of memory blocks element (33) of Figure 9.
[0104] In operation, memory blocks 32 are fetched one at a time from the RHI memory 20 (G) and processed as follows:
Memory blocks related to text revisions (H):
[0105] Commands are converted to keystrokes (35) in the same sequence as the operation is performed via the keyboard and are then stored in sequence in the keystrokes memory 34. The emulate keyboard element 36 uses this data to emulate the keyboard, such that the application reads the data as if it were received from the keyboard (although this element may include additional keys not available via a keyboard, such as the symbols illustrated in Figure 7, e.g., for insertion of new text in an MS Word document). The clipboard 38 can handle insertion of text, or text can be emulated as keyboard keystrokes. The lookup tables 40 determine the appropriate control key(s) and keystroke sequences for pre-recorded and built-in macros that, when emulated, execute the desired command. These keyboard keys are application-dependent and are a function of parameters such as application name, software version and platform. Some control keys, such as the arrow keys, execute the same commands in a large array of applications; however, this assumption is excluded from the design in Figure 11, i.e., by the inclusion of the command-keystrokes lookup table in element 40 of Figure 11. Nevertheless, in the flow charts in Figures 15-20, it is assumed that the following control keys execute the same commands (in the applications that are included): "Page Up", "Page Down", "Arrow Up", "Arrow Down", "Arrow Right" and "Arrow Left" (for moving the insertion point within the document), "Shift + Arrow Right" (for selection of text), and "Delete" (for deleting selected text). Element 40 may include lookup tables for a large array of applications, although it could include tables for one or any desired number of applications.
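One plausible, purely illustrative shape for the command-keystrokes lookup of element 40 is a per-application table; the command names and key sequences below are assumptions, apart from the QuarkXPress/MS Word Cntrl-K contrast noted later in this description.

' Illustrative sketch of the command-keystrokes lookup (element 40): the same
' logical command maps to different key sequences in different applications.
Function LookupKeystrokes(appName As String, cmdName As String) As String
    Select Case appName & "|" & cmdName
        Case "MS Word|DeleteSelection":  LookupKeystrokes = "{DELETE}"
        Case "MS Word|Copy":             LookupKeystrokes = "^c"
        Case "MS Word|Paste":            LookupKeystrokes = "^v"
        Case "QuarkXPress|DeleteItem":   LookupKeystrokes = "^k"   ' Cntrl-K deletes an item
        Case Else:                       LookupKeystrokes = ""     ' command not in the table
    End Select
End Function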
Memory blocks related to new image (I):
[0106] The image (graphic) is first copied from the RHI memory 20, more specifically, based on information in the memory block 32, into the clipboard 38. Its designated location is located in the document memory 22 via a sequence of keystrokes (e.g., via the arrow keys). It is stored (i.e., pasted from the clipboard 38 by the keystroke sequence Cntrl-V) into the document memory 22. If the command involves another operation, such as "Reduce Image Size" or "Move Image", the image is first identified in the document memory 22 and selected. Then the operation is applied by the appropriate sequences of keystrokes.
[0107] Figure 15 through Figure 20, the flow charts of the subroutines H referenced in Figure 12, illustrate execution of the first three basic text revisions discussed in connection with Figure 8 for MS Word and other applications. These flow charts are self-explanatory and are therefore not further described herein but are incorporated into this text. The following points are to be noted with reference to the function StartOfDocEmb1 illustrated in the flow chart of Figure 15:
[0108] 1. This function is called by the function SetPointeremb1, illustrated in Figure 16.
[0109] 2. Although, in many applications (including MS Word), the shortcut key combination "Cntrl+Home" will bring the insertion point to the start of the document, this routine was written to execute the same operation with the arrow keys.
[0110] 3. Designated x-y locations in the document memory 22 in this subroutine are defined based on Page#, Line# & Column#; other subroutines are required when the x-y definition differs.
[0111] Once all revisions are embedded, they are incorporated in final mode according to the flow chart illustrated in Figure 21 or according to the flow chart illustrated in Figure 22. In this implementation example, the Track Changes feature is used to "Accept All Changes", which embeds all revisions as an integral part of the document.
[0112] As discussed above, a basic set of keystroke sequences can be used to execute a basic set of commands for creation and revision of a document in a large array of applications. For example, the arrow keys can be used for jumping to a designated location in the document. When these keys are used in conjunction with the Shift key, a desired text/graphic object can be selected. Further, clipboard operations, i.e., the typical combined keystroke sequences Cntrl-X (for Cut), Cntrl-C (for Copy) and Cntrl-V (for Paste), can be used for basic edit/revision operations in many applications. It should be noted that, although a relatively small number of keyboard control keys are available, the design of an application at the OEM level is unlimited in this regard (see, for example, Figures 1-5). It should also be noted that the same key combination could execute different commands. For example, deleting an item in QuarkXPress is achieved by the keystrokes Cntrl-K, whereas the keystrokes Cntrl-K in MS Word open a hyperlink. Therefore, the ConvertText1 function H determines the keyboard keystroke sequences for command data stored in the RHI memory by accessing the command-keystrokes lookup table 40 of Figure 11.
The Use of Macros:
[0113] Execution of handwritten commands in applications such as Microsoft® Word, Excel and WordPerfect is enhanced with the use of macros. This is because sequences of keystrokes that can execute desired operations may simply be recorded and assigned to shortcut keys. Once the assigned shortcut key(s) are emulated, the recorded macro is executed. Below are some useful built-in macros for Microsoft® Word. For simplification, they are grouped based on the operations used to embed handwritten information (D).
[0114] Bringing the insertion point to a specific location in the document:
CharRight, CharLeft, LineUp, LineDown, StartOfDocument, StartOfLine, EndOfDocument, EndOfLine, EditGoto, GotoNextPage, GotoNextSection, GotoPreviousPage, GotoPreviousSelection, GoBack
[0115] Selection:
CharRightExtend, CharLeftExtend, LineDownExtend, LineUpExtend, ExtendSelection, EditFind, EditReplace
[0116] Operations on selected text/graphic:
EditClear, EditCopy, EditCut, EditPaste, CopyText, FontColors, FontSizeSelect, GrowFont, ShrinkFont, GrowFontOnePoint, ShrinkFontOnePoint, AllCaps, SmallCaps, Bold, Italic, Underline, UnderlineColor, UnderlineStyle, WordUnderline, ChangeCase, DoubleStrikethrough, Font, FontColor, FontSizeSelect
[0117] Displaying revisions:
Hidden, Magnifier, Highlight, DocAccent, CommaAccent, DottedUnderline, DoubleUnderline, DoubleStrikethrough, HtmlSourceRefresh, InsertFieldChar (for enclosing a symbol for display), ViewMasterDocument, ViewPage, ViewZoom, ViewZoom100, ViewZoom200, ViewZoom75
[0118] Images:
InsertFrame, InsertObject, InsertPicture, EditCopyPicture, EditCopyAsPicture, EditObject, InsertDrawing, InsertFrame, InsertHorizontalLine
[0119] File operations:
FileOpen, FileNew, FileNewDefault, DocClose, FileSave, SaveTemplate
[0120] If a macro has no shortcut key assigned to it, one can be assigned by the following procedure:
[0121] Clicking on the Tools menu and selecting Customize causes the Customize form to appear. Clicking on the Keyboard button brings up the Customize Keyboard dialog box. In the Categories box all the menus are listed, and in the Commands box all their associated commands are listed. Assigning a shortcut key to a specific macro can be done simply by selecting the desired built-in macro in the Commands box and pressing the desired shortcut keys.
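The same assignment can also be scripted; the macro name below is hypothetical, while CustomizationContext, BuildKeyCode and KeyBindings.Add are the standard Word VBA mechanism for binding shortcut keys.

' Illustrative sketch: bind Cntrl+Shift+I to a (hypothetical) macro so that the
' macro can later be triggered simply by emulating that shortcut key.
Sub AssignShortcutToMacro()
    CustomizationContext = NormalTemplate
    KeyBindings.Add KeyCategory:=wdKeyCategoryMacro, _
        Command:="EmbedRecognizedUnit", _
        KeyCode:=BuildKeyCode(wdKeyControl, wdKeyShift, wdKeyI)
End Sub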
[0122] Combinations of macros can be recorded as a new macro; the new macro runs whenever the sequence of keystrokes that is assigned to it is emulated. In the same manner, a macro in combination with keystrokes (e.g., of arrow keys) may be recorded as a new macro. It should be noted that recording of some sequences as a macro may not be permitted.
[0123] The use of macros, as well as the assignment of a sequence of keys to macros, can also be done in other word processors, such as WordPerfect.
[0124] Emulating a keyboard key 36 in applications with built-in programming capability, such as Microsoft® Word, can be achieved by running code that is equivalent to pressing that keyboard key. Referring to Figure 35 and Figure 36, details of this operation are presented. The text thereof is incorporated herein by reference. Otherwise, emulating the keyboard is a function that can be performed in conjunction with Windows or other computer operating systems.
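In the spirit of Figures 35 and 36, but without reproducing them, the mapping from a key name to equivalent object-model code might be sketched as follows; the key names and the choice of Selection methods are assumptions.

' Illustrative sketch: execute the object-model equivalent of a keyboard key,
' so that a control key or character can be "emulated" by running code.
Sub EmulateKey(keyName As String)
    Select Case keyName
        Case "ArrowRight": Selection.MoveRight Unit:=wdCharacter, Count:=1
        Case "ArrowLeft":  Selection.MoveLeft Unit:=wdCharacter, Count:=1
        Case "ArrowUp":    Selection.MoveUp Unit:=wdLine, Count:=1
        Case "ArrowDown":  Selection.MoveDown Unit:=wdLine, Count:=1
        Case "Delete":     Selection.Delete Unit:=wdCharacter, Count:=1
        Case Else:         Selection.TypeText Text:=keyName   ' ordinary character key
    End Select
End Sub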
Embodiment Two
[0125] Figure 13 is a block schematic diagram illustrating the basic functional blocks and data flow according to Embodiment Two. Figure 14 is a flow chart example of the Embed function D referenced in Figure 4 and in Figure 5 according to Embodiment Two. Memory blocks are fetched from the RHI memory 20 (G) and processed. Text of these figures is incorporated herein by reference. The following should be noted with respect to Figure 14:
[0126] 1. When this subroutine is called by the routine illustrated in Figure 5 (i.e., when handwritten information is embedded concurrently): 1) memory block counter (in step 1.1 below) is set to 1, and 2) memory block pointer is set to the location in which the current memory block to be embedded is located; this value is defined in memory block pointers element (31) of Figure 9.
[0127] 2. When this subroutine is called by the subroutine illustrated in Figure 4 (i.e., when all handwritten information is embedded after all handwritten information is concluded): 1) memory block pointer is set to the location of the first memory block to be embedded, and 2) memory block counter is set to the value in the # of memory blocks element (33) of Figure 9.
[0128] A set of programs executes the commands defined in the memory blocks 32 of Figure 9, one at a time. Figure 26 through Figure 32, with text incorporated herein by reference, are flow charts of the subroutine J referenced in Figure 14. The programs depicted execute the first three basic text revisions discussed in Figure 8 for MS Word. These sub-routines are self-explanatory and are not further explained here, but the text is incorporated by reference.
[0129] Figure 33 is the Visual Basic code that embeds the information in Final Mode, i.e., "Accept All Changes" of Track Changes, which makes all revisions an integral part of the document.
[0130] Each of the macros referenced in the flow charts of Figure 26 through Figure 32 needs to be translated into executable code such as VBScript or Visual Basic code. If there is uncertainty as to which method or property to use, the macro recorder typically can translate the recorded actions into code. The Visual Basic translation of these macros is illustrated in Figure 25.
[0131] The clipboard 38 can handle the insertion of text into the document memory 22, or text can be emulated as keyboard keystrokes. (Refer to Figures 35-36 for details.) As in Embodiment One, an image operation (K) such as copying an image from the RHI memory 20 to the document memory 22 is executed as follows: an image is first copied from the RHI memory 20 into the clipboard 38. Its designated location in the document memory 22 is located. Then it is pasted via the clipboard 38 into the document memory 22.
[0132] The selection of a program by the program selection and execution element 42 is a function of the command, the application, software version, platform, and the like. Therefore, the ConvertText2 function J selects a specific program for command data that are stored in the RHI memory 20 by accessing the lookup command-programs table 44. Programs may also be initiated by events, e.g., when opening or closing a file, or by a key entry, e.g., when bringing the insertion point to a specific cell of a spreadsheet by pressing the Tab key.
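A minimal sketch of the lookup command-programs table idea described above follows; the function and key names are illustrative only and are not taken from the patent. The same recognized command is dispatched to a different program depending on the target application.

def delete_selection_word(context):
    # Program that deletes the current selection in a word processing document.
    context["selection"] = ""
    return context

def delete_selection_spreadsheet(context):
    # Program that clears the currently selected cell of a spreadsheet.
    context["cells"][context["active_cell"]] = None
    return context

# Lookup command-programs table: (command, application) -> program
COMMAND_PROGRAMS = {
    ("delete", "MS Word"): delete_selection_word,
    ("delete", "Excel"):   delete_selection_spreadsheet,
}

def execute(command, application, context):
    program = COMMAND_PROGRAMS.get((command, application))
    if program is None:
        raise KeyError(f"no program registered for {command!r} in {application!r}")
    return program(context)

print(execute("delete", "Excel", {"cells": {"B2": 45}, "active_cell": "B2"}))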
[0133] In Microsoft® Word, the Visual Basic Editor can be used to create very flexible, powerful macros that include Visual Basic instructions that cannot be recorded from the keyboard. The Visual Basic Editor provides additional assistance, such as reference information about objects and their properties and behavior.
Working with the Comment feature as an insertion mechanism
[0134] Incorporating the handwritten revisions into the document through the Comment feature may be beneficial in cases where the revisions are mainly insertions of new text into designated locations, or when a plurality of revisions in various designated locations in the document needs to be indexed to simplify future access to the revisions; this can be particularly useful for large documents under review by multiple parties. Each comment can be further loaded into a sub-document which is referenced by a comment # (or a flag) in the main document. The Comments mode can also work in conjunction with the Track Changes mode.
[0135] For Embodiment One: Insert Annotation can be achieved by emulating the keystroke sequence Alt+Cntrl+M. The Visual Basic translated code for the recorded macro with this sequence is "Selection.Comments.Add Range:=Selection.Range", which could be used to achieve the same result in Embodiment Two.
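As an illustrative sketch only (assuming, as above, that Word is driven from Python over COM with pywin32; the document contents are hypothetical), the Visual Basic statement quoted above has a direct automation equivalent, which inserts the recognized handwritten text as a comment anchored to the current selection:

import win32com.client

word = win32com.client.Dispatch("Word.Application")
word.Visible = True
doc = word.Documents.Add()

word.Selection.TypeText("Sentence that will receive a reviewer comment.")
word.Selection.HomeKey(5)          # 5 = wdLine
word.Selection.MoveRight(1, 8, 1)  # select the first characters; 1 = wdCharacter / wdExtend
# Equivalent of: Selection.Comments.Add Range:=Selection.Range
doc.Comments.Add(word.Selection.Range, "Handwritten revision inserted as a comment")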
[0136] Once in Comment mode, revisions in the RHI memory 20 can be incorporated into the document memory 22 as comments. If the text includes revisions, the Track Changes mode can be invoked prior to insertion of text into a comment pane.
[0137] Useful built-in macros for use in the Comment mode of MS Word:
GotoCommentScope ;highlight the text associated with a comment reference mark
GotoNextComment ;jump to the next comment in the active document
GotoPreviousComment ;jump to the previous comment in the active document
InsertAnnotation ;insert comment
DeleteAnnotation ;delete comment
ViewAnnotation ;show or hide the comment pane
[00135] The above macros can be used in Embodiment One by emulating their shortcut keys, or in Embodiment Two with their translated code in Visual Basic. Figure 34 provides the translated Visual Basic code for each of these macros.
Spreadsheets, forms and Tables
[00136] Embedding handwritten information in a cell of a spreadsheet or a field in a form or a table can either be for new information or for revising existing data (e.g., deletion, moving data between cells, or adding new data in a field). Either way, after the handwritten information is embedded in the document memory 22, it can cause the application (e.g., Excel) to change parameters within the document memory 22, e.g., when the embedded information in a cell is a parameter of a formula in a spreadsheet which, when embedded, changes the output of the formula, or when it is the price of an item in a Sales Order which, when embedded, changes the subtotal of the Sales Order. If desired, these new parameters may be read by the embed functionality 24 and displayed on the display 25 to provide the user with useful information such as new subtotals, spell check output, or the stock status of an item (e.g., as a sales order is filled in).
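An illustrative sketch of this read-back of recomputed parameters follows (the workbook layout, cell addresses and values are assumptions for the example and are not taken from the patent); Excel is driven over COM with pywin32 so that dependent formulas are recomputed by the application itself once the handwritten price is embedded:

import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = True
wb = excel.Workbooks.Add()
ws = wb.Worksheets(1)

ws.Range("B2").Value = 3               # quantity already present in the order
ws.Range("B3").Formula = "=B1*B2"      # subtotal formula dependent on the price
ws.Range("B1").Value = 45              # embedded handwritten price of the item
subtotal = ws.Range("B3").Value        # recomputed automatically by Excel
print("New subtotal for display:", subtotal)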
[0138] As discussed, the x-y location in the document memory 22 for a word processing type document can, for example, be defined by page #, line # and character # (see Figure 10, x-y locations for InsertionPoint1 and InsertionPoint2). Similarly, the x-y location in the document memory 22 for a form, table or a spreadsheet can, for example, be defined based on the location of a cell/field within the document (e.g., column #, row # and page # for a spreadsheet). Alternatively, it can be defined based on the number of Tab and/or Arrow keys from a given known location. For example, a field in a Sales Order in the accounting application QuickBooks can be defined based on the number of Tab keys from the first field (i.e., "customer:job") in the form.
[0139] The embed functionality can read the x-y information (see step 2 in the flow charts referenced in Figures 12 and 14), and then bring the insertion point to the desired location according to Embodiment One (see example flow charts referenced in Figures 15-16), or according to Embodiment Two (see example flow charts for MS Word referenced in Figure 26). Then the handwritten information can be embedded. For example, for a Sales Order in QuickBooks, emulating the keyboard key combination "Cntrl+J" will bring the insertion point to the first field, "customer:job"; then, emulating three Tab keys will bring the insertion point to the "Date" field, or emulating eight Tab keys will bring the insertion point to the field of the first "Item Code".
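A hedged sketch of this keystroke emulation follows (QuickBooks itself exposes no programming interface, so Embodiment One emulates raw keystrokes; the window title and delays are assumptions for the example). One widely available Windows mechanism is the WScript.Shell SendKeys facility, driven here from Python; the key sequences are those given in the text above:

import time
import win32com.client

shell = win32com.client.Dispatch("WScript.Shell")
shell.AppActivate("QuickBooks")   # bring the target application to the foreground
time.sleep(0.5)
shell.SendKeys("^j")              # Cntrl+J: insertion point to the "customer:job" field
time.sleep(0.2)
shell.SendKeys("{TAB 3}")         # three Tab keys: insertion point to the "Date" field
shell.SendKeys("03/02/05")        # embed the recognized handwritten date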
[0140] The software application QuickBooks has no macro or programming capabilities. Forms (e.g., a Sales Order, a Bill, or a Purchase Order) and Lists (e.g., the Chart of Accounts and the customer:job list) in QuickBooks can be invoked either via pull-down menus from the toolbar or via a shortcut key. Therefore, Embodiment One could be used to emulate keyboard keystrokes to invoke a specific form or a specific list. For example, invoking a new invoice can be achieved by emulating the keyboard key combination "Cntrl+N", and invoking the Chart of Accounts list can be achieved by emulating the keyboard key combination "Cntrl+A". Invoking a Sales Order, which has no associated shortcut key defined, can be achieved by emulating the following keyboard keystrokes:
1 . "Alt+C" ;brings the pull-down menu from the toolbar menu related to
"Customers"
2. "Alt+O" ; Invokes a new sales order form
[0141] Once a form is invoked, the insertion point can be brought to the specified x-y location, and then the recognized handwritten information (i.e., command(s) and associated text) can be embedded.
[0142] As far as the user is concerned, he can either write the information (e.g., for posting a bill) on a pre-set form (e.g., in conjunction with the digitizing pad 12 or touch screen 11) or specify commands related to the operation desired. Parameters, such as the type of entry (a form or a command), the order for entering commands, and the setup of the form are selected by the user in step 1, "Document Type and Preferences Setup" (A), illustrated in Figure 4 and in Figure 5.
[0143] For example, the following sequence of handwritten commands will post a bill for a purchase of office supplies at OfficeMax on 03/02/05, for a total of $45 (an illustrative parsing sketch follows the example below). The parameter "office supply", which is the account associated with the purchase, may be omitted if the vendor OfficeMax has already been set up in QuickBooks. Information can be read from the document memory 22 and, based on this information, the embed functionality 24 can determine if the account has previously been set up or not, and report the result on the display 25. This, for example, can be achieved by attempting to cut information from the "Account" field (i.e., via the clipboard), assuming the account is already set up. The data in the clipboard can be compared with the expected results and, based on that, output for the display can be generated.
Bill
03/02/05
OfficeMax
$45
Office supply
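A minimal parsing sketch for the handwritten command sequence above follows, assuming the fixed field order shown (form type, date, vendor, amount, optional account); the field names are illustrative only:

def parse_bill(lines):
    record = {
        "form":   lines[0],
        "date":   lines[1],
        "vendor": lines[2],
        "amount": float(lines[3].lstrip("$")),
    }
    if len(lines) > 4:          # the account may be omitted if the vendor is set up
        record["account"] = lines[4]
    return record

print(parse_bill(["Bill", "03/02/05", "OfficeMax", "$45", "Office supply"]))
# {'form': 'Bill', 'date': '03/02/05', 'vendor': 'OfficeMax',
#  'amount': 45.0, 'account': 'Office supply'}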
[00143] In applications such as Excel, either or both of Embodiment One and Embodiment Two can be used to bring the insertion point to the desired location and to embed recognized handwritten information.
APPLICATION EXAMPLES
Wireless Pad
[00144] A wireless pad can be used for transmission of an integrated document to a computer and, optionally, for receiving back information related to the transmitted information. It can be used, for example, in the following scenarios:
1- Filling out a form at a doctor's office
2- Filling out an air waybill for shipping a package
3- Filling out an application for a driver's license at the DMV
4- Serving a customer at a car rental agency or at a retail store.
5- Taking notes at a crime scene or at an accident site
6- Order taking off-site, e.g., at conventions.
[0144] Handwritten information can be inserted in designated locations in a pre-designed document such as an order form, an application, a table or an invoice, on top of a digitizing pad 12 or using a touch screen 11 or the like. The pre-designed form is stored in a remote or a close-by computer. The handwritten information can be transmitted via a wireless link concurrently to a receiving computer. The receiving computer will recognize the handwritten information, interpret it and store it in machine code in the pre-designed document. Optionally, the receiving computer will prepare a response and transmit it back to the transmitting pad (or touch screen), e.g., to assist the user.
[0145] For example, information filled out on the pad 12 in an order form at a convention can be transmitted to an accounting program or a database residing in a close-by or remote server computer as the information is written. In turn, the program can check the status of an item, such as cost, price and stock status, and transmit information in real-time to assist the order taker. When the order taker indicates that the order has been completed, a sales order or an invoice can be posted in the remote server computer.
[0146] Figure 39 is a schematic diagram of an Integrated Edited Document System shown in connection with the use of a Wireless Pad. The Wireless Pad comprises a digitizing pad 12, display 25, data receiver 48, processing circuitry 60, transmission circuitry I 50, and receiving circuitry II 58. The digitizing pad receives tactile positional input from a writing pen 10. The transmission circuitry I 50 takes data from the digitizing pad 12 via the data receiver 48 and supplies it to receiving circuitry I 52 of a remote processing unit. The receiving circuitry II 58 captures information from display processing 54 via transmission circuitry II 56 of the remote circuitry and supplies it to processing circuitry 60 for the display 25. The receiving memory I 52 communicates with the data receiving memory 16, which interacts with the recognition module 18 as previously explained, which in turn interacts with the RHI processor and memory 20 and the document memory 22. The embedded criteria and functionality element 24 interacts with the elements 20 and 22 to modify the subject electronic document and communicate output to the display processing unit 54.
Remote Communication
[00148] In a communication between two or more parties at different locations, handwritten information can be incorporated into a document: the information can be recognized, converted into machine-readable text and images, and incorporated into the document as "For Review". As discussed in connection with Figure 6 (as an exemplary embodiment for an MS Word type document), "For Review" information can be displayed in a number of ways. The "For Review" document can then be sent to one or more receiving parties (e.g., via email). The receiving party may approve portions or all of the revisions and/or revise further in handwriting (as the sender has done) via the digitizing pad 12, via the touch screen 11 or via a wireless pad. The document can then be sent again "For Review". This process may continue until all revisions are incorporated/concluded.
Revisions via Fax
[00149] Handwritten information on a page (with or without machine-printed information) can be sent via fax, and the receiving facsimile machine, enhanced as a Multiple Function Device (printer/fax, character-recognizing scanner), can convert the document into machine-readable text/image for a designated application (e.g., Microsoft® Word). Revisions vs. original information can be distinguished and converted accordingly based on designated revision areas marked on the page (e.g., by underlining or circling the revisions). Then it can be sent (e.g., via email) "For Review" (as discussed above, under "Remote Communication").
Integrated Document Editor with the use of a Cell Phone
[00150] Handwritten information can be entered on a digitizing pad 12 whereby locations on the digitizing pad 12 correspond to locations on the cell phone display. Alternatively, handwritten information can be entered on a touch screen that is used as a digitizing pad as well as a display (i.e., similar to the touch screen 11 referenced in Figure 38). Handwritten information can either be new information or a revision of existing stored information (e.g., a phone number, contact name, to-do list, calendar event, an image/photo, etc.). Handwritten information can be recognized by the recognition element 18, processed by the RHI element 20 and then embedded into the document memory 22 (e.g., in a specific memory location of specific contact information). Embedding the handwritten information can, for example, be achieved by directly accessing locations in the document memory (e.g., a specific contact name); however, the method by which recognized handwritten information is embedded can be determined at the OEM level by the manufacturer of the phone.
Use of the Integrated Document Editor in authentication of handwritten information
[00151] A unique representation such as a signature, a stamp, a fingerprint or any other drawing pattern can be pre-set and fed into the recognition element 18 as units that are part of a vocabulary or as a new character. When handwritten information is recognized as one of these pre-set units placed in, e.g., a specific expected x-y location of the digitizing pad 12 (Figure 1) or touch screen 11 (Figure 38), an authentication, or part of an authentication, will pass. The authentication will fail if there is no match between the recognized unit and the pre-set expected unit. This can be useful for authentication of a document (e.g., an email, a ballot or a form) to ensure that the writer/sender of the document is the intended sender. Other examples are authentication of, and access to, bank information or credit reports. The unique pre-set patterns can be either or both: 1) stored in a specific platform belonging to the user and/or 2) stored in a remote database location. It should be noted that the unique pre-set patterns (e.g., a signature) do not have to be disclosed in the document. For example, when an authentication of a signature passes, the embed functionality 24 will, for example, embed the word "OK" in the signature line/field of the document.
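A hedged sketch of this authentication flow follows (all identifiers, locations and tolerances are illustrative assumptions, not from the patent): a recognized handwritten unit passes only if it matches the pre-set unit expected for a given field and appears at the expected x-y location, and the document itself records only the pass/fail result, never the pattern:

PRESET_UNITS = {
    # document field -> identifier of the expected pre-set pattern
    "signature_line": "unit_signature_jane_doe",
}

def authenticate(field, recognized_unit_id, expected_xy, actual_xy, tolerance=10):
    # Pass only if the right pre-set unit appears at (roughly) the expected location.
    in_place = (abs(expected_xy[0] - actual_xy[0]) <= tolerance and
                abs(expected_xy[1] - actual_xy[1]) <= tolerance)
    return in_place and PRESET_UNITS.get(field) == recognized_unit_id

if authenticate("signature_line", "unit_signature_jane_doe", (400, 700), (402, 698)):
    embedded_text = "OK"     # embedded into the signature field, as described above
else:
    embedded_text = ""       # authentication failed; nothing is embedded
print(embedded_text)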
[0147] Computing devices and methods for automatic computation of document locations at which to automatically apply user commands communicated by user input on a touch screen of a computing device are discussed in US Patent No. 9,582,095, in US patent application No. 15/391,710, which is a continuation of US Patent No. 9,582,095, and in US patent application No. 13/955,288.
[0148] The disclosed embodiments further relate to simplified user interaction with displayed representations of one or more graphic objects. The simplified user interaction may utilize a touch screen of a computing device, and may include using gestures to indicate desired change(s) in one or more parameters of the graphic objects. The parameters may include one or more of a line length, a line angle or arc radius, a size, surface area, or any other parameter of a graphic object, stored in memory of the computing device or computed by functions of the computing device. Changes in these one or more parameters are computed by functions of the computing device based on the user interaction on the touch screen, and these computed changes may be used by other functions of the computing device to compute changes in other graphic objects.
[00154] As mentioned above, the document could be any kind of electronic file, word processing document, spreadsheet, web page, form, e-mail, database, table, template, chart, graph, image, or objects, or any portion of these types of documents, such as a block of text or a unit of data. It should be understood that the document or file may be utilized in any suitable application, including, but not limited to, computer-aided design, gaming, and educational materials.
[0149] It is an object of the disclosed embodiments to allow users to quickly edit Computer Aided Design (CAD) drawings on the go or on site following a short interactive on-screen tutorial; there is no need for skills/expertise such as those needed to operate CAD drawing applications, for example, AutoCAD® software. In addition, the disclosed embodiments may provide a significant time saving by providing simpler and faster user interaction, while revision iterations with professionals are avoided. Typical users may include, but are not limited to, construction builders and contractors, architects, interior designers, patent attorneys, inventors, and manufacturing plant managers.
[0150] It is a further object of the disclosed embodiments to allow users to use the same set of gestures provided for editing CAD drawings to edit graphics documents in a variety of commonly used document formats, such as the doc and docx formats. It should be noted that some of the commands commonly used in CAD drawing applications, for example AutoCAD® software, such as the command to apply a radius to a line or to add a chamfer, are not available in word processing applications or in desktop publishing applications.
[0151] It is a further object of the disclosed embodiments to allow users to create CAD drawings and graphics documents, based on user interaction on a touch screen of a computing device, in a variety of document formats, including CAD drawing formats such as the DXF format, and the doc and docx formats, using the same gestures.
[00158] It is yet a further object of the disclosed embodiments to allow users to interact with a three-dimensional representation of graphic objects on the touch screen to indicate desired changes in one or more parameters of one or more graphic objects, which, in turn, will cause functions of the computing device to automatically effect the indicated changes.
[0152] These and other features of the disclosed embodiments will be better understood by reference to the set of accompanying drawings (Figures 40A-58B), which should be taken as an illustrative example and not as limiting. Figures 40A-52D, Figures 54A-54F, and Figures 56-58A may be viewed as a portion of a tutorial of an app to familiarize users with the use of the gestures discussed in these drawings.
[00160] While the disclosed embodiments of Figures 41A through 52D are described with reference to user interaction with two-dimensional representations of graphic objects, it should be understood that the disclosed embodiments may also be implemented with reference to user interaction with three-dimensional representations of graphic objects.
[00161] First, the user selects a command (e.g., a command to change line length, discussed in Figures 42A-42D), by drawing a letter or by selecting an icon which represents the desired command. Second, the computing device identifies the command. Then, responsive to user interaction with a displayed representation of a graphic object on the touch screen to indicate a desired change in one or more parameters (such as in line length), the computing device automatically causes the desired change in the indicated parameter and, when applicable, also automatically effects changes in locations of the graphic object and further, as a result, in other graphic objects in the memory in which the drawing is stored.
[00162] A desired (gradual or single) change in a parameter of a graphic object, being an increase or a decrease in its value (and/or in its shape, when the shape of the graphic object is the parameter, such as a change from a straight line object to a segmented line object, or a gradual change from one shape to another, such as from a circle/sphere to an ellipse and vice versa), may be indicated by changes in positional locations along a gesture being drawn on the touch screen (as illustrated, for example, in Figures 42A-42B), during which the computing device gradually and automatically applies the desired changes as the user continues to draw the gesture. From the user's perspective, it would seem as if the value of the parameter is changing at the same time as the gesture is being drawn.
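A minimal sketch of this gradual-update loop follows (the data structure, sampling and scale factor are assumptions for illustration): each new touch position sampled along the gesture is converted into an updated parameter value, here a line length, so the stored value changes while the gesture is being drawn:

def on_gesture_move(line, prev_pos, current_pos, pixels_per_unit=4.0):
    # Horizontal travel of the finger since the last sample maps to a length change.
    delta = (current_pos[0] - prev_pos[0]) / pixels_per_unit
    line["length"] = max(0.0, line["length"] + delta)
    return line

line = {"length": 120.0}
samples = [(10, 0), (18, 1), (30, 2)]          # successive sampled touch positions
for prev, cur in zip(samples, samples[1:]):
    line = on_gesture_move(line, prev, cur)
    print("displayed length:", round(line["length"], 1))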
[00163] The subject drawing, or a portion thereof, stored in the device memory (herein defined as a "graphics vector") may be displayed on the touch screen as a two-dimensional representation (herein defined as a "vector image"), with which the user may interact in order to communicate desired changes in one or more parameters of a graphic object, such as in line length, line angle, or arc radius. As discussed above, the computing device automatically causes these desired changes in the graphic object and, when applicable, also in its locations, and further in parameters and locations of other graphic objects within the graphics vector which may be caused as a result of the changes in the graphic object indicated by the user. The graphics vector may alternatively be represented on the touch screen as a three-dimensional vector image, so as to allow the user to view/review the effects of a change in a parameter of a graphic object in an actual three-dimensional representation of the graphics vector, rather than attempting to visualize the effects while viewing a two-dimensional representation.
[00164] Furthermore, the user may interact with a three-dimensional vector image on the touch screen to indicate desired changes in one or more parameters of one or more graphic objects, for example, by pointing/touching or tapping at geometrical features of the three-dimensional representation, such as on surfaces or at corners, which will cause the computing device to automatically change one or more parameters of one or more graphic objects of the graphics vector. Such user interaction with geometrical features may, for example, be along a surface length, width or height, along the edges of two connecting surfaces (e.g., along an edge connecting the top surface and one of the side surfaces), within surface(s) inside or outside a beveled/trimmed corner, on a sloped surface (e.g., of a ramp), or within an arced surface inside or outside an arced corner.
[00165] The correlation between user interaction with a geometrical feature of the three-dimensional vector image on the touch screen and changes in size and/or geometry of the vector graphics stored in the device memory may be achieved by first using one or more points/locations in the vector graphics stored (and defined in the xyz coordinate axis system) in the device memory (referred to herein as "locations"), and correlating them with the geometrical features of the vector image with which the user may interact to communicate desired changes in graphic objects. A location herein is defined such that changes in that location, or in a stored or computed parameter of a line (straight, arced, or segmented) extending/branching from that location, such as length, radius or angle (herein defined as a "variable"), can be used as the variable (or as one of the variables) in function(s) capable of computing changes in size and/or geometry of the vector graphics as a result of changes in that variable. User interaction may be defined within a region of interest, being the area of the geometrical feature on the touch screen within which the user may gesture/interact; this region may, for example, be an entire surface of a cube, or the entire cube surface with an area proximate to the center excluded. In addition, responsive to detecting finger movements in a predefined/expected direction (or in one of several predefined/expected directions), or predefined/expected touching and/or tapping within this region, the computing device automatically determines/identifies the relevant variable and automatically carries out its associated function(s) to automatically effect the desired change(s) communicated by the user.
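An illustrative data-structure sketch of this correlation follows (all names and values are assumptions, not from the patent): each geometrical feature of the displayed vector image carries its region of interest, the expected gestures within it, the variable it drives, and the function that recomputes the stored vector graphics from that variable:

from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class FeatureBinding:
    region: Tuple[int, int, int, int]          # x, y, width, height on the touch screen
    expected_gestures: Tuple[str, ...]         # e.g. ("touch", "tap", "drag_right")
    variable: str                              # e.g. "edge_length", "incline_angle"
    recompute: Callable[[dict, float], dict]   # applies the new variable value

def resize_cube(cube: dict, edge_length: float) -> dict:
    # Recompute dependent quantities of the stored cube from the driven variable.
    cube["edge_length"] = edge_length
    cube["volume"] = edge_length ** 3
    return cube

corner_binding = FeatureBinding(
    region=(300, 120, 40, 40),
    expected_gestures=("touch", "tap"),
    variable="edge_length",
    recompute=resize_cube,
)
print(corner_binding.recompute({"edge_length": 10.0}, 8.0))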
[00166] For example, the position of either of the edges/corners of a rectangle or of a cube is a location that may be used as a variable in a function (or in one of the functions) capable of computing a change in the geometry of the rectangle or of the cube as a result of a change in that variable. Similarly, the length of a line between two edges/corners (i.e., between two locations) of the cube, or the angle between two connected surfaces of the cube, may be used as the variable. Or, the center point of a circle or of a sphere may be used as the "location" from which the radius of the circle or of the sphere extends; the radius in this example may be a variable of a function capable of computing the circumference and surface area of the circle, or the circumference, surface area and volume of the sphere, as the user interacts with (e.g., touches) the sphere. Similarly, the length of a line extending from the center point of a vector graphics having a symmetrical geometry, such as a cube or a tube, or the location at the end of the line extending from the center point, may be used as a variable (or one of the variables) of a function (or of one of the functions) capable of computing changes in the size of the symmetrical vector graphics or changes in its geometry, as the user interacts with the symmetrical vector image. Or, in a three-dimensional vector graphics with symmetry in one or more of its displayed surfaces, such as in the surface of the base of a cone, two locations may be defined, the first at the center point of the surface at the base, and the second being the edge of the line extending from that location to the top of the cone; the variables in this example may be the first location and the length of the line extending from the first location to the top of the cone, which can be used in function(s) capable of computing changes in the size and geometry of the cone, as the user interacts with the vector image representing the cone. Or, a complex or non-symmetrical graphics vector, represented on the touch screen as a three-dimensional vector image, with which the user may interact to communicate changes in the graphics vector, may be divided into a plurality of partial graphics vectors in the device memory (represented as one vector image on the touch screen), each represented by one or more functions capable of computing changes in its size and geometry, whereby the size and geometry of the graphics vector may be computed by the computing device based on the sum of the partial graphics vectors.
[00167] In one embodiment, responsive to a user "pushing" (i.e., in effect touching) or tapping at a geometrical feature of a displayed representation of a graphics vector (i.e., at the vector image), the computing device automatically increases or decreases the size of the graphics vector or of one or more parameters represented on the graphic feature. For example, touching or tapping at a displayed representation of a corner of a cube or at a surface of a ramp will cause the computing device to automatically decrease or increase the size of the cube (Figures 54A-54B) or the decline/incline angle of the ramp, respectively.
[00168] Similarly, responsive to touching or tapping anywhere at a displayed representation of a sphere, the computing device automatically decreases or increases the radius of the sphere, respectively, which in turn decreases or increases, respectively, the circumference, surface area and volume of the sphere. Or, responsive to continued "squeezing" (i.e., holding/touching) of a geometrical feature of a vector image representing a feature in the graphics vector, such as the side edges of the top of a tube or of a cube, the computing device automatically brings the outside edge(s) of that graphics vector together gradually as the user continues squeezing/holding the geometrical feature of the vector image. Similarly, responsive to the user tapping at or holding/touching the top surface of the geometrical feature, the computing device automatically and gradually brings the outside edges of the geometrical feature outward or inward, respectively, as the user continues tapping at or touching the top surface of the vector image. Or, responsive to touching at, or in proximity to, a center point of a top surface (note that the region of interest here is proximate to the center, which is excluded from the region of interest in the prior example), the computing device automatically creates a wale (or other predetermined shape) with a radius centered at that center point, and continued touching or tapping (anywhere on the touch screen) will cause the computing device to automatically and gradually decrease or increase the radius of the wale, respectively.
[00169] In another embodiment, first, responsive to a user indicating a desired command, the computing device identifies the command. Then, the user may gesture at a displayed geometrical feature of a vector image to indicate desired changes in the vector graphics. For example, responsive to continued "pushing" (i.e., touching) or tapping at a displayed representation of a surface of a corner, after the user has indicated a command to add a fillet (at the surface of the inside corner) or an arc (at the surface of the outside corner) and the computing device has identified the command, the computing device automatically rounds the corner (if the corner is not yet rounded), and then causes an increase or a decrease in the value of the radius of the fillet/arc (as well as in the locations of the adjacent line objects), as the user continues touching or tapping, respectively, at the fillet/arc surface (or anywhere on the touch screen). Or, after the computing device identifies a command to change line length (e.g., after the user touches a distinct icon representing the command), responsive to finger movement to the right or to the left (indicative of a desired change in width from the right edge or from the left edge of the surface of the cube, respectively) anywhere on a surface of the displayed cube, followed by continued touching or tapping (anywhere on the touch screen), the computing device automatically decreases or increases the width of the cube, respectively, from the right edge or from the left edge of the surface, as the user continues touching or tapping. Similarly, responsive to a finger movement up or down on the surface of the cube followed by continued touching or tapping anywhere on the touch screen, the computing device automatically decreases or increases the height of the cube, respectively, from the top edge or from the bottom edge of the surface, as the user continues touching or tapping. Further, responsive to tapping or touching a point proximate to an edge along two connected surfaces of a graphic image of a cube, the computing device automatically increases or decreases the angle between the two connected surfaces. Or, after the computing device identifies a command to insert a blind hole and a point on a surface of the graphic image at which to insert the blind hole (e.g., after detecting a long press at that point, indicating the point on the surface at which to drill the hole), responsive to continued tapping or touching (anywhere on the touch screen), the computing device gradually and automatically increases or decreases the depth of the hole, respectively, in the graphics vector and updates the vector image. Similarly, responsive to identifying a command to drill a through hole at a user-indicated point on a surface of the vector image, the computing device automatically inserts a through hole in the vector graphics and updates the vector image with the inserted through hole. Further, responsive to tapping or touching at a point along the circumference of the hole, the computing device automatically increases or decreases the radius of the hole. Or, responsive to touching the inside surface of the hole, the computing device automatically invokes a selection table/menu of standard threads, from which the user may select a desired thread to apply to the outside surface of the hole.
[0153] Figures 40A-40D relate to a command to insert a line. They illustrate the interaction between a user and a touch screen, whereby a user draws a line 3705 free-hand between two points A and B (Figure 40B). In some embodiments, an estimated distance of the line 3710 is displayed while the line is being drawn. Responsive to the user's finger being lifted from the touch screen (Figure 40C), the computing device automatically inserts a straight-line object in the device memory, where the drawing is stored, at memory locations represented by points A and B on the touch screen, and displays the straight-line object 3715 along with its actual distance 3720 on the touch screen.
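A minimal sketch of this insert-line behaviour follows (the field names and coordinates are assumptions for illustration): on finger-up, the free-hand stroke is replaced by a straight-line object between its endpoints A and B, and the actual distance is computed for display:

import math

def on_finger_lifted(drawing, point_a, point_b):
    length = math.dist(point_a, point_b)
    line_object = {"type": "line", "a": point_a, "b": point_b, "length": length}
    drawing.append(line_object)          # store the object in the in-memory drawing
    return line_object

drawing = []
inserted = on_finger_lifted(drawing, (120.0, 80.0), (420.0, 80.0))
print(f"displayed distance: {inserted['length']:.1f}")   # 300.0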
[0154] Figures 41A-41C relate to a command to delete an object. The user selects the desired object 3725 by touching it (Figure 41A) and then may draw a command indicator 3730, for example, the letter 'd', to indicate the command 'Delete' (Figure 41B). In response, the computing device identifies the command and deletes the object (Figure 41C). It should be noted that the user may indicate the command by selecting an icon representing the command, by an audible signal, and the like.
[00172] Figures 42A-42D relate to a command to change line length. First, the user selects the line 3735 by touching it (Figure 42A) and then may draw a command indicator 3740, for example, the letter 'L', to indicate the desired command (Figure 42B). It should be noted that selecting line 3735 prior to drawing the command indicator 3740 is optional, for example, to view its distance or to copy or cut it. Then, responsive to each gradual change in user-selected positional location on the touch screen starting from point 3745 of line 3735, the computing device automatically causes a respective gradual change in the line length stored in the device memory and updates the length on display box 3750 (Figures 42B-42C).
[00173] Figures 43A-43D relate to a command to change line angle. The user may optionally first select line 3755 (Figure 43A) and then may draw a command indicator 3760, for example, the letter 'a', to indicate the desired command (Figure 43B). Then, in a similar manner to changing line length, responsive to each gradual change in user-selected positional location (up or down) on the touch screen starting from the edge 3765 of line 3755, the computing device automatically causes a respective gradual change in the line angle stored in the device memory, for example, relative to the x-axis, and also updates the angle on display box 3770 (Figures 43B-43C).
[00174] It should be noted that if the user indicates both commands, to change line length and to change line angle, prior to drawing the gesture discussed in the two paragraphs above (for example, by selecting two distinct icons, each representing one of the commands), then the computing device will automatically cause gradual changes in the length and/or angle of the line based on the direction of movement of the gesture, and accordingly will update the values of either or both the length and the angle on the display box at each gradual change in user-selected positional location on the touch screen.
[0155] Figures 44A-44D relate to a command to apply a radius to a line or to change the radius of an arc between A and B. The user may optionally first select the displayed line or arc, being line 3775 in this example (Figure 44A), and then may draw a command indicator 3780, for example, the letter 'R', to indicate the desired command (Figure 44B). Then, in a similar manner to changing line length or line angle, responsive to each gradual change in user-selected positional location on the touch screen across the displayed line/arc 3785, starting from a position along the displayed line/arc 3775, the computing device automatically causes a respective gradual change in the radius of the line/arc in the drawing stored in the device memory and updates the radius of the arc on display box 3790 (Figure 44C).
[0156] Figures 45A-45C relate to a command to make a line parallel to another line. First, the user may draw a command indicator 3795, for example, the letter 'N', to indicate the desired command and then touch a reference line 3800 (Figure 45A). The user then selects target line 3805 (Figure 45B) and lifts the finger (Figure 45C). Responsive to the finger being lifted, the computing device automatically alters the target line 3805 in the device memory to be parallel to the reference line 3800 and updates the displayed target line on the touch screen (Figure 45C).
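A hedged geometric sketch of this make-parallel command follows (it is not the patent's implementation; the representation of a line as two endpoints is an assumption): on finger-up, the target line is rotated about its own start point so that its direction matches the reference line while its length is preserved:

import math

def make_parallel(target, reference):
    ref_angle = math.atan2(reference["b"][1] - reference["a"][1],
                           reference["b"][0] - reference["a"][0])
    ax, ay = target["a"]
    length = math.dist(target["a"], target["b"])
    target["b"] = (ax + length * math.cos(ref_angle),
                   ay + length * math.sin(ref_angle))
    return target

reference = {"a": (0.0, 0.0), "b": (100.0, 0.0)}   # horizontal reference line
target = {"a": (20.0, 50.0), "b": (80.0, 110.0)}
print(make_parallel(target, reference))            # end point becomes (~104.9, 50.0)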
[0157] Figures 46A-46D relate to a command to add a fillet (at a 2D representation of a corner or at a 3D representation of an inside surface of a corner) or an arc (at a 3D representation of an outside surface of a corner). First, the user may draw a command indicator 3810 to indicate the desired command and then touch corner 3815 to which to apply a fillet (Figure 46A). In response, the computing device converts the sharp corner 3815 into a rounded corner 3820 (having a default radius value) and zooms in on that corner (Figure 46B). Then, responsive to each gradual change in user-selected positional location on the touch screen across the displayed arc 3825, at a position along it, the computing device causes a respective gradual change in the radius of the arc stored in the device memory and in its locations in memory represented by A and B, such that the arc is tangent to the adjacent lines 3830 and 3835 (Figure 46C). Next, the user touches the screen and in response the computing device zooms out the drawing to its original zoom
percentage (Figure 46D). Otherwise, the user may indicate additional changes in the radius, even after the finger is lifted.
[0158] Figures 47A-47D relate to a command to add a chamfer. First, the user may draw a command indicator 3840 to indicate the desired command and then touch the desired corner 3845 to which to apply a chamfer/bevel (Figure 47A). In response, the computing device trims the corner between two locations represented by A and B on the touch screen, and sets the height H and width W at default values, and as a result also the angle a (Figure 47B). Then, responsive to each gradual change in user-selected positional location on the touch screen (in motions parallel to line 3850 and/or line 3855), the computing device causes gradual changes in the width W and/or height H, respectively, as stored in the device memory, as well as in locations A and B as stored in memory, and updates their displayed representation (Figure 47C). Next, the user touches the screen and in response the computing device zooms out the drawing to its original zoom percentage (Figure 47D). Otherwise, the user may indicate additional changes in parameters W and/or H, even after the finger is lifted.
[0159] Figures 48A-48F relate to the command to trim an object. First, the user may draw a command indicator 3860 to indicate the desired command (Figure 48A). Next, the user touches target object 3865 (Figure 48B) and then reference object 3870 (Figure 48C); it should be noted that these steps are optional. The user then moves reference object 3870 to indicate the desired trim in target object 3865 (Figures 48D-48E). Then, responsive to the finger being lifted from the touch screen, the computing device automatically applies the desired trim 3875 to target object 3865 (Figure 48F).
[0160] Figures 49A-49D relate to a command to move an arced object. First, the user may optionally select object 3885 (Figure 49A) and then draw a command indicator 3880 to indicate the desired command, then touches the displayed target object 3885 (Figure 49B) (at this point the object is selected), and moves it until edge 3890 of the arc 3885 is at or proximate to edge 3895 of line 3897 (Figure 49C). Then, responsive to the finger being lifted from the screen, the computing device automatically moves the arc 3885 such that it is tangent to line 3897 where the edges meet (Figure 49D).
[0161] Figures 50A-50D relate to the 'No Snap' command. First, the user may touch command indicator 3900 to indicate the desired command (Figure 50A), and then the user may touch the desired intersection 3905 to unsnap (Figure 50B). Then, responsive to the finger being lifted from the touch screen, the computing device automatically applies the no-snap 3910 at intersection 3905 and zooms in on the intersection (Figure 50C). Touching again causes the computing device to zoom out the drawing to its original zoom percentage (Figure 50D).
[0162] Figures 51A-51D illustrate another example of use of the 'No Snap' command. First, the user may touch command indicator 3915 to indicate the desired command (Figure 51A). Next, the user may draw a command indicator 3920, for example, the letter 'L', to indicate the desired command to change line length (Figure 51B). Then, responsive to each gradual change in user-selected positional location on the touch screen, starting from the edge 3925 of line 3930 and ending at position 3935 on the touch screen, across line 3940, the computing device automatically unsnaps intersection 3945 or prevents the intersection 3945 from being snapped, if the snap operation is set as a default operation by the computing device.
[00183] Figures 52A-52D illustrate another example of use of the command to trim an object. First, the user may draw a command indicator 3950 to indicate the desired command (Figure 52A). Next, the user moves reference object 3955 to indicate the desired trim in target object 3960 (Figures 52B-52C). Then, responsive to the user's finger being lifted from the touch screen, the computing device automatically applies the desired trim 3965 to target object 3960 (Figure 52D).
[0163] Commands to copy and cut graphic objects may be added to the set of gestures discussed above, and carried out, for example, by selecting one or more graphic objects (as shown, for example, in Figure 42A); the user may then draw a command indicator or touch an associated distinct icon on the touch screen to indicate the desired command, to copy or cut. The command to paste may also be added, and may be carried out, for example, by drawing a command indicator, such as the letter 'P' (or by touching a distinct icon representing the command), and then pointing at a position on the touch screen which represents a location in memory at which to paste the clipboard content. The copy, cut and paste commands may be useful, for example, in copying a portion of a CAD drawing representing a feature such as a bath tub and pasting it at another location of the drawing representing a second bathroom of a renovation site.
[0164] Figure 53 is an example of a user interface with icons corresponding to the available user commands discussed in the Figures above and a 'Gesture Help' by each distinct icon indicating a letter/symbol which may be drawn to indicate a command, instead of selecting the icon representing the command.
[0165] Figures 54A-54B illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a cube. Responsive to a user touching corner 3970 of vector image 3975, representing a graphics vector of a cube (Figure 54A), for a predetermined period of time, the computing device interprets/identifies the touching at corner 3970 as a command to proportionally decrease the dimensions of the cube. Then, responsive to continued touching at corner 3970, the computing device automatically and gradually decreases the length, width and height of the cube in the vector graphics, displayed at 3977, 3980 and 3985, respectively, at the same rate, and updates the displayed length 3990, width 3950 and height 4000 in vector image 4005 (Figure 54B).
[00187] Figures 54C-54D illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a sphere. Responsive to continued touching at point 4010 or anywhere on the vector image 4015 of a sphere (Figure 54C), representing a graphics vector of the sphere, for a predetermined period of time, the computing device interprets/identifies the touching at point 4010 as a command to decrease the radius of the sphere. Then, responsive to continued touching at point 4010, the computing device automatically and gradually decreases the radius of the vector graphics of the sphere, and updates the vector image 4017 (Figure 54D) on the touch screen.
[00188] Figures 54E-54F illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a ramp. Responsive to a user touching at point 4020 or any point along edge 4025 of base 4030 of the vector image 4035 of a ramp (Figure 54E), representing a graphics vector of the ramp, for a predetermined period of time, the computing device interprets/identifies the touching as a command to increase incline angle 4040 and decrease distance 4045 of base 4030 in the graphic object, such that distance 4050 along the ramp remains unchanged. Then, responsive to continued touching at point 4020, the computing device automatically and gradually increases incline angle 4040 and decreases distance 4045 of base 4030 in the graphics vector, such that distance 4050 along the ramp remains unchanged, and updates the displayed
incline angle 4040 and distance 4045 to incline angle 4055 and distance 4060 in vector image 4065 (Figure 54F). Similarly, responsive to tapping at point 4020, the computing device may be configured to automatically and gradually decrease incline angle 4040 and increase distance 4045, such that distance 4050 along the ramp will remain unchanged.
[0166] Figures 55A-55B illustrate examples of user interface menus for the text-editing selection mode discussed below.
[0167] Figure 56 is an example of a gesture to mark text in command mode. First, the user indicates a desired command, such as a command to underline, for example by touching icon 4055 representing the command. Then, responsive to the user drawing line 4060 free-hand between A and B, from right to left or from left to right, to indicate the locations in memory at which to underline text, the computing device automatically underlines the text at the indicated locations and displays a representation of the underlined text on the touch screen as the user continues drawing the gesture or when the finger is lifted from the touch screen, depending on the user's predefined preference.
[0168] Figure 57 is another example of a gesture to mark text in command mode. First, the user indicates a desired command, such as a command to move text, for example by touching icon 4065 representing the command. Then, responsive to the user drawing a zigzagged line 4070 free-hand between A and B, from right to left or from left to right, to indicate the locations in memory at which to select text to be moved, the computing device automatically selects the text at the indicated locations in memory and highlights it on the touch screen as the user continues drawing the gesture or when the finger is lifted from the touch screen, depending on the user's predefined preference. At this point, the computing device automatically switches to data entry mode. Next (not shown), responsive to the user pointing at a position on the touch screen, indicative of a location in memory at which to paste the selected text, the computing device automatically pastes the selected text, starting from that indicated location. Once the text is pasted, the computing device automatically reverts back to command mode.
[00192] In one embodiment, the computing device invokes command mode or data entry mode; command mode is invoked when a command intended to be applied to text or graphics already stored in memory and displayed on the touch screen is identified, and data entry mode is invoked when a command to insert or paste text or graphics is identified. In command mode, data entry mode is disabled to allow for unrestricted/unconfined user input on the touch screen of the computing device, in order to indicate locations of displayed text/graphics at which to apply user pre-defined command(s); in data entry mode, command mode is disabled to enable pointing at positions on the touch screen indicative of locations in memory at which to insert text, insert a drawn shape such as a line, or paste text or graphics. Command mode may be set to be the default mode.
[00193] When in command mode, the drawing by the user on displayed text or graphics (defined herein as the "marking gesture") to indicate locations in memory (at which to apply pre-defined command(s)) will not be interpreted by the computing device as a command to insert a line, and stopping movement while drawing the marking gesture, or simply touching a position on the touch screen, will not be interpreted by the computing device as a position indicative of a location in memory where to insert text or graphics, since in this mode data entry mode is disabled. In one embodiment, however, when in data entry mode, the computing device will interpret such a position as indicative of an insertion location in memory only after the finger is lifted from the touch screen, to further improve robustness/user friendliness; the benefit of this feature with respect to control over a zooming functionality is further discussed below. The user may draw the marking gesture free-hand on displayed text on the touch screen to indicate desired locations of text characters in memory where a desired command, such as bold, underline, move or delete, should be applied, or on displayed graphics (i.e., on a vector image) to indicate desired locations of graphic objects in memory where a desired command, such as select, delete, replace, or change object color, color shade, size, style, or line thickness, should be applied.
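A minimal state-machine sketch of these two modes follows (all names and the command set are illustrative assumptions): in command mode a drawn gesture marks locations for a pending command, while in data entry mode a position is treated as an insertion location, and only once the finger is lifted:

class EditorModes:
    def __init__(self):
        self.mode = "command"              # command mode as the default mode
        self.pending_command = None

    def on_command_identified(self, command):
        self.pending_command = command
        # Commands that insert or paste switch to data entry mode; others do not.
        self.mode = "data_entry" if command in ("insert_text", "paste") else "command"

    def on_finger_lifted(self, position, marked_locations):
        if self.mode == "command":
            return ("apply", self.pending_command, marked_locations)
        return ("insert_at", position)     # interpreted only on finger-up

editor = EditorModes()
editor.on_command_identified("underline")
print(editor.on_finger_lifted((0, 0), ["characters 12-25"]))
editor.on_command_identified("paste")
print(editor.on_finger_lifted((210, 95), []))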
[00194] Prior to drawing the marking gesture, the user may define a command by selecting a distinct icon representing the command from a bar menu on the touch screen, illustrated for example in Figure 53. Alternatively, the user may define a desired command by drawing a letter/symbol which represents the command; under this scenario, however, both command mode and data entry mode may be disabled while drawing the letter/symbol, to allow for unconfined free-hand drawing of the letter/symbol anywhere on the touch screen, such that the drawing of a letter/symbol will not be interpreted as the marking gesture, or as a drawn feature,
such as a drawn line, to be inserted, and a finger being lifted from the touch screen will not be interpreted as inserting or pasting data.
[00195] It should be noted that the drawing of the marking gesture on displayed text/graphics to indicate the desired locations in memory at which to apply user-indicated commands to text/graphics can be achieved in a single step or, if desired, with one or more time-interval breaks, if, for example, the user lifts his/her finger from the touch screen for up to a predetermined period of time, or under other predetermined conditions, such as between double taps, during which the user may, for example, wish to review a portion of another document before deciding whether to continue marking additional displayed text/graphics from the last indicated location prior to the time break, or on other displayed text/graphics, or to simply conclude the marking. It should be further noted that the marking gesture may be drawn free-hand in any shape, such as in a zigzag (Figure 57), a line across (Figure 56), or a line above or below displayed text/graphics. The user may also choose to display the marking gesture as it is being drawn, and to draw back along the gesture (or anywhere along it) to undo command(s) applied to text/graphics indicated by previously marked area(s) of displayed text/graphics.
[00195] [00196] In another embodiment, especially useful in, but not limited to, text editing, responsive to a gesture being drawn on the touch screen to mark displayed text or graphics while in command mode, where no command was selected prior to drawing the gesture, the computing device automatically invokes selection mode, selects the marked/indicated text/graphics on the touch screen as the finger is lifted from the touch screen, and automatically invokes a set of icons, each representing a distinct command, arranged in menus and/or tooltips by the selected text/graphics (Figures 55A-55B). In these examples, when the user selects one or more of the displayed icons, the computing device automatically applies the corresponding command(s) to the selected text. The user may exit selection mode by simply dismissing the screen, in response to which the computing device will automatically revert to command mode. The computing device will also automatically revert to command mode after the selected text is moved (if the user had indicated a command to move text, pointed at a position on the touch screen representing the location in memory to which to move the selected text, and then lifted his/her finger). As in command mode, data entry mode is disabled while in selection mode to allow for unrestricted/unconfined drawing of the marking gesture to mark displayed text or graphics. Selection mode may be useful, for example, when the user wishes to focus on a specific portion of text and perform some trial and error prior to concluding the edits on that portion of text. When the selected text is a single word, the user may for example indicate a command to suggest a synonym, capitalize the word, or change its font to all caps.
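As a hedged sketch of this selection-mode flow (the callback names show_menu and apply_command, and the example icon list, are invented for the illustration and do not appear in the disclosure), one possible arrangement in Python is:

```python
class SelectionController:
    def __init__(self, show_menu, apply_command):
        self.show_menu = show_menu            # displays icons/menus/tooltips by the selection
        self.apply_command = apply_command    # applies an edit to the data in memory
        self.selection = None

    def on_gesture_finished(self, marked_range, pending_command):
        if pending_command is not None:
            # a command was chosen before the gesture: apply it and stay in command mode
            self.apply_command(pending_command, marked_range)
            return "command_mode"
        # no command was chosen: select the marked text/graphics and offer commands
        self.selection = marked_range
        self.show_menu(["bold", "underline", "move", "delete", "synonym"])
        return "selection_mode"

    def on_icon_tapped(self, command):
        # trial edits may be applied repeatedly to the same selection
        self.apply_command(command, self.selection)

    def on_dismiss_or_move_completed(self):
        # dismissing the menu, or completing a move, automatically reverts the mode
        self.selection = None
        return "command_mode"
```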
[00196] [00197] Figures 58A-58B illustrate an example of automatically zooming text while drawing the gesture to mark text, as discussed below.
[00197] [00198] In another embodiment, while in command mode or in data entry mode, or while drawing the marking gesture during selection mode (prior to the finger being lifted from the touch screen), responsive to detecting a decrease or an increase in speed between two positions on the touch screen while the marking gesture, or a shape such as a line to be inserted, is being drawn, the computing device automatically zooms in or zooms out, respectively, a portion of the displayed text/graphic on the touch screen which is proximate to the current position along the marking gesture or the drawn line. In addition, responsive to detecting a user-selected position on the touch screen with no movement for a predetermined period of time while in either command mode or data entry mode, the computing device automatically zooms in a portion of the displayed text/graphic on the touch screen which is proximate to the selected position, and further continues to gradually zoom in, up to a maximal predetermined zoom percentage, as the user continues to point at that selected position. This feature may be especially useful near or at the start and end points along the gesture or along the drawn line, as the user may need to see more detail in their proximity so as to point closer to the desired displayed text character/graphic object or its location; naturally, the finger is at rest at the starting point (prior to drawing the gesture or the line) as well as at a potential end point. As discussed, in one embodiment, when in data entry mode, the finger (or writing tool) being at rest on the touch screen will not be interpreted as the insertion location in memory at which to insert text/graphics until after the finger (or writing tool) is lifted from the touch screen, and therefore the user may let his/her finger rest periodically (to zoom in) while approaching the intended position. Furthermore, responsive to detecting continued tapping, the computing device may be configured to automatically zoom out as the user continues tapping.
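A minimal Python sketch of this zooming behaviour is given below; the speed-ratio thresholds, step size and maximum zoom are invented values, and ZoomController is a name introduced only for the example, not a component of the disclosed system.

```python
SLOW_RATIO = 0.5     # assumed: drawing slowed to less than half the previous speed
FAST_RATIO = 2.0     # assumed: drawing sped up to more than twice the previous speed
MAX_ZOOM = 4.0       # assumed maximal predetermined zoom factor
ZOOM_STEP = 0.25


class ZoomController:
    def __init__(self):
        self.zoom = 1.0

    def on_speed_change(self, previous_speed, current_speed, position):
        # slowing down zooms in near the current position along the gesture/line;
        # speeding up zooms back out
        ratio = current_speed / max(previous_speed, 1e-6)
        if ratio < SLOW_RATIO:
            self.zoom = min(MAX_ZOOM, self.zoom + ZOOM_STEP)
        elif ratio > FAST_RATIO:
            self.zoom = max(1.0, self.zoom - ZOOM_STEP)
        return position, self.zoom

    def on_dwell_tick(self, position):
        # finger at rest: keep zooming in gradually, up to the predetermined maximum
        self.zoom = min(MAX_ZOOM, self.zoom + ZOOM_STEP)
        return position, self.zoom

    def on_tap(self):
        # continued tapping gradually zooms out
        self.zoom = max(1.0, self.zoom - ZOOM_STEP)
        return self.zoom
```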
[00198] [00199] The disclosed embodiments may further provide a facility that allows a user to specify customized gestures for interacting with the displayed representations of the graphic objects. The user may be prompted to select one or more parameters to be associated with a desired gesture. In some aspects, the user may be presented with a list of available parameters, or may be provided with a facility to input custom parameters. Once a parameter has been specified, the user may be prompted to associate desired gesture(s), indicative of change(s) in the specified parameter, with a geometrical feature within the vector image. In some aspects, the user may be prompted to input a desired gesture indicative of an increase in the value of the specified parameter and then to input another desired gesture indicative of a decrease in the value of the specified parameter; in other aspects, the user may be prompted to associate desired gesture(s) indicative of change(s) in the shape of the graphic object(s) (when the shape/geometry of the graphic object(s) is the specified parameter); and in other aspects, the user may be prompted to associate direction(s) of movement of a drawn gesture with a feature within the geometrical feature, and the like. The computing device may then associate the custom parameter(s) with one or more functions, or the user may be presented with a list of available functions, or the user may be provided with a facility to specify custom function(s), such that when the user inputs the specified gesture(s) within other, similar geometrical features within the same vector image or within another vector image, the computing device will automatically effect the indicated changes in the vector graphics, represented by the vector image, in the memory of the computing device.
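The following Python sketch illustrates, under assumed names only (CustomGestureRegistry, the matcher callback, and a dict-based geometrical feature), one way such user-defined gesture/parameter/function associations could be stored and later re-applied to similar geometrical features; it is an illustration, not the disclosed implementation.

```python
class CustomGestureRegistry:
    def __init__(self):
        # each binding: (gesture_template, parameter, change, function)
        self.bindings = []

    def register(self, gesture_template, parameter, change, function):
        self.bindings.append((gesture_template, parameter, change, function))

    def dispatch(self, drawn_gesture, feature, vector_image, matcher):
        # matcher is a caller-supplied similarity test between two gestures
        for template, parameter, change, function in self.bindings:
            if matcher(drawn_gesture, template):
                # effect the indicated change in the vector graphics in memory
                function(vector_image, feature, parameter, change)
                return True
        return False


def change_parameter(vector_image, feature, parameter, change):
    # example function: here a geometrical feature is modelled as a dict of
    # parameters, and the change simply increments or decrements the value
    feature[parameter] = feature.get(parameter, 0) + (1 if change == "increase" else -1)


# Example use: an upward stroke increases the radius, a downward stroke decreases it.
registry = CustomGestureRegistry()
registry.register("stroke_up", "radius", "increase", change_parameter)
registry.register("stroke_down", "radius", "decrease", change_parameter)
```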
[00199] [00200] It is noted that the embodiments described herein can be used individually or in any combination thereof. It should be understood that the foregoing description is only illustrative of the embodiments. Various alternatives and modifications can be devised by those skilled in the art without departing from the embodiments. Accordingly, the present embodiments are intended to embrace all such alternatives, modifications and variances that fall within the scope of the appended claims.
[00200] [00201] Various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings. However, all such and similar modifications of the teachings of the disclosed embodiments will still fall within the scope of the disclosed embodiments.
[00201] [00202] Various features of the different embodiments described herein are interchangeable, one with the other. The various described features, as well as any known equivalents can be mixed and matched to construct additional embodiments and techniques in accordance with the principles of this disclosure.

[00202] [00203] Furthermore, some of the features of the exemplary embodiments could be used to advantage without the corresponding use of other features. As such, the foregoing description should be considered as merely illustrative of the principles of the disclosed embodiments and not in limitation thereof.
Claims
1. A computing device, comprising:
a memory;
a touch screen including:
a display medium for displaying a representation of at least one graphic object stored in the memory, the graphic object having at least one parameter stored in the memory;
a surface for determining an indication of a change to the at least one parameter, wherein, in response to indicating the change, the computing device is configured to automatically change the at least one parameter in the memory and automatically change the representation of the one or more graphic objects in the memory;
wherein the display medium is configured to display the changed representation of one or more graphic objects with the changed parameter.
2. The computing device of claim 1, wherein the representation of the at least one graphic object stored in a memory represents at least one two dimensional graphic object.
3. The computing device of claim 1, wherein the representation of the at least one graphic object stored in a memory represents at least one three dimensional graphic object.
4. The computing device of claim 1, wherein the surface for determining an indication of a change comprises a touch screen configured to identify one or more gestures indicating the change.
5. The computing device of claim 1, wherein the touch screen is configured to identify one or more gestures selecting a command that determines the at least one parameter.
6. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a drawing of a letter or a selection of an icon which represents the command that determines the at least one parameter.
7. The computing device of claim 5, wherein the touch screen is configured to determine a selection of a portion of the representation of that at least one graphic object having the at least one parameter to be changed.
8. The computing device of claim 7, wherein the computing device is configured to cause the display medium to zoom in on the selected portion of the representation of the at least one graphic object having the at least one parameter to be changed while the parameter is being changed and to zoom out when the parameter has been changed.
9. The computing device of claim 7, wherein the touch screen is configured to identify a gesture for selecting the portion.
10. The computing device of claim 7, wherein the touch screen is configured to identify a touching gesture as indicating an increase in a value of the parameter.
11. The computing device of claim 7, wherein the touch screen is configured to identify a tapping gesture as indicating a decrease in a value of the parameter.
12. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a command to insert a line between two points of the at least one graphic object.
13. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a command to delete a line between two points of the at least one graphic object.
14. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a command to change a length of a line between two points of the at least one graphic object.
15. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a command to change an angle of a line between two points of the at least one graphic object.
16. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a command to change an angle of a line between two points of the at least one graphic object.
17. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a command to apply a radius to a line or to change a radius of an arc between two points of the at least one graphic object.
18. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a command to apply a radius to a line or to change a radius of an arc between two points of the at least one graphic object.
19. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a command to make a line of the at least one graphic object parallel to another line of the at least one graphic object.
20. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a command to make a line of the at least one graphic object parallel to another line of the at least one graphic object.
21. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a command to add a fillet to an inside corner of the at least one graphic object, or an arc to an outside corner of the at least one graphic object.
22. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a command to add a chamfer to the at least one graphic object.
23. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a command to trim the at least one graphic object.
24. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a command to move the at least one graphic object, wherein the at least one graphic object is an arc.
25. The computing device of claim 5, wherein the command that determines the at least one parameter comprises a command to unsnap an intersection of two parts of the at least one graphic object.
26. A method, comprising:
displaying, on a display medium of a computing device, a representation of at least one graphic object stored in a memory, each graphic object having at least one parameter stored in the memory;
indicating a change to the at least one parameter, and in response to indicating the change, automatically changing the at least one parameter in the memory and automatically changing the representation of the at least one graphic object in the memory; and
displaying the changed representation of the at least one graphic object on the display medium.
27. The method of claim 26, wherein the representation of the at least one graphic object stored in a memory represents at least one two dimensional graphic object.
28. The method of claim 26, wherein the representation of the at least one graphic object stored in a memory represents at least one three dimensional graphic object.
29. The method of claim 26, wherein indicating a change to the at least one parameter comprises inputting one or more gestures on a touch screen of the display medium.
30. The method of claim 26, wherein indicating a change to the at least one parameter comprises selecting a command that determines the at least one parameter.
31. The method of claim 30, wherein selecting a command that determines the at least one parameter comprises drawing a letter or selecting an icon which represents the command that determines the at least one parameter.
32. The method of claim 30, wherein indicating a change to the at least one parameter comprises selecting a portion of the representation of the at least one graphic object having the at least one parameter to be changed.
33. The method of claim 32, wherein indicating a change to the at least one parameter comprises zooming in on the selected portion of the representation of the at least one graphic object having the at least one parameter to be changed while changing the parameter and zooming out when the parameter has been changed.
34. The method of claim 32, wherein selecting a portion of the representation of one or more graphic objects comprises using a gesture to select the portion.
35. The method of claim 32, wherein indicating a change to the at least one parameter comprises a touching gesture that indicates an increase in a value of the parameter.
36. The method of claim 32, wherein indicating a change to the at least one parameter comprises a tapping gesture that indicates a decrease in a value of the parameter.
37. The method of claim 30, wherein the command that determines the at least one parameter comprises a command to insert a line between two points of the at least one graphic object.
38. The method of claim 30, wherein the command that determines the at least one parameter comprises a command to delete a line between two points of the at least one graphic object.
39. The method of claim 30, wherein the command that determines the at least one parameter comprises a command to change a length of a line between two points of the at least one graphic object.
40. The method of claim 30, wherein the command that determines the at least one parameter comprises a command to change an angle of a line between two points of the at least one graphic object.
41. The method of claim 30, wherein the command that determines the at least one parameter comprises a command to change an angle of a line between two points of the at least one graphic object.
42. The method of claim 30, wherein the command that determines the at least one parameter comprises a command to apply a radius to a line or to change a radius of an arc between two points of the at least one graphic object.
43. The method of claim 30, wherein the command that determines the at least one parameter comprises a command to apply a radius to a line or to change a radius of an arc between two points of the at least one graphic object.
44. The method of claim 30, wherein the command that determines the at least one parameter comprises a command to make a line of the at least one graphic object parallel to another line of the at least one graphic object.
45. The method of claim 30, wherein the command that determines the at least one parameter comprises a command to make a line of the at least one graphic object parallel to another line of the at least one graphic object.
46. The method of claim 30, wherein the command that determines the at least one parameter comprises a command to add a fillet to an inside corner of the at least one graphic object, or an arc to an outside corner of the at least one graphic object.
47. The method of claim 30, wherein the command that determines the at least one parameter comprises a command to add a chamfer to the at least one graphic object.
48. The method of claim 30, wherein the command that determines the at least one parameter comprises a command to trim the at least one graphic object.
49. The method of claim 30, wherein the command that determines the at least one parameter comprises a command to move the at least one graphic object, wherein the at least one graphic object is an arc.
50. The method of claim 30, wherein the command that determines the at least one parameter comprises a command to unsnap an intersection of two parts of the at least one graphic object.
51. A computing device, comprising:
a memory;
a touch screen including:
a display medium for displaying a representation of one or more text characters or graphic objects stored in the memory;
wherein the touch screen is configured to accept, in memory of the computing device, data representing user input; and
one or more processing units, configured to
invoke command mode or data entry mode, wherein:
said command mode is invoked when a user command associated with a graphic object stored at locations within said plurality of data locations is identified, and
said data entry mode is invoked when a command to: insert or paste, one or more graphic objects at insertion locations within said plurality of data locations is identified;
identify said user command;
responsive to detecting a gesture being inputted on the touch screen to indicate at least one of said locations of said graphic object:
said computing device is configured to automatically: apply said user command to said graphic object or change said parameter of said graphic object, wherein said data entry mode is disabled in said command mode to allow for unconfined input of said gesture on the touch screen within the user input.
52. The computing device of claim 51, wherein:
in said command mode, said one or more operations are configured to one of: select, copy or change an attribute of, said stored graphic object, and
said to change an attribute comprises to change one of: color, shade, size, style or line thickness.
53. The computing device of claim 51, wherein said command mode is disabled in said data entry mode:
to allow for unconfined input of a drawn shape on the touch screen within the user input, indicative of a graphic object to be inserted at said insertion locations, or to indicate one or more of said insertion locations.
54. The computing device of claim 51, wherein, in said data entry mode:
responsive to a finger or a writing tool being lifted from the touch screen for a predetermined period of time, said computing device is configured to automatically: insert or paste, said one or more text characters or graphic objects at automatically determined said insertion locations within said plurality of data locations.
55. The computing device of claim 51, wherein in said data entry mode, said one or more operations are configured to automatically apply said user command to: insert or paste, said one or more text characters or graphic objects at automatically determined said insertion locations.
55. The computing device of claim 51, wherein said command mode is automatically invoked after said one or more text characters or graphic objects are automatically: inserted or pasted, at said insertion locations.
56. The computing device of claim 51, wherein, in said command mode:
responsive to detecting a speed change between a first user selected position and a second user selected position while drawing said gesture on the touch screen: the computing device is configured to automatically zoom in or zoom out at least one portion of said graphic object represented on the touch screen which is proximate to said second user selected position, as said speed is decreased or increased from said first user selected position to said second user selected position, respectively.
57. The computing device of claim 51, further comprising:
responsive to detecting no movement at a user selected position on the touch screen within the user input, for a predetermined period of time:
the computing device is configured to automatically zoom in gradually, up to a maximal predetermined zoom percentage, on at least one portion of said plurality of data locations represented on the touch screen which is proximate to said user selected position.
58. The computing device of claim 51, wherein said command mode is a default mode.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201880071870.4A CN111492338B (en) | 2017-09-15 | 2018-09-18 | Integrated document editor |
CN202410387855.8A CN118131966A (en) | 2017-09-15 | 2018-09-18 | Computing device and computing method |
IL273279A IL273279B2 (en) | 2017-09-15 | 2018-09-18 | Integrated document editor |
CA3075627A CA3075627A1 (en) | 2017-09-15 | 2018-09-18 | Integrated document editor |
IL308115A IL308115B1 (en) | 2017-09-15 | 2018-09-18 | Integrated document editor |
EP18855679.9A EP3682319A4 (en) | 2017-09-15 | 2018-09-18 | Integrated document editor |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762559269P | 2017-09-15 | 2017-09-15 | |
US62/559,269 | 2017-09-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019055952A1 true WO2019055952A1 (en) | 2019-03-21 |
Family
ID=65723440
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2018/051400 WO2019055952A1 (en) | 2017-09-15 | 2018-09-18 | Integrated document editor |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP3682319A4 (en) |
CN (2) | CN111492338B (en) |
CA (1) | CA3075627A1 (en) |
IL (2) | IL273279B2 (en) |
WO (1) | WO2019055952A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11550583B2 (en) * | 2020-11-13 | 2023-01-10 | Google Llc | Systems and methods for handling macro compatibility for documents at a storage system |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0635780A1 (en) * | 1993-07-21 | 1995-01-25 | Xerox Corporation | User interface having clicktrough tools that can be composed with other tools |
US20040174399A1 (en) | 2003-03-04 | 2004-09-09 | Institute For Information Industry | Computer with a touch screen |
US20070070064A1 (en) * | 2005-09-26 | 2007-03-29 | Fujitsu Limited | Program storage medium storing CAD program for controlling projection and apparatus thereof |
US20090259442A1 (en) * | 2008-04-14 | 2009-10-15 | Mallikarjuna Gandikota | System and method for geometric editing |
CN101986249A (en) * | 2010-07-14 | 2011-03-16 | 上海无戒空间信息技术有限公司 | Method for controlling computer by using gesture object and corresponding computer system |
US7961943B1 (en) | 2005-06-02 | 2011-06-14 | Zeevi Eli I | Integrated document editor |
US20120092268A1 (en) | 2010-10-15 | 2012-04-19 | Hon Hai Precision Industry Co., Ltd. | Computer-implemented method for manipulating onscreen data |
US20130042199A1 (en) | 2011-08-10 | 2013-02-14 | Microsoft Corporation | Automatic zooming for text selection/cursor placement |
US8884990B2 (en) * | 2006-09-11 | 2014-11-11 | Adobe Systems Incorporated | Scaling vector objects having arbitrarily complex shapes |
US20150286395A1 (en) * | 2012-12-21 | 2015-10-08 | Fujifilm Corporation | Computer with touch panel, operation method, and recording medium |
US20160011726A1 (en) * | 2014-07-08 | 2016-01-14 | Verizon Patent And Licensing Inc. | Visual navigation |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103294657B (en) * | 2012-03-02 | 2017-10-27 | 富泰华工业(深圳)有限公司 | Method for editing text and system |
CN105373309B (en) * | 2015-11-26 | 2019-10-08 | 努比亚技术有限公司 | Text selection method and mobile terminal |
2018
- 2018-09-18: IL application IL273279A filed (published as IL273279B2), status unknown
- 2018-09-18: IL application IL308115A filed (published as IL308115B1), status unknown
- 2018-09-18: CN application CN201880071870.4A filed (published as CN111492338B), active
- 2018-09-18: EP application EP18855679.9A filed (published as EP3682319A4), pending
- 2018-09-18: WO application PCT/US2018/051400 filed (published as WO2019055952A1), status unknown
- 2018-09-18: CN application CN202410387855.8A filed (published as CN118131966A), pending
- 2018-09-18: CA application CA3075627A filed (published as CA3075627A1), pending
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0635780A1 (en) * | 1993-07-21 | 1995-01-25 | Xerox Corporation | User interface having clicktrough tools that can be composed with other tools |
US20040174399A1 (en) | 2003-03-04 | 2004-09-09 | Institute For Information Industry | Computer with a touch screen |
US10133477B1 (en) | 2005-06-02 | 2018-11-20 | Eli I Zeevi | Integrated document editor |
US7961943B1 (en) | 2005-06-02 | 2011-06-14 | Zeevi Eli I | Integrated document editor |
US10810351B2 (en) | 2005-06-02 | 2020-10-20 | Eli I. Zeevi | Integrated document editor |
US10810352B2 (en) | 2005-06-02 | 2020-10-20 | Eli I. Zeevi | Integrated document editor |
US10169301B1 (en) | 2005-06-02 | 2019-01-01 | Eli I Zeevi | Integrated document editor |
US20070070064A1 (en) * | 2005-09-26 | 2007-03-29 | Fujitsu Limited | Program storage medium storing CAD program for controlling projection and apparatus thereof |
US8884990B2 (en) * | 2006-09-11 | 2014-11-11 | Adobe Systems Incorporated | Scaling vector objects having arbitrarily complex shapes |
US20090259442A1 (en) * | 2008-04-14 | 2009-10-15 | Mallikarjuna Gandikota | System and method for geometric editing |
CN101986249A (en) * | 2010-07-14 | 2011-03-16 | 上海无戒空间信息技术有限公司 | Method for controlling computer by using gesture object and corresponding computer system |
US20120092268A1 (en) | 2010-10-15 | 2012-04-19 | Hon Hai Precision Industry Co., Ltd. | Computer-implemented method for manipulating onscreen data |
US20130042199A1 (en) | 2011-08-10 | 2013-02-14 | Microsoft Corporation | Automatic zooming for text selection/cursor placement |
US20150286395A1 (en) * | 2012-12-21 | 2015-10-08 | Fujifilm Corporation | Computer with touch panel, operation method, and recording medium |
US20160011726A1 (en) * | 2014-07-08 | 2016-01-14 | Verizon Patent And Licensing Inc. | Visual navigation |
Non-Patent Citations (1)
Title |
---|
See also references of EP3682319A4 |
Also Published As
Publication number | Publication date |
---|---|
IL273279B1 (en) | 2023-12-01 |
IL308115B1 (en) | 2024-10-01 |
CN111492338B (en) | 2024-04-19 |
CA3075627A1 (en) | 2019-03-21 |
CN111492338A (en) | 2020-08-04 |
CN118131966A (en) | 2024-06-04 |
IL308115A (en) | 2023-12-01 |
EP3682319A4 (en) | 2021-08-04 |
EP3682319A1 (en) | 2020-07-22 |
IL273279B2 (en) | 2024-04-01 |
IL273279A (en) | 2020-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10810352B2 (en) | Integrated document editor | |
US7137076B2 (en) | Correcting recognition results associated with user input | |
EP0607926B1 (en) | Information processing apparatus with a gesture editing function | |
KR101014075B1 (en) | Boxed and lined input panel | |
KR20180095840A (en) | Apparatus and method for writing notes by gestures | |
CN108700994A (en) | System and method for digital ink interactivity | |
US11526659B2 (en) | Converting text to digital ink | |
US20220357844A1 (en) | Integrated document editor | |
EP4309071A1 (en) | Duplicating and aggregating digital ink instances | |
EP4309148A1 (en) | Submitting questions using digital ink | |
CN111492338B (en) | Integrated document editor | |
US20240231582A9 (en) | Modifying digital content including typed and handwritten text | |
US11361153B1 (en) | Linking digital ink instances using connecting lines | |
WO2022197436A1 (en) | Ink grouping reveal and select |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18855679; Country of ref document: EP; Kind code of ref document: A1 |
 | ENP | Entry into the national phase | Ref document number: 3075627; Country of ref document: CA |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | ENP | Entry into the national phase | Ref document number: 2018855679; Country of ref document: EP; Effective date: 20200415 |