IL273279B1 - Integrated document editor - Google Patents

Integrated document editor

Info

Publication number
IL273279B1
Authority
IL
Israel
Prior art keywords
change
graphic object
display
detecting
response
Prior art date
Application number
IL273279A
Other languages
Hebrew (he)
Other versions
IL273279A (en)
IL273279B2 (en)
Original Assignee
Eli Zeevi
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eli Zeevi filed Critical Eli Zeevi
Publication of IL273279A publication Critical patent/IL273279A/en
Publication of IL273279B1 publication Critical patent/IL273279B1/en
Publication of IL273279B2 publication Critical patent/IL273279B2/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/32Digital ink
    • G06V30/333Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/32Digital ink
    • G06V30/36Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04806Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40Document-oriented image-based pattern recognition

Description

273279/2

INTEGRATED DOCUMENT EDITOR

CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This Application claims the benefit of U.S. Provisional Patent Application 62/559,269, filed September 15, 2017, the contents of which are herein incorporated by reference.
BACKGROUND
[0002] The disclosed embodiments relate to document creation and editing. More specifically, the disclosed embodiments relate to integration of recognition of information entry with document creation. Handwritten data entry into computer programs is known. The most widespread use has been in personal digital assistant devices. Handwritten input to devices using keyboards is not widespread for various reasons. For example, character transcription and recognition are relatively slow, and there are as yet no widely accepted standards for character or command input.
SUMMARY
[0003] According to the disclosed embodiments, methods and systems are provided for incorporating handwritten information, particularly corrective information, into a previously created revisable text or graphics document (for example, text data, image data or command cues) by use of a digitizing recognizer, such as a digitizing pad, a touch screen or other positional input receiving mechanism as part of a display. In a data entry mode, a unit of data is inserted by means of a writing pen or like scribing tool and accepted for placement at a designated location, correlating the x-y location of the writing pen to the actual location in the document, or accessing locations in the document memory by emulating keyboard keystrokes (or by the running of code/programs). In a recognition mode, the entered data is recognized as legible text, with optionally embedded edit or other commands, and is converted to machine-readable format. Otherwise, the data is recognized as graphics (for applications that accommodate graphics) and accepted into an associated image frame. Combinations of data, in text or in graphics form, may be concurrently recognized. In a specific embodiment, there is a window of error in the location of the writing tool after initial invocation of the data entry mode, so that actual placement of the tool is not critical, since the input of data is correlated by the initial x-y location of the writing pen to the actual location in the document. In addition, there is an allowed error as a function of the pen's location within the document (i.e., with respect to the surrounding data). In a command entry mode, handwritten symbols selected from a basic set common to various application programs may be entered and the corresponding commands may be executed. In specific embodiments, a basic set of handwritten symbols and/or commands that are not application-dependent and that may be user-intuitive are applied.
This handwritten command set allows revisions to be made and documents to be created without prior knowledge of the commands of a specific application.
[0004] In a specific embodiment, such as in use with a word processor, the disclosed embodiments may be implemented when the user invokes a Comments Mode at a designated location in a document; the handwritten information may then be entered via the input device into the native Comments field, whereupon it is converted either to text or image or to command data to be executed, with a handwriting recognizer operating either concurrently or after completion of entry of a unit of the handwritten information. Information recognized as text is then converted to ciphers and imported into the main body of the text, either automatically or upon a separate command. Information recognized as graphics is then converted to image data, such as a native graphics format or a JPEG image, and imported into the main body of the text at the designated point, either automatically or upon a separate command. Information interpreted as commands can be executed, such as editing commands, which control addition, deletion or movement of text within the document, as well as font type or size change or color change. In a further specific embodiment, the disclosed embodiments may be incorporated as a plug-in module for the word processor program and invoked as part of the system, such as by the use of a macro or as invoked through the Track Changes feature.
[0005] In an alternative embodiment, the user may manually indicate, prior to invoking the recognition mode, the nature of the input, i.e., whether the input is text, graphics or a command. Recognition can be further improved by providing a step-by-step protocol, prompted by the program, for setting up preferred symbols and for learning the handwriting patterns of the user.
[0006] In at least one aspect of the disclosed embodiments, a computing device includes a memory and a touch screen including a display medium for displaying a representation of at least one graphic object stored in the memory, the graphic object having at least one parameter stored in the memory, and a surface for determining an indication of a change to the at least one parameter, wherein, in response to indicating the change, the computing device is configured to automatically change the at least one parameter in the memory and automatically change the representation of the one or more graphic objects in the memory, and wherein the display medium is configured to display the changed representation of the one or more graphic objects with the changed parameter.
[0007] In another aspect of the disclosed embodiments, a method includes displaying, on a display medium of a computing device, a representation of at least one graphic object stored in a memory, each graphic object having at least one parameter stored in the memory, indicating a change to the at least one parameter, and, in response to indicating the change, automatically changing the at least one parameter in the memory and automatically changing the representation of the at least one graphic object in the memory, and displaying the changed representation of the at least one graphic object on the display medium.
[0008] These and other features of the disclosed embodiments will be better understood by reference to the following detailed description in connection with the accompanying drawings, which should be taken as illustrative and not limiting.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] Figure 1 is a block schematic diagram illustrating basic functional blocks and data flow according to one of the disclosed embodiments.
[0010] Figure 2 is a flow chart of an interrupt handler that reads handwritten information in response to writing pen taps on a writing surface.
[0011] Figure 3 is a flow chart of a polling technique for reading handwritten information.
[0012] Figure 4 is a flow chart of operation according to a representative embodiment, wherein handwritten information is incorporated into the document after all handwritten information is concluded.
[0013] Figure 5 is a flow chart of operation according to a representative embodiment, wherein handwritten information is incorporated into the document concurrently during input.
[0014] Figure 6 illustrates examples of options available for displaying handwritten information during various steps in the process according to the disclosed embodiments.
[0015] Figure 7 is an illustration of samples of handwritten symbols/commands and their associated meanings.
[0016] Figure 8 is a listing that provides generic routines for each of the first three symbol operations illustrated in Figure 7.
[0017] Figure 9 is an illustration of data flow for data received from a recognition functionality element, processed and defined in an RHI memory.
[0018] Figure 10 is an example of a memory block format of the RHI memory suitable for storing data associated with one handwritten command.
[0019] Figure 11 is an example of data flow of the embedded element of Figure 1 and Figure 38 according to the first embodiment, illustrating the emulating of keyboard keystrokes.
[0020] Figure 12 is a flow chart representing subroutine D of Figure 4 and Figure 5 according to the first embodiment, using techniques to emulate keyboard keystrokes.
[0021] Figure 13 is an example of data flow of the embedded element of Figure 1 and Figure 38 according to the second embodiment, illustrating the running of programs.
[0022] Figure 14 is a flow chart representing subroutine D of Figure 4 and Figure 5 according to the second embodiment, illustrating the running of programs.
[0023] Figure 15 through Figure 20 are flow charts of subroutine H referenced in Figure 12, for the first three symbol operations illustrated in Figure 7 and according to the generic routines illustrated in Figure 8.
[0024] Figure 21 is a flow chart of subroutine L referenced in Figure 4 and Figure 5 for concluding the embedding of revisions for a Microsoft® Word type document, according to the first embodiment using techniques to emulate keyboard keystrokes.
[0025] Figure 22 is a flow chart of an alternative to subroutine L of Figure 21 for concluding revisions for an MS Word type document.
[0026] Figure 23 is a sample flow chart of subroutine I referenced in Figure 12 for copying a recognized image from the RHI memory and placing it in the document memory via a clipboard.
[0027] Figure 24 is a sample of code for subroutine N referenced in Figure 23 and Figure 37, for copying an image from the RHI memory into the clipboard.
[0028] Figure 25 is a sample of translated Visual Basic code for built-in macros referenced in the flow charts of Figure 26 to Figure 32 and Figure 37.
[0029] Figure 26 through Figure 32 are flow charts of subroutine J referenced in Figure 14, for the first three symbol operations illustrated in Figure 7 and according to the generic routines illustrated in Figure 8 for MS Word.
[0030] Figure 33 is a sample of Visual Basic code for subroutine M referenced in Figure 4 and Figure 5, for concluding embedding of the revisions for MS Word, according to the second embodiment using the running of programs.
[0031] Figure 34 is a sample of translated Visual Basic code for useful built-in macros in comment mode for MS Word.
[0032] Figure 35 provides examples of recorded macros translated into Visual Basic code that emulates some keyboard keys for MS Word.
[0033] Figure 36 is a flow chart of a process for checking whether a handwritten character to be emulated as a keyboard keystroke exists in a table and thus can be emulated and, if so, for executing the relevant line of code that emulates the keystroke.
[0034] Figure 37 is a flow chart of an example of subroutine K in Figure 14 for copying a recognized image from the RHI memory and placing it in the document memory via the clipboard.
[0035] Figure 38 is an alternate block schematic diagram to the one illustrated in Figure 1, illustrating basic functional blocks and data flow according to another of the disclosed embodiments, using a touch screen.
[0036] Figure 39 is a schematic diagram of an integrated edited document made with the use of a wireless pad.
[0037] Figures 40A-40D illustrate an example of user interaction with the touch screen to insert a line.
[0038] Figures 41A-41C illustrate an example of use of the command to delete an object.
[0039] Figures 42A-42D illustrate an example of user interaction with the touch screen to change line length.
[0040] Figures 43A-43D illustrate an example of user interaction with the touch screen to change line angle.
[0041] Figures 44A-44D illustrate an example of user interaction with the touch screen to apply a radius to a line or to change the radius of an arc.
[0042] Figures 45A-45C illustrate an example of user interaction with the touch screen to make a line parallel to another line.
[0043] Figures 46A-46D illustrate an example of user interaction with the touch screen to add a fillet or an arc.
[0044] Figures 47A-47D illustrate an example of user interaction with the touch screen to add a chamfer.
[0045] Figures 48A-48F illustrate an example of use of the command to trim an object.
[0046] Figures 49A-49D illustrate an example of user interaction with the touch screen to move an arced object.
[0047] Figures 50A-50D illustrate an example of use of the "No Snap" command.
[0048] Figures 51A-51D illustrate another example of use of the "No Snap" command.
[0049] Figures 52A-52D illustrate another example of use of the command to trim an object.
[0050] Figure 53 is an example of a user interface with icons.
[0051] Figures 54A-54B illustrate an example of before and after interacting with a three-dimensional vector-graphics representation of a cube on the touch screen.
[0052] Figures 54C-54D illustrate an example of before and after interacting with a three-dimensional vector-graphics representation of a sphere on the touch screen.
[0053] Figures 54E-54F illustrate an example of before and after interacting with a three-dimensional vector-graphics representation of a ramp on the touch screen.
[0054] Figures 55A-55B illustrate examples of user interface menus for text editing in selection mode.
[0055] Figure 56 illustrates an example of a gesture to mark text in command mode.
[0056] Figure 57 illustrates another example of a gesture to mark text in command mode.
[0057] Figures 58A-58B illustrate an example of automatically zooming text while drawing the gesture to mark text.
DETAILED DESCRIPTION
[0058] Referring to Figure 1, there is a block schematic diagram of an integrated document editor 10 according to a first embodiment, which illustrates the basic functional blocks and data flow of that first embodiment. A digitizing pad 12 is used, with its writing area [e.g., within the margins of an 8-1/2" x 11" sheet] accommodating standard-sized papers that correspond to the x-y locations of the edited page. The pad 12 receives data from a writing pen [e.g., magnetically, or mechanically by way of pressure with a standard pen]. Data from the digitizing pad 12 is read by a data receiver 14 as bitmap and/or vector data and then stored corresponding to, or referencing, the appropriate x-y location in a data receiving memory 16. Optionally, this information can be displayed on the screen of a display 25 on a real-time basis to provide the writer with real-time feedback.
[0059] Alternatively, and as illustrated in Figure 38, a touch screen 11 [or other positional input receiving mechanism as part of a display], with its receiving and displaying mechanisms integrated, receives data from the writing pen 10, whereby the original document is displayed on the touch screen as it would have been displayed on a printed page placed on the digitizing pad 12, and the writing by the pen 10 occurs on the touch screen at the same locations as it would have been written on a printed page. Under this scenario, the display 25, pad 12 and data receiver 14 of Figure 1 are replaced with element 11, the touch screen and associated electronics of Figure 38, and elements 16, 18, 20, 22, and 24 are discussed hereunder with reference to Figure 1. Under the touch screen display alternative, writing paper is eliminated.
[0060] When a printed page is used with the digitizing pad 12, adjustments in registration of location may be required such that locations on the printed page correlate to the correct x-y locations for data stored in the data receiving memory 16.
[0061] The correlation between locations of the writing pen 10 (on the touch screen 11 or on the digitizing pad 12) and the actual x-y locations in the document memory 22 need not be perfectly accurate, since the location of the pen 10 is with reference to existing machine code data. In other words, there is a window of error around the writing point that can be allowed without loss of useful information, because it is assumed that the new handwritten information (e.g., revisions) must always correspond to a specific location of the pen, e.g., near text, a drawing or an image. This is similar to, but not always the same as, placing a cursor at an insertion point in a document and changing from command mode to data input mode. For example, the writing point may be between two lines of text but closer to one line of text than to the other. This window of error can be continuously computed as a function of the pen tapping point and the data surrounding the tapping point. In case of ambiguity as to the exact location where the new data are intended to be inserted (e.g., when the writing point overlaps multiple possible locations in the document memory 22), the touch screen 11 (or the pad 12) may generate a signal, such as a beeping sound, requesting the user to tap closer to the point where handwritten information needs to be inserted. If the ambiguity is still not resolved (when the digitizing pad 12 is used), the user may be requested to follow an adjustment procedure.
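The window-of-error behavior described above can be sketched as a small snapping routine: a pen tap is matched to the nearest known anchor (e.g., a line of text) in the document, and taps that are ambiguous or outside the window are rejected so the system can beep and ask the user to tap again. All names, thresholds, and the distance metric here are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Anchor:
    # A candidate insertion location in the document (illustrative).
    x: float
    y: float
    label: str

def resolve_tap(tap_x, tap_y, anchors, window=12.0, margin=0.25):
    """Snap a pen tap to the nearest anchor within the error window.

    Returns the matched Anchor, or None when the tap is outside the
    window or ambiguous (two candidates nearly equidistant), in which
    case the caller can signal the user to tap closer.
    """
    # Score every anchor by Manhattan distance from the tap point.
    scored = sorted(
        ((abs(a.x - tap_x) + abs(a.y - tap_y), a) for a in anchors),
        key=lambda t: t[0],
    )
    if not scored or scored[0][0] > window:
        return None  # tap too far from any known content
    if len(scored) > 1 and scored[1][0] - scored[0][0] <= margin * scored[0][0]:
        return None  # ambiguous: overlapping candidate locations
    return scored[0][1]
```

A tap between two lines but clearly closer to one resolves to that line; an equidistant tap is rejected, mirroring the beep-and-retry behavior.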
[0062] If desired, adjustments may be made such that the writing area on the digitizing pad is set to correspond to a specific active window (for example, in a multi-window screen), or to a portion of a window (i.e., when the active portion of a window covers a partial screen, e.g., an invoice or a bill in the accounting program QuickBooks), such that the writing area of the digitizing pad 12 is efficiently utilized. In situations where a document is a form (e.g., an order form), the paper document can be pre-set to the specific format of the form, such that the handwritten information can be entered at specific fields of the form (that correspond to these fields in the document memory 22). In addition, in operations that do not require archiving of the handwritten paper documents, handwritten information on the digitizing pad 12 may be deleted after it is integrated into the document memory 22. Alternatively, multi-use media that allow multiple deletions (that clear the handwritten information) can be used, although the touch screen alternative would be preferred over this alternative.
[0063] A recognition functionality element 18 reads information from the data receiving memory 16 and writes the recognition results, or recognized handwritten elements, into the recognized handwritten information (RHI) memory 20. Recognized handwritten information elements (RHI elements), such as characters, words, and symbols, are stored in the RHI memory 20. The location of an RHI element in the RHI memory 20 correlates to its location in the data receiving memory 16 and in the document memory 22. After symbols are recognized and interpreted as commands, they may be stored as images or icons in, for example, JPEG format (or they can be emulated as if they were keyboard keys, a technique discussed hereafter), since the symbols are intended to be intuitive. They can be useful for reviewing and interpreting revisions in the document. In addition, the recognized handwritten information prior to final incorporation (e.g., revisions for review) may be displayed either in handwriting (as is, or as revised machine code handwriting for improved readability) or in standard text.
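The location-correlated storage described above might look like the following sketch, where each recognized unit is kept with the x-y location it was written at so it can later be mapped back to the data receiving memory and the document memory. Class and field names are illustrative assumptions, not from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class RHIElement:
    kind: str                   # "character", "word", "symbol" or "graphic"
    payload: Union[str, bytes]  # e.g., ASCII text, or image bytes for graphics
    x: float                    # x-y location, mirroring the location of the
    y: float                    # unit in the data receiving memory

@dataclass
class RHIMemory:
    elements: List[RHIElement] = field(default_factory=list)

    def store(self, element: RHIElement) -> None:
        self.elements.append(element)

    def near(self, x: float, y: float, radius: float) -> List[RHIElement]:
        """Elements whose stored location falls inside a square window,
        i.e., the units correlated with a given document location."""
        return [e for e in self.elements
                if abs(e.x - x) <= radius and abs(e.y - y) <= radius]
```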
[0064] An embedded criteria and functionality element 24 reads the information from the RHI memory 20 and embeds it into the document memory 22. Information in the document memory 22 is displayed on the display 25, which is, for example, a computer monitor or the display of a touch screen. The embedded functionality determines what to display and what to embed into the document memory 22 based on the stage of the revision and selected user criteria/preferences.
[0065] Embedding the recognized information into the document memory 22 can be applied either concurrently or after input of all handwritten information, such as revisions, has been concluded. Incorporation of the handwritten information concurrently can occur with or without user involvement. The user can indicate each time a handwritten command and its associated text and/or image has been concluded, and then it can be incorporated into the document memory 22 one at a time. (Incorporation of handwritten information concurrently without user involvement will be discussed hereafter.) The document memory 22 contains, for example, one of the following files: 1) a word processing file, such as an MS Word file or a WordPerfect file; 2) a spreadsheet, such as an Excel file; 3) a form, such as a sales order, an invoice or a bill in accounting software (e.g., QuickBooks); 4) a table or a database; 5) a desktop publishing file, such as a QuarkXPress or a PageMaker file; or 6) a presentation file, such as an MS PowerPoint file.
[0066] It should be noted that the document could be any kind of electronic file: word processing document, spreadsheet, web page, form, e-mail, database, table, template, chart, graph, image, object, or any portion of these types of documents, such as a block of text or a unit of data. In addition, the document memory 22, the data receiving memory 16 and the RHI memory 20 could be any kind of memory or memory device or a portion of a memory device, e.g., any type of RAM, magnetic disk, CD-ROM, DVD-ROM, optical disk or any other type of storage. It should be further noted that one skilled in the art will recognize that the elements/components discussed herein (e.g., in Figures 1, 38, 9, 11, 13), such as the RHI element, may be implemented in any combination of electronic or computer hardware and/or software. For example, the disclosed embodiments could be implemented in software operating on a general-purpose computer or other types of computing/communication devices, such as hand-held computers, personal digital assistants (PDAs), cell phones, etc. Alternatively, a general-purpose computer may be interfaced with specialized hardware, such as an Application Specific Integrated Circuit or some other electronic components, to implement the disclosed embodiments. Therefore, it is understood that the disclosed embodiments may be carried out using various codes of one or more software modules forming a program and executed as instructions/data by, e.g., a central processing unit, or using hardware modules specifically configured and dedicated to perform the disclosed embodiments. Alternatively, the disclosed embodiments may be carried out using a combination of software and hardware modules.
[0067] The recognition functionality element 18 encompasses one or more of the following recognition approaches: 1) character recognition, which can, for example, be used in cases where the user clearly spells each character in capital letters in an effort to minimize recognition errors; 2) a holistic approach, where recognition is globally performed on the whole representation of the words and there is no attempt to identify characters individually (the main advantage of holistic methods is that they avoid word segmentation; their main drawback is that they are tied to a fixed lexicon of word descriptions: since these methods do not rely on letters, words are directly described by means of features, and adding new words to the lexicon typically requires human training or the automatic generation of a word description from ASCII words); and 3) analytical strategies that deal with several levels of representation corresponding to increasing levels of abstraction (words are not considered as a whole, but as sequences of smaller-size units, which must be easily related to characters in order to make recognition independent of a specific vocabulary).
[0068] Strings of words or symbols, such as those described in connection with Figure 7 and discussed hereafter, can be recognized by either the holistic approach or the analytical strategies, although character recognition may be preferred. Units recognized as characters, words or symbols are stored into the RHI memory 20, for example in ASCII format. Units that are graphics are stored into the RHI memory as graphics, for example as a JPEG file. Units that could not be recognized as a character, word or symbol are interpreted as images, if the application accommodates graphics and, optionally, if approved by the user as graphics, and stored into the RHI memory 20 as graphics. It should be noted that units that could not be recognized as a character, word or symbol may not be interpreted as graphics in applications that do not accommodate graphics (e.g., Excel); in this scenario, user involvement may be required.
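The storage decision for recognized units can be summarized as a small dispatch function. This is a hedged sketch of the branching described above; the parameter names and return values are illustrative assumptions, not from the patent.

```python
def accept_unit(unit_kind, app_supports_graphics):
    """Decide how a recognized unit is accepted into RHI memory."""
    if unit_kind in ("character", "word", "symbol"):
        return "store-as-text"     # e.g., stored in ASCII format
    if unit_kind == "graphic":
        return "store-as-graphic"  # e.g., stored as a JPEG file
    # The unit could not be recognized: treat it as an image only if the
    # application accommodates graphics (optionally subject to user
    # approval); otherwise user involvement is required (e.g., Excel).
    if app_supports_graphics:
        return "store-as-graphic-pending-approval"
    return "ask-user"
```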
[0069] To improve the recognition functionality, data may be read from the document memory by the recognition element 18 to verify that the recognized handwritten information does not conflict with data in the original document and to resolve or minimize, as much as possible, recognized information retaining ambiguity. The user may also resolve ambiguity by approving/disapproving recognized handwritten information (e.g., revisions) shown on the display 25. In addition, adaptive algorithms (beyond the scope of this disclosure) may be employed. Thereunder, user involvement may be relatively significant at first, but as the adaptive algorithms learn the specific handwritten patterns and store them as historical patterns, future ambiguities should be minimized as recognition becomes more robust.
[0070] Figure 2 through Figure 5 are flow charts of operation according to an exemplary embodiment and are briefly explained herein below. The text in all of the drawings is herewith explicitly incorporated into this written description for the purposes of claim support. Figure 2 illustrates a program that reads the output of the digitizing pad 12 (or of the touch screen 11) each time the writing pen 10 taps on and/or leaves the writing surface of the pad 12 (or of the touch screen 11). Thereafter, data is stored in the data receiving memory 16 (Step E). Both the recognition element and the data receiver (or the touch screen) access the data receiving memory; therefore, during a read/write cycle by one element, access by the other element should be disabled.
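The mutual-exclusion requirement above (writer and reader sharing the data receiving memory, with one blocked while the other accesses it) can be sketched with a lock. This is an illustrative assumption about one possible implementation, not the patent's own code.

```python
import threading

class DataReceivingMemory:
    """Shared buffer written by the data receiver (or touch screen) and
    read by the recognition element; a lock serializes every access."""

    def __init__(self):
        self._lock = threading.Lock()
        self._units = []

    def on_pen_tap(self, stroke):
        # Interrupt-style handler: called whenever the pen taps on or
        # leaves the writing surface, then stores the data (Step E).
        with self._lock:
            self._units.append(stroke)

    def drain(self):
        # Called by the recognition element; the writer is blocked for
        # the duration of the read, and read data is cleared.
        with self._lock:
            units, self._units = self._units, []
            return units
```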
[0071] Optionally, as illustrated in Figure 3, the program checks every few milliseconds to see if there is new data to read from the digitizing pad 12 (or from the touch screen 11). If so, data is received from the digitizing recognizer and stored in the data receiving memory (E). This process continues until the user indicates that the revisions are concluded, or until there is a timeout.
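A minimal sketch of this polling alternative, assuming a fixed overall timeout for simplicity: the pad is checked every few milliseconds until the user signals that the revisions are concluded or the timeout expires. The three callables stand in for the pad interface and are illustrative assumptions.

```python
import time

def poll_pad(has_new_data, read_unit, done, interval_s=0.005, timeout_s=2.0):
    """Poll the digitizing pad (or touch screen) every few milliseconds."""
    collected = []                         # stands in for the data receiving memory
    deadline = time.monotonic() + timeout_s
    while not done() and time.monotonic() < deadline:
        if has_new_data():
            collected.append(read_unit())  # store the new unit (E)
        else:
            time.sleep(interval_s)         # wait a few milliseconds
    return collected
```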
[0072] Embedding of the handwritten information may be executed either all at once, according to the procedures explained with reference to Figure 4, or concurrently, according to the procedures explained with reference to Figure 5.
[0073] The recognition element 18 recognizes one unit at a time, e.g., a character, a word, a graphic or a symbol, and makes it available to the RHI processor and memory 20 (C). The functionality of this processor and the way in which it stores recognized units into the RHI memory will be discussed hereafter with reference to Figure 9. Units that are not recognized immediately are either dealt with at the end as graphics, or the user may indicate otherwise manually by other means, such as a selection table or keyboard input (F). Alternatively, graphics are interpreted as graphics if the user indicates when the writing of graphics begins and when it is concluded. Once the handwritten information is concluded, it is grouped into memory blocks, whereby each memory block contains all (as in Figure 4) or possibly partial (as in Figure 5) recognized information that is related to one handwritten command, e.g., a revision. The embed function (D) then embeds the recognized handwritten information (e.g., revisions) in "for review" mode. Once the user approves/disapproves revisions, they are embedded in final mode (L) according to the preferences set up (A) by the user. In the examples illustrated hereafter, revisions in MS Word are embedded in Track Changes mode all at once. Embedding all at once according to Figure 4 may, for example, be useful when the digitizing pad 12 is separate from the rest of the system, whereby handwritten information from the digitizing pad's internal memory may be downloaded into the data receiving memory 16, after the revisions are concluded, via a USB or other IEEE or ANSI standard port.
[0074] Figure 4 is a flow chart of the various steps whereby embedding "all" recognized handwritten information (such as revisions) into the document memory 22 is executed once "all" handwritten information is concluded. First, the Document Type is set up (e.g., Microsoft® Word or QuarkXPress), with software version and user preferences (e.g., whether to incorporate revisions as they are available or one at a time upon user approval/disapproval), and the various symbols preferred by the user for the various commands (such as for inserting text, for deleting text and for moving text around) (A). The handwritten information is read from the data receiving memory 16 and stored in the memory of the recognition element 18 (B). Information that is read from the receiving memory 16 is marked/flagged as read, or it is erased after it is read by the recognition element 18 and stored in its memory; this will ensure that only new data is read by the recognition element 18.
[0075] Figure 5 is a flow chart of the various steps whereby embedding recognized handwritten information (e.g., revisions) into the document memory 22 is executed concurrently (e.g., with the making of the revisions). Steps 1 - 3 are identical to the steps of the flow chart in Figure 4 (discussed above). Once a unit, such as a character, a symbol or a word, is recognized, it is processed by the RHI processor 20 and stored in the RHI memory. A processor (GMB functionality 30 referenced in Figure 9) identifies it as either a unit that can be embedded immediately or not. The unit is checked to see if it can be embedded (step 4.3); if it can be (step 5), it is embedded (D) and then (step 6) deleted or marked/updated as embedded (G). If it cannot be embedded (step 4.1), more information is read from the digitizing pad 12 (or from the touch screen 11). This process of steps 4 - 6 repeats and continues so long as handwritten information is forthcoming. Once all data is embedded (indicated by an End command or a simple timeout), units that could not be recognized are dealt with (F) in the same manner discussed for the flow chart of Figure 4. Finally, once the user approves/disapproves revisions, they are embedded in final mode (L) according to the preferences chosen by the user.
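As a rough illustration of the concurrent loop of steps 4 - 6 (not the patent's actual flow chart), the following sketch buffers recognized units until the GMB-style check decides a command can be embedded; `can_embed` and the unit values are hypothetical.

```python
def embed_concurrently(units, can_embed, embed):
    """Sketch of the Figure 5 loop: each arriving unit is checked (step 4.3);
    if embeddable it is embedded (step 5, D) and cleared, i.e., marked as
    embedded (step 6, G); otherwise more input is awaited (step 4.1)."""
    pending = []                       # units whose command is not yet concluded
    for unit in units:                 # units forthcoming from the recognizer
        pending.append(unit)
        if can_embed(pending):         # step 4.3: command identified as ready?
            embed(list(pending))       # step 5: embed into the document memory
            pending.clear()            # step 6: delete/mark as embedded
    return pending                     # leftovers are dealt with at the end (F)

# Usage with a toy rule: a command is embeddable once its terminator arrives.
embedded = []
leftover = embed_concurrently(
    ["delete", "word", ";"],           # ';' stands for a concluded command
    can_embed=lambda p: p[-1] == ";",
    embed=embedded.append,
)
```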
[0076] Figure 6 is an example of the various options and preferences available to the user to display the handwritten information in the various steps for MS Word. In "For Review" mode the revisions are displayed as "For Review", pending approval for "Final" incorporation. Revisions, for example, can be embedded in "Track Changes" mode, and once approved/disapproved (as in "Accept/Reject Changes"), they are embedded into the document memory 22 as "Final". Alternatively, symbols may also be displayed on the display 25. The symbols are selectively chosen to be intuitive and, therefore, can be useful for quick review of revisions. For the same reason, text revisions may be displayed either in handwriting as is, or as machine-coded text for improved readability; in "Final" mode, all the symbols are erased, and the revisions are incorporated as an integral part of the document.
[0077] An example of a basic set of handwritten commands/symbols and their interpretation with respect to their associated data for making revisions in various types of documents is illustrated in Figure 7.
[0078] Direct access to specific locations in the document memory 22 is needed for read/write operations. Embedding recognized handwritten information from the RHI memory 20 into the document memory 22 (e.g., for incorporating revisions) may not be possible (or may be limited) for after-market applications. Each of the embodiments discussed below provides an alternate "back door" solution to overcome this obstacle.
Embodiment One: Emulating Keyboard Entries:
[0079]Command information in the RHI memory 20 is used to insert or revise data, such as text or images in designated locations in the document memory 22, wherein the execution mechanisms emulate keyboard keystrokes, and when available, operate in conjunction with running pre-recorded and/or built-in macros assigned to sequences of keystrokes (i.e., shortcut keys). Data such as text can be copied from the RHI memory 20 to the clipboard and then pasted into designated locations in the document memory 22, or it can be emulated as keyboard keystrokes. This embodiment will be discussed hereafter.
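The two delivery paths described above (clipboard paste versus character-by-character keystroke emulation) can be sketched as follows. This is a hedged illustration only: `send_keys` and `set_clipboard` are hypothetical stand-ins for the emulate-keyboard mechanism and the operating system clipboard, and the key name "Ctrl-V" is illustrative.

```python
def insert_text(text, via_clipboard, send_keys, set_clipboard):
    """Embodiment One sketch: text from the RHI memory is delivered either by
    placing it on the clipboard and emulating the paste shortcut, or by
    emulating each character as an individual keystroke."""
    if via_clipboard:
        set_clipboard(text)            # copy text to the clipboard buffer
        send_keys(["Ctrl-V"])          # paste at the current insertion point
    else:
        send_keys(list(text))          # emulate each character as a keystroke

# Usage: record what each path would emit.
keystrokes, clipboard = [], []
insert_text("Hi", via_clipboard=False,
            send_keys=keystrokes.extend, set_clipboard=clipboard.append)
insert_text("Hi", via_clipboard=True,
            send_keys=keystrokes.extend, set_clipboard=clipboard.append)
```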
Embodiment Two: Running Programs:
[0080] In applications such as Microsoft® Word, Excel and WordPerfect, where programming capabilities, such as VB Scripts and Visual Basic, are available, the commands and their associated data stored in the RHI memory 20 are translated to programs that embed them into the document memory 22 as intended. In this embodiment, the operating system clipboard can be used as a buffer for data (e.g., text and images). This embodiment will also be discussed hereafter.
[0081]Information associated with a handwritten command as discussed in Embodiment One and Embodiment Two is either text or graphics (image), although it could be a combination of text and graphics. In either embodiment, the clipboard can be used as a buffer.
For copy operations in the RHI memory:
[0082] A unit of text or an image is copied from a specific location indicated in the memory block in the RHI memory 20, to be inserted at a designated location in the document memory 22.
For Cut/Paste and for Paste operations within the document memory:
[0083] These operations move text or an image within the document memory 22, and paste text or an image copied from the RHI memory 20.
[0084] A key benefit of Embodiment One is its usefulness in a large array of applications, with or without programming capabilities, to execute commands, relying merely on control keys and, when available, built-in or pre-recorded macros. When a control key, such as Arrow Up, or a simultaneous combination of keys, such as Cntrl-C, is emulated, a command is executed.
[0085] Macros cannot be run in Embodiment Two unless translated to actual low-level programming code (e.g., Visual Basic code). In contrast, running a macro in a control language native to the application (recorded and/or built-in) in Embodiment One is simply achieved by emulating its assigned shortcut key(s). Embodiment Two may be preferred over Embodiment One, for example in MS Word, if the Visual Basic Editor is used to create code that includes Visual Basic instructions that cannot be recorded as macros.
[0086] Alternatively, Embodiment Two may be used in conjunction with Embodiment One, whereby, for example, instead of moving text from the RHI memory 20 to the clipboard and then placing it in a designated location in the document memory 22, text is emulated as keyboard keystrokes. If desired, the keyboard keys can be emulated in Embodiment Two by writing code for each key that, when executed, emulates a keystroke. Alternatively, Embodiment One may be implemented for applications with no programming capabilities, such as QuarkXPress, and Embodiment Two may be implemented for some of the applications that do have programming capabilities. Under this scenario, some applications with programming capabilities may still be implemented in Embodiment One or in both Embodiment One and Embodiment Two.
[0087] Alternatively, x-y locations in the data receiving memory 16 (as well as designated locations in the document memory 22) can be identified on a printout or on the display 25, and if desired, on the touch screen 11, based on: 1) recognition/identification of a unique text and/or image representation around the writing pen, and 2) searching for and matching the recognized/identified data around the pen with data in the original document, which may be converted into the bitmap and/or vector format identical to the format in which handwritten information is stored in the data receiving memory 16. Then the handwritten information, along with its x-y locations correspondingly indexed in the document memory, is transmitted to a remote platform for recognition, embedding and displaying.
[0088] The data representation around the writing pen and the handwritten information are read by a miniature camera with attached circuitry that is built into the pen. The data representing the original data in the document memory 22 is downloaded into the pen's internal memory prior to the commencement of handwriting, either via a wireless connection (e.g., Bluetooth) or via a physical connection (e.g., USB port).
[0089] The handwritten information, along with its identified x-y locations, is either downloaded into the data receiving memory 16 of the remote platform after the handwritten information is concluded (via a physical or wireless link), or it can be transmitted to the remote platform via a wireless link as the x-y location of the handwritten information is identified. Then, the handwritten information is embedded into the document memory 22 all at once (i.e., according to the flow chart illustrated in Figure 4), or concurrently (i.e., according to the flow chart illustrated in Figure 5).
[0090] If desired, the display 25 may include pre-set patterns (e.g., engraved or silk-screened) throughout the display or at selected locations of the display, such that when read by the camera of the pen, the exact x-y location on the display 25 can be determined. The pre-set patterns on the display 25 can be useful to resolve ambiguities, for example when identical information around locations in the document memory 22 exists multiple times within the document.
[0091] Further, the tapping of the pen in selected locations of the touch screen 11 can be used to determine the x-y location in the document memory (e.g., when the user makes yes-no type selections within a form displayed on the touch screen). This, for example, can be performed on a tablet that can accept input from a pen or any other pointing device that functions as a mouse and writing instrument.
[0092] Alternatively (or in addition to a touch screen), the writing pen can emit a focused laser/IR beam to a screen with thermal or optical sensing, and the location of the sensed beam may be used to identify the x-y location on the screen. Under this scenario, the use of a pen with a built-in miniature camera is not needed. When a touch screen or a display with thermal/optical sensing (or pre-set patterns on an ordinary display) is used to detect x-y locations on the screen, the designated x-y location in the document memory can be determined based on: 1) the detected x-y location of the pen 10 on the screen, and 2) parameters that correlate between the displayed data and the data in the document memory 22 (e.g., application name, cursor location on the screen and zoom percent).
[0093] Alternatively, the mouse could be emulated to place the insertion point at designated locations in the document memory 22 based on the x-y locations indicated in the data receiving memory 16. Then information from the RHI memory 20 can be embedded into the document memory 22 according to Embodiment One or Embodiment Two. Further, once the insertion point is at a designated location in the document memory 22, selection of text or an image within the document memory 22 may also be achieved by emulating the mouse pointer click operation.
Use of the Comments insertion feature:
[0094] The Comments feature of Microsoft® Word (or a similar comment-inserting feature in other program applications) may be employed by the user or automatically in conjunction with either of the approaches discussed above, and then handwritten information from the RHI memory 20 can be embedded into designated Comments fields of the document memory 22. This approach will be discussed further hereafter.
Use of the Track Changes Feature:
[0095] Before embedding information into the document memory 22, the document type is identified and user preferences are set (A). The user may select to display revisions with the Track Changes feature. The Track Changes mode of Microsoft® Word (or similar features in other applications) can be invoked by the user or automatically in conjunction with either or both of Embodiment One and Embodiment Two, and then handwritten information from the RHI memory 20 can be embedded into the document memory 22. After all revisions are incorporated into the document memory 22, they can be accepted for the entire document, or they can be accepted/rejected one at a time upon user command. Alternatively, they can be accepted/rejected at the making of the revisions.
[0096] The insertion mechanism may also be a plug-in that emulates the Track Changes feature. Alternatively, the Track Changes feature may be invoked after the Comments feature is invoked, such that revisions in the Comments fields are displayed as revisions, i.e., "For Review". This could be particularly useful for large documents reviewed/revised by multiple parties.
[0097] In another embodiment, the original document is read, converted into a document with a known accessible format (e.g., ASCII for text and JPEG for graphics) and stored in an intermediate memory location. All read/write operations are performed directly on this intermediate copy. Once revisions are completed, or before transmitting to another platform, it can be converted back into the original format and stored into the document memory 22.
[0098] As discussed, revisions are written on a paper document placed on the digitizing pad 12, whereby the paper document contains/resembles the machine code information stored in the document memory 22, and the x-y locations on the paper document correspond to the x-y locations in the document memory 22. In an alternative embodiment, the revisions can be made on blank paper (or on another document), whereby the handwritten information, for example, is a command (or a set of commands) to write or revise a value/number in a cell of a spreadsheet, or to update new information in a specific location of a database; this can be useful, for example, in cases where an action to update a spreadsheet, a table or a database is needed after reviewing a document (or a set of documents). In this embodiment, the x-y location in the receiving memory 16 is immaterial.
RHI processor and memory blocks
[0099] Before discussing the way in which information is embedded into the document memory 22 in greater detail with reference to the flow charts, it is necessary to define how recognized data is stored in memory and how it correlates to locations in the document memory 22. As previously explained, embedding the recognized information into the document memory 22 can be applied either concurrently or after all handwritten information has been concluded. The Embed function (D) referenced in Figure 4 reads data from memory blocks in the RHI memory 20 one at a time, where each memory block corresponds to one handwritten command and its associated text data or image data. The Embed function (D) referenced in Figure 5 reads data from memory blocks and embeds recognized units concurrently.
[00100] Memory blocks: An example of how a handwritten command and its associated text or image is defined in the memory block 32 is illustrated in Figure 10. This format may be expanded, for example, if additional commands are added, i.e., in addition to the commands specified in the Command field. The parameters defining the x-y location of recognized units (i.e., InsertionPoint1 and InsertionPoint2 in Figure 10) vary as a function of the application. For example, the x-y locations/insertion points of text or an image in MS Word can be defined with the parameters Page#, Line# and Column# (as illustrated in Figure 10). In the application Excel, the x-y locations can be translated into the cell location in the spreadsheet, i.e., Sheet#, Row# and Column#. Therefore, different formats for x-y InsertionPoint1 and x-y InsertionPoint2 need to be defined to accommodate a variety of applications.
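One plausible rendering of such a memory block as a data structure is sketched below; the field names and application tags are illustrative readings of Figure 10, not the patent's actual layout. The same triple of integers serves as (Page#, Line#, Column#) for a Word-style application or (Sheet#, Row#, Column#) for a spreadsheet.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MemoryBlock:
    """Sketch of one memory block 32: one handwritten command plus its
    associated data and application-dependent insertion point(s)."""
    command: str                        # e.g. "InsertText", "DeleteText" (assumed names)
    app: str                            # e.g. "Word" or "Excel" (assumed tags)
    insertion_point1: Tuple[int, int, int]          # (page, line, column) or (sheet, row, column)
    insertion_point2: Optional[Tuple[int, int, int]] = None  # e.g. end of a range
    data: Optional[str] = None          # associated text, or a reference to an image

# Usage: an insertion at page 2, line 14, column 5 of a Word-style document.
block = MemoryBlock(command="InsertText", app="Word",
                    insertion_point1=(2, 14, 5), data="new wording")
```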
[00101] Figure 9 is a chart of data flow of recognized units. These are discussed below.
[00102] FIFO (First In First Out) Protocol: Once a unit is recognized, it is stored in a queue, awaiting processing by the processor of element 20, and more specifically, by the GMB functionality 30. The "New Recog" flag (set to "One" by the recognition element 18 when a unit is available) indicates to the RU receiver 29 that a recognized unit (i.e., the next in the queue) is available. The "New Recog" flag is reset back to "Zero" after the recognized unit is read and stored in the memory elements 26 and 28 of Figure 9 (e.g., as in step 3.2 of the subroutines illustrated in Figure 4 and Figure 5). In response, the recognition element 18: 1) makes the next recognized unit available to be read by the RU receiver 29, and 2) sets the "New Recog" flag back to "One" to indicate to the RU receiver 29 that the next unit is ready. This process continues so long as recognized units are forthcoming. This protocol ensures that the recognition element 18 stays in synch with the speed at which recognized units are read from the recognition element and stored in the RHI memory (i.e., in memory elements 26 and 28 of Figure 9). For example, when handwritten information is processed concurrently, there may be more than one memory block available before the previous memory block is embedded into the document memory 22.
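The "New Recog" handshake can be sketched in software as follows. This is a minimal single-threaded model of the flag protocol only (class and method names are hypothetical); the patent also contemplates a hardware line implementation.

```python
from collections import deque

class RecognitionElement:
    """Sketch of the "New Recog" handshake: the flag goes to "One" when a
    recognized unit awaits, and to "Zero" once the RU receiver has read and
    stored it, acknowledging receipt before the next unit is offered."""
    def __init__(self, units):
        self.queue = deque(units)           # recognized units awaiting pickup
        self.new_recog = bool(self.queue)   # "One" when a unit is available

    def read_unit(self):                    # called by the RU receiver 29
        unit = self.queue.popleft()
        self.new_recog = False              # reset to "Zero": receipt acknowledged
        if self.queue:                      # next unit made available, flag raised
            self.new_recog = True
        return unit

# Usage: drain the queue exactly as fast as units are offered.
rec = RecognitionElement(["h", "i", "!"])
stored = []
while rec.new_recog:
    stored.append(rec.read_unit())          # store into RHI memory (elements 26/28)
```

Because the flag is only re-raised after the previous unit is stored, producer and consumer stay synchronized regardless of their relative speeds, which is the point of paragraph [00102].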
[00103] In a similar manner, this FIFO technique may also be employed between elements 20 and 22 and between elements 16 and 18 of Figure 1 and Figure 38, and between elements 10 and 12 of Figure 1, to ensure that independent processes are well synchronized, regardless of the speed at which data is made available by one element and the speed at which data is read and processed by the other element.
[00104] Optionally, the "New Recog" flag could be implemented in hardware (such as within an IC), for example, by setting a line to "High" when a recognized unit is available and to "Low" after the unit is read and stored, i.e., to acknowledge receipt.
[00105] Process 1: As a unit, such as a character, a symbol or a word, is recognized: 1) it is stored in the Recognized Units (RU) memory 28, and 2) its location in the RU memory 28, along with its x-y location as indicated in the data receiving memory 16, is stored in the "XY-RU Location to Address in RU" table 26. This process continues so long as handwritten units are recognized and forthcoming.
[00106] Process 2: In parallel to Process 1, the grouping into memory blocks (GMB) functionality 30 identifies each recognized unit, such as a character, a word or a handwritten command (symbols or words), and stores them in the appropriate locations of the memory blocks 32. In operations such as "moving text around", "increasing font size" or "changing color", an entire handwritten command must be concluded before it can be embedded into the document memory 22. In operations such as "deleting text" or "inserting new text", deleting or embedding the text can begin as soon as the command has been identified, and the deletion (or insertion of text) operation can then continue concurrently as the user continues to write on the digitizing pad 12 (or on the touch screen 11).
[00107] In this last scenario, as soon as the recognized unit(s) are incorporated into (or deleted from) the document memory 22, they are deleted from the RHI memory 20, i.e., from the memory elements 26, 28 and 32 of Figure 9. If deletion is not desired, embedded units may be flagged as "incorporated/embedded" or moved to another memory location (as illustrated in step 6.2 of the flow chart in Figure 5). This should ensure that information in the memory blocks is continuously current with new unincorporated information.
[00108] Process 3: As unit(s) are grouped into memory blocks, 1) the identity of the recognized units (whether they can be immediately incorporated or not) and 2) the locations of the units that can be incorporated in the RHI memory are continuously updated.
1. As units are grouped into memory blocks, a flag (i.e., the "Identity" flag) is set to "One" to indicate when unit(s) can be embedded. It should be noted that this flag is defined for each memory block and that it could be set more than one time for the same memory block (for example, when the user strikes through a line of text). This flag is checked in steps 4.1 - 4.3 of Figure 5 and is reset to "Zero" after the recognized unit(s) are embedded, i.e., in step 6.1 of the subroutine in Figure 5, and at initialization. It should be noted that the "Identity" flag discussed above is irrelevant when all recognized units associated with a memory block are embedded all at once; under this scenario, after the handwritten information is concluded, recognized, grouped and stored in the proper locations of the RHI memory, the "All Units" flag in step 6.1 of Figure 4 will be set to "One" by the GMB functionality 30 of Figure 9, to indicate that all units can be embedded.
2. As units are grouped into memory blocks, a pointer for the memory block, i.e., the "Next memory block pointer" 31, is updated every time a new memory block is introduced (i.e., when a recognized unit(s) that is not yet ready to be embedded is introduced; when the "Identity" flag is "Zero"), and every time a memory block is embedded into the document memory 22, such that the pointer will always point to the location of the memory block that is ready (when it is ready) to be embedded. This pointer indicates to the subroutines Embedd1 (of Figure 12) and Embedd2 (of Figure 14) the exact location of the relevant memory block with the recognized unit(s) that are ready to be embedded (as in step 1.2 of these subroutines).
[00109] An example of a scenario under which the "Next memory block pointer" 31 is updated is when a handwritten input related to changing font size has begun, then another handwritten input related to changing colors has begun (note that these two commands cannot be incorporated until after they are concluded), and then another handwritten input for deleting text has begun (note that this command may be embedded as soon as the GMB functionality identifies it).
[00110] The value in the "# of memory blocks" element 33 indicates the number of memory blocks to be embedded. This element is set by the GMB functionality 30 and used in step 1.1 of the subroutines illustrated in Figure 12 and Figure 14. This counter is relevant when the handwritten information is embedded all at once after its conclusion, i.e., when the subroutines of Figure 12 and Figure 14 are called from the subroutine illustrated in Figure 4 (it is not relevant when they are called from the subroutine in Figure 5; its value then is set to "One", since in this embodiment memory blocks are embedded one at a time).
Embodiment One
[00111] Figure 11 is a block schematic diagram illustrating the basic functional blocks and data flow according to Embodiment One. The text of these and all other figures is largely self-explanatory and need not be repeated herein. Nevertheless, the text thereof may be the basis of claim language used in this document.
[00112] Figure 12 is a flow chart example of the Embed subroutine D referenced in Figure 4 and Figure 5 according to Embodiment One. The following is to be noted.
1. When this subroutine is called by the routine illustrated in Figure 5 (i.e., when handwritten information is embedded concurrently): 1) the memory block counter (in step 1.1) is set to 1, and 2) the memory block pointer is set to the location in which the current memory block to be embedded is located; this value is defined in the memory block pointers element 31 of Figure 9.
2. When this subroutine is called by the subroutine illustrated in Figure 4 (i.e., when all handwritten information is embedded after all handwritten information is concluded): 1) the memory block pointer is set to the location of the first memory block to be embedded, and 2) the memory block counter is set to the value in the "# of memory blocks" element 33 of Figure 9.
[00113] In operation, memory blocks 32 are fetched one at a time from the RHI memory (G) and processed as follows. Memory blocks related to text revisions (H):
[00114] Commands are converted to keystrokes (35) in the same sequence as the operation is performed via the keyboard and then stored in sequence in the keystrokes memory 34. The emulate keyboard element 36 uses this data to emulate the keyboard, such that the application reads the data as if it were received from the keyboard (although this element may include additional keys not available via a keyboard, such as the symbols illustrated in Figure 7, e.g., for insertion of new text in an MS Word document). The clipboard 38 can handle insertion of text, or text can be emulated as keyboard keystrokes. The lookup table determines the appropriate control key(s) and keystroke sequences for pre-recorded and built-in macros that, when emulated, execute the desired command. These keyboard keys are application-dependent and are a function of parameters such as application name, software version and platform. Some control keys, such as the arrow keys, execute the same commands in a large array of applications; however, this assumption is excluded from the design in Figure 11, i.e., by the inclusion of the lookup table command-keystrokes in element 40 of Figure 11. In the flow charts in Figures 15 - 20, though, it is assumed that the following control keys execute the same commands (in the applications that are included): "Page Up", "Page Down", "Arrow Up", "Arrow Down", "Arrow Right" and "Arrow Left" (for moving the insertion point within the document), "Shift + Arrow Right" (for selection of text), and "Delete" (for deleting selected text). Element 40 may include lookup tables for a large array of applications, although it could include tables for one or any desired number of applications.
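The command-to-keystroke conversion (element 35 feeding the keystrokes memory 34) might be sketched as follows. The command dictionary shape and key names are hypothetical; the sequences mirror the conventions assumed above (Shift + Arrow Right to select, Delete to remove the selection).

```python
def convert_command(command):
    """Sketch: translate one RHI command into the keystroke sequence that
    would perform the same operation via the keyboard."""
    if command["op"] == "delete_text":
        # select the text one character at a time, then delete the selection
        return ["Shift+ArrowRight"] * command["length"] + ["Delete"]
    if command["op"] == "insert_text":
        return list(command["text"])    # each character emulated as a keystroke
    raise ValueError("unknown command: %r" % command["op"])

# Usage: deleting three characters at the current insertion point.
seq = convert_command({"op": "delete_text", "length": 3})
```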
Memory blocks related to new image (I):
[00115] The image (graphic) is first copied from the RHI memory 20, more specifically, based on information in the memory block 32, into the clipboard 38. Its designated location is located in the document memory 22 via a sequence of keystrokes (e.g., via the arrow keys). It is then stored (i.e., pasted from the clipboard 38 by the keystroke sequence Cntrl-V) into the document memory 22. If the command involves another operation, such as "Reduce Image Size" or "Move Image", the image is first identified in the document memory 22 and selected. Then the operation is applied by the appropriate sequences of keystrokes.
[00116] Figure 15 through Figure 20, the flow charts of the subroutines H referenced in Figure 12, illustrate execution of the first three basic text revisions discussed in connection with Figure 8, for MS Word and other applications. These flow charts are self-explanatory and are therefore not further described herein but are incorporated into this text. The following points are to be noted with reference to the function StartOfDocEmb1 illustrated in the flow chart of Figure 15:
1. This function is called by the function SetPointeremb1, illustrated in Figure 16.
2. Although, in many applications (including MS Word), the shortcut key combination "Cntrl+Home" will bring the insertion point to the start of the document, this routine was written to execute the same operation with the arrow keys.
3. Designated x-y locations in the document memory 22 in this subroutine are defined based on Page#, Line# and Column#; other subroutines are required when the x-y definition differs.
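A navigation sequence of the kind SetPointeremb1 produces can be sketched as below. This is an assumption-laden illustration: it uses "Ctrl+Home" for brevity where the patent's StartOfDocEmb1 routine uses only arrow keys, and the key names are illustrative.

```python
def goto_keystrokes(page, line, column):
    """Sketch: return the keystroke sequence that moves the insertion point
    to a designated (Page#, Line#, Column#) location, starting by returning
    to the start of the document and then walking forward with navigation keys."""
    keys = ["Ctrl+Home"]                   # insertion point to start of document
    keys += ["PageDown"] * (page - 1)      # advance to the designated page
    keys += ["ArrowDown"] * (line - 1)     # then down to the designated line
    keys += ["ArrowRight"] * (column - 1)  # then across to the designated column
    return keys

# Usage: page 1, line 3, column 2.
keys = goto_keystrokes(page=1, line=3, column=2)
```

Feeding such a sequence to the emulate keyboard element would position the insertion point without any direct access to the document memory, which is the point of the "back door" approach.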
[00117] Once all revisions are embedded, they are incorporated in final mode according to the flow chart illustrated in Figure 21 or according to the flow chart illustrated in Figure 22. In this implementation example, the Track Changes feature is used to "Accept All Changes", which embeds all revisions as an integral part of the document.
[00118] As discussed above, a basic set of keystroke sequences can be used to execute a basic set of commands for creation and revision of a document in a large array of applications. For example, the arrow keys can be used for jumping to a designated location in the document. When these keys are used in conjunction with the Shift key, a desired text/graphic object can be selected. Further, clipboard operations, i.e., the typical combined keystroke sequences Cntrl-X (for Cut), Cntrl-C (for Copy) and Cntrl-V (for Paste), can be used for basic edit/revision operations in many applications. It should be noted that, although a relatively small number of keyboard control keys are available, the design of an application at the OEM level is unlimited in this regard. (See, for example, Figures 1 - 5.) It should also be noted that the same key combination could execute different commands in different applications. For example, deleting an item in QuarkXPress is achieved by the keystrokes Cntrl-K, whereas the keystrokes Cntrl-K in MS Word open a hyperlink. Therefore, the ConvertText1 function H determines the keyboard keystroke sequences for command data stored in the RHI memory by accessing the command-keystrokes/command-control-key lookup table 40 of Figure 11.
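The per-application lookup that motivates element 40 can be sketched as a simple table keyed by (application, command); the table contents below are only the Cntrl-K example from the text, and the key spelling is illustrative.

```python
# Hypothetical command-keystrokes lookup (element 40 sketch): the same
# shortcut can mean different things in different applications, so the
# application name is part of the key.
COMMAND_KEYS = {
    ("MS Word", "delete_item"): ["Delete"],
    ("QuarkXPress", "delete_item"): ["Ctrl+K"],  # Ctrl+K deletes an item here...
    ("MS Word", "open_hyperlink"): ["Ctrl+K"],   # ...but opens a hyperlink here
}

def keys_for(app, command):
    """Return the keystroke sequence executing `command` in `app`."""
    return COMMAND_KEYS[(app, command)]

# The same keystrokes map to different commands in different applications.
same_keys = keys_for("QuarkXPress", "delete_item") == keys_for("MS Word", "open_hyperlink")
```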
The Use of Macros:
[00119] Execution of handwritten commands in applications such as Microsoft® Word, Excel and WordPerfect is enhanced with the use of macros. This is because sequences of keystrokes that execute desired operations may simply be recorded and assigned to shortcut keys. Once the assigned shortcut key(s) are emulated, the recorded macro is executed. Below are some useful built-in macros for Microsoft® Word. For simplification, they are grouped based on the operations used to embed handwritten information (D).
Bringing the insertion point to a specific location in the document: CharRight, CharLeft, LineUp, LineDown, StartOfDocument, StartOfLine, EndOfDocument, EndOfLine, EditGoto, GotoNextPage, GotoNextSection, GotoPreviousPage, GotoPreviousSection, GoBack 273279/2

Selection: CharRightExtend, CharLeftExtend, LineDownExtend, LineUpExtend, ExtendSelection, EditFind, EditReplace

Operations on selected text/graphic: EditClear, EditCopy, EditCut, EditPaste, CopyText, FontColors, FontSizeSelect, GrowFont, ShrinkFont, GrowFontOnePoint, ShrinkFontOnePoint, AllCaps, SmallCaps, Bold, Italic, Underline, UnderlineColor, UnderlineStyle, WordUnderline, ChangeCase, DoubleStrikethrough, Font, FontColor, FontSizeSelect

Displaying revisions: Hidden, Magnifier, Highlight, DocAccent, CommaAccent, DottedUnderline, DoubleUnderline, DoubleStrikethrough, HtmlSourceRefresh, InsertFieldChar (for enclosing a symbol for display), ViewMasterDocument, ViewPage, ViewZoom, ViewZoom100, ViewZoom200, ViewZoom

Images: InsertFrame, InsertObject, InsertPicture, EditCopyPicture, EditCopyAsPicture, EditObject, InsertDrawing, InsertFrame, InsertHorizontalLine

File operations: FileOpen, FileNew, FileNewDefault, DocClose, FileSave, SaveTemplate
[00120] If a macro has no shortcut key assigned to it, one can be assigned by the following procedure:

[00121] Clicking on the Tools menu and selecting Customize causes the Customize form to appear. Clicking on the Keyboard button brings up the Customize Keyboard dialog box. In the Categories box all the menus are listed, and in the Commands box all their associated commands are listed. Assigning a shortcut key to a specific macro can be done simply by selecting the desired built-in macro in the Commands box and pressing the desired shortcut keys.
[00122] Combinations of macros can be recorded as a new macro; the new macro runs whenever the sequence of keystrokes that is assigned to it is emulated. In the same manner, a macro in combination with keystrokes (e.g., of arrow keys) may be recorded as a new macro. It should be noted that recording of some sequences as a macro may not be permitted.
[00123] The use of macros, as well as the assignment of a sequence of keys to macros, can also be done in other word processors, such as WordPerfect.
[00124]Emulating a keyboard key 36 in applications with built-in programming capability, such as Microsoft® Word, can be achieved by running code that is equivalent to pressing that keyboard key. Referring to Figure 35 and Figure 36, details of this operation are presented. The text thereof is incorporated herein by reference. Otherwise, emulating the keyboard is a function that can be performed in conjunction with Windows or other computer operating systems.
Embodiment Two
[00125] Figure 13 is a block schematic diagram illustrating the basic functional blocks and data flow according to Embodiment Two. Figure 14 is a flow chart example of the Embed function D referenced in Figure 4 and in Figure 5 according to Embodiment Two. Memory blocks are fetched from the RHI memory 20 and processed. Text of these figures is incorporated herein by reference. The following should be noted regarding Figure 14:
[00126] 1. When this subroutine is called by the routine illustrated in Figure 5 (i.e., when handwritten information is embedded concurrently): 1) the memory block counter (in step 1 below) is set to 1, and 2) the memory block pointer is set to the location in which the current memory block to be embedded is located; this value is defined in the memory block pointers element (31) of Figure 9.
[00127] 2. When this subroutine is called by the subroutine illustrated in Figure 4 (i.e., when all handwritten information is embedded after all handwritten information is concluded): 1) the memory block pointer is set to the location of the first memory block to be embedded, and 2) the memory block counter is set to the value in the # of memory blocks element (33) of Figure 9.
[00128] A set of programs executes the commands defined in the memory blocks 32 of Figure 9, one at a time. Figure 26 through Figure 32, with text incorporated herein by reference, are flow charts of the subroutine J referenced in Figure 14. The programs depicted execute the first three basic text revisions discussed in Figure 8 for MS Word. These subroutines are self-explanatory and are not further explained here, but the text is incorporated by reference.
[00129] Figure 33 is the code in Visual Basic that embeds the information in Final Mode, i.e., "Accept All Changes" of the Track Changes feature, which embeds all revisions to be an integral part of the document.
[00130] Each of the macros referenced in the flow charts of Figure 26 through Figure 32 needs to be translated into executable code such as VB Script or Visual Basic code. If there is uncertainty as to which method or property to use, the macro recorder typically can translate the recorded actions into code. The Visual Basic translation of these macros is illustrated in Figure 25.
[00131] The clipboard 38 can handle the insertion of text into the document memory 22, or text can be emulated as keyboard keystrokes. (Refer to Figures 35-36 for details). As in Embodiment One, an image operation (K) such as copying an image from the RHI memory to the document memory 22 is executed as follows: the image is first copied from the RHI memory 20 into the clipboard 38, its designated location in the document memory 22 is determined, and the image is then pasted via the clipboard 38 into the document memory 22.
[00132] The selection of a program by the program selection and execution element 42 is a function of the command, the application, the software version, the platform, and the like. Therefore, the ConvertText2 function J selects a specific program for command data that are stored in the RHI memory 20 by accessing the lookup command-programs table 44. Programs may also be initiated by events, e.g., when opening or closing a file, or by a key entry, e.g., when bringing the insertion point to a specific cell of a spreadsheet by pressing the Tab key.
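The program-selection step above can be pictured as a dispatch table keyed by command, application and version. Below is a minimal, hypothetical Python sketch of that idea; the program names and table entries are illustrative assumptions rather than the patent's actual lookup table 44:

```python
# Hypothetical sketch of ConvertText2-style program selection: the program
# run for a command is chosen from a lookup keyed by command, application
# and software version. Names and entries are illustrative assumptions.
def _insert_text_word(text):
    return f"insert into Word document: {text}"

def _delete_text_word(text):
    return f"delete from Word document: {text}"

COMMAND_PROGRAMS = {
    ("insert", "MS Word", "2003"): _insert_text_word,
    ("delete", "MS Word", "2003"): _delete_text_word,
}

def convert_text2(command, application, version, data):
    """Select and run the program associated with `command` for the given
    application and version, mirroring the command-programs lookup."""
    program = COMMAND_PROGRAMS.get((command, application, version))
    if program is None:
        raise LookupError(f"no program for {command!r} in {application!r} {version!r}")
    return program(data)
```

Event-initiated programs (e.g., on file open/close) would simply be additional entries dispatched from the same kind of table.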
[00133]In Microsoft® Word, the Visual Basic Editor can be used to create very flexible, powerful macros that include Visual Basic instructions that cannot be recorded from the keyboard. The Visual Basic Editor provides additional assistance, such as reference information about objects and properties or an aspect of its behavior.
Working with the Comment feature as an insertion mechanism
[00134] Incorporating the handwritten revisions into the document through the Comment feature may be beneficial in cases where the revisions are mainly insertions of new text into designated locations, or when a plurality of revisions in various designated locations in the document need to be indexed to simplify future access to revisions; this can be particularly useful for large documents under review by multiple parties. Each comment can be further loaded into a sub-document which is referenced by a comment # (or a flag) in the main document. The Comments mode can also work in conjunction with the Track Changes mode.
[00135] For Embodiment One: Insert Annotation can be achieved by emulating the keystroke sequence Alt+Cntrl+M. The Visual Basic translated code for the recorded macro with this sequence is "Selection.Comments.Add Range:=Selection.Range", which could be used to achieve the same result in Embodiment Two.
[00136] Once in Comment mode, revisions in the RHI memory 20 can be incorporated into the document memory 22 as comments. If the text includes revisions, the Track Changes mode can be invoked prior to insertion of text into a comment pane.
Useful built-in macros for use in the Comment mode of MS Word:

GotoCommentScope ;highlight the text associated with a comment reference mark
GotoNextComment ;jump to the next comment in the active document
GotoPreviousComment ;jump to the previous comment in the active document
InsertAnnotation ;insert comment
DeleteAnnotation ;delete comment
ViewAnnotation ;show or hide the comment pane

The above macros can be used in Embodiment One by emulating their shortcut keys or in Embodiment Two with their translated code in Visual Basic. Figure 34 provides the translated Visual Basic code for each of these macros.
Spreadsheets, Forms and Tables

Embedding handwritten information in a cell of a spreadsheet or a field in a form or a table can either be for new information or for revising existing data (e.g., deletion, moving data between cells, or adding new data in a field). Either way, after the handwritten information is embedded in the document memory 22, it can cause the application (e.g., Excel) to change parameters within the document memory 22, e.g., when the embedded information in a cell is a parameter of a formula in a spreadsheet, which when embedded changes the output of the formula, or when it is the price of an item in a Sales Order, which when embedded changes the subtotal of the Sales Order. If desired, these new parameters may be read by the embed functionality 24 and displayed on the display 25 to provide the user with useful information, such as new subtotals, spell check output, or the stock status of an item (e.g., as a sales order is filled in).
[00137] As discussed, the x-y location in the document memory 22 for word processing type documents can, for example, be defined by page#, line# and character# (see Figure 10, x-y locations for InsertionPoint1 and InsertionPoint2). Similarly, the x-y location in the document memory 22 for a form, table or spreadsheet can, for example, be defined based on the location of a cell/field within the document (e.g., Column #, Row # and Page # for a spreadsheet). Alternatively, it can be defined based on the number of Tab and/or Arrow keys from a given known location. For example, a field in a Sales Order in the accounting application QuickBooks can be defined based on the number of Tabs from the first field (i.e., "customer; job") in the form.
[00138] The embed functionality can read the x-y information (see step 2 in the flow charts referenced in Figures 12 and 14), and then bring the insertion point to the desired location according to Embodiment One (see example flow charts referenced in Figures 15-16), or according to Embodiment Two (see example flow charts for MS Word referenced in Figure 26). Then the handwritten information can be embedded. For example, for a Sales Order in QuickBooks, emulating the keyboard key combination "Cntrl+J" will bring the insertion point to the first field, customer; job; then, emulating three Tab keys will bring the insertion point to the "Date" field, or emulating eight Tab keys will bring the insertion point to the field of the first "Item Code".
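The Tab-count navigation just described can be sketched as a small table of field offsets from the known anchor field. This is a hypothetical Python illustration using the QuickBooks Sales Order offsets stated above; the function name is an assumption:

```python
# Hypothetical sketch of Tab-count navigation for a QuickBooks Sales Order:
# "Cntrl+J" reaches the first field ("customer; job"), then a known number
# of Tab keystrokes reaches the target field.
FIELD_TAB_OFFSETS = {
    "customer; job": 0,
    "Date": 3,
    "Item Code": 8,
}

def keystrokes_to_field(field):
    """Return the keystroke sequence to emulate so that the insertion
    point lands on `field`."""
    return ["Cntrl+J"] + ["Tab"] * FIELD_TAB_OFFSETS[field]
```

The embed functionality would emulate this sequence and then embed the recognized handwritten information at the resulting insertion point.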
[00139] The software application QuickBooks has no macro or programming capabilities. Forms (e.g., a Sales Order, a Bill, or a Purchase Order) and Lists (e.g., the Chart of Accounts and the customer; job list) in QuickBooks can either be invoked via pull-down menus from the toolbar, or via a shortcut key. Therefore, Embodiment One could be used to emulate keyboard keystrokes to invoke a specific form or a specific list. For example, invoking a new invoice can be achieved by emulating the keyboard key combination "Cntrl+N", and invoking the chart of accounts list can be achieved by emulating the keyboard key combination "Cntrl+A". Invoking a Sales Order, which has no associated shortcut key defined, can be achieved by emulating the following keyboard keystrokes:
1. "Alt+C" ;brings up the pull-down menu from the toolbar related to "Customers"
2. "Alt+O" ;invokes a new sales order form
[00140] Once a form is invoked, the insertion point can be brought to the specified x-y location, and then the recognized handwritten information (i.e., command(s) and associated text) can be embedded.
[00141] As far as the user is concerned, he can either write the information (e.g., for posting a bill) on a pre-set form (e.g., in conjunction with the digitizing pad 12 or touch screen 11) or specify commands related to the operation desired. Parameters, such as the type of entry (a form, or a command), the order for entering commands, and the setup of the form are selected by the user in step 1, "Document Type and Preferences Setup" (A), illustrated in Figure 4 and in Figure 5.
[00142] For example, the following sequence of handwritten commands will post a bill for a purchase of office supplies at OfficeMax on 03/02/05, for a total of $45. The parameter "office supply", which is the account associated with the purchase, may be omitted if the vendor OfficeMax has already been set up in QuickBooks. Information can be read from the document memory 22, and based on this information the embed functionality 24 can determine whether the account has previously been set up or not, and report the result on the display 25. This, for example, can be achieved by attempting to cut information from the "Account" field (i.e., via the clipboard), assuming the account is already set up. The data in the clipboard can be compared with the expected results and, based on that, output for the display can be generated.
Bill
03/02/05
OfficeMax
$45
Office supply
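The clipboard-based account check described above can be sketched as follows. This is a hypothetical Python illustration; the function names are assumptions, and the "cut" operation is represented by a callable so the logic stays self-contained:

```python
# Hypothetical sketch of the account check: attempt to cut the "Account"
# field via the clipboard and compare the clipboard contents with the
# expected account to decide what to report on the display.
def check_account_setup(cut_account_field, expected_account):
    """`cut_account_field` simulates a clipboard Cut on the Account field;
    it returns the field's contents (empty if the account is not set up)."""
    clipboard = cut_account_field()
    if clipboard == expected_account:
        return "account already set up"
    return "account not set up"
```

In practice the cut would be performed by emulating Cntrl-X in the application, and the result string would be shown on the display 25.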
[00143] In applications such as Excel, either or both Embodiment One and Embodiment Two can be used to bring the insertion point to the desired location and to embed recognized handwritten information.
APPLICATION EXAMPLES

Wireless Pad
[00144] A wireless pad can be used for transmission of an integrated document to a computer and optionally for receiving back information that is related to the transmitted information. It can be used, for example, in the following scenarios:
1- Filling out a form at a doctor's office
2- Filling out an airway bill for shipping a package
3- Filling out an application for a driver license at the DMV
4- Serving a customer at a car rental agency or at a retail store
5- Taking notes at a crime scene or at an accident site
6- Order taking off-site, e.g., at conventions
[00145] Handwritten information can be inserted in designated locations in a pre-designed document, such as an order form, an application, a table or an invoice, on top of a digitizing pad 12 or using a touch screen 11 or the like. The pre-designed form is stored in a remote or a close-by computer. The handwritten information can be transmitted via a wireless link concurrently to a receiving computer. The receiving computer will recognize the handwritten information, interpret it and store it in machine code in the pre-designed document. Optionally, the receiving computer will prepare a response and transmit it back to the transmitting pad (or touch screen), e.g., to assist the user.
[00146] For example, information filled out on the pad 12 in an order form at a convention can be transmitted to an accounting program or a database residing in a close-by or remote server computer as the information is written. In turn, the program can check the status of an item, such as cost, price and stock status, and transmit information in real-time to assist the order taker. When the order taker indicates that the order has been completed, a sales order or an invoice can be posted in the remote server computer.
[00147] Figure 39 is a schematic diagram of an Integrated Edited Document System shown in connection with the use of a Wireless Pad. The Wireless Pad comprises a digitizing pad 12, display 25, data receiver 48, processing circuitry 60, transmission circuitry I 50, and receiving circuitry II 58. The digitizing pad receives tactile positional input from a writing pen 10. The transmission circuitry I 50 takes data from the digitizing pad 12 via the data receiver 48 and supplies it to receiving circuitry I 52 of a remote processing unit. The receiving circuitry II 58 captures information from display processing 54 via transmission circuitry II 56 of the remote circuitry and supplies it to processing circuitry 60 for the display 25. The receiving circuitry I 52 communicates with the data receiving memory 16, which interacts with the recognition module 18 as previously explained, which in turn interacts with the RHI processor and memory 20 and the document memory 22 as previously explained. The embedded criteria and functionality element 24 interacts with the elements 20 and 22 to modify the subject electronic document and communicate output to the display processing unit 54.
Remote Communication
[00148] In a communication between two or more parties at different locations, handwritten information can be incorporated into a document; the information can be recognized, converted into machine-readable text and image, and incorporated into the document as "For Review". As discussed in connection with Figure 6 (as an exemplary embodiment for an MS Word type document), "For Review" information can be displayed in a number of ways. The "For Review" document can then be sent to one or more receiving parties (e.g., via email). The receiving party may approve portions or all of the revisions and/or revise further in handwriting (as the sender has done) via the digitizing pad 12, via the touch screen 11 or via a wireless pad. The document can then be sent again "For Review". This process may continue until all revisions are incorporated/concluded.
Revisions via Fax
[00149] Handwritten information on a page (with or without machine-printed information) can be sent via fax, and the receiving facsimile machine, enhanced as a Multiple Function Device (printer/fax, character recognizing scanner), can convert the document into machine-readable text/image for a designated application (e.g., Microsoft® Word). Revisions vs. original information can be distinguished and converted accordingly based on designated revision areas marked on the page (e.g., by underlining or circling the revisions). Then it can be sent (e.g., via email) "For Review" (as discussed above, under "Remote Communication").
Integrated Document Editor with the use of a Cell Phone
[00150] Handwritten information can be entered on a digitizing pad 12 whereby locations on the digitizing pad 12 correspond to locations on the cell phone display. Alternatively, handwritten information can be entered on a touch screen that is used as a digitizing pad as well as a display (i.e., similar to the touch screen 11 referenced in Figure 38). Handwritten information can either be new information or a revision of existing stored information (e.g., a phone number, contact name, to-do list, calendar events, an image photo, etc.). Handwritten information can be recognized by the recognition element 18, processed by the RHI element 20 and then embedded into the document memory 22 (e.g., in a specific memory location of specific contact information). Embedding the handwritten information can, for example, be achieved by directly accessing locations in the document memory (e.g., a specific contact name); however, the method by which recognized handwritten information is embedded can be determined at the OEM level by the manufacturer of the phone.
Use of the Integrated Document Editor in authentication of handwritten information
[00151] A unique representation such as a signature, a stamp, a fingerprint or any other drawing pattern can be pre-set and fed into the recognition element 18 as units that are part of a vocabulary or as a new character. When handwritten information is recognized as one of these pre-set units to be placed in, e.g., a specific expected x-y location of the digitizing pad 12 (Figure 1) or touch screen 11 (Figure 38), an authentication or part of an authentication will pass. The authentication will fail if there is no match between the recognized unit and the pre-set expected unit. This can be useful for authentication of a document (e.g., an email, a ballot or a form) to ensure that the writer/sender of the document is the intended sender. Other examples are authentication and access of bank information or credit reports. The unique pre-set patterns can be either or both: 1) stored in a specific platform belonging to the user and/or 2) stored in a remote database location. It should be noted that the unique pre-set patterns (e.g., a signature) do not have to be disclosed in the document. For example, when an authentication of a signature passes, the embedded functionality 24 will, for example, embed the word "OK" in the signature line/field of the document.
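The authentication test just described combines two checks: the recognized unit must match the pre-set unit, and it must appear at (or near) the expected x-y location. A minimal, hypothetical Python sketch follows; the function names and the location tolerance are illustrative assumptions:

```python
# Hypothetical sketch of handwritten-unit authentication: the recognized
# unit must match the pre-set unit AND fall near its expected x-y location
# before "OK" is embedded in the signature field.
def authenticate(recognized_unit, location, expected_unit, expected_location,
                 tolerance=5):
    dx = abs(location[0] - expected_location[0])
    dy = abs(location[1] - expected_location[1])
    return recognized_unit == expected_unit and dx <= tolerance and dy <= tolerance

def signature_field_text(recognized_unit, location, expected_unit, expected_location):
    """Embed "OK" in the signature field only when authentication passes."""
    if authenticate(recognized_unit, location, expected_unit, expected_location):
        return "OK"
    return ""
```

Note that, as in the text above, only the pass/fail result ("OK" or nothing) is disclosed in the document; the pre-set pattern itself is not.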
[00152] Computing devices and methods discussing automatic computation of document locations at which to automatically apply user commands communicated by user input on a touch screen of a computing device are discussed in US patent application no. 16,152,2 which is a continuation of US patent no. 10,133,477, and in US patent application no. 16/158,235 which is a continuation of US patent no. 10,169,301.
[00153] The disclosed embodiments further relate to simplified user interaction with displayed representations of one or more graphic objects. The simplified user interaction may utilize a touch screen of a computing device, and may include using gestures to indicate desired change(s) in one or more parameters of the graphic objects. The parameters may include one or more of a line length, a line angle, an arc radius, a size, a surface area, or any other parameter of a graphic object, stored in memory of the computing device or computed by functions of the computing device. Changes in these one or more parameters are computed by functions of the computing device based on the user interaction on the touch screen, and these computed changes may be used by other functions of the computing device to compute changes in other graphic objects.
[00154] As mentioned above, the document could be any kind of electronic file, word processing document, spreadsheet, web page, form, e-mail, database, table, template, chart, graph, image, objects, or any portion of these types of documents, such as a block of text or a unit of data. It should be understood that the document or file may be utilized in any suitable application, including but not limited to, computer aided design, gaming, and educational materials.
[00155] It is an object of the disclosed embodiments to allow users to quickly edit Computer Aided Design (CAD) drawings on the go or on site following a short interactive on-screen tutorial; there is no need for skills/expertise such as those needed in operating CAD drawing applications, for example, AutoCAD® software. In addition, the disclosed embodiments may provide a significant time saving by providing simpler and faster user interaction, while revision iterations with professionals are avoided. Typical users may include, but are not limited to, construction builders and contractors, architects, interior designers, patent attorneys, inventors, and manufacturing plant managers.
[00156] It is a further object of the disclosed embodiments to allow users to use the same set of gestures provided for editing CAD drawings to edit graphics documents in a variety of commonly used document formats, such as the doc and docx formats. It should be noted that some of the commands commonly used in CAD drawing applications, for example AutoCAD® software, such as the command to apply a radius to a line or to add a chamfer, are not available in word processing applications or in desktop publishing applications.
[00157] It is a further object of the disclosed embodiments to allow users to create CAD drawings and graphics documents, based on user interaction on a touch screen of a computing device, in a variety of document formats, including CAD drawing formats such as the DXF format as well as the doc and docx formats, using the same gestures.
[00158] It is yet a further object of the disclosed embodiments to allow users to interact with a three-dimensional representation of graphic objects on the touch screen to indicate desired changes in one or more parameters of one or more graphic objects, which in turn will cause functions of the computing device to automatically effect the indicated changes.
[00159] These and other features of the disclosed embodiments will be better understood by reference to the set of accompanying drawings (Figures 40A-58B), which should be taken as an illustrative example and not limiting. Figures 40A-52D, Figures 54A-54F, and Figures 56-58A may be viewed as a portion of a tutorial of an app to familiarize users with the use of the gestures discussed in these drawings.
[00160] While the disclosed embodiments from Figures 41A through Figure 52D are described with reference to user interaction with two-dimensional representations of graphic objects, it should be understood that the disclosed embodiments may also be implemented with reference to user interaction with three-dimensional representations of graphic objects.
[00161] First, the user selects a command (e.g., a command to change line length, discussed in Figures 42A-42D) by drawing a letter or by selecting an icon which represents the desired command. Second, the computing device identifies the command. Then, responsive to user interaction with a displayed representation of a graphic object on the touch screen to indicate a desired change in one or more parameters (such as in line length), the computing device automatically causes the desired change in the indicated parameter and, when applicable, also automatically effects changes in locations of the graphic object and further, as a result, in other graphic objects in the memory in which the drawing is stored.
[00162] A desired (gradual or single) change in a parameter of a graphic object, being an increase or a decrease in its value, in the shape of the graphic object, such as a change from a straight line object to a segmented line object, or a gradual change from one shape to another, such as from a circle or a sphere to an ellipse and vice versa, may be indicated by changes in positional locations along a gesture being drawn on the touch screen (as illustrated, for example, in Figures 42A-42B), during which the computing device gradually and automatically applies the desired changes as the user continues to draw the gesture. From the user's perspective, it would seem as if the value of the parameter is changing at the same time as the gesture is being drawn.
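The gradual change described above can be sketched as a loop over positional samples of the gesture, updating the parameter at each sample. The following hypothetical Python illustration maps horizontal travel to line-length change; that particular mapping, and the function name, are illustrative assumptions:

```python
# Hypothetical sketch of a gradual parameter change driven by a gesture:
# each new positional sample along the gesture updates the parameter, so
# the value appears to change while the gesture is being drawn.
def apply_length_gesture(initial_length, gesture_points):
    """gesture_points: (x, y) samples in screen coordinates.
    Returns the successive values of the line-length parameter."""
    length = initial_length
    values = []
    prev_x = gesture_points[0][0]
    for x, _y in gesture_points[1:]:
        length += x - prev_x  # rightward motion grows the line, leftward shrinks it
        prev_x = x
        values.append(length)
    return values
```

Because the parameter is recomputed at every sample rather than once at the end of the gesture, the display can be refreshed continuously, producing the impression that the value changes while drawing.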
[00163] The subject drawing or a portion thereof stored in the device memory (defined herein as "vector graphics") may be displayed on the touch screen as a two-dimensional representation (herein defined as "vector image"), with which the user may interact in order to communicate desired changes in one or more parameters of a graphic object, such as in line length, line angle, or arc radius. As discussed above, the computing device automatically causes these desired changes in the graphic object and, when applicable, also in its locations, and further in parameters and locations of other graphic objects within the vector graphics which may be affected as a result of the changes in the graphic object indicated by the user. The vector graphics may alternately be represented on the touch screen as a three-dimensional vector image, so as to allow the user to view/review the effects of a change in a parameter of a graphic object in an actual three-dimensional representation of the vector graphics, rather than attempting to visualize the effects while viewing a two-dimensional representation.
[00164] Furthermore, the user may interact with a three-dimensional vector image on the touch screen to indicate desired changes in one or more parameters of one or more graphic objects, for example, by pointing/touching or tapping at geometrical features of the three-dimensional representation, such as on surfaces or at corners, which will cause the computing device to automatically change one or more parameters of one or more graphic objects of the vector graphics. Such user interaction with geometrical features may, for example, be along surface length, width or height, along edges of two connecting surfaces (e.g., along an edge connecting the top surface and one of the side surfaces), within surface(s) inside or outside a beveled/trimmed corner, on a sloped surface (e.g., of a ramp), or within an arced surface inside or outside an arced corner.
[00165] The correlation between user interaction with a geometrical feature of the three-dimensional vector image on the touch screen and changes in size and/or geometry of the vector graphics stored in the device memory may be achieved by first using one or more points/locations in the vector graphics stored (and defined in the xyz coordinate axis system) in the device memory (defined herein as "locations"), and correlating them, or parameters defined or computed based on them, with the geometrical features of the vector image with which the user may interact to communicate desired changes in graphic objects. A location herein is defined such that changes in that location, or in a stored or computed parameter, e.g., of a straight, arced, or segmented line extending/branching from that location, such as length, radius or angle (defined herein as a "variable"), can be used as the variable (or as one of the variables) in function(s) capable of computing changes in size and/or geometry of the vector graphics as a result of changes in that variable. User interaction may be defined within a region of interest, being the area of the geometrical feature on the touch screen within which the user may gesture/interact; this region may, for example, be an entire surface of a cube, or the entire cube surface with an area proximate to the center excluded. In addition, responsive to detecting finger movements in a predefined, expected direction (or in one of predefined directions), or predefined, expected touching or tapping within this region, the computing device automatically identifies the relevant variable and automatically carries out its associated function(s) to automatically effect the desired change(s) communicated by the user.
[00166] For example, a position of either of the edges/corners of a rectangle or of a cube is a location that may be used as a variable in a function (or in one of the functions) capable of computing a change in the geometry of the rectangle or of the cube as a result of a change in that variable. Similarly, the length of a line between two edges/corners (i.e., between two locations) of the cube or the angle between two connected surfaces of the cube may be used as the variable. Or, the center point of a circle or of a sphere may be used as the "location" from which the radius of the circle or of the sphere extends; the radius in this example may be a variable of a function capable of computing the circumference and surface area of the circle, or the circumference, surface area and volume of the sphere, as the user interacts with (e.g., touches) the sphere. Similarly, a length of a line extending from the center point of a vector graphics having a symmetrical geometry, such as a cube or a tube, or the location at the end of the line extending from the center point, may be used as a variable (or as one of the variables) of a function (or of one of the functions) capable of computing changes in the size of the symmetrical vector graphics or changes in its geometry, as the user interacts with the symmetrical vector image. Or, in a vector graphics with symmetry in one or more of its displayed surfaces, such as in the surface of the base of a cone, two locations may be defined, the first at the center point of the surface at the base, and the second at the edge of the line extending from that location to the top of the cone; the variables in this example may be the first location and the length of the line extending from the first location to the top of the cone, which can be used in function(s) capable of computing changes in the size and geometry of the cone, as the user interacts with the vector image representing the cone.
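The sphere example above reduces to a single variable (the radius, extending from the center-point location) feeding functions that compute the derived quantities. A minimal sketch in Python, using the standard closed-form formulas; the function name is an illustrative assumption:

```python
import math

# Hypothetical sketch of the sphere example: the center point is the
# "location", the radius is the "variable", and functions compute the
# derived quantities from that single variable as the user interacts
# with the displayed sphere.
def sphere_properties(radius):
    return {
        "circumference": 2 * math.pi * radius,          # great-circle circumference
        "surface_area": 4 * math.pi * radius ** 2,
        "volume": (4.0 / 3.0) * math.pi * radius ** 3,
    }
```

A touch or tap that changes the radius would simply re-run these functions, and the updated circumference, surface area and volume would follow automatically.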
Or, a complex or non-symmetrical vector graphics, represented on the touch screen as a three-dimensional vector image with which the user may interact to communicate changes in the vector graphics, may be divided into a plurality of partial vector graphics in the device memory (represented as one vector image on the touch screen), each represented by one or more functions capable of computing changes in its size and geometry, whereby the size and geometry of the vector graphics may be computed by the computing device based on the sum of the partial vector graphics.
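The decomposition just described can be sketched as follows; the division into three parts and their volumes are hypothetical, chosen only to illustrate summing partial vector graphics:

```python
def composite_volume(partial_volumes):
    """The size of a complex, non-symmetrical graphic stored as partial
    vector graphics is computed as the sum over its parts."""
    return sum(partial_volumes)

# Hypothetical division of a complex solid into three simpler solids:
parts = [8.0, 2.5, 0.75]
total = composite_volume(parts)
```

Editing one part then only requires recomputing that part's function before re-summing.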
[00167] In one embodiment, responsive to a user "pushing" (i.e., in effect touching) or tapping at a geometrical feature of a displayed representation of a vector graphics (i.e., at the vector image), the computing device automatically increases or decreases the size of the vector graphics or of one or more parameters represented on the graphic feature. For example, touching or tapping at a displayed representation of a corner of a cube or at a surface of a ramp will cause the computing device to automatically decrease or increase the size of the cube (Figures 54A-54B) or the decline/incline angle of the ramp, respectively.
[00168] Similarly, responsive to touching or tapping anywhere at a displayed representation of a sphere, the computing device automatically decreases or increases, respectively, the radius of the sphere, which in turn decreases or increases the circumference, surface area, and volume of the sphere. Or, responsive to continued "squeezing" (i.e., holding/touching) of a geometrical feature of a vector image representing a feature in vector graphics, such as the side edges of the top of a tube or of a cube, the computing device automatically brings the outside edge(s) of that vector graphics together gradually as the user continues squeezing/holding the geometrical feature of the vector image. Similarly, responsive to the user tapping or holding/touching the top surface of the geometrical feature, the computing device automatically and gradually brings the outside edges of the geometrical feature outward or inward, respectively, as the user continues tapping at or touching the top surface of the vector image. Or, responsive to touching at or proximate a center point of a top surface, the computing device automatically creates a well (or other predetermined shape) with a radius centered at that center point, and continued touching or tapping (anywhere on the touch screen) will cause the computing device to automatically and gradually decrease or increase the radius of the well, respectively.
[00169] In another embodiment, first, responsive to a user indicating a desired command, the computing device identifies the command. Then, the user may gesture at a displayed geometrical feature of a vector image to indicate desired changes in the vector graphics. For example, responsive to continued 'pushing' (i.e., touching) or tapping at a displayed representation of a surface of a corner, after the user has indicated a command to add a fillet (at the inside surface of the corner) or an arc (at the outside surface of the corner) and the computing device has identified the command, the computing device automatically rounds the corner (if the corner is not yet rounded), and then causes an increase or a decrease in the value of the radius of the fillet or of the arc (as well as in the locations of the adjacent line objects), as the user continues touching or tapping the surface of the fillet or of the arc (or anywhere on the touch screen). Or, after the computing device identifies a command to change line length (e.g., after the user touches a distinct icon representing the command), responsive to finger movement to the right or to the left (indicative of a desired change in width from the right edge or from the left edge of the surface of the cube, respectively) anywhere on the surface of the displayed cube, followed by continued touching or tapping (anywhere on the touch screen), the computing device automatically decreases or increases the width of the cube, respectively, from the right edge or from the left edge of the surface, as the user continues touching or tapping. Similarly, responsive to a finger movement up or down on the surface of the cube, followed by continued touching or tapping anywhere on the touch screen, the computing device automatically decreases or increases the height of the cube, respectively, from the top edge or from the bottom edge of the surface, as the user continues touching or tapping.
Further, responsive to tapping or touching a point proximate an edge along two connected surfaces of a vector image of a cube, the computing device automatically increases or decreases the angle between the two connected surfaces. Or, after the computing device identifies a command to insert a blind hole and a point on a surface of the vector image at which to insert the blind hole (e.g., after detecting a long press at that point, indicating the point on the surface at which to insert the hole), responsive to continued tapping or touching (anywhere on the touch screen), the computing device gradually and automatically increases or decreases, respectively, the depth of the hole in the vector graphics and updates the vector image. Similarly, responsive to identifying a command to insert a through hole at a user-indicated point on a surface of the vector image, the computing device automatically inserts a through hole in the vector graphics and updates the vector image with the inserted through hole. Further, responsive to tapping or touching a point along the circumference of the hole, the computing device automatically increases or decreases the radius of the hole. Or, responsive to touching the inside surface of the hole, the computing device automatically invokes a selection table/menu of standard threads, from which the user may select a desired thread to apply to the inside surface of the hole.
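The two-step flow described above, identify the command first, then apply gestures to a feature, can be sketched as a dispatcher; this is an illustrative sketch only, and the command names and handler signatures are hypothetical:

```python
def make_dispatcher():
    """Hypothetical two-step flow: a command is identified first, then
    each subsequent gesture is routed to that command's handler."""
    state = {"command": None}

    handlers = {
        "change_width": lambda cube, delta: {**cube, "width": cube["width"] + delta},
        "change_height": lambda cube, delta: {**cube, "height": cube["height"] + delta},
    }

    def identify(command):
        # Step 1: the user indicates a command (icon touch, drawn letter, ...).
        state["command"] = command

    def on_gesture(cube, delta):
        # Step 2: a gesture at the feature is interpreted under that command.
        return handlers[state["command"]](cube, delta)

    return identify, on_gesture

identify, on_gesture = make_dispatcher()
identify("change_width")
cube = {"width": 10.0, "height": 5.0}
cube = on_gesture(cube, -1.0)   # continued tapping decreases the width
```

Because the command is fixed before the gesture, the same touch can mean different edits under different commands.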
[00170] Figures 40A-40D relate to a command to insert a line. They illustrate the interaction between a user and a touch screen, whereby a user draws a line 3705 free-hand between two points A and B (Figure 40B). In some embodiments, an estimated distance 3710 of the line is displayed while the line is being drawn. Responsive to the user's finger being lifted from the touch screen (Figure 40C), the computing device automatically inserts a straight-line object in the device memory, where the drawing is stored, at memory locations represented by points A and B on the touch screen, and displays the straight-line object 3715 along with its actual distance 3720 on the touch screen.
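The distance shown while the line is drawn, and the actual distance of the inserted straight-line object, both reduce to the distance between the two endpoints; a minimal sketch (names are illustrative):

```python
import math

def line_distance(a, b):
    """Distance between endpoints A and B of a line; usable both as the
    live estimate while drawing and as the final value once the
    straight-line object is inserted."""
    (ax, ay), (bx, by) = a, b
    return math.hypot(bx - ax, by - ay)

d = line_distance((0.0, 0.0), (3.0, 4.0))
```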
[00171] Figures 41A-41C relate to a command to delete an object. The user selects the desired object 3725 by touching it (Figure 41A) and then may draw a command indicator 3730, for example, the letter 'd' to indicate the command 'Delete' (Figure 41B). In response, the computing device identifies the command and deletes the object (Figure 41C). It should be noted that the user may also indicate the command by selecting an icon representing the command, by an audible signal, and the like.
[00172] Figures 42A-42D relate to a command to change line length. First, the user selects the line 3735 by touching it (Figure 42A) and then may draw a command indicator 3740, for example, the letter 'L' to indicate the desired command (Figure 42B). It should be noted that selecting line 3735 prior to drawing the command indicator 3740 is optional, for example, to view its distance or to copy it. Then, responsive to each of gradual changes in user-selected positional locations on the touch screen starting from point 3745 of line 3735, the computing device automatically causes each of respective gradual changes in the line length stored in the device memory and updates the length on display box 3750 (Figures 42B-42C).
[00173] Figures 43A-43D relate to a command to change line angle. The user may optionally first select line 3755 (Figure 43A) and then may draw a command indicator 3760, for example, the letter 'a' to indicate the desired command (Figure 43B). Then, in a similar manner to changing line length, responsive to each of gradual changes in user-selected positional locations (up or down) on the touch screen starting from edge 3765 of line 3755, the computing device automatically causes each of respective gradual changes in the line angle, for example, relative to the x-axis, stored in the device memory, and updates the angle on display box 3770 (Figures 43B-43C).
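Changing a line's angle while keeping it anchored at one edge amounts to rotating its free endpoint about the fixed one; a minimal sketch under that assumption (names are illustrative):

```python
import math

def set_line_angle(pivot, length, angle_deg):
    """Recompute the free endpoint of a line of fixed length anchored at
    'pivot' when the user drags it to a new angle relative to the x-axis."""
    angle = math.radians(angle_deg)
    px, py = pivot
    return (px + length * math.cos(angle), py + length * math.sin(angle))

# Dragging the edge of a 2-unit line anchored at (1, 1) up to 90 degrees:
end = set_line_angle((1.0, 1.0), 2.0, 90.0)
```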
[00174] It should be noted that if the user indicates both commands, to change line length and to change line angle, prior to drawing the gesture discussed in the two paragraphs above (for example, by selecting two distinct icons, each representing one of the commands), then the computing device will automatically cause gradual changes in length and/or angle of the line based on the direction of movement of the gesture, and accordingly will update the values of either or both the length and the angle on the display box on the touch screen at each of gradual changes in user-selected positional locations.
[00175] Figures 44A-44D relate to a command to apply a radius to a line or to change the radius of an arc between A and B. The user may optionally first select the displayed line or arc, being line 3775 in this example (Figure 44A), and then may draw a command indicator 3780, for example, the letter 'R' to indicate the desired command (Figure 44B). Then, in a similar manner to changing line length or line angle, responsive to each of gradual changes in user-selected positional locations on the touch screen across the displayed line or arc, starting from a position along the displayed line 3775 in this example, the computing device automatically causes each of respective gradual changes in the radius of the line/arc in the drawing stored in the device memory and updates the radius of arc 3785 on display box 3790 (Figure 44C).
[00176] Figures 45A-45C relate to a command to make a line parallel to another line. First, the user may draw a command indicator 3795, for example, the letter 'N' to indicate the desired command, and then touch reference line 3800 (Figure 45A). The user then selects target line 3805 (Figure 45B) and lifts the finger. Responsive to the finger being lifted, the computing device automatically alters target line 3805 in the device memory to be parallel to reference line 3800 and updates the displayed target line on the touch screen (Figure 45C).
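One way to alter the target line to be parallel to the reference line is to rotate it about its start point while preserving its length; a sketch under that assumption (the disclosure does not fix which endpoint is held, so the anchor choice here is hypothetical):

```python
import math

def make_parallel(target_start, target_end, ref_start, ref_end):
    """Rotate the target line about its start point so its direction
    matches the reference line, preserving the target's length."""
    ref_angle = math.atan2(ref_end[1] - ref_start[1], ref_end[0] - ref_start[0])
    length = math.hypot(target_end[0] - target_start[0],
                        target_end[1] - target_start[1])
    return (target_start,
            (target_start[0] + length * math.cos(ref_angle),
             target_start[1] + length * math.sin(ref_angle)))

# A vertical 2-unit target line made parallel to a horizontal reference:
start, end = make_parallel((0.0, 0.0), (0.0, 2.0), (5.0, 5.0), (9.0, 5.0))
```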
[00177] Figures 46A-46D relate to a command to add a fillet (at a two-dimensional representation of a corner or at a three-dimensional representation of an inside surface of a corner) or an arc (at a three-dimensional representation of an outside surface of a corner). First, the user may draw a command indicator 3810 to indicate the desired command and then touch corner 3815 to which to apply a fillet (Figure 46A). In response, the computing device converts the sharp corner 3815 into rounded corner 3820 (having a default radius value) and zooms in on that corner (Figure 46B). Then, responsive to each of gradual changes in user-selected positional locations on the touch screen across the displayed arc 3825 at a position along it, the computing device causes each of respective gradual changes in the radius of the arc stored in the device memory and in its locations in memory represented by A and B, such that the arc is tangent to adjacent lines 3830 and 3835 (Figure 46C). Next, the user touches the screen, and in response the computing device zooms out the drawing to its original zoom percentage (Figure 46D); otherwise, the user may indicate additional changes in the radius, even after the finger is lifted.
[00178] Figures 47A-47D relate to a command to add a chamfer. First, the user may draw a command indicator 3840 to indicate the desired command and then touches the desired corner 3845 to which to apply a chamfer/bevel (Figure 47A). In response, the computing device trims the corner between two locations represented by A and B on the touch screen, and sets the height H and width W at default values, and as a result also the angle a (Figure 47B). Then, responsive to each of gradual changes in user-selected positional locations at A and/or B on the touch screen (in motions parallel to line 3850 and/or line 3855, respectively), the computing device causes gradual changes in width W and/or height H, respectively, as stored in the device memory, as well as in the locations in memory represented by A and/or B, and updates their displayed representation (Figure 47C). Next, the user touches the screen, and in response the computing device zooms out the drawing to its original zoom percentage (Figure 47D); otherwise, the user may indicate additional changes in parameters W and/or H, even after the finger is lifted.
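For an axis-aligned right-angle corner (an assumption made here for illustration; the disclosure is not limited to this case), the chamfer locations A and B and the resulting angle follow directly from W and H:

```python
import math

def chamfer(corner, width, height):
    """Trim a right-angle corner: A and B are offset by width W along one
    edge and height H along the other; the angle follows from W and H."""
    cx, cy = corner
    a = (cx - width, cy)     # offset along the horizontal edge
    b = (cx, cy - height)    # offset along the vertical edge
    angle_deg = math.degrees(math.atan2(height, width))
    return a, b, angle_deg

# Equal W and H give a 45-degree chamfer:
a, b, angle = chamfer((10.0, 10.0), 2.0, 2.0)
```

Dragging A or B then only updates `width` or `height`, and the angle is recomputed rather than stored.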
[00179] Figures 48A-48F relate to a command to trim an object. First, the user may draw a command indicator 3860 to indicate the desired command (Figure 48A). Next, the user touches target object 3865 (Figure 48B) and then reference object 3870 (Figure 48C); it should be noted that these last two steps are optional. The user then moves reference object 3870 to indicate the desired trim in target object 3865 (Figures 48D-48E). Then, responsive to the finger being lifted from the touch screen, the computing device automatically applies the desired trim 3875 to target object 3865 (Figure 48F).
[00180] Figures 49A-49D relate to a command to move an arced object. First, the user may optionally select object 3885 (Figure 49A) and then draw a command indicator 3880, for example, the letter 'M,' to indicate the desired command, then touches the displayed target object 3885 (Figure 49B) (at this point the object is selected), and moves it until edge 3890 of arc 3885 is at or proximate edge 3895 of line 3897 (Figure 49C). Then, responsive to the finger being lifted from the screen, the computing device automatically moves the arc 3885 such that it is tangent to line 3897 where the edges meet (Figure 49D).
[00181] Figures 50A-50D relate to the 'No Snap' command. First, the user may touch command indicator 3900 to indicate the desired command (Figure 50A), and then the user may touch the desired intersection 3905 to unsnap (Figure 50B). Then, responsive to the finger being lifted from the touch screen, the computing device automatically applies the no-snap 3910 at intersection 3905 and zooms in on the intersection (Figure 50C). Touching again causes the computing device to zoom out the drawing to its original zoom percentage (Figure 50D).
[00182] Figures 51A-51D illustrate another example of use of the 'No Snap' command. First, the user may touch command indicator 3915 to indicate the desired command (Figure 51A). Next, the user may draw a command indicator 3920, for example, the letter 'L' to indicate the desired command to change line length (Figure 51B). Then, responsive to each of gradual changes in user-selected positional locations on the touch screen, starting from the edge 3925 of line 3930 and ending at position 3935 on the touch screen, across line 3940, the computing device automatically unsnaps intersection 3945 or avoids the intersection 3945 being snapped, assuming the snap operation is set as a default operation by the computing device.
[00183] Figures 52A-52D illustrate another example of use of the command to trim an object. First, the user may draw a command indicator 3950 to indicate the desired command (Figure 52A). Next, the user moves reference object 3955 to indicate the desired trim in target object 3960 (Figures 52B-52C). Then, responsive to the user's finger being lifted from the touch screen, the computing device automatically applies the desired trim 3965 to target object 3960 (Figure 52D).
[00184] Commands to copy and cut graphic objects may be added to the set of gestures discussed above and carried out, for example, by selecting one or more graphic objects (as shown, for example, in Figure 42A) after the user draws a command indicator or touches an associated distinct icon on the touch screen to indicate the desired command, to copy or cut. The command to paste may also be added, and may be carried out, for example, by drawing a command indicator, such as the letter 'P' (or by touching a distinct icon representing the command), and then pointing at a position on the touch screen which represents a location in memory at which to paste the clipboard content. The copy, cut, and paste commands may be useful, for example, in copying a portion of a CAD drawing representing a feature such as a bathtub and pasting it at another location of the drawing representing a second bathroom of a renovation site.
[00185] Figure 53 is an example of a user interface with icons corresponding to the available user commands discussed in the Figures above, and a 'Gesture Help' by each distinct icon indicating a letter or a symbol which may be drawn to indicate a command, instead of selecting the icon representing the command.
[00186] Figures 54A-54B illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a cube. Responsive to a user touching corner 3970 of vector image 3975, representing a vector graphics of a cube (Figure 54A), for a predetermined period of time, the computing device interprets/identifies the touching of corner 3970 as a command to proportionally decrease the dimensions of the cube. Then, responsive to continued touching of corner 3970, the computing device automatically and gradually decreases the length, width, and height of the cube in the vector graphics, displayed at 3977, 3980, and 3985, respectively, at the same rate, and updates the displayed length 3990, width 3995, and height 4000 in vector image 4005 (Figure 54B).
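Decreasing all three dimensions "at the same rate" can be modeled as applying the same scale factor per touch tick; a minimal sketch, where the rate and tick count are hypothetical parameters:

```python
def scale_cube(dims, rate, steps):
    """Gradually and proportionally shrink length, width and height at the
    same rate while the corner remains touched ('steps' touch ticks)."""
    factor = (1.0 - rate) ** steps
    return {name: value * factor for name, value in dims.items()}

cube = {"length": 10.0, "width": 10.0, "height": 10.0}
smaller = scale_cube(cube, rate=0.1, steps=1)  # one tick at a 10% rate
```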
[00187] Figures 54C-54D illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a sphere. Responsive to continued touching at point 4010, or anywhere on the vector image 4015 of a sphere (Figure 54C) representing a vector graphics of the sphere, for a predetermined period of time, the computing device interprets/identifies the touching as a command to decrease the radius of the sphere. Then, responsive to continued touching of point 4010, the computing device automatically and gradually decreases the radius of the vector graphics of the sphere and updates the vector image 4017 (Figure 54D) on the touch screen.
[00188] Figures 54E-54F illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a ramp. Responsive to a user touching at point 4020, or any point along edge 4025 of base 4030 of vector image 4035 of a ramp (Figure 54E) representing a vector graphics of the ramp, for a predetermined period of time, the computing device interprets/identifies the touching as a command to increase incline angle 4040 and decrease distance 4045 of base 4030 in the graphic object, such that distance 4050 along the ramp remains unchanged. Then, responsive to continued touching of point 4020, the computing device automatically and gradually increases incline angle 4040 and decreases distance 4045 of base 4030 in the vector graphics, such that distance 4050 along the ramp remains unchanged, and updates displayed incline angle 4040 and distance 4045 to incline angle 4055 and distance 4060 in vector image 40 (Figure 54F). Similarly, responsive to tapping at point 4020, the computing device may be configured to automatically and gradually decrease incline angle 4040 and increase distance 4045, such that distance 4050 along the ramp will remain unchanged.
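The ramp constraint, the distance along the ramp stays fixed while the incline angle and base distance change, is simple right-triangle geometry; a sketch with illustrative names:

```python
import math

def ramp_from_angle(ramp_length, incline_deg):
    """Given the fixed distance along the ramp (the hypotenuse), compute
    the base distance and height for a new incline angle."""
    a = math.radians(incline_deg)
    return ramp_length * math.cos(a), ramp_length * math.sin(a)

# Steepening a 10-unit ramp to 30 degrees shortens the base, raises the top:
base, height = ramp_from_angle(10.0, 30.0)
```

Increasing the angle decreases `base` while `ramp_length` is invariant, matching the touch behavior described; tapping would simply move the angle in the opposite direction.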
[00189] In one embodiment, the computing device invokes command mode or data entry mode; command mode is invoked when a command intended to be applied to text or graphics already stored in memory and displayed on the touch screen is identified, and data entry mode is invoked when a command to insert or paste text or graphics is identified. In command mode, data entry mode is disabled to allow for unrestricted/unconfined user input on the touch screen of the computing device, in order to indicate locations of displayed text/graphics at which to apply user pre-defined command(s); in data entry mode, command mode is disabled to enable pointing at positions on the touch screen indicative of locations in memory at which to insert text, insert a drawn shape such as a line, or paste text or graphics. Command mode may be set to be a default mode.
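The mode switching described here can be sketched as a small state machine; the class, mode names, and the set of commands that trigger data entry mode are hypothetical, chosen only to illustrate the default-mode behavior:

```python
class EditorModes:
    """Hypothetical mode switching: command mode is the default; an
    insert/paste command switches to data entry mode and back on completion."""
    COMMAND, DATA_ENTRY = "command", "data_entry"

    def __init__(self):
        self.mode = self.COMMAND  # command mode as the default

    def identify_command(self, command):
        if command in ("insert", "paste"):
            self.mode = self.DATA_ENTRY
        else:
            self.mode = self.COMMAND

    def entry_complete(self):
        # e.g., text was pasted; revert to the default mode
        self.mode = self.COMMAND

m = EditorModes()
m.identify_command("paste")
```

While `mode` is `COMMAND`, touch input is routed to marking-gesture handling; while it is `DATA_ENTRY`, to insertion-point handling, so the two interpretations never compete.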
[00190] When in command mode, the drawing by the user on displayed text or graphics (defined herein as a "marking gesture") to indicate locations in memory (at which to apply pre-defined command(s)) will not be interpreted by the computing device as a command to insert a line, and stopping movement while drawing the marking gesture, or simply touching a position on the touch screen, will not be interpreted by the computing device as a position indicative of a location in memory where to insert text or graphics, since in this mode data entry mode is disabled. In one embodiment, however, when in data entry mode, the computing device will interpret such a position as indicative of an insertion location in memory only after the finger is lifted from the touch screen, to further improve robustness/user friendliness; the benefit of this feature with respect to control over a zooming functionality is further discussed below. The user may draw the marking gesture free-hand on displayed text on the touch screen to indicate desired locations of text characters in memory where a desired command, such as bold, underline, move, or delete, should be applied, or on displayed graphics (i.e., on a vector image) to indicate desired locations of graphic objects in memory where a desired command, such as select, delete, replace, or change objects' color, color shade, size, style, or line thickness, should be applied.
[00191] Prior to drawing the marking gesture, the user may define a command by selecting a distinct icon representing the command from a bar menu on the touch screen, illustrated for example in Figure 53. Alternatively, the user may define a desired command by drawing a letter/symbol which represents the command; under this scenario, however, both command mode and data entry mode may be disabled while drawing the letter/symbol, to allow for unconfined free-hand drawing of the letter/symbol anywhere on the touch screen, such that the drawing of the letter/symbol will not be interpreted as the marking gesture, or as a drawn feature, such as a drawn line, to be inserted, and a finger being lifted from the touch screen will not be interpreted as inserting or pasting data.
[00192] It should be noted that the drawing of the marking gesture on displayed text/graphics, to indicate the desired locations in memory at which to apply user-indicated commands to text/graphics, can be achieved in a single step or, if desired, with one or more time-interval breaks, for example if the user lifts his/her finger from the touch screen for up to a predetermined period of time, or under other predetermined conditions, such as between double taps. During such a break the user may, for example, wish to review a portion of another document before deciding whether to continue marking additional displayed text/graphics, from the last indicated location prior to the break or on other displayed text/graphics, or to simply conclude the marking. It should be further noted that the marking gesture may be drawn free-hand in any shape, such as in a zigzag (Figure 57), a line across (Figure 56), or a line above or below displayed text/graphics. The user may also choose to display the marking gesture as it is being drawn, and to draw back along the gesture (or anywhere along it) to undo applied command(s) to text/graphics indicated by previously marked area(s) of displayed text/graphics.
[00193] Figure 56 is an example of a gesture to mark text in command mode. First, the user indicates a desired command, such as a command to underline, for example by touching icon 4055 representing the command. Then, responsive to the user drawing line 4060 free-hand between A and B, from right to left or from left to right, to indicate the locations in memory at which to underline text, the computing device automatically underlines the text at the indicated locations and displays a representation of the underlined text on the touch screen as the user continues drawing the gesture or when the finger is lifted from the touch screen, depending on the user's predefined preference.
[00194] Figure 57 is another example of a gesture to mark text in command mode. First, the user indicates a desired command, such as a command to move text, for example by touching icon 4065 representing the command. Then, responsive to the user drawing a zigzagged line 4070 free-hand between A and B, from right to left or from left to right, to indicate the locations in memory at which to select text to be moved, the computing device automatically selects the text at the indicated locations in memory and highlights it on the touch screen as the user continues drawing the gesture or when the finger is lifted from the touch screen, depending on the user's predefined preference. At this point, the computing device automatically switches to data entry mode. Next (not shown), responsive to the user pointing at a position on the touch screen indicative of a location in memory at which to paste the selected text, the computing device automatically pastes the selected text starting from that indicated location. Once the text is pasted, the computing device automatically reverts back to command mode.
[00196] In another embodiment, especially useful in, but not limited to, text editing, responsive to a gesture being drawn on the touch screen to mark displayed text or graphics while in command mode when no command was selected prior to drawing the gesture, the computing device automatically invokes selection mode, selects the marked/indicated text or graphics on the touch screen as the finger is lifted from the touch screen, and automatically invokes a set of icons, each representing a distinct command, arranged in menus and/or tooltips by the selected text or graphics (Figures 55A-55B). In these examples, the user selects one or more of the displayed icons, and the computing device applies the corresponding command(s) to the selected text. The user may exit the selection mode by simply dismissing the screen, which will cause the computing device to automatically revert back to command mode. The computing device will also automatically revert back to command mode after the selected text is moved (if the user had indicated a command to move text, pointed at a position on the touch screen representing the location in memory to which to move the selected text, and then lifted the finger). As in command mode, data entry mode is disabled while in selection mode to allow for unrestricted/unconfined drawing of the marking gesture to mark displayed text or graphics. Selection mode may be useful, for example, when the user wishes to focus on a specific portion of text and perform some trial and error prior to concluding the edits on that portion of text. When the selected text is a single word, the user may, for example, indicate a command to suggest a synonym, capitalize the word, or change its font to all caps.
[00197] Figures 58A-58B illustrate an example of automatically zooming displayed text while drawing the gesture to mark text, as discussed below.
[00198] In another embodiment, while in command mode or in data entry mode, or while drawing the marking gesture during selection mode (prior to the finger being lifted from the touch screen), responsive to detecting a decrease or an increase in speed between two positions on the touch screen while the marking gesture, or a shape such as a line to be inserted, is being drawn, the computing device automatically zooms in or zooms out, respectively, a portion of the displayed text or graphics on the touch screen which is proximate the current position along the marking gesture or the drawn line. In addition, responsive to detecting a user-selected position on the touch screen with no movement for a predetermined period of time, while in either command mode or data entry mode, the computing device automatically zooms in a portion of the displayed text or graphics on the touch screen which is proximate the selected position, and further continues to gradually zoom in, up to a maximal predetermined zoom percentage, as the user continues to point at that selected position. This feature may be useful especially near or at a start position and an end position along the gesture or along the drawn line, as the user may need to see more detail in their proximity so as to point closer to the desired displayed text character or graphic object or its represented location on the touch screen; naturally, the finger is at rest at or near the starting position (prior to drawing the gesture or the line) as well as while at a potential end position.
As discussed, in one embodiment, when in data entry mode, the position at which the finger (or writing tool) is at rest on the touch screen will not be interpreted as indicative of the insertion location in memory at which to insert text or graphics until after the finger (or writing tool) is lifted from the touch screen, and therefore the user may have the finger be periodically at rest (to zoom in) while approaching the intended end position. Furthermore, responsive to detecting continued tapping, the computing device may be configured to automatically zoom out as the user continues tapping.
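The speed-dependent zoom described above can be sketched as a simple rule; the thresholds, step size, and zoom limits are hypothetical tuning parameters, not values from the disclosure:

```python
def zoom_for_speed(prev_zoom, speed, slow=50.0, fast=200.0, step=10.0,
                   min_zoom=100.0, max_zoom=400.0):
    """Zoom in when the drawing speed (e.g., in px/s) drops below a slow
    threshold, zoom out when it exceeds a fast threshold; clamp to limits."""
    if speed < slow:
        prev_zoom += step   # slow, careful movement: show more detail
    elif speed > fast:
        prev_zoom -= step   # fast movement: restore context
    return max(min_zoom, min(max_zoom, prev_zoom))

z = zoom_for_speed(100.0, speed=20.0)   # slow movement near an endpoint
```

Called once per sampled touch interval, this gradually zooms in near the start and end positions, where the finger naturally moves slowly or rests, and zooms back out during fast mid-gesture strokes.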
[00199] The disclosed embodiments may further provide a facility that allows a user to specify customized gestures for interacting with the displayed representations of the graphic objects. The user may be prompted to select one or more parameters to be associated with a desired gesture. In some aspects, the user may be presented with a list of available parameters, or may be provided with a facility to input custom parameters. Once a parameter has been specified, the user may be prompted to associate desired gesture(s), indicative of change(s) in the specified parameter, with a geometrical feature within the vector image. In some aspects, the user may be prompted to input a desired gesture indicative of an increase in the value of the specified parameter and then to input another desired gesture indicative of a decrease in the value of the specified parameter. In other aspects, the user may be prompted to associate desired gesture(s) indicative of change(s) in the shape/geometry of graphic object(s), and in other aspects, the user may be prompted to associate direction(s) of movement of a drawn gesture with a feature within the geometrical feature, and the like. Then, the computing device may associate the custom parameter(s) with one or more functions, or the user may be presented with a list of available functions, or the user may be provided with a facility to specify custom function(s), such that when the user inputs the specified gesture(s) within other, similar geometrical features within the same vector image or within another vector image, the computing device will automatically effect the indicated changes in the vector graphics, represented by the vector image, in the memory of the computing device.
[00200] It is noted that the embodiments described herein can be used individually or in any combination thereof. It should be understood that the foregoing description is only illustrative of the embodiments. Various alternatives and modifications can be devised by those skilled in the art without departing from the embodiments. Accordingly, the present embodiments are intended to embrace all such alternatives, modifications and variances that fall within the scope of the appended claims.
[00201] Various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings. However, all such and similar modifications of the teachings of the disclosed embodiments will still fall within the scope of the disclosed embodiments.
[00202] Various features of the different embodiments described herein are interchangeable, one with the other. The various described features, as well as any known equivalents can be mixed and matched to construct additional embodiments and techniques in accordance with the principles of this disclosure.
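As a concrete illustration of the kind of geometry change described in this disclosure — changing at least one location of a graphic object that is connected to another graphic object, while at least one location of the other object remains unchanged — consider the following sketch (purely illustrative; the dict-of-point-lists data model is an assumption, not the claimed implementation):

```python
def move_shared_vertex(objects, old_pos, new_pos):
    """Move one location of the vector graphics: every graphic object
    connected at that location is updated there, while each connected
    object's other locations stay unchanged."""
    for obj in objects.values():
        obj["points"] = [new_pos if p == old_pos else p for p in obj["points"]]
    return objects
```

For two lines sharing endpoint (10, 0), moving that shared vertex changes one location of each line while the far endpoint of the other line is left untouched:

```python
objs = {
    "a": {"points": [(0, 0), (10, 0)]},   # line a
    "b": {"points": [(10, 0), (10, 5)]},  # line b, connected to a at (10, 0)
}
move_shared_vertex(objs, (10, 0), (12, 2))
# b's far endpoint (10, 5) is unchanged
```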

Claims (45)

1. A computing device, comprising: a memory, for storing vector graphics, the vector graphics comprising information about graphic objects, the information comprising locations of the graphic objects, a display for displaying a representation of the vector graphics, a surface or pointing device for detecting an indication of a change in the vector graphics, and one or more processing units; in response to detecting the indication, the one or more processing units are configured to automatically change a geometry of at least one graphic object, wherein: the at least one graphic object is connected to at least one other graphic object at one or more locations of the at least one graphic object, the change comprises a change in at least one location of the at least one graphic object, and at least one location of the at least one other graphic object is unchanged; wherein the display is configured to display a representation of at least a portion of the changed vector graphics.
2. The computing device of claim 1, wherein the display and the surface are of a touch screen.
3. The computing device of claim 1, wherein at least a portion of a representation of the vector graphics on the display is two-dimensional or three-dimensional.
4. The computing device of claim 1, wherein the one or more processing units are further configured to automatically identify a displayed portion of the at least one graphic object in response to detecting a gesture indicating the displayed portion.
5. The computing device of claim 4, wherein the one or more processing units are further configured to cause the display to zoom in the displayed portion before or while the geometry is being changed and to zoom out the displayed portion after the geometry has been changed.
6. The computing device of claim 1, wherein the one or more processing units are configured to automatically change the geometry in response to detecting at least a portion of the indication proximate at least one location of or within a line, arc, corner, surface or edge of one or more surfaces of the at least one graphic object as indicated on the display.
7. The computing device of claim 1, wherein the one or more processing units are configured to automatically identify at least one parameter of the at least one graphic object in response to detecting at least a portion of the indication proximate or within a geometrical feature of the vector graphics as indicated on the display.
8. The computing device of claim 1, wherein the indication comprises a touching gesture, tapping gesture or indication of a change in position.
9. The computing device of claim 1, wherein a geometry of the at least one other graphic object is unchanged.
10. The computing device of claim 1, wherein the one or more processing units are further configured to identify a command.
11. The computing device of claim 10, wherein the command comprises a command to change at least one of a length or angle of a line of the at least one graphic object, and wherein the one or more processing units are configured to automatically change the length or the angle in response to detecting a change in position within the indication proximate the line or a location of the line as indicated on the display; or to change at least one of a width, height or angle of a surface of the at least one graphic object, and wherein the one or more processing units are configured to automatically change at least one of the width, height or angle of the surface in response to detecting the indication proximate at least one location or an edge of the surface or within the surface as indicated on the display.
12. The computing device of claim 10, wherein the command comprises a command to add a fillet or arc to a corner of the at least one graphic object, and wherein the one or more processing units are configured to automatically add the fillet or the arc to the corner in response to detecting the indication proximate or within the corner as indicated on the display; to change a radius and at least one location of a fillet or arc of a rounded corner of the at least one graphic object, and wherein the one or more processing units are configured to automatically change the radius and at least one location of the rounded corner in response to detecting the indication proximate at least one location of or within the rounded corner as indicated on the display; to add a chamfer to the at least one graphic object, and wherein the one or more processing units are configured to automatically add the chamfer in response to detecting the indication to add the chamfer; or to change at least one of a width, height or angle of a chamfered corner, and wherein the one or more processing units are configured to automatically change at least one of the width, height or angle in response to detecting a change in position within the indication proximate at least one location of or within the chamfered corner as indicated on the display.
13. The computing device of claim 10, wherein the command comprises a command to change a straight line of the at least one graphic object to an arc, and wherein the one or more processing units are configured to automatically change the straight line to an arc in response to detecting the indication proximate or within the straight line as indicated on the display; to change a flat surface of the at least one graphic object to an arced surface, and wherein the one or more processing units are configured to automatically change the flat surface to an arced surface in response to detecting the indication proximate at least one location of or within the flat surface as indicated on the display; to change a radius of an arc of the at least one graphic object, and wherein the one or more processing units are configured to automatically change the radius of the arc in response to detecting a change in position within the indication proximate or within the arc as indicated on the display; to change a radius of an arced surface of the at least one graphic object, and wherein the one or more processing units are configured to automatically change the radius of the arced surface in response to detecting the indication proximate or within the arced surface as indicated on the display; to trim a portion of the at least one graphic object, and wherein the one or more processing units are configured to automatically trim the portion in response to detecting the indication proximate or within the at least one graphic object as indicated on the display; or to add a segment to or make a geometric change in segments of a segmented line object of the at least one graphic object, and wherein the one or more processing units are configured to automatically add the segment or make the geometric change in the segments in response to detecting the indication proximate at least one location of or within the segmented line object as indicated on the display.
14. A computing device, comprising: a memory, for storing vector graphics, the vector graphics comprising information about graphic objects, the information comprising locations of the graphic objects, a display for displaying a representation of the vector graphics, a surface or pointing device for detecting an indication of a change in the vector graphics, and one or more processing units; in response to detecting the indication, the one or more processing units are configured to automatically change at least one parameter of at least one graphic object, wherein: the at least one graphic object is connected to at least one other graphic object at one or more locations of the at least one graphic object, the change comprises a change in at least one location of the at least one graphic object, and at least one location of the at least one other graphic object is unchanged; wherein the display is configured to display a representation of at least a portion of the changed vector graphics.
15. A method comprising: storing vector graphics in a memory, the vector graphics comprising information about graphic objects, the information comprising locations of the graphic objects; displaying on a display at least a portion of a representation of the vector graphics; detecting an indication of a change in the vector graphics; wherein in response to detecting the indication: automatically changing at least one parameter of at least one graphic object, wherein: the at least one graphic object is connected to at least one other graphic object at one or more locations of the at least one graphic object, the change comprises a change in at least one location of the at least one graphic object, and at least one location of the at least one other graphic object is unchanged; and displaying a representation of at least a portion of the changed vector graphics on the display.
16. A computing device, comprising: a memory, for storing CAD drawing data, the CAD drawing data comprising information about graphic objects, the information comprising locations of the graphic objects, a display for displaying the CAD drawing, a surface or pointing device for detecting an indication of a change in the CAD drawing, and one or more processing units; in response to detecting the indication, the one or more processing units are configured to automatically change a geometry of at least one graphic object, wherein: at least one location of the at least one graphic object is the same as of at least one other graphic object, the change comprises a change in at least one location of the at least one graphic object, and at least one location of the at least one other graphic object is unchanged; wherein the display is configured to display at least a portion of the changed CAD drawing.
17. The computing device of claim 16, wherein the display and the surface are of a touch screen.
18. The computing device of claim 16, wherein at least a portion of a representation of the CAD drawing data on the display is two-dimensional or three-dimensional.
19. The computing device of claim 16, wherein the one or more processing units are further configured to automatically identify a displayed portion of the at least one graphic object in response to detecting a gesture indicating the displayed portion.
20. The computing device of claim 19, wherein the one or more processing units are further configured to cause the display to zoom in the displayed portion before or while the geometry is being changed and to zoom out the displayed portion after the geometry has been changed.
21. The computing device of claim 16, wherein the one or more processing units are configured to automatically change the geometry in response to detecting at least a portion of the indication proximate at least one location of or within a line, arc, corner, surface or an edge of one or more surfaces of the at least one graphic object as indicated on the display.
22. The computing device of claim 16, wherein the one or more processing units are configured to automatically identify at least one parameter of the at least one graphic object in response to detecting at least a portion of the indication proximate or within a geometrical feature of the CAD drawing as indicated on the display.
23. The computing device of claim 16, wherein the indication comprises a touching gesture, tapping gesture or indication of a change in position.
24. The computing device of claim 16, wherein the one or more processing units are further configured to identify a command.
25. The computing device of claim 24, wherein the command comprises a command to change at least one of a length or angle of a line of the at least one graphic object, and wherein the one or more processing units are configured to automatically change the length or the angle in response to detecting a change in position within the indication proximate the line or a location of the line as indicated on the display; or to change at least one of a width, height or angle of a surface of the at least one graphic object, and wherein the one or more processing units are configured to automatically change at least one of the width, height or angle of the surface in response to detecting the indication proximate at least one location or an edge of the surface or within the surface as indicated on the display.
26. The computing device of claim 24, wherein the command comprises a command to add a fillet or arc to a corner of the at least one graphic object, and wherein the one or more processing units are configured to automatically add the fillet or the arc to the corner in response to detecting the indication proximate or within the corner as indicated on the display; to change a radius and at least one location of a fillet or arc of a rounded corner of the at least one graphic object, and wherein the one or more processing units are configured to automatically change the radius and at least one location of the rounded corner in response to detecting the indication proximate at least one location of or within the rounded corner as indicated on the display; to add a chamfer to the at least one graphic object, and wherein the one or more processing units are configured to automatically add the chamfer in response to detecting the indication to add the chamfer; or to change at least one of a width, height or angle of a chamfered corner, and wherein the one or more processing units are configured to automatically change at least one of the width, height or angle in response to detecting a change in position within the indication proximate at least one location of or within the chamfered corner as indicated on the display.
27. The computing device of claim 24, wherein the command comprises a command to change a straight line of the at least one graphic object to an arc, and wherein the one or more processing units are configured to automatically change the straight line to an arc in response to detecting the indication proximate or within the line as indicated on the display; to change a flat surface of the at least one graphic object to an arced surface, and wherein the one or more processing units are configured to automatically change the flat surface to an arced surface in response to detecting the indication proximate at least one location of or within the flat surface as indicated on the display; to change a radius of an arc of the at least one graphic object, and wherein the one or more processing units are configured to automatically change the radius of the arc in response to detecting a change in position within the indication proximate or within the arc as indicated on the display; to change a radius of an arced surface of the at least one graphic object, and wherein the one or more processing units are configured to automatically change the radius of the arced surface in response to detecting the indication proximate or within the arced surface as indicated on the display; to trim a portion of the at least one graphic object, and wherein the one or more processing units are configured to automatically trim the portion in response to detecting the indication proximate or within the at least one graphic object as indicated on the display; or to add a segment to or make a geometric change in segments of a segmented line object of the at least one graphic object, and wherein the one or more processing units are configured to automatically add the segment or make the geometric change in the segments in response to detecting the indication proximate at least one location of or within the segmented line object as indicated on the display.
28. A method comprising: storing in a memory CAD drawing data, the CAD drawing data comprising information about graphic objects, the information comprising locations of the graphic objects; displaying on a display at least a portion of the CAD drawing; detecting an indication of a change in the CAD drawing; wherein in response to detecting the indication: automatically changing a geometry of at least one graphic object, wherein: at least one location of the at least one graphic object is the same as of at least one other graphic object, the change comprises a change in at least one location of the at least one graphic object, and at least one location of the at least one other graphic object is unchanged; and displaying at least a portion of the changed CAD drawing on the display.
29. The method of claim 28, wherein at least a portion of a representation of the CAD drawing data on the display is two-dimensional or three-dimensional.
30. The method of claim 28, further comprising automatically identifying a displayed portion of the at least one graphic object in response to detecting a gesture indicating the displayed portion.
31. The method of claim 30, further comprising zooming in the displayed portion before or while the geometry is being changed and zooming out the displayed portion after the geometry has been changed.
32. The method of claim 28, wherein automatically changing the geometry is in response to detecting at least a portion of the indication proximate at least one location of or within a line, arc, corner, surface or edge of one or more surfaces of the at least one graphic object.
33. The method of claim 28, further comprising automatically identifying at least one parameter of the at least one graphic object in response to at least a portion of the indication proximate or within a geometrical feature of the CAD drawing.
34. The method of claim 28, wherein the indication comprises a touching gesture, tapping gesture or indication of a change in position.
35. The method of claim 28, further comprising identifying a command.
36. The method of claim 35, wherein the command comprises a command for changing at least one of a length or angle of a line of the at least one graphic object, and wherein automatically changing the length or the angle is in response to detecting a change in position within the indication proximate the line or a location of the line, or for changing at least one of a width, height or angle of a surface of the at least one graphic object, and wherein automatically changing at least one of the width, height or angle of the surface is in response to detecting the indication proximate at least one location or an edge of the surface or within the surface.
37. The method of claim 35, wherein the command comprises a command for adding a fillet or an arc to a corner of the at least one graphic object, and wherein adding the fillet or the arc to the corner is in response to detecting the indication proximate or within the corner; for changing a radius and at least one location of a fillet or arc of a rounded corner of the at least one graphic object, and wherein automatically changing the radius and at least one location of the rounded corner is in response to detecting the indication proximate at least one location of or within the rounded corner; for adding a chamfer to a corner of the at least one graphic object, and wherein automatically adding the chamfer is in response to detecting the indication for adding the chamfer; or for changing at least one of a width, height or angle of a chamfered corner, and wherein automatically changing at least one of the width, height or angle is in response to detecting a change in position within the indication proximate at least one location of or within the chamfered corner.
38. The method of claim 35, wherein the command comprises a command for changing a straight line of the at least one graphic object to an arc, and wherein automatically changing the straight line to an arc is in response to detecting the indication proximate or within the line; for changing a flat surface of the at least one graphic object to an arced surface, and wherein automatically changing the flat surface to an arced surface is in response to detecting the indication proximate at least one location of or within the flat surface; for changing a radius of an arc of the at least one graphic object, and wherein automatically changing the radius of the arc is in response to detecting a change in position proximate or within the arc; for trimming a portion of the at least one graphic object, and wherein automatically trimming the portion is in response to detecting the indication proximate or within the at least one graphic object; or for adding a segment to or making a geometric change in segments of a segmented line object of the at least one graphic object, and wherein automatically adding the segment or making the geometric change in the segments is in response to detecting the indication proximate at least one location of or within the segmented line object.
39. The computing device of claim 1, wherein the one or more processing units are further configured to automatically change the representation of the at least a portion of the changed vector graphics on the display.
40. The computing device of claim 14, wherein a geometry of the at least one other graphic object is unchanged.
41. The method of claim 15, wherein a geometry of the at least one other graphic object is unchanged.
42. The computing device of claim 14, wherein the one or more processing units are further configured to automatically change the representation of the at least a portion of the changed vector graphics on the display.
43. The method of claim 15, further comprising automatically changing the representation of the at least a portion of the changed vector graphics on the display.
44. The computing device of claim 16, wherein the one or more processing units are further configured to automatically change a representation of at least a portion of the changed CAD drawing data on the display.
45. The method of claim 28, further comprising automatically changing a representation of at least a portion of the changed CAD drawing data on the display.

For the Applicant, Naschitz, Brandes, Amir & Co. P-16255-IL
IL273279A 2017-09-15 2018-09-18 Integrated document editor IL273279B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762559269P 2017-09-15 2017-09-15
PCT/US2018/051400 WO2019055952A1 (en) 2017-09-15 2018-09-18 Integrated document editor

Publications (3)

Publication Number Publication Date
IL273279A IL273279A (en) 2020-04-30
IL273279B1 true IL273279B1 (en) 2023-12-01
IL273279B2 IL273279B2 (en) 2024-04-01

Family

ID=65723440

Family Applications (2)

Application Number Title Priority Date Filing Date
IL273279A IL273279B2 (en) 2017-09-15 2018-09-18 Integrated document editor
IL308115A IL308115A (en) 2017-09-15 2018-09-18 Integrated document editor

Family Applications After (1)

Application Number Title Priority Date Filing Date
IL308115A IL308115A (en) 2017-09-15 2018-09-18 Integrated document editor

Country Status (4)

Country Link
EP (1) EP3682319A4 (en)
CA (1) CA3075627A1 (en)
IL (2) IL273279B2 (en)
WO (1) WO2019055952A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11550583B2 (en) 2020-11-13 2023-01-10 Google Llc Systems and methods for handling macro compatibility for documents at a storage system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070070064A1 (en) * 2005-09-26 2007-03-29 Fujitsu Limited Program storage medium storing CAD program for controlling projection and apparatus thereof
US20090259442A1 (en) * 2008-04-14 2009-10-15 Mallikarjuna Gandikota System and method for geometric editing
CN101986249A (en) * 2010-07-14 2011-03-16 上海无戒空间信息技术有限公司 Method for controlling computer by using gesture object and corresponding computer system
US8884990B2 (en) * 2006-09-11 2014-11-11 Adobe Systems Incorporated Scaling vector objects having arbitrarily complex shapes
US20150286395A1 (en) * 2012-12-21 2015-10-08 Fujifilm Corporation Computer with touch panel, operation method, and recording medium
US20160011726A1 (en) * 2014-07-08 2016-01-14 Verizon Patent And Licensing Inc. Visual navigation

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2124624C (en) * 1993-07-21 1999-07-13 Eric A. Bier User interface having click-through tools that can be composed with other tools
US7185291B2 (en) * 2003-03-04 2007-02-27 Institute For Information Industry Computer with a touch screen
US7961943B1 (en) 2005-06-02 2011-06-14 Zeevi Eli I Integrated document editor
US20120092268A1 (en) * 2010-10-15 2012-04-19 Hon Hai Precision Industry Co., Ltd. Computer-implemented method for manipulating onscreen data
US9317196B2 (en) 2011-08-10 2016-04-19 Microsoft Technology Licensing, Llc Automatic zooming for text selection/cursor placement
CN105373309B (en) * 2015-11-26 2019-10-08 努比亚技术有限公司 Text selection method and mobile terminal


Also Published As

Publication number Publication date
CN111492338A (en) 2020-08-04
IL308115A (en) 2023-12-01
IL273279A (en) 2020-04-30
EP3682319A1 (en) 2020-07-22
CA3075627A1 (en) 2019-03-21
EP3682319A4 (en) 2021-08-04
IL273279B2 (en) 2024-04-01
WO2019055952A1 (en) 2019-03-21

Similar Documents

Publication Publication Date Title
US10810352B2 (en) Integrated document editor
US7137076B2 (en) Correcting recognition results associated with user input
KR101014075B1 (en) Boxed and lined input panel
KR102413461B1 (en) Apparatus and method for taking notes by gestures
JP4820382B2 (en) How to provide structure recognition in a node link diagram
EP0607926B1 (en) Information processing apparatus with a gesture editing function
CN108700994B (en) System and method for digital ink interactivity
US20200104586A1 (en) Method and system for manual editing of character recognition results
US20220357844A1 (en) Integrated document editor
IL273279B1 (en) Integrated document editor
KR20040034927A (en) Method and apparatus for editing layer in pen computing system
CN111492338B (en) Integrated document editor
JP2021144469A (en) Data input support system, data input support method, and program
JP6031762B2 (en) Information processing apparatus, information processing method, and program
JP6149812B2 (en) Information processing system, control method and program thereof, and information processing apparatus, control method and program thereof
Islam et al. SpaceX Mag: An Automatic, Scalable, and Rapid Space Compactor for Optimizing Smartphone App Interfaces for Low-Vision Users
US20240134507A1 (en) Modifying digital content including typed and handwritten text
EP4047465A1 (en) Modifying digital content
EP4030334A1 (en) Completing typeset characters using handwritten strokes
JP2010152464A (en) Character recognition device, and confirmation screen generation method for character recognition device
JPH08263576A (en) System for creation of document information database
JP2013105416A (en) Information processing device, information display method and computer program
CN117910429A (en) Anchor notes
KR20190134585A (en) System and process of providing online contents