US20190005001A1 - Integrated document editor
- Publication number: US20190005001A1 (Application No. US 13/955,288)
- Authority: US (United States)
- Prior art keywords: location, document, user, touch screen, locations
- Legal status: Granted (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G06F17/212—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/103—Formatting, i.e. changing of presentation of documents
- G06F40/106—Display of layout of documents; Previewing
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/0412—Digitisers structurally integrated in a display
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/166—Editing, e.g. inserting or deleting
Abstract
Provided are methods and computing devices wherein, in one embodiment, user input on a touch screen of a computing device within which a text document is displayed is associated with both: a) a user-chosen command, such as a selection or an insertion of text characters, to be applied within the document, and b) a positional location in the document, representing a user-chosen document location. A region associated with the positional location in the document is defined, and data comprising a representation of the user-chosen command is processed to determine the user-chosen document location, which lies within the region and in proximity to the positional location in the document, based on the document location being capable of being either a starting location or an ending location among the user-chosen locations in the document at which to apply the user-chosen command.
Description
- This Application claims the benefit of priority of U.S. Pat. No. 7,961,943, filed Jun. 2, 2005, and U.S. Non-Provisional application Ser. No. 13/092,114, filed Apr. 21, 2011, the contents of each of which are herein incorporated by reference.
- This invention relates to document creation and editing. More specifically, this invention relates to integration of recognition of information entry with document creation. Handwritten data entry into computer programs is known. The most widespread use has been in personal digital assistant devices. Handwritten input to devices using keyboards is not widespread for various reasons. For example, character transcription and recognition is relatively slow, and there are as yet no widely accepted standards for character or command input.
- According to the invention, methods and systems are provided for incorporating handwritten information, particularly corrective information, into a previously created revisable text or graphics document, for example text data, image data or command cues, by use of a digitizing recognizer, such as a digitizing pad, a touch screen or other positional input receiving mechanism that is part of a display. In a data entry mode, a unit of data is inserted by means of a writing pen or like scribing tool and accepted for placement at a designated location, correlating the x-y location of the writing pen to the actual location in the document, or accessing locations in the document memory by emulating keyboard keystrokes (or by the running of code/programs). In a recognition mode, the entered data is recognized as legible text with optionally embedded edit or other commands, and it is converted to machine-readable format. Otherwise, the data is recognized as graphics (for applications that accommodate graphics) and accepted into an associated image frame. Combinations of data, in text or in graphics form, may be concurrently recognized. In a specific embodiment, there is a window of error in the location of the writing tool after initial invocation of the data entry mode, so that the actual placement of the tool is not critical, since the input of data is correlated by the initial x-y location of the writing pen to the actual location in the document. In addition, there is an allowed error as a function of the pen's location within the document (i.e., with respect to the surrounding data). In a command entry mode, handwritten symbols selected from a basic set common to various application programs may be entered and the corresponding commands may be executed. In specific embodiments, a basic set of handwritten symbols and/or commands that are not application-dependent and that are preferably user-intuitive is applied. This handwritten command set allows for making revisions and creating documents without prior knowledge of the commands for a specific application.
- In a specific embodiment, such as in use with the Microsoft Word word processor, the invention may be implemented when the user invokes the Comments mode of Word at a designated location in a Word-type document; the handwritten information may then be entered via the input device into the native Comments field, whereupon it is converted either to text or image data or to command data to be executed, with a handwriting recognizer operating either concurrently or after completion of entry of a unit of the handwritten information. Information recognized as text is then converted to ciphers and imported into the main body of the text, either automatically or upon a separate command. Information recognized as graphics is then converted to image data, such as a native graphics format or a JPEG image, and imported into the main body of the text at the designated point, either automatically or upon a separate command. Information interpreted as commands can be executed, such as editing commands, which control addition, deletion or movement of text within the document, as well as font type or size changes or color changes. In a further specific embodiment, the invention may be incorporated as a plug-in module for the word processor program and invoked as part of the system, such as through the use of a macro or as invoked through the Track Changes feature.
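- By way of illustration, the following is a minimal Visual Basic sketch of the Comments-based entry just described, assuming recognition has already produced a text string; the subroutine and argument names are hypothetical, and the patent does not prescribe this exact code.

    ' Hedged sketch: enter recognized text through Word's native
    ' Comments field, then import it into the main body and remove
    ' the comment. InsertViaComment and recognizedText are
    ' illustrative names, not part of the patent.
    Sub InsertViaComment(recognizedText As String)
        Dim c As Comment
        ' Anchor a comment at the designated location (here, the
        ' current insertion point).
        Set c = ActiveDocument.Comments.Add(Range:=Selection.Range, _
                                            Text:=recognizedText)
        ' Upon a separate command, import the comment text into the
        ' main body at the anchor point, then delete the comment.
        c.Scope.InsertAfter c.Range.Text
        c.Delete
    End Sub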
- In an alternative embodiment, the user may manually indicate, prior to invoking the recognition mode, the nature of the input, i.e., whether it is text, graphics or a command. Recognition can be further improved by providing a step-by-step protocol, prompted by the program, for setting up preferred symbols and for learning the handwriting patterns of the user.
- These and other features of the invention will be better understood by reference to the following detailed description in connection with the accompanying drawings, which should be taken as illustrative and not limiting.
- FIG. 1 is a block schematic diagram illustrating basic functional blocks and data flow according to one embodiment of the invention.
- FIG. 2 is a flow chart of an interrupt handler that reads handwritten information in response to writing pen taps on a writing surface.
- FIG. 3 is a flow chart of a polling technique for reading handwritten information.
- FIG. 4 is a flow chart of operation according to a representative embodiment of the invention wherein handwritten information is incorporated into the document after all handwritten information is concluded.
- FIG. 5 is a flow chart of operation according to a representative embodiment of the invention wherein handwritten information is incorporated into the document concurrently during input.
- FIG. 6 is an illustration example of options available for displaying handwritten information during various steps in the process according to the invention.
- FIG. 7 is an illustration of samples of handwritten symbols/commands and their associated meanings.
- FIG. 8 is a listing that provides generic routines for each of the first three symbol operations illustrated in FIG. 7.
- FIG. 9 is an illustration of data flow for data received from a recognition functionality element, processed and defined in an RHI memory.
- FIG. 10 is an example of a memory block format of the RHI memory suitable for storing data associated with one handwritten command.
- FIG. 11 is an example of data flow of the embedded element of FIG. 1 and FIG. 38 according to the first embodiment, illustrating the emulating of keyboard keystrokes.
- FIG. 12 is a flow chart representing subroutine D of FIG. 4 and FIG. 5 according to the first embodiment, using techniques to emulate keyboard keystrokes.
- FIG. 13 is an example of data flow of the embedded element of FIG. 1 and FIG. 38 according to the second embodiment, illustrating the running of programs.
- FIG. 14 is a flow chart representing subroutine D of FIG. 4 and FIG. 5 according to the second embodiment, illustrating the running of programs.
- FIG. 15 through FIG. 20 are flow charts of subroutine H referenced in FIG. 12 for the first three symbol operations illustrated in FIG. 7 and according to the generic routines illustrated in FIG. 8.
- FIG. 21 is a flow chart of subroutine L referenced in FIG. 4 and FIG. 5 for concluding the embedding of revisions for a Microsoft Word type document, according to the first embodiment using techniques to emulate keyboard keystrokes.
- FIG. 22 is a flow chart of an alternative to subroutine L of FIG. 21 for concluding revisions for an MS Word type document.
- FIG. 23 is a sample flow chart of subroutine I referenced in FIG. 12 for copying a recognized image from the RHI memory and placing it in the document memory via a clipboard.
- FIG. 24 is a sample of code for subroutine N referenced in FIG. 23 and FIG. 37, for copying an image from the RHI memory into the clipboard.
- FIG. 25 is a sample of translated Visual Basic code for built-in macros referenced in the flow charts of FIG. 26 to FIG. 32 and FIG. 37.
- FIG. 26 through FIG. 32 are flow charts of subroutine J referenced in FIG. 14 for the first three symbol operations illustrated in FIG. 7 and according to the generic routines illustrated in FIG. 8 for MS Word.
- FIG. 33 is a sample of code in Visual Basic for subroutine M referenced in FIG. 4 and FIG. 5, for concluding embedding of the revisions for MS Word, according to the second embodiment using the running of programs.
- FIG. 34 is a sample of translated Visual Basic code for useful built-in macros in comment mode for MS Word.
- FIG. 35 provides examples of recorded macros translated into Visual Basic code that emulates some keyboard keys for MS Word.
- FIG. 36 is a flow chart of a process for checking if a handwritten character to be emulated as a keyboard keystroke exists in a table and thus can be emulated and, if so, for executing the relevant line of code that emulates the keystroke.
- FIG. 37 is a flow chart of an example for subroutine K in FIG. 14 for copying a recognized image from RHI memory and placing it in the document memory via the clipboard.
- FIG. 38 is an alternate block schematic diagram to the one illustrated in FIG. 1, illustrating basic functional blocks and data flow according to another embodiment of the invention, using a touch screen.
- FIG. 39 is a schematic diagram of an integrated edited document made with the use of a wireless pad.
- Referring to FIG. 1, there is a block schematic diagram of an integrated document editor 10 according to a first embodiment of the invention, which illustrates the basic functional blocks and data flow according to that first embodiment. A digitizing pad 12 is used, with its writing area (e.g., within margins of an 8½″×11″ sheet) accommodating standard sized papers and corresponding to the x-y locations of the edited page. Pad 12 receives data from a writing pen 10 (e.g., magnetically, or mechanically by way of pressure with a standard pen). Data from the digitizing pad 12 is read by a data receiver 14 as bitmap and/or vector data and then stored corresponding to or referencing the appropriate x-y location in a data receiving memory 16. Optionally, this information can be displayed on the screen of a display 25 on a real-time basis to provide the writer with real-time feedback.
- Alternatively, and as illustrated in FIG. 38, a touch screen 11 (or other positional input receiving mechanism as part of a display), with its receiving and displaying mechanisms integrated, receives data from the writing pen 10, whereby the original document is displayed on the touch screen as it would have been displayed on a printed page placed on the digitizing pad 12, and the writing by the pen 10 occurs on the touch screen at the same locations as it would have been written on a printed page. Under this scenario, the display 25, pad 12 and data receiver 14 of FIG. 1 are replaced with element 11, the touch screen and associated electronics of FIG. 38, while the remaining elements of FIG. 1 are unchanged. Under the touch screen display alternative, writing paper is eliminated.
- When a printed page is used with the digitizing pad 12, adjustments in registration of location may be required such that locations on the printed page correlate to the correct x-y locations for data stored in the data receiving memory 16.
- The correlation between locations of the writing pen 10 (on the touch screen 11 or on the digitizing pad 12) and the actual x-y locations in the document memory 22 need not be perfectly accurate, since the location of the pen 10 is with reference to existing machine code data. In other words, there is a window of error around the writing point that can be allowed without loss of useful information, because it is assumed that the new handwritten information (e.g., revisions) must always correspond to a specific location of the pen, e.g., near text, a drawing or an image. This is similar to, but not always the same as, placing a cursor at an insertion point in a document and changing from command mode to data input mode. For example, the writing point may be between two lines of text but closer to one line of text than to the other. This window of error could be continuously computed as a function of the pen tapping point and the data surrounding the tapping point. In case of ambiguity as to the exact location where the new data are intended to be inserted (e.g., when the writing point overlaps multiple possible locations in the document memory 22), the touch screen 11 (or the pad 12) may generate a signal, such as a beeping sound, requesting the user to tap closer to the point where handwritten information needs to be inserted. If the ambiguity is still not resolved (when the digitizing pad 12 is used), the user may be requested to follow an adjustment procedure.
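- A minimal sketch of one way to compute such a snap within the window of error is given below, assuming the y coordinates of the candidate text lines are known; the function and variable names are hypothetical, and the patent does not prescribe a specific algorithm.

    ' Hedged sketch: snap a pen tap to the nearest line of text within
    ' an allowed window of error; return -1 when no line is close
    ' enough, in which case the caller may beep and ask the user to
    ' tap closer to the intended insertion point.
    Function SnapToNearestLine(tapY As Single, lineY() As Single, _
                               maxError As Single) As Integer
        Dim i As Integer, best As Integer
        Dim d As Single, bestD As Single
        best = -1
        bestD = maxError
        For i = LBound(lineY) To UBound(lineY)
            d = Abs(tapY - lineY(i))
            If d < bestD Then
                bestD = d
                best = i
            End If
        Next i
        SnapToNearestLine = best
    End Function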
- If desired, adjustments may be made such that the writing area on the digitizing pad 12 will be set to correspond to a specific active window (for example, in a multi-window screen), or to a portion of a window (i.e., when the active portion of a window covers a partial screen, e.g., an invoice or a bill in the accounting program QuickBooks), such that the writing area of the digitizing pad 12 is efficiently utilized. In situations where a document is a form (e.g., an order form), the paper document can be pre-set to the specific format of the form, such that the handwritten information can be entered at specific fields of the form (that correspond to these fields in the document memory 22). In addition, in operations that do not require archiving of the handwritten paper documents, handwritten information on the digitizing pad 12 may be deleted after it is integrated into the document memory 22. Alternatively, multi-use media that allow multiple deletions (that clear the handwritten information) can be used, although the touch screen alternative would be preferred over this alternative.
- A recognition functionality element 18 reads information from the data receiving memory 16 and writes the recognition results, or recognized handwritten elements, into the recognized handwritten information (RHI) memory 20. Recognized handwritten information (RHI) elements, such as characters, words and symbols, are stored in the RHI memory 20. The location of an RHI element in the RHI memory 20 correlates to its location in the data receiving memory 16 and in the document memory 22. Preferably, after symbols are recognized and interpreted as commands, they may be stored as images or icons in (for example) JPEG format, or they can be emulated as if they were keyboard keys (a technique discussed hereafter), since the symbols are intended to be intuitive. They can be useful for reviewing and interpreting revisions in the document. In addition, the recognized handwritten information prior to final incorporation (e.g., revisions for review) may be displayed either in handwriting (as is, or as revised machine code handwriting for improved readability) or in standard text.
- An embedded criteria and functionality element 24 reads the information from the RHI memory 20 and embeds it into the document memory 22. Information in the document memory 22 is displayed on the display 25, which is for example a computer monitor or the display of a touch screen. The embedded functionality determines what to display and what to embed into the document memory 22 based on the stage of the revision and selected user criteria/preferences.
- Embedding the recognized information into the document memory 22 can be applied either concurrently or after input of all handwritten information, such as revisions, has been concluded. Incorporation of the handwritten information concurrently can occur with or without user involvement. The user can indicate each time a handwritten command and its associated text and/or image has been concluded, and then it can be incorporated into the document memory 22 one at a time. (Incorporation of handwritten information concurrently without user involvement will be discussed hereafter.) The document memory 22 contains, for example, one of the following files: 1) a word processing file, such as an MS Word file or a WordPerfect file; 2) a spreadsheet, such as an Excel file; 3) a form, such as a sales order, an invoice or a bill in accounting software (e.g., QuickBooks); 4) a table or a database; 5) a desktop publishing file, such as a QuarkXPress or a PageMaker file; or 6) a presentation file, such as an MS PowerPoint file.
- It should be noted that the document could be any kind of electronic file, word processing document, spreadsheet, web page, form, e-mail, database, table, template, chart, graph, image, object, or any portion of these types of documents, such as a block of text or a unit of data. In addition, the document memory 22, the data receiving memory 16 and the RHI memory 20 could be any kind of memory or memory device, or a portion of a memory device, e.g., any type of RAM, magnetic disk, CD-ROM, DVD-ROM, optical disk or any other type of storage. It should further be noted that one skilled in the art will recognize that the elements/components discussed herein (e.g., in FIGS. 1, 38, 9, 11 and 13), such as the RHI element, may be implemented in any combination of electronic or computer hardware and/or software. For example, the invention could be implemented in software operating on a general-purpose computer or other types of computing/communication devices, such as hand-held computers, personal digital assistants (PDAs), cell phones, etc. Alternatively, a general-purpose computer may be interfaced with specialized hardware, such as an Application Specific Integrated Circuit (ASIC) or some other electronic components, to implement the invention. Therefore, it is understood that the invention may be carried out using various codes of one or more software modules forming a program and executed as instructions/data by, e.g., a central processing unit, or using hardware modules specifically configured and dedicated to perform the invention. Alternatively, the invention may be carried out using a combination of software and hardware modules.
- The recognition functionality element 18 encompasses one or more of the following recognition approaches:
- 1—Character recognition, which can, for example, be used in cases where the user clearly spells each character in capital letters in an effort to minimize recognition errors.
- 2—A holistic approach, where recognition is globally performed on the whole representation of the words and there is no attempt to identify characters individually. (The main advantage of holistic methods is that they avoid word segmentation. Their main drawback is that they are tied to a fixed lexicon of word descriptions: since these methods do not rely on letters, words are directly described by means of features. Adding new words to the lexicon typically requires human training or the automatic generation of a word description from ASCII words.)
- 3—Analytical strategies, which deal with several levels of representation corresponding to increasing levels of abstraction. (Words are not considered as a whole, but as sequences of smaller size units, which must be easily related to characters in order to make recognition independent of a specific vocabulary.)
- Strings of words or symbols, such as those described in connection with FIG. 7 and discussed hereafter, can be recognized either by the holistic approach or by the analytical strategies, although character recognition may be preferred. Units recognized as characters, words or symbols are stored in the RHI memory 20, for example in ASCII format. Units that are graphics are stored in the RHI memory as graphics, for example as a JPEG file. Units that could not be recognized as a character, word or symbol are interpreted as images, if the application accommodates graphics and, optionally, if approved by the user as graphics, and are stored in the RHI memory 20 as graphics. It should be noted that units that could not be recognized as a character, word or symbol may not be interpreted as graphics in applications that do not accommodate graphics (e.g., Excel); in this scenario, user involvement may be required.
- To improve the recognition functionality, data may be read from the document memory 22 by the recognition element 18 to verify that the recognized handwritten information does not conflict with data in the original document and to resolve/minimize as much as possible recognized information retaining ambiguity. The user may also resolve ambiguity by approving/disapproving recognized handwritten information (e.g., revisions) shown on the display 25. In addition, adaptive algorithms (beyond the scope of this disclosure) may be employed. Thereunder, user involvement may be relatively significant at first, but as the adaptive algorithms learn the specific handwritten patterns and store them as historical patterns, future ambiguities should be minimized as recognition becomes more robust.
- FIG. 2 through FIG. 5 are flow charts of operation according to an exemplary embodiment and are briefly explained herein below. The text in all of the drawings is herewith explicitly incorporated into this written description for the purposes of claim support. FIG. 2 illustrates a program that reads the output of the digitizing pad 12 (or of the touch screen 11) each time the writing pen 10 taps on and/or leaves the writing surface of the pad 12 (or of the touch screen 11). Thereafter, data is stored in the data receiving memory 16 (step E). Both the recognition element and the data receiver (or the touch screen) access the data receiving memory. Therefore, during a read/write cycle by one element, access by the other element should be disabled.
- Optionally, as illustrated in FIG. 3, the program checks every few milliseconds to see if there is new data to read from the digitizing pad 12 (or from the touch screen 11). If so, data is received from the digitizing recognizer and stored in the data receiving memory 16 (E). This process continues until the user indicates that the revisions are concluded, or until there is a timeout.
- Embedding of the handwritten information may be executed either all at once, according to procedures explained with FIG. 4, or concurrently, according to procedures explained with FIG. 5.
- The recognition element 18 recognizes one unit at a time, e.g., a character, a word, a graphic or a symbol, and makes it available to the RHI processor and memory 20 (C). The functionality of this processor and the way in which it stores recognized units into the RHI memory will be discussed hereafter with reference to FIG. 9. Units that are not recognized immediately are either dealt with at the end as graphics, or the user may indicate otherwise manually by other means, such as a selection table or keyboard input (F). Alternatively, graphics are interpreted as graphics if the user indicates when the writing of graphics begins and when it is concluded. Once the handwritten information is concluded, it is grouped into memory blocks, whereby each memory block contains all (as in FIG. 4) or possibly partial (as in FIG. 5) recognized information that is related to one handwritten command, e.g., a revision. The Embed function (D) then embeds the recognized handwritten information (e.g., revisions) in "for review" mode. Once the user approves/disapproves revisions, they are embedded in final mode (L) according to the preferences set up (A) by the user. In the examples illustrated hereafter, revisions in MS Word are embedded in Track Changes mode all at once. Also in the examples illustrated hereafter, embedding revisions in MS Word according to FIG. 4 may, for example, be useful when the digitizing pad 12 is separate from the rest of the system, whereby handwritten information from the digitizing pad's internal memory may be downloaded into the data receiving memory 16, after the revisions are concluded, via a USB or other IEEE or ANSI standard port.
- FIG. 4 is a flow chart of the various steps whereby embedding "all" recognized handwritten information (such as revisions) into the document memory 22 is executed once "all" handwritten information is concluded. First, the document type is set up (e.g., Microsoft Word or QuarkXPress), with software version and user preferences (e.g., whether to incorporate revisions as they are available or one at a time upon user approval/disapproval), and the various symbols preferred by the user for the various commands (such as for inserting text, for deleting text and for moving text around) (A). The handwritten information is read from the data receiving memory 16 and stored in the memory of the recognition element 18 (B). Information that is read from the receiving memory 16 is marked/flagged as read, or it is erased after it is read by the recognition element 18 and stored in its memory; this will insure that only new data is read by the recognition element 18.
- FIG. 5 is a flow chart of the various steps whereby embedding recognized handwritten information (e.g., revisions) into the document memory 22 is executed concurrently (e.g., with the making of the revisions). Steps 1-3 are identical to the steps of the flow chart in FIG. 4 (discussed above). Once a unit, such as a character, a symbol or a word, is recognized, it is processed by the RHI processor 20 and stored in the RHI memory. A processor (the GMB functionality 30 referenced in FIG. 9) identifies whether it is a unit that can be embedded immediately or not. It is checked whether it can be embedded (step 4.3); if it can be (step 5), it is embedded (D) and then (step 6) deleted or marked/updated as embedded (G). If it cannot be embedded (step 4.1), more information is read from the digitizing pad 12 (or from the touch screen 11). This process of steps 4-6 repeats and continues so long as handwritten information is forthcoming. Once all data is embedded (indicated by an End command or a simple timeout), units that could not be recognized are dealt with (F) in the same manner discussed for the flow chart of FIG. 4. Finally, once the user approves/disapproves revisions, they are embedded in final mode (L) according to the preferences chosen by the user.
- FIG. 6 is an example of the various options and preferences available to the user to display the handwritten information in the various steps for MS Word. In "For Review" mode, the revisions are displayed as "For Review" pending approval for "Final" incorporation. Revisions, for example, can be embedded in a "Track Changes" mode, and once approved/disapproved (as in "Accept/Reject Changes"), they are embedded into the document memory 22 as "Final". Alternatively, symbols may also be displayed on the display 25. The symbols are selectively chosen to be intuitive and, therefore, can be useful for quick review of revisions. For the same reason, text revisions may be displayed either in handwriting as is, or as revised machine code handwriting for improved readability; in "Final" mode, all the symbols are erased, and the revisions are incorporated as an integral part of the document.
- An example of a basic set of handwritten commands/symbols and their interpretation with respect to their associated data, for making revisions in various types of documents, is illustrated in FIG. 7.
- Direct access to specific locations in the document memory 22 is needed for read/write operations. Embedding recognized handwritten information from the RHI memory 20 into the document memory 22 (e.g., for incorporating revisions) may not be possible (or may be limited) for after-market applications. Each of the embodiments discussed below provides an alternate "back door" solution to overcome this obstacle.
- Embodiment One: Command information in the RHI memory 20 is used to insert or revise data, such as text or images, in designated locations in the document memory 22, wherein the execution mechanisms emulate keyboard keystrokes and, when available, operate in conjunction with running pre-recorded and/or built-in macros assigned to sequences of keystrokes (i.e., shortcut keys). Data such as text can be copied from the RHI memory 20 to the clipboard and then pasted into a designated location in the document memory 22, or it can be emulated as keyboard keystrokes. This embodiment will be discussed hereafter; a minimal sketch follows.
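- The following Visual Basic sketch illustrates the keystroke-emulation idea of Embodiment One, using VBA's SendKeys purely for illustration; an actual implementation would emulate keys at the operating-system level, and the command shown (deleting a few characters at the start of the document) is an assumed example.

    ' Hedged sketch of Embodiment One: execute an editing command
    ' purely through emulated keystrokes, the way a user would type
    ' them. SendKeys stands in for OS-level keyboard emulation.
    Sub EmulateDeleteCommand()
        SendKeys "^{HOME}", True      ' Ctrl+Home: start of document
        SendKeys "+{RIGHT 5}", True   ' Shift+Right x5: select text
        SendKeys "{DEL}", True        ' Delete the selection
    End Sub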
- Embodiment Two: In applications such as Microsoft Word, Excel and WordPerfect, where programming capabilities such as VB Scripts and Visual Basic are available, the commands and their associated data stored in the RHI memory 20 are translated to programs that embed them into the document memory 22 as intended. In this embodiment, the operating system clipboard can be used as a buffer for data (e.g., text and images). This embodiment will also be discussed hereafter; a minimal sketch follows.
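- A minimal Visual Basic sketch of Embodiment Two for MS Word follows; it assumes a memory block has supplied a page/line/column insertion point and the recognized text, and the subroutine name and arguments are illustrative only.

    ' Hedged sketch of Embodiment Two: a handwritten "insert text"
    ' command and its data are translated into code that moves the
    ' insertion point and embeds the text directly.
    Sub EmbedInsertText(pageNum As Long, lineNum As Long, _
                        colNum As Long, newText As String)
        ' Jump to the designated page, then move down to the line
        ' and right to the column (mirroring arrow-key navigation).
        Selection.GoTo What:=wdGoToPage, Which:=wdGoToAbsolute, _
                       Count:=pageNum
        Selection.MoveDown Unit:=wdLine, Count:=lineNum - 1
        Selection.MoveRight Unit:=wdCharacter, Count:=colNum - 1
        ' Type the recognized text at the designated location.
        Selection.TypeText Text:=newText
    End Sub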
- For Copy Operations in the RHI Memory:
- When a unit of text or image is copied from a specific location indicated in the memory block in the
RHI memory 20 to be inserted in a designated location in thedocument memory 22. - For Cut/Paste and for Paste Operations within the Document Memory:
- For moving text or image around within the
document memory 22, and for pasting text or image copied from theRHI memory 20. - A key benefit of Embodiment One is usefulness in a large array of applications, with or without programming capabilities, to execute commands, relying merely on control keys, and when available built-in or pre-recorded macros. When a control key, such as Arrow Up or a simultaneous combination of keys, such as Cntrl-C, is emulated, a command is executed.
- Macros cannot be run in Embodiment Two unless translated to actual low-level programming code (e.g., Visual Basic Code). In contrast, running a macro in a control language native to the application (recorded and/or built-in) in Embodiment One is simply achieved by emulating its assigned shortcut key(s). Embodiment Two may be preferred over Embodiment One, for example in MS Word, if a Visual Basic Editor is used to create codes that include Visual Basic instructions that cannot be recorded as macros.
- Alternatively, Embodiment Two may be used in conjunction with Embodiment One, whereby, for example, instead of moving text from the
RHI memory 20 to the clipboard and then placing it in a designation location in thedocument memory 22, text is emulated as keyboard keystrokes. If desired, the keyboards keys can be emulated in Embodiment Two by writing a code for each key, that, when executed, emulates a keystroke. Alternatively, Embodiment One may be implemented for applications with no programming capabilities, such as QuarkXpress, and Embodiment Two may be implemented for some of the applications that do have programming capabilities. Under this scenario, some applications with programming capabilities may still be implemented in Embodiment One or in both Embodiment One and Embodiment Two. - Alternatively, x-y locations in the data receiving memory 16 (as well as designated locations in the document memory 22), can be identified on a printout or on the display 25 (and if desired, on the touch screen 11) based on: 1) recognition/identification of a unique text and/or image representation around the writing pen, and 2) searching for and matching the recognized/identified data around the pen with data in the original document (preferably, converted into the bitmap and/or vector format that is identical to the format handwritten information is stored in the data receiving memory 16). Then handwritten information along with its x-y locations correspondingly indexed in the
document memory 22 is transmitted to a remote platform for recognition, embedding and displaying. - The data representation around the writing pen and the handwritten information are read by a miniature camera with attached circuitry that is built-in the pen. The data representing the original data in the
document memory 22 is downloaded into the pen internal memory prior the commencement of handwriting, either via a wireless connection (e.g., Bluetooth) or via physical connection (e.g., USB port). - The handwritten information along with its identified x-y locations is either downloaded into the
data receiving memory 16 of the remote platform after the handwritten information is concluded (via physical or wireless link), or it can be transmitted to the remote platform via wireless link as the x-y location of the handwritten information is identified. Then, the handwritten information is embedded into thedocument memory 22 all at once (i.e., according to the flow chart illustrated inFIG. 4 ), or concurrently (i.e., according to the flow chart illustrated inFIG. 5 ). - If desired, the
display 25 may include pre-set patterns (e.g., engraved or silk-screened) throughout the display or at selected location of the display, such that when read by the camera of the pen, the exact x-y location on thedisplay 25 can be determined. The pre-set patterns on thedisplay 25 can be useful to resolve ambiguities, for example when the identical information around locations in thedocument memory 22 exists multiple times within the document. - Further, the tapping of the pen in selected locations of the
touch screen 11 can be used to determine the x-y location in the document memory (e.g., when the user makes yes-no type selections within a form displayed on the touch screen). This, for example, can be performed on a tablet that can accept input from a pen or any other pointing device that function as a mouse and writing instrument. - Alternatively (or in addition to a touch screen), the writing pen can emit a focused laser/IR beam to a screen with thermal or optical sensing, and the location of the sensed beam may be used to identify the x-y location on the screen. Under this scenario, the use of a pen with a built-in miniature camera is not needed. When a touch screen or a display with thermal/optical sensing (or when preset patterns on an ordinary display) is used to detect x-y locations on the screen, the designated x-y location in the
document memory 22 can be determined based on: 1) the detected x-y location of thepen 10 on the screen, and 2) parameters that correlate between the displayed data and the data in the document memory 22 (e.g., application name, cursor location on the screen and zoom percent). - Alternatively, the mouse could be emulated to place the insertion point at designated locations in the
document memory 22 based on the X-Y locations indicated in theData receiving memory 16. Then information from theRHI memory 20 can be embedded into thedocument memory 22 according to Embodiment One or Embodiment Two. Further, once the insertion point is at a designated location in thedocument memory 22, selection of text or an image within thedocument memory 22 may be also achieved by emulating the mouse pointer click operation. - The Comments feature of Microsoft Word (or similar comment-inserting feature in other program applications) may be employed by the user or automatically in conjunction with either of the approaches discussed above, and then handwritten information from the
RHI memory 20 can be embedded into designated Comments fields of thedocument memory 22. This approach will be discussed further hereafter. - Before embedding information into the
document memory 22, the document type is identified and user preferences are set (A). The user may select to display revisions in Track Change feature. The Track Changes Mode of Microsoft Word (or similar features in other applications) can be invoked by the user or automatically in conjunction with either or both Embodiment One and Embodiment Two, and then handwritten information from theRHI memory 20 can be embedded into thedocument memory 22. After all revisions are incorporated into thedocument memory 22, they can be accepted for the entire document, or they can be accepted/rejected one at a time upon user command. Alternatively, they can be accepted/rejected at the making of the revisions. - The insertion mechanism may also be a plug-in that emulates the Track Changes feature. Alternatively, the Track Changes Feature may be invoked after the Comments Feature is invoked such that revisions in the Comments fields are displayed as revisions, i.e., “For Review”. This could in particular be useful for large documents reviewed/revised by multiple parties.
- In another embodiment, the original document is read and converted into a document with known accessible format (e.g., ASCII for text and JPEG for graphics) and stored into an intermediate memory location. All read/write operations are performed directly on it. Once revisions are completed, or before transmitting to another platform, it can be converted back into the original format and stored into the
document memory 22. - As discussed, revisions are written on a paper document placed on the
digitizing pad 12, whereby the paper document contains/resembles the machine code information stored in thedocument memory 22, and the x-y locations on the paper document corresponds to the x-y locations in thedocument memory 22. In an alternative embodiment, the revisions can be made on a blank paper (or on another document), whereby, the handwritten information, for example, is a command (or a set of commands) to write or revise a value/number in a cell of a spreadsheet, or to update new information in a specific location of a database; this can be useful, for example in cases were an action to update a spreadsheet, a table or a database is needed after reviewing a document (or a set of documents). In this embodiment, the x-y location in theReceiving Memory 16 is immaterial. - Before discussing the way in which information is embedded into the
document memory 22 in greater detail with reference to the flow charts, it is necessary to define how recognized data is stored in memory and how it correlates to locations in thedocument memory 22. As previously explained, embedding the recognized information into thedocument memory 22 can be either applied concurrently or after all handwritten information has been concluded. The Embed function (D) referenced inFIG. 4 reads data from memory blocks in theRHI memory 20 one at a time, which corresponds to one handwritten command and its associated text data or image data. The Embed function (D) referenced inFIG. 5 reads data from memory blocks and embeds recognized units concurrently. - Memory Blocks:
- An example of how a handwritten command and its associated text or image is defined in the
memory block 32 is illustrated inFIG. 10 . This format may be expanded, for example, if additional commands are added, i.e., in addition to the commands specified in the Command field. The parameters defining the x-y location of recognized units (i.e., InsertionPoint1 and InsertionPoint2 inFIG. 10 ) vary as a function of the application. For example, the x-y locations/insertion points of text or image in MS Word can be defined with the parameters Page#, Line# and Column# (as illustrated inFIG. 10 ). In the application Excel, the x-y locations can be translated into the cell location in the spreadsheet, i.e., Sheet#, Row# and Column#. Therefore, different formats for x-y InsertionPoint1 and x-y InsertionPoint2 need to be defined to accommodate variety of applications. -
- FIG. 9 is a chart of data flow of recognized units. These are discussed below.
- Once a unit is recognized it is stored in a queue, awaiting processing by the processor of
element 20, and more specifically, by theGMB functionality 30. The “New Recog” flag (set to “One” by therecognition element 18 when a unit is available), indicates to theRU receiver 29 that a recognized unit (i.e., the next in the queue) is available. The “New Recog” flag is reset back to “Zero” after the recognized unit is read and stored in thememory elements FIG. 9 (e.g., as in step 3.2. of the subroutines illustrated inFIG. 4 andFIG. 5 ). In response, the recognition element 18: 1) makes the next recognized unit available to read by theRU receiver 29, and 2) sets the “New Recog” flag back to “One” to indicate to theRU receiver 29 that the next unit is ready. This process continues so long as recognized units are forthcoming. This protocol insures that therecognition element 18 is in synch with the speed with which recognized units are read from the recognition element and stored in the RHI memory (i.e., inmemory elements FIG. 9 ). For example, when handwritten information is processed concurrently, there may be more than one memory block available before the previous memory block is embedded into thedocument memory 22. - In a similar manner, this FIFO technique may also be employed between
elements elements FIG. 1 andFIG. 38 , and betweenelements FIG. 1 , to insure that independent processes are well synchronized, regardless of the speed by which data is available by one element and the speed by which data is read and processed by the other element. - Optionally, the “New Recog” flag could be implemented in h/w (such as within an IC), for example, by setting a line to “High” when a recognized unit is available and to “Low” after the unit is read and stored, i.e., to acknowledge receipt.
- Process 1:
- As a unit, such as a character, a symbol or a word is recognized: 1) it is stored in Recognized Units (RU)
Memory 28, and 2) its location in theRU memory 28 along with its x-y location, as indicated in thedata receiving memory 16, is stored in the XY-RU Location to Address in RU table 26. This process continues so long as handwritten units are recognized and forthcoming. - Process 2:
- In parallel to
Process 1, the grouping into memory blocks (GMB)functionality 30 identifies each recognized unit such as a character, a word or a handwritten command (symbols or words), and stores them in the appropriate locations of memory blocks 32. In operations such as “moving text around”, “increasing fonts size” or “changing color”, an entire handwritten command must be concluded before it can be embedded into thedocument memory 22. In operations such as “deleting text” or “inserting new text”, deleting or embedding the text can begin as soon as the command has been identified and the deletion (or insertion of text) operation can then continue concurrently as the user continue to write on the digitizing pad 12 (or on the touch screen 11). - In this last scenario, as soon as the recognized unit(s) is incorporated into (or deleted from) the
document memory 22, it is deleted from theRHI memory 22, i.e., from thememory elements FIG. 9 . If deletion is not desired, embedded units may be flagged as “incorporated/embedded” or moved to another memory location (as illustrated in step 6.2 of the flow chart inFIG. 5 ). This should insure that information in the memory blocks is continuously current with new unincorporated information. - Process 3:
- As unit(s) are grouped into memory blocks, 1) the identity of the recognized units (whether they can be immediately incorporated or not) and 2) the locations of the units that can be incorporated in the RHI memory are continuously updated.
- 1. As units are groups into memory blocks, a flag (i.e., “Identity-Flag”) is set to “One” to indicate when unit(s) can be embedded. It should be noted that this flag is defined for each memory block and that it could be set more than one time for the same memory block (for example, when the user strikes through a line of text). This flag is checked in steps 4.1-4.3 of
FIG. 5 and is reset to “Zero” after the recognized unit(s) is embedded, i.e., in step 6.1 of the subroutine inFIG. 5 , and at initialization. It should be noted that the “Identity” flag discussed above is irrelevant when all recognized units associated with a memory block are embedded all at once; under this scenario and after the handwritten information is concluded, recognized, grouped and stored in the proper locations of the RHI memory, the “All Units” flag in step 6.1 ofFIG. 4 will be set to “One” by theGMB functionality 30 ofFIG. 9 , to indicate that all units can be embedded. - 2. As units are grouped into memory blocks, a pointer for memory block, i.e., the “Next memory block pointer” 31, is updated every time a new memory block is introduced (i.e., when a recognized unit(s) that is not yet ready to be embedded is introduced; when the “Identity” flag is Zero), and every time a memory block is embedded into the
document memory 22, such that the pointer will always point to the location of the memory block that is ready (when it is ready) to be embedded. This pointer indicates to the subroutines Embedd1 (ofFIG. 12 ) and Embedd2 (ofFIG. 14 ) the exact location of the relevant memory block with the recognized unit(s) that is ready to be embedded (as in step 1.2 of these subroutines). - An example of a scenario under which the “next memory block pointer” 31 is updated is when a handwritten input related to changing font size has begun, then another handwritten input related to changing colors has begun (Note that these two commands cannot be incorporated until after they are concluded), and then another handwritten input for deleting text has begun (Note that this command may be embedded as soon as the GMB functionality identify it).
- The value in the “# of memory blocks” 33 indicates the number of memory blocks to be embedded. This element is set by the
GMB functionality 30 and used in step 1.1 of the subroutines illustrated inFIG. 12 andFIG. 14 . This counter is relevant when the handwritten information is embedded all at once after its conclusion, i.e., when the subroutines ofFIG. 12 andFIG. 14 are called from the subroutine illustrated inFIG. 4 (i.e., it is not relevant when they are called from the subroutine inFIG. 5 ; its value then is set to “One”, since in this embodiment, memory blocks are embedded one at a time). -
- FIG. 11 is a block schematic diagram illustrating the basic functional blocks and data flow according to Embodiment One. The text of these and all other figures is largely self-explanatory and need not be repeated herein. Nevertheless, the text thereof may be the basis of claim language used in this document.
- FIG. 12 is a flow chart example of the Embed subroutine D referenced in FIG. 4 and FIG. 5 according to Embodiment One. The following is to be noted.
FIG. 5 (i.e., when handwritten information is embedded concurrently): 1) memory block counter (in step 1.1) is set to 1, and 2) memory block pointer is set to the location in which the current memory block to be embedded is located; this value is defined in memory block pointers element (31) ofFIG. 9 . - 2. When this subroutine is called by the subroutine illustrated in
FIG. 4 (i.e., when all handwritten information is embedded after all handwritten information is concluded): 1) memory block pointer is set to the location of the first memory block to be embedded, and 2) memory block counter is set to the value in # of memory blocks element (33) ofFIG. 9 . - In operation, memory blocks 32 are fetched one at a time from the RHI memory 20 (G) and processed as follows:
- Commands are converted to keystrokes (35) in the same sequence as the operation is performed via the keyboard and then stored in sequence in the
keystrokes memory 34. The emulatekeyboard element 36 uses this data to emulate the keyboard, such that the application reads the data as it was received from the keyboard (although this element may include additional keys not available via a keyboard such as the symbols illustrated inFIG. 7 , e.g. for insertion of new text in MS Word document). Theclipboard 38 can handle insertion of text, or text can be emulated as keyboard keystrokes. The lookup tables 40 determines the appropriate control key(s) and keystroke sequences for pre-recorded and built-in macros that, when emulated, execute the desired command. These keyboard keys are application-dependent and are a function of parameters, such as application name, software version and platform. Some control keys, such as the arrow keys, execute the same commands in a large array of applications; however, this assumption is excluded from the design inFIG. 11 , i.e., by the inclusion of the lookup table command-keystrokes inelement 40 ofFIG. 11 . Although, in the flow charts inFIGS. 15-20 , it is assumed that the following control keys execute the same commands (in the applications that are included): “Page Up”, “Page Down”, “Arrow Up”, “Arrow Down”, “Arrow Right” and “Arrow Left” (For moving the insertion point within the document), “Shift+Arrow Right” (for selection of text), and “Delete” for deleting a selected text. Preferably,element 40 include lookup tables for a large array of applications, although it could include tables for one or any desired number of applications. - The image (graphic) is first copied from the
RHI memory 20, more specifically, based on information in thememory block 32, into theclipboard 38. Its designated location is located in thedocument memory 22 via a sequence of keystrokes (e.g., via the arrow keys). It is stored (i.e., pasted from theclipboard 38 by the keystrokes sequence: Cntr-V) into thedocument memory 22. If the command involves another operation, such as “Reduce Image Size” or “Move image”, the image is first identified in thedocument memory 22 and selected. Then the operation is applied by the appropriate sequences of keystrokes. -
- FIG. 15 through FIG. 20, the flow charts of the subroutines H referenced in FIG. 12, illustrate execution of the first three basic text revisions discussed in connection with and in FIG. 8, for MS Word and other applications. These flow charts are self-explanatory and are therefore not further described herein, but are incorporated into this text. The following points are to be noted with reference to the function StartOfDocEmb1 illustrated in the flow chart of FIG. 15:
FIG. 16 . - 2. Although, in many applications, the shortcut keys combination “Cntrl+Home” will bring the insertion point to the start of the document (including MS Word), this routine was written to execute the same operation with the arrow keys.
- 3. Designated x-y locations in the
document memory 22 in this subroutine are defined based on Page#, Line# & Column#; other subroutines are required when the x-y definition differs. - Once all revisions are embedded, they are incorporated in final mode according to the flow chart illustrated in
FIG. 21 or according to the flow chart illustrated inFIG. 22 . In this implementation example, the Track Changes feature is used to “Accept All Changes” which embed all revisions as an integral part of the document. - As discussed above, a basic set of keystrokes sequences can be used to execute a basic set of commands for creation and revision of a document in a large array of applications. For example, the arrow keys can be used for jumping to a designated location in the document. When these keys are used in conjunction with the Shift key, a desired text/graphic object can be selected. Further, clipboard operations, i.e., the typical combined keystroke sequences Cntrl-X (for Cut), Cntrl-C (for Copy) and Cntrl-V (for Paste), can be used for basic edit/revision operations in many applications. It should be noted that, although a relatively small number of keyboard control keys are available, the design of an application at the OEM level is unlimited in this regard. (See for example
FIGS. 1-5 ). It should be noted that the same key combination could execute different commands. For example, deleting an item in QuarkXpres is achieved by the keystrokes Cntrl-K, where the keystrokes Cntrl-K in MS Word open a hyperlink. Therefore, the ConvertText1 function H determines the keyboard keystroke sequences for commands data stored in the RHI memory by accessing the lookup table command-keystrokes command-control-key 40 ofFIG. 11 . - Execution of handwritten commands in applications such as Microsoft Word, Excel and Word Perfect is enhanced with the use of macros. This is because sequences of keystrokes that can execute desired operations may simply be recorded and assigned to shortcut keys. Once the assigned shortcut key(s) are emulated, the recorded macro is executed. Below are some useful built-in macros for Microsoft Word. For simplification, they are grouped based on the operations used to embed handwritten information (D).
- Execution of handwritten commands in applications such as Microsoft Word, Excel and WordPerfect is enhanced with the use of macros. This is because sequences of keystrokes that can execute desired operations may simply be recorded and assigned to shortcut keys. Once the assigned shortcut key(s) are emulated, the recorded macro is executed. Below are some useful built-in macros for Microsoft Word. For simplification, they are grouped based on the operations used to embed handwritten information (D).
- Bringing the Insertion Point to a Specific Location in the Document:
- CharRight, CharLeft, LineUp, LineDown, StartOfDocument, StartOfLine, EndOfDocument, EndOfLine, EditGoto, GotoNextPage, GotoNextSection, GotoPreviousPage, GotoPreviousSection, GoBack
- Selection:
- CharRightExtend, CharLeftExtend, LineDownExtend, LineUpExtend, ExtendSelection, EditFind, EditReplace
- Operations on Selected Text/Graphic:
- EditClear, EditCopy, EditCut, EditPaste,
- CopyText, FontColors, FontSizeSelect, GrowFont, ShrinkFont, GrowFontOnePoint, ShrinkFontOnePoint, AllCaps, SmallCaps, Bold, Italic, Underline, UnderlineColor, UnderlineStyle, WordUnderline, ChangeCase, DoubleStrikethrough, Font, FontColor, FontSizeSelect
- Displaying Revisions:
- Hidden, Magnifier, Highlight, DocAccent, CommaAccent, DottedUnderline, DoubleUnderline, DoubleStrikethrough, HtmlSourceRefresh, InsertFieldChar (for enclosing a symbol for display), ViewMasterDocument, ViewPage, ViewZoom, ViewZoom100, ViewZoom200, ViewZoom75
- Images:
- InsertFrame, InsertObject, InsertPicture, EditCopyPicture, EditCopyAsPicture, EditObject, InsertDrawing, InsertHorizontalLine
- File Operations:
- FileOpen, FileNew, FileNewDefault, DocClose, FileSave, SaveTemplate
- If a macro has no shortcut key assigned to it, it can be assigned by the following procedure:
- Clicking on the Tools menu and selecting Customize causes the Customize form to appear. Clicking on the Keyboard button brings up the Customize Keyboard dialog box. In the Categories box all the menus are listed, and in the Commands box all their associated commands are listed. Assigning a shortcut key to a specific macro can simply be done by selecting the desired built-in macro in the Commands box and pressing the desired shortcut keys.
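- The same assignment can also be made programmatically. The following minimal Word VBA sketch is illustrative only; the Ctrl+Alt+S key choice is an arbitrary assumption, not taken from this disclosure:

```vb
' Bind a shortcut key to the built-in StartOfDocument command in the Normal template.
Sub AssignShortcutSketch()
    CustomizationContext = NormalTemplate
    KeyBindings.Add KeyCode:=BuildKeyCode(wdKeyControl, wdKeyAlt, wdKeyS), _
                    KeyCategory:=wdKeyCategoryCommand, _
                    Command:="StartOfDocument"
End Sub
```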
- Combinations of macros can be recorded as a new macro; the new macro runs whenever the sequence of keystrokes that is assigned to it is emulated. In the same manner, a macro in combination with keystrokes (e.g., of arrow keys) may be recorded as a new macro. It should be noted that recording of some sequences as a macro may not be permitted.
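- For instance, a combined macro might look like the following minimal Word VBA sketch (an illustrative pairing of two built-ins, not a macro recorded in this disclosure):

```vb
' Jump to the start of the document (StartOfDocument), then select to the end
' of the first line (EndOfLineExtend) - two built-ins combined as one new macro.
Sub CombinedMacroSketch()
    Selection.HomeKey Unit:=wdStory
    Selection.EndKey Unit:=wdLine, Extend:=wdExtend
End Sub
```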
- The use of macros, as well as the assignment of a sequence of keys to macros can also be done in other word processors, such as WordPerfect.
- Emulating a
keyboard key 36 in applications with built-in programming capability, such as Microsoft Word, can be achieved by running code that is equivalent to pressing that keyboard key. Referring to FIG. 35 and FIG. 36, details of this operation are presented. The text thereof is incorporated herein by reference. Otherwise, emulating the keyboard is a function that can be performed in conjunction with Windows or other computer operating systems.
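- A minimal sketch of this, assuming Word VBA and the VBA SendKeys statement as the operating-system-level mechanism (the particular keys shown are examples only):

```vb
' Emulate keyboard keys from code: Ctrl+Home moves the insertion point to the
' start of the document, and Ctrl+V pastes the clipboard contents there.
Sub EmulateKeysSketch()
    SendKeys "^{HOME}", True
    SendKeys "^v", True
End Sub
```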
- FIG. 13 is a block schematic diagram illustrating the basic functional blocks and data flow according to Embodiment Two. FIG. 14 is a flow chart example of the Embed function D referenced in FIG. 4 and in FIG. 5 according to Embodiment Two. Memory blocks are fetched from the RHI memory 20 (G) and processed. Text of these figures is incorporated herein by reference. The following should be noted with reference to FIG. 14: - 1. When this subroutine is called by the routine illustrated in
FIG. 5 (i.e., when handwritten information is embedded concurrently): 1) the memory block counter (in step 1.1 below) is set to 1, and 2) the memory block pointer is set to the location in which the current memory block to be embedded is located; this value is defined in the memory block pointers element (31) of FIG. 9. - 2. When this subroutine is called by the subroutine illustrated in
FIG. 4 (i.e., when all handwritten information is embedded after the handwriting is concluded): 1) the memory block pointer is set to the location of the first memory block to be embedded, and 2) the memory block counter is set to the value in the # of memory blocks element (33) of FIG. 9. - A set of programs executes the commands defined in the memory blocks 32 of
FIG. 9, one at a time. FIG. 26 through FIG. 32, with text incorporated herein by reference, are flow charts of the subroutine J referenced in FIG. 14. The programs depicted execute the first three basic text revisions discussed in FIG. 8 for MS Word. These subroutines are self-explanatory and are not further explained here, but the text is incorporated by reference. -
FIG. 33 is the code in Visual Basic that embeds the information in Final Mode, i.e., “Accept All Changes” of the Track Changes feature, which embeds all revisions as an integral part of the document.
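- Since the FIG. 33 listing itself is not reproduced here, the following minimal Word VBA sketch shows an equivalent of that final-mode step:

```vb
' Accept all tracked changes, embedding every revision as an integral part of
' the document, then turn change tracking off.
Sub AcceptAllChangesSketch()
    ActiveDocument.AcceptAllRevisions
    ActiveDocument.TrackRevisions = False
End Sub
```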
- Each of the macros referenced in the flow charts of FIG. 26 through FIG. 32 needs to be translated into executable code, such as VBScript or Visual Basic code. If there is uncertainty as to which method or property to use, the macro recorder typically can translate the recorded actions into code. The translated code for these macros to Visual Basic is illustrated in FIG. 25.
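- By way of illustration (the FIG. 25 listing is not reproduced here), recording the built-in macros Bold and GrowFont yields translated Visual Basic along these lines:

```vb
' Recorder-style translation of two built-in macros into Visual Basic.
Sub RecordedTranslationExample()
    Selection.Font.Bold = wdToggle   ' Bold: toggle bold on the selection
    Selection.Font.Grow              ' GrowFont: next larger available font size
End Sub
```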
- The clipboard 38 can handle the insertion of text into the document memory 22, or text can be emulated as keyboard keystrokes. (Refer to FIGS. 35-36 for details.) As in Embodiment One, an image operation (K) such as copying an image from the RHI memory 20 to the document memory 22 is executed as follows: an image is first copied from the RHI memory 20 into the clipboard 38. Its designated location is located in the document memory 22. Then it is pasted via the clipboard 38 into the document memory 22.
- The selection of a program by the program selection and execution element 42 is a function of the command, the application, the software version, the platform, and the like. Therefore, the ConvertText2 function J selects a specific program for command data that are stored in the RHI memory 20 by accessing the command-programs lookup table 44. Programs may also be initiated by events, e.g., when opening or closing a file, or by a key entry, e.g., when bringing the insertion point to a specific cell of a spreadsheet by pressing the Tab key. - In Microsoft Word, the Visual Basic Editor can be used to create very flexible, powerful macros that include Visual Basic instructions that cannot be recorded from the keyboard. The Visual Basic Editor provides additional assistance, such as reference information about objects and properties or an aspect of its behavior.
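- A minimal sketch of the program selection performed by element 42, with illustrative application and command names standing in for the actual stored programs:

```vb
' Dispatch on (application, command) to the specific program that executes it.
Sub RunCommandSketch(app As String, cmd As String)
    Select Case app & "|" & cmd
        Case "MS Word|InsertText"
            Selection.TypeText Text:="example text"
        Case "MS Word|DeleteSelection"
            Selection.Delete
        Case Else
            MsgBox "No program registered for " & app & "/" & cmd
    End Select
End Sub
```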
- Working with the Comment Feature as an Insertion Mechanism
- Incorporating the handwritten revisions into the document through the Comment feature may be beneficial in cases where the revisions are mainly insertions of new text into designated locations, or when a plurality of revisions in various designated locations in the document need to be indexed to simplify future access to revisions; this can be particularly useful for large documents under review by multiple parties. Each comment can be further loaded into a sub-document which is referenced by a comment # (or a flag) in the main document. The Comments mode can also work in conjunction with Track Changes mode.
- For Embodiment One: Insert Annotation can be achieved by emulating the keystroke sequence Alt+Cntrl+M. The Visual Basic translated code for the recorded macro with this sequence is “Selection.Comments.Add Range:=Selection.Range”, which could be used to achieve the same result in Embodiment Two.
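- A minimal Word VBA sketch combining the Comment and Track Changes operations (the routine name and text parameter are illustrative assumptions):

```vb
' Invoke Track Changes, then insert recognized handwritten text as a comment
' at the current insertion point (the code equivalent of Alt+Cntrl+M).
Sub InsertRevisionAsComment(recognizedText As String)
    ActiveDocument.TrackRevisions = True
    Selection.Comments.Add Range:=Selection.Range, Text:=recognizedText
End Sub
```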
- Once in Comment mode, revisions in the
RHI memory 20 can be incorporated into the document memory 22 as comments. If the text includes revisions, the Track Changes mode can be invoked prior to insertion of text into a comment pane. - Useful Built-in Macros for Use in the Comment Mode of MS Word:
- GotoCommentScope ; highlight the text associated with a comment reference mark
- GotoNextComment ; jump to the next comment in the active document
- GotoPreviousComment ; jump to the previous comment in the active document
- InsertAnnotation ; insert a comment
- DeleteAnnotation ; delete a comment
- ViewAnnotation ; show or hide the comment pane
- The above macros can be used in Embodiment One by emulating their shortcut keys or in Embodiment Two with their translated code in Visual Basic.
FIG. 34 provides the translated Visual Basic code for each of these macros. - Embedding handwritten information in a cell of a spreadsheet or in a field of a form or table can either be for new information or for revising existing data (e.g., deleting data, moving data between cells, or adding new data in a field). Either way, after the handwritten information is embedded in the
document memory 22, it can cause the application (e.g., Excel) to change parameters within the document memory 22, e.g., when the embedded information in a cell is a parameter of a formula in a spreadsheet, which when embedded changes the output of the formula, or when it is the price of an item in a Sales Order, which when embedded changes the subtotal of the Sales Order. If desired, these new parameters may be read by the embed functionality 24 and displayed on the display 25 to provide the user with useful information such as new subtotals, spell-check output, or the stock status of an item (e.g., as a sales order is filled in).
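- A minimal Excel VBA sketch of this feedback loop (the cell addresses are illustrative assumptions):

```vb
' Embed a recognized value in a cell, then read back the recalculated formula
' output so it can be shown to the user (e.g., a new subtotal).
Sub EmbedAndReadBackSketch()
    Dim ws As Worksheet
    Set ws = ActiveWorkbook.Worksheets(1)
    ws.Range("B2").Value = 45                        ' embedded handwritten price
    MsgBox "New subtotal: " & ws.Range("B10").Value  ' B10 assumed to hold the formula
End Sub
```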
document memory 22 for word-processing type documents can, for example, be defined by page#, line# and character# (see FIG. 10, x-y locations for InsertionPoint1 and InsertionPoint2). Similarly, the x-y location in the document memory 22 for a form, table or spreadsheet can, for example, be defined based on the location of a cell/field within the document (e.g., Column #, Row # and Page # for a spreadsheet). Alternatively, it can be defined based on the number of Tab and/or Arrow keys from a given known location. For example, a field in a Sales Order in the accounting application QuickBooks can be defined based on the number of Tabs from the first field (i.e., “customer; job”) in the form. - The embed functionality can read the x-y information (see
step 2 in the flow charts referenced in FIGS. 12 and 14), and then bring the insertion point to the desired location according to Embodiment One (see the example flow charts referenced in FIGS. 15-16), or according to Embodiment Two (see the example flow chart for MS Word referenced in FIG. 26). Then the handwritten information can be embedded. For example, for a Sales Order in QuickBooks, emulating the keyboard key combination “Cntrl+J” will bring the insertion point to the first field, customer; job; then, emulating three Tab keys will bring the insertion point to the “Date” field, or emulating eight Tab keys will bring the insertion point to the field of the first “Item Code”.
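- A minimal Word VBA sketch of bringing the insertion point to such a Page#/Line#/Column# location (cf. the flow chart of FIG. 26); the routine name is an illustrative assumption:

```vb
' Move the insertion point to Page#, Line#, Column# (all 1-based).
Sub GotoPageLineColumnSketch(pageNum As Long, lineNum As Long, colNum As Long)
    Selection.GoTo What:=wdGoToPage, Which:=wdGoToAbsolute, Count:=pageNum
    Selection.MoveDown Unit:=wdLine, Count:=lineNum - 1
    Selection.HomeKey Unit:=wdLine
    Selection.MoveRight Unit:=wdCharacter, Count:=colNum - 1
End Sub
```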
- The software application QuickBooks has no macros or programming capabilities. Forms (e.g., a Sales Order, a Bill, or a Purchase Order) and Lists (e.g., the Chart of Accounts and the customer; job list) in QuickBooks can be invoked either via pull-down menus from the toolbar or via a shortcut key. Therefore, Embodiment One could be used to emulate keyboard keystrokes to invoke a specific form or a specific list. For example, invoking a new invoice can be achieved by emulating the keyboard key combination “Cntrl+N”, and invoking the Chart of Accounts list can be achieved by emulating the keyboard key combination “Cntrl+A”. Invoking a Sales Order, which has no associated shortcut key defined, can be achieved by emulating the following keyboard keystrokes:
- 1. “Alt+C” ; brings up the pull-down menu from the toolbar related to “Customers”
- 2. “Alt+O” ; invokes a new Sales Order form
- Once a form is invoked, the insertion point can be brought to the specified x-y location, and then the recognized handwritten information (i.e., command(s) and associated text) can be embedded.
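- A minimal sketch of driving this from VBA, assuming QuickBooks is running and its window title begins with “QuickBooks” (titles vary by version; this emulates keystrokes only and is not QuickBooks API code):

```vb
' Emulate the keystrokes that invoke a new Sales Order and reach a field.
Sub InvokeSalesOrderSketch()
    AppActivate "QuickBooks"    ' bring the QuickBooks window to the foreground
    SendKeys "%c", True         ' Alt+C: "Customers" pull-down menu
    SendKeys "%o", True         ' Alt+O: new Sales Order form
    SendKeys "{TAB 8}", True    ' eight Tabs: first "Item Code" field
End Sub
```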
- As far as the user is concerned, he can either write the information (e.g., for posting a bill) on a pre-set form (e.g., in conjunction with the
digitizing pad 12 or touch screen 11) or specify commands related to the operation desired. Parameters, such as the type of entry (a form or a command), the order for entering commands, and the setup of the form, are selected by the user in step 1, “Document Type and Preferences Setup” (A), illustrated in FIG. 4 and in FIG. 5. - For example, the following sequence of handwritten commands will post a bill for a purchase of office supplies at OfficeMax on Mar. 2, 2005, for a total of $45. The parameter “office supply”, which is the account associated with the purchase, may be omitted if the vendor OfficeMax has already been set up in QuickBooks. Information can be read from the
document memory 22, and based on this information the embed functionality 24 can determine whether the account has previously been set up, and report the result on the display 25. This, for example, can be achieved by attempting to cut information from the “Account” field (i.e., via the clipboard), assuming the account is already set up. The data in the clipboard can be compared with the expected results and, based on that, output can be generated for the display. - $45
Office supply - In applications such as Excel, either or both of Embodiment One and Embodiment Two can be used to bring the insertion point to the desired location and to embed recognized handwritten information.
- A wireless pad can be used for transmission of an integrated document to a computer and, optionally, for receiving back information related to the transmitted information. It can be used, for example, in the following scenarios:
-
- 1—Filling out a form at a doctor's office
- 2—Filling out an air waybill for shipping a package
- 3—Filling out an application for a driver's license at the DMV
- 4—Serving a customer at a car rental agency or at a retail store
- 5—Taking notes at a crime scene or at an accident site
- 6—Order taking off-site, e.g., at conventions
- Handwritten information can be inserted in designated locations in a pre-designed document, such as an order form, an application, a table or an invoice, on top of a
digitizing pad 12 or using a touch screen 11 or the like. The pre-designed form is stored in a remote or a close-by computer. The handwritten information can be transmitted via a wireless link concurrently to a receiving computer. The receiving computer will recognize the handwritten information, interpret it, and store it in machine code in the pre-designed document. Optionally, the receiving computer will prepare a response and transmit it back to the transmitting pad (or touch screen), e.g., to assist the user. - For example, information filled out on the
pad 12 in an order form at a convention can be transmitted to an accounting program or a database residing in a close-by or remote server computer as the information is written. In turn, the program can check the status of an item, such as cost, price and stock status, and transmit information in real-time to assist the order taker. When the order taker indicates that the order has been completed, a sales order or an invoice can be posted in the remote server computer. -
FIG. 39 is a schematic diagram of an Integrated Edited Document System shown in connection with the use of a Wireless Pad. The Wireless Pad comprises a digitizing pad 12, display 25, data receiver 48, processing circuitry 60, transmission circuitry I 50, and receiving circuitry II 58. The digitizing pad receives tactile positional input from a writing pen 10. The transmission circuitry I 50 takes data from the digitizing pad 12 via the data receiver 48 and supplies it to receiving circuitry I 52 of a remote processing unit. The receiving circuitry II 58 captures information from display processing 54 via transmission circuitry II 56 of the remote circuitry and supplies it to processing circuitry 60 for the display 25. The receiving circuitry I 52 communicates with the data receiving memory 16, which interacts with the recognition module 18 as previously explained, which in turn interacts with the RHI processor and memory 20 and the document memory 22 as previously explained. The embed criteria and functionality element 24 interacts with these elements and with the display processing unit 54. - In a communication between two or more parties at different locations, handwritten information can, with this invention, be incorporated into a document, recognized, converted into machine-readable text and image, and incorporated into the document as “For Review”. As discussed in connection with
FIG. 6 (as an exemplary embodiment for an MS Word type document), “For Review” information can be displayed in a number of ways. The “For Review” document can then be sent to one or more receiving parties (e.g., via email). The receiving party may approve portions or all of the revisions and/or revise further in handwriting (as the sender has done) via the digitizing pad 12, via the touch screen 11 or via a wireless pad. The document can then be sent again “For Review”. This process may continue until all revisions are incorporated/concluded. - With this invention, handwritten information on a page (with or without machine-printed information) can be sent via fax, and the receiving facsimile machine, enhanced as a Multiple Function Device (printer/fax, character-recognizing scanner), can convert the document into machine-readable text/image for a designated application (e.g., Microsoft Word). Revisions vs. original information can be distinguished and converted accordingly based on designated revision areas marked on the page (e.g., by underlining or circling the revisions). Then it can be sent (e.g., via email) “For Review” (as discussed above, under “Remote Communication”).
- Integrated Document Editor with the Use of a Cell Phone
- Handwritten information can be entered on a
digitizing pad 12 whereby locations on the digitizing pad 12 correspond to locations on the cell phone display. Alternatively, handwritten information can be entered on a touch screen that is used as a digitizing pad as well as a display (i.e., similar to the touch screen 11 referenced in FIG. 38). Handwritten information can either be new information or a revision of existing stored information (e.g., a phone number, contact name, to-do list, calendar events, an image photo, etc.). Handwritten information can be recognized by the recognition element 18, processed by the RHI element 20 and then embedded into the document memory 22 (e.g., in a specific memory location of a specific contact's information). Embedding the handwritten information can, for example, be achieved by directly accessing locations in the document memory (e.g., a specific contact name); however, the method by which recognized handwritten information is embedded can be determined at the OEM level by the manufacturer of the phone. - A unique representation such as a signature, a stamp, a fingerprint or any other drawing pattern can be pre-set and fed into the
recognition element 18 as units that are part of a vocabulary or as a new character. When handwritten information is recognized as one of these pre-set units to be placed in, e.g., a specific expected x-y location of the digitizing pad 12 (FIG. 1) or touch screen 11 (FIG. 38), an authentication or part of an authentication will pass. The authentication will fail if there is no match between the recognized unit and the pre-set expected unit. This can be useful for authentication of a document (e.g., an email, a ballot or a form) to ensure that the writer/sender of the document is the intended sender. Other examples are authentication of, and access to, bank information or credit reports. The unique pre-set patterns can be either or both: 1) stored in a specific platform belonging to the user, and/or 2) stored in a remote database location. It should be noted that the unique pre-set patterns (e.g., a signature) do not have to be disclosed in the document. For example, when an authentication of a signature passes, the embed functionality 24 will, for example, embed the word “OK” in the signature line/field of the document. - The invention has now been explained with reference to specific embodiments. Other embodiments will be evident to those of skill in the art without departing from the spirit and scope of the invention. Therefore it is not intended for the invention to be limited, except as indicated by the appended claims.
Claims (60)
1. A computing device, comprising:
a touch screen, configured to
display a representation of at least one portion of a document, wherein one or more portions of the document comprise a plurality of document locations, each accessible by one or more operations, and
accept, in memory of the computing device, data representing user input within the displayed representation; and
one or more processing units, configured to automatically:
identify a user chosen command based on one or more changes in positional locations within the user input,
define a region on the touch screen, and
identify a document location within user chosen locations, based on:
said document location as represented on the touch screen being within said region and in proximity to or at a user selected positional location within the user input, and
said document location being capable of being one of said user chosen locations at which to apply said user chosen command to at least one text character, wherein:
said user chosen locations within said plurality of document locations are automatically determined, and
said region is defined to encompass allowed variation between said document location as represented on the touch screen and said user selected positional location, to compensate for human error.
2. The computing device according to claim 1, wherein:
said document location is further identified based on: said document location as represented on the touch screen being a closest location represented on the touch screen within the region to said user selected positional location, and
said closest location being within document locations of which each is capable of being one of said user chosen locations.
3. The computing device according to claim 1 wherein said region is further defined based on at least one of said plurality of document locations as represented on the touch screen being within said region.
4-7. (canceled)
8. A computing device, comprising:
a touch screen, configured to
display a representation of at least one portion of a document, wherein one or more portions of the document comprise a plurality of document locations, each accessible by one or more operations, and
accept, in memory of the computing device, data representing user input on the touch screen, associated with the displayed representation; and
one or more processing units, configured to
determine a user chosen command based on one or more user selected positional locations within the user input,
automatically define a region on the touch screen, and
automatically identify a document location within user chosen locations, based on:
said document location as represented on the touch screen being within said region and in proximity to or at a user selected positional location within the user input, and
said document location being capable of being one of said user chosen locations at which to apply said user chosen command to at least one text character, wherein:
said user chosen locations within said plurality of document locations are automatically determined, and
said computing device is configured to generate at least one of an audible signal and a visual signal indicative of said proximity associated with said user selected positional location.
9. A computing device, comprising:
a touch screen, configured to
display a representation of at least one portion of a document, wherein one or more portions of the document comprise a plurality of document locations, each accessible by one or more operations, and
accept, in memory of the computing device, data representing user input on the touch screen, associated with the displayed representation; and
one or more processing units, configured to
determine a user chosen command based on one or more user selected positional locations within the user input,
automatically define a region on the touch screen, and
automatically identify a document location within user chosen locations, based on:
said document location as represented on the touch screen being within said region and in proximity to or at a user selected positional location within the user input, and
said document location being capable of being one of said user chosen locations at which to apply said user chosen command to at least one text character, wherein:
said user chosen locations within said plurality of document locations are automatically determined, and
said computing device is configured to generate at least one of an audible signal and a visual signal to aid a user in pointing closer to a positional location on the touch screen within the user input associated with said document location.
10-13. (canceled)
14. A computing device, comprising:
a touch screen, configured to
display a representation of at least one portion of a document, wherein said at least one portion of the document comprises a plurality of document locations, each accessible by one or more operations, and
accept, in memory of the computing device, data representing user input within the displayed representation; and
one or more processing units, configured to automatically:
identify a user chosen command based on one or more changes in positional locations within the user input,
define a region on the touch screen, and
identify a document location within a range of user chosen locations, based on:
said document location as represented on the touch screen being within said region and in proximity to or at a user selected positional location within the user input, and
said document location being capable of being within said range of said user chosen locations at which to apply said user chosen command to a plurality of text characters, wherein:
said range of said user chosen locations within said plurality of document locations is automatically determined, and
said region is defined to encompass allowed variation between said document location as represented on the touch screen and said user selected positional location, to compensate for human error.
15. The computing device according to claim 14, wherein:
said document location is further identified based on: said document location as represented on the touch screen being a closest location represented on the touch screen within the region to said user selected positional location, and
said closest location being within document locations of which each is capable of being one of said user chosen locations.
16. The computing device according to claim 14 wherein said region is further defined based on at least one of said plurality of document locations as represented on the touch screen being within said region.
17. (canceled)
18. A computing device, comprising:
a touch screen, configured to
display a representation of at least one portion of a document, wherein said at least one portion of the document comprises a plurality of document locations, each accessible by one or more operations, and
accept, in memory of the computing device, data representing user input within the displayed representation; and
one or more processing units, configured to
determine a user chosen command based on one or more user selected positional locations within the user input,
automatically define a region on the touch screen, and
automatically identify a document location within a range of user chosen locations, based on:
said document location as represented on the touch screen being within said region and in proximity to or at a user selected positional location within the user input, and
said document location being capable of being within said range of said user chosen locations at which to apply said user chosen command to a plurality of text characters, wherein:
said range of said user chosen locations within said plurality of document locations is automatically determined, and
said computing device is configured to generate at least one of an audible signal and a visual signal to aid a user in pointing closer to a positional location on the touch screen within the user input associated with said document location.
19. (canceled)
20. A method, comprising:
automatically identifying a user chosen command based on one or more changes in positional locations within user input, associated with displayed representation of at least one portion of a document on a touch screen of a computing device,
wherein one or more portions of the document comprise a plurality of document locations, each accessible by one or more operations;
automatically defining a region on the touch screen; and
automatically identifying a document location within user chosen locations, based on:
said document location as represented on the touch screen being within said region and in proximity to or at a user selected positional location within the user input, or said document location being of: text character or graphic element, of which at least one portion as represented on the touch screen being within said region and in proximity to or at said user selected positional location, and
said document location being capable of being one of said user chosen locations at which to apply said user chosen command to at least one: text character or graphic element, wherein:
said user chosen locations within said plurality of document locations are automatically determined, and
said region is defined to encompass allowed variation between
said document location or said portion of: text character or graphic element, as represented on the touch screen, and
said user selected positional location,
to compensate for human error.
21. (canceled)
22. The method according to claim 20 wherein said region is further defined based on:
at least one of said plurality of document locations as represented on the touch screen or
at least one portion of: text character or graphic element, as represented on the touch screen,
being within said region.
23-36. (canceled)
37. The method according to claim 20 wherein said document location is further identified based on:
said portion of: text character or graphic element, as represented on the touch screen being a closest portion of: text character or graphic element, represented on the touch screen that is within the region to said user selected positional location, or
said document location as represented on the touch screen being a closest location represented on the touch screen within the region to said user selected positional location, wherein said closest location being within document locations of which each is capable of being one of said user chosen locations.
38. A method, comprising:
determining a user chosen command based on at least one portion of user input, associated with displayed representation of at least one portion of a document on a touch screen of a computing device,
wherein one or more portions of the document comprise a plurality of document locations, each accessible by one or more operations;
automatically defining a region on the touch screen;
automatically identifying a document location within user chosen locations, based on:
said document location as represented on the touch screen being within said region and in proximity to or at a user selected positional location within the user input, or said document location being of: text character or graphic element, of which at least one portion as represented on the touch screen being within said region and in proximity to or at said user selected positional location, and
said document location being capable of being one of said user chosen locations at which to apply said user chosen command to at least one: text character or graphic element, wherein said user chosen locations within said plurality of document locations are automatically determined; and
generating at least one of an audible signal and a visual signal indicative of said proximity associated with said user selected positional location.
39. A method, comprising:
determining a user chosen command based on at least one portion of user input, associated with displayed representation of at least one portion of a document on a touch screen of a computing device,
wherein one or more portions of the document comprise a plurality of document locations, each accessible by one or more operations;
automatically defining a region on the touch screen;
automatically identifying a document location within user chosen locations, based on:
said document location as represented on the touch screen being within said region and in proximity to or at a user selected positional location within the user input, or said document location being of: text character or graphic element, of which at least one portion as represented on the touch screen being within said region and in proximity to or at said user selected positional location, and
said document location being capable of being one of said user chosen locations at which to apply said user chosen command to at least one: text character or graphic element, wherein said user chosen locations within said plurality of document locations are automatically determined; and
generating at least one of an audible signal and a visual signal to aid a user in pointing closer to at least one positional location within the user input associated with said document location.
40. (canceled)
41. A computing device, comprising:
a touch screen, configured to
display a representation of at least one portion of a document, wherein said at least one portion of the document comprises a plurality of document locations, each accessible by one or more operations, and
accept, in memory of the computing device, data representing user input within the displayed representation; and
one or more processing units, configured to
determine a user chosen command based on one or more user selected positional locations within the user input,
automatically define a region on the touch screen, and
automatically identify a document location within a range of user chosen locations, based on:
said document location as represented on the touch screen being within said region and in proximity to or at a user selected positional location within the user input, and
said document location being capable of being within said range of said user chosen locations at which to apply said user chosen command to a plurality of text characters, wherein:
said range of said user chosen locations within said plurality of document locations is automatically determined, and
said computing device is configured to generate at least one of an audible signal and a visual signal indicative of said proximity associated with said user selected positional location.
42-43. (canceled)
44. The computing device according to claim 1 wherein said one or more operations are configured to automatically apply said user chosen command to one of: insert, select, delete, copy, change an attribute of, cut or move, at least one text character.
45. (canceled)
46. The computing device according to claim 14 wherein said one or more operations are configured to automatically apply said user chosen command to one of: select, delete, copy, change an attribute of, move or cut, a plurality of text characters.
47. (canceled)
48. The method according to claim 20 wherein said one or more operations are configured to automatically apply said user chosen command to one of: insert, select, delete, copy, cut, change an attribute of or move, at least one: text character or graphic element.
49. (canceled)
50. The computing device according to claim 1 wherein said to automatically identify said user chosen command based on said one or more changes in positional locations, comprises: to recognize said one or more changes in positional locations as said user chosen command.
51. (canceled)
52. The computing device according to claim 14 wherein said to automatically identify said user chosen command based on said one or more changes in positional locations, comprises: to recognize said one or more changes in positional locations as said user chosen command.
53. (canceled)
54. The method according to claim 20 wherein said identifying said user chosen command based on said one or more changes in positional locations, comprises: recognizing said one or more changes in positional locations as said user chosen command.
55-56. (canceled)
57. A computing device, comprising:
a touch screen, configured to
display a representation of at least one portion of a document, wherein one or more portions of the document comprise a plurality of document locations, each accessible by one or more operations, and
accept, in memory of the computing device, data representing user input within the displayed representation; and
one or more processing units, configured to automatically:
identify a user chosen command based on one or more changes in positional locations within the user input,
define a region on the touch screen, and
determine a user chosen location, based on:
said user chosen location as represented on the touch screen being within said region and in proximity to or at a user selected positional location within the user input, or said user chosen location being of: text character or graphic element, of which at least one portion as represented on the touch screen being within said region and in proximity to or at said user selected positional location, and
said user chosen location within said plurality of document locations being capable of being applied at by said user chosen command to a text character or to at least one graphic element,
wherein, the region is defined to encompass allowed variation between
said user chosen location or said portion of: text character or graphic element, as represented on the touch screen, and
said user selected positional location,
to compensate for human error.
58. The computing device according to claim 57 wherein said region is further defined based on:
at least one of said plurality of document locations as represented on the touch screen or
at least one portion of: text character or graphic element, as represented on the touch screen,
being within said region.
59. (canceled)
60. The computing device according to claim 57 wherein said user chosen location is further determined based on:
said portion of: text character or graphic element, as represented on the touch screen being a closest portion of: text character or graphic element, represented on the touch screen that is within the region to said user selected positional location, or
said user chosen location as represented on the touch screen being a closest location represented on the touch screen within the region to said user selected positional location,
wherein said closest location being within document locations of which each is capable of being applied at by said user chosen command.
61. A computing device, comprising:
a touch screen, configured to
display a representation of at least one portion of a document, wherein one or more portions of the document comprise a plurality of document locations, each accessible by one or more operations, and
accept, in memory of the computing device, data representing user input on the touch screen, associated with the displayed representation; and
one or more processing units, configured to
determine a user chosen command based on one or more user selected positional locations within the user input,
automatically define a region on the touch screen, and
automatically determine a user chosen location, based on:
said user chosen location as represented on the touch screen being within said region and in proximity to or at a user selected positional location within the user input, or said user chosen location being of: text character or graphic element, of which at least one portion as represented on the touch screen being within said region and in proximity to or at said user selected positional location, and
said user chosen location within said plurality of document locations being capable of being applied at by said user chosen command to a text character or to at least one graphic element,
wherein the computing device is configured to generate at least one of an audible signal and a visual signal indicative of said proximity associated with said user selected positional location.
62. A computing device, comprising:
a touch screen, configured to
display a representation of at least one portion of a document, wherein one or more portions of the document comprise a plurality of document locations, each accessible by one or more operations, and
accept, in memory of the computing device, data representing user input on the touch screen, associated with the displayed representation; and
one or more processing units, configured to
determine a user chosen command based on one or more user selected positional locations within the user input,
automatically define a region on the touch screen, and
automatically determine a user chosen location, based on:
said user chosen location as represented on the touch screen being within said region and in proximity to or at a user selected positional location within the user input, or said user chosen location being of: text character or graphic element, of which at least one portion as represented on the touch screen being within said region and in proximity to or at said user selected positional location, and
said user chosen location within said plurality of document locations being capable of being applied at by said user chosen command to a text character or to at least one graphic element,
wherein the computing device is configured to generate at least one of an audible signal and a visual signal to aid a user in pointing closer to a positional location on the touch screen within the user input associated with said user chosen location.
63. (canceled)
64. The computing device according to claim 57 wherein said to automatically identify said user chosen command based on said one or more changes in positional locations, comprises: to recognize said one or more changes in positional locations as said user chosen command.
65. The computing device according to claim 57 wherein said one or more operations are configured to automatically apply said user chosen command to one of: insert, select, delete, copy, change an attribute of, cut or move, at least one: text character or graphic element.
66. The method according to claim 38 wherein said document location is further identified based on:
said portion of: text character or graphic element, as represented on the touch screen being a closest portion of: text character or graphic element, represented on the touch screen that is within the region to said user selected positional location, or
said document location as represented on the touch screen being a closest location represented on the touch screen within the region to said user selected positional location, wherein said closest location being within document locations of which each is capable of being one of said user chosen locations.
67. The method according to claim 38 wherein said region is a computed region based on:
at least one of said plurality of document locations as represented on the touch screen or
at least one portion of: text character or graphic element, as represented on the touch screen,
being within said computed region.
68. The method according to claim 38 wherein the region is defined to encompass allowed variation between
said document location or said portion of: text character or graphic element, as represented on the touch screen, and
said user selected positional location,
to compensate for human error.
69. The method according to claim 38 wherein said one or more operations are configured to automatically apply said user chosen command to one of: insert, select, delete, copy, cut, change an attribute of or move, at least one: text character or graphic element.
70. The method according to claim 39 wherein said document location is further identified based on:
said portion of: text character or graphic element, as represented on the touch screen being a closest portion of: text character or graphic element, represented on the touch screen that is within the region to said user selected positional location, or
said document location as represented on the touch screen being a closest location represented on the touch screen within the region to said user selected positional location, wherein said closest location being within document locations of which each is capable of being one of said user chosen locations.
71. The method according to claim 39 wherein said region is a computed region based on:
at least one of said plurality of document locations as represented on the touch screen or
at least one portion of: text character or graphic element, as represented on the touch screen,
being within said computed region.
72. The method according to claim 39 wherein the region is defined to encompass allowed variation between
said document location or said portion of: text character or graphic element, as represented on the touch screen, and
said user selected positional location,
to compensate for human error.
73. The method according to claim 39 wherein said one or more operations are configured to automatically apply said user chosen command to one of: insert, select, delete, copy, cut, change an attribute of or move, at least one: text character or graphic element.
74. The computing device according to claim 61 wherein said user chosen location is further determined based on:
said portion of: text character or graphic element, as represented on the touch screen being a closest portion of: text character or graphic element, represented on the touch screen that is within the region to said user selected positional location, or
said user chosen location as represented on the touch screen being a closest location represented on the touch screen within the region to said user selected positional location, wherein said closest location being within document locations of which each is capable of being applied at by said user chosen command.
75. The computing device according to claim 61 wherein said region is a computed region based on:
at least one of said plurality of document locations as represented on the touch screen or
at least one portion of: text character or graphic element, as represented on the touch screen,
being within said computed region.
76. The computing device according to claim 61 wherein the region is defined to encompass allowed variation between
said user chosen location or said portion of: text character or graphic element, as represented on the touch screen, and
said user selected positional location,
to compensate for human error.
77. The computing device according to claim 61 wherein said one or more operations are configured to automatically apply said user chosen command to one of: insert, select, delete, copy, change an attribute of or move, at least one: text character or graphic element.
78. The computing device according to claim 62 wherein said user chosen location is further determined based on:
said portion of: text character or graphic element, as represented on the touch screen being a closest portion of: text character or graphic element, represented on the touch screen that is within the region to said user selected positional location, or
said user chosen location as represented on the touch screen being a closest location represented on the touch screen within the region to said user selected positional location, wherein said closest location being within document locations of which each is capable of being applied at by said user chosen command.
79. The computing device according to claim 62 wherein said region is a computed region based on:
at least one of said plurality of document locations as represented on the touch screen or
at least one portion of: text character or graphic element, as represented on the touch screen,
being within said computed region.
80. The computing device according to claim 62 wherein the region is defined to encompass allowed variation between
said user chosen location or said portion of: text character or graphic element, as represented on the touch screen, and
said user selected positional location,
to compensate for human error.
81. The computing device according to claim 62 wherein said one or more operations are configured to automatically apply said user chosen command to one of: insert, select, delete, copy, change an attribute of, cut or move, at least one: text character or graphic element.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/955,288 US10169301B1 (en) | 2005-06-02 | 2013-07-31 | Integrated document editor |
US13/955,378 US9582095B1 (en) | 2005-06-02 | 2013-07-31 | Integrated document editor |
US16/133,688 US11442619B2 (en) | 2005-06-02 | 2018-09-17 | Integrated document editor |
US16/158,235 US10810352B2 (en) | 2005-06-02 | 2018-10-11 | Integrated document editor |
US17/036,292 US20210012057A1 (en) | 2005-06-02 | 2020-09-29 | Integrated document editor |
US17/870,114 US20220357844A1 (en) | 2005-06-02 | 2022-07-21 | Integrated document editor |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/144,492 US7961943B1 (en) | 2005-06-02 | 2005-06-02 | Integrated document editor |
US13/092,114 US8548239B1 (en) | 2005-06-02 | 2011-04-21 | Integrated document editor |
US13/955,288 US10169301B1 (en) | 2005-06-02 | 2013-07-31 | Integrated document editor |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/092,114 Division US8548239B1 (en) | 2005-06-02 | 2011-04-21 | Integrated document editor |
US15/391,710 Continuation-In-Part US10133477B1 (en) | 2005-06-02 | 2016-12-27 | Integrated document editor |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/133,688 Continuation-In-Part US11442619B2 (en) | 2005-06-02 | 2018-09-17 | Integrated document editor |
US16/158,235 Continuation US10810352B2 (en) | 2005-06-02 | 2018-10-11 | Integrated document editor |
Publications (2)
Publication Number | Publication Date |
---|---|
US10169301B1 US10169301B1 (en) | 2019-01-01 |
US20190005001A1 true US20190005001A1 (en) | 2019-01-03 |
Family
ID=44121950
Family Applications (9)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/144,492 Expired - Fee Related US7961943B1 (en) | 2005-06-02 | 2005-06-02 | Integrated document editor |
US13/092,114 Active US8548239B1 (en) | 2005-06-02 | 2011-04-21 | Integrated document editor |
US13/955,288 Active US10169301B1 (en) | 2005-06-02 | 2013-07-31 | Integrated document editor |
US13/955,378 Active US9582095B1 (en) | 2005-06-02 | 2013-07-31 | Integrated document editor |
US15/391,710 Active US10133477B1 (en) | 2005-06-02 | 2016-12-27 | Integrated document editor |
US16/152,244 Active 2025-07-27 US10810351B2 (en) | 2005-06-02 | 2018-10-04 | Integrated document editor |
US16/158,235 Active 2025-07-10 US10810352B2 (en) | 2005-06-02 | 2018-10-11 | Integrated document editor |
US17/036,267 Pending US20210012056A1 (en) | 2005-06-02 | 2020-09-29 | Integrated document editor |
US17/036,292 Pending US20210012057A1 (en) | 2005-06-02 | 2020-09-29 | Integrated document editor |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/144,492 Expired - Fee Related US7961943B1 (en) | 2005-06-02 | 2005-06-02 | Integrated document editor |
US13/092,114 Active US8548239B1 (en) | 2005-06-02 | 2011-04-21 | Integrated document editor |
Family Applications After (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/955,378 Active US9582095B1 (en) | 2005-06-02 | 2013-07-31 | Integrated document editor |
US15/391,710 Active US10133477B1 (en) | 2005-06-02 | 2016-12-27 | Integrated document editor |
US16/152,244 Active 2025-07-27 US10810351B2 (en) | 2005-06-02 | 2018-10-04 | Integrated document editor |
US16/158,235 Active 2025-07-10 US10810352B2 (en) | 2005-06-02 | 2018-10-11 | Integrated document editor |
US17/036,267 Pending US20210012056A1 (en) | 2005-06-02 | 2020-09-29 | Integrated document editor |
US17/036,292 Pending US20210012057A1 (en) | 2005-06-02 | 2020-09-29 | Integrated document editor |
Country Status (1)
Country | Link |
---|---|
US (9) | US7961943B1 (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11442619B2 (en) * | 2005-06-02 | 2022-09-13 | Eli I Zeevi | Integrated document editor |
US7961943B1 (en) | 2005-06-02 | 2011-06-14 | Zeevi Eli I | Integrated document editor |
US8982066B2 (en) * | 2012-03-05 | 2015-03-17 | Ricoh Co., Ltd. | Automatic ending of interactive whiteboard sessions |
US8892990B2 (en) * | 2012-03-07 | 2014-11-18 | Ricoh Co., Ltd. | Automatic creation of a table and query tools |
WO2014179890A1 (en) * | 2013-05-09 | 2014-11-13 | Sunnybrook Research Institute | Systems and methods for providing visual feedback of touch panel input during magnetic resonance imaging |
US9696810B2 (en) | 2013-06-11 | 2017-07-04 | Microsoft Technology Licensing, Llc | Managing ink content in structured formats |
US10270819B2 (en) | 2014-05-14 | 2019-04-23 | Microsoft Technology Licensing, Llc | System and method providing collaborative interaction |
US9552473B2 (en) | 2014-05-14 | 2017-01-24 | Microsoft Technology Licensing, Llc | Claiming data from a virtual whiteboard |
US9530318B1 (en) | 2015-07-28 | 2016-12-27 | Honeywell International Inc. | Touchscreen-enabled electronic devices, methods, and program products providing pilot handwriting interface for flight deck systems |
US10636074B1 (en) * | 2015-09-18 | 2020-04-28 | Amazon Technologies, Inc. | Determining and executing application functionality based on text analysis |
US10402751B2 (en) * | 2016-03-21 | 2019-09-03 | Ca, Inc. | Document analysis system that uses machine learning to predict subject matter evolution of document content |
EP3682319A4 (en) * | 2017-09-15 | 2021-08-04 | Zeevi, Eli | Integrated document editor |
US10713424B2 (en) * | 2018-04-10 | 2020-07-14 | Microsoft Technology Licensing, Llc | Automated document content modification |
CN111382621A (en) * | 2018-12-28 | 2020-07-07 | 北大方正集团有限公司 | Parameter adjusting method and device |
WO2021055243A1 (en) * | 2019-09-16 | 2021-03-25 | Texas Tech University System | Data visualization device and method |
CN112989786B (en) * | 2021-01-18 | 2023-08-18 | 平安国际智慧城市科技股份有限公司 | Document analysis method, system, device and storage medium based on image recognition |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6493702B1 (en) * | 1999-05-05 | 2002-12-10 | Xerox Corporation | System and method for searching and recommending documents in a collection using share bookmarks |
US20040001627A1 (en) * | 2002-06-28 | 2004-01-01 | Microsoft Corporation | Writing guide for a free-form document editor |
US20040196313A1 (en) * | 2003-02-26 | 2004-10-07 | Microsoft Corporation | Ink repurposing |
US20050175242A1 (en) * | 2003-04-24 | 2005-08-11 | Fujitsu Limited | Online handwritten character input device and method |
US7120872B2 (en) * | 2002-03-25 | 2006-10-10 | Microsoft Corporation | Organizing, editing, and rendering digital ink |
US7844893B2 (en) * | 2005-03-25 | 2010-11-30 | Fuji Xerox Co., Ltd. | Document editing method, document editing device, and storage medium |
US7961943B1 (en) * | 2005-06-02 | 2011-06-14 | Zeevi Eli I | Integrated document editor |
US8253708B2 (en) * | 2005-03-18 | 2012-08-28 | Microsoft Corporation | Systems, methods, and computer-readable media for invoking an electronic ink or handwriting interface |
Family Cites Families (68)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US755187A (en) * | 1903-06-19 | 1904-03-22 | Adrian Wire Fence Company | Dies. |
US5157737A (en) * | 1986-07-25 | 1992-10-20 | Grid Systems Corporation | Handwritten keyboardless entry computer system |
US4972496A (en) * | 1986-07-25 | 1990-11-20 | Grid Systems Corporation | Handwritten keyboardless entry computer system |
US6539363B1 (en) * | 1990-08-30 | 2003-03-25 | Ncr Corporation | Write input credit transaction apparatus and method with paperless merchant credit card processing |
US5666139A (en) * | 1992-10-15 | 1997-09-09 | Advanced Pen Technologies, Inc. | Pen-based computer copy editing apparatus and method for manuscripts |
JP3362913B2 (en) * | 1993-05-27 | 2003-01-07 | 松下電器産業株式会社 | Handwritten character input device |
JPH07200155A (en) * | 1993-12-10 | 1995-08-04 | Microsoft Corp | Detection of nonobjective result of pen-type computer system |
US5544255A (en) * | 1994-08-31 | 1996-08-06 | Peripheral Vision Limited | Method and system for the capture, storage, transport and authentication of handwritten signatures |
JPH08235269A (en) * | 1995-02-28 | 1996-09-13 | Dainippon Printing Co Ltd | Character arraying method and slip kind design system |
JPH10207873A (en) * | 1997-01-17 | 1998-08-07 | Casio Comput Co Ltd | Ruled line processor and storage medium |
JPH10240220A (en) * | 1997-03-03 | 1998-09-11 | Toshiba Corp | Information processing equipment having annotation display function |
JP3746378B2 (en) * | 1997-08-26 | 2006-02-15 | シャープ株式会社 | Electronic memo processing device, electronic memo processing method, and computer-readable recording medium recording electronic memo processing program |
US6408092B1 (en) * | 1998-08-31 | 2002-06-18 | Adobe Systems Incorporated | Handwritten input in a restricted area |
US6415256B1 (en) * | 1998-12-21 | 2002-07-02 | Richard Joseph Ditzik | Integrated handwriting and speech recognition systems |
US6167376A (en) * | 1998-12-21 | 2000-12-26 | Ditzik; Richard Joseph | Computer system with integrated telephony, handwriting and speech recognition functions |
CN1173247C (en) * | 1999-01-13 | 2004-10-27 | 国际商业机器公司 | Hand written information processing system with user's interface for cutting characters |
JP3498624B2 (en) * | 1999-03-31 | 2004-02-16 | 株式会社デンソー | Radar equipment |
US6931153B2 (en) * | 2000-04-20 | 2005-08-16 | Matsushita Electric Industrial Co., Ltd. | Handwritten character recognition apparatus |
WO2002003189A1 (en) * | 2000-06-30 | 2002-01-10 | Zinio Systems, Inc. | System and method for encrypting, distributing and viewing electronic documents |
US6941507B2 (en) * | 2000-11-10 | 2005-09-06 | Microsoft Corporation | Insertion point bungee space tool |
US6912308B2 (en) * | 2000-12-01 | 2005-06-28 | Targus Communications Corp. | Apparatus and method for automatic form recognition and pagination |
US20030007018A1 (en) * | 2001-07-09 | 2003-01-09 | Giovanni Seni | Handwriting user interface for personal digital assistants and the like |
US7158678B2 (en) * | 2001-07-19 | 2007-01-02 | Motorola, Inc. | Text input method for personal digital assistants and the like |
US7039234B2 (en) * | 2001-07-19 | 2006-05-02 | Microsoft Corporation | Electronic ink as a software object |
US6727896B2 (en) * | 2001-08-01 | 2004-04-27 | Microsoft Corporation | Correction of alignment and linearity errors in a stylus input system |
JP4050055B2 (en) * | 2002-01-10 | 2008-02-20 | 株式会社リコー | Handwritten character batch conversion apparatus, handwritten character batch conversion method, and program |
US20030214531A1 (en) * | 2002-05-14 | 2003-11-20 | Microsoft Corporation | Ink input mechanisms |
US7050632B2 (en) * | 2002-05-14 | 2006-05-23 | Microsoft Corporation | Handwriting layout analysis of freeform digital ink input |
US7925987B2 (en) * | 2002-05-14 | 2011-04-12 | Microsoft Corporation | Entry and editing of electronic ink |
US7353453B1 (en) * | 2002-06-28 | 2008-04-01 | Microsoft Corporation | Method and system for categorizing data objects with designation tools |
US7751623B1 (en) * | 2002-06-28 | 2010-07-06 | Microsoft Corporation | Writing guide for a free-form document editor |
JP3783956B2 (en) * | 2002-07-23 | 2006-06-07 | 株式会社リコー | Image recording apparatus and image data selection method |
US7002560B2 (en) * | 2002-10-04 | 2006-02-21 | Human Interface Technologies Inc. | Method of combining data entry of handwritten symbols with displayed character data |
US7634729B2 (en) * | 2002-11-10 | 2009-12-15 | Microsoft Corporation | Handwritten file names |
US20040225965A1 (en) * | 2003-05-06 | 2004-11-11 | Microsoft Corporation | Insertion location tracking for controlling a user interface |
US7249320B2 (en) * | 2003-03-04 | 2007-07-24 | Microsoft Corporation | Method and system for displaying a title area for a page series |
US8074184B2 (en) * | 2003-11-07 | 2011-12-06 | Microsoft Corporation | Modifying electronic documents with recognized content or other associated data |
US7106312B2 (en) * | 2003-11-10 | 2006-09-12 | Microsoft Corporation | Text input window with auto-growth |
JP2005149140A (en) * | 2003-11-14 | 2005-06-09 | Wacom Co Ltd | Position detector and position indicator |
KR100549304B1 (en) * | 2003-12-05 | 2006-02-02 | 엘지전자 주식회사 | Method and apparatus for controlling screen light of an image display device |
US7506271B2 (en) * | 2003-12-15 | 2009-03-17 | Microsoft Corporation | Multi-modal handwriting recognition correction |
US7298904B2 (en) * | 2004-01-14 | 2007-11-20 | International Business Machines Corporation | Method and apparatus for scaling handwritten character input for handwriting recognition |
EP1569140A3 (en) * | 2004-01-30 | 2006-10-25 | Hewlett-Packard Development Company, L.P. | Apparatus, methods and software for associating electronic and physical documents |
US7551187B2 (en) * | 2004-02-10 | 2009-06-23 | Microsoft Corporation | Systems and methods that utilize a dynamic digital zooming interface in connection with digital inking |
JP2006031342A (en) * | 2004-07-15 | 2006-02-02 | Fujitsu Component Ltd | Pointing device, information display system, and input method using pointing device |
US7372993B2 (en) * | 2004-07-21 | 2008-05-13 | Hewlett-Packard Development Company, L.P. | Gesture recognition |
US7634738B2 (en) * | 2004-11-19 | 2009-12-15 | Microsoft Corporation | Systems and methods for processing input data before, during, and/or after an input focus change event |
US7461348B2 (en) * | 2004-11-19 | 2008-12-02 | Microsoft Corporation | Systems and methods for processing input data before, during, and/or after an input focus change event |
JP4733415B2 (en) * | 2005-04-05 | 2011-07-27 | シャープ株式会社 | Electronic document display apparatus and method, and computer program |
KR100703771B1 (en) * | 2005-05-17 | 2007-04-06 | 삼성전자주식회사 | Apparatus and method for displaying input panel |
JP4569397B2 (en) * | 2005-06-15 | 2010-10-27 | 富士ゼロックス株式会社 | Electronic document management system, image forming apparatus, electronic document management method, and program |
JP4770360B2 (en) | 2005-09-26 | 2011-09-14 | 富士通株式会社 | CAD program, CAD apparatus and CAD system for performing projection control processing |
US8884990B2 (en) | 2006-09-11 | 2014-11-11 | Adobe Systems Incorporated | Scaling vector objects having arbitrarily complex shapes |
US8189920B2 (en) * | 2007-01-17 | 2012-05-29 | Kabushiki Kaisha Toshiba | Image processing system, image processing method, and image processing program |
US8116569B2 (en) * | 2007-12-21 | 2012-02-14 | Microsoft Corporation | Inline handwriting recognition and correction |
US8896597B2 (en) | 2008-04-14 | 2014-11-25 | Siemens Product Lifecycle Management Software Inc. | System and method for modifying geometric relationships in a solid model |
CN101986249A (en) | 2010-07-14 | 2011-03-16 | 上海无戒空间信息技术有限公司 | Method for controlling computer by using gesture object and corresponding computer system |
JP5946216B2 (en) | 2012-12-21 | 2016-07-05 | 富士フイルム株式会社 | Computer having touch panel, operating method thereof, and program |
US20140250410A1 (en) * | 2013-03-04 | 2014-09-04 | Triology LLC | Scheduling menu system and method having flip style graphical display |
KR20140132171A (en) * | 2013-05-07 | 2014-11-17 | 삼성전자주식회사 | Portable terminal device using touch pen and handwriting input method therefor |
WO2015059787A1 (en) * | 2013-10-23 | 2015-04-30 | 株式会社 東芝 | Electronic device, method, and program |
US10120529B2 (en) | 2014-07-08 | 2018-11-06 | Verizon Patent And Licensing Inc. | Touch-activated and expandable visual navigation of a mobile device via a graphic selection element |
US10671795B2 (en) * | 2014-12-23 | 2020-06-02 | Lenovo (Singapore) Pte. Ltd. | Handwriting preview window |
USD786274S1 (en) * | 2015-04-27 | 2017-05-09 | Lg Electronics Inc. | Display screen of a navigation device for a vehicle with a graphical user interface |
USD788157S1 (en) * | 2015-08-12 | 2017-05-30 | Samsung Electronics Co., Ltd. | Display screen or portion thereof with animated graphical user interface |
US10359864B2 (en) * | 2016-04-27 | 2019-07-23 | Sharp Kabushiki Kaisha | Input display device and input display method |
US11112963B2 (en) * | 2016-05-18 | 2021-09-07 | Apple Inc. | Devices, methods, and graphical user interfaces for messaging |
US10671844B2 (en) * | 2017-06-02 | 2020-06-02 | Apple Inc. | Handwritten text recognition |
- 2005
  - 2005-06-02 US US11/144,492 patent/US7961943B1/en not_active Expired - Fee Related
- 2011
  - 2011-04-21 US US13/092,114 patent/US8548239B1/en active Active
- 2013
  - 2013-07-31 US US13/955,288 patent/US10169301B1/en active Active
  - 2013-07-31 US US13/955,378 patent/US9582095B1/en active Active
- 2016
  - 2016-12-27 US US15/391,710 patent/US10133477B1/en active Active
- 2018
  - 2018-10-04 US US16/152,244 patent/US10810351B2/en active Active
  - 2018-10-11 US US16/158,235 patent/US10810352B2/en active Active
- 2020
  - 2020-09-29 US US17/036,267 patent/US20210012056A1/en active Pending
  - 2020-09-29 US US17/036,292 patent/US20210012057A1/en active Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6493702B1 (en) * | 1999-05-05 | 2002-12-10 | Xerox Corporation | System and method for searching and recommending documents in a collection using share bookmarks |
US7120872B2 (en) * | 2002-03-25 | 2006-10-10 | Microsoft Corporation | Organizing, editing, and rendering digital ink |
US7322008B2 (en) * | 2002-03-25 | 2008-01-22 | Microsoft Corporation | Organizing, editing, and rendering digital ink |
US20040001627A1 (en) * | 2002-06-28 | 2004-01-01 | Microsoft Corporation | Writing guide for a free-form document editor |
US20040196313A1 (en) * | 2003-02-26 | 2004-10-07 | Microsoft Corporation | Ink repurposing |
US20050175242A1 (en) * | 2003-04-24 | 2005-08-11 | Fujitsu Limited | Online handwritten character input device and method |
US8253708B2 (en) * | 2005-03-18 | 2012-08-28 | Microsoft Corporation | Systems, methods, and computer-readable media for invoking an electronic ink or handwriting interface |
US7844893B2 (en) * | 2005-03-25 | 2010-11-30 | Fuji Xerox Co., Ltd. | Document editing method, document editing device, and storage medium |
US7961943B1 (en) * | 2005-06-02 | 2011-06-14 | Zeevi Eli I | Integrated document editor |
Also Published As
Publication number | Publication date |
---|---|
US10810352B2 (en) | 2020-10-20 |
US8548239B1 (en) | 2013-10-01 |
US20210012056A1 (en) | 2021-01-14 |
US20210012057A1 (en) | 2021-01-14 |
US10169301B1 (en) | 2019-01-01 |
US20190042547A1 (en) | 2019-02-07 |
US20190034079A1 (en) | 2019-01-31 |
US10810351B2 (en) | 2020-10-20 |
US10133477B1 (en) | 2018-11-20 |
US9582095B1 (en) | 2017-02-28 |
US7961943B1 (en) | 2011-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210012056A1 (en) | 2021-01-14 | Integrated document editor |
US7137076B2 (en) | Correcting recognition results associated with user input | |
KR101014075B1 (en) | Boxed and lined input panel | |
US8667410B2 (en) | Method, system and computer program product for transmitting data from a document application to a data application | |
US20080115046A1 (en) | Program, copy and paste processing method, apparatus, and storage medium | |
US20020107885A1 (en) | System, computer program product, and method for capturing and processing form data | |
US20090049375A1 (en) | Selective processing of information from a digital copy of a document for data entry | |
US20080170785A1 (en) | Converting Text | |
US8989497B2 (en) | Handwritten character input device, remote device, and electronic information terminal | |
CN108700994A (en) | System and method for digital ink interactivity | |
JP2013196479A (en) | Information processing system, information processing program, and information processing method | |
US20220357844A1 (en) | Integrated document editor | |
US20080301542A1 (en) | Digital paper-enabled spreadsheet systems | |
CN111492338B (en) | Integrated document editor | |
JP2001101162A (en) | Document processor and storage medium storing document processing program | |
JP6190549B1 (en) | Document processing system | |
JP6676121B2 (en) | Data input device and data input program | |
JP6838669B1 (en) | Electronic form editing equipment, methods, and programs | |
JP2018136709A (en) | Data input device, data input program and data input system | |
JP2024025219A (en) | PDF form reading device, reading method, and reading program | |
JPH1091618A (en) | Document preparation device and medium storing a document preparation device control program
JP2003248794A (en) | Document information processor, document information processing system and its program | |
JPH039465A (en) | Text preparing device | |
WO2007050372A2 (en) | Document recognition method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| FEPP | Fee payment procedure | Free format text: SURCHARGE FOR LATE PAYMENT, SMALL ENTITY (ORIGINAL EVENT CODE: M2554); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY; Year of fee payment: 4 |