JP5102614B2 - Processing techniques for visually acquired data from rendered documents - Google Patents


Publication number
JP5102614B2
Authority
JP
Japan
Prior art keywords
document
text
identified
user
system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2007509565A
Other languages
Japanese (ja)
Other versions
JP2008516297A (en)
JP2008516297A6 (en)
Inventor
Martin T. King
Clifford A. Kushler
James Quentin Stafford-Fraser
Dale Lawrence Grover
Original Assignee
Google Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to the U.S. provisional applications listed in the cross-reference section of the description below, and to U.S. Patent Applications 11/004,637 (US7707039B2), 11/098,043 (US20060053097A1), 11/097,093 (US20060041605A1), 11/097,835 (US7831912B2), 11/097,961 (US20060041484A1), 11/097,836 (US20060041538A1), 11/097,089 (US8214387B2), 11/097,981 (US7606741B2), 11/098,038 (US7599844B2), 11/096,704 (US7599580B2), 11/097,833 (US8515816B2), 11/097,828 (US7742953B2), 11/098,016 (US7421155B2), 11/098,014 (US8019648B2), 11/098,042 (US7593605B2), and 11/097,103 (US7596269B2)
Application filed by Google Inc.
PCT application PCT/US2005/013297, published as WO2005101192A2
Publication of JP2008516297A
Publication of JP2008516297A6
Application granted
Publication of JP5102614B2
Application status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/951 Indexing; Web crawling techniques
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/955 Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F16/9554 Retrieval from the web using information identifiers by using bar codes

Description

(Cross-reference of related applications)
This application is a continuation-in-part of the following applications, each of which is incorporated by reference in its entirety: U.S. Patent Application No. 11/004,637 (filed December 3, 2004); U.S. Patent Application No. 11/097,961 (entitled "METHODS AND SYSTEMS FOR INITIATING APPLICATION BY DATA CAPTURE FROM RENDERED DOCUMENT"); U.S. Patent Application No. 11/097,093 (entitled "DETERMINING ACTIONS INVOLVING CAPTURED INFORMATION AND ELECTRONIC CONTENT ASSOCIATED WITH RENDERED DOCUMENTS"); U.S. Patent Application No. … (title ending "DATA CAPTURE DEVICES"); U.S. Patent Application No. 11/098,014 (entitled "SEARCH ENGINES AND SYSTEMS WITH HANDHELD DOCUMENT DATA CAPTURE DEVICES"); U.S. Patent Application No. 11/097,103 (entitled "TRIGGERING ACTIONS IN RESPONSE TO OPTICALLY OR ACOUSTICALLY CAPTURING KEYWORDS FROM A RENDERED DOCUMENT"); U.S. Patent Application No. 11/098,043 (entitled "SEARCHING AND ACCESSING DOCUMENTS ON PRIVATE NETWORKS FOR USE WITH CAPTURES FROM RENDERED DOCUMENTS"); U.S. Patent Application No. 11/097,981 (entitled "INFORMATION GATHERING SYSTEM AND METHOD"); U.S. Patent Application No. 11/097,089 (entitled "DOCUMENT ENHANCEMENT SYSTEM AND METHOD"); U.S. Patent Application No. 11/097,835 (entitled "PUBLISHING TECHNIQUES FOR ADDING VALUE TO A RENDERED DOCUMENT"); U.S. Patent Application No. 11/098,016 (entitled "ARCHIVE OF TEXT CAPTURES FROM RENDERED DOCUMENTS"); U.S. Patent Application No. 11/097,828 (entitled "ADDING INFORMATION OR FUNCTIONALITY TO A RENDERED DOCUMENT VIA ASSOCIATION WITH AN ELECTRONIC COUNTERPART"); U.S. Patent Application No. 11/097,833 (entitled "AGGREGATE ANALYSIS OF TEXT CAPTURES PERFORMED BY MULTIPLE USERS FROM RENDERED DOCUMENTS"); U.S. Patent Application No. 11/097,836 (entitled "ESTABLISHING AN INTERACTIVE ENVIRONMENT FOR RENDERED DOCUMENTS"); U.S. Patent Application No. 11/098,042 (entitled "DATA CAPTURE FROM RENDERED DOCUMENTS USING HANDHELD DEVICE"); and U.S. Patent Application No. 11/096,704 (entitled "CAPTURING TEXT FROM RENDERED DOCUMENTS USING SUPPLEMENTAL INFORMATION").

  This application claims priority to the following U.S. provisional patent applications, each of which is incorporated by reference in its entirety: No. 60/563,520 (filed April 19, 2004); No. 60/563,485 (filed April 19, 2004); No. 60/564,688 (filed April 23, 2004); No. 60/564,846 (filed April 23, 2004); No. 60/566,667 (filed April 30, 2004); No. 60/571,381 (filed May 14, 2004); No. 60/571,560 (filed May 14, 2004); No. 60/571,715 (filed May 17, 2004); No. 60/589,203 (filed July 19, 2004); No. 60/589,201 (filed July 19, 2004); No. 60/589,202 (filed July 19, 2004); No. 60/598,821 (filed August 2, 2004); No. 60/602,956 (filed August 18, 2004); No. 60/602,925 (filed August 18, 2004); No. 60/602,947 (filed August 18, 2004); No. 60/602,897 (filed August 18, 2004); No. 60/602,896 (filed August 18, 2004); No. 60/602,930 (filed August 18, 2004); No. 60/602,898 (filed August 18, 2004); No. 60/603,466 (filed August 19, 2004); No. 60/603,082 (filed August 19, 2004); No. 60/603,081 (filed August 19, 2004); No. 60/603,498 (filed August 20, 2004); No. 60/603,358 (filed August 20, 2004); No. 60/604,103 (filed August 23, 2004); No. 60/604,098 (filed August 23, 2004); No. 60/604,100 (filed August 23, 2004); No. 60/604,102 (filed August 23, 2004); No. 60/605,229 (filed August 27, 2004); No. 60/605,105 (filed August 27, 2004); No. 60/613,243 (filed September 27, 2004); No. 60/613,628 (filed September 27, 2004); No. 60/613,632 (filed September 27, 2004); No. 60/613,589 (filed September 27, 2004); No. 60/613,242 (filed September 27, 2004); No. 60/613,602 (filed September 27, 2004); No. 60/613,340 (filed September 27, 2004); No. 60/613,634 (filed September 27, 2004); No. 60/613,461 (filed September 27, 2004); No. 60/613,455 (filed September 27, 2004); No. 60/613,460 (filed September 27, 2004); No. 60/613,400 (filed September 27, 2004); No. 60/613,456 (filed September 27, 2004); No. 60/613,341 (filed September 27, 2004); No. 60/613,361 (filed September 27, 2004); No. 60/613,454 (filed September 27, 2004); No. 60/613,339 (filed September 27, 2004); No. 60/613,633 (filed September 27, 2004); No. 60/615,378 (filed October 1, 2004); No. 60/615,112 (filed October 1, 2004); No. 60/615,538 (filed October 1, 2004); No. 60/617,122 (filed October 7, 2004); No. 60/622,906 (filed October 28, 2004); No. 60/633,452 (filed December 6, 2004); No. 60/633,678 (filed December 6, 2004); No. 60/633,486 (filed December 6, 2004); No. 60/633,453 (filed December 6, 2004); No. 60/634,627 (filed December 9, 2004); No. 60/634,739 (filed December 9, 2004); No. 60/647,684 (filed January 26, 2005); No. 60/648,746 (filed January 31, 2005); No. 60/653,372 (filed February 16, 2005); No. 60/653,663 (filed February 16, 2005); No. 60/653,669 (filed February 16, 2005); No. 60/653,899 (filed February 16, 2005); No. 60/653,679 (filed February 16, 2005); No. 60/653,847 (filed February 16, 2005); No. 60/654,379 (filed February 17, 2005); No. 60/654,368 (filed February 18, 2005); No. 60/654,326 (filed February 18, 2005); No. 60/654,196 (filed February 18, 2005); No. 60/655,279 (filed February 22, 2005); No. 60/655,280 (filed February 22, 2005); No. 60/655,987 (filed February 22, 2005); No. 60/655,697 (filed February 22, 2005); No. 60/655,281 (filed February 22, 2005); and No. 60/657,309 (filed February 28, 2005).

(Technical field)
The described technology is directed to the field of document processing.

  Paper documents have an unwavering appeal, as the proliferation of paper in the computer age attests. It has never been as easy to print and publish paper documents as it is today. Paper documents remain popular even though electronic documents are easier to copy, transmit, search, and edit.

  In view of the popularity of paper documents and the advantages of electronic documents, it would be useful to combine the benefits of both.

Overview A system for interacting with rendered documents (e.g., printed or displayed documents) and with associated digital "source", "duplicate", or "reference" versions of those documents (the "system") is described. In some embodiments, the system is not directly involved in recognizing and interpreting printed characters, although in some cases it may perform this function itself. Rather, the system assumes that a source or reference version of the document exists in machine-readable form (e.g., ASCII or some other machine-readable text) and is, or will in the future become, machine-accessible. The system uses various features of the rendered document, including its text, for navigation (i.e., determining a location within the document). Locations are then used to enable a rich set of user functions and interactions, some of which are described below.

  The system is based, in part, on the process of capturing and interpreting patterns of marks (e.g., text and any supplemental marks in the rendering) in a document in order to determine location information. In various embodiments, this location information is relative to the document itself, often down to a particular paragraph, sentence, word, or even a single character. If the physical layout of a particular rendering of the document is also known, however, the location information may be converted to a location on a display screen, a printed page, or the like.

  In discussing various embodiments of the system, the term "printed text" is used. "Printed" is used in a general sense to cover any form of rendering that is human-readable (e.g., on paper, on a display screen, in braille, etc.). It should also be understood that many features and applications of the system apply equally well to non-alphanumeric rendered content, such as punctuation marks, graphics and images, special marks, and the like. Embodiments of the system include these additional uses.

Part 1-Introduction 1. System Properties For every paper document that has an electronic counterpart, there exists a discrete amount of information in the paper document that can identify that counterpart. In some embodiments, the system uses a sample of text captured from a paper document, for example using a portable scanner, to identify and locate an electronic duplicate of the document. In most cases, the amount of text required is very small: a few words from a document can serve both as an identifier for the paper document and as a link to its electronic duplicate. The system can also use those few words to identify not only the document, but a location within the document.
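As a sketch of this idea, a few captured words can act as a key into an index built over the electronic counterparts. The Python below is purely illustrative; the n-gram index structure and all function names are assumptions for explanation, not taken from the patent.

```python
# Sketch: a short run of captured words identifies a document and a
# position within it, via a word-trigram index. Names are illustrative.

def build_ngram_index(docs, n=3):
    """Map each n-word sequence to the (doc_id, word_offset) pairs where it occurs."""
    index = {}
    for doc_id, text in docs.items():
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            key = " ".join(words[i:i + n])
            index.setdefault(key, []).append((doc_id, i))
    return index

def locate_capture(index, capture, n=3):
    """Return the unique (doc_id, offset) for a captured phrase, if unambiguous."""
    words = capture.lower().split()
    hits = index.get(" ".join(words[:n]), [])
    return hits[0] if len(hits) == 1 else None

docs = {
    "news-article": "the quick brown fox jumps over the lazy dog",
    "recipe": "whisk the eggs and add the quick oats slowly",
}
index = build_ngram_index(docs)
print(locate_capture(index, "brown fox jumps over"))  # -> ('news-article', 2)
```

Note how a mere four words suffice here: the trigram is unique across the (tiny) corpus, so it yields both the document identity and the word offset within it.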

  Thus, paper documents and their digital copies can be related in a number of useful ways using the systems discussed herein.

1.1. Future Overview When the system associates a piece of text in a paper document with a particular digital entity, it can build an enormous amount of functionality on that association.

  Most paper documents are accessible on the World Wide Web or from some other online database or document collection, or can be made accessible, for example on payment of a fee or subscription. So, at the simplest level, when a user scans a few words in a paper document, the system can retrieve that electronic document or some part of it, display it, email it to somebody, purchase it, print it, or post it on a web page. As further examples, scanning a few words of a book that a person is reading over breakfast could cause the audiobook version in the person's car to begin reading from that point when the person starts the car to drive to work, or scanning the serial number on a printer cartridge could begin the process of ordering a replacement.

  The system adds this completely new layer of digital functionality to applicable traditionally rendered documents, without requiring any change to the current processes of writing, printing, and publishing documents, and enables many other examples of "paper/digital integration".

1.2. Terminology Typical use of the system begins with using an optical scanner to scan text from a paper document, but it is important to note that other methods of acquisition from other types of documents are equally applicable. The system is therefore described as scanning or acquiring text from a rendered document, where those terms are defined as follows:

  A rendered document is a printed document, or a document shown on a display or monitor. It is a document that is perceptible to a human, whether in permanent form or on a transitory display.

  Scanning or acquisition is the process of systematic examination by which information is acquired from a rendered document. The process may involve optical capture using a scanner or camera (e.g., a camera in a cell phone), it may involve reading the document aloud into a voice-capture device, or it may involve typing it on a keypad or keyboard. See Section 15 for further examples.

2. System Introduction This section describes some of the devices, processes, and systems that are components of the system for paper/digital integration. In various embodiments, the system builds a wide variety of services and applications on a basic core that provides the essential functionality.

2.1. Process FIG. 1 is a data-flow diagram illustrating the flow of information in one embodiment of the core system. Other embodiments do not use all of the stages or elements illustrated here, while some use many more.

  Text is acquired 100 from the rendered document, typically in optical form by an optical scanner or in audio form by a voice recorder, and this image or sound data is then processed 102, for example to remove artifacts of the acquisition process or to improve the signal-to-noise ratio. A recognition process 104, such as OCR, speech recognition, or autocorrelation, then converts the data into a signature, which in some embodiments comprises text, text offsets, or other symbols. Alternatively, the system performs an alternative form of extracting a document signature from the rendered document. In some embodiments the signature represents a set of possible text transcriptions. This process may be influenced by feedback from other stages, for example when the search process and context analysis 110 have identified candidate documents from which the acquisition may have occurred, thus narrowing the possible interpretations of the original acquisition.
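The stages just described can be pictured as a simple pipeline. The Python below is a toy illustration of the data flow only; the stage names follow FIG. 1, but the implementations (whitespace clean-up as the artifact-removal step, word/offset pairs as the signature) are placeholder assumptions.

```python
# Toy pipeline mirroring FIG. 1: acquisition (100) -> clean-up (102) ->
# recognition (104) producing a signature of text plus offsets.

def acquire(rendered_document):
    """Stage 100: stands in for the raw image or sound data of an acquisition."""
    return rendered_document

def clean_up(data):
    """Stage 102: remove artifacts of the acquisition process (here, stray whitespace)."""
    return " ".join(data.split())

def recognize(data):
    """Stage 104: convert the cleaned data to a signature of (word, offset) pairs."""
    words = data.lower().split()
    return [(w, i) for i, w in enumerate(words)]

def process_acquisition(rendered_document):
    return recognize(clean_up(acquire(rendered_document)))

print(process_acquisition("  The   Quick Brown  "))
# -> [('the', 0), ('quick', 1), ('brown', 2)]
```

In the real system each stage is far richer (and recognition can be steered by feedback from the search stage), but the shape of the data flow is the same: raw capture in, signature out.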

  A post-processing stage 106 may take the output of the recognition process and filter it or perform such other operations on it as may be useful. Depending upon the embodiment implemented, it may be possible at this stage to deduce some direct actions 107 to be taken immediately, for example when a phrase or symbol has been acquired that itself contains sufficient information to convey the user's intent, regardless of later stages. In these cases there may be no need to refer to a digital counterpart, or even to communicate with the rest of the system.

  Typically, however, the next stage is to construct a query 108, or a set of queries, for use in searching. Some aspects of query construction may depend on the search process used and so cannot be performed until the next stage, but there will typically be some operations, such as the removal of obviously misrecognized or irrelevant characters, that can be performed in advance.
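A minimal sketch of such pre-search clean-up might look as follows. The particular character classes and the rule of dropping one-character fragments are invented heuristics for illustration, not rules stated in the patent.

```python
import re

# Sketch of query construction (stage 108): strip characters that are
# clearly misrecognitions before searching. Heuristics are illustrative.

def build_query(recognized_text):
    # Keep letters, digits, spaces and apostrophes; replace stray symbols
    cleaned = re.sub(r"[^A-Za-z0-9' ]+", " ", recognized_text)
    # Drop one-character fragments, which are often recognition noise
    words = [w for w in cleaned.split() if len(w) > 1]
    return " ".join(words)

print(build_query("qu|ck  br0wn   f*x ~ jumps"))
# -> "qu ck br0wn jumps"
```

Even this crude filter shows why some clean-up must wait for the search stage: "qu ck" is only recoverable as "quick" once candidate documents constrain the possibilities.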

  The query or queries are passed to the search and context analysis stage 110. Here the system optionally attempts to identify the document from which the original data was acquired. To do so, the system typically uses search indices and search engines 112, knowledge about the user 114, and knowledge about the user's context or the context in which the acquisition occurred 116. A search engine 112 may employ and/or index information specifically about rendered documents, about their digital counterpart documents, and about documents that have a web (Internet) presence. It may read from many of these sources, and may also write to them; and, as mentioned earlier, it may feed information into other stages of the process, for example by giving the recognition system 104 information about the language, font, rendering, and likely next words of an acquisition, based on its knowledge of the candidate documents.
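One way to picture how knowledge about the user can bias identification of the source document is the toy ranking below; the scoring scheme, the context weight, and the corpus are all invented for illustration.

```python
# Toy ranking for stage 110: term overlap with the query, plus a bonus
# for documents the user is known to be reading. Weights are invented.

def search_with_context(query_words, corpus, user_history):
    scores = {}
    for doc_id, text in corpus.items():
        words = set(text.lower().split())
        score = sum(w in words for w in query_words)  # overlap with the query
        if doc_id in user_history:                    # knowledge about the user (114)
            score += 2                                # invented context weight
        scores[doc_id] = score
    return max(scores, key=scores.get)

corpus = {
    "morning-paper": "stock markets rallied today",
    "novel": "the fox rallied his spirits today",
}
# Context tips the balance toward a document the user is already reading
print(search_with_context(["rallied", "today", "markets"], corpus, {"novel"}))
# -> "novel"
```

With an empty history the same query would rank "morning-paper" first; the point is that context knowledge can legitimately override raw term overlap when an acquisition is ambiguous.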

  In some circumstances the next stage is to retrieve 120 a copy of the document or documents identified. The document sources 124 may be directly accessible, for example from a local filing system or database or from a web server, or they may need to be contacted via some access service 122, which may enforce authentication, security, or payment, or may provide other services such as converting the document into a desired format.

  Applications of the system may take advantage of the association of extra functionality or data with part or all of a document. For example, the advertising application discussed in Section 10.4 may use an association of particular advertising messages or subjects with portions of a document. This extra associated functionality or data can be thought of as one or more overlays on the document, and is referred to herein as "markup". The next stage of the process 130, then, is to identify any markup relevant to the acquired data. Such markup may be provided by the user, the originator, or the publisher of the document, or by some other party, and may be directly accessible from some source 132 or may be generated by some service 134. In various embodiments, markup may be associated with, and apply to, a rendered document and/or the digital counterpart of a rendered document, or to groups of either or both of these documents.
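Markup as an overlay keyed to regions of a document might be sketched as follows; the interval representation, class name, and payloads are assumptions for illustration only.

```python
# Sketch of stage 130: markup as overlays keyed to regions of a document.
# The (start, end, payload) interval representation is an assumption.

class MarkupLayer:
    def __init__(self):
        self.entries = []  # (start_offset, end_offset, payload)

    def annotate(self, start, end, payload):
        """Attach a markup payload to the word-offset range [start, end)."""
        self.entries.append((start, end, payload))

    def lookup(self, offset):
        """Return all markup applying to an acquisition at the given offset."""
        return [p for (s, e, p) in self.entries if s <= offset < e]

layer = MarkupLayer()
layer.annotate(0, 50, {"publisher": "errata available"})   # publisher-supplied
layer.annotate(10, 20, {"advertiser": "related product"})  # third-party overlay
print(layer.lookup(12))
```

Several parties can annotate overlapping regions independently, which is the sense in which markup forms multiple overlays on the same document.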

  Lastly, as a result of the earlier stages, some actions may be taken 140. These may be default actions, such as simply recording the information found; they may be dependent on the data or document; or they may be derived from the markup analysis. Sometimes an action simply passes the data to another system. The various possible actions appropriate to an acquisition at a specific point in a rendered document may be presented to the user as a menu on an associated display, for example on the local display 332, on a computer display 212, or on a mobile phone or PDA display 216. If the user does not respond to the menu, a default action may be taken.
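The menu-with-default behaviour can be sketched in a few lines; the action strings and the choice of "record capture" as the default are illustrative assumptions.

```python
# Sketch of stage 140: offer markup-derived actions as a menu, falling
# back to a default of simply recording the acquisition. Names invented.

def choose_action(markup_actions, user_choice=None):
    menu = list(markup_actions) + ["record capture"]  # default is always offered
    if user_choice in menu:
        return user_choice
    return "record capture"                           # no response: default action

print(choose_action(["email excerpt", "buy document"]))                  # no response
print(choose_action(["email excerpt", "buy document"], "buy document"))  # user picked
```

The first call prints "record capture" and the second "buy document": the menu is derived from markup, while the default keeps the system useful even when the user does nothing.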

2.2. Components FIG. 2 is a component diagram of the components included in a typical implementation of the system in the context of a typical operating environment. As illustrated, the operating environment includes one or more optical scanning acquisition devices 202 or audio acquisition devices 204. In some embodiments, the same device performs both functions. Each acquisition device communicates with other parts of the system, such as a computer 212 and a mobile station 216 (e.g., a cell phone or PDA), using either a wired or a wireless connection, the latter typically via a network 220 that includes a wireless base station 214. In some embodiments, the acquisition device is integrated into the mobile station, and optionally shares some of the audio and/or optical components used by the device for voice communication and picture-taking.

  Computer 212 may include a storage device containing computer-executable instructions for processing data received from the scanning devices 202 and 204. As an example, such data may include an identifier (such as the serial number of the scanning device 202/204, or an identifier that partially or uniquely identifies the user of the scanner), scanning context information (e.g., the time of the scan, the location of the scan, etc.), and/or scanned information (such as a text string) that is used to uniquely identify the document being scanned. In alternative embodiments, the operating environment may include more or fewer components.
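A record combining these three kinds of data (device or user identifier, scan context, scanned text) might be sketched as below; the field names are assumptions for illustration, not taken from the patent.

```python
# Sketch of the record described above: an acquisition bundles a device or
# user identifier, context information, and the scanned text. Field names
# are illustrative assumptions.

from dataclasses import dataclass, field
import time

@dataclass
class Capture:
    device_id: str                # serial number or user identifier
    scanned_text: str             # text used to identify the document
    scan_time: float = field(default_factory=time.time)
    scan_position: tuple = None   # e.g. coordinates of the scan, if known

cap = Capture(device_id="scanner-0042", scanned_text="quick brown fox")
print(cap.device_id, cap.scanned_text)
```

Bundling context with the text is what lets the later search stage use "knowledge about the user" and "the context in which the acquisition occurred" without a separate lookup.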

  Also available on the network 220 are search engines 232, document sources 234, user account services 236, markup services 238, and other network services 239. The network 220 may be a corporate intranet, the public Internet, a mobile phone network, or some other network, or an interconnection of any of these.

  Regardless of the manner by which the devices are coupled to each other, they may all be operable in accordance with well-known commercial transaction and communication protocols (e.g., the Internet Protocol (IP)). In various embodiments, the functions and capabilities of the scanning device 202, the computer 212, and the mobile station 216 may be wholly or partially integrated into one device. Thus, the terms scanning device, computer, and mobile station can refer to the same device, depending on whether the device incorporates the functions or capabilities of the scanning device 202, computer 212, and mobile station 216. In addition, some or all of the functions of the search engines 232, document sources 234, user account services 236, markup services 238, and other network services 239 may be implemented on any of these devices and/or other devices not shown.

2.3. Acquisition Device As described above, the acquisition device acquires text using an optical scanner that captures image data from the rendered document, or using an audio recording device that captures a user's spoken reading of the text, or by other methods. Some embodiments of the acquisition device may also acquire images, graphical symbols and icons, etc., including machine-readable codes such as barcodes. The device may be exceedingly simple, consisting of little more than a transducer, some storage, and a data interface, and relying on other functionality residing elsewhere in the system; or it may be a more full-featured device. For illustration, this section describes a device based on an optical scanner with a typical set of features.

  Scanners are well-known devices that acquire and digitize images. An offshoot of the photocopier industry, the first scanners were relatively large devices that captured an entire page of a document at once. Recently, portable optical scanners have been introduced in convenient form factors, such as a pen-shaped handheld device.

  In some embodiments, a portable scanner can be used to scan text, graphics, or symbols from a rendered document. Portable scanners have scanning elements that obtain text, symbols, graphics, etc. from the rendered document. In addition to documents printed on paper, in some embodiments, rendered documents include documents displayed on a screen such as a CRT monitor or LCD display.

  FIG. 3 is a block diagram of an embodiment of a scanner 302. The scanner 302 comprises an optical scanning head 308 to scan information from the rendered document and convert it to machine-compatible data, and an optical path 306, typically a lens, an aperture, or an image conduit, to convey the image from the rendered document to the scanning head. The scanning head 308 may incorporate a charge-coupled device (CCD), a complementary metal-oxide semiconductor (CMOS) imaging device, or an optical sensor of another type.

  A microphone 310 and associated circuitry convert the sound of the environment (including spoken words) into machine-compatible signals, and other input facilities exist in the form of buttons, scroll wheels, or other tactile sensors such as a touchpad 314.

  Feedback to the user is possible via a visual display or indicator light 332, a loudspeaker or other audio transducer 334, and via a vibration module 336.

  Scanner 302 comprises logic 326 to interact with the various other components, possibly processing the received signals into different formats and/or interpretations. Logic 326 may be operable to read and write data and program instructions stored in associated storage 330, such as RAM, ROM, flash, or another suitable storage device. It may read a time signal from the clock unit 328. The scanner 302 also includes an interface 316 to communicate scanned information and other signals to a network and/or an associated computing device. In some embodiments, the scanner 302 may have an on-board power supply 332. In other embodiments, the scanner 302 may be powered through a tethered connection to another device, such as a universal serial bus (USB) connection.

  As an example of the use of the scanner 302, a reader may scan some text from a newspaper article with the scanner 302. The text is scanned as a bitmap image via the scan head 308. Logic 326 causes the bitmap image to be stored in storage device 330, along with an associated timestamp read from the clock unit 328. Logic 326 may also perform optical character recognition (OCR) or other post-scan processing on the bitmap image to convert it to text. Logic 326 may optionally extract a signature from the image, for example by performing a convolution-like process to locate repeated occurrences of characters, symbols, or objects, and determining the distance between, or the number of other characters, symbols, or objects between, these repeated elements. The reader may then upload the bitmap image (or the text or other signature, if post-scan processing has been performed by logic 326) to an associated computer via interface 316.
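The repeated-element signature described above can be sketched in miniature. This is a hypothetical illustration operating on a sequence of already-segmented glyph identifiers; logic 326 as described would match raw image patches by convolution rather than work on labeled symbols.

```python
def offset_signature(glyphs):
    """Record, for each glyph, the distance back to the previous occurrence
    of the same shape (0 if the shape has not been seen before). Two scans
    of the same text produce the same signature even when no individual
    character is ever recognized."""
    last_seen = {}
    signature = []
    for pos, glyph in enumerate(glyphs):
        signature.append(pos - last_seen[glyph] if glyph in last_seen else 0)
        last_seen[glyph] = pos
    return signature
```

Because the signature depends only on the repetition pattern, it can be computed from unlabeled image patches and compared against an index built in the same way from the document collection.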

  As another example of the use of the scanner 302, a reader may acquire some text from an article as an audio file, using microphone 310 as an acoustic capture port. Logic 326 causes the audio file to be stored in storage device 330. Logic 326 may also perform speech recognition or other post-scan processing on the audio file to convert it to text. As above, the reader may then upload the audio file (or the text produced by post-scan processing performed by logic 326) to an associated computer via interface 316.

Part 2 - Overview of Core System Areas
As paper-digital integration becomes commonplace, there are many aspects of current technology that can be changed to make better use of this integration, or to implement it more efficiently. This section highlights some of these issues.

3. Search
Searching a collection of documents, even a large collection such as the World Wide Web, has become commonplace for ordinary users, who use a keyboard to construct a search query that is sent to a search engine. This section and the next discuss aspects of both the construction of queries originating from acquisitions from rendered documents, and of the search engines that handle such queries.

3.1. Scan/Speak/Type as Search Query
Use of the described system typically starts with a few words being acquired from a rendered document using any of several methods, including those described in Section 1.2 above. Where the input needs some interpretation to convert it to text, for example in the case of OCR or speech input, there may be end-to-end feedback in the system so that the document set can be used to enhance the recognition process. End-to-end feedback can be applied by performing an approximation of the recognition or interpretation, identifying a set of one or more candidate matching documents, and then using information from the possible matches in the candidate documents to further refine and restrict the recognition or interpretation. Candidate documents can be weighted according to their estimated relevance (for example, based on the number of other users who have scanned within these documents, or on their popularity on the Internet), and these weightings can be applied in the iterative recognition process.
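One round of this feedback loop might be sketched as follows. The data structures (a set of alternative readings for each captured word, a candidate list with prior weights standing in for scan popularity) are invented for illustration and are not a prescribed implementation.

```python
def feedback_round(readings_per_word, candidates, priors):
    """One round of end-to-end feedback: score each candidate document by
    how many ambiguous words it can explain (blended with a prior weight),
    then prune each word's readings to those the best candidate confirms."""
    doc_words = {doc: set(text.lower().split()) for doc, text in candidates.items()}

    def score(doc):
        hits = sum(1 for readings in readings_per_word if readings & doc_words[doc])
        return hits + priors.get(doc, 0.0)

    best = max(doc_words, key=score)
    # keep only readings confirmed by the best candidate; if none survive,
    # keep the original ambiguity rather than discard the word
    pruned = [(readings & doc_words[best]) or readings
              for readings in readings_per_word]
    return best, pruned
```

Repeating the round with the pruned readings narrows both the interpretation and the candidate set, which is the iterative refinement described above.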

3.2. Short-Phrase Searching
Because the selectivity of a search query based on a few words is greatly enhanced when the relative positions of those words are known, only a small amount of text need be acquired for the system to identify the location of the text within the collection. Most commonly, the input text will be a contiguous sequence of words, such as a short phrase.

3.2.1. Finding Document and Location within Document from Short Acquisition
In addition to locating the document from which a phrase originates, the system can identify the location within that document and can take action based on this knowledge.
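As a minimal sketch of how a few consecutive words can pin down both the document and the position within it, one could index every three-word run in the collection. The index structure and the three-word window are assumptions for illustration, not part of the described system.

```python
from collections import defaultdict

def build_trigram_index(docs):
    """Map every run of three consecutive words to (doc id, word position)."""
    index = defaultdict(list)
    for doc_id, text in docs.items():
        words = text.lower().split()
        for i in range(len(words) - 2):
            index[tuple(words[i:i + 3])].append((doc_id, i))
    return index

def locate(index, phrase):
    """Return every (document, position) at which the phrase's first three
    words occur; a short phrase is usually enough to make this unique."""
    return index.get(tuple(phrase.lower().split()[:3]), [])
```

A phrase shared by several documents returns several locations, at which point the contextual factors of Section 3.3 can disambiguate.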

3.2.2. Other Methods of Finding Locations The system may use other methods of finding documents and locations, such as by using watermarks or other special markings in the rendered document.

3.3. Incorporation of Other Factors in the Search Query
In addition to the acquired text, other factors (that is, information about the user's identity, profile, and context) may form part of the search query, such as the time of the acquisition, the identity and geographic location of the user, and knowledge of the user's habits and recent activities.

  The identity of the document, and other information about previous acquisitions, especially if they are very recent, may also form part of a search query.

  The user's identity may be determined from a unique identifier associated with the acquisition device, and/or from biometric or other supplemental information (voiceprints, fingerprints, etc.).

3.4. Knowledge of the Nature of Unreliability in the Search Query (OCR Errors etc.)
The search query can be constructed taking into account the types of errors likely to occur in the particular acquisition method used. One example is an indication of suspected errors in the recognition of specific characters; in this instance the search engine may treat those characters as wildcards, or assign them a lower priority.
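For instance, low-confidence characters from the recognizer might be mapped to single-character wildcards before the query is matched. The confidence threshold below is an invented parameter for illustration.

```python
import re

def query_pattern(chars, threshold=0.8):
    """Build a search pattern from per-character OCR output.
    chars: list of (character, confidence) pairs; characters recognized
    below the threshold become single-character wildcards."""
    parts = [re.escape(c) if conf >= threshold else "." for c, conf in chars]
    return re.compile("".join(parts))
```

A search engine receiving such a pattern can match every candidate word that agrees with the confidently recognized characters, rather than failing on a single misread letter.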

3.5. Local Caching of the Index for Performance/Offline Use
The acquisition device may be unable to communicate with the search engine or the collection at the time of data acquisition. For this reason, information helpful to the offline use of the device can be downloaded in advance to the device, or to some entity with which the device can communicate. In some cases, all or a substantial part of the index associated with the collection can be downloaded. This topic is discussed further in Section 15.3.

3.6. Queries, in Whatever Form, May Be Recorded and Acted on Later
If there is likely to be a delay or cost associated with communicating a query and receiving the results, this pre-loaded information can improve the performance of the local device, reduce communication costs, and provide helpful and timely user feedback.

  In the situation where no communication is available (the local device is “offline”), the queries can be saved and transmitted to the rest of the system when communication is restored.

  In these cases it may be important to transmit a timestamp with each query. The time of the acquisition can be a significant factor in the interpretation of the query. Section 13.1, for example, discusses the importance of the time of acquisition in relation to earlier acquisitions. It is important to note that the time of acquisition will not always be the same as the time that the query is executed.
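Recording queries with their acquisition time and replaying them once communication is restored might be sketched as follows; the transport callable is a placeholder for whatever channel the device uses.

```python
import time
from collections import deque

class QueryQueue:
    """Hold queries while the device is offline, preserving each query's
    original acquisition time, and replay them in order when connectivity
    returns. Minimal sketch; `send` stands in for any transport."""

    def __init__(self, send):
        self.send = send
        self.pending = deque()

    def capture(self, query, acquired_at=None):
        self.pending.append((query, acquired_at or time.time()))

    def flush(self):
        while self.pending:
            query, ts = self.pending.popleft()
            # transmit the acquisition time, not the (later) send time
            self.send(query, acquired_at=ts)
```

The point of the design is that `flush` forwards the stored timestamp, so the rest of the system can interpret each query relative to when the text was actually acquired.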

3.7. Parallel Searching
For performance reasons, multiple queries may be launched in response to a single acquisition, either in sequence or in parallel. Several queries may be sent in response to a single acquisition, for example as new words are added to the acquisition, or to query multiple search engines in parallel.

  For example, in some embodiments, the system sends a query to a special index of the current document, a search engine on the local machine, a search engine on the corporate network, and a remote search engine on the Internet.
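Issuing one acquisition's query to several engines at once might look like the sketch below, where each engine is represented by a plain callable standing in for a network request; the engine names are invented examples.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_search(query, engines):
    """Send one query to several engines simultaneously and collect the
    answers. `engines` maps an engine name to any callable taking the
    query string; real engines would be network calls."""
    with ThreadPoolExecutor(max_workers=len(engines)) as pool:
        futures = {name: pool.submit(fn, query) for name, fn in engines.items()}
        return {name: f.result() for name, f in futures.items()}
```

A fuller version would return results as they arrive and cancel still-pending queries once an earlier response makes them superfluous.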

  Certain search results may be given higher priority than other search results.

  The response to a given query may indicate that other pending queries are superfluous; these may be cancelled before completion.

4. Paper and Search Engines
It is often desirable for a search engine that handles traditional online queries also to handle queries originating from rendered documents. Conventional search engines may be enhanced or modified in a number of ways to make them more suitable for use with the described system.

  The search engine and/or other components of the system may create and maintain indices that have different or extra features. The system may modify an incoming paper-originated query, or change the way the query is handled in the resulting search, thus distinguishing these paper-originated queries from those typed into web browsers or originating from other sources. The system may also take different actions, or offer different options, when the results are returned by searches originated from paper as compared to those from other sources. Each of these approaches is discussed below.

4.1. Indexing
Often, the same index can be searched using either paper-originated or traditional queries, but the index may be enhanced for use in the current system in a variety of ways.

4.1.1. Knowledge about the Paper Form
Extra fields can be added to such an index that will help in the case of a paper-based search.

Index entry indicating document availability in paper form
A first example is a field indicating that the document is known to exist or to be distributed in paper form. The system may give such documents higher priority if the query comes from paper.

Knowledge of popularity of paper form
In this example, statistical data concerning the popularity of paper documents (and, optionally, of sub-regions within those documents), for example the amount of scanning activity or the circulation numbers provided by publishers and other sources, is used to give such documents higher priority, to boost the priority of digital counterpart documents (for example, for browser-based queries or web searches), etc.

Knowledge of rendered format
Another important example may be recording information about the layout of a specific rendering of a document.

  For example, for a particular edition of a book, the index may include information about where line breaks and page breaks occur, what fonts were used, and any exceptional capitalization.

  The index may include information about the proximity of other items on the page, such as images, text boxes, tables, and advertisements.

Use of semantic information in the original
Lastly, semantic information that can be deduced from the source markup but is not apparent in the paper document, such as the fact that a particular piece of text refers to an item offered for sale, or that a certain paragraph contains program code, may also be recorded in the index.

4.1.2. Indexing in the Knowledge of the Acquisition Method
A second factor that may modify the nature of the index is knowledge of the type of acquisition likely to be used. A search initiated by an optical scan may benefit if the index takes into account characters that are easily confused in the OCR process, or includes some knowledge of the fonts used in the document. Similarly, if the query comes from speech recognition, an index based on similar-sounding phonemes may be much more efficiently searched. An additional factor that may affect the use of the index in the described model is the importance of iterative feedback during the recognition process. If the search engine is able to provide feedback from the index as the text is being acquired, the accuracy of the acquisition can be greatly increased.

Indexing using offsets
If the index is likely to be searched using the offset-based/autocorrelation OCR methods described in Section 9, in some embodiments the system stores the appropriate offset or signature information in the index.

4.1.3. Multiple Indices
Lastly, in the described system it may be common to conduct searches on many indices. Indices may be maintained on several machines or on a corporate network. Partial indices may be downloaded to the acquisition device, or to a machine close to the acquisition device. Separate indices may be created for users or groups of users with particular interests, habits, or permissions. An index may exist for each file system, each directory, even each file on a user's hard disk. Indices are published and subscribed to by users and by systems. It will be important, then, to construct indices that can be distributed, updated, merged, and separated efficiently.

4.2. Handling the Queries
4.2.1. Knowing the Acquisition Is from Paper
A search engine may take different actions when it recognizes that a search query originated from a paper document. The engine might, for example, handle the query in a way that is more tolerant of the types of errors likely to appear in the particular acquisition method used.

  The engine may be able to deduce this from some indicator included in the query (for example, a flag indicating the nature of the acquisition), or it may deduce it from the query itself (for example, by recognizing errors or uncertainties typical of the OCR process).

  Alternatively, queries from an acquisition device can reach the engine by a different channel, port, or connection type than queries from other sources, and can be distinguished in that way. For example, in some embodiments of the system, queries are sent to the search engine through a dedicated gateway. The search engine can thus treat every query that passes through the dedicated gateway as originating from a paper document.

4.2.2. Use of Context
Section 13 below describes a variety of different factors that are external to the acquired text itself, yet can be a significant aid in identifying a document. These include such things as the history of the user's recent scans, the long-term reading habits of a particular user, the user's geographic location, and the user's recent use of particular electronic documents. Such factors are referred to herein as “context”.

  Some of the context may be handled by the search engine itself, and be reflected in the search results. For example, the search engine may keep track of a user's scanning history, and may also cross-reference this history against conventional keyboard-based queries. In such cases, the search engine maintains and uses more state information about each individual user than most conventional search engines do, and each interaction with the search engine may be considered to extend over several searches and a longer period of time than is currently typical.

  Some of the context may be transmitted to the search engine in the search query (Section 3.3), and may possibly be stored at the engine so as to play a part in future queries. Lastly, some of the context will best be handled elsewhere, and so becomes a filter or secondary search applied to the results produced by the search engine.

Data-stream input to search
An important input to the search process is the broader context of how the community of users is interacting with the rendered version of the document, for example, which documents are most widely read, and by whom. There are analogies with a web search that returns the pages that are most frequently linked to, or those most frequently selected from past search results. For further discussion of this topic, see Sections 13.4 and 14.2.

4.2.3. Document Sub-regions
The described system can handle not only information about documents as a whole, but also information about sub-regions of documents, even down to individual words. Many existing search engines concentrate simply on locating a document or file that is relevant to a particular query. Search engines that can work on a finer grain and identify a location within a document will provide a significant benefit for the described system.

4.3. Returning the Results
The search engine may use some of the additional information it now maintains to influence the results returned.

  The system may also return certain documents to which the user has access only as a result of being in possession of the paper copy (Section 7.4).

  The search engine may also offer new actions or options appropriate to the described system, beyond simple retrieval of the text.

5. Markup, Annotations and Metadata
In addition to performing the acquisition-search-retrieve process, the described system also associates extra functionality with documents, and in particular with specific locations or segments of text within a document. This extra functionality is often, though not exclusively, associated with the rendered document by being associated with its electronic counterpart. As an example, hyperlinks in a web page could have the same functionality when a printout of that web page is scanned. In some cases, the functionality is not defined in the electronic document, but is stored or generated elsewhere.

  This layer of added functionality is referred to as “markup”.

5.1. Overlays, Static and Dynamic
One way to think of the markup is as an “overlay” on the document, which can provide further information about, and may specify actions associated with, the document or some portion of it. The markup may include human-readable content, but is often invisible to the user and/or intended for machine use. Examples include options to be displayed in a popup menu on a nearby display when a user acquires text from a particular area in a rendered document, or audio samples that illustrate the pronunciation of a particular phrase.

5.1.1. Several Layers, Possibly from Several Sources
Any document may have multiple overlays simultaneously, and these may be sourced from a variety of locations. Markup data may be created or supplied by the author of the document, by the user, or by some other party.

  Markup data may be attached to the electronic document, or embedded in it. It may be found in a conventional location (for example, in the same place as the document but with a different filename suffix). Markup data may be included in the search results of the query that located the original document, or may be found by a separate query to the same or another search engine. Markup data may be found using the originally acquired text or other acquired information or contextual information, or it may be found using already-deduced information about the document and the location of the acquisition. Markup data may be found in a location specified in the document, even if the markup itself is not included in the document.
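As one illustration of the conventional-location approach, markup might be looked up beside the document under an agreed filename suffix. The ".markup" suffix below is an invented convention for illustration; any agreed-upon scheme would serve.

```python
from pathlib import Path

def find_markup(document_path):
    """Look beside a document for markup stored under a conventional name:
    the same filename with a '.markup' suffix (a hypothetical convention).
    Returns the markup path if present, otherwise None."""
    candidate = Path(document_path).with_suffix(".markup")
    return candidate if candidate.exists() else None
```

A resolver would typically try several such sources in order (embedded markup, conventional location, separate query) and merge whatever overlays it finds.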

  The markup may be largely static and specific to the document, similar to the way links on a traditional HTML web page are often embedded as static data within the HTML document, but markup may also be dynamically generated and/or applied to a large number of documents. An example of dynamic markup is information attached to a document that includes the up-to-date share prices of companies mentioned in that document. An example of broadly applied markup is translation information that is automatically available for multiple documents, or sections of documents, in a particular language.

5.1.2. Personal “Plug-in” Layers
Users may also install markup data, or subscribe to particular sources of it, thus personalizing the system's response to particular acquisitions.

5.2. Keywords and Phrases, Trademarks and Logos
Some elements in documents may have particular “markup” or functionality associated with them based on their own characteristics rather than on their location in a particular document. Examples include special marks that are printed in the document purely for the purpose of being scanned, as well as logos and trademarks that can link the user to further information about the organization concerned. The same applies to “keywords” or “key phrases” in the text. An organization might register a particular phrase with which it is associated, or with which it would like to be associated, and attach to it certain markup that will be available wherever that phrase is scanned.

  Any word or phrase may have associated markup. For example, the system may add certain items to a popup menu (for example, a link to an online bookstore) whenever the user acquires the word “book”, or the title of a book, or a topic related to books. In some embodiments of the system, digital counterpart documents or indices are consulted to determine whether the acquisition occurred near the word “book”, or the title of a book, or a topic related to books, and the system's behavior is modified in accordance with this proximity to the keyword elements. It should be noted that in the preceding example, markup enables data acquired from non-commercial text or documents to trigger a commercial transaction.
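A registry of key phrases and their markup actions might be consulted as in this sketch; the registry contents and the whitespace-delimited matching are invented for illustration, and a real system would also consider nearby text in the located digital counterpart.

```python
def keyword_actions(capture, registry):
    """Return the markup actions whose registered keyword or key phrase
    occurs in the acquired text. `registry` maps a phrase to an action
    label (both hypothetical)."""
    # normalize whitespace and case, and pad so phrase matches are word-aligned
    text = " {} ".format(" ".join(capture.lower().split()))
    return [action for phrase, action in registry.items()
            if " {} ".format(phrase.lower()) in text]
```

The returned action labels could then drive popup-menu entries or other behavior whenever the phrase is scanned, wherever it appears.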

5.3. User-Supplied Content
5.3.1. User Comments and Annotations, Including Multimedia
Annotations are another type of electronic information that may be associated with a document. For example, a user can attach an audio file of his thoughts about a particular document for later retrieval as voice annotations. As another example of a multimedia annotation, a user may attach a photograph of a place referred to in the document. The user generally supplies annotations for the document, but the system can associate annotations from other sources (for example, other users in a work group may share annotations).

5.3.2. Proofing notes An important example of user source markup is the annotation of paper documents as part of the proofreading, editing, or review process.

5.4. Third-Party Content
As mentioned earlier, markup data may often be supplied by third parties, such as other readers of the document. Online discussions and reviews are good examples, as are community-managed information relating to particular works, and translations and explanations contributed by volunteers.

  Another example of third party markup is that provided by an advertiser.

5.5. Dynamic Markup Based on Other Users' Data Streams
By analyzing the data acquired from documents by several or all users of the system, markup can be generated based on the activities and interests of a community. An example might be an online bookstore that creates markup or annotations that tell the user, in effect, “People who enjoyed this book also enjoyed...”. The markup may be less anonymous, and may tell the user which people on his contact list have recently read this document. Other examples of data-stream analysis are included in Section 14.

5.6. Markup Based on External Events and Data Sources
Markup will often be based on external events and data sources, such as input from a corporate database, information from the public Internet, or statistics gathered by the local operating system.

  Data sources may also be more local, and in particular may provide information about the user's context, such as the user's identity, location, and activities. For example, the system might communicate with the user's mobile phone and offer a markup layer that gives the user the option to send a document to somebody with whom the user has recently spoken on the phone.

6. Authentication, Personalization and Security
In many situations, the identity of the user will be known. Sometimes this will be an “anonymous identity”, where the user is identified only by the serial number of the acquisition device, for example. Typically, however, it is expected that the system will have a much more detailed knowledge of the user, which can be used to personalize the system and to allow activities and transactions to be performed in the user's name.

6.1. User history and "life library"
One of the simplest yet most useful functions that the system can perform is to keep a record for the user of the text that he has acquired, along with further information about the acquisition, including details of any documents found, the location within those documents, and any actions taken as a result.

  This stored history is beneficial to both the user and the system.

6.1.1. For the User
The user may be presented with a record of everything he has read and acquired, which may be called a “Life Library”. This may be kept simply out of personal interest, but may be used, for example, in a library by a researcher gathering material to be cited in his next paper.

  In some circumstances, the user may wish to make the library public, for example by publishing it in a manner similar to a weblog, so that others may see what he is reading and finds of interest.

  Lastly, in situations where the user acquires some text and the system cannot act on the acquisition immediately (for example, because an electronic version of the document is not yet available), the acquisition can be stored in the library and processed later, either automatically or in response to a user request. A user can also subscribe to new markup services and apply them to previously acquired scans.

6.1.2. For the System
A record of a user's past acquisitions is also useful for the system. Knowing the user's reading habits and history can enhance many aspects of the system's operation. The simplest example is that any scan made by a user is more likely to come from a document that the user has scanned in the recent past; in particular, if the previous scan was within the last few minutes, it is very likely to be from the same document. Similarly, it is more likely that a document is being read from start to finish, so, for English documents, later scans are likely to occur further down in the document. Such factors can help the system establish the location of an acquisition in cases of ambiguity, and can also reduce the amount of text that needs to be acquired.
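The recency heuristic could be expressed as an exponential-decay boost over the user's scan history; the half-life value below is an arbitrary illustration, not a constant prescribed by the described system.

```python
import math
import time

def recency_boost(doc, history, now, half_life=300.0):
    """Return a boost in [0, 1] for a candidate document, decaying with
    the time since the user last scanned within it. `history` maps a
    document id to the timestamp of the last scan in that document."""
    last = history.get(doc)
    if last is None:
        return 0.0
    return 0.5 ** ((now - last) / half_life)

def rank_candidates(candidates, history, now=None, half_life=300.0):
    """Order ambiguous candidate documents so that recently scanned ones
    come first; unseen documents keep a boost of zero."""
    now = time.time() if now is None else now
    return sorted(candidates,
                  key=lambda d: recency_boost(d, history, now, half_life),
                  reverse=True)
```

A scan a few minutes old dominates the ranking, while one from hours earlier contributes almost nothing, matching the intuition stated above.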

6.2. Scanner as Payment, Identity and Authentication Device
Because the acquisition process generally begins with a device of some sort, typically an optical scanner or voice recorder, this device may be used as a key that identifies the user and authorizes certain actions.

6.2.1. Associating the Scanner with a Phone or Other Account
The device may be embedded in a mobile phone, or in some other way associated with a mobile phone account. For example, a scanner may be associated with a mobile phone account by inserting a SIM card associated with the account into the scanner. Similarly, the device may be embedded in a credit card or other payment card, or may have the facility for such a card to be connected to it. The device may therefore be used as a payment token, and financial transactions may be initiated by an acquisition from the rendered document.

6.2.2. Using Scanner Input for Authentication A scanner may be associated with a user or account through the process of scanning any token, symbol, or text associated with a particular user or account. The scanner may also be used for biometric authentication, for example by scanning a user's fingerprint. In the case of a voice-based acquisition device, the system can identify a user by matching the user's voiceprint or by requesting the user to say a certain password or phrase.

  For example, where a user scans a quotation from a book and is offered the option of buying the book from an online retailer, the user can select this option and is then prompted to scan his fingerprint to confirm the transaction.

  See also Sections 15.5 and 15.6.

6.2.3. Secure Scanning Device
When the acquisition device is used to identify and authenticate the user, and to initiate transactions on behalf of the user, it is important that communications between the device and other parts of the system are secure. It is also important to guard against such situations as another device impersonating the scanner, and so-called “man-in-the-middle” attacks, in which communications between the device and other components are intercepted.

  Techniques for providing such security are well understood in the art; in various embodiments, the hardware and software in the device, and elsewhere in the system, are configured to implement such techniques.

7. Publishing Models and Elements
An advantage of the described system is that there is no need to alter the traditional processes of creating, printing, or publishing documents in order to gain many of the system's benefits. There are reasons, though, why the creators or publishers of a document, hereafter simply referred to as the “publishers”, may wish to create functionality in support of the described system.

  This section is primarily concerned with the published documents themselves. For information about other related commercial transactions, such as advertising, see Section 10, entitled “P-Commerce”.

7.1. The Electronic Companion to Printed Documents
The system allows a printed document to have an associated electronic presence. Conventionally, publishers often ship a CD-ROM with a book that contains further digital information, tutorial movies and other multimedia data, sample code or documents, or further reference materials. In addition, some publishers maintain websites associated with particular publications that provide such materials, as well as information that may be updated after the time of publishing, such as errata, further comments, updated reference materials, additional sources of relevant data, and translations into other languages. Online forums allow readers to contribute their comments about a publication.

  The described system allows such materials to be much more closely tied to the rendered document than ever before, and allows their discovery and interaction with them to be much easier for the user. By acquiring a portion of text from the document, the system can automatically connect the user to digital materials associated with the document, and, more particularly, associated with that specific part of the document. Similarly, the user can be connected to annotations and commentary by online communities discussing that section of the text, or by other readers. In the past, such information would typically have to be found by searching for a particular page number or chapter.

  An example application of this is in the area of academic textbooks (Section 17.5).

7.2. "Subscriptions" to Printed Documents
Some publishers may have mailing lists to which readers can subscribe if they wish to be notified of new relevant matter or when a new edition of a book is published. With the described system, the user can register an interest in a particular document, or part of a document, more easily, in some cases even before the publisher has considered providing any such functionality. The reader's interest can be fed to the publisher, possibly affecting decisions about when and where to provide updates, further information, new editions, or even entirely new publications on topics that have proved to be of interest in existing books.

7.3. Printed Marks with Special Meaning or Containing Special Data
Many aspects of the system are enabled simply through the use of the text already existing in a document. If, however, the document is produced in the knowledge that it may be used in conjunction with the system, extra functionality can be added by printing extra information in the form of special marks. These marks can identify the text or a required action more closely, or otherwise enhance the document's interaction with the system. The simplest and most important example is an indication to the reader that the document is definitely accessible through the system. A special icon might be used, for example, to indicate that this document has an online discussion forum associated with it.

  Such symbols may be intended purely for the reader, or they may be recognized by the system when scanned and used to initiate some action. Sufficient data may be encoded in the symbol to identify more than just the symbol itself: for example, it may also store information about the document, the edition, and the location of the symbol, which can be recognized and read by the system.

7.4. Authorization through Possession of the Paper Document
There are some situations where possession of, or access to, the printed document would entitle the user to certain privileges, such as access to an electronic copy of the document or to additional materials. With the described system, such privileges can be granted simply as a result of the user acquiring a portion of text from the document, or scanning specially printed symbols. In cases where the system needs to verify that the user is in possession of the entire document, it might prompt the user to scan particular items or phrases from particular pages, for example “page 46, line 2”.
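A possession check of this kind could be sketched as a simple challenge-response. The page/line data structure and the whitespace- and case-tolerant comparison are assumptions for illustration, not part of the described system.

```python
import random

def possession_challenge(document_pages, rng=None):
    """Pick a random page and line and ask the user to scan it, as proof
    that the user holds the whole printed document. `document_pages` is a
    list of pages, each a list of line strings (a hypothetical layout)."""
    rng = rng or random.Random()
    page = rng.randrange(len(document_pages))
    line = rng.randrange(len(document_pages[page]))
    prompt = f"Please scan page {page + 1}, line {line + 1}"
    expected = document_pages[page][line]
    return prompt, expected

def verify(scanned_text, expected):
    """Compare the scanned text to the expected line, tolerating case and
    spacing differences introduced by the scan."""
    normalize = lambda s: " ".join(s.lower().split())
    return normalize(scanned_text) == normalize(expected)
```

Randomizing the challenged location is what makes the check meaningful: a user holding only a scanned excerpt cannot predict which page and line will be requested.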

7.5. Documents That Expire
If the printed document is a gateway to extra materials and functionality, access to such features can also be made time-limited. After the expiry date, a user may be required to pay a fee, or to obtain a newer version of the document, to access the features again. The paper document will, of course, still be usable, but some of its enhanced electronic functionality will be lost. This may be desirable, for example, because there is profit for the publisher in receiving fees for access to electronic materials, or in requiring the user to purchase new editions from time to time, or because there are disadvantages associated with outdated versions of the printed document remaining in circulation. Coupons are an example of a type of commercial document that may have an expiration date.

7.6. Popularity Analysis and Publishing Decisions
Section 10.5 discusses the use of the system's statistics to influence the compensation of authors and the pricing of advertisements.

  In some embodiments, the system deduces the popularity of a publication from the activity in the electronic community associated with it, as well as from the use of the paper document. These factors may help publishers to make decisions about what to publish in the future. For example, if a chapter in an existing book turns out to be exceedingly popular, it may be worth developing it into a separate publication.

8. Document Access Services An important aspect of the described system is the ability to provide a user who has access to a rendered copy of a document with access to an electronic version of that document. In some cases, the document is freely available on a public network or a private network to which the user has access. The system uses the captured text to identify, locate, and retrieve the document, in some cases displaying it on the user's screen or depositing it in their email inbox.

  In some cases, a document will be available in electronic form but, for a variety of reasons, may not be accessible to the user. There may not be sufficient connectivity to retrieve the document, the user may not be entitled to retrieve it, there may be a cost associated with gaining access to it, or the document may have been withdrawn and possibly replaced by a new version, to name just a few possibilities. The system typically provides feedback to the user about these situations.

  As mentioned in Section 7.4, the degree or nature of the access granted to a particular user may be different if it is known that the user already has access to a printed copy of the document.

8.1. Authenticated document access Access to the document may be restricted to specific users, or to those meeting particular criteria, or may only be available in certain circumstances, for example when the user is connected to a secure network. Section 6 describes some of the ways in which the credentials of users and scanners may be established.

8.2. Document purchase - copyright-owner compensation Documents that are not freely available to the general public may often still be accessible on payment of a fee, which is often passed as compensation to the publisher or copyright holder. The system may implement payment facilities directly, or may make use of other payment methods associated with the user, including those described in Section 6.2.

8.3. Document escrow and proactive retrieval Electronic documents are often transient; the digital source version of a rendered document may be available now but inaccessible in the future. The system may retrieve and store the existing version on behalf of the user, even if the user has not requested it, thus guaranteeing its availability should the user request it in the future. This also makes it available for the system's own use, for example for searching as part of the process of identifying future captures.

  In the event that payment is required for access to the document, a trusted "document escrow" service can retrieve the document on behalf of the user, perhaps on payment of a modest fee, with the assurance that the copyright holder will be fully compensated in the future if the user should ever request the document from the service.

  Variations on this theme can be implemented if the document is not available in electronic form at the time of capture. The user can authorize the service to submit a request for, or make a payment for, the document on his or her behalf if the electronic document should become available at a later date.

8.4. Association with other subscriptions and accounts Fees may be waived, reduced, or satisfied based on the user's association with other existing accounts or subscriptions. Subscribers to the printed version of a newspaper, for example, may automatically be entitled to retrieve the electronic version.

  In other cases, the association may be less direct: a user may be granted access based on an account established by their employer, or based on their scanning of a printed copy owned by a friend who is a subscriber.

8.5. Replacing photocopying with scan-and-print The process of capturing text from a paper document, identifying the electronic original, and printing that original, or some portion of it associated with the capture, forms an alternative to traditional photocopying with many advantages:
The paper document need not be in the same location as the final printout, and in any case need not be there at the same time;
The wear and damage caused to documents by the photocopying process, especially to old, fragile, and valuable documents, can be avoided;
The quality of the copy is typically much higher;
Records may be kept about which documents, or which portions of documents, are most frequently copied;
Payment may be made to the copyright owner as part of the process;

  Unauthorized copying may be prohibited.

8.6. Locating valuable originals from copies When documents are particularly valuable, as in the case of legal instruments or documents that have historical or other particular significance, people may typically work from copies of those documents, often for many years, while the originals are kept in a safe location.

  The described system could be coupled to a database that records the location of an original document, for example in an archiving warehouse, making it easy for somebody with access to a copy to locate the archived original paper document.

9. Text Recognition Technologies Optical character recognition (OCR) technologies have traditionally focused on images that include a large amount of text, for example from a flatbed scanner capturing a whole page. OCR technologies often need substantial training and correcting by the user to produce useful text. They often require substantial processing power on the machine doing the OCR, and, while many systems use a dictionary, they are generally expected to operate on an effectively infinite vocabulary.

  In the described system, however, all of these traditional characteristics may be improved upon.

  While this section focuses on OCR, many of the issues discussed map directly onto other recognition technologies, in particular speech recognition. As mentioned in Section 3.1, the process of capturing from paper may be achieved by a user reading the text aloud into a device that captures audio. Those skilled in the art will appreciate that the principles discussed here with respect to images, fonts, and text fragments often also apply to audio samples, user speech models, and phonemes.

9.1. Optimization for appropriate devices A scanning device for use with the described system will often be small, portable, and low-power. The scanning device may capture only a few words at a time, and in some implementations does not even capture a whole character at once, but rather a horizontal slice through the text, with many such slices being stitched together to form a recognizable signal from which the text may be deduced. The scanning device may also have very limited processing power or storage, so, while in some embodiments it may perform all of the OCR process itself, many embodiments will depend on a later connection to a more powerful device to convert the captured signals into text. Lastly, it may have very limited facilities for user interaction, and so may need to defer any requests for user input until later, or operate in a "best-guess" mode to a greater degree than is currently common.

9.2. "Uncertain" OCR
The primary new characteristic of OCR within the described system is the fact that it will, in general, examine images of text that exists elsewhere and that may be retrieved in digital form. An exact transcription of the text is therefore not always required from the OCR engine. The OCR system may output a set or a matrix of possible matches, in some cases including probability weightings, which can still be used to search for the digital original.
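As an illustration of how such weighted candidate sets might drive a search, the toy sketch below scores each document in a small corpus by the summed weights of the OCR candidates it contains and returns the best match. The data shapes and the scoring rule are invented for illustration only, not taken from the described system:

```python
def score_documents(candidate_words, documents):
    """Rank documents against uncertain OCR output.

    candidate_words: one {candidate string: probability weight} dict
                     per scanned word position.
    documents:       mapping of document id -> document text.
    Returns the id of the best-scoring document.
    """
    scores = {}
    for doc_id, text in documents.items():
        words = set(text.lower().split())
        # For each scanned position, credit the weight of the best
        # candidate that actually appears in this document.
        scores[doc_id] = sum(
            max((w for cand, w in cands.items() if cand in words), default=0.0)
            for cands in candidate_words
        )
    return max(scores, key=scores.get)
```

Even when no single candidate is certain, the document containing the most (and most probable) candidates wins, which is the point of accepting uncertain OCR output.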

9.3. Iterative OCR - guess, disambiguate, guess...
If the device performing the recognition is able to contact the document index at the time of processing, then the OCR process can be informed by the contents of the document corpus as it progresses, potentially offering substantially greater recognition accuracy.

  Such a connection will also allow the device to inform the user when sufficient text has been captured to identify the digital source.

9.4. Using knowledge of likely rendering If the system has knowledge of aspects of the likely printed rendering of a document (such as the font typeface used in printing, or the layout of the page, or which sections are in italics) this too can help in the recognition process. (Section 4.1.1.)
9.5. Font caching - determine fonts on host, download to client Once candidate source texts in the document corpus have been identified, the font, or a rendering of it, may be downloaded to the device to help with the recognition.

9.6. Autocorrelation and character offsets While the component characters of a text fragment may be the most recognized way to represent a fragment of text that is used as a document signature, other representations of the text may work sufficiently well that the actual text of the fragment need not be used when attempting to locate the fragment in a digital document or database, or when disambiguating the representation of the fragment into a readable form. Other representations of text fragments may provide benefits that actual text representations lack. For example, optical character recognition of text fragments is often prone to errors, unlike some other representations of captured text fragments that may be used to search for and/or recreate a fragment without resorting to optical character recognition of the entire fragment. Such methods may be more appropriate for some devices used with the current system.

  Those of ordinary skill in the art, and others, will appreciate that there are many ways of describing the appearance of text fragments. Such characterizations of text fragments may include, but are not limited to, word lengths, relative word lengths, character heights, character widths, character shapes, character frequencies, token frequencies, and the like. In some embodiments, the offsets between matching text tokens (i.e., the number of intervening tokens plus one) are used to characterize fragments of text.

  Conventional OCR uses knowledge about fonts, letter structure, and shape to attempt to determine characters in scanned text. Embodiments of the present invention are different: they employ a variety of methods that use the rendered text itself to assist in the recognition process. These embodiments use characters (or tokens) to "recognize each other". One way to refer to such self-recognition is "template matching", which is similar to "convolution". To perform such self-recognition, the system slides a copy of the text horizontally over itself and notes matching regions of the text images. Prior template-matching and convolution techniques encompass a variety of related techniques. Because the text is used to correlate with its own component parts when matching characters/tokens, these techniques for tokenizing and/or recognizing characters/tokens will be collectively referred to herein as "autocorrelation".

  When autocorrelating, complete connected regions that match are of interest. This occurs when characters (or groups of characters) overlay other instances of the same characters (or groups). Complete connected regions that match automatically provide tokenizing of the text into component tokens. As the two copies of the text are slid past each other, the regions where perfect matching occurs (i.e., all pixels in a vertical slice are matched) are noted. When a character/token matches itself, the horizontal extent of this matching (e.g., the connected matching portion of the text) also matches.

  Note that at this stage there is no need to determine the actual identity of each token (i.e., the particular letter, digit, or symbol, or group of these, that corresponds to the token image), only the offset to the next occurrence of the same token in the scanned text. The offset number is the distance (number of tokens) to the next occurrence of the same token. If the token is unique within the text string, the offset is zero (0). The sequence of token offsets thus generated is a signature that can be used to identify the scanned text.
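The token-offset signature just described can be sketched in a few lines of Python. Treating tokens as plain strings is an illustrative simplification: in the described scheme the tokens are unidentified image regions matched by autocorrelation, not recognized characters.

```python
def token_offsets(tokens):
    """Compute the offset signature of a token sequence.

    Each position holds the distance (in tokens) to the next occurrence
    of the same token, or zero if the token does not occur again.
    """
    offsets = []
    for i, tok in enumerate(tokens):
        try:
            offsets.append(tokens.index(tok, i + 1) - i)
        except ValueError:
            offsets.append(0)  # token is unique in the remainder
    return offsets
```

For the token sequence b, a, n, a, n, a this yields [0, 2, 2, 2, 0, 0]; such a sequence can be matched against an index of documents characterized by the token offsets of their contents, as Section 4.1.2 contemplates.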

  In some embodiments, the sequence of token offsets determined for a string of scanned tokens is compared to an index that indexes a collection of electronic documents based upon the token offsets of their contents (Section 4.1.2). In other embodiments, the token offsets determined for a string of scanned tokens are converted to text, and compared to a more conventional index that indexes the collection of electronic documents based upon their contents.

  As mentioned earlier, if the acquisition process consists of speech samples of spoken words, a similar token correlation process can be applied to the speech fragments.

9.7. Font / character "self-recognition"
Conventional template-matching OCR compares scanned images to a library of character images. In essence, the alphabet is stored for each font, and newly scanned images are compared to the stored images to find matching characters. The process typically has an initial delay until the correct font has been identified; after that, the OCR process is relatively quick, because most documents use the same font throughout, and subsequent images can be converted to text by comparison with the most recently identified font library.

  The shapes of characters in most commonly used fonts are related. For example, in most fonts, the letter "c" and the letter "e" are visually related, as are "t" and "f". The OCR process is enhanced by use of this relationship to construct templates for letters that have not been scanned yet. For example, where a reader scans a short string of text from a paper document in a font that has never been encountered before, the system may not have a set of image templates with which to compare the scanned images. The system can leverage the probable relationships between certain characters to construct the font template library even though it has not yet encountered all of the letters in the alphabet. The system can then use the constructed font template library to recognize subsequently scanned text and to further refine the constructed font library.
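A minimal sketch of such a growing template library follows, assuming glyphs are same-sized bitmaps of values in [0, 1] and using a simple mean per-pixel difference as the match criterion; the representation and the threshold are invented for illustration:

```python
def pixel_error(a, b):
    """Mean absolute per-pixel difference between two same-sized bitmaps
    (nested lists of 0.0-1.0 values)."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    return sum(abs(x - y) for x, y in zip(flat_a, flat_b)) / len(flat_a)

def classify(glyph, templates, threshold=0.2):
    """Return the label of the closest stored template, or None if no
    template matches within the threshold (i.e. an unseen character)."""
    best_label, best_err = None, float("inf")
    for label, tmpl in templates.items():
        err = pixel_error(glyph, tmpl)
        if err < best_err:
            best_label, best_err = label, err
    return best_label if best_err < threshold else None

def learn(glyph, label, templates):
    """Add a confirmed glyph to the library so that later occurrences
    of the same character can be recognized."""
    templates[label] = glyph
```

A fuller implementation could seed `templates` with entries derived from already-confirmed glyphs of visually related characters, as the paragraph above suggests, rather than waiting to encounter every letter.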

9.8. Anything unrecognized (including graphics) can be sent to the server When images cannot be machine-transcribed into a form suitable for use in a search process, the images themselves can be saved for later use by the user, for possible manual transcription, or for processing at a later date when different resources may become available to the system.

10. P-Commerce Many of the actions made possible by the system result in some commercial transaction taking place. The phrase "p-commerce" is used herein to describe commercial activity initiated from paper via the system.

10.1. Sales of documents from their physical printed copies When a user captures text from a document, the user may be offered that document for purchase either in paper or electronic form. The user may also be offered related documents, such as those quoted or otherwise referred to in the paper document, or those on a similar subject, or those by the same author.

10.2. Sales of anything else initiated or aided by paper The capture of text may be linked to other commercial activities in a variety of ways. The captured text may be in a catalog that is explicitly designed to sell items, in which case the text will be associated fairly directly with the purchase of an item (Section 18.2). The text may also be part of an advertisement, in which case a sale of the item being advertised may ensue.

  In other cases, the user captures other text from which their potential interest in a commercial transaction may be deduced. A reader of a novel set in a particular country, for example, might be interested in a holiday there. Someone reading a review of a new car might be considering purchasing it. The user may capture a particular fragment of text knowing that some commercial opportunity will be presented to them as a result, or it may be a side effect of their capture activities.

10.3. Capture of labels, icons, serial numbers, and barcodes on an item resulting in a sale Sometimes text or symbols are actually printed on an item or its packaging. An example is the serial number or product ID often found on a label on the back or underside of a piece of electronic equipment. The system can offer the user a convenient way to purchase one or more of the same items by capturing that text. The user may also be offered manuals, support, or repair services.

10.4. Contextual advertisements In addition to the direct capture of text from an advertisement, the system allows for a new kind of advertising that is not necessarily explicitly present in the rendered document, but is nonetheless based on what people are reading.

10.4.1. Advertising based on scan context and history In a traditional paper publication, advertisements generally consume a large amount of space relative to the text of a newspaper article, and only a limited number of them can be placed around a particular article. In the described system, advertising can be associated with individual words or phrases, and can be selected according to the particular interest the user has shown by capturing that text, possibly also taking into account their history of past scans.

  With the described system, it is also possible to tie purchases back to particular printed documents, and to give advertisers considerably more feedback about the effectiveness of their advertising in particular print publications.

10.4.2. Advertising based on user context and history The system may gather a large amount of information about other aspects of a user's context for its own use (Section 13); a good estimate of the user's geographic location is one example. Such data can also be used to tailor the advertising presented to a user of the system.

10.5. Models of compensation The system enables some new models of compensation for advertisers and marketers. The publisher of a printed document containing advertisements may receive some income from a purchase that originated from that document. This may be true whether or not the advertisement existed in the original printed form; it may have been added electronically, either by the publisher, the advertiser, or some third party, and the user may have subscribed to the source of such advertising.

10.5.1. Popularity-based compensation Analysis of the statistics generated by the system can reveal the popularity of certain parts of a publication (Section 14.2). In a newspaper, for example, it might reveal the amount of time readers spend looking at a particular page or article, or the popularity of a particular columnist. In some circumstances, it may be appropriate for an author or publisher to receive compensation based on the activities of the readers rather than on more traditional metrics such as the number of words written or the number of copies distributed. An author whose work frequently becomes an authoritative, often-read source on a subject might be considered differently in future contracts from one whose books have sold the same number of copies but are rarely opened. (See also Section 7.6.)
10.5.2. Popularity-based advertising Decisions about advertising in a document may also be based on statistics about the readership. The advertising space around the most popular columnists may be sold at a premium. Advertisers might even be charged or compensated some time after the document is published, based on knowledge of how it was received.

10.6. Life library-based marketing The "life library" or scan history described in Sections 6.1 and 16.1 can be an extremely valuable source of information about the interests and habits of a user. Subject to appropriate consent and privacy safeguards, such data can inform offers of goods or services to the user. Even in anonymized form, the statistics gathered can be very useful.

10.7. Sale/information at a later date (when products become available)
Advertising and other opportunities for commercial transactions may not be presented to the user immediately upon capture of the text. For example, the opportunity to purchase a sequel to a novel may not be available at the time the user is reading the novel, but the system may present that opportunity to the user when the sequel is published.

  A user may capture data that relates to a purchase or other commercial transaction, but may choose not to initiate and/or complete the transaction at the time the capture is made. In some embodiments, data related to captures is stored in a user's life library, and these life library entries may remain "active" (i.e., capable of subsequent interactions similar to those available at the time the capture was made). Thus a user may review a capture at some later time and optionally complete a transaction based on it. Because the system can keep track of when and where the original capture occurred, all the parties involved in the transaction can be properly compensated. For example, the author who wrote the story next to the advertisement from which the user captured data, and the publisher who published that story, may be compensated when, six months later, the user visits their life library, selects that particular capture from the history, and chooses "Purchase this item at Amazon" from the pop-up menu (which may be similar or identical to the menu optionally presented at the time of capture).

11. Operating System and Application Integration Modern operating systems (OSs) and other software packages have many characteristics that can be advantageously exploited for use with the described system, and may also be modified in various ways to provide an even better platform for its use.

11.1. Incorporating scan and print-related information into metadata and indexing Current and upcoming file systems, and their associated databases, often have the ability to store a variety of metadata associated with each file. Traditionally, this metadata has included such things as the ID of the user who created the file, and the dates of creation, last modification, and last use. Newer file systems allow such extra information as keywords, image characteristics, document sources, and user comments to be stored, and in some systems this metadata can be arbitrarily extended. File systems can therefore be used to store information that would be useful in implementing the current system. For example, the date and time when a document was last printed can be stored by the file system, as can details about which text from it has been captured from paper using the described system, when, and by whom.
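Where a file system does not offer extensible attributes, the same idea can be sketched with a JSON "sidecar" file holding the capture records; the record fields and the sidecar convention below are illustrative assumptions (a real system might instead use POSIX extended attributes or NTFS alternate data streams):

```python
import json
import time
from pathlib import Path

def record_capture(doc_path, captured_text, user):
    """Append a capture record to a JSON sidecar file next to the document."""
    sidecar = Path(str(doc_path) + ".meta.json")
    records = json.loads(sidecar.read_text()) if sidecar.exists() else []
    records.append({
        "user": user,              # who captured text from the paper copy
        "text": captured_text,     # what was captured
        "timestamp": time.time(),  # when the capture occurred
    })
    sidecar.write_text(json.dumps(records, indent=2))
    return records
```

Indexing software could then consult these sidecars when ranking a document as a likely source for a future capture.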

  Operating systems are also beginning to incorporate search engine facilities that allow users to find local files more easily. These facilities can be advantageously used by the system. Many of the search-related concepts discussed in Sections 3 and 4 apply not just to today's Internet-based and similar search engines, but also to every personal computer.

  Certain software applications may include system support in addition to the functions provided by the OS.

11.2. OS support for capture devices
As the use of capture devices such as pen scanners becomes more common, it will become desirable to build support for them into the operating system, in much the same way as support is provided for mice and printers, since the applicability of capture devices extends beyond a single software application. The same will be true for other aspects of the system's operation. Some examples are discussed below. In some embodiments, the entire described system, or the core of it, is provided by the OS. In some embodiments, support for the system is provided by application programming interfaces (APIs) that can be used by other software packages, including those directly implementing aspects of the system.

11.2.1. Support for OCR and other recognition technologies Most of the methods of capturing text from a rendered document require some recognition software to interpret the source data, typically a scanned image or some spoken words, as text suitable for use in the system. Some OSs include support for speech or handwriting recognition, though it is less common for OSs to include support for OCR, since in the past the use of OCR has typically been limited to a small range of applications.

  As recognition components become part of the OS, they can take better advantage of other facilities provided by the OS. Many systems include, for example, spelling dictionaries, grammar-analysis tools, and internationalization and localization facilities, all of which can be advantageously employed by the described system in its recognition process, especially since they may have been customized for a particular user to include words and phrases that he or she frequently encounters.

  If the operating system includes full-text indexing capabilities, these can also be used to inform the recognition process, as described in Section 9.3.

11.2.2. Actions taken at scan time The OS may have a default action to be taken when an optical scan or other capture occurs and is presented to it, in the event that no other subsystem claims ownership of the capture. Examples of default actions include presenting the user with a choice of alternatives, or submitting the captured text to the OS's built-in search facilities.

11.2.3. OS default actions for particular documents or document types If the digital source of the rendered document is found, the OS may have a standard action that it will take when that particular document, or a document of that class, is scanned. Applications and other subsystems may register with the OS as potential handlers of particular types of capture, in a similar manner to the announcement by applications of their ability to handle certain file types.
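Such a registration mechanism might look like the following sketch, in which subsystems register callables for a capture type and the OS dispatches to the first handler that claims the capture; the names and the fallback behavior are hypothetical illustrations:

```python
handlers = {}  # capture type -> list of registered handler callables

def register_handler(capture_type, handler):
    """Register an application as a potential handler for a capture type,
    much as applications announce the file types they can open."""
    handlers.setdefault(capture_type, []).append(handler)

def dispatch(capture_type, capture_data):
    """Offer the capture to registered handlers in order; a handler claims
    ownership by returning a non-None result.  Unclaimed captures fall
    back to a default action (cf. Section 11.2.2)."""
    for handler in handlers.get(capture_type, []):
        result = handler(capture_data)
        if result is not None:
            return result
    return ("search", capture_data)  # e.g. submit to built-in search
```

For example, a bookstore application might register for "isbn" captures, leaving all other captures to fall through to the OS's search facility.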

  The rendered document, or markup data associated with captures from it, may include instructions to the operating system to launch specific applications, or to pass to applications arguments, parameters, data, or the like.

11.2.4. Interpretation of gestures and mapping into standard actions Section 12.1.3 discusses the use of "gestures", particularly in the case of optical scanning, where particular movements made with a portable scanner might represent standard actions such as marking the start and end of a region of text.

  This is analogous to such actions as pressing the shift key on a keyboard while using the cursor keys to select a region of text, or using the wheel on a mouse to scroll a document. Such actions by the user are sufficiently standard that they are interpreted in a system-wide way by the OS, thus ensuring consistent behavior. The same is desirable for scanner gestures and other scanner-related actions.

11.2.5. Set response to standard (and non-standard) iconic/text printed menu items Similarly, certain items of text or other symbols may, when scanned, cause standard actions to occur, and the OS may provide a selection of these. An example might be that scanning the text "[print]" in any document would cause the OS to retrieve and print a copy of that document. The OS may also provide a way to register such actions and associate them with particular scans.

11.3. Support in system GUI components for typical scan-initiated activities Most software applications are based substantially on standard graphical user interface components provided by the OS.

  Use of these components by developers helps to ensure consistent behavior across multiple packages, for example so that pressing the left cursor key in any text-editing context should move the cursor to the left, without every programmer having to implement the same functionality independently.

  Similar consistency in these components is desirable when the activities are initiated by text capture or other aspects of the described system. Some examples are given below.

11.3.1. Interface to find particular text content A typical use of the system may be for a user to scan an area of a paper document, and for the system to open the electronic counterpart in a software package that is able to display or edit it, and to cause that package to scroll to and highlight the scanned text (Section 12.2.1). The first part of this process, finding and opening the electronic document, is typically provided by the OS and is standard across software packages. The second part, however (locating a particular piece of text within the document and causing the package to scroll to it and highlight it), is not yet standardized and is often implemented differently by each package. The availability of a standard API for this functionality could greatly enhance the operation of this aspect of the system.
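The core of such an API might be a locate step like the sketch below, which finds the character range of a captured fragment in the electronic document while tolerating whitespace and line-break differences; the function name and return convention are assumptions for illustration (a real implementation would then drive the displaying package to scroll to and highlight the range):

```python
import re

def locate_fragment(document_text, fragment):
    """Return (start, end) character offsets of the captured fragment
    within the electronic document, or None if it is not present.

    The fragment's words are matched across arbitrary whitespace, since
    line breaks in the printed rendering rarely match the source text.
    """
    pattern = r"\s+".join(re.escape(word) for word in fragment.split())
    match = re.search(pattern, document_text)
    return (match.start(), match.end()) if match else None
```

With the offsets in hand, a "scroll to and highlight" call on the displaying application completes the second, currently non-standardized, part of the process.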

11.3.2. Text interactions Once a piece of text has been located within a document, the system may wish to perform a variety of operations upon that text. As an example, the system may request the surrounding text, so that the user's capture of a few words could result in the system accessing the entire sentence or paragraph containing them. Again, this functionality is not implemented in all text-handling software, but can be usefully provided by the OS.

11.3.3. Contextual (pop-up) menus Some of the operations enabled by the system will require user feedback, and this may optimally be requested within the context of the application handling the data. In some embodiments, the system uses the application's pop-up menus traditionally associated with clicking the right mouse button on some text. The system inserts extra options into such menus, and causes them to be displayed as a result of activities such as scanning a paper document.

11.4. Web/network interfaces In today's increasingly networked world, much of the functionality available on individual machines can also be accessed over a network, and the functionality associated with the described system is no exception. As an example, in an office environment, many paper documents received by a user may have been printed by other users' machines on the same corporate network. The system on one computer, in response to a capture and subject to appropriate permission controls, may be able to query those other machines for documents that may correspond to that capture.

11.5. Printing of document causes saving An important factor in the integration of paper and digital documents is the maintenance of as much information as possible about the transitions between them. In some embodiments, the OS keeps a simple record of when any document was printed and by whom. In some embodiments, the OS takes one or more further actions that would make it better suited for use with the described system. Examples include:

Saving a digitally rendered version of every document printed, along with information about the source from which it was printed;
Saving a subset of useful information about the printed version that may aid in the interpretation of future scans, for example the fonts used and where the line breaks occur;
Saving the version of the source document associated with any printed copy;
Indexing the document automatically at the time of printing and storing the results for future searching.

11.6. My (printed/scanned) documents An OS often maintains certain categories of folders or files that have particular significance. A user's documents may, by convention or design, be found in a "My Documents" folder, for example. Standard file-open dialogs may automatically include a list of recently opened documents.

  In an OS optimized for use with the described system, such categories may be augmented or extended in ways that take into account a user's interaction with the paper versions of the stored files. Categories such as "My Printed Documents" or "My Recently-Read Documents" might usefully be identified and incorporated into its operations.

11.7. OS-level markup hierarchies Since important aspects of the system are typically provided using the "markup" concepts discussed in Section 5, it would clearly be advantageous to have support for such markup provided by the OS in a way that is accessible to multiple applications as well as to the OS itself. In addition, layers of markup may be provided by the OS itself, based on its own knowledge of the documents under its control and the facilities it is able to provide.

11.8. Use of OS DRM facilities An increasing number of operating systems support some form of "digital rights management": the ability to control the use of particular data according to the rights granted to a particular user, software entity, or machine. It may inhibit unauthorized copying or distribution of a particular document, for example.

12. User interface The user interface of the system may be entirely on the PC, if the capture device is relatively dumb and is connected to it by a cable, or entirely on the device, if it is sophisticated and has significant processing power of its own. In some cases, some functionality resides in each component. Part, or indeed all, of the system's functionality may also be implemented on other devices such as mobile phones or PDAs.

  The descriptions in the following sections therefore indicate what may be desirable in certain implementations, but these will not necessarily be appropriate for all implementations, and may be modified in several ways.

12.1. On the Capture Device With all capture devices, but especially with optical scanners, the user's attention while scanning will generally be on the device and the paper. It is therefore highly desirable that any input and feedback needed as part of the scanning process do not require the user's attention to be elsewhere, for example on the screen of a computer.

12.1.1. Feedback on the Scanner A handheld scanner may have a variety of ways of providing feedback to the user about particular conditions. The most obvious types are direct visual feedback, where the scanner incorporates indicator lights or even a full display, and auditory feedback, where the scanner can make beeps, clicks, or other sounds. Important alternatives include tactile feedback, where the scanner can vibrate, buzz, or otherwise stimulate the user's sense of touch, and projected feedback, where the scanner indicates status by projecting anything from a colored spot of light to a sophisticated display onto the paper.

  Important immediate feedback that can be provided on the device includes:

Feedback on the scanning process: the user is scanning too fast, at too great an angle, or drifting too high or too low on a particular line;
Sufficient content: enough has been captured to be likely to find a match if one exists (important for disconnected operation);
Context known: a source of the text has been located;
Unique context known: one unique source of the text has been located;
Availability of content: an indication of whether the content is freely available to the user, or available at a cost.
Many of the user interactions normally associated with the later stages of the system may also take place on the capture device if, for example, it has sufficient capability to display part or all of the document.

12.1.2. Controls on the Scanner In addition to basic text capture, the device may offer the user a variety of ways to provide input. Even when the device is in close association with a host machine that has input options such as a keyboard and mouse, it can be disruptive for the user to switch back and forth between manipulating the scanner and using, say, a mouse.

  The handheld scanner may have buttons, a scroll/jog wheel, a touch-sensitive surface, and/or an accelerometer for detecting movement of the device. Some of these allow a richer set of interactions while still holding the scanner.

  For example, in response to scanning some text, the system presents the user with a set of several possible matching documents. The user uses a scroll wheel on the side of the scanner to select one from the list, and clicks a button to confirm the selection.

12.1.3. Gestures The primary reason for moving a scanner across the paper is to capture text, but some movements may be detected by the device and used to indicate other intentions of the user. Such movements are referred to herein as "gestures".

  As an example, the user can indicate a large region of text by scanning the first few words in conventional left-to-right order and the last few words in reverse order, that is, from right to left. The user can also indicate the vertical extent of the text of interest by moving the scanner down several lines of the page. A backward scan may indicate cancellation of the previous scan operation.
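  The gesture distinctions above can be sketched in code. The following is an illustrative classifier over simplified stroke records (a scan direction plus a line offset); the function name, data shape, and rules are assumptions made for illustration, not part of the described system.

```python
# Hypothetical sketch: classifying scanner strokes into the gestures above.
# A stroke is (direction, line_offset), where direction is "ltr" or "rtl"
# and line_offset counts lines moved down the page since the prior stroke.

def classify_gesture(strokes):
    """strokes: list of (direction, line_offset) tuples in capture order."""
    if len(strokes) == 1:
        d, _ = strokes[0]
        # A lone right-to-left (backward) scan cancels the previous scan.
        return "capture" if d == "ltr" else "cancel_previous"
    if len(strokes) == 2:
        (d1, _), (d2, dy) = strokes
        # First few words left-to-right, last few words right-to-left,
        # several lines further down: select the enclosed region.
        if d1 == "ltr" and d2 == "rtl" and dy > 0:
            return "select_region"
    return "unknown"
```

A real implementation would work from raw motion-sensor data rather than pre-segmented strokes, but the decision logic would follow this shape.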

12.1.4. Online / Offline Behavior Many aspects of the system may depend on network connectivity, either between components of the system such as a scanner and a laptop host, or with the outside world in the form of connections to corporate databases and Internet search engines. This connectivity may not be present all the time, however, and so there will be occasions when part or all of the system may be considered "offline". It is desirable to allow the system to continue to function usefully in those circumstances.

  The device may be used to capture text when it is out of contact with the rest of the system. A very simple device may simply be able to store the image or audio data associated with a capture, ideally with a timestamp indicating when the capture was made. The various captures may be uploaded to the rest of the system when the device next makes contact with it, and handled then. The device may also upload other data associated with the captures, for example voice annotations associated with optical scans, or location information.

  More sophisticated devices may be able to perform some or all of the system operations themselves despite being disconnected. Various techniques for improving their ability to do so are discussed in Section 15.3. Often it will be the case that some, but not all, of the desired actions can be performed while offline. For example, text may be recognized, but identifying the source may depend on a connection to an Internet-based search engine. In some embodiments, therefore, the device stores sufficient information about how far each operation has progressed for the rest of the system to proceed efficiently when connectivity is restored.

  While the operation of the system will generally benefit from immediate connectivity, there are some situations in which performing several captures and then processing them as a batch can have advantages. For example, as discussed in Section 13 below, the identification of the source of a particular capture can be greatly aided by examining other captures made by the user at approximately the same time. In a fully connected system where live feedback is being provided to the user, the system can only use past captures when processing the current one. If a capture is one of a batch stored by the device while offline, however, the system will be able to take into account any data available from later captures, as well as earlier ones, when performing its analysis.
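  The batch advantage described above can be illustrated with a toy re-ranking step: candidates for one capture are re-scored using the documents matched by the other captures in the batch, both earlier and later. All names and the scoring rule are hypothetical.

```python
# Illustrative sketch: when a batch of offline captures is processed
# together, prefer candidate documents that other captures in the same
# batch (before or after this one) also matched.

from collections import Counter

def rank_candidates(candidates, batch_matches):
    """candidates: doc ids matching the current capture.
    batch_matches: lists of doc ids matched by the batch's other captures."""
    support = Counter()
    for docs in batch_matches:
        for doc in docs:
            support[doc] += 1
    # Sort so that documents supported by neighboring captures come first.
    return sorted(candidates, key=lambda doc: support[doc], reverse=True)
```

A live system processing captures one at a time could only count support from earlier captures; the batch version counts support from both directions.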

12.2. On a Host Device The scanner will often communicate with some other device, such as a PC, PDA, phone, or digital camera, to perform many of the functions of the system, including more detailed interactions with the user.

12.2.1. Activities Performed in Response to a Capture When the host device receives a capture, it may initiate a variety of activities. The following is a list of activities the system may perform once an electronic counterpart of the captured text, and the location of the capture within it, have been located.

Details of the capture may be stored in the user's history. (Section 6.1)
The document may be retrieved from local storage or a remote location. (Section 8)
The operating system's metadata and other records associated with the document may be updated. (Section 11.1)
The markup associated with the document may be examined to determine the next relevant operations. (Section 5)
A software application may be started to edit, view, or otherwise operate on the document. The choice of application may depend on the source document, on the contents of the scan, or on some other aspect of the capture. (Sections 11.2.2, 11.2.3)
The application may scroll to, highlight, move the insertion point to, or otherwise indicate the location of the capture. (Section 11.3)
The precise bounds of the captured text may be modified, for example to select whole words, sentences, or paragraphs around the captured text. (Section 11.3.2)
The user may be given the option of copying the captured text to the clipboard, or of performing other standard operating system or application-specific operations on it.

Annotations may be associated with the document or the captured text. These may come from immediate user input, or may have been captured earlier, for example in the case of voice annotations associated with an optical scan. (Section 19.4)
The markup may be examined to determine a set of further possible operations for the user to select.
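  One of the activities above, adjusting the bounds of the captured text (Section 11.3.2), can be illustrated with a minimal sketch that snaps a raw character range outward to whole-word boundaries. The helper is hypothetical and ignores the sentence and paragraph cases.

```python
# Hypothetical sketch: expand a raw capture range so it covers whole words,
# as a host application might before selecting or copying the text.

def snap_to_words(text, start, end):
    """Widen [start, end) outward until both ends sit on word boundaries."""
    while start > 0 and text[start - 1].isalnum():
        start -= 1
    while end < len(text) and text[end].isalnum():
        end += 1
    return text[start:end]
```

Sentence or paragraph selection would apply the same idea using punctuation or blank-line boundaries instead of word characters.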

12.2.2. Contextual Popup Menus Sometimes the appropriate action to be taken by the system will be obvious, but sometimes it will require a choice to be made by the user. One good way to do this is through the use of "popup menus" or, in cases where the content is also being displayed on a screen, so-called "contextual menus" that appear close to the content (see Section 11.3.3). In some embodiments, the scanner device projects a popup menu onto the paper document. The user may select from such menus using traditional methods such as a keyboard and mouse, by using the controls on the capture device (Section 12.1.2), by gestures (Section 12.1.3), or by interacting with the computer display using the scanner (Section 12.2.4). In some embodiments, the popup menus that can appear as a result of a capture include default items representing the action that will occur if the user takes no other action, for example if the user ignores the menu and makes another capture.

12.2.3. Feedback on Disambiguation When a user starts capturing text, there will initially be several documents or other text locations that it could match. As more text is captured, and other factors are taken into account (Section 13), the number of candidate locations will decrease until the actual location is identified, or further disambiguation is impossible without user input. In some embodiments, the system provides a real-time display of the documents or locations found, for example in the form of a list, thumbnail images, or text segments, and reduces the number of elements in the display as capture continues. In some embodiments, the system displays thumbnails of all candidate documents, where the size or position of a thumbnail depends on the probability that it is the correct match.
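  The narrowing behavior just described can be sketched as a simple filter: as the captured text grows, only the locations still containing it remain candidates. The in-memory corpus and substring matching here are toy placeholders for the system's real search machinery.

```python
# Illustrative sketch: progressive disambiguation. Each time more text is
# captured, re-filter the candidate set; the display would then shrink
# until one document remains or user input is needed.

def narrow_candidates(corpus, captured_so_far):
    """corpus: mapping of doc id -> full text; returns ids still matching."""
    return [doc_id for doc_id, text in corpus.items()
            if captured_so_far in text]
```

A host device could call this after every few captured words and update the list or thumbnail display with the result.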

  When a capture is unambiguously identified, this fact may be emphasized to the user, for example using audio feedback.

  Sometimes the text captured will occur in many documents and will be recognized as a quotation. The system may indicate this on the screen, for example by grouping documents containing a quoted reference around the original source document.

12.2.4. Scanning from the Screen Some optical scanners are able to capture text displayed on a screen as well as on paper. Accordingly, the term "rendered document" is used herein to indicate that printing onto paper is not the only form of rendering, and that the capture of text or symbols for use by the system can be equally valuable when that text is displayed on an electronic display.

  The user of the described system may need to interact with a computer screen for various other reasons, such as to select from a list of options. It can be inconvenient for the user to put down the scanner and start using the mouse or keyboard. Other sections have described physical controls on the scanner (Section 12.1.2) and gestures (Section 12.1.3) as methods of input that do not require this change of tool, but using the scanner on the screen itself to scan some text or symbol is an important alternative provided by the system.

  In some embodiments, the scanner can be used in a similar manner to a light pen, directly sensing its position on the screen without the need to actually scan text, possibly with the aid of special hardware or software on the computer.

13. Context Interpretation An important aspect of the described system is the use of factors other than the simple retrieval of a text string to help identify the document in use. A capture of a modest amount of text can often identify a document uniquely, but in many situations it will identify a few candidate documents instead. One solution is to prompt the user to confirm the document being scanned, but a preferable alternative is to make use of other factors to narrow down the possibilities automatically. Such supplemental information can dramatically reduce the amount of text that needs to be captured and/or increase the reliability and speed with which the location in an electronic counterpart can be identified. This extra material is referred to as "context"; it was discussed briefly in Section 4.2.2, and is considered more fully here.

13.1. System and Capture Context Perhaps the most important example of such information is the user's capture history.

  In particular, if a capture was made within the last few minutes, it is very likely that any new capture comes from the same document, or a related one, as the previous capture (Section 6.1.2). Conversely, if the system detects that the font has changed between two scans, they are more likely to come from different documents.
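  As a sketch of this temporal heuristic, candidate documents that match a capture made within the last few minutes can be boosted when ranking. The ten-minute window and the ranking rule are assumptions for illustration only.

```python
# Hypothetical sketch: boost candidates that appeared in the user's very
# recent capture history, per the "same document as a moment ago" heuristic.

RECENT_WINDOW_SECS = 600  # assumed "last few minutes" window

def context_boost(candidates, history, now):
    """history: list of (timestamp, doc_id) pairs for earlier captures."""
    recent = {doc for ts, doc in history if now - ts < RECENT_WINDOW_SECS}
    # Stable sort: recently seen documents first, original order otherwise.
    return sorted(candidates, key=lambda doc: doc in recent, reverse=True)
```

A fuller version might also demote candidates when a font change between scans suggests a different document.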

  The user's longer-term capture history and reading habits are also useful. These can also be used to develop a model of the user's interests and associations.

13.2. The User's Real-World Context Another example of useful context is the user's geographical location. A user in Paris, for example, is much more likely to be reading "Le Monde" than the "Seattle Times". The timing, size, and geographical distribution of the printed versions of documents can therefore be important, and can to some degree be deduced from the operation of the system.

  The time of day may also be relevant: for example, in the case of a user who always reads one type of publication on the way to work, and a different one at lunchtime or on the train home.

13.3. Related Digital Context The user's recent use of electronic documents, including those searched for or retrieved by more conventional means, can also be a helpful indicator.

  In some cases, such as on a corporate network, other factors may usefully be considered:

Which documents have recently been printed? Which documents have recently been modified on the corporate file server? Which documents have recently been emailed? All of these examples might suggest that a user is more likely to be reading paper versions of those documents. In contrast, if the repository in which a document resides can assert that the document has never been printed or sent anywhere where it might have been printed, then it can safely be eliminated from any searches originating from paper.
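  The elimination rule above can be sketched as a simple filter over document metadata. The field names and the dictionary representation are invented for illustration; a real repository would expose such assertions through its own interfaces.

```python
# Hypothetical sketch: drop documents from a paper-originated search when
# the repository asserts they were never printed nor sent anywhere they
# might have been printed.

def plausible_paper_sources(docs):
    """docs: list of dicts with optional 'never_printed'/'never_sent_out'
    flags. Keep a document unless both assertions are known to hold."""
    return [d for d in docs
            if not (d.get("never_printed") and d.get("never_sent_out"))]
```

Documents lacking either assertion are kept, since the repository cannot rule out a paper copy.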

13.4. Other Statistics: the Global Context Section 14 covers the analysis of the data stream resulting from paper-based searches, but it should be noted here that statistics about the popularity of documents with other readers, the timing of that popularity, and the parts of documents most frequently scanned are all examples of further factors that can be beneficial in the search process. The system brings the possibility of Google-type page ranking to the world of paper.

  See also Section 4.2.2 for some other implications of the use of context for search engines.

14. Data-Stream Analysis The use of the system generates an exceedingly valuable data stream as a side effect. This stream is a record of what users are reading, and in many cases a record of what they find particularly valuable in what they read. Such data has never really been available before for paper documents.

  Some ways in which this data can be useful for the system, and for the users of the system, are described in Section 6.1. This section concentrates on its other uses. There are, of course, substantial privacy issues to be considered in connection with any distribution of data about what people are reading, but techniques for protecting anonymity are well known to those skilled in the art.

14.1. Document Tracking When the system knows which documents a given user is reading, it can also deduce who is reading any given document. This allows a document to be tracked through an organization, enabling analysis of, for example, who is reading it and when, how long it took to be distributed, and who is looking at the current version while others are still using old copies.

  For published documents that have a wider distribution, the tracking of individual copies is more difficult, but the analysis of the distribution of readership is still possible.

14.2. Read Ranking: Popularity of Documents and Sub-Regions In situations where users are capturing text or other data that is of particular interest to them, the system can deduce the popularity of particular documents and of particular sub-regions of those documents. This forms a valuable input to the system itself (Section 4.2.2) and an important source of information for authors, publishers, and advertisers (Sections 7.6, 10.5). This data is also useful when integrated into search engines and search indices, for example to assist in ranking search results for queries coming from rendered documents, and/or to rank conventional queries typed into a web browser.

14.3. Analysis of Users: Building Profiles Knowledge of what a user is reading enables the system to create a quite detailed model of the user's interests and activities. This can be useful on an abstract statistical basis ("35% of users who buy this newspaper also read the latest book by that author"), but it can also enable other interactions with the individual user, as discussed below.

14.3.1. Social Networking One example is connecting one user with others who have related interests. These may be people already known to the user. The system may ask a university professor, "Did you know that your colleague at XYZ University has also just read this paper?" The system may ask a user, "Do you want to be linked up with other people in your neighborhood who are also reading 'Jane Eyre'?" Such links may be the basis for the automatic formation of book clubs and similar social structures, either in the physical world or online.

14.3.2. Marketing Section 10.6 has already mentioned the idea of offering products and services to an individual user based on their interactions with the system. Current online booksellers, for example, often make recommendations to users based on their previous interactions with the bookseller. Such recommendations become much more useful when they are based on interactions with the actual books.

14.4. Marketing Based on Other Aspects of the Data Stream We have discussed some of the ways in which the system may influence those who publish documents, those who advertise through them, and other sales initiated from paper (Section 10). Some commercial activities may have no direct interaction with the paper documents at all, and yet may be influenced by them. For example, the knowledge that people in one community spend more time reading the sports section of the newspaper than the financial section might be of interest to somebody setting up a health club.

14.5. Types of Data That May Be Captured In addition to the statistics discussed above, such as who is reading which parts of which documents, and when and where, it can be of interest to examine the actual contents of the captured text, regardless of whether or not the document has been located.

  In many situations, the user will not just be capturing some text, but will also be taking some action as a result. The user might, for example, be emailing a reference to the document to an acquaintance. Even in the absence of information about the identity of the user or the recipient of the email, the knowledge that somebody considered the document worth emailing is very useful.

  In addition to the various methods described above for deducing the value of a particular document or piece of text, in some circumstances the user will explicitly indicate its value by assigning it a rating.

  Lastly, when a particular set of users is known to form a group, for example when they are known to be employees of a particular company, the aggregated statistics of that group can be used to deduce the importance of a particular document to that group.

15. Device Features and Functions A capture device for use with the system needs little more than a way of capturing text from a rendered document. As described above (Section 1.2), this capture may be achieved through a variety of methods, including taking a photograph of part of the document or typing some words into a mobile phone keypad. The capture may be achieved using a small handheld optical scanner capable of recording a line or two of text at a time, or an audio capture device such as a voice recorder into which the user reads text from the document. The device used may be a combination of these (for example, an optical scanner that can also record voice annotations), and the capture functionality may be built into some other device such as a mobile phone, PDA, digital camera, or portable music player.

15.1. Inputs and Outputs Many of the additional input and output facilities that may be beneficial for such a device have been described in Section 12.1. They include buttons, scroll wheels, and touchpads for input, and displays, indicator lights, audio, and tactile transducers for output. Sometimes the device will incorporate many of them, sometimes very few. Sometimes the capture device will be able to communicate with another device that already has them, for example using a wireless link (Section 15.6), and sometimes the capture functionality will be incorporated into such another device (Section 15.7).

15.2. Connectivity In some embodiments, the device implements the majority of the system itself. In some embodiments, however, it often communicates with a PC or other computing equipment, and with the wider world, using communication facilities.

  Often these communication facilities are in the form of a general-purpose data network such as Ethernet, 802.11, or UWB, or a standard peripheral-connecting network such as USB, IEEE-1394 (Firewire), Bluetooth(TM), or infrared. When a wired connection such as Firewire or USB is used, the device may receive electrical power through the same connection. In some circumstances, the capture device may appear to the connected machine to be a conventional peripheral such as a USB storage device.

  Lastly, the device may in some circumstances "dock" with another device, either to be used in conjunction with that device or for convenient storage.

15.3. Caching and Other Online/Offline Functionality Sections 3.5 and 12.1.4 have raised the topic of disconnected operation. When a capture device has a limited subset of the total system's functionality and is not in communication with the rest of the system, the functionality available will be reduced, but the device can still be useful. At the simplest level, the device can record the raw image or audio data being captured, and this can be processed later. For the benefit of the user, however, it is important to give feedback, where possible, on whether the data captured is likely to be sufficient for the task in hand, whether it has been recognized or is likely to be recognizable, and whether the source of the data can be identified or is likely to be identifiable later. The user will then know whether the capturing activity is worthwhile. Even when all of the above are unknown, the raw data can still be stored so that, at the very least, the user can refer to it later. The user may be presented with the image of a scan, for example, when the scan cannot be recognized by the OCR process.

  To illustrate some of the range of options available, both a rather basic optical scanning device and a much more sophisticated one are described below. Many devices will occupy a middle ground between the two.

15.3.1. The SimpleScanner: a Low-End Offline Example The SimpleScanner has a scanning head able to read pixels from the page as it is moved along the length of a line of text. It can detect its movement along the page and record the pixels with some information about that movement. It also has a clock, which allows each scan to be time-stamped. The clock is synchronized with a host device when the SimpleScanner has connectivity. The clock may not represent the actual time of day, but relative times may be determined from it, so that the host can deduce the actual time of a scan, or, at worst, the elapsed time between scans.
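  The timestamp recovery described above amounts to simple offset arithmetic: at sync time the host records the difference between its own clock and the device clock, then applies that offset to each scan's device timestamp. The function and parameter names below are hypothetical.

```python
# Illustrative sketch: a device clock need not show real time; the host
# recovers actual scan times from the offset observed at synchronization.

def recover_scan_times(device_timestamps, device_clock_at_sync,
                       host_clock_at_sync):
    """Map device-relative timestamps onto the host's timeline."""
    offset = host_clock_at_sync - device_clock_at_sync
    return [t + offset for t in device_timestamps]
```

If the device clock drifts or resets between syncs, only the elapsed time between scans can be trusted, which matches the "at worst" case in the text.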

  The SimpleScanner does not have sufficient processing power to perform any OCR itself, but it does have some basic knowledge about typical word lengths, word spacings, and their relationship to font size. It has some basic indicator lights which tell the user whether the scan is likely to be readable, and whether the head is being moved too fast, too slowly, or too inaccurately across the paper. The user is notified when it is determined that enough words of a given size have been scanned for the document to be identified.

  The SimpleScanner has a USB connector and can be plugged into the USB port of a computer, where it will be recharged. To the computer it appears to be a USB storage device on which time-stamped data files have been recorded, and the rest of the system software takes over from this point.

15.3.2. The SuperScanner: a High-End Offline Example The SuperScanner also depends on connectivity for its full operation, but it has a substantial amount of on-board storage and processing which can help it make better judgments about the data captured while offline.

  As the SuperScanner moves along a line of text, the captured pixels are stitched together and passed to an OCR engine that attempts to recognize the text. A number of fonts, including those from the publications the user most frequently reads, have been downloaded to it to help it perform this task, as has a dictionary that is synchronized with the user's spelling-checker dictionary on their PC and so contains many of the words the user frequently encounters. Also stored on the scanner is a list of words and phrases with their typical frequencies of use; this may be combined with the dictionary. The scanner can use the frequency statistics both to help with the recognition process and to inform its judgment about when a sufficient quantity of text has been captured: more frequently used phrases are less likely to be useful as the basis for a search query.
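  The frequency-based judgment above can be sketched as a stopping rule: a capture is deemed distinctive enough for a search query once it contains enough sufficiently rare words. The word-frequency table, threshold, and the two-rare-words rule are invented for illustration.

```python
# Hypothetical sketch: decide when "enough" text has been captured by
# counting words whose estimated frequency falls below a rarity threshold.

COMMON = {"the": 0.05, "of": 0.03, "quick": 0.0004, "axolotl": 0.0000001}

def distinctive_enough(words, threshold=0.001, default_freq=0.00001):
    """Unknown words get a low default frequency, i.e. count as rare."""
    rare = [w for w in words if COMMON.get(w.lower(), default_freq) < threshold]
    # Require at least two sufficiently rare words before stopping capture.
    return len(rare) >= 2
```

A real device would use a much larger frequency list, downloaded and synchronized as the text describes.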

  In addition, the full indices for the articles in the last few issues of the newspapers and periodicals most commonly read by the user are stored on the device, as are the indices for the books the user has recently purchased from an online bookseller, or from which the user has scanned anything within the last few months. Lastly, the titles of several thousand of the most popular publications that have data available for the system are stored, so that, in the absence of other information, the user can scan a title and have a reasonable idea of whether or not captures from a particular work are likely to be retrievable in electronic form later.

  During the scanning process, the system informs the user when the captured data has been of sufficient quality, and of a sufficient nature, to make it probable that the electronic copy can be retrieved when connectivity is restored. Often the system indicates to the user that the scan is known to have been successful, and that the context has been recognized in one of the on-board indices, or that the publication concerned is known to be making its data available to the system, so that the later retrieval ought to be successful.

  The SuperScanner docks in a cradle connected to a PC's Firewire or USB port, at which point, in addition to the upload of captured data, its various on-board indices and other databases are updated based on recent user activity and new publications. It also has the facility to connect to wireless public networks, or to communicate via Bluetooth with a mobile phone and thence with the public network, when such facilities are available.

15.4. Features for Optical Scanning In the following, some features that are particularly desirable for optical scanner devices are considered.

15.4.1. Flexible Positioning and Convenient Optics One of the reasons for the continuing popularity of paper is the ease of its use in a wide variety of situations, for example where a computer would be impractical or inconvenient. A device intended to capture a substantial part of a user's interaction with paper should therefore be similarly convenient in use. This has not been the case for scanners in the past; even the smallest handheld devices have been somewhat unwieldy. Those designed to be in contact with the page have to be held at a precise angle to the paper and moved very carefully along the length of the text to be scanned. This is acceptable when scanning a business report on an office desk, but may be impractical when scanning a phrase from a novel while waiting for a train. Scanners based on camera-type optics that operate a little way from the paper may similarly be useful in some circumstances.

  Some embodiments of the system use a scanner that scans in contact with the paper, and which, instead of lenses, uses an image conduit, a bundle of optical fibers, to transmit the image from the page to the optical sensor device. Such a device can be shaped to allow it to be held in a natural position; for example, in some embodiments, the part in contact with the page is wedge-shaped, allowing the user's hand to move more naturally over the page in a movement similar to the use of a highlighter pen. The conduit is either in direct contact with the paper or in close proximity to it, and may have a replaceable transparent tip that can protect the image conduit from possible damage. As described in Section 12.2.4, the scanner may be used to scan from a screen as well as from paper, and the material of the tip can be chosen to reduce the likelihood of damage to such displays.

  Lastly, some embodiments of the device provide feedback to the user during the scanning process, using light, sound, or tactile feedback to indicate when the user is scanning too fast, too slowly, too unevenly, or is drifting too high or too low on the line being scanned.

15.5. Security, Identity, Authentication, Personalization, and Billing As described in Section 6, the capture device may form an important part of identification and authorization for secure transactions, purchases, and a variety of other operations. It may therefore incorporate, in addition to the circuitry and software required for such a role, various hardware features that can make it more secure, such as a smart-card reader, RFID, or a keypad on which to enter a PIN.

  It may also include various biometric sensors to help identify the user. In the case of an optical scanner, for example, the scanning head may also be able to read a fingerprint. In the case of a voice recorder, the voice pattern of the user may be used.

15.6. Device Associations In some embodiments, the device is able to form an association with other nearby devices to increase either its own or their functionality. In some embodiments, for example, it uses the display of a nearby PC or phone to give more detailed feedback about its operation, or uses their network connectivity. The device may, on the other hand, operate in its role as a security and identification device to authenticate operations performed by the other device. Or it may simply form an association in order to function as a peripheral to that device.

  An interesting aspect of such associations is that they may be initiated and authenticated using the capture facilities of the device. For example, a user wishing to identify himself or herself to a public computer terminal may use the scanning facilities of the device to scan a code or symbol displayed on a particular area of the terminal's screen, and so effect a key transfer. An analogous process may be performed using audio signals picked up by a voice recording device.
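  The on-screen key transfer above resembles a challenge-token handshake: the terminal displays a short-lived random token, and the scanner captures it and returns it to prove physical presence. The following toy in-memory exchange is an assumption-laden illustration, not the patent's protocol.

```python
# Hypothetical sketch: associating a capture device with a terminal by
# scanning a random token displayed on the terminal's screen.

import secrets

class Terminal:
    def display_token(self):
        # Token rendered on-screen for the scanner to capture.
        self._token = secrets.token_hex(8)
        return self._token

    def authenticate(self, scanned_token):
        # The device returns what it scanned; a match proves the user was
        # physically in front of this terminal's screen.
        return scanned_token == self._token
```

In practice the token would expire quickly and the device's response would travel over an authenticated channel; the audio variant would encode the token as a sound.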

15.7. Integration with Other Devices In some embodiments, the functionality of the capture device is integrated into some other device that is already in use. The integrated devices may be able to share a power supply, data capture and storage capabilities, and network interfaces. Such integration may be done simply for convenience, to reduce cost, or to enable functionality that would not otherwise be available.

  Some examples of devices into which the capture functionality can be integrated are given below.

An existing peripheral such as a mouse, a stylus, a USB "webcam" camera, a Bluetooth headset, or a remote control;
Another processing or storage device, such as a PDA, an MP3 player, a voice recorder, a digital camera, or a mobile phone;
Other items often carried for convenience, such as watches, jewelry, pens, and car key fobs.

15.7.1. Mobile Phone Integration As an example of the benefits of integration, consider the use of a modified mobile phone as the capture device.

  In some embodiments, the phone hardware is not modified to support the system, such as where the text capture can be adequately done through voice recognition: the speech may be processed by the phone itself, handled by a system at the other end of a telephone call, or stored in the phone's memory for future processing. Many modern phones have the ability to download software that can implement some parts of the system. Such voice capture is likely to be suboptimal in many situations, however, for example when there is substantial background noise, and accurate voice recognition is a difficult task at the best of times. The audio facilities may best be used to capture voice annotations.

  In some embodiments, the camera built into many cell phones is used to capture an image of the text. The phone display, which would normally act as a viewfinder for the camera, can overlay on the live camera image information about the quality of the image and its suitability for OCR, which segments of text have been captured, and even a transcription of the text if the OCR can be performed on the phone.

  In some embodiments, the phone is modified to add a dedicated capture function, or such functionality is provided in a clip-on adapter or a Bluetooth-connected peripheral that communicates with the phone. Whatever the nature of the capture mechanism, integration with a modern cell phone has many other advantages. The phone has connectivity with the wider world, which means that queries can be submitted to remote search engines or other parts of the system, and that copies of documents can be retrieved for immediate storage or display. The phone typically has processing power sufficient to perform many of the system's functions locally, and storage sufficient to retain a reasonable amount of data; the amount of storage can often also be expanded by the user. The phone has reasonably good display and audio facilities for providing feedback to the user, and often a vibration function for haptic feedback. It also has a good power supply.

  Most notably of all, it is a device that most users already carry.

Part 3 System Application Examples This section lists example uses of the system and applications that may be built on it. The list is intended to be purely illustrative, not exhaustive.

16. Personal applications 16.1. Life Library The Life Library (see also Section 6.1.1) is a digital archive of any important documents that the subscriber wishes to save, and is a set of embodiments of services of the system. Important books, magazine articles, newspaper clippings, and so on can all be saved in digital form in the Life Library. In addition, the subscriber's annotations, comments, and notes can be saved with the documents. The Life Library can be accessed via the Internet and the World Wide Web.

  The system creates and manages the Life Library document archive for subscribers. Subscribers indicate which documents they want saved in their Life Library by scanning information from those documents, or by otherwise indicating to the system that a particular document should be added to the subscriber's Life Library. The scanned information is typically text from the document, but can also be a barcode or other code that identifies the document. The system recognizes the code and uses it to identify the source document. After the document is identified, the system can store either a copy of the document in the user's Life Library or a link to a source from which the document may be obtained.
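The add-to-library flow just described (scan identifies the document; the system then stores either a copy or a link) can be sketched as follows. This is a minimal illustration only: the function and class names (`identify_document`, `Library`) and the in-memory index are assumptions, not any real API.

```python
# Hypothetical sketch of the Life Library "add document" flow.
def identify_document(scanned_text, index):
    """Match scanned text (or a barcode value) against a document index."""
    for doc_id, doc in index.items():
        if scanned_text in doc["text"] or scanned_text == doc.get("barcode"):
            return doc_id
    return None

class Library:
    def __init__(self):
        self.entries = {}

    def add(self, doc_id, index, have_rights):
        # Store a full copy if the subscriber may keep one;
        # otherwise store only a link to a source for the document.
        doc = index[doc_id]
        if have_rights:
            self.entries[doc_id] = {"kind": "copy", "text": doc["text"]}
        else:
            self.entries[doc_id] = {"kind": "link", "url": doc["url"]}

index = {
    "nyt-001": {"text": "Company X announced a new product today...",
                "barcode": "0036447", "url": "https://example.com/nyt-001"},
}
lib = Library()
doc_id = identify_document("a new product today", index)
lib.add(doc_id, index, have_rights=False)
print(lib.entries[doc_id]["kind"])
```

In a real deployment the index lookup would be a query to the search engine servers described in Part 4 rather than a substring scan over a local dictionary.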

  In one embodiment of the Life Library, the system can determine whether the subscriber is authorized to obtain an electronic copy. For example, if a reader scans text or an identifier from a copy of an article in the New York Times (NYT) so that the article will be added to the reader's Life Library, the Life Library system verifies with the NYT whether the reader is subscribed to the online version of the NYT. If so, the reader gets a copy of the article stored in his Life Library account; if not, information identifying the document and how to order it is stored in the reader's Life Library instead.

  In some embodiments, the system maintains a subscriber profile for each subscriber that includes access-rights information. Document access information can be compiled in several ways, two of which are: 1) the subscriber supplies the document access information to the Life Library system, along with his account names and passwords; or 2) the Life Library service provider queries the publisher with the subscriber's information, and the publisher responds by granting access to an electronic copy if the Life Library subscriber is authorized to use the material. If the Life Library subscriber is not authorized to have an electronic copy of the document, the publisher provides a price to the Life Library service provider, who then gives the customer the option to purchase the electronic document. If so, the Life Library service provider either pays the publisher directly and bills the Life Library customer, or charges the customer's credit card for the purchase. The Life Library service provider receives a percentage of the purchase price or a small fixed fee for facilitating the transaction.

  The system can archive documents in the subscriber's personal library and/or any other library to which the subscriber has archival privileges. For example, as a user scans text from a printed document, the Life Library system can identify the rendered document and its electronic counterpart. After the source document is identified, the Life Library system may record information about the source document in the user's personal library and in a group library to which the subscriber has archival privileges. Group libraries are collaborative archives, such as a document repository for a group working on a project, an academic research group, a weblog group, and so on.

  The Life Library can be organized in many ways: chronologically, by topic, by level of subscriber interest, by type of publication (newspaper, book, magazine, technical paper, etc.), by place read, by time read, by ISBN, by Dewey decimal classification, and so on. In one alternative, the system can learn classifications based on how other subscribers have classified the same document. The system can suggest classifications to the user, or automatically classify documents for the user.

  In various embodiments, annotations may be inserted directly into the document or may be maintained in a separate file. For example, if a subscriber scans text from a newspaper article, the article is archived in his Life Library with the scanned text highlighted. Alternatively, the article is archived in his Life Library along with an associated annotation file (thus leaving the archived document unmodified). Embodiments of the system can keep a copy of the source document in each subscriber's library, a copy in a master library that many subscribers can access, or a link to a copy held by the publisher.

  In some embodiments, the Life Library stores only the user's modifications to a document (e.g., highlights, etc.) and a link to the online version of the document (stored elsewhere). The system or the subscriber merges the modifications with the document when the subscriber subsequently retrieves the document.
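The store-changes-plus-link arrangement above can be illustrated with a small sketch in which the stored record holds only highlight ranges and a URL, and the highlights are re-applied to the freshly retrieved text at read time. The data layout and the bracket-style rendering of highlights are illustrative assumptions.

```python
# Hypothetical sketch: the library keeps only modifications (highlight
# character ranges) and a link; the text itself lives elsewhere and the
# two are merged when the subscriber retrieves the document.
def merge(document_text, highlights):
    """Re-apply stored highlight ranges to freshly retrieved text."""
    out, last = [], 0
    for start, end in sorted(highlights):
        out.append(document_text[last:start])
        out.append("[" + document_text[start:end] + "]")  # mark highlight
        last = end
    out.append(document_text[last:])
    return "".join(out)

stored = {"url": "https://example.com/article", "highlights": [(4, 9)]}
retrieved = "The quick brown fox"  # fetched from stored["url"]
print(merge(retrieved, stored["highlights"]))  # -> The [quick] brown fox
```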

  If the annotations are kept in a separate file, the source document and the annotation file are provided to the subscriber, and the subscriber combines them to create a modified document. Alternatively, the system combines the two files before presenting them to the subscriber. In another alternative, the annotation file is an overlay to the document file and can be overlaid on the document by software in the subscriber's computer.

  Subscribers to the Life Library service pay a monthly fee to have the system maintain the subscriber's archive. Alternatively, the subscriber pays a small amount (e.g., a micropayment) for each document stored in the archive. Alternatively, the subscriber pays an access fee each time he accesses the archive. Alternatively, subscribers can compile libraries and make the material/annotations available to others on a revenue-sharing model with the Life Library service provider and copyright holders. Alternatively, the Life Library service provider receives a payment from the publisher when the Life Library subscriber orders a document (a revenue-sharing model in which the Life Library service provider receives a share of the publisher's revenue).

  In some embodiments, the Life Library service provider acts as an intermediary between the subscriber and the copyright holder (or the copyright holder's agent, such as the Copyright Clearance Center, also known as the CCC) to facilitate billing and payment for copyrighted materials. The Life Library service provider uses the subscriber's billing information and other user account information to provide this intermediary service. In essence, the Life Library service provider leverages its pre-existing relationship with the subscriber to enable purchases of copyrighted materials on the subscriber's behalf.

  In some embodiments, the Life Library system can store excerpts from documents. For example, when a subscriber scans text from a paper document, the region around the scanned text is excerpted and placed in the Life Library, rather than the entire document being archived. This is especially advantageous when the document is long, because preserving the context of the original scan means the subscriber does not have to reread the entire document to find the portion of interest. Of course, a hyperlink to the entire electronic copy of the paper document can be included with the excerpt.
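The excerpting step above amounts to keeping a window of text around the scanned phrase. A minimal sketch, with an illustrative function name and a fixed-width character window as a simplifying assumption (a real implementation might cut at sentence or paragraph boundaries):

```python
# Hypothetical sketch of excerpt extraction: keep only the text
# surrounding the scanned phrase; in practice a hyperlink to the full
# electronic copy would be stored alongside the excerpt.
def excerpt(document_text, scanned, context=12):
    """Return the scanned phrase with `context` characters on each side."""
    pos = document_text.find(scanned)
    if pos < 0:
        return None  # scanned text not found in this document
    start = max(0, pos - context)
    end = min(len(document_text), pos + len(scanned) + context)
    return document_text[start:end]

doc = "x" * 100 + " the scanned phrase appears here " + "y" * 100
snippet = excerpt(doc, "scanned phrase")
print(snippet)
```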

  In some embodiments, the system also stores information about the document in the Life Library, such as the author, publication title, publication date, publisher, copyright holder (or the copyright holder's licensing agent), ISBN, links to public annotations of the document, readership rankings, and so on. Some of this additional information about the document is a form of paper-document metadata. Third parties may create public annotation files for access by persons other than themselves, such as the general public. Linking to a third party's annotations of a document is advantageous because reading another user's annotation file can deepen the subscriber's understanding of the document.

  In some embodiments, the system archives materials by class. This feature allows a Life Library subscriber to quickly store electronic copies of an entire class of paper documents without having to scan each paper document individually. For example, if a subscriber scans some text from a copy of National Geographic magazine, the system gives the subscriber the option to archive all back issues of National Geographic. If the subscriber elects to archive all back issues, the Life Library service provider verifies with the National Geographic Society whether the subscriber is authorized to do so. If not, the Life Library service provider can negotiate the purchase of the right to archive the National Geographic collection.

16.2. Life Saver A variation on, or enhancement of, the Life Library concept is the “Life Saver,” in which the system uses the text captured by a user to deduce more about the user's other activities. The scanning of a menu from a particular restaurant, a program from a particular theater performance, a timetable at a particular station, or an article from a local newspaper allows the system to make deductions about the user's location and social activities, and can construct an automatic diary of them, for example as a website. The user would be able to edit and modify the diary, add additional materials such as photographs, and, of course, look again at the scanned items.

17. Academic Applications The portable scanners supported by the system described above can be of great value in an academic setting. They can enhance student/teacher interaction and augment the learning experience. Among other uses, study materials can be annotated to suit the needs of particular students. Teachers can monitor student reading activity in the classroom. In addition, teachers can automatically verify the source materials cited in student assignments.

17.1. Children's Books A child's interaction with a paper document, such as a book, is monitored by a literacy acquisition system that employs a specific set of embodiments of this system. The child uses a portable scanner that communicates with the other elements of the literacy acquisition system. In addition to the portable scanner, the literacy acquisition system includes a computer having a display and speakers, and a database accessible by the computer. The scanner is coupled to the computer (hardwired, short-range RF, etc.). When the child sees an unknown word in the book, the child scans it with the scanner. In one embodiment, the literacy acquisition system compares the scanned text with resources in its database to identify the word. The database includes a dictionary, a thesaurus, and/or multimedia files (e.g., sound, graphics, etc.). After the word is identified, the system uses the computer's speakers to pronounce the word and state its definition to the child. In another embodiment, the word and its definition are displayed on the computer's monitor by the literacy acquisition system. Multimedia files about the scanned word can also be played through the computer's monitor and speakers. For example, if a child reading “Goldilocks and the Three Bears” scans the word “bear,” the system can pronounce the word “bear” and play a short video about bears on the computer's monitor. In this way, the child learns the pronunciation of the written word and visually learns the meaning of the word via the multimedia presentation.

  The literacy acquisition system provides immediate auditory and/or visual information to enhance the learning process. The child uses this supplementary information to quickly acquire a deeper understanding of the written material. The system can be used to teach beginning readers to read, to help children acquire a larger vocabulary, and so on. The system provides the child with information about words the child is unfamiliar with, or about which the child wants more information.

17.2. Literacy Acquisition In some embodiments, the system compiles personal dictionaries. If a reader encounters a word that is new, interesting, or particularly useful or troublesome, the reader saves it (along with its definition) to a computer file. This computer file becomes the reader's personalized dictionary. This dictionary is generally smaller in size than a general dictionary, so it can be downloaded to a mobile device or associated device and thus made available even when the system is not immediately accessible. In some embodiments, the personal dictionary entries include audio files to assist with proper pronunciation of the words, and information identifying the paper document from which each word was scanned.
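A personal dictionary entry as described above might carry the definition, a pronunciation audio file, and the source document. The following sketch shows one plausible record layout; all field names and file names are illustrative assumptions, and JSON is used only to show that the result is small enough to download to a mobile device.

```python
# Hypothetical sketch of a personal dictionary entry.
import json

def add_entry(dictionary, word, definition, audio_file, source_doc):
    dictionary[word] = {
        "definition": definition,
        "audio": audio_file,      # assists with pronunciation
        "source": source_doc,     # paper document the word was scanned from
    }

personal = {}
add_entry(personal, "perspicacious", "having keen insight",
          "perspicacious.ogg", "Essays, p. 12")
# Small enough to serialize and download to a mobile device:
blob = json.dumps(personal)
```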

  In some embodiments, the system creates customized spelling and vocabulary tests for students. For example, as a student reads an assignment, the student can scan unfamiliar words with the portable scanner. The system stores a list of all the words the student has scanned. Later, the system administers a customized spelling/vocabulary test to the student on an associated monitor (or prints such a test on an associated printer).
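The test-generation step above can be sketched as sampling from the stored list of scanned words and pairing each with its definition as a quiz prompt. The names and the definition-matching quiz format are illustrative assumptions; a seeded generator is used only to make the sketch deterministic.

```python
# Hypothetical sketch: build a vocabulary quiz from the words a student
# scanned while reading.
import random

def make_quiz(scanned_words, definitions, n=2, seed=0):
    rng = random.Random(seed)
    words = rng.sample(sorted(scanned_words), n)  # pick n scanned words
    return [(definitions[w], w) for w in words]   # (prompt, answer) pairs

scanned = {"ephemeral", "laconic", "ubiquitous"}
defs = {"ephemeral": "lasting a very short time",
        "laconic": "using few words",
        "ubiquitous": "present everywhere"}
quiz = make_quiz(scanned, defs, n=2)
for prompt, answer in quiz:
    print(f"Which word means: {prompt}? (answer: {answer})")
```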

17.3. Music education The arrangement of notes on a musical staff is similar to the arrangement of letters in a line of text. The same scanning device discussed above for capturing text in this system can be used to capture music notation, and an analogous process of constructing a search against databases of known musical pieces would allow the piece from which the capture was made to be identified, and then retrieved, played, or used as the basis for some further action.

17.4. Detecting plagiarism Teachers can use the system to detect plagiarism or to verify sources by scanning text from student papers and submitting the scanned text to the system. For example, a teacher who wishes to verify that a quote in a student paper came from the source the student cited can scan a portion of the quote and compare the title of the document identified by the system with the title of the document cited by the student. Likewise, the system can use scans of text from assignments submitted as the student's original work to reveal whether the text was instead copied.

17.5. Enhanced textbooks In some embodiments, capturing text from an academic textbook links students or staff members to more detailed explanations, further exercises, student and staff discussions about the material, related example past exam questions, further reading on the subject, recordings of past lectures on the subject, and so forth (see also Section 7.1).

17.6. Language learning In some embodiments, the system is used to teach foreign languages. Scanning a Spanish word, for example, might cause the word to be read aloud in Spanish along with its English definition.

  The system provides immediate auditory and/or visual information to enhance the new-language acquisition process. The reader uses this supplementary information to quickly acquire a deeper understanding of the material. The system can be used to teach beginning students to read foreign languages, to help students acquire a larger vocabulary, and so on. The system provides information about foreign words with which the reader is unfamiliar, or for which the reader wants more information.

  A reader's interaction with a paper document, such as a newspaper or book, is monitored by a language-skills system. The reader has a portable scanner that communicates with the language-skills system. In some embodiments, the language-skills system includes a computer having a display and speakers, and a database accessible by the computer. The scanner communicates with the computer (hardwired, short-range RF, etc.). When the reader sees an unknown word in an article, the reader scans it with the scanner. The database includes a foreign-language dictionary, a thesaurus, and/or multimedia files (e.g., sound, graphics, etc.). In one embodiment, the system compares the scanned text with resources in its database to identify the scanned word. After the word is identified, the system uses the computer's speakers to pronounce the word and state its definition to the reader. In some embodiments, both the word and its definition are displayed on the computer's monitor. Multimedia files about grammar tips related to the scanned word can also be played through the computer's monitor and speakers. For example, if the word “to speak” is scanned, the system might pronounce the word “hablar,” play a short audio clip that demonstrates the correct Spanish pronunciation, and display a complete list of the various conjugations of “hablar.” In this way, the student learns the pronunciation of the written word, visually learns the spelling of the word via the multimedia presentation, and learns how the verb is conjugated. The system can also provide grammar tips about the proper usage of “hablar” along with common phrases.

  In some embodiments, the user scans a word or short phrase from a rendered document in a language other than the user's native language (or some other language that the user knows reasonably well). In some embodiments, the system maintains a prioritized list of the user's “preferred” languages. The system identifies the electronic counterpart of the rendered document and determines the location of the scan within it. The system also identifies a second electronic counterpart of the document that has been translated into one of the user's preferred languages, and determines the location in the translated document corresponding to the location of the scan in the original. If the corresponding location is not known precisely, the system identifies a small region (e.g., a paragraph) that includes the corresponding location. The corresponding translated location is then presented to the user. This provides the user with a precise translation of the particular usage at the scanned location, including any slang or other idiomatic usage that is often difficult to translate accurately on a word-by-word basis.
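The paragraph-level fallback described above can be sketched as follows: given a character offset of the scan in the original and an assumed one-to-one paragraph alignment between the two electronic counterparts, return the matching translated paragraph. Both the alignment assumption and the function name are illustrative.

```python
# Hypothetical sketch: map a scan location in the original document to
# the enclosing paragraph of a translated counterpart.
def translated_location(original_pars, translated_pars, scan_offset):
    """Assumes paragraph i of the original corresponds to paragraph i
    of the translation (a simplifying assumption)."""
    total = 0
    for i, par in enumerate(original_pars):
        total += len(par) + 1  # +1 for the paragraph break
        if scan_offset < total:
            return translated_pars[i]
    return None

original = ["Hola, mundo.", "Hablar es facil."]
translated = ["Hello, world.", "Speaking is easy."]
print(translated_location(original, translated, scan_offset=15))
```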

17.7. Collection of Research Materials A user researching a particular topic may encounter all sorts of material, both in print and on screen, that the user would like to record as relevant to the topic in some personal archive. The system can make this process automatic as a result of the user scanning a short phrase in any piece of material, and can also create a bibliographic reference suitable for insertion into a publication on the subject.

18. Commercial Applications Obviously, commercial activity could be built around almost any of the processes discussed in this document, but here we concentrate on a few obvious revenue streams.

18.1. Fee-based search and indexing Conventional Internet search engines generally provide free searching of electronic documents and do not charge content providers to include their content in the index. In some embodiments, in connection with the operation and use of the system, the system provides for fees to be charged to users and/or for payments to be made to search engines and/or content providers.

  In some embodiments, subscribers pay a fee for searches originating from scans of paper documents. For example, a stockbroker may be reading a Wall Street Journal article about a new product offered by Company X. By scanning the name of Company X from the paper document and agreeing to pay the necessary fees, the stockbroker uses the system to search special or proprietary databases, such as analyst reports, to obtain premium information about the company. The system can also be arranged to prioritize index creation for documents most likely to be read in paper form, for example by indexing all the newspapers published on a particular day ahead of other material, making sure the indexes are available by the time the papers hit the streets.

  Content providers may pay a fee to be associated with certain terms in search queries submitted from paper documents. For example, in one embodiment, the system chooses the most preferred content provider based on additional context about the provider (the context in this case being that the content provider has paid a fee to be moved up the results list). In essence, the search provider adjusts search results for paper documents based on pre-existing financial arrangements with content providers. See also the description of keywords and key phrases in Section 5.2.

  Where access to certain content is restricted to a certain group of people (such as clients or employees), the content is generally protected by a firewall and therefore cannot generally be indexed by third parties. The content provider may nonetheless wish to provide an index of the protected content. In such a case, the content provider can pay a service provider to provide the content provider's index to the system's subscribers. For example, a law firm may index all of a client's documents. The documents are stored behind the law firm's firewall. However, because the law firm wants its employees and the client to have access to the documents through a portable scanner, it provides the index (or a pointer to the index) to the service provider, and when a law firm employee or the client submits search terms scanned from paper via a portable scanner, the law firm's index is searched. The law firm can provide a list of employees and/or clients to the service provider's system to enable this function, or the system can verify access rights by querying the law firm before searching the law firm's index. Note that in the preceding example, the index provided by the law firm covers only the client's documents, not an index of all the documents at the law firm. Thus, the service provider can allow only the law firm's client to access the documents that the law firm has indexed for that client.

  There are at least two separate revenue streams that can result from searches originating from paper documents: one associated with the search function and one associated with the content delivery function. Revenue from the search function can be generated through subscriptions paid by scanner users, but can also be generated on a per-search basis. Revenue from content delivery can be shared with the content provider or copyright holder (the service provider can take a percentage of the sale or a fixed fee, such as a micropayment, for each delivery), but can also be generated by a “referral” model in which the system receives a fee or a percentage for every item the subscriber orders from an online catalog and that the system has delivered or contributed to, regardless of whether the service provider intermediates the transaction. In some embodiments, the system's service provider receives revenue for all of the purchases that the subscriber has made from the content provider, either for some period of time or at any subsequent time when a purchase of an identified product is made.

18.2. Catalogs Consumers can use the portable scanner to make purchases from paper catalogs. The subscriber scans information from the catalog that identifies the catalog. This information can be text from the catalog, a barcode, or another identifier of the catalog. The subscriber then scans information identifying the product that he wishes to purchase. The catalog's mailing label may contain a customer identification number that identifies the customer to the catalog vendor. If so, the subscriber can also scan this customer identification number. The system acts as an intermediary between the subscriber and the vendor to facilitate the catalog purchase by providing the customer's selection and customer identification number to the vendor.

18.3. Coupons A consumer scans a paper coupon and saves an electronic copy of the coupon in the scanner, or in a remote device such as a computer, for later retrieval and use. An advantage of electronic storage is that the consumer is freed from the burden of carrying paper coupons. A further advantage is that the electronic coupons can be retrieved from any location. In some embodiments, the system can track coupon expiration dates, alert the consumer about coupons that will expire soon, and/or delete expired coupons from storage. An advantage for the coupon issuer is the possibility of receiving more feedback about who is using the coupons and when and where they are captured and used.
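The expiration-tracking behavior described above can be sketched in a few lines: report coupons about to expire and prune the expired ones. The record layout and function names are illustrative assumptions.

```python
# Hypothetical sketch of electronic-coupon bookkeeping.
from datetime import date, timedelta

def expiring_soon(coupons, today, days=7):
    """Coupons that are still valid but expire within `days` days."""
    cutoff = today + timedelta(days=days)
    return [c for c in coupons if today <= c["expires"] <= cutoff]

def prune_expired(coupons, today):
    """Drop coupons whose expiration date has passed."""
    return [c for c in coupons if c["expires"] >= today]

coupons = [
    {"desc": "10% off groceries", "expires": date(2024, 1, 5)},
    {"desc": "free coffee", "expires": date(2024, 2, 1)},
]
today = date(2024, 1, 3)
print([c["desc"] for c in expiring_soon(coupons, today)])  # alert list
print(len(prune_expired(coupons, date(2024, 1, 6))))       # after pruning
```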

19. General applications 19.1. Forms The system may be used to auto-populate an electronic document that corresponds to a paper form. The user scans some text or a barcode that uniquely identifies the paper form. The scanner communicates the identity of the form and information identifying the user to a nearby computer. The nearby computer has an Internet connection. It has access to a first database of electronic versions of paper forms and to a second database (such as the service provider's subscriber information database) containing information about the user of the scanner. The nearby computer retrieves the electronic version of the paper form from the first database and auto-populates the form's fields with the user information obtained from the second database. The nearby computer then emails the completed form to the intended recipient. Alternatively, the computer can print the completed form on a nearby printer.
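The two-database form-filling flow above can be sketched as follows. The form identifier, field names, and database contents are all hypothetical stand-ins for the "first database" (electronic form templates) and "second database" (subscriber information) described in the text.

```python
# Hypothetical sketch of the auto-populate flow for paper forms.
forms_db = {  # first database: electronic versions of paper forms
    "FORM-1040X": {"fields": ["name", "address"]},
}
subscribers_db = {  # second database: information about the scanner user
    "user-42": {"name": "A. Reader", "address": "1 Main St"},
}

def populate(form_id, user_id):
    """Fill the form's fields from the subscriber's stored information."""
    template = forms_db[form_id]
    info = subscribers_db[user_id]
    return {field: info.get(field, "") for field in template["fields"]}

completed = populate("FORM-1040X", "user-42")
print(completed)  # ready to be emailed or printed
```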

  Rather than relying on an external database, in some embodiments the system has a portable scanner that contains the user's information, such as in an identity module, SIM, or security card. The scanner provides information identifying the form to the nearby PC. The nearby PC accesses the electronic form and queries the scanner for any information needed to fill in the form.

19.2. Business cards The system can be used to automatically populate electronic address books or other contact lists from paper documents. For example, upon receiving a new acquaintance's business card, a user can capture an image of the card with his cell phone. The system can locate an electronic copy of the card, which can be used to update the cell phone's onboard address book with the new acquaintance's contact information. The electronic copy may contain more information about the new acquaintance than could fit on a business card. Further, the onboard address book can also store a link to the electronic copy so that any changes to the electronic copy are automatically propagated to the cell phone's address book. In this example, the business card optionally includes a symbol or text indicating the existence of an electronic copy. If no electronic copy exists, the cell phone can use OCR and knowledge of standard business card formats to fill out an entry in the address book for the new acquaintance. Symbols may also aid in the process of extracting information directly from the image. For example, a phone icon next to the phone number on the business card can be recognized to determine the location of the phone number.
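The OCR fallback path above (no electronic copy exists, so fields are extracted from the card text using knowledge of standard card layouts) can be sketched with a toy parser. The first-line-is-the-name heuristic and the phone-number regex are simplifying assumptions; a real system would use richer layout cues such as the icon recognition just mentioned.

```python
# Hypothetical sketch: extract address-book fields from OCR'd card text.
import re

def parse_card(ocr_text):
    lines = [l.strip() for l in ocr_text.splitlines() if l.strip()]
    phone = None
    for l in lines:
        # Assumed heuristic: first run of 7+ digits/spaces/dashes is the phone.
        m = re.search(r"(\+?[\d][\d\s\-]{6,})", l)
        if m:
            phone = m.group(1).strip()
            break
    return {"name": lines[0], "phone": phone}  # assume name is the top line

card = """Jane Doe
Acme Corp
+1 555-0100"""
entry = parse_card(card)
print(entry["name"], entry["phone"])
```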

19.3. Proofreading/editing The system can enhance the proofreading and editing process. One way the system can do so is by linking the editor's interactions with a paper document to its electronic counterpart. As an editor reads a paper document and scans various parts of it, the system makes the appropriate annotations or edits to the electronic counterpart of the paper document. For example, if the editor scans a portion of text and makes the “new paragraph” control gesture with the scanner, a computer in communication with the scanner inserts a “new paragraph” break at the location of the scanned text in the electronic copy of the document.

19.4. Voice annotation A user can make voice annotations to a document by scanning a portion of text from the document and then making a voice recording that is associated with the scanned text. In some embodiments, the scanner has a microphone for recording the user's verbal annotations. After the verbal annotation is recorded, the system identifies the document from which the text was scanned, locates the scanned text within the document, and attaches the voice annotation at that point. In some embodiments, the system converts the speech to text and attaches the annotation as a textual comment.

  In some embodiments, the system keeps annotations separate from the document, with only a reference to the annotation kept with the document. The annotations then become an annotation markup layer for the document for a specific subscriber or group of users.

  In some embodiments, for each capture and associated annotation, the system identifies the document, opens it in a software package, scrolls to the location of the scan, and plays the voice annotation. The user can then interact with the document while referring to voice annotations, suggested changes, or other comments recorded by himself or by others.

19.5. Help in Text The system described above can be used to enhance paper documents with electronic help menus. In some embodiments, the markup layer associated with a paper document contains help menu information for the document. For example, when the user scans text from a certain portion of the document, the system checks the markup associated with the document and presents a help menu to the user. The help menu is shown on the scanner's display or on an associated nearby display.

19.6. Use with displays In some situations it is advantageous to be able to scan information from a television, computer monitor, or other similar display. In some embodiments, the portable scanner is used to scan information from computer monitors and televisions. In some embodiments, the portable optical scanner has an illumination sensor that is optimized to work with traditional cathode-ray tube (CRT) display techniques such as rasterizing and screen blanking.

  A voice capture device that operates by capturing audio of a user reading text from a document will generally work regardless of whether the document is on paper, on a display, or on some other medium.

19.6.1. Public kiosk and dynamic session ID
One use for direct scanning of displays is the association of devices, as described in Section 15.6. For example, in some embodiments, a public kiosk displays a dynamic session ID on its monitor. The kiosk is connected to a communication network such as the Internet or a corporate intranet. The session ID changes periodically, and at least every time the kiosk is used, so that a new session ID is displayed to each user. To use the kiosk, the subscriber scans the session ID displayed on the kiosk; by scanning the session ID, the user tells the system that he wishes to temporarily associate the kiosk with his scanner for the delivery of content resulting from scans of printed documents or from the kiosk screen itself. The scanner may communicate the session ID, along with other information authenticating the scanner (such as a serial number, account number, or other identifying information), directly to the system. For example, the scanner can communicate directly with the system (by which we mean without the message passing through the kiosk) by sending the session initiation message through the user's cell phone, which is paired with the user's scanner via Bluetooth. Alternatively, the scanner can establish a wireless link with the kiosk and use the kiosk's communication link by transferring the session initiation information to the kiosk (perhaps via short-range RF such as Bluetooth); in response, the kiosk sends the session initiation information to the system via its Internet connection.

The system can prevent others from using a device that is already associated with a scanner during the period (or session) in which the device is associated with that scanner. This feature is useful for preventing others from using a public kiosk before another person's session ends. As an example of this concept applied to using a computer in an Internet cafe, a user scans a barcode on the monitor of the PC he or she wants to use. In response, the system sends a session ID to the monitor, which displays it. The user initiates the session by scanning the session ID from the monitor (or by entering it via a keypad, touch screen, or microphone on the scanner, depending on its form). The system then associates the session ID with the serial number of the scanner (or another identifier that uniquely identifies the user's scanner) in a database, so that another scanner cannot use the monitor during that session. The scanner communicates with the system either through a PC associated with the monitor (via a wireless link such as Bluetooth, a hardwired link such as a docking station, etc.) or directly (i.e., without going through the PC) via another means such as a mobile phone.
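The association logic described above can be sketched as follows. This is a minimal illustration assuming an in-memory registry; the class and method names (KioskSessionRegistry, associate, and so on) are hypothetical and not part of the patent.

```python
import secrets

class KioskSessionRegistry:
    """Hypothetical sketch of dynamic session-ID association.

    The first scanner to present a session ID owns the session;
    any other scanner presenting the same ID is refused.
    """

    def __init__(self):
        self.active = {}  # session ID -> scanner serial number

    def new_session_id(self):
        # A fresh ID is generated at least once per user of the kiosk.
        return secrets.token_hex(4)

    def associate(self, session_id, scanner_serial):
        # setdefault keeps the first owner; later callers see that owner.
        owner = self.active.setdefault(session_id, scanner_serial)
        return owner == scanner_serial

    def end_session(self, session_id):
        self.active.pop(session_id, None)
```

For example, once `associate(sid, "SN-100")` succeeds, a call with a different serial number is refused until `end_session(sid)` releases the kiosk.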

Part 4 System Details
FIG. 4 is a diagram illustrating a typical environment in which an embodiment of the system operates. The system functions within a distributed computing environment 400 that includes a number of devices interconnected by a wireless network 401, the Internet 402, or other networks (not shown). These devices communicate over suitable network connections using suitable network communication protocols. In various embodiments, the servers and other devices communicate with one another according to their respective APIs to form further embodiments of the system. In other embodiments, the devices and servers can communicate according to open/standard protocols.

  The environment includes: an OCR device 411 or other text capture device, used to capture text from a rendered document 412; a wireless device 421 and/or user device 422, through which the capture device can upload captured text and other user input, and through which the system can provide various types of feedback to the user; a user account server 431 and associated user account database 432, with which the system manages user account information; a search engine server 441 and associated search database 442, which the system uses to execute queries containing text captured from the rendered document, in order to identify electronic documents in which that text occurs and the location of the text within them; and a document server 451 and associated document database 452, which hold copies of the documents the system determines to contain the captured text. Also, although these servers are each shown as a single device, it should be understood that in an actual system implementing embodiments of the system, each server may comprise one or more devices. It should also be understood that a server may include a file server, a database server, or a combination of the two. Furthermore, although the various servers are described as independent devices, those skilled in the art will appreciate that in other embodiments of the system several servers may reside on a single device.

  If the scanner incorporates a magnetic sensor, data can be encoded in the document magnetically as well as optically, acoustically, and tactilely.

  Although the process of converting an electronic document to printed form has existed almost from the beginning of computing, there has been no efficient way to refer back to the original digital source of a printed document. In some embodiments, the system accomplishes this by scanning a desired location in the document to identify a unique text "signature", which provides information that can be used to identify the corresponding location in the original digital source document. The system sends this signature to a server that has access to a database of electronic documents; it is desirable that this database include an electronic version of the document, although useful results can be obtained in other cases as well (as described below). The server then identifies the corresponding location (or group of locations) in the electronic source document and associates it with the scan of the original paper document. Establishing this relationship enables a number of useful innovations for use with printed documents in various contexts. Various embodiments of the system are described below.

  In one aspect, the system can be regarded as using auxiliary or augmenting information to convert document recognition into document navigation (e.g., finding locations within documents and interacting with them, and generating information for those interactions). Some of the many "hints" that the system can use or discover include:

How fast does the user read?
In what direction does the user read?
What publications does the user subscribe to?
The user's daily and weekly habits (e.g., reading the Sunday edition on Sunday morning).
Recent marks the user has made in this and other documents.
The types of material or subjects the user has historically been interested in.
An explicit user profile.
The user's current location (e.g., near the user's PC, as given by the wireless environment and/or activity at that PC).
Properties of the text.
Others.

  In many cases, the first marks that a user makes in a document are used to learn the typeface or font. The meanings of those character objects can then be determined by (offset-based) template matching plus the disambiguation techniques described elsewhere, or by more conventional techniques. Once the current typeface or font is known, the device can either recover and send the actual text (e.g., as ASCII) or use the (offset-based) template-matching representation described elsewhere.

  Capital letters occur relatively rarely, so in some embodiments the system handles them in a special way. Since the system generally has a source or reference copy of the document available, it can predict where capital letters (and punctuation marks) may or may not occur.

  In many cases there is no guarantee of how a particular rendered instance of a document will treat marks that appear in the source or reference copy. Nevertheless, the system can often infer how the rendered copy handles these marks (e.g., capital letters).

  A good example is the capital letter that usually begins an English sentence. Because these are rare, template matching and disambiguation are generally not easy to apply to determining these initial capitals. One alternative is essentially to ignore capitalization by skipping the first letter of a new paragraph, sentence, and so on.

  The disambiguation process then handles capital letters and other rare marks appropriately and automatically. Characters that occur only once (and so do not repeat) are given a special default offset (e.g., a code of 0).

  If a special index is constructed from the offset-based representation (or another ambiguous representation), uncertainty about leading characters can be anticipated. That is, the system can note from the source document that a capital letter occurs at a particular location and not require it to match.

  This capitalization problem is a good example of how the system is distinguished from previous OCR systems. Because the system assumes (and may depend on) the availability of the source document, now or in the future, various uncertainties and problems are easily handled. And because it focuses primarily on document navigation rather than interpretation, issues that challenge traditional OCR systems (which must have special information about every capital letter in every font, etc.) do not cause problems for this system.

  As an example, suppose the user wishes to indicate the phrase "Take as an example this sentence." occurring in a rendered document. A conventional OCR system would attempt to recognize and interpret the letter "T" to decide whether the first word is "Take", "Make", "Fake", "Rake", and so on. The system, however, is only looking for a distinctive reference feature for navigation. It can simply omit the "T" and search the source document for "ake as an example". This phrase may be expressed as characters, as offsets, or in another form. As long as the rest of the phrase constitutes an identifying signature, the interpretation of the first letter is unimportant.

  Another way of thinking about this problem and distinction comes from understanding that conventional OCR was used to recognize (i.e., interpret) text characters. For example, the user of an OCR pen moves the pen over a line of text to capture and interpret the text. Users of this system have a different purpose: the user moves the wand or scanning device over a line of text to "indicate" or "point to" this location in the document, thereby enabling the many features and functions associated with that location.

  Furthermore, when the user is interested in a particular piece of underlying text, the user's intent is generally to act on the text, not to capture and interpret it. Thus, the user may underline this text, italicize it, excerpt it, place a bookmark there, and so on.

  The placement of bookmarks, each indicating a location within a rendered document, is one useful feature of the system. In general, bookmarks can later be used to find locations within a document. One simple but interesting application is marking the place where a user stopped reading in a document, which matches the traditional meaning of "bookmark" very well. An application can make this information (where the user stopped reading in a book or document) readily available to the user; such data might appear on the user's PC, PDA, or mobile phone. In some embodiments, the device itself indicates the last location read, for example by using the device's own small LCD display. In some embodiments, the indication is binary, such as an LED that is on or off. The LED can be lit when scanning text the user has already read and turned off when scanning new text. In this way, the user can "search" for the place where he or she stopped reading.

  FIG. 5 is a flow diagram illustrating the steps typically performed by the system to implement bookmarks. In step 501, the system receives text scanned by the user. In step 502, the system preprocesses the text scanned in step 501. In step 503, the system compares the scanned text with a document history maintained for the user. In step 504, if the scanned text was previously scanned by the user, the system proceeds to step 505 and returns a previously-scanned indication; otherwise it proceeds to step 506, where the location of the scan within the document is identified. In step 507, if the scan is located before the last bookmark, the system proceeds to step 508 and returns a previously-read indication; otherwise it proceeds to step 509 and returns a not-previously-read indication.
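The decision flow of FIG. 5 can be sketched in a few lines. This is an illustrative reduction that assumes scan locations are simple integer positions and that step 506 (locating the scan in the document) has already been performed; the function name and signature are hypothetical.

```python
def bookmark_status(location, scan_history, last_bookmark):
    """Mirror of steps 503-509: classify a scan against the user's
    document history and the position of the last bookmark."""
    if location in scan_history:        # steps 503-505: seen before
        return "previously scanned"
    scan_history.add(location)          # step 506: record the located scan
    if location < last_bookmark:        # steps 507-508: before the bookmark
        return "previously read"
    return "not previously read"        # step 509
```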

  In some embodiments, striking examples of the new functionality the system provides are found in materials with a history of shared use, such as library books, school textbooks, and the like. Whether in the form of margin notes, underlines, highlights, or other marks, writing in a book is something readers have always wanted to do. However, in the cases just described (and even in the user's own books), making these marks poses a major problem: they spoil the future enjoyment of the work by others (and by the users themselves). The system allows a user to mark up or comment on a book or document while choosing to leave the original untouched.

  One approach to viewing user actions on a particular document involves an overlay or transparency. The user interacts with the physical rendering of the document, but the marks are virtual: because they are captured and stored electronically, no physical mark need appear on the rendered version. In some embodiments, however, the system uses a text capture device with an integrated marker or pen. FIG. 6 shows a scanning device with an integrated marker and pen: scanning device 600 includes a marker 601 and a pen 602, each of which may be retractable as the situation requires.

  This overlay can therefore be thought of as an abstract virtual layer. The layer can then be merged with, or "overlaid" on, the source or reference version of the document. In one example, this occurs when the user views the reference document on a computer screen, and the data from the user's overlaid actions is displayed on, or integrated into, the reference document. Note that the reference document itself need not be changed for this display. In some embodiments, the overlay information is combined with the reference or source document when the user prints the document. In some embodiments, the system applies or merges the overlay into the source document when the source document is delivered electronically to the user; for example, the source document and overlay may be combined into a PDF document and emailed to the user.

  In either of these examples, the user's overlay information can be stored as a separate layer, so there is no need to change the source document. Thus the user can mark anything and still interact with a single copy of the document. Because the user's marks and notes are stored separately, the original never needs to be changed.

  The per-user data is generally small compared to the underlying document. Consider the case of highlighting: all that needs to be stored is the start and end locations of the highlighted text within the document and the highlight color. One way to store this data is as character offsets from the beginning of the document. Another is an address of the form document:page:line. Alternatively, the system can store the actual x-y coordinates of the user's actions on the rendered document.
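A highlight record of the kind described above might be stored as follows. The field names are illustrative only, and the three addressing schemes (character offsets, a document:page:line address, and x-y coordinates) are shown as alternatives.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Highlight:
    """One overlay annotation, stored separately from the source document."""
    start_offset: int        # character offset from the start of the document
    end_offset: int
    color: str = "yellow"
    page_line: Optional[Tuple[int, int]] = None    # alternative: (page, line) address
    xy: Optional[Tuple[float, float]] = None       # alternative: coordinates on the page

# A highlight covering 32 characters, stored without touching the document itself.
mark = Highlight(start_offset=1040, end_offset=1072, color="green")
```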

  Documents used with the system are sometimes accompanied by special marks (e.g., barcodes) that the user can scan to indicate which publication, or which copy of a document, is being scanned. This additional identifying information allows the system to determine which document the user has and how it was rendered.

  In some cases, the system may require the user to scan an identification code or mark before the user can interact with the document through the system; this scan can be required before the device is used on other parts of the document. Alternatively, the system can allow the user to interact with the document immediately but require that an identification scan be performed at some point in the future. Or, in yet another alternative, the identification scan may be optional: without it, the system has greater ambiguity (i.e., less certainty about the specific document in use), while with the additional scan the system knows more about the particular document.

  In some embodiments, the capture device provides an error indicator or signal (e.g., an LED or an audible tone) that tells the user that a document is not recognized or not valid, i.e., that an identification scan is wanted or needed. This identification scan can be used specifically to indicate which document the user has (e.g., the local morning newspaper) so that, to identify and locate subsequent scans, the system can consult a cached copy of the document or an associated dictionary.

  The special mark to be scanned can be a one- or two-dimensional barcode, human-readable text in a specific region, or otherwise encoded data. In some embodiments, a region of text in the rendered document is specially marked (e.g., by a margin mark, highlight, underline, special ink color, etc.) to indicate to the user that the region must be scanned for document identification.

  All of the above can also be applied to multiple marks within a document. For example, different articles in a magazine or newspaper, individual advertisements, individual pages, and so on can each be accompanied by special marks, or the user can be asked to explicitly scan one or more items in a small region of the document. In this way, individual parts of a document can be unambiguously identified by explicit user action.

  In some cases, these scans help the system know the user's context. In other cases, they enable or unlock system functions that would otherwise be unavailable; for example, purchasing from a printed catalog may not be allowed unless the user scans an identification code on the catalog's mailing label.

  In some embodiments, the user can scan a region of text specifically to establish context (which document, and which location within it). For this purpose, the capture device can have a special switch or input to indicate this context-setting function. Alternatively, the user can make a special gesture with the device to indicate a context-setting reference scan, such as scanning text backwards. Another action or gesture with the device can indicate "erase" or "cancel" of the previous action.

  In general, the device's motions and actions can be used to indicate the user's intent. Below is a list of possible actions.

Scan in the reading direction = generate a document signature;
Scan backwards = set context;
Drag vertically (up and down the page; the system can count the traversed lines and capture data fragments from them) = set a region;
Back-and-forth or up-and-down motion = cancel the previous action;
Rotation over a text region = select the region;
Tap or click (via a switch or sensor at the tip of the device that contacts the rendered document, or via another switch the user can control) = request a context-sensitive menu;
Note that this is only a partial list. Note also that many more possibilities arise from combinations of two or more of these actions and from the order in which they are performed.
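A device might route such gestures through a simple dispatch table like the one below. The gesture names and the mapping itself are hypothetical, chosen only to mirror the list above.

```python
# Hypothetical gesture-to-intent table mirroring the list above.
GESTURE_ACTIONS = {
    "scan_forward": "generate document signature",
    "scan_backward": "set context",
    "vertical_drag": "set region",
    "back_and_forth": "cancel previous action",
    "rotate_over_text": "select region",
    "tap": "request context menu",
}

def interpret(gesture):
    """Map a detected device gesture to the user's intended action."""
    return GESTURE_ACTIONS.get(gesture, "unknown gesture")
```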

  One interesting use of the system is for signing documents. Note that the device can record the specific time and location at which a particular document was scanned, including which parts were scanned. The device's optics can also capture and store an image of a signature. In the system, a document carries a special mark or code at one or more locations (or includes a unique identification code for the document as a whole). These marks may include human-readable text that is specially marked or indicated (e.g., portions of a legal document printed in bold, underlined, etc.). The user can then scan various parts of the document to indicate that he or she has read them. Further, the user can sign the document where appropriate and scan the signature with the capture device. The device itself can incorporate a writing instrument, as shown in FIG. 6, in which case the user can both scan and sign with a single device.

  Encoding documents, and sub-parts of documents, with special codes (e.g., barcodes) has long been desired. Historically, however, efforts in this direction have not been particularly successful. One reason is that a dedicated barcode scanner offers too little utility to the end user. This produces a circular situation: publishers do not print codes because users do not carry scanners, and users do not acquire and use code-scanning devices because publishers do not print codes.

  This obstacle can be overcome, however, by a unique combination of a document navigation tool and a code scanner (with OCR where appropriate). The value of the navigation utility and of the text scanning and/or OCR capability motivates the user to obtain, carry, and use the device. The device can include hardware and/or software capable of reading encoded information (e.g., a barcode). Note that any additional components for processing barcodes can instead be located on a server or elsewhere in the system; the device can simply capture an image of the barcode to be read and transfer it for interpretation.

  In some embodiments, when the device scans an image it recognizes that the image is one-dimensional, as in a barcode; for example, it can check (in software, hardware, or a combination of the two) whether there is one axis that carries no information. One-dimensional barcodes have the property of being composed of parallel lines, which can be taken as parallel to the y-axis. In that case only variation along the x-axis (in the x direction, crossing the lines and perpendicular to them) contains information. If the device sees data with this one-dimensional characteristic, it can have local intelligence (hardware and/or software) to reduce the scanned data by folding or ignoring the y-axis. That is, it can partially or completely interpret the code (e.g., before communicating with a server).
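The "fold the y-axis" idea can be sketched directly: if every row of pixels is identical, the y-axis carries no information and the image collapses to a single row. This toy version assumes a clean, binarized, unskewed image; a real device would need tolerance for noise and tilt.

```python
def collapse_if_one_dimensional(image):
    """Return the single information-bearing row if `image` (a list of
    rows of 0/1 pixels) varies only along the x-axis, else None."""
    if not image:
        return None
    first = image[0]
    if all(row == first for row in image[1:]):
        return first   # y-axis folded away; one row carries everything
    return None
```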

  This discussion of barcodes raises an interesting technical component of most OCR systems: "deskew". Deskew is the process of removing any artificial angular component from scanned or imaged data. A situation that often arises with hand scanners is that the scanner is held at an angle, rotated about an axis perpendicular to the page, so that the captured data has an artificial angle or tilt. Note that the angle may also change over time, for example as the user's hand crosses the page. It is useful if this artificial skew is removed in one of the data or image processing steps.

  If the system uses a template-matching or convolution-based approach (described elsewhere), one advantage is that artificial skew or angle is not a problem in the first place: letters or symbols that are each skewed at the same angle will match one another without this skew component being removed.

  Many type fonts have multiple strong vertical elements, often straight lines perpendicular to the baseline. In some embodiments, the system deskews the text by applying a mathematical transformation to the data from which the skew angle can easily be determined. This transformation is applied locally, so that changes in skew (e.g., across a single line of text) can be detected and measured locally.

  Template matching (offset-based), a convolution technique that can be used in the system, has the ability to find matching objects by using previous occurrences of those objects as templates. One interesting consequence of this capability is that any repeating object can carry easily readable information; the tokens representing this information need not be predefined or communicated to the system.

  As an example, suppose the document includes a string of 1s and 0s (e.g., binary data notation) such as "100101001".

  In the template-matching approach, the system does not need to recognize or understand the meaning of "1" or "0". Rather, the sample string can be interpreted as "one object of a first type, followed by two objects of a second type, followed by one object of the first type, ...". The information in the sample string can equally well be represented as "abbababba" or "011010110".
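This relabeling is mechanical: each distinct shape receives an arbitrary label in order of first appearance, with no interpretation of what the shape "means". A minimal sketch:

```python
def relabel(s):
    """Assign each distinct symbol a label ('a', 'b', ...) in order of
    first occurrence, as a template matcher effectively does: only the
    pattern of repetition is preserved, never the symbols' meanings."""
    labels = {}
    out = []
    for ch in s:
        if ch not in labels:
            labels[ch] = chr(ord("a") + len(labels))
        out.append(labels[ch])
    return "".join(out)
```

Note that "100101001" and its complement "011010110" relabel to the same pattern, since only the structure of repetition survives.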

  The data can be encoded using a number of different objects or symbols, and spaces can be treated as one of these objects. (If spaces are used, measured distances can be used to count adjacent spaces, or each run of spaces can be interpreted as a single space object, so that, for example, "1  11   111 1" is interpreted identically to "1 11 111 1".) Viewed this way, a language written with the 26 Roman letters (all lowercase) is simply a special instance of this encoding in which the number of symbols is 26.

  In some embodiments, the system represents data in a sequence such as the "011010110" example above as offsets (the number of character positions separating repeated instances of each symbol). In this notation, "011010110" becomes "3, 1, 2, 2, 2, 3, 1, ?, ?". Each number corresponds to a character in the original string, and its value is the distance, or offset, to the next occurrence of that same character.
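Computing this representation is straightforward; a sketch, with None standing in for "?":

```python
def offsets(s):
    """For each position in s, the distance to the next occurrence of
    the same symbol, or None ('?') if the symbol never recurs."""
    out = []
    for i, ch in enumerate(s):
        nxt = s.find(ch, i + 1)          # next occurrence to the right
        out.append(nxt - i if nxt != -1 else None)
    return out
```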

  There are certain missing elements and/or forbidden codes in this notation. For example, the code "2" can never naturally occur immediately after the code "3": an offset of 2 at the second position would point to the same character referenced by the offset of 3 at the first position, making the second character identical to the first, in which case the first character's offset would have been 1 rather than 3.

  Also, the last two entries (the "?"s) are largely or entirely redundant: the characters at these positions are already known from the earlier offsets that reference them, so these end positions carry little or no information. That is, if a character has an offset of m (offsets are measured to the right), the next character cannot have offset m-1, and the character after that cannot have offset m-2, because such "forbidden" offsets would collide and conflict with the offset already given.

  In some embodiments, the system makes use of forbidden codes in decoding and/or representing the data. For example, in some embodiments the system takes advantage of forbidden codes by using them to carry additional data, such as exception codes. Thus, whenever the sequence of offsets contains the pattern "m, m-1", the system can enter a special mode or routine, or treat the codes that follow specially.

  This effect can be cumulative: each offset must satisfy all the constraints imposed by the offsets seen before it. As an example, consider an input data string "xyzyyzzzzyx" with offsets "5, 2, 4, 1, 5, 5, 1, 1, ?, ?". Each of these entries is subject to the following constraints:

5 - can be anything; there is no prior information.
2 - cannot be 4 (otherwise the entry above would have been 1 instead of 5).
4 - cannot be 3 or 1 (these would conflict with the 5 and 2 above).
1 - cannot be 2 (conflicts with the 5) or 3 (conflicts with the 4).
5 - cannot be 2 (conflicts with the 4).
5 - cannot be 4 or 1 (conflicts with the 4 and 5).
1 - cannot be 4 or 3 (conflicts with the 5 and 5).
1 - cannot be 3 or 2 (conflicts with the 5 and 5).
? - cannot be 2 or 1 (conflicts with the 5 and 5).
? - cannot be 1 (conflicts with the 5).
? - can be anything, since nothing beyond this position is referenced.

  Put another way, any offset that extends over more than one character position imposes logical constraints on all the intervening positions.

  One use of this property is error detection. For example, if a forbidden code is received, the system can interpret it as an error and either report it or take corrective action.
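The simplest such check, for the "m, m-1" pattern described above, can be sketched as follows (the function name is illustrative):

```python
def forbidden_positions(offs):
    """Return indices i where an offset m is immediately followed by m-1.
    Such a pair cannot occur in valid data: both positions would repeat
    at the same later position, forcing the first offset to be 1."""
    bad = []
    for i in range(len(offs) - 1):
        a, b = offs[i], offs[i + 1]
        if a is not None and b is not None and b == a - 1:
            bad.append(i)
    return bad
```

For example, the valid offset sequence for "011010110" passes cleanly, while a sequence beginning "3, 2, ..." is flagged at position 0.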

  Another use is the encoding of additional data. In some embodiments, the system interprets a forbidden code as an "escape sequence" that triggers a special action or process, or causes subsequent embedded data to be read from the stream. The system can then resynchronize with the input stream (since the escape sequence carries information about its length, or its length can be known or estimated by the system).

  Yet another use of these forbidden codes is to reduce the amount of information needed to encode the data. One algorithm here is: "If this is the first (smallest) forbidden code, treat it as the first allowed/valid code; if it is the second (next smallest) forbidden code, treat it as the second allowed/valid code," and so on. In this way a smaller (forbidden) number can stand for a larger (allowed) code, reducing the number of bits used to store and transmit the data.

  These uses of forbidden codes can also be combined: for example, the first (smallest) forbidden code can be interpreted as an escape sequence, while higher codes are mapped to the next available valid codes. Other uses of these codes can be applied in the same way.

  In general, little additional information is carried by a repetitive sequence. Thus, for example, the character sequence "abcabcabcabcabc" is more concisely represented as "5(abc)". The same applies to the offset notation of a repeated sequence: the offsets of "abcabcabcabcabc" are "333333333333???", which can be expressed as "12(3)???".
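The count(object) form can be produced by a simple run-length encoder over the offset sequence, again with None standing in for "?"; the formatting here follows the text's "count(object)" convention, and the function name is illustrative.

```python
def count_object(seq):
    """Run-length encode a sequence of offsets into count(object) form,
    e.g. twelve 3s followed by three unknowns -> '12(3)???'."""
    out, i = [], 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1                       # extend the current run
        run = j - i
        if seq[i] is None:
            out.append("?" * run)        # unknown tail positions
        elif run > 1:
            out.append(f"{run}({seq[i]})")
        else:
            out.append(str(seq[i]))
        i = j
    return "".join(out)
```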

  In another example, the sequence "abcbcbcabbbcbcababbcbcbc" is first encoded as the offsets "722222327222233-2222?", then as "74 (2) 2 (3) 74 (2) 2 (3)? 4 (2) 2 (?)", and reduced further as follows:

“2 (74 (2) 2 (3))? 4 (2) 2 (?)”. (The convention shown here is “count(object)”, with parentheses delimiting the object; this can be expressed in any number of ways in a data system.)
In some embodiments, the system omits the iteration count entirely and stores or transmits only the object itself. Applied to the repeating series of offsets in the example above, "abcabcabcabcabc" is first encoded as the offsets "333333333333???", which can also be expressed in count(object) form as "12(3)???". This can simply be stored or transmitted as "+3???", where "+" is an indicator that the object repeats. Alternatively, the system omits any reference to the number of iterations and simply stores or transmits "3".

  One reason this issue of repetitive sequences matters is that the system may not know how much the user has scanned. As an example, the user may scan part of a long row of dashes, "----------------------". Since these dashes may continue for some length, the user may not wish to scan them all the way to the end, and the number of dashes shown may not be known. In this case, in some embodiments, the system simply stores or transmits "a repeating sequence of length 1".

  The same applies to more complex sequences. Suppose, for instance, that the page the user is reading includes the following border marker:

"-***-***-***-***-***-***-***-"
In some embodiments, the system does not require a complete scan of this marker in order to recognize it. The offsets of this sequence can be expressed as "31641153164115..." (counting the space as an object). This can be stored as count-plus-object, or with a "multiple" indicator ("+3164115", using the plus sign as above), or with no indication of multiplicity (simply "3164115").

  These latter two examples closely resemble matching constructs in regular expressions. The "+3164115" example corresponds to "more than one occurrence of a match", and the "3164115" example corresponds to "one or more occurrences of a match". In the latter case, when searching for the sequence in an index or database, the agreed rule is that every sequence is matched by one or more consecutive occurrences of itself.
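The "one or more occurrences" rule maps directly onto regular-expression repetition. A sketch, in which a single stored instance matches any number of consecutive repeats of itself:

```python
import re

def matches_repeats(stored, scanned):
    """True if `scanned` consists of one or more consecutive
    occurrences of the single stored instance `stored`."""
    pattern = "(?:" + re.escape(stored) + ")+"
    return re.fullmatch(pattern, scanned) is not None
```

With this rule, an index need only store one instance of a repeating marker, yet any longer scan of the same marker still matches it.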

  Some of the utility of this construct arises when a source or reference copy of the document is available to the system and the system is trying to find the user's location. If the terminal and/or local system components and the back-end components (e.g., server-based archives, indexes, etc.) both understand and agree on how repeated sequences are handled, then the redundant data can be omitted from storage and from communication. In the "one or more" example of the preceding paragraph, the server that indexes the data can store only a single first instance of the repetitive sequence (as raw data or derived offsets); likewise, the scanning terminal can store or transmit only one instance of the repetitive sequence.

  Described another way, every repetitive sequence is represented by a count or, in an even simpler model, the repetition is ignored entirely. Thus the phrase "*** buy cheap tools here !!! ***" would be indexed or represented as "* buy cheap tools here ! *", or its offsets can be compressed in the same way.

  "11 * 4 ??? 6666666 ??? *** ??? 6 ??? 1 ???? 8 ??? 2 ??? 11 ??? 11?" (offsets greater than 9 are indicated by "*") is compressed as follows:

  "2 (1) * 43 (?) 7 (6)? ** 2 (?) 6-13 (?) 8-22 (?) 2 (1) 2 (?) 2 (1)?" Alternatively, remove all iterations (leave "*" for offsets greater than 9):

  "1 * 4? 6? **? 6? 1? 8? 2.2.1? 1".

  An independent system, such as one operating on a remote server, can then search for sequences that match this compressed notation, or look them up in its index. To do so, each object in the sequence is treated as potentially occurring "one or more times", and code and algorithms analogous to regular-expression matching are run to find these matches.

  Similar efficiencies in storage and/or transmission can be obtained by noting that, with character offsets, every scan of text ends with one or more unknown offsets (indicated by "???" above). This is because, if the user scans from left to right and each offset points to the next matching character on the right, the scan must terminate somewhere, so the last few characters have no known offset: their next occurrence falls outside the scan. In one data-encoding technique these unknowns are represented as zero, while in another embodiment these unknown tails are simply omitted from the transmitted or stored data.
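A minimal sketch of the offset encoding with the unknown tail omitted; here unknown offsets are represented as `None` rather than zero, and the function name is illustrative:

```python
def encode_offsets(text):
    """Offset-encode scanned text: for each character, the number of
    positions to its next occurrence on the right.

    Trailing characters whose next occurrence falls outside the scan
    have no known offset, so the unknown tail is omitted rather than
    stored (the alternative embodiment would store zeros instead).
    """
    offs = []
    for i, ch in enumerate(text):
        nxt = text.find(ch, i + 1)
        offs.append(None if nxt == -1 else nxt - i)
    while offs and offs[-1] is None:  # drop the unknown tail
        offs.pop()
    return offs
```

For example, `encode_offsets("abab")` gives `[2, 2]`: the final "a" and "b" have no known next occurrence and are dropped.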

  Template matching and/or autocorrelation uses one instance of a token, object, character, or symbol as a template for recognizing subsequent occurrences of the same object. A simplified overview follows.

  Suppose a user is scanning a single horizontal text line with a terminal device.

  When a user scans a single horizontal text line with an acquisition device, in some embodiments the system acquires an image of the text, stores it in memory, and/or transmits it. In some embodiments, the system performs on-the-fly template matching, immediately calculating the offsets of matching objects and storing only the individual templates; even these can be discarded once the offsets are known.

  First, the system does not need to know much, if anything, about the shapes of the objects being scanned. In the template-matching process, these shapes emerge as the various templates are discovered.

  This also applies to the horizontal extents of characters. Prior information about white space and character widths (e.g., that the width-to-height ratio of most characters is about x, or that the average word length is about y) can be useful, but is not required. Indeed, in some embodiments the acquisition device ignores white space entirely.

  In some embodiments, when (or after) the user scans a portion of a horizontal text line, the system convolves the line with itself. That is, a copy of the line is effectively slid past itself horizontally while regions that match well are searched for. At the start of this process, convolution can be useful for determining the baseline of the text and for deskewing it, both of which are known techniques in the field of document imaging. Note, however, that matching regions can be found without either of these steps.
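This sliding self-comparison can be sketched in one dimension by reducing a binary line image to a column-wise "ink profile" and correlating it with shifted copies of itself; this 1-D simplification and all names are illustrative assumptions, not the patent's implementation:

```python
def ink_profile(image):
    """Column-wise ink counts of a binary image (list of rows of 0/1)."""
    return [sum(row[x] for row in image) for x in range(len(image[0]))]


def autocorrelate(profile):
    """Slide the profile over itself; a high score at shift d suggests
    that ink shapes roughly repeat d columns apart."""
    n = len(profile)
    scores = []
    for d in range(1, n):
        overlap = sum(profile[x] * profile[x + d] for x in range(n - d))
        scores.append(overlap / (n - d))  # normalize by overlap width
    return scores
```

A profile with a strong peak every four columns, for example, scores highest at shifts that are multiples of four.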

  Matching or nearly matching areas are noted, along with their horizontal extents. In this example, assume that the vertical extent of a match is the full height of the character (some related matching techniques, called "self-recognition", are considered later).

  Depending on the circumstances, this process can use connected-region analysis, in which tokens/objects/characters/symbols are assumed to be formed of "connected" (i.e., contiguous) pixels or ink; in that case, matching connected regions are searched for. Note that, by definition, the area outside a connected ink region is blank, so this implicitly carries white-space information.

  In some embodiments, as another approach, the system uses simple horizontal extents. That is, it pays little or no attention to connected regions or white space, tracking only the horizontal widths and locations of matching regions of ink or pixels; connected-region and white-space analysis can still be introduced to aid processing.

  FIG. 7 is a flow diagram illustrating steps typically performed by the system to process a text-retrieval action. In step 701, the system receives text captured by a user. In step 702, the system preprocesses the text received in step 701. In step 703, the system identifies word and line boundaries in the scanned text. In step 704, the system convolves the text as described above. In step 705, the system uses delimiters to determine the boundaries of unknown regions in the text. In step 706, the system processes the scan to generate a representation of the captured text. In step 707, the system searches for the representation of matching text in a collection of electronic documents. In step 708, if the search of step 707 succeeded, the system continues at step 709 to return an indication of success; otherwise it continues at step 710. In step 710, if the search can be refined, the system continues at step 711; otherwise it continues at step 712 to return an indication of failure. In step 711, the system indicates to the user that refinement is required. After step 711, the system continues at step 701 to receive additional text captured by the user in response to the indication of step 711.
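The search-and-refine loop of FIG. 7 might be driven as follows; the corpus structure (document id mapped to full text), substring lookup, and helper names are illustrative assumptions, not the patent's implementation:

```python
def search_with_refinement(scan, corpus, get_more_text, max_tries=3):
    """Hypothetical driver for the FIG. 7 loop: look the captured text
    up in a document collection; when it matches more than one
    document, ask the user for more text and retry; when it matches
    none, report failure."""
    for _ in range(max_tries):
        hits = [doc for doc, text in corpus.items() if scan in text]
        if len(hits) == 1:
            return ("success", hits[0])      # unique match found
        if not hits:
            return ("failure", None)         # nothing matches
        extra = get_more_text()              # ambiguous: request refinement
        if not extra:
            return ("failure", None)
        scan += extra
    return ("failure", None)
```

With two documents sharing a prefix, an initial ambiguous capture is resolved once the user supplies a few more characters.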

  In matching two regions of text captured from a rendered document, the system encounters the question of what constitutes a "match". Since all physical measurements include error, the matching process is inherently inexact. Thus, in some embodiments, the system estimates how well one region matches another. Multiple tools can be used for this, many of them already known in the fields of OCR, document imaging, and machine vision. In some embodiments, one method of estimating fit used by the system first finds the best alignment of the objects to be compared and then computes the difference between them. For example, for simple black-and-white pixels (no grayscale), the system simply counts pixels that are on in one image/object and off in the other. The count of these "error" pixels approximates the fit.

  This count can be improved as an estimate by normalizing it, i.e., dividing it by the total number of pixels involved. Thus, in various embodiments, the system uses either of the following:

fit_error = #_bad_pixels / #_pixels_in_x_y_region_compared
or
fit_error = #_bad_pixels / #_pixels_in_object
The former considers the number of error pixels relative to the area being compared; the latter considers the number of error pixels relative to the pixels making up the object itself. In various embodiments, the system adds various refinements to these techniques or uses other matching techniques from OCR.
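Both normalizations can be sketched for simple aligned binary images as follows; the function names are illustrative:

```python
def fit_error(img_a, img_b):
    """Count mismatched (XOR) pixels between two aligned binary images
    and normalize by the whole compared x-y region (first formula)."""
    bad = total = 0
    for row_a, row_b in zip(img_a, img_b):
        for pa, pb in zip(row_a, row_b):
            total += 1
            if pa != pb:
                bad += 1
    return bad / total


def fit_error_object(template, candidate):
    """Second variant: normalize by the number of "on" pixels in the
    template object rather than by the whole compared region."""
    bad = sum(pa != pb
              for ra, rb in zip(template, candidate)
              for pa, pb in zip(ra, rb))
    on = sum(p for row in template for p in row)
    return bad / on if on else float("inf")
```

The two measures differ most for sparse objects, where few "on" pixels sit in a large compared region.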

  In the matching process described above, the physical distribution (e.g., the x-y coordinates) of matching and error pixels is potentially important. Loosely speaking, a comparison may be a better fit when the matching pixels are "clustered" (i.e., occur contiguously and close together) and the error pixels are "distributed" (i.e., occur discontinuously and far apart).

  As an example of this, consider two situations. In one, the system compares an image of "r" with an image of "n". Only a small part of the image contains error pixels: the right side of "n" descends to the baseline, while that of "r" does not. Note that the error pixels may not be numerous, but they occur close together and contiguously. Next, consider comparing two images of the letter "n" where the image quality is poor: blurry, faint, or ambiguous. In this case there may be a great many error pixels, but they occur scattered around the character. This illustrates that widely distributed error pixels suggest a smaller true error than closely clustered ones.

  This description of matching leads to another related development, called "self-recognition". Historically, OCR uses direct information about absolute character shapes and fonts to recognize various characters. In some embodiments, the system instead uses information about *relative* character shapes to recognize characters.

  FIGS. 8A-8D show how the letters of the alphabet often have approximately the same *relative* shapes in multiple fonts. FIG. 8A shows the letters "d", "c", and "l" in a lowercase Arial font, and FIG. 8C shows the same characters in lowercase Times New Roman. Although the two fonts are quite different and distinctive, the relationships *between* these characters within a particular font are nearly identical.

  FIGS. 8B and 8D are diagrams showing that "d" can be constructed with reasonable accuracy in either font by combining "c" and "l"; or, as a pseudo-algebraic statement, "d = c + l". There are many other similar relationships that hold across many fonts, such as "e = c + −", "P = B − b + l", and "8 = 6 + 9". These relationships are not exact; rather, they mean that the relative shapes of various characters are *almost* the same across different fonts. Though only approximate, such relationships allow a group of known characters in a font to be used to recognize (or indeed construct) additional characters.

  One application of this technique is in OCR. When the system has determined a few characters of a font (perhaps starting with no information about that font), it can predict and/or recognize the remaining unknown characters.

  It is often possible to establish the identities of several characters using simple cryptanalytic techniques such as character-frequency and n-gram analysis. Observations about where and how often a character appears in words, and relative to which other characters, can provide initial information about a character even when it appears in a font about which nothing is known. The simplest examples are the single-letter words "a" and "I": on encountering a one-letter word, the system immediately knows it is probably one of these two letters. Similarly, doubled characters (e.g., "ee", "oo") are unlikely to be "hh" or "qq".

  Suppose the system has learned the letters "d" and "o" and then encounters the letter "c" (not yet knowing what it is). By comparing the shape of this character with the learned letter shapes, the system determines that it matches "d" except for the vertical stroke, and that it matches "o" except at the right edge. Knowing that these are the relative properties of "d", "o", and "c" in most fonts, the system identifies the new character as "c". In doing so, the system adds the character to its repertoire of known symbols and uses it in further character decoding. Thus the system's information about a font for which it has no specific data can be increased and extended incrementally, based on *general* information about *relative* character shapes.

  One way to implement this self-recognition is as an m × m matrix of general relationships, where m is the number of letters in the alphabet. Each entry in this table describes how letter i is related to letter j, possibly with general relative-shape rules ("letter i extends below the baseline but letter j does not") and, in some cases, references to additional letters of the alphabet. For example, the entry for row "d" and column "c" may be "−l" (subtract "l" from "d" to form "c"), and the entry for row "c" and column "d" may be "+l" (add "l" to "c" to form "d").
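A small corner of such a relationship table might be sketched as follows; the sparse dict-of-pairs layout and the particular entries are illustrative assumptions, not a complete m × m matrix:

```python
# Entries are keyed by (row_letter, column_letter); the value is the
# edit needed to build the column letter from the row letter.
RELATION = {
    ("d", "c"): "-l",   # subtract "l" from "d" to form "c"
    ("c", "d"): "+l",   # add "l" to "c" to form "d"
    ("c", "e"): "+-",   # add a horizontal bar to "c" to form "e"
}


def derive(known_glyph, target, relation=RELATION):
    """Return the edit needed to build `target` from `known_glyph`,
    or None when the table records no relationship between them."""
    return relation.get((known_glyph, target))
```

A full implementation would populate entries for every letter pair and attach relative-shape rules alongside the edits.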

  In a special case of self-recognition, a set of symbols or tokens (i.e., a new font) is constructed with the explicit purpose of being self-recognizable as described above; that is, given a subset of these symbols, the others can be derived or inferred. Because their components/designs are interrelated, these symbols can also be used to error-check one another. This redundancy likewise provides robustness against noise.

  To use this technique, the system need not know *any* of the symbols in advance. Using one of the template-matching or correlation methods described above, the system can determine the entire set of symbols in use from which symbols match and which do not. Even if the system has never seen this set of symbols before, it can then use the known relationships between symbols to verify each of them, or even to generate symbols that are missing or unused.

  FIG. 9 is a diagram illustrating an approach used by the system in some embodiments to learn a new set of symbols using self-recognition. The figure shows a set of symbols 900 built from small "boxes" in a 2 × 2 array. Using white-space and baseline information to establish vertical and horizontal spacing, there are 16 possible symbols: 4 with a single box in one of the corners, 6 arrangements with two boxes, 4 arrangements with three boxes, 1 symbol with all four boxes, and 1 blank symbol with no boxes.
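The count of 16 symbols can be checked by enumerating the 2 × 2 box alphabet directly; this sketch (with illustrative names) represents each symbol as a tuple of four corner bits:

```python
from itertools import product


def two_by_two_symbols():
    """Enumerate the 16 symbols of the 2x2-box alphabet: every way of
    filling or leaving empty each of the four corner positions."""
    return list(product((0, 1), repeat=4))


def count_by_boxes(symbols):
    """Group the symbols by how many boxes each contains."""
    counts = {}
    for s in symbols:
        counts[sum(s)] = counts.get(sum(s), 0) + 1
    return counts
```

Counting by number of boxes reproduces the breakdown above: 1 blank, 4 single-box, 6 two-box, 4 three-box, and 1 full symbol.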

  FIG. 10 is a diagram illustrating a subset of the symbols shown in FIG. 9 that self-defines the vertical and horizontal extents. Each symbol of the subset 1000 has a width of two boxes and a height of two boxes.

  FIG. 11 is a diagram showing relationships between some of the symbols shown in the preceding figures. For example, relationships 1101 and 1102 each show how one symbol of the set can be composed from two other symbols. Note that the symbols with the best redundancy and/or error-correction characteristics can be selected as the subset, and that some symbols can be constructed in at least two different ways from combinations of other symbols in the set.

  Multiple dedicated barcode-scanning devices aimed at the consumer market have failed commercially, perhaps because barcode scanning alone does not provide sufficient value or functionality to be widely adopted by consumers. On the other hand, combining document marking/scanning with barcode scanning creates a combined capability that is useful and interesting to a wide range of consumers. This combination of text and barcode imaging can be accomplished with many of the same components, hardware, and software. One particularly interesting new element, however, is a scanning device used with a reference or source copy of the rendered document being scanned, so that the user's actions on the rendered document can be interpreted with respect to, and mapped onto, the source or reference document.

  Another means of locating the source document is for the marked (rendered) version to carry machine-readable code providing instructions for identifying the document and/or retrieving the source document (e.g., a URL). This code may be a barcode, a machine-readable font, or any other machine-readable means of conveying this information.

  An interesting extension of machine-readable document IDs and document locators is to include access information for the data. That is, when the document is protected by a password or hidden behind a corporate firewall, for example, the machine-readable code can contain information that allows the system to access the document. Note that additional data may still be required from the user, or from another individual confirming the request, to access the document.

  In some embodiments, the system maintains the relationships between a user's notes and marks, the content of the document, and the functions associated with those notes and marks. This can be important, for example, when the source document is re-rendered in a different style or format and the system wants to re-display the user's marks in the appropriate locations. As an example, if a user draws a line through a run of text characters, the system may want to show that line through the same text in subsequent renderings.

  One means of accomplishing this is to "anchor" each of the user's marks, or groups of marks, to recognizable features in the document (e.g., individual words, punctuation marks, images, etc.). In some embodiments, the system anchors a mark by finding the closest such feature in the source document (e.g., by geometric distance) and associating the mark with that feature.

  In some embodiments, the system finds neighboring features, weights or ranks them, and then associates the user's mark with a highly ranked feature. As an example, on encountering a marginal note by the user, the system examines all adjacent words and associates the mark with the most relevant one (for example, with a keyword related to the topic of the source document's text rather than with a stopword). This aspect of the system can use any of the many known techniques for identifying important elements in a document.
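A minimal sketch of anchoring by geometric distance, assuming features are supplied as (name, x, y) tuples; the names and the squared-distance criterion are illustrative, and the weighting/ranking step described above could be layered on top:

```python
def anchor_mark(mark_xy, features):
    """Anchor a user's mark to the geometrically closest document
    feature; `features` is a list of (name, x, y) tuples."""
    mx, my = mark_xy
    return min(features,
               key=lambda f: (f[1] - mx) ** 2 + (f[2] - my) ** 2)[0]
```

Replacing the distance key with a combined distance-and-salience score would implement the ranked variant.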

  Thus, notes and marks with associated anchor points can be attached to a digital document so that they are available when the document is viewed or edited (e.g., in a word processor). For example, all such notes can be represented by special symbols embedded in the document (and optionally stored with it). The user can then hover the mouse over these symbols, or click them, to make the embedded or associated notes or marks appear. Similarly, user annotations can be turned on or off via menu commands.

Conclusion
  Those skilled in the art will appreciate that the system described above can be adapted and extended in various ways. Although the foregoing description refers to particular embodiments, the scope of the present invention is defined solely by the following claims and the elements recited therein.

FIG. 1 is a data flow diagram showing the flow of information in an embodiment of the core system.
FIG. 2 is a component diagram of components included in a typical implementation of the system in the context of a typical operating environment.
FIG. 3 is a block diagram of an embodiment of a scanner.
FIG. 4 is a diagram illustrating a typical environment in which embodiments of the system operate.
FIG. 5 is a flow diagram illustrating steps typically performed by the system to implement a bookmark.
FIG. 6 is a diagram illustrating a scanning device with an integrated highlighter and pen.
FIG. 7 is a flow diagram illustrating steps typically performed by the system to process a text-retrieval action.
FIGS. 8A-8D show how the letters of the alphabet often have approximately the same "relative" shapes in multiple fonts.
FIG. 9 is a diagram illustrating an approach used by the system in some embodiments to learn a new set of symbols using self-recognition.
FIG. 10 is a diagram showing a subset of the symbols shown in FIG. 9 that self-defines the vertical and horizontal extents.
FIG. 11 is a diagram showing relationships between some of the symbols shown in the preceding figures.

Claims (14)

  1. A method for processing data obtained from a rendered document, comprising:
    Receiving user input to select a portion of the rendered document, wherein the selected portion of the document includes text;
    Determining an encoded representation of the text included in the selected document portion, wherein determining the encoded representation comprises determining a sequence of offsets representing the text, each offset representing a number of character positions that separate repeated instances of a character in the text of the selected document portion; and
    Using the encoded representation of the text, (1) identifying an electronic document included in a set of electronic documents, the identified electronic document including the selected document portion, and (2) identifying the location within the identified electronic document where the selected document portion occurs;
    Displaying a menu of selectable user actions associated with the identified location in the identified electronic document on a display device;
    Accessing a markup layer associated with the identified electronic document;
    Identifying an action defined by the markup layer for the identified location in the identified electronic document, wherein the identified action is displayed in the menu of selectable user actions; and
    Making the identified action available to a user who provided the received user input.
  2.   The method of claim 1, wherein the identified electronic document and the identified portion are identified without determining a standard character set representation of the portion of the selected document.
  3.   The method of claim 2, wherein the identifying is performed using an image of the portion of the selected document.
  4. The method of claim 2, wherein determining the encoded representation is performed using an image of the portion of the selected document.
  5. The method of claim 4, further comprising encoding both of two selected document portions into encoded representations, wherein the two selected document portions comprise text in two different natural languages.
  6. The method of claim 5, further comprising encoding text-independent data contained in the selected document portion into the encoded representation by including a forbidden value in the encoded representation.
  7. The method of claim 2, wherein the selected document portion is unique within the set of electronic documents, and wherein a version of the selected document portion from which a single word has been deleted is not unique within the set of electronic documents.
  8. A method for processing data obtained from a rendered document, comprising:
    Receiving user input to select a portion of the rendered document, wherein the selected portion of the document includes text;
    Determining an encoded representation of the text included in the selected document portion, wherein determining the encoded representation comprises determining a sequence of offsets representing the text, each offset representing a number of character positions that separate repeated instances of a character in the text of the selected document portion; and
    Using the encoded representation of the text, (1) identifying a plurality of electronic documents included in the set of electronic documents, each of the identified plurality of electronic documents including the selected document portion, and (2) identifying the location within each of the plurality of electronic documents where the selected document portion occurs;
    Identifying a user that provided the received user input;
    Retrieving context information for the identified user;
    Identifying, from among the plurality of electronic documents, the document most likely to correspond to the rendered document, based on the content of the retrieved context information.
  9. A method for processing data obtained from a rendered document, comprising:
    Receiving user input to select a portion of the rendered document, wherein the selected portion of the document includes text;
    Determining an encoded representation of the text included in the selected document portion, wherein determining the encoded representation comprises determining a sequence of offsets representing the text, each offset representing a number of character positions that separate repeated instances of a character in the text of the selected document portion; and
    Using the encoded representation of text , (1) identifying an electronic document included in a set of electronic documents, the identified electronic document including a portion of the selected document; And (2) identifying the location within the identified electronic document where the selected portion of the document occurs;
    Receiving user input identifying annotation text attached to the identified location in the identified electronic document;
    Storing the identified annotation text together with an indication of the identified electronic document and an identified location in a location outside the identified electronic document;
    In response to a user request for the annotation,
    Retrieving at least a portion of the identified electronic document that includes the identified location;
    Displaying an area of the identified electronic document that includes the identified position;
    Retrieving the identified annotation text; and
    Displaying the identified annotation text in relation to the area of the displayed document at a location adjacent to the identified location.
  10. A computer readable medium, wherein the content of the computer readable medium causes a computer system to perform a method of processing data obtained from a rendered document , the method comprising:
    Receiving user input selecting a portion of the rendered document, wherein the selected document portion includes text;
    Determining an encoded representation of the text included in the selected document portion, wherein determining the encoded representation comprises determining a sequence of offsets representing the text, each offset representing a number of character positions that separate repeated instances of a character in the text of the selected document portion; and
    Using the encoded representation of text , (1) identifying an electronic document included in a set of electronic documents, the identified electronic document including a portion of the selected document; And (2) identifying the location within the identified electronic document where the selected portion of the document occurs;
    Displaying, on a display device, a menu of selectable user actions associated with the identified location in the identified electronic document, wherein at least one of the selectable user actions was associated with the identified location prior to receipt of the user input;
    Accessing a markup layer associated with the identified electronic document;
    Identifying an action defined by the markup layer for the identified location in the identified electronic document, wherein the identified action is displayed in the menu of selectable user actions; and
    Making the identified action available to a user who provided the received user input.
  11. A computer readable medium, wherein the content of the computer readable medium causes a computer system to perform a method of processing data obtained from a rendered document , the method comprising:
    Receiving user input selecting a portion of the rendered document, wherein the selected document portion includes text;
    Determining an encoded representation of the text included in the selected document portion, wherein determining the encoded representation comprises determining a sequence of offsets representing the text, each offset representing a number of character positions that separate repeated instances of a character in the text of the selected document portion; and
    Using the encoded representation of text , (1) identifying an electronic document included in a set of electronic documents, the identified electronic document including a portion of the selected document; And (2) identifying the location within the identified electronic document where the selected portion of the document occurs;
    Receiving user input identifying annotation text for the identified location in the identified electronic document;
    Storing the identified annotation text together with an indication of the identified electronic document and an identified location in a location outside the identified electronic document;
    In response to a user request for the annotation,
    Retrieving at least a portion of the identified electronic document that includes the identified location;
    Displaying an area of the identified electronic document that includes the identified position;
    Retrieving the identified annotation text; and
    Displaying the identified annotation text in association with the area of the displayed document at a location adjacent to the identified location.
  12.   The method of claim 1, wherein the identified action comprises an action of purchasing an item referenced by the identified electronic document at the identified location.
  13.   The method of claim 1, wherein the identified action comprises an action of establishing a bookmark at the identified location within the identified electronic document.
  14. A computer readable medium, wherein the content of the computer readable medium causes a computer system to perform a method of processing text acquisition, the method comprising:
    Receiving an encoded representation of text optically captured by a user from a rendered document, wherein the encoded representation of the text comprises a sequence of offsets representing the text, each offset representing a number of character positions that separate repeated instances of a character in the text; and
    Comparing the encoded representation of the received text with a full-text index of a set of documents;
    Determining that an encoded representation of the received text does not exist in the index;
    Identifying text that is similar to an encoded representation of the received text that is present in the index;
    Identifying the rendered document containing the similar text by using the index;
    Informing the user that the acquisition was performed from the identified rendered document.
JP2007509565A 2004-02-15 2005-04-19 Processing techniques for visually acquired data from rendered documents Active JP5102614B2 (en)

Priority Applications (187)

Application Number Priority Date Filing Date Title
US56348504P true 2004-04-19 2004-04-19
US56352004P true 2004-04-19 2004-04-19
US60/563,485 2004-04-19
US60/563,520 2004-04-19
US56468804P true 2004-04-23 2004-04-23
US56484604P true 2004-04-23 2004-04-23
US60/564,846 2004-04-23
US60/564,688 2004-04-23
US56666704P true 2004-04-30 2004-04-30
US60/566,667 2004-04-30
US57138104P true 2004-05-14 2004-05-14
US57156004P true 2004-05-14 2004-05-14
US60/571,560 2004-05-14
US60/571,381 2004-05-14
US57171504P true 2004-05-17 2004-05-17
US60/571,715 2004-05-17
US58920104P true 2004-07-19 2004-07-19
US58920304P true 2004-07-19 2004-07-19
US58920204P true 2004-07-19 2004-07-19
US60/589,203 2004-07-19
US60/589,202 2004-07-19
US60/589,201 2004-07-19
US59882104P true 2004-08-02 2004-08-02
US60/598,821 2004-08-02
US60289704P true 2004-08-18 2004-08-18
US60295604P true 2004-08-18 2004-08-18
US60294704P true 2004-08-18 2004-08-18
US60293004P true 2004-08-18 2004-08-18
US60289804P true 2004-08-18 2004-08-18
US60289604P true 2004-08-18 2004-08-18
US60292504P true 2004-08-18 2004-08-18
US60/602,925 2004-08-18
US60/602,956 2004-08-18
US60/602,898 2004-08-18
US60/602,896 2004-08-18
US60/602,897 2004-08-18
US60/602,947 2004-08-18
US60/602,930 2004-08-18
US60308204P true 2004-08-19 2004-08-19
US60346604P true 2004-08-19 2004-08-19
US60308104P true 2004-08-19 2004-08-19
US60/603,466 2004-08-19
US60/603,082 2004-08-19
US60/603,081 2004-08-19
US60349804P true 2004-08-20 2004-08-20
US60335804P true 2004-08-20 2004-08-20
US60/603,498 2004-08-20
US60/603,358 2004-08-20
US60410004P true 2004-08-23 2004-08-23
US60410204P true 2004-08-23 2004-08-23
US60409804P true 2004-08-23 2004-08-23
US60410304P true 2004-08-23 2004-08-23
US60/604,098 2004-08-23
US60/604,103 2004-08-23
US60/604,102 2004-08-23
US60/604,100 2004-08-23
US60510504P true 2004-08-27 2004-08-27
US60522904P true 2004-08-27 2004-08-27
US60/605,229 2004-08-27
US60/605,105 2004-08-27
US61358904P true 2004-09-27 2004-09-27
US61363204P true 2004-09-27 2004-09-27
US61345504P true 2004-09-27 2004-09-27
US61324204P true 2004-09-27 2004-09-27
US61333904P true 2004-09-27 2004-09-27
US61345604P true 2004-09-27 2004-09-27
US61360204P true 2004-09-27 2004-09-27
US61362804P true 2004-09-27 2004-09-27
US61336104P true 2004-09-27 2004-09-27
US61363304P true 2004-09-27 2004-09-27
US61345404P true 2004-09-27 2004-09-27
US61346004P true 2004-09-27 2004-09-27
US61324304P true 2004-09-27 2004-09-27
US61346104P true 2004-09-27 2004-09-27
US61334104P true 2004-09-27 2004-09-27
US61363404P true 2004-09-27 2004-09-27
US61340004P true 2004-09-27 2004-09-27
US61334004P true 2004-09-27 2004-09-27
US60/613,243 2004-09-27
US60/613,632 2004-09-27
US60/613,461 2004-09-27
US60/613,634 2004-09-27
US60/613,341 2004-09-27
US60/613,242 2004-09-27
US60/613,361 2004-09-27
US60/613,339 2004-09-27
US60/613,602 2004-09-27
US60/613,340 2004-09-27
US60/613,460 2004-09-27
US60/613,400 2004-09-27
US60/613,628 2004-09-27
US60/613,455 2004-09-27
US60/613,589 2004-09-27
US60/613,633 2004-09-27
US60/613,454 2004-09-27
US60/613,456 2004-09-27
US61553804P true 2004-10-01 2004-10-01
US61511204P true 2004-10-01 2004-10-01
US61537804P true 2004-10-01 2004-10-01
US60/615,378 2004-10-01
US60/615,538 2004-10-01
US60/615,112 2004-10-01
US60/617,122 2004-10-07
US60/622,906 2004-10-28
US11/004,637 US7707039B2 (en) 2004-02-15 2004-12-03 Automatic modification of web pages
US60/633,453 2004-12-06
US60/633,678 2004-12-06
US60/633,452 2004-12-06
US60/633,486 2004-12-06
US60/634,739 2004-12-09
US60/634,627 2004-12-09
US60/647,684 2005-01-26
US60/648,746 2005-01-31
US60/653,372 2005-02-15
US60/653,669 2005-02-16
US60/653,847 2005-02-16
US60/653,679 2005-02-16
US60/653,663 2005-02-16
US60/653,899 2005-02-16
US60/654,379 2005-02-17
US60/654,326 2005-02-18
US60/654,196 2005-02-18
US60/654,368 2005-02-18
US60/655,279 2005-02-22
US60/655,697 2005-02-22
US60/655,987 2005-02-22
US60/655,281 2005-02-22
US60/655,280 2005-02-22
US60/657,309 2005-02-28
US11/097,093 US20060041605A1 (en) 2004-04-01 2005-04-01 Determining actions involving captured information and electronic content associated with rendered documents
US11/097,835 US7831912B2 (en) 2004-02-15 2005-04-01 Publishing techniques for adding value to a rendered document
US11/097,961 US20060041484A1 (en) 2004-04-01 2005-04-01 Methods and systems for initiating application processes by data capture from rendered documents
US11/097,836 US20060041538A1 (en) 2004-02-15 2005-04-01 Establishing an interactive environment for rendered documents
US11/097,089 US8214387B2 (en) 2004-02-15 2005-04-01 Document enhancement system and method
US11/097,981 US7606741B2 (en) 2004-02-15 2005-04-01 Information gathering system and method
US11/098,038 US7599844B2 (en) 2004-02-15 2005-04-01 Content access with handheld document data capture devices
US11/097,833 US8515816B2 (en) 2004-02-15 2005-04-01 Aggregate analysis of text captures performed by multiple users from rendered documents
US11/097,828 US7742953B2 (en) 2004-02-15 2005-04-01 Adding information or functionality to a rendered document via association with an electronic counterpart
US11/098,016 US7421155B2 (en) 2004-02-15 2005-04-01 Archive of text captures from rendered documents
US11/098,043 US20060053097A1 (en) 2004-04-01 2005-04-01 Searching and accessing documents on private networks for use with captures from rendered documents
US11/098,042 US7593605B2 (en) 2004-02-15 2005-04-01 Data capture from rendered documents using handheld device
US11/096,704 US7599580B2 (en) 2004-02-15 2005-04-01 Capturing text from rendered documents using supplemental information
US11/097,103 US7596269B2 (en) 2004-02-15 2005-04-01 Triggering actions in response to optically or acoustically capturing keywords from a rendered document
US11/098,014 US8019648B2 (en) 2004-02-15 2005-04-01 Search engines and systems with handheld document data capture devices
PCT/US2005/013297 WO2005101192A2 (en) 2004-04-19 2005-04-19 Processing techniques for visual capture data from a rendered document

Publications (3)

Publication Number Publication Date
JP2008516297A JP2008516297A (en) 2008-05-15
JP2008516297A6 JP2008516297A6 (en) 2008-09-25
JP5102614B2 true JP5102614B2 (en) 2012-12-19

Family

ID=37684666

Family Applications (2)

Application Number Title Priority Date Filing Date
JP2007509565A Active JP5102614B2 (en) 2004-02-15 2005-04-19 Processing techniques for visually acquired data from rendered documents
JP2011248290A Active JP5496987B2 (en) 2004-02-15 2011-11-14 Processing techniques for visually acquired data from rendered documents

Family Applications After (1)

Application Number Title Priority Date Filing Date
JP2011248290A Active JP5496987B2 (en) 2004-02-15 2011-11-14 Processing techniques for visually acquired data from rendered documents

Country Status (4)

Country Link
EP (1) EP1759278A4 (en)
JP (2) JP5102614B2 (en)
KR (1) KR20070092596A (en)
WO (1) WO2005101192A2 (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080313172A1 (en) 2004-12-03 2008-12-18 King Martin T Determining actions involving captured information and electronic content associated with rendered documents
US9116890B2 (en) 2004-04-01 2015-08-25 Google Inc. Triggering actions in response to optically or acoustically capturing keywords from a rendered document
US8874504B2 (en) 2004-12-03 2014-10-28 Google Inc. Processing techniques for visual capture data from a rendered document
US8620083B2 (en) 2004-12-03 2013-12-31 Google Inc. Method and system for character recognition
US8447066B2 (en) 2009-03-12 2013-05-21 Google Inc. Performing actions based on capturing information from rendered documents, such as documents under copyright
US8346620B2 (en) 2004-07-19 2013-01-01 Google Inc. Automatic modification of web pages
US7990556B2 (en) 2004-12-03 2011-08-02 Google Inc. Association of a portable scanner with input/output and storage devices
US20070300142A1 (en) 2005-04-01 2007-12-27 King Martin T Contextual dynamic advertising based upon captured rendered text
US9143638B2 (en) 2004-04-01 2015-09-22 Google Inc. Data capture from rendered documents using handheld device
US20120041941A1 (en) 2004-02-15 2012-02-16 Google Inc. Search Engines and Systems with Handheld Document Data Capture Devices
US9275052B2 (en) * 2005-01-19 2016-03-01 Amazon Technologies, Inc. Providing annotations of a digital work
US8300261B2 (en) * 2006-02-24 2012-10-30 Avery Dennison Corporation Systems and methods for retrieving printable media templates
JP2009540404A (en) * 2006-06-06 2009-11-19 Exbiblio B.V. Contextual dynamic advertising based upon captured rendered text
US9672533B1 (en) 2006-09-29 2017-06-06 Amazon Technologies, Inc. Acquisition of an item based on a catalog presentation of items
US9665529B1 (en) 2007-03-29 2017-05-30 Amazon Technologies, Inc. Relative progress and event indicators
US7716224B2 (en) 2007-03-29 2010-05-11 Amazon Technologies, Inc. Search and indexing on a user device
US8266173B1 (en) 2007-05-21 2012-09-11 Amazon Technologies, Inc. Search results generation and sorting
JP5299625B2 (en) * 2009-02-13 2013-09-25 NEC Corporation Operation support apparatus, operation support method, and program
KR101015740B1 (en) * 2009-02-18 2011-02-24 Samsung Electronics Co., Ltd. Character recognition method and apparatus
CN102349087B (en) 2009-03-12 2015-05-06 谷歌公司 Automatically providing content associated with captured information, such as information captured in real-time
US9081799B2 (en) 2009-12-04 2015-07-14 Google Inc. Using gestalt information to identify locations in printed information
US9323784B2 (en) 2009-12-09 2016-04-26 Google Inc. Image search using text-based elements within the contents of images
US8340429B2 (en) 2010-09-18 2012-12-25 Hewlett-Packard Development Company, Lp Searching document images
US9378290B2 (en) 2011-12-20 2016-06-28 Microsoft Technology Licensing, Llc Scenario-adaptive input method editor
CN104428734A (en) 2012-06-25 2015-03-18 微软公司 Input Method Editor application platform
WO2014032244A1 (en) * 2012-08-30 2014-03-06 Microsoft Corporation Feature-based candidate selection
CN105264486B (en) * 2012-12-18 2018-10-12 Thomson Reuters Global Resources Unlimited Company Mobile-accessible systems and processes for an intelligent study platform
US9514376B2 (en) * 2014-04-29 2016-12-06 Google Inc. Techniques for distributed optical character recognition and distributed machine language translation
KR101995540B1 (en) * 2016-06-03 2019-07-15 주식회사 허브케이 Apparatus and method for correcting word errors in image reading/input

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5146552A (en) * 1990-02-28 1992-09-08 International Business Machines Corporation Method for associating annotation with electronically published material
JP3017851B2 (en) * 1991-07-31 2000-03-13 Canon Inc Image storage device
JPH06282375A (en) * 1993-03-29 1994-10-07 Casio Comput Co Ltd Information processor and electronic pen
US5640193A (en) * 1994-08-15 1997-06-17 Lucent Technologies Inc. Multimedia service access by reading marks on an object
AU4851300A (en) * 1999-05-19 2000-12-05 Digimarc Corporation Methods and systems for controlling computers or linking to internet resources from physical and electronic objects
JPH10134004A (en) * 1996-10-28 1998-05-22 Casio Comput Co Ltd Image data processing system
JP4183311B2 (en) * 1997-12-22 2008-11-19 Ricoh Co Ltd Document annotation method, annotation device, and recording medium
JPH11212691A (en) * 1998-01-21 1999-08-06 Fuji Xerox Co Ltd Method and device for pen input
JP2000123114A (en) * 1998-10-15 2000-04-28 Casio Comput Co Ltd Handwritten character input device and storage medium
GB9922214D0 (en) * 1999-09-20 1999-11-17 Ncr Int Inc Creation transmission and retrieval of information
US7337389B1 (en) * 1999-12-07 2008-02-26 Microsoft Corporation System and method for annotating an electronic document independently of its content
GB2366033B (en) * 2000-02-29 2004-08-04 Ibm Method and apparatus for processing acquired data and contextual information and associating the same with available multimedia resources
JP4261779B2 (en) * 2000-03-31 2009-04-30 Fujitsu Ltd Data compression apparatus and method
US20010053252A1 (en) * 2000-06-13 2001-12-20 Stuart Creque Method of knowledge management and information retrieval utilizing natural characteristics of published documents as an index method to a digital content store
AU9686601A (en) * 2000-09-05 2002-03-22 Zaplet Inc Methods and apparatus providing electronic messages that are linked and aggregated
JP2002269253A (en) * 2001-03-13 2002-09-20 Ricoh Co Ltd Electronic document conversion service system and accounting method of electronic document conversion service system
US7239747B2 (en) * 2002-01-24 2007-07-03 Chatterbox Systems, Inc. Method and system for locating position in printed texts and delivering multimedia information
JP2003216631A (en) * 2002-01-25 2003-07-31 Canon Inc Information processor, information delivery device, retrieval device, information acquisition system and method, computer readable recording media, and computer program
JP2004050722A (en) * 2002-07-23 2004-02-19 Canon Inc Printer

Also Published As

Publication number Publication date
WO2005101192A2 (en) 2005-10-27
JP5496987B2 (en) 2014-05-21
WO2005101192A3 (en) 2007-10-11
EP1759278A4 (en) 2009-05-06
JP2008516297A (en) 2008-05-15
EP1759278A2 (en) 2007-03-07
JP2012094156A (en) 2012-05-17
KR20070092596A (en) 2007-09-13

Similar Documents

Publication Publication Date Title
CN101681350B (en) Providing annotations of digital work
CN102625937B (en) Architecture for responding to visual query
US7793824B2 (en) System for enabling access to information
US20120134590A1 (en) Identifying Matching Canonical Documents in Response to a Visual Query and in Accordance with Geographic Information
US20130021346A1 (en) 2013-01-24 Knowledge Acquisition Multiplex Facilitates Concept Capture and Promotes Time on Task
US20070130117A1 (en) Method of Providing Information via a Printed Substrate with Every Interaction
US8811742B2 (en) Identifying matching canonical documents consistent with visual query structural information
Guyon et al. UNIPEN project of on-line data exchange and recognizer benchmarks
US20080104503A1 (en) System and Method for Creating and Transmitting Multimedia Compilation Data
US9400806B2 (en) Image triggered transactions
US8332401B2 (en) Method and system for position-based image matching in a mixed media environment
US8600989B2 (en) Method and system for image matching in a mixed media environment
US10073859B2 (en) System and methods for creation and use of a mixed media environment
US8335789B2 (en) Method and system for document fingerprint matching in a mixed media environment
US8521737B2 (en) Method and system for multi-tier image matching in a mixed media environment
US9171202B2 (en) Data organization and access for mixed media document system
US8838591B2 (en) Embedding hot spots in electronic documents
US8949287B2 (en) Embedding hot spots in imaged documents
US7917554B2 (en) Visibly-perceptible hot spots in documents
US9405751B2 (en) Database for mixed media document system
US7707039B2 (en) Automatic modification of web pages
US8385589B2 (en) Web-based content detection in images, extraction and recognition
US8989431B1 (en) Ad hoc paper-based networking with mixed media reality
US20110302197A1 (en) System for providing information via context searching of printed substrate
US9357098B2 (en) System and methods for use of voice mail and email in a mixed media environment

Legal Events

Date Code Title Description
A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20080331

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20080331

RD03 Notification of appointment of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7423

Effective date: 20081215

RD02 Notification of acceptance of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7422

Effective date: 20081215

RD04 Notification of resignation of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7424

Effective date: 20081224

A711 Notification of change in applicant

Free format text: JAPANESE INTERMEDIATE CODE: A711

Effective date: 20110325

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110512

A601 Written request for extension of time

Free format text: JAPANESE INTERMEDIATE CODE: A601

Effective date: 20110812

A602 Written permission of extension of time

Free format text: JAPANESE INTERMEDIATE CODE: A602

Effective date: 20110819

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20111114

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20120223

A601 Written request for extension of time

Free format text: JAPANESE INTERMEDIATE CODE: A601

Effective date: 20120523

A602 Written permission of extension of time

Free format text: JAPANESE INTERMEDIATE CODE: A602

Effective date: 20120530

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20120823

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20120913

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20120928

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20151005

Year of fee payment: 3

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

S533 Written request for registration of change of name

Free format text: JAPANESE INTERMEDIATE CODE: R313533

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250