WO1999045521A1 - Multimedia linking device with page identification and active physical scrollbar - Google Patents


Info

Publication number
WO1999045521A1
Authority
WO
Grant status
Application
Patent type
Prior art keywords
page
system
time
book
user
Prior art date
Application number
PCT/US1999/004823
Other languages
French (fr)
Inventor
Barry M. Arons
Lisa J. Stifelman
Stephen D. Fantone
Kevin M. Sevigny
Original Assignee
Audiovelocity, Inc.
Priority date
Filing date
Publication date

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/30 Information retrieval; Database structures therefor; File system structures therefor
    • G06F17/30017 Multimedia data retrieval; Retrieval of more than one type of audiovisual media
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/062 Combinations of audio and printed presentations, e.g. magnetically striped cards, talking books, magnetic tapes with printed texts thereon

Abstract

A multimedia linking device automatically links user notations (e.g., handwritten notes) made on a page of a book to time-varying data (e.g., audio). The device includes a page identification feature that automatically identifies the page of the book. An optical sensor reads an identification code printed on each page (and the cover) of the book and processes the code to determine identifying information (e.g., a page number) for the page. The device also includes an active physical scrollbar for controlling and displaying information.

Description

Multimedia Linking Device with Page Identification and Active Physical Scrollbar

Related Applications

This application claims priority to U.S. Provisional Applications, Serial Nos. 60/077,061, 60/077,066, 60/077,067, 60/077,078 and 60/077,098, which were filed on March 6, 1998.

Field of the Invention

The invention relates generally to information processing devices and processes.

Background of the Invention

The present invention addresses the problem of trying to capture and later review orally presented information (e.g., a lecture, meeting, interview, telephone call, conversation, etc.). A listener must simultaneously attend to a talker while attempting to write notes about what is said. A tape recorder can capture exactly what and how things are said; however, it is time consuming and often frustrating to find information on a tape. A user must shuttle between fast forward and rewind to find the portions of interest on the tape. It is difficult to skim through a recording or correlate it with one's handwritten notes.

U.S. Pat. Nos. 5,629,499; 5,734,129; 5,627,349; and 5,243,149 and the CrossPad (described in W.S. Mossberg, The CrossPad Sends Paper-and-Ink Notes To Your PC Screen, Wall Street Journal, 4/9/98, p. B1) are systems that capture writing on paper or a document but do not record audio or video.

There are graphical computer playback applications that allow a user to select a single point in an audio or video recording (e.g., by positioning a cursor in a visual representation of the media) and then type in one or more keywords (U.S. Pat. Nos. 5,786,814; 5,717,879; and 5,717,869; and Cruz et al., Capturing and Playing Multimedia Events with STREAMS, In Proceedings of ACM Multimedia 1994, pages 193-200, ACM, 1994). As described in Degen et al., Working with Audio: Integrating Personal Tape Recorders and Desktop Computers, In Proceedings of CHI '92, pages 413-418, ACM, 1992, a user manually creates an index or "marker" during recording by pressing one of two buttons on a tape recorder, and these marks are then displayed graphically. These systems have limited utility because they rely on the user to manually index the recordings.

Similarly, U.S. Pat. Nos. 5,592,607 and 5,564,005 describe a system where a user indexes a video recording by manually creating "time zones." A time zone is created by drawing a line across the screen. There is a single time point in the video (the time the line was drawn) associated with the area of the screen below this line until the next line is drawn. Users can write notes with a stylus. Individual pen strokes (i.e., handwritten notes) do not index the video; the strokes are located inside a time zone which corresponds to the instant that the time zone was created. Additional writing can be added to a time zone at any time, but this does not create any new indices into the recording. This system has many disadvantages. Instead of leveraging the natural activity of the user, it enforces one particular behavior: drawing a line across the screen to manually create an index into the recording. The granularity of indices is limited, since each time zone relates to only a single time point in a recording, and individual pen strokes do not index the recording.

Some systems attempt to automatically generate indices into recorded media using signal processing techniques. For example, some systems have attempted to segment recordings by speaker changes (e.g., speaker A started talking at time t1, speaker B at time t2, etc.), as described in Kimber et al., Speaker Segmentation for Browsing Recorded Audio, In Proceedings of CHI '95, pages 212-213, ACM, 1995.

Some systems use handwritten notes taken during recording to index audio or video. U.S. Pat. No. 4,841,387 describes a system that indexes tape recordings with notes captured during recording. The writing surface is an electronic touchpad. All indices are created during recording only and are stored in a reserved portion at the beginning of a microcassette tape. The user cannot add notes that index the recording during playback. The display surface is grouped into rectangular areas to save storage space; this has the disadvantage of making the system coarser grained than if each mark or pen stroke were indexed. In addition, a user has to put the device in a special "review mode" (by pressing a button) before being able to select a location in the notes for playback. Other systems index audio and/or video recordings with notes handwritten on a computer display screen or electronic whiteboard during recording (U.S. Pat. Nos. 5,535,063; 5,818,436; 5,786,814; 5,717,879; and 5,717,869; as described in Whittaker et al., Filochat: Handwritten Notes Provide Access to Recorded Conversations, In the Proceedings of CHI '94, pages 271-277, ACM-SIGCHI, 1994; and as described in Wilcox et al., Dynomite: A Dynamically Organized Ink and Audio Notebook, In the Proceedings of CHI '97, pages 186-193, ACM-SIGCHI, 1997).

Systems described in L.J. Stifelman, Augmenting Real-World Objects: A Paper-Based Audio Notebook, In the Proceedings of CHI '96, ACM-SIGCHI, 1996 ("Stifelman 1996") and L.J. Stifelman, The Audio Notebook: Paper and Pen Interaction with Structured Speech, Doctoral Dissertation, Massachusetts Institute of Technology, September 1997 ("Stifelman 1997") index digital audio recordings with notes written in a paper notebook during recording.

Some limitations of these systems are as follows. Like the previous systems just described, Stifelman 1996 and Stifelman 1997 focused on real-time indexing; a limitation is that notes written during playback do not index the recording. A further problem is the issue of distinguishing writing activity from selections made for playback. In Stifelman 1996 and Stifelman 1997, if a user adds to their notes when in a play mode, this could falsely trigger a playback selection. Also, selections left visible marks on the pages. With systems that use a display screen as the writing surface instead of paper, sometimes a circling gesture or other gesture is used to select areas of writing for playback. This can also be error-prone, because the system has to distinguish between a circle drawn as data and a circling gesture, or else the user must put the system in a special mode before making the gesture, causing selection to be a two-step procedure.

Page Identification

Bar code readers are known in the art and generally come in one of two forms: (1) a scanning laser or charge-coupled device (CCD) that is pointed at a bar code while depressing a trigger; and (2) a wand that is swept over a bar code. These devices suffer from a number of limitations for solving the problems addressed by the present invention (e.g., the ability to automatically recognize an identification code printed on one or more pages within a collection of pages). First, known bar code readers require manual input from the user, such as pointing, or swiping the instrument over the code. Second, known devices require moving parts, which increase the cost and complexity of manufacture while decreasing reliability. Third, these known devices are designed to identify a bar code in a variety of orientations, making them an overly complex solution for identifying a code in a relatively fixed position with respect to the reader. Fourth, such devices use a visible light source, which can disturb the user or others in the vicinity. Fifth, these devices use a scanning laser or a much larger and more expensive sensor array (e.g., 2048 x 1) than needed for some of the applications addressed by the present invention.

Other known page identification systems use black-and-white or colored squares or stripes (hereinafter "blocks") as the printed codes rather than bar codes, as described in Stifelman 1996, Stifelman 1997, and U.S. Pat. No. 4,884,974 to DeSmet ("DeSmet").

These systems, however, have also suffered from a number of limitations. First, the systems do not scale well because of the space required for the blocks representing the codes. Each block represents only one bit of information. Thus, in Stifelman 1997, eleven printed code bits required five inches in width along the bottom of a notebook page. Second, the system is composed of discrete sensors and therefore requires one sensor for every bit of information. This requirement limits the amount of information that can be encoded and makes the system expensive. Third, the sensor mechanism impedes a user's handwriting movements, causing a portion of the page to be unusable. Fourth, the sensors must be precisely aligned with the code on a page to perform properly.

Additional limitations of Stifelman 1996 and DeSmet include the following. First, the systems rely on ambient light, so performance is degraded under dark conditions. Performance is also degraded under very bright conditions since the sensors can be saturated. Second, the sensors can be blocked or shadowed by the user's hand or pen, degrading the performance of the detector.

Additional limitations of Stifelman 1997 include the following. First, each discrete sensor must be activated and read one at a time to conserve power. Second, the system cannot be exposed to any ambient light. Ambient light (e.g., bright sunlight) would saturate the sensors and cause false readings. Third, the sensors need to be positioned directly over the page code. The sensors are embedded inside a ledge that is positioned over the bottom of a notebook. This causes several problems: (1) a book must be slid in and out under the ledge to turn pages; and (2) a user cannot rest his/her hand on the notebook or table while writing. Since the ledge was placed over the bottom portion of the notebook, the user must place his/her hand over the ledge. This makes writing in the book, particularly on the lower portion of the page closest to the ledge, difficult and uncomfortable.

Limitations of a system described in J. Rekimoto and K. Nagao, The World through the Computer: Computer Augmented Interaction with Real World Environments, ACM User Interface Software Technology (UIST) Conference Proceedings, 1995 ("Rekimoto") include the following. First, Rekimoto requires manual operation by a user. A user must point a camera at the code, and a video camera is an expensive solution. Second, color codes are more expensive to print than a black-and-white bar code and require the availability of a color printer. Third, as stated above, the codes do not scale well. The number of detectable identification codes is limited by the size of the codes. Also, each code stripe represents only one bit of information (stripes are one of two colors, and of equal size).

Another related area known in the art includes electronic books, which modify book pages for identification purposes by putting tabs on pages, cutting out notches, or other similar means. For example, U.S. Pat. Nos. 4,862,497 to Seto et al. ("Seto"); 5,485,176 to Ohara et al. ("Ohara"); and 4,809,246 to Jeng ("Jeng") use photo sensors to sense the presence or absence of a tab on the edge of each page. Similarly, Ohara uses notches instead of tabs, and Jeng additionally requires a button to be pushed whenever a page is turned. The systems described in these patents have several limitations. First, tabs or notches must be cut into pages, and the pages must be rigid or stiff. Second, one sensor is needed for each page. This limits the number of pages that can be coded. With more pages, additional sensors are required, and more space is needed for them. Also, each additional sensor adds cost. Third, the tabs must be reflective (white or metal) or opaque.

Other electronic books have used magnets or switches embedded in book pages for page identification. See, for example, U.S. Pat. Nos. 5,417,575 to McTaggart ("McTaggart"); 5,631,883 to Li ("Li"); and 5,707,240 to Haas et al. ("Haas"). McTaggart uses electronics embedded in laminated pages. Electromagnetic switches on pages are used to detect which page is open. Li uses conductive stripes on pages and electro-mechanical contacts, which are prone to failure (e.g., due to dirt on the contacts) and require manual operation (i.e., the user touches a button to open/close the contact mechanism). The conductive stripes also have to be exactly aligned with the contacts. Also, turning pages can be difficult since the contacts come down over the page. Haas detects the position of pages using magnets embedded in the page. In each of these systems, the book must be specially produced (i.e., it cannot be printed using standard book printing techniques) and can be expensive to manufacture.

Another area known in the art includes devices for capturing writing on forms. See, for example, U.S. Pat. Nos. 5,629,499 to Flickinger et al. ("Flickinger"); 5,734,129 to Belville et al. ("Belville"); 5,627,349 to Shetye et al. ("Shetye"); and 5,243,149 to Comerford et al. ("Comerford"). Flickinger and Shetye describe a device that secures a form to the device using a clip at the top or bottom. Flickinger suggests that a bar code reader could be embedded in the clip to automatically read a code on the form. However, the design of this component is not disclosed, and it appears that such a design would be limited because the clip would have to be positioned directly over a form to identify it. Thus, the user would have to lift the clip manually and insert a sheet. It also appears that this design would not generalize for use with a book or notebook, where a user is accustomed to turning pages freely without clipping them in and out. Comerford describes a device where documents are read into the system using a scanner that is slid over the page. The scanner must be manually operated by the user, requires contact with the paper, and is expensive.

U.S. Pat. No. 4,636,881 to Brefka et al. ("Brefka") uses infrared sensors to detect when a page is turned. This is limited to detecting only relative movement of pages (i.e., previous page, next page) and cannot identify an exact page number. If two or more pages are turned at once, this approach does not work properly.

Active Physical Scrollbar

A wide range of screen-based timelines and screen-based scrollbars are known in the art. The Apple Macintosh computer, for example, shows a percent-done indicator while copying files, and scrollbars are commonly used to navigate through long documents with a mouse. A wide variety of graphical techniques for interacting with audio data are also known in the art.

Stifelman 1996 provided a continuous input control and a groove for a pen; it did not include any display elements. Stifelman 1997 provided a continuous input control with a display composed of discrete light emitting diode (LED) elements. Users touched directly on individual LEDs (using an ink digitizing pen) to trigger playback of an audio recording. The resulting display was not robust because the individual LED elements were exposed, making them susceptible to ink marks from the pen, bending of the LED leads, gaps between LEDs due to bending, etc. Since the display was assembled using through-hole components, assembly was labor intensive and not conducive to mass production. The nature of the assembly also led to inconsistent spacing and alignment of the LEDs. The through-hole LEDs produced a display surface that was too far above the surface of the digitizing tablet to consistently sense the position of the digitizing pen (e.g., the user had to press very hard with the pen). The display did not provide an intuitive indication of where to put the pen (e.g., a groove), since no covering could be added on top of the LEDs (the surface of the LEDs was already too far above the tablet). Because standard leaded LEDs were placed next to one another, light could bleed from one element to the next, blurring the location of the cursor or other lighted element.

U.S. Pat. No. 5,819,451 discloses a copy holder that highlights the current line of text. The display is controlled by buttons or a foot pedal, is not controlled by directly touching the display, and does not move automatically. Similarly, U.S. Pat. No. 4,385,461 discloses a copy holder that highlights text through a translucent screen. U.S. Pat. No. 5,191,320 discloses a mechanical belt-based input method with a linear bar graph display. U.S. Pat. No. 5,751,819 discloses a non-interactive audio level meter with a bar graph-like display. U.S. Pat. Nos. 5,786,814 and 5,717,869 to Moran describe graphical timelines with events indexed into a media stream.

A variety of "bargraph" modules that use LEDs as display elements are known in the art. A basic ten-element module is available from many sources, including Radio Shack. Some vendors make integrated modules with multiple light emitting elements as displays, but these displays are often expensive or inappropriate for the present invention (too dim, include components that adversely affect the digitizing tablet, etc.).

Objects and Advantages of the Invention

The present invention offers several advantages over the art. The objects and advantages of the invention include the following. One object of the invention is to allow a user to index time-varying media (audio, video, etc.) while recording, while playing, or while stopped. Another object is for the indexing to be created automatically from natural activity of the user (e.g., user notations such as handwritten notes and page turns) during recording and playback, and for these indices to be created in a continuous fashion while the recording is originally being made, while it is being played back, or when stopped. Still another object is to allow the indices to be dynamically updated, with new indices added during playback, while creating additional recordings, or while stopped. Yet another object is to allow a user to create multiple indices for any part of a recording. Another object of the invention is to allow a user to add new recorded segments of audio, video, etc. for any page of data.

Another object of the invention is to reliably distinguish between user notations created to index the recording and selection actions that are intended to cue playback to a location associated with a user notation. Still another object of this invention is to allow this distinction to be made without requiring a user to explicitly instruct the device to enter a special "mode," and with only a single step or action needed to make a selection. Another object is for the selecting action to be intuitive and not require training or reading a manual to learn. Yet another object is to allow a single input device to be used both for making notations and selections, without creating unwanted marks.

Page Identification

The invention offers several advantages over the art in terms of page identification. The objects and advantages of the invention include the following. One object of the invention is to provide an apparatus that does not require manual operation by a user (e.g., button pushing, pointing an instrument at a code, swiping an instrument over a code). Another object is to provide an apparatus with no moving parts and a minimal number of components, making it inexpensive and simple to manufacture. Yet another object is to provide an apparatus that operates reliably under variable lighting conditions, i.e., from bright light to complete darkness. Still another object is to provide an apparatus that automatically adapts to various light levels.

Another object of the invention is to provide an apparatus that allows multiple pages to be turned at once and does not require sequential page turning. Yet another object is to provide an apparatus that includes an optical sensor and related components that can operate unobtrusively at a distance from the book, that do not need to look directly down over a page, and that do not require physical contact with the book. Still another object is to provide an apparatus that does not impede the user's hand movements when writing or gesturing in the book.

Another object of the invention is to provide an apparatus that does not impede page turning by the user. Yet another object is to provide an apparatus that allows book pages to be turned without removal of the book from the receiving surface of the apparatus. Still another object is to provide an apparatus that does not generate false readings when an external object, such as a pen or a finger, partially obscures the optical sensor. Another object is to provide an apparatus that tolerates minor misalignments of the book in relation to the optical sensor. Yet another object is for the printed code to take up minimal space on the book pages (e.g., only a small corner of a page as opposed to a whole edge of a page). Still another object is to provide an apparatus that requires only a single sensor chip to read a code containing multiple bits of data. Another object of the invention is to provide an apparatus that reads identification codes from a relatively fixed position in relation to the optical sensor. Yet another object is to provide an apparatus that uses an invisible light source so as not to disturb the user or others in the vicinity of the apparatus. Still another object is to provide an apparatus with a rapid response time, recognizing the code on a book page with minimal delay following a page turn.

Another object of the invention is to provide an apparatus with identification codes that have built-in redundancy for greater reliability. Yet another object is to provide an apparatus that can operate with a low-resolution optical sensor. Still another object is to provide an apparatus that uses identification codes that can be printed on commonly available computer printers (e.g., laser printers, ink jet printers, etc.) or through standard printing techniques (e.g., offset printing).

Another object of the invention is a book with coded pages that identify the pages, or information about a page. Yet another object is a book coded with a width-modulated bar code to ease reading of the code. Still another object is that the codes can identify individual books uniquely, or identify a specific page within a specific book. Another object of the invention is a code along the edge of the book. Yet another object is a code on the cover of the book to identify the cover or uniquely identify an entire book. Still another object is a code on the background area where the book is placed, to identify when a book is not present. Still another object is to prevent vertical misalignment of pages. Another object of the invention is to provide an apparatus that does not require holes, tabs, or notches to be cut in the paper. Yet another object is to provide an apparatus that does not require thick or rigid pages. Still another object is to provide an apparatus that can use a bar code symbology with minimal or no start and stop codes. Further objects and advantages of the invention will become apparent from a consideration of the drawings and ensuing description.

Active Physical Scrollbar

The invention offers several advantages over the art in terms of the active physical scrollbar control and display. The objects and advantages of the invention are as follows. One object of the invention is to provide a high-resolution, fine-pitch display and control. Another object of the invention is to provide an interaction surface that is closer to the tablet. Still another object of the invention is to use components that are reliable and inexpensive. Yet another object of the invention is to help prevent bleeding of light between the light emitting elements. Another object of the invention is to increase the effective resolution of the display by selectively controlling the brightness of the light emitting elements. Still another object of the invention is to provide consistent starting points for playback locations. Further objects and advantages of the invention will become apparent from a consideration of the drawings and ensuing description.
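The brightness-control object above can be illustrated with a short sketch. Splitting a fractional cursor position between the two adjacent light emitting elements yields apparent positions finer than the physical LED pitch. The function name and values below are illustrative assumptions, not taken from the specification:

```python
def led_levels(position, num_leds):
    """Render a fractional cursor position on a discrete LED bargraph.

    Splitting the cursor's brightness between the two LEDs nearest a
    fractional position lets the eye perceive positions finer than the
    physical LED pitch (illustrative sketch only).
    """
    levels = [0.0] * num_leds
    base = int(position)        # index of the lower neighboring LED
    frac = position - base      # how far the cursor sits toward the next LED
    if 0 <= base < num_leds:
        levels[base] = 1.0 - frac
    if 0 <= base + 1 < num_leds:
        levels[base + 1] = frac
    return levels
```

For example, a cursor at position 3.25 on a ten-element bargraph lights element 3 at 75% brightness and element 4 at 25%, so the perceived cursor sits a quarter of the way between them.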

Summary of the Invention

In a multimedia book recording application, a real-time continuous stream of media such as audio and/or video is recorded and linked with handwritten notes, other notations, or other types of indexing information (referred to as "user notations" or simply "notations"). Such an application or device will be referred to as a "multimedia recorder." This indexing information can then be used to cue a recording to a location corresponding to the user notation.

The present invention describes a multimedia recording device that combines the best aspects of a paper book and a media recorder (i.e., for recording audio, video, music, or other time-varying media). The device can be used to record and index interviews, lectures, telephone calls, in-person conversations, meetings, etc. In one embodiment, a user takes notes in a paper book, and every pen stroke made during recording, playback, or while stopped is linked with an audio and/or video recording. In other embodiments, the writing medium could be a book, flip chart, white board, stack of sheets held like a clipboard, pen computer, etc. (hereinafter referred to as a "book").

For playback, users can cue a recording directly to a particular location simply by turning to the corresponding page of notes. An automatic page identification system recognizes the current page, making it fast and easy to navigate through a recording that spans a number of pages of data. Users can select any word, drawing, or mark on a page to instantly cue playback to the time around when the mark was made. A selection is made using a "stylus," where a stylus is defined as a pen (either the writing end of a digitizing pen or the selecting end of a digitizing pen), a finger, or another pointing mechanism. The multimedia recorder is able to reliably distinguish between user notations that index the recording and selections intended to trigger playback.
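The selection behavior just described can be sketched as follows. This is a minimal illustration that assumes notations are stored as (x, y, media-time) tuples and that a nearest-stroke search with a distance threshold suffices; the patent does not prescribe this particular method:

```python
def cue_playback(notations, x, y, max_dist=5.0):
    """Given stylus selection coordinates, find the nearest stored notation
    and return the media timestamp linked to it (or None if nothing is near).

    `notations` is a list of (x, y, media_time_seconds) tuples; the structure
    and the distance threshold are assumptions for illustration.
    """
    if not notations:
        return None
    # Nearest pen stroke by squared Euclidean distance on the tablet surface.
    nearest = min(notations, key=lambda n: (n[0] - x) ** 2 + (n[1] - y) ** 2)
    dist2 = (nearest[0] - x) ** 2 + (nearest[1] - y) ** 2
    if dist2 > max_dist ** 2:
        return None              # selection too far from any notation
    return nearest[2]            # cue the recording to this time
```

A single tap near a stroke thus resolves directly to a playback position, with no separate "review mode" step.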

More particularly, the invention links a user notation (e.g., handwritten notes) on a page to time-varying data (e.g., audio). The invention includes a user interface for capturing attribute data for each user notation made on the page during record-time and play-time, a recording device for recording the time-varying data corresponding to the attribute data for each notation, and a processor for dynamically linking the attribute data to a corresponding element of the time-varying data.

The user interface can, for example, include a stylus for making and selecting a user notation and a digitizing tablet or other sensing device for capturing the attribute data for each user notation. The attribute data can include pressure information from the stylus corresponding to the pressure applied by the writing end of the stylus when making the notation onto the page, location information corresponding to the location of the stylus when making the notation, time information for when each user notation was made, and index-type information (e.g., play-time, record-time, and stop-time). The stylus can include both a writing end for making the notation onto the page and a selection end for selecting a user notation and thereby selecting the corresponding time-varying data to reproduce.
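As a rough sketch of how such attribute data might be represented and linked, the record below is illustrative only; the field names and types are assumptions, not taken from the specification:

```python
from dataclasses import dataclass

@dataclass
class Notation:
    """Attribute data captured for one pen stroke (illustrative names)."""
    x: float            # tablet location of the stroke
    y: float
    pressure: float     # pressure reported by the writing end of the stylus
    page: int           # page the stroke was made on
    media_time: float   # position in the recording when the stroke was made
    mode: str           # index type: "record-time", "play-time", or "stop-time"

def link_notation(notebook, stroke, media_time, mode):
    """Dynamically link a captured stroke to the current media position."""
    n = Notation(stroke["x"], stroke["y"], stroke["pressure"],
                 stroke["page"], media_time, mode)
    notebook.setdefault(stroke["page"], []).append(n)
    return n
```

Grouping notations by page mirrors the page-identification feature: turning to a page retrieves exactly the strokes (and linked media times) made on it.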

The recording device can include a sensory device (e.g., a microphone, telephone output, digital camera, television output, etc.) for receiving the time-varying data and a storage device (e.g., a hard disk, removable disk, etc.) for storing the time-varying data.

The processor is coupled to the sensing device (e.g., a digitizing tablet), the recording device, and a memory (where the attribute data is stored) for dynamically linking the attribute data for a particular user notation to the corresponding time-varying media that was recorded or reproduced at the same time that the user notation was made.

During record-time, the invention records the time-varying data and the attribute data for each user notation and links the attribute data to a corresponding element of the time-varying data. When the user wants to review these notes and listen to the corresponding time-varying data, i.e., during play-time, the user selects the desired user notation with the selection end of the stylus, and the system automatically plays/reproduces the recorded time-varying data (e.g., audio). The user can then add additional notations to the page while the time-varying data is being played, and the invention will automatically link these new notations to the time-varying data. The user can also stop the recorder and make notations on the page at her leisure (i.e., during stop-time). These stop-time notations will be automatically linked to the time-varying data that was playing when the playback was stopped.

Page Identification System

The present invention includes a system and method for automatically identifying information (e.g., page numbers) in a book by reading an identification code (e.g., a bar code) printed on each page, on the book cover, and on a surface below the book. The codes on the cover and surface below the book can be used to signal that the book is closed or removed from the system. In addition, the code on the cover can be used to uniquely identify the book, and the

codes on the pages can uniquely identify the book and each individual page. Hereinafter, the term

"page" generically refers to planar surfaces such each side of a leaf in a book, the book cover, a surface below the book, a touch sensitive surface, etc. In some embodiments, the invention uses an optical sensing technique that allows an

optical sensor and related components to be located adjacent to and at a distance from the book so as not to impede the user when turning the pages of the book. The system operates automatically to identify the book and its pages (i.e., no manual operation by a user is required).

The system can operate using ambient light. If, however, ambient light conditions are inadequate for the optical sensor to properly read the codes, the system selectively activates a

light source that artificially illuminates the code area. The light source can be non-visible, so that

it cannot be seen by anyone looking at, or writing on, the page.
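The ambient-light fallback described above amounts to a small control loop: try ambient light first, switch on the non-visible illuminator if the signal is too weak, then adjust the sensor exposure until the code is readable. The sketch below illustrates one way this could work; the function names, thresholds, and the doubling/halving exposure search are illustrative assumptions, not details taken from the invention.

```python
# Hypothetical sketch of the adaptive illumination logic. The sensor-reading
# and LED-control callbacks are stand-ins, not a real device API.

LOW_SIGNAL = 40    # assumed 8-bit pixel level below which the code is unreadable
HIGH_SIGNAL = 220  # assumed level above which the sensor saturates

def acquire_code_image(read_pixels, set_ir_led, set_integration_ms,
                       integration_ms=10.0):
    """Try ambient light first; fall back to IR illumination, then
    adjust the sensor integration time until the signal is usable."""
    set_ir_led(False)
    pixels = read_pixels()
    if max(pixels) >= LOW_SIGNAL:
        return pixels                      # ambient light is sufficient
    set_ir_led(True)                       # non-visible to the user (IR)
    for _ in range(8):                     # bounded search for a usable exposure
        set_integration_ms(integration_ms)
        pixels = read_pixels()
        peak = max(pixels)
        if peak < LOW_SIGNAL:
            integration_ms *= 2            # too dark: integrate longer
        elif peak > HIGH_SIGNAL:
            integration_ms /= 2            # saturated: integrate less
        else:
            break
    set_ir_led(False)
    return pixels
```

The bounded retry loop mirrors the flow-chart behavior attributed to Figure 17, where integration times are modified in response to varying ambient light.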

More particularly, the system includes a holder for receiving the book and for keeping it in a substantially fixed position relative to the rest of the system. Once the book is in place, the optical assembly that is located adjacent to the book (such that it does not impede page turning or

book insertion and removal) detects the identification code on the page and converts this optical image information into electrical signals for further processing. The optical assembly includes, for example, a reflecting element (e.g., a mirror), a focusing element (e.g., a lens), and an optical sensor. The reflecting element directs light reflected from the identification code onto the focusing element, which focuses the light onto the optical sensor so that the optical image can be converted into electrical signals for further processing.

The processor operates on the electrical signals in order to decode them into book and page number information. If the ambient light conditions are insufficient to detect the code, the processor can automatically activate a light source to improve the illumination of the code area on the book. The reflecting element of the optical assembly can direct the light received from the light source onto the identification code on the page.

The identification code can be in the form of a width-modulated bar code. The bar code can be a standard bar code or it can be a custom bar code containing, for example, a single bar stop code or no start and stop codes whatsoever. The system is capable of accepting these simplified bar codes, thereby enabling a higher signal-to-noise ratio in the resultant image.
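As an illustration of how a width-modulated code of this kind might be decoded in software, the sketch below binarizes one scanline, run-length encodes the dark bars, and classifies each bar as narrow or wide. The bit assignment and the 1.5x wide/narrow ratio are hypothetical; the invention does not fix a particular symbology.

```python
# Illustrative decoder for a simple width-modulated code: dark bars are
# classified as narrow (bit 0) or wide (bit 1) by comparing each bar's
# width to the narrowest bar seen. The encoding details are assumptions.

def decode_scanline(pixels, threshold=128, wide_ratio=1.5):
    """Turn one row of sensor pixel values into bits from the dark-bar widths."""
    # 1. Binarize: True where the scanline is dark (a bar).
    dark = [p < threshold for p in pixels]
    # 2. Run-length encode the dark regions to get bar widths.
    widths, run = [], 0
    for d in dark:
        if d:
            run += 1
        elif run:
            widths.append(run)
            run = 0
    if run:
        widths.append(run)
    if not widths:
        return []
    # 3. Classify each bar relative to the narrowest one.
    narrow = min(widths)
    return [1 if w >= narrow * wide_ratio else 0 for w in widths]
```

A code with no start/stop characters, as described above, leaves more of the sensor's field of view for data bars, which is one way to read the claimed signal-to-noise benefit.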

Active Physical Scrollbar

The present invention also addresses the problem of providing a combined control mechanism and display in a single device. The invention couples user input from a stylus with a

timeline display. The invention provides a cost-effective and manufacturable means of addressing these issues. The invention can be used for a variety of control and display tasks. For example, the invention can be used with a multimedia recorder as an interactive timeline display for

controlling time-varying media such as audio or video.

The invention features a physically active scrollbar system that acts as both a display and a control for interacting with audio, video, or other data. A physical control is something with which a user directly interacts. This is opposed to a virtual control such as is typically found on a computer screen. A user can, for example, touch the scrollbar directly with a stylus, pen, finger, or other pointing mechanism. The control is active in that it provides visual feedback to a user by

activating one or more display elements directly under where a user is touching. The control is considered a scrollbar in that it allows a user to move through a data set. Display elements can also be activated through other means, such as by the state of a host computer system.

If a display element of the scrollbar is lighted, it can act as a "cursor". The cursor location can be used to show the current position through the information (as with a percent-done indicator), or act as a timeline for a position in an audio or video recording or other data set. One or all display elements can be enabled or disabled at the same time, and in any sequence.
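The percent-done behavior of the cursor reduces to a simple proportional mapping from the current position in the data set to an element index. A minimal sketch follows; the function name and signature are illustrative assumptions.

```python
# Map a playback position to the index of the display element ("cursor")
# that should be lit. All names are illustrative, not from the invention.

def cursor_index(position_ms, total_ms, n_elements):
    """Element index showing how far through the recording we are."""
    if total_ms <= 0:
        return 0
    frac = min(max(position_ms / total_ms, 0.0), 1.0)  # clamp to [0, 1]
    return min(int(frac * n_elements), n_elements - 1)
```

For example, halfway through a two-minute recording on an 18-element array, the middle element lights.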

More specifically, in one aspect, the invention features a physically active scrollbar system. The system includes an array of display elements and a translucent material that covers the display elements. The array of display elements can be, for example, LEDs, surface-mounted LEDs, or

elements of a liquid crystal display. The translucent material covers the display elements and has an interaction surface. The system also includes a sensing device and a processor. The sensing device (e.g., a digitizing tablet) is disposed adjacent to the array of display elements and senses the location of a stylus proximately disposed relative to the interaction surface. The processor is

electrically coupled to the array of display elements and the sensing device. The processor activates at least one of the display elements based upon the sensed location of the stylus. The processor can manipulate the display elements to produce various results. For example, the processor can activate (i) at least one of the display elements near the sensed location of the

stylus, (ii) the display element closest to the sensed location, (iii) at least two adjacent elements at less than full brightness, or (iv) at least one of the display elements independently of the location of

the stylus.
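The activation modes above can be illustrated with two small helper functions: one that lights the single element nearest the stylus (mode ii), and one that splits brightness across two adjacent elements at less than full brightness (mode iii), giving the appearance of a finer-grained cursor. All names and the linear interpolation scheme are assumptions for illustration.

```python
# Illustrative stylus-to-LED mapping for the active scrollbar. The
# coordinate convention (x measured along the interaction surface) and
# function names are assumptions, not part of the invention.

def nearest_element(x, surface_len, n_elements):
    """Index of the display element closest to stylus position x."""
    frac = min(max(x / surface_len, 0.0), 1.0)
    return min(int(frac * n_elements), n_elements - 1)

def split_brightness(x, surface_len, n_elements):
    """Two adjacent elements at partial brightness (anti-aliased cursor).
    Returns (left_index, left_level, right_level), levels in [0, 1]."""
    pos = min(max(x / surface_len, 0.0), 1.0) * (n_elements - 1)
    left = min(int(pos), n_elements - 2)
    right_level = pos - left          # fraction of the way to the next element
    return left, 1.0 - right_level, right_level
```

A stylus halfway between two elements would light both at half brightness, which matches the partial-brightness arrangement attributed to Figures 24C and 24D.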

In another aspect, the invention features a method for controlling and displaying information on a scrollbar system. The location of a stylus proximately disposed relative to an interaction surface of a translucent material disposed over an array of display elements is sensed with the sensing device. A signal representative of the sensed location of the stylus is generated. At least one of the display elements is activated based on the signal representative of the sensed location of the stylus.

In yet another aspect, the invention features a method for manufacturing a scrollbar system capable of inputting and displaying information. An array of display elements is mounted

adjacent to a sensing device. A translucent material having an interaction surface is disposed over

the array of display elements. The array of display elements and the sensing device are electrically coupled to a processor. In one embodiment, the array of display elements is surface mounted onto a substantially planar surface of the sensing device.

Brief Description of the Drawings

The details and advantages of this invention may be better understood by referring to the

following description taken in conjunction with the accompanying drawings.

Figure 1A shows a top view of the multimedia recorder with associated components.

Figure 1B shows several embodiments of a stylus for use with the multimedia recorder.

Figure 2 shows data captured by the multimedia recorder.

Figure 3 shows examples of different kinds of recording indices.

Figure 4A shows one embodiment of a page, a link table, and a recording illustrating

record-time indices.

Figure 4B shows one embodiment of a page, a link table, and a recording illustrating record-time plus play-time indices.

Figure 4C shows one embodiment of a page, a link table, and a recording illustrating

record-time plus play-time, plus stop-time indices.

Figure 4D shows one embodiment of a page, a link table, and a recording illustrating record-time plus play-time, plus stop-time indices with added record-time data.

Figure 5 shows a block diagram of the hardware components.

Figure 6A shows an overview of the page identification system including a book.

Figure 6B illustrates a number of sample orientations for the relative positions of the optical assembly and the identification code on the book, in accordance with the present invention.

Figure 6C is a cross-sectional view of the optical assembly and the book in accordance with the present invention. The cross-sectional view of the optical assembly illustrates the relative positions of the optical components: mirror, lens, and optical sensor. The optical path between

the identification code on the book and the mirror of the optical assembly is also illustrated.

Figure 7 is a perspective view of one embodiment of the optical housing that is part of the optical assembly (the mirror and lens are not shown).

Figure 8 illustrates the relative position and orientation of the light source, mirror, and lens of the optical assembly, in accordance with the present invention.

Figure 9 illustrates the orientation of the optical assembly relative to the electronic

subsystem of the present invention.

Figure 10 is a block diagram of the electronic subsystem of the present invention. In particular, the processor is shown coupled with the optical sensor, the light source, and a host computer.

Figure 11 illustrates that the system can incorporate circuitry to reduce the minimum

integration times on slow processors.

Figure 12 depicts sample electronic circuitry that can be used to reduce the minimum integration times on slow processors.

Figure 13 depicts a sample page with a printed page code.

Figure 14 illustrates sample page code locations for double sided books.

Figure 15 illustrates the border of the valid writing area on a page.

Figure 16 depicts sample locations of key numbers that correlate the book with related recording media.

Figure 17 is a flow chart illustrating how the integration times can be modified to optimize the output signal level of the optical sensor in response to varying ambient light conditions.

Figure 18 shows an overview of one embodiment of the active physical scrollbar.

Figure 19 shows one embodiment of a portion of the display component of the active physical scrollbar.

Figure 20 shows one embodiment of a translucent interaction surface over the display elements.

Figure 21 shows a side view of the interaction surface shown in Figure 20.

Figure 22 shows a top view of the interaction surface shown in Figure 20.

Figure 23 shows a block diagram for one embodiment of the hardware.

Figure 24A shows two adjacent light-emitting elements.

Figure 24B shows two adjacent light-emitting elements.

Figure 24C shows two adjacent light-emitting elements at partial brightness.

Figure 24D shows two adjacent light-emitting elements at partial brightness.

Figure 25 shows two embodiments of a stylus.

Detailed Description of Invention

Figure 1A shows a schematic representation of one embodiment of a multimedia recorder 61. In one embodiment, the device links notes written in a paper book 17 with a digital audio recording. In other embodiments, the writing medium could be a book, flip chart, white board, stack of sheets held like a clip-board, pen computer, etc. The user places the book 17 on the surface of the device inside a book holder 47. This component is designed to hold a book 17 over the recorder's writing area. All user notations in the book (writing, drawings, etc.) are accurately captured and digitally stored by the device. The notations are captured through the thickness of the book 17.

Page Identification System

In one embodiment a page identification system 71 is located adjacent to the top edge of

the book 17. The design of the page identification system 71 allows it to read a code 43 printed on the pages, cover, or surface below the book without being directly over the code 43. The book holder 47 is positioned so that the page identification code 43 will be positioned in front of the page number identification system 71. An opening in the corner of the book holder 49 allows easy page turning.

Making a Recording

In one embodiment, to start recording, a user simply presses the record button 404. In

one embodiment, a microphone 427 is built into the device. In another embodiment, the device can also have jacks for additional microphone or line level inputs. In other embodiments, a video source can also be built in or plugged into the device. In one embodiment, there are built-in speakers 429. There can also be an output jack for plugging in external speakers or headphones.

Data Captured at Record-Time, Play-Time, and Stop-Time

A listener can write notes that index a recording while the recording is being captured (referred to as record-time indices), during playback (referred to as play-time indices), or when the system is idle or stopped, i.e., not playing or recording (referred to as stop-time indices). In some cases, during recording, users may make only a few notes, because they are focusing their attention on a live presentation. After recording, a user may want to add to their notes while playing the recording back. In the present invention, the notes written during playback (or when stopped) are

dynamically added to the indices already stored during recording. In the extreme case, a user does not have to write any notes during recording at all. The notes can be written while playing back the recording, and every mark made during playback will index the material being played in exactly the same manner as if it were being heard live. The user interface is consistent for all three

types of indices (record-time, play-time, and stop-time) — the notes always index whatever the user is currently listening to (or just listened to before stopping), whether the user is listening to the material live or after the fact. Stop-time indices allow a user to write a note that indexes what they just heard before stopping playback or recording.
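The indexing rule for the three cases can be summarized as a small state machine: a notation made while recording or playing is stamped with the current offset, and a notation made while stopped is stamped with the last offset heard before stopping. The sketch below uses invented state names and an explicit position variable; it is an illustration, not the invention's implementation.

```python
# Hypothetical state machine for choosing the time offset linked to a
# new notation. State names and fields are illustrative assumptions.

RECORD_TIME, PLAY_TIME, STOP_TIME = "record", "play", "stop"

class Indexer:
    def __init__(self):
        self.state = STOP_TIME
        self.position_ms = 0      # current offset while recording/playing
        self.last_heard_ms = 0    # last offset heard before the device stopped

    def set_state(self, state, position_ms=None):
        # Remember what was playing/recording when the device stops.
        if self.state in (RECORD_TIME, PLAY_TIME) and state == STOP_TIME:
            self.last_heard_ms = self.position_ms
        self.state = state
        if position_ms is not None:
            self.position_ms = position_ms

    def index_for_notation(self):
        """Time offset and index type to link to a notation made right now."""
        if self.state == STOP_TIME:
            return self.last_heard_ms, STOP_TIME
        return self.position_ms, self.state
```

Note that the same rule covers all three index types, matching the consistent user interface described above.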

Figure 2 shows some sample data captured for each page of notes for one embodiment of a multimedia recorder. There are many different possible sources of data. Some examples are shown in the figure: an audio source 440 (speech, music, or other sound from a microphone, telephone, radio, etc.), a video source 442 (analog or digital video from an internal or external camera, video system, television output, etc.), and a writing source 444 (from a paper book, book, stack of paper, flip chart, white board, pen computer, etc.). Note that this is not an

exhaustive list and there are other possible sources of input.

In one embodiment, the recordings are segmented and stored by page. Each page of notes has an associated recording. The recording can be stored in a file, in memory, on a tape, etc.

This will be referred to as a "memory location" or file. In one embodiment, newly recorded data

is appended to a file for the currently open page of the book 17. For example, consider the case where a user has the book open to page one for the first ten minutes of a lecture, and then turns to page five for ten minutes, and then turns back to page one for five minutes. The first ten minutes of the lecture plus the last five minutes of the lecture will be stored in the page one file; the ten minutes in between will be stored in the page five file. Recording can be started and stopped as many times as desired. A new recording can be added to any page of notes at any time. For example, in one embodiment, if a user wants to add another recording to page ten, the user simply turns to page ten and presses record. Each additional recording is appended to the recorded data already stored for the associated page of notes.
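The page-segmented storage scheme can be modeled as a dictionary of per-page recording segments: audio captured while a page is open is appended to that page's memory location, so turning pages interleaves segments across files. The following toy model (all names illustrative) reproduces the page one/page five example above.

```python
# Toy model of per-page recording storage. Real recordings would be audio
# buffers or files; strings stand in for chunks here.

class PageRecorder:
    def __init__(self):
        self.pages = {}          # page number -> list of recorded chunks
        self.current_page = None

    def turn_to(self, page):
        self.current_page = page

    def record_chunk(self, chunk):
        """Append newly recorded data to the currently open page."""
        self.pages.setdefault(self.current_page, []).append(chunk)

rec = PageRecorder()
rec.turn_to(1); rec.record_chunk("lecture min 0-10")
rec.turn_to(5); rec.record_chunk("lecture min 10-20")
rec.turn_to(1); rec.record_chunk("lecture min 20-25")
# Page one now holds the first and last segments; page five the middle one.
```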

Figures 3 and 4A-4D show attribute data for the recordings stored in a link table 478. The link table 478 is representative of the types of attribute data that can be stored. The link table 478 can be stored in a file, in RAM, in an in-memory data structure, etc. In the link table 478:

"X" 446 represents the X coordinate of the stylus data or other event data;

"Y" 448 represents the Y coordinate of the stylus data or other event data;

"P" 450 represents the pressure of the stylus on a page or other event data; and

"T" 452 represents the time offset into the recording.

The term "time offset" will be used to mean an offset, time stamp, time code, frame code, etc., into a recording for an event. The time offset can represent a time relative to the beginning of the recording, or other event. In some embodiments, this time offset may be fine grained, and represent intervals such as one millisecond, or a single video frame (1/30 of a second, 33.3 milliseconds). In Figures 4 A through 4D, the time offset is the number of milliseconds from the

beginning of the recording.

"C" 454 represents the absolute date and clock time when the X-Y point or other event occurred (for example the number of seconds since some pre-determined epoch, such as 00:00 January 1, 1970).

"I-T" 456 represents the index type, or flags associated with the entry into the link table.

For example, the flags can represent if the data was captured at record time, play time, or stop time. Other data can be stored in this field, such as an indication of a page turn, starting or stopping of a recording, etc. For example, a code indicating the page was turned to another page (TURNED_TO flag), or turned to this page from another page (TURNED_FROM flag) is stored in this field. In this case, the number of the page (turned to or turned from) is stored in the X field. Note that in some embodiments, these flags can be logically OR'd together (such as a page

TURNED_TO flag OR'd with a RECORD_TIME flag).
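One plausible in-memory shape for a link-table entry with the fields listed above (X, Y, P, T, C, I-T) is sketched below. The specific flag bit values are invented for illustration; the text only requires that index-type flags can be logically OR'd together.

```python
# Hypothetical link-table entry. Field names follow the X/Y/P/T/C/I-T
# description above; flag bit values are illustrative assumptions.

from dataclasses import dataclass

RECORD_TIME_FLAG = 0x01
PLAY_TIME_FLAG   = 0x02
STOP_TIME_FLAG   = 0x04
TURNED_TO_FLAG   = 0x08
TURNED_FROM_FLAG = 0x10

@dataclass
class LinkEntry:
    x: int          # X coordinate of the stylus, or other event data
    y: int          # Y coordinate
    p: int          # stylus pressure (0 = stylus up)
    t_ms: int       # time offset into the recording, in milliseconds
    clock: float    # absolute time, e.g. seconds since the Unix epoch
    flags: int      # index type, possibly OR'd (e.g. TURNED_TO | RECORD_TIME)

# A page turn to page 5 while recording: per the description above, the
# target page number is carried in the X field.
turn = LinkEntry(x=5, y=0, p=0, t_ms=61_250, clock=889_142_400.0,
                 flags=TURNED_TO_FLAG | RECORD_TIME_FLAG)
```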

Note that stylus data is always captured by the multimedia recorder 61 — when the device is recording (referred to as record-time), when the device is playing (referred to as play-time), and when the device is stopped (referred to as stop-time). In some embodiments, only a subset of this attribute data may be used, while in other embodiments, additional information may be captured

(such as the tilt of the stylus, color of the ink used in the stylus, etc.).

The multimedia recorder 61 captures a complete spatial map of all information written on each page. This includes every X-Y point (446, 448) that makes up every stylus stroke, as well as a pressure reading 450 from the stylus. The stylus pressure data 450 is used to determine stylus ups and downs (i.e., when the stylus was placed down on a page, and when it was picked up).

Each X-Y point (446, 448) that makes up each letter, word, or drawing acts as an index into the recording. Each X-Y point (446, 448) indexes the location in the recording that was being recorded or played back at the time the stroke was made, or if the system is stopped, the last portion of the recording that was played or recorded prior to stopping. In other embodiments where video is used, a video time code could also be stored for every X-Y point (446, 448).
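Pen-up/pen-down segmentation from the pressure channel can be sketched as follows: a stroke is the run of X-Y points between a pressure rise above zero and its return to zero, and every retained point keeps its time offset so it can later serve as an index into the recording. The sample tuple layout and zero-pressure threshold are assumptions.

```python
# Illustrative stroke segmentation from (x, y, pressure, t_ms) samples.

def split_strokes(samples, pressure_threshold=0):
    """Group stylus samples into strokes using the pressure channel.
    Each stroke is a list of (x, y, t_ms) points."""
    strokes, current = [], []
    for x, y, p, t in samples:
        if p > pressure_threshold:
            current.append((x, y, t))     # stylus is down: extend stroke
        elif current:
            strokes.append(current)       # stylus lifted: close the stroke
            current = []
    if current:                           # stroke still open at end of data
        strokes.append(current)
    return strokes
```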

Figure 3 shows an example of a page of notes, distinguishing between three types of

recording indices. The drawing shown under the heading "all indices" 470 shows a complete page of notes. This is then broken out into notes that were written during recording (record-time indices 472), notes written during playback (play-time indices 474), and notes written when the system is stopped or idle (stop-time indices 476). In this example, a simple outline was made during recording and filled in with further details during playback, and when stopped. This way, a user does not have to worry about making a notation for every point of interest during recording.

Notes can be added at the user's leisure after a recording is made.

In the example shown in Figure 3, someone is giving a talk about the sport of ice hockey.

During recording, the user outlined the topics of the talk — "mens" was written when the lecturer

began speaking about men's ice hockey, and "womens" was written when the lecturer began speaking about women's ice hockey. These notes are record-time indices 472 because they were written during recording. After recording the lecture, the user plays it back. During playback, the user makes additional notes. For example, the user writes "nhl" while they are playing back a portion of the lecture about the National Hockey League. This new note now becomes an additional index into the audio recording. The X-Y points that make up the word "nhl" are play-time indices 474 since the note was written during playback. In this case, it indexes a portion of the recording that was not indexed when the recording was originally made (i.e., the part about the NHL). A user can also add a note during playback that indexes something that was already indexed during recording. In this way there can be multiple indices at different locations on a page that index the same time offset in a recording. The notation "Nagano '98" was created during stop-time (i.e., stop-time indices 476) — the user played a portion of the recording about

the 1998 Olympics in Nagano, pressed the stop button 406, and then wrote the note. In one embodiment, notations added when the system is stopped index the last portion of the recording that was played or recorded prior to stopping. However, notations made during stop-time may just be used to touch up a note (e.g., dot an "i", cross a "t", or darken light writing). Therefore, in another embodiment, stop-time indices 476 are treated as notations with no associated time index into the recording. Another alternative is to allow a user to selectively enable or disable

stop-time indices 476 depending on their usage style.

This invention allows multiple indices to be created for the same information from different locations in notes — the user adds indices simply by making notations anywhere on a page while the multimedia recording is playing or even when it is stopped. There is no limit on the number of indices that can be created.

Indices for each page of notes are updated dynamically. The new X-Y points (446, 448) created during play and stop time are dynamically added to the link table for a given page of

notes. Thus, immediately after a stylus stroke is made, it is available as an index for retrieving a portion of the recording (note that each "stroke" is composed of a series of X-Y points (446,

448), and each point has an associated time offset 452).

Figures 4A through 4D show an example of notes taken by someone about a talk on the

sport of ice hockey for another embodiment. In Figure 4A, the user writes the topic "Ice Hockey" at the top of a page 45 along with a small graphic representing crossed hockey sticks.

During recording, the user outlined the topics of the talk — "womens" was written when the lecturer began speaking about women's ice hockey, and "mens" was written when the lecturer began speaking about men's ice hockey. These notes are record-time indices because they were created during recording. The link table 478 shows some of the information that is stored. Each

line of the table corresponds to a different stylus position on a page 45. In this example, two

minutes (120 seconds) of the talk are recorded.

In the embodiment shown in Figures 4A-4D, it is possible to get multiple stylus strokes in

the smallest time offset interval (e.g., one millisecond).

In one embodiment, each of the entries in the table links the writing to a recording 480 of

the talk. For example, the first few X-Y points shown occur in the first few milliseconds of the recording, and thus substantially refer to the start of the recording. The last few X-Y points refer to the end of the recording. The links are graphically indicated in Figures 4A-4D by arrows pointing from the link table 478 to the recording 480.

In Figure 4B, after recording the talk, the user plays it back and makes additional notes filling in the outline. For example, the user writes "nhl" while they are playing back a

portion of the lecture about the National Hockey League. This new note now becomes an additional index into the recording. The X-Y coordinates that make up the word "nhl" are play-time indices since the note was written during playback. In this case, it indexes a portion of the recording that was not indexed when the recording was originally made (i.e., the part about the

NHL).

In Figure 4C, the user makes additional notes. The writing "Nagano '98" was written during stop-time — the user played a portion of the recording about the 1998 Olympics in Nagano, pressed the stop button 406, and then wrote the note. In one embodiment, user notations made when the system is stopped index the last portion of the recording that was played or recorded

prior to stopping.

Figure 4D shows additional notes and corresponding links that are added when an additional recording segment is appended to the end of the recording for the page 45. The user writes "Womens hockey is great for all ages!" while an additional 60-second recording is made.

These record-time indices index this new segment of the recording 480.

Selecting on any portion of the handwritten text causes the recording to be played back from the corresponding time offset. For example, if the user selects on the handwritten text "nhl", the recording starts playing at the point where the lecturer began talking about the National Hockey League.

Multiple parts of the link table 478 can index the same portion of the recording. Figures 4B and 4C show play-time and stop-time indices that both refer to the 30.000 second point in the

recording. Multiple play-time or stop-time indices can refer to the same point in the recording.

For example, several different play-time indices can each refer to a particular point in the recording that discusses women's ice hockey. Play-time and stop-time indices can also refer to a

point already indexed at record-time.

Storage of Data

In one embodiment of the multimedia recorder, the recordings and link tables are stored digitally on a disk. In another embodiment, the device can store the data in internal flash memory

or on a hard drive, or other storage means.

Playback by Selection on a Note

In order to access a point in a recording associated with a note, the user "selects" on any part of the writing. A selection can be made by pointing at any mark on a page (or circling, or other gesture). Note that because the present invention allows notes written during playback to index a recording, it is necessary to distinguish between writing activity and selections (i.e., to determine whether a user is writing notes or making selections for playback).

Figure 1B shows two possible embodiments of a stylus 519. In one embodiment, a selecting-end 531 of a stylus 519 is pressed down on or pointed at a desired location on a page. The two embodiments of the stylus 519 shown in Figure 1B from left to right are as follows: a

stylus 519 with an ink tip on the writing-end and a button on the selecting-end 531, and a stylus 519 with an ink tip on the writing-end 529 and a non-marking tip on the selecting-end 531. In these embodiments, the selecting-end 531 of the stylus is used to trigger playback from a location on a page. Whenever the writing-end 529 of the stylus 519 makes contact with the page, this is

considered writing activity; whenever the selecting-end 531 of the stylus 519 makes contact with

the page, this is considered selecting. The device can sense the stylus 519 location without making contact with the page. In another embodiment, a single-ended stylus (not shown) can be used where a button on the stylus switches between a writing function and a selecting function. In yet another embodiment, the system can distinguish between a writing stylus and a selecting stylus using an identifier communicated from the stylus 519 to the multimedia recorder 61.

When a selection is made, the system then searches for the closest matching stored X-Y

location to the selected X-Y location for the page of notes. A matching index must be within a threshold distance from the selection or the system determines that there was no match. If there is more than one match for a given selection, depending on the use, the earlier or later matching stroke can be used as the index since the order and time of strokes is known. In one embodiment,

the earliest matching stroke is used. When multiple matches occur, the system can also select between them based upon the index type (record-time, play-time, or stop-time). For example, a record-time index can be given preference over a play-time index, or vice versa. Playback is then started at a point in the recording at or near (e.g., a few seconds prior to) the time offset of the matching X-Y location.

Timeline

An active physical scrollbar or timeline 532 acts as both a display and control. The timeline 532 displays a visual representation of the recording. In one embodiment, one end of the timeline 532 represents the beginning of a page and the other end represents the end of the page. The cursor indicator light 409 displays the current location in the recording.

In one embodiment, a stylus 519 can be dragged along the timeline 532 to continuously adjust the playback position. There is a groove 509 in the timeline for touching or dragging the stylus. As the stylus 519 is dragged along the timeline 532, the cursor indicator light 409 moves along with it. In one embodiment, touching or dragging the stylus 519 to the left of the indicator light 409 moves backward in the recording; touching or dragging to the right of the indicator light 409 moves forward in the recording. Pressing the play button 405 causes playback to begin from the recording position shown by the cursor indicator light 409. Pressing the stop button 406 stops recording or playback.
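Mapping a stylus touch on the timeline groove to a playback offset is a proportional conversion from position along the groove to a time offset (the inverse of moving the cursor light to match the playback position). A sketch under assumed names:

```python
# Illustrative timeline-scrubbing math: a touch partway along the groove
# seeks to the corresponding fraction of the recording. Names are assumptions.

def timeline_to_offset(x, groove_len, total_ms):
    """Playback offset (ms) for a stylus touch at position x on the groove."""
    frac = min(max(x / groove_len, 0.0), 1.0)  # clamp to the groove
    return int(frac * total_ms)
```

Touching a quarter of the way along the groove of a two-minute page recording would thus seek to the 30-second point.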

Hardware Description

Figure 5 shows a block diagram of the hardware components for one embodiment of a multimedia recorder 61. In one embodiment, the multimedia recorder 61 has a host computer 103 that communicates with a variety of other devices. The host computer 103 contains a central

processing unit (CPU), random access memory (RAM), serial input/output ports, a bus for communicating with other components, etc. The host computer 103 can "boot" itself off of an

internal or external flash memory unit, or by other means.

An audio subsystem 307 communicates with the host computer 103. The audio subsystem

307 plays and records audio by using analog-to-digital and digital-to-analog converters, or a codec (coder-decoder). The audio subsystem 307 connects to microphone 319 or line level inputs

(the output of 317), and speaker or line level outputs 321. Microphone level inputs 319 can be amplified to line level through the use of an optional external pre-amplifier 317 and connected to

the line level input connectors of the audio subsystem 307.

The host computer 103 reads and writes data to a disk 301 or other permanent storage

(such as a magnetic disk, a magneto-optical disk, or solid state storage module). The host computer 103 communicates with the disk 301 through a disk controller unit 303 (such as SCSI), or the disk controller 303 may be built directly into the host computer 103 (such as IDE). Stylus 519 location data (e.g., X-Y location, pressure, tilt, etc.) from a digitizing tablet 513 (such as

a commercially available tablet from Wacom Technology Corp.) is communicated to the host computer 103 through a serial (RS-232) or parallel port. A power supply 315 provides electrical power to the system components 311. The power supply 315 can be run off of an internal or external battery, wall power adapter, etc. An internal battery can be charged by the power supply 315. The host computer 103 communicates with a processor 101 (such as a Motorola

MC68HC11) that monitors and controls a variety of user interface components 309.

Overview of Page Identification System

Figure 6A shows an overview of the page identification system including a book. Light from a printed page code 43 is reflected off a mirror 3 through a lens 1 onto an optical sensor 5. Data from the sensor is processed by a processor 101 which decodes the printed page code 43. The processor can turn on illuminators 25 if the light level is too low to properly read the code 43. In addition, a book 17 is held in place by holders 47 that maintain a substantially fixed position of the printed page code 43 with respect to the illuminators 25, and the optical assembly comprising the mirror 3, lens 1, and optical sensor 5. In the embodiment shown in Figure 6 A, the holder 47 consists of a number of pegs that hold the book 17 in position. In other embodiments, the holder 47 can be a frame that holds the book in position.

The page identification system 71 includes an optical assembly 65, a light source 25, and a processor 101. At least a portion of the page identification system 71 is disposed in a case 35. The page identification system works in conjunction with a book 17 with codes 43 on each page. The case 35 includes an opening 83 in which a transparent window 33 is positioned to protect the optical assembly 65. In some embodiments the window can be deep red, or visibly opaque while transparent to IR light. A book 17 can be placed in a holder 47 such that the location of the book 17 (and an identification code 43 located on a page) is relatively fixed in relation to the page

identification system 71.

Figure 6B shows some representative positions of the portion of the page identification mechanism 71. As shown, the page identification mechanism 71 can be located at the top left position, the left at top position, the top centered position, the top at right position, or the right at top position. It is noted, however, that other positions can be used without departing from the spirit or scope of the invention. Figure 6B also shows that the page code 43 can be printed anywhere on the book pages. In one embodiment, the code 43 is printed along the top edge of the page. Note that the pages could also be

taken in and out) or by a clip (as with a clipboard), used as single sheets, etc.

Optics

Figure 6C shows the optical assembly 65 for the page identification system 71. One

advantage of the invention is that the optical elements that form the assembly are inexpensive and

easily manufactured. A simple lens 1 focuses the image of the page code 43 onto the sensor 5. In some embodiments, a more complex lens or lens system can be used to reduce optical aberrations. A "fold" mirror 3 changes the angle and shortens the distance between the optical sensor 5 and the page 17. As shown, a ray of light 13 from the page code 43 reflects off the mirror 3 through the lens 1 to the optical sensor 5. The mirror 3 is also used to direct the infrared (IR) illumination from the illuminators (i.e., source of IR light) 25 onto the code 43. This arrangement allows the optical sensor 5 to be placed off to the side of the page, allowing a user to read the page, write on the page, or turn the page without the portion of the page identification system 71 disposed in the case 35 getting in the way. This configuration of optical elements permits the optical sensor 5 and

the IR illuminators 25 to be mounted on a circuit board 11 that is parallel to the plane 19 of the

book 17.

To obtain an optimal image, the optical sensor 5 would need to "look" directly down at the page code 43. However, such a configuration would interfere with turning the pages. In accordance with the invention, the mirror 3, lens 1, and housing 7 are configured to place the virtual location of the optical sensor 5 above the page. To increase the contrast of the image, the

virtual height of the optical sensor 5 above the page is increased and the angle of incidence, normally 45 degrees, is made closer to 90 degrees relative to the page.

In one embodiment, the lens 1 is a small aperture lens (e.g., f/10) for increased depth of field. As such, all book thicknesses from zero pages to the maximum number of pages in the book remain in focus. The lens 1 reduces the size of the image and the surrounding white space and produces a real image of the code 43 on the sensor 5. In some embodiments the lens 1 can increase the size of the image on the optical sensor 5; in other embodiments the lens 1 can keep

the image the original size. The magnification of the lens 1 is set so that minor misalignments of the page code 43 can be tolerated (the image of the code only fills about 70% of the active area of

the optical sensor 5).

Figure 7 is a perspective view of one embodiment of the optical housing 7. The housing 7 prevents stray light from impinging on the optical sensor 5. This feature is desirable as stray light on the optical sensor 5 decreases the contrast of the code image. Brackets 9 are built into the housing to support the mirror 3 at the correct angle. The case 35 protects the optical assembly 65. The window 33 prevents dirt and external objects from disturbing the optical assembly 65. In some embodiments, the housing 7 can be integrated into the case 35. The lens 1

is mounted in a lens assembly 23 that also acts as an optical stop. The lens assembly 23 is mounted in a receptacle 21 in the housing 7. The housing 7 has holes 27 to connect the housing to a printed circuit board 11 with bolts 31. Of course, the design of the housing 7 can vary.

Figure 8 illustrates the relative position and orientation of the light source, mirror 3, and lens 1 of the optical assembly 65. In one embodiment, the light source comprises Infrared Light

Emitting Diodes (IR-LEDs) 25. The IR-LEDs 25 are small, inexpensive, and practical sources of infrared light. As shown, illumination from two IR-LEDs 25 is reflected by the mirror 3 onto the page code 43. The housing 7 holds the IR-LEDs 25 at an angle such that when reflected off the mirror 3, the illumination is centered on the page code 43. This dual IR-LED 25 configuration provides relatively even illumination over the page code target area. Of course, the number and configuration of IR-LEDs can vary.

Figure 9 illustrates the orientation of the optical assembly 65 relative to the electronic subsystem. The optical housing 7 is mounted on a circuit board 11 along with external control circuitry 29. In one embodiment, the optical sensor 5 can be a single chip optical sensor (e.g., Texas Instruments TSL1401, 128 x 1, linear sensor array), which reads the page code 43. The

TSL1401, for example, is a charge mode CMOS photodiode array that is less expensive and requires fewer external components than a CCD array. The linear dimension of the optical sensor 5 is used to electronically scan across the printed page code 43. The use of this type of low cost optical sensor is possible because the size and location of the page code 43 are relatively fixed with respect to the optical sensor 5. This small optical sensor 5 is matched to the information content in the codes that are read by the system. In other embodiments, a two-dimensional sensor chip could be used to read area-based bar codes, recognize numerals (e.g., "1", "2", "3", ...) directly, or identify other objects printed on the page.

Bar Codes

In one embodiment, a width modulated bar code is used so that precise alignment between the sensor 5 and the page code 43 is not required. For example the interleaved 2-of-5 (ITF)

format consists of 5 black bars and 5 white bars. Two bars of each color are wide and three bars are narrow, with the code ending in a white bar. The ITF format encodes an even number of digits and provides a relatively high data density. The ITF specification defines a two bar start code and a two bar stop code. While the ITF format contains built-in redundancies, additional digits can be used for further redundancy. Note that the terms "start code" and "stop code" are used herein; in

the art these are sometimes referred to as a "start/stop character" or a "start/stop pattern".

The start and stop codes as defined by the ITF specification can be simplified because of the physical constraints of the system (i.e., fixed length code, known orientation, etc.).
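The general ITF structure just described can be sketched as follows. This is an illustration, not the device's firmware: the narrow and wide widths (1 and 2 units) and the function name are assumptions, while the per-digit bar patterns and the start/stop patterns are the standard 2-of-5 assignments.

```python
# Sketch of interleaved 2-of-5 (ITF) encoding. Each digit maps to five
# elements, two wide (W) and three narrow (n); the first digit of a pair
# becomes the black bars and the second digit becomes the white spaces.
PATTERNS = {
    "0": "nnWWn", "1": "WnnnW", "2": "nWnnW", "3": "WWnnn", "4": "nnWnW",
    "5": "WnWnn", "6": "nWWnn", "7": "nnnWW", "8": "WnnWn", "9": "nWnWn",
}

def encode_itf(digits, narrow=1, wide=2):
    """Return alternating bar/space widths for an even number of digits."""
    assert len(digits) % 2 == 0, "ITF encodes an even number of digits"
    width = {"n": narrow, "W": wide}
    out = [narrow, narrow, narrow, narrow]   # standard start code: narrow bar/space pairs
    for i in range(0, len(digits), 2):
        bars = PATTERNS[digits[i]]           # first digit -> black bars
        spaces = PATTERNS[digits[i + 1]]     # second digit -> white spaces
        for b, s in zip(bars, spaces):       # interleave bar and space widths
            out += [width[b], width[s]]
    out += [wide, narrow, narrow]            # standard stop code: wide bar, narrow space, narrow bar
    return out

widths = encode_itf("42")                    # e.g., a two-digit page number
```

Decoding reverses the process: classify measured widths as narrow or wide, de-interleave bars and spaces, and look the resulting patterns up in the table.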

Simplifying the start and stop codes (such as only using a single narrow-bar stop code) allows wider bars to be used in the same amount of space, leading to a higher signal-to-noise ratio in the resultant image. The start and stop codes are traditionally used because the location and orientation of the bar code is unknown. The background of the area 63 that is viewed by the

sensor can contain a special code 53 so that the system can easily determine if a book is not

present.

Electronics

Figure 10 shows a block diagram of the electrical components of one detailed embodiment of the invention. A processor 101 (e.g., a Motorola MC68HC11 microcontroller) operates the optical sensor 5 and decodes the code 43 in software. Control lines from the processor 101 to the optical sensor 5 include a clock signal 113 and a signal to start the integration 115 of the optical information. The analog output 117 from the optical sensor 5 is read by an analog-to-

digital converter (A/D) in the processor 101 or by an external A/D.

Figures 11 and 12 show that additional circuitry can be added to allow short integration times

under bright light conditions. This circuitry would be advantageous if the processor 101 is not fast enough to provide clock signals to allow for the minimum integration time of the optical sensor 5. This circuitry uses a trigger signal 119 to produce a start integration (SI) signal 115 from a single shot 111 and gate a fast clock signal 121 to the sensor 5. The trigger signal 119 is turned off by the processor 101 after the appropriate number of clocks have passed. Raw SI-R 125 and CLK-R 123 signals directly from the processor 101 are also used to clock out the analog output of the optical sensor 5 under software control.

A method that can be used to adapt the optical sensor 5 to the ambient light level follows.

The processor 101 is programmed in software to vary the integration time of the optical sensor 5 to optimize the resulting signal level output from the optical sensor 5. The analog output level of the optical sensor 5 is linearly related to the amount of light falling on the optical sensor 5. The

analog output level of the optical sensor 5 is also linearly related to the integration time of the optical sensor 5. Thus, increasing the integration time can be used to compensate for a low light level, and decreasing the integration time can compensate for a high light level. A binary search of different integration times is used to adjust for different lighting levels. This technique acts as an automatic gain control, so that the optical sensor 5 always outputs an acceptable signal level

regardless of the ambient light level. The maximum analog output level 117 that can be generated by the optical sensor 5, without saturating the optical sensor 5, is used as a goal value. The integration times are varied in an attempt to come within a small delta of this goal value. If this goal value cannot be attained, the integration time that produces the widest range of outputs (i.e., black-to-white contrast without saturating the optical sensor 5) is used. If the lighting conditions

change, or the page code 43 is temporarily obscured, the search can fail. If the search fails to find

an appropriate integration time, the search is restarted.

The processor 101 is programmed in software to decode the image data from the optical sensor 5. After a successful decoding, the page identifier is compared to the last successfully decoded page identifier. If the page identifier is different (i.e., a page has been turned), the new page identifier is sent to the host computer system 103. This technique reduces the amount of data that is transferred to the host computer system 103. The array of raw data (Rarray) from the optical sensor 5 is first smoothed, such as with a median filter. This smoothed result is then differentiated to produce an array of derivative values (DArray). The maximum of the absolute value of these differentiated values is multiplied by a constant to determine a threshold value (THRESH). This threshold value is used in finding peaks and valleys in the derivative array. A peak in the derivative array corresponds to the start or end of a bar within the optical code. Each point in the derivative array is compared against the threshold and surrounding points to find local minima and maxima. If the code begins with the correct type of extremum (i.e., a minimum indicating a white-to-black transition for the first bar in the code), the decoding process continues.

The width of each bar is found by calculating the difference in position between successive extrema. If the correct number of bars is found for the bar code symbology, the decoding process continues. If the widths of the bars in the start and stop code match those specified by the bar code symbology, the bar code is decoded using a standard "reference" decoding algorithm. If a valid decoded value is found, it is used as the page identifier.
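The smoothing, differentiation, and thresholding steps above can be sketched as follows. The filter width, the threshold constant, and the synthetic scan values are illustrative assumptions, not values from the specification.

```python
# Sketch of the edge-finding front end: median filter, differentiate,
# threshold, then locate extrema of the derivative (bar edges).
def median3(a):
    """3-point median filter to smooth the raw sensor scan."""
    return [a[0]] + [sorted(a[i - 1:i + 2])[1] for i in range(1, len(a) - 1)] + [a[-1]]

def bar_edges(raw, thresh_k=0.5):
    """Return pixel positions of bar edges (extrema of the derivative)."""
    smooth = median3(raw)
    deriv = [smooth[i + 1] - smooth[i] for i in range(len(smooth) - 1)]
    thresh = thresh_k * max(abs(d) for d in deriv)   # THRESH = constant * max |derivative|
    edges = []
    for i in range(1, len(deriv) - 1):
        if abs(deriv[i]) >= thresh:
            if deriv[i] < 0 and deriv[i] <= deriv[i - 1] and deriv[i] <= deriv[i + 1]:
                edges.append(i)   # local minimum: white-to-black transition
            elif deriv[i] > 0 and deriv[i] >= deriv[i - 1] and deriv[i] >= deriv[i + 1]:
                edges.append(i)   # local maximum: black-to-white transition
    return edges

def bar_widths(edges):
    """Bar/space widths are differences between successive edge positions."""
    return [edges[i + 1] - edges[i] for i in range(len(edges) - 1)]

# Synthetic scan: bright paper (200) with three dark bars (20) of widths 3, 4, 6.
scan = [200] * 5 + [20] * 3 + [200] * 4 + [20] * 6 + [200] * 5
```

Running `bar_edges` on the synthetic scan yields one edge per black/white transition, and `bar_widths` recovers the bar and gap widths, which would then be matched against the symbology's narrow/wide pattern.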

In accordance with the invention, the IR-LED illuminators 25 are selectively activated if

the ambient light level is too low for the optical sensor 5 to operate properly. This technique saves power and produces less heat. In one embodiment, the illuminators 25 are either on or off; in another embodiment, the illuminators 25 can provide a variable output level of light. If the integration time of the optical sensor 5 is greater than a pre-determined threshold, the IR-LED illuminators 25 are turned on during the sensor 5 integration period. The IR-LED illuminators 25 are turned on at a light level when they just become effective (relative to the

ambient light level) at illuminating the page code 43; otherwise there could be instability in the binary search light-level detection algorithm. Turning on the IR-LED illuminators 25 prevents the integration time from getting too long and slowing down the responsiveness of the system.

Figure 13 shows a sample page 45 with a printed page code 43.

Figures 14A-14D show that page codes 43 can be printed on both sides of a page to

provide a double-sided book. In this case, printing the page code 43 at the top center of the page may be optimal. If the page identification system 71 is at the top and left of the page (Figures 14A and 14B), the page code 43 for the back side of the page must be at the outside corner of the page (Figure 14B). If the page identification system is at the center of the page (Figures 14C and

14D), the page codes 43 for both the front side and back side of the pages are equally far from the corners of the page, and hence less prone to bending, folding, or curling.

Figure 15 shows that in some configurations the active area sensed under the book may not extend all the way to the edges of the book page. For example, a digitizing tablet under the book 17 can sense pen or stylus strokes on a page, but the tablet area may not extend all the way to the edge of the paper. In this case, placing a visible border on the page indicates the allowable writing area 67. The edge of the page 69 is shown along with the code 43, and a

human readable page number 59.

Figure 16 shows the book 17 may also contain a pocket or sleeve 73 for holding storage media 75 (such as a disk or memory card) when it is not in use. This media, for example, may hold data relating to a book, such as audio, video, pen data, and so on. This sleeve 73 is labeled with a key number 77C that matches the key number 77A on the storage media so that a one-to-

one correspondence between books and storage media is maintained. This key number 77B also appears on the cover 41 of the book, and may be contained in the code 77D on the front cover of the book for automatic identification of the book. The key number on the cover 77B can be correlated with a key number encoded in the storage media.

Figure 17 illustrates one method that can be used to adapt the optical sensor 5 to the ambient light level. The microcontroller 101 is programmed in software to vary the integration

time of the optical sensor 5 to optimize the resulting signal level output by the optical sensor 5. The analog output level of the optical sensor 5 is linearly related to the amount of light falling on the optical sensor 5. The analog output level of the optical sensor 5 is also linearly related to the integration time of the optical sensor 5. Thus, increasing the integration time can be used to compensate for a low light level, and decreasing the integration time can compensate for a high light level. In one embodiment, a binary search of different integration times is used to adjust for different lighting levels. This technique acts as an automatic gain control, so that the optical

sensor 5 always outputs an acceptable signal level regardless of the ambient light level. The maximum analog output level 117 that can be generated by the optical sensor 5, without saturating the optical sensor 5, is used as a goal value. The integration times are varied in an attempt to come within a small delta of this goal value. If this goal value cannot be attained, the integration time that produces the widest range of outputs (i.e., black-to-white contrast without

saturating the optical sensor 5) is used. The flow chart in Figure 17 shows the sequence of steps used to vary the integration times in an attempt to match the goal light level.

Figure 17 is an overview of the binary search used on real-time data to find the proper integration time for the optical sensor 5. The light level adaptation process begins with the start initialization step 151. Step 151 initializes variables such as Low and High that may be uninitialized. Low and High represent the lower and upper limits of the integration time as the

search progresses. In the embodiment shown, these values represent a loop counter delay controlled by the processor 101. In step 153, if variable Low is greater than or equal to variable High, both variables are reset in step 155. A Middle value is set to the average of High and Low in step 157. The Middle value is the integration time that is used by the optical sensor 5 on the current iteration of the binary search. The Middle value is tested in step 159; if it is greater than a threshold (IR Thresh), the illuminators 25 are turned on in step 161. The optical sensor 5 is then cleared in step 163 and the integration is begun in step 165. The processor waits Middle amount of time in step 167. Note that the overhead of other processing can be compensated for, and a Middle delay time of zero corresponds to the minimum integration time of the optical sensor 5. The processor then stops the integration in step 169. The illuminators 25 are then turned off in step 171. In some embodiments, it is simplest and fastest to always turn off the illuminators 25, rather than check whether they are turned on.

In step 173, the analog values are read out of the optical sensor 5, digitized, and stored in an array called Rarray. The maximum value of the elements in Rarray is found in step 175 and stored in variable RMax. RMax thus represents the brightest portion of the image captured by the optical sensor 5. RMax is then compared against a desired value called GOAL in steps 177, 181,

and 185. GOAL is a desired output value from the optical sensor 5 that is less than the saturation point of the optical sensor 5. If RMax is within a small delta of the GOAL as shown in step 177, the integration time produces a good image on the optical sensor 5, so the optical information in Rarray is decoded in step 179. Step 179 decodes the barcode data from the image of the page code 43. Step 181 shows that if RMax is less than the GOAL, meaning the integration time

should be increased, the variable Low is set to Middle+1 in step 183. If RMax is greater than the GOAL in step 185, the integration time should be decreased, so variable High is set to Middle-1 in step 187. The binary search algorithm is then iterated by returning to step 151. While the embodiment shown in Figure 17 attempts to maximize RMax, other embodiments can optimize

other values, such as the black to white contrast in the image (e.g., RMax minus the minimum

value stored in Rarray).
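The loop of Figure 17 can be sketched in software as follows. The numeric values of GOAL, DELTA, IR_THRESH, the delay range, and the iteration cap are illustrative assumptions (the patent gives no numbers), and the sensor scan is modeled by a caller-supplied function rather than real hardware.

```python
# Sketch of the Figure 17 binary search over integration times.
GOAL, DELTA, IR_THRESH = 240, 8, 600   # assumed values: target level, tolerance, IR cutoff
LOW_INIT, HIGH_INIT = 0, 1023          # assumed limits of the integration-time delay

def adapt_integration(read_max, max_iters=20):
    """Binary-search the integration delay until the brightest pixel nears GOAL.

    read_max(middle, ir_on) models steps 163-175: it performs one scan with
    the given delay and illuminator state and returns the maximum pixel value.
    """
    low, high = LOW_INIT, HIGH_INIT          # step 151
    for _ in range(max_iters):
        if low >= high:                      # steps 153/155: search failed, restart
            low, high = LOW_INIT, HIGH_INIT
        middle = (low + high) // 2           # step 157
        ir_on = middle > IR_THRESH           # steps 159/161: long times need IR illumination
        rmax = read_max(middle, ir_on)       # brightest pixel of this scan (RMax)
        if abs(rmax - GOAL) <= DELTA:        # step 177: good image, ready to decode
            return middle
        if rmax < GOAL:                      # step 181: too dark, integrate longer
            low = middle + 1                 # step 183
        else:                                # step 185: too bright, integrate less
            high = middle - 1                # step 187
    return None

# Example: a sensor model whose brightest pixel rises linearly with the delay.
delay = adapt_integration(lambda middle, ir_on: min(255, 0.5 * middle))
```

With this linear model the search converges in a handful of iterations; with a real sensor, saturation and changing ambient light are what make the restart path necessary.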

If the lighting conditions change, or the code 43 is temporarily obscured, the search can fail. If the search fails to find an appropriate integration time, the search is restarted by re-

initializing the values of Low and High in step 155.

Active Physical Scrollbar

In the following descriptions, "stylus" is used to mean a pen (either the writing end 529 of a digitizing pen or the selecting end 531 of a digitizing pen), a finger, or another pointing mechanism (see Figure 25). The term translucent is used to mean transmitting light but causing sufficient diffusion to prevent perception of distinct images, as well as transparent (capable of transmitting light so that objects or images can be seen as if there were no intervening material).

Figure 18 shows an overview of one embodiment of the active physical scrollbar. The

scrollbar system comprises an input component and a display component. The display includes display elements, represented in this embodiment by surface mount LEDs 501 mounted on a printed circuit board 11. The display elements are enabled or disabled by a driver chip 503 that is controlled by a processor 101. A host computer system 103 communicates with the processor 101, instructing it to change the display depending on the state of the host computer

system 103. The input component of the system tracks user input on the display. One method of accomplishing this is with a digitizing tablet 513 that electromagnetically senses the location of an input stylus 519 (such as a commercially available tablet from Wacom Technology Corp.) as it is moved on or over the display elements. The digitizing tablet 513 communicates any user input

with the host computer system 103, which in turn may update the display. In other embodiments, alternative position sensing mechanisms can be used, such as pressure sensitive panels. A transparent or translucent material having an interaction surface 507 covers the display and protects it from the stylus 519. A groove 509 in the substantially transparent interaction surface 507 provides an intuitive place to use the stylus 519. In one embodiment, the system provides

continuous control of the data by tracking the movement of the stylus 519 through a groove 509 in the system. The groove 509 acts as an affordance for where the stylus 519 should be used. The groove 509 extends the length of the display elements 501, indicating the area that can be touched. The groove 509 can be "V" shaped, "U" shaped, semicircular, etc.

In one embodiment, the digitizing tablet 513 is used to sense the position of the stylus 519, although other sensing mechanisms can be used. The display elements 501, circuit board 11, etc., are placed over the digitizing tablet 513, and the position of the stylus is sensed through this hardware. Minimizing the height (thickness) of the circuit board 11, display elements 501, and interaction surface 507 assures that the digitizing tablet 513 is able to accurately sense the stylus

519 position. In the embodiment shown in Figure 18, LEDs are used as the lighted elements to provide visual feedback. Surface mount devices are used to allow the LEDs to be placed at a fine pitch (i.e., close together) and for reduced height. For example, using 0603 size (0.06 x 0.03 inches, 1.6 x 0.8 mm) surface mount LEDs, it is theoretically possible to achieve a density of up to 32 LEDs per inch. With a 10 mil (0.010 inch) space between the solder pads needed for attaching

the surface mount components, a practical density of 25 LEDs per inch is achievable. It is possible to obtain even higher LED densities using smaller (e.g., 0402 size) components, or by

integrating the LEDs directly into a semiconductor chip. The use of surface mount LEDs also decreases the height of the assembly above the circuit board and the tablet (e.g., some currently available 0603 sized LEDs are only 0.03 inches high). Surface mount display elements are advantageous since they are typically less expensive, easier to assemble into circuit boards, and

more reliable than leaded components. In addition, the use of surface mount LEDs provides less "bleeding" of light between adjacent elements when compared with non-surface mount LEDs.
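The practical density figure above follows directly from the component pitch; a quick arithmetic check, assuming the 0.03 inch side of the 0603 package lies along the row of LEDs:

```python
# Pitch arithmetic for the surface mount LED row (dimensions from the text;
# the orientation of the package along the row is an assumption).
led_len = 0.03             # inches occupied by the LED package along the row
pad_gap = 0.010            # 10 mil space between adjacent solder pads
pitch = led_len + pad_gap  # center-to-center spacing: 0.04 inches
density = 1.0 / pitch      # 25 LEDs per inch
```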

In one embodiment, the hardware includes a processor 101 (such as a Motorola MC68HC11 microcontroller) that communicates with a host computer system 103. The host computer system gathers data from a digitizing tablet 513 and combines that data with other state information to determine which lights should be turned on. This information is communicated to

the processor 101 that in turn controls the logic lines to one or more LED driver chips 503, such as the Maxim 7219. Some driver chips (such as the Maxim 7219) can be connected serially, so that any number of LED elements 501 can be controlled from a single connection to the processor 101. The LED driver chip(s) 503 allow the individual LED elements 501 to be turned on or off independently. The host computer system 103 sends a command to the processor to turn on or off any LED. One or all LEDs 501 can be turned on or off at the same time, and in any sequence.

In other embodiments, other types of display elements can be used, and may call for a different type of driver. Different color or multicolor light emitting elements can be used in the scrollbar to

display various types of information. The different color elements can be used individually or in

combination to produce a range of colors.

Figure 19 shows one embodiment of a portion of the display component of the active physical scrollbar. The figure shows surface mount LEDs used as display elements 501, a driver chip 503, and a cable 505 to the processor (not shown).

Figure 20 similarly shows surface mount LEDs used as display elements 501, a driver chip

503, and a cable 505 to the processor (not shown). In addition, an L-shaped embodiment of the substantially transparent interaction surface 507 is shown with a groove 509 for the stylus. In other embodiments, the interaction surface may be rectangular, flat, or another shape. The display elements are covered with the transparent or translucent material so the light emitting elements can be seen only when they are turned on. This covering material also protects the hardware. In one embodiment, a translucent L-shaped acrylic interaction surface is used to cover the individual LEDs and circuitry. The L-shape permits the acrylic interaction surface to "wrap around" the two sides of the circuit board that are exposed (i.e., the edge in front of the LEDs, and on top of the LEDs). The LEDs can shine out of both the top and front of

the L-shaped interaction surface for easy viewing.

Figures 21 and 22 show one embodiment of the substantially transparent interaction surface 507. In the embodiment shown, an L-shaped translucent interaction surface 507 is used to protect the top and edge of the printed circuit board 11. This shape and material protects the display elements, yet permits the display to be easily seen. The L-shaped embodiment permits the display to be seen from both the top and from the edge. In addition, the interaction surface

includes a groove 509. In Figure 22 the interaction surface 507 is shown in both horizontal and vertical orientations, as the system can be oriented as desired.

Figure 23 shows a block diagram for an alternative embodiment of the hardware using a liquid crystal display (LCD) as the display elements. An LCD may be easier to see in bright sunlight than LEDs.

Figures 24A, 24B, 24C, and 24D show that with a given hardware configuration of LEDs it is possible to increase the apparent resolution of the scrollbar by averaging the light between adjacent elements. For example, when used as a timeline, each element is turned on sequentially when moving through the data set, and thus the resolution is limited to the number of lighted elements. If there are 64 LEDs, only 64 distinct positions can be shown. By averaging between elements it is possible to increase the granularity of the system. For example, rather than turning on each element at full brightness in sequence, two adjacent elements are turned on at less than full brightness. Under these conditions it visually appears as if an LED element between the two actual elements is turned on. In the case of a 64 element scrollbar, an additional 63 averaged (or "phantom") elements can be created in this manner, thus increasing the granularity of the scrollbar

from 64 to 127 elements without adding any additional hardware components. Figures 24A and 24B show two adjacent light emitting elements, one of which is at full brightness 525, and one which is not producing light 521. In the cases illustrated in Figures 24A and 24B, the light is perceived at the position of the light emitting element set to full brightness. Figure 24C shows two adjacent light emitting elements that are at roughly half brightness 523. Under the conditions illustrated in Figure 24C, a phantom light 527 is perceived halfway between the elements. Similarly, Figure 24D shows that if one element is at roughly one quarter brightness, and an adjacent element is at three quarters brightness, a phantom light appears roughly three fourths of the way between the elements. Thus, the effective resolution of the system can be more than double the actual number of light emitting elements.
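The averaging scheme can be sketched as a mapping from a continuous position to per-element brightness levels; the 0.0-1.0 brightness scale and the function name are assumptions for illustration.

```python
# Sketch of the brightness-averaging ("phantom" element) scheme.
def led_brightness(pos, n_leds):
    """Map a continuous position in [0, n_leds-1] to per-LED brightness levels.

    A fractional position splits its brightness between the two adjacent
    LEDs, so the eye perceives a phantom element between them.
    """
    levels = [0.0] * n_leds
    i = int(pos)
    frac = pos - i
    if frac == 0.0:
        levels[i] = 1.0            # exactly on an element: full brightness (Figs. 24A/24B)
    else:
        levels[i] = 1.0 - frac     # nearer element gets proportionally more light
        levels[i + 1] = frac       # e.g., 0.5/0.5 gives a phantom at the midpoint (Fig. 24C)
    return levels
```

For example, a position of 2.75 lights element 2 at one quarter and element 3 at three quarters brightness, matching the Figure 24D case.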

Figure 25 shows two embodiments of the stylus 519. As shown, the stylus can be a

digitizing pen having a writing end 529 and a selecting end 531. The selecting end can have

varying shapes as shown.

In one embodiment, the host system reads X-Y position and stylus pressure data from a digitizing tablet. The tablet outputs data when the stylus is near the tablet. The host has a predefined set of X-Y coordinates that define the location of the scrollbar. The host system then

determines if the stylus is pressed down in the scrollbar region. If the X-Y value returned by the

tablet is within the scrollbar region and the stylus pressure value is greater than zero (indicating that the stylus is down), then a scrollbar selection is triggered. Next, the host program maps the X-Y location of the stylus to an LED number. For example, if the scrollbar region on the tablet goes from X=1 to X=1000, and the LEDs are numbered from 1 to 64, then an input value of

X=500 would map to LED number 32. Lastly, the host program sends a command to the

processor requesting that the 32nd LED be turned on. The host communication with the processor can be configured such that turning on a new LED number also causes a previously

lighted LED to be turned off.
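The hit test and mapping just described can be sketched as follows, using the example values from the text (X from 1 to 1000 over 64 LEDs); the Y bounds and the function name are illustrative assumptions.

```python
# Sketch of the host-side scrollbar hit test and X-to-LED mapping.
X_MIN, X_MAX, N_LEDS = 1, 1000, 64   # example scrollbar region and LED count

def scrollbar_hit(x, y, pressure, y_min=0, y_max=50):
    """Return the 1-based LED number selected by a pen-down event, or None."""
    if pressure <= 0:                # stylus near the tablet but not pressed down
        return None
    if not (X_MIN <= x <= X_MAX and y_min <= y <= y_max):
        return None                  # outside the predefined scrollbar region
    # Scale the X coordinate onto the LED indices; X=500 maps to LED 32.
    return 1 + (x - X_MIN) * N_LEDS // (X_MAX - X_MIN + 1)
```

The host would then send the returned LED number to the processor, which lights that element and extinguishes the previously lighted one.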

Although the system can provide a continuous control, in many applications it is desirable to additionally have any point touched on the scrollbar "snap" to the nearest LED. For

example, in an audio playback application this permits touching anywhere near an LED to always turn on the LED nearest the stylus, and have audio playback start at the location corresponding to the lighted LED. This snapping to the nearest grid point is necessary because of the finite resolution of the scrollbar, and results in consistent behavior of the control when a user attempts to find the same point in the timeline again.
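Once a touch has snapped to an LED, the playback start point follows from that LED alone, which is why revisiting the same LED always reproduces the same point in the recording. A sketch using assumed parameter names and the 64-LED example:

```python
# Sketch of the snap-to-LED playback behavior: each LED owns one fixed
# time offset, so the mapping from snapped LED to start time is exact.
def playback_start(led, n_leds=64, duration_s=64.0):
    """Time offset, in seconds, where playback begins for a snapped LED (1-based)."""
    return (led - 1) * duration_s / n_leds
```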

The host computer system 103 may also turn on and off the individual light emitting

elements of the scrollbar independently from the position of the stylus. For example, when playing an audio recording, the host can turn on the elements in sequence as the host plays the recording from beginning to end, thus "animating" the scrollbar, making it appear that the lighted element is moving. In some embodiments, one of the display elements can be activated when the system is

started, or in response to some other user input (e.g., not on the interaction surface) to display a corresponding point, such as when the system is displaying a timeline. The scrollbar design is flexible in that the display elements can be turned on or off individually. For example, a single enabled element can indicate the current position.

Alternatively, all the elements to one side of the current point can be used to indicate the position

(as in a bar graph display).

When the scrollbar is used to animate a timeline (or percent done indicator) the host program determines when to turn on each LED. For example, to animate a 64 element scrollbar during a 64 second audio recording, each LED would be turned on in sequence for a duration of one second each. The host computer system 103 sends a command to turn on LED #1, waits one second, sends a command to turn on LED #2 (LED #1 is turned off by the processor), waits one second, etc.
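The host-side animation loop might be sketched as follows; `send_led_on` is a placeholder for the host-to-processor command, not an API named in the text:

```python
import time

def animate_scrollbar(duration_s, num_leds=64, send_led_on=print):
    """Animate the scrollbar in step with playback of a recording.

    Illustrative sketch: `send_led_on` stands in for whatever command
    the host sends to the processor to light a given LED (the processor
    turns off the previously lighted one, per the text).
    """
    per_led = duration_s / num_leds   # e.g., 64 s recording -> 1 s per LED
    for led in range(1, num_leds + 1):
        send_led_on(led)              # light LED #led
        time.sleep(per_led)           # hold before advancing to the next LED
```

For the 64-second example in the text, `animate_scrollbar(64)` lights each of the 64 LEDs for one second in sequence.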

Claims

1. A system for sensing information from an identification code disposed on a page of a book, comprising: a holder configured to receive the book; an optical assembly disposed relative to the holder and located substantially adjacent to an edge of the book, the optical assembly comprising an optical sensor for receiving light reflected from the identification code and generating a signal representative thereof; and a processor coupled to the optical sensor for processing the signal representative of the received light.
2. The system in accordance with claim 1, wherein the identification code comprises a bar code.
3. The system in accordance with claim 1, wherein the processor activates a light source when the processor determines, based on the signal representative of the received light, that ambient light levels are too low.
4. The system in accordance with claim 3, wherein the processor adjusts the light source to optimize the signal representative of the received light.
5. The system in accordance with claim 3, wherein the light source generates non-visible light.
6. The system in accordance with claim 1, wherein the processor executes a search of integration times to maximize the signal representative of the received light.
7. The system in accordance with claim 6, wherein the search executed by the processor is a binary search.
8. The system in accordance with claim 1, wherein the optical assembly comprises a reflecting element, a focusing element, and the optical sensor, wherein the reflecting element directs the reflected light onto the focusing element and the focusing element focuses the reflected light onto the optical sensor.
9. The system in accordance with claim 8, wherein the optical sensor and the book are substantially coplanar.
10. The system in accordance with claim 8, wherein the relative locations of the book and the reflecting element position the optical sensor at a virtual location above the book.
11. A book, comprising: a plurality of pages; and a width modulated bar code on each of the plurality of pages, wherein the bar code varies for each page to uniquely identify each such page of the book.
12. The book in accordance with claim 11, wherein the bar code is characterized by the absence of a start code.
13. The book in accordance with claim 11, wherein the bar code is characterized by a single bar stop code.
14. The book in accordance with claim 11, wherein the bar code has an interleaved 2 of 5 format.
15. A scrollbar system for controlling and displaying information comprising: an array of display elements; a translucent material covering the display elements and having an interaction surface; a sensing device disposed adjacent to the array of display elements for sensing the location of a stylus proximately disposed relative to the interaction surface; and a processor electrically coupled to the array of display elements and the sensing device, the processor activating at least one of the display elements based upon the sensed location of the stylus.
16. The scrollbar system of claim 15 wherein the processor activates at least one of the display elements near the sensed location of the stylus.
17. The scrollbar system of claim 15 wherein the processor activates the display element closest to the sensed location of the stylus.
18. The scrollbar system of claim 15 wherein the processor activates the plurality of display elements at selected levels of brightness.
19. The scrollbar system of claim 15 wherein the processor is coupled to time-varying media and activates at least one of the display elements in coordination with the time-varying media.
20. The scrollbar system of claim 15 wherein the sensing device is a digitizing tablet.
21. The scrollbar system of claim 15 wherein the display elements are light emitting diodes.
22. The scrollbar system of claim 15 wherein the display elements are surface-mounted light emitting diodes.
23. The scrollbar system of claim 15 wherein the array of display elements form a liquid crystal display.
24. The scrollbar system of claim 15 wherein the interaction surface of the translucent material has a groove for receiving the stylus.
25. The scrollbar system of claim 15 wherein the processor activates at least two adjacent elements at less than full brightness.
26. A system for linking a user notation on a page to time-varying data, comprising: a user interface for capturing attribute data for each user notation made on the page during record-time and play-time; a recording device for recording time-varying data substantially corresponding to the attribute data for the page; and a processor for dynamically linking the attribute data for each user notation made on the page to a substantially corresponding element of the time-varying data.
27. The system in accordance with claim 26, wherein the attribute data is stored in a first memory location and the time-varying data is stored in a second memory location.
28. The system in accordance with claim 26, wherein the user interface captures attribute data for user notations made on the page during stop-time.
29. The system in accordance with claim 26, wherein the attribute data comprises location, time, and index-type information.
30. The system in accordance with claim 26, wherein the user interface comprises: a stylus for making the user notation on the page; a digitizing tablet coupled to the stylus for capturing attribute data for each user notation; and a memory coupled to the digitizing tablet for storing the attribute data.
31. The system in accordance with claim 30, wherein the stylus comprises: a writing end for making user notations on the page; and a selection end for selecting the user notation on the page.
32. The system in accordance with claim 26, wherein the processor uses time-offset data to dynamically link the attribute data for each user notation to the substantially corresponding element of the time-varying data.
PCT/US1999/004823 1998-03-06 1999-03-05 Multimedia linking device with page identification and active physical scrollbar WO1999045521A1 (en)

Priority Applications (10)

Application Number Priority Date Filing Date Title
US7707898 true 1998-03-06 1998-03-06
US7706798 true 1998-03-06 1998-03-06
US7709898 true 1998-03-06 1998-03-06
US7706698 true 1998-03-06 1998-03-06
US7706198 true 1998-03-06 1998-03-06
US60/077,098 1998-03-06
US60/077,061 1998-03-06
US60/077,066 1998-03-06
US60/077,067 1998-03-06
US60/077,078 1998-03-06

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2895599A AU2895599A (en) 1998-03-06 1999-03-05 Multimedia linking device with page identification and active physical scrollbar

Publications (1)

Publication Number Publication Date
WO1999045521A1 true true WO1999045521A1 (en) 1999-09-10

Family

ID=27536110

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/004823 WO1999045521A1 (en) 1998-03-06 1999-03-05 Multimedia linking device with page identification and active physical scrollbar

Country Status (1)

Country Link
WO (1) WO1999045521A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1242916A1 (en) * 1999-12-01 2002-09-25 Silverbrook Research Pty. Limited Video player with code sensor
WO2004081703A2 (en) * 2003-03-14 2004-09-23 Isolearn Limited A method and apparatus for identifying a page of a plurality of pages, and relaying the identity of the page to a computer
US7533816B2 (en) 2000-11-25 2009-05-19 Silverbrook Research Pty Ltd Method of associating audio with a position on a surface
EP2940635A1 (en) * 2014-04-30 2015-11-04 Samsung Electronics Co., Ltd. User terminal apparatus for managing data and method thereof

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5548106A (en) * 1994-08-30 1996-08-20 Angstrom Technologies, Inc. Methods and apparatus for authenticating data storage articles
DE19615986A1 (en) * 1996-04-22 1996-10-31 Markus Fischer Microcomputer input device for control of book reading by computer
US5572651A (en) * 1993-10-15 1996-11-05 Xerox Corporation Table-based user interface for retrieving and manipulating indices between data structures
US5581071A (en) * 1994-12-06 1996-12-03 International Business Machines Corporation Barcode scanner with adjustable light source intensity
EP0752675A1 (en) * 1995-07-07 1997-01-08 Sun Microsystems, Inc. Method and apparatus for event-tagging data files in a computer system
US5630168A (en) * 1992-10-27 1997-05-13 Pi Systems Corporation System for utilizing object oriented approach in a portable pen-based data acquisition system by passing digitized data by data type to hierarchically arranged program objects
WO1997018508A1 (en) * 1995-11-13 1997-05-22 Synaptics, Inc. Pressure sensitive scrollbar feature
DE29714828U1 (en) * 1996-08-27 1997-10-30 Liou Kenneth Optical bar code scanner
WO1999010834A1 (en) * 1997-08-27 1999-03-04 Cybermarche, Inc. A method and apparatus for handwriting capture, storage, and indexing


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8031982B2 (en) 1999-05-25 2011-10-04 Silverbrook Research Pty Ltd Pen-shaped sensing device for sensing surface regions
US8295653B2 (en) 1999-05-25 2012-10-23 Silverbrook Research Pty Ltd Sensing device for sensing surface regions
EP1242916A1 (en) * 1999-12-01 2002-09-25 Silverbrook Research Pty. Limited Video player with code sensor
US8180193B2 (en) 1999-12-01 2012-05-15 Silverbrook Research Pty Ltd Video player with code sensor and memory for video data retrieval
US7263270B1 (en) 1999-12-01 2007-08-28 Silverbrook Research Pty Ltd Video player with code sensor
EP1242916A4 (en) * 1999-12-01 2006-01-11 Silverbrook Res Pty Ltd Video player with code sensor
US7533816B2 (en) 2000-11-25 2009-05-19 Silverbrook Research Pty Ltd Method of associating audio with a position on a surface
US7934654B2 (en) 2000-11-25 2011-05-03 Silverbrook Research Pty Ltd Method of associating recorded audio with position
WO2004081703A2 (en) * 2003-03-14 2004-09-23 Isolearn Limited A method and apparatus for identifying a page of a plurality of pages, and relaying the identity of the page to a computer
WO2004081703A3 (en) * 2003-03-14 2004-11-25 Isolearn Ltd A method and apparatus for identifying a page of a plurality of pages, and relaying the identity of the page to a computer
EP2940635A1 (en) * 2014-04-30 2015-11-04 Samsung Electronics Co., Ltd. User terminal apparatus for managing data and method thereof
CN105049658A (en) * 2014-04-30 2015-11-11 三星电子株式会社 User terminal apparatus for managing data and method thereof

Similar Documents

Publication Publication Date Title
Weber et al. Marquee: A tool for real-time video logging
US5896403A (en) Dot code and information recording/reproducing system for recording/reproducing the same
US8125461B2 (en) Dynamic input graphic display
US6710771B1 (en) Information processing method and apparatus and medium
US6573887B1 (en) Combined writing instrument and digital documentor
US6633282B1 (en) Ballpoint pen type input device for computer
Mackay et al. Video Mosaic: Laying out time in a physical space
US6518960B2 (en) Electronic blackboard system
Nelson et al. Palette: a paper interface for giving presentations
US20060256083A1 (en) Gaze-responsive interface to enhance on-screen user reading tasks
US20040070616A1 (en) Electronic whiteboard
US8320708B2 (en) Tilt adjustment for optical character recognition in portable reading machine
US4901364A (en) Interactive optical scanner system
US5898434A (en) User interface system having programmable user interface elements
US6554434B2 (en) Interactive projection system
US6054707A (en) Portable scanners capable of scanning both opaque and transparent materials
US5278673A (en) Hand-held small document image recorder storage and display apparatus
US20100079369A1 (en) Using Physical Objects in Conjunction with an Interactive Surface
US6188404B1 (en) Data display apparatus and method, recording medium and data transmission apparatus and method
US20070280627A1 (en) Recording and playback of voice messages associated with note paper
US20080180654A1 (en) Dynamic projected user interface
US7627142B2 (en) Gesture processing with low resolution images with high resolution processing for optical character recognition for a reading machine
US20020044134A1 (en) Input unit arrangement
US6874683B2 (en) User programmable smart card interface system for an image album
US7659915B2 (en) Portable reading device with mode processing

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)