US20140033128A1 - Animated contextual menu - Google Patents
Animated contextual menu
- Publication number
- US20140033128A1 (application US 14/043,015)
- Authority
- US
- United States
- Prior art keywords
- display
- menu
- point
- option
- response
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F40/169—Annotation, e.g. comment data or footnotes
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/0483—Interaction with page-structured environments, e.g. book metaphor
- G06F3/04842—Selection of displayed objects or displayed text elements
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
- G06F3/14—Digital output to display device; cooperation and interconnection of the display device with other functional units
- G06F40/134—Hyperlinking
- G06Q30/0621—Item configuration or customization
- G06T3/60—Rotation of whole images or parts thereof
Definitions
- the invention generally relates to the field of electronic books and, more particularly, to systems and methods for accessing and otherwise working with book information via electronic devices.
- An electronic book reader including input recognition, annotation, and collaboration subsystems. Improved interaction methods provide more intuitive use of electronic books for studying.
- the provided annotation functionality allows a reader (e.g. a student) to make notes as is common with conventional paper books.
- the collaboration subsystem provides functionality to share these notes with others, enabling group learning on a variety of scales, from small study groups to collaborations with worldwide scope.
- the electronic book reader is configured to provide tabs or other interface controls through which a user can access a syllabus for a particular course, corresponding textbooks, the student's own electronic notebook for the course, and lecture material (whether actual audio/video of the lecture, a slide deck used with the lecture, or related materials from the lecture).
- the reader is configured to facilitate navigation through a textbook by providing various user options for provisionally or tentatively moving through the text, for instance to temporarily move to a glossary section before returning to the main text, or temporarily moving from a question page to a portion of the main text relating to the question.
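The provisional navigation described above can be modeled as a simple return stack. This is an illustrative sketch only; the class and page identifiers are hypothetical, and the patent does not specify an implementation.

```python
class ProvisionalNavigator:
    """Tracks a reader's primary position while allowing temporary jumps
    (e.g., to a glossary) that can be unwound with a single return."""

    def __init__(self, current_page):
        self.current_page = current_page
        self._return_stack = []  # pages of primary interest, most recent last

    def jump_provisionally(self, target_page):
        # Remember where the reader was before the temporary move.
        self._return_stack.append(self.current_page)
        self.current_page = target_page

    def return_to_primary(self):
        # Pop back to the most recent page of primary interest;
        # with an empty stack, stay where we are.
        if self._return_stack:
            self.current_page = self._return_stack.pop()
        return self.current_page

nav = ProvisionalNavigator(current_page=42)
nav.jump_provisionally(310)   # e.g., temporarily view the glossary
nav.return_to_primary()       # back to the main text
```

A question page could likewise push its own location before jumping to the related portion of the main text.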
- a bookmarking system facilitates easy access to portions the student identifies as important.
- the reader is configured to allow a student to attach annotations to an electronic textbook, in much the same way as a student might write notes in a conventional paper textbook. These notes can take on a wider range of forms than is possible conventionally. For example, a student can attach audio and video as well as more traditional textual annotations.
- the reader is configured to provide tools for converting student annotations into computer searchable and manipulatable form.
- the reader is configured to communicate with an accelerometer subsystem on the user's computer to allow the user to “pour” annotations off of, or onto, the user's view of the textbook to either remove clutter or provide annotations as the user may require at any particular time.
- the reader is configured to permit students to show all annotations, to show only certain annotations, to marginalize annotations, or to hide all annotations as preferred at any particular time.
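The annotation display modes above (show all, show selected types, marginalize, hide) could be realized as a simple filter. A minimal sketch with hypothetical annotation records; the actual data model is not given in the source.

```python
def visible_annotations(annotations, mode, selected_types=None):
    """Filter annotations per display mode: 'all', 'selected' (only the
    chosen types), 'marginalized' (shown, but placed in the margin), or
    'hidden' (nothing shown)."""
    if mode == "hidden":
        return []
    if mode == "selected":
        wanted = selected_types or set()
        return [a for a in annotations if a["type"] in wanted]
    if mode == "marginalized":
        # Keep every annotation but flag it for margin placement.
        return [dict(a, placement="margin") for a in annotations]
    return list(annotations)  # mode == "all"

notes = [{"type": "text", "body": "see definition on p. 12"},
         {"type": "audio", "body": "clip-1"}]
```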
- the reader is configured to allow students to use gestures to select portions of a textbook to copy over to the student's electronic notebook, for instance where such copying might provide a more efficient way of connecting concepts than merely annotating the textbook with notes.
- a user interface allows the student to include more or less information in such a guide, based on the student's needs and available study time.
- the reader is further configured to facilitate collaboration not only with other students, but with a professor or other teacher, or a teacher's assistant assigned to help students with the class.
- the reader is configured to recognize the student's hand gesture in the form of a question mark on a textbook page to open a question to a moderator (e.g., teaching assistant). The student can then type in a question, and the moderator will know which portion of the textbook relates to the question based on the reader transmitting that information along with the question to the moderator.
- the reader provides a number of other predefined gestures and is further configured to allow users to define their own gestures (e.g., scribbling in the initials of a friend opens a chat with that friend, again keyed to the currently displayed portion of the textbook).
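A gesture facility like the one described, with predefined gestures plus user-defined ones keyed to the current page, might be sketched as a registry that maps recognized gesture names to handlers. The gesture names and handler payloads below are hypothetical.

```python
class GestureRegistry:
    """Maps recognized gesture shapes to actions; users may register
    their own gestures alongside the predefined set."""

    def __init__(self):
        self._handlers = {}

    def register(self, gesture, handler):
        self._handlers[gesture] = handler

    def dispatch(self, gesture, context):
        # Run the matching handler, or return None for unknown gestures.
        handler = self._handlers.get(gesture)
        return handler(context) if handler else None

registry = GestureRegistry()
# Predefined: a question-mark gesture opens a question to a moderator,
# tagged with the currently displayed portion of the textbook.
registry.register("question_mark",
                  lambda ctx: {"action": "open_question", "page": ctx["page"]})
# User-defined: scribbled initials open a chat keyed to the same page.
registry.register("initials:JS",
                  lambda ctx: {"action": "open_chat", "with": "JS", "page": ctx["page"]})

result = registry.dispatch("question_mark", {"page": 57})
```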
- the reader is configured to assist a student in creating a personalized study guide.
- the presence of annotations made by the student and/or the prevalence of annotations made by other users informs which portions of an electronic book are included.
- the reader provides controls to allow the student to tailor the precise criteria used in generating the study guide to help meet their specific needs and requirements.
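The study guide criteria above (the student's own annotations, plus the prevalence of annotations by other users) could be combined as follows. The threshold parameter and section identifiers are assumptions for illustration.

```python
def build_study_guide(sections, own_annotated, shared_counts,
                      min_shared=5, include_own=True):
    """Select textbook sections for a personalized study guide.

    A section is included if the student annotated it themselves
    (when include_own is set) or if at least min_shared other users
    annotated it."""
    chosen = []
    for sec in sections:
        if include_own and sec in own_annotated:
            chosen.append(sec)
        elif shared_counts.get(sec, 0) >= min_shared:
            chosen.append(sec)
    return chosen
```

Tightening `min_shared` or disabling `include_own` yields a shorter guide for a student with less study time.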
- FIG. 1 is a high-level diagram illustrating a networked environment that includes an electronic book reader.
- FIG. 2 illustrates a logical view of a reader module used as part of an electronic book reader.
- FIG. 3 illustrates a logical view of a system database that stores data related to the content hosting system.
- FIG. 4 illustrates a user computer configured as an electronic book reader.
- FIG. 5 illustrates a user computer configured as an electronic book reader, including user highlighting and annotations.
- FIG. 6 illustrates a user computer configured as an electronic book reader, including a notebook interface.
- FIG. 7 illustrates a user computer configured as an electronic book reader, including a contextual menu.
- FIG. 8 illustrates operation of the contextual menu of FIG. 7 .
- FIG. 1 is a high-level diagram illustrating a networked environment 100 that includes a book content hosting system 110 .
- the embodiments discussed herein are particularly suited for textbooks, but one skilled in the art will recognize that many of the features discussed herein are applicable to various other types of books as well.
- the content hosting system 110 makes available for purchase, licensing, rental or subscription textbooks that can be viewed on user and content provider computers 180 (depicted in FIG. 1 , for exemplary purposes only, as individual computers 180 A and 180 B) using a reader module 181 or browser 182 .
- the content hosting system 110 and computers 180 are connected by a network 170 such as a local area network or the Internet.
- the network 170 is typically the Internet, but can be any network, including but not limited to any combination of a LAN, a MAN, a WAN, a mobile network, a wired or wireless network, a private network, or a virtual private network.
- the content hosting system 110 is connected to the network 170 through a network interface 160 .
- reader module 181 and browser 182 include a content player (e.g., FLASH™ from Adobe Systems, Inc.), or any other player adapted for the content file formats used by the content hosting system 110 .
- User computer 180 A with reader module 181 is used by users to purchase or otherwise obtain, and access, materials provided by the content hosting system 110 .
- Content provider computer 180 B is used by content providers to create and provide material for the content hosting system 110 .
- a given computer can be both a client computer 180 A and content provider computer 180 B, depending on its usage.
- the hosting service 110 may differentiate between content providers and users in this instance based on which front end server is used to connect to the content hosting system 110 , user logon information, or other factors.
- the content hosting system 110 comprises a user front end server 140 and a content provider front end server 150 each of which can be implemented as one or more server class computers.
- the content provider front end server 150 is connected through the network 170 to content provider computer 180 B.
- the content provider front end server 150 provides an interface for content providers to create and manage materials they would like to make available to users.
- the user front end server 140 is connected through the network 170 to client computer 180 A.
- the user front end server 140 provides an interface for users to access material created by content providers.
- the content hosting system 110 is implemented by a network of server class computers that can include one or more high-performance CPUs and 1 GB or more of main memory, as well as 500 GB to 2 TB of storage.
- An operating system such as LINUX is typically used.
- the operations of the content hosting system 110 , front end 140 and back end 150 servers as described herein can be controlled through either hardware (e.g., dedicated computing devices or daughter-boards in general purpose computers), or through computer programs installed in computer storage on the servers of the service 110 and executed by the processors of such servers to perform the functions described herein.
- One of skill in the art of system engineering and, for example, video content hosting will readily determine from the functional and algorithmic descriptions herein the construction and operation of such computer programs.
- the content hosting system 110 further comprises a system database 130 that is communicatively coupled to the network 170 .
- the system database 130 stores data related to the content hosting system 110 along with user and system usage information.
- the system database 130 can be implemented as any device or combination of devices capable of persistently storing data in computer readable storage media, such as a hard disk drive, RAM, a writable compact disk (CD) or DVD, a solid-state memory device, or other optical or magnetic storage media.
- Other types of computer-readable storage media can be used, and it is expected that as new storage media are developed in the future, they can be configured in accordance with the descriptions set forth above.
- the content hosting system 110 further comprises a third party module 120 .
- the third party module 120 is implemented as part of the content hosting system 110 in conjunction with the components listed above.
- the third party module 120 provides a mechanism by which the system provides an open platform for additional uses relating to electronic textbooks, much as an application programming interface allows third parties access to certain features of a software program.
- third party input may be limited to provision of content via content provider computers 180 B and the content provider front end server 150 .
- the term “module” refers to computational logic for providing the specified functionality.
- a module can be implemented in hardware, firmware, and/or software. Where the modules described herein are implemented as software, the module can be implemented as a standalone program, but can also be implemented through other means, for example as part of a larger program, as a plurality of separate programs, or as one or more statically or dynamically linked libraries. It will be understood that the named modules described herein represent one embodiment of the present invention, and other embodiments may include other modules. In addition, other embodiments may lack modules described herein and/or distribute the described functionality among the modules in a different manner. Additionally, the functionalities attributed to more than one module can be incorporated into a single module.
- where modules are implemented as software, they are stored on a computer readable persistent storage device (e.g., a hard disk), loaded into memory, and executed by one or more processors included as part of the content hosting system 110 .
- hardware or software modules may be stored elsewhere within the content hosting system 110 .
- the content hosting system 110 includes hardware elements necessary for the operations described here, including one or more processors, high speed memory, hard disk storage and backup, network interfaces and protocols, input devices for data entry, and output devices for display, printing, or other presentations of data.
- system database 130 , third party module 120 , user front end server 140 , and content provider front end server 150 can be distributed among any number of storage devices.
- the following sections describe the reader module 181 , the system database 130 , and the other components illustrated in FIG. 1 in greater detail, and explain their operation in the context of the content hosting system 110 .
- FIG. 2 illustrates a functional view of a reader module 181 used as part of an electronic textbook system.
- the reader module is implemented on user computer 180 A, but it should be recognized that in other embodiments, portions discussed herein could also be implemented on other computers (e.g., those in content hosting system 110 ) that are in communication with reader module 181 .
- Reader module 181 is configured to address the fact that students use textbooks differently than other readers use typical books. Students typically study from, rather than merely read, textbooks. Studying is typically less linear than other reading, as texts are rarely read in “start-to-finish” manner. Studying is often much more interactive than typical reading, with annotations, cross-referencing between problem sets and main portions, reference to glossary or definitions sections, and the like. Studying is also inherently social and collaborative as well—far more so than most other types of reading. Learning in general, and studying in particular, typically combines attention to textbooks with creation and reference to notebooks, problem sets, lab experiment results, lecture materials, and other related sources.
- Reader module 181 includes various subsystems to facilitate the specialized uses students make of textbooks.
- reader module 181 includes an annotation subsystem 220 , an OCR subsystem 230 , a collaboration subsystem 240 , an ordering subsystem 250 , an input recognition subsystem 260 , and a daemon subsystem 270 . Many of these subsystems interact with one another, as described below.
- Annotation subsystem 220 provides various user tools and interfaces to allow students to mark up portions of an electronic textbook as they may find most helpful for learning and studying purposes.
- Annotation subsystem 220 includes conventional features such as highlighting and text entry tools, and also includes more advanced tools. For example, as described below annotation subsystem 220 keeps track of textbook portions for which a student has provided annotations, and collects those portions into a personalized study guide based on a user command.
- OCR subsystem 230 is a recognition subsystem that takes information not originally in machine-readable form and converts it to machine readable form. For example, OCR subsystem 230 communicates with annotation subsystem 220 to convert handwritten student notes (entered graphically via finger or stylus gestures on a touch screen, for instance) into machine readable text. As used here, OCR subsystem 230 includes not only optical character recognition, but other types of recognition as well, for instance: voice-to-text recognition to allow a student to speak rather than write annotations; image to text recognition for photographs the student may take of a professor's notes on a blackboard during a lecture; and other types of recognition as well that may be provided within an electronic textbook or as a third party add-on.
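Because OCR subsystem 230 covers several recognition types (handwriting, voice, photographed notes), one plausible structure is a dispatcher keyed by input kind. The recognizers below are stand-in stubs, not real OCR or speech engines.

```python
def recognize(item, recognizers):
    """Route an annotation to the matching recognizer (handwriting OCR,
    speech-to-text, image-to-text) and return machine-readable text."""
    kind = item["kind"]
    recognizer = recognizers.get(kind)
    if recognizer is None:
        raise ValueError(f"no recognizer for {kind!r}")
    return recognizer(item["data"])

# Stubs standing in for real recognition engines; a third-party add-on
# could register additional entries in this mapping.
recognizers = {
    "handwriting": lambda data: f"ocr({data})",
    "audio":       lambda data: f"speech({data})",
    "photo":       lambda data: f"image({data})",
}
```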
- Collaboration subsystem 240 provides various user functions that allow students to work together. As detailed below, for example, users can share their annotations and notes with their study group, their class section, their entire class, or other users worldwide of the electronic textbook. Further, collaboration subsystem 240 includes social network facilities to permit students to undertake study sessions with audio and visual chat, to ask and answer questions, and to submit questions to professors or teaching assistants.
- Ordering subsystem 250 represents tools that allow students to obtain electronic textbooks and related materials.
- ordering subsystem 250 is implemented as an electronic marketplace (e.g., the DROID™ marketplace implemented on the ANDROID™ operating system for smart phones and tablet computers).
- Third parties offer electronic textbooks and related materials such as study guides, problem sets, updates, workbooks, and the like. Some of these materials are available for purchase; others are free.
- provision via other mechanisms (e.g., subscription, barter, “pay-per-view”) is supported, as may be desired by any subset of a student community or content provider group.
- Input recognition subsystem 260 provides user interface tools to facilitate use of electronic textbooks and related features. For instance, by sensing particular gestures on a touch screen of user computer 180 A as discussed in detail below, the system temporarily shifts display of a textbook from a current page to a new section, while keeping track of the section of primary interest. Thus, a student working on a problem set section of the textbook can quickly look back at the text of the chapter, or a student reading a section for the first time can quickly jump to a glossary section of the textbook for a definition of an unfamiliar term or concept.
- Reader module 181 is configured to permit user-selected applications to run to enhance a student's ability to work with an electronic textbook. For example, a student may purchase an application that provides study questions on a per-chapter basis for textbooks that do not include such questions.
- reader module 181 includes a daemon subsystem 270 to provide additional add-on features without the user launching a visible application for such features.
- additional description of reader module 181 and various subsystems thereof is provided below in connection with the discussion of FIGS. 4-6 .
- FIG. 3 illustrates a functional view of the system database 130 that stores data related to the textbook content hosting system 110 .
- the system database 130 may be divided based on the different types of data stored within. This data may reside in separate physical devices, or it may be collected within a single physical device.
- partner data 370 comprises information regarding content providers or partners registered with the content hosting system 110 that have permission to create and deliver content.
- Partner data 370 includes provider contact information.
- User profile data storage 310 includes information about an individual user (e.g., a student), to facilitate the payment and collaborative aspects of system 100 .
- Subscriber data storage 320 includes identifying information about the student, such as the electronic textbooks the student has obtained and the social network groups the student has joined. In some embodiments, subscriber data storage 320 also maintains information regarding the location in each of the student's textbooks where the student is or was reading, to allow, for example, a student to read part of a textbook chapter on a smart phone while on a campus bus and continue reading from the same spot on the student's desktop computer in a dorm room.
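The cross-device resume behavior described for subscriber data storage 320 amounts to persisting a last-read location per user and book. A minimal in-memory sketch; the storage backend and location format are assumptions.

```python
class ReadingPositions:
    """Persists the last-read location per (user, book) so a student can
    resume on another device from the same spot."""

    def __init__(self):
        self._positions = {}  # (user_id, book_id) -> location

    def update(self, user_id, book_id, location):
        self._positions[(user_id, book_id)] = location

    def resume(self, user_id, book_id, default=0):
        return self._positions.get((user_id, book_id), default)

store = ReadingPositions()
# Read part of a chapter on a phone while on a campus bus...
store.update("student1", "chem101", {"chapter": 3, "page": 87})
# ...then continue from the same spot on a desktop computer.
position = store.resume("student1", "chem101")
```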
- Account data storage 330 keeps track of the user's payment mechanisms (e.g., Google Inc.'s CHECKOUT®) related to the user's ability to obtain content from system 100 .
- Social network data storage 340 maintains the information needed to implement a social network engine to provide the collaborative features discussed herein, e.g., social graphs, social network preferences and rules.
- Textbook data 350 stores the actual content that is provided to users upon their request, such as electronic textbook files.
- Add-on data storage 360 maintains information for related features, such as non-static data relating to textbooks.
- conventional mechanisms are used to implement many of the aspects of system database 130 .
- the existing mechanisms from Google Inc.'s BOOKS™, GMAIL™, BUZZ™, CHAT™, TALK™, ORKUT™, CHECKOUT™, YOUTUBE™, SCHOLAR™, BLOGS™ and other products include aspects that can help to implement one or more of storage facilities 310 - 370 and modules 220 - 270 .
- Google Inc. already provides eBook readers for ANDROID™ devices (phones, tablets, etc.), iOS devices (iPhones®, iPads® and other devices from Apple, Inc.) and various desktop Web browsers, and in one embodiment Google Inc.'s EDITIONS™ eBook reader application is modified to provide the functionality described herein.
- FIG. 4 illustrates a portable computer 400 (e.g., a tablet computer running the ANDROID™ operating system) that includes a touch screen 401 , a microphone 402 , and a front-facing camera 403 .
- such devices currently available typically also provide rear-facing cameras, accelerometers, GPS receivers, Wi-Fi and advanced cellular communications capabilities, and various other features.
- computer 400 is running reader module 181 and displaying a page 404 from an electronic textbook.
- reader module 181 provides four tabs above a main content area 404 allowing selection of four class modules via the following user interface icons: a Syllabus tab 410 , a Textbook tab 420 , a Notebook tab 430 and a Lectures tab 440 .
- the Syllabus tab 410 provides course-specific information for the student, including a calendar of what portions of the text are going to be dealt with on what dates, when assignments are due, and when tests are scheduled. In one embodiment the student's performance during the class is also tracked here (e.g., grades on assignments and exams to date).
- the Textbook tab 420, shown in FIG. 4 as the currently selected tab, provides the actual textbook, as well as a number of navigational and other tools related to viewing the textbook.
- the Notebook tab 430, when selected, causes the student's notebook for the course to be displayed (see discussion of FIG. 6, below).
- the Lectures tab 440, when selected, causes display of lecture-related materials, such as materials a professor may choose to provide to students. For example, a professor may provide slide decks used in a lecture, videos, or other materials that repeat or supplement what the professor presents in a lecture session.
- the display provided under the Textbook tab 420 includes a number of reading and annotation tools 407 .
- To the right of the textbook title is an icon to display a table of contents, as well as an icon to change settings such as text size. To the right of that is an icon to toggle between regular view of the textbook and view of a user-generated study guide (discussed below). To the right of that is an eye-shaped icon, currently shown in the “eye shut” state, indicating whether to show user annotations (also detailed below).
- the last four icons are to add handwritten (pen) annotations (via a stylus or finger, as desired and as supported by computer 400 ), highlighting, sticky note annotations, and audio annotations to the textbook.
- Below the primary content display area 404 is a set of page navigation tools 408. From left to right they include an icon to add a bookmark, an indicator of the current page (circle) in relation to various chapters (noted by breaks in the horizontal line) and previously set bookmarks, a number indicating the last page of the textbook, and arrows representing previous page and next page commands. The user touches the appropriate portion of this display of tools 408 to accomplish a corresponding action.
- a large bar 405, which in one embodiment is colored yellow, indicates that the user has created a sticky note relating to this portion of the text. Smaller bars, in one embodiment displayed in gray, appear both within and below bar 405; in one embodiment these represent other types of annotations provided by the student, for example an audio annotation or a video annotation.
- vertical lines 406 indicate still other student input, in one embodiment highlighting (straight lines) and handwritten annotations (squiggles).
- the closed-eye icon in tools 407 indicates that all of this student-generated markup of the text is currently being hidden from view.
- the bar 405 now shows as a full sticky note, complete with a user control for settings (which in one embodiment include an OCR option to convert the handwritten text to clean machine-searchable text and an option to toggle between handwritten and machine text versions for display).
- a small “resize handle” icon appears at the bottom of the note to allow the note to be made larger or smaller as the user may desire, and an “X” in the upper right hand corner of the note allows the user to delete the note if desired.
- the small gray bars referenced above are replaced with a “TV” icon indicating a video annotation as well as a small green circle with a number in it indicating how many comments have been entered concerning this annotation (e.g., by other students in a collaborative study session).
- a similar loudspeaker icon with a small green circle and corresponding number indicates an audio annotation and comments on that.
- the highlighting and handwritten text previously indicated by vertical lines is now fully displayed. Also in this display, an indication of the current bookmarked status of the page is included in the upper left-hand corner, along with an “X” which, when touched by the user, removes the bookmark.
- reader module 181 uses accelerometer and other positioning input from computer 400 and interprets certain movements as commands.
- tilting computer 400 from portrait mode (as shown in FIGS. 4 and 5 ) to landscape mode triggers a change in the display from one page to two-page spread format.
- An abrupt partial tilt, on the other hand, when in the "show annotations" mode illustrated in FIG. 5, causes the annotations to be "poured" into the margin and the display switched to the "hide annotations" mode (corresponding to the "closed eye" icon) illustrated in FIG. 4.
- a tilt in the other direction pours the annotations back into view.
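The tilt handling described above can be sketched as a small decision function. This is an illustrative sketch only: the threshold values, the angular-speed criterion, and the command names are assumptions for clarity, not details taken from the specification.

```python
# Hedged sketch: one way the tilt commands above might be interpreted.
# Thresholds and command strings are illustrative assumptions.

def interpret_tilt(roll_degrees, angular_speed_dps):
    """Map a tilt sample to a reader command, or None.

    roll_degrees: signed rotation about the device's long axis.
    angular_speed_dps: how quickly the tilt occurred (degrees/second).
    """
    FULL_ROTATION = 75   # near 90 degrees: treat as portrait <-> landscape
    PARTIAL_TILT = 25    # abrupt partial tilt threshold
    ABRUPT_SPEED = 180   # fast enough to count as a deliberate flick

    if abs(roll_degrees) >= FULL_ROTATION:
        # Portrait-to-landscape rotation: switch to two-page spread (and back).
        return "toggle-two-page-spread"
    if abs(roll_degrees) >= PARTIAL_TILT and angular_speed_dps >= ABRUPT_SPEED:
        # An abrupt partial tilt pours annotations into or out of the margin,
        # with the tilt direction choosing which way they pour.
        return "marginalize-annotations" if roll_degrees > 0 else "show-annotations"
    return None  # ordinary handling noise; ignore
```

In a real reader this classification would be tuned against the device's accelerometer stream; the text notes elsewhere that sensitivity is user-adjustable.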
- a set of tools 607 provides icons that (from left to right) allow a student to change settings, such as displaying hand-entered text as shown here or a machine-recognized typed version of the same; to capture information from a whiteboard or chalkboard using camera 403 or (if equipped) a back-facing camera on computer 400 ; to share notes with others, to enter notes with a pen tool as described above, to highlight notes, to add a sticky note to the notebook, and to capture audio annotations corresponding to the notes from microphone 402 .
- the student can provide not only handwriting, but hand drawn shapes 605 as well in both the notebook and textbook annotation modes of operation. Likewise the student can provide notes in outline form 606 .
- the page navigation tools 608 in the notebook mode of operations include icons (from left to right) to add a bookmark, add a new page to the notebook, and navigate among the pages of the notebook as previously explained in connection with textbook page navigation.
- the reader provides user tools for audio/visual/character recognition to convert such annotations into a form that can be machine searched and accessed.
- the reader includes a user interface tool to allow a user to toggle as desired between such original images and the corresponding machine readable text.
- the reader also provides a user with an option to recognize lines and various geometric shapes from imaged or handwritten notes, such that cleaned-up versions of flow diagrams, molecular structures and the like can be easily made from sources having varied legibility.
- input recognition system 260 of reader module 181 provides a number of predetermined operations specifically oriented to textbook use, and also permits students to easily configure their own desired operations.
- input recognition system 260 detects when a user has circled a portion of a textbook's content, either with a finger or a stylus. To indicate such detection, the selected area is displayed with a “glowing” appearance for a period of time. If, during that period of time, the user touches the glowing area and drags it to hover over the Notebook tab 430 , input recognition system 260 detects this as a command to copy that portion of text into the student's notebook, where further room for annotation is available. In that event, the student's notebook becomes the active display, allowing the student to place the copied portion from the textbook anywhere in the notebook that may be desired, and to annotate on and around the added excerpt.
- specific annotations are immediately recognized as corresponding to commands rather than actual annotations.
- a handwritten annotation in the form of a question mark with a circle around it is interpreted as a request to send a question regarding the nearby text to the appropriate teaching assistant for that course (or other predetermined moderator), and a dialog box immediately opens, preaddressed to the teaching assistant, allowing the student to ask the question.
- the message to the teaching assistant is automatically tagged with the corresponding portion of the text, so the student does not need to include any context with the question and can ask something that would otherwise be confusing without that context. For example, if the text shows an illegal divide-by-zero operation, the student's question could simply be: "Why can't you do this?" without any further contextual information.
- gestures are provided in various embodiments.
- a “c” drawn with a circle around it, or a cartoon text balloon shape, is interpreted as a command to open a chat panel.
- a “k” with a circle around it or a pound sign (#) is interpreted as a command to open a keyboard panel.
- a squiggly line or repeated zig-zag is a command to delete a word or diagram.
- a handwritten name (“Jim”) opens a chat panel with a familiar classmate.
- a specified word (“calc”) invokes an installed add-on.
- a user may define a letter “Q” with a circle around it to mean “Quit thoughtfully” and make that gesture correspond to saving all notebook edits, quitting the open textbook, and emailing notes to other study group members (e.g., Mike, Bob and Mary).
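The predefined and user-defined gestures above amount to a mapping from recognized shapes to commands. The sketch below illustrates such a registry under stated assumptions: the gesture names and command strings are invented labels, and a real recognizer would of course classify ink strokes rather than receive string keys.

```python
# Hedged sketch of a gesture registry such as input recognition system 260
# might use. Gesture keys and command strings are illustrative assumptions.

class GestureRegistry:
    def __init__(self):
        # Predefined gestures drawn from the examples in the text.
        self._commands = {
            "circled-?": "open-question-to-moderator",
            "circled-c": "open-chat-panel",
            "speech-balloon": "open-chat-panel",
            "circled-k": "open-keyboard-panel",
            "#": "open-keyboard-panel",
            "zig-zag": "delete-word-or-diagram",
        }

    def define(self, gesture, commands):
        """User-defined gesture: one shape can trigger several commands."""
        self._commands[gesture] = commands

    def interpret(self, gesture):
        """Return the command(s) for a recognized gesture, else None
        (unrecognized ink is kept as an ordinary annotation)."""
        return self._commands.get(gesture)

registry = GestureRegistry()
# The "Quit thoughtfully" example from the text, expressed as a macro:
registry.define("circled-Q", ["save-notebook-edits", "quit-textbook",
                              "email-notes-to-study-group"])
```

A shape that matches no entry simply falls through to normal annotation handling, which matches the text's distinction between command gestures and actual annotations.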
- reader module 181 enables highlighting, sticky notes and annotations generally (e.g., 405 , 406 ) to be selectively shown or marginalized.
- An advantage of marginalizing, rather than completely hiding, annotation is that marginal marks remind the student upon a second or third reading of a section that there are potentially helpful annotations available for that section.
- use of the accelerometer of computer 400 to either show or marginalize annotations upon a quick tilting of computer 400 provides a very quick and intuitive way for the student to switch between these two types of display.
- user interface controls allow the specific gestures used to indicate show/marginalize annotations to be adjusted and otherwise changed, so that the sensitivity of these can be tuned to match a user's preference.
- Reader module 181 also enables a student to mark certain annotations as private.
- annotations are by default shared anonymously with the public (i.e., all others who have access to the electronic textbook), but alternate embodiments may share more selectively, as appropriate to the environment.
- reader 181 is configured to tag all portions of a textbook for which annotations have been provided such that a student can request a personalized study guide, comprised solely of the highlighted sections, to be generated.
- each tagged section remains hyperlinked to the original full text to allow the student to quickly switch back to the full text for additional context regarding a particular section of interest. In one embodiment, this is accomplished by placing an underlined page number at the left margin of each section of the study guide; clicking on that number takes the user to the indicated page in the textbook.
- collaboration subsystem 240 is configured to obtain information from other students as well regarding portions of the textbook that they have highlighted, so that a study guide can be generated based on their annotated sections in addition to the user's own annotated sections.
- the student can select the student's own work group, other classmates, other students at the same school or at other select schools, or even all students worldwide for purposes of determining which annotations should be used to generate the study guide.
- a slider-style user interface allows a student to adjust selectivity for generation of the study guide.
- one setting includes all sections highlighted by any student, while another requires that at least five students have annotated a section before it is included in the study guide (or, when all students worldwide are considered, that 5% of those students have provided annotations).
- a student may tailor a study guide to the amount of time the student has available to use the guide.
- some students who have not read the entire text may also use this feature to determine which portions are considered most important for a first reading before an examination.
- an animated user interface that moves or “slurps” these additional annotated sections from outside the current field of view is shown when the user changes the slider to include more sections, and the additional sections are slurped out of the field of view as the user changes the slider to be more selective in which sections to have in the study guide.
- user interface tabs/buttons allow a user to select “My highlights,” “Classmates' highlights,” or “Everyone's highlights.”
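The selectivity rule described above can be sketched as a simple filter over annotation counts. This is a hedged illustration: the data shapes, parameter names, and example section ids are assumptions, while the thresholds (include everything, at least five annotators, or 5% of a worldwide population) come from the text.

```python
# Hedged sketch of study-guide generation: include a section if the student
# annotated it, or if enough other students did. Data shapes are assumed.

def study_guide_sections(sections, own, counts, population,
                         min_annotators=1, min_fraction=None):
    """sections: ordered section ids; own: ids the student annotated;
    counts: id -> number of other students who annotated that section;
    population: total students considered (e.g., worldwide)."""
    selected = []
    for sec in sections:
        n = counts.get(sec, 0)
        if sec in own:
            selected.append(sec)            # the student's own highlights
        elif min_fraction is not None:
            # Worldwide-style setting: require a fraction (e.g., 5%).
            if population and n / population >= min_fraction:
                selected.append(sec)
        elif n >= min_annotators:
            selected.append(sec)
    return selected

# Illustrative data, not from the specification:
sections = ["1.1", "1.2", "1.3", "1.4"]
own = {"1.2"}
counts = {"1.1": 6, "1.3": 2}
guide = study_guide_sections(sections, own, counts, population=100,
                             min_annotators=5)
```

Moving the slider toward "more selective" corresponds to raising `min_annotators` (or `min_fraction`), which is what triggers sections being "slurped" out of the study guide view.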
- Many of the computers 400 on which reader module 181 will be implemented support multi-touch navigation by a user. However, not all of the multi-touch commands that may be most helpful for use of electronic textbooks are provided in a native manner on such devices. For instance, the standard "pinch-zoom" and swipe features available to change magnification and move through pages and chapters are certainly useful with textbooks, but more specific navigation choices are supported by reader module 181. For example, as noted above, users of textbooks often need to make quick reference to another portion of the text and then return to where they were in the text. With a paper book, one often sticks a finger in the book at the current page and then moves to the page of temporary interest.
- Reader 181 permits a corresponding operation by placing a finger of one hand down on the screen 401 at a location showing the current page (e.g., near 404 on FIG. 4 ) and then using other existing page navigation techniques to move to another page (e.g., by swiping with two fingers of the other hand to move back a number of pages).
- the navigation footer 408 is persistent, and the user can quickly move around the book (either provisionally using the one-finger hold on the current page or normally) using this interface at any time.
- the user can either release using the left hand to return to the original page or release using the right hand to commit to the new page and abandon the original page.
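The finger-in-the-book interaction above reduces to a small state machine: pin the current page, navigate tentatively, then either snap back or commit depending on which hand lifts first. The class below is a minimal sketch under that reading; the method names and page numbers are illustrative.

```python
# Hedged sketch of provisional navigation: one finger pins the current
# page while the other hand navigates; releasing the pinning hand returns,
# releasing the navigating hand commits.

class ProvisionalNavigator:
    def __init__(self, current_page):
        self.page = current_page
        self._pinned = None

    def pin(self):
        """Finger down on the current page: remember where we were."""
        self._pinned = self.page

    def go_to(self, page):
        """Tentative navigation, e.g., a two-finger swipe to another page."""
        self.page = page

    def release_pinning_hand(self):
        """Lift the pinning finger first: snap back to the pinned page."""
        if self._pinned is not None:
            self.page = self._pinned
        self._pinned = None

    def release_navigating_hand(self):
        """Lift the other hand first: commit to the new page."""
        self._pinned = None

nav = ProvisionalNavigator(current_page=120)
nav.pin()
nav.go_to(301)
nav.release_pinning_hand()       # back to page 120

nav2 = ProvisionalNavigator(current_page=120)
nav2.pin()
nav2.go_to(301)
nav2.release_navigating_hand()   # committed to page 301
```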
- collaboration subsystem 240 keeps track of where each student is in the textbook during collaboration and sends that information to the computers 400 of the other students in the collaboration, so that their current location is indicated for the others to see.
- one student's annotations appear on the other students' computers 400 (with color coding for each student's annotations), as do gestures made by one student (e.g., pointing to a particular portion of text using either a mouse or a finger press on a touch screen device).
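The collaboration behavior above, in which positions and color-coded annotations are relayed to every participant, might be organized as below. This is a speculative sketch: the event tuples, the fixed color palette, and the session API are all assumptions, not structures from the specification.

```python
# Hedged sketch of collaboration subsystem 240's sharing behavior:
# reading positions and annotations are broadcast to the session, with a
# per-student display color. Everything here is illustrative.

class CollaborationSession:
    COLORS = ["red", "blue", "green", "orange", "purple"]

    def __init__(self):
        self.positions = {}    # student -> current page, visible to all
        self.colors = {}       # student -> assigned display color
        self.feed = []         # events other computers 400 would render

    def join(self, student):
        self.colors[student] = self.COLORS[len(self.colors) % len(self.COLORS)]

    def move_to(self, student, page):
        """Track and broadcast where this student currently is."""
        self.positions[student] = page
        self.feed.append(("position", student, page))

    def annotate(self, student, page, note):
        """Annotations appear on classmates' screens in the student's color."""
        self.feed.append(("annotation", student, page, note, self.colors[student]))

session = CollaborationSession()
session.join("ann")
session.join("bob")
session.move_to("ann", 12)
session.annotate("bob", 12, "see fig. 3")
```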
- a circle begins to appear, gradually drawn in a clockwise direction around the user's finger.
- the circle is complete after a short period (approximately 500 milliseconds) and then turns into the contextual menu 701.
- the purpose of such animation is to alert the user that by holding a finger on the screen, the user is requesting such a menu (release of the finger before the menu is complete makes the incomplete circle disappear and the menu is not formed).
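The animation logic just described can be sketched directly: the circle's completion tracks how long the finger has been held, and an early release cancels the menu. The 500 ms figure is the text's approximation; the function names and return strings are illustrative assumptions.

```python
# Hedged sketch of the long-press contextual-menu trigger: the circle is
# drawn clockwise over roughly 500 ms, and releasing early dismisses it.

MENU_DELAY_MS = 500  # approximate duration given in the text

def circle_progress(held_ms):
    """Fraction of the circle drawn so far, clockwise from the start."""
    return min(held_ms / MENU_DELAY_MS, 1.0)

def on_touch_event(held_ms, still_down):
    """Decide what to display for a press held this long."""
    if not still_down and held_ms < MENU_DELAY_MS:
        return "dismiss-incomplete-circle"   # released early: no menu forms
    if circle_progress(held_ms) < 1.0:
        return "draw-partial-circle"         # animation still in progress
    return "show-contextual-menu"            # circle complete: menu 701
```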
- Contextual menu 701 provides, in this embodiment, six areas for further user selection: a central area with an “X” in it to close the menu (tapping outside the menu will also close it), and five choices for further user selection.
- Menu 701 is a contextual menu because the user choices are not always the same, but instead are based on what is displayed on screen 401 as well as where on the screen the user has asked the menu to appear. For instance, if the user presses a finger over a chart or diagram, a different set of choices may appear than if the user presses a finger over body text, or over white space as shown in FIG. 7 .
- menu 801 includes five user choices related to annotations, in this case color, stroke, chat, sync and share, that the user can select.
- color denotes a choice of a color for annotations
- stroke denotes gesture recognition activation (and in alternate embodiments, various gesture-related configuration and operation choices)
- chat denotes activation of a chat window
- sync denotes synchronizing the user's display with that of other connected students (e.g., to share annotations)
- share denotes sharing of annotations with other students.
- the latter two choices also have small triangular blocks in the lower right of their respective menu portions in menu 801 ; in this embodiment these blocks indicate that the choices will spawn additional user choices (i.e., not result in any action being taken immediately without the opportunity for further user selection, for example by presentation of a further menu of user choices).
- a central circle 802 with an “X” in it provides a mechanism to close the circular menu, and is primarily used for newer users who may not understand that menu 801 can also be closed by simply tapping outside of menu 801 .
- small graphics rather than words are used to denote the user's options: An artist's palette for “color”, a swoosh symbol for “gesture”, a word bubble for “chat”, a circle with rotating arrows for “sync”, and a document with an arrow for “share”.
- a different number of user selections than five is provided in menus 801 and 821 , as may be appropriate for a given context in which the menu is enabled.
- Other contexts will also call for a different set of user choices within menu 801 .
- an annotation menu appears differing from menu 801 in that “delete” appears rather than “color”, “append” appears rather than “stroke”, and “question” appears rather than “chat” (with the remaining items, “sync” and “share” still appearing as in menu 801 ).
- delete is used to remove an annotation
- append is used to send the annotation from the textbook display to the user's notebook (shown on FIG. 6 )
- question is used to embed the annotation in a question to be addressed to a fellow student, teaching assistant or professor.
- menu items that are common across contexts are placed in consistent areas on menu 801 to facilitate ease of use.
- Contextual menus e.g., 801 are brought up in different forms based not only on location of the user's finger press (e.g., over body text of the book as opposed to over a user's own annotation), but also based on when the press is made (e.g., immediately after highlighting a section of text) and based on other triggering events (e.g., recently receiving a question or annotation from another student) that might warrant actions that would not be needed otherwise.
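Resolving the menu's five options from press location and recent events, while keeping common items in consistent slots, might look like the following. The context keys and option lists mirror the examples above; the "reply" substitution is an invented illustration of a triggering event, not something the specification names.

```python
# Hedged sketch of contextual menu population (menu 801): options depend
# on what was pressed and on recent triggering events, while "sync" and
# "share" keep fixed positions across contexts for ease of use.

def build_contextual_menu(press_target, recent_event=None):
    """Return the five option labels, clockwise, for a press on
    press_target ("annotation", "body-text", "white-space", ...)."""
    if press_target == "annotation":
        options = ["delete", "append", "question", "sync", "share"]
    else:  # body text, white space, figures, etc.
        options = ["color", "stroke", "chat", "sync", "share"]
    if recent_event == "incoming-question":
        # A triggering event can swap in an action not otherwise needed.
        options[2] = "reply"   # assumed label, not from the specification
    return options
```

Note that slots four and five ("sync", "share") are identical in every context, which is the consistent-placement rule the text describes.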
- any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment.
- the appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
- the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion.
- a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
- “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
Abstract
An electronic book system provides interfaces particularly suited to students' use of textbooks. A finger press on a touch screen produces a contextual menu with user choices that relate to where the finger was pressed or what the user was recently doing with the book. A student provisionally navigates through a book by a specific gesture which, when it stops, returns the user to the previous position in the book. Annotations are displayed and hidden using specific gestures and through selective movement of the reader as sensed by its accelerometer.
Description
- This application is a continuation of U.S. patent application Ser. No. 13/901,110 filed May 23, 2013, entitled “Systems and Methods for Manipulating User Annotations in Electronic Books,” which is a divisional of U.S. patent application Ser. No. 13/182,787 filed Jul. 14, 2011, entitled “Systems and Methods for Manipulating User Annotations in Electronic Books,” which is a divisional of Ser. No. 13/171,130 filed Jun. 28, 2011, entitled “Electronic Book Interface Systems and Methods,” which claims the benefit of U.S. Patent Application Ser. No. 61/446,239, filed Feb. 24, 2011, entitled “Electronic Textbook Systems and Methods,” all of which are incorporated by reference in their entirety as if fully set forth herein.
- 1. Technical Field
- The invention generally relates to the field of electronic books and, more particularly, to systems and methods for accessing and otherwise working with book information via electronic devices.
- 2. Background Information
- Even as widespread use of the Web reaches its twentieth anniversary, there has been little change in how people make use of textbooks. Students still fill their backpacks with as many of the five-pound books as will fit, and the impact of such paper-based learning is felt not only in students' backs, but in the carbon footprint of all of the infrastructure required to supply, use and dispose of such materials. A change of just a few pages in a textbook may make it obsolete and call for a new version to be printed; students carry not just this week's chapters with them everywhere, but last month's and next month's chapters as well.
- Although some attempts have been made to transform study material from Gutenberg's era to the digital era, some of the advantages of using paper books for study purposes have not been replicated. Students from time immemorial have used their texts in different ways. Some highlight portions of particular interest; others place notes in the margins to keep track of clarifications of difficult concepts. Some used textbooks are more useful than new ones because they naturally fall open to the most important pages after repeated use, or because particularly important pages or sections are more dog-eared than others. Electronic reading devices have not to date provided interfaces to implement some of these subtle yet important features that help students learn from their texts most efficiently.
- It would be advantageous to provide improved interface mechanisms for students to obtain, read, study from, and otherwise use textbook content with some of the tablet, laptop and other electronic devices that are now entering widespread use.
- An electronic book reader includes input recognition, annotation, and collaboration subsystems. Improved interaction methods provide more intuitive use of electronic books for studying. The annotation functionality allows a reader (e.g., a student) to make notes as is common with conventional paper books. The collaboration subsystem provides functionality to share these notes with others, enabling group learning on a variety of scales, from small study groups to collaborations with worldwide scope.
- In one aspect, the electronic book reader is configured to provide tabs or other interface controls through which a user can access a syllabus for a particular course, corresponding textbooks, the student's own electronic notebook for the course, and lecture material (whether actual audio/video of the lecture, a slide deck used with the lecture, or related materials from the lecture).
- In one aspect, the reader is configured to facilitate navigation through a textbook by providing various user options for provisionally or tentatively moving through the text, for instance to temporarily move to a glossary section before returning to the main text, or temporarily moving from a question page to a portion of the main text relating to the question. By using suitable gestures, the student navigates between these options in a provisional fashion that allows easy return to the main section of interest. In a related aspect, a bookmarking system facilitates easy access to portions the student identifies as important.
- In a further aspect, the reader is configured to allow a student to attach annotations to an electronic textbook, in much the same way as a student might write notes in a conventional paper textbook. These notes can take on a wider range of forms than is possible conventionally. For example, a student can attach audio and video as well as more traditional textual annotations. In a related aspect, the reader is configured to provide tools for converting student annotations into computer searchable and manipulatable form.
- In yet another aspect, the reader is configured to communicate with an accelerometer subsystem on the user's computer to allow the user to “pour” annotations off of, or onto, the user's view of the textbook to either remove clutter or provide annotations as the user may require at any particular time. The reader is configured to permit students to show all annotations, to show only certain annotations, to marginalize annotations, or to hide all annotations as preferred at any particular time.
- In still another aspect, the reader is configured to allow students to use gestures to select portions of a textbook to copy over to the student's electronic notebook, for instance where such copying might provide a more efficient way of connecting concepts than merely annotating the textbook with notes. In a specific aspect, a user interface allows the student to include more or less information in such a guide, based on the student's needs and available study time.
- The reader is further configured to facilitate collaboration not only with other students, but with a professor or other teacher, or a teacher's assistant assigned to help students with the class. In one aspect, the reader is configured to recognize the student's hand gesture in the form of a question mark on a textbook page to open a question to a moderator (e.g., teaching assistant). The student can then type in a question, and the moderator will know which portion of the textbook relates to the question based on the reader transmitting that information along with the question to the moderator. The reader provides a number of other predefined gestures and is further configured to allow users to define their own gestures (e.g., scribbling in the initials of a friend opens a chat with that friend, again keyed to the currently displayed portion of the textbook).
- In another aspect, the reader is configured to assist a student in creating a personalized study guide. The presence of annotations made by the student and/or the prevalence of annotations made by other users informs which portions of an electronic book are included. The reader provides controls to allow the student to tailor the precise criteria used in generating the study guide to help meet their specific needs and requirements.
- The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the disclosed subject matter.
- FIG. 1 is a high-level diagram illustrating a networked environment that includes an electronic book reader.
- FIG. 2 illustrates a logical view of a reader module used as part of an electronic book reader.
- FIG. 3 illustrates a logical view of a system database that stores data related to the content hosting system.
- FIG. 4 illustrates a user computer configured as an electronic book reader.
- FIG. 5 illustrates a user computer configured as an electronic book reader, including user highlighting and annotations.
- FIG. 6 illustrates a user computer configured as an electronic book reader, including a notebook interface.
- FIG. 7 illustrates a user computer configured as an electronic book reader, including a contextual menu.
- FIG. 8 illustrates operation of the contextual menu of FIG. 7.
- The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
-
FIG. 1 is a high-level diagram illustrating anetworked environment 100 that includes a bookcontent hosting system 110. The embodiments discussed herein are particularly suited for textbooks, but one skilled in the art will recognize that many of the features discussed herein are applicable to various other types of books as well. Thecontent hosting system 110 makes available for purchase, licensing, rental or subscription textbooks that can be viewed on user and content provider computers 180 (depicted inFIG. 1 , for exemplary purposes only, as individual computers 180A and 180B) using areader module 181 orbrowser 182. Thecontent hosting system 110 and computers 180 are connected by anetwork 170 such as a local area network or the Internet. - The
network 170 is typically the Internet, but can be any network, including but not limited to any combination of a LAN, a MAN, a WAN, a mobile, a wired or wireless network, a private network, or a virtual private network. Thecontent hosting system 110 is connected to thenetwork 170 through anetwork interface 160. - As discussed above, only a single user computer 180A is shown, but in practice there are many (e.g., millions of) user computers 180A that can communicate with and use the
content hosting system 110. Similarly, only a single content provider computer 180B is shown, but in practice there are many (e.g., thousands or even millions of) content providers 180B that can provide textbooks and related materials forcontent hosting system 110. In some embodiments,reader module 181 andbrowser 182 include a content player (e.g., FLASH™ from Adobe Systems, Inc.), or any other player adapted for the content file formats used by thecontent hosting system 110. - User computer 180A with
reader module 181 is used by users to purchase or otherwise obtain, and access, materials provided by thecontent hosting system 110. Content provider computer 180B is used by content providers to create and provide material for thecontent hosting system 110. A given computer can be both a client computer 180A and content provider computer 180B, depending on its usage. The hostingservice 110 may differentiate between content providers and users in this instance based on which front end server is used to connect to thecontent hosting system 110, user logon information, or other factors. - The
content hosting system 110 comprises a userfront end server 140 and a content providerfront end server 150 each of which can be implemented as one or more server class computers. The content providerfront end server 150 is connected through thenetwork 170 to content provider computer 180B. The content providerfront end server 150 provides an interface for content providers to create and manage materials they would like to make available to users. The userfront end server 140 is connected through thenetwork 170 to client computer 180A. The userfront end server 140 provides an interface for users to access material created by content providers. - The
content hosting system 110 is implemented by a network of server class computers that can include one or more high-performance CPUs and 1 GB or more of main memory, as well as 500 GB to 2 TB of storage. An operating system such as LINUX is typically used. The operations of the content hosting system 110, front end 140, and back end 150 servers as described herein can be controlled through either hardware (e.g., dedicated computing devices or daughter-boards in general purpose computers) or through computer programs installed in computer storage on the servers of the service 110 and executed by the processors of such servers to perform the functions described herein. One of skill in the art of system engineering and, for example, video content hosting will readily determine from the functional and algorithmic descriptions herein the construction and operation of such computer programs. - The
content hosting system 110 further comprises a system database 130 that is communicatively coupled to the network 170. The system database 130 stores data related to the content hosting system 110 along with user and system usage information. - The
system database 130 can be implemented as any device or combination of devices capable of persistently storing data in computer readable storage media, such as a hard disk drive, RAM, a writable compact disk (CD) or DVD, a solid-state memory device, or other optical/magnetic storage media. Other types of computer-readable storage media can be used, and it is expected that as new storage media are developed in the future, they can be configured in accordance with the descriptions set forth above. - The
content hosting system 110 further comprises a third party module 120. The third party module 120 is implemented as part of the content hosting system 110 in conjunction with the components listed above. The third party module 120 provides a mechanism by which the system provides an open platform for additional uses relating to electronic textbooks, much as an application programming interface allows third parties access to certain features of a software program. In some embodiments, third party input may be limited to provision of content via content provider computers 180B and content provider front end server 150. Given the wide range of possible operation of system 100, however, in some embodiments it may be desirable to open additional capabilities for third parties who are not providing content to access the system. For example, aggregated data regarding what sections of a textbook are most often annotated may be helpful to the author of the textbook (or to other authors) to determine where additional explanation of difficult concepts might be warranted. - In this description, the term “module” refers to computational logic for providing the specified functionality. A module can be implemented in hardware, firmware, and/or software. Where the modules described herein are implemented as software, a module can be implemented as a standalone program, but can also be implemented through other means, for example as part of a larger program, as a plurality of separate programs, or as one or more statically or dynamically linked libraries. It will be understood that the named modules described herein represent one embodiment of the present invention, and other embodiments may include other modules. In addition, other embodiments may lack modules described herein and/or distribute the described functionality among the modules in a different manner. Additionally, the functionalities attributed to more than one module can be incorporated into a single module.
In an embodiment where the modules are implemented as software, they are stored on a computer readable persistent storage device (e.g., hard disk), loaded into memory, and executed by one or more processors included as part of the
content hosting system 110. Alternatively, hardware or software modules may be stored elsewhere within the content hosting system 110. The content hosting system 110 includes hardware elements necessary for the operations described here, including one or more processors, high speed memory, hard disk storage and backup, network interfaces and protocols, input devices for data entry, and output devices for display, printing, or other presentations of data. - Numerous variations from the system architecture of the illustrated
content hosting system 110 are possible. The components of the system 110 and their respective functionalities can be combined or redistributed. For example, the system database 130, third party module 120, user front end server 140, and content provider front end server 150 can be distributed among any number of storage devices. The following sections describe in greater detail the reader module 181, system database 130, and the other components illustrated in FIG. 1, and explain their operation in the context of the content hosting system 110. -
FIG. 2 illustrates a functional view of a reader module 181 used as part of an electronic textbook system. In the embodiment described above in connection with FIG. 1, the reader module is implemented on user computer 180A, but it should be recognized that in other embodiments, portions discussed herein could also be implemented on other computers (e.g., those in content hosting system 110) that are in communication with reader module 181. -
Reader module 181 is configured to address the fact that students use textbooks differently than other readers use typical books. Students typically study from, rather than merely read, textbooks. Studying is typically less linear than other reading, as texts are rarely read in a “start-to-finish” manner. Studying is often much more interactive than typical reading, with annotations, cross-referencing between problem sets and main portions, reference to glossary or definitions sections, and the like. Studying is also inherently social and collaborative, far more so than most other types of reading. Learning in general, and studying in particular, typically combines attention to textbooks with creation of, and reference to, notebooks, problem sets, lab experiment results, lecture materials, and other related sources. -
Reader module 181 includes various subsystems to facilitate the specialized uses students make of textbooks. In the embodiment illustrated in FIG. 2, reader module 181 includes an annotation subsystem 220, an OCR subsystem 230, a collaboration subsystem 240, an ordering subsystem 250, an input recognition subsystem 260, and a daemon subsystem 270. Many of these subsystems interact with one another, as described below. -
Annotation subsystem 220 provides various user tools and interfaces to allow students to mark up portions of an electronic textbook as they may find most helpful for learning and studying purposes. Annotation subsystem 220 includes conventional features such as highlighting and text entry tools, and also includes more advanced tools. For example, as described below, annotation subsystem 220 keeps track of textbook portions for which a student has provided annotations, and collects those portions into a personalized study guide based on a user command. -
OCR subsystem 230 is a recognition subsystem that takes information not originally in machine-readable form and converts it to machine-readable form. For example, OCR subsystem 230 communicates with annotation subsystem 220 to convert handwritten student notes (entered graphically via finger or stylus gestures on a touch screen, for instance) into machine-readable text. As used here, OCR subsystem 230 includes not only optical character recognition, but other types of recognition as well, for instance: voice-to-text recognition to allow a student to speak rather than write annotations; image-to-text recognition for photographs the student may take of a professor's notes on a blackboard during a lecture; and other types of recognition that may be provided within an electronic textbook or as a third party add-on. -
Collaboration subsystem 240 provides various user functions that allow students to work together. As detailed below, for example, users can share their annotations and notes with their study group, their class section, their entire class, or other users of the electronic textbook worldwide. Further, collaboration subsystem 240 includes social network facilities to permit students to undertake study sessions with audio and visual chat, to ask and answer questions, and to submit questions to professors or teaching assistants. -
Ordering subsystem 250 represents tools that allow students to obtain electronic textbooks and related materials. In one embodiment, ordering subsystem 250 is implemented as an electronic marketplace (e.g., the DROID™ marketplace implemented on the ANDROID™ operating system for smart phones and tablet computers). Third parties offer electronic textbooks and related materials such as study guides, problem sets, updates, workbooks, and the like. Some of these materials are available for purchase; others are free. In some embodiments, provision via other mechanisms (e.g., subscription, barter, “pay-per-view”) is supported, as may be desired by any subset of a student community or content provider group. -
Input recognition subsystem 260 provides user interface tools to facilitate use of electronic textbooks and related features. For instance, by sensing particular gestures on a touch screen of user computer 180A as discussed in detail below, the system temporarily shifts display of a textbook from a current page to a new section, while keeping track of the section of primary interest. Thus, a student working on a problem set section of the textbook can quickly look back at the text of the chapter, or a student reading a section for the first time can quickly jump to a glossary section of the textbook for a definition of an unfamiliar term or concept. -
Reader module 181 is configured to permit user-selected applications to run to enhance a student's ability to work with an electronic textbook. For example, a student may purchase an application that provides study questions on a per-chapter basis for textbooks that do not include such questions. In addition, reader module 181 includes a daemon subsystem 270 to provide additional add-on features without the user launching a visible application for such features. - Further detail regarding
reader module 181 and various subsystems thereof is provided below in connection with discussion of FIGS. 4-6. -
FIG. 3 illustrates a functional view of the system database 130 that stores data related to the textbook content hosting system 110. The system database 130 may be divided based on the different types of data stored within. This data may reside in separate physical devices, or it may be collected within a single physical device. - With respect to content providers,
partner data 370 comprises information regarding content providers or partners registered with the content hosting system 110 that have permission to create and deliver content. Partner data 370 includes provider contact information. - User profile data storage 310 includes information about an individual user (e.g., a student), to facilitate the payment and collaborative aspects of
system 100. Subscriber data storage 320 includes identifying information about the student, such as the electronic textbooks the student has obtained and the social network groups the student has joined. In some embodiments, subscriber data storage 320 also maintains information regarding the location in each of the student's textbooks where the student is or was reading, to allow, for example, a student to read part of a textbook chapter on a smart phone while on a campus bus and continue reading from the same spot on the student's desktop computer in a dorm room. -
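The cross-device resume behavior described above can be sketched as a last-writer-wins position record. This is an illustrative sketch only; the names (`ReadingPosition`, `PositionStore`) and the timestamp scheme are assumptions, not the patent's actual schema for subscriber data storage 320.

```python
# Hypothetical sketch of the per-textbook reading-position record that
# subscriber data storage 320 might keep so a student can stop reading on one
# device and resume on another. All names here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ReadingPosition:
    user_id: str
    book_id: str
    page: int
    reported_at: float  # timestamp of the device's last report


class PositionStore:
    """Keeps the most recent position reported by any of a user's devices."""

    def __init__(self):
        self._positions = {}

    def report(self, pos):
        key = (pos.user_id, pos.book_id)
        current = self._positions.get(key)
        # Last-writer-wins: only a newer report replaces the stored position.
        if current is None or pos.reported_at > current.reported_at:
            self._positions[key] = pos

    def resume_page(self, user_id, book_id):
        pos = self._positions.get((user_id, book_id))
        return pos.page if pos else None
```

Under this sketch, a stale report from a phone that was offline does not overwrite a newer report from the student's desktop.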
Account data storage 330 keeps track of the user's payment mechanisms (e.g., Google Inc.'s CHECKOUT®) related to the user's ability to obtain content from system 100. Social network data storage 340 maintains the information needed to implement a social network engine to provide the collaborative features discussed herein, e.g., social graphs, social network preferences, and rules. Textbook data 350 stores the actual content that is provided to users upon their request, such as electronic textbook files. Add-on data storage 360 maintains information for related features, such as non-static data relating to textbooks. - In one embodiment, conventional mechanisms are used to implement many of the aspects of
system database 130. For example, the existing mechanisms from Google Inc.'s BOOKS™, GMAIL™, BUZZ™, CHAT™, TALK™, ORKUT™, CHECKOUT™, YOUTUBE™, SCHOLAR™, BLOGS™, and other products include aspects that can help to implement one or more of storage facilities 310-370 and modules 220-270. Google Inc. already provides eBook readers for ANDROID™ devices (phones, tablets, etc.), iOS devices (iPhones®, iPads®, and other devices from Apple, Inc.), and various desktop Web browsers, and in one embodiment Google Inc.'s EDITIONS™ eBook reader application is modified to provide the functionality described herein. - Referring now to
FIG. 4, there is shown a portable computer 400 (e.g., a tablet computer running the ANDROID™ operating system) with a touch screen 401, a microphone 402, and a front-facing camera 403. As is known to those skilled in the art, such currently available devices typically also provide rear-facing cameras, accelerometers, GPS receivers, Wi-Fi and advanced cellular communications capabilities, and various other features. As shown, computer 400 is running reader module 181 and displaying a page 404 from an electronic textbook. - In one embodiment,
reader module 181 provides four tabs above a main content area 404 allowing selection of four class modules via the following user interface icons: a Syllabus tab 410, a Textbook tab 420, a Notebook tab 430, and a Lectures tab 440. The Syllabus tab 410 provides course-specific information for the student, including a calendar of what portions of the text are going to be dealt with on what dates, when assignments are due, and when tests are scheduled. In one embodiment the student's performance during the class is also tracked here (e.g., grades on assignments and exams to date). The Textbook tab 420, shown in FIG. 4 as the currently selected tab, provides the actual textbook, as well as a number of navigational and other tools related to viewing of the textbook. The Notebook tab 430, when selected, causes the student's notebook for the course to be displayed (see discussion of FIG. 6, below). The Lectures tab 440, when selected, causes display of lecture-related materials, such as a professor may choose to provide to students. For example, a professor may provide slide decks used in a lecture, videos, or other materials that repeat or supplement what the professor presents in a lecture session. - More specifically, the display provided under the
Textbook tab 420 includes a number of reading and annotation tools 407. First, the name of the currently selected textbook is displayed (“Freshman Chemistry”) in a drop-down menu allowing selection of alternate texts for courses that use multiple textbooks. Not shown are controls, in one embodiment provided above tools 407, for selecting among various courses, for purchasing textbooks and related items, for opening a session or chat as described below, for launching a search engine, for changing system settings, and for getting automated help. - To the right of the textbook title is an icon to display a table of contents, as well as an icon to change settings such as text size. To the right of that is an icon to toggle between regular view of the textbook and view of a user-generated study guide (discussed below). To the right of that is an eye-shaped icon, currently shown in the “eye shut” state, indicating whether to show user annotations (also detailed below). The last four icons are to add handwritten (pen) annotations (via a stylus or finger, as desired and as supported by computer 400), highlighting, sticky note annotations, and audio annotations to the textbook.
- Below the primary
content display area 404 is a set of page navigation tools 408. From left to right they include an icon to add a bookmark, an indicator of the current page (circle) in relation to various chapters (noted by breaks in the horizontal line) and previously set bookmarks, a number indicating the last page of the textbook, and arrows representing previous page and next page commands. The user touches on an appropriate portion of this display of tools 408 to accomplish a corresponding action. - Also shown on
FIG. 4 is a large bar 405, which in one embodiment is colored yellow. Bar 405 indicates that the user has created a sticky note relating to this portion of the text. Smaller bars, in one embodiment displayed in gray, appear both within and below bar 405; in one embodiment these represent other types of annotations provided by the student, for example an audio annotation or a video annotation. Likewise, vertical lines 406 indicate still other student input, in one embodiment highlighting (straight lines) and handwritten annotations (squiggles). As noted above, the closed-eye icon in tools 407 indicates that all of this student-generated markup of the text is currently being hidden from view. - Referring now also to
FIG. 5, the same tablet computer is shown, this time with the aforementioned eye icon in the open state (annotations displayed). The bar 405 now shows as a full sticky note, complete with a user control for settings (which in one embodiment include an OCR option to convert the handwritten text to clean machine-searchable text and an option to toggle between handwritten and machine text versions for display). In one embodiment, a small “resize handle” icon appears at the bottom of the note to allow the note to be made larger or smaller as the user may desire, and an “X” in the upper right-hand corner of the note allows the user to delete the note if desired. The small gray bars referenced above are replaced with a “TV” icon indicating a video annotation as well as a small green circle with a number in it indicating how many comments have been entered concerning this annotation (e.g., by other students in a collaborative study session). A similar loudspeaker icon with a small green circle and corresponding number indicates an audio annotation and comments on that. Likewise, the highlighting and handwritten text previously indicated by vertical lines is now fully displayed. Also in this display, an indication of the current bookmarked status of the page is included in the upper left-hand corner, along with an “X” which, when touched by the user, removes the bookmark. - In some embodiments,
reader module 181 uses accelerometer and other positioning input from computer 400 and interprets certain movements as commands. As one example, tilting computer 400 from portrait mode (as shown in FIGS. 4 and 5) to landscape mode triggers a change in the display from one page to two-page spread format. An abrupt partial tilt, on the other hand, when in the “show annotations” mode illustrated in FIG. 5, causes the annotations to be “poured” into the margin and the display switched to the “hide annotations” (corresponding to “closed eye” icon) mode illustrated in FIG. 4. A tilt in the other direction pours the annotations back into view. - Referring now to
FIG. 6, the same user computer 400 is now shown after the user has selected the Notebook tab 430. In this mode of operation, in addition to the in-textbook annotations described above, a student can readily create and maintain a notebook keyed to specific portions of a textbook or lecture. A set of tools 607 provides icons that (from left to right) allow a student to change settings, such as displaying hand-entered text as shown here or a machine-recognized typed version of the same; to capture information from a whiteboard or chalkboard using camera 403 or (if equipped) a back-facing camera on computer 400; to share notes with others; to enter notes with a pen tool as described above; to highlight notes; to add a sticky note to the notebook; and to capture audio annotations corresponding to the notes from microphone 402. In one embodiment, the student can provide not only handwriting, but hand drawn shapes 605 as well in both the notebook and textbook annotation modes of operation. Likewise, the student can provide notes in outline form 606. The page navigation tools 608 in the notebook mode of operation include icons (from left to right) to add a bookmark, add a new page to the notebook, and navigate among the pages of the notebook as previously explained in connection with textbook page navigation. - For annotations that are not already in machine-readable text form, the reader provides user tools for audio/visual/character recognition to convert such annotations into a form that can be machine searched and accessed. The reader includes a user interface tool to allow a user to toggle as desired between such original images and the corresponding machine readable text. In addition to recognizing text images, the reader also provides a user with an option to recognize lines and various geometric shapes from imaged or handwritten notes, such that cleaned-up versions of flow diagrams, molecular structures and the like can be easily made from sources having varied legibility.
- Students' use of textbooks involves certain operations not typical of reading. For instance, most readers do not regularly have questions arise in connection with their reading, but this is common for students who do not understand a concept well from the textbook presentation. Accordingly,
input recognition system 260 of reader module 181 provides a number of predetermined operations specifically oriented to textbook use, and also permits students to easily configure their own desired operations. - In one example, already mentioned, a student may find a portion of a text particularly “dense” in concepts, and may want to include more annotation for that section than could reasonably fit in the margins or elsewhere within the textbook display. In such instances,
input recognition system 260 detects when a user has circled a portion of a textbook's content, either with a finger or a stylus. To indicate such detection, the selected area is displayed with a “glowing” appearance for a period of time. If, during that period of time, the user touches the glowing area and drags it to hover over the Notebook tab 430, input recognition system 260 detects this as a command to copy that portion of text into the student's notebook, where further room for annotation is available. In that event, the student's notebook becomes the active display, allowing the student to place the copied portion from the textbook anywhere in the notebook that may be desired, and to annotate on and around the added excerpt. - As a second example, specific annotations are immediately recognized as corresponding to commands rather than actual annotations. For example, in one embodiment a handwritten annotation in the form of a question mark with a circle around it is interpreted as a request to send a question regarding the nearby text to the appropriate teaching assistant for that course (or other predetermined moderator), and a dialog box immediately opens, preaddressed to the teaching assistant, allowing the student to ask the question. In one embodiment, the message to the teaching assistant is automatically tagged with the corresponding portion of the text so that the student does not need to include any context with the specific question, but can simply ask a question that might otherwise be confusing without context. For example, if the text shows an illegal divide-by-zero operation, the student's question could simply be: “Why can't you do this?” without any further contextual information.
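The circle-select flow described above can be sketched as a small state machine. This is an illustrative sketch only; the state names and the drag-target identifier are assumptions, not the patent's implementation of input recognition system 260.

```python
# Minimal state-machine sketch of the circle-select / drag-to-Notebook flow:
# a circled region "glows" for a time, and dragging it onto the Notebook tab
# copies it. All names here are illustrative assumptions.
class CircleSelectGesture:
    def __init__(self):
        self.state = "idle"
        self.selection = None

    def on_circle_drawn(self, region):
        # A circled region enters the "glowing" state, awaiting a drag.
        self.state = "glowing"
        self.selection = region

    def on_drag_to(self, target):
        # Dragging the glowing selection onto the Notebook tab copies it.
        if self.state == "glowing" and target == "notebook_tab":
            copied = self.selection
            self.state = "idle"
            self.selection = None
            return f"copy {copied} to notebook"
        return "ignored"

    def on_glow_timeout(self):
        # The glow expires with no drag: the selection is discarded.
        self.state = "idle"
        self.selection = None
```

A drag that arrives after the glow has timed out, or with no prior circle, is simply ignored.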
- Likewise, other predefined gestures are provided in various embodiments. A “c” drawn with a circle around it, or a cartoon text balloon shape, is interpreted as a command to open a chat panel. A “k” with a circle around it or a pound sign (#) is interpreted as a command to open a keyboard panel. A squiggly line or repeated zig-zag is a command to delete a word or diagram. A handwritten name (“Jim”) opens a chat panel with a familiar classmate. A specified word (“calc”) invokes an installed add-on.
- Users are also given the option of predefining their own gestures and self-recording arbitrary sequences of actions (similar to macros) that will be associated with those gestures. As one example, a user may define a letter “Q” with a circle around it to mean “Quit thoughtfully” and make that gesture correspond to saving all notebook edits, quitting the open textbook, and emailing notes to other study group members (e.g., Mike, Bob and Mary).
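The predefined gestures and user-recorded macro sequences described above can be sketched as a dispatch table. The gesture identifiers and command names below are illustrative assumptions, not the patent's actual vocabulary.

```python
# Sketch of a gesture-to-command dispatch table covering both predefined
# gestures and user-recorded macro sequences. All identifiers are
# illustrative assumptions.
PREDEFINED_GESTURES = {
    "circled_c": ["open_chat_panel"],
    "speech_balloon": ["open_chat_panel"],
    "circled_k": ["open_keyboard_panel"],
    "pound_sign": ["open_keyboard_panel"],
    "zigzag": ["delete_word_or_diagram"],
}


class GestureDispatcher:
    def __init__(self):
        self.bindings = dict(PREDEFINED_GESTURES)

    def record_macro(self, gesture, actions):
        # A user-defined gesture binds an arbitrary recorded sequence.
        self.bindings[gesture] = list(actions)

    def dispatch(self, gesture):
        # Unrecognized gestures return no commands and would fall through
        # to ordinary handwriting recognition.
        return self.bindings.get(gesture, [])


# The "Quit thoughtfully" example: a circled "Q" runs a recorded sequence.
dispatcher = GestureDispatcher()
dispatcher.record_macro(
    "circled_q",
    ["save_notebook_edits", "quit_textbook", "email_notes_to_group"],
)
```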
- As noted above in connection with
FIGS. 4 and 5, reader module 181 enables highlighting, sticky notes, and annotations generally (e.g., 405, 406) to be selectively shown or marginalized. An advantage of marginalizing, rather than completely hiding, annotations is that marginal marks remind the student upon a second or third reading of a section that there are potentially helpful annotations available for that section. Furthermore, use of the accelerometer of computer 400 to either show or marginalize annotations upon a quick tilting of computer 400 provides a very quick and intuitive way for the student to switch between these two types of display. In some embodiments, user interface controls allow the specific gestures used to indicate show/marginalize annotations to be adjusted and otherwise changed, so that the sensitivity of these can be tuned to match a user's preference. -
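The tilt interpretation described above (full rotation changes the page layout; an abrupt partial tilt shows or marginalizes annotations) might be classified from accelerometer input roughly as follows. The angle and duration thresholds, and the command names, are assumptions for illustration only; the patent leaves these tunable to user preference.

```python
# Rough sketch of classifying a tilt movement into a reader display command.
# Thresholds and command names are illustrative assumptions.
def interpret_tilt(delta_roll_deg, duration_ms, portrait):
    """Classify one tilt movement into a reader display command."""
    if abs(delta_roll_deg) >= 80 and duration_ms >= 400:
        # Rotating between portrait and landscape changes the page layout.
        return "two_page_spread" if portrait else "single_page"
    if abs(delta_roll_deg) >= 20 and duration_ms <= 150:
        # An abrupt partial tilt "pours" annotations into or out of the margin.
        return "marginalize_annotations" if delta_roll_deg > 0 else "show_annotations"
    return "none"
```

Small or slow movements fall through to "none", so ordinary handling of the device does not toggle the display.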
Reader module 181 also enables a student to mark certain annotations as private. In one embodiment, annotations are by default shared anonymously with the public (i.e., all others who have access to the electronic textbook), but in some environments it may be more appropriate for alternate embodiments to share more selectively. - In one embodiment,
reader 181 is configured to tag all portions of a textbook for which annotations have been provided such that a student can request a personalized study guide, composed solely of the highlighted sections, to be generated. In one embodiment, each tagged section remains hyperlinked to the original full text to allow the student to quickly switch back to the full text for additional context regarding a particular section of interest. In one embodiment, this is accomplished by placing an underlined page number at the left margin of each section of the study guide; clicking on that number takes the user to the indicated page in the textbook. - In a related embodiment,
collaboration subsystem 240 is configured to obtain information from other students as well regarding portions of the textbook that they have highlighted, for generation of a study guide based on their annotated sections in addition to the user's own annotated sections. In one embodiment, the student can select the student's own work group, other classmates, other students at the same school or at other select schools, or even all students worldwide for purposes of determining which annotations should be used to generate the study guide. To avoid a situation in which such “crowd-sourced” generation of annotated selections produces too large a study guide, in one embodiment a slider-style user interface (or other suitable user interface) allows a student to adjust selectivity for generation of the study guide. For instance, one setting includes all sections highlighted by any student, but another setting requires that at least five students have provided annotations for a section to include it in the study guide (or, when considering all students worldwide, that 5% of the students have provided annotations). Thus, a student may tailor a study guide for the amount of time the student may have available to use the guide. Undoubtedly, some students who have not read the entire text may also use this feature to determine which portions are considered most important for a first reading before an examination. - To provide a user experience showing that such additional annotated sections are being collected, an animated user interface that moves or “slurps” these additional annotated sections from outside the current field of view is shown when the user changes the slider to include more sections, and the additional sections are slurped out of the field of view as the user changes the slider to be more selective in which sections to have in the study guide.
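The selectivity rule described above can be sketched as a simple threshold filter. The fixed count of five and the 5% worldwide rule follow the example in the text; the function name and its arguments are illustrative assumptions.

```python
# Sketch of the crowd-sourced selectivity rule: a section enters the study
# guide only if enough students annotated it. Names are illustrative
# assumptions; the thresholds follow the example in the text.
def select_sections(annotation_counts, total_students,
                    min_annotators=5, worldwide=False):
    """Return sorted section ids whose annotator count meets the threshold."""
    if worldwide:
        # When considering all students worldwide, require 5% of students.
        threshold = max(1, round(0.05 * total_students))
    else:
        threshold = min_annotators
    return sorted(s for s, n in annotation_counts.items() if n >= threshold)
```

Moving the slider would correspond to changing `min_annotators`, trading study-guide length against coverage.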
- In one embodiment, user interface tabs/buttons allow a user to select “My highlights,” “Classmates' highlights,” or “Everyone's highlights.”
- Many of the
computers 400 on which reader module 181 will be implemented support multi-touch navigation by a user. However, not all of the multi-touch commands that may be most helpful for use of electronic textbooks are provided in a native manner on such devices. For instance, the standard “pinch-zoom” and swipe features available to change magnification and move through pages and chapters are certainly useful with textbooks, but more specific navigation choices are supported by reader module 181. For example, as noted above, users of textbooks often need to make quick reference to another portion of the text and then return to where they were in the text. With a paper book, one often sticks a finger in the book at the current page and then moves to the page of temporary interest. Reader 181 permits a corresponding operation by placing a finger of one hand down on the screen 401 at a location showing the current page (e.g., near 404 on FIG. 4) and then using other existing page navigation techniques to move to another page (e.g., by swiping with two fingers of the other hand to move back a number of pages). - Additionally, the
navigation footer 408 is persistent, and the user can quickly move around the book (either provisionally using the one-finger hold on the current page or normally) using this interface at any time. - When a user provisionally moves to a page, for instance to skim it, the user can either release using the left hand to return to the original page or release using the right hand to commit to the new page and abandon the original page.
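The "finger in the book" interaction described above can be sketched as an anchored, provisional navigation state. The class and method names below are illustrative assumptions; the release semantics (one hand returns, the other commits) follow the description in the text.

```python
# Sketch of provisional navigation: one finger anchors the current page while
# the other hand browses; releasing one hand or the other either returns to
# the anchor or commits to the new page. Names are illustrative assumptions.
class ProvisionalNavigator:
    def __init__(self, current_page):
        self.current_page = current_page
        self.anchor = None  # page pinned by the resting finger

    def hold(self):
        # The resting finger marks the current page, like a finger in a book.
        self.anchor = self.current_page

    def browse_to(self, page):
        self.current_page = page  # provisional while the anchor is held

    def release_holding_hand(self):
        # Releasing the anchoring finger returns to the original page.
        if self.anchor is not None:
            self.current_page = self.anchor
        self.anchor = None
        return self.current_page

    def release_browsing_hand(self):
        # Releasing the browsing hand commits to the new page instead.
        self.anchor = None
        return self.current_page
```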
- In a related aspect to the collaboration among students discussed above, two or more students who are engaged in a chat regarding a textbook or who are in a study session using the textbook often need to help each other based on particular portions of a text. To facilitate this,
collaboration subsystem 240 keeps track of where each student is in the textbook during collaboration and sends that information to the computers 400 of the other students in the collaboration, so that their current location is indicated for the others to see. Likewise, one student's annotations appear on the other students' computers 400 (with color coding for each student's annotations), as do gestures made by one student (e.g., pointing to a particular portion of text using either a mouse or a finger press on a touch screen device). - Referring now to
FIG. 7, the computer 400 implementing a reader 181, discussed above with respect to FIG. 4, is shown once again, this time with a display screen 401 including a contextual menu 701. In one embodiment, once a user presses and holds on a portion of the screen 401, a circle begins to appear, gradually drawn in a clockwise direction around the user's finger. The circle is complete after a finite period of short duration (say approximately 500 milliseconds) and then turns into the contextual menu 701. The purpose of such animation is to alert the user that by holding a finger on the screen, the user is requesting such a menu (release of the finger before the menu is complete makes the incomplete circle disappear and the menu is not formed). In addition, the animation assists users pressing near the edge of a screen to see that menu 701 is being created, even if a portion of the developing circle is obscured by the edge of the screen. Contextual menu 701 provides, in this embodiment, six areas for further user selection: a central area with an “X” in it to close the menu (tapping outside the menu will also close it), and five choices for further user selection. Menu 701 is a contextual menu because the user choices are not always the same, but instead are based on what is displayed on screen 401 as well as where on the screen the user has asked the menu to appear. For instance, if the user presses a finger over a chart or diagram, a different set of choices may appear than if the user presses a finger over body text, or over white space as shown in FIG. 7. - Referring now to
FIG. 8, there is shown a progression of contextual menus in one example, from a menu of action choices 801 to a display showing a selected choice 811, and then to a menu 821 of a series of additional user choices that result. Specifically, menu 801 includes five user choices related to annotations, in this case color, stroke, chat, sync and share, that the user can select. In this instance, color denotes a choice of a color for annotations, stroke denotes gesture recognition activation (and in alternate embodiments, various gesture-related configuration and operation choices), chat denotes activation of a chat window, sync denotes synchronizing the user's display with that of other connected students (e.g., to share annotations), and share denotes sharing of annotations with other students. The latter two choices also have small triangular blocks in the lower right of their respective menu portions in menu 801; in this embodiment these blocks indicate that the choices will spawn additional user choices (i.e., not result in any action being taken immediately without the opportunity for further user selection, for example by presentation of a further menu of user choices). A central circle 802 with an “X” in it provides a mechanism to close the circular menu, and is primarily used for newer users who may not understand that menu 801 can also be closed by simply tapping outside of menu 801. In a related embodiment, small graphics rather than words are used to denote the user's options: an artist's palette for “color”, a swoosh symbol for “gesture”, a word bubble for “chat”, a circle with rotating arrows for “sync”, and a document with an arrow for “share”. - Assuming for present purposes that the user selects “color” from
menu 801, that portion of the menu gradually expands as shown in circle 811, providing the user recognition that the input has been received. Again, this takes approximately 500 milliseconds, after which the next set of user choices is displayed via menu 821. In this instance, the user choices are not textual at all, but include different colors that the user may select by tapping on the appropriately colored portion. Once the user does that, a similar indication of recognition is provided by having that color similarly grow into a circle that is entirely made up of the selected color (not shown). For choices that do not result in further menus or other selection choices, indication of finality is provided by having the choice blink in confirmation and then fade from view. In this instance, the selected color, after growing to encompass the entire circle (other than the small circle 802), blinks and then fades from view. - In some embodiments, a different number of user selections than five is provided in
menus such as menu 801. For example, referring again to FIG. 4, if the user presses and holds on a portion of an annotation area, e.g., 405, an annotation menu appears differing from menu 801 in that “delete” appears rather than “color”, “append” appears rather than “stroke”, and “question” appears rather than “chat” (with the remaining items, “sync” and “share”, still appearing as in menu 801). In this instance, delete is used to remove an annotation, append is used to send the annotation from the textbook display to the user's notebook (shown on FIG. 6), and question is used to embed the annotation in a question to be addressed to a fellow student, teaching assistant or professor. Wherever possible, menu items that are common across contexts are placed in consistent areas on menu 801 to facilitate ease of use.
- Contextual menus, e.g., 801, are brought up in different forms based not only on location of the user's finger press (e.g., over body text of the book as opposed to over a user's own annotation), but also based on when the press is made (e.g., immediately after highlighting a section of text) and based on other triggering events (e.g., recently receiving a question or annotation from another student) that might warrant actions that would not be needed otherwise. By providing menus with context-driven choices, the need for interface “real estate” on the screen is reduced, since inapplicable choices simply do not appear rather than appearing in grayed-out text as is done with many conventional menu systems.
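As an illustration only (the specification discloses no source code), the context-driven choice of menu items described above can be sketched as a lookup keyed on what lies under the finger press, with the items common across contexts (“sync” and “share”) kept in consistent slots; the context names here are assumptions drawn from the examples in the text.

```python
def contextual_options(press_target: str) -> list:
    """Pick the five menu slots based on what is under the finger press.

    Items shared across contexts ("sync", "share") stay in the same
    positions across menus to facilitate ease of use, per the text above.
    """
    common = ["sync", "share"]
    if press_target == "annotation":
        # Pressing over the user's own annotation (e.g., area 405 of FIG. 4).
        return ["delete", "append", "question"] + common
    # Body text, white space, charts, etc. get the default annotation menu.
    return ["color", "stroke", "chat"] + common
```

A fuller implementation would also consult the timing of the press and recent triggering events (a just-highlighted selection, a newly received question from another student) when assembling the list, as the paragraph above notes.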
- Some portions of the above description describe the embodiments in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs executed by a processor, equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combination thereof.
- As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
- As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
- In addition, use of “a” or “an” is employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the invention. This description should be read to include one or at least one, and the singular also includes the plural unless it is obvious that it is meant otherwise.
- Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for a system and a process for providing interfaces for electronic books through the disclosed principles herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims.
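As a closing illustration, the press-and-hold mechanics described above with respect to FIG. 7 can be sketched as a small state machine; the 500 millisecond duration follows the “approximately 500 milliseconds” of the description, and all identifiers are illustrative rather than drawn from any disclosed embodiment.

```python
MENU_ANIMATION_MS = 500  # "approximately 500 milliseconds" per the description


class HoldToOpenMenu:
    """Press-and-hold gesture: a circle is gradually drawn over
    MENU_ANIMATION_MS and then turns into the contextual menu; lifting
    the finger before the circle completes cancels menu creation."""

    def __init__(self):
        self.press_started_at = None  # press timestamp in ms, or None
        self.menu_open = False

    def press(self, now_ms):
        """Finger down: begin drawing the circle clockwise around the press."""
        self.press_started_at = now_ms

    def progress(self, now_ms):
        """Fraction of the circle drawn so far, from 0.0 through 1.0."""
        if self.press_started_at is None:
            return 0.0
        return min(1.0, (now_ms - self.press_started_at) / MENU_ANIMATION_MS)

    def tick(self, now_ms):
        """Per-frame update: a completed circle becomes the contextual menu."""
        if self.press_started_at is not None and self.progress(now_ms) >= 1.0:
            self.menu_open = True

    def release(self, now_ms):
        """Finger up: if the circle never completed, the incomplete circle
        disappears and no menu is formed."""
        self.tick(now_ms)
        self.press_started_at = None
```

The same pattern extends to option selection (claims 6, 12 and 18 below): an option-selection animation runs while the finger rests on an option, with the selection cancelled if the finger leaves before the animation completes.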
Claims (18)
1. A device, comprising:
a display; and
an input recognition subsystem, operably connected to the display, configured to:
identify a point on the display based on a position of an object relative to the display as the object becomes proximate to the display;
initiate a menu-creation animation in response to the object becoming proximate to the display; and
provide a contextual menu in response to the object remaining at substantially the position until the menu-creation animation completes, the contextual menu including a plurality of options, creation of the contextual menu being cancelled in response to the object ceasing to remain at substantially the position before the menu-creation animation completes.
2. The device of claim 1, wherein the plurality of options included in the contextual menu is responsive to events occurring prior to the point on the display being identified.
3. The device of claim 1, wherein the plurality of options included in the contextual menu is responsive to material displayed on the display proximate to the point.
4. The device of claim 1, wherein the menu-creation animation comprises at least a portion of a circle being gradually drawn around the point on the display.
5. The device of claim 1, wherein the input recognition subsystem is further configured to:
detect the object at a second position, proximate to a second point on the display; and
provide user selection of a contextual menu option responsive to detecting the object at the second position, the second point on the display corresponding to the contextual menu option.
6. The device of claim 5, wherein the input recognition subsystem is further configured to initiate an option-selection animation in response to detecting the object at the second position, the contextual menu option being selected in response to the object remaining at substantially the second position until the option-selection animation completes, selection of the contextual menu option being cancelled in response to the object ceasing to remain at substantially the second position before the option-selection animation completes.
7. A computer implemented method of providing a contextual menu, comprising:
displaying content on a display;
identifying a point on the display based on a position of an object relative to the display as the object becomes proximate to the display;
initiating a menu-creation animation in response to the object becoming proximate to the display;
determining options to be presented in the contextual menu responsive to the point on the display; and
providing the contextual menu in response to the object remaining at substantially the position until the menu-creation animation completes, creation of the contextual menu being cancelled in response to the object ceasing to remain at substantially the position before the menu-creation animation completes.
8. The method of claim 7, wherein the options to be presented are determined responsive to events occurring prior to the point on the display being identified.
9. The method of claim 7, wherein the options to be presented are determined responsive to material displayed on the display proximate to the point.
10. The method of claim 7, wherein the menu-creation animation comprises at least a portion of a circle being gradually drawn around the point on the display.
11. The method of claim 7, further comprising:
detecting the object at a second position, proximate to a second point on the display; and
selecting a contextual menu option responsive to detecting the object at the second position, the second point on the display corresponding to the contextual menu option.
12. The method of claim 11, further comprising initiating an option-selection animation in response to detecting the object at the second position, the contextual menu option being selected in response to the object remaining at substantially the second position until the option-selection animation completes, selection of the contextual menu option being cancelled in response to the object ceasing to remain at substantially the second position before the option-selection animation completes.
13. A non-transitory computer readable storage medium containing computer executable instructions for providing a contextual menu, the instructions comprising:
instructions to display content on a display;
instructions to identify a point on the display based on a position of an object relative to the display as the object becomes proximate to the display;
instructions to initiate a menu-creation animation in response to the object becoming proximate to the display;
instructions to determine options to be presented in the contextual menu responsive to the point on the display; and
instructions to provide the contextual menu in response to the object remaining at substantially the position until the menu-creation animation completes, creation of the contextual menu being cancelled in response to the object ceasing to remain at substantially the position before the menu-creation animation completes.
14. The non-transitory computer readable storage medium of claim 13, wherein the options to be presented are determined responsive to events occurring prior to the point on the display being identified.
15. The non-transitory computer readable storage medium of claim 13, wherein the options to be presented are determined responsive to material displayed on the display proximate to the point.
16. The non-transitory computer readable storage medium of claim 13, wherein the menu-creation animation comprises at least a portion of a circle being gradually drawn around the point on the display.
17. The non-transitory computer readable storage medium of claim 13, wherein the instructions further comprise:
instructions to detect the object at a second position, proximate to a second point on the display; and
instructions to select a contextual menu option responsive to detecting the object at the second position, the second point on the display corresponding to the contextual menu option.
18. The non-transitory computer readable storage medium of claim 17, wherein the instructions further comprise instructions to initiate an option-selection animation in response to detecting the object at the second position, the contextual menu option being selected in response to the object remaining at substantially the second position until the option-selection animation completes, selection of the contextual menu option being cancelled in response to the object ceasing to remain at substantially the second position before the option-selection animation completes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/043,015 US20140033128A1 (en) | 2011-02-24 | 2013-10-01 | Animated contextual menu |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161446239P | 2011-02-24 | 2011-02-24 | |
US13/171,130 US20120221938A1 (en) | 2011-02-24 | 2011-06-28 | Electronic Book Interface Systems and Methods |
US13/182,787 US8520025B2 (en) | 2011-02-24 | 2011-07-14 | Systems and methods for manipulating user annotations in electronic books |
US13/901,110 US20130262973A1 (en) | 2011-02-24 | 2013-05-23 | Systems and methods for manipulating user annotations in electronic books |
US14/043,015 US20140033128A1 (en) | 2011-02-24 | 2013-10-01 | Animated contextual menu |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/901,110 Continuation US20130262973A1 (en) | 2011-02-24 | 2013-05-23 | Systems and methods for manipulating user annotations in electronic books |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140033128A1 true US20140033128A1 (en) | 2014-01-30 |
Family
ID=46718701
Family Applications (11)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/089,154 Active 2034-09-03 US9645986B2 (en) | 2011-02-24 | 2011-04-18 | Method, medium, and system for creating an electronic book with an umbrella policy |
US13/171,130 Abandoned US20120221938A1 (en) | 2011-02-24 | 2011-06-28 | Electronic Book Interface Systems and Methods |
US13/182,773 Abandoned US20120221968A1 (en) | 2011-02-24 | 2011-07-14 | Electronic Book Navigation Systems and Methods |
US13/182,787 Active 2031-09-30 US8520025B2 (en) | 2011-02-24 | 2011-07-14 | Systems and methods for manipulating user annotations in electronic books |
US13/182,809 Active US8543941B2 (en) | 2011-02-24 | 2011-07-14 | Electronic book contextual menu systems and methods |
US13/182,797 Active US9063641B2 (en) | 2011-02-24 | 2011-07-14 | Systems and methods for remote collaborative studying using electronic books |
US13/216,773 Abandoned US20120221441A1 (en) | 2011-02-24 | 2011-08-24 | Identifying and using bibliographical references in electronic books |
US13/901,110 Abandoned US20130262973A1 (en) | 2011-02-24 | 2013-05-23 | Systems and methods for manipulating user annotations in electronic books |
US13/946,937 Active 2033-03-11 US10067922B2 (en) | 2011-02-24 | 2013-07-19 | Automated study guide generation for electronic books |
US13/949,049 Active 2033-04-05 US9501461B2 (en) | 2011-02-24 | 2013-07-23 | Systems and methods for manipulating user annotations in electronic books |
US14/043,015 Abandoned US20140033128A1 (en) | 2011-02-24 | 2013-10-01 | Animated contextual menu |
Family Applications Before (10)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/089,154 Active 2034-09-03 US9645986B2 (en) | 2011-02-24 | 2011-04-18 | Method, medium, and system for creating an electronic book with an umbrella policy |
US13/171,130 Abandoned US20120221938A1 (en) | 2011-02-24 | 2011-06-28 | Electronic Book Interface Systems and Methods |
US13/182,773 Abandoned US20120221968A1 (en) | 2011-02-24 | 2011-07-14 | Electronic Book Navigation Systems and Methods |
US13/182,787 Active 2031-09-30 US8520025B2 (en) | 2011-02-24 | 2011-07-14 | Systems and methods for manipulating user annotations in electronic books |
US13/182,809 Active US8543941B2 (en) | 2011-02-24 | 2011-07-14 | Electronic book contextual menu systems and methods |
US13/182,797 Active US9063641B2 (en) | 2011-02-24 | 2011-07-14 | Systems and methods for remote collaborative studying using electronic books |
US13/216,773 Abandoned US20120221441A1 (en) | 2011-02-24 | 2011-08-24 | Identifying and using bibliographical references in electronic books |
US13/901,110 Abandoned US20130262973A1 (en) | 2011-02-24 | 2013-05-23 | Systems and methods for manipulating user annotations in electronic books |
US13/946,937 Active 2033-03-11 US10067922B2 (en) | 2011-02-24 | 2013-07-19 | Automated study guide generation for electronic books |
US13/949,049 Active 2033-04-05 US9501461B2 (en) | 2011-02-24 | 2013-07-23 | Systems and methods for manipulating user annotations in electronic books |
Country Status (5)
Country | Link |
---|---|
US (11) | US9645986B2 (en) |
EP (5) | EP2678760A4 (en) |
KR (6) | KR20150070431A (en) |
CN (5) | CN103493087A (en) |
WO (7) | WO2012115756A2 (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120088554A1 (en) * | 2010-10-08 | 2012-04-12 | Hojoon Lee | Mobile terminal and control method thereof |
US20120284348A1 (en) * | 2011-05-05 | 2012-11-08 | Ariel Inventions Llc | System and method for social interactivity while using an e-book reader |
US20140304648A1 (en) * | 2012-01-20 | 2014-10-09 | Microsoft Corporation | Displaying and interacting with touch contextual user interface |
US20150121212A1 (en) * | 2013-10-31 | 2015-04-30 | Apollo Group, Inc. | Method and apparatus for presenting and navigating bookmarks in a set of electronic reading material |
US9928562B2 (en) | 2012-01-20 | 2018-03-27 | Microsoft Technology Licensing, Llc | Touch mode and input type recognition |
US20180292975A1 (en) * | 2017-04-05 | 2018-10-11 | Open Txt Sa Ulc | Systems and methods for animated computer generated display |
US10218652B2 (en) | 2014-08-08 | 2019-02-26 | Mastercard International Incorporated | Systems and methods for integrating a chat function into an e-reader application |
CN109582191A (en) * | 2017-09-28 | 2019-04-05 | 北京国双科技有限公司 | A kind of menu content display methods and device |
US10380226B1 (en) * | 2014-09-16 | 2019-08-13 | Amazon Technologies, Inc. | Digital content excerpt identification |
USD868834S1 (en) | 2017-04-05 | 2019-12-03 | Open Text Sa Ulc | Display screen or portion thereof with animated graphical user interface |
US10891320B1 (en) | 2014-09-16 | 2021-01-12 | Amazon Technologies, Inc. | Digital content excerpt identification |
Families Citing this family (264)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8892630B1 (en) | 2008-09-29 | 2014-11-18 | Amazon Technologies, Inc. | Facilitating discussion group formation and interaction |
US9083600B1 (en) | 2008-10-29 | 2015-07-14 | Amazon Technologies, Inc. | Providing presence information within digital items |
US8706685B1 (en) | 2008-10-29 | 2014-04-22 | Amazon Technologies, Inc. | Organizing collaborative annotations |
US8621380B2 (en) | 2010-01-06 | 2013-12-31 | Apple Inc. | Apparatus and method for conditionally enabling or disabling soft buttons |
US9811507B2 (en) * | 2010-01-11 | 2017-11-07 | Apple Inc. | Presenting electronic publications on a graphical user interface of an electronic device |
US9679047B1 (en) | 2010-03-29 | 2017-06-13 | Amazon Technologies, Inc. | Context-sensitive reference works |
US9542091B2 (en) | 2010-06-04 | 2017-01-10 | Apple Inc. | Device, method, and graphical user interface for navigating through a user interface using a dynamic object selection indicator |
US8477109B1 (en) | 2010-06-24 | 2013-07-02 | Amazon Technologies, Inc. | Surfacing reference work entries on touch-sensitive displays |
US8542205B1 (en) | 2010-06-24 | 2013-09-24 | Amazon Technologies, Inc. | Refining search results based on touch gestures |
US8250071B1 (en) | 2010-06-30 | 2012-08-21 | Amazon Technologies, Inc. | Disambiguation of term meaning |
US9098407B2 (en) * | 2010-10-25 | 2015-08-04 | Inkling Systems, Inc. | Methods for automatically retrieving electronic media content items from a server based upon a reading list and facilitating presentation of media objects of the electronic media content items in sequences not constrained by an original order thereof |
US8587547B2 (en) | 2010-11-05 | 2013-11-19 | Apple Inc. | Device, method, and graphical user interface for manipulating soft keyboards |
US8659562B2 (en) | 2010-11-05 | 2014-02-25 | Apple Inc. | Device, method, and graphical user interface for manipulating soft keyboards |
US20120182288A1 (en) * | 2011-01-18 | 2012-07-19 | Sony Corporation | Method and apparatus for information presentation |
US20120185802A1 (en) * | 2011-01-18 | 2012-07-19 | Yisia Young Suk Lee | Method and apparatus for retrieving and displaying information |
US9092132B2 (en) | 2011-01-24 | 2015-07-28 | Apple Inc. | Device, method, and graphical user interface with a dynamic gesture disambiguation threshold |
US10365819B2 (en) | 2011-01-24 | 2019-07-30 | Apple Inc. | Device, method, and graphical user interface for displaying a character input user interface |
US9645986B2 (en) | 2011-02-24 | 2017-05-09 | Google Inc. | Method, medium, and system for creating an electronic book with an umbrella policy |
US10019995B1 (en) | 2011-03-01 | 2018-07-10 | Alice J. Stiebel | Methods and systems for language learning based on a series of pitch patterns |
US11062615B1 (en) | 2011-03-01 | 2021-07-13 | Intelligibility Training LLC | Methods and systems for remote language learning in a pandemic-aware world |
US9268733B1 (en) * | 2011-03-07 | 2016-02-23 | Amazon Technologies, Inc. | Dynamically selecting example passages |
US9251130B1 (en) * | 2011-03-31 | 2016-02-02 | Amazon Technologies, Inc. | Tagging annotations of electronic books |
KR101294306B1 (en) * | 2011-06-09 | 2013-08-08 | 엘지전자 주식회사 | Mobile device and control method for the same |
US20120331023A1 (en) * | 2011-06-24 | 2012-12-27 | Inkling Systems, Inc. | Interactive exhibits |
US9086794B2 (en) * | 2011-07-14 | 2015-07-21 | Microsoft Technology Licensing, Llc | Determining gestures on context based menus |
JP5772331B2 (en) * | 2011-07-20 | 2015-09-02 | カシオ計算機株式会社 | Learning apparatus and program |
WO2013016719A1 (en) * | 2011-07-28 | 2013-01-31 | School Improvement Network, Llc | Management and provision of interactive content |
WO2013028569A2 (en) | 2011-08-19 | 2013-02-28 | Apple Inc. | Interactive content for digital books |
US9195373B2 (en) * | 2011-08-30 | 2015-11-24 | Nook Digital, Llc | System and method for navigation in an electronic document |
US8806335B2 (en) * | 2011-09-06 | 2014-08-12 | Pottermore Limited | Interactive digital experience for a literary work |
US20130060615A1 (en) * | 2011-09-06 | 2013-03-07 | Apple Inc. | Managing access to digital content items |
US9141404B2 (en) | 2011-10-24 | 2015-09-22 | Google Inc. | Extensible framework for ereader tools |
TWI570623B (en) * | 2011-11-07 | 2017-02-11 | 元太科技工業股份有限公司 | Reading apparatus and control method thereof |
US9031493B2 (en) | 2011-11-18 | 2015-05-12 | Google Inc. | Custom narration of electronic books |
KR20130068700A (en) * | 2011-12-16 | 2013-06-26 | 삼성전자주식회사 | Method and apparatus for displaying a electronic book |
KR20130104005A (en) * | 2012-03-12 | 2013-09-25 | 삼성전자주식회사 | Electrinic book system and operating method thereof |
IN2013MU01253A (en) * | 2012-03-30 | 2015-04-17 | Loudcloud Systems Inc | |
US20130346874A1 (en) * | 2012-03-30 | 2013-12-26 | Keys To Medicine, Llc | User configurable electronic textbook |
US9098186B1 (en) | 2012-04-05 | 2015-08-04 | Amazon Technologies, Inc. | Straight line gesture recognition and rendering |
US9373049B1 (en) * | 2012-04-05 | 2016-06-21 | Amazon Technologies, Inc. | Straight line gesture recognition and rendering |
US9996516B2 (en) * | 2012-05-16 | 2018-06-12 | Rakuten, Inc. | Image processing device for determining a display position of an annotation |
US8933312B2 (en) * | 2012-06-01 | 2015-01-13 | Makemusic, Inc. | Distribution of audio sheet music as an electronic book |
US10444836B2 (en) | 2012-06-07 | 2019-10-15 | Nook Digital, Llc | Accessibility aids for users of electronic devices |
US9747633B2 (en) * | 2012-06-11 | 2017-08-29 | The Board Of Trustees Of The Leland Stanford Junior University | Method and related apparatus for generating online and printing on-demand compilation of works with customer selectable printing options |
WO2014028068A1 (en) | 2012-08-17 | 2014-02-20 | Flextronics Ap, Llc | Media center |
US8904304B2 (en) * | 2012-06-25 | 2014-12-02 | Barnesandnoble.Com Llc | Creation and exposure of embedded secondary content data relevant to a primary content page of an electronic book |
JP5390669B1 (en) * | 2012-06-29 | 2014-01-15 | 楽天株式会社 | Post display system, post display method, and post display program |
JP6013051B2 (en) * | 2012-07-02 | 2016-10-25 | 東芝メディカルシステムズ株式会社 | Ultrasonic diagnostic apparatus and operation support method thereof |
US20140015749A1 (en) * | 2012-07-10 | 2014-01-16 | University Of Rochester, Office Of Technology Transfer | Closed-loop crowd control of existing interface |
US8522130B1 (en) * | 2012-07-12 | 2013-08-27 | Chegg, Inc. | Creating notes in a multilayered HTML document |
KR20140010274A (en) * | 2012-07-16 | 2014-01-24 | 삼성전자주식회사 | Method and apparatus for managing e-book |
US9658746B2 (en) | 2012-07-20 | 2017-05-23 | Nook Digital, Llc | Accessible reading mode techniques for electronic devices |
US20140033030A1 (en) * | 2012-07-24 | 2014-01-30 | Anthony R. Pfister | Indexing and providing electronic publications in a networked computing environment |
US20140047332A1 (en) * | 2012-08-08 | 2014-02-13 | Microsoft Corporation | E-reader systems |
US20160119675A1 (en) | 2012-09-06 | 2016-04-28 | Flextronics Ap, Llc | Programming user behavior reporting |
US11368760B2 (en) | 2012-08-17 | 2022-06-21 | Flextronics Ap, Llc | Applications generating statistics for user behavior |
US9819986B2 (en) | 2012-08-17 | 2017-11-14 | Flextronics Ap, Llc | Automated DLNA scanning with notification |
US9047356B2 (en) * | 2012-09-05 | 2015-06-02 | Google Inc. | Synchronizing multiple reading positions in electronic books |
US20140074648A1 (en) * | 2012-09-11 | 2014-03-13 | Google Inc. | Portion recommendation for electronic books |
KR20140037535A (en) * | 2012-09-19 | 2014-03-27 | 삼성전자주식회사 | Method and apparatus for creating e-book including user effects |
USD745566S1 (en) * | 2012-09-22 | 2015-12-15 | uMotif, Ltd. | Display screen or a portion thereof with animated graphical user interface |
WO2014051451A1 (en) * | 2012-09-25 | 2014-04-03 | Intel Corporation | Capturing objects in editable format using gestures |
US9454677B1 (en) | 2012-10-16 | 2016-09-27 | Truedata Systems, Inc. | Secure communication architecture including video sniffer |
WO2014062853A2 (en) * | 2012-10-16 | 2014-04-24 | Lloyd, James | Secure communication architecture |
US9356787B2 (en) | 2012-10-16 | 2016-05-31 | Truedata Systems, Inc. | Secure communication architecture including sniffer |
US9430776B2 (en) | 2012-10-25 | 2016-08-30 | Google Inc. | Customized E-books |
US10551928B2 (en) | 2012-11-20 | 2020-02-04 | Samsung Electronics Company, Ltd. | GUI transitions on wearable electronic device |
US10423214B2 (en) | 2012-11-20 | 2019-09-24 | Samsung Electronics Company, Ltd | Delegating processing from wearable electronic device |
US9030446B2 (en) | 2012-11-20 | 2015-05-12 | Samsung Electronics Co., Ltd. | Placement of optical sensor on wearable electronic device |
US11372536B2 (en) | 2012-11-20 | 2022-06-28 | Samsung Electronics Company, Ltd. | Transition and interaction model for wearable electronic device |
US11157436B2 (en) | 2012-11-20 | 2021-10-26 | Samsung Electronics Company, Ltd. | Services associated with wearable electronic device |
US11237719B2 (en) | 2012-11-20 | 2022-02-01 | Samsung Electronics Company, Ltd. | Controlling remote electronic device with wearable electronic device |
US8994827B2 (en) | 2012-11-20 | 2015-03-31 | Samsung Electronics Co., Ltd | Wearable electronic device |
US9477313B2 (en) | 2012-11-20 | 2016-10-25 | Samsung Electronics Co., Ltd. | User gesture input to wearable electronic device involving outward-facing sensor of device |
US10185416B2 (en) | 2012-11-20 | 2019-01-22 | Samsung Electronics Co., Ltd. | User gesture input to wearable electronic device involving movement of device |
US9665550B2 (en) * | 2012-11-30 | 2017-05-30 | Michael E. Lee | Expert based integrated annotation software interface and database using e-book technology |
KR102085225B1 (en) | 2012-12-05 | 2020-03-05 | 삼성전자주식회사 | User terminal apparatus and contol method thereof |
US20140164900A1 (en) * | 2012-12-11 | 2014-06-12 | Microsoft Corporation | Appending content with annotation |
US9001064B2 (en) | 2012-12-14 | 2015-04-07 | Barnesandnoble.Com Llc | Touch sensitive device with pinch-based archive and restore functionality |
US9030430B2 (en) | 2012-12-14 | 2015-05-12 | Barnesandnoble.Com Llc | Multi-touch navigation mode |
US9477382B2 (en) | 2012-12-14 | 2016-10-25 | Barnes & Noble College Booksellers, Inc. | Multi-page content selection technique |
US9009028B2 (en) | 2012-12-14 | 2015-04-14 | Google Inc. | Custom dictionaries for E-books |
US9134893B2 (en) | 2012-12-14 | 2015-09-15 | Barnes & Noble College Booksellers, Llc | Block-based content selecting technique for touch screen UI |
US8963865B2 (en) | 2012-12-14 | 2015-02-24 | Barnesandnoble.Com Llc | Touch sensitive device with concentration mode |
US9134892B2 (en) | 2012-12-14 | 2015-09-15 | Barnes & Noble College Booksellers, Llc | Drag-based content selection technique for touch screen UI |
US9448719B2 (en) | 2012-12-14 | 2016-09-20 | Barnes & Noble College Booksellers, Llc | Touch sensitive device with pinch-based expand/collapse function |
US9134903B2 (en) | 2012-12-14 | 2015-09-15 | Barnes & Noble College Booksellers, Llc | Content selecting technique for touch screen UI |
US20140181633A1 (en) * | 2012-12-20 | 2014-06-26 | Stanley Mo | Method and apparatus for metadata directed dynamic and personal data curation |
US9836154B2 (en) | 2013-01-24 | 2017-12-05 | Nook Digital, Llc | Selective touch scan area and reporting techniques |
US9971495B2 (en) | 2013-01-28 | 2018-05-15 | Nook Digital, Llc | Context based gesture delineation for user interaction in eyes-free mode |
NL2010357C2 (en) * | 2013-02-14 | 2014-08-18 | Optelec Dev B V | Low vision device and method for recording and displaying an object on a screen. |
JP2014164630A (en) * | 2013-02-27 | 2014-09-08 | Sony Corp | Information processing apparatus, information processing method, and program |
US9003333B2 (en) * | 2013-03-04 | 2015-04-07 | Zynga Inc. | Sequential selection of multiple objects |
US9448643B2 (en) | 2013-03-11 | 2016-09-20 | Barnes & Noble College Booksellers, Llc | Stylus sensitive device with stylus angle detection functionality |
US9946365B2 (en) | 2013-03-11 | 2018-04-17 | Barnes & Noble College Booksellers, Llc | Stylus-based pressure-sensitive area for UI control of computing device |
US9632594B2 (en) | 2013-03-11 | 2017-04-25 | Barnes & Noble College Booksellers, Llc | Stylus sensitive device with stylus idle functionality |
US9760187B2 (en) | 2013-03-11 | 2017-09-12 | Barnes & Noble College Booksellers, Llc | Stylus with active color display/select for touch sensitive devices |
US9367161B2 (en) | 2013-03-11 | 2016-06-14 | Barnes & Noble College Booksellers, Llc | Touch sensitive device with stylus-based grab and paste functionality |
US9766723B2 (en) | 2013-03-11 | 2017-09-19 | Barnes & Noble College Booksellers, Llc | Stylus sensitive device with hover over stylus control functionality |
US9785259B2 (en) | 2013-03-11 | 2017-10-10 | Barnes & Noble College Booksellers, Llc | Stylus-based slider functionality for UI control of computing device |
US9626008B2 (en) | 2013-03-11 | 2017-04-18 | Barnes & Noble College Booksellers, Llc | Stylus-based remote wipe of lost device |
US9891722B2 (en) | 2013-03-11 | 2018-02-13 | Barnes & Noble College Booksellers, Llc | Stylus-based notification system |
US9189084B2 (en) | 2013-03-11 | 2015-11-17 | Barnes & Noble College Booksellers, Llc | Stylus-based user data storage and access |
US9261985B2 (en) | 2013-03-11 | 2016-02-16 | Barnes & Noble College Booksellers, Llc | Stylus-based touch-sensitive area for UI control of computing device |
US9600053B2 (en) | 2013-03-11 | 2017-03-21 | Barnes & Noble College Booksellers, Llc | Stylus control feature for locking/unlocking touch sensitive devices |
US20140349259A1 (en) * | 2013-03-14 | 2014-11-27 | Apple Inc. | Device, method, and graphical user interface for a group reading environment |
US20140315163A1 (en) * | 2013-03-14 | 2014-10-23 | Apple Inc. | Device, method, and graphical user interface for a group reading environment |
US20140282076A1 (en) * | 2013-03-15 | 2014-09-18 | Fisher Printing, Inc. | Online Proofing |
US9454622B2 (en) * | 2013-03-15 | 2016-09-27 | Doron Etzioni | Educational hub |
US9146672B2 (en) | 2013-04-10 | 2015-09-29 | Barnes & Noble College Booksellers, Llc | Multidirectional swipe key for virtual keyboard |
US10489501B2 (en) * | 2013-04-11 | 2019-11-26 | Google Llc | Systems and methods for displaying annotated video content by mobile computing devices |
US8966617B2 (en) | 2013-04-23 | 2015-02-24 | Barnesandnoble.Com Llc | Image pattern unlocking techniques for touch sensitive devices |
US8963869B2 (en) | 2013-04-23 | 2015-02-24 | Barnesandnoble.Com Llc | Color pattern unlocking techniques for touch sensitive devices |
US9152321B2 (en) | 2013-05-03 | 2015-10-06 | Barnes & Noble College Booksellers, Llc | Touch sensitive UI technique for duplicating content |
US9612740B2 (en) | 2013-05-06 | 2017-04-04 | Barnes & Noble College Booksellers, Inc. | Swipe-based delete confirmation for touch sensitive devices |
US10402915B2 (en) | 2013-05-10 | 2019-09-03 | Samsung Electronics Co., Ltd. | Methods and systems for on-device social grouping |
US10341421B2 (en) | 2013-05-10 | 2019-07-02 | Samsung Electronics Co., Ltd. | On-device social grouping for automated responses |
US9697562B2 (en) | 2013-06-07 | 2017-07-04 | International Business Machines Corporation | Resource provisioning for electronic books |
US10019153B2 (en) | 2013-06-07 | 2018-07-10 | Nook Digital, Llc | Scrapbooking digital content in computing devices using a swiping gesture |
US9400601B2 (en) | 2013-06-21 | 2016-07-26 | Nook Digital, Llc | Techniques for paging through digital content on touch screen devices |
US9423932B2 (en) | 2013-06-21 | 2016-08-23 | Nook Digital, Llc | Zoom view mode for digital content including multiple regions of interest |
US9244603B2 (en) | 2013-06-21 | 2016-01-26 | Nook Digital, Llc | Drag and drop techniques for discovering related content |
US9335897B2 (en) * | 2013-08-08 | 2016-05-10 | Palantir Technologies Inc. | Long click display of a context menu |
WO2015026345A1 (en) * | 2013-08-22 | 2015-02-26 | Trunity, Inc. | System and method for virtual textbook creation and remuneration |
US20150058808A1 (en) * | 2013-08-23 | 2015-02-26 | General Electric Company | Dynamic contextual touch menu |
US9811238B2 (en) * | 2013-08-29 | 2017-11-07 | Sharp Laboratories Of America, Inc. | Methods and systems for interacting with a digital marking surface |
US8892679B1 (en) | 2013-09-13 | 2014-11-18 | Box, Inc. | Mobile device, methods and user interfaces thereof in a mobile device platform featuring multifunctional access and engagement in a collaborative environment provided by a cloud-based platform |
US9704137B2 (en) * | 2013-09-13 | 2017-07-11 | Box, Inc. | Simultaneous editing/accessing of content by collaborator invitation through a web-based or mobile application to a cloud-based collaboration platform |
GB2518298A (en) | 2013-09-13 | 2015-03-18 | Box Inc | High-availability architecture for a cloud-based concurrent-access collaboration platform |
US9575948B2 (en) | 2013-10-04 | 2017-02-21 | Nook Digital, Llc | Annotation of digital content via selective fixed formatting |
US9436918B2 (en) | 2013-10-07 | 2016-09-06 | Microsoft Technology Licensing, Llc | Smart selection of text spans |
US10866931B2 (en) | 2013-10-22 | 2020-12-15 | Box, Inc. | Desktop application for accessing a cloud collaboration platform |
US10474747B2 (en) * | 2013-12-16 | 2019-11-12 | International Business Machines Corporation | Adjusting time dependent terminology in a question and answer system |
US10620796B2 (en) | 2013-12-19 | 2020-04-14 | Barnes & Noble College Booksellers, Llc | Visual thumbnail scrubber for digital content |
US9460221B2 (en) | 2013-12-20 | 2016-10-04 | Google Inc. | History of reading positions in eBooks |
US10275506B1 (en) * | 2013-12-20 | 2019-04-30 | Amazon Technologies, Inc. | Coordinating data across services |
US10356032B2 (en) | 2013-12-26 | 2019-07-16 | Palantir Technologies Inc. | System and method for detecting confidential information emails |
US9225522B2 (en) | 2013-12-27 | 2015-12-29 | Linkedin Corporation | Techniques for populating a content stream on a mobile device |
US9338013B2 (en) | 2013-12-30 | 2016-05-10 | Palantir Technologies Inc. | Verifiable redactable audit log |
US9367208B2 (en) | 2013-12-31 | 2016-06-14 | Barnes & Noble College Booksellers, Llc | Move icon to reveal textual information |
US9792272B2 (en) * | 2013-12-31 | 2017-10-17 | Barnes & Noble College Booksellers, Llc | Deleting annotations of paginated digital content |
US9424241B2 (en) | 2013-12-31 | 2016-08-23 | Barnes & Noble College Booksellers, Llc | Annotation mode including multiple note types for paginated digital content |
US10534528B2 (en) | 2013-12-31 | 2020-01-14 | Barnes & Noble College Booksellers, Llc | Digital flash card techniques |
US10331777B2 (en) * | 2013-12-31 | 2019-06-25 | Barnes & Noble College Booksellers, Llc | Merging annotations of paginated digital content |
US9367212B2 (en) | 2013-12-31 | 2016-06-14 | Barnes & Noble College Booksellers, Llc | User interface for navigating paginated digital content |
US10915698B2 (en) * | 2013-12-31 | 2021-02-09 | Barnes & Noble College Booksellers, Llc | Multi-purpose tool for interacting with paginated digital content |
US9588979B2 (en) | 2013-12-31 | 2017-03-07 | Barnes & Noble College Booksellers, Llc | UI techniques for navigating a file manager of an electronic computing device |
US8832832B1 (en) | 2014-01-03 | 2014-09-09 | Palantir Technologies Inc. | IP reputation |
USD762236S1 (en) * | 2014-02-10 | 2016-07-26 | Tencent Technology (Shenzhen) Company Limited | Display screen portion with animated graphical user interface |
US20150234786A1 (en) * | 2014-02-14 | 2015-08-20 | Kobo Inc. | E-reader device to display content from different resources on a partitioned display area |
US10691332B2 (en) | 2014-02-28 | 2020-06-23 | Samsung Electronics Company, Ltd. | Text input on an interactive display |
US9552345B2 (en) | 2014-02-28 | 2017-01-24 | Microsoft Technology Licensing, Llc | Gestural annotations |
US20150277678A1 (en) * | 2014-03-26 | 2015-10-01 | Kobo Incorporated | Information presentation techniques for digital content |
US20150277677A1 (en) * | 2014-03-26 | 2015-10-01 | Kobo Incorporated | Information presentation techniques for digital content |
US11188209B2 (en) | 2014-04-02 | 2021-11-30 | Microsoft Technology Licensing, Llc | Progressive functionality access for content insertion and modification |
US9524440B2 (en) | 2014-04-04 | 2016-12-20 | Myscript | System and method for superimposed handwriting recognition technology |
US9384403B2 (en) | 2014-04-04 | 2016-07-05 | Myscript | System and method for superimposed handwriting recognition technology |
US9898162B2 (en) | 2014-05-30 | 2018-02-20 | Apple Inc. | Swiping functions for messaging applications |
US9971500B2 (en) | 2014-06-01 | 2018-05-15 | Apple Inc. | Displaying options, assigning notification, ignoring messages, and simultaneous user interface displays in a messaging application |
US9619557B2 (en) | 2014-06-30 | 2017-04-11 | Palantir Technologies, Inc. | Systems and methods for key phrase characterization of documents |
US9535974B1 (en) | 2014-06-30 | 2017-01-03 | Palantir Technologies Inc. | Systems and methods for identifying key phrase clusters within documents |
US9430141B1 (en) * | 2014-07-01 | 2016-08-30 | Amazon Technologies, Inc. | Adaptive annotations |
US9256664B2 (en) | 2014-07-03 | 2016-02-09 | Palantir Technologies Inc. | System and method for news events detection and visualization |
US9927963B2 (en) | 2014-07-17 | 2018-03-27 | Barnes & Noble College Booksellers, Llc | Digital flash cards including links to digital content |
US9921721B2 (en) * | 2014-08-08 | 2018-03-20 | Google Llc | Navigation interfaces for ebooks |
US9939996B2 (en) * | 2014-08-13 | 2018-04-10 | Google Llc | Smart scrubber in an ebook navigation interface |
US9419992B2 (en) | 2014-08-13 | 2016-08-16 | Palantir Technologies Inc. | Unwanted tunneling alert system |
US10203933B2 (en) | 2014-11-06 | 2019-02-12 | Microsoft Technology Licensing, Llc | Context-based command surfacing |
US9043894B1 (en) | 2014-11-06 | 2015-05-26 | Palantir Technologies Inc. | Malicious software detection in a computing system |
US20160139763A1 (en) * | 2014-11-18 | 2016-05-19 | Kobo Inc. | Syllabary-based audio-dictionary functionality for digital reading content |
US9489572B2 (en) | 2014-12-02 | 2016-11-08 | Myscript | System and method for recognizing geometric shapes |
US20160162136A1 (en) * | 2014-12-04 | 2016-06-09 | Kobo Incorporated | Method and system for e-book reading-launch interface |
US9648036B2 (en) | 2014-12-29 | 2017-05-09 | Palantir Technologies Inc. | Systems for network risk assessment including processing of user access rights associated with a network of devices |
US9467455B2 (en) | 2014-12-29 | 2016-10-11 | Palantir Technologies Inc. | Systems for network risk assessment including processing of user access rights associated with a network of devices |
KR20160083759A (en) * | 2015-01-02 | 2016-07-12 | 삼성전자주식회사 | Method for providing an annotation and apparatus thereof |
JP6246146B2 (en) * | 2015-02-22 | 2017-12-13 | 株式会社オプティム | Electronic book terminal, information sharing method, and program for electronic book terminal |
US9910562B2 (en) * | 2015-03-01 | 2018-03-06 | Google Llc | Skimming to and past points of interest in digital content |
US11550993B2 (en) * | 2015-03-08 | 2023-01-10 | Microsoft Technology Licensing, Llc | Ink experience for images |
US10223343B2 (en) | 2015-03-17 | 2019-03-05 | Goessential Inc. | Method for providing selection overlays on electronic consumer content |
WO2016161442A1 (en) * | 2015-04-03 | 2016-10-06 | C. Kleinferchner Consulting KG | Selection overlays on electronic content |
CN104778044B (en) * | 2015-04-03 | 2018-03-20 | 北京奇虎科技有限公司 | Method and device for touch-screen gesture event stream distribution |
US9979890B2 (en) | 2015-04-23 | 2018-05-22 | Apple Inc. | Digital viewfinder user interface for multiple cameras |
CN104809295A (en) * | 2015-04-27 | 2015-07-29 | 任晖 | Method for making a dynamic digital children's publication using computer software |
US9852131B2 (en) * | 2015-05-18 | 2017-12-26 | Google Llc | Techniques for providing visual translation cards including contextually relevant definitions and examples |
US9971753B2 (en) * | 2015-06-04 | 2018-05-15 | University Of Central Florida Research Foundation, Inc. | Computer system providing collaborative learning features and related methods |
US9407652B1 (en) | 2015-06-26 | 2016-08-02 | Palantir Technologies Inc. | Network anomaly detection |
US9456000B1 (en) | 2015-08-06 | 2016-09-27 | Palantir Technologies Inc. | Systems, methods, user interfaces, and computer-readable media for investigating potential malicious communications |
US9537880B1 (en) | 2015-08-19 | 2017-01-03 | Palantir Technologies Inc. | Anomalous network monitoring, user behavior detection and database system |
CN105070114A (en) * | 2015-08-26 | 2015-11-18 | 华中师范大学 | Sticky-note-based visualized classroom interaction system |
US10178218B1 (en) | 2015-09-04 | 2019-01-08 | Vishal Vadodaria | Intelligent agent / personal virtual assistant with animated 3D persona, facial expressions, human gestures, body movements and mental states |
US10044745B1 (en) | 2015-10-12 | 2018-08-07 | Palantir Technologies, Inc. | Systems for computer network security risk assessment including user compromise analysis associated with a network of devices |
US20170116047A1 (en) * | 2015-10-25 | 2017-04-27 | Khozem Z. Dohadwala | Further applications of Reading State control - A method for repositioning reading material on electronic devices |
CN105262675A (en) * | 2015-10-29 | 2016-01-20 | 北京奇虎科技有限公司 | Method and apparatus for controlling chat based on electronic book |
USD830372S1 (en) | 2015-11-10 | 2018-10-09 | Gea Farm Technologies Gmbh | Display screen with a graphical user interface for a herd management system |
US20170154542A1 (en) | 2015-12-01 | 2017-06-01 | Gary King | Automated grading for interactive learning applications |
US10032208B2 (en) * | 2015-12-15 | 2018-07-24 | International Business Machines Corporation | Identifying recommended electronic books with detailed comparisons |
US20170177577A1 (en) * | 2015-12-18 | 2017-06-22 | Google Inc. | Biasing scrubber for digital content |
US9888039B2 (en) | 2015-12-28 | 2018-02-06 | Palantir Technologies Inc. | Network-based permissioning system |
US9916465B1 (en) | 2015-12-29 | 2018-03-13 | Palantir Technologies Inc. | Systems and methods for automatic and customizable data minimization of electronic data stores |
US10909479B2 (en) | 2016-02-12 | 2021-02-02 | David Harris Walters | Personalized multimedia autographing system |
CN105975175A (en) * | 2016-04-26 | 2016-09-28 | 广东小天才科技有限公司 | Method and device for identifying selection items |
US10498711B1 (en) | 2016-05-20 | 2019-12-03 | Palantir Technologies Inc. | Providing a booting key to a remote system |
KR20170138279A (en) * | 2016-06-07 | 2017-12-15 | 엘지전자 주식회사 | Mobile terminal and method for controlling the same |
US10739972B2 (en) | 2016-06-10 | 2020-08-11 | Apple Inc. | Device, method, and graphical user interface for managing electronic communications |
US9854156B1 (en) | 2016-06-12 | 2017-12-26 | Apple Inc. | User interface for camera effects |
USD794065S1 (en) * | 2016-06-17 | 2017-08-08 | Google Inc. | Display screen with an animated graphical user interface |
US10084802B1 (en) | 2016-06-21 | 2018-09-25 | Palantir Technologies Inc. | Supervisory control and data acquisition |
US10303352B2 (en) | 2016-06-30 | 2019-05-28 | Microsoft Technology Licensing, Llc | Navigating long distances on navigable surfaces |
KR102625906B1 (en) * | 2016-07-04 | 2024-01-23 | 김용한 | Adhesive memo paper for voice recording using electronic pen |
US10291637B1 (en) | 2016-07-05 | 2019-05-14 | Palantir Technologies Inc. | Network anomaly detection and profiling |
JP6683042B2 (en) * | 2016-07-06 | 2020-04-15 | 富士ゼロックス株式会社 | Data processing device, system and program |
US10698927B1 (en) | 2016-08-30 | 2020-06-30 | Palantir Technologies Inc. | Multiple sensor session and log information compression and correlation system |
CN106599219B (en) * | 2016-11-25 | 2018-02-23 | 杭州日阅通讯有限公司 | Implementation method for a digital book interaction and sharing system |
US11120074B2 (en) | 2016-12-06 | 2021-09-14 | International Business Machines Corporation | Streamlining citations and references |
US20180165724A1 (en) * | 2016-12-13 | 2018-06-14 | International Business Machines Corporation | Method and system for contextual business intelligence report generation and display |
US10728262B1 (en) | 2016-12-21 | 2020-07-28 | Palantir Technologies Inc. | Context-aware network-based malicious activity warning systems |
US10721262B2 (en) | 2016-12-28 | 2020-07-21 | Palantir Technologies Inc. | Resource-centric network cyber attack warning system |
US10754872B2 (en) | 2016-12-28 | 2020-08-25 | Palantir Technologies Inc. | Automatically executing tasks and configuring access control lists in a data transformation system |
JP6807248B2 (en) * | 2017-02-24 | 2021-01-06 | 株式会社東芝 | Display control device and display control program |
US10643485B2 (en) * | 2017-03-30 | 2020-05-05 | International Business Machines Corporation | Gaze based classroom notes generator |
CN107038905A (en) * | 2017-05-11 | 2017-08-11 | 深圳市恒科电子科技有限公司 | VR intelligent education control system |
WO2018218660A1 (en) * | 2017-06-02 | 2018-12-06 | 深圳市华阅文化传媒有限公司 | Method and device for multi-person voice chatting on reading page |
US10027551B1 (en) | 2017-06-29 | 2018-07-17 | Palantir Technologies, Inc. | Access controls through node-based effective policy identifiers |
US10963465B1 (en) | 2017-08-25 | 2021-03-30 | Palantir Technologies Inc. | Rapid importation of data including temporally tracked object recognition |
US10984427B1 (en) | 2017-09-13 | 2021-04-20 | Palantir Technologies Inc. | Approaches for analyzing entity relationships |
US10079832B1 (en) | 2017-10-18 | 2018-09-18 | Palantir Technologies Inc. | Controlling user creation of data resources on a data processing platform |
GB201716170D0 (en) | 2017-10-04 | 2017-11-15 | Palantir Technologies Inc | Controlling user creation of data resources on a data processing platform |
US10250401B1 (en) | 2017-11-29 | 2019-04-02 | Palantir Technologies Inc. | Systems and methods for providing category-sensitive chat channels |
US11133925B2 (en) | 2017-12-07 | 2021-09-28 | Palantir Technologies Inc. | Selective access to encrypted logs |
US10142349B1 (en) | 2018-02-22 | 2018-11-27 | Palantir Technologies Inc. | Verifying network-based permissioning rights |
WO2019130333A1 (en) * | 2017-12-28 | 2019-07-04 | Bodhaguru Learning Private Limited | Learning device and method thereof |
US10878051B1 (en) | 2018-03-30 | 2020-12-29 | Palantir Technologies Inc. | Mapping device identifiers |
EP4290400A3 (en) | 2018-04-03 | 2024-03-06 | Palantir Technologies Inc. | Controlling access to computer resources |
US11023661B2 (en) * | 2018-05-03 | 2021-06-01 | Microsoft Technology Licensing, Llc | Visually enhanced digital ink |
US10949400B2 (en) | 2018-05-09 | 2021-03-16 | Palantir Technologies Inc. | Systems and methods for tamper-resistant activity logging |
US10579163B2 (en) * | 2018-06-02 | 2020-03-03 | Mersive Technologies, Inc. | System and method of annotation of a shared display using a mobile device |
US11928984B2 (en) | 2018-06-07 | 2024-03-12 | Thinkster Learning Inc. | Intelligent and contextual system for test management |
US11030913B2 (en) * | 2018-06-07 | 2021-06-08 | Thinkster Learning, Inc. | Intelligent and contextual system for knowledge progression and quiz management |
US11244063B2 (en) | 2018-06-11 | 2022-02-08 | Palantir Technologies Inc. | Row-level and column-level policy service |
CN110797001B (en) * | 2018-07-17 | 2022-04-12 | 阿里巴巴(中国)有限公司 | Method and device for generating voice audio of electronic book and readable storage medium |
US10645294B1 (en) | 2019-05-06 | 2020-05-05 | Apple Inc. | User interfaces for capturing and managing visual media |
US11770601B2 (en) | 2019-05-06 | 2023-09-26 | Apple Inc. | User interfaces for capturing and managing visual media |
KR102275356B1 (en) * | 2018-10-05 | 2021-07-09 | 엔에이치엔 주식회사 | Terminal and method for providing mediation information for collaborative communication |
CN111258465B (en) * | 2018-12-03 | 2021-10-12 | 连尚(新昌)网络科技有限公司 | Method and equipment for displaying and viewing information |
CN109710144B (en) * | 2018-12-05 | 2020-11-17 | 掌阅科技股份有限公司 | Notebook page processing method, computer device and storage medium |
CN109685053B (en) * | 2018-12-18 | 2021-11-12 | 北京天融信网络安全技术有限公司 | Method and device for training character recognition system, storage medium and electronic equipment |
CN109815189A (en) * | 2019-01-31 | 2019-05-28 | 北京翰舟信息科技有限公司 | Intelligent reading method, device, system and storage medium |
US10868887B2 (en) | 2019-02-08 | 2020-12-15 | Palantir Technologies Inc. | Systems and methods for isolating applications associated with multiple tenants within a computing platform |
CN115185445A (en) * | 2019-04-17 | 2022-10-14 | 华为技术有限公司 | Method for adding annotations and electronic equipment |
US11706521B2 (en) | 2019-05-06 | 2023-07-18 | Apple Inc. | User interfaces for capturing and managing visual media |
CN110085061A (en) * | 2019-05-14 | 2019-08-02 | 韩钦德 | Knowledge learning system and application method thereof |
CN110231907A (en) * | 2019-06-19 | 2019-09-13 | 京东方科技集团股份有限公司 | Display method for electronic reading, electronic device, computer device and medium |
US11195509B2 (en) * | 2019-08-29 | 2021-12-07 | Microsoft Technology Licensing, Llc | System and method for interactive virtual assistant generation for assemblages |
US11704441B2 (en) | 2019-09-03 | 2023-07-18 | Palantir Technologies Inc. | Charter-based access controls for managing computer resources |
USD916133S1 (en) * | 2019-09-08 | 2021-04-13 | Apple Inc. | Electronic device with icon |
US10761889B1 (en) | 2019-09-18 | 2020-09-01 | Palantir Technologies Inc. | Systems and methods for autoscaling instance groups of computing platforms |
USD930701S1 (en) | 2019-10-10 | 2021-09-14 | Samsung Electronics Co., Ltd. | Display screen or portion thereof with transitional graphical user interface |
CN111459373A (en) * | 2020-03-26 | 2020-07-28 | 掌阅科技股份有限公司 | Electronic book idea data display method, computing device and computer storage medium |
USD983810S1 (en) | 2020-07-10 | 2023-04-18 | Schlumberger Technology Corporation | Electronic device with display screen and graphical user interface |
USD1006820S1 (en) | 2020-07-10 | 2023-12-05 | Schlumberger Technology Corporation | Electronic device with display screen and graphical user interface |
USD1009070S1 (en) | 2020-07-10 | 2023-12-26 | Schlumberger Technology Corporation | Electronic device with display screen and graphical user interface |
CN115729416A (en) * | 2021-09-02 | 2023-03-03 | 北京字节跳动网络技术有限公司 | Information reply method, device, electronic equipment, readable storage medium and program product |
US20230087611A1 (en) * | 2021-09-22 | 2023-03-23 | Apple Inc. | Highlighting reading based on adaptive prediction |
KR102684023B1 (en) | 2024-04-20 | 2024-07-12 | 주식회사 큐버 | group control method of activation mode of applications by use of school class profiles |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6493006B1 (en) * | 1996-05-10 | 2002-12-10 | Apple Computer, Inc. | Graphical user interface having contextual menus |
US6664991B1 (en) * | 2000-01-06 | 2003-12-16 | Microsoft Corporation | Method and apparatus for providing context menus on a pen-based device |
US7058902B2 (en) * | 2002-07-30 | 2006-06-06 | Microsoft Corporation | Enhanced on-object context menus |
US7733366B2 (en) * | 2002-07-01 | 2010-06-08 | Microsoft Corporation | Computer network-based, interactive, multimedia learning system and process |
US20110167350A1 (en) * | 2010-01-06 | 2011-07-07 | Apple Inc. | Assist Features For Content Display Device |
US20120054671A1 (en) * | 2010-08-30 | 2012-03-01 | Vmware, Inc. | Multi-touch interface gestures for keyboard and/or mouse inputs |
US20120096386A1 (en) * | 2010-10-19 | 2012-04-19 | Laurent Baumann | User interface for application transfers |
US20120096383A1 (en) * | 2010-10-15 | 2012-04-19 | Sony Network Entertainment Inc. | Loader animation |
Family Cites Families (275)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4820167A (en) * | 1987-01-14 | 1989-04-11 | Nobles Anthony A | Electronic school teaching system |
US4985697A (en) * | 1987-07-06 | 1991-01-15 | Learning Insights, Ltd. | Electronic book educational publishing method using buried reference materials and alternate learning levels |
US7401286B1 (en) | 1993-12-02 | 2008-07-15 | Discovery Communications, Inc. | Electronic book electronic links |
US5392387A (en) | 1992-12-17 | 1995-02-21 | International Business Machines Corporation | Method and system for enhanced data access efficiency in an electronic book |
US5463725A (en) | 1992-12-31 | 1995-10-31 | International Business Machines Corp. | Data processing system graphical user interface which emulates printed material |
US9053640B1 (en) * | 1993-12-02 | 2015-06-09 | Adrea, LLC | Interactive electronic book |
US6178431B1 (en) | 1994-10-05 | 2001-01-23 | International Business Machines Corporation | Method and system for providing side notes in word processing |
US5629980A (en) | 1994-11-23 | 1997-05-13 | Xerox Corporation | System for controlling the distribution and use of digital works |
US5799157A (en) * | 1994-12-13 | 1998-08-25 | Elcom Systems, Inc. | System and method for creating interactive electronic systems to present information and execute transactions |
US5877765A (en) | 1995-09-11 | 1999-03-02 | Microsoft Corporation | Method and system for displaying internet shortcut icons on the desktop |
GB2307813A (en) | 1995-11-02 | 1997-06-04 | Int Mobile Satellite Org | Text/Image separation and compression encoding method |
US5893132A (en) | 1995-12-14 | 1999-04-06 | Motorola, Inc. | Method and system for encoding a book for reading using an electronic book |
US7155677B2 (en) * | 1997-04-25 | 2006-12-26 | Diane Kessenich | Portal for supplying supplementary information for printed books |
US6017219A (en) | 1997-06-18 | 2000-01-25 | International Business Machines Corporation | System and method for interactive reading and language instruction |
US7028899B2 (en) | 1999-06-07 | 2006-04-18 | Metrologic Instruments, Inc. | Method of speckle-noise pattern reduction and apparatus therefor based on reducing the temporal-coherence of the planar laser illumination beam before it illuminates the target object by applying temporal phase modulation techniques during the transmission of the plib towards the target |
US8479122B2 (en) | 2004-07-30 | 2013-07-02 | Apple Inc. | Gestures for touch sensitive input devices |
US6181344B1 (en) | 1998-03-20 | 2001-01-30 | Nuvomedia, Inc. | Drag-and-release method for configuring user-definable function key of hand-held computing device |
ATE243862T1 (en) | 1998-04-24 | 2003-07-15 | Natural Input Solutions Inc | METHOD FOR PROCESSING AND CORRECTION IN A STYLIST-ASSISTED USER INTERFACE |
US6178430B1 (en) | 1998-05-11 | 2001-01-23 | Mci Communication Corporation | Automated information technology standards management system |
US6122647A (en) | 1998-05-19 | 2000-09-19 | Perspecta, Inc. | Dynamic generation of contextual links in hypertext documents |
US6438564B1 (en) | 1998-06-17 | 2002-08-20 | Microsoft Corporation | Method for associating a discussion with a document |
DE69939118D1 (en) | 1998-07-02 | 2008-08-28 | Sharp Kk | ADMINISTRATIVE SYSTEM FOR ELECTRONIC PRODUCTS IN WHICH THE FOLLOWING DEVICES ARE ASSOCIATED BY COMMUNICATION LINE: APPARATUS FOR MANAGING COPYRIGHT RIGHTS, ELECTRONIC PRODUCT SALES DEVICE, ELECTRONIC MASS DISPLAYING DEVICE |
US6178344B1 (en) | 1999-03-02 | 2001-01-23 | The United States Of America As Represented By The Secretary Of The Navy | Reconfigurable array for positioning medical sensors |
US6549219B2 (en) | 1999-04-09 | 2003-04-15 | International Business Machines Corporation | Pie menu graphical user interface |
US6580683B1 (en) | 1999-06-23 | 2003-06-17 | Dataplay, Inc. | Optical recording medium having a master data area and a writeable data area |
JP2001076621A (en) | 1999-06-30 | 2001-03-23 | Toshiba Corp | Manufacture of electron tube stem and electron tube, manufacturing device thereof, the electron tube stem, and electron gun |
USD464360S1 (en) | 1999-11-17 | 2002-10-15 | Siemens Aktiengesellschaft | User interface for a medical playback device |
US7028267B1 (en) | 1999-12-07 | 2006-04-11 | Microsoft Corporation | Method and apparatus for capturing and rendering text annotations for non-modifiable electronic content |
US6957233B1 (en) | 1999-12-07 | 2005-10-18 | Microsoft Corporation | Method and apparatus for capturing and rendering annotations for non-modifiable electronic content |
US6714214B1 (en) | 1999-12-07 | 2004-03-30 | Microsoft Corporation | System method and user interface for active reading of electronic content |
US6446246B1 (en) | 1999-12-28 | 2002-09-03 | Intel Corporation | Method and apparatus for detail routing using obstacle carving around terminals |
US7007034B1 (en) * | 2000-01-21 | 2006-02-28 | International Business Machines Corporation | File structure for storing content objects in a data repository |
US6611840B1 (en) * | 2000-01-21 | 2003-08-26 | International Business Machines Corporation | Method and system for removing content entity object in a hierarchically structured content object stored in a database |
US7340481B1 (en) * | 2000-01-21 | 2008-03-04 | International Business Machines Corp. | Method and system for adding user-provided content to a content object stored in a data repository |
US20040205645A1 (en) * | 2000-02-14 | 2004-10-14 | Goosewing, Inc. | Customized textbook systems and methods |
AU781901B2 (en) * | 2000-03-31 | 2005-06-23 | International Business Machines Corporation | Aggregation of content as a personalized document |
US20020054073A1 (en) | 2000-06-02 | 2002-05-09 | Yuen Henry C. | Electronic book with indexed text-to-audio switching capabilities |
KR20020002102A (en) | 2000-06-29 | 2002-01-09 | 임중연,이재훈 | Network education system mounted in class room and education method using said system |
KR100390969B1 (en) | 2000-07-11 | 2003-07-12 | 이구민 | ebook contents servise system and method thereof |
JP3553912B2 (en) | 2000-09-28 | 2004-08-11 | 株式会社リコー | Consumption information management system and service center device |
US7688306B2 (en) * | 2000-10-02 | 2010-03-30 | Apple Inc. | Methods and apparatuses for operating a portable device based on an accelerometer |
US20020091793A1 (en) | 2000-10-23 | 2002-07-11 | Isaac Sagie | Method and system for tourist guiding, including both navigation and narration, utilizing mobile computing and communication devices |
US20020073177A1 (en) * | 2000-10-25 | 2002-06-13 | Clark George Philip | Processing content for electronic distribution using a digital rights management system |
US20020082939A1 (en) | 2000-10-25 | 2002-06-27 | Clark George Phillip | Fulfilling a request for an electronic book |
US6704733B2 (en) | 2000-10-25 | 2004-03-09 | Lightning Source, Inc. | Distributing electronic books over a computer network |
EP2378733B1 (en) * | 2000-11-10 | 2013-03-13 | AOL Inc. | Digital content distribution and subscription system |
US6632094B1 (en) | 2000-11-10 | 2003-10-14 | Readingvillage.Com, Inc. | Technique for mentoring pre-readers and early readers |
US6590568B1 (en) | 2000-11-20 | 2003-07-08 | Nokia Corporation | Touch screen drag and drop input technique |
US20020087560A1 (en) | 2000-12-29 | 2002-07-04 | Greg Bardwell | On-line class and curriculum management |
US7139977B1 (en) | 2001-01-24 | 2006-11-21 | Oracle International Corporation | System and method for producing a virtual online book |
US20020099552A1 (en) | 2001-01-25 | 2002-07-25 | Darryl Rubin | Annotating electronic information with audio clips |
US20020120635A1 (en) | 2001-02-27 | 2002-08-29 | Joao Raymond Anthony | Apparatus and method for providing an electronic book |
US7107533B2 (en) | 2001-04-09 | 2006-09-12 | International Business Machines Corporation | Electronic book with multimode I/O |
US20020182189A1 (en) | 2001-04-19 | 2002-12-05 | Chang Chia Ning (Sophia) | Compositions and methods for the repair and construction of bone and other tissue |
US7020663B2 (en) | 2001-05-30 | 2006-03-28 | George M. Hay | System and method for the delivery of electronic books |
KR20030000244A (en) | 2001-06-22 | 2003-01-06 | 신영선 | E-book |
US20030018543A1 (en) * | 2001-06-25 | 2003-01-23 | Alger Jeffrey H. | Client portal |
KR20030003818A (en) | 2001-07-04 | 2003-01-14 | (주) 고미드 | System and method for bookmarking specific position inside of web pages |
GB0117543D0 (en) * | 2001-07-18 | 2001-09-12 | Hewlett Packard Co | Document viewing device |
US7039234B2 (en) * | 2001-07-19 | 2006-05-02 | Microsoft Corporation | Electronic ink as a software object |
US7103848B2 (en) * | 2001-09-13 | 2006-09-05 | International Business Machines Corporation | Handheld electronic book reader with annotation and usage tracking capabilities |
US20030144961A1 (en) * | 2002-01-25 | 2003-07-31 | Tharaken Ajit C. | System and method for the creation and distribution of customized electronic books |
US20040205568A1 (en) | 2002-03-01 | 2004-10-14 | Breuel Thomas M. | Method and system for document image layout deconstruction and redisplay system |
US7236966B1 (en) | 2002-03-08 | 2007-06-26 | Cisco Technology | Method and system for providing a user-customized electronic book |
CN100504743C (en) * | 2002-03-19 | 2009-06-24 | 电子图书系统有限公司 | Method and system for analyzing reading pattern of reader for electronic documents |
JP2003337527A (en) * | 2002-05-21 | 2003-11-28 | Hitachi Ltd | Learning support system and learning support method |
US8201085B2 (en) | 2007-06-21 | 2012-06-12 | Thomson Reuters Global Resources | Method and system for validating references |
US7568151B2 (en) | 2002-06-27 | 2009-07-28 | Microsoft Corporation | Notification of activity around documents |
CN1295665C (en) | 2002-07-04 | 2007-01-17 | 诺基亚有限公司 | Method and apparatus for reproducing multi-track data according to predetermined conditions |
US20040162846A1 (en) * | 2003-01-14 | 2004-08-19 | Tohru Nakahara | Content use management system |
JP2004251125A (en) | 2003-02-18 | 2004-09-09 | Rikogaku Shinkokai | Exhaust heat recovery system |
JP2004258932A (en) * | 2003-02-26 | 2004-09-16 | Toohan:Kk | Inspection method for returned book |
US8064753B2 (en) | 2003-03-05 | 2011-11-22 | Freeman Alan D | Multi-feature media article and method for manufacture of same |
US7793233B1 (en) * | 2003-03-12 | 2010-09-07 | Microsoft Corporation | System and method for customizing note flags |
US20050289461A1 (en) * | 2003-05-23 | 2005-12-29 | Manoel Amado | System and method for digital content processing and distribution |
US20040260714A1 (en) * | 2003-06-20 | 2004-12-23 | Avijit Chatterjee | Universal annotation management system |
US20040267527A1 (en) | 2003-06-25 | 2004-12-30 | International Business Machines Corporation | Voice-to-text reduction for real time IM/chat/SMS |
US7210107B2 (en) | 2003-06-27 | 2007-04-24 | Microsoft Corporation | Menus whose geometry is bounded by two radii and an arc |
US7353457B2 (en) | 2003-06-30 | 2008-04-01 | Sap Ag | Graphical access to data objects |
US20050022113A1 (en) | 2003-07-24 | 2005-01-27 | Hanlon Robert Eliot | System and method to efficiently switch between paper, electronic and audio versions of documents |
CN100555264C (en) | 2003-10-21 | 2009-10-28 | 国际商业机器公司 | The annotate method of electronic document, device and system |
US8641424B2 (en) * | 2003-10-23 | 2014-02-04 | Monvini Limited | Method of publication and distribution of instructional materials |
US20050091578A1 (en) | 2003-10-24 | 2005-04-28 | Microsoft Corporation | Electronic sticky notes |
JP2005189906A (en) | 2003-12-24 | 2005-07-14 | Fuji Photo Film Co Ltd | Electronic book |
US20050154760A1 (en) | 2004-01-12 | 2005-07-14 | International Business Machines Corporation | Capturing portions of an electronic document |
US7812860B2 (en) | 2004-04-01 | 2010-10-12 | Exbiblio B.V. | Handheld device for capturing text from both a document printed on paper and a document displayed on a dynamic display device |
US20050193330A1 (en) | 2004-02-27 | 2005-09-01 | Exit 33 Education, Inc. | Methods and systems for eBook storage and presentation |
US20050210047A1 (en) | 2004-03-18 | 2005-09-22 | Zenodata Corporation | Posting data to a database from non-standard documents using document mapping to standard document types |
KR20050108231A (en) | 2004-05-12 | 2005-11-16 | 주식회사 인포스트림 | Ebook system and business method is operated by streaming data service in internet web browser |
US8504369B1 (en) | 2004-06-02 | 2013-08-06 | Nuance Communications, Inc. | Multi-cursor transcription editing |
KR20060001392A (en) * | 2004-06-30 | 2006-01-06 | 주식회사 한국인식기술 | Document image storage method of content retrieval base to use ocr |
US20070118794A1 (en) | 2004-09-08 | 2007-05-24 | Josef Hollander | Shared annotation system and method |
US7454717B2 (en) | 2004-10-20 | 2008-11-18 | Microsoft Corporation | Delimiters for selection-action pen gesture phrases |
US9275052B2 (en) * | 2005-01-19 | 2016-03-01 | Amazon Technologies, Inc. | Providing annotations of a digital work |
US8131647B2 (en) | 2005-01-19 | 2012-03-06 | Amazon Technologies, Inc. | Method and system for providing annotations of a digital work |
US20060194181A1 (en) | 2005-02-28 | 2006-08-31 | Outland Research, Llc | Method and apparatus for electronic books with enhanced educational features |
US20070136657A1 (en) | 2005-03-25 | 2007-06-14 | Daniel Blumenthal | Process for Automatic Data Annotation, Selection, and Utilization. |
US7546524B1 (en) | 2005-03-30 | 2009-06-09 | Amazon Technologies, Inc. | Electronic input device, system, and method using human-comprehensible content to automatically correlate an annotation of a paper document with a digital version of the document |
US8751916B2 (en) * | 2005-07-29 | 2014-06-10 | Gary T. Bender | Apparatuses, methods and systems for a composite multimedia content generator |
US7925973B2 (en) * | 2005-08-12 | 2011-04-12 | Brightcove, Inc. | Distribution of content |
US7779347B2 (en) | 2005-09-02 | 2010-08-17 | Fourteen40, Inc. | Systems and methods for collaboratively annotating electronic documents |
US20070061755A1 (en) * | 2005-09-09 | 2007-03-15 | Microsoft Corporation | Reading mode for electronic documents |
WO2007031411A2 (en) * | 2005-09-14 | 2007-03-22 | Irex Technologies B.V. | Electronic reading device |
US7650557B2 (en) | 2005-09-19 | 2010-01-19 | Network Appliance, Inc. | Memory scrubbing of expanded memory |
US7783993B2 (en) | 2005-09-23 | 2010-08-24 | Palm, Inc. | Content-based navigation and launching on mobile devices |
EP1949218A4 (en) | 2005-10-04 | 2009-12-02 | Strands Inc | Methods and apparatus for visualizing a music library |
USD554661S1 (en) | 2005-11-14 | 2007-11-06 | Microsoft Corporation | Image for a portion of a display screen |
US20070150802A1 (en) * | 2005-12-12 | 2007-06-28 | Canon Information Systems Research Australia Pty. Ltd. | Document annotation and interface |
US8726144B2 (en) * | 2005-12-23 | 2014-05-13 | Xerox Corporation | Interactive learning-based document annotation |
US7644372B2 (en) | 2006-01-27 | 2010-01-05 | Microsoft Corporation | Area frequency radial menus |
US8196055B2 (en) * | 2006-01-30 | 2012-06-05 | Microsoft Corporation | Controlling application windows in an operating system |
US7667686B2 (en) * | 2006-02-01 | 2010-02-23 | Memsic, Inc. | Air-writing and motion sensing input for portable devices |
US8312372B2 (en) * | 2006-02-10 | 2012-11-13 | Microsoft Corporation | Method for confirming touch input |
KR20080096761A (en) | 2006-02-28 | 2008-11-03 | 샌디스크 아이엘 엘티디 | Bookmarked synchronization of files |
US20090070034A1 (en) * | 2006-03-17 | 2009-03-12 | Christopher L Oesterling | Method for recording an annotation and making it available for later playback |
KR100695209B1 (en) | 2006-03-22 | 2007-03-14 | 에스케이 텔레콤주식회사 | Method and mobile communication terminal for storing content of electronic book |
US8352449B1 (en) | 2006-03-29 | 2013-01-08 | Amazon Technologies, Inc. | Reader device content indexing |
US7748634B1 (en) | 2006-03-29 | 2010-07-06 | Amazon Technologies, Inc. | Handheld electronic book reader device having dual displays |
US7586499B1 (en) | 2006-05-08 | 2009-09-08 | Adobe Systems Incorporated | Method and apparatus for adjusting the color of a digital image |
CN102081645B (en) * | 2006-05-10 | 2014-11-26 | 谷歌公司 | WEB notebook tools |
US20070300260A1 (en) | 2006-06-22 | 2007-12-27 | Nokia Corporation | Method, system, device and computer program product for generating and distributing media diary podcasts |
USD552121S1 (en) | 2006-07-20 | 2007-10-02 | Xerion Avionix, Llc | Computer-generated icon for a portion of an engine instrument display |
US20080027726A1 (en) | 2006-07-28 | 2008-01-31 | Eric Louis Hansen | Text to audio mapping, and animation of the text |
US20120166316A1 (en) * | 2006-08-11 | 2012-06-28 | Richard Angelo Messina | Collective community Method of Integrated Internet-Based tools for Independent Contractors, their Collaborators, and Customers |
KR100838485B1 (en) | 2006-08-30 | 2008-06-16 | 주식회사 케이티프리텔 | Method and apparatus for e-book service with realtime searching |
TWI346494B (en) | 2006-09-08 | 2011-08-01 | High Tech Comp Corp | Page movement controller and operating method thereof |
US9356935B2 (en) | 2006-09-12 | 2016-05-31 | Adobe Systems Incorporated | Selective access to portions of digital content |
US20100278453A1 (en) | 2006-09-15 | 2010-11-04 | King Martin T | Capture and display of annotations in paper and electronic documents |
US7873588B2 (en) | 2007-02-05 | 2011-01-18 | Emantras, Inc. | Mobile e-learning method and apparatus based on media adapted learning objects |
US8462109B2 (en) * | 2007-01-05 | 2013-06-11 | Invensense, Inc. | Controlling and accessing content using motion processing on mobile devices |
US7877854B2 (en) | 2007-02-08 | 2011-02-01 | St. Jude Medical, Atrial Fibrillation Division, Inc. | Method of manufacturing an ultrasound transducer |
US8352876B2 (en) | 2007-02-21 | 2013-01-08 | University Of Central Florida Research Foundation, Inc. | Interactive electronic book operating systems and methods |
US20080222257A1 (en) * | 2007-03-10 | 2008-09-11 | Shamik Mukherjee | Systems and methods for sending customized emails to recipient groups |
US20080225757A1 (en) | 2007-03-13 | 2008-09-18 | Byron Johnson | Web-based interactive learning system and method |
US8144990B2 (en) | 2007-03-22 | 2012-03-27 | Sony Ericsson Mobile Communications Ab | Translation and display of text in picture |
US20080243991A1 (en) | 2007-03-29 | 2008-10-02 | Ryan Thomas A | Content Purchase and Transfer Management for Reader Device |
US8700005B1 (en) | 2007-05-21 | 2014-04-15 | Amazon Technologies, Inc. | Notification of a user device to perform an action |
US8108793B2 (en) * | 2007-05-21 | 2012-01-31 | Amazon Technologies, Inc. | Zone-associated objects |
US20080317346A1 (en) | 2007-06-21 | 2008-12-25 | Microsoft Corporation | Character and Object Recognition with a Mobile Photographic Device |
US20090009532A1 (en) | 2007-07-02 | 2009-01-08 | Sharp Laboratories Of America, Inc. | Video content identification using ocr |
US8190590B2 (en) | 2007-08-15 | 2012-05-29 | Martin Edward Lawlor | System and method for the creation and access of dynamic course content |
US20090047647A1 (en) | 2007-08-15 | 2009-02-19 | Welch Meghan M | System and method for book presentation |
JP4618517B2 (en) | 2007-08-22 | 2011-01-26 | ソニー株式会社 | E-book, progress sensation notification method, progress sensation notification program, and progress sensation notification program storage medium |
KR101391599B1 (en) | 2007-09-05 | 2014-05-09 | 삼성전자주식회사 | Method for generating an information of relation between characters in content and appratus therefor |
ITBO20070656A1 (en) | 2007-09-26 | 2009-03-27 | Ferrari Spa | INFOTELEMATIC SYSTEM FOR A ROAD VEHICLE |
JP2011501271A (en) | 2007-10-09 | 2011-01-06 | スキッフ・エルエルシー | Content distribution system, method and apparatus |
US20090187842A1 (en) | 2008-01-22 | 2009-07-23 | 3Dlabs Inc., Ltd. | Drag and Drop User Interface for Portable Electronic Devices with Touch Sensitive Screens |
US8593408B2 (en) * | 2008-03-20 | 2013-11-26 | Lg Electronics Inc. | Electronic document reproduction apparatus and reproducing method thereof |
KR101012379B1 (en) * | 2008-03-25 | 2011-02-09 | 엘지전자 주식회사 | Terminal and method of displaying information therein |
US20090254805A1 (en) | 2008-04-02 | 2009-10-08 | Milton Jr Harold W | Method of drafting a claim set |
US20090254802A1 (en) * | 2008-04-04 | 2009-10-08 | Print Asset Management, Inc. | Publishing system and method that enables users to collaboratively create, professional appearing digital publications for "On-Demand" distribution in a variety of media that includes digital printing |
US20090267909A1 (en) * | 2008-04-27 | 2009-10-29 | Htc Corporation | Electronic device and user interface display method thereof |
KR20090117965A (en) * | 2008-05-12 | 2009-11-17 | 제노젠(주) | Study system and method for linking multimedia content with book |
US8346662B2 (en) * | 2008-05-16 | 2013-01-01 | Visa U.S.A. Inc. | Desktop alert with interactive bona fide dispute initiation through chat session facilitated by desktop application |
US20110184960A1 (en) | 2009-11-24 | 2011-07-28 | Scrible, Inc. | Methods and systems for content recommendation based on electronic document annotation |
US8126878B2 (en) | 2008-06-24 | 2012-02-28 | Krasnow Arthur Z | Academic study tool utilizing e-book technology |
US8245156B2 (en) | 2008-06-28 | 2012-08-14 | Apple Inc. | Radial menu selection |
CN102124523B (en) | 2008-07-04 | 2014-08-27 | 布克查克控股有限公司 | Method and system for making and playing soundtracks |
US20100004944A1 (en) * | 2008-07-07 | 2010-01-07 | Murugan Palaniappan | Book Creation In An Online Collaborative Environment |
US20100023319A1 (en) * | 2008-07-28 | 2010-01-28 | International Business Machines Corporation | Model-driven feedback for annotation |
KR101466356B1 (en) | 2008-08-12 | 2014-11-27 | 삼성전자주식회사 | Apparatus and method for sharing a bookmark in a home network |
US20100050064A1 (en) | 2008-08-22 | 2010-02-25 | At & T Labs, Inc. | System and method for selecting a multimedia presentation to accompany text |
US9055017B2 (en) | 2008-08-28 | 2015-06-09 | Amazon Technologies, Inc. | Selective communication of messages |
WO2010028071A1 (en) * | 2008-09-03 | 2010-03-11 | Owjo Ltd. | Systems and methods for a comprehensive integrated and universal content selling and buying platform |
US20100088746A1 (en) | 2008-10-08 | 2010-04-08 | Sony Corporation | Secure ebook techniques |
US20100114714A1 (en) * | 2008-10-31 | 2010-05-06 | James Gerard Vitek | Method and system for sharing revenue of an application platform |
US8433431B1 (en) | 2008-12-02 | 2013-04-30 | Soundhound, Inc. | Displaying text to end users in coordination with audio playback |
US20100146459A1 (en) * | 2008-12-08 | 2010-06-10 | Mikko Repka | Apparatus and Method for Influencing Application Window Functionality Based on Characteristics of Touch Initiated User Interface Manipulations |
US20100324895A1 (en) | 2009-01-15 | 2010-12-23 | K-Nfb Reading Technology, Inc. | Synchronization for document narration |
US8433998B2 (en) | 2009-01-16 | 2013-04-30 | International Business Machines Corporation | Tool and method for annotating an event map, and collaborating using the annotated event map |
US9863656B2 (en) | 2009-02-19 | 2018-01-09 | Siemens Industry, Inc. | Room sensor using charged particle airflow |
CN101847055A (en) * | 2009-03-24 | 2010-09-29 | 鸿富锦精密工业(深圳)有限公司 | Input method based on touch screen |
US9159075B2 (en) * | 2009-04-24 | 2015-10-13 | Reza Jalili | System and method for distribution and redistribution of electronic content |
US9436380B2 (en) | 2009-05-19 | 2016-09-06 | International Business Machines Corporation | Radial menus with variable selectable item areas |
KR101072176B1 (en) * | 2009-05-27 | 2011-10-10 | 포항공과대학교 산학협력단 | User profile automatic creation apparatus through voice dialog meaning process, and contents recommendation apparatus using the same |
US9141768B2 (en) | 2009-06-10 | 2015-09-22 | Lg Electronics Inc. | Terminal and control method thereof |
KR20110001105A (en) | 2009-06-29 | 2011-01-06 | 엘지전자 주식회사 | Mobile terminal and control method thereof |
US20100324709A1 (en) | 2009-06-22 | 2010-12-23 | Tree Of Life Publishing | E-book reader with voice annotation |
US20110010210A1 (en) * | 2009-07-10 | 2011-01-13 | Alcorn Robert L | Educational asset distribution system and method |
WO2011014569A1 (en) | 2009-07-28 | 2011-02-03 | Etxtbk, Llc | Systems and methods for distributing electronic content |
WO2011017465A2 (en) * | 2009-08-04 | 2011-02-10 | Iverse Media, Llc | Method, system, and storage medium for a comic book reader platform |
US8375329B2 (en) | 2009-09-01 | 2013-02-12 | Maxon Computer Gmbh | Method of providing a graphical user interface using a concentric menu |
US9262063B2 (en) | 2009-09-02 | 2016-02-16 | Amazon Technologies, Inc. | Touch-screen user interface |
CA2714523A1 (en) * | 2009-09-02 | 2011-03-02 | Sophia Learning, Llc | Teaching and learning system |
IN2012DN01870A (en) * | 2009-09-02 | 2015-08-21 | Amazon Tech Inc | |
US8451238B2 (en) | 2009-09-02 | 2013-05-28 | Amazon Technologies, Inc. | Touch-screen user interface |
US20120231441A1 (en) | 2009-09-03 | 2012-09-13 | Coaxis Services Inc. | System and method for virtual content collaboration |
US8578295B2 (en) | 2009-09-16 | 2013-11-05 | International Business Machines Corporation | Placement of items in cascading radial menus |
US9330069B2 (en) | 2009-10-14 | 2016-05-03 | Chi Fai Ho | Layout of E-book content in screens of varying sizes |
KR20110046822A (en) | 2009-10-29 | 2011-05-06 | 에스케이 텔레콤주식회사 | System and method for information sharing based on electronic book |
KR101702659B1 (en) | 2009-10-30 | 2017-02-06 | 삼성전자주식회사 | Appratus and method for syncronizing moving picture contents and e-book contents and system thereof |
WO2011054088A1 (en) * | 2009-11-03 | 2011-05-12 | Les Contes Perpetuels Inc . | Method and system for enabling a user to create a document in a collaborative environment |
KR20110049981A (en) | 2009-11-06 | 2011-05-13 | 김명주 | Electronic book terminal, system for providing electronic book contents and method thereof |
US8527859B2 (en) | 2009-11-10 | 2013-09-03 | Dulcetta, Inc. | Dynamic audio playback of soundtracks for electronic visual works |
US20110153330A1 (en) | 2009-11-27 | 2011-06-23 | i-SCROLL | System and method for rendering text synchronized audio |
US20110163944A1 (en) | 2010-01-05 | 2011-07-07 | Apple Inc. | Intuitive, gesture-based communications with physics metaphors |
US9811507B2 (en) | 2010-01-11 | 2017-11-07 | Apple Inc. | Presenting electronic publications on a graphical user interface of an electronic device |
US20110177481A1 (en) | 2010-01-15 | 2011-07-21 | Haff Olle | Electronic device with media function and method |
USD624557S1 (en) | 2010-01-19 | 2010-09-28 | Microsoft Corporation | Animated image for a portion of a display screen |
US8103554B2 (en) | 2010-02-24 | 2012-01-24 | GM Global Technology Operations LLC | Method and system for playing an electronic book using an electronics system in a vehicle |
US8463456B2 (en) | 2010-03-18 | 2013-06-11 | International Business Machines Corporation | Minimizing aggregate cooling and leakage power |
US20110227949A1 (en) | 2010-03-19 | 2011-09-22 | I/O Interconnect, Ltd. | Read apparatus and operation method for e-book |
US9323756B2 (en) | 2010-03-22 | 2016-04-26 | Lenovo (Singapore) Pte. Ltd. | Audio book and e-book synchronization |
US8578366B2 (en) | 2010-04-13 | 2013-11-05 | Avaya Inc. | Application store |
US20110261030A1 (en) | 2010-04-26 | 2011-10-27 | Bullock Roddy Mckee | Enhanced Ebook and Enhanced Ebook Reader |
US9501582B2 (en) | 2010-05-10 | 2016-11-22 | Amazon Technologies, Inc. | Providing text content embedded with protected multimedia content |
CN101833421A (en) * | 2010-05-12 | 2010-09-15 | 中兴通讯股份有限公司 | Electronic device and method for acquiring user operation |
US20120030022A1 (en) | 2010-05-24 | 2012-02-02 | For-Side.Com Co., Ltd. | Electronic book system and content server |
US8887042B2 (en) | 2010-06-01 | 2014-11-11 | Young-Joo Song | Electronic multimedia publishing systems and methods |
US8434001B2 (en) | 2010-06-03 | 2013-04-30 | Rhonda Enterprises, Llc | Systems and methods for presenting a content summary of a media item to a user based on a position within the media item |
US20110314427A1 (en) * | 2010-06-18 | 2011-12-22 | Samsung Electronics Co., Ltd. | Personalization using custom gestures |
US8405606B2 (en) | 2010-07-02 | 2013-03-26 | Alpha & Omega Inc. | Remote control systems and methods for activating buttons of digital electronic display devices |
US9786159B2 (en) | 2010-07-23 | 2017-10-10 | Tivo Solutions Inc. | Multi-function remote control device |
WO2012018356A1 (en) * | 2010-08-04 | 2012-02-09 | Copia Interactive, Llc | System for and method of determining relative value of a product |
US8452600B2 (en) | 2010-08-18 | 2013-05-28 | Apple Inc. | Assisted reader |
US8700987B2 (en) | 2010-09-09 | 2014-04-15 | Sony Corporation | Annotating E-books / E-magazines with application results and function calls |
USD687056S1 (en) | 2011-08-16 | 2013-07-30 | Nest Labs, Inc. | Display screen with an animated graphical user interface |
US20120147055A1 (en) | 2010-09-16 | 2012-06-14 | Matt Pallakoff | System and method for organizing and presenting content on an electronic device |
US20120110429A1 (en) | 2010-09-23 | 2012-05-03 | Webdoc Sa | Platform enabling web-based interpersonal communication within shared digital media |
US20120077175A1 (en) | 2010-09-28 | 2012-03-29 | Sympoz, Inc. | Time-indexed discussion enabled video education |
KR20120038668A (en) | 2010-10-14 | 2012-04-24 | 삼성전자주식회사 | Apparatus and method for updating e-book content |
US9098407B2 (en) | 2010-10-25 | 2015-08-04 | Inkling Systems, Inc. | Methods for automatically retrieving electronic media content items from a server based upon a reading list and facilitating presentation of media objects of the electronic media content items in sequences not constrained by an original order thereof |
US20120113019A1 (en) | 2010-11-10 | 2012-05-10 | Anderson Michelle B | Portable e-reader and method of use |
US8478662B1 (en) | 2010-11-24 | 2013-07-02 | Amazon Technologies, Inc. | Customized electronic books with supplemental content |
US20120151397A1 (en) | 2010-12-08 | 2012-06-14 | Tavendo Gmbh | Access to an electronic object collection via a plurality of views |
KR101051149B1 (en) | 2010-12-08 | 2011-07-22 | 주식회사 라이프사이언스테크놀로지 | Method of making digtal contents based on social network, method of sharing digital contents, and system thereof |
KR20120087248A (en) | 2010-12-15 | 2012-08-07 | 고스트리트(주) | Social Networking System And Method Using E-Book |
USD684585S1 (en) | 2010-12-20 | 2013-06-18 | Adobe Systems Incorporated | Portion of a display with a graphical user interface |
US20120236201A1 (en) | 2011-01-27 | 2012-09-20 | In The Telling, Inc. | Digital asset management, authoring, and presentation techniques |
US20120204092A1 (en) | 2011-02-07 | 2012-08-09 | Hooray LLC | E-reader generating ancillary content from markup tags |
US8479662B2 (en) | 2011-02-10 | 2013-07-09 | Siemens Aktiengesellschaft | Rail vehicle having a vehicle door seal |
US20120210269A1 (en) | 2011-02-16 | 2012-08-16 | Sony Corporation | Bookmark functionality for reader devices and applications |
US9645986B2 (en) | 2011-02-24 | 2017-05-09 | Google Inc. | Method, medium, and system for creating an electronic book with an umbrella policy |
US20120233552A1 (en) | 2011-03-07 | 2012-09-13 | Sony Corporation | Personalizing the user experience |
US8918711B2 (en) | 2011-03-10 | 2014-12-23 | Michael J. Reed | System and method for visually presenting electronic media |
US8543905B2 (en) | 2011-03-14 | 2013-09-24 | Apple Inc. | Device, method, and graphical user interface for automatically generating supplemental content |
US9706247B2 (en) | 2011-03-23 | 2017-07-11 | Audible, Inc. | Synchronized digital content samples |
US9697265B2 (en) | 2011-03-23 | 2017-07-04 | Audible, Inc. | Synchronizing digital content |
WO2012142055A1 (en) | 2011-04-11 | 2012-10-18 | Zinio, Llc | Reader with enhanced user functionality |
USD691171S1 (en) | 2011-05-03 | 2013-10-08 | Htc Corporation | Display screen with graphical user interface |
US20120284348A1 (en) | 2011-05-05 | 2012-11-08 | Ariel Inventions Llc | System and method for social interactivity while using an e-book reader |
USD699251S1 (en) | 2011-05-12 | 2014-02-11 | Business Objects Software Ltd. | Electronic display with graphical user interface |
USD690728S1 (en) | 2011-05-24 | 2013-10-01 | Htc Corporation | Portion of a display screen with graphical user interface |
US10672399B2 (en) | 2011-06-03 | 2020-06-02 | Apple Inc. | Switching between text data and audio data based on a mapping |
CA2846620A1 (en) | 2011-08-26 | 2013-03-07 | Scholastic Inc. | Interactive electronic reader with parental control |
US20130080968A1 (en) | 2011-09-27 | 2013-03-28 | Amazon Technologies Inc. | User interface with media content prediction |
US9031493B2 (en) | 2011-11-18 | 2015-05-12 | Google Inc. | Custom narration of electronic books |
USD700197S1 (en) | 2011-11-28 | 2014-02-25 | Lonestar Inventions, L.P. | Smart phone with a GPS astronomical clock |
US20140006914A1 (en) * | 2011-12-10 | 2014-01-02 | University Of Notre Dame Du Lac | Systems and methods for collaborative and multimedia-enriched reading, teaching and learning |
US9628296B2 (en) * | 2011-12-28 | 2017-04-18 | Evernote Corporation | Fast mobile mail with context indicators |
USD681662S1 (en) | 2012-01-05 | 2013-05-07 | Flextronics Ap, Llc | Display panel with graphical user interface for analyzing and presenting supply, fabrication, and logistics data |
USD700207S1 (en) | 2012-02-07 | 2014-02-25 | Microsoft Corporation | Display screen with animated graphical user interface |
USD699737S1 (en) | 2012-02-07 | 2014-02-18 | Microsoft Corporation | Display screen with a graphical user interface |
USD699750S1 (en) | 2012-02-09 | 2014-02-18 | Microsoft Corporation | Display screen with animated graphical user interface |
USD699745S1 (en) | 2012-02-09 | 2014-02-18 | Microsoft Corporation | Display screen with animated graphical user interface |
USD699749S1 (en) | 2012-02-09 | 2014-02-18 | Microsoft Corporation | Display screen with animated graphical user interface |
USD737278S1 (en) | 2012-06-28 | 2015-08-25 | Samsung Electronics Co., Ltd. | Portable electronic device with animated GUI |
USD716832S1 (en) | 2012-07-19 | 2014-11-04 | Desire2Learn Incorporated | Display screen with graphical user interface |
USD718325S1 (en) | 2012-07-19 | 2014-11-25 | Desire 2Learn Incorporated | Display screen with graphical user interface |
USD716831S1 (en) | 2012-07-19 | 2014-11-04 | Desire2Learn Incorporated | Display screen with graphical user interface |
USD716327S1 (en) | 2012-07-19 | 2014-10-28 | Desire2Learn Incorporated | Display screen with graphical user interface |
USD733167S1 (en) | 2012-07-20 | 2015-06-30 | D2L Corporation | Display screen with graphical user interface |
USD716328S1 (en) | 2012-07-20 | 2014-10-28 | Desire2Learn Incorporated | Display screen with graphical user interface |
USD720362S1 (en) | 2012-07-20 | 2014-12-30 | Desire2Learn Incorporated | Display screen with graphical user interface |
US20140033030A1 (en) * | 2012-07-24 | 2014-01-30 | Anthony R. Pfister | Indexing and providing electronic publications in a networked computing environment |
USD713415S1 (en) | 2012-09-07 | 2014-09-16 | Lg Electronics Inc. | Display of mobile phone with transitional graphical user interface |
USD716340S1 (en) | 2012-09-28 | 2014-10-28 | Google Inc. | Display screen or portion thereof for a control unit with animated graphical user interface |
USD716315S1 (en) | 2013-02-26 | 2014-10-28 | Quixey, Inc. | Display screen with graphical user interface |
USD716320S1 (en) | 2013-04-02 | 2014-10-28 | Quixey, Inc. | Display screen with animated graphical user interface |
USD716318S1 (en) | 2013-04-02 | 2014-10-28 | Quixey, Inc. | Display screen with animated graphical user interface |
USD716319S1 (en) | 2013-04-02 | 2014-10-28 | Quixey, Inc. | Display screen with animated graphical user interface |
CN105408883B (en) * | 2013-04-30 | 2018-09-11 | 安提特软件有限责任公司 | Database table column is explained |
USD761801S1 (en) | 2013-06-06 | 2016-07-19 | Life Technologies Corporation | Mobile application interface for display or portion thereof |
USD744535S1 (en) | 2013-10-25 | 2015-12-01 | Microsoft Corporation | Display screen with animated graphical user interface |
USD731801S1 (en) | 2014-06-12 | 2015-06-16 | Steelcase Inc. | Chair |
-
2011
- 2011-04-18 US US13/089,154 patent/US9645986B2/en active Active
- 2011-06-28 US US13/171,130 patent/US20120221938A1/en not_active Abandoned
- 2011-07-14 US US13/182,773 patent/US20120221968A1/en not_active Abandoned
- 2011-07-14 US US13/182,787 patent/US8520025B2/en active Active
- 2011-07-14 US US13/182,809 patent/US8543941B2/en active Active
- 2011-07-14 US US13/182,797 patent/US9063641B2/en active Active
- 2011-08-24 US US13/216,773 patent/US20120221441A1/en not_active Abandoned
-
2012
- 2012-02-02 KR KR1020157015057A patent/KR20150070431A/en not_active Application Discontinuation
- 2012-02-02 EP EP12749565.3A patent/EP2678760A4/en not_active Ceased
- 2012-02-02 CN CN201280019905.2A patent/CN103493087A/en active Pending
- 2012-02-02 EP EP12749201.5A patent/EP2678766A4/en not_active Withdrawn
- 2012-02-02 KR KR1020137024788A patent/KR101566461B1/en active IP Right Grant
- 2012-02-02 EP EP12749553.9A patent/EP2678821A4/en not_active Ceased
- 2012-02-02 KR KR1020167017224A patent/KR101684586B1/en active IP Right Grant
- 2012-02-02 KR KR1020137024787A patent/KR20140047594A/en not_active Application Discontinuation
- 2012-02-02 WO PCT/US2012/023584 patent/WO2012115756A2/en active Application Filing
- 2012-02-02 WO PCT/US2012/023599 patent/WO2012115758A2/en active Application Filing
- 2012-02-02 KR KR1020137024924A patent/KR20140022972A/en active Search and Examination
- 2012-02-02 WO PCT/US2012/023628 patent/WO2012115759A2/en active Application Filing
- 2012-02-02 CN CN201280019886.3A patent/CN103493117A/en active Pending
- 2012-02-02 CN CN201280019910.3A patent/CN103492997B/en active Active
- 2012-02-16 CN CN201280019888.2A patent/CN103492996A/en active Pending
- 2012-02-16 WO PCT/US2012/025467 patent/WO2012115856A2/en active Application Filing
- 2012-02-16 KR KR1020137024923A patent/KR20140037824A/en not_active Application Discontinuation
- 2012-02-16 WO PCT/US2012/025432 patent/WO2013105999A2/en active Application Filing
- 2012-02-16 EP EP12749489.6A patent/EP2678818A4/en not_active Ceased
- 2012-02-16 CN CN201710229866.3A patent/CN107272999A/en active Pending
- 2012-02-16 WO PCT/US2012/025443 patent/WO2012115853A2/en active Application Filing
- 2012-02-16 WO PCT/US2012/025339 patent/WO2013101263A2/en active Application Filing
- 2012-02-16 EP EP12861843.6A patent/EP2678768A4/en not_active Ceased
2013
- 2013-05-23 US US13/901,110 patent/US20130262973A1/en not_active Abandoned
- 2013-07-19 US US13/946,937 patent/US10067922B2/en active Active
- 2013-07-23 US US13/949,049 patent/US9501461B2/en active Active
- 2013-10-01 US US14/043,015 patent/US20140033128A1/en not_active Abandoned
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6493006B1 (en) * | 1996-05-10 | 2002-12-10 | Apple Computer, Inc. | Graphical user interface having contextual menus |
US6664991B1 (en) * | 2000-01-06 | 2003-12-16 | Microsoft Corporation | Method and apparatus for providing context menus on a pen-based device |
US7733366B2 (en) * | 2002-07-01 | 2010-06-08 | Microsoft Corporation | Computer network-based, interactive, multimedia learning system and process |
US7058902B2 (en) * | 2002-07-30 | 2006-06-06 | Microsoft Corporation | Enhanced on-object context menus |
US20110167350A1 (en) * | 2010-01-06 | 2011-07-07 | Apple Inc. | Assist Features For Content Display Device |
US20120054671A1 (en) * | 2010-08-30 | 2012-03-01 | Vmware, Inc. | Multi-touch interface gestures for keyboard and/or mouse inputs |
US20120096383A1 (en) * | 2010-10-15 | 2012-04-19 | Sony Network Entertainment Inc. | Loader animation |
US20120096386A1 (en) * | 2010-10-19 | 2012-04-19 | Laurent Baumann | User interface for application transfers |
Non-Patent Citations (1)
Title |
---|
Hoellwarth United States Patent Publication 2011/0167350 * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8958850B2 (en) * | 2010-10-08 | 2015-02-17 | Lg Electronics Inc. | Mobile terminal and control method thereof |
US20120088554A1 (en) * | 2010-10-08 | 2012-04-12 | Hojoon Lee | Mobile terminal and control method thereof |
US20120284348A1 (en) * | 2011-05-05 | 2012-11-08 | Ariel Inventions Llc | System and method for social interactivity while using an e-book reader |
US9928566B2 (en) | 2012-01-20 | 2018-03-27 | Microsoft Technology Licensing, Llc | Input mode recognition |
US20140304648A1 (en) * | 2012-01-20 | 2014-10-09 | Microsoft Corporation | Displaying and interacting with touch contextual user interface |
US10430917B2 (en) | 2012-01-20 | 2019-10-01 | Microsoft Technology Licensing, Llc | Input mode recognition |
US9928562B2 (en) | 2012-01-20 | 2018-03-27 | Microsoft Technology Licensing, Llc | Touch mode and input type recognition |
US20150121212A1 (en) * | 2013-10-31 | 2015-04-30 | Apollo Group, Inc. | Method and apparatus for presenting and navigating bookmarks in a set of electronic reading material |
US9519623B2 (en) * | 2013-10-31 | 2016-12-13 | Apollo Education Group, Inc. | Method and apparatus for presenting and navigating bookmarks in a set of electronic reading material |
US10218652B2 (en) | 2014-08-08 | 2019-02-26 | Mastercard International Incorporated | Systems and methods for integrating a chat function into an e-reader application |
US10380226B1 (en) * | 2014-09-16 | 2019-08-13 | Amazon Technologies, Inc. | Digital content excerpt identification |
US10891320B1 (en) | 2014-09-16 | 2021-01-12 | Amazon Technologies, Inc. | Digital content excerpt identification |
US20180292975A1 (en) * | 2017-04-05 | 2018-10-11 | Open Text Sa Ulc | Systems and methods for animated computer generated display |
USD868834S1 (en) | 2017-04-05 | 2019-12-03 | Open Text Sa Ulc | Display screen or portion thereof with animated graphical user interface |
US11586338B2 (en) * | 2017-04-05 | 2023-02-21 | Open Text Sa Ulc | Systems and methods for animated computer generated display |
CN109582191A (en) * | 2017-09-28 | 2019-04-05 | 北京国双科技有限公司 | A kind of menu content display methods and device |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9501461B2 (en) | Systems and methods for manipulating user annotations in electronic books | |
US11074397B1 (en) | Adaptive annotations | |
JP6038927B2 (en) | Establishing content navigation direction based on directional user gestures | |
CN108629033B (en) | Manipulation and display of electronic text | |
KR101569644B1 (en) | Device, method, and graphical user interface for navigating through an electronic document | |
US9099010B2 (en) | Content authoring application | |
Liao | Papiercraft: a paper-based interface to support interaction with digital documents |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: GOOGLE INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PATTERSON, JAMES;MOODY, NATHAN;DOUGALL, SCOTT;REEL/FRAME:031319/0200 Effective date: 20110629 |
AS | Assignment |
Owner name: GOOGLE LLC, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044695/0115 Effective date: 20170929 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |