US20230054747A1 - Automatic Generation of Preferred Views for Personal Content Collections - Google Patents

Automatic Generation of Preferred Views for Personal Content Collections

Info

Publication number
US20230054747A1
Authority
US
United States
Prior art keywords
content items
user
content
context
subset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/981,264
Inventor
Mark Ayzenshtat
Clinton Burford
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bending Spoons SpA
Original Assignee
Evernote Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Evernote Corp filed Critical Evernote Corp
Priority to US17/981,264 priority Critical patent/US20230054747A1/en
Publication of US20230054747A1 publication Critical patent/US20230054747A1/en
Assigned to BENDING SPOONS S.P.A. reassignment BENDING SPOONS S.P.A. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: EVERNOTE CORPORATION

Classifications

    • G06F 3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0482 — Interaction with lists of selectable items, e.g. menus
    • G06N 20/00 — Machine learning
    • G06N 20/10 — Machine learning using kernel methods, e.g. support vector machines [SVM]

Definitions

  • This application is directed to the field of extracting, analyzing and presenting information, especially in conjunction with custom ordering of items in personal and shared content management systems.
  • Content collections supported by such software and online services may contain thousands and even hundreds of thousands of content items (notes, memos, documents, etc.) with widely varying sizes, content types and other parameters. These items are viewed and modified by users in different order, with different frequency and under different circumstances. Routines for accessing items in content collections may include direct scrolling, keyword and natural language search, accessing items by tags, categories, notebooks, browsing interlinked clusters of items with or without indexes and tables of content, and other methods.
  • providing a view of relevant items of a content collection includes identifying a current context based on temporal parameters, spatial parameters, navigational parameters, lexical parameters, organizational parameters, and/or events, evaluating each of the items of the content collection according to the current context to provide a value for each of the items, and displaying a subset of the items corresponding to highest determined values.
  • the temporal parameters may include a time of recent access of an item, frequency of access of an item, frequency of location related access of an item, and frequency of event related access of an item. Frequency of access of an item may be modeled according to the following formula:
  • ƒ^u(e) = Σ_{c_i ∈ C_e} 2^(−t(c_i)/t_m) / Σ_{c_j ∈ C} 2^(−t(c_j)/t_m)
  • where:
  • ƒ^u(·) is a feature value for frequency;
  • e is an accessed content item;
  • c_i (c_j), C, C_e are, respectively, past user actions, the set of all actions, and only the past actions where the user has accessed the item e;
  • t(c_i) (t(c_j)) is the age of each access event measured at the present moment;
  • t_m is a normalizing median coefficient.
  • the user feedback may be implicit and may include frequency of actual viewing by the user.
  • the user feedback may be explicit.
  • User feedback may be used to modify features used to evaluate items.
  • the subset of items may include only items having a value above a predetermined threshold; displaying the subset of items may include sorting the subset according to the values provided for each of the items, and items that are not part of the subset may be displayed following the items in the subset. Displaying the subset of items may include displaying the items in a pop-up screen that is superimposed over a different list containing the items.
  • Analyzing items may include splitting the items into a training set and a test set and a classifier may be built using automatic learning.
  • the items in the training set may be analyzed to develop a set of rules used for evaluation of the items.
  • the temporal parameters of the items in the training set may include a time of recent access of an item, frequency of access of an item, frequency of location related access of an item, and/or frequency of event related access of an item.
  • the items may be displayed on a mobile device.
  • the mobile device may include software that is pre-loaded with the device, installed from an app store, installed from a desktop computer, installed from media, or downloaded from a Web site.
  • the mobile device may use an operating system selected from the group consisting of: iOS, Android OS, Windows Phone OS, Blackberry OS and mobile versions of Linux OS. Items may be stored using the OneNote® note-taking software provided by the Microsoft Corporation of Redmond, Washington.
  • computer software provided in a non-transitory computer-readable medium, provides a view of relevant items of a content collection.
  • the software includes executable code that identifies a current context based on temporal parameters, spatial parameters, navigational parameters, lexical parameters, organizational parameters, and/or events, executable code that evaluates each of the items of the content collection according to the current context to provide a value for each of the items, and executable code that displays a subset of the items corresponding to highest determined values.
  • the temporal parameters may include a time of recent access of an item, frequency of access of an item, frequency of location related access of an item, and frequency of event related access of an item. Frequency of access of an item may be modeled according to the following formula:
  • ƒ^u(e) = Σ_{c_i ∈ C_e} 2^(−t(c_i)/t_m) / Σ_{c_j ∈ C} 2^(−t(c_j)/t_m)
  • where:
  • ƒ^u(·) is a feature value for frequency;
  • e is an accessed content item;
  • c_i (c_j), C, C_e are, respectively, past user actions, the set of all actions, and only the past actions where the user has accessed the item e;
  • t(c_i) (t(c_j)) is the age of each access event measured at the present moment;
  • t_m is a normalizing median coefficient.
  • Temporal patterns of accessing items may be numerically assessed based on time of day, time of week, and/or time of month.
  • Executable code that evaluates each item may determine a distance from a separating hyperplane using a support vector machine classification method. User feedback may be used to adjust subsequent evaluation of each of the items.
  • the user feedback may be implicit and may include frequency of actual viewing by the user.
  • the user feedback may be explicit.
  • User feedback may be used to modify features used to evaluate items.
  • the subset of items may include only items having a value above a predetermined threshold; displaying the subset of items may include sorting the subset according to the values provided for each of the items, and items that are not part of the subset may be displayed following the items in the subset.
  • Executable code that displays the subset of items may display the items in a pop-up screen that is superimposed over a different list containing the items.
  • Executable code that analyzes items may split the items into a training set and a test set and may build a classifier using automatic learning.
  • the items in the training set may be analyzed to develop a set of rules used for evaluation of the items.
  • the temporal parameters of the items in the training set may include a time of recent access of an item, frequency of access of an item, frequency of location related access of an item, and/or frequency of event related access of an item.
  • the items may be displayed on a mobile device.
  • the mobile device may include software that is pre-loaded with the device, installed from an app store, installed from a desktop computer, installed from media, or downloaded from a Web site.
  • the mobile device may use an operating system selected from the group consisting of: iOS, Android OS, Windows Phone OS, Blackberry OS and mobile versions of Linux OS. Items may be stored using the OneNote® note-taking software provided by the Microsoft Corporation of Redmond, Washington.
  • the proposed system automatically generates preferred content views, regrouping and selecting such content items as notes and notebooks depending on a particular environment or conditions, reflected in context related features, and based on automatic classification with parameters derived from historical patterns of user access to items.
  • Such features may include and combine temporal, spatial, navigational, lexical, organizational and other parameters, events such as meetings, trips, visits, and other factors that may be pre-processed and formalized by the system, to reflect real life situations via linguistic variables in the meaning accepted in probability and fuzzy set theories.
  • temporal features may include modeled notions of recent access, frequent access, frequent location related access, frequent event related access, etc.
  • a numeric feature value for frequent access may be modeled as:
  • ƒ^u(e) = Σ_{c_i ∈ C_e} 2^(−t(c_i)/t_m) / Σ_{c_j ∈ C} 2^(−t(c_j)/t_m)
  • where:
  • ƒ^u(e) is a feature value for frequency (the superscript 'u' reflects the term 'usualness');
  • e is an accessed content item, such as a note, a notebook or a tag;
  • c_i (c_j), C, C_e are, respectively, the past user actions, the set of all actions, and only the past actions where the user has accessed the item e;
  • t(c_i) (t(c_j)) is the age of each access event measured at the present moment;
  • t_m is a normalizing median coefficient; for example, if all time measurements are in seconds, t_m may be equal to 2,592,000, which corresponds to a 30-day age of an item.
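The decay-weighted frequency feature can be sketched in Python (an illustrative implementation; representing the action log C as a list of (item, timestamp) pairs is an assumption, and t_m defaults to the 30-day value given above):

```python
import time

def frequency_feature(e, actions, t_m=2_592_000, now=None):
    """Exponentially decayed access-frequency feature f^u(e).

    actions: list of (item_id, access_timestamp) pairs (the set C);
    accesses of item e form the subset C_e.  Each access event is
    weighted by 2**(-age/t_m), so a 30-day-old access (with the default
    t_m) counts half as much as one made just now.
    """
    now = time.time() if now is None else now
    num = sum(2 ** (-(now - t) / t_m) for item, t in actions if item == e)
    den = sum(2 ** (-(now - t) / t_m) for item, t in actions)
    return num / den if den else 0.0
```

Items accessed both recently and often thus receive feature values close to 1, while items with only stale accesses decay toward 0.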
  • combined temporal and non-temporal features, such as frequency + location or frequency + navigation, can also be modeled.
  • temporal patterns of accessing notes may also be numerically assessed, for example, based on time of day, time of week, or time of month.
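One hedged way to turn such temporal access patterns into numeric values (the specific bucketing by hour, weekday and day of month is an illustrative choice, not prescribed by the text):

```python
from datetime import datetime

def temporal_pattern_features(access_times, at):
    """Fraction of past accesses of a note that share the hour-of-day,
    day-of-week and day-of-month of the moment `at`; each fraction can
    serve as one numeric feature for the classifier."""
    if not access_times:
        return {"hour": 0.0, "weekday": 0.0, "monthday": 0.0}
    n = len(access_times)
    return {
        "hour": sum(t.hour == at.hour for t in access_times) / n,
        "weekday": sum(t.weekday() == at.weekday() for t in access_times) / n,
        "monthday": sum(t.day == at.day for t in access_times) / n,
    }
```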
  • a set of features / rules may be chosen and numeric representations for the features may be defined, as explained elsewhere herein.
  • the conglomerate of pre-existing content collections may be split into a training set and a test set, and a binary classifier may be built and optimized using automatic learning.
  • the classifier may work with an input data pair (item, context) and may define whether the item may be added to a preferred viewing list for a given context; additionally, for items that are positively assessed by the classifier, the score of the items may be calculated, such as a distance from the separating hyperplane in the numeric feature space corresponding to the (linear or non-linear) Support Vector Machine (SVM) classification method.
  • Ranking notes in the preferred viewing list by scores of the notes may allow control over a length of the list to address possible user interface and other requirements.
  • a version of a preferred note view classifier developed at the previous step may be bundled with the content management or note-taking software and may be delivered to new users and immediately employed for automatic building of custom preferred content views for various contexts.
  • Explicit or implicit user feedback on the functioning of such a classifier may be used to improve the system and adjust the classifier.
  • Both techniques may lead to re-training and adjusting parameters of the classifier, such as weights representing the coordinates of a normal vector in the SVM method.
  • user feedback may be used to modify the set of features through supervised learning.
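As a loose illustration of feedback-driven weight adjustment (this perceptron-style nudge is a simplified stand-in for the full SVM re-training described above; the learning rate and update rule are assumptions):

```python
def adjust_weights(w, v, viewed, lr=0.1):
    """Nudge the classifier's normal vector toward the feature vector of
    an item the user actually viewed (implicit positive feedback) and
    away from the feature vector of an item the user ignored."""
    sign = 1.0 if viewed else -1.0
    return [wi + lr * sign * vi for wi, vi in zip(w, v)]
```

Repeated over many feedback events, such updates shift the separating plane so that future preferred views better match observed viewing behavior.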
  • preferred viewing lists may be implemented in a variety of ways.
  • the preferred viewing lists may be displayed as separate lists of notes that automatically pop up on a user screen every time a new context is identified and requires an update of a preferred view.
  • preferred view may populate a list or a section of a list of favorite user notes.
  • preferred notes for a new context may be displayed in a top portion of a main note view preceding other notes, as if the preferred view implied a new sorting order pushing previously displayed top items down the list.
  • Preferred views may not be limited to individual notes and other elementary content units. A similar technique may be applied to choosing larger content assemblies, such as notebooks or notebook stacks in the Evernote content management system. The techniques may also be used to modify tags, lists of saved searches, lists of favorites and other content related displayable attributes that may depend on the environment, external conditions and contexts.
  • Although the system may constantly monitor changing conditions, the system may also have built-in thresholds to identify meaningful changes of the context, and may assess notes for inclusion in preferred views only when such meaningful changes occur. Such clustering of contexts may bring additional economy of system resources.
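The threshold gating described above might look like the following sketch (the context component names and threshold values are purely illustrative):

```python
def context_changed(prev, cur, thresholds):
    """Gate re-evaluation on 'meaningful' context changes: some context
    component must move by more than its built-in threshold before the
    preferred view is rebuilt, saving system resources."""
    return any(abs(cur[k] - prev[k]) > thresholds.get(k, 0.0) for k in cur)
```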
  • FIG. 1 is a schematic illustration of a preferred note view created in response to a temporal, scheduling and sharing context, according to an embodiment of the system described herein.
  • FIG. 2 is a schematic illustration of preferred note and notebooks views created in response to a geolocation context, according to an embodiment of the system described herein.
  • FIG. 3 is a schematic illustration of feature extraction from an individual note and classification of the note, according to an embodiment of the system described herein.
  • FIG. 4 is a system flow diagram illustrating automatic learning, according to an embodiment of the system described herein.
  • FIG. 5 is a system flow diagram describing building of preferred content views, according to an embodiment of the system described herein.
  • the system described herein provides a mechanism for building preferred views of items from individual, shared and organization-wide content collections in response to changing environment and context.
  • Items may include individual notes, notebooks, tags, search lists and other attributes; contexts may include temporal characteristics, location, navigation, events, content organization and other features.
  • the mechanism utilizes classifiers built through automatic learning based on past user access to content items; classifiers may be dynamically adjusted based on user feedback.
  • FIG. 1 is a schematic illustration 100 showing a preferred note view created in response to a temporal, scheduling and sharing context.
  • a content collection 110 displays eight notes 120 to a user.
  • a system classifier applied to the content collection (not shown in FIG. 1; see FIG. 3 and the accompanying text for details) chooses two notes 140a, 140b for inclusion in a preferred system view.
  • the notes in a previously displayed main note view are reordered so that the notes 140a, 140b occupy a top position 150 and a remainder of the notes 160 are pushed down the main view.
  • FIG. 2 is a schematic illustration 200 showing preferred note and notebooks views created in response to a geolocation context.
  • a content collection 110 displays eight notes 120 to a user.
  • a notebook view of the content collections includes three notebooks 210 (notebooks A, B, C).
  • a system classifier applied to the content collection chooses two notes 150a, 150b and a notebook C for inclusion in a preferred system view.
  • the two selected notes 150a, 150b are displayed in a pop-up pane 230; at a bottom portion 240 of the pane 230, the selected notebook is also displayed.
  • FIG. 2 illustrates a different user interface solution compared with FIG. 1: in FIG. 2, the pane 230 with a preferred note view is shown on top of a main note view 250.
  • FIG. 3 is a schematic illustration 300 of feature extraction from an individual note and classification of the individual note.
  • the note collection 110 is scanned by the system to identify notes that should be included in a preferred note view reflecting a current context 320 .
  • a note 310 is evaluated based on the current context 320 .
  • the current context 320 may include multiple components, such as a temporal context 320a, a spatial (geolocation) context 320b, scheduled events 320c, a navigational context 320d (such as a scrolling view, a tag based view or a notebook based view within a content collection), a sharing context 320e, a search context 320f, a linguistic (textual) context 320g, a travel context 320h, a social network context 320i, etc.
  • each component of the context may be represented by one or multiple features 330 .
  • three sample feature sets 330a, 330b, 330c are shown, and the first feature in each set is described in detail.
  • the system may extract attributes of the note 310 corresponding to each of the feature sets 330a-330c and build numeric feature values 340, as explained elsewhere herein (see, for example, formula (1) for some of the temporal features). Numeric feature values are illustrated in FIG. 3 for a temporal context (the feature set 340a) and for a search context (the feature set 340b).
  • a vector V of feature values 340 is processed using a classifier 350, such as an SVM classifier, where a separating plane defining one of two possible outcomes is defined by a normal vector W of the classifier plane, so the outcome is associated with a sign of the dot product V·W (for example, V·W > 0 may indicate an inclusion of the note 310 into a preferred note view, as illustrated in FIGS. 1 and 2).
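The classification step can be condensed into a minimal linear-decision sketch (the bias term b and the use of the raw signed margin as the ranking score are assumptions consistent with the SVM description above):

```python
def classify_note(v, w, b=0.0):
    """Decide inclusion by the sign of V.W + b; the signed margin doubles
    as the note's score, proportional to its distance from the
    separating hyperplane."""
    score = sum(vi * wi for vi, wi in zip(v, w)) + b
    return score > 0, score
```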
  • the system makes a binary decision 360 to add the selected note 310 to the preferred note view or to not add the note 310 .
  • Referring to FIG. 4, a flow diagram illustrates automatic learning in conjunction with compiling an SVM classifier.
  • Processing begins at a step 410 where pre-existing notes and access history are collected, as explained elsewhere herein.
  • processing proceeds to a step 420 where a feature set for automatic learning is built.
  • processing proceeds to a step 430 where a classifier designated for pre-building into the system and delivering to users is trained and evaluated utilizing training and test sets of notes (and possibly other items in content collections, such as notebooks, tags, search lists, etc.).
  • processing proceeds to a step 440 where the classifier is delivered to a new user, bundled with the software.
  • processing proceeds to a step 450 where, in connection with software functioning and user access to notes and other items in different environments, the system collects additional contexts and note access history for the new user.
  • processing proceeds to a step 460 where the classifier is re-trained and parameters of the classifier are modified. After the step 460 , processing is complete.
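The learning flow above can be sketched end to end (a simple perceptron stands in for the SVM trainer named in the text; the data layout, epoch count and learning rate are assumptions):

```python
import random

def train_linear_classifier(samples, split=0.8, epochs=100, lr=0.1, seed=0):
    """Steps 410-460 in miniature: shuffle (feature-vector, label) samples,
    split them into training and test sets, fit a linear decision rule on
    the training set, and report accuracy on the held-out test set.
    Labels are +1 (preferred) or -1 (not preferred)."""
    rng = random.Random(seed)
    data = samples[:]
    rng.shuffle(data)
    cut = int(len(data) * split)
    train, test = data[:cut], data[cut:]
    w = [0.0] * len(train[0][0])
    for _ in range(epochs):
        for v, y in train:
            # update the normal vector on every misclassified sample
            if y * sum(wi * vi for wi, vi in zip(w, v)) <= 0:
                w = [wi + lr * y * vi for wi, vi in zip(w, v)]
    correct = sum((sum(wi * vi for wi, vi in zip(w, v)) > 0) == (y > 0)
                  for v, y in test)
    return w, correct / len(test) if test else 1.0
```

Step 460 (re-training on a new user's access history) would amount to calling the same routine again with the accumulated samples.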
  • a flow diagram 500 describes building preferred content views. Processing begins at a step 510 where the system identifies the current context, as described elsewhere herein. After the step 510 , processing proceeds to a step 520 where a note or other item is chosen for evaluation. After the step 520 , processing proceeds to a step 530 where features relevant to the current context and the chosen note are assessed, as described in more detail elsewhere herein. After the step 530 , processing proceeds to a step 540 where numeric feature values for the selected note and the current context are built, as explained elsewhere herein (see, for example, formula (1) and FIG. 3 ).
  • processing proceeds to a step 550 where the classifier is applied to a vector of numeric feature values (see, for example, items 340 , 350 and the accompanying text in FIG. 3 ).
  • processing proceeds to a test step 560 where it is determined whether the previous step resulted in assigning the selected note to the preferred view. If so, processing proceeds to a step 570 where the note score obtained during the classification step (such as a cosine of the angle between the vectors V, W explained in conjunction with FIG. 3 ) is used to calculate note rank with respect to other notes identified as candidates for inclusion in the preferred view (if any). Such ranking may apply to any type of items that may be present in the preferred view: notes, notebooks, tags, saved search queries, etc.
  • processing proceeds to a test step 575 where it is determined whether the note rank is within a preferred list size. If so, processing proceeds to a step 580 where the note is added to the preferred view list and the list is modified if necessary; for example, a previously included item with a lower score residing at the bottom of the list may be eliminated from the preferred view list.
  • processing proceeds to a test step 585 where it is determined whether there are more notes to evaluate. Note that the step 585 may be independently reached from the step 560 if the selected note is not added to the preferred view and from the step 575 if the note rank is outside the list size. If there are more notes to evaluate, processing proceeds to a step 590 where the next note is chosen and control is transferred back to the step 530 ; otherwise, processing is complete.
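The scoring, ranking and truncation loop of steps 510-590 can be condensed as follows (score_fn abstracts the classifier of FIG. 3; the names and the zero threshold are illustrative):

```python
def build_preferred_view(items, score_fn, list_size=5, threshold=0.0):
    """Score every item against the current context, keep those the
    classifier accepts (score above threshold), rank by descending
    score, and truncate to the preferred-list size."""
    scored = [(score_fn(item), item) for item in items]
    accepted = [(s, item) for s, item in scored if s > threshold]
    accepted.sort(key=lambda p: p[0], reverse=True)
    return [item for _, item in accepted[:list_size]]
```

Dropping the lowest-ranked item when a better-scoring one arrives (step 580) falls out of the sort-and-truncate at the end.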
  • the mobile device may include software that is pre-loaded with the device, installed from an app store, installed from a desktop (after possibly being pre-loaded thereon), installed from media such as a CD, DVD, etc., and/or downloaded from a Web site.
  • the mobile device may use an operating system selected from the group consisting of: iOS, Android OS, Windows Phone OS, Blackberry OS and mobile versions of Linux OS.
  • the items may be stored using the OneNote® note-taking software provided by the Microsoft Corporation of Redmond, Washington.
  • Software implementations of the system described herein may include executable code that is stored in a computer readable medium and executed by one or more processors.
  • the computer readable medium may be non-transitory and include a computer hard drive, ROM, RAM, flash memory, portable computer storage media such as a CD-ROM, a DVD-ROM, a flash drive, an SD card and/or other drive with, for example, a universal serial bus (USB) interface, and/or any other appropriate tangible or non-transitory computer readable medium or computer memory on which executable code may be stored and executed by a processor.
  • the system described herein may be used in connection with any appropriate operating system.

Abstract

A computer system obtains access to a content collection that includes a plurality of content items. For each of the content items, the computer system determines user access history and context associated with the access history. The system builds a classifier characterizing a user access pattern of the content items. The system constantly monitors, in real time, environmental parameters of the user. It infers a current context of the user based on the environmental parameters. In accordance with the current context, for each of the content items, the system extracts numeric feature values, evaluates the item using the classifier, and determines a score for the item. The system identifies, in real time, a subset of the content items that is most relevant to the current context. The system further generates a preferred view of the subset and causes the preferred view to be delivered to a mobile device.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. Pat. Application No. 16/773,890, filed Jan. 27, 2020, titled “Automatic Generation Of Preferred Views For Personal Content Collections,” which is a continuation of U.S. Pat. Application No. 14/470,021, filed Aug. 27, 2014, titled “Automatic Generation Of Preferred Views For Personal Content Collections,” which claims priority to U.S. Provisional Application No. 61/878,296, filed Sep. 16, 2013, titled “Automatic Generation Of Preferred Views For Personal Content Collections”, all of which are hereby incorporated by reference herein in their entirety.
  • TECHNICAL FIELD
  • This application is directed to the field of extracting, analyzing and presenting information, especially in conjunction with custom ordering of items in personal and shared content management systems.
  • BACKGROUND OF THE INVENTION
  • Hundreds of millions of people are using personal, shared and business-wide content management systems, such as the Evernote service and software created by the Evernote Corporation of Redwood City, California, the Microsoft® Office OneNote and many more systems. Content collections supported by such software and online services may contain thousands and even hundreds of thousands of content items (notes, memos, documents, etc.) with widely varying sizes, content types and other parameters. These items are viewed and modified by users in different order, with different frequency and under different circumstances. Routines for accessing items in content collections may include direct scrolling, keyword and natural language search, accessing items by tags, categories, notebooks, browsing interlinked clusters of items with or without indexes and tables of content, and other methods.
  • Irrespective of specific methods, quick and targeted access to desired content at any given moment, place and situation is important to user productivity and convenience. Search technologies, organizational and user interface features, such as tags, favorites, folders, advanced content sorting, and other functionality provide a significant help in accessing needed content. Contemporary content management systems may expand search to images, audio and video, synonyms, semantic terms, ontologies and language specifics. Navigational methods for tags, tag clouds, lists of favorites, and interlinked clusters of items are constantly progressing and may include multi-dimensional and dynamic data representation, advanced use of touch interfaces and screen real estate, etc.
  • Still, even the most sophisticated search and navigational methods may be insufficient for quickly growing information volumes. Additionally, repetitive searches for the same materials even with saved queries take additional time with every search occurrence. A recent enterprise search study has discovered a significant search gap affecting all categories of workers: 52% of respondents said they could not find the information they were seeking within an acceptable amount of time using their own organization’s enterprise search facility. Further analysis has shown that 65% of respondents have defined an overall good search experience as a situation where a particular search takes less than two minutes. However, only 48% of study participants have reported being able to achieve that result in their own organization. In other words, there exists a 17% gap between user expectation of satisfying search experiences and an enterprise search reality. Additionally, about 90% of respondents reported that taking four minutes or more to find the information they want does not constitute a good search experience; yet 27% responded this was the case within their own enterprises. Accordingly, limited search efficiency may drive many users to abandon search as a method of defining immediate views of materials from personal or shared data collections. Analogously, sorting items in a content collection by time, location, size and other parameters may complicate information processing and still fall short of representing content views required by users.
  • Furthermore, user needs in accessing various materials from content collections (notes, attachments, notebooks, folders, etc.) are driven, on the one hand, by constantly changing work, home and other environments, and on the other hand, by repetitive patterns of user adaptation to such environments. For example, users may need several notes with standard bits of information (a social security number, a driver license number, a passport number or other IDs, a credit card number) every time they visit an official establishment. However, additional pieces of information that they may need could significantly differ depending on whether the users visit a bank or a medical office, are traveling to a place where they have taken family photos and want to recall them, or are reviewing materials before a weekly staff meeting. Reflecting dynamic combinations of parameters, different environments and contexts influencing content access requirements and customized content views may be difficult with fixed content settings such as tags or favorite lists, while trying to memorize such combinations of parameters may be cumbersome, tiring and inefficient, and may cause frequent updates as user behavior patterns evolve.
  • Accordingly, it is desirable to develop advanced systems and methods for generating preferred content views depending on context and user viewing history.
  • SUMMARY OF THE INVENTION
  • According to the system described herein, providing a view of relevant items of a content collection includes identifying a current context based on temporal parameters, spatial parameters, navigational parameters, lexical parameters, organizational parameters, and/or events, evaluating each of the items of the content collection according to the current context to provide a value for each of the items, and displaying a subset of the items corresponding to highest determined values. The temporal parameters may include a time of recent access of an item, frequency of access of an item, frequency of location related access of an item, and frequency of event related access of an item. Frequency of access of an item may be modeled according to the following formula:
  • $f^u(e) = \sum_{c_i \in C_e} 2^{-t(c_i)/t_m} \Big/ \sum_{c_j \in C} 2^{-t(c_j)/t_m}$
  • where ƒu(·) is a feature value for frequency, e is an accessed content item, ci (cj) are past user actions, C is the set of all actions, Ce is the set of only those past actions where the user has accessed the item e, t(ci) (t(cj)) is the age of each access event measured at the present moment, and tm is a normalizing median coefficient. Temporal patterns of accessing items may be numerically assessed based on time of day, time of week, and/or time of month. Evaluating each item may include determining a distance from a separating hyperplane using a support vector machine classification method. User feedback may be used to adjust subsequent evaluation of each of the items. The user feedback may be implicit and may include frequency of actual viewing by the user. The user feedback may be explicit. User feedback may be used to modify features used to evaluate items. The subset of items may include only items having a value above a predetermined threshold and displaying the subset of items may include sorting the subset according to values provided for each of the items and items that are not part of the subset may be displayed following items in the subset. Displaying the subset of items may include displaying the items in a pop up screen that is superimposed over a different list containing the items. Analyzing items may include splitting the items into a training set and a test set and a classifier may be built using automatic learning. Prior to evaluating the items, the items in the training set may be analyzed to develop a set of rules used for evaluation of the items. The temporal parameters of the items in the training set may include a time of recent access of an item, frequency of access of an item, frequency of location related access of an item, and/or frequency of event related access of an item. The items may be displayed on a mobile device. 
The mobile device may include software that is pre-loaded with the device, installed from an app store, installed from a desktop computer, installed from media, or downloaded from a Web site. The mobile device may use an operating system selected from the group consisting of: iOS, Android OS, Windows Phone OS, Blackberry OS and mobile versions of Linux OS. Items may be stored using the OneNote® note-taking software provided by the Microsoft Corporation of Redmond, Washington.
  • According further to the system described herein, computer software, provided in a non-transitory computer-readable medium, provides a view of relevant items of a content collection. The software includes executable code that identifies a current context based on temporal parameters, spatial parameters, navigational parameters, lexical parameters, organizational parameters, and/or events, executable code that evaluates each of the items of the content collection according to the current context to provide a value for each of the items, and executable code that displays a subset of the items corresponding to highest determined values. The temporal parameters may include a time of recent access of an item, frequency of access of an item, frequency of location related access of an item, and frequency of event related access of an item. Frequency of access of an item may be modeled according to the following formula:
  • $f^u(e) = \sum_{c_i \in C_e} 2^{-t(c_i)/t_m} \Big/ \sum_{c_j \in C} 2^{-t(c_j)/t_m}$
  • where ƒu(·) is a feature value for frequency, e is an accessed content item, ci (cj) are past user actions, C is the set of all actions, Ce is the set of only those past actions where the user has accessed the item e, t(ci) (t(cj)) is the age of each access event measured at the present moment, and tm is a normalizing median coefficient. Temporal patterns of accessing items may be numerically assessed based on time of day, time of week, and/or time of month. Executable code that evaluates each item may determine a distance from a separating hyperplane using a support vector machine classification method. User feedback may be used to adjust subsequent evaluation of each of the items. The user feedback may be implicit and may include frequency of actual viewing by the user. The user feedback may be explicit. User feedback may be used to modify features used to evaluate items. The subset of items may include only items having a value above a predetermined threshold and displaying the subset of items may include sorting the subset according to values provided for each of the items and items that are not part of the subset may be displayed following items in the subset. Executable code that displays the subset of items may display the items in a pop up screen that is superimposed over a different list containing the items. Executable code that analyzes items may split the items into a training set and a test set and may build a classifier using automatic learning. Prior to evaluating the items, the items in the training set may be analyzed to develop a set of rules used for evaluation of the items. The temporal parameters of the items in the training set may include a time of recent access of an item, frequency of access of an item, frequency of location related access of an item, and/or frequency of event related access of an item. The items may be displayed on a mobile device. 
The mobile device may include software that is pre-loaded with the device, installed from an app store, installed from a desktop computer, installed from media, or downloaded from a Web site. The mobile device may use an operating system selected from the group consisting of: iOS, Android OS, Windows Phone OS, Blackberry OS and mobile versions of Linux OS. Items may be stored using the OneNote® note-taking software provided by the Microsoft Corporation of Redmond, Washington.
  • The proposed system automatically generates preferred content views, regrouping and selecting such content items as notes and notebooks depending on a particular environment or conditions, reflected in context related features, and based on automatic classification with parameters derived from historical patterns of user access to items.
  • At a first phase, extensive content collections from many existing users of a content management system are processed and analyzed to develop a set of learning features, or rules, derived from contexts (environment, situation, conditions) and defining stable repetitive viewing of content items (e.g., notes).
  • Such features may include and combine temporal, spatial, navigational, lexical, organizational and other parameters, events such as meetings, trips, visits, and other factors that may be pre-processed and formalized by the system, to reflect real life situations via linguistic variables in the meaning accepted in probability and fuzzy set theories. Thus, temporal features may include modeled notions of recent access, frequent access, frequent location related access, frequent event related access, etc. For example, a numeric feature value for frequent access may be modeled as:
  • $f^u(e) = \sum_{c_i \in C_e} 2^{-t(c_i)/t_m} \Big/ \sum_{c_j \in C} 2^{-t(c_j)/t_m}$
  • where ƒu(e) is a feature value for frequency (the superscript 'u' reflects the term 'usualness'); e is an accessed content item, such as a note, a notebook, or a tag; ci (cj) are past user actions, C is the set of all actions, and Ce is the set of only those past actions where the user has accessed the item e; t(ci) (t(cj)) is the age of each access event measured at the present moment; tm is a normalizing median coefficient; for example, if all time measurements are in seconds, tm may be equal to 2,592,000, which corresponds to a 30-day age of an item.
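As a sketch of the frequency formula above, the feature can be computed directly from timestamped access logs. The function and argument names are illustrative assumptions; the exponential decay with age (the negative exponent, so that recent actions weigh more and an action loses half its weight every tm seconds) is the natural reading of the formula, not an explicit statement in the text.

```python
def frequency_feature(item_access_ages, all_action_ages, t_m=2_592_000):
    """Sketch of the 'usualness' frequency feature f^u(e).

    item_access_ages: ages, in seconds, of past accesses of item e (the set C_e)
    all_action_ages:  ages, in seconds, of all past user actions (the set C)
    t_m:              normalizing median coefficient (30 days in seconds)

    Each action contributes 2^(-age / t_m), so recent actions weigh more;
    the result is normalized by the user's overall activity.
    """
    denominator = sum(2 ** (-t / t_m) for t in all_action_ages)
    if denominator == 0:
        return 0.0  # no recorded activity at all
    numerator = sum(2 ** (-t / t_m) for t in item_access_ages)
    return numerator / denominator
```

Restricting both sums to actions performed at a given location (or within a given navigational scheme or event) would yield the combined frequency + location, frequency + navigation, etc. features described below in the same way.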
  • Analogously, by restricting the sets of user note access actions to actions performed in a certain location ($C_l$, $C_l^e$), corresponding to a certain navigational scheme ($C_n$, $C_n^e$), or tied to an event (incidence), such as a calendar meeting ($C_i$, $C_i^e$), combined temporal and non-temporal features, such as frequency + location, frequency + navigation, etc., can be modeled.
  • Furthermore, temporal patterns of accessing notes may also be numerically assessed. Examples are presented in the following list:
    • The time of day, measured in half hour intervals
    • The time of week, measured in half hour intervals
    • The time of month, measured in half hour intervals
    • The time of day, measured in four hour intervals
    • The time of week, measured in four hour intervals
    • The time of month, measured in four hour intervals
    • The day of week
    • The time of week, measured in twenty-four hour intervals
    • The time of month, measured in twenty-four hour intervals
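The interval-based patterns in the list above amount to bucketing each access timestamp; a minimal sketch follows (the function name and the returned keys are illustrative, not from the text):

```python
from datetime import datetime

def temporal_bins(ts, interval=1800):
    """Bucket an access timestamp into time-of-day/week/month bins of
    `interval` seconds (1800 = half-hour intervals, as in the list above).
    """
    sec_of_day = ts.hour * 3600 + ts.minute * 60 + ts.second
    sec_of_week = ts.weekday() * 86400 + sec_of_day      # Monday == 0
    sec_of_month = (ts.day - 1) * 86400 + sec_of_day
    return {
        "time_of_day": sec_of_day // interval,
        "time_of_week": sec_of_week // interval,
        "time_of_month": sec_of_month // interval,
        "day_of_week": ts.weekday(),
    }
```

Passing interval=14400 or interval=86400 produces the four-hour and twenty-four-hour variants from the same list.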
  • The following are examples of contexts and applications where the temporal, spatial, navigational and other features may be utilized:
    • View a certain note every time a user is at a given location. This rule has a broad set of applications, such as viewing partnership-related notes when a user arrives at a meeting at a partner's address and the system identifies the user location, for example, from a mobile copy of content management software running on a location-aware user device (GPS, GeoIP, etc.). Another application could be an automatic display of a note with an ATM PIN when a user arrives at a known ATM location where the note containing the PIN was frequently recalled in the past.
    • View a certain note at a given time if such note has been repetitively viewed at around the given time in the past. Applications could be Monday to-do lists for the week on Monday morning; meeting notes from last week's staff meeting; etc.
    • View a certain note in conjunction with a scheduled event, such as meetings, meeting reminders, action reminders, etc.
    • View a certain note if it previously appeared near the top of a saved search query and has been frequently viewed after such search has been performed.
    • View a certain note when a meeting with certain people around the same periodically repeating time is detected by schedule- and location-aware technologies. Applications include opening a master project schedule every time all or part of a project team meets for weekly project reviews.
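The location rule above can be sketched as a simple lookup over a log of (note, place) access pairs. The coarse place labels, the names, and the min_views threshold are illustrative assumptions of this sketch, not part of the described system:

```python
from collections import Counter

def location_rule(access_log, current_place, min_views=3):
    """Sketch of 'view a certain note every time a user is at a given
    location': a note qualifies when it has been viewed at least min_views
    times at the place where the user currently is.

    access_log: iterable of (note_id, place_label) pairs, where place_label
    is a coarse location bucket (e.g. a geofence id derived from GPS/GeoIP).
    """
    views_here = Counter(note for note, place in access_log
                         if place == current_place)
    return [note for note, n in views_here.items() if n >= min_views]
```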
  • Based on a preliminary analysis of repetitive note viewing patterns, a set of features / rules may be chosen and numeric representations for the features may be defined, as explained elsewhere herein.
  • At a next phase, the conglomerate of pre-existing content collections may be split into a training set and a test set, and a binary classifier may be built and optimized using automatic learning.
  • The classifier may work with an input data pair (item, context) and may define whether the item may be added to a preferred viewing list for a given context; additionally, for items that are positively assessed by the classifier, the score of the items may be calculated, such as a distance from the separating hyperplane in the numeric feature space corresponding to the (linear or non-linear) Support Vector Machine (SVM) classification method. Ranking notes in the preferred viewing list by scores of the notes may allow control over a length of the list to address possible user interface and other requirements.
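For a linear SVM, the score described above reduces to the signed distance of an item's feature vector from the separating hyperplane. A minimal sketch, assuming the weight vector w and bias b come from training (not shown here):

```python
import math

def svm_score(v, w, b=0.0):
    """Signed distance of feature vector v from the hyperplane w·x + b = 0.

    A positive score places the (item, context) pair in the preferred view;
    the magnitude ranks the item against other positively classified items.
    """
    norm = math.sqrt(sum(wi * wi for wi in w))
    return (sum(vi * wi for vi, wi in zip(v, w)) + b) / norm
```

Sorting positively scored items by this value and truncating the result gives the length-controlled preferred viewing list mentioned above.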
  • A version of a preferred note view classifier developed at the previous step may be bundled with the content management or note-taking software and may be delivered to new users and immediately employed for automatic building of custom preferred content views for various contexts. Explicit or implicit user feedback on the functioning of such a classifier may be used to improve the system and adjust the classifier:
    • Implicit user feedback may be monitored by the system through measuring the frequency of actual viewing by users of notes in the preferred lists.
    • Explicit user feedback may use a built-in feedback mechanism.
  • Both techniques may lead to re-training and adjusting parameters of the classifier, such as weights representing the coordinates of a normal vector in the SVM method. In some embodiments, user feedback may be used to modify the set of features through supervised learning.
  • From the user interface standpoint, preferred viewing lists may be implemented in a variety of ways. The preferred viewing lists may be displayed as separate lists of notes that automatically pop up on a user screen every time a new context is identified and requires an update of a preferred view. Alternatively, a preferred view may populate a list or a section of a list of favorite user notes. In yet another implementation, preferred notes for a new context may be displayed in a top portion of a main note view preceding other notes, as if the preferred view implied a new sorting order pushing previously displayed top items down the list.
  • Preferred views may not be limited to individual notes and other elementary content units. A similar technique may be applied to choosing larger content assemblies, such as notebooks or notebook stacks in the Evernote content management system. The techniques may also be used to modify tags, lists of saved searches, lists of favorites, and other content-related displayable attributes that may depend on the environment, external conditions, and contexts.
  • It should be noted that, while the system may constantly monitor changing conditions, the system may also have built-in thresholds to identify meaningful changes of the context and assess notes for the purpose of inclusion of particular notes into preferred views only when such meaningful changes occur. Such clustering of contexts may bring additional economy of system resources.
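The threshold-based clustering of contexts described above can be sketched as a gate that re-runs classification only when some context component drifts past its threshold. Representing context components as named numeric values is an illustrative assumption of this sketch:

```python
def context_changed(previous, current, thresholds):
    """Return True only for a 'meaningful' context change: at least one
    numeric context component differs by more than its allowed threshold.

    previous/current: dicts mapping component name -> numeric value
    thresholds:       dict mapping component name -> allowed drift
    """
    return any(abs(current[k] - previous.get(k, 0.0)) > thresholds.get(k, 0.0)
               for k in current)
```

Gating the classifier on this check avoids re-evaluating the whole collection on every minor sensor update, which is the resource economy noted above.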
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the system described herein will now be explained in more detail in accordance with the figures of the drawings, which are briefly described as follows.
  • FIG. 1 is a schematic illustration of a preferred note view created in response to a temporal, scheduling and sharing context, according to an embodiment of the system described herein.
  • FIG. 2 is a schematic illustration of preferred note and notebooks views created in response to a geolocation context, according to an embodiment of the system described herein.
  • FIG. 3 is a schematic illustration of feature extraction from an individual note and classification of the note, according to an embodiment of the system described herein.
  • FIG. 4 is a system flow diagram illustrating automatic learning, according to an embodiment of the system described herein.
  • FIG. 5 is a system flow diagram describing building of preferred content views, according to an embodiment of the system described herein.
  • DETAILED DESCRIPTION OF VARIOUS EMBODIMENTS
  • The system described herein provides a mechanism for building preferred views of items from individual, shared and organization-wide content collections in response to changing environment and context. Items may include individual notes, notebooks, tags, search lists and other attributes; contexts may include temporal characteristics, location, navigation, events, content organization and other features. The mechanism utilizes classifiers built through automatic learning based on past user access to content items; the classifiers may be dynamically adjusted based on user feedback.
  • FIG. 1 is a schematic illustration 100 showing a preferred note view created in response to a temporal, scheduling and sharing context. A content collection 110 displays eight notes 120 to a user. In response to a new context 130, that includes a temporal context 130 a, a scheduling context 130 b and a sharing context 130 c, a system classifier applied to the content collection (not shown in FIG. 1 , see FIG. 3 and the accompanying text for details) chooses two notes 140 a, 140 b for inclusion in a preferred system view. Subsequently, the notes in a previously displayed main note view are reordered so that the notes 140 a, 140 b occupy a top position 150 and a remainder of the notes 160 are pushed down the main view.
  • FIG. 2 is a schematic illustration 200 showing preferred note and notebooks views created in response to a geolocation context. Analogously to FIG. 1 , a content collection 110 displays eight notes 120 to a user. Additionally, a notebook view of the content collections includes three notebooks 210 (notebooks A, B, C). In response to a new geolocation context 220, a system classifier applied to the content collection (not shown in FIG. 2 ) chooses two notes 150 a, 150 b and a notebook C for inclusion in a preferred system view. Subsequently, the two selected notes 150 a, 150 b are displayed in a pop-up pane 230; at a bottom portion 240 of the pane 230, the selected notebook is also displayed. FIG. 2 illustrates a different user interface solution compared with FIG. 1 : in FIG. 2 , the pane 230 with a preferred note view is shown on top of a main note view 250.
  • FIG. 3 is a schematic illustration 300 of feature extraction from an individual note and classification of the individual note. The note collection 110 is scanned by the system to identify notes that should be included in a preferred note view reflecting a current context 320. In the example of FIG. 3 , a note 310 is evaluated based on the current context 320. The current context 320 may include multiple components, such as a temporal context 320 a, a spatial (geolocation) context 320 b, scheduled events 320 c, a navigational context 320 d (such as a scrolling view, a tag based view or a notebook based view within a content collection), a sharing context 320 e, a search context 320 f, a linguistic (textual) context 320 g, a travel context 320 h, a social network context 320 i, etc.
  • Furthermore, each component of the context may be represented by one or multiple features 330. In the illustration 300, three sample feature sets 330 a, 330 b, 330 c are shown and the first feature in each set is described in detail:
    • The feature set 330 a is a group of k features T1 ... Tk for a temporal context;
    • The feature set 330 b is a group of m features S1 ... Sm for a spatial context;
    • The feature set 330 c is a group of n features L1 ... Ln for a search context.
  • The system may extract attributes of the note 310 corresponding to each of the feature sets 330 a-330 c and build numeric feature values 340, as explained elsewhere herein (see, for example, formula (1) for some of the temporal features). Numeric feature values are illustrated in FIG. 3 for a temporal context (the feature set 340 a) and for a search context (the feature set 340 b).
  • At a next step, a vector V of feature values 340 is processed using a classifier 350, such as an SVM classifier where a separating plane defining one of two possible outcomes is defined by a normal vector W of the classifier plane, so the outcome is associated with a sign of the dot product V · W (for example, V · W > 0 may indicate an inclusion of the note 310 into a preferred note view, as illustrated in FIGS. 1 and 2 ). Based on the classification result, the system makes a binary decision 360 to add the selected note 310 to the preferred note view or to not add the note 310.
  • Referring to FIG. 4 , a flow diagram illustrates automatic learning in conjunction with compiling an SVM classifier. Processing begins at a step 410 where pre-existing notes and access history are collected, as explained elsewhere herein. After the step 410, processing proceeds to a step 420 where a feature set for automatic learning is built. After the step 420, processing proceeds to a step 430 where a classifier designated for pre-building into the system and delivering to users is trained and evaluated utilizing training and test sets of notes (and possibly other items in content collections, such as notebooks, tags, search lists, etc.). After the step 430, processing proceeds to a step 440 where the classifier is delivered to a new user with the classifier software. After the step 440, processing proceeds to a step 450 where, in connection with software functioning and user access to notes and other items in different environments, the system collects additional contexts and note access history for the new user. After the step 450, processing proceeds to a step 460 where the classifier is re-trained and parameters of the classifier are modified. After the step 460, processing is complete.
  • Referring to FIG. 5 , a flow diagram 500 describes building preferred content views. Processing begins at a step 510 where the system identifies the current context, as described elsewhere herein. After the step 510, processing proceeds to a step 520 where a note or other item is chosen for evaluation. After the step 520, processing proceeds to a step 530 where features relevant to the current context and the chosen note are assessed, as described in more detail elsewhere herein. After the step 530, processing proceeds to a step 540 where numeric feature values for the selected note and the current context are built, as explained elsewhere herein (see, for example, formula (1) and FIG. 3 ).
  • After the step 540, processing proceeds to a step 550 where the classifier is applied to a vector of numeric feature values (see, for example, items 340, 350 and the accompanying text in FIG. 3 ). After the step 550, processing proceeds to a test step 560 where it is determined whether the previous step resulted in assigning the selected note to the preferred view. If so, processing proceeds to a step 570 where the note score obtained during the classification step (such as a cosine of the angle between the vectors V, W explained in conjunction with FIG. 3 ) is used to calculate note rank with respect to other notes identified as candidates for inclusion in the preferred view (if any). Such ranking may apply to any type of items that may be present in the preferred view: notes, notebooks, tags, saved search queries, etc.
  • After the step 570, processing proceeds to a test step 575 where it is determined whether the note rank is within a preferred list size. If so, processing proceeds to a step 580 where the note is added to the preferred view list and the list is modified if necessary; for example, a previously included item with a lower score residing at the bottom of the list may be eliminated from the preferred view list. After the step 580, processing proceeds to a test step 585 where it is determined whether there are more notes to evaluate. Note that the step 585 may be independently reached from the step 560 if the selected note is not added to the preferred view and from the step 575 if the note rank is outside the list size. If there are more notes to evaluate, processing proceeds to a step 590 where the next note is chosen and control is transferred back to the step 530; otherwise, processing is complete.
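The FIG. 5 loop, including the eviction at step 580 of the lowest-scored item when the list is full, can be sketched as follows. The classify and score callables stand in for the classifier of FIG. 3 and are assumptions of this sketch:

```python
def build_preferred_view(notes, classify, score, list_size=5):
    """Walk the collection, keep positively classified notes, and retain only
    the list_size highest-scoring ones, evicting the bottom entry when full.
    """
    view = []  # (score, note) pairs, kept sorted by descending score
    for note in notes:
        if not classify(note):            # step 560: note not in preferred view
            continue
        view.append((score(note), note))  # step 570: rank among candidates
        view.sort(key=lambda pair: pair[0], reverse=True)
        if len(view) > list_size:         # step 580: evict the lowest-ranked item
            view.pop()
    return [note for _, note in view]
```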
  • Various embodiments discussed herein may be combined with each other in appropriate combinations in connection with the system described herein. Additionally, in some instances, the order of steps in the flowcharts, flow diagrams and/or described flow processing may be modified, where appropriate. Similarly, elements and areas of screens described in screen layouts may vary from the illustrations presented herein. Further, various aspects of the system described herein may be implemented using software, hardware, a combination of software and hardware and/or other computer-implemented modules or devices having the described features and performing the described functions. A mobile device, such as a cell phone or a tablet, may be used to implement the system described herein, although other devices, such as a laptop computer, etc., are also possible. The mobile device may include software that is pre-loaded with the device, installed from an app store, installed from a desktop (after possibly being pre-loaded thereon), installed from media such as a CD, DVD, etc., and/or downloaded from a Web site. The mobile device may use an operating system selected from the group consisting of: iOS, Android OS, Windows Phone OS, Blackberry OS and mobile versions of Linux OS. The items may be stored using the OneNote® note-taking software provided by the Microsoft Corporation of Redmond, Washington.
  • Software implementations of the system described herein may include executable code that is stored in a computer readable medium and executed by one or more processors. The computer readable medium may be non-transitory and include a computer hard drive, ROM, RAM, flash memory, portable computer storage media such as a CD-ROM, a DVD-ROM, a flash drive, an SD card and/or other drive with, for example, a universal serial bus (USB) interface, and/or any other appropriate tangible or non-transitory computer readable medium or computer memory on which executable code may be stored and executed by a processor. The system described herein may be used in connection with any appropriate operating system.
  • Other embodiments of the invention will be apparent to those skilled in the art from a consideration of the specification or practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the invention being indicated by the following claims.

Claims (20)

What is claimed is:
1. A content management method, comprising:
at a computer system including one or more processors and memory storing programs for execution by the one or more processors:
obtaining access to a plurality of content items, each content item having a respective type, respective user access history, and respective historic context;
characterizing a user access pattern based on the respective user access history and historic context of each content item; and
while monitoring environmental parameters of a user, in real time:
inferring a current context of the user based on the environmental parameters;
in accordance with the current context and the user access pattern, identifying a subset of content items having a highest likelihood of user access among the plurality of content items; and
causing to be delivered to, and displayed on, a mobile device a preferred view of the subset of content items.
2. The method of claim 1, wherein the computer system is communicatively connected with a mobile device associated with the user, and the plurality of content items are obtained from a content collection of the user and have a plurality of content types.
3. The method of claim 1, further comprising:
for each of the plurality of content items, determining the respective user access history and the respective historic context;
building a classifier for characterizing the user access pattern; and
evaluating each content item using the classifier.
4. The method of claim 1, in accordance with the current context and the user access pattern, identifying the subset of content items having the highest likelihood of user access among the plurality of content items further comprising:
in accordance with the current context, for each of the plurality of content items:
extracting numeric feature values to generate a vector of feature values;
determining a score for the respective content item based on the vector of feature values, the score indicating a likelihood of user access of the content item in the current context; and
identifying in real-time the subset of content items in the plurality of content items that is most relevant to the current context based on the determined score, wherein the subset of content items is a partial aggregation of the plurality of content items having highest scores.
5. The method of claim 1, further comprising:
causing to be delivered to, and displayed on, the mobile device a list of favorite user notes, wherein the preferred view populates a section of the list of favorite user notes, wherein the plurality of content items are displayed as a list on the mobile device, and the subset of content items is displayed on top of the list.
6. The method of claim 1, wherein for each content item, the respective type is one of document, user note, notebook, search list, media file, appointment, navigational route, and reminder.
7. The method of claim 1, wherein, for each content item:
a context is associated with the respective user access history and includes a plurality of components; and
each of the components has a respective set of distinct features that is used to evaluate a relevance of the respective content item.
8. The method of claim 7, wherein inferring the current context of the user further comprises comparing the respective set of features of each of the components in the context associated with the respective user access history and the current context.
9. The method of claim 7, wherein the plurality of components includes a plurality of: temporal, geolocation, scheduled events, navigational, organizational, sharing context, search, travel, and social network.
10. The method of claim 1, wherein for each content item:
a context is associated with the respective user access history and includes a temporal context; and
the temporal context includes one or more temporal patterns of access of an item selected from: a time of recent access of an item, frequency of access of an item, frequency of location related access of an item, and frequency of event related access of an item.
11. A computer system, comprising:
one or more processors; and
memory storing one or more programs to be executed by the one or more processors, the one or more programs comprising instructions for:
obtaining access to a plurality of content items, each content item having a respective type, respective user access history, and respective historic context;
characterizing a user access pattern based on the respective user access history and historic context of each content item; and
while monitoring environmental parameters of a user, in real time:
inferring a current context of the user based on the environmental parameters;
in accordance with the current context and the user access pattern, identifying a subset of content items having a highest likelihood of user access among the plurality of content items; and
causing to be delivered to, and displayed on, a mobile device a preferred view of the subset of content items.
12. The computer system of claim 11, wherein the one or more temporal patterns of access of the item are numerically assessed based on at least one of: time of day, time of week, and time of month.
13. The computer system of claim 11, wherein:
the subset of content items includes only content items having a relevance value above a predetermined threshold; and
the one or more programs further comprise instructions for:
sorting the subset of content items according to respective relevance values of the content items in the subset of content items, and wherein content items that are not part of the subset of content items are displayed following the subset of content items.
14. The computer system of claim 11, the one or more programs further comprising instructions for:
splitting the content items into a training set and a test set; and
analyzing the content items in the training set to develop a set of rules used for evaluation of relevance of the content items; and
building a classifier for characterizing the user access pattern, wherein the classifier further includes instructions for performing automatic learning on the training set.
15. The computer system of claim 14, wherein:
the classifier comprises a support vector machine classification method; and
the one or more programs further comprising instructions for, for each content item:
generating a vector of feature values; and
determining a score based on a relationship between the vector of feature values and a normal vector of a separating hyperplane using the support vector machine classification method.
16. A non-transitory computer-readable storage medium storing one or more programs for execution by one or more processors of a computer system, the one or more programs comprising instructions for:
obtaining access to a plurality of content items, each content item having a respective type, respective user access history, and respective historic context;
characterizing a user access pattern based on the respective user access history and historic context of each content item; and
while monitoring environmental parameters of a user, in real time:
inferring a current context of the user based on the environmental parameters;
in accordance with the current context and the user access pattern, identifying a subset of content items having a highest likelihood of user access among the plurality of content items; and
causing to be delivered to, and displayed on, a mobile device a preferred view of the subset of content items.
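One simple reading of claim 16's selection step — identifying the subset with the highest likelihood of access given the inferred current context — is to score each item by how often its historic access contexts match the current one. The context labels and item structure below are hypothetical.

```python
def top_items(items, current_context, k=3):
    """Rank items by the fraction of their historic accesses that occurred
    in the inferred current context; return the k most likely items."""
    def likelihood(item):
        history = item["context_history"]  # contexts at past access times
        matches = sum(1 for c in history if c == current_context)
        return matches / len(history) if history else 0.0
    return sorted(items, key=likelihood, reverse=True)[:k]

items = [
    {"title": "commute playlist",
     "context_history": ["commuting", "commuting", "home"]},
    {"title": "status report",
     "context_history": ["office", "office"]},
]
best = top_items(items, "office", k=1)
```

An actual embodiment would presumably combine this context match with the classifier scores of the earlier claims rather than use raw match frequency alone.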
17. The non-transitory computer-readable storage medium of claim 16, the one or more programs further comprising instructions for:
adjusting subsequent subsets of the plurality of content items based on user feedback, wherein the user feedback is implicit and includes frequency of actual viewing of respective items of the plurality of content items by the user.
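The implicit-feedback adjustment of claim 17 could be approximated by nudging each item's relevance toward its observed share of views. The learning rate and update rule here are illustrative assumptions, not the claimed method.

```python
def adjust_relevance(relevance, view_counts, learning_rate=0.1):
    """Blend each item's relevance with its observed viewing frequency
    (implicit feedback): items the user actually opens gain weight."""
    total = sum(view_counts.values()) or 1
    return {item: (1 - learning_rate) * r
                  + learning_rate * (view_counts.get(item, 0) / total)
            for item, r in relevance.items()}

updated = adjust_relevance({"a": 0.5, "b": 0.5}, {"a": 8, "b": 2})
```

After the update, the frequently viewed item "a" outranks "b", so subsequent subsets shift toward what the user actually opens.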
18. The non-transitory computer-readable storage medium of claim 16, the one or more programs further comprising instructions for:
adjusting subsequent subsets of the plurality of content items based on user feedback, wherein the user feedback is explicit.
19. The non-transitory computer-readable storage medium of claim 16, wherein the preferred view is configured for presentation based on a ranking of each content item in the subset of content items.
20. The non-transitory computer-readable storage medium of claim 19, wherein the preferred view includes a pop up screen that is superimposed over a first view of the plurality of content items.
US17/981,264 2013-09-16 2022-11-04 Automatic Generation of Preferred Views for Personal Content Collections Pending US20230054747A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/981,264 US20230054747A1 (en) 2013-09-16 2022-11-04 Automatic Generation of Preferred Views for Personal Content Collections

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201361878296P 2013-09-16 2013-09-16
US14/470,021 US10545638B2 (en) 2013-09-16 2014-08-27 Automatic generation of preferred views for personal content collections
US16/773,890 US11500524B2 (en) 2013-09-16 2020-01-27 Automatic generation of preferred views for personal content collections
US17/981,264 US20230054747A1 (en) 2013-09-16 2022-11-04 Automatic Generation of Preferred Views for Personal Content Collections

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/773,890 Continuation US11500524B2 (en) 2013-09-16 2020-01-27 Automatic generation of preferred views for personal content collections

Publications (1)

Publication Number Publication Date
US20230054747A1 true US20230054747A1 (en) 2023-02-23

Family

ID=52666159

Family Applications (3)

Application Number Title Priority Date Filing Date
US14/470,021 Active 2035-05-22 US10545638B2 (en) 2013-09-16 2014-08-27 Automatic generation of preferred views for personal content collections
US16/773,890 Active 2036-01-17 US11500524B2 (en) 2013-09-16 2020-01-27 Automatic generation of preferred views for personal content collections
US17/981,264 Pending US20230054747A1 (en) 2013-09-16 2022-11-04 Automatic Generation of Preferred Views for Personal Content Collections

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US14/470,021 Active 2035-05-22 US10545638B2 (en) 2013-09-16 2014-08-27 Automatic generation of preferred views for personal content collections
US16/773,890 Active 2036-01-17 US11500524B2 (en) 2013-09-16 2020-01-27 Automatic generation of preferred views for personal content collections

Country Status (2)

Country Link
US (3) US10545638B2 (en)
WO (1) WO2015038335A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11907910B2 (en) 2013-05-29 2024-02-20 Evernote Corporation Content associations and sharing for scheduled events

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102186819B1 (en) * 2013-08-27 2020-12-07 삼성전자주식회사 A mobile terminal supportting a note function and a method controlling the same
KR20150040607A (en) * 2013-10-07 2015-04-15 엘지전자 주식회사 Mobile terminal and control method thereof
JP6290835B2 (en) * 2015-08-27 2018-03-07 ファナック株式会社 Numerical control device and machine learning device
WO2018039774A1 (en) 2016-09-02 2018-03-08 FutureVault Inc. Systems and methods for sharing documents
CA3035277A1 (en) 2016-09-02 2018-03-08 FutureVault Inc. Real-time document filtering systems and methods
CA3035097A1 (en) 2016-09-02 2018-03-08 FutureVault Inc. Automated document filing and processing methods and systems
US10027796B1 (en) 2017-03-24 2018-07-17 Microsoft Technology Licensing, Llc Smart reminder generation from input
CA204308S (en) * 2020-12-21 2023-08-21 Hoffmann La Roche Display screen with graphical user interface
US11860780B2 (en) 2022-01-28 2024-01-02 Pure Storage, Inc. Storage cache management

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7161619B1 (en) 1998-07-28 2007-01-09 Canon Kabushiki Kaisha Data communication system, data communication control method and electronic apparatus
US20020135614A1 (en) 2001-03-22 2002-09-26 Intel Corporation Updating user interfaces based upon user inputs
GB0223464D0 (en) 2002-10-09 2002-11-13 British Telecomm Distributed scheduling method and apparatus
US7206773B2 (en) 2003-04-11 2007-04-17 Ricoh Company, Ltd Techniques for accessing information captured during a presentation using a paper document handout for the presentation
US7454377B1 (en) * 2003-09-26 2008-11-18 Perry H. Beaumont Computer method and apparatus for aggregating and segmenting probabilistic distributions
JP4070739B2 (en) 2004-03-30 2008-04-02 ジヤトコ株式会社 Continuously variable transmission
US7948448B2 (en) 2004-04-01 2011-05-24 Polyvision Corporation Portable presentation system and methods for use therewith
US20050273372A1 (en) 2004-06-03 2005-12-08 International Business Machines Corporation Integrated system for scheduling meetings and resources
WO2007005463A2 (en) 2005-06-29 2007-01-11 S.M.A.R.T. Link Medical, Inc. Collections of linked databases
US20070016661A1 (en) 2005-07-12 2007-01-18 Malik Dale W Event organizer
EP1758031A1 (en) * 2005-08-25 2007-02-28 Microsoft Corporation Selection and display of user-created documents
JP2007061451A (en) * 2005-08-31 2007-03-15 Square Enix Co Ltd Interactive content delivery server, interactive content delivery method, and interactive content delivery program
WO2007033495A1 (en) 2005-09-26 2007-03-29 Research In Motion Limited Communications event scheduler
JP2010504578A (en) * 2006-09-22 2010-02-12 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ A method for feature selection based on genetic algorithm using classifier ensemble
US7796309B2 (en) 2006-11-14 2010-09-14 Microsoft Corporation Integrating analog markups with electronic documents
US20080300937A1 (en) 2007-05-30 2008-12-04 Ty Allen Event-linked social networking
US8140584B2 (en) * 2007-12-10 2012-03-20 Aloke Guha Adaptive data classification for data mining
JP4772839B2 (en) * 2008-08-13 2011-09-14 株式会社エヌ・ティ・ティ・ドコモ Image identification method and imaging apparatus
US9053625B2 (en) 2008-12-04 2015-06-09 The F3M3 Companies, Inc. System and method for group tracking
US20100306018A1 (en) 2009-05-27 2010-12-02 Microsoft Corporation Meeting State Recall
US20110282964A1 (en) * 2010-05-13 2011-11-17 Qualcomm Incorporated Delivery of targeted content related to a learned and predicted future behavior based on spatial, temporal, and user attributes and behavioral constraints
US8639719B2 (en) 2011-02-02 2014-01-28 Paul Tepper Fisher System and method for metadata capture, extraction and analysis
US9165289B2 (en) 2011-02-28 2015-10-20 Ricoh Company, Ltd. Electronic meeting management for mobile wireless devices with post meeting processing
US20140207718A1 (en) * 2011-08-12 2014-07-24 Thomson Licensing Method and apparatus for identifying users from rating patterns
US20130073329A1 (en) 2011-08-24 2013-03-21 The Board Of Trustees Of The Leland Stanford Junior University Method and System for Calendaring Events
WO2013049386A1 (en) 2011-09-27 2013-04-04 Allied Minds Devices Llc Instruct-or
US9948988B2 (en) 2011-10-04 2018-04-17 Ricoh Company, Ltd. Meeting system that interconnects group and personal devices across a network
KR101892216B1 (en) * 2012-02-24 2018-08-27 삼성전자주식회사 Apparatas and method of handing a touch input in a portable terminal
JP5979918B2 (en) 2012-03-12 2016-08-31 キヤノン株式会社 Information processing system, information processing system control method, information processing apparatus, and computer program
US9774658B2 (en) 2012-10-12 2017-09-26 Citrix Systems, Inc. Orchestration framework for connected devices
US9953304B2 (en) 2012-12-30 2018-04-24 Buzd, Llc Situational and global context aware calendar, communications, and relationship management
US9674132B1 (en) * 2013-03-25 2017-06-06 Guangsheng Zhang System, methods, and user interface for effectively managing message communications

Also Published As

Publication number Publication date
US10545638B2 (en) 2020-01-28
US20200159379A1 (en) 2020-05-21
WO2015038335A1 (en) 2015-03-19
US20150081601A1 (en) 2015-03-19
US11500524B2 (en) 2022-11-15

Similar Documents

Publication Publication Date Title
US20230054747A1 (en) Automatic Generation of Preferred Views for Personal Content Collections
US11741173B2 (en) Related notes and multi-layer search in personal and shared content
US10832219B2 (en) Using feedback to create and modify candidate streams
EP3244312B1 (en) A personal digital assistant
US11574026B2 (en) Analytics-driven recommendation engine
US10353967B2 (en) Assigning relevance weights based on temporal dynamics
US8612463B2 (en) Identifying activities using a hybrid user-activity model
US11727328B2 (en) Machine learning systems and methods for predictive engagement
US20210383308A1 (en) Machine learning systems for remote role evaluation and methods for using same
US20140122355A1 (en) Identifying candidates for job openings using a scoring function based on features in resumes and job descriptions
CN108701155B (en) Expert detection in social networks
EP2557510A1 (en) Context and process based search ranking
US20210383229A1 (en) Machine learning systems for location classification and methods for using same
US11023503B2 (en) Suggesting text in an electronic document
CN107958014B (en) Search engine
US20180060822A1 (en) Online and offline systems for job applicant assessment
US11176152B2 (en) Job matching method and system
US9946787B2 (en) Computerized systems and methods for generating interactive cluster charts of human resources-related documents
CN111989699A (en) Calendar-aware resource retrieval
US20210383261A1 (en) Machine learning systems for collaboration prediction and methods for using same
WO2011111038A2 (en) Method and system of providing completion suggestion to a partial linguistic element
US9996529B2 (en) Method and system for generating dynamic themes for social data
US20220147934A1 (en) Utilizing machine learning models for identifying a subject of a query, a context for the subject, and a workflow
JP2020129232A (en) Machine learning device, program, and machine learning method
RU2698916C1 (en) Method and system of searching for relevant news

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

AS Assignment

Owner name: BENDING SPOONS S.P.A., ITALY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EVERNOTE CORPORATION;REEL/FRAME:066288/0195

Effective date: 20231229

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER