WO2014002099A1 - Interaction related content


Info

Publication number: WO2014002099A1
Authority: WIPO (PCT)
Prior art keywords: sequence, user, media items, users, peer
Application number: PCT/IL2013/050553
Other languages: French (fr)
Inventor: Ido Milstein
Original Assignee: Call Labs Ltd.
Application filed by Call Labs Ltd.
Publication of WO2014002099A1 (en)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/01 Social networking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/10 Office automation; Time management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/1066 Session management
    • H04L 65/1083 In-session procedures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/104 Peer-to-peer [P2P] networks

Definitions

  • the present invention, in some embodiments thereof, relates to a method and/or system for interaction based media items and, more particularly, but not exclusively, to a method and/or system that adjusts a sequence of media items according to an interaction communication session between users.
  • An aspect of some embodiments of the invention relates to a method and/or system for enhancing a communication session between two or more people, by adapting designated content during the communication session according to the communication session and/or the flow of the session.
  • a computerized method of automatically adjusting at least one prepared sequence of electronic media items during a peer to peer interactive communication session between at least two interacting users, comprising: displaying the media items of the at least one prepared sequence to at least one of the at least two interacting users on a display device; and automatically adjusting the at least one prepared sequence according to an action executed on one or more media items by at least one of the at least two interacting users during the peer to peer interactive communication session.
  • the one prepared sequence is prepared in advance of the peer to peer interactive communication session.
  • automatically adjusting comprises automatically inserting an edited media item into the sequence. According to some embodiments of the invention, automatically adjusting comprises adding a new electronic item into the sequence.
  • a single click sends the adjusted media item to a corresponding user.
  • a first user prepares a first sequence for providing to the first user, and a second user prepares a second sequence for providing to the second user.
  • a subset of media items of the first sequence are provided to the second user and a subset of media items of the second sequence are provided to the first user.
  • a first user prepares a first sequence for providing to a second user, and the second user prepares a second sequence for providing to the first user.
  • the first and second sequences are merged and organized into a third sequence for display to the first and second users.
  • a first user prepares a first sequence for providing to the first user and to one or more second users.
  • the first sequence is divided into a second sequence for display to a second user and a third sequence for display to a third user.
  • the second sequence and the third sequence have at least some common electronic media items.
  • the sequence is a branching tree of media items, and adjusting comprises traversing a path through the tree.
  • automatically adjusting comprises removing an existing media item in the sequence.
  • automatically adjusting comprises automatically adjusting the sequence according to changes in relevance of the one or more media items to the flow of conversation between the at least two interacting users.
  • the method further comprises automatically labeling added or edited media items according to a rating score.
  • the rating score comprises a context score of relevance of the media item to the at least two interacting users.
  • labeling comprises labeling the media item according to a quality score.
  • the quality score is determined according to a date and/or external rating.
  • labeling further comprises labeling the media item according to an interaction fit score.
  • the interaction fit score relates a property of the media item with an interaction type.
  • the method further comprises automatically organizing the prepared sequence according to the labeling of the media item.
  • the method further comprises determining an interaction type of the interaction between the at least two interacting users.
  • the interaction comprises two or more interactions between a plurality of pairs of users.
  • the method further comprises synchronizing the sequences between the at least two interacting users.
  • synchronizing comprises allowing the at least two interacting users to independently browse the sequence with changes sent from one user to another by a single click.
  • the method further comprises selecting media items for interactive display to at least one user of the at least two interacting users.
  • the method further comprises filtering adjusted media items.
  • the method further comprises editing the media items.
  • the method further comprises navigating the media items.
  • navigating comprises backwards or forwards sequential browsing of the electronic media items in the sequence.
  • the number of interacting users is two, and adjusting is performed only by one of the two interacting users.
  • the peer to peer interactive communication session comprises a voice telephone call.
  • automatically adjusting during the peer to peer interactive communication session comprises automatically adjusting the prepared sequence after the peer to peer interactive session has terminated.
  • automatically adjusting during the peer to peer interactive communication session comprises automatically adjusting the prepared sequence before the peer to peer interactive session has been initialized.
  • automatically adjusting comprises browsing the prepared sequence.
  • a first portion of the at least one prepared sequence is prepared in advance and a second portion of the at least one prepared sequence is prepared during the communication session.
  • prepared during the communication session comprises ordering the media items of the second portion of the at least one prepared sequence and inserting the ordered media items into the first portion of the at least one prepared sequence prepared in advance.
  • a system for automatically adjusting at least one prepared sequence of electronic media items during a peer to peer interactive communication session between at least two interacting users, comprising a module for automatically adjusting the prepared sequence according to changes to one or more media items made by at least one of the at least two interacting users during the peer to peer interactive communication session.
  • the system further comprises a memory in electrical communication with the module, the memory having a region configured as a cache for storing data intensive media items.
  • the system further comprises a mobile computer comprising the module, the mobile computer further comprising a communication port configured to send and receive media items.
  • the system further comprises a display unit for displaying the media items, the display unit being in electronic communication with the module.
  • the system further comprises a module configured to vary a media item update rate according to at least one of: network connectivity, computational abilities, and an importance of an interaction between the at least two interacting users.
  • the number of interacting users is two, and only one of the two users is running the module.
  • a computerized method of automatically adjusting at least one prepared sequence of electronic media items during a peer to peer interactive communication session between a plurality of interacting users, comprising: identifying a plurality of contacts from a contact list of a user; identifying, for each one of the plurality of contacts, a plurality of media items which are relevant thereto; identifying an interactive communication session between a first contact of the plurality of contacts and the user; automatically generating a sequence from at least some of the respective plurality of media items which are relevant to the first contact; automatically presenting the sequence to both the user and the first contact during the interactive communication session; and actively controlling the presentation of the sequence during the communication session according to instructions from both the user and the first contact.
  • Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
  • a data processor such as a computing platform for executing a plurality of instructions.
  • the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data.
  • a network connection is provided as well.
  • a display and/or a user input device such as a keyboard or mouse are optionally provided as well.
  • FIG. 1 is a schematic diagram of a system to enhance a communication session between interacting users, in accordance with exemplary embodiments of the invention
  • FIG. 2 is a flow chart of a method of adapting media content to a communication session between interacting users, in accordance with exemplary embodiments of the invention
  • FIG. 3 is an exemplary data structure of labeled media items, in accordance with exemplary embodiments of the invention.
  • FIGs. 4A-4D are schematic diagrams of organized sequences of media items, in accordance with exemplary embodiments of the invention.
  • FIGs. 5A-5D are schematic diagrams illustrating organized sequences of media items associated with a pair of interacting users, in accordance with exemplary embodiments of the invention;
  • FIGs. 6A-6C are schematic diagrams illustrating organized sequences of media items associated with three or more interacting users, in accordance with exemplary embodiments of the invention.
  • FIGs. 7A-7C are schematic diagrams illustrating possible adjustments of the media sequences, in accordance with exemplary embodiments of the invention.
  • FIGs. 8A-8D are schematic diagrams illustrating some possible sequence synchronizations, in accordance with exemplary embodiments of the invention.
  • FIG. 9 is an example of a smartphone screen shot displaying the media sequence, in accordance with exemplary embodiments of the invention.
  • An aspect of some embodiments of the invention relates to a computerized method of adapting a designated sequence of media items during a communication session between at least two interacting users.
  • the communication session is a peer to peer interactive communication session, for example, a voice phone call, a call using a mobile phone with a screen, an instant message (IM) communication session, a video call and/or a combination of the listed methods or other methods.
  • the sequence is at least partly prepared in advance.
  • the sequence is presented during the communication session in a manner that is adapted to the communication session and/or to the interaction between the users. Potentially, the method enhances the communication session by fitting the flow of the call.
  • the content is automatically organized into a sequence. For example, a callee views a sequence "A" created automatically from the data selected specifically for him, and a caller views another sequence "B" created automatically from the data selected specifically for him.
  • a user selects content to provide for the other user(s).
  • a callee views the sequence created from the data selected by a caller, and the caller views the sequence created from the data selected by the callee.
  • one or more users selects content to provide for him or herself.
  • a caller views a sequence created from the data he/she selected and/or a callee views a sequence created from the data he/she selected.
  • the contents of the sequences are merged into a single sequence that is viewed together by one or more users.
  • the sequence created from one user or more users is viewed together by all users, or a sub-group of users.
  • the sequence created from one user is divided into two or more different sequences (with or without overlapping content) for viewing by different users or groups of users.
  • the sequences are adjusted according to the interaction between the users.
  • editing of (e.g., drawing on) an electronic item of the first sequence is integrated into the second sequence.
  • the edited item is integrated into the first sequence.
  • the edited or added items are integrated into the sequence at the point of browsing.
  • the edited or added items are integrated into the sequence at a position determined according to a ranking of the electronic content.
  • the user browsing the sequence selects a pointer for controlling and/or focusing the presentation of the sequence to another user.
  • the pointer is selected by a single click.
  • a new electronic item is integrated into the sequence created by the user adding the new item (e.g., which is being displayed to the opposite user). Additionally or alternatively, the new electronic item is integrated into the sequence created by the opposite user.
  • the electronic item is deleted from the sequence, for example, by a single click. Potentially, offensive or personal items are quickly removed.
  • the sequence is personalized for the communication session, for example, based on who is involved in the call, the communication devices, previous communication data and/or other data.
  • the sequence is a subset selected from the selected electronic content items.
  • the subset forming the sequence is dynamically adjusted in response to the interaction between the users, for example, the topic of conversation of the users, the frequency of keywords, and/or actions of the users.
  • adjustments include removing items from the sequence, adding items to the sequence, and/or re-organizing the order of items in the sequence.
  • browsing the sequence is synchronized together to two or more users, so that an action taken by one of the users is almost immediately apparent to the other user.
  • the action taken by the first user is unrelated (or asynchronous) to the other user.
  • the synchronization is automatically performed by software. For example, changes to the sequence of the caller are almost immediately reflected in the sequence of the callee, for example, after the caller performs a single click.
  • both users view the same item at substantially the same period in time.
  • both users view items within the same category at substantially the same period in time.
  • changes to the sequence are made apparent to both users. For example, editing of a media item is displayed to both users at about the same period in time.
  • browsing is synchronized between the users.
  • each user is limited in the content that may be browsed during a period of interaction.
  • the users may browse the same content item and/or the users may browse content items that fall within a similar category, such as topic of conversation and/or category of similarity of the content between the users.
  • the present invention, in some embodiments thereof, relates to a method and/or system for interaction based media items and, more particularly, but not exclusively, to a method and/or system that adjusts a sequence of media items according to an interaction communication session between users.
  • FIG. 1 illustrates a communication enhancing system 100 for interaction by at least two users 102A-B, in accordance with exemplary embodiments of the invention.
  • users 102A-B interactively communicate with each other using one or more communication devices 104A-B, for example, smartphones, feature phones, cellphones, landlines, tablet computers, laptop computers, personal computers, television set, game console, and/or a combination thereof.
  • devices 104A-B comprise and/or are in electrical communication with one or more transceivers.
  • devices 104A-B communicate with each other through one or more communication links (wired and/or wireless) set up through the one or more transceivers, for example, through the internet, a cellular network, short range communication (e.g., Bluetooth), Ethernet, and/or other suitable channels.
  • devices 104A-B comprise a memory 116A-B and/or are in electrical communication with other storage devices (e.g., remote server 114).
  • memory 116A-B and/or other storage devices store one or more software modules for running a communication enhancement session. Details of the method and/or software modules of the communication enhancement session are described in more detail below, for example, with reference to figure 2.
  • the software enhancement session may include one or more modules that perform the following functions: preparation of the sequence before the call, adjustment of the sequence during the call and/or continuing the interaction with the sequence even after the peer to peer session (e.g., voice call, IM session) has been terminated.
  • the communication between users 102A-B is a peer to peer interactive communication session, for example, a voice phone call, a call using a mobile phone with a screen, an instant message (IM) communication session, a video call and/or a combination of the listed methods or other methods.
  • devices 104A-B comprise a visual display unit (e.g., high resolution screen).
  • users 102A-B prepare in advance a sequence of media items, for example, from one or more devices and/or networks in electronic communication with devices 104A-B, for example, audio recorder 104 (e.g., MP3 music), still and/or video camera 106 (e.g., pictures, videos), computer 108 (e.g., school project files), private network 110 (e.g., facebook links), public network 112 (e.g., internet hyperlinks from public sites).
  • the sequences are swapped between the users.
  • user 102A provides sequence 114A to user 102B and/or user 102B provides sequence 114B to user 102A.
  • user 102A-B uses his or her automatically generated sequence.
  • user 102A uses sequence 114A and/or user 102B uses sequence 114B.
  • the sequences are adjusted in response to the interaction between the users, for example, in response to the flow of the call and/or to actions taken by users during the call. Potentially, adjusting the sequences enhances the phone call of the users.
  • Figure 2 is a flowchart of a computerized method of automatically adapting a prepared sequence of media items between at least two interacting users, in accordance with exemplary embodiments of the invention.
  • the method is implemented by one or more software modules, for example, stored on communication devices.
  • an interaction type is determined. Potentially, the interaction type is used as an initial starting point, for example, for content labeling and/or content organization.
  • the interaction type is associated with contacts, for example, from a contact list of a user.
  • one or more contacts are identified from the contact list, for example, automatically by software and/or manually by the user.
  • the interaction type is associated with the number of interacting users, for example, a single user to a single user, a single user to a group of users, a group of users to a group of users, and/or a network of users.
  • the interaction type is associated with the communication hardware, for example, a smartphone with high resolution screen, and/or a laptop computer running communication software.
  • the interaction type is associated with the expected nature of the interaction, for example, business, commercial, social, and/or family.
  • the interaction type is associated with the expected length of the interaction, for example, a single session or multiple sessions.
  • the interaction type is associated with the expected mechanism of interaction, for example, phone call, messaging, video calling.
  • the interaction type is associated with the expected communication context and/or carrier company, for example, an international call, a free call, a VoIP call, a free SMS, an AT&T® call, and/or a Sprint® call.
  • the interaction type is an electronically stored data structure, for example, a table, a file, a record, an object, an array and/or other suitable structures.
  • the interaction type is determined automatically, for example, by a computer algorithm.
  • the interaction type is determined manually, for example, by the user.
  • the interaction type is determined partly automatically and partly manually, for example, for a group of users, each one manually selecting the interaction type, with an algorithm automatically determining the interaction.
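  • As a minimal sketch of how such an interaction type record might be represented (Python is assumed here, and all field names and default values are illustrative rather than taken from the disclosure):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InteractionType:
    """Illustrative interaction type record; every field name is an assumption."""
    contacts: List[str]            # contacts identified from the user's contact list
    nature: str = "social"         # e.g., "business", "commercial", "social", "family"
    mechanism: str = "voice_call"  # e.g., "voice_call", "messaging", "video_call"
    hardware: str = "smartphone"   # e.g., "smartphone", "laptop"
    expected_sessions: int = 1     # a single session or multiple sessions
    carrier_context: str = ""      # e.g., "international", "VoIP", "free SMS", carrier name

def determine_interaction_type(contacts, manual_nature=None):
    """Partly automatic, partly manual determination: a manual selection,
    if given, overrides the automatic default."""
    interaction = InteractionType(contacts=list(contacts))
    if manual_nature is not None:
        interaction.nature = manual_nature
    return interaction
```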
  • electronic media items are selected.
  • the electronic media items are selected for display to one or more users, for example, as will be described below with reference to box 216.
  • not all the selected items are displayed, for example, some may not be selected, some may be filtered out and some may not be browsed by the user.
  • a plurality of media items are identified for each of the plurality of contacts, for example, contacts from a list as described with reference to 202.
  • the plurality of media items are relevant to the plurality of contacts. For example, relevance of the media items to the contact is described in more detail below in 206.
  • the media items may be different for different contacts, may overlap between contacts, or may be similar for one or more contacts.
  • Electronic media items may be, for example, images, text files, computer application files, videos, sound effects, animations, games, and/or links to any of the mentioned items and/or combinations thereof.
  • Media items may contain the full data, partial data and/or a link to the data.
  • Media items may reside at any location, for example, social networks (e.g., facebook, twitter), private networks, the internet, the smartphone used to make the call, and/or servers of generic content providers (e.g., CNN, YouTube).
  • Media items may exist at the time of the selection, and/or may not yet exist (e.g., feeds from sites).
  • Media items may be selected manually, for example, by the user, for example, by an electronic tag, by copying of the link to the content, and/or by placing the data (or copy thereof) into an electronic folder.
  • media items may be selected automatically, for example, from website feeds.
  • the media item (e.g., selected at 204) is labeled, in accordance with exemplary embodiments of the invention.
  • the media items are labeled in way that helps to determine the most relevant, suitable and/or interesting items for discussion during the interactive communication session, for example, relevance to a specific contact.
  • the labeling is used to sort the items according to a priority level, as will be discussed below with reference to box 212.
  • the media item is labeled with one or more of: a quality score, an interaction fit score, a context score, the interaction type (e.g., as in 202) and/or other suitable labels.
  • labeling is performed automatically, for example, by a software algorithm.
  • labeling is performed manually, for example, by the user.
  • Labeling of the media item will be discussed with reference to FIG. 3, which is an example of a labeling data structure (e.g., a table), in accordance with exemplary embodiments of the invention.
  • Items Type refers to the type of file, for example, image, text, video, music.
  • File Name refers to the name of the stored media item.
  • Location refers to where the media is stored.
  • the media item is labeled with a "Quality Score", for example, a numerical value on a scale.
  • the quality score is calculated by a function of one or more associated input variables.
  • the quality score is associated with the date of creation and/or last update of the item ("Date").
  • the quality score is associated with an external rating, for example, number of positive approvals (e.g., user selected stars), and/or number of comments by other people.
  • the media item is labeled with an "Interaction Fit Score".
  • the Interaction Fit Score is associated with the fit of the content form type to the interaction type, for example, ease of reading or processing of the item by a human using the provided interaction.
  • the fit is determined according to the interaction type, for example, as determined in 202.
  • a relative value (e.g., a numerical "Score" and/or descriptive value) is determined for the interaction fit score, for example, by a software algorithm, for example, by a table associating an "Item description" with the relative score value.
  • for example, higher resolution images (e.g., easier to view), clearer text (e.g., easier to understand) and/or text with fewer words (e.g., easier to read) receive relatively higher interaction fit scores.
  • for example, visual content (e.g., photos) receives a 'high' rating, short text items receive a 'medium' rating, and/or long text items receive a 'low' rating.
  • the media item is labeled with a "Context Score”.
  • the context score is associated with the fit of the electronic item to one or both parties.
  • the context score comprises one or more sub-labels, for example, "How affiliated", "Relevance to Other Party", "Relationship Marking", "Timing Relevance", and/or other suitable labels.
  • "How affiliated” is associated with the affiliation of the content of the item and/or the creator of the content to the user(s) who will be receiving the content.
  • "How affiliated” is associated with the affiliation of the content to the user selecting the content.
  • Some examples of methods (automatic and/or manual) of determining affiliation include: tags attached to the content that link the content to specific individuals (e.g., author), and recognition of key features in the context (e.g., names in text, faces in images, email correspondence associated with a business context).
  • the media item is labeled with a "Relevance to Other Party" label.
  • the "Relevance to Other Party” is associated with the degree of relevance to one or more of the users in the interaction.
  • the "Relevance to Other Party" label comprises "Directly relevant", "Indirectly relevant", "Generally relevant" or other suitable labels.
  • “Directly relevant” is associated with the user, for example, a picture showing the user.
  • “Indirectly relevant” is associated with a feature not directly common to the users, for example, a picture of a family member of the user.
  • "Generally relevant” is associated with a property of the users, for example, demographics, hobbies and/or interests.
  • “Limited relevant” is used as a default label if other labels are not suitable.
  • a numeric scale of relevancy scores is used, for example, instead of discrete labels.
  • the media item is labeled with a "Relationship Marking".
  • the "Relationship Marking" is associated with the affiliation between the content and the users.
  • the "Timing Relevance” label is associated with one or more particular events that are currently occurring and/or that have recently occurred, and/or to the previous sessions. For example, on a person's birthday, a few days before, and/or a few days after the birthday, an item relating to the birthday will be marked with the Timing relevance. For example, items related to a game played between the parties in the previous session will be marked with a "high” timing relevance. For example, on Valentine's Day, items relating to "love” and/or to other Valentine's Day events will be marked with the "high" timing relevance.
  • the context score labels are determined according to a user profile, which includes, for example, a list of friends, a list of interests, a list of activities, demographics, or other suitable methods can be used to determine the context score.
  • items are labeled with a single label.
  • at least some items are labeled with two or more labels.
  • labels described are only an example, and that other suitable labels may be used.
  • labels may change depending on the content, the interaction and/or the users.
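  • Purely as a sketch of how the labels of FIG. 3 might be held in software (the field names, score ranges and the aggregate rating below are assumptions, not the patent's data structure):

```python
from dataclasses import dataclass

@dataclass
class LabeledMediaItem:
    """Illustrative labeled media item mirroring the example labeling table (FIG. 3)."""
    item_type: str           # "image", "text", "video", "music"
    file_name: str           # name of the stored media item
    location: str            # where the media is stored (device, social network, URL, ...)
    quality_score: float     # e.g., derived from the creation/update date and external ratings
    fit_score: float         # interaction fit score: how well the form suits the interaction type
    relevance: str           # "direct", "indirect", "general" or "limited"
    timing_relevance: float  # boost for birthdays, recent shared events, previous-session games

    def rating(self) -> float:
        """One possible aggregate rating: an average of the numeric labels, with the
        discrete relevance label mapped onto a numeric scale (an assumed mapping)."""
        relevance_value = {"direct": 1.0, "indirect": 0.7, "general": 0.4, "limited": 0.1}
        return (self.quality_score + self.fit_score
                + relevance_value.get(self.relevance, 0.1)
                + self.timing_relevance) / 4.0
```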
  • one or more media items are filtered.
  • filtering is performed according to a threshold, with items below (or above) the threshold being removed. Potentially, filtering media items removes the least relevant and/or interesting items.
  • filtering is performed according to a quality score threshold.
  • items having a quality score below the quality score threshold are filtered out.
  • a time threshold is used to filter out items that were created or updated before the set time threshold value.
  • an external threshold filter is used to filter out items that have relatively low external ratings, for example, few views, few comments and/or few stars.
  • an interaction fit score filter is used to filter out items that have relatively low interaction fit ratings. For example, long text is removed.
  • filters based on other labels for example, those discussed with reference to "Label Content" 206 are used to filter out items, for example, items that have relatively low score labels.
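  • A minimal sketch of the threshold-based filtering described above, reusing the illustrative LabeledMediaItem record from the labeling sketch (all threshold values are assumptions):

```python
from datetime import datetime, timedelta

def filter_items(items, quality_threshold=0.3, fit_threshold=0.2, newer_than=None):
    """Drop items whose scores fall below the assumed thresholds and, optionally,
    items created or updated before a cut-off time."""
    kept = []
    for item in items:
        if item.quality_score < quality_threshold:
            continue  # quality score threshold
        if item.fit_score < fit_threshold:
            continue  # interaction fit filter, e.g., long text is removed
        updated_at = getattr(item, "updated_at", None)
        if newer_than is not None and updated_at is not None and updated_at < newer_than:
            continue  # time threshold
        kept.append(item)
    return kept

# usage sketch: keep only items updated within the last year
# recent = filter_items(all_items, newer_than=datetime.now() - timedelta(days=365))
```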
  • one or more media items are edited. Potentially, editing the items improves the content interaction, for example, by removing sensitive data, by formatting media items so that they are easier to read and/or see.
  • media items are formatted to increase the interaction fit score.
  • long text items are truncated and/or broken into smaller pieces. Potentially, the shorter text item has a relatively higher interaction fit score than the previously longer text item.
  • media items are formatted to improve the fit of the item to the determined interaction. For example, if the interaction is 'business', then personal content and/or other non-business content is removed.
  • the media items are organized and/or ordered, in accordance with exemplary embodiments of the inventions.
  • the media items are relevant to one or more users, for example, to a contact.
  • FIGs. 4A-4D are schematic illustrations showing different ways of organizing and/or ordering the electronic content.
  • the content is ordered in a sequence which may be browsed by users.
  • a sequence is automatically generated from at least some of the respective plurality of media items which are relevant to one or more other users, for example, the first contact.
  • the sequence is suitable for presentation in a controlled manner on a suitable communication device, for example, as described herein.
  • the user and the first contact actively control the presentation of the sequence during the communication session, for example, the navigation of the sequence (e.g., as in 218), the synchronization of the sequence (e.g., as in 222), sending one or more media items to one another, the order of the sequence (e.g., as in 224) and/or other actions that are allowed on the sequence, for example, editing, addition and/or removal of media items (e.g., as in 220), and/or changes to the relevance of media items (e.g., relevance as described in 202, 204 and/or 206).
  • the sequence is prepared in advance of the interactive communication session, for example, before the session is initialized (e.g., as in 214).
  • the media items are selected in advance of the session, and the sequence is prepared, for example, using the methods described in boxes 206, 208, 210 and/or 212.
  • the media items are selected in advance of the session, and the sequence is labeled (e.g., as in 206), filtered (e.g., as in 208), edited (e.g., as in 210), ordered (e.g., as in 212) and/or loaded during the session.
  • a part of the sequence is prepared in advance, and optionally another part is prepared during the communication session.
  • the part prepared in advance is prepared using methods 204, 206, 208, 210 and/or 212.
  • the part prepared during the session is prepared using methods 204, 206, 208, 210 and/or 212, and optionally merged with the sequence prepared in advance, for example, the media items are ordered within the sequence prepared in advance.
  • FIG. 4A is a schematic illustration of a linear sequence 400 of media items 402A-F.
  • media items are logically linked as a chain, for example, 402B follows 402A, and 402C follows 402B.
  • linear sequence 400 is generated by sorting the labeled media items, for example, in order from highest to lowest.
  • sorting is performed according to the quality score, the interaction fit score, the relevance marking, other suitable scores and/or markings, and/or an aggregate numeric or discrete function of the parameters described with reference to figure 3 and/or other suitable parameters. For example, an average of the ratings of the parameters.
  • the ordering of items is based on creating the linear sequence with a mix of scores and/or item types, for example, based on a set of rules so as to create an interesting flow of items.
  • a relatively high quality photo item is inserted between two relatively lower quality text items.
  • the rule-set is monitored and/or updated periodically, for example, manually by a human and/or automatically by software, for example, based on new available content and/or relevant news.
  • content relating to a TV show may be made available for users, and labeled with a high priority, for example, if the users reside in a relevant area, and the show becomes very popular in that area within a defined range of time.
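  • As an illustrative sketch only (the disclosure does not specify a particular algorithm), the linear sequence of FIG. 4A could be built by sorting on the aggregate rating and then applying a simple mixing rule, for example avoiding two text items back to back:

```python
def build_linear_sequence(items):
    """Sort labeled items from highest to lowest aggregate rating (rating() as sketched above)."""
    return sorted(items, key=lambda item: item.rating(), reverse=True)

def interleave_types(sequence):
    """A toy mixing rule (an assumption): when two text items would be adjacent,
    pull the next non-text item forward instead."""
    result, pending = [], list(sequence)
    while pending:
        item = pending.pop(0)
        if result and result[-1].item_type == "text" and item.item_type == "text":
            swap = next((i for i, other in enumerate(pending) if other.item_type != "text"), None)
            if swap is not None:
                pending.insert(0, item)       # retry the text item later
                item = pending.pop(swap + 1)  # the non-text item, shifted by the insert above
        result.append(item)
    return result
```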
  • FIG. 4B is a schematic illustration of a branching sequence 404 of electronic content items.
  • content items are sorted into categories, for example, 406 A, 406B and 406C represent the start of three categories.
  • the items are sorted in the linear sequence.
  • branching sequence 404 is generated by categorizing the labeled electronic data items, for example, according to the "Relevance" category. For example, all directly relevant items are placed in a first sequence, all indirectly relevant items are placed in a second sequence, and/or all generally relevant items are placed in a third sequence. Optionally, the items are sorted (e.g., as in FIG. 4A) within each category.
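  • A short sketch of the categorization behind FIG. 4B, reusing build_linear_sequence from the previous sketch (the category names and their order are assumptions matching the relevance labels above):

```python
from collections import defaultdict

RELEVANCE_ORDER = ["direct", "indirect", "general", "limited"]  # assumed branch order

def build_branching_sequence(items):
    """Group labeled items into relevance branches and sort the items within each branch."""
    branches = defaultdict(list)
    for item in items:
        branches[item.relevance].append(item)
    return {category: build_linear_sequence(branches[category])
            for category in RELEVANCE_ORDER if branches[category]}
```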
  • FIG. 4C is a schematic illustration of an organized and ordered sequence 408 of electronic content items.
  • sequence 408 is a linear sequence.
  • categories 410A, 410B and 410C are sorted in a linear sequence.
  • individual items are sorted, for example, as described above.
  • organized and/or ordered sequence 408 is generated from an organized branched sequence.
  • categories 406A-406C of branch sequence 404 are sorted into the linear sequence.
  • the sequence is predetermined, for example, "Directly relevant”, followed by “Indirectly relevant”, followed by "Generally relevant” and followed by "Limited relevance”.
  • "Timely relevant" items (e.g., a friend's birthday)
  • "Shared Updates" (e.g., shared photos)
  • "Other person's updates" (e.g., recent status updates by the other person)
  • "Our interests" (e.g., recent football statistics of our liked team)
  • "My updates" (e.g., the recent YouTube videos that I watched an hour ago and/or recent photos I took with my camera).
  • FIG. 4D is a schematic illustration of an organized and/or ordered tree sequence 412 of electronic content items.
  • the items are organized into a multiple level, branching tree structure.
  • each higher level (or branching point) represents a higher level category.
  • first branching points 414A-414C represent different users (or groups thereof) that will receive the sequence.
  • items and/or categories within the higher level are sorted, for example as described above.
  • organized and/or ordered sequence 408 is generated from tree 412.
  • sequences 410A-C respectively represent the sub-trees of roots 414A-C, for example, users (or groups thereof) A-C.
  • sequence 408 is a sorted and/or ordered subset of items from tree 412.
  • sequence 408 is a sorted and/or ordered sequence of items under branch root 414A, for example, for user A.
  • a plurality of sequences 408 are formed, for example, three sequences for three users, corresponding to three sub-trees 414A-C.
  • the same data item may be marked for different users.
  • different users are provided with different customized sequences.
  • the different users are provided with different content and/or different ordering of the content.
  • FIGs. 5A-5D which are schematic diagrams of some examples of different ways in which the sequences of electronic content are initialized.
  • sequences are swapped between a pair of users.
  • user A may refer to the caller and user B may refer to the callee.
  • sequence 502, prepared by user A (504), is provided to user B (508).
  • sequence 506, prepared by user B (508), is provided to user A (504).
  • swapping of the sequences allows two users to share their content with each other so that each can view the content prepared by and/or for the other user. For example, each user can provide a sequence of content of their summer vacation to his or her friend.
  • sequence 510 is provided to user B (514).
  • FIG. 5B also illustrates sharing of sequence 510 by both user A (512) and user B (514). Additional details of browsing the sequence, either independently or together will be discussed, for example with reference to FIGs. 8A-8D. Potentially, sharing the sequence allows two users to interact with each other over common content. For example, one user can discuss content from an event with his or her friend.
  • sequences are merged together. As shown, sequence 520 is a merger of a sequence prepared by user A (522) and by user B (524).
  • a portion of one sequence is merged with the entire other sequence.
  • a portion of one sequence is merged with a portion of the other sequence.
  • the merger of sequences may be performed, for example, by treating each content item in each sequence as an independent item, then running an organization and/or ordering algorithm (e.g., as described with reference to FIG. 4A-4D) to generate the merged and/or ordered sequence.
  • merging the sequences allows two users to interact with each other over the content prepared by both users. For example, both users can combine content from an event that both attended together.
  • a retail store may send its sequence for merging with the content sequence of the caller in advance and/or during the interaction, so that the caller may be aware of, and/or be able to interact with the shop's catalog, as the store and the caller interact over the phone.
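  • One possible sketch of the merger of FIG. 5C: every item of both sequences is treated as an independent labeled item and the ordering pass is simply re-run (the de-duplication key and the helper names carried over from the earlier sketches are assumptions):

```python
def merge_sequences(sequence_a, sequence_b):
    """Merge two users' sequences by pooling all items and re-ordering the pool."""
    pooled = list(sequence_a) + list(sequence_b)
    unique = {item.file_name: item for item in pooled}  # drop duplicates shared by both users
    return interleave_types(build_linear_sequence(unique.values()))
```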
  • sequences may be kept by the user and not provided to others.
  • a sequence 530 prepared by and/or for user A (532) is kept by user A (532).
  • a sequence 534 prepared by and/or for user B (536) is kept by user B (536).
  • each user prepares content for the interaction, but keeps the content confidential.
  • the user is a sales person that prepared content to discuss with a client, but does not want the client to have access to the materials.
  • the user sends a part of the content sequence to the other user.
  • a sales person sends a prepared offering content item to the client.
  • a user chooses to view and/or interact on content prepared for a different interaction than the one taking place. For example, a user browses and/or sends news and/or gossip relating to one family member while talking to another family member.
  • the cases discussed with reference to FIGs. 5A-5D are extended to three or more users, or to groups of users, for example, as illustrated by FIGs. 6A-6C.
  • FIG. 6A illustrates the case of a user A (602) providing a sequence 608 to a user B (604), and a sequence 610 to a user C (606).
  • sequences 608 and 610 are each formed from common content, for example, the sequences share some content in common.
  • Forming two or more customized sequences for two or more users or groups of users from a broader sequence (e.g., tree) has been previously discussed, for example, with reference to FIGs. 4C-4D.
  • each user receives only the content that is relevant for his or her interaction with the preparing user. For example, a CEO of an international company prepares content for facilities in different countries. Each facility will receive content containing common important material (e.g., global sales), but will not receive material relevant to facilities located in other countries (e.g., pictures from the yearly office party).
  • FIG. 6B illustrates the case of a user A (612) providing a first sequence 614 to a user B (616), and a second sequence 618 to a user C (620).
  • sequences 614 and 618 are different, for example, each having been separately prepared by user A (612). Potentially, each user receives only the sequence relevant to him or her, without access to the other sequence. For example, an arbitrator having a discussion with opposing sides prepares separate content sequences to make sure that each side does not receive confidential material.
  • FIG. 6C illustrates the case of four users interacting together.
  • each user A, B, C and D is able to interact with everybody else.
  • the four users A-D may be divided into three groups, user A, users B and C, and user D.
  • Users A and D may merge their content together.
  • User A and users B-C may exchange media items.
  • User D may prepare a sequence to provide to users B-C.
  • the interactive communication session between a first contact (e.g., from a contact list of a plurality of contacts) and the user is identified, for example, automatically by a software algorithm detecting a phone call.
  • the initial state of the interaction is marked, for example, by running of the application.
  • the initial state is automatically triggered, for example, by a phone call, a received SMS and/or a received content item.
  • the initial state is marked when the user selects to view a content sequence in preparation for an interaction, and/or after an interaction.
  • the initialization process is started even if contact has not yet been made with all users, for example, all the users have not yet joined the phone call.
  • sequences are provided, received and/or merged during the initial stage.
  • one or more media items are displayed, in accordance with exemplary embodiments of the invention.
  • media items are displayed on a display unit of a communication device, for example, on the screen of a smartphone, feature phone, tablet computer, personal computer, television, and/or game console.
  • the sequence is automatically presented to both the user and the first contact during the interactive communication session, for example, by software.
  • the sequence is only presented to the first contact, or other selected contacts.
  • the sequence is only presented to the user.
  • the sequence of media items is browsed.
  • browsing is performed manually by the user of the communication device displaying the content.
  • browsing is performed manually by another user, optionally interacting with the user of the device.
  • browsing is performed automatically by software, for example, according to an automatic analysis of the call.
  • the browsing is performed in the same order as the prepared sequence, for example, from relatively higher ranked items to relatively lower ranked items.
  • browsing is performed in reverse order.
  • browsing is performed by jumping between content items. For example, if software automatically detects a topic of conversation, the software automatically browses the media items related to the topic of conversation.
  • Browsing may be performed, for example, by a web browser, as a slide show, using a touch-screen scroll function, or other suitable methods.
  • browsing is performed according to a timer, and/or manually by the user.
  • FIG. 9 is a schematic illustration of a mobile communication device, for example, a smartphone.
  • browsing 910 is performed by placing a finger on a touch-screen and manually scrolling through the content.
  • an interaction state is determined, in accordance with exemplary embodiments of the invention. Potentially, the interaction state is used to dynamically adjust the sequence according to the interaction between the users.
  • the interaction state is associated with the interaction of the users with the sequence, and optionally with each other.
  • the interaction state is associated with changes to the sequence and/or media items of the sequence, for example, editing the content, deleting the content and/or adding new content.
  • the interaction state is associated with changes to the interaction between the users themselves, for example, changes to the topic of discussion, interest in the topic of discussion.
  • the interaction state may be implemented as any suitable software method, for example, calculation of a numerical value, a state table, or other suitable algorithms.
  • FIGs. 7A-7C are simplified schematics showing some possible outcomes of users interacting with the content sequence during the phone call.
  • the interaction occurs by one user sending instructions to a second user by a single click.
  • one or more user actions may be executed on media items within the sequence, for example, editing an item, adding an item, removing an item.
  • a media item 702 is edited by user A, forming an edited media item 704.
  • the sequences of users A and/or B have been provided by one or more methods, for example, swapping of prepared sequences, each user preparing his/her own respective sequence, and/or the sequences are a merger of a self-prepared and/or swapped sequence.
  • edited items include: pictures that were drawn on by the user, edited text, and/or comments added to content.
  • edited media item 704 is inserted into the sequence being browsed by user B.
  • edited media item 704 is inserted into the sequence at user B's current location, for example, without regard to the rest of the content of the stream. Potentially, user B is almost immediately aware of edited content item 704.
  • edited media item 704 is inserted into the sequence according to the labeling of item 704 and/or in relation to the rest of the sorted sequence. For example, further down the sequence. Potentially, user B is provided with a smoother flow of content, according to the selected ordering of the sequence.
  • edited media item 704 replaces media item 702 in the sequence being browsed by user A.
  • edited media item 704 is re-labeled and the sequence is reformed by resorting the list. The original item 702 is kept or removed.
  • the edited content item is sent to the other user using a single click button 920 (FIG. 9).
  • the single click feature may be used because each sequence is built and tailored for a single interaction with a single user.
  • a new media item 706 is inserted into the sequence being browsed by user A and/or by user B.
  • New items may be provided by user A, user B, other users, and/or from the web (e.g., generated by automatic feeds).
  • new item 706 is inserted into the current location in the sequence being browsed by the user.
  • new item 706 is labeled, and the sequence is re-sorted, with the position of new item 706 within the sequence determined according to the new sorting.
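  • The two insertion behaviours described for edited or new items, dropping the item at the current browsing position or re-sorting it into place, might look roughly as follows (the cursor handling is an illustrative assumption):

```python
def insert_at_cursor(sequence, cursor_index, new_item):
    """Insert an edited or new media item immediately after the position being browsed."""
    updated = list(sequence)
    updated.insert(cursor_index + 1, new_item)
    return updated

def insert_by_ranking(sequence, new_item):
    """Label the item implicitly through rating() and re-sort the sequence, so that the
    item lands according to the selected ordering rather than the browsing position."""
    return build_linear_sequence(list(sequence) + [new_item])
```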
  • media item 708 is deleted.
  • deletion is performed by the browsing user.
  • deletion is performed by the user that prepared the sequence.
  • the content items on opposite ends of deleted item 708 are joined together (e.g., assuming the sequence is in the ordered state).
  • the interaction state is determined according to the content being discussed.
  • the topic of discussion is detected, for example, by software analyzing the labeling of the browsed content for similarity, by software scanning instant message communication for key words and/or voice recognition software detecting key words.
  • media related to the topic of discussion is relabeled with relatively higher values.
  • media not being discussed is removed. For example, if content is skipped over and/or not discussed much relative to other content, the minimally or undiscussed media is removed from the sequence and/or relabeled with relatively lower values.
  • automatic learning algorithms are applied, to provide relatively lower values or relatively higher values for future sequences, for example, based on the above feedback criteria.
  • the interaction state is determined according to the amount of time spent discussing the topic. For example, topics with more time spent on them are relabeled with a higher value.
  • the interaction state is determined according to the total amount of time spent interacting with the user. For example, in long phone-calls content items that are less news oriented and/or more interaction oriented (e.g., games) are offered to the users.
  • the interaction state is determined according to the rate of interaction during the content. For example, topics in which many more words are exchanged are relabeled with a higher value relative to topics in which little is being said.
  • the interaction state is determined according to the topic discussed earlier in the conversation. For example, topics discussed at the start of the interaction receive relatively higher ratings than topics discussed towards the end of the conversation.
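  • A hedged sketch of how the interaction state could feed back into the labels: items matching detected conversation keywords are boosted, skipped items are decayed, and the sequence is re-sorted (keyword matching against the file name is an assumption; the disclosure only requires that relevance is adjusted):

```python
def update_for_topic(items, detected_keywords, boost=0.3, decay=0.1):
    """Raise the timing relevance of items matching the detected topic of conversation,
    lower it for items that are not being discussed, then re-order the sequence."""
    keywords = {keyword.lower() for keyword in detected_keywords}
    for item in items:
        text = f"{item.file_name} {item.item_type}".lower()
        if any(keyword in text for keyword in keywords):
            item.timing_relevance = min(1.0, item.timing_relevance + boost)
        else:
            item.timing_relevance = max(0.0, item.timing_relevance - decay)
    return build_linear_sequence(items)
```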
  • FIGs. 8A- 8D are simplified schematics showing some possible ways of synchronizing sequences between the users, in accordance with exemplary embodiments of the invention.
  • FIG. 8A is a simplified schematic showing a user A (802) browsing a sequence 804 provided by a user B (806).
  • User B (806) is browsing a sequence 808, provided by user A (802).
  • each user A (802) and user B (806) browse their respective sequences in a manner that is independent of each other (e.g., asynchronous browsing).
  • An arrow 810 shows the browsing position in sequence 804 as being viewed by user A (802).
  • An arrow 812 shows the browsing position in sequence 808 as being viewed by user B (806). Potentially, each user browses their sequence at their own pace.
  • FIG. 8B is a simplified schematic showing a user A (820) and a user B (822) browsing a shared sequence 824.
  • each user A (820) and user B (822) browse shared sequence 824 in a manner that is independent of each other.
  • An arrow 826 shows the browsing position in sequence 824 of user A (820).
  • An arrow 828 shows the browsing position in sequence 824 of user B (822). Potentially, each user browses the shared sequence at their own pace.
  • FIG. 8C is a simplified schematic showing a user A (830) and a user B (832) browsing a shared sequence 834.
  • the browsing of shared sequence 834 by user A (830) and user B (832) is synchronized.
  • An arrow 836 shows the browsing position of user A (830), and an arrow 838 shows the browsing position of user B (832). Both arrows 836 and 838 point to the same content item in sequence 834.
  • the browsing is synchronized so that the same media item is displayed to both users at about the same time.
  • browsing is synchronized so that if one user navigates to a different media item, the other user is automatically navigated to the same content item. Potentially, the synchronization allows both users to continuously browse the media items in the sequence together.
  • FIG. 8D is a simplified schematic showing a user A (840) and a user B (842) browsing their own respective sequences 844 and 846.
  • the sequences 844 and/or 846 are sub-organized into sequential categories.
  • sequences 844 and 846 are first comprised of 'Directly relevant content' (848, 852), followed by 'Indirectly relevant content' (850, 854).
  • sequences 844 and 846 are first comprised of 'Trip content', followed by 'Birthday content'.
  • browsing by categories is synchronized so that both users are browsing content within the same category.
  • browsing within the category is not synchronized, so that users browse independently within the category.
  • an arrow 856 shows the browsing position within category 850 by user A (840).
  • An arrow 858 shows the browsing position within category 854 by user B (842).
  • synchronization by content allows both users to be synchronized to the overall topic, but provides freedom to browse the content within the topic independently.
  • both users may browse a clothing catalog or different catalogs from different stores, and edit the items with "like" marks.
  • Each user may browse at his/her own pace, while both users get to view each other's "likes" as they come across the same items.
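  • A minimal sketch of the category-level synchronization of FIG. 8D: browsing positions stay independent, but a category change by one user is pushed to the other over whatever messaging channel the session uses (the message format and callbacks are assumptions):

```python
class CategorySync:
    """Keep two users on the same content category while letting each browse freely within it."""

    def __init__(self, send_to_peer):
        self.send_to_peer = send_to_peer  # callable delivering a small message to the peer device
        self.category = None

    def local_navigate(self, item):
        """Called when the local user browses to an item; broadcast any category change."""
        if item.relevance != self.category:
            self.category = item.relevance
            self.send_to_peer({"type": "category_change", "category": self.category})

    def on_peer_message(self, message, jump_to_category):
        """Called when the peer reports a category change; move the local view accordingly."""
        if message.get("type") == "category_change":
            self.category = message["category"]
            jump_to_category(self.category)
```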
  • the electronic media items are stored in the form of a tree with branches.
  • the tree is traversed by a path according to the interaction state. For example, at a point which branches into different topics of discussion (e.g., birthday vs. beach vacation), the media is selected according to the topic of conversation as per the interaction state.
  • the tree is traversed forward, reverse and/or in a loop.
  • the tree is re-organized according to the interaction state. For example, new branch points are selected according to the interaction state.
  • the interaction type (e.g., as initially set in 202) is adjusted, for example, as conditions are changed during the interaction, for example, as users are added or removed from the conference call.
  • the sequence is adjusted and/or updated, in accordance with exemplary embodiments of the invention.
  • the sequence is adjusted and/or updated according to the interaction state, for example, as determined in box 220. Potentially, adjusting the sequence according to the interaction provides for an enhanced and dynamic media based interaction between the users.
  • instructions from both the user and the first contact and/or other contacts are automatically controlled.
  • the instructions refer to the presentation of the sequence during the communication session. For example, addition of items, editing of items, removal of items, browsing of items, relabeling of items, or other presentation instructions.
  • changed media items of the sequence are relabeled (e.g., as in 206), re-filtered (e.g., as in 208), re-edited (e.g., as in 210) and/or re-organized (e.g., as in 212).
  • The sequence is re-sorted according to the interaction state. For example, a function maps the identified interaction state to an ordering of the content, for example according to context rating categories.
  • changes in topic of conversation trigger an adjustment in the media sequence, for example, to reflect the priority of the new topic.
  • the media is re-labeled (e.g., as in 206), re-filtered (e.g., as in 208), re-edited (e.g., as in 210) and/or re-organized (e.g., as in 212), reflecting the changes according to the interaction state.
  • the adjusted sequence dynamically follows the interaction between the users, providing the users with the media that is most relevant to their interaction (e.g., topic of conversation).
  • the adjustment of the sequence is continued even after a part of the communication session has terminated.
  • For example, the part of the session that has terminated is the peer to peer communication session (e.g., voice call, IM session).
  • users may continue to interact with the sequences, for example, as described above.
  • one or more actions are taken after termination of the session, for example, automatically sending a 'bye-bye' and/or the users playing a game with one another.
  • Each mobile device 104A and/or 104B prepares content sequence 114A and/or 114B for consumption and/or sending to the other user in advance of the communication session.
  • each mobile device 104A-B processes the received content stream prior to the communication session.
  • The media stream resides on an external device in data communication with mobile device 104A-B, for example, computer 108, a remote server 114 and/or other devices with memory capacity.
  • a hyperlink to the media is provided, and media items are downloaded as they are browsed.
  • mobile devices 104A-B are in electrical communication with a memory 116A-B (e.g., stored thereon).
  • memory 116A-B is suitable to store cached data.
  • data intensive media is cached and locally stored on memory 116A-B.
  • the media sequence is prepared before the interaction and updated during the interaction only if network availability permits.
  • changes are stored until the network communication is restored.
  • The update rate and/or frequency vary for different interactions. For example, media for interactions that are more frequent and/or are of higher relative importance to the user are updated more frequently. For example, an interaction with a close friend is updated more frequently than an interaction with a remote acquaintance.
  • compositions, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
  • a compound or “at least one compound” may include a plurality of compounds, including mixtures thereof.
  • range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
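To illustrate the tree-of-media-items arrangement referred to in the list above, the following Python fragment is a minimal sketch (not part of the original disclosure) of one possible way to store media items as a branching tree and to traverse it according to the topics carried by the interaction state; all class, field and file names are hypothetical.

    # Hypothetical sketch only: media items stored as a tree with branches; the
    # path taken through the tree is chosen according to the interaction state
    # (e.g., the detected topic of conversation). All names are illustrative.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class MediaNode:
        item: str       # e.g., file name or URL of the media item
        topic: str      # topic label matched against the interaction state
        children: List["MediaNode"] = field(default_factory=list)

    def traverse_by_topic(root: MediaNode, topics_in_order: List[str]) -> List[str]:
        # At each branch point, prefer the child whose topic matches the next
        # topic of conversation; otherwise fall back to the first child.
        path = [root.item]
        node = root
        for topic in topics_in_order:
            if not node.children:
                break
            match: Optional[MediaNode] = next(
                (c for c in node.children if c.topic == topic), None)
            node = match or node.children[0]
            path.append(node.item)
        return path

    root = MediaNode("greeting.jpg", "general", [
        MediaNode("birthday_cake.jpg", "birthday",
                  [MediaNode("party_invite.txt", "birthday")]),
        MediaNode("beach.jpg", "vacation",
                  [MediaNode("hotel_offer.pdf", "vacation")]),
    ])
    # If the conversation drifts to the beach vacation, that branch is followed.
    print(traverse_by_topic(root, ["vacation", "vacation"]))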

Abstract

According to an aspect of some embodiments of the present invention there is provided a computerized method of automatically adjusting at least one prepared sequence of electronic media items during an interactive communication session between a plurality of interacting users, the method comprising identifying a plurality of contacts from a contact list; identifying for each one of the plurality of contacts a plurality of media items which are relevant thereto; identifying an interactive communication session between a first contact of the plurality of contacts and the user; automatically generating a sequence from at least some of the respective plurality of media items which are relevant to the first contact; automatically presenting the sequence to both the user and the first contact during the interactive communication session; and actively controlling instructions from both the user and the first contact of the presentation of the sequence during the communication session.

Description

INTERACTION RELATED CONTENT
RELATED APPLICATIONS
This application claims the benefit of priority under 35 USC § 119(e) of U.S. Provisional Patent Application No. 61/665,926 filed 29 June 2012, the contents of which are incorporated herein by reference in their entirety.
FIELD AND BACKGROUND OF THE INVENTION
The present invention, in some embodiments thereof, relates to a method and/or system for interaction based media items and, more particularly, but not exclusively, to a method and/or system that adjusts a sequence of media items according to an interaction communication session between users.
SUMMARY OF THE INVENTION
An aspect of some embodiments of the invention relates to a method and/or system for enhancing a communication session between two or more people, by adapting designated content during the communication session according to the communication session and/or the flow of the session.
According to an aspect of some embodiments of the present invention there is provided a computerized method of automatically adjusting at least one prepared sequence of electronic media items during a peer to peer interactive communication session between at least two interacting users, the method comprising displaying the media items of the at least one prepared sequence to at least one of the at least two interacting users on a display device; and automatically adjusting the at least one prepared sequence according to an action executed on one or more media items by at least one of the at least two interacting users during the peer to peer interactive communication session.
According to some embodiments of the invention, the one prepared sequence is prepared in advance of the peer to peer interactive communication session.
According to some embodiments of the invention, automatically adjusting comprises automatically inserting an edited media item into the sequence. According to some embodiments of the invention, automatically adjusting comprises adding a new electronic item into the sequence.
According to some embodiments of the invention, a single click sends the adjusted media item to a corresponding user.
According to some embodiments of the invention, a first user prepares a first sequence for providing to the first user, and a second user prepares a second sequence for providing to the second user. Optionally, a subset of media items of the first sequence is provided to the second user and a subset of media items of the second sequence is provided to the first user.
According to some embodiments of the invention, a first user prepares a first sequence for providing to a second user, and the second user prepares a second sequence for providing to the first user.
According to some embodiments of the invention, the first and second sequences are merged and organized into a third sequence for display to the first and second users.
According to some embodiments of the invention, a first user prepares a first sequence for providing to the first user and to one or more second users. Optionally, the first sequence is divided into a second sequence for display to a second user and a third sequence for display to a third user. Optionally, the second sequence and the third sequence have at least some common electronic media items.
According to some embodiments of the invention, the sequence is a branching tree of media items, and adjusting comprises traversing a path through the tree.
According to some embodiments of the invention, automatically adjusting comprises removing an existing media item in the sequence.
According to some embodiments of the invention, automatically adjusting comprises automatically adjusting the sequence according to changes in relevance of the one or more media items to the flow of conversation between the at least two interacting users.
According to some embodiments of the invention, the method further comprises automatically labeling added or edited media items according to a rating score. Optionally, the rating score comprises a context score of relevance of the media item to the at least two interacting users. Optionally, labeling comprises labeling the media item according to a quality score. Optionally, the quality score is determined according to a date and/or external rating. Optionally, labeling further comprises labeling the media item according to an interaction fit score. Optionally, the interaction fit score relates a property of the media item with an interaction type.
According to some embodiments of the invention, the method further comprises automatically organizing the prepared sequence according to the labeling of the media item.
According to some embodiments of the invention, the method further comprises determining an interaction type of the interaction between the at least two interacting users.
According to some embodiments of the invention, the interaction comprises two or more interactions between a plurality of pairs of users.
According to some embodiments of the invention, the method further comprises synchronizing the sequences between the at least two interacting users.
According to some embodiments of the invention, synchronizing comprises allowing the at least two interacting users to independently browse the sequence with changes sent from one user to another by a single click.
According to some embodiments of the invention, the method further comprises selecting media items for interactive display to at least one user of the at least two interacting users.
According to some embodiments of the invention, the method further comprises filtering adjusted media items.
According to some embodiments of the invention, the method further comprises editing the media items.
According to some embodiments of the invention, the method further comprises navigating the media items. Optionally, navigating comprises backwards or forwards sequential browsing of the electronic media items in the sequence.
According to some embodiments of the invention, the number of interacting users is two, and adjusting is performed only by one of the two interacting users.
According to some embodiments of the invention, the peer to peer interactive communication session comprises a voice telephone call. According to some embodiments of the invention, automatically adjusting during the peer to peer interactive communication session comprises automatically adjusting the prepared sequence after the peer to peer interactive session has terminated.
According to some embodiments of the invention, automatically adjusting during the peer to peer interactive communication session comprises automatically adjusting the prepared sequence before the peer to peer interactive session has been initialized.
According to some embodiments of the invention, automatically adjusting comprises browsing the prepared sequence.
According to some embodiments of the invention, a first portion of the at least one prepared sequence is prepared in advance and a second portion of the at least one prepared sequence is prepared during the communication session. Optionally, preparing during the communication session comprises ordering the media items of the second portion of the at least one prepared sequence and inserting the ordered media items into the first portion of the at least one prepared sequence prepared in advance.
According to an aspect of some embodiments of the present invention there is provided a system for automatically adjusting at least one prepared sequence of electronic media items during a peer to peer interactive communication session between at least two interacting users, the system comprising a module for automatically adjusting the prepared sequence according to changes to one or more media items made by at least one of the at least two interacting users during the peer to peer interactive communication session.
According to some embodiments of the invention, the system further comprises a memory in electrical communication with the module, the memory having a region configured as a cache for storing data intensive media items.
According to some embodiments of the invention, the system further comprises a mobile computer comprising the module, the mobile computer further comprising a communication port configured to send and receive media items.
According to some embodiments of the invention, the system further comprises a display unit for displaying the media items, the display unit being in electronic communication with the module. According to some embodiments of the invention, the system further comprises a module configured to vary a media item update rate according to at least one of: network connectivity, computational abilities, and an importance of an interaction between the at least two interacting users.
According to some embodiments of the invention, the number of interacting users is two, and only one of the two users is running the module.
According to an aspect of some embodiments of the present invention there is provided a computerized method of automatically adjusting at least one prepared sequence of electronic media items during a peer to peer interactive communication session between a plurality of interacting users, the method comprising identifying a plurality of contacts from a contact list of a user; identifying for each one of the plurality of contacts a plurality of media items which are relevant thereto; identifying an interactive communication session between a first contact of the plurality of contacts and the user; automatically generating a sequence from at least some of the respective plurality of media items which are relevant to the first contact; automatically presenting the sequence to both the user and the first contact during the interactive communication session; and actively controlling instructions from both the user and the first contact of the presentation of the sequence during the communication session.
Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
For example, hardware for performing selected tasks according to embodiments of the invention could be implemented as a chip or a circuit. As software, selected tasks according to embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In an exemplary embodiment of the invention, one or more tasks according to exemplary embodiments of method and/or system as described herein are performed by a data processor, such as a computing platform for executing a plurality of instructions. Optionally, the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data. Optionally, a network connection is provided as well. A display and/or a user input device such as a keyboard or mouse are optionally provided as well.
BRIEF DESCRIPTION OF THE DRAWINGS
Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
In the drawings:
FIG. 1 is a schematic diagram of a system to enhance a communication session between interacting users, in accordance with exemplary embodiments of the invention;
FIG. 2 is a flow chart of a method of adapting media content to a communication session between interacting users, in accordance with exemplary embodiments of the invention;
FIG. 3 is an exemplary data structure of labeled media items, in accordance with exemplary embodiments of the invention;
FIGs. 4A-4D are schematic diagrams of organized sequences of media items, in accordance with exemplary embodiments of the invention;
FIGs. 5A-5D are schematic diagrams illustrating organized sequences of media items associated with a pair of interacting users, in accordance with exemplary embodiments of the invention;
FIGs. 6A-6C are schematic diagrams illustrating organized sequences of media items associated with three or more interacting users, in accordance with exemplary embodiments of the invention;
FIGs. 7A-7C are schematic diagrams illustrating possible adjustments of the media sequences, in accordance with exemplary embodiments of the invention;
FIGs. 8A-8D are schematic diagrams illustrating some possible sequence synchronizations, in accordance with exemplary embodiments of the invention; and
FIG. 9 is an example of a smartphone screen shot displaying the media sequence, in accordance with exemplary embodiments of the invention.
DESCRIPTION OF SPECIFIC EMBODIMENTS OF THE INVENTION
An aspect of some embodiments of the invention relates to a computerized method of adapting a designated sequence of media items during a communication session between at least two interacting users. In exemplary embodiments, the communication session is a peer to peer interactive communication session, for example, a voice phone call, a call using a mobile phone with a screen, an instant message (IM) communication session, a video call and/or a combination of the listed methods or other methods. Optionally, the sequence is at least partly prepared in advance. In exemplary embodiments of the invention, the sequence is presented during the communication session in a manner that is adapted to the communication session and/or to the interaction between the users. Potentially, the method enhances the communication session by fitting the flow of the call.
In exemplary embodiments, the content is automatically organized into a sequence. For example, a callee views a sequence "A" created automatically from the data selected specifically for him, and a caller views another sequence "B" created automatically from the data selected specifically for him.
In exemplary embodiments, a user, or each user selects content to provide for the other user(s). For example, a callee views the sequence created from the data selected by a caller, and the caller views the sequence created from the data selected by the callee. Alternatively or additionally, one or more users selects content to provide for him or herself. For example, a caller views a sequence created from the data he/she selected and/or a callee views a sequence created from the data he/she selected. Alternatively or additionally, the contents of the sequences are merged into a single sequence that is viewed together by one or more users. Alternatively or additionally, the sequence created from one user or more users is viewed together by all users, or a sub-group of users. Alternatively or additionally, in the case of three or more users, the sequence created from one user is divided into two or more different sequences (with or without overlapping content) for viewing by different users or groups of users.
In exemplary embodiments, the sequences are adjusted according to the interaction between the users. Optionally, editing of (e.g., drawing on) an electronic item of the first sequence is integrated into the second sequence. Additionally or alternatively, the edited item is integrated into the first sequence.
In exemplary embodiments, the edited or added items are integrated into the sequence at the point of browsing. Alternatively or additionally, the edited or added items are integrated into the sequence at a position according to a ranking of the electronic content.
In exemplary embodiments, the user browsing the sequence selects a pointer for controlling and/or focusing the presentation of the sequence to another user. Optionally, the pointer is selected by a single click.
Optionally, a new electronic item is integrated into the sequence created by the user adding the new item (e.g., which is being displayed to the opposite user). Additionally or alternatively, the new electronic item is integrated into the sequence created by the opposite user.
Optionally, the electronic item is deleted from the sequence, for example, by a single click. Potentially, offensive or personal items are quickly removed.
In exemplary embodiments, the sequence is personalized for the communication session, for example, based on who is involved in the call, the communication devices, previous communication data and/or other data.
Optionally, the sequence is a subset selected from the selected electronic content items. Optionally, the subset forming the sequence is dynamically adjusted in response to the interaction between the users, for example, the topic of conversation of the users, the frequency of keywords, and/or actions of the users. Some examples of adjustments include: removal of items from the sequence, addition of items to the sequence, and/or re-organization of the order of items in the sequence.
In exemplary embodiments, browsing the sequence is synchronized between two or more users, so that an action taken by one of the users is almost immediately apparent to the other user. Optionally, the action taken by the first user is unrelated (or asynchronous) to the other user. Optionally, the synchronization is automatically performed by software. For example, changes to the sequence of the caller are almost immediately reflected in the sequence of the callee, for example, after the caller performs a single click. Optionally, both users view the same item at substantially the same period in time. Alternatively or additionally, both users view items within the same category at substantially the same period in time. Alternatively or additionally, changes to the sequence are made apparent to both users. For example, editing of a media item is displayed to both users at about the same period in time.
Optionally, browsing is synchronized between the users. Optionally, each user is limited in the content that may be browsed during a period of interaction. For example, the users may browse the same content item and/or the users may browse content items that fall within a similar category, such as topic of conversation and/or category of similarity of the content between the users.
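As a hedged illustration of the synchronization just described, the sketch below shows how a navigation event by one user might be forwarded to the peer so that the peer is moved either to the same media item or only kept within the same category; the transport function send_to_peer is a placeholder, not an interface taken from the disclosure.

    # Hypothetical sketch of browsing synchronization: a navigation event by one
    # user is forwarded to the peer, which either jumps to the same media item
    # (full synchronization) or is only kept within the same category
    # (topic-level synchronization). send_to_peer is a placeholder transport.
    from dataclasses import dataclass

    @dataclass
    class NavEvent:
        item_id: str
        category: str

    def send_to_peer(event: NavEvent) -> None:
        # Placeholder: would forward the event over the session's data channel.
        print(f"-> peer: item '{event.item_id}', category '{event.category}'")

    def on_local_navigation(item_id: str, category: str, mode: str = "item") -> None:
        if mode == "item":            # peer is moved to the same media item
            send_to_peer(NavEvent(item_id, category))
        else:                         # peer is only kept within the same category
            send_to_peer(NavEvent("", category))

    on_local_navigation("photo_123", "Trip content", mode="category")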
The present invention, in some embodiments thereof, relates to a method and/or system for interaction based media items and, more particularly, but not exclusively, to a method and/or system that adjusts a sequence of media items according to an interaction communication session between users.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
Referring now to the drawings, FIG. 1 illustrates a communication enhancing system 100 for interaction by at least two users 102A-B, in accordance with exemplary embodiments of the invention.
In exemplary embodiments, users 102A-B interactively communicate with each other using one or more communication devices 104A-B, for example, smartphones, feature phones, cellphones, landlines, tablet computers, laptop computers, personal computers, television set, game console, and/or a combination thereof.
In exemplary embodiments, devices 104A-B comprise and/or are in electrical communication with one or more transceivers. Optionally, devices 104A-B communicate with each other through one or more communication links (wired and/or wireless) set up through the one or more transceivers, for example, through an internet, a cellular network, short range communication (e.g., Bluetooth), Ethernet, and/or other suitable channels.
In exemplary embodiments, devices 104A-B comprise a memory 116A-B and/or are in electrical communication with other storage devices (e.g., remote server 114). In exemplary embodiments, memory 116A-B and/or other storage devices store one or more software modules for running a communication enhancement session. Details of the method and/or software modules of the communication enhancement session are described in more detail below, for example, with reference to figure 2. As described in more detail below, the software enhancement session may include one or more modules that perform the following functions: preparation of the sequence before the call, adjustment of the sequence during the call and/or continuing the interaction with the sequence even after the peer to peer session (e.g., voice call, IM session) has been terminated.
In exemplary embodiments, the interactive communication session between users 102A-B is a peer to peer interactive communication session, for example, a voice phone call, a call using a mobile phone with a screen, an instant message (IM) communication session, a video call and/or a combination of the listed methods or other methods.
In exemplary embodiments, both or all users actively interact with each other by running the communication enhancement software. Alternatively, only one or a subset of the interacting parties uses the communication enhancing software. Optionally, devices 104A-B comprise a visual display unit (e.g., high resolution screen).
In exemplary embodiments, users 102A-B prepare in advance a sequence of media items, for example, from one or more devices and/or networks in electronic communication with devices 104A-B, for example, audio recorder 104 (e.g., MP3 music), still and/or video camera 106 (e.g., pictures, videos), computer 108 (e.g., school project files), private network 110 (e.g., facebook links), public network 112 (e.g., internet hyperlinks from public sites).
In exemplary embodiments, the sequences are swapped between the users. For example, user 102A provides sequence 114A to user 102B and/or user 102B provides sequence 114B to user 102A. Alternatively or additionally user 102A-B uses his or her automatically generated sequence. For example, user 102A uses sequence 114A and/or user 102B uses sequence 114B.
In exemplary embodiments, the sequences are adjusted in response to the interaction between the users, for example, in response to the flow of the call and/or to actions taken by users during the call. Potentially, adjusting the sequences enhances the phone call of the users.
Additional details of system 100 will be described below with reference to the other figures.
Figure 2 is a flowchart of a computerized method of automatically adapting a prepared sequence of media items between at least two interacting users, in accordance with exemplary embodiments of the invention. In exemplary embodiments, the method is implemented by one or more software modules, for example, stored on communication devices.
Referring back to FIG. 2, optionally, at 202, an interaction type is determined. Potentially, the interaction type is used as an initial starting point, for example, for content labeling and/or content organization.
Optionally, the interaction type is associated with contacts, for example, from a contact list of a user. Optionally, one or more contacts are identified from the contact list, for example, automatically by software and/or manually by the user. Alternatively or additionally, the interaction type is associated with the number of interacting users, for example, a single user to a single user, a single user to a group of users, a group of users to a group of users, and/or a network of users. Alternatively or additionally, the interaction type is associated with the communication hardware, for example, a smartphone with high resolution screen, and/or a laptop computer running communication software. Alternatively or additionally, the interaction type is associated with the expected nature of the interaction, for example, business, commercial, social, and/or family. Alternatively or additionally, the interaction type is associated with the expected length of the interaction, for example, a single session or multiple sessions. Alternatively or additionally, the interaction type is associated with the expected mechanism of interaction, for example, phone call, messaging, video calling. Alternatively or additionally, the interaction type is associated with the expected communication context and/or carrier company, for example, an international call, a free call, a VoIP call, a free SMS, an AT&T® call, and/or a Sprint® call.
Optionally, the interaction type is an electronically stored data structure, for example, a table, a file, a record, an object, an array and/or other suitable structures.
Optionally, the interaction type is determined automatically, for example, by a computer algorithm. Alternatively, the interaction type is determined manually, for example, by the user. Alternatively, the interaction type is determined partly automatically and partly manually, for example, for a group of users, each one manually selecting the interaction type, with an algorithm automatically determining the interaction.
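As one hedged illustration of such an electronically stored interaction type (a sketch under assumed field names, not a definitive structure), the record below collects the example attributes listed above.

    # Hypothetical sketch of an electronically stored interaction type record;
    # the fields mirror the examples given above and the names are illustrative.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class InteractionType:
        contacts: List[str]       # contacts from the user's contact list
        num_users: int            # e.g., 2 for a one-to-one call, more for a conference
        hardware: str             # e.g., "smartphone", "laptop"
        nature: str               # e.g., "business", "social", "family"
        expected_sessions: str    # e.g., "single" or "multiple"
        mechanism: str            # e.g., "voice call", "IM", "video call"
        carrier_context: str      # e.g., "VoIP", "international call"

    call_type = InteractionType(
        contacts=["Alice"], num_users=2, hardware="smartphone",
        nature="social", expected_sessions="single",
        mechanism="voice call", carrier_context="VoIP")
    print(call_type.nature)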
Referring back to FIG. 2, optionally, at 204, electronic media items are selected. Optionally, the electronic media items are selected for display to one or more users, for example, as will be described below with reference to box 216. Alternatively, not all the selected items are displayed, for example, some may not be selected, some may be filtered out and some may not be browsed by the user.
Optionally, a plurality of media items are identified for each of the plurality of contacts, for example, contacts from a list as described with reference to 202. Optionally, the plurality of media items are relevant to the plurality of contacts. For example, relevance of the media items to the contact is described in more detail below in 206. The media items may be different for different contacts, may overlap between contacts, or may be similar for one or more contacts.
Electronic media items may be, for example, images, text files, computer application files, videos, sound effects, animations, games, and/or links to any of the mentioned items and/or combinations thereof.
Media items may contain the full data, partial data and/or a link to the data. Media items may reside at any location, for example, social networks (e.g., facebook, twitter), private networks, the internet, the smartphone used to make the call, and/or servers of generic content providers (e.g., CNN, YouTube).
Media items may exist at the time of the selection, and/or may not yet exist (e.g., feeds from sites).
Media items may be selected manually, for example, by the user, for example, by an electronic tag, by copying of the link to the content, and/or by placing the data (or copy thereof) into an electronic folder. Alternatively or additionally, media items may be selected automatically, for example, from website feeds.
Referring back to FIG. 2, at 206, the media item (e.g., selected at 204) is labeled, in accordance with exemplary embodiments of the invention. Potentially, the media items are labeled in a way that helps to determine the most relevant, suitable and/or interesting items for discussion during the interactive communication session, for example, relevance to a specific contact. Optionally, the labeling is used to sort the items according to a priority level, as will be discussed below with reference to box 212.
Optionally, the media item is labeled with one or more of, a quality score, an interaction fit score, a context score, the interaction type (e.g., as in 202) and/or other suitable labels.
Optionally, labeling is performed automatically, for example, by a software algorithm. Alternatively or additionally, labeling is performed manually, for example, by the user.
Labeling of the media item will be discussed with reference to FIG. 3, which is an example of a labeling data structure (e.g., a table), in accordance with exemplary embodiments of the invention. For example, "Item Type" refers to the type of file, for example, image, text, video, music. For example, "File Name" refers to the name of the stored media item. For example, "Location" refers to where the media is stored.
Optionally, the media item is labeled with a "Quality Score", for example, a numerical value on a scale. Optionally, the quality score is calculated by a function of one or more associated input variables.
Optionally, the quality score is associated with the date of creation and/or last update of the item ("Date"). Alternatively or additionally, the quality score is associated with an external rating, for example, number of positive approvals (e.g., user selected stars), and/or number of comments by other people.
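A minimal sketch, assuming an arbitrary 0-10 scale and arbitrary equal weights, of a quality score computed as a function of the item's date and an external rating; neither the weights nor the scale are specified by the disclosure.

    # Hypothetical sketch: quality score computed from the item's age and an
    # external rating (stars and number of comments). The 0-10 scale and the
    # equal weighting are arbitrary assumptions.
    from datetime import date
    from typing import Optional

    def quality_score(created: date, stars: float, comments: int,
                      today: Optional[date] = None) -> float:
        today = today or date.today()
        age_days = (today - created).days
        freshness = max(0.0, 1.0 - age_days / 365.0)   # newer items score higher
        popularity = min(1.0, (stars / 5.0 + min(comments, 50) / 50.0) / 2.0)
        return round(10.0 * (0.5 * freshness + 0.5 * popularity), 1)

    print(quality_score(date(2012, 5, 1), stars=4.5, comments=12))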
Alternatively or additionally, the media item is labeled with an "Interaction Fit Score". Optionally, the Interaction Fit Score is associated with the fit of the content form type to the interaction type, for example, ease of reading or processing of the item by a human using the provided interaction. Optionally, the fit is determined according to the interaction type, for example, as determined in 202.
Optionally, a relative value (e.g., numerical "Score" and/or descriptive value) is determined for the interaction fit score, for example, by a software algorithm, for example, by a table associating "Item description" and the relative score value. For example, higher resolution images (e.g., easier to view) have relatively higher values than low resolution images. For example, for a phone call using a smartphone, an image has a relatively higher value than text (e.g., easier to understand). For example, text with less words (e.g., easier to read) has a relatively higher value than text with more words. For example, visual content (e.g., photos) receive a 'high' rating, short text items receive a 'medium' rating and/or long text items receive a 'low' rating.
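The relative interaction fit value could, for example, be looked up from a small table relating the item's form to the interaction type, as in the hedged sketch below; the table keys and values are illustrative only.

    # Hypothetical sketch: the interaction fit score looked up from a small table
    # relating the item's form to the interaction type; keys and values ('high',
    # 'medium', 'low') are illustrative only.
    FIT_TABLE = {
        ("smartphone_call", "photo"): "high",
        ("smartphone_call", "short_text"): "medium",
        ("smartphone_call", "long_text"): "low",
        ("laptop_im", "long_text"): "medium",
    }

    def interaction_fit(interaction_type: str, item_form: str) -> str:
        return FIT_TABLE.get((interaction_type, item_form), "low")

    print(interaction_fit("smartphone_call", "photo"))   # -> 'high'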
Alternatively or additionally, the media item is labeled with a "Context Score". Optionally, the context score is associated with the fit of the electronic item to one or both parties. Optionally, the context score is comprised of one or more sub-labels, for example, "How Affiliated", "Relevance to Other Party", "Relationship Marking", "Timing Relevance", and/or other suitable labels.
Optionally, "How Affiliated" is associated with the affiliation of the content of the item and/or the creator of the content to the user(s) who will be receiving the content. Alternatively or additionally, "How Affiliated" is associated with the affiliation of the content to the user selecting the content. Some examples of methods (automatic and/or manual) of determining affiliation include; tags attached to the content that link the content to specific individuals (e.g., author), recognition of key features in the context (e.g., names in text, faces in images, email correspondence associated with a business context).
Alternatively or additionally, the media item is labeled with a "Relevance to Other Party". Optionally, the "Relevance to Other Party" is associated with the degree of relevance to one or more of the users in the interaction. Alternatively or additionally, the "Relevance to Other Party" label comprises "Directly relevant", "Indirectly relevant", "Generally relevant" or other suitable labels. Optionally, "Directly relevant" is associated with the user, for example, a picture showing the user. Optionally, "Indirectly relevant" is associated with a feature not directly common to the users, for example, a picture of a family member of the user. Optionally, "Generally relevant" is associated with a property of the users, for example, demographics, hobbies and/or interests. Optionally, "Limited relevant" is used as a default label if other labels are not suitable. Alternatively or additionally, a numeric scale of relevancy scores is used, for example, instead of discrete labels.
Alternatively or additionally, the media item is labeled with a "Relationship Marking". Optionally, the "Relationship Marking" is associated with the affiliation between the content and the users. For example:
Media items created, tagged or affiliated with the other user(s) are labeled as "Other person's updates".
Media items created, tagged or affiliated with the current user are labeled as "My updates" and/or "My stuff.
Media items created, tagged or affiliated with others who are direct mutual friends of at least both users are labeled as "Our friends".
Media items created, tagged or affiliated with mutual interests of at least both users are labeled as "Our interests".
Media items created, tagged or affiliated with items identified with others who are shared in common between the at least both users (e.g., acquaintances, friends, field of interest, group, event), or who are in common with some users of a group, are labeled as "Shared updates", "Shared friends", "Shared interests", "Shared groups", "Shared events", and/or other appropriate labels. Items not falling into the above categories, but that are related to general demographics of at least both users (e.g., age, sex, location, income) are labeled as "Generally relevant to similar individuals".
Alternatively or additionally, the media item is labeled with a "Timing Relevance" label. Optionally, the "Timing Relevance" label is associated with one or more particular events that are currently occurring and/or that have recently occurred, and/or to the previous sessions. For example, on a person's birthday, a few days before, and/or a few days after the birthday, an item relating to the birthday will be marked with the Timing relevance. For example, items related to a game played between the parties in the previous session will be marked with a "high" timing relevance. For example, on Valentine's Day, items relating to "love" and/or to other Valentine's Day events will be marked with the "high" timing relevance.
Optionally, the context score labels are determined according to a user profile, which includes, for example, a list of friends, a list of interests, a list of activities, and/or demographics; other suitable methods can also be used to determine the context score.
Optionally, items are labeled with a single label. Alternatively, at least some items are labeled with two or more labels.
It should be understood that the labels described are only an example, and that other suitable labels may be used. For example, labels may change depending on the content, the interaction and/or the users.
Referring back to FIG. 2, optionally, at 208, one or more media items are filtered. Optionally, filtering is performed according to a threshold, with items below (or above) the threshold being removed. Potentially, filtering media items removes the least relevant and/or interesting items.
Optionally, filtering is performed according to a quality score threshold. Optionally, items having a quality score below the quality score threshold are filtered out. Alternatively or additionally, a time threshold is used to filter out items that were created or updated before the set time threshold value. Alternatively or additionally, an external threshold filter is used to filter out items that have relatively low external ratings, for example, few views, few comments and/or few stars. Alternatively or additionally, an interaction fit score filter is used to filter out items that have relatively low interaction fit ratings. For example, long text is removed. Alternatively or additionally, filters based on other labels, for example, those discussed with reference to "Label Content" 206 are used to filter out items, for example, items that have relatively low score labels.
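A hedged sketch of such threshold filtering is shown below, using assumed field names and threshold values: items older than a cut-off date, below a quality threshold, or with a low interaction fit are removed.

    # Hypothetical sketch of threshold filtering: items older than a cut-off
    # date, below a quality threshold, or with a low interaction fit are removed.
    # Field names and threshold values are assumptions.
    from dataclasses import dataclass
    from datetime import date
    from typing import List

    @dataclass
    class LabeledItem:
        name: str
        quality: float        # e.g., 0-10 quality score
        created: date
        fit: str              # 'high' / 'medium' / 'low' interaction fit

    def filter_items(items: List[LabeledItem], min_quality: float,
                     not_before: date, min_fit: str = "medium") -> List[LabeledItem]:
        rank = {"low": 0, "medium": 1, "high": 2}
        return [i for i in items
                if i.quality >= min_quality
                and i.created >= not_before
                and rank[i.fit] >= rank[min_fit]]

    items = [LabeledItem("old_essay.txt", 7.0, date(2010, 1, 1), "low"),
             LabeledItem("beach.jpg", 8.5, date(2012, 6, 1), "high")]
    print([i.name for i in filter_items(items, 5.0, date(2012, 1, 1))])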
Referring back to FIG. 2, optionally, at 210, one or more media items are edited. Potentially, editing the items improves the content interaction, for example, by removing sensitive data, by formatting media items so that they are easier to read and/or see.
Optionally, media items are formatted to increase the interaction fit score. For example, long text items are truncated and/or broken into smaller pieces. Potentially, the shorter text item has a relatively higher interaction fit score than the previously longer text item.
Optionally, media items are formatted to improve the fit of the item to the determined interaction. For example, if the interaction is 'business', then personal content and/or other non-business content is removed.
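As a hedged illustration of editing for fit, the sketch below truncates a long text item into shorter pieces; the character limit is an arbitrary assumption.

    # Hypothetical sketch of editing for fit: a long text item is truncated into
    # shorter pieces that are easier to read during a call. The character limit
    # is an arbitrary assumption.
    from typing import List

    def truncate_text(text: str, limit: int = 140) -> List[str]:
        pieces: List[str] = []
        current = ""
        for word in text.split():
            if current and len(current) + len(word) > limit:
                pieces.append(current.strip())
                current = ""
            current += word + " "
        if current.strip():
            pieces.append(current.strip())
        return pieces

    print(truncate_text("A very long status update " * 10, limit=60))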
Referring back to FIG. 2, at 212, the media items are organized and/or ordered, in accordance with exemplary embodiments of the invention. Optionally, the media items are relevant to one or more users, for example, to a contact. Reference will be made to FIGs. 4A-4D, which are schematic illustrations showing different ways of organizing and/or ordering the electronic content. Optionally, the content is ordered in a sequence which may be browsed by users.
In exemplary embodiments, a sequence is automatically generated from at least some of the respective plurality of media items which are relevant to one or more other users, for example, the first contact. Optionally, the sequence is suitable for presentation in a controlled manner on a suitable communication device, for example, as described herein. Optionally, the user and the first contact actively control the presentation of the sequence during the communication session, for example, the navigation of the sequence (e.g., as in 218), the synchronization of the sequence (e.g., as in 222), sending one or more media items to one another, the order of the sequence (e.g., as in 224) and/or other actions that are allowed on the sequence, for example, editing, addition and/or removal of media items (e.g., as in 220), and/or changes to the relevance of media items (e.g., relevance as described in 202, 204 and/or 206).
In exemplary embodiments, the sequence is prepared in advance of the interactive communication session, for example, before the session is initialized (e.g., as in 214). For example, the media items are selected in advance of the session, and the sequence is prepared, for example, using the methods described in boxes 206, 208, 210 and/or 212. Alternatively or additionally, the media items are selected in advance of the session, and the sequence is labeled (e.g., as in 206), filtered (e.g., as in 208), edited (e.g., as in 210), ordered (e.g., as in 212) and/or loaded during the session. Alternatively or additionally, a part of the sequence is prepared in advance, and optionally another part is prepared during the communication session. For example, the part prepared in advance is prepared using methods 204, 206, 208, 210 and/or 212. For example, the part prepared during the session is prepared using methods 204, 206, 208, 210 and/or 212, and optionally merged with the sequence prepared in advance, for example, the media items are ordered within the sequence prepared in advance.
FIG. 4A is a schematic illustration of a linear sequence 400 of media items 402A-F. In such a model, media items are logically linked as a chain, for example, 402B follows 402A, and 402C follows 402B.
Optionally, linear sequence 400 is generated by sorting the labeled media items, for example, in order from highest to lowest. Optionally, sorting is performed according to the quality score, the interaction fit score, the relevance marking, other suitable scores and/or markings, and/or an aggregate numeric or discrete function of the parameters described with reference to figure 3 and/or other suitable parameters. For example, an average of the ratings of the parameters.
Optionally, the ordering of items is based on creating the linear sequence with a mix of scores and/or item types, for example, based on a set of rules so as to create an interesting flow of items. For example, a relatively high quality photo item is inserted between 2 relatively lower quality text items. Optionally, the rule-set is monitored and/or updated periodically, for example, manually by a human and/or automatically by software, for example, based on new available content and/or relevant news. For example, content relating to a TV show may be made available for users, and labeled with a high priority, for example, if the users reside in a relevant area, and the show becomes very popular in that area with a defined range of time.
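A minimal sketch of such ordering follows, assuming a simple average as the aggregate score and a simple alternation rule for interleaving photos between text items (both are assumptions, not requirements of the disclosure).

    # Hypothetical sketch of ordering: items are ranked by a simple average of
    # their scores and then interleaved by type so that, for example, a photo is
    # placed between two text items. Both the aggregate and the interleaving
    # rule are assumptions, not requirements of the disclosure.
    from typing import List, Tuple

    Item = Tuple[str, str, float, float]   # (name, type, quality score, context score)

    def order_sequence(items: List[Item]) -> List[Item]:
        ranked = sorted(items, key=lambda i: (i[2] + i[3]) / 2.0, reverse=True)
        photos = [i for i in ranked if i[1] == "photo"]
        texts = [i for i in ranked if i[1] != "photo"]
        sequence: List[Item] = []
        while photos or texts:
            if texts:
                sequence.append(texts.pop(0))
            if photos:
                sequence.append(photos.pop(0))   # photo interleaved between text items
        return sequence

    items = [("status.txt", "text", 5, 6), ("beach.jpg", "photo", 9, 8),
             ("essay.txt", "text", 4, 7)]
    print([name for name, *_ in order_sequence(items)])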
FIG. 4B is a schematic illustration of a branching sequence 404 of electronic content items. In such a model, content items are sorted into categories, for example, 406 A, 406B and 406C represent the start of three categories. Optionally, within each category, the items are sorted in the linear sequence.
Optionally, branching sequence 404 is generated by categorizing the labeled electronic data items, for example, according to the "Relevance" category. For example, all directly relevant items are placed in a first sequence, all indirectly relevant items are placed in a second sequence, and/or all generally relevant items are placed in a third sequence. Optionally, the items are sorted (e.g., as in FIG. 4A) within each category.
FIG. 4C is a schematic illustration of an organized and ordered sequence 408 of electronic content items. Optionally, sequence 408 is a linear sequence. In such a model, categories 410A, 410B and 410C are sorted in a linear sequence. Optionally, within each category 410A-C, individual items are sorted, for example, as described above.
Optionally, organized and/or ordered sequence 408 is generated from an organized branched sequence. For example, categories 406A-406C of branch sequence 404 are sorted into the linear sequence. For example, the sequence is predetermined, for example, "Directly relevant", followed by "Indirectly relevant", followed by "Generally relevant" and followed by "Limited relevance". For example, "Timely relevant" items (e.g., friend's birthday) are followed by "Shared Updates" (e.g., shared photos), followed by "Other person's updates" (e.g., recent status updates by the other person), followed by "Our interests" (e.g., recent football statistics of our liked team), followed by "My updates" (e.g., the recent YouTube videos that I watched an hour ago and/or recent photos I took with my camera).
FIG. 4D is a schematic illustration of an organized and/or ordered tree sequence 412 of electronic content items. In such a model, the items are organized into a multiple level, branching tree structure. Optionally, each higher level (or branching point) represents a higher level category. For example, first branching points 414A-414C represent different users (or groups thereof) that will receive the sequence. Optionally, items and/or categories within the higher level are sorted, for example as described above. Optionally, organized and/or ordered sequence 408 is generated from tree 412. For example, sequences 410A-C respectively represent the sub-trees of roots 414A-C, for example, users (or groups thereof) A-C. Alternatively, sequence 408 is a sorted and/or ordered subset of items from tree 412. For example, sequence 408 is a sorted and/or ordered sequence of items under branch root 414A, for example, for user A. Optionally, in such a case, a plurality of sequences 408 are formed, for example, three sequences for three users, corresponding to three sub-trees 414A-C.
Optionally, there is some repetition within the tree, for example, the same data item may be marked for different users.
Potentially, different users are provided with different customized sequences. For example, the different users are provided with different content and/or different ordering of the content.
Referring back to FIG. 2, optionally, at 214, the interaction is initialized, in accordance with exemplary embodiments of the invention. Reference will be made to FIGs. 5A-5D, which are schematic diagrams of some examples of different ways in which the sequences of electronic content are initialized.
Optionally, as illustrated by FIG. 5A, sequences are swapped between a pair of users. In the following examples, user A may refer to the caller and user B may refer to the callee. As shown, sequence 502, prepared by user A (504) is provided to user B (508).
As shown, sequence 506, prepared by user B (508) is provided to user A (504).
Potentially, swapping of the sequences allows two users to share their content with each other so that each can view the content prepared by and/or for the other user. For example, each user can provide a sequence of content of their summer vacation to his or her friend.
Alternatively, as illustrated by FIG. 5B, one sequence is provided to the other user. As shown, sequence 510, prepared by and/or for user A (512) is provided to user B (514). Alternatively, FIG. 5B also illustrates sharing of sequence 510 by both user A (512) and user B (514). Additional details of browsing the sequence, either independently or together will be discussed, for example with reference to FIGs. 8A-8D. Potentially, sharing the sequence allows two users to interact with each other over common content. For example, one user can discuss content from an event with his or her friend.
Alternatively, as illustrated by FIG. 5C, sequences are merged together. As shown, sequence 520 is a merger of a sequence prepared by user A (522) and by user B (524). Alternatively, a portion of one sequence is merged with the entire other sequence. Alternatively, a portion of one sequence is merged with a portion of the other sequence. The merger of sequences may be performed, for example, by treating each content item in each sequence as an independent item, then running an organization and/or ordering algorithm (e.g., as described with reference to FIG. 4A-4D) to generate the merged and/or ordered sequence. Potentially, merging the sequences allows two users to interact with each other over the content prepared by both users. For example, both users can combine content from an event that both attended together. For example, a retail store may send its sequence for merging with the content sequence of the caller in advance and/or during the interaction, so that the caller may be aware of, and/or be able to interact with the shop's catalog, as the store and the caller interact over the phone.
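As a hedged illustration of the merger just described, the sketch below pools the items of two sequences, removes duplicates, and re-sorts the pooled items by their labels; the field names and the single 'score' field are assumptions.

    # Hypothetical sketch of merging two sequences: items from both users are
    # pooled, duplicates removed, and the pooled items re-sorted by their labels
    # (here a single assumed 'score' field) to form one merged sequence.
    from typing import Dict, List

    def merge_sequences(seq_a: List[Dict], seq_b: List[Dict]) -> List[Dict]:
        pooled: Dict[str, Dict] = {}
        for item in seq_a + seq_b:
            pooled.setdefault(item["name"], item)        # keep one copy per item
        return sorted(pooled.values(), key=lambda i: i["score"], reverse=True)

    seq_a = [{"name": "party.jpg", "score": 8}, {"name": "notes.txt", "score": 4}]
    seq_b = [{"name": "party.jpg", "score": 8}, {"name": "catalog.pdf", "score": 6}]
    print([i["name"] for i in merge_sequences(seq_a, seq_b)])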
Alternatively, as illustrated by FIG. 5D, sequences may be kept by the user and not provided to others. As shown, a sequence 530 prepared by and/or for user A (532) is kept by user A (532). As shown, a sequence 534 prepared by and/or for user B (536) is kept by user B (536). Potentially, each user prepares content for the interaction, but keeps the content confidential. For example, the user is a sales person that prepared content to discuss with a client, but does not want the client to have access to the materials. Alternatively or additionally, the user sends a part of the content sequence to the other user. For example, a sales person sends a prepared offering content item to the client.
Optionally, a user chooses to view and/or interact on content prepared for a different interaction than the one taking place. For example, a user browses and/or sends news and/or gossip relating to one family member while talking to another family member.
Optionally, the cases discussed with reference to FIGs. 5A-5D are extended to three or more users, or to groups of users, for example, as illustrated by FIGs. 6A-6C.
FIG. 6A illustrates the case of a user A (602) providing a sequence 608 to a user B (604), and a sequence 610 to a user C (606). Optionally, sequences 608 and 610 are each formed from common content, for example, the sequences share some content in common. Forming two or more customized sequences for two or more users or groups of users from a broader sequence (e.g., tree) has been previously discussed, for example, with reference to FIGs. 4C-4D. Potentially, each user receives only the content that is relevant for his or her interaction with the preparing user. For example, a CEO of an international company prepares content for facilities in different countries. Each facility will receive content containing common important material (e.g., global sales), but will not receive material relevant to facilities located in other countries (e.g., pictures from the yearly office party).
FIG. 6B illustrates the case of a user A (612) providing a first sequence 614 to a user B (616), and a second sequence 618 to a user C (620). Optionally, sequences 614 and 618 are different, for example, each having been separately prepared by user A (612). Potentially, each user receives only the sequence relevant to him or her, without access to the other sequence. For example, an arbitrator having a discussion with opposing sides prepares separate content sequences to make sure that each side does not receive confidential material.
FIG. 6C illustrates the case of four users interacting together. Optionally, each user A, B, C and D is able to interact with everybody else. For example, the four users A-D may be divided into three groups, user A, users B and C, and user D. User A and D may merge their content together. User A and users B-C may exchange media items. User D may prepare a sequence to provide to users B-C.
In exemplary embodiments, the interactive communication session between a first contact (e.g., from a contact list of a plurality of contacts) and the user is identified, for example, automatically by a software algorithm detecting a phone call. Optionally, the initial state of the interaction is marked, for example, by running of the application. Optionally, the initial state is automatically triggered, for example, by a phone call, a received SMS and/or a received content item. Alternatively or additionally, the initial state is marked when the user selects to view a content sequence in preparation for an interaction, and/or after an interaction.
Optionally, the initialization process is started even if contact has not yet been made with all users, for example, all the users have not yet joined the phone call.
Optionally, sequences are provided, received and/or merged during the initial stage.
Referring back to FIG. 2, optionally, at 216, one or more media items are displayed, in accordance with exemplary embodiments of the invention.
In exemplary embodiments, media items are displayed on a display unit of a communication device, for example, on the screen of a smartphone, feature phone, tablet computer, personal computer, television, and/or game console.
In exemplary embodiments, the sequence is automatically presented to both the user and the first contact during the interactive communication session, for example, by software. Alternatively, the sequence is only presented to the first contact, or other selected contacts. Alternatively, the sequence is only presented to the user.
Referring back to FIG. 2, optionally, at 218, the sequence of media items is browsed. Optionally, browsing is performed manually by the user of the communication device displaying the content. Alternatively or additionally, browsing is performed manually by another user, optionally interacting with the user of the device. Alternatively or additionally, browsing is performed automatically by software, for example, according to an automatic analysis of the call.
Optionally, the browsing is performed in the same order as the prepared sequence, for example, from relatively higher ranked items to relatively lower ranked items. Alternatively or additionally, browsing is performed in reverse order. Alternatively or additionally, browsing is performed by jumping between content items. For example, if software automatically detects a topic of conversation, the software automatically browses the media items related to the topic of conversation.
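Purely as a non-limiting illustration, the sketch below models rank-ordered browsing and topic-driven jumping; the MediaItem fields, the rank values and the topic-matching scheme are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    name: str
    rank: float          # higher rank = earlier in the prepared sequence
    topics: frozenset    # labels describing what the item relates to

def browse_order(sequence, reverse=False):
    """Yield items from relatively higher-ranked to lower-ranked (or reversed)."""
    ordered = sorted(sequence, key=lambda item: item.rank, reverse=not reverse)
    for item in ordered:
        yield item

def jump_to_topic(sequence, detected_topic):
    """Return the indices of items matching an automatically detected topic."""
    return [i for i, item in enumerate(sequence) if detected_topic in item.topics]

seq = [MediaItem("party photo", 0.4, frozenset({"party"})),
       MediaItem("sales chart", 0.9, frozenset({"sales"})),
       MediaItem("trip video", 0.7, frozenset({"trip"}))]
print([item.name for item in browse_order(seq)])  # ['sales chart', 'trip video', 'party photo']
print(jump_to_topic(seq, "trip"))                 # [2]
```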
Browsing may be performed, for example, by a web browser, as a slide show, using a touch-screen scroll function, or by other suitable methods. Optionally, browsing is performed according to a timer, and/or manually by the user.
Reference is made to FIG. 9, which is a schematic illustration of a mobile communication device, for example, a smartphone. Optionally, browsing 910 is performed by placing a finger on a touch-screen and manually scrolling through the content.
Referring back to FIG. 2, at 220, an interaction state is determined, in accordance with exemplary embodiments of the invention. Potentially, the interaction state is used to dynamically adjust the sequence according to the interaction between the users.
In exemplary embodiments, the interaction state is associated with the interaction of the users with the sequence, and optionally with each other. Optionally, the interaction state is associated with changes to the sequence and/or to media items of the sequence, for example, editing the content, deleting the content and/or adding new content. Alternatively or additionally, the interaction state is associated with changes to the interaction between the users themselves, for example, a change in the topic of discussion or in the level of interest in the topic of discussion.
The interaction state may be implemented as any suitable software method, for example, calculation of a numerical value, a state table, or other suitable algorithms.
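A minimal sketch of one such software representation is shown below; the particular fields (topic, words exchanged, silence, edits) are illustrative assumptions, not a definition of the interaction state.

```python
from dataclasses import dataclass, field
import time

@dataclass
class InteractionState:
    topic: str = "unknown"            # currently detected topic of discussion
    topic_start: float = field(default_factory=time.time)
    words_exchanged: int = 0          # proxy for the rate of interaction
    silence_seconds: float = 0.0      # accumulated silence during the call
    edits: int = 0                    # number of user actions on media items

    def time_on_topic(self):
        """Seconds spent on the current topic so far."""
        return time.time() - self.topic_start

    def change_topic(self, new_topic):
        """Switch to a newly detected topic and restart its timer."""
        self.topic = new_topic
        self.topic_start = time.time()
```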
FIGs. 7A-7C are simplified schematics showing some possible outcomes of users interacting with the content sequence during the phone call. In exemplary embodiments, the interaction occurs by one user sending instructions to a second user by a single click. In exemplary embodiments, one or more user actions may be executed on media items within the sequence, for example, editing an item, adding an item, removing an item.
Optionally, as shown in FIG. 7A, a media item 702 is edited by user A, forming an edited media item 704. In this scenario, the sequences of users A and/or B have been provided by one or more methods, for example, swapping of prepared sequences, each user preparing his/her own respective sequence, and/or merging of self-prepared and swapped sequences. Examples of edited items include: pictures that were drawn on by the user, edited text, and/or comments added to content.
Optionally, edited media item 704 is inserted into the sequence being browsed by user B. Optionally, edited media item 704 is inserted into the sequence at user B's current location, for example, without regard to the rest of the content of the stream. Potentially, user B becomes almost immediately aware of edited content item 704. Alternatively, edited media item 704 is inserted into the sequence according to the labeling of item 704 and/or in relation to the rest of the sorted sequence, for example, further down the sequence. Potentially, user B is provided with a smoother flow of content, according to the selected ordering of the sequence. Optionally, edited media item 704 replaces media item 702 in the sequence being browsed by user A. Alternatively, edited media item 704 is re-labeled and the sequence is re-formed by re-sorting the list; the original item 702 is either kept or removed.
Optionally, if user B edits updated item 704 again, and resends it to user A, the new item appears at the same location as 702 and/or 704 originally did. Potentially, this enables users to repeatedly interact on content without breaking the sequence.
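One possible insertion policy is sketched below, assuming the sequence is represented as a simple list; the keep_original flag and the string items are hypothetical details introduced for the example.

```python
def insert_edited_item(sequence, position, edited_item, original=None, keep_original=False):
    """Insert an edited media item into a sequence.

    If the original item is still present it is either replaced in place
    (keep_original=False) or kept alongside the edited copy; otherwise the
    edited item is inserted at the given browsing position.
    """
    if original is not None and original in sequence:
        idx = sequence.index(original)
        if keep_original:
            sequence.insert(idx + 1, edited_item)
        else:
            sequence[idx] = edited_item
    else:
        sequence.insert(position, edited_item)
    return sequence

# A re-edited item replaces the previous edit, so repeated exchanges of the
# same item stay anchored at the same location in the sequence.
seq = ["a", "b", "c"]
insert_edited_item(seq, position=1, edited_item="b'", original="b")
insert_edited_item(seq, position=0, edited_item="b''", original="b'")
print(seq)  # ['a', "b''", 'c']
```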
Optionally, the edited content item is sent to the other user using a single click button 920 (FIG. 9). Potentially, the single click feature is possible because each sequence is tailored for a single interaction with a single user.
Optionally, as shown in FIG. 7B, a new media item 706 is inserted into the sequence being browsed by user A and/or by user B. New items may be provided by user A, user B, other users, and/or from the web (e.g., generated by automatic feeds).
Optionally, new item 706 is inserted into the current location in the sequence being browsed by the user. Alternatively, new item 706 is labeled, and the sequence is re-sorted, with the position of new item 706 within the sequence determined according to the new sorting.
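The label-based alternative may be sketched as follows, assuming parallel lists of items and label values kept in descending label order; the concrete items and values are illustrative only.

```python
import bisect

def insert_by_label(sequence, labels, new_item, new_label):
    """Insert a new item so the sequence stays sorted by descending label value.

    `sequence` and `labels` are parallel lists; higher labels appear earlier.
    """
    # bisect works on ascending keys, so negate the labels for descending order.
    keys = [-label for label in labels]
    idx = bisect.bisect_right(keys, -new_label)
    sequence.insert(idx, new_item)
    labels.insert(idx, new_label)
    return idx

seq, labels = ["sales chart", "trip video", "party photo"], [0.9, 0.7, 0.4]
insert_by_label(seq, labels, "budget table", 0.8)
print(seq)  # ['sales chart', 'budget table', 'trip video', 'party photo']
```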
Optionally, as shown in FIG. 7C, media item 708 is deleted. Optionally, deletion is performed by the browsing user. Alternatively or additionally, deletion is performed by the user that prepared the sequence.
Optionally, the content items on opposite ends of deleted item 708 are joined together (e.g., assuming the sequence is in the ordered state).
In some embodiments, the interaction state is determined according to the content being discussed. Optionally, the topic of discussion is detected, for example, by software analyzing the labeling of the browsed content for similarity, by software scanning instant message communication for key words, and/or by voice recognition software detecting key words. Optionally, media related to the topic of discussion is relabeled with relatively higher values. Alternatively or additionally, media not being discussed is removed. For example, if content is skipped over and/or not discussed much relative to other content, the minimally discussed or undiscussed media is removed from the sequence and/or relabeled with relatively lower values. Optionally, automatic learning algorithms are applied to provide relatively lower values or relatively higher values for future sequences, for example, based on the above feedback criteria. In some embodiments, the interaction state is determined according to the amount of time spent discussing the topic. For example, topics with more time spent on them are relabeled with a higher value. Optionally, the interaction state is determined according to the total amount of time spent interacting with the user. For example, in long phone calls, content items that are less news oriented and/or more interaction oriented (e.g., games) are offered to the users.
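A minimal sketch of such topic-driven re-labeling and pruning is given below; the boost, penalty and drop thresholds are arbitrary placeholder values, not parameters taught by the method.

```python
def adjust_labels_by_topic(items, labels, discussed_topic,
                           boost=0.2, penalty=0.1, drop_below=0.05):
    """Raise labels of items matching the detected topic, lower the rest,
    and drop items whose label falls below a threshold.

    `items` is a list of (name, topics) pairs; `labels` is the parallel
    list of label values.  All numeric values here are illustrative.
    """
    kept_items, kept_labels = [], []
    for (name, topics), label in zip(items, labels):
        label = label + boost if discussed_topic in topics else label - penalty
        if label >= drop_below:
            kept_items.append((name, topics))
            kept_labels.append(label)
    return kept_items, kept_labels

items = [("cake photo", {"birthday"}), ("hotel map", {"beach"})]
labels = [0.5, 0.1]
print(adjust_labels_by_topic(items, labels, "birthday"))
# ([('cake photo', {'birthday'})], [0.7])  -- the undiscussed item dropped out
```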
In some embodiments, the interaction state is determined according to the rate of interaction during the content. For example, topics in which many more words are exchanged are relabeled with a higher value relative to topics in which little is being said.
Optionally, upon detection of increased silent times during a phone call, "ice-breaking" content items are offered to the user(s).
In some embodiments, the interaction state is determined according to the topic discussed earlier in the conversation. For example, topics discussed at the start of the interaction receive relatively higher ratings than topics discussed towards the end of the conversation.
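Tying these signals together purely for illustration, the toy function below maps coarse conversation measurements (silence, call length, interaction rate) to a content category; every threshold and category name is an assumption introduced for the example.

```python
def suggest_content_mode(words_per_minute, silence_seconds, call_minutes):
    """Map coarse conversation signals to a content category.

    The thresholds are arbitrary placeholders; a real system might tune or
    learn them from feedback.
    """
    if silence_seconds > 20:
        return "ice-breaker"      # offer light, engaging items during long silences
    if call_minutes > 30:
        return "interactive"      # e.g., games for long calls
    if words_per_minute > 120:
        return "topic-focused"    # lively discussion: keep feeding the current topic
    return "news"

print(suggest_content_mode(words_per_minute=80, silence_seconds=25, call_minutes=5))
# ice-breaker
```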
Referring back to FIG. 2, optionally, at 222, media between the users is synchronized. Optionally, the browsing of the content in the sequence between the users is synchronized. Alternatively, the browsing of the content is asynchronous. FIGs. 8A- 8D are simplified schematics showing some possible ways of synchronizing sequences between the users, in accordance with exemplary embodiments of the invention.
FIG. 8A is a simplified schematic showing a user A (802) browsing a sequence 804 provided by a user B (806). User B (806) is browsing a sequence 808, provided by user A (802). Optionally, each user A (802) and user B (806) browse their respective sequences in a manner that is independent of each other (e.g., asynchronous browsing). An arrow 810 shows the browsing position in sequence 804 as being viewed by user A (802). An arrow 812 shows the browsing position in sequence 808 as being viewed by user B (806). Potentially, each user browses their sequence at their own pace.
FIG. 8B is a simplified schematic showing a user A (820) and a user B (822) browsing a shared sequence 824. Optionally, each user A (820) and user B (822) browse shared sequence 824 in a manner that is independent of each other. An arrow 826 shows the browsing position in sequence 824 of user A (820). An arrow 828 shows the browsing position in sequence 824 of user B (822). Potentially, each user browses the shared sequence at their own pace.
FIG. 8C is a simplified schematic showing a user A (830) and a user B (832) browsing a shared sequence 834. Optionally, the browsing of shared sequence 834 by user A (830) and user B (832) is synchronized. An arrow 836 shows the browsing position of user A (830), and an arrow 838 shows the browsing position of user B (832). Both arrows 836 and 838 point to the same content item in sequence 834. Optionally, the browsing is synchronized so that the same media item is displayed to both users at about the same time. Optionally, browsing is synchronized so that if one user navigates to a different media item, the other user is automatically navigated to the same content item. Potentially, the synchronization allows both users to continuously browse the media items in the sequence together.
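A toy sketch of this fully synchronized mode is given below; the class name, the single shared-position model and the clamping behaviour are assumptions made for the example.

```python
class SynchronizedBrowser:
    """Toy model of the synchronized mode of FIG. 8C: when either user
    navigates, the shared position is updated for both of them."""

    def __init__(self, shared_sequence):
        self.sequence = shared_sequence
        self.position = 0

    def navigate(self, user, new_position):
        # Clamp to the sequence bounds and move both users together.
        self.position = max(0, min(new_position, len(self.sequence) - 1))
        return {"navigated_by": user, "both_now_viewing": self.sequence[self.position]}

browser = SynchronizedBrowser(["intro", "photos", "budget"])
print(browser.navigate("A", 2))  # both users are moved to 'budget'
```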
FIG. 8D is a simplified schematic showing a user A (840) and a user B (842) browsing their own respective sequences 844 and 846. Optionally, the sequences 844 and/or 846 are sub-organized into sequential categories. For example, sequences 844 and 846 are first comprised of 'Directly relevant content' 848, 852, followed by 'Indirectly relevant content' 850, 854. Alternatively, for example, sequences 844 and 846 are first comprised of 'Trip content', followed by 'Birthday content'. Optionally, browsing by categories is synchronized so that both users are browsing content within the same category. Optionally, browsing within the category is not synchronized, so that users browse independently within the category. For example, as shown, an arrow 856 shows the browsing position within category 850 by user A (840). An arrow 858 shows the browsing position within category 854 by user B (842). For example, even though categories 850 and 854 are the same, the content within the categories may differ, and the users may browse it independently. Potentially, synchronization by category allows both users to be synchronized to the overall topic, but provides freedom to browse the content within the topic independently. For example, both users may browse a clothing catalog, or different catalogues from different stores, and edit the items with "like" marks. Each user may browse at his/her own pace, while both users get to view each other's "likes" as they come across the same items. Alternatively or additionally, referring back to FIG. 4D, the electronic media items are stored in the form of a tree with branches. Optionally, the tree is traversed by a path according to the interaction state. For example, at a point which branches into different topics of discussion (e.g., birthday vs. beach vacation), the media is selected according to the topic of conversation as per the interaction state. Optionally, the tree is traversed forward, in reverse and/or in a loop. Optionally, the tree is re-organized according to the interaction state. For example, new branch points are selected according to the interaction state.
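One way such a branching tree and its topic-driven traversal might be represented is sketched below; the Branch class, the topic names and the item strings are hypothetical.

```python
class Branch:
    """A node in a content tree: media items plus named sub-branches."""
    def __init__(self, items=None, children=None):
        self.items = items or []
        self.children = children or {}   # topic name -> Branch

def traverse_by_topic(branch, topic_path):
    """Walk the tree, at each branch point following the child whose name
    matches the current topic of conversation, collecting items along the way."""
    collected = list(branch.items)
    for topic in topic_path:
        branch = branch.children.get(topic)
        if branch is None:
            break
        collected.extend(branch.items)
    return collected

tree = Branch(items=["greeting card"],
              children={"birthday": Branch(items=["cake photo"]),
                        "beach": Branch(items=["beach photo", "hotel map"])})
print(traverse_by_topic(tree, ["beach"]))  # ['greeting card', 'beach photo', 'hotel map']
```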
Optionally, the interaction type (e.g., as initially set in 202) is adjusted, for example, as conditions are changed during the interaction, for example, as users are added or removed from the conference call.
It should be noted that although the examples show two users, the cases are not limited to two users, but may extend to any number of interacting users and/or groups of users. Referring back to FIG. 2, at 224, the sequence is adjusted and/or updated, in accordance with exemplary embodiments of the invention. Optionally, the sequence is adjusted and/or updated according to the interaction state, for example, as determined in box 220. Potentially, adjusting the sequence according to the interaction provides for an enhanced and dynamic media based interaction between the users.
In exemplary embodiments, instructions from both the user and the first contact and/or other contacts are automatically controlled. Optionally, the instructions refer to the presentation of the sequence during the communication session. For example, addition of items, editing of items, removal of items, browsing of items, relabeling of items, or other presentation instructions.
Optionally, changed media items of the sequence (e.g., new item, edited item, deleted item) are relabeled (e.g., as in 206), re-filtered (e.g., as in 208), re-edited (e.g., as in 210) and/or re-organized (e.g., as in 212). Optionally, the sequence is re-sorted according to the interaction state. For example, a function maps the identified interaction state and the ordering of the content, for example according to context rating categories.
Optionally, changes in topic of conversation (e.g., according to the interaction state) trigger an adjustment in the media sequence, for example, to reflect the priority of the new topic. Optionally, the media is re-labeled (e.g., as in 206), re-filtered (e.g., as in 208), re-edited (e.g., as in 210) and/or re-organized (e.g., as in 212), reflecting the changes according to the interaction state. Potentially, the adjusted sequence dynamically follows the interaction between the users, providing the users with the media that is most relevant to their interaction (e.g., topic of conversation).
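As a non-limiting sketch, re-sorting according to the interaction state could be implemented as a function that maps a context rating category and the currently detected topic to an ordering key; the category names and scores below are assumptions introduced for the example.

```python
def resort_sequence(items, context_scores, state_topic):
    """Re-sort a sequence so items whose context category matches the
    current topic of the interaction state come first.

    `items` is a list of (name, category) pairs; `context_scores` maps a
    category to its base rating.  Both structures are illustrative.
    """
    def key(item):
        name, category = item
        base = context_scores.get(category, 0.0)
        bonus = 1.0 if category == state_topic else 0.0
        return -(base + bonus)       # negative: highest score first
    return sorted(items, key=key)

items = [("office party pics", "social"), ("Q2 sales", "sales"), ("roadmap", "planning")]
scores = {"social": 0.3, "sales": 0.8, "planning": 0.6}
print(resort_sequence(items, scores, state_topic="planning"))
# [('roadmap', 'planning'), ('Q2 sales', 'sales'), ('office party pics', 'social')]
```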
Optionally, the adjustment of the sequence is continued even after a part of the communication session has terminated. For example, after the peer to peer communication session (e.g., voice call, IM session) has terminated. In such a case, users may continue to interact with the sequences, for example, as described above. Alternatively or additionally, one or more actions are taken after termination of the session, for example, automatically sending a 'bye-bye' and/or the users playing a game with one another.
Referring back to FIG. 1, some different ways of storing the media sequence and/or calculating the interaction will be discussed. Potentially, the different ways help to ensure a smooth interaction, even if computing capabilities and/or network capabilities are not entirely suitable.
Optionally, each mobile device 104 A and/or 104B prepares content sequence 114A and/or 114B for consumption and/or sending to the other user in advance of the communication session. Optionally, each mobile device 104A-B processes the received content stream prior to the communication session.
Alternatively or additionally, the media stream resides on an external device in data communication with mobile device 104A-B, for example, computer 108, a remote server 114 and/or other devices with memory capacity. Optionally, a hyperlink to the media is provided, and media items are downloaded as they are browsed.
Optionally, mobile devices 104A-B are in electrical communication with a memory 116A-B (e.g., stored thereon). Optionally, memory 116A-B is suitable to store cached data. Optionally, data intensive media is cached and locally stored on memory 116A-B.
Optionally, the media sequence is prepared before the interaction and updated during the interaction only if network availability permits. Optionally, if the network communication is unsuitable, changes are stored until the network communication is restored. Optionally, the update rate and/or frequency vary for different interactions. For example, media for interactions that are more frequent and/or of higher relative importance to the user is updated more frequently. For example, an interaction with a close friend is updated more frequently than an interaction with a remote acquaintance.
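A minimal sketch of such a deferred-update policy is shown below, assuming changes are queued in memory while the network is down and the update interval scales with a numeric importance value; all names and numbers are illustrative.

```python
import collections

class UpdateQueue:
    """Holds sequence changes while the network is unavailable and flushes
    them when connectivity returns; the update interval shrinks for more
    important interactions (e.g., a close friend vs. a remote acquaintance)."""

    def __init__(self, base_interval_seconds=60):
        self.base_interval = base_interval_seconds
        self.pending = collections.deque()

    def interval_for(self, importance):
        # importance in (0, 1]; more important contacts are updated more often.
        return self.base_interval / max(importance, 0.1)

    def record_change(self, change):
        self.pending.append(change)

    def flush(self, network_available):
        if not network_available:
            return []                 # keep changes queued until the network is back
        sent, self.pending = list(self.pending), collections.deque()
        return sent

q = UpdateQueue()
q.record_change("edited item 704")
print(q.flush(network_available=False))  # []  (still queued)
print(q.flush(network_available=True))   # ['edited item 704']
print(q.interval_for(1.0), q.interval_for(0.2))  # 60.0 300.0
```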
It is expected that during the life of a patent maturing from this application many relevant 'interaction based content systems' will be developed and the scope of the term 'interaction based content system' is intended to include all such new technologies a priori.
As used herein the term "about" refers to ± 10 %.
The terms "comprises", "comprising", "includes", "including", "having" and their conjugates mean "including but not limited to".
The term "consisting of means "including and limited to".
The term "consisting essentially of" means that the composition, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
As used herein, the singular form "a", "an" and "the" include plural references unless the context clearly dictates otherwise. For example, the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.
Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range. Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases "ranging/ranges between" a first indicated number and a second indicated number and "ranging/ranges from" a first indicated number "to" a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.

Claims

WHAT IS CLAIMED IS:
1. A computerized method of automatically adjusting at least one prepared sequence of electronic media items during a peer to peer interactive communication session between at least two interacting users, the method comprising:
displaying the media items of the at least one prepared sequence to at least one of the at least two interacting users on a display device; and automatically adjusting the at least one prepared sequence according to an action executed on one or more media items by at least one of the at least two interacting users during the peer to peer interactive communication session.
2. The method of claim 1, wherein the one prepared sequence is prepared in advance of the peer to peer interactive communication session.
3. The method of claim 1, wherein automatically adjusting comprises automatically inserting an edited media item into the sequence.
4. The method of claim 1, wherein automatically adjusting comprises adding a new electronic item into the sequence.
5. The method of claim 1, wherein a single click sends the adjusted media item to a corresponding user.
6. The method of claim 1, wherein a first user prepares a first sequence for providing to the first user, and a second user prepares a second sequence for providing to the second user.
7. The method of claim 6, wherein a subset of media items of the first sequence are provided to the second user and a subset of media items of the second sequence are provided to the first user.
8. The method of claim 1, wherein a first user prepares a first sequence for providing to a second user, and the second user prepares a second sequence for providing to the first user.
9. The method of claim 6, wherein the first and second sequences are merged and organized into a third sequence for display to the first and second users.
10. The method of claim 1, wherein a first user prepares a first sequence for providing to the first user and to one or more second users.
11. The method of claim 10, wherein the first sequence is divided into a second sequence for display to a second user and a third sequence for display to a third user.
12. The method of claim 11, wherein the second sequence and the third sequence have at least some common electronic media items.
13. The method of claim 1, wherein the sequence is a branching tree of media items, and adjusting comprises traversing a path through the tree.
14. The method of claim 1, wherein automatically adjusting comprises removing an existing media item in the sequence.
15. The method of claim 1, wherein automatically adjusting comprises automatically adjusting the sequence according to changes in relevance of the one or more media items to the flow of conversation between the at least two interacting users.
16. The method of claim 1, further comprising automatically labeling added or edited media items according to a rating score.
17. The method of claim 16, wherein the rating score comprises a context score of relevance of the media item to the at least two interacting users.
18. The method of claim 16, wherein the labeling comprises labeling the media item according to a quality score.
19. The method of claim 18, wherein the quality score is determined according to a date and/or external rating.
20. The method of claim 16, wherein the labeling comprises labeling the media item according to an interaction fit score.
21. The method of claim 20, wherein the interaction fit score relates a property of the media item with an interaction type.
22. The method of claim 16, further comprising automatically organizing the prepared sequence according to the labeling of the media item.
23. The method of claim 1, further comprising determining an interaction type of the interaction between the at least two interacting users.
24. The method of claim 1, wherein the interaction comprises two or more interactions between a plurality of pairs of users.
25. The method of claim 1, further comprising synchronizing the sequences between the at least two interacting users.
26. The method of claim 25, wherein synchronizing comprises allowing the at least two interacting users to independently browse the sequence with changes sent from one user to another by a single click.
27. The method of claim 1, further comprising selecting media items for interactive display to at least one user of the at least two interacting users.
28. The method of claim 1, further comprising filtering adjusted media items.
29. The method of claim 1, further comprising editing the media items.
30. The method of claim 1, further comprising navigating the media items.
31. The method of claim 30, wherein navigating comprises backwards or forwards sequential browsing of the electronic media items in the sequence.
32. The method of claim 1, wherein the number of interacting users is two, and adjusting is performed only by one of the two interacting users.
33. The method of claim 1, wherein the peer to peer interactive communication session comprises a voice telephone call.
34. The method of claim 1, wherein automatically adjusting during the peer to peer interactive communication session comprises automatically adjusting the prepared sequence after the peer to peer interactive session has terminated.
35. The method of claim 1, wherein automatically adjusting during the peer to peer interactive communication session comprises automatically adjusting the prepared sequence before the peer to peer interactive session has been initialized.
36. The method of claim 1, wherein automatically adjusting comprises browsing the prepared sequence.
37. The method of claim 1, wherein a first portion of the at least one prepared sequence is prepared in advance and a second portion of the at least one prepared sequence is prepared during the communication session.
38. The method of claim 37, wherein prepared during the communication session comprises ordering the media items of the second portion of the at least one prepared sequence and inserting the ordered media items into the first portion of the at least one prepared sequence prepared in advance.
39. A system for automatically adjusting at least one prepared sequence of electronic media items during a peer to peer interactive communication session between at least two interacting users, the system comprising:
a module for automatically adjusting the prepared sequence according to changes to one or more media items made by at least one of the at least two interacting users during the peer to peer interactive communication session.
40. The system of claim 39, further comprising a memory in electrical communication with the module, the memory having a region configured as a cache for storing data intensive media items.
41. The system of claim 39, further comprising a mobile computer comprising the module, the mobile computer further comprising a communication port configured to send and receive media items.
42. The system of claim 39, further comprising a display unit for displaying the media items, the display unit being in electronic communication with the module.
43. The system of claim 39, further comprising a module configured to vary a media item update rate according to at least one of: network connectivity, computational abilities, and an importance of an interaction between the at least two interacting users.
44. The system of claim 39, wherein the number of interacting users is two, and only one of the two users is running the module.
45. A computerized method of automatically adjusting at least one prepared sequence of electronic media items during a peer to peer interactive communication session between a plurality of interacting users, the method comprising:
identifying a plurality of contacts from a contact list of a user;
identifying for each one of the plurality of contacts a plurality of media items which are relevant thereto; identifying an interactive communication session between a first contact of the plurality of contacts and the user;
automatically generating a sequence from at least some of respective said plurality of media items which are relevant to the first contact;
automatically presenting said sequence to both the user and the first contact during the interactive communication session; and
actively controlling instructions from both the user and the first contact of the presentation of the sequence during the communication session.