US20170164036A1 - Enabling communication and content viewing - Google Patents

Enabling communication and content viewing

Info

Publication number
US20170164036A1
US20170164036A1
Authority
US
United States
Prior art keywords
data
location
object element
electronic communication
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/350,784
Inventor
Kwabena Benoni Abboa-Offei
Maurice Courtois
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WEW ENTERTAINMENT Corp
Original Assignee
WEW ENTERTAINMENT Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WEW ENTERTAINMENT Corp filed Critical WEW ENTERTAINMENT Corp
Priority to US15/350,784 priority Critical patent/US20170164036A1/en
Assigned to WEW ENTERTAINMENT CORPORATION reassignment WEW ENTERTAINMENT CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ABBOA-OFFEI, KWABENA BENONI, COURTOIS, MAURICE
Publication of US20170164036A1 publication Critical patent/US20170164036A1/en
Abandoned legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4438Window management, e.g. event handling following interaction with the user interface
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/482End-user interface for program selection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8545Content authoring for generating interactive applications

Definitions

  • This specification relates generally to enabling communication and content viewing.
  • Audio-video content such as television programs or movies
  • This specification describes systems, methods and apparatus, including computer program products, for enabling communication and content viewing.
  • An example system performs the following operations: storing, in a data object, data corresponding to electronic communication, where the data object has an object element associated therewith, and where the object element initially resides at a first location on a portal; recognizing movement of the object element from the first location to a second location on the portal, where the second location is different from the first location; detecting that the object element has been moved to the second location; in response to the detecting, triggering execution of a function to obtain the data from the data object; and generating a display of the electronic communication based on the data, where the display is rendered over at least part of the second location.
  • the example system may include one or more of the following features, either alone or in combination.
  • the data may comprise a serialized representation of the electronic communication, and the function to obtain the data may comprise a process for deserializing the data.
  • the data may comprise one or more pointers to information about the electronic communication, and the function to obtain the data may comprise using the pointer to retrieve the data.
  • the system may perform the following operations: storing the data object in a temporary object in response to recognizing that the object element moved from the first location; and redrawing a part of the first location that contained the object element to reflect movement of the object element from the first location.
  • the display of the electronic communication may be generated in response to detecting release of the object element at the second location.
  • the system may generate a shadow display in response to detecting that the object element is over the second location but has not been released over the second location, where the shadow display is generated based on at least some of the data.
  • the system may perform the following operations: recognizing movement of the object element from the second location to the first location on the portal; detecting that the object element has been moved to the first location; in response to the detecting, triggering execution of a function to store data for the electronic communication in a data object associated with the object element at the first location; and generating a display of the electronic communication based on the data, where the display is rendered over at least part of the first location.
  • the system may perform the following operations prior to detecting that the object element has been moved to the second location: detecting that the object element has been released prior to reaching the second location; sending the data object to a process associated with the first location; and executing the process at the first location to redraw the object element at the first location and to store the data object in the first location in association with the object element.
  • the data may comprise an encrypted representation of the electronic communication and the function to obtain the data may comprise a process for decrypting the data.
  • the electronic communication may be a chat session for a user of a multimedia application.
  • the multimedia application may be for displaying audio-video content associated with the chat session either on the portal or on a second portal.
  • the data object may store data representing content other than the electronic communication, where the display includes the content other than the electronic communication.
  • the example systems described herein enable: (1) embedding data into data objects associated with object elements contained in a first sub-view, where that data is in a format that can be decompressed by dragging-and-dropping or moving object element(s) into a second sub-view, and where the second sub-view is associated with data-decompression code, (2) user(s) to interact with various fields of decompressed data, where fields of data are associated with multiple users and the object elements are inside of the second sub-view, and (3) moving the object element(s) back to the first sub-view for recompression.
  • All or part of the systems and techniques described herein may be implemented as a computer program product that includes instructions that are stored on one or more non-transitory machine-readable storage media, and that are executable on one or more processing devices.
  • Examples of non-transitory machine-readable storage media include, e.g., read-only memory, an optical disk drive, a memory disk drive, random access memory, and the like.
  • All or part of the systems and techniques described herein may be implemented as an apparatus, method, or electronic system that may include one or more processing devices and memory to store executable instructions to implement the stated functions.
  • FIG. 1 is a block diagram of an example system on which processes enabling communication and content viewing may be implemented.
  • FIG. 2 is a block diagram of an example electronic program guide.
  • FIG. 3 is a block diagram of an example page for viewing audio-video content and for selecting one or more users for chat.
  • FIG. 4 is a block diagram of an example page for viewing audio-video content, which shows a sub-view area to which a profile picture may be moved.
  • FIG. 5 is a block diagram of an example page for viewing audio-video content, which shows movement of a profile picture to the sub-view area.
  • FIG. 6 is a block diagram of an example page for viewing audio-video content, which shows multiple chats in the context of a two-screen experience.
  • FIG. 7 is a block diagram of an example page for viewing audio-video content, which shows multiple chats and a program guide in the context of a two-screen experience.
  • FIG. 8 is a block diagram of an example page for viewing audio-video content, which shows multiple chats and chat feeds in the context of a two-screen experience.
  • FIG. 9 is a block diagram of an example page for viewing audio-video content, which shows multiple chats and chat feeds in the context of a two-screen experience.
  • FIG. 10 is a flowchart showing an example process for enabling communication and content viewing.
  • the example processes enable a user to view content, such as a television broadcast, while also viewing electronic communications, such as chat sessions, associated with that content.
  • Data representing the electronic communications may be stored in a data object in association with an object element, such as a profile picture, on a portal. That object element may be moved from one location to another location on the portal. That other location is associated with executable code that may be triggered by executable code associated with the data object/object element.
  • movement of the object element to the other location triggers execution of code to enable display of the electronic communications at the other location (coincident with display of the content for viewing).
  • the user may further control movement of the object element, as described herein, to control display of the electronic communications.
  • the example system includes an interactive electronic program guide (e.g., FIG. 2 ) that is controllable by a user 100 .
  • the example system may be implemented in a multimedia application that includes a dynamic communications interface (e.g., FIG. 5 ).
  • the dynamic communications interface enables and encourages users of the system to interact and, in some cases, to view the reactions of others, while watching content, such as broadcast television.
  • the interface, which may be a Web page or other type of portal, may provide users with the opportunity to view the profile information of people who are watching content at a given time, and to select users for a chat experience.
  • the profile information may be input to the system or downloaded from external social media platform(s).
  • the example system may be hosted by one or more servers (server) 101 , which are accessible over a network 102 , such as the Internet, either over wired or wireless connection(s).
  • a user may access a Web site hosted by the one or more servers to obtain access to the example system.
  • User 100 may access the system through any one or more user devices, such as television 104 , computer 105 , portable device 106 , smartphone 107 , etc.
  • the Web site may be generated by one or more processing devices in the server executing code 110 stored in memory 109 of server 101 .
  • the Web site operations may be controlled by server 101 alone or in conjunction with executable code 114 that is stored in memory 113 on, and executed by one or more processing devices of, a user device.
  • the user may be presented with an electronic program guide, such as guide 200 shown in FIG. 2 .
  • the user may select guide 200 by selecting “HOME” menu option 202 from menu 204 .
  • Guide 200 may contain audio-video content (e.g., television programs, movies, and the like) that the user may access via their device (e.g., computer, tablet, smartphone, smart television, or the like).
  • the content may be all content available to that user (e.g., through a subscription service) or content that is trending on social media or on the system itself.
  • the user may select a program 201 to view by clicking-on, touching, or otherwise identifying that program.
  • the system enables a two-screen experience and, in some implementations, a one-screen experience.
  • the user may configure the system for a one-screen or two-screen experience.
  • the selected program is viewed on one screen (e.g., on television 104 's screen) and related content (described below) is viewed on a second screen (e.g., computer 105 's screen).
  • the selected program and related content are both viewed on the same portal on the same screen (e.g., on the screen of computer 105 or television 104 ).
  • the user selects a program 201 from guide 200 .
  • the system automatically (e.g., without user intervention) obtains content for the program 201 , and identifies other users registered with the system who are currently viewing that program.
  • the system generates data for a portal, such as Web page 300 of FIG. 3 .
  • user settings may dictate portal display and functionality.
  • the user may configure, and display, the portal of Web page 300 by selecting “TV DISPLAY” menu option 303 from menu 204 .
  • Web page 300 includes a display area 301 for viewing the program and sub-views 302 corresponding to users logged into the system who are currently viewing the same program.
  • Sub-views 302 are associated with data objects corresponding to other users logged into (or registered with) the system who are currently viewing the same program.
  • sub-views 302 may constitute a single sub-view area.
  • each data object is represented by a user's profile picture. The representation of a data object is referred to as an object element.
  • each object element may be a current video of the user (e.g., from a Web camera) or some non-static object element.
  • representations of data objects include, but are not limited to, the following: pictures, text, images, banners, graphics, sound files, and videos.
  • sub-views include, but are not limited to, a portion of an overall view on a display screen, or a defined section of a viewable display screen, which may hold content.
  • in FIG. 3 there is a menu, and a display or list of one or more various object elements (e.g., photographs, text, banners, etc.).
  • the menu display or list in this example contains scroll-bar functionality 306 .
  • FIG. 4 shows another example portal, e.g., Web page 400 , configured to implement a one-screen viewing experience.
  • display area 401 presents the selected program and sub-views 402 correspond to data objects (represented by object elements, which are profile pictures in this example) associated with users logged into the system who are currently viewing the same program (e.g., the program presented in display area 401 ).
  • Web page 400 includes a selectable menu 404 (in this example implementation, a “mini” program guide) for selecting other programs 405 for viewing.
  • Web page 400 also includes sub-view area 406 , which enables interaction between users, as described below. The user may configure, and display, the portal of Web page 400 by selecting “TV DISPLAY” menu option 410 and “MINI-GUIDE” menu option 411 from menu 204 .
  • each data object contains (e.g., stores) data relating to the user represented by a corresponding object element.
  • a data object for object element 410 may contain profile information for the user (e.g., name, age, likes, dislikes, etc.), and data representing, or data representing a link to, one or more chats or other electronic communications in which the user is participating, has participated, has some interest, and so forth.
  • each user may participate in one or more online chat sessions with other users of the system or with other systems. Those chat sessions may be about the program presented in display area 401 , or may be initiated in response to viewing the program followed by some user interaction.
  • data may be embedded and compressed (e.g., serialized) within each data object.
  • the actual data representing a chat session may be stored within a data object.
  • pointers to data may be embedded in and compressed within each data object.
  • pointer(s) to data representing a chat session or other type of information may be stored within a data object.
  • a data object may contain a combination of embedded and compressed data and pointers.
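  • For illustration only, the following JavaScript sketch shows one possible shape for such a data object, combining embedded serialized (e.g., JSON-stringified) chat data with pointers to data stored elsewhere. The schema and names (makeDataObject, serializedChat, profileUrl, chatFeedUrl, extractChat) are hypothetical; the specification does not prescribe a concrete format.

      // Hypothetical data object: embeds a serialized chat session and
      // pointers (references) to further information about the user.
      function makeDataObject(user, chatLog) {
        return {
          userId: user.id,
          // Embedded, compressed (serialized) representation of the chat:
          serializedChat: JSON.stringify(chatLog),
          // Pointers to data stored elsewhere:
          profileUrl: '/users/' + user.id + '/profile',
          chatFeedUrl: '/users/' + user.id + '/chat-feed'
        };
      }

      // Decompressing (deserializing) the embedded data on demand:
      function extractChat(dataObject) {
        return JSON.parse(dataObject.serializedChat);
      }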
  • a user may drag-and-drop an object element into a sub-view area 406 ; because the data object is associated with the object element, dragging-and-dropping the object element also causes the data object to be dragged-and-dropped.
  • sub-view area 406 is a pre-defined area of the Web page (or other type of portal) having the functionality described below.
  • each data object (and thus object element) in a sub-view 402 may be associated with one or more data tags or data elements, which may be hidden from view, yet embedded, compressed (e.g., serialized), or referenced within each corresponding data object for later retrieval, expansion, manipulation or other appropriate functions.
  • This first sub-view area functions to keep embedded-data elements hidden from view or compressed within the data object as a result of a dynamic interaction between event coding associated with the first sub-view area and event coding associated with the data object/object element. As such, this first sub-view area may also be referred to as “the compression space”.
  • event coding may include, but is not limited to, computer-executable code associated with a data object/object element or sub-view, the operation of which may be triggered by on-screen interaction of an object element and sub-view, as described herein.
  • sub-view area 406 is referred to as a “hot space” because it is located on the user's viewable screen and serves as an electronic key, decompression space, or decompression event that is usable to decompress, unlock (e.g., deserialize), extract, and/or dynamically expand compressed, embedded and/or hidden data in the data object represented by the corresponding object element.
  • the user drags-and-drops, or clicks, an object element or elements for a corresponding data object(s) (e.g., the user's profile picture, in this example) into the hot space on the display.
  • event-trigger code associated with the data object/object element interacts with event-trigger code associated with the hot space, and this interaction causes expansion of, and display of, information contained in, or referenced by, data tags associated with the data object.
  • the hot spaces trigger a new view of data when the event trigger code associated with a moved data object/object element(s) and event trigger code associated with the hot space(s) execute in response to interaction of the object element and the hot space (e.g., sub-view).
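  • As one hedged illustration of such event-trigger interaction in a browser, the sketch below wires an object element and a hot space together using the standard HTML5 drag-and-drop API. The element IDs, the renderChatCard helper, and the extractChat function (from the sketch above) are assumptions, not part of the specification.

      // Hypothetical "event trigger code" wiring via HTML5 drag-and-drop.
      const objectElement = document.getElementById('profile-pic-410'); // object element
      const hotSpace = document.getElementById('sub-view-406');         // hot space

      objectElement.draggable = true;
      objectElement.addEventListener('dragstart', (e) => {
        // Carry the serialized data object along with the drag.
        e.dataTransfer.setData('application/json', objectElement.dataset.dataObject);
      });

      hotSpace.addEventListener('dragover', (e) => e.preventDefault()); // allow a drop
      hotSpace.addEventListener('drop', (e) => {
        e.preventDefault();
        const dataObject = JSON.parse(e.dataTransfer.getData('application/json'));
        renderChatCard(hotSpace, extractChat(dataObject)); // expand and display the chat
      });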
  • Examples of the foregoing operation are illustrated in FIGS. 5 to 9 .
  • the user may configure, and display, the portal of Web page 500 by selecting “TV DISPLAY” menu option 508 and “CHAT FEED” menu option 509 from menu 204 (to also display a chat area 504 ).
  • the user may configure, and display, the portal of Web page 600 by selecting “MINI-GUIDE” menu option 608 from menu 204 .
  • dragging-and-dropping an object element results in expansion and display of data representing a chat session 501 of the corresponding user with one or more other users.
  • the user has not yet “dropped” the profile picture and, as a result, the chat session 501 is displayed in shadow form.
  • event trigger code associated with the data object for object element 410 and sub-view area 406 is able to determine when an object element passes over the sub-view without dropping, in which case data is expanded (e.g., deserialized) and a shadow view is rendered, and when the object is dropped, in which case there is a full-resolution rendering in the sub-view.
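  • A minimal sketch of that hover-versus-drop distinction, reusing the names from the sketch above (renderChatCard's shadow option is a hypothetical flag, e.g., a reduced-opacity rendering):

      // Hover over the hot space: shadow view; drop: full-resolution view.
      let draggedObject = null;

      objectElement.addEventListener('dragstart', () => {
        draggedObject = JSON.parse(objectElement.dataset.dataObject);
      });

      hotSpace.addEventListener('dragover', (e) => {
        e.preventDefault();
        renderChatCard(hotSpace, extractChat(draggedObject), { shadow: true });
      });

      hotSpace.addEventListener('drop', (e) => {
        e.preventDefault();
        renderChatCard(hotSpace, extractChat(draggedObject), { shadow: false });
        draggedObject = null;
      });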
  • Chat area 504 may represent chat from all users on the system or from a subset of users on the system selected, e.g., by geography or other criteria. Users for chat area 504 may be filtered by selecting appropriate system settings.
  • a user may drag-and-drop a profile picture (e.g., 410 ) into chat area 504 .
  • this action causes executable code associated with the data object and corresponding object element to interact with trigger code associated with chat area 504 , resulting in a chat session associated with the profile picture expanding, and being presented as part of, chat area 504 .
  • This functionality may be implemented in the same manner as the functionality that results in display of a chat card in sub-view area 406 as described below.
  • a profile picture (e.g., picture 512 ) may have associated therewith data object(s) and functionality of the type associated with profile pictures 511 .
  • dragging-and-dropping such a picture (e.g., picture 512 ) into sub-view area 406 will result in display of a chat session associated with the user of that picture in sub-view area 406 .
  • This functionality may be implemented in the same manner as the functionality that results in display of chat session 501 , 602 , described below.
  • FIG. 6 shows a portal, e.g., Web page 600 , which may be implemented in a two-screen implementation of the example system. That is, the viewable program is not displayed on the same screen as representations of user objects 601 , but rather is displayed on a second screen (e.g., a television screen, which is not shown).
  • chat sessions 501 , 602 are displayed for different users.
  • profile information for the users may be displayed.
  • a chat session 602 is displayed in full resolution.
  • Dragging a representation of the user data object (e.g., 410 ) from sub-view area 406 back to sub-view area 402 causes the information in the data object to be removed from sub-view area 406 .
  • the information contained therein may be re-compressed and embedded into the object for later viewing, if desired.
  • object elements may be moved from a hot space sub-view (e.g., sub-view area 406 ) back to a compression space sub-view (e.g., sub-view 402 ), where a given set of data elements will again become hidden, embedded, and/or compressed.
  • data for a corresponding data object is updated, reserialized, and re-drawn in the sub-view area 402 , as described in more detail below.
  • Hiding, embedding and/or compressing may be caused by execution of code associated with the two sub-views in response to on-screen interaction thereof.
  • FIG. 7 shows a portal, e.g., Web page 700 that may be implemented in a two-screen implementation of the example system. That is, the viewable program is not displayed on the same screen as representations of user objects in area 402 , but rather is displayed on a second screen (e.g., a television screen, which is not shown).
  • user chat is displayed in area 706 and the electronic program guide (a mini-guide) is displayed in area 707 .
  • a user may delete a chat session.
  • displays such as 602 , 702 are referred to as “chat cards,” and a set of chat cards is referred to as a “chat deck”.
  • the user may configure, and display, the portal of Web page 700 by selecting the “MINI-GUIDE” menu option (to also display a mini-guide 707 ) and “CHAT FEED” menu option from menu 204 (to also display a chat feed 706 ).
  • FIG. 8 shows a portal, e.g., Web page 800 that may be implemented in a two-screen implementation of the example system. That is, the viewable program is not displayed on the same screen as representations (object elements) 802 of user objects, but rather is displayed on a second screen (e.g., a television screen, which is not shown).
  • a chat card may be “flipped” to display information other than a chat session.
  • chat card 805 is flipped (e.g., by clicking on control 806 ) to display information from the user's profile, here the name and location of the user, along with a list of social networks where the user has accounts.
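  • One possible flip control is sketched below with hypothetical IDs and class names (control 806 and chat card 805 refer to the figure; the two-face markup is an assumption):

      // Hypothetical flip: toggle a chat card between chat and profile faces.
      function flipCard(card) {
        const showingChat = card.dataset.face !== 'profile';
        card.dataset.face = showingChat ? 'profile' : 'chat';
        card.querySelector('.chat-face').hidden = showingChat;
        card.querySelector('.profile-face').hidden = !showingChat;
      }

      document.getElementById('control-806').addEventListener('click', () => {
        flipCard(document.getElementById('chat-card-805'));
      });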
  • Web page 800 also includes a chat feed 810 relating to a program that users are viewing. The user may configure, and display, the portal of Web page 800 by selecting the “CHAT FEED” menu option from menu 204 .
  • FIG. 9 shows a portal, e.g., Web page 900 , which may be implemented in a two-screen implementation of the example system. That is, the viewable program is not displayed on the same screen as representations of user objects, but rather is displayed on a second screen (e.g., a television screen, which is not shown).
  • chat cards 902 are shown. Chat cards 602 , 702 are the same as those shown in FIG. 8 .
  • Chat card 904 is displayed as well in shadow view because it has been selected for movement from sub-view area 905 to sub-view area 906 .
  • chat card 904 is displayed in shadow view, with a “−” sign 907 to indicate that it is being removed from the chat deck (whereas chat card 908 has a “+” sign 909 to indicate that it is being added to the chat deck).
  • Chat card 904 remains displayed in shadow form until it is dragged-and-dropped to sub-view area 906 .
  • Computer-executable code in sub-view area 906 interacts with computer-executable code associated with the data object represented by object element 910 to re-compress (e.g., reserialize) data into the data object represented by the object element, leaving only the object element (e.g., the representation, such as the user's profile picture) displayed in sub-view area 906 , with no corresponding chat card.
  • FIG. 9 also shows different chat feeds 911 , 912 relating to programs that are currently being viewed in the system. These chat feeds may be user-selected, and may be based on user preferences or other information input by the user. The user may configure, and display, the portal of Web page 900 by selecting the “CHAT FEED” menu option from menu 204 .
  • the user may manage and view different object elements simultaneously by, e.g.: (1) moving object elements back and forth between first and second sub-views, while at the same time (2) interacting with, or viewing, multiple object elements in a given sub-view (e.g., sharing and viewing data with multiple users in the hot space simultaneously); and (3) scrolling through the object elements in each view.
  • Content may be managed between the compression space (e.g., sub-view area 906 of FIG. 9 ) and the decompression space (e.g., sub-view area 905 of FIG. 9 ) through use of a hand, finger, cursor, track pad, mouse, or equivalent tactile gestures and/or apparatus, or through automated processes.
  • the user may use a pinch, a click, tap, or series of taps or gestures on one or more of the object element(s) in a sub-view to move one or more object elements into a grouping sub-view that dynamically adjusts to include the new element.
  • the systems include a dynamic communications portal (e.g., a Web page or other type of interactive user interface) that encourages users to interact, and to view the reactions of their neighbors, while watching broadcast television or other audio-video content.
  • the portal provides users with an opportunity to view the profile pictures of people who are watching a broadcast television show at a given time, and then to select said users for a chat experience.
  • the example systems described herein may combine functions of (1) drag-and-drop technology, (2) sub-views and data objects associated with a display interface, which are associated with executable event-trigger code, (3) embedding/associating extractable data with(in) data objects, and (4) compressing and decompressing data embedded/associated with(in) data objects upon moving representations associated with the data objects (e.g., the object elements) into a sub-view containing executable event trigger code configured to either compress (e.g., serialize) or decompress (e.g., deserialize) data.
  • the example systems described herein include various intelligently-coded sub-views that are resident on a display screen, or the functional equivalent of a display screen.
  • the sub-views may be coded in a manner that causes various data objects to dynamically expand visible data, contract visible data, reveal hidden/embedded data, and/or hide visible data elements from view as a user moves or drags a representation of the data object from one specially-coded sub-view to another specially-coded sub-view.
  • the executable code associated with the sub-views is referred to herein as “event trigger coding,” and variations of that phrase, such as event coding, event trigger(s), and trigger(s).
  • each sub-view, and corresponding data object representation may be programmed or coded to cause or trigger any one or more of the following functions in response to a user applying a pinch, click, dragging gesture, and/or mouse movement to move an object element from one sub-view into another sub-view: hiding, embedding, compressing, decompressing, expanding, contracting, or revealing data elements that are contained in, or related to, the data objects whose representation is moved into the sub-view.
  • a viewable display on a laptop, personal digital assistant (PDA), mobile phone, television, or tablet device may be divided into two or more sub-views.
  • in the first sub-view (e.g., sub-view area 906 ), the profile-picture-object element may contain, or be associated with, one or more embedded, hidden, or compressed data objects that are related to the person in the profile picture (e.g., object-element data). That is, in some implementations, a single object element (e.g., profile picture) may be associated with multiple data objects, each representing different information and each behaving in the manner described herein.
  • Both the first sub-view and the object element(s) may be associated with distinct, executable event-trigger code that causes the object-element data to remain compressed or embedded so long as the object element(s) remain inside of the first sub-view on the viewable display.
  • when an object element is moved to the second sub-view (e.g., sub-view area 905 ), the expanded object-element data may include that person's biographical information and micro-blogging activity, among other information or related data.
  • the coding language(s) and protocols that may be used to perform these functions include, but are not limited to, JSON (JavaScript® Object Notation), JavaScript®, HTML (HyperText Markup Language), or CSS (Cascading Style Sheets).
  • the object elements in each sub-view may move or shift slightly to the right and to the left to accommodate the action of the user placing a new object element into the second sub-view for data expansion, or back to the first sub-view for data compression or re-compression.
  • the object elements in each sub-view may be programmed to be scrollable menus of object elements that may be simultaneously viewed, manipulated, and interacted with. A scroll bar for scrolling through object elements is shown in the figures. Also, an object element that is moved from the first sub-view to the second sub-view is typically no longer displayed in the first sub-view, and vice versa.
  • users may: view and dynamically scroll through a list of profile pictures of social-media users in the first sub-view, and drag the profile picture(s) into the second sub-view (e.g., a hot space).
  • the hot space (e.g., via code in, e.g., JSON, CSS, JavaScript®, and/or HTML) expands and displays information that is embedded in, or referenced by, the data object of the dragged profile picture. Such information may include, e.g., the profiled person's micro-blogging activity and interactions.
  • the hot space is a key that unlocks or expands data that is embedded, hidden, compressed or catalogued, within the object element that is dragged from the first sub-view into the hot space.
  • the profile-picture-object elements in the first sub-view are images of people who are all watching the same television broadcast or on-demand video at roughly the same time.
  • the profile-picture-object elements may include only those users with whom the user has a pre-established connection within the system or in an external network.
  • profile-picture-object elements may be initially located in the first sub-view, which is positioned at or near the middle of the display screen (e.g., sub-view area 906 of FIG. 9 ), and which includes a left-to-right-scrollable menu or collage of profile-picture object elements that may be dragged and dropped to the second sub-view in the lower half of the screen (e.g., sub-view area 905 of FIG. 9 ).
  • When a picture is dragged to the second sub-view in the lower half of the screen, the picture object may be automatically converted (as described herein) into an interface element, such as a chat window, where the user can now chat with the person pictured and view a stream of information about the person pictured, including, e.g., additional pictures of the profiled person and that person's current or previous micro-blogging activity with other social-network users.
  • When a profile-picture-object element is dragged back to the first sub-view area (e.g., from sub-view area 905 to sub-view area 906 of FIG. 9 ), the respective event coding associated with the first sub-view and the profile-picture object element(s) will interact to hide, embed, or compress (e.g., serialize) the micro-blogging activity.
  • hyperlinked news headlines with associated event code may be located in a sub-view at or near the top of the display screen.
  • the sub-view(s) may include a list of available news headlines that may be dragged to a second sub-view as well.
  • the headline object element may then be automatically converted to a full news article, e.g., via the interaction between event code contained in a data object represented by the headline (in this example, the object element) and the event code in the second sub-view.
  • when the headline is dragged back, the article text will hide, and only the headline text(s) will remain in the first sub-view, e.g., via the interaction between the event codes contained in the headline and the first sub-view.
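  • A compact sketch of the headline example, under the same assumptions as the earlier sketches (the data object embeds the serialized article body; all names are hypothetical):

      // Hypothetical headline object element whose data object embeds the article.
      const headlineObject = {
        text: 'Example headline',
        serializedArticle: JSON.stringify({ body: 'Full article text...' })
      };

      function onDropInSecondSubView(headline, subView) {
        const article = JSON.parse(headline.serializedArticle); // decompress
        subView.textContent = article.body;                     // full article shown
      }

      function onDragBackToFirstSubView(headline, subView) {
        subView.textContent = headline.text; // only the headline remains visible
      }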
  • FIG. 10 is a flowchart showing an example process 1000 for implementing the system described herein for enabling communication and content viewing.
  • Executable instructions for implementing example process 1000 may be stored on one or more non-transitory machine-readable storage media or devices.
  • Example process 1000 may be implemented by one or more processing devices on a user device (e.g., 104 , 105 , 106 , 107 ) and/or a server (e.g., 101 ) by retrieving and executing the instructions to perform the following operations.
  • data for various users is stored ( 1001 ) in data objects.
  • the data corresponds to electronic communication, such as chat sessions, for a corresponding user.
  • the data may also include profile information for that user including, but not limited to, the profile information described herein.
  • the data object may be a JSON object and the data contained therein may be serialized.
  • Each data object is associated with executable code (e.g., event trigger code) that operates, either alone or in conjunction with other executable code, to perform the functions described herein.
  • the executable code may be stored in the data object, or elsewhere and associated with the data object and the corresponding object element (each data object may be associated with, and represented by, an object element). Examples of object elements include, but are not limited to, those described herein.
  • the object elements are rendered, and displayed in a first sub-view area of a portal (e.g., a Web page), as described above.
  • Process 1000 recognizes ( 1002 ) selection and movement of an object element from a first location (e.g., a first sub-view area 906 ) to a second location (e.g., a second sub-view area 905 ) on the portal, which is different from the first location.
  • executable code is associated with the data object at the first location.
  • the code is executed in response to detection that the object has been selected (e.g., clicked-on).
  • the code, when executed, generates a temporary container object, in which the data object is stored for “transport” from the first location to the second location.
  • the code, when executed, also redraws the part of the first location that included the object element corresponding to the data object to reflect that the object element has been removed, and continues to redraw the object element on-screen to reflect movement from the first location to the second location.
  • if the object element is released before reaching the second location, code associated with the data object informs a process associated with the first location (which may be implemented by executable code associated with the first location) that there is data from the object that will be transferred to the process. Thereafter, the data is transferred to the process, the process removes the container object, stores the data object in an appropriate location, and redraws the corresponding object element in the first location.
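  • A hedged sketch of this transport mechanism, including the early-release path; the container shape and helper names are hypothetical:

      // Hypothetical "temporary container" used while the object element moves.
      let container = null;

      function onSelect(objectElement, firstLocation) {
        container = { payload: objectElement.dataset.dataObject }; // temporary container object
        firstLocation.removeChild(objectElement); // redraw first location without the element
        // While the pointer moves, the element would be redrawn under it (omitted).
      }

      function onReleaseBeforeSecondLocation(objectElement, firstLocation) {
        // Hand the data back to the first location's process and redraw there.
        objectElement.dataset.dataObject = container.payload;
        firstLocation.appendChild(objectElement); // object element redrawn at first location
        container = null;                         // container object removed
      }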
  • process 1000 detects ( 1003 ) that the object element has been moved to the second location.
  • reaching the second location may be detected based on an interaction between code executing for the data object and code executing at the second location. Detection may be performed, e.g., by detecting that the object element (e.g., the profile picture) corresponding to the data object has reached a particular part of the display screen.
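  • For example, detection by screen position might be implemented with a simple hit test such as the sketch below (an assumption; the specification does not mandate any particular detection mechanism):

      // Hypothetical hit test: is the dragged element's center inside the
      // second location's on-screen rectangle?
      function isOverSecondLocation(objectElement, secondLocation) {
        const r = secondLocation.getBoundingClientRect();
        const o = objectElement.getBoundingClientRect();
        const cx = o.left + o.width / 2;
        const cy = o.top + o.height / 2;
        return cx >= r.left && cx <= r.right && cy >= r.top && cy <= r.bottom;
      }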
  • process 1000 triggers ( 1004 ) execution of a function to obtain the data from the corresponding data object. For example, data from the data object may be extracted and deserialized.
  • Before the object element is dropped at the second location, process 1000 generates ( 1005 ), and renders on-screen at the second location, a shadow image that is based on the data object.
  • the shadow image may be a less-than-full resolution image of the electronic communication (e.g., a chat session) represented by data in the data object. This shadow image is rendered on-screen.
  • An example of a shadow image is, e.g., chat card 908 of FIG. 9 .
  • Process 1000 detects ( 1006 ) that the object element has been dropped at the second location. This may be done, e.g., by detecting release of a mouse button or other control feature when the object element is determined to be at the second location.
  • process 1000 generates ( 1007 ) a full-resolution display image of electronic communication (e.g., a chat session) represented by data in the data object. This full-resolution image is rendered on-screen.
  • An example of a full-resolution image is, e.g., chat card 602 of FIG. 9 .
  • the image (e.g., the chat card) is rendered over at least part of the second location.
  • the image may be rendered elsewhere on-screen, or some other function, unrelated to image rendering, may be triggered.
  • the interaction of event trigger code associated with the data object and the second location triggers deserialization of data representing an electronic communication, and display thereof.
  • the data in the data object may be encrypted or stored in a manner that is not serialized.
  • event trigger code associated with the data object and the second location may trigger decryption of the data and/or any other appropriate type of expansion and display of the data.
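  • As an illustration of the encrypted variant, the sketch below decrypts an AES-GCM ciphertext with the browser's Web Crypto API; the data-object fields (iv, ciphertext) and key provisioning are assumptions:

      // Hypothetical decryption of an encrypted chat representation.
      async function decryptChat(dataObject, key) {
        const plaintext = await crypto.subtle.decrypt(
          { name: 'AES-GCM', iv: dataObject.iv }, // iv stored with the ciphertext
          key,                                    // a CryptoKey provisioned elsewhere
          dataObject.ciphertext                   // ArrayBuffer of encrypted chat data
        );
        return JSON.parse(new TextDecoder().decode(plaintext));
      }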
  • the data may represent information other than electronic communications.
  • the data may represent articles or other text, which may be expanded and viewed in the manner described herein.
  • the data may represent one or more pointers to locations containing data representing electronic communications or other information. Those pointers may be deserialized or otherwise accessed and used to obtain the data that is used to generate the displays (chat cards) described herein.
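  • A minimal sketch of pointer resolution, assuming the pointers are URLs served as JSON (endpoint paths are hypothetical):

      // Hypothetical dereferencing of pointers stored in the data object.
      async function resolvePointers(dataObject) {
        const responses = await Promise.all(
          dataObject.pointers.map((url) => fetch(url)) // e.g., '/chats/123'
        );
        return Promise.all(responses.map((r) => r.json())); // data for chat cards
      }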
  • a data object may be moved (e.g., by moving its corresponding object element, such as a profile picture) from the second location (e.g., sub-view 905 ) to the first location (e.g., sub-view 906 ) to “close” viewing of a chat session.
  • process 1000 is effectively reversed.
  • code executing in the second location and code associated with the data object detect selection and movement of an object element and its corresponding data object.
  • a process is triggered that serializes data from the second location, and stores that data in the data object, which itself is stored in a temporary container object for transport to the first location.
  • Appropriate processes are executed to redraw the part of the second location previously containing the object element, and to draw the object element during motion.
  • the object element is then dropped at the first location, where the data object is removed from the temporary container object and stored in an appropriate location.
  • the object element is then redrawn on-screen at an appropriate area of the first location.
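  • A hedged sketch of this reverse path, reusing the earlier assumptions (readChatState is a hypothetical helper that captures the chat's current state):

      // Hypothetical "close" path: re-serialize the chat back into the data
      // object and restore the object element to the first location.
      function closeChat(chatCard, objectElement, firstLocation, secondLocation) {
        const chatState = readChatState(chatCard);                    // assumed helper
        objectElement.dataset.dataObject = JSON.stringify(chatState); // re-compress (reserialize)
        secondLocation.removeChild(chatCard);                         // redraw second location
        firstLocation.appendChild(objectElement);                     // redraw at first location
      }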
  • All or part of the processes described herein can be implemented as a computer program product, i.e., a computer program tangibly embodied in one or more information carriers, e.g., in one or more tangible machine-readable storage media, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers.
  • a computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a network.
  • Actions associated with implementing the processes can be performed by one or more programmable processors executing one or more computer programs to perform the functions described herein. All or part of the processes can be implemented as special purpose logic circuitry, e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit).
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read-only storage area or a random access storage area or both.
  • Elements of a computer include one or more processors for executing instructions and one or more storage area devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from, or transfer data to, or both, one or more machine-readable storage media, such as mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
  • Machine-readable storage media suitable for embodying computer program instructions and data include all forms of non-volatile storage area, including by way of example, semiconductor storage area devices, e.g., EPROM, EEPROM, and flash storage area devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • Each computing device, such as a tablet computer, may include a hard drive for storing data and computer programs, and a processing device (e.g., a microprocessor) and memory (e.g., RAM) for executing computer programs.
  • Each computing device may include an image capture device, such as a still camera or video camera. The image capture device may be built-in or simply accessible to the computing device.
  • Each computing device may include a graphics system, including a display screen.
  • a display screen, such as an LCD or a CRT (Cathode Ray Tube), displays, to a user, images that are generated by the graphics system of the computing device.
  • display on a computer display (e.g., a monitor) involves a physical transformation. Where the computer display is LCD-based, the orientation of liquid crystals can be changed by the application of biasing voltages in a physical transformation that is visually apparent to the user. Where the computer display is a CRT, the state of a fluorescent screen can be changed by the impact of electrons in a physical transformation that is also visually apparent.
  • Each display screen may be touch-sensitive, allowing a user to enter information onto the display screen via a virtual keyboard.
  • a physical QWERTY keyboard and scroll wheel may be provided for entering information onto the display screen.
  • Each computing device, and computer programs executed thereon, may also be configured to accept voice commands, and to perform functions in response to such commands. For example, the process described herein may be initiated at a client, to the extent possible, via voice commands.

Abstract

An example system includes: storing, in a data object, data corresponding to electronic communication, where the data object has an object element associated therewith, and the object element initially resides at a first location on a portal; recognizing movement of the object element from the first location to a second location on the portal, where the second location is different from the first location; detecting that the object element has been moved to the second location; in response to the detecting, triggering execution of a function to obtain the data from the data object; and generating a display of the electronic communication based on the data, where the display is rendered over at least part of the second location.

Description

    CLAIM TO PRIORITY
  • Priority is hereby claimed to U.S. Provisional Application No. 61/859,428, which was filed on Jul. 29, 2013. The contents of U.S. Provisional Application No. 61/859,428 are incorporated herein by reference.
  • TECHNICAL FIELD
  • This specification relates generally to enabling communication and content viewing.
  • BACKGROUND
  • Audio-video content, such as television programs or movies, may be viewed on a variety of devices including, but not limited to, smartphones, tablets, smart televisions, and computers. Connections to the Internet, or other network(s), allow users of such devices to access program-related content from other network users.
  • SUMMARY
  • This specification describes systems, methods and apparatus, including computer program products, for enabling communication and content viewing.
  • An example system performs the following operations: storing, in a data object, data corresponding to electronic communication, where the data object has an object element associated therewith, and where the object element initially resides at a first location on a portal; recognizing movement of the object element from the first location to a second location on the portal, where the second location is different from the first location; detecting that the object element has been moved to the second location; in response to the detecting, triggering execution of a function to obtain the data from the data object; and generating a display of the electronic communication based on the data, where the display is rendered over at least part of the second location. The example system may include one or more of the following features, either alone or in combination.
  • The data may comprise a serialized representation of the electronic communication, and the function to obtain the data may comprise a process for deserializing the data. The data may comprise one or more pointers to information about the electronic communication, and the function to obtain the data may comprise using the pointer to retrieve the data.
  • The system may perform the following operations: storing the data object in a temporary object in response to recognizing that the object element moved from the first location; and redrawing a part of the first location that contained the object element to reflect movement of the object element from the first location.
  • The display of the electronic communication may be generated in response to detecting release of the object element at the second location. The system may generate a shadow display in response to detecting that the object element is over the second location but has not been released over the second location, where the shadow display is generated based on at least some of the data.
  • The system may perform the following operations: recognizing movement of the object element from the second location to the first location on the portal; detecting that the object element has been moved to the first location; in response to the detecting, triggering execution of a function to store data for the electronic communication in a data object associated with the object element at the first location; and generating a display of the electronic communication based on the data, where the display is rendered over at least part of the first location.
  • The system may perform the following operations prior to detecting that the object element has been moved to the second location: detecting that the object element has been released prior to reaching the second location; sending the data object to a process associated with the first location; and executing the process at the first location to redraw the object element at the first location and to store the data object in the first location in association with the object element.
  • The data may comprise an encrypted representation of the electronic communication and the function to obtain the data may comprise a process for decrypting the data. The electronic communication may be a chat session for a user of a multimedia application. The multimedia application may be for displaying audio-video content associated with the chat session either on the portal or on a second portal. The data object may store data representing content other than the electronic communication, where the display includes the content other than the electronic communication.
  • In some aspects, the example systems described herein enable: (1) embedding data into data objects associated with object elements contained in a first sub-view, where that data is in a format that can be decompressed by dragging-and-dropping or moving object element(s) into a second sub-view, and where the second sub-view is associated with data-decompression code, (2) user(s) to interact with various fields of decompressed data, where fields of data are associated with multiple users and the object elements are inside of the second sub-view, and (3) moving the object element(s) back to the first sub-view for recompression.
  • Any two or more of the features described in this specification, including in this summary section, may be combined to form embodiments not specifically described in this patent application.
  • All or part of the systems and techniques described herein may be implemented as a computer program product that includes instructions that are stored on one or more non-transitory machine-readable storage media, and that are executable on one or more processing devices. Examples of non-transitory machine-readable storage media include, e.g., read-only memory, an optical disk drive, a memory disk drive, random access memory, and the like. All or part of the systems and techniques described herein may be implemented as an apparatus, method, or electronic system that may include one or more processing devices and memory to store executable instructions to implement the stated functions.
  • The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
  • DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of an example system on which processes enabling communication and content viewing may be implemented.
  • FIG. 2 is a block diagram of an example electronic program guide.
  • FIG. 3 is a block diagram of an example page for viewing audio-video content and for selecting one or more users for chat.
  • FIG. 4 is a block diagram of an example page for viewing audio-video content, which shows a sub-view area to which a profile picture may be moved.
  • FIG. 5 is a block diagram of an example page for viewing audio-video content, which shows movement of a profile picture to the sub-view area.
  • FIG. 6 is a block diagram of an example page for viewing audio-video content, which shows multiple chats in the context of a two-screen experience.
  • FIG. 7 is a block diagram of an example page for viewing audio-video content, which shows multiple chats and a program guide in the context of a two-screen experience.
  • FIG. 8 is a block diagram of an example page for viewing audio-video content, which shows multiple chats and chat feeds in the context of a two-screen experience.
  • FIG. 9 is a block diagram of an example page for viewing audio-video content, which shows multiple chats and chat feeds in the context of a two-screen experience.
  • FIG. 10 is a flowchart showing an example process for enabling communication and content viewing.
  • Like reference numerals in different figures indicate like elements.
  • DETAILED DESCRIPTION
  • Described herein are example processes for enabling communication and content viewing. The example processes enable a user to view content, such as a television broadcast, while also viewing electronic communications, such as chat sessions, associated with that content. Data representing the electronic communications may be stored in a data object in association with an object element, such as a profile picture, on a portal. That object element may be moved from one location to another location on the portal. That other location is associated with executable code that may be triggered by executable code associated with the data object/object element. Thus, in the example processes, movement of the object element to the other location triggers execution of code to enable display of the electronic communications at the other location (coincident with display of the content for viewing). The user may further control movement of the object element, as described herein, to control display of the electronic communications.
• The example techniques described herein may be used in any appropriate context, and are not limited to use with any particular system. An example system in which the techniques described herein may be used is presented below.
  • Referring to FIG. 1, the example system includes an interactive electronic program guide (e.g., FIG. 2) that is controllable by a user 100. The example system may be implemented in a multimedia application that includes a dynamic communications interface (e.g., FIG. 5). The dynamic communications interface enables and encourages users of the system to interact and, in some cases, to view the reactions of others, while watching content, such as broadcast television. The interface, which may be a Web page or other type of portal, may provide users with the opportunity to view the profile information of people who are watching content at a given time, and to select users for a chat experience. The profile information may be input to the system or downloaded from external social media platform(s).
  • The example system may be hosted by one or more servers (server) 101, which are accessible over a network 102, such as the Internet, either over wired or wireless connection(s). For example, user 100 may access a Web site hosted by the one or more servers to obtain access to the example system. User 100 may access the system through any one or more user devices, such as television 104, computer 105, portable device 106, smartphone 107, etc. The Web site may be generated by one or more processing devices in the server executing code 110 stored in memory 109 of server 101. The Web site operations may be controlled by server 101 alone or in conjunction with executable code 114 that is stored in memory 113 on, and executed by one or more processing devices of, a user device.
  • In an example implementation, upon accessing the system, the user may be presented with an electronic program guide, such as guide 200 shown in FIG. 2. The user may select guide 200 by selecting “HOME” menu option 202 from menu 204. Guide 200 may contain audio-video content (e.g., television programs, movies, and the like) that the user may access via their device (e.g., computer, tablet, smartphone, smart television, or the like). The content may be all content available to that user (e.g., through a subscription service) or content that is trending on social media or on the system itself. The user may select a program 201 to view by clicking-on, touching, or otherwise identifying that program. In some implementations, the system enables a two-screen experience and in some implementations, the system enables a one-screen experience. The user may configure the system for a one-screen or two-screen experience. In an example two-screen experience, the selected program is viewed on one screen (e.g., on television 104's screen) and related content (described below) is viewed on a second screen (e.g., computer 105's screen). In an example one-screen experience, the selected program and related content are both viewed on the same portal on the same screen (e.g., on the screen of computer 105 or television 104).
• In an example one-screen experience, the user selects a program 201 from guide 200. In response, the system automatically (e.g., without user intervention) obtains content for the program 201, and identifies other users registered with the system who are currently viewing that program. The system generates data for a portal, such as Web page 300 of FIG. 3. As is the case with the other screen-shots shown herein, user settings may dictate portal display and functionality. In this example, the user may configure, and display, the portal of Web page 300 by selecting “TV DISPLAY” menu option 303 from menu 204.
• In this example implementation, Web page 300 includes a display area 301 for viewing the program and sub-views 302 corresponding to users logged into the system who are currently viewing the same program. Sub-views 302 are associated with data objects corresponding to other users logged into (or registered with) the system who are currently viewing the same program. In some implementations, sub-views 302 may constitute a single sub-view area. In this example, each data object is represented by a user's profile picture. The representation of a data object is referred to as an object element. In some implementations, each object element may be a current video of the user (e.g., from a Web camera) or some other non-static object element. In this regard, examples of representations of data objects, namely, object elements, include, but are not limited to, the following: pictures, text, images, banners, graphics, sound files, and videos. Examples of sub-views include, but are not limited to, a portion of an overall view on a display screen, or a defined section of a viewable display screen, which may hold content.
• Thus, in the example implementation of FIG. 3, there is a menu and a display, or list, of various object elements (e.g., photographs, text, banners, etc.). The menu display or list in this example includes scroll-bar functionality 306.
• FIG. 4 shows another example portal, e.g., Web page 400, configured to implement a one-screen viewing experience. In example Web page 400, display area 401 presents the selected program and sub-views 402 correspond to data objects (represented by object elements, which are profile pictures in this example) associated with users logged into the system who are currently viewing the same program (e.g., the program presented in display area 401). Web page 400 includes a selectable menu 404 (in this example implementation, a “mini” program guide) for selecting other programs 405 for viewing. Web page 400 also includes sub-view area 406, which enables interaction between users, as described below. The user may configure, and display, the portal of Web page 400 by selecting “TV DISPLAY” menu option 410 and “MINI-GUIDE” menu option 411 from menu 204.
  • In some implementations, each data object contains (e.g., stores) data relating to the user represented by a corresponding object element. For example, a data object for object element 410 may contain profile information for the user (e.g., name, age, likes, dislikes, etc.), and data representing, or data representing a link to, one or more chats or other electronic communications in which the user is participating, has participated, has some interest, and so forth. In some implementations, each user may participate in one or more online chat sessions with other users of the system or with other systems. Those chat sessions may be about the program presented in display area 401, or may be initiated in response to viewing the program followed by some user interaction.
  • In some implementations, data may be embedded and compressed (e.g., serialized) within each data object. For example, in some implementations, the actual data representing a chat session may be stored within a data object. In some implementations, pointers to data may be embedded in and compressed within each data object. For example, pointer(s) to data representing a chat session or other type of information may be stored within a data object. In some implementations, a data object may contain a combination of embedded and compressed data and pointers. To view some, or all, of the information corresponding to that data, a user may drag-and-drop an object element (and thus also a data object associated with the object element) into a sub-view area 406. Dragging-and-dropping the object element also causes the data object to be dragged-and-dropped. In this example implementation, sub-view area 406 is a pre-defined area of the Web page (or other type of portal) having the functionality described below.
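• By way of illustration, the sketch below shows what such a data object might look like in JavaScript (one of the languages named later in this description). The structure and all field names here (userId, chat, chatFeedUrl) are illustrative assumptions, not terms from this application.

```javascript
// Minimal sketch, not the patented implementation: a data object whose
// chat data is embedded in serialized ("compressed") form, plus a
// pointer variant that references chat data stored elsewhere.
const dataObject = {
  userId: "u-1024",                                  // user behind the object element
  profile: { name: "A. Viewer", likes: ["drama"] },  // embedded profile data
  chat: JSON.stringify([                             // serialized chat session
    { from: "u-1024", text: "Did you see that play?" }
  ]),
  chatFeedUrl: "/feeds/u-1024/chat"                  // pointer to external chat data
};

// The chat stays opaque until a decompression event deserializes it:
const messages = JSON.parse(dataObject.chat);
```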
• In some implementations, each data object (and thus object element) in a sub-view 402 may be associated with one or more data tags or data elements, which may be hidden from view, yet embedded, compressed (e.g., serialized), or referenced within each corresponding data object for later retrieval, expansion, manipulation, or other appropriate functions. This first sub-view area functions to keep embedded-data elements hidden from view or compressed within the data object as a result of a dynamic interaction between event coding associated with the first sub-view area and event coding associated with the data object/object element. As such, this first sub-view area may also be referred to as “the compression space”. In some implementations, event coding may include, but is not limited to, computer-executable code associated with a data object/object element or sub-view, the operation of which may be triggered by on-screen interaction of an object element and sub-view, as described herein.
  • In some implementations, sub-view area 406 is referred to as a “hot space” because it is located on the user's viewable screen and serves as an electronic key, decompression space, or decompression event that is usable to decompress, unlock (e.g., deserialize), extract, and/or dynamically expand compressed, embedded and/or hidden data in the data object represented by the corresponding object element. In an example implementation, the user drags-and-drops, or clicks, an object element or elements for a corresponding data object(s) (e.g., the user's profile picture, in this example) into the hot space on the display. In response, event-trigger code associated with the data object/object element interacts with event-trigger code associated with the hot space, and this interaction causes expansion of, and display of, information contained in, or referenced by, data tags associated with the data object. Thus, in some implementations, the hot spaces trigger a new view of data when the event trigger code associated with a moved data object/object element(s) and event trigger code associated with the hot space(s) execute in response to interaction of the object element and the hot space (e.g., sub-view).
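• One plausible way to realize this event-trigger interaction in a browser is with the standard HTML5 drag-and-drop events, as sketched below. The element IDs and the renderChatCard() helper are invented for illustration; the patent does not prescribe this code.

```javascript
// Object-element side: package the data object when the drag starts.
const objectElement = document.getElementById("profile-pic-410"); // hypothetical ID
objectElement.draggable = true;
objectElement.addEventListener("dragstart", (e) => {
  e.dataTransfer.setData("application/json", JSON.stringify(dataObject));
});

// Hot-space side: accept the drop and decompress (deserialize) the data.
const hotSpace = document.getElementById("sub-view-406"); // hypothetical ID
hotSpace.addEventListener("dragover", (e) => e.preventDefault()); // allow dropping here
hotSpace.addEventListener("drop", (e) => {
  e.preventDefault();
  const dropped = JSON.parse(e.dataTransfer.getData("application/json"));
  renderChatCard(hotSpace, JSON.parse(dropped.chat)); // hypothetical renderer
});
```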
  • Examples of the foregoing operation are illustrated in FIGS. 5 to 9. In FIG. 5, the user may configure, and display, the portal of Web page 500 by selecting “TV DISPLAY” menu option 508 and “CHAT FEED” menu option 509 from menu 204 (to also display a chat area 504). In FIG. 6, the user may configure, and display, the portal of Web page 600 by selecting “MINI-GUIDE” menu option 608 from menu 204.
• Referring to FIG. 5, dragging object element 410 (e.g., a profile picture) over sub-view area 406 results in expansion and display of data representing a chat session 501 of the corresponding user with one or more other users. In the example of FIG. 5, the user has not yet “dropped” the profile picture and, as a result, chat session 501 is displayed in shadow form. In this example, event trigger code associated with the data object for object element 410 and with sub-view area 406 is able to determine when an object element passes over the sub-view without being dropped, in which case data is expanded (e.g., deserialized) and a shadow view is rendered, and when the object element is dropped, in which case there is a full-resolution rendering in the sub-view.
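• The shadow-versus-full distinction could be implemented as sketched below. Browsers block dataTransfer.getData() during dragenter/dragover, so this sketch assumes the dragstart handler also stashes the data object in a shared variable; renderChatCard() (assumed to return the card element) and the “shadow” CSS class are likewise assumptions.

```javascript
let draggedObject = null; // set by the object element's dragstart handler

hotSpace.addEventListener("dragenter", () => {
  if (!draggedObject) return;
  const card = renderChatCard(hotSpace, JSON.parse(draggedObject.chat));
  card.classList.add("shadow"); // e.g., reduced opacity: hovering, not yet dropped
});

hotSpace.addEventListener("dragleave", () => {
  hotSpace.querySelector(".shadow")?.remove(); // discard the preview if the drag exits
});

hotSpace.addEventListener("drop", (e) => {
  e.preventDefault();
  hotSpace.querySelector(".shadow")?.classList.remove("shadow"); // promote to full resolution
});
```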
  • Also, in Web page 500 of FIG. 5, additional interactions have occurred relative to Web page 400 of FIG. 4 to replace menu 404 with a chat area 504. Chat area 504 may represent chat from all users on the system or from a subset of users on the system selected, e.g., by geography or other criteria. Users for chat area 504 may be filtered by selecting appropriate system settings.
• A user may drag-and-drop a profile picture (e.g., 410) into chat area 504. In this example implementation, this action causes executable code associated with the data object and corresponding object element to interact with trigger code associated with chat area 504, resulting in a chat session associated with the profile picture expanding, and being presented as part of, chat area 504. This functionality may be implemented in the same manner as the functionality that results in display of a chat card in sub-view area 406, as described below. Furthermore, in some implementations, a profile picture (e.g., picture 512) may have associated therewith data object(s) and functionality of the type associated with profile pictures 511. Accordingly, dragging and dropping such a picture (e.g., picture 512) to sub-view area 406 will result in display of a chat session associated with the user of that picture in sub-view area 406. This functionality may be implemented in the same manner as the functionality that results in display of chat sessions 501, 602, described below.
• FIG. 6 shows a portal, e.g., Web page 600, which may be implemented in a two-screen implementation of the example system. That is, the viewable program is not displayed on the same screen as representations of user objects 601, but rather is displayed on a second screen (e.g., a television screen, which is not shown). In the example of FIG. 6, as is also the case with the one-screen implementation, there may be different chat sessions 501, 602 displayed for different users. In some implementations, profile information for the users may be displayed. In this regard, in some implementations, when the user “drops” a profile picture in the sub-view, a chat session 602 is displayed in full resolution.
• Dragging a representation of the user data object (e.g., 410) from sub-view area 406 back to sub-view area 402 causes the information in the data object to be removed from sub-view area 406. The information contained therein may be re-compressed and embedded into the object for later viewing, if desired. Thus, object elements may be moved from a hot space sub-view (e.g., sub-view area 406) back to a compression space sub-view (e.g., sub-view 402), where a given set of data elements will again become hidden, embedded, and/or compressed. In some implementations, data for a corresponding data object is updated, reserialized, and re-drawn in sub-view area 402, as described in more detail below. Hiding, embedding, and/or compressing may be caused by execution of code associated with the two sub-views in response to on-screen interaction thereof.
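• A sketch of this re-compression path follows. collectMessages() (which would read the live chat content out of the page) and drawObjectElement() are hypothetical helpers; the point is only that a drop handler on the compression space reserializes the data back into the data object.

```javascript
const compressionSpace = document.getElementById("sub-view-402"); // hypothetical ID
compressionSpace.addEventListener("dragover", (e) => e.preventDefault());

compressionSpace.addEventListener("drop", (e) => {
  e.preventDefault();
  // Re-serialize ("re-compress") the current chat content into the data object:
  draggedObject.chat = JSON.stringify(collectMessages(draggedObject.userId));
  // Remove the expanded chat card from the hot space, if one is displayed:
  hotSpace.querySelector(`[data-user="${draggedObject.userId}"]`)?.remove();
  // Redraw only the object element (e.g., the profile picture) here:
  drawObjectElement(compressionSpace, draggedObject);
});
```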
• FIG. 7 shows a portal, e.g., Web page 700, which may be implemented in a two-screen implementation of the example system. That is, the viewable program is not displayed on the same screen as representations of user objects in area 402, but rather is displayed on a second screen (e.g., a television screen, which is not shown). In the example of FIG. 7, as is also the case with the one-screen implementation, there may be more than one chat session 602, 702 displayed for different respective users. In this example, user chat is displayed in area 706 and the electronic program guide (a mini-guide) is displayed in area 707. In this example, as shown at option 710, a user may delete a chat session. In this regard, in some implementations, individual chat session displays (e.g., 602, 702) are referred to as “chat cards” and a set of chat cards is referred to as a “chat deck”. The user may configure, and display, the portal of Web page 700 by selecting the “MINI-GUIDE” menu option (to also display a mini-guide 707) and the “CHAT FEED” menu option from menu 204 (to also display a chat feed 706).
• FIG. 8 shows a portal, e.g., Web page 800, which may be implemented in a two-screen implementation of the example system. That is, the viewable program is not displayed on the same screen as representations (object elements) 802 of user objects, but rather is displayed on a second screen (e.g., a television screen, which is not shown). In the example of FIG. 8, four chat cards 804 are shown. In some implementations, a chat card may be “flipped” to display information other than a chat session. For example, in FIG. 8, chat card 805 is flipped (e.g., by clicking on control 806) to display information from the user's profile, here the name and location of the user, along with a list of social networks where the user has accounts. Web page 800 also includes a chat feed 810 relating to a program that users are viewing. The user may configure, and display, the portal of Web page 800 by selecting the “CHAT FEED” menu option from menu 204.
• FIG. 9 shows a portal, e.g., Web page 900, which may be implemented in a two-screen implementation of the example system. That is, the viewable program is not displayed on the same screen as representations of user objects, but rather is displayed on a second screen (e.g., a television screen, which is not shown). In the example of FIG. 9, five chat cards 902 are shown. Chat cards 602, 702 are the same as those shown in FIG. 8. Chat card 904 is displayed in shadow view because it has been selected for movement from sub-view area 905 to sub-view area 906. Accordingly, chat card 904 is displayed in shadow view, with a “−” sign 907 to indicate that it is being removed from the chat deck (whereas chat card 908 has a “+” sign 909 to indicate that it is being added to the chat deck). Chat card 904 remains displayed in shadow form until it is dragged-and-dropped to sub-view area 906. Computer-executable code in sub-view area 906 interacts with computer-executable code associated with the data object represented by object element 910 to re-compress (e.g., reserialize) data into the data object represented by the object element, leaving only the object element (e.g., the representation, such as the user's profile picture) displayed in sub-view area 906, with no corresponding chat card.
• FIG. 9 also shows different chat feeds 911, 912 relating to programs that are currently being viewed in the system. These chat feeds may be user-selected, and may be based on user preferences or other information input by the user. The user may configure, and display, the portal of Web page 900 by selecting the “CHAT FEED” menu option from menu 204.
• Accordingly, as described above, in some implementations, the user may manage and view different object elements simultaneously by, e.g.: (1) moving object elements back and forth between first and second sub-views, while at the same time (2) interacting with, or viewing, multiple object elements in a given sub-view (e.g., sharing and viewing data with multiple users in the hot space simultaneously); and (3) scrolling through the object elements in each view. Content may be managed between the compression space (e.g., sub-view area 906 of FIG. 9) and the decompression space (e.g., sub-view area 905 of FIG. 9) through use of a hand, finger, cursor, track pad, mouse, or equivalent tactile gestures and/or apparatus, or through automated processes (e.g., computer programs) configured to move one or more object elements to a given sub-view. The user may use a pinch, a click, a tap, or a series of taps or gestures on one or more of the object element(s) in a sub-view to move one or more object elements into a grouping sub-view that dynamically adjusts to include the new element.
  • In the example implementations described above, the systems include a dynamic communications portal (e.g., a Web page or other type of interactive user interface) that encourages users to interact, and to view the reactions of their neighbors, while watching broadcast television or other audio-video content. In some implementations, the portal provides users with an opportunity to view the profile pictures of people who are watching a broadcast television show at a given time, and then to select said users for a chat experience.
  • The example systems described herein may combine functions of (1) drag-and-drop technology, (2) sub-views and data objects associated with a display interface, which are associated with executable event-trigger code, (3) embedding/associating extractable data with(in) data objects, and (4) compressing and decompressing data embedded/associated with(in) data objects upon moving representations associated with the data objects (e.g., the object elements) into a sub-view containing executable event trigger code configured to either compress (e.g., serialize) or decompress (e.g., deserialize) data.
• The example systems described herein include various intelligently-coded sub-views that are resident on a display screen, or the functional equivalent of a display screen. The sub-views may be coded in a manner that causes various data objects to dynamically expand visible data, contract visible data, reveal hidden/embedded data, and/or hide visible data elements from view as a user moves or drags a representation of the data object from one specially-coded sub-view to another specially-coded sub-view. The executable code associated with the sub-views is referred to herein as “event trigger coding,” and variations of that phrase, such as event coding, event trigger(s), and trigger(s).
  • In some implementations, each sub-view, and corresponding data object representation may be programmed or coded to cause or trigger any one or more of the following functions in response to a user applying a pinch, click, dragging gesture, and/or mouse movement to move an object element from one sub-view into another sub-view: hiding, embedding, compressing, decompressing, expanding, contracting, or revealing data elements that are contained in, or related to, the data objects whose representation is moved into the sub-view.
• As shown in FIGS. 2 to 9 above, a viewable display on a laptop, personal digital assistant (PDA), mobile phone, television, or tablet device may be divided into two or more sub-views. The first sub-view (e.g., sub-view area 906) may contain object elements such as profile pictures (e.g., a profile-picture-object element). The profile-picture-object element may contain, or be associated with, one or more embedded, hidden, or compressed data objects that are related to the person in the profile picture (e.g., object-element data). That is, in some implementations, a single object element (e.g., profile picture) may be associated with multiple data objects, each representing different information and each behaving in the manner described herein. Both the first sub-view and the object element(s) may be associated with distinct, executable event-trigger code that causes the object-element data to remain compressed or embedded so long as the object element(s) remain inside of the first sub-view on the viewable display. The second sub-view (e.g., sub-view area 905) may contain additional executable code that causes or triggers the profile-picture-object element(s) to reveal expanded or decompressed object-element data about the person or subject featured in the profile-picture-object element when the object element is moved by the user from the first sub-view to the second sub-view. The expanded object-element data may include that person's biographical information and micro-blogging activity, among other information or related data. The coding language(s) and protocols that may be used to perform these functions include, but are not limited to, JSON (JavaScript® Object Notation), JavaScript®, HTML (HyperText Markup Language), and CSS (Cascading Style Sheets).
• In some implementations, the object elements in each sub-view may move or shift slightly to the right and to the left to accommodate the action of the user placing a new object element into the second sub-view for data expansion, or back into the first sub-view for data compression or re-compression. In addition, the object elements in each sub-view may be programmed to be scrollable menus of object elements that may be simultaneously viewed, manipulated, and interacted with. A scroll bar for scrolling through object elements is shown in the figures. Also, an object element that is moved from the first sub-view to the second sub-view is typically no longer displayed in the first sub-view, and vice versa.
• In some implementations, users may view and dynamically scroll through a list of profile pictures of social-media users in the first sub-view, and may drag the profile picture(s) into the second sub-view (e.g., a hot space). The hot space (e.g., via code in, e.g., JSON, CSS, JavaScript®, and/or HTML) interacts with code associated with the profile-picture-object element(s) and/or corresponding data object(s) to trigger the automatic revelation of expanded or decompressed information regarding the person featured in the profile-picture-object element. Such information may include, e.g., the profiled person's micro-blogging activity and interactions. In some implementations, the hot space is a key that unlocks or expands data that is embedded, hidden, compressed, or catalogued within the object element that is dragged from the first sub-view into the hot space.
  • In an example implementation, the profile-picture-object elements in the first sub-view are images of people who are all watching the same television broadcast or on-demand video at roughly the same time. In some implementations, the profile-picture-object elements may include only those users with whom the user has a pre-established connection within the system or in an external network.
• In some example implementations, profile-picture-object elements may be initially located in the first sub-view, which is positioned at or near the middle of the display screen (e.g., sub-view area 906 of FIG. 9), and which includes a left-to-right-scrollable menu or collage of profile-picture object elements that may be dragged and dropped to the second sub-view in the lower half of the screen (e.g., sub-view area 905 of FIG. 9). When a picture is dragged to the second sub-view in the lower half of the screen, the picture object may be automatically converted (as described herein) into an interface element, such as a chat window, where the user can now chat with the person pictured and view a stream of information about the person pictured, including, e.g., additional pictures of the profiled person and that person's current or previous micro-blogging activity with other social-network users. When a profile-picture-object element is dragged back to the first sub-view area (e.g., from sub-view area 905 to sub-view area 906 of FIG. 9), then the respective event coding associated with the first sub-view and the profile-picture object element(s) will interact to hide, embed, or compress (e.g., serialize) the micro-blogging activity.
• In some implementations, hyperlinked news headlines with associated event code may be located in a sub-view at or near the top of the display screen. In some implementations, the sub-view(s) may include a list of available news headlines that may be dragged to a second sub-view as well. When a headline is dragged or clicked into the second sub-view, the headline object element may then be automatically converted to a full news article, e.g., via the interaction between event code contained in a data object represented by the headline (in this example, the object element) and the event code in the second sub-view. Next, when the user drags or clicks the article back to the first sub-view, the article text will hide, and only the headline text(s) will remain in the first sub-view, e.g., via the interaction between the event codes contained in the headline and the first sub-view.
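• The headline example reuses the same trigger pattern with a different payload. In the sketch below, the field names and conversion helpers are assumptions; only the expand-on-drop, collapse-on-return behavior comes from the description above.

```javascript
// A headline object element whose data object embeds the serialized article body.
const headlineObject = {
  headline: "Local team wins",
  article: JSON.stringify({ body: "Full article text..." }) // embedded, compressed
};

function onHeadlineDrop(secondSubView) {
  const article = JSON.parse(headlineObject.article); // expand to the full article
  secondSubView.textContent = article.body;
}

function onHeadlineReturn(firstSubView) {
  firstSubView.textContent = headlineObject.headline; // only the headline remains
}
```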
• FIG. 10 is a flowchart showing an example process 1000 for implementing the system described herein for enabling communication and content viewing. Executable instructions for implementing example process 1000 may be stored on one or more non-transitory machine-readable storage media or devices. Example process 1000 may be implemented by one or more processing devices on a user device (e.g., 104, 105, 106, 107) and/or a server (e.g., 101) by retrieving and executing the instructions to perform the following operations.
  • According to process 1000, data for various users is stored (1001) in data objects. In some implementations, the data corresponds to electronic communication, such as chat sessions, for a corresponding user. The data may also include profile information for that user including, but not limited to, the profile information described herein. The data object may be a JSON object and the data contained therein may be serialized. Each data object is associated with executable code (e.g., event trigger code) that operates, either alone or in conjunction with other executable code, to perform the functions described herein. The executable code may be stored in the data object, or elsewhere and associated with the data object and the corresponding object element (each data object may be associated with, and represented by, an object element). Examples of object elements include, but are not limited to, those described herein. The object elements are rendered, and displayed in a first sub-view area of a portal (e.g., a Web page), as described above.
  • Process 1000 recognizes (1002) selection and movement of an object element from a first location (e.g., a first sub-view area 906) to a second location (e.g., a second sub-view area 905) on the portal, which is different from the first location. In this regard, executable code is associated with the data object at the first location. The code is executed in response to detection that the object has been selected (e.g., clicked-on). The code, when executed, generates a temporary container object, in which the data object is stored for “transport” from the first location to the second location. The code, when executed, also redraws the part of the first location that included the object element corresponding to the data object to reflect that the object element has been removed, and continues to redraw the object element on-screen to reflect movement from the first location to the second location.
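• The temporary container object might look like the sketch below; the container shape, the origin field, and the “hidden” CSS class are assumptions. The deferred hide is a common drag-and-drop workaround so the browser can still capture the drag image from the visible node.

```javascript
objectElement.addEventListener("dragstart", (e) => {
  const container = {
    payload: dataObject,     // the data object being transported
    origin: "sub-view-906"   // where to redraw if the drop is abandoned
  };
  e.dataTransfer.setData("application/json", JSON.stringify(container));
  // Redraw the first location without the object element, deferred one tick:
  setTimeout(() => objectElement.classList.add("hidden"), 0);
});
```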
  • If the object element does not reach the second location, e.g., it is dropped prior to reaching the second location, code associated with the data object informs a process associated with the first location (which may be implemented by executable code associated with the first location) that there is data from the object that will be transferred to the process. Thereafter, the data is transferred to the process, the process removes the container object, stores the data object in an appropriate location, and redraws the corresponding object element in the first location.
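• One way to detect this abandoned-drop case is the dragend event, which fires on the source element after every drag; a dropEffect of “none” indicates that no valid target accepted the drop. firstLocationProcess.restore() stands in for the first location's process and is purely hypothetical.

```javascript
objectElement.addEventListener("dragend", (e) => {
  if (e.dataTransfer.dropEffect === "none") {
    // No valid drop occurred: hand the data object back to the first
    // location's process (hypothetical API) and redraw the element there.
    firstLocationProcess.restore(dataObject);
    objectElement.classList.remove("hidden");
  }
});
```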
• If the object element reaches the second location, process 1000 detects (1003) that the object element has been moved to the second location. In some implementations, reaching the second location may be detected based on an interaction between code executing for the data object and code executing at the second location. Detection may be performed, e.g., by detecting that the object element (e.g., the profile picture) corresponding to the data object has reached a particular part of the display screen. In response to this detection, process 1000 triggers (1004) execution of a function to obtain the data from the corresponding data object. For example, data from the data object may be extracted and deserialized. Before the object element is dropped at the second location, process 1000 generates (1005), and renders on-screen at the second location, a shadow image that is based on the data object. For example, the shadow image may be a less-than-full-resolution image of the electronic communication (e.g., a chat session) represented by data in the data object. An example of a shadow image is chat card 908 of FIG. 9.
• Process 1000 detects (1006) that the object element has been dropped at the second location. This may be done, e.g., by detecting release of a mouse button or other control feature when the object element is determined to be at the second location. In response, process 1000 generates (1007) a full-resolution display image of the electronic communication (e.g., a chat session) represented by data in the data object. This full-resolution image is rendered on-screen. An example of a full-resolution image is chat card 602 of FIG. 9. As shown in, e.g., FIG. 9, the image (e.g., the chat card) is rendered over at least part of the second location. In other implementations, the image may be rendered elsewhere on-screen, or some other function may be triggered unrelated to image rendering.
• As described above, in this example implementation, the interaction of event trigger code associated with the data object and the second location (e.g., sub-view) triggers deserialization of data representing an electronic communication, and display thereof. In some implementations, the data in the data object may be encrypted or stored in a manner that is not serialized. In such implementations, event trigger code associated with the data object and the second location (e.g., sub-view) may trigger decryption of the data and/or any other appropriate type of expansion and display of the data. In some implementations, as described above, the data may represent information other than electronic communications. For example, the data may represent articles or other text, which may be expanded and viewed in the manner described herein. In some implementations, the data may represent one or more pointers to locations containing data representing electronic communications or other information. Those pointers may be deserialized or otherwise accessed and used to obtain the data that is used to generate the displays (chat cards) described herein.
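• For the pointer variant, the trigger function might dereference the stored reference asynchronously, as in this sketch; the chatFeedUrl field and renderChatCard() helper carry over from the earlier illustrative sketches and are assumptions.

```javascript
// Follow a pointer instead of deserializing embedded data (illustrative only).
async function expandFromPointer(dataObject, targetSubView) {
  const response = await fetch(dataObject.chatFeedUrl); // pointer to chat data
  const chat = await response.json();
  renderChatCard(targetSubView, chat); // hypothetical renderer
}
```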
• As explained above, a data object may be moved (e.g., by moving its corresponding object element, such as a profile picture) from the second location (e.g., sub-view 905) to the first location (e.g., sub-view 906) to “close” viewing of a chat session. In this case, process 1000 is effectively reversed. For example, code executing at the second location and code associated with the data object detect selection and movement of the object element corresponding to the data object. In that case, a process is triggered that serializes data from the second location, and stores that data in the data object, which itself is stored in a temporary container object for transport to the first location. Appropriate processes are executed to redraw the part of the second location previously containing the object element, and to draw the object element during motion. The object element is then dropped at the first location, where the data object is removed from the temporary container object and stored in an appropriate location. The object element is then redrawn on-screen at an appropriate area of the first location.
• All or part of the processes described herein and their various modifications (hereinafter referred to as “the processes”) can be implemented, at least in part, via a computer program product, i.e., a computer program tangibly embodied in one or more information carriers, e.g., in one or more tangible machine-readable storage media, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers.
  • A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a network.
• Actions associated with implementing the processes can be performed by one or more programmable processors executing one or more computer programs to perform the functions of the processes. All or part of the processes can be implemented as special-purpose logic circuitry, e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit).
  • Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only storage area or a random access storage area or both. Elements of a computer (including a server) include one or more processors for executing instructions and one or more storage area devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from, or transfer data to, or both, one or more machine-readable storage media, such as mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Machine-readable storage media suitable for embodying computer program instructions and data include all forms of non-volatile storage area, including by way of example, semiconductor storage area devices, e.g., EPROM, EEPROM, and flash storage area devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • Each computing device, such as a tablet computer, may include a hard drive for storing data and computer programs, and a processing device (e.g., a microprocessor) and memory (e.g., RAM) for executing computer programs. Each computing device may include an image capture device, such as a still camera or video camera. The image capture device may be built-in or simply accessible to the computing device.
• Each computing device may include a graphics system, including a display screen. A display screen, such as an LCD (liquid crystal display) or a CRT (cathode ray tube), displays, to a user, images that are generated by the graphics system of the computing device. As is well known, display on a computer display (e.g., a monitor) physically transforms the computer display. For example, if the computer display is LCD-based, the orientation of liquid crystals can be changed by the application of biasing voltages in a physical transformation that is visually apparent to the user. As another example, if the computer display is a CRT, the state of a fluorescent screen can be changed by the impact of electrons in a physical transformation that is also visually apparent. Each display screen may be touch-sensitive, allowing a user to enter information onto the display screen via a virtual keyboard. On some computing devices, such as a desktop or smartphone, a physical QWERTY keyboard and scroll wheel may be provided for entering information onto the display screen. Each computing device, and computer programs executed thereon, may also be configured to accept voice commands, and to perform functions in response to such commands. For example, the processes described herein may be initiated at a client, to the extent possible, via voice commands.
  • Elements of different embodiments described herein may be combined to form other embodiments not specifically set forth above. Elements may be left out of the processes, computer programs, Web pages, etc. described herein without adversely affecting their operation. Furthermore, various separate elements may be combined into one or more individual elements to perform the functions described herein.
  • Any features described herein may be combined with features described in U.S. patent application Ser. No. ______, filed concurrently herewith and entitled “Displaying Information Based On Recognition Of A Subject” (Attorney Docket No. 40138-0003001), the contents of which are incorporated herein by reference.
  • Other implementations not specifically described herein are also within the scope of the following claims.

Claims (20)

What is claimed is:
1. A method comprising:
storing, in a data object, data corresponding to electronic communication, the data object having an object element associated therewith, the object element initially residing at a first location on a portal;
recognizing, by a processing device, movement of the object element from the first location to a second location on the portal, the second location being different from the first location;
detecting, by the processing device, that the object element has been moved to the second location;
in response to the detecting, the processing device triggering execution of a function to obtain the data from the data object; and
generating, by the processing device, a display of the electronic communication based on the data, the display being rendered over at least part of the second location.
2. The method of claim 1, wherein the data comprises a serialized representation of the electronic communication, and the function to obtain the data comprises a process for deserializing the data.
3. The method of claim 1, wherein the data comprises one or more pointers to information about the electronic communication, and the function to obtain the data comprises using the one or more pointers to retrieve the information.
4. The method of claim 1, further comprising:
in response to recognizing that the object element moved from the first location, the processing device storing the data object in a temporary object; and
redrawing, by the processing device, a part of the first location that contained the object element to reflect movement of the object element from the first location.
5. The method of claim 1, wherein the display of the electronic communication is generated in response to detecting release of the object element at the second location; and
wherein the method further comprises generating, by the processing device, a shadow display in response to detecting that the object element is over the second location but has not been released over the second location, the shadow display being generated based on at least some of the data.
6. The method of claim 1, further comprising:
recognizing, by the processing device, movement of the object element from the second location to the first location on the portal;
detecting, by the processing device, that the object element has been moved to the first location;
in response to the detecting, the processing device triggering execution of a function to store data for the electronic communication in a data object associated with the object element at the second location; and
generating, by the processing device, a display of the electronic communication based on the data, the display being rendered over at least part of the first location.
7. The method of claim 1, further comprising, prior to detecting that the object element has been moved to the second location:
detecting, by the processing device, that the object element has been released prior to reaching the second location;
sending, by the processing device, the data object to a process associated with the first location; and
executing, by the processing device, the process at the first location to redraw the object element at the first location and to store the data object in the first location in association with the object element.
8. The method of claim 1, wherein the data comprises an encrypted representation of the electronic communication, and the function to obtain the data comprises a process for decrypting the data.
9. The method of claim 1, wherein the electronic communication is a chat session for a user of a multimedia application, the multimedia application for displaying audio-video content associated with the chat session either on the portal or on a second portal.
10. The method of claim 1, wherein the data object stores data representing content other than the electronic communication, the display including the content other than the electronic communication.
11. A non-transitory machine-readable storage medium storing instructions that are executable by one or more processing devices to perform operations comprising:
storing, in a data object, data corresponding to electronic communication, the data object having an object element associated therewith, the object element initially residing at a first location on a portal;
recognizing movement of the object element from the first location to a second location on the portal, the second location being different from the first location;
detecting that the object element has been moved to the second location;
in response to the detecting, triggering execution of a function to obtain the data from the data object; and
generating a display of the electronic communication based on the data, the display being rendered over at least part of the second location.
12. The non-transitory machine-readable storage medium of claim 11, wherein the data comprises a serialized representation of the electronic communication, and the function to obtain the data comprises a process for deserializing the data.
13. The non-transitory machine-readable storage medium of claim 11, wherein the data comprises one or more pointers to information about the electronic communication, and the function to obtain the data comprises using the one or more pointers to retrieve the information.
14. The non-transitory machine-readable storage medium of claim 11, wherein the operations comprise:
in response to recognizing that the object element moved from the first location, storing the data object in a temporary object; and
redrawing a part of the first location that contained the object element to reflect movement of the object element from the first location.
15. The non-transitory machine-readable storage medium of claim 11, wherein the display of the electronic communication is generated in response to detecting release of the object element at the second location; and
wherein the operations comprise generating a shadow display in response to detecting that the object element is over the second location but has not been released over the second location, the shadow display being generated based on at least some of the data.
16. The non-transitory machine-readable storage medium of claim 11, wherein the operations further comprise:
recognizing movement of the object element from the second location to the first location on the portal;
detecting that the object element has been moved to the first location;
in response to the detecting, triggering execution of a function to store data for the electronic communication in a data object associated with the object element at the second location; and
generating a display of the electronic communication based on the data, the display being rendered over at least part of the first location.
17. The non-transitory machine-readable storage medium of claim 11, wherein the operations further comprise, prior to detecting that the object element has been moved to the second location:
detecting that the object element has been released prior to reaching the second location;
sending the data object to a process associated with the first location; and
executing the process at the first location to redraw the object element at the first location and to store the data object in the first location in association with the object element.
18. The non-transitory machine-readable storage medium of claim 11, wherein the data comprises an encrypted representation of the electronic communication, and the function to obtain the data comprises a process for decrypting the data.
19. The non-transitory machine-readable storage medium of claim 11, wherein the electronic communication is a chat session for a user of a multimedia application, the multimedia application for displaying audio-video content associated with the chat session either on the portal or on a second portal.
20. A system comprising:
computer storage storing instructions that are executable; and
a processing device to execute the instructions to perform operations comprising:
storing, in a data object, data corresponding to electronic communication, the data object having an object element associated therewith, the object element initially residing at a first location on a portal;
recognizing movement of the object element from the first location to a second location on the portal, the second location being different from the first location;
detecting that the object element has been moved to the second location;
in response to the detecting, triggering execution of a function to obtain the data from the data object; and
generating a display of the electronic communication based on the data, the display being rendered over at least part of the second location.
US 15/350,784 (priority date 2013-07-29; filed 2016-11-14): Enabling communication and content viewing. Status: Abandoned. Published as US 2017/0164036 A1 (en).

Priority Applications (1)

US 15/350,784 (priority 2013-07-29; filed 2016-11-14): US 2017/0164036 A1, Enabling communication and content viewing

Applications Claiming Priority (4)

US 61/859,428 (filed 2013-07-29)
US 14/339,695 (priority 2013-07-29; filed 2014-07-24): US 9,154,845 B1, Enabling communication and content viewing
US 14/845,073 (filed 2015-09-03)
US 15/350,784 (priority 2013-07-29; filed 2016-11-14): US 2017/0164036 A1, Enabling communication and content viewing

Related Parent Applications (1)

US 14/845,073 (priority 2013-07-29; filed 2015-09-03): Continuation

Publications (1)

US 2017/0164036 A1, published 2017-06-08

Family

ID=54203971

Family Applications (2)

US 14/339,695 (priority 2013-07-29; filed 2014-07-24): US 9,154,845 B1, Enabling communication and content viewing. Status: Expired - Fee Related
US 15/350,784 (priority 2013-07-29; filed 2016-11-14): US 2017/0164036 A1, Enabling communication and content viewing. Status: Abandoned

Family Applications Before (1)

US 14/339,695 (priority 2013-07-29; filed 2014-07-24): US 9,154,845 B1, Enabling communication and content viewing. Status: Expired - Fee Related

Country Status (1)

US: US 9,154,845 B1; US 2017/0164036 A1

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5995096A (en) * 1991-10-23 1999-11-30 Hitachi, Ltd. Conference display control method and apparatus for an electronic conference for displaying either shared or local data and transferring local data
US20100058220A1 (en) * 2006-07-17 2010-03-04 Carpenter Carl E Systems, methods, and computer program products for the creation, monetization, distribution, and consumption of metacontent
US20100185862A1 (en) * 2009-01-20 2010-07-22 International Business Machines Corporation Method and System for Encrypting JavaScript Object Notation (JSON) Messages
USRE44258E1 (en) * 1999-11-04 2013-06-04 Sony Corporation Apparatus and method for manipulating a touch-sensitive display panel
US8930812B2 (en) * 2006-02-17 2015-01-06 Vmware, Inc. System and method for embedding, editing, saving, and restoring objects within a browser window

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6954498B1 (en) 2000-10-24 2005-10-11 Objectvideo, Inc. Interactive video manipulation
WO2003084229A1 (en) 2002-04-02 2003-10-09 Koninklijke Philips Electronics N.V. Method and system for providing complementary information for a video program
US7689556B2 (en) 2005-01-31 2010-03-30 France Telecom Content navigation service
US20070154163A1 (en) 2005-12-29 2007-07-05 United Video Properties, Inc. Systems and methods for creating aggregations of episodes of series programming in order
US8458745B2 (en) 2006-02-17 2013-06-04 The Directv Group, Inc. Amalgamation of user data for geographical trending
US7627831B2 (en) * 2006-05-19 2009-12-01 Fuji Xerox Co., Ltd. Interactive techniques for organizing and retrieving thumbnails and notes on large displays
US20080043144A1 (en) 2006-08-21 2008-02-21 International Business Machines Corporation Multimodal identification and tracking of speakers in video
US20080059986A1 (en) 2006-08-30 2008-03-06 Brian Kalinowski Online video/chat applications
US20080059580A1 (en) 2006-08-30 2008-03-06 Brian Kalinowski Online video/chat system
JP4303311B2 (en) * 2006-10-13 2009-07-29 株式会社コアアプリ Operation support computer program, operation support computer system
JP2008108200A (en) * 2006-10-27 2008-05-08 Canon Inc Information extraction device, method, program and storage medium
US20080250450A1 (en) 2007-04-06 2008-10-09 Adisn, Inc. Systems and methods for targeted advertising
US8275764B2 (en) 2007-08-24 2012-09-25 Google Inc. Recommending media programs based on media program popularity
US8239825B2 (en) * 2007-08-28 2012-08-07 International Business Machines Corporation Dynamic data restructuring method and system
US8170342B2 (en) 2007-11-07 2012-05-01 Microsoft Corporation Image recognition of content
US20090150939A1 (en) 2007-12-05 2009-06-11 Microsoft Corporation Spanning multiple mediums
US20090326953A1 (en) * 2008-06-26 2009-12-31 Meivox, Llc. Method of accessing cultural resources or digital contents, such as text, video, audio and web pages by voice recognition with any type of programmable device without the use of the hands or any physical apparatus.
US8489515B2 (en) 2009-05-08 2013-07-16 Comcast Interactive Media, LLC. Social network based recommendation method and system
US8391673B2 (en) 2009-06-26 2013-03-05 Intel Corporation Method, system, and apparatus to derive content related to a multimedia stream and dynamically combine and display the stream with the related content
WO2011119775A1 (en) 2010-03-23 2011-09-29 Google Inc. Organizing social activity information on a site
US20110289422A1 (en) 2010-05-21 2011-11-24 Live Matrix, Inc. Interactive calendar of scheduled web-based events and temporal indices of the web that associate index elements with metadata
US10210160B2 (en) 2010-09-07 2019-02-19 Opentv, Inc. Collecting data from different sources
JP5235972B2 (en) 2010-11-17 2013-07-10 株式会社ソニー・コンピュータエンタテインメント Information processing apparatus and information processing method
US9251854B2 (en) 2011-02-18 2016-02-02 Google Inc. Facial detection, recognition and bookmarking in videos
US8769576B2 (en) 2011-04-01 2014-07-01 Mixaroo, Inc. System and method for real-time processing, storage, indexing, and delivery of segmented video
US9026476B2 (en) 2011-05-09 2015-05-05 Anurag Bist System and method for personalized media rating and related emotional profile analytics
US20120311032A1 (en) 2011-06-02 2012-12-06 Microsoft Corporation Emotion-based user identification for online experiences
US10223451B2 (en) 2011-06-14 2019-03-05 International Business Machines Corporation Ranking search results based upon content creation trends
US20130074109A1 (en) 2011-09-20 2013-03-21 Sidebar, Inc. Television listing user interface based on trending
US9141278B2 (en) * 2012-01-24 2015-09-22 Blackberry Limited Method and apparatus for operation of a computing device
US20140089423A1 (en) 2012-09-27 2014-03-27 United Video Properties, Inc. Systems and methods for identifying objects displayed in a media asset
US20140222462A1 (en) * 2013-02-07 2014-08-07 Ian Shakil System and Method for Augmenting Healthcare Provider Performance

Also Published As

Publication number Publication date
US9154845B1 (en) 2015-10-06

Similar Documents

Publication Title
US20170164036A1 (en) Enabling communication and content viewing
RU2632144C1 (en) Computer method for creating content recommendation interface
US9854317B1 (en) Enabling video viewer interaction
US9678926B2 (en) Image preview
US9081421B1 (en) User interface for presenting heterogeneous content
US20160110090A1 (en) Gesture-Based Content-Object Zooming
EP2480960B1 (en) Apparatus and method for grid navigation
US9354899B2 (en) Simultaneous display of multiple applications using panels
US20120210275A1 (en) Display device and method of controlling operation thereof
US20130185676A1 (en) Method and mobile device for classified webpage switching
US20140136959A1 (en) Generating Multiple Versions of a Content Item for Multiple Platforms
US20140229834A1 (en) Method of video interaction using poster view
KR102270953B1 (en) Method for display screen in electronic device and the device thereof
US20140165003A1 (en) Touch screen display
CN107209756B (en) Supporting digital ink in markup language documents
US20120278712A1 (en) Multi-input gestures in hierarchical regions
US20140006967A1 (en) Cross-application transfers of user interface objects
US20140237357A1 (en) Two-dimensional document navigation
US20170277364A1 (en) User interface with dynamic refinement of filtered results
US20140344735A1 (en) Methods, apparatuses and computer program products for managing different visual variants of objects via user interfaces
US20180284956A1 (en) Fragmentation and messaging across web applications
TW201229875A (en) Managing an immersive environment
US20160004406A1 (en) Electronic device and method of displaying a screen in the electronic device
US9495064B2 (en) Information processing method and electronic device
US10191618B2 (en) Hand-held electronic apparatus having function of activating application program of electronic apparatus, and method thereof

Legal Events

Date Code Title Description
AS Assignment
Owner name: WEW ENTERTAINMENT CORPORATION, NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ABBOA-OFFEI, KWABENA BENONI;COURTOIS, MAURICE;REEL/FRAME:040409/0416
Effective date: 20140724

STCB Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION