WO2015035230A1 - Adaptive process for content management - Google Patents

Adaptive process for content management

Info

Publication number
WO2015035230A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
content items
user
search
items
Prior art date
Application number
PCT/US2014/054383
Other languages
French (fr)
Inventor
Stephen D. Rosen
Jeff SYMON
George C. Kenney
Jorge Sanchez
Original Assignee
Smart Screen Networks, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Smart Screen Networks, Inc. filed Critical Smart Screen Networks, Inc.
Publication of WO2015035230A1 publication Critical patent/WO2015035230A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0255Targeted advertisements based on user history
    • G06Q30/0256User search

Abstract

Systems and methods for content management. In an embodiment, a search request is received from a user. A first search result is generated comprising a first plurality of content items that have been identified, based on the received search request, from a plurality of content items in database(s). One or more second search results are then generated (e.g., automatically) by, for each of one or more of the first plurality of content items in the first search result, identifying a content group that comprises the content item, wherein the content group comprises a second plurality of content items including the content item. One or more of the first search result and the one or more second search results may be provided to the user in response to the search request. Furthermore, a user may construct customized user channels that can be delivered via third-party content distribution systems.

Description

ADAPTIVE PROCESS FOR CONTENT MANAGEMENT
CROSS-REFERENCE TO RELATED APPLICATIONS
[1] This application claims priority to U.S. Provisional Patent App. No. 61/874,072, filed on September 5, 2013, the entirety of which is hereby incorporated herein by reference.
[2] This application is related to U.S. Patent App. No. 14/323,738 ("the '738 Application"), filed on July 3, 2014, and U.S. Provisional Patent App. No. 61/888,513 ("the '513 Application"), filed on October 9, 2013, the entireties of both of which are hereby incorporated herein by reference.
BACKGROUND
[3] Field of the Invention
[4] This application is related, generally, to the management of content (e.g., video, images, text, audio, and other data) and the processes used to format, organize, and retrieve such content, optimize a user experience, and distribute the content for personal and/or commercial use.
[5] Description of the Related Art
[6] Current digital content services provide users with content from a variety of sources, for example, using Internet cable, wireless communication channels, and user devices. The content from these various sources is diverse and not standardized. Thus, conventionally, users of such content must navigate a maze of options in order to access, retrieve, and display the content on their user devices.
[7] The amount of content has exploded in recent years, and has continued to grow at an exponential rate. Content is generally fixed and inflexible in the sense that the format of the content depends on who created the content and where and for what device(s) the creator chose to store the content.
[8] There are multiple standards for content (e.g., video, images, text, audio, and other data). There are also multiple locations and ways in which the content can be stored, archived, and/or accessed, and there are different levels of quality for the content. For example, a given video can be viewed on a website, but is fixed in quality and can be affected by the bandwidth available on a communications network and/or for the service. Consequently, a movie viewed over an Internet connection may experience a loss in quality, such as dropped frames, pauses (e.g., resulting from buffering), or even complete stops (e.g., resulting from a dropped connection).
[9] With regard to images, in order to be managed, a printed photograph must first be digitized (e.g., scanned) and then stored at some location on a computer-readable medium. In this case, the image of the digitized photograph will generally not be associated with a date or other metadata (e.g., a description or story behind the photograph), resulting in a deficiency in the ability of a user to enjoy or even locate the digital photograph. In addition, once the photograph has been digitized, the image may be uploaded to commercially available Internet services, adjusted for resolution, aspect ratio, device-specific requirements, and the like. In this case, there are few acceptable means to ensure privacy and control of who uses and retrieves the image. For instance, typically, the creator or custodian of the image must dedicate a substantial amount of time to identifying what device(s) and/or device location(s) to use in order to upload and manage the source of the image, so that distribution and quality are properly controlled.
[10] Consequently, a significant amount of effort must currently be expended to store, retrieve, and/or categorize content (e.g., media such as video, images, or other information). Conventional solutions do not offer intelligent assistance, for example, when searching for the source and location of the content. For instance, in many cases, a user must build his or her own content filing organization structure. Furthermore, since much of the content is not tagged with descriptive metadata (e.g., tags, comments, ratings, captions, description, story behind the content, storage location, etc.) that can be used to track the content, searching for a specific content item across storage devices, file directory structures, etc. may be time consuming and result in a negative user experience.
[11] Limitations of the conventional solutions are also apparent in the display of the content. For example, when viewing content on a mobile device (e.g., smart phone), the quality of the content may suffer. If a communication channel and/or the mobile device are deficient, video content may suffer from pauses or other interruptions and/or problems with a display format. Conventional solutions involve limiting the content available to certain format(s) chosen by the operator of the system. However, this excludes potential end users and uses by limiting the choices available to the end user, thereby resulting in a closed system. In addition, conventional solutions do not necessarily enable certain metadata (e.g., descriptive metadata) to be associated with a content item, such that it travels with the content item from location to location, as a user copies, moves, or reformats the content item.
[12] Accordingly, it would be beneficial to have an adaptive system that is able to accept a multiplicity of content items in their respective formats, resolutions, etc., while providing easy management, storage, and/or retrieval of the content and the ability to display the content with the best quality from any source, anywhere, at any time, and on any screen in an open or closed system architecture.
SUMMARY
[13] In an embodiment, a content management system is disclosed. The system comprises: at least one hardware processor; at least one database comprising a plurality of content items; and one or more modules that are configured to, when executed by the at least one hardware processor, receive a search request from a user, generate a first search result comprising a first plurality of content items that have been identified, based on the received search request, from the plurality of content items in the at least one database, generate one or more second search results by, for each of one or more of the first plurality of content items in the first search result, identifying a content group that comprises the content item, wherein the content group comprises a second plurality of content items including the content item, and provide one or more of the first search result and the one or more second search results to the user. In an embodiment, the one or more modules may automatically generate the one or more second search results, following the generation of the first search results, or may generate the one or more second search results in response to a user operation or other interaction.
[14] In another embodiment, a method is disclosed. The method comprises using at least one hardware processor of a content management system, having a plurality of content items stored therein, to: receive a search request from a user; generate a first search result comprising a first plurality of content items that have been identified, based on the received search request, from the stored plurality of content items; generate one or more second search results by, for each of one or more of the first plurality of content items in the first search result, identifying a content group that comprises the content item, wherein the content group comprises a second plurality of content items including the content item; and provide one or more of the first search result and the one or more second search results to the user.
[15] In another embodiment, a non-transitory computer-readable medium having instructions stored thereon is disclosed. The instructions, when executed by a processor, cause the processor to: receive a search request from a user; generate a first search result comprising a first plurality of content items that have been identified, based on the received search request, from the stored plurality of content items; generate one or more second search results by, for each of one or more of the first plurality of content items in the first search result, identifying a content group that comprises the content item, wherein the content group comprises a second plurality of content items including the content item; and provide one or more of the first search result and the one or more second search results to the user.
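By way of a non-limiting illustration only, the two-stage search recited above can be sketched as follows. The in-memory data model, the item identifiers, and the function names are assumptions introduced here for clarity; they are not part of the disclosed system.

```python
# Illustrative sketch of the two-stage search: a first search result is
# generated from the query, then second search results are generated by
# finding, for each matched item, the content groups that contain it.
# The data model below is a hypothetical assumption.

# Each content item maps to its descriptive metadata (tags).
CONTENT_DB = {
    "img1": {"tags": ["beach", "2013", "family"]},
    "img2": {"tags": ["beach", "sunset"]},
    "vid1": {"tags": ["birthday", "family"]},
}

# Content groups: named collections of related content items.
GROUPS = {
    "vacation-2013": ["img1", "img2"],
    "family-events": ["img1", "vid1"],
}

def first_search(query):
    """First search result: items whose tags match the query."""
    return [item for item, meta in CONTENT_DB.items() if query in meta["tags"]]

def second_searches(first_result):
    """Second search results: for each item found, every content group
    containing that item contributes its full member list."""
    results = {}
    for item in first_result:
        for group, members in GROUPS.items():
            if item in members:
                results.setdefault(group, members)
    return results

first = first_search("beach")
second = second_searches(first)
```

Under this model, a search for "beach" surfaces not only the directly matching items but also the other members of every group that contains a match, which is one way the "one or more second search results" could broaden a query with minimal user effort.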
BRIEF DESCRIPTION OF THE DRAWINGS
[16] The details of the present invention, both as to its structure and operation, may be gleaned in part by study of the accompanying drawings, in which like reference numerals refer to like parts, and in which:
[17] FIG. 1 illustrates a high-level architecture diagram of an Adaptive Information Processor (AIP) software engine, according to an embodiment;
[18] FIG. 2 illustrates a high-level architecture for a process of receiving and transforming content, according to an embodiment;
[19] FIG. 3 illustrates a high-level architecture for user features and revenue generation, according to embodiments;
[20] FIG. 4 illustrates a high-level architecture for system resources, according to an embodiment;
[21] FIG. 5 illustrates a high-level architecture for a Relational Content Management System (RCMS), according to an embodiment;
[22] FIG. 6 illustrates a high-level implementation of a possible user administration arrangement and the display of content in a hierarchical structure, according to an embodiment;
[23] FIG. 7 illustrates the results of a search by an RCMS for an example arrangement of content in a relational and intuitive structure, according to an embodiment;
[24] FIG. 8 illustrates a high-level architecture for a personal information delivery program suite, according to an embodiment;
[25] FIG. 9 illustrates a high-level architecture for graphical user interfaces configured to manage features and operations of an AIP, according to an embodiment;
[26] FIG. 10 illustrates a high-level architecture for user embedded applications that may be used to prepare and send content to a user device for display and/or be embedded or installed in a user device, according to an embodiment;
[27] FIG. 11 illustrates an example arrangement of example content, according to an embodiment;
[28] FIG. 12 illustrates an example user interface for managing a content group, according to an embodiment;
[29] FIGS. 13A-13E illustrate example user interfaces for managing and viewing content, according to an embodiment;
[30] FIGS. 14A-14D illustrate example user interfaces for a search performed by a user, according to an embodiment;
[31] FIG. 15 illustrates a process for content management and delivery, according to an embodiment;
[32] FIG. 16 illustrates an environment in which an AIP may operate, according to an embodiment;
[33] FIG. 17 illustrates an architecture of an AIP, according to an embodiment; and
[34] FIG. 18 illustrates a processing system on which one or more of the processes described herein may be executed, according to an embodiment.
DETAILED DESCRIPTION
[35] In an embodiment, systems, methods, and non-transitory computer-readable media are disclosed for an adaptive process for the transformation, organization, management, location, retrieval, and/or display of user content, and/or the provision of such services for commercial enterprises. It should be understood that the adaptive process may be implemented as a software engine, as hardware, or as both hardware and software.
[36] After reading this description, it will become apparent to one skilled in the art how to implement the invention in various alternative embodiments and alternative applications. However, although various embodiments of the present invention will be described herein, it is understood that these embodiments are presented by way of example and illustration only, and not limitation. As such, this detailed description of various embodiments should not be construed to limit the scope or breadth of the present invention as set forth in the appended claims.
[37] In an embodiment, an Adaptive Information Processor (AIP) is configured to receive, format, organize, store, and deliver content (e.g., video, audio, images, text files, documents, voice data, and/or other data) via multiple wireless and/or wired platforms to multiple display devices anywhere (e.g., on a global basis or on a national or other regional basis) and at any time. In an embodiment, the AIP is implemented as a software system with an open architecture. Such an embodiment provides a means to plug in multiple applications with a variety of features from different sources or suppliers. Alternatively, the AIP can be implemented as a software system with a closed architecture and deliver the disclosed capabilities for a specialized application or for a private user or enterprise.
[38] In an embodiment, the AIP comprises an adaptive software engine that is agnostic to the type, resolution, format, source, and/or storage of content, agnostic to the location and/or type of user device displaying the content, and agnostic to the speed with which the content can be delivered. The AIP may communicate with the source of content and format the content in a way that the content can be delivered at an optimized level of quality, speed, and/or resolution to multiple user devices for display, across multiple platforms (e.g., mobile phones, tablets, vehicle displays, personal computers, televisions, movie theater projectors, non-display devices such as disk storage, and/or any other present or future mobile, broadband, wired, and/or wireless devices). The sources of content may be located on the Internet (e.g., the World Wide Web) or held in private storage. Communication to and from the AIP (e.g., to retrieve, process, and deliver content from various sources) may be performed via one or more networks, including the Internet, and the AIP may support any data transport protocol or speed used by wired and/or wireless networks, private networks, broadband services, etc.
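As a hedged illustration of the delivery optimization described above, the following sketch selects a rendition from an assumed ladder based on the target device's display width and the available bandwidth. The rendition labels, thresholds, and function name are illustrative assumptions, not details disclosed by the embodiment.

```python
# Hypothetical rendition ladder: (label, min bandwidth in kbps,
# display width in px the rendition is intended for).
RENDITIONS = [
    ("2160p", 16000, 3840),
    ("1080p", 5000, 1920),
    ("720p", 2500, 1280),
    ("480p", 1000, 854),
    ("240p", 0, 426),
]

def select_rendition(device_width_px, bandwidth_kbps):
    """Pick the highest rendition both the device and the link can handle."""
    for label, min_bw, width in RENDITIONS:
        if bandwidth_kbps >= min_bw and device_width_px >= width:
            return label
    return "240p"  # fall back to the lowest rendition
```

For example, a 1920-px-wide tablet on a 6 Mbps link would receive the 1080p rendition, while the same content could be delivered at 240p to a constrained device without excluding it.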
[39] In an embodiment, the AIP uses one or more toolsets to organize content so that it is customized for a particular user's preferences and categorized for ease of retrieval, display, and review. The AIP may display the content and/or metadata associated with the content in a relational and/or hierarchical manner to facilitate ease of use, using a top-down, bottom-up, or lateral approach across the topology of an AIP database.
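The top-down traversal of a content topology mentioned above might look like the following sketch; the hierarchy, node names, and indentation scheme are assumptions made for illustration.

```python
# Hypothetical content topology: each node maps to its child categories.
HIERARCHY = {
    "All Content": ["Photos", "Videos"],
    "Photos": ["Vacations", "Family"],
    "Videos": [],
    "Vacations": [],
    "Family": [],
}

def top_down(node, depth=0, out=None):
    """Depth-first, top-down listing of the content hierarchy,
    indented to show the relational structure."""
    out = out if out is not None else []
    out.append("  " * depth + node)
    for child in HIERARCHY.get(node, []):
        top_down(child, depth + 1, out)
    return out
```

A bottom-up or lateral traversal would walk the same topology from a leaf toward the root, or across siblings, respectively.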
[40] In an embodiment, the AIP allows a user to tag or append content with dates, comments, and other metadata (e.g., the story behind the content). The AIP may include or interface with pluggable software toolsets that enable automated smart tagging of content, using features such as closed captioning files, facial recognition, geo-location, time, date, and/or speech-to-text conversion that converts a user's voice (e.g., narration, spoken description of a story or other details regarding content, etc.) into text-based metadata. One or more of these features may be automatically executed and applied at the time that content is created (e.g., at the time that an image is taken or a video is recorded, etc.). Additionally or alternatively, a user, such as a system administrator, the creator, or another individual, may manually tag content with metadata at any time before, during, or after content has been created. Embodiments of systems and processes for automatically, manually, and semi-automatically tagging content with metadata are disclosed extensively in the '738 Application and the '513 Application, which have been incorporated herein by reference. For example, these applications disclose mechanisms for a user to set (e.g., through a calendar application or in-app tool) a scheduled event with associated metadata, prior to the event, such that content captured during the event is automatically or semi-automatically tagged with the metadata that was pre-associated with the event.
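The event-based tagging mechanism described above can be sketched as follows: metadata pre-associated with a scheduled event is merged into the metadata of any content item whose capture time falls inside the event window. The event table, times, and names are illustrative assumptions.

```python
from datetime import datetime

# Hypothetical pre-scheduled event with pre-associated metadata.
EVENTS = [
    {
        "start": datetime(2013, 9, 5, 18, 0),
        "end": datetime(2013, 9, 5, 22, 0),
        "metadata": {"event": "company picnic", "location": "Central Park"},
    },
]

def auto_tag(capture_time, existing_metadata=None):
    """Merge event metadata into a content item's metadata when the
    capture time falls within a scheduled event's window."""
    metadata = dict(existing_metadata or {})
    for event in EVENTS:
        if event["start"] <= capture_time <= event["end"]:
            metadata.update(event["metadata"])
    return metadata
```

Content captured outside any event window simply keeps its existing metadata, which corresponds to the semi-automatic case where a user may still tag manually.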
[41] Notably, content, including Internet-based content, has become increasingly subject to government regulations that, for example, require closed captioning. Closed-captioning information is associated with dialogue in a content item. In an embodiment, the AIP is able to utilize closed-captioning information associated with content to generate metadata to be associated with the content. This generated metadata may comprise text, audio, video, and the like, or any combination of these types of data.
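One possible, simplified form of caption-derived metadata generation is sketched below for SRT-style caption data: cue numbers and timestamps are stripped, and the remaining dialogue becomes searchable text metadata. The sample captions and the minimal parser are assumptions, not a full SRT implementation.

```python
# Hypothetical SRT-style closed-caption data for a content item.
SRT_SAMPLE = """1
00:00:01,000 --> 00:00:04,000
Welcome to the annual science fair.

2
00:00:05,000 --> 00:00:08,000
Today we look at solar energy.
"""

def captions_to_metadata(srt_text):
    """Strip cue numbers and timestamps; keep dialogue as text metadata."""
    dialogue = []
    for line in srt_text.splitlines():
        line = line.strip()
        if not line or line.isdigit() or "-->" in line:
            continue
        dialogue.append(line)
    return {"caption_text": " ".join(dialogue)}
```

The resulting `caption_text` field could then feed the RCMS search strategy, so that a query such as "solar energy" locates the video through its dialogue.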
[42] In an embodiment, the AIP may comprise, be comprised in, be interfaced with, or otherwise operate in conjunction with a Relational Content Management System (RCMS). The RCMS may utilize the metadata and/or advanced toolsets of the AIP as part of a search strategy to facilitate the identification of specific content with minimal user effort. In addition, the RCMS may utilize an object or facial recognition algorithm in combination with one or more local and/or remote databases (e.g., storing information about known objects or faces) to perform an associative search.
[43] In an embodiment, the AIP provides analytics for processing global user data. The AIP may also contain mechanisms for modularity that allow for system expansion with future toolsets and applications, and may contain functionality to license one or more capabilities to customers. The AIP may be configured to interface with third-party applications, which may allow the AIP to receive content, metadata, organizations of content, location services, analysis and analytics for content, etc., for example, as a periodic or continuous incoming data stream.
[44] In an embodiment, the AIP enables users to organize, retrieve, share, personalize, format, and/or analyze their content, as well as participate in interactive polling, including the use of device-specific GPS location services.
[45] In an embodiment, the AIP may enable users, such as businesses or companies, to monetize the delivery of content. The AIP may comprise modules or other mechanisms to preserve and protect privacy, establish administrative and security controls, manage access to the AIP, provide content access controls to end users, and store and manage both active and archived content in cloud servers or in local or other suitable storage devices.
[46] In an embodiment, the AIP may ensure that descriptive metadata that is associated with a content item remains with the content item as it travels from location to location and/or state to state, as a user copies, moves, and/or reformats the content item. The descriptive metadata may be added to standardized metadata files, such that the metadata is viewable and modifiable via third-party software, rather than any single proprietary software. For example, for images in Exchangeable Image File Format (Exif), the disclosed processes update the metadata in the Exif file, such that it becomes part of the content item (i.e., the image in this case) regardless of the location and third-party software used to view or modify the metadata.
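The principle above (metadata embedded in the content item itself, so copies carry it along) can be modeled with the minimal sketch below. In practice an Exif-capable library would write the actual Exif fields; the `ContentItem` class here is purely an illustrative assumption.

```python
class ContentItem:
    """Toy model of a content item whose descriptive metadata is
    embedded in the item itself, like an Exif block in a JPEG."""

    def __init__(self, data, metadata=None):
        self.data = data
        self.metadata = dict(metadata or {})

    def add_description(self, text):
        # Analogous to writing an Exif ImageDescription field.
        self.metadata["description"] = text

    def copy_to(self, new_location):
        """Copying the item carries the embedded metadata with it."""
        copied = ContentItem(self.data, self.metadata)
        copied.location = new_location
        return copied

photo = ContentItem(b"...jpeg bytes...", {"date": "2013-09-05"})
photo.add_description("Grandma's 80th birthday")
backup = photo.copy_to("/cloud/archive/photo.jpg")
```

Because the description lives inside the item rather than in a separate proprietary index, any third-party viewer of the copy can read and modify it.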
[47] The AIP may reside in one or more computers, servers, distributed multiprocessor systems, etc., and may be physically located at a particular facility (e.g., public or private data center) or in a cloud computing environment (e.g., across multiple public or private data centers). The AIP may provide a customizable graphical user interface (GUI) with toolsets configured to customize the operation of the AIP. Thus, a user may benefit by having content available anywhere (i.e., regardless of location and/or device), at any time, and regardless of the source of the content.
[48] 1. Adaptive Information Processor
[49] FIG. 1 illustrates an AIP 100, according to an embodiment. AIP 100 comprises one or more modules. While these modules will primarily be described herein as software modules, it should be understood that the modules of AIP 100 may be implemented as software, hardware, or a combination of both software and hardware.
[50] In the illustrated example, AIP 100 comprises an application executive 110 that may comprise, be comprised in, or interface with an operating system 111 (e.g., as an application executing within an environment defined by operating system 111). AIP 100 may also comprise modules interfaced with or otherwise in communication with application executive 110, such as source content interfaces 101, application expansion port 102, AIP controller and GUI interfaces 103, system resources 104, personal information delivery program suite 105, enterprise information delivery program suite 106, user features and revenue generation 107, user embedded applications 108, and RCMS 109. Together, these modules 101-109 may embody feature components for application executive 110.
[51] Application executive 110 may recall one or more of these modules with the assistance and facilities provided by operating system 111. Modules 101-109 may operate via software connections to application executive 110, according to special arrangements that govern their operations. Operating system 111 may be any commercially-available operating system (Microsoft Windows™, Apple OS™, Linux™, etc.), such as those used for conventional personal computers, servers, distributed computing systems, etc. It should be understood that AIP 100 may operate on a stand-alone system (e.g., server, desktop computer, etc.) or in a multiprocessor and/or distributed system (e.g., server farm, in the cloud, etc.).
[52] In an embodiment, AIP 100 may implement or utilize one or more application programming interfaces (APIs) and/or dynamic link libraries (DLLs) to execute code, for example, by making a call to a function of an API or DLL. More generally, AIP 100 may have a modular architecture, such that a multiplicity of functional modules may be installed in or interfaced with AIP 100.
[53] Application executive 110 functions as a director of operations for these modules, including software modules 101-109 and/or any other modules which may be interfaced with application executive 110. In an embodiment, one or more of modules 101-109 are implemented as executable programs that run concurrently with application executive 110 in a multi-tasked or multi-threaded operation.
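The concurrent arrangement described above might be sketched as follows: an application executive starts one thread per module task and waits for all of them to finish. The module names and work items are illustrative assumptions.

```python
import threading
import queue

results = queue.Queue()  # thread-safe collection of module outputs

def module_worker(name, work_item):
    """Stand-in for a module (e.g., a formatter or analytics module)."""
    results.put((name, f"processed {work_item}"))

def application_executive(work_items):
    """Dispatch one thread per module task, then join them all."""
    threads = [
        threading.Thread(target=module_worker, args=(f"module-{i}", item))
        for i, item in enumerate(work_items)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results.queue)

out = application_executive(["video.mov", "photo.jpg"])
```

In a distributed deployment, the same dispatch pattern could hand work to separate processes or machines rather than threads.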
[54] In an embodiment, source content formatter or interface module 101 is responsible for accepting content or other data from an available source. The content may be formatted according to any of a plurality of available standards, such as QuickTime File Format (QTFF), QuickTime Movie (MOV), Moving Picture Experts Group (MPEG)-1 or -2 Audio Layer III (MP3), MPEG-4, Audio Video Interleaved (AVI), Joint Photographic Experts Group (JPG or JPEG), Portable Network Graphics (PNG), American Standard Code for Information Interchange (ASCII) text, etc. In one or more cases, the received content may be transformed or converted into an alternative or standard format, so that it can be used downstream. In an embodiment, the content initially resides in any commercially-available content storage facility or social media web service location. AIP 100 can then pull the content (e.g., in response to a user operation or interaction, or automatically) in order to be aggregated for a user, modified, stored, formatted, and/or displayed by a user, and/or otherwise be used for or by a user.
[55] In an embodiment, application expansion port module 102 is connected to application executive 110 through an API for future expansion of system capabilities. In other words, application expansion port 102 may provide an API comprising functions that may be implemented by and used to interface with future toolsets, plug-ins, and/or other software modules.
[56] In an embodiment, AIP controller and GUI interfaces module 103 provide an interface for setting operational modes of AIP 100 and one or more graphical user interfaces for interacting with AIP 100.
[57] In an embodiment, system resources module 104 manages interface(s) to utilities of the computer hardware and operating system 111. For example, system resources 104 may implement digital system processing, mathematical routines, parallel processing for image transformation, and/or information storage facilities. System resources module 104 can speed up operations. For example, application executive 110 can delegate some time-consuming resource-related functions to module 104, which can operate in parallel with application executive 110 to perform functions in a multi-tasked or multi-threaded fashion. In an embodiment, system resources module 104 can be executed by a separate device (e.g., separate computer or processor) in a distributed or multiprocessor environment.
[58] In an embodiment, personal information delivery program suite 105 prepares and delivers content and/or information about the content (e.g., content metadata) so that it can be effectively displayed on the specific user device(s) utilized by a user. Module 105 may implement features that deliver content and/or information about the content to the user in the appropriate format, at the appropriate resolution, and at the appropriate speed to optimize the user's experience.
[59] In an embodiment, enterprise information delivery program suite 106 prepares and delivers content and/or information about the content to enterprise users. Enterprise users may include television broadcasters, motion picture studios, and/or other enterprise content creators.
[60] In an embodiment, user features and revenue generation module 107 implements features that manage electronic commerce ("e-commerce"), advertising, social media, and/or features designed to address users with a specific interest.
[61] In an embodiment, user embedded applications 108 refers to a set of applications loaded in a specific user device for a user. The user embedded applications may be designed to maximize the user's experience.
[62] 2. Source Content Interfaces
[63] FIG. 2 illustrates components of source content interfaces 101, according to an embodiment. Content can be provided in a variety of formats available in the industry, such as JPEG or bitmap for images, MOV for video, ASCII for text, etc. Source content interfaces 101 may comprise one or more components to transform received content from one format and/or resolution to another format and/or resolution that is desired and/or required by the user of the content and/or the user device receiving the content. For instance, source content interfaces 101 may comprise a commercial source formatter 201, which converts content received from a commercial source, and a personal source content formatter 202, which converts content received from a non-commercial source. Noncommercial or "personal" sources may include pictures, scanned images, text information, camera output, etc.
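The routing between the commercial source formatter (201) and the personal source content formatter (202) described above could be sketched as follows; the source labels and formatter behavior are assumptions for illustration only.

```python
# Hypothetical set of source labels treated as commercial sources.
COMMERCIAL_SOURCES = {"studio-feed", "broadcast-feed"}

def commercial_formatter(filename):
    """Stand-in for formatter 201 (commercial source content)."""
    return f"commercial-formatted:{filename}"

def personal_formatter(filename):
    """Stand-in for formatter 202 (personal source content)."""
    return f"personal-formatted:{filename}"

def select_formatter(filename, source):
    """Route received content to the appropriate source formatter."""
    if source in COMMERCIAL_SOURCES:
        return commercial_formatter(filename)
    return personal_formatter(filename)
```

A fuller implementation would also dispatch on the content's format (e.g., MOV vs. JPEG) before converting it to the downstream standard.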
[64] In an embodiment, source content interfaces 101 comprise wireless service provider interfaces 203. Wireless service provider interfaces 203 may comprise one or more communication interfaces to one or more wireless service providers to enable access to AIP 100 via a wireless communication network (e.g., when an Internet or Wi-Fi connection is not available to an accessing user device).
[65] In an embodiment, source content interfaces 101 comprise third-party enterprise content interfaces 204. Third-party enterprise content interfaces 204 may comprise one or more communication interfaces to one or more third-party content sources, such that AIP 100 may communicate with the one or more third-party content sources. Third-party content sources include any source that offers content services, such as Internet video publishing, social media sharing, Internet-based data archival services, music and/or art content delivery services, online photo-sharing and/or video-sharing, social networking sites, etc.
[66] In an embodiment, AIP 100 may comprise a source content formatting program selector module 205, which determines which interfaces (e.g., interfaces 203 or 204) and/or formatters (e.g., formatter 201 or 202) to use for received source content. Thus, source content interfaces 101 may act as an entry point for content received by AIP 100. Source content interfaces 101 may determine which interfaces are used to receive or retrieve source content, receive the content via the appropriate interface, and/or perform any appropriate transformations or conversions on the received source content.
[67] 3. User Features and Revenue Generation Module
[68] FIG. 3 illustrates components of user features and revenue generation module 107, according to an embodiment. Module 107 implements functions for conducting commerce activities, such as billing support, business-to-business transactions, and/or purchasing transactions. In an embodiment, module 107 comprises one or more modules, such as e-commerce 301, relational advertising 302, analytics 303, voting statistics 304, social sharing 305, generic application templates 306, content renting 307, content editor 308, object and facial recognition 309, priority and contention 310, security and privacy 311, vertical market applications 1 through N (312-313), and/or in-app purchases 314.
[69] In an embodiment, e-commerce module 301 implements functions for executing commercial business-to-business and/or business-to-consumer transactions.
[70] In an embodiment, relational advertising module 302 categorizes content that is specific to each user, and/or, based on these categorizations, carries out statistical analysis about the preferences of users and the users' activities to create a user profile for each user. Based on each user's profile, module 302 may provide appropriate advertisements for products and services to each user.
[71] In an embodiment, analytics module 303 collects data related to content (e.g., how many times the content has been viewed, how the content is used, demographics of consumers of the content, etc.), and/or, based on the collected data, generates and reports information to a provider of the content. Such information may be used by the content provider to optimize the marketing of the provider's products and services.
[72] In an embodiment, voting statistics module 304 collects voting or feedback data related to content (e.g., feedback on entertainment programs, product features, opinions, etc.), and/or, based on the collected data, generates and reports information. For instance, voting statistics module 304 may collect poll voting for political activities.
[73] In an embodiment, social-sharing module 305 implements features used to manage the sharing by users of personal information with others. For example, a user may want to restrict access to his or her content (e.g., photographs, videos, etc.) to family members or certain friends. In other instances or for other content, the user may want to share the content with coworkers. Social-sharing module 305 may also request AIP 100 (e.g., via application executive 110) to implement certain permissions and modes of access control for a user's content or for overlapping or non-overlapping subsets of the user's content.
[74] In an embodiment, generic application templates module 306 is a framework that facilitates the generation by users of customized applications or functionality for AIP 100. For example, module 306 may provide one or more application templates and one or more graphical user interfaces for receiving user-specified variables. An application template can be combined with the user-specified variables (e.g., received through a graphical user interface) to generate a user-customized application.
[75] In an embodiment, content renting module 307 implements commerce processes such as pay-per-view, subscriptions, and electronic library content.
[76] In an embodiment, content editor module 308 comprises tools for stitching content together. For example, content editor module 308 may provide graphical user interfaces which enable a user to connect or otherwise associate different content, such as images, videos or video portions, audio, etc. into a new production of the content. The graphical user interfaces of content editor module 308 may enable a user to edit the content beyond or instead of just stitching it together, such as cropping images, balancing light intensity, enhancing colors, performing other image processing, etc.
[77] In an embodiment, object and facial recognition module 309 identifies a face and/or other object in an image. Module 309 may operate in conjunction with source content interfaces 101 to automatically tag content, for example, as it is received or in response to a user operation. For instance, module 309 may analyze content received via source content interfaces 101 to compare features in or characteristics of the content with reference features and/or characteristics in a reference database of known faces and/or objects. If module 309 matches a feature or characteristic with a reference feature or characteristic in the reference database, it may then associate the content with metadata associated with the reference feature or characteristic, such as a name for the recognized face (e.g., "John Smith") or object (e.g., "Mount Rushmore").
[78] Object and facial recognition module 309 may be configured to receive and preload the reference database (e.g., via one or more interfaces, including graphical user interfaces or web services) with a set of features (e.g., representations of known faces of persons or representations of other known objects). For example, if the reference database is specific to a user (e.g., consists of representations of faces and objects preloaded by the user), the user may preload the reference database by uploading content and/or tagging content, such that faces and/or objects in the content may be identified and converted into representations to be stored in association with metadata (e.g., which may be input by the user). The reference database may also be a global database that comprises representations which module 309 may use to tag content from a plurality of users. Alternatively, the reference database may comprise both a global database and a user-specific database, such that both the global database and a particular user's specific database are used to tag content for that particular user. In any case, the reference database may be stored in RCMS 109, and may be accessed by module 309, automatically and/or in response to a user operation, in order to identify faces and/or objects and tag content with metadata associated with any identified faces and/or objects. Alternatively or additionally, module 309 may reference databases other than those stored in RCMS 109, such as databases that are external to AIP 100 (e.g., law enforcement databases, social networking databases, etc.).
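The tag-on-match flow described for module 309 can be sketched as follows. This is a minimal illustration, not the patented implementation: the feature vectors, the cosine-similarity measure, the threshold, and all identifiers are assumptions introduced here.

```python
# Hypothetical sketch: compare a content item's extracted features against a
# reference database and, on a match, copy the reference metadata onto the item.

def cosine_similarity(a, b):
    """Similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def tag_content(item, reference_db, threshold=0.9):
    """Attach metadata from every reference feature that matches the item."""
    for ref in reference_db:
        if cosine_similarity(item["features"], ref["features"]) >= threshold:
            item.setdefault("metadata", []).append(ref["metadata"])
    return item

photo = {"id": "img-001", "features": [0.9, 0.1, 0.2]}
reference_db = [
    {"features": [0.91, 0.12, 0.19], "metadata": {"face": "John Smith"}},
    {"features": [0.0, 1.0, 0.0], "metadata": {"object": "Mount Rushmore"}},
]
tagged = tag_content(photo, reference_db)
print(tagged["metadata"])  # [{'face': 'John Smith'}]
```

A production recognizer would of course extract features with a trained model; the point here is only the match-then-tag association between reference metadata and content.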
[79] In an embodiment, priority and contention module 310 acts as a traffic controller that manages access to AIP 100, such that administrators of the system and end user clients are served in an efficient manner. For example, module 310 may provide load balancing for stored content, prioritize content or resource requests based on one or more criteria, resolve contention for the same content or resources, and/or the like.
[80] In an embodiment, security and privacy module 311 controls access to content, metadata, and/or other data, for example, stored in RCMS 109. Module 311 may manage permissions for controlling access to individual content items and/or groups of content items. In an embodiment, users of content may be grouped into overlapping or non-overlapping access groups. Each access group may be associated with a particular content item and/or particular group(s) of content items (e.g., content groups), such that all the users in a particular access group have access to the associated content item or group(s) of content items. For example, a user could set different access groups for his or her coworkers, family members, friends, professional relationships, etc. In addition, the user could specify particular content items or content group(s) to be associated with each access group. Thus, an access group of the user's coworkers may have access to a different subset of content items than an access group of the user's family members. Furthermore, module 311 may be configured to allow a user to set different permissions for each access group (e.g., viewing permission, editing permission, commenting permission, sharing permission, etc.).
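The access-group model just described can be sketched with a simple data structure. All group names, member names, and permission labels below are illustrative assumptions.

```python
# Minimal sketch of module 311's access groups: each group names its members,
# its associated content, and the permissions it grants.

access_groups = {
    "family": {
        "members": {"alice", "bob"},
        "content": {"vacation-photos", "home-movies"},
        "permissions": {"view", "comment", "share"},
    },
    "coworkers": {
        "members": {"carol"},
        "content": {"project-slides"},
        "permissions": {"view"},
    },
}

def is_allowed(user, content_item, permission):
    """True if any access group grants `user` the `permission` on the item."""
    return any(
        user in g["members"]
        and content_item in g["content"]
        and permission in g["permissions"]
        for g in access_groups.values()
    )

print(is_allowed("alice", "home-movies", "share"))    # True
print(is_allowed("carol", "project-slides", "edit"))  # False
```

Because groups may overlap, a user is permitted an operation if at least one of his or her groups grants it, which matches the per-group permission model described above.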
[81] In an embodiment, vertical market application 1 312 through vertical market application N 313 comprise a set of features that address specific needs and features for group(s) of user(s). For example, one vertical market application may be related to sports. In such an example, AIP 100 could be used to manage content and/or metadata related to a sports event, a given sports activity carried out by the user at a given date and/or time, a one-time special event, a given video or picture of specific importance to the user, etc. The vertical market application could implement processes that are specific to managing this type of content and/or metadata. As a further example, another vertical market application may implement a process for organizing information for a school system in a given locality. Yet another vertical market application could implement a process for disseminating content and/or metadata (e.g., videos, text, events, etc.) to groups of users associated with a religious activity.
[82] In an embodiment, in-app purchases module 314 provides mechanisms that enable a user to purchase products, content, and/or services while connected with AIP 100.
[83] 4. System Resources Module
[84] FIG. 4 illustrates components of system resources module 104, according to an embodiment. In an embodiment, system resources module 104 comprises a cold storage module 401, random access memory module 402, disk module 403, digital signal processing (DSP) utilities module 404, and/or shared variables module 405. System resources module 104 may be managed by operating system 111.
[85] In an embodiment, cold storage module 401 is a form of archive implemented with semi-permanent media. Access to the cold storage is not necessarily readily available to a user. The advantage of this type of storage is that it can hold vast amounts of information at very low cost. When data is requested from cold storage module 401, there may be a significant time delay until the data is received. This is because, in cold storage, the data is often stored in disk drives that have to be physically moved from a rack to a server in order to search and retrieve the data.
[86] In an embodiment, random access memory module 402 comprises or has access to random access memory (RAM) which is useful for fast retrieval of data. Disk module 403 may be used for medium-speed retrieval.
[87] In an embodiment, DSP utilities module 404 comprises utilities for digital signal processing, mathematical processing used for complex transformations, etc.
[88] In an embodiment, shared variables module 405 is used by one or more modules of AIP 100 to store values for variables that may be shared between the module(s).
[89] 5. Relational Content Management System
[90] FIG. 5 illustrates components of relational content management system
(RCMS) 109, according to an embodiment. RCMS 109 comprises a search engine and one or more databases, such that data (e.g., content and/or metadata) in the system is accessible via one or more parameters and/or data associations. Whereas conventional search engines perform an exhaustive search for keyword matches and often return unwanted information, RCMS 109 returns information based on related parameters (e.g., metadata) with more complex relationships between them.
[91] In an embodiment, RCMS 109 organizes data for presentation to users, such as in a hierarchical manner.
[92] In an embodiment, RCMS 109 comprises a relational content management system engine (RCMSE) 501 and a data content structure 502 which defines the structure for the stored content.
[93] FIG. 6 illustrates an example hierarchical access structure 650 for data stored in
RCMS 109, according to an embodiment. It should be understood that other types of presentations or arrangements are also possible.
[94] Block 652 (i.e., "School District") represents the highest level of access to data in the database. Such access may be limited to users at the highest levels of authority, such as a school superintendent. Such users have access to data represented by Block 652 as well as any data represented by blocks below Block 652 (i.e., Blocks 662-694).
[95] Blocks 662A-662C (i.e., "School 1" through "School N") represent a lower level of access. Such access may be limited to users having authority within the respective school, such as a school principal, as well as any users having a higher level of access (e.g., those with access at Block 652). Notably, a user only having access to data represented by Block 662A (i.e., not having an overarching level of access to data represented by Block 652) would not have access to data represented by any higher-level block (i.e., Block 652) or same-level block (e.g., Block 662B). However, such a user would have access to data represented by any underlying blocks, i.e., Blocks 672, 682, and 692.
[96] Continuing the example, School 1, whose data is represented by Block 662A, has a number of different grade levels, such as Kindergarten 672A, First Grade 672B, and so on, up to Grade M 672C. Again, it should be understood that a user only having access to data represented by Block 672B would have access to data represented by any underlying blocks (i.e., Blocks 682 and 692), but not any higher-level blocks (e.g., Block 662A) or same-level blocks (e.g., Block 672C). The data for each of the grades, Kindergarten through Grade M, may be maintained in its own private location and not include data from any of the other grades.
[97] As shown in the example, First Grade for School 1 has a number of teachers, i.e., Teacher 1 682A through Teacher K 682B. A user having a level of access represented by Block 682A may be Teacher 1. In this case, Teacher 1 would have access to data represented by Block 682A, but not any higher-level block (e.g., Block 672B) or same-level block (e.g., Block 682B). Thus, for example, Teacher 1 would have access to his or her own data (e.g., classroom data), but not the data of Teacher K. However, any user with an overlying level of access (e.g., represented by Blocks 672B, 662A, and 652) would have access to the data of Teacher 1.
[98] In the example, Teacher K has responsibility for various content files, arranged as Content Group 1 692A through Content Group L 692B, as well as an electronic learning ("e-learning") module 693. Accordingly, these content groups 692A-692B and e-learning module 693 are arranged under Block 682B, which represents the data access level of Teacher K. Thus, Teacher K has access to each of Content Group 1 692A through Content Group L 692B and e-learning module 693.
[99] In an embodiment, Teacher K is able to organize, load, and access his or her own files for different content groups. As an example, Teacher K has ownership of Content Group 1 692A. Content Group 1 692A may comprise, for example, data including or related to the names of students in the first grade classroom of Teacher K. Content Group L 692B may comprise different data. For example, Content Group L 692B may comprise data including or related to school events that Teacher K is in charge of or must organize. It should be understood that these are simply examples, and that the content groups may comprise more, less, or different data (e.g., data related to e-learning module 693).
[100] In the example of FIG. 6, there is a different branch from Block 652 for the organization of subjects. For example, the superintendent, or other user with access to data represented by Block 652, may set up and organize curriculums for Math (whose data is represented by Block 664A), Science (whose data is represented by Block 664B), History (whose data is represented by Block 664C), etc.
[101] Under the Math curriculum, there may be sub-curricula Algebra 1 674A, Geometry 674B, Pre-calculus 674C, etc. All of these sub-curricula are associated with and underlie the Math curriculum. Thus, a user with access to the data represented by Block 664A would also have access to the data of the Math sub-curricula, represented by Blocks 674. It should be understood that the other curricula (e.g., Science 664B and History 664C) may also have sub-curricula.
[102] In the example, the Algebra 1 sub-curriculum, represented by Block 674A, has a number of underlying lessons, i.e., Lesson 1 684A through Lesson Q 684B. Each of these lessons may have a number of content groups. For example, Lesson Q 684B has a Content Group 1 694A through Content Group P 694B. These content groups represent the lowest level in the hierarchy (i.e., the leaf nodes of the hierarchy tree). As an example, Content Group 1 694A may include content items (e.g., media, pictures, text, audiovisual aids, etc.) associated with a Lesson Q for Algebra 1. Similarly, Content Group P may include content items, such as answers to quizzes and/or the like.
[103] As described above, at each block in FIG. 6, a user whose sole access is represented by that block would have access to the data represented by that block and any block underlying that block within the hierarchy (i.e., a descendant of the block) but would not have access to data represented by any other block (e.g., ancestor block, sibling block, cousin block, etc.). Of course, it should be understood that a user may have access represented by multiple blocks. For example, a user may have access to data represented by both Block 662A and Block 674C. In the illustrated example, the content groups represent the lowest levels of access. However, it should be understood that the content items within the content groups may represent the lowest levels of access.
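The descendant-only access rule described for FIG. 6 can be sketched as a small tree walk. The tree below abbreviates the figure, and the node names and grant model are illustrative assumptions.

```python
# Sketch of FIG. 6's rule: access granted at a block covers that block and
# every descendant, but no ancestor, sibling, or cousin block.

children = {
    "School District": ["School 1", "Math"],
    "School 1": ["First Grade"],
    "First Grade": ["Teacher 1", "Teacher K"],
    "Math": ["Algebra 1"],
    "Algebra 1": ["Lesson Q"],
}

def reachable(node):
    """All blocks reachable from a grant at `node`: the node plus descendants."""
    nodes = {node}
    for child in children.get(node, []):
        nodes |= reachable(child)
    return nodes

def has_access(grants, node):
    """A user may hold grants at multiple blocks (cf. Blocks 662A and 674C)."""
    return any(node in reachable(g) for g in grants)

print(has_access({"Teacher 1"}, "Teacher 1"))  # True: own block
print(has_access({"Teacher 1"}, "Teacher K"))  # False: sibling block
print(has_access({"School 1"}, "Teacher K"))   # True: descendant block
```

Modeling multiple grants as a set matches the note above that a single user may hold access represented by several blocks at once.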
[104] FIG. 7 illustrates an example operation of a Relational Content Management System Engine (RCMSE) 501, according to an embodiment. The content for a hypothetical family 701 (i.e., "the Smith Family") comprises a plurality of content items, such as photographs, videos, text, home movies, general files, etc. This content may be stored in RCMS 109 across one or more computers, servers, storage media, etc.
[105] In the example, the content has been stored across several volumes, i.e., Volume 1 704A through Volume N 704C. Each of these volumes comprises various records 706A-706D, which may be stored, for example, as a table (e.g., in a relational database). The records may be linked together by one or more common attributes or keys that associate the data contained in the records. As illustrated, Volume 1 704A contains data of interest comprising data 1 and data 2 in records 706A and 706B, respectively. In addition, Volume 2 704B contains data of interest comprising data 3 in record 706C, and Volume N 704C contains data of interest comprising data 4 in record 706D. The organizational and structural perspective of the data, represented by elements 702-706D in FIG. 7, represents the "Administrator's View" of the data.
[106] The user of the data of interest has a different perspective, called the "User's Relational View," as represented by elements 501 and 708 in FIG. 7. If the user wishes to retrieve certain content, the user queries RCMSE 501 with a request to retrieve content. The request is made without regard to the manner in which the content is stored.
[107] In the example of FIG. 7, the user queries RCMSE 501 for content, represented by data 1-4. In response, RCMSE 501 identifies and associates all data that matches the user's query. Accordingly, RCMSE 501 returns content data 1-4. The content data returned by RCMSE 501 is then returned by AIP 100 (e.g., via information delivery program suite 105 or 106) for presentation on a display of the querying user's device.
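The split between the Administrator's View and the User's Relational View can be sketched as a query that gathers matching records from every volume, hiding where each record physically lives. The volume names follow FIG. 7; the key field and record layout are assumptions made for illustration.

```python
# Sketch of FIG. 7: records linked by a common key are spread across volumes,
# and a single RCMSE query returns them all regardless of physical location.

volumes = {
    "Volume 1": [{"key": "smith-family", "data": "data 1"},
                 {"key": "smith-family", "data": "data 2"}],
    "Volume 2": [{"key": "smith-family", "data": "data 3"},
                 {"key": "other", "data": "unrelated"}],
    "Volume N": [{"key": "smith-family", "data": "data 4"}],
}

def rcmse_query(key):
    """Return all data matching `key` across every volume, in volume order."""
    return [rec["data"]
            for records in volumes.values()
            for rec in records
            if rec["key"] == key]

print(rcmse_query("smith-family"))  # ['data 1', 'data 2', 'data 3', 'data 4']
```

The caller never names a volume or record number, which is the point of the relational view: storage layout is an administrator's concern, not the user's.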
[108] RCMS 109 may organize content through a manual and/or automated process. For example, in the example of FIG. 6, the content could have been previously stored in RCMS 109 by system administrator(s) in a structured manner, such that it is organized upon entry into the RCMS 109. Alternatively or additionally, the content may be organized in a continual fashion "on the fly" as various content items are tagged with metadata. In other words, the metadata with which a content item is tagged may be used to automatically organize that content item within the set of all content items.
[109] Metadata tagging can itself be a manual and/or automated process. In manual tagging, a user may directly associate one or more content item(s) with metadata, such as a time and/or date, text, description, voice comment, narration, etc. In automated tagging, one or more content item(s) are automatically associated with metadata, such as closed captions, metadata associated with a reference face or other object (e.g., environmental structure, landmark, etc.) or shape that has been automatically recognized in an image (e.g., by object and facial recognition module 309), geo-location data (e.g., acquired from a Global Positioning System (GPS) sensor), time and/or date, etc.
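The manual and automated tagging paths just described can be sketched as two routes that write into the same metadata record on a content item. The field names (geo, captured_at, comment) and the id scheme are hypothetical.

```python
# Sketch: manual tags come from the user; automatic tags come from the device
# (e.g., a GPS fix and capture time). Both land in one metadata dict.

import datetime

def tag_manually(item, **tags):
    """User-supplied tags, e.g. a text description or a comment."""
    item.setdefault("metadata", {}).update(tags)
    return item

def tag_automatically(item, gps_fix, timestamp):
    """Device-supplied tags such as geo-location and capture time."""
    item.setdefault("metadata", {}).update(
        geo=gps_fix, captured_at=timestamp.isoformat()
    )
    return item

video = {"id": "vid-042"}
tag_automatically(video, gps_fix=(34.05, -118.25),
                  timestamp=datetime.datetime(2014, 9, 5, 12, 30))
tag_manually(video, comment="First day of school")
print(sorted(video["metadata"]))  # ['captured_at', 'comment', 'geo']
```

Storing both kinds of tags in one record is what lets later relational searches treat user-entered and sensor-derived metadata uniformly.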
[110] As an example, audiovisual content may be associated with closed-caption data comprising text associated with audible dialogue or other audio in the audiovisual content. One or more modules of AIP 100 may automatically generate closed-caption data for a content item using a standard voice-to-text engine. In an embodiment, RCMSE 501 may search the closed-caption data associated with content items in response to user queries for content.
[111] Alternatively or additionally, AIP 100 may derive metadata from closed captions that have been previously associated (e.g., by AIP 100 or an external system) with a content item (e.g., by extracting keywords from the closed captions), and associate the derived metadata with that content item. For example, a user may set an operational mode of AIP 100 (e.g., via AIP controller and GUI interfaces 103), such that, for any content item, received by AIP 100, that is associated with closed-caption data, metadata is derived from the associated closed-caption data and associated with that content item. The metadata may be automatically, manually, or semi-automatically derived from the closed-caption data. As one example, text from the closed-caption data may be imported from a closed-caption file associated with one or more content items, displayed on a dashboard user interface of AIP 100, such that a user may choose which extracted text to associate with the one or more content items as metadata. In any case, the derived metadata associated with content items can be used by RCMSE 501 to effectively retrieve content using a relational search.
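A rough sketch of deriving keyword metadata from closed-caption text is shown below. The tokenizer and the tiny stop-word list are simplistic stand-ins assumed for illustration, not the extraction method the system actually uses.

```python
# Sketch: extract candidate keyword metadata from closed-caption text by
# dropping common stop words and keeping the first few distinct words.

import re

STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "we"}

def keywords_from_captions(caption_text, limit=5):
    """Distinct non-stop-words, in order of first appearance."""
    words = re.findall(r"[a-z']+", caption_text.lower())
    seen, result = set(), []
    for w in words:
        if w not in STOP_WORDS and w not in seen:
            seen.add(w)
            result.append(w)
    return result[:limit]

captions = "Today we visit Mount Rushmore and the surrounding hills."
print(keywords_from_captions(captions))
# ['today', 'visit', 'mount', 'rushmore', 'surrounding']
```

In the semi-automatic mode described above, a list like this could be offered on the dashboard for the user to accept or reject before it becomes metadata.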
[112] As another example, if a content item is related to an Internet web page, the AIP can associate the content item with a Hypertext Markup Language (HTML) tag, known as a metatag, for subsequent searching by RCMSE 501.
[113] In an embodiment, a search can be initiated by a user interaction, such as entering a search expression into a web-based form via standard input devices (e.g., virtual or hardware keyboard), voice input, etc. The search expression can comprise one or more criteria, which may take the form of a search string comprising one or more search terms and zero or more operators. The operators may comprise Boolean operators, such as AND, OR, NOT, etc. The search string may be constructed by an interaction in which the user provides types of information that are included in the metadata associated with the content. For example, the user may specify a date (e.g., a date on, before, or after the creation date of desired content) or date range (e.g., a beginning and end date setting a range for the creation date of desired content), number of people in content (e.g., in an image), background of the content (e.g., whether an image comprises a background of water, mountains, sport or other activity, etc.). As discussed above, these types of metadata may be associated with content in RCMS 109, via manual, automatic, or semiautomatic tagging processes, and then searched using RCMSE 501.
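The kinds of criteria listed above (creation date or date range, number of people, background) can be sketched as a metadata search with implicit AND semantics over the supplied criteria. The metadata schema and field names are hypothetical.

```python
# Sketch: filter content items on metadata criteria; every supplied criterion
# must hold (AND), and omitted criteria are ignored.

import datetime

items = [
    {"id": 1, "created": datetime.date(2014, 7, 4), "people": 3,
     "background": "mountains"},
    {"id": 2, "created": datetime.date(2013, 1, 1), "people": 0,
     "background": "water"},
]

def search(items, start=None, end=None, people=None, background=None):
    """Return items matching every supplied criterion."""
    hits = []
    for it in items:
        if start and it["created"] < start:
            continue
        if end and it["created"] > end:
            continue
        if people is not None and it["people"] != people:
            continue
        if background and it["background"] != background:
            continue
        hits.append(it)
    return hits

results = search(items, start=datetime.date(2014, 1, 1), background="mountains")
print([it["id"] for it in results])  # [1]
```

OR and NOT operators from a full search string would compose these same per-criterion tests; only the simple AND case is sketched here.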
[114] 6. Personal Information Delivery Program Suite
[115] FIG. 8 illustrates components of personal information delivery program suite 105, according to an embodiment. Module 105 may comprise a user preference profiler 810, content autoloader and synchronizer 811, user display data formatter 812, adaptive bit rate controller transcoder 813, user experience manager 814, and/or multilingual communications 815.
[116] In an embodiment, user preference profiler 810 monitors the activities and preferences of users, and, based on these activities and preferences, generates a profile for each user. Each user's profile can be used to improve the delivery of content to the user, as well as provide suggestions to the user for content to view.
[117] In an embodiment, content autoloader and synchronizer 811 assists a user in loading content and/or metadata from the user's device to AIP 100 and into RCMS 109. Accordingly, a user may load his or her user-generated or otherwise obtained content with its associated metadata into RCMS 109. Module 811 may synchronize the metadata for any content loaded into RCMS 109 to the metadata (e.g., date) of the content on the user's device. In other words, the metadata associated with content in RCMS 109 may be continually updated based on the metadata associated with corresponding content on the user's device.
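The one-way metadata synchronization just described can be sketched as refreshing the RCMS copy of each item's metadata from the corresponding item on the user's device. The matching-by-id scheme and field names are illustrative assumptions.

```python
# Sketch of module 811's sync: for every item present in both stores, the
# metadata held in RCMS 109 is updated from the device's copy.

rcms_store = {"img-1": {"date": "2014-01-01", "caption": "old"}}
device_store = {"img-1": {"date": "2014-06-15", "caption": "Beach day"},
                "img-2": {"date": "2014-06-16", "caption": "not yet loaded"}}

def synchronize(rcms, device):
    """Refresh RCMS metadata from the device copy for every shared item."""
    for item_id, meta in device.items():
        if item_id in rcms:
            rcms[item_id].update(meta)
    return rcms

synchronize(rcms_store, device_store)
print(rcms_store["img-1"]["caption"])  # Beach day
```

Items that exist only on the device (img-2 here) are untouched by this step; loading new content into RCMS 109 is the autoloader's separate job.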
[118] In an embodiment, user display data formatter 812 formats content and/or metadata, that has been received from a user, into an appropriate format for storage in RCMS 109 and subsequent retrieval by RCMSE 501.
[119] In an embodiment, adaptive bit rate controller transcoder 813 manages the rate of delivery of content and/or metadata from AIP 100 to a user device in order to match the speed of the user device.
[120] In an embodiment, user experience manager 814 is responsible for maximizing the performance of AIP 100 so that it matches the bandwidth limitations of the connection between AIP 100 and a user device (e.g., based on a speed of the user device's connectivity to the Internet). For this purpose, user experience manager 814 may comprise a last-mile communications channel profiler that profiles the performance of Internet communications that the user has available (e.g., the final leg or "last mile" of the user's Internet connection) in order to understand the maximum bandwidth that can be utilized to deliver the content. For example, the last-mile communications channel profiler may be configured to modify the resolution of content and/or the speed of delivery of the content.
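The last-mile idea can be sketched as picking the highest content rendition whose bit rate fits the measured bandwidth. The rendition ladder and the headroom factor below are illustrative assumptions, not values from the disclosure.

```python
# Sketch: choose a delivery resolution from a measured last-mile bandwidth,
# leaving some headroom below the measurement for safety.

RENDITIONS = [  # (label, required kilobits per second), highest first
    ("1080p", 6000),
    ("720p", 3000),
    ("480p", 1500),
    ("240p", 500),
]

def choose_rendition(measured_kbps, headroom=0.8):
    """Highest rendition fitting within a safety fraction of the bandwidth."""
    budget = measured_kbps * headroom
    for label, required in RENDITIONS:
        if required <= budget:
            return label
    return RENDITIONS[-1][0]  # fall back to the lowest rendition

print(choose_rendition(4500))  # budget 3600 kbps -> '720p'
print(choose_rendition(400))   # below every tier -> '240p'
```

Re-running the profiler periodically and re-invoking a selection like this is one way the delivery speed and resolution could adapt as the last-mile conditions change.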
[121] Accordingly, the resolution and delivery speed of content may be based on the bandwidth of a connection available to a destination user device, as well as the resolution and speed of the destination user device.
[122] In an embodiment, multilingual communications module 815 enables AIP 100 to communicate and provide features in a variety of human languages, as well as interpret text and/or voice data in various types of content. In effect, module 815 acts as an interpreter or translator, so that text and/or voice data in content items can be translated into the appropriate language for a user viewing the content items.
[123] 7. AIP Controller and GUI Interfaces
[124] FIG. 9 illustrates components of AIP controller and GUI interfaces 103, according to an embodiment. A user's experience may be personalized based on one or more hierarchically-arranged parameters stored in RCMS 109. These parameters may be set by an administrator and/or the user. Module 103 is used to configure the operations of AIP 100.
[125] In an embodiment, AIP controller and GUI interfaces 103 comprise a number of vertical graphical user interfaces, such as GUI Vertical 1 901, GUI Vertical 2 902, and so on, up to GUI Vertical N 903. These different vertical graphical user interfaces correspond to the different vertical applications in user features and revenue generation module 107. For instance, GUI Vertical 1 901 is available to users of Vertical Market Application 1 312, and GUI Vertical N 903 is available to users of Vertical Market Application N 313. The vertical graphical user interfaces may provide unique and specialized features for each of their corresponding market applications. For example, the vertical graphical user interfaces may be used to load content, set up reports, and/or activate features for AIP 100.
[126] In an embodiment, AIP controller and GUI interfaces 103 comprise an AIP user feature controller 904 that allows the user to upload content, organize the content, and associate metadata (e.g., metatags) with the content.
[127] In an embodiment, AIP controller and GUI interfaces 103 comprise an AIP engine operation controller 905 that is responsible for communications, program initialization, and/or reloadable control programs for AIP 100.
[128] 8. User Embedded Applications
[129] FIG. 10 illustrates components of user embedded applications 108, according to an embodiment. User embedded applications 108 may comprise one or a plurality of applications that can be downloaded to a user device in order to provide a customized user interface to AIP 100, optionally with an improved or enhanced set of features. For example, if the user device is a mobile phone, then a user embedded application 1001 may be downloaded from AIP 100 to the mobile phone and installed in the mobile phone. The downloaded application 1001 can then provide the mobile phone with the full functionality or a subset of the functionality of AIP 100. Similarly, user embedded applications 108 may comprise downloadable and/or installable applications providing AIP 100 functionality for other types of devices, such as an application 1005 for other types of mobile devices, a desktop application 1002, an iPad™ application 1003, an application 1004 for other types of tablets (e.g., Android™-based tablets, Windows™-based tablets, etc.), a laptop application 1008, an intelligent vehicle application 1006 (e.g., for vehicles capable of wireless or satellite communications, which may provide, e.g., Internet access to computing devices within or connected to the vehicle), an Internet-integrated television application 1009, and/or an application 1007 for other types of displays. Accordingly, each type of device may have its own specialized client application for interacting with AIP 100.
[130] 9. Content Groups
[131] As discussed elsewhere herein, content groups may comprise one or more content items. For instance, two or more content items may be associated with each other to form a content group. However, it should be understood that, in some embodiments or scenarios, it is possible that a content group could comprise only one content item.
[132] Similarly to FIG. 6, FIG. 11 illustrates an example hierarchical organization of content. The first and highest level 1110 may comprise a broadcast network. At a second level 1120, the broadcast network may comprise a plurality of underlying syndicates. In turn, at a third level 1130, each of the plurality of syndicates may comprise a plurality of underlying affiliates. At a fourth level 1140, each of the plurality of affiliates may comprise a plurality of underlying channels. At a fifth level 1150, each of the plurality of channels may comprise a plurality of underlying content categories. At a sixth level 1160, each of the plurality of content categories may comprise one or more content groups. In turn, each of the plurality of content groups may comprise one or more content items. For example, a channel may comprise a food category comprising food-related content groups titled "cooking," "dining out," etc.
[133] FIG. 12 illustrates an example user interface for managing a content group, according to an embodiment. Specifically, the example user interface is for editing the content group "dining out" at level 1160 in FIG. 11. However, it should be understood that similar user interfaces may be provided for the other content groups, as well as other levels in the hierarchy (e.g., levels 1110-1150 in FIG. 11). In fact, the other levels in the hierarchy may simply be considered content groups that comprise other content groups or content sub-groups. For example, the "cooking" and "dining out" content groups at level 1160 may form a "food" content group at level 1150, the "food," "entertainment," "green living," "home," and "health" content groups at level 1150 may form a channel "2013" content group at level 1140, and so on, up to a single root content group comprising the entire broadcasting network at level 1110.
[134] As illustrated in FIG. 12, the user interface for managing a content group may comprise one or more inputs for adding a title, description, keywords, and/or other metadata to be associated with the content group, as well as activating and deactivating the content group (i.e., toggling the content group between visible/accessible and invisible/inaccessible), and saving the content group. In addition, the user interface may comprise a list of content items associated with the content group, along with a title, description, inputs for editing, viewing, deleting, and/or activating and deactivating the content item, an input for adding a new content item to the content group, etc. In an embodiment, a user interface may also comprise one or more inputs for managing which users have access to the content group.
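The content-group record behind such a management interface can be sketched as follows. The field names, helper functions, and example titles are assumptions made for illustration.

```python
# Sketch: a content group with title, description, keywords, an active flag,
# and member content items that can be individually toggled (cf. FIG. 12).

def make_group(title, description="", keywords=()):
    return {"title": title, "description": description,
            "keywords": list(keywords), "active": True, "items": []}

def add_item(group, title):
    group["items"].append({"title": title, "active": True})

def toggle(record):
    """Activate/deactivate a group or a single content item."""
    record["active"] = not record["active"]

def visible_items(group):
    """Items shown to end users: both the group and the item must be active."""
    if not group["active"]:
        return []
    return [it["title"] for it in group["items"] if it["active"]]

dining = make_group("dining out", keywords=["food", "restaurants"])
add_item(dining, "Best brunch spots")
add_item(dining, "Late-night eats")
toggle(dining["items"][1])    # deactivate one item
print(visible_items(dining))  # ['Best brunch spots']
```

Deactivating the group itself hides every member item at once, matching the visible/accessible versus invisible/inaccessible toggle described above.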
[135] FIGS. 13A-13E illustrate a customized user experience in the context of content groups, according to embodiments. FIG. 13A depicts an entry screen for the "2013" channel in level 1140 from FIG. 11, according to an embodiment.
[136] FIG. 13B illustrates a user interface for selecting content categories from the channel. As illustrated, the user interface lists the content categories from level 1150 for the channel, and prompts a user to select categories of interest. There may be any number of categories from which a user can choose. Furthermore, a user may be provided the option to select and arrange the various categories within the user's user interface according to his or her own preferences, so as to create a customized user interface. For example, a user may arrange the categories (e.g., like tiles) such that the "Entertainment" category is first and the "Weather" category is at the bottom (e.g., and only visible if the user scrolls down).
[137] FIG. 13C illustrates a user interface after a user has selected three categories from the user interface in FIG. 13B for his or her customized user experience. As illustrated, this customized user interface may comprise a selectable list of the user-chosen content categories. It may also comprise sponsored or recommended content, for example, based on the profile generated for the user, as described elsewhere herein. For example, as depicted in FIG. 13C, the user interface has suggested a cooking-related video for the user to watch.
[138] FIG. 13D illustrates a user interface after the user has chosen to view the suggested cooking-related video depicted in FIG. 13C. In addition, text or other content related to the chosen video may be displayed. FIG. 13E illustrates a further user interface that suggests additional content, related to the chosen video, to the user. This user interface may also provide transaction-related inputs, such as options for purchasing a service or product, such as additional content. In an embodiment, FIG. 13E may simply be a further view of the user interface in FIG. 13D after the user scrolls down (or in another direction, e.g., left or right) to bring additional content into view. Accordingly, together, FIGS. 13D and 13E may represent a single content group by displaying a plurality of content items within that content group. Thus, in an embodiment, content groups, including all of their respective content items, may be viewable in a single, integrated user interface.
[139] FIGS. 14A-14D illustrate another example of a customized user experience in the context of content groups, according to embodiments. FIG. 14A illustrates a user interface comprising a search input, and optionally a list of available albums.
[140] The disclosed metadata schema and ontology enables the adaptive software of AIP 100 to recognize metadata tags (e.g., keywords) associated with content and aggregate content items based on their associated metadata. For example, AIP 100 may recognize that a certain metadata tag appears in metadata (e.g., the combined metadata for all of a user's content and/or all of the content stored in RCMS 109 for a plurality of users) a certain number of times. If the number of times that the metadata tag appears is greater than a threshold, then the adaptive software of AIP 100 may automatically (i.e., without user intervention), manually (e.g., in response to a user operation), and/or semi-automatically (e.g., in response to a user confirmation of an automatic recognition) aggregate the user's content items associated with the metadata tag into an album for the user. The threshold may be a predetermined threshold or may be based on a number of content items (e.g., a threshold ratio of the number of content items associated with the particular metadata tag to the total number of content items). Alternatively or additionally, the user may manually select the content items to associate into a content group.
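The tag-frequency aggregation described above can be sketched as follows; the mapping of item ids to tag sets and the `auto_albums` helper are illustrative assumptions, not the platform's actual schema:

```python
from collections import Counter


def auto_albums(items, threshold=3):
    """Group content items into albums for any metadata tag that appears
    at least `threshold` times across the collection.
    `items` maps item id -> set of metadata tags (illustrative schema)."""
    tag_counts = Counter(tag for tags in items.values() for tag in tags)
    albums = {}
    for tag, count in tag_counts.items():
        if count >= threshold:
            albums[tag] = sorted(i for i, tags in items.items() if tag in tags)
    return albums


library = {
    "img1": {"billy", "baseball"},
    "img2": {"billy", "birthday"},
    "vid1": {"billy", "birthday"},
    "vid2": {"baseball"},
}
# "billy" appears three times, so only it crosses the threshold of 3.
albums = auto_albums(library, threshold=3)
```

The percentage-based variant mentioned in the text would simply compute `threshold` as a fraction of `len(items)` before the comparison; a user confirmation step could gate the final assignment for the semi-automatic mode.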
[141] FIG. 14B illustrates a user interface for viewing thumbnails of content items, according to an embodiment. The content items may be all of the content items or a smaller subset of the content items available to a user that are comprised in an album, content group, or the results of a search. For example, a user may input a search expression, comprising a search string (e.g., keywords and search operators), into the search input of the user interface in FIG. 14A. In response, RCMS 109 may perform a search according to the disclosed metadata schema and ontology. The results of such a search may be shown as a series of thumbnails, as illustrated in FIG. 14B.
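A minimal sketch of such a metadata search, treating the search expression as an implicit AND of its keywords (the schema and function name are illustrative, not the disclosed RCMS 109 implementation):

```python
def search(items, *terms):
    """Return ids of items whose metadata contains every keyword in the
    search expression (an implicit AND of the terms)."""
    wanted = {t.lower() for t in terms}
    return sorted(i for i, tags in items.items() if wanted <= tags)


library = {
    "vid_party": {"billy", "birthday", "video"},
    "img_game": {"billy", "baseball"},
    "img_team": {"baseball", "team"},
}
# Only items tagged with BOTH "billy" and "baseball" are returned.
results = search(library, "Billy", "baseball")
```

Each returned id would then be rendered as a thumbnail, as in FIG. 14B; explicit operators such as "OR" would be handled by combining such term sets with set union rather than intersection.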
[142] In an embodiment, the content items in the search results are navigably interlocked with associated content items (e.g., content items belonging to the same content group and/or album). The result is a search environment in which a user may navigate farther and farther away from the initial search results as he or she navigates to interlocked content items that were not returned in the original search results, but which are related to content items in the initial search results by way of content groups.
[143] For example, as illustrated in FIG. 14C, a user may select a thumbnail of a content item from the search results illustrated in FIG. 14B. In response, the user interface opens up the content group that comprises the selected content item, as illustrated in FIG. 14D. The opened content group lists all of the content items comprised in the content group, including the selected content item. Accordingly, a user is able to view content items that are associated with the selected content item based on their associated metadata tags in an integrated user interface.
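One way to sketch this interlocking is to expand a selected search result into every item that shares at least one metadata tag with it; this is a simplified stand-in for "opening" the content group behind a thumbnail (FIG. 14C to FIG. 14D), not the platform's actual grouping logic:

```python
def open_content_group(items, selected):
    """List every item sharing at least one metadata tag with the
    selected item, including the selected item itself."""
    tags = items[selected]
    return sorted(i for i, t in items.items() if t & tags)


library = {
    "vid_party": {"billy", "birthday"},
    "img_cake": {"birthday"},
    "img_game": {"billy", "baseball"},
    "img_sky": {"clouds"},
}
group = open_content_group(library, "vid_party")
```

Note that `img_cake` and `img_game` need not have matched the original search; the user reaches them only by navigating through the selected item's group, which is how the navigation can drift progressively farther from the initial results.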
[144] As an illustrative example, in FIG. 14A, the user may enter search terms related to the act of Billy playing baseball. For example, the user may have entered a search expression comprising the search terms "Billy" and "baseball" joined by an explicit or implicit "AND" operator. In FIG. 14B, the search results for the search expression are shown as a series of thumbnails. While navigating through this series of thumbnails, the user finds a thumbnail for a video of Billy at his birthday party, as shown in FIG. 14C. The user then selects the thumbnail for the video of Billy at his birthday party, and is responsively presented with a viewable and selectable series of content items within the same content group as the video of Billy at his birthday party. This content group may comprise the video of Billy at his birthday party, as well as content items that share the same metadata tags as the video of Billy at his birthday party. It should be understood that the content items may comprise one or a plurality of types of content items, such as videos, photographs, text, social media, blogs, Uniform Resource Locators (URLs), etc.
[145] It should also be understood that FIGS. 14A-14D merely represent one example of a search user interface. Additionally or alternatively, with reference to FIGS. 14B and 14D, the user interfaces may be organized such that the primary or original search results (i.e., content items) returned by the search expression are presented as thumbnails along a first axis, and the content items related, by content groups (i.e., via shared metadata or a direct association with a common content group), to the content items in the primary search results are presented as thumbnails along a second axis. For example, the thumbnails for the primary search results may be shown along the vertical axis, while, to the left and right or diagonal left and diagonal right of a content item, other content items in the same content group are shown along a horizontal or diagonal axis. Accordingly, a user may scroll through the primary search results by scrolling up or down, and scroll through the various content groups of content items in the primary search results by scrolling left or right or diagonally left or diagonally right.
[146] In an embodiment, a user may add additional content items to a content group by tagging the content items to be added with metadata associated with the content group, tagging the content items already in the content group with metadata associated with the content items to be added, and/or by directly adding the content items to be added to the content group (i.e., by a direct link, rather than by the commonality of metadata). For example, the content group may be comprised in a social media blog hosted in the cloud, and authorized users could add new content items, post comments for content items, add metadata to content items, create and modify content groups, etc.
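The two association mechanisms just described, membership by shared metadata tag versus membership by direct link, can be sketched together; the class and its fields are illustrative assumptions:

```python
class TaggedGroup:
    """A content group whose membership comes from a shared metadata tag
    and/or explicit direct links, mirroring the two mechanisms above."""

    def __init__(self, tag):
        self.tag = tag
        self.direct = set()  # items added by direct link, not by tag

    def members(self, items):
        tagged = {i for i, tags in items.items() if self.tag in tags}
        return sorted(tagged | self.direct)


library = {"vid1": {"birthday"}, "img1": {"baseball"}}
group = TaggedGroup("birthday")
group.direct.add("img1")        # direct link: no metadata change needed
library["img2"] = {"birthday"}  # tagging a new item also adds it
```

Keeping the direct links separate from the tag index means a cloud-hosted group (such as the social media blog example) can accept arbitrary new items without forcing metadata changes on either side.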
[147] Advantageously, this utilization of content groups allows a user to view, in one integrated experience, all the content items associated with a particular event, subject, etc., without having to find individual content items from a hierarchical directory or other structure or from individual third-party sites (e.g., Dropbox™, Facebook™, Twitter™, etc.). Within a content group, a user may be able to view all the content items in a single experience from a single location, irrespective of the content type (e.g., whether video, image, text, blog, comments, etc.) with access and permissions controlled by one or more administrators.
[148] 10. Process Overview
[149] FIG. 15 illustrates a high-level process whereby local content from a variety of different sources may be uploaded, processed, and delivered to cloud storage for distribution to end user devices, according to an embodiment. In step 1510, one or more content items are acquired (e.g., via a camera or input device on a mobile phone). In step 1520, these content item(s) are stored in a local storage of a user device. In step 1530, the content item(s) may be processed using toolsets of AIP 100 and/or user embedded application(s) installed on a local device. For example, such processing may comprise a content architecture tool, a pre-publishing tool, a publishing tool, an archive tool, and/or an analytics tool set which may collectively enable a user to organize (e.g., according to the content organization architecture described herein), format, edit, tag, upload, distribute, and otherwise manage the content item(s). In step 1540, the content item(s) are received at and stored in a cloud service. The cloud service may comprise a number of different types of storage, including active storage, archival storage, and distribution storage. In step 1550, content can be received from the cloud service or a third-party content distribution company and displayed on any of a variety of different user devices according to the personalized or customized end user experience described elsewhere herein.
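The FIG. 15 flow can be summarized as a chain of steps; each function below is a hypothetical stub standing in for the corresponding stage (the real AIP 100 toolsets are far richer):

```python
def acquire():
    """Step 1510: capture a content item (e.g., via a phone camera)."""
    return [{"name": "clip.mp4"}]


def store_local(items):
    """Step 1520: persist the item(s) to the device's local storage."""
    for item in items:
        item["location"] = "local"
    return items


def process(items):
    """Step 1530: organize/tag via the platform toolsets (stubbed here
    as attaching content-architecture tags)."""
    for item in items:
        item["tags"] = ["food", "cooking"]
    return items


def upload(items):
    """Step 1540: deliver the item(s) to cloud storage (active tier)."""
    for item in items:
        item["location"] = "cloud:active"
    return items


delivered = upload(process(store_local(acquire())))
```

Step 1550, display on an end user device, would then read from the cloud tier; archival and distribution storage could be modeled as additional `location` values.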
[150] In an embodiment, users can create and manage their personal cloud storage for their content. This personalized cloud storage may provide a secure and private cloud ecosystem, for example, for distribution of content to members of a family. In addition, the personalized cloud storage may be interfaced with or otherwise accessible by third-party content delivery systems, such as cable, satellite, and/or Internet delivery companies (e.g., DIRECTV™, Dish™, Comcast™, Time Warner Cable™, Netflix™, Hulu™, Apple TV™, Roku™, etc.), and delivered to the user via their own family channel through such third-party delivery companies. For example, one family member could customize content (e.g., create content groups) via AIP 100, as discussed elsewhere herein, to create his or her own customized family channel. In an embodiment, the channel may be defined by a file directory hierarchy and/or search methodology (e.g., comprising one or more search criteria or expressions defined by the user), such that the channel is manifested to user(s) as content groups or content items representing the results of a search performed according to the search methodology. Alternatively or additionally, the channel may be constructed by the user specifying particular content groups and/or content items to be included in the channel and manifested as the channel.
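The "channel as saved search methodology" idea can be sketched as a channel whose line-up is recomputed from stored search criteria each time it is viewed; names and schema below are illustrative:

```python
class FamilyChannel:
    """A channel defined by a saved search methodology: its line-up is
    recomputed from the search criteria whenever it is viewed."""

    def __init__(self, name, required_tags):
        self.name = name
        self.required_tags = set(required_tags)

    def lineup(self, library):
        """Manifest the channel as the items matching the saved search."""
        return sorted(i for i, tags in library.items()
                      if self.required_tags <= tags)


library = {
    "vacation2013": {"family", "vacation"},
    "birthday": {"family", "birthday"},
    "sponsor_ad": {"sponsor"},
}
channel = FamilyChannel("Smith Family", ["family"])
```

Because the line-up is a query rather than a fixed list, newly uploaded family content appears in the channel automatically; the alternative construction in the text (explicitly specified content groups/items) would replace the query with a stored list of ids.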
[151] Then, if the family member has an account with a content delivery company, the content delivery company can be provided access (e.g., via AIP 100) to the family member's customized family channel, and deliver the customized family channel via its distribution system to the family member (e.g., via cable transmission lines and a cable box in the family member's home, via satellite, via the Internet, etc.) for viewing. Additionally, the customized family channel may be delivered to other members of the family, in the same or different households, by the content delivery company or other content delivery companies with access to the customized family channel. The customized family channel, as well as access to the customized family channel, may be managed by the originating family member and/or other user(s) with administrative privileges. Thus, for example, the members of a family could view a channel customized specifically for their family (e.g., containing content generated and/or selected by family members) in the same manner as they view other television channels (e.g., via their cable or satellite companies).
[152] 11. Example Environment
[153] FIG. 16 illustrates an example system infrastructure in which the disclosed AIP 100 may operate, according to an embodiment. The system may comprise a set of one or more servers or cloud interfaces or instances, which utilize shared resources of one or more servers (any of which may be referred to herein as a "platform" 1610) and/or one or more user devices 1630 which host and/or execute one or more of the various functions, processes, methods, and/or software modules described herein. User system(s) 1630 may host at least some modules of the application, according to embodiments disclosed herein, and/or a local database. Platform 1610 may be communicatively connected to the user system(s) 1630 via one or more network(s) 1620 and may also be communicatively connected to one or more database(s) 1612 (e.g., via one or more network(s), such as network(s) 1620) and/or may comprise one or more database(s) 1612. Network(s) 1620 may comprise the Internet, and platform 1610 may communicate with user system(s) 1630 through the Internet using standard transmission protocols, such as HyperText Transfer Protocol (HTTP), Secure HTTP (HTTPS), File Transfer Protocol (FTP), FTP Secure (FTPS), Secure Shell FTP (SFTP), and the like, as well as proprietary protocols. It should be understood that the components (e.g., servers and/or other resources) of platform 1610 may be, but are not required to be, collocated. Furthermore, while platform 1610 is illustrated as being connected to various systems through a single set of network(s) 1620, it should be understood that platform 1610 may be connected to the various systems via different sets of one or more networks. For example, platform 1610 may be connected to a subset of user systems 1630 via the Internet, but may be connected to one or more other user systems 1630 via an intranet. 
It should also be understood that user system(s) 1630 may comprise any type or types of computing devices, including without limitation, desktop computers, laptop computers, tablet computers, smart phones or other mobile phones, servers, game consoles, televisions, set-top boxes, electronic kiosks, and the like. While it is contemplated that such devices are capable of wired or wireless communication, this is not a requirement for all embodiments. In addition, while only a few user systems 1630, one platform 1610, and one set of database(s) 1612 are illustrated, it should be understood that the network may comprise any number of user systems, sets of platform(s), and database(s).
[154] Platform 1610 may comprise web servers which host one or more websites or web services. In embodiments in which a website is provided, the website may comprise one or more user interfaces, including, for example, webpages generated in HTML or other language. Platform 1610 transmits or serves these user interfaces as well as other data (e.g., a downloadable copy of or installer for the disclosed application) in response to requests from user system(s) 1630. In some embodiments, these user interfaces may be served in the form of a wizard, in which case two or more user interfaces may be served in a sequential manner, and one or more of the sequential user interfaces may depend on an interaction of the user or user system with one or more preceding user interfaces. The requests to platform 1610 and the responses from platform 1610, including the user interfaces and other data, may both be communicated through network(s) 1620, which may include the Internet, using standard communication protocols (e.g., HTTP, HTTPS). These user interfaces or web pages, as well as the user interfaces provided by the disclosed application executing on a user system 1630, may comprise a combination of content and elements, such as text, images, videos, animations, references (e.g., hyperlinks), frames, inputs (e.g., textboxes, text areas, checkboxes, radio buttons, drop-down menus, buttons, forms, etc.), scripts (e.g., JavaScript), and the like, including elements comprising or derived from data stored in one or more databases that are locally and/or remotely accessible to user system(s) 1630 and/or platform 1610.
[155] Platform 1610 may further comprise, be communicatively coupled with, or otherwise have access to one or more database(s) 1612. For example, platform 1610 may comprise one or more database servers which manage one or more databases 1612. A user system 1630 or application executing on platform 1610 may submit data (e.g., user data, form data, etc.) to be stored in the database(s) 1612, and/or request access to data stored in such database(s) 1612. Any suitable database may be utilized, including without limitation MySQL™, Oracle™, IBM™, Microsoft SQL™, Sybase™, Access™, and the like, including cloud-based database instances and proprietary databases. The term "database" as used herein and in the appended claims may refer to any set of data that is organized for retrieval, including relational databases as well as other types of commercial or proprietary databases. Data may be sent to platform 1610, for instance, using the well- known POST request supported by HTTP, via FTP, etc. This data, as well as other requests, may be handled, for example, by server-side web technology, such as a servlet or other software module, executed by platform 1610.
[156] In embodiments in which a web service is provided, platform 1610 may receive requests from user system(s) 1630, and provide responses in eXtensible Markup Language (XML) and/or any other suitable or desired format. In such embodiments, platform 1610 may provide an application programming interface (API) which defines the manner in which user system(s) 1630 may interact with the web service. Thus, user system(s) 1630, which may themselves be servers, can define their own user interfaces, and rely on the web service to implement or otherwise provide the backend processes, methods, functionality, storage, etc., described herein. For example, in such an embodiment, a client application (e.g., the disclosed user embedded applications) executing on one or more user system(s) 1630 may interact with a server application executing on platform 1610 to execute one or more or a portion of one or more of the various functions, processes, methods, and/or software modules described herein. The client application may be "thin," in which case processing is primarily carried out server-side by platform 1610. A basic example of a thin client application is a browser application, which simply requests, receives, and renders web pages at user system(s) 1630, while platform 1610 is responsible for generating the web pages and managing database functions. Alternatively, the client application may be "thick," in which case processing is primarily carried out client-side by user system(s) 1630. It should be understood that the client application may perform an amount of processing, relative to platform 1610, at any point along this spectrum between "thin" and "thick," depending on the design goals of the particular implementation.
In any case, the application, which may wholly reside on either platform 1610 or user system(s) 1630 or be distributed between platform 1610 and user system(s) 1630, can comprise one or more executable software modules that implement one or more of the processes, methods, or functions of AIP 100 described herein.
[157] 12. Example Architecture
[158] FIG. 17 illustrates an example architecture for the disclosed AIP 100, according to an embodiment. In the illustrated embodiment, AIP 100 is represented by a portal 1710, a service-oriented architecture (SOA) 1720, and an information architecture 1730.
[159] In an embodiment, portal 1710 comprises an enterprise portal 1711 (e.g., corresponding to enterprise information delivery program suite 106) and a personal portal 1712 (e.g., corresponding to personal information delivery program suite 105). In addition, portal 1710 may comprise a portlet library 1713, comprising one or more modules, such as a client registration portlet 1714, search portlet 1715, view portlet 1716, pre-publication portlet 1717, and/or publication portlet 1718.
[160] In an embodiment, SOA 1720 communicates with portal 1710 according to representational state transfer (REST). SOA 1720 may comprise a plurality of low-level services 1724 which are used to construct one or more composite services 1722. These services represent the building blocks for the various process code and workflows provided by SOA 1720. In addition, SOA 1720 may implement an API 1726 for accessing the services of SOA 1720. Thus, a third-party content distributor 1740 may access the services of SOA 1720 over the web via API 1726. For example, the third-party content distributor may access a customized user channel, as discussed elsewhere herein (e.g., with reference to a customized family channel), and provide the customized user channel to one or more users via standard distribution channels (e.g., cable, satellite, Internet, etc.). Conversely, SOA 1720 may access the services of a third-party content distributor or other third-party service over the web via API 1742.
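A REST interaction of the kind described, a third-party distributor pulling a customized channel through the platform API, might look like the following sketch. The endpoint path, query parameter, and base URL are all hypothetical; the request is built but deliberately not sent:

```python
from urllib.parse import urlencode
from urllib.request import Request


def channel_request(base_url, channel_id, token):
    """Build (but do not send) an HTTP GET request with which a
    third-party distributor might fetch a customized channel over a
    REST API. Path and parameter names are illustrative assumptions."""
    query = urlencode({"access_token": token})
    return Request("%s/channels/%s?%s" % (base_url, channel_id, query),
                   headers={"Accept": "application/json"})


req = channel_request("https://api.example.com/v1", "family-2013", "tok123")
```

In a real deployment, the distributor would send this request (e.g., with `urllib.request.urlopen`) and render the returned channel line-up through its own cable, satellite, or Internet distribution system.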
[161] In an embodiment, SOA 1720 is configured to access information architecture 1730, for example, via Java Database Connectivity (JDBC). Information architecture 1730 may comprise one or more databases, such as a Lightweight Directory Access Protocol (LDAP) security database 1732 (e.g., for user security provisioning and security context), a metadata search engine database 1734 (e.g., which may correspond to RCMSE 501 and implement relational online analytical processing (ROLAP)), an online transaction processing (OLTP) database 1736, and/or a content management system (CMS) database 1738.
[162] 13. Example Processing Device
[163] FIG. 18 is a block diagram illustrating an example wired or wireless system 550 that may be used in connection with various embodiments described herein. For example, the system 550 may be used as or in conjunction with one or more of the mechanisms, processes, methods, or functions (e.g., to store and/or execute AIP 100 or one or more software modules of AIP 100) described herein, and may represent components of platform 1610, user system(s) 1630, and/or other devices described herein. The system 550 can be a server or any conventional personal computer, or any other processor-enabled device that is capable of wired or wireless data communication. Other computer systems and/or architectures may also be used, as will be clear to those skilled in the art.
[164] The system 550 preferably includes one or more processors, such as processor 560. Additional processors may be provided, such as an auxiliary processor to manage input/output, an auxiliary processor to perform floating point mathematical operations, a special-purpose microprocessor having an architecture suitable for fast execution of signal processing algorithms (e.g., digital signal processor), a slave processor subordinate to the main processing system (e.g., back-end processor), an additional microprocessor or controller for dual or multiple processor systems, or a coprocessor. Such auxiliary processors may be discrete processors or may be integrated with the processor 560. Examples of processors which may be used with system 550 include, without limitation, the Pentium® processor, Core i7® processor, and Xeon® processor, all of which are available from Intel Corporation of Santa Clara, California.
[165] The processor 560 is preferably connected to a communication bus 555. The communication bus 555 may include a data channel for facilitating information transfer between storage and other peripheral components of the system 550. The communication bus 555 further may provide a set of signals used for communication with the processor 560, including a data bus, address bus, and control bus (not shown). The communication bus 555 may comprise any standard or non-standard bus architecture such as, for example, bus architectures compliant with industry standard architecture (ISA), extended industry standard architecture (EISA), Micro Channel Architecture (MCA), peripheral component interconnect (PCI) local bus, or standards promulgated by the Institute of Electrical and Electronics Engineers (IEEE) including IEEE 488 general-purpose interface bus (GPIB), IEEE 696/S-100, and the like.
[166] System 550 preferably includes a main memory 565 and may also include a secondary memory 570. The main memory 565 provides storage of instructions and data for programs executing on the processor 560, such as one or more of the functions and/or modules discussed herein. It should be understood that programs stored in the memory and executed by processor 560 may be written and/or compiled according to any suitable language, including without limitation C/C++, Java, JavaScript, Perl, Visual Basic, .NET, and the like. The main memory 565 is typically semiconductor-based memory such as dynamic random access memory (DRAM) and/or static random access memory (SRAM). Other semiconductor-based memory types include, for example, synchronous dynamic random access memory (SDRAM), Rambus dynamic random access memory (RDRAM), ferroelectric random access memory (FRAM), and the like, including read only memory (ROM).
[167] The secondary memory 570 may optionally include an internal memory 575 and/or a removable medium 580, for example a floppy disk drive, a magnetic tape drive, a compact disc (CD) drive, a digital versatile disc (DVD) drive, other optical drive, a flash memory drive, etc. The removable medium 580 is read from and/or written to in a well-known manner. Removable storage medium 580 may be, for example, a floppy disk, magnetic tape, CD, DVD, SD card, etc.
[168] The removable storage medium 580 is a non-transitory computer-readable medium having stored thereon computer executable code (i.e., software) and/or data. The computer software or data stored on the removable storage medium 580 is read into the system 550 for execution by the processor 560.
[169] In alternative embodiments, secondary memory 570 may include other similar means for allowing computer programs or other data or instructions to be loaded into the system 550. Such means may include, for example, an external storage medium 595 and an interface 590. Examples of external storage medium 595 may include an external hard disk drive, an external optical drive, or an external magneto-optical drive.
[170] Other examples of secondary memory 570 may include semiconductor-based memory such as programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory (block-oriented memory similar to EEPROM). Also included are any other removable storage media 580 and communication interface 590, which allow software and data to be transferred from an external medium 595 to the system 550.
[171] System 550 may include a communication interface 590. The communication interface 590 allows software and data to be transferred between system 550 and external devices (e.g., printers), networks, or information sources. For example, computer software or executable code may be transferred to system 550 from a network server via communication interface 590. Examples of communication interface 590 include a built-in network adapter, network interface card (NIC), Personal Computer Memory Card International Association (PCMCIA) network card, card bus network adapter, wireless network adapter, Universal Serial Bus (USB) network adapter, modem, a wireless data card, a communications port, an infrared interface, an IEEE 1394 FireWire interface, or any other device capable of interfacing system 550 with a network or another computing device.
[172] Communication interface 590 preferably implements industry promulgated protocol standards, such as Ethernet IEEE 802 standards, Fibre Channel, digital subscriber line (DSL), asymmetric digital subscriber line (ADSL), frame relay, asynchronous transfer mode (ATM), integrated services digital network (ISDN), personal communications services (PCS), transmission control protocol/Internet protocol (TCP/IP), serial line Internet protocol/point to point protocol (SLIP/PPP), and so on, but may also implement customized or non-standard interface protocols as well.
[173] Software and data transferred via communication interface 590 are generally in the form of electrical communication signals 605. These signals 605 are preferably provided to communication interface 590 via a communication channel 600. In one embodiment, the communication channel 600 may be a wired or wireless network, or any variety of other communication links. Communication channel 600 carries signals 605 and can be implemented using a variety of wired or wireless communication means including wire or cable, fiber optics, conventional phone line, cellular phone link, wireless data communication link, radio frequency ("RF") link, or infrared link, just to name a few.
[174] Computer executable code (i.e., computer programs or software, such as the disclosed application) is stored in the main memory 565 and/or the secondary memory 570. Computer programs can also be received via communication interface 590 and stored in the main memory 565 and/or the secondary memory 570. Such computer programs, when executed, enable the system 550 to perform the various functions of the present invention as previously described.
[175] In this description, the term "computer readable medium" is used to refer to any non-transitory computer readable storage media used to provide computer executable code (e.g., software and computer programs) to the system 550. Examples of these media include main memory 565, secondary memory 570 (including internal memory 575, removable medium 580, and external storage medium 595), and any peripheral device communicatively coupled with communication interface 590 (including a network information server or other network device). These non-transitory computer readable mediums are means for providing executable code, programming instructions, and software to the system 550.

[176] In an embodiment that is implemented using software, the software may be stored on a computer readable medium and loaded into the system 550 by way of removable medium 580, I/O interface 585, or communication interface 590. In such an embodiment, the software is loaded into the system 550 in the form of electrical communication signals 605. The software, when executed by the processor 560, preferably causes the processor 560 to perform the inventive features and functions previously described herein.
[177] In an embodiment, I/O interface 585 provides an interface between one or more components of system 550 and one or more input and/or output devices. Example input devices include, without limitation, keyboards, touch screens or other touch-sensitive devices, biometric sensing devices, computer mice, trackballs, pen-based pointing devices, and the like. Examples of output devices include, without limitation, cathode ray tubes (CRTs), plasma displays, light-emitting diode (LED) displays, liquid crystal displays (LCDs), printers, vacuum fluorescent displays (VFDs), surface-conduction electron-emitter displays (SEDs), field emission displays (FEDs), and the like.
[178] The system 550 also includes optional wireless communication components that facilitate wireless communication over voice and data networks. The wireless communication components comprise an antenna system 610, a radio system 615, and a baseband system 620. In the system 550, radio frequency (RF) signals are transmitted and received over the air by the antenna system 610 under the management of the radio system 615.
[179] In one embodiment, the antenna system 610 may comprise one or more antennae and one or more multiplexors (not shown) that perform a switching function to provide the antenna system 610 with transmit and receive signal paths. In the receive path, received RF signals can be coupled from a multiplexor to a low noise amplifier (not shown) that amplifies the received RF signal and sends the amplified signal to the radio system 615.
[180] In alternative embodiments, the radio system 615 may comprise one or more radios that are configured to communicate over various frequencies. In one embodiment, the radio system 615 may combine a demodulator (not shown) and modulator (not shown) in one integrated circuit (IC). The demodulator and modulator can also be separate components. In the incoming path, the demodulator strips away the RF carrier signal, leaving a baseband receive audio signal, which is sent from the radio system 615 to the baseband system 620.
[181] If the received signal contains audio information, then the baseband system 620 decodes the signal and converts it to an analog signal. The signal is then amplified and sent to a speaker. The baseband system 620 also receives analog audio signals from a microphone. These analog audio signals are converted to digital signals and encoded by the baseband system 620. The baseband system 620 also codes the digital signals for transmission and generates a baseband transmit audio signal that is routed to the modulator portion of the radio system 615. The modulator mixes the baseband transmit audio signal with an RF carrier signal, generating an RF transmit signal that is routed to the antenna system and may pass through a power amplifier (not shown). The power amplifier amplifies the RF transmit signal and routes it to the antenna system 610, where the signal is switched to the antenna port for transmission.
[182] The baseband system 620 is also communicatively coupled with the processor 560. The central processing unit 560 has access to data storage areas 565 and 570. The central processing unit 560 is preferably configured to execute instructions (i.e., computer programs or software) that can be stored in the memory 565 or the secondary memory 570. Computer programs can also be received from the baseband system 620 and stored in the data storage area 565 or in secondary memory 570, or executed upon receipt. Such computer programs, when executed, enable the system 550 to perform the various functions of the present invention as previously described. For example, data storage areas 565 may include various software modules (not shown).
[183] Various embodiments may also be implemented primarily in hardware using, for example, components such as application specific integrated circuits (ASICs), or field programmable gate arrays (FPGAs). Implementation of a hardware state machine capable of performing the functions described herein will also be apparent to those skilled in the relevant art. Various embodiments may also be implemented using a combination of both hardware and software.
[184] Furthermore, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and method steps described in connection with the described figures and the embodiments disclosed herein can often be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described herein generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled persons can implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the invention. In addition, the grouping of functions within a module, block, circuit or step is for ease of description. Specific functions or steps can be moved from one module, block or circuit to another without departing from the invention.
[185] Moreover, the various illustrative logical blocks, modules, functions, and methods described in connection with the embodiments disclosed herein can be implemented or performed with a general purpose processor, a digital signal processor (DSP), an ASIC, FPGA, or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor can be a microprocessor, but in the alternative, the processor can be any processor, controller, microcontroller, or state machine. A processor can also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[186] Additionally, the steps of a method or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium including a network storage medium. An exemplary storage medium can be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor. The processor and the storage medium can also reside in an ASIC.
[187] Any of the software components described herein may take a variety of forms. For example, a component may be a stand-alone software package, or it may be a software package incorporated as a "tool" in a larger software product. It may be downloadable from a network, for example, a website, as a stand-alone product or as an add-in package for installation in an existing software application. It may also be available as a client-server software application, as a web-enabled software application, and/or as a mobile application.
[188] The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles described herein can be applied to other embodiments without departing from the spirit or scope of the invention. Thus, it is to be understood that the description and drawings presented herein represent a presently preferred embodiment of the invention and are therefore representative of the subject matter which is broadly contemplated by the present invention. It is further understood that the scope of the present invention fully encompasses other embodiments that may become obvious to those skilled in the art and that the scope of the present invention is accordingly not limited.

Claims

CLAIMS
What is claimed is:
1. A content management system comprising:
at least one hardware processor;
at least one database comprising a plurality of content items; and
one or more modules that are configured to, when executed by the at least one hardware processor,
receive a search request from a user,
generate a first search result comprising a first plurality of content items that have been identified, based on the received search request, from the plurality of content items in the at least one database,
generate one or more second search results by, for each of one or more of the first plurality of content items in the first search result, identifying a content group that comprises the content item, wherein the content group comprises a second plurality of content items including the content item, and
provide one or more of the first search result and the one or more second search results to the user.
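The two-tier search recited in claim 1 — a first result set of matching items plus, for each hit, the content group that contains it — might be sketched as follows. This is an illustrative sketch only: the `ContentItem` and `ContentGroup` types, the title-substring matching, and all names are hypothetical stand-ins for the claimed database and modules, not details from the specification.

```python
# Illustrative sketch of the two-tier search of claim 1.
# All types and the matching strategy are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class ContentItem:
    item_id: str
    title: str


@dataclass
class ContentGroup:
    group_id: str
    items: list  # the "second plurality of content items"


def first_search(items, query):
    """First search result: items matching the user's search request."""
    q = query.lower()
    return [it for it in items if q in it.title.lower()]


def second_searches(first_result, groups):
    """For each item in the first result, find a content group
    containing that item (one "second search result" per hit)."""
    results = []
    for item in first_result:
        for group in groups:
            if item in group.items:
                results.append(group)
                break
    return results
```

Either or both result sets could then be surfaced to the user, matching the claim's "one or more of the first search result and the one or more second search results."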
2. The system of Claim 1, wherein the at least one database comprises a plurality of content groups, and wherein each of the plurality of content groups comprises a plurality of content items.
3. The system of Claim 2, wherein the one or more modules are further configured to automatically generate one or more of the plurality of content groups by associating a plurality of content items with each other based on metadata that is common to each of the plurality of content items.
4. The system of Claim 3, wherein associating a plurality of content items with each other based on metadata that is common to each of the plurality of content items comprises:
identifying at least one metadata tag that is common to a subset of the plurality of content items in the at least one database; and,
when the subset of content items satisfies one or more criteria, generating a content group comprising the subset of content items.
5. The system of Claim 4, wherein the one or more criteria comprise a threshold number of content items in the subset of content items.
6. The system of Claim 4, wherein the one or more criteria comprise a threshold ratio based on a number of content items in the subset of content items.
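The group-generation logic of claims 3 through 6 — associate items sharing a common metadata tag, and form a group only when the subset meets a threshold count or ratio — could hypothetically be implemented along these lines. The dict-based item shape and the default threshold values are assumptions for illustration, not values from the specification.

```python
# Hypothetical sketch of metadata-based content-group generation
# (claims 3-6). Item shape and default thresholds are illustrative.

from collections import defaultdict


def generate_content_groups(items, min_count=2, min_ratio=0.0):
    """Group content items that share a common metadata tag.

    A group is formed for a tag only when the subset sharing it
    meets a threshold number of items (claim 5) and a threshold
    ratio of the whole collection (claim 6).
    """
    # Index items by each metadata tag they carry.
    by_tag = defaultdict(list)
    for item in items:
        for tag in item["tags"]:
            by_tag[tag].append(item)

    total = len(items) or 1
    groups = {}
    for tag, subset in by_tag.items():
        if len(subset) >= min_count and len(subset) / total >= min_ratio:
            groups[tag] = subset
    return groups
```

Either criterion can be disabled (e.g. `min_ratio=0.0`), reflecting the claims' "one or more criteria" phrasing.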
7. The system of Claim 1, wherein the one or more modules are configured to provide the first search result and the one or more second search results in an integrated user interface.
8. The system of Claim 1, wherein the one or more modules are configured to provide the first search result and the one or more second search results in separate user interfaces.
9. The system of Claim 1, wherein the one or more modules are configured to:
provide a first user interface comprising selectable representations of the first plurality of content items in the first search result;
receive a selection of one of the selectable representations of the first plurality of content items; and,
in response to the selection, provide a second user interface comprising representations of the second plurality of content items of one of the one or more second search results that corresponds to the content group comprising the content item corresponding to the selected representation.
10. The system of Claim 1, wherein the one or more modules are further configured to:
provide a user with one or more user interfaces for generating a custom channel comprising a plurality of content items; and
provide the custom channel to a third-party content distribution system for provision to one or more users of the third-party content distribution system.
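Claim 10's custom-channel flow — a user assembles a channel from content items, and the channel is then handed off to a third-party content distribution system — might be sketched as below. The `Channel` type and `publish_to()` hook are illustrative assumptions; the specification does not prescribe this interface.

```python
# Hypothetical sketch of claim 10's custom-channel flow.
# Channel and publish_to() are illustrative, not from the patent.

from dataclasses import dataclass, field


@dataclass
class Channel:
    name: str
    item_ids: list = field(default_factory=list)

    def add(self, item_id):
        """User-facing step: add a content item to the channel,
        ignoring duplicates."""
        if item_id not in self.item_ids:
            self.item_ids.append(item_id)


def publish_to(channel, distributor):
    """Provide the finished channel to a third-party distribution
    system, represented here as any object with a receive() method."""
    return distributor.receive(channel)
```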
11. A method comprising using at least one hardware processor of a content management system, having a plurality of content items stored therein, to:
receive a search request from a user;
generate a first search result comprising a first plurality of content items that have been identified, based on the received search request, from the stored plurality of content items;
generate one or more second search results by, for each of one or more of the first plurality of content items in the first search result, identifying a content group that comprises the content item, wherein the content group comprises a second plurality of content items including the content item; and
provide one or more of the first search result and the one or more second search results to the user.
12. The method of Claim 11, wherein the content management system has a plurality of content groups stored therein, wherein each of the plurality of content groups comprises a plurality of content items, and wherein the method further comprises automatically generating one or more of the plurality of content groups by:
identifying at least one metadata tag that is common to a subset of the stored plurality of content items; and,
when the subset of content items satisfies one or more criteria, generating a content group comprising the subset of content items.
13. The method of Claim 12, wherein the one or more criteria comprise one or more of a threshold number of content items in the subset of content items and a threshold ratio based on a number of content items in the subset of content items.
14. The method of Claim 11, further comprising providing the first search result and the one or more second search results in an integrated user interface.
15. The method of Claim 11, further comprising:
providing a user with one or more user interfaces for generating a custom channel comprising a plurality of content items; and
providing the custom channel to a third-party content distribution system for provision to one or more users of the third-party content distribution system.
16. A non-transitory computer-readable medium having instructions stored thereon, wherein the instructions, when executed by a processor, cause the processor to:
receive a search request from a user;
generate a first search result comprising a first plurality of content items that have been identified, based on the received search request, from the stored plurality of content items;
generate one or more second search results by, for each of one or more of the first plurality of content items in the first search result, identifying a content group that comprises the content item, wherein the content group comprises a second plurality of content items including the content item; and
provide one or more of the first search result and the one or more second search results to the user.
17. The non-transitory computer-readable medium of Claim 16, wherein the instructions further cause the processor to:
identify at least one metadata tag that is common to a subset of the stored plurality of content items; and,
when the subset of content items satisfies one or more criteria, generate a content group comprising the subset of content items.
18. The non-transitory computer-readable medium of Claim 17, wherein the one or more criteria comprise one or more of a threshold number of content items in the subset of content items and a threshold ratio based on a number of content items in the subset of content items.
19. The non-transitory computer-readable medium of Claim 16, wherein the instructions cause the processor to provide the first search result and the one or more second search results in an integrated user interface.
20. The non-transitory computer-readable medium of Claim 16, wherein the instructions further cause the processor to:
provide a user with one or more user interfaces for generating a custom channel comprising a plurality of content items; and
provide the custom channel to a third-party content distribution system for provision to one or more users of the third-party content distribution system.
PCT/US2014/054383 2013-09-05 2014-09-05 Adaptive process for content management WO2015035230A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361874072P 2013-09-05 2013-09-05
US61/874,072 2013-09-05

Publications (1)

Publication Number Publication Date
WO2015035230A1 true WO2015035230A1 (en) 2015-03-12

Family

ID=52628980


Country Status (1)

Country Link
WO (1) WO2015035230A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10866926B2 (en) 2017-12-08 2020-12-15 Dropbox, Inc. Hybrid search interface

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060218225A1 (en) * 2005-03-28 2006-09-28 Hee Voon George H Device for sharing social network information among users over a network
US20070174389A1 (en) * 2006-01-10 2007-07-26 Aol Llc Indicating Recent Content Publication Activity By A User
US20090249244A1 (en) * 2000-10-10 2009-10-01 Addnclick, Inc. Dynamic information management system and method for content delivery and sharing in content-, metadata- & viewer-based, live social networking among users concurrently engaged in the same and/or similar content
US20120016879A1 (en) * 2010-07-16 2012-01-19 Research In Motion Limited Systems and methods of user interface for image display
US20120155828A1 (en) * 2010-12-21 2012-06-21 Yosuke Takahashi Content Continuous-Reproduction Device, Reproduction Method Thereof, and Reproduction Control Program Thereof




Legal Events

121 (EP): The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 14842971; Country of ref document: EP; Kind code of ref document: A1.

NENP: Non-entry into the national phase. Ref country code: DE.

122 (EP): PCT application non-entry in European phase. Ref document number: 14842971; Country of ref document: EP; Kind code of ref document: A1.