CA2756471A1 - Social content map, navigation system, and loyalty management system for digital and online video - Google Patents

Social content map, navigation system, and loyalty management system for digital and online video

Info

Publication number
CA2756471A1
Authority
CA
Canada
Prior art keywords
video
viewers
user
digital
interactions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA 2756471
Other languages
French (fr)
Inventor
Hecham Ghazal
Luke Davies
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LEANIN Inc
Original Assignee
LEANIN Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LEANIN Inc filed Critical LEANIN Inc
Publication of CA2756471A1 publication Critical patent/CA2756471A1/en
Abandoned legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The current method for navigating a digital video is linear and consists of moving back and forth with a sequential navigational bar. Our navigation system is based on previous viewers' interactions and notes about the video.

Content-based browsing and navigation in digital video collections have been centered on sequential and linear presentation of images. To improve searchability, nonlinear and non-sequential access into video is essential, especially with long programs. For many programs, this can be achieved by identifying underlying story structures. It can also be achieved by identifying single images in a frame. A new framework of video analysis and associated techniques is proposed to automatically parse long programs and to extract details about every single frame. The proposed analysis and representation contribute to the extraction of frames, scenes, and story units, each representing a distinct locale or event, which cannot be achieved by shot boundary detection alone, nor by a merely algorithmic approach. All of the analysis and representation is organized and easily searchable by both individuals and search engines.

Systems and methods are provided that encourage viewers of digital video to interact with the video and describe what they are seeing and their emotions at the time of viewing. The system will encourage viewers with rewards and loyalty points to make comments throughout the video. Comments will be of different types and have different functionality, such as letting subsequent viewers see them and navigate to a specific scene within a video. Our system also encourages viewers with rewards and loyalty points to make personal bookmarks within a video and describe the details of the video during the bookmark. The system will have a range selector that viewers will use to crop parts of the video and to create their personal bookmarks. The system will also encourage viewers with rewards and loyalty points to share their bookmarks with their friends, and their reward will increase if friends actually view the bookmarked area. Authentication of a user desiring to interact with the video will be performed in various manners, such as OAuth platforms or a proprietary system described within. Once a user is authenticated, they will be afforded various privileges, such as organizing into groups with other viewers, seeing how other viewers have interacted with the video, seeing which viewers have earned points and loyalty rewards, and seeing who the top point and reward earners are.
The system will also collect all of the aggregate data that has been input by viewers and present it visually in a variety of different ways, such as a heat map where the most interacted sections are highlighted. The system will also collect all of the aggregate user input data and create a map of the video which will be accessible to viewers and Internet searchers. The system will also take all of the data input by users and share it with search engines in a manner that increases the search ranking of a video. Additionally, the system will allow Internet searchers to navigate directly to a specific scene or frame within a video. Internet searchers will also be able to navigate directly to the video map.

Description

SOCIAL CONTENT MAP, NAVIGATION SYSTEM, AND LOYALTY
MANAGEMENT SYSTEM FOR DIGITAL AND ONLINE VIDEO

FIELD
The specification relates generally to media delivery, and more specifically to a social content map, navigation and loyalty management system for digital and online video.

BACKGROUND
Conventional web-based video systems that store and display digital video typically only support traditional playback of a video using traditional controls, like those for manually starting, stopping, or pausing a video, and using a scrubber to manually move laterally in a linear fashion through the video.

SUMMARY
Our navigation system is based on previous viewers' interactions and notes about the video. To improve searchability, nonlinear and non-sequential access into video is essential, especially with long programs. For many programs, this can be achieved by identifying underlying story structures. It can also be achieved by identifying single images in a frame. A new framework of video analysis and associated techniques is proposed to automatically parse long programs and to extract details about every single frame. The proposed analysis and representation contribute to the extraction of frames, scenes, and story units, each representing a distinct locale or event, which cannot be achieved by shot boundary detection alone, nor by a merely algorithmic approach. All of the analysis and representation is organized and easily searchable by both individuals and search engines.

Systems and methods are provided that encourage viewers of digital video to interact with the video and describe what they are seeing and their emotions at the time of viewing. The system will encourage viewers with rewards and loyalty points to make comments throughout the video. Comments will be of different types and have different functionality, such as letting subsequent viewers see them and navigate to a specific scene within a video. Our system also encourages viewers with rewards and loyalty points to make personal bookmarks within a video and describe the details of the video during the bookmark. The system will have a range selector that viewers will use to crop parts of the video and to create their personal bookmarks. The system will also encourage viewers with rewards and loyalty points to share their bookmarks with their friends, and their reward will increase if friends actually view the bookmarked area. Authentication of a user desiring to interact with the video will be performed in various manners, such as OAuth platforms or a proprietary system described within. Once a user is authenticated, they will be afforded various privileges, such as organizing into groups with other viewers, seeing how other viewers have interacted with the video, seeing which viewers have earned points and loyalty rewards, and seeing who the top point and reward earners are.
The system will also collect all of the aggregate data that has been input by viewers and present it visually in a variety of different ways, such as a heat map where the most interacted sections are highlighted. The system will also collect all of the aggregate user input data and create a map of the video which will be accessible to viewers and Internet searchers. The system will also take all of the data input by users and share it with search engines in a manner that increases the search ranking of a video. Additionally, the system will allow Internet searchers to navigate directly to a specific scene or frame within a video. Internet searchers will also be able to navigate directly to the video map.

The present invention includes systems and methods for improving navigation of online video by encouraging viewers to describe the contents of the video, attaching those descriptions to a specific frame or event within the video, and subsequently providing those descriptions to future viewers and search engines. All of the viewers' descriptive behavior will be encouraged through a loyalty management system where they are awarded points and rewards.
The features and advantages described in this summary and the following claims and detailed descriptions and drawings are not all-inclusive. Many additional features and advantages will be apparent to one of ordinary skill.

BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments are described with reference to the following figures, in which:
Figure 1 depicts a block diagram of the system architecture, according to a non-limiting embodiment;
Figure 2 depicts how visible interactions may be added on the video, according to a non-limiting embodiment;
Figure 3 depicts a view of the engagement plugin, according to a non-limiting embodiment;
Figure 4 depicts a process flow diagram of the Engagement Plugin functionality, according to a non-limiting embodiment;
Figure 5 depicts how a user can create a comment, according to a non-limiting embodiment;
Figure 6 depicts viewing comments, according to a non-limiting embodiment;
Figure 7 depicts how a social bookmark is created inside a video, according to a non-limiting embodiment;
Figure 8 depicts a user profile, according to a non-limiting embodiment;
Figure 9 depicts a content map, according to a non-limiting embodiment;
Figure 10 depicts a leaderboard, according to a non-limiting embodiment; and
Figure 11 depicts how the system awards points to users, according to a non-limiting embodiment.

DETAILED DESCRIPTION OF DRAWINGS
FIG. 1 is a block diagram of the system architecture in accordance with one embodiment. As illustrated in FIG. 1, a user may play a video inside the Video Player 105 that has an Engagement Plugin 108 or runs the plugin as part of the player. The player container 103 is the shell that holds the video player, which may include, but is not limited to, a web page, standalone application, mobile application, or TV widget. The container may run on a variety of platforms 100 that include, but are not limited to, a web browser, computer, mobile device, set-top box, connected TV, tablet computer, touch device, or gaming console. Once basic video attributes such as a unique identifier are determined, the plugin establishes a connection through the network 120, which is typically the Internet (but may be any other network), to an Engagement Server 110.
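
The following is a minimal sketch, in TypeScript, of how such a plugin might determine basic video attributes and open a connection to the Engagement Server 110; the class name, endpoint path, and payload fields are illustrative assumptions and are not specified in this description:

```typescript
// Hypothetical plugin start-up flow for FIG. 1: resolve the video's unique
// identifier from the host player, then contact the Engagement Server over
// the network (120). Endpoint and field names are assumptions.
interface VideoAttributes {
  videoId: string;     // unique identifier from the video platform or publisher
  durationSec: number; // total length of the video in seconds
}

class EngagementPlugin {
  constructor(private serverBaseUrl: string) {}

  async start(video: VideoAttributes): Promise<void> {
    const res = await fetch(`${this.serverBaseUrl}/sessions`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ videoId: video.videoId }),
    });
    if (!res.ok) throw new Error(`Engagement Server error: ${res.status}`);
  }
}
```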

The primary role of the Engagement Server 110 is to capture the different types of interactions on video made by users. It may also include other typical services such as load balancing, caching services, and other features. The server exposes its functionality through an API that can be accessed by the plugin 108 or any other system. The API stores all interactions and engagements in a database engine that creates a relationship between the user, the interaction, and the video. The user identity is maintained and retrieved from authentication services 150, which can be an internal system or a publicly accessible platform such as Facebook™ Connect, Twitter™ Connect, or any other system, service, tool, or technology that provides a mechanism to identify users.
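
As a rough illustration of the relationship the database engine might store between the user, the interaction, and the video, the record shape below is a sketch only; every field name is an assumption rather than part of this description:

```typescript
// Illustrative Engagement DB record: each interaction ties a user identity
// (resolved through the authentication services 150) to a video and to the
// interaction's own details.
type InteractionType = "comment" | "bookmark";

interface InteractionRecord {
  id: string;
  type: InteractionType;
  videoId: string;      // unique identifier of the video
  userId: string;       // identity from the authentication service
  startSec: number;     // where in the video the interaction begins
  endSec?: number;      // bookmarks span a range; comments may be a single point
  text: string;         // comment body or bookmark description
  categories: string[]; // hash-tag derived subjects and categories
  createdAt: string;    // ISO 8601 timestamp
}
```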

Video content is published by a video publisher 130 and distributed on the Internet by a video platform 140. Each of the services can function as an independent system offered by different organizations in one embodiment, or by just one closed system on a local network in another.

FIG. 2 illustrates one embodiment of how visible interactions may be added on the video. When a video starts playing 200, a unique identifier for this video (such as an ID from the video platform 140 or publisher 130, a URL, or any other mechanism that would uniquely identify a video) is sent to an embodiment of an Engagement DB 202 to check whether any prior interactions exist 203 for that video. If none exist, a record can be created 204 for that video for future interactions. If some interactions do exist for the video, they are retrieved with their metadata (such as, but not limited to, start and end time of the interaction, the user generating the interaction, other users interacting with the interaction, and a unique URL). The metadata is then used to segment the interactions into timespans 206. When multiple visible interactions exist at the same visible timespan, they are grouped together and visibly highlighted to the user. The order by which interactions are bubbled and sorted is algorithmically determined 209 on the basis of the interaction's weight, its popularity, and the relevancy of the user that made it to the user viewing it. With the order determined, a visual control is rendered 210, as in one embodiment of FIG. 6.
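
A minimal sketch of the grouping and ordering steps of FIG. 2 follows; the timespan bucketing and the scoring formula are assumptions, since the description only states that weight, popularity, and relevancy to the current viewer are combined algorithmically:

```typescript
// Group prior interactions into fixed-length timespans, then order each
// group by a simple combined score. All names and weights are placeholders.
interface TimedInteraction {
  startSec: number;   // when the interaction occurs in the video
  weight: number;     // weight assigned by the system
  popularity: number; // e.g. likes or replies from other viewers
  relevancy: number;  // relevance of the author to the current viewer
}

function groupByTimespan(
  interactions: TimedInteraction[],
  spanSec: number,
): Map<number, TimedInteraction[]> {
  const groups = new Map<number, TimedInteraction[]>();
  for (const i of interactions) {
    const bucket = Math.floor(i.startSec / spanSec);
    const list = groups.get(bucket) ?? [];
    list.push(i);
    groups.set(bucket, list);
  }
  // Sort each timespan's interactions so the highest-scoring bubble up first.
  for (const list of groups.values()) {
    list.sort((a, b) => score(b) - score(a));
  }
  return groups;
}

function score(i: TimedInteraction): number {
  return i.weight + i.popularity + i.relevancy; // placeholder weighting
}
```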

FIG. 3 is one embodiment of a view of the engagement plugin. In this view, users are presented with six options: ContentMap 301, Bookmark 302, Comment 303, User Profile 304, On/Off 305, and Information 306.

When a user selects ContentMap 301, the system presents a number of different views that display various statistical data about the current video or other digital content they are consuming. Some examples of statistical data that users will see include, but are not limited to, the Heat Map screen, which is detailed in FIG. 9, and the Leaderboard, which is detailed in FIG. 10. The purpose of the ContentMap is to provide users with enhanced navigation within the video or other digital content, and a clear understanding of how all previous viewers have interacted with and viewed the video or other digital content. In the ContentMap section, producers of video and other digital content will also have an opportunity to display both statistical data and metadata.

When a user selects Bookmark 302, they are presented with a number of different methods for selecting, describing, and sharing a specific portion of video or other digital content. One embodiment of how a user can bookmark is illustrated in FIG. 7. Bookmarking allows users to attach metadata to specific scenes or sections. It can also allow users to make recommendations to their social network. It can also allow users to catalog and organize their personal preferences, or favorite scenes or selections.
When a user selects Comment 303, they are presented with an opportunity to make a comment (see FIG. 5) or share an opinion or observation about the video or other digital content.

When a user selects Personal Profile 304, they are presented with a number of different views (see FIG. 8) and data about all the video or other digital content they have consumed, how they have interacted with the different content, and suggestions and recommendations for their peers and social networks.
Users can also see how they have scored, where they have collected points, and other statistical data about their viewing and interaction history.

When a user selects On/Off 305, the comment stream is either turned on or off.
On/Off will also apply to other data and statistical streams that may be present in other embodiments.
When a user selects Information 306, they will be given a detailed overview of how to navigate through our technology.

FIG. 4 is a process flow diagram illustrating one embodiment of the Engagement Plugin functionality.

FIG. 5 is one embodiment of how a user can create a comment. In this illustration, the user has already selected 'Comment' from the engagement toolbar, which cues the system to display the comment box (501). If the digital content is video, it will pause and the user will be given an opportunity to share their thoughts and feelings about the content. The user merely types the comment into the comment field 503. 502 signifies the area where the user's avatar appears. The system collects the user avatar during the authentication process. During the process of creating a comment, a user can use hash (#) tags to organize their comments into subjects and categories. When a hash tag is written at the beginning of a word (e.g., #Niagara), the system automatically recognizes that the user wants to create a subject or category called Niagara (see section 504). The category is then displayed in its own section at the bottom of the comment, and the hash tag will not be visible to subsequent viewers. When a hash tag is used at the end of the comment (see section 505), it remains visible. When a user has completed inputting their comment, they press enter 506, which shares it with all subsequent viewers of the video, or subsequent consumers of the digital content.
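
A small sketch of the hash-tag handling just described is shown below; the exact parsing rules are inferred from the text, and the function name is an assumption:

```typescript
// A leading hash tag (e.g. "#Niagara") becomes a hidden category displayed
// in its own section, while a hash tag at the end of the comment is still
// recorded as a category but remains visible in the comment body.
function extractCategories(comment: string): { visibleText: string; categories: string[] } {
  const categories: string[] = [];
  const words = comment.trim().split(/\s+/);
  // Strip leading tags from the displayed text and record them as categories.
  while (words.length > 0 && words[0].startsWith("#")) {
    categories.push(words.shift()!.slice(1));
  }
  // Trailing tags stay visible but are still recorded as categories.
  for (const w of words) {
    if (w.startsWith("#")) categories.push(w.slice(1));
  }
  return { visibleText: words.join(" "), categories };
}

// extractCategories("#Niagara what a view #travel")
// -> { visibleText: "what a view #travel", categories: ["Niagara", "travel"] }
```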

FIG. 6 is one embodiment of viewing comments. When users of our system watch a video or other digital content that has been previously viewed and interacted with, they will see the comments of previous viewers, as illustrated in one embodiment at 615. Users will be able to see who made the comment by viewing the author's avatar 620. Users will be able to flag the comment 635 if they feel it is distasteful, vulgar, or not appropriate for all other users. Users will be able to read the actual comment 630. Users will be able to vote on whether they 'like' the comment 640.
Users will be able to reply to the comment 645. Users will be notified that there are additional comments 650 on the same frame or section of the content. Users will be able to go sequentially back 610 to the previous comment. Users will be able to navigate sequentially forward to the next comment 660.

FIG. 7 illustrates one embodiment of how a social bookmark is created inside a video. When the user initiates an action to create a bookmark 700, a visual mechanism is rendered on the user's screen 701 to allow them to visually select parts or all of the video that they are interested in bookmarking. Users are asked to enter descriptive titles for their bookmarks 703 to facilitate future searches through the user's library or the interactions on the video. As the user is entering the bookmark description, it is scanned for hash tags 704. When found, hash tags are automatically extracted 705 and added to the user and video categories.

Titles, descriptions, and hash tags then become the basis for searching within the video. As more users interact with the video by commenting and bookmarking, the metadata for search becomes more meaningful and precise. Each interaction has a time frame or a time span and a unique URL that is generated to facilitate access to a specific scene within a video.
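
As an illustration of how a unique URL for a specific scene might be formed, the sketch below assumes a simple query-string scheme; the parameter names and URL format are not taken from this description:

```typescript
// Build a shareable deep link that points at an interaction's timespan
// within a video. Scheme and parameter names are illustrative assumptions.
function buildSceneUrl(
  videoPageUrl: string, // e.g. the video's canonical watch page
  interactionId: string,
  startSec: number,
  endSec: number,
): string {
  const params = new URLSearchParams({
    i: interactionId,
    start: String(startSec),
    end: String(endSec),
  });
  return `${videoPageUrl}?${params.toString()}`;
}

// buildSceneUrl("https://example.com/watch/abc123", "bm42", 95, 130)
// -> "https://example.com/watch/abc123?i=bm42&start=95&end=130"
```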

In one embodiment, bookmark-type interactions prompt the user to make the bookmark public 706, which has the end result of publishing the bookmark to the video's ContentMap (FIG. 9) and sharing it on social networks 706A. If the user wants to make the bookmark available to only a group of people 707, a visual display of the user's social connections 707A allows them to easily select who should get notified 707B. The user also has the option of making the bookmark private 708, which means that it does not get shared on social networks with other people.
Whatever the user's choice is, the interaction details (which may include, but are not limited to, start and end time of the interaction, URL, sharing parameters, user information, weight, categories, etc.) are sent through the API to the Engagement DB 710. At that point, the bookmark window is hidden, and the video resumes the state it had before sharing 711, facilitating an easy and transparent engagement.
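
The sketch below illustrates what the interaction details sent through the API might look like once the user has chosen public, group, or private sharing; the endpoint path, field names, and payload shape are assumptions for illustration only:

```typescript
// Hypothetical bookmark payload posted to the Engagement DB (710) via the
// server's API. Field names and the endpoint path are assumptions.
type Visibility = "public" | "group" | "private";

interface BookmarkPayload {
  videoId: string;
  userId: string;
  startSec: number;          // start of the selected range
  endSec: number;            // end of the selected range
  title: string;             // descriptive title entered by the user
  categories: string[];      // hash tags extracted from the description
  visibility: Visibility;    // 706 public, 707 group, 708 private
  notifyUserIds?: string[];  // selected connections when visibility is "group"
}

async function saveBookmark(apiBase: string, payload: BookmarkPayload): Promise<void> {
  const res = await fetch(`${apiBase}/interactions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Failed to save bookmark: ${res.status}`);
}
```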

FIG. 8 is one embodiment of a user profile 800. A user profile is a record of all of that user's interactions with any of the video or other digital content that is available through the system. For example, this embodiment shows all of the content that this user has consumed, which is called a 'library'. Within the library, the user profile also shows which network 811, or publisher 811, supplied the content. This embodiment also shows the name of the program or series, a thumbnail of the actual episode 821 within the series, and the name of the episode 822 that the user has watched and interacted with. Additionally, the user profile shows how many interactions 823 (both bookmarks and comments) this user has made with each episode in the library. Additionally, users can click on the arrow 824 of the user profile and see all the interactions of that user for the selected episode.
With extended use, users will accumulate a lot of data on their profiles, so we added search functionality 810 so that other users can easily navigate a user profile.
The score of the user 801 and the total viewed content 802 of the user are also displayed on the user profile. In this embodiment, viewers can also share 803 a user profile with other users who may have similar tastes or preferences. This embodiment also gives users the ability to "Follow" 804 other users. By following a user, their comment stream will be prioritized over that of a non-followed user.

FIG. 9 is an embodiment of a social view of the video or other digital content, called a content map. When a user decides to navigate through a video to get to a specific scene, a visual display is laid on top of the video 900. The window header displays basic information about the video (such as, but not limited to, title, series, episode number, and brief description) and a list of tabs 910. In this embodiment, a HeatMap view is displayed and is split into two sections. The first one displays information about the entire video. This information helps the user visualize how the video has been enjoyed by all the previous people who have watched it. In one embodiment, 911 shows an interaction density graph. Each bar represents a segment within the video, and the sum of all the bars represents the entire video. Underneath the graph, a status bar 920 displays a summary of the interactions (which may include, but is not limited to, total bookmarks and total comments) on the entire video. Other visual representations of community interactions on frames and areas within a frame are also possible.
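
A minimal sketch of how the interaction density graph 911 could be computed follows; splitting the video into equal segments and counting interaction start times per segment is an assumption, since the description does not fix the segmentation:

```typescript
// Count how many interactions fall into each of a fixed number of equal
// segments; each count corresponds to one bar of the density graph.
function interactionDensity(
  startsSec: number[], // start times (in seconds) of all interactions
  durationSec: number, // total video length in seconds
  bars: number,        // number of bars/segments to draw
): number[] {
  const counts = new Array<number>(bars).fill(0);
  const segment = durationSec / bars;
  for (const t of startsSec) {
    const idx = Math.min(bars - 1, Math.floor(t / segment));
    counts[idx] += 1;
  }
  return counts;
}

// interactionDensity([5, 12, 14, 58], 60, 6) -> [1, 2, 0, 0, 0, 1]
```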

The second part of the heatmap 930 displays a detailed view of the interaction selected by the range selector 912. The selection status bar 930 can show information that is specific to the region selected. A share button 931 generates a unique URL of the heatmap and its content to facilitate transportability by others (such as users, search engines, and other systems). A detailed view of each interaction (i.e., a comment 942 or a bookmark 943) is displayed underneath the selection status bar. Each interaction displays basic information about the user (such as an avatar, picture, icon, badge, or any other) 940, and the details of the interaction itself 941 (such as, but not limited to, title, description, text, popularity, timestamp, where the comment was made on the video, duration, etc.).

FIG. 10 is one embodiment of a leaderboard. As users interact with video and other digital content, the system awards them points for their actions. The system is built to be easily customizable, and many different point structures are possible. As users accumulate points, they will want a visual representation of their score, which is provided by the leaderboard 1000. Reference 1010 represents where the user is ranked relative to all of the other viewers of this specific episode or selection of digital content. Reference 1011 shows the avatar of the user associated with the rank.
Reference 1012 shows the name of the user. Reference 1013 shows the points accumulated by the user on this specific episode or selection of digital content. References 1014 and 1015 show the single interaction that earned the associated user the most points.
Reference 1016 shows the total points awarded so far for this specific episode or selection of digital content.
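
The sketch below shows one way the per-episode leaderboard 1000 might be assembled from accumulated points; the point totals and ranking rule are placeholders, since the description leaves the point structure customizable:

```typescript
// Rank viewers of one episode by their accumulated points, highest first.
interface ViewerScore {
  userId: string;
  name: string;
  points: number; // points earned on this specific episode or selection
}

function buildLeaderboard(scores: ViewerScore[]): ViewerScore[] {
  return [...scores].sort((a, b) => b.points - a.points);
}

const leaderboard = buildLeaderboard([
  { userId: "u1", name: "Alex", points: 120 },
  { userId: "u2", name: "Sam", points: 210 },
]);
// leaderboard[0].name === "Sam" (top earner for this episode)
```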

FIG. 11 is one embodiment of how the system awards points to users.

Claims (7)

1. A method for enabling a digital or online video viewer to add a comment to the frame level of a video for subsequent viewers to see.
2. A method for enabling a digital or online video viewer to select a specific range of the video and add a descriptive narrative and share that selection with other specified viewers, or just save it for themselves.
3. A Loyalty management program that utilizes points and various rewards to encourage viewers of digital or online video to interact with the video and add descriptions of the contents of the video.
4. An authentication mechanism that allows the system to identify the viewer.
5. A method for organizing the aggregate viewer interactions and displaying them inside the video in various ways, including a video map, a map of hot spots or most-interacted spots, and a leaderboard detailing all of the viewers' points.
6. A control panel for enabling digital or online video viewers to organize and view all of their interactions with videos. This control panel exists within the video itself.
The control panel will also manage navigating all of their friends' interactions.
7. A tool bar that exists inside the video that serves as a gateway for viewer social interaction.
CA 2756471 2011-10-27 2011-10-28 Social content map, navigation system, and loyalty management system for digital and online video Abandoned CA2756471A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161552311P 2011-10-27 2011-10-27
US61/552311 2011-10-27

Publications (1)

Publication Number Publication Date
CA2756471A1 true CA2756471A1 (en) 2013-04-27

Family

ID=48173953

Family Applications (1)

Application Number Title Priority Date Filing Date
CA 2756471 Abandoned CA2756471A1 (en) 2011-10-27 2011-10-28 Social content map, navigation system, and loyalty management system for digital and online video

Country Status (1)

Country Link
CA (1) CA2756471A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10587921B2 (en) * 2016-01-08 2020-03-10 Iplateia Inc. Viewer rating calculation server, method for calculating viewer rating, and viewer rating calculation remote apparatus

Similar Documents

Publication Publication Date Title
US8756333B2 (en) Interactive multicast media service
O'Brien et al. Mixed‐methods approach to measuring user experience in online news interactions
US8826357B2 (en) Web-based system for generation of interactive games based on digital videos
KR101588814B1 (en) Media-based recommendations
US7840563B2 (en) Collective ranking of digital content
US8473845B2 (en) Video manager and organizer
NL2007887C2 (en) Contextual video browsing.
US20100153848A1 (en) Integrated branding, social bookmarking, and aggregation system for media content
US20150331856A1 (en) Time-based content aggregator
US20170318344A9 (en) Ranking User Search and Recommendation Results for Multimedia Assets Using Metadata Analysis
US20130138677A1 (en) System and Process for Connecting Media Content
US20100274674A1 (en) Media navigation system
US20100332497A1 (en) Presenting an assembled sequence of preview videos
Zhang et al. Real-time Internet news browsing: Information vs. experience-related gratifications and behaviors
US20180130499A9 (en) Method for intuitively reproducing video contents through data structuring and the apparatus thereof
Al-Hajri et al. Visualization of personal history for video navigation
Cox et al. Developing metrics to characterize Flickr groups
Xu et al. Exploring factors influencing travel information-seeking intention on short video platforms
KR20080067589A (en) Mobile multimedia content distribution and access
CA2756471A1 (en) Social content map, navigation system, and loyalty management system for digital and online video
US20110271190A1 (en) Fight engine
Leggett et al. Exploring design options for interactive video with the Mnemovie hypervideo system
JP2010283434A (en) Program, device, and method for managing and reproducing moving picture
Fu Towards a model of implicit feedback for web search
CN113806567B (en) Recommendation method and device for search terms

Legal Events

Date Code Title Description
FZDE Dead

Effective date: 20141028