WO2023158703A1 - Advanced interactive livestream system and method with real time content management - Google Patents


Info

Publication number
WO2023158703A1
WO2023158703A1 (PCT/US2023/013157)
Authority
WO
WIPO (PCT)
Prior art keywords: features, video stream, encoded video, group, content
Application number
PCT/US2023/013157
Other languages
French (fr)
Inventor
Maik KAEHLER
Dieter PRIES
Original Assignee
MOON TO MARS LLC (d/b/a LILI STUDIOS)
Application filed by MOON TO MARS LLC (d/b/a LILI STUDIOS) filed Critical MOON TO MARS LLC (d/b/a LILI STUDIOS)
Publication of WO2023158703A1 publication Critical patent/WO2023158703A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27 Server based end-user applications
    • H04N21/274 Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N21/2743 Video hosting of uploaded data from client
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8545 Content authoring for generating interactive applications

Definitions

  • the present invention relates to an automated and computerized system for content management and streaming of video content, where user/viewer experiences can be customized and delivered to the user in real time based on various settings. More particularly, the present invention relates to an automated computerized system that provides content management services with additional user-specific and configurable content and settings. This enhances user participation and also allows the content creator or producer to organize, scale and deliver additional user-specific content, settings and actions (i.e., features) in a more organized manner, and to provide different variations of content with additional customizable user-experience add-ons, settings, apps and features.
  • the content is delivered via a live streaming platform.
  • the user/viewer can enjoy the content-related experience on his or her computerized device that receives the video content and additional experience-changing or customizing data, settings, apps, and other content-related features via the live streaming platform.
  • CMS: content management system
  • One objective of the present invention is to create a highly customizable, fast, scalable, highly dynamic platform for live streams and real time web experiences, which supports and allows multiple creative and diverse uses and outcomes simultaneously (i.e., different content experiences) for different users/viewers of the same base content.
  • the present invention provides an Interactive Platform for Live User Experiences, which is sophisticated and comprehensive, more customized, and faster to adapt/update than other known systems, and which includes some or all of the following features:
  • the present invention achieves one or more of its objectives by decoupling the administrative layer of the CMS from the presentation layer.
  • the present invention implements a computerized live content delivery system that is a headless CMS that focuses entirely on the administrative interface for the content creation and the facilitation of content workflows.
  • the present invention allows event organizers or producers to quickly and efficiently organize and create completely different presentations of the same content, with different features and other settable addon features (for different groups of viewers).
  • Data can be accessed by permitted users (for example, the producers of content) and others through an intermediary server (referred to in this application as the “lili Server”) via an API, enabling the intermediary to customize the frontend completely to the needs of the client, event organizer, producer or event itself.
  • the present invention provides and implements a live video platform that has a headless CMS with live trigger functionality.
  • the present invention provides the ability for the event organizers or producers to add any sort of live trigger-able features through the lili Server’s CMS, using simple drag and drop for the available (and organized) sets of options and features.
  • the present system does not need to modify and develop the database or back-end architecture, which ensures greater efficiency in development.
  • Dashboard Support. Another important feature of the present invention is the organization, implementation and structure of the software that implements a computerized dashboard.
  • the electronic dashboard is a user front-end for the producers, event organizers, and video content creators of the video content to organize and customize distinct types of experiences (i.e., the collection of settable features) for different users/viewers of the same base content.
  • the electronic dashboard may reside on the “lili Server,” where the processor executes computer software that implements various aspects of the present invention.
  • the producer may set up and organize various visual features that are accessible to different viewers of the same show, and, thus, create different viewer experiences for different viewers.
  • apps and visual content may be added or customized for delivery to various viewers of “different versions” of the same basic video.
  • the viewers may be allowed to perform their own actions, like uploading images, operating avatars, or sharing authorized content with others during video streaming. This makes the viewers’ experience more unique and different, and allows the producers to control the additional features and experiences of the live content for the viewers.
  • the producer may also change settable features or activate certain add-on features for the different versions of the video content, or versions of the show, that are delivered during the actual live performance. These options may be organized and manipulated through access to a dashboard on a lili Server. Alternatively, some parts or panels of the dashboard may be installed and operated on the producer’s own computer device/server or some other third-party server. All selectable features in the dashboard are dynamic and real-time based. This allows the present invention to create complex individual sets of settable features. The features are then applied and shown to the viewers at the time of replaying of the video content on the user device, after receiving the streaming video through a content distribution network.
  • Another important feature of the present invention is to allow the producers to change and vary the set experiences live (i.e., in real time), for everyone watching the show (with live add-on features added to the visuals). This provides a highly advanced and efficient level of dynamic livestream customization, which is not offered by other known CMS systems.
  • Yet another important feature of the present invention is the ability to integrate a unique set of interactive applications, with open opportunity for the event organizers to add new applications (i.e., apps) functionalities.
  • apps: applications
  • it allows the event producers to implement the following app features that the user can experience while viewing the video content:
  • the producer or organizer may also trigger activation of certain sets of features shortly before or even while the video content is delivered to the viewers (i.e., in a live environment), while the video is played on the viewer’s computerized device.
  • the producer or organizer may insert a color setting or a trigger into the video, which, at the time of replaying on the user device, will check the state of settings/settable features that are activated and maintained on the dashboard by the producer.
  • the producer may activate a particular feature during live transmission, causing the activated feature to be added to the user experience and live video content display on the user device.
  • Latency in livestream events creates a real issue for accurately timing additional content, overlays or interactions. This is due to the unknown latency in the livestream service, latency in the content display and processing on the user’s device and latency in the cell phone service that delivers the live video content to the viewer’s mobile computerized device.
  • the producer may want to invoke and show some visual elements or features to the user’s display device in order to get the user’s attention or invite the user to interact with the show.
  • the producer, content owner or organizer of the live event may allow each user to “make the noise” (i.e., share the excitement with other users), take and share an image on social media or deliver some “special effects” (i.e., virtual fireworks) at a particular part of the event.
  • a time-sensitive special feature or effect, like, for example, song lyrics or applause from the actual viewers, can be added to the video content over the Internet at certain specific time(s) during the show.
  • the timing and delivery of the time-sensitive special feature or effect may need to be synchronized with (1) the actual timing within the show that is played on the viewer’s computerized device; and (2) with responses of other viewers with respect to the delivered content, which is delivered with latency and differential to different users (i.e., latency in content delivery to different viewer devices).
  • the image of the livestream can arrive at the users’ devices with a latency of up to 60 seconds or more. Even services with relatively stable and small latency still show a latency differential of 20 seconds between the receiving devices. So, even with a near-zero latency streaming service, the real time action triggers and actions will happen with unpredictable latency when the livestream video image is sent to user devices.
  • the running of the show with this level of latency might be acceptable.
  • However, the delivery of highly time-sensitive information, like, for example, song lyrics, is different.
  • the delivery of the additional data or app (like showing the song lyrics during a performance) must be fully synced with the performance of the artist in the show, or with other event features in the content, as the video content is displayed on the viewer’s computerized display. Otherwise, the user may receive the song lyrics totally out of sync with the actual performance of the song (i.e., the lyrics text starts when the artist is halfway through the song). This is a very undesirable situation, which negatively impacts the user experience with the video content and limits the producer’s ability to offer interesting additional content and experiences in live streaming events.
  • the present invention resolves some common issues with latency in the delivery of live content and additional experiences and actions during transmission, particularly during livestreaming events.
  • the present invention applies or inserts certain triggers (for the additional experience actions, apps, features or settings) into the original video stream before it is sent to the streaming service, where it is encoded and prepared for streaming (which causes the latency).
  • the processor executes specific computer software on the intermediary server (i.e., the lili Server) and inserts certain specific codes or information (“lili triggers”) into the video output/content (i.e., the “modified video content”) before the occurrence of latency during transmission and replaying.
  • the user device contains a processor that also executes computer instructions that process and display the delivered livestream video.
  • the software executed by the processor on the user device, or on a server which delivers and processes video content streamed to the user device, (1) searches the video data for the lili triggers; (2) if a trigger is found, communicates with the intermediary server (i.e., the lili Server), which invokes and processes the active set of settable features for the show that are set on the lili dashboard; and then (3) sends instructions to the processor on the user device about the actions, effects and other features, and the timing information about the add-on items or effects that are displayed and made available to the viewer during the show (as it is delivered and replayed on the viewer’s computerized device).
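The client-side trigger scan described above can be sketched in TypeScript. This is an illustrative reconstruction, not the patented implementation: the assumption that a trigger is encoded as a reserved color in a known corner region of the decoded frame, the specific colors, the tolerance value, and the feature-fetching callback are all hypothetical.

```typescript
// Hypothetical sketch of the client-side trigger scan. Pixel layout,
// trigger color encoding, and the lili Server API shape are assumptions.

type Rgb = { r: number; g: number; b: number };

// A trigger is assumed to be a reserved color sampled from a known
// corner region of the frame (e.g. pure magenta = trigger ID 1).
const TRIGGER_COLORS: ReadonlyArray<{ color: Rgb; triggerId: number }> = [
  { color: { r: 255, g: 0, b: 255 }, triggerId: 1 }, // e.g. "start lyrics"
  { color: { r: 0, g: 255, b: 255 }, triggerId: 2 }, // e.g. "open shop panel"
];

// Compression slightly shifts colors, so compare within a tolerance.
function colorsMatch(a: Rgb, b: Rgb, tolerance = 8): boolean {
  return (
    Math.abs(a.r - b.r) <= tolerance &&
    Math.abs(a.g - b.g) <= tolerance &&
    Math.abs(a.b - b.b) <= tolerance
  );
}

// Scan the sampled corner pixel of a decoded frame for a known trigger color.
function detectTrigger(cornerPixel: Rgb): number | null {
  for (const { color, triggerId } of TRIGGER_COLORS) {
    if (colorsMatch(cornerPixel, color)) return triggerId;
  }
  return null;
}

// On detection, the player would ask the intermediary server which features
// are currently active for this show (the callback stands in for that API).
async function onFrame(
  cornerPixel: Rgb,
  fetchFeatures: (id: number) => Promise<string[]>
): Promise<string[]> {
  const triggerId = detectTrigger(cornerPixel);
  if (triggerId !== null) {
    return fetchFeatures(triggerId); // e.g. ["sync-lyrics", "emoji-flow"]
  }
  return [];
}
```

Because the trigger travels inside the encoded video, it arrives with exactly the same latency as the picture, which is the point of the approach described above.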
  • the present invention also utilizes certain specific types of triggers for different actions and effects.
  • the triggering system implements different types of triggers that initiate different types of actions or effects.
  • the present invention also utilizes specific color codes for trigger coding and detection, video object detection triggers, video sound detection triggers, webcam motion detection triggers and microphone sound detection triggers.
  • the system implements different types of actions (based on the received triggers). For example, the system shows an overlay, opens a panel, displays an active object as a part/addition to the video content, displays song lyrics, displays active objects (like, for example, overlays) that allow products in the video to be purchased online through a hyperlink to the merchant’s website.
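The mapping from trigger types to actions can be sketched as a small dispatcher. The type names, the `Action` shapes, and the idea that the producer's dashboard supplies the mapping per show are illustrative assumptions, not the patented data model.

```typescript
// Hypothetical trigger-to-action dispatch. All names are illustrative.

type TriggerType =
  | "color-code"
  | "video-object"
  | "video-sound"
  | "webcam-motion"
  | "mic-sound";

type Action =
  | { kind: "show-overlay"; assetId: string }
  | { kind: "open-panel"; panelId: string }
  | { kind: "show-lyrics"; songId: string }
  | { kind: "active-object"; productUrl: string };

// The producer's dashboard is assumed to supply this mapping per show.
const actionMap = new Map<TriggerType, Action>([
  ["color-code", { kind: "show-lyrics", songId: "song-1" }],
  ["video-object", { kind: "active-object", productUrl: "https://example.com/item" }],
  ["mic-sound", { kind: "show-overlay", assetId: "applause-fx" }],
]);

// Resolve a detected trigger to the action the player should perform;
// unmapped triggers are simply ignored.
function dispatch(trigger: TriggerType): Action | undefined {
  return actionMap.get(trigger);
}
```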
  • the Active Object feature, which is delivered as an add-on to the video content, enables viewers to explore and obtain additional information about objects, like goods, services or information featured in the video stream, by connecting to a website with information about those objects.
  • the viewers may also put the featured products or services on the wishlist, which may also be provided as an add-on feature for the viewers of the video content. From the wishlist, the viewer may purchase the featured and saved items, or may obtain more information about the items (like information about a song, songwriter, place of performance, etc.).
  • Another feature of the present invention is to have the computer software or AI (operated by a processor on a server) recognize an object featured in the video, and then modify the settable features or offer an add-on ecommerce app to connect to a website that describes the object or allows an online purchase.
  • the AI or computer software may perform recognition processing on the featured object.
  • the producer or event organizer may allow the recognized object to be converted to an “active object” that is available with the video content.
  • the system will execute computer instructions on a processor, which will add functionality to the featured experience, like allowing the featured clothing item to become “shoppable”.
  • the system may add an overlay to the featured object, and allow the viewer to click and buy the featured object.
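The recognition-to-"shoppable" step can be sketched as a filter over detection results. The `Recognition` shape, the confidence threshold, and the producer-supplied catalog mapping labels to shop URLs are assumptions for illustration; the patent does not specify a recognition model.

```typescript
// Sketch of converting recognized objects into clickable "active objects".
// Detection result shape and the overlay model are assumptions.

interface Recognition {
  label: string;          // e.g. "jacket"
  confidence: number;     // 0..1, from the (assumed) recognition model
  box: { x: number; y: number; w: number; h: number };
}

interface ActiveObject {
  label: string;
  box: Recognition["box"];
  shopUrl: string;        // hyperlink to the merchant's website
}

// Only high-confidence detections that the producer has mapped to a shop
// URL become clickable overlays; everything else is ignored.
function toActiveObjects(
  detections: Recognition[],
  catalog: Map<string, string>,
  minConfidence = 0.8
): ActiveObject[] {
  return detections
    .filter((d) => d.confidence >= minConfidence && catalog.has(d.label))
    .map((d) => ({ label: d.label, box: d.box, shopUrl: catalog.get(d.label)! }));
}
```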
  • FIG. 1 illustrates a general structure, organization and operation of the computerized system and organization and structure of various components of the CMS in accordance with at least one embodiment of the present invention.
  • FIG. 2 illustrates a general structure, organization and interaction of various components of the CMS, and delivery of the customized content-related experiences to the video content that is provided to the viewers with at least one embodiment of the present invention.
  • FIGS. 3A-C illustrate the structure, organization of various settings, apps, add-on features, setting and other experience-related features in accordance with at least one embodiment of the present invention.
  • FIG. 4 illustrates the scalability features and code, data and process space organization of the present invention.
  • FIGS. 5A-5C illustrate the front end dashboard user interface for operating with settable features for the video content, such as interactive apps, settings and add-on features in accordance with at least one embodiment of the present invention.
  • FIG. 6 illustrates the process flow that addresses the latency problem for the video content in accordance with at least one embodiment of the present invention.
  • the content producer may use his or her desktop or mobile computer to access the lili Server backend dashboard 100 via the Internet, through a LAN or another type of network in order to perform various CMS functions and provide settable features for the viewing of the content producer’s video content.
  • the producer can set up various different settable features for the content viewing experience (for different users, audiences, times, etc.)
  • the backend dashboard 100 on the lili Server organizes different settings, apps, options and add-ons that may be provided to the viewer of the video content.
  • the specific organization and modularization of these items on the dashboard 100 allows the producer to quickly organize, modify and change the user experience for the producer’s content that is delivered to a Web-based desktop or mobile device 180 or as an application to the smartphone, tablet or 3D visor 185.
  • the modified content is displayed on a device in 2D, or as a panoramic, 3D, Virtual Reality (VR) or Augmented Reality (AR) display 190.
  • Core Concept 110 includes, among other items, the actual video content settings and add-ons, including livestream video, regular video and iFrame Shop, where the user can make or receive specific images from the delivered video content (which are approved and may be selected for delivery to the viewer by the producer).
  • Back Interface 120 may include and group such additional content and features as (1) a logo, which is delivered as part of the video; (2) volume settings; (3) multi-dot, which is another variation of a selectable menu with multiple items (i.e., operates like the hamburger menu button); (4) logout; and (5) terms and conditions (T&C) for the event or content which is delivered to the viewer or in which the user participates.
  • Asset Management 130 may include content buttons that operate as executable files and settable features on the lili dashboard.
  • Interactive Apps Panel 140 Management System may include triggers and settable actions or features for the real time changes to the delivered content to the user.
  • Overlays and Triggers 150 include interactive elements that may be delivered as part of the content.
  • Dynamic Asset Library 160 may include images, artwork or other items that may be provided as additional content during the viewing of the content. For example, an authorized image of a particular part of a concert, or a signed image or text from the performer or author is considered a part of the dynamic assets library group 160.
  • Gates 170 include such settable features as registration requirements and information, login requirements and tickets for the event.
  • the Gates 170 allow the event organizer or producer to implement such additional functionality as (1) ticket control; (2) multiple levels of experience, through different “gates” to the event or content; (3) password settings; (4) ID verification settings; and (5) list of specific domains, allowing different experiences for distinct groups of invitees and attendants of the same event, or different experiences for employees of different companies.
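The Gates 170 functionality above can be sketched as a per-gate access check. The `GateConfig` shape, field names, and check order are hypothetical; they simply combine the ticket, password, and domain-list controls the text describes.

```typescript
// Hypothetical shape of one Gates 170 configuration, combining ticket
// control, a password, and a domain allow-list for a single entry point.

interface GateConfig {
  name: string;
  requiresTicket: boolean;
  password?: string;
  allowedDomains?: string[]; // e.g. restrict a gate to one company's staff
  experienceSet: string;     // which set of settable features this gate gets
}

interface GateUser {
  email: string;
  hasTicket: boolean;
  password?: string;
}

// A viewer passes a gate only if every configured control is satisfied;
// controls that are not configured are skipped.
function canEnter(gate: GateConfig, user: GateUser): boolean {
  if (gate.requiresTicket && !user.hasTicket) return false;
  if (gate.password && gate.password !== user.password) return false;
  if (gate.allowedDomains) {
    const domain = user.email.split("@")[1] ?? "";
    if (!gate.allowedDomains.includes(domain)) return false;
  }
  return true;
}
```

Different gates to the same event would simply carry different `experienceSet` values, which is how distinct groups of invitees can get distinct experiences.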
  • Monitoring Analytics feature group 175 provides analytics for the event or content, like, for example, the number of viewers who participated in discussions, chats, or uploaded some specific event-related images, etc.
  • the lili Server Dashboard may include various settings, apps and addons that are grouped together.
  • the accessibility or administrative settable features 300 may include such functions as RSVP, subscribe or create 301, login 302, access code 303 (i.e., password), countdown 304 (timing for different events), age gate 305 (age-specific access limitations, or specific access for kids) and ticketing 309 (allowing purchase of a ticket for the event).
  • the lili Server Dashboard may include such additional accessibility or administrative features as an alternative gate 3001 (which allows multiple access points to the same event, with different user experiences), “add to calendar” 3002 (allowing to add the event to a calendar) and link login 3003.
  • the video settable features 310 may include support for the following video events: a live video 312, scheduled video 314 (the producer may schedule the delivery of the video for a certain time), video on demand 316 and redundancy backup video 318 (allowing the producer to provide a backup video, or allow multiple concurrent video events as a redundancy).
  • the following additional video formats may be supported: a spatial audio 3011 (adjusting the audio based on the orientation of the subject in a 3D space, or changing audio volume when the object or listener changes position), 4K live video 3012 (supporting 4K resolution) and 3D live video 3013 (supporting a 3D video experience).
  • the operational control settable features 320 may include such features as a volume control 322 (allowing the viewer to change or control the volume on the viewer device), multi-dot 324 (menu selection, similar to a hamburger menu), T&C/FAQ 326 (controlling terms and conditions for the user and allowing FAQ information delivery as part of the delivered content), and language 328 (language settable features for the video, closed captions, and translation).
  • the operational control settable features may also include settable features such as a channel jump 3021 (allowing the viewer to change the gate or channel, and view the same show or content with a distinct set of settable features and user experiences), and closed captioning 3022 (delivering closed captioning with the video content).
  • the operation control settable features may also include the setting for switching the video content or close captioning to another language.
  • the settable features 330 include the following add-on functionality and features to be added to the video stream:
  • Chat 331 - allows the viewers to conduct a chat during the show or event;
  • Curated Notes 333 - allows sharing of some viewer notes with other users/viewers (after the notes are curated by the producer);
  • Download Gallery 334 - allows downloading of tutorials, presentations, or children’s drawing templates during the event (or shortly before or after), and/or downloading and sharing of the uploaded user content with other users;
  • Photo Booth 336 - allows some additional artwork presentation, or editing or modification of the content in the video;
  • Sync Lyrics 337 - allows delivery of lyrics in synchronization with the song being played as part of the content;
  • Upload Gallery 338 - allows uploading of the user/viewer’s artwork during the event;
  • Emoji Flow 340 - allows or offers emoji controls and emoji effects in response to certain actions, like virtual fireworks based on the received Make Some Noise feature;
  • Avatar Configure 342 - allows setting up and configuring one’s own avatar for the video content;
  • Motion Trigger 343 - allows activating and controlling the sensor on the user/viewer device to capture the user’s motion (like clapping, dancing, etc.).
  • the captured sensor information may be aggregated for all viewers, and then the video content modified based on the reaction and motions of the viewers (for example, allowing the avatar to dance or make some movements in response).
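The aggregation step above can be sketched as counting distinct viewers per motion kind and reacting once a share of the audience joins in. The event shape, the 50% threshold, and the per-kind counting are assumptions for illustration.

```typescript
// Sketch of aggregating Motion Trigger 343 events across all viewers so
// the show can react (e.g. make an avatar dance). Shapes and threshold
// are assumptions, not the patented implementation.

interface MotionEvent {
  viewerId: string;
  kind: "clap" | "dance" | "wave";
}

// Count distinct viewers per motion kind and report the kinds that
// crossed the given share of the audience.
function crowdReactions(
  events: MotionEvent[],
  audienceSize: number,
  threshold = 0.5
): string[] {
  const viewersByKind = new Map<string, Set<string>>();
  for (const e of events) {
    const set = viewersByKind.get(e.kind) ?? new Set<string>();
    set.add(e.viewerId); // a viewer counts once per kind, however often they move
    viewersByKind.set(e.kind, set);
  }
  const reactions: string[] = [];
  for (const [kind, viewers] of viewersByKind) {
    if (viewers.size / audienceSize >= threshold) reactions.push(kind);
  }
  return reactions;
}
```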
  • add-on functionality and features to be added to the video stream may also include:
  • Active Maps 3032 - providing information about where the event is taking place;
  • Image Panel 3033 - allowing viewers to share images from the video content as part of the “Share the Moment” functionality;
  • Breakout Room 3036 - allowing viewers or participants to separate into smaller groups (during or after the performance);
  • the eCommerce control settable features 350 are set for the whole show, and may include such features as a shopping grid 352 (allowing purchases of the products in a grid display of available products); a shopping wishlist 354 (allowing each of the viewers to set up and maintain a shopping wishlist); shopping in-content 356 (allowing purchase of the featured products within the delivered video content); and a shop iFrame 358 (allowing the user to purchase using third parties’ hyperlinks embedded within the delivered content).
  • the shopping wishlist 354 allows active objects (a button or event) to be set up within the video content, connecting to the merchant’s website through a hyperlink or address embedded within the delivered content.
  • the add-on functionality and features to be added to the video stream may also include (a) the ability to make donations 3051 (to charity events, as bets or auctions); (b) live tipping (allowing the viewers to tip the performers or others for the provided content); and (c) a Shop iFrame 360-degree interactive overlay sphere 3053 (allowing purchases in a 3D space or a 2D imitation of the 3D space).
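The per-viewer shopping wishlist 354 can be sketched as a small keyed collection. The item shape and class API are illustrative assumptions; the only behavior taken from the text is that items carry an embedded merchant hyperlink and can be saved, revisited, and purchased later.

```typescript
// Sketch of the shopping wishlist 354: viewers save active objects from
// the stream and can later follow the embedded merchant link. The data
// shapes are assumptions, not the patented implementation.

interface WishlistItem {
  productId: string;
  label: string;
  merchantUrl: string; // hyperlink embedded within the delivered content
}

class Wishlist {
  private items = new Map<string, WishlistItem>();

  add(item: WishlistItem): void {
    this.items.set(item.productId, item); // re-adding just updates the entry
  }

  remove(productId: string): void {
    this.items.delete(productId);
  }

  list(): WishlistItem[] {
    return [...this.items.values()];
  }
}
```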
  • the overlay and trigger functionality and features 360 may include: a code trigger 361 (for activating any effect or solving latency issues using code triggers that are inserted into the video content); a forced open 362 (forcing a panel to open); a forced close 363 (forcing a panel to close); an active object 364 (clickable action objects, like overlays of a featured object that allow purchase of that object through a hyperlink to the merchant’s website); an active corner 366 (a special button linking to the Apple Music app or website, allowing purchase of Apple Music); an interactive overlay 366 (i.e., a button that operates a trigger); pre-approved social post(s) 367 (opening up social media, like Facebook, and allowing posting or sharing images with others); a 1-Click-Add Apple Music 368 (connecting to Apple Music with one click); and a full page takeover 360 (overlaying the entire screen, like placing an animation over the delivered content).
  • the Overlay and Trigger settable features may also include trivia 3061 (allowing trivia data to be delivered and the viewer to participate in trivia questions during the performance), and polls 3062 (allowing the viewer to participate in polling activities).
  • the Informational Content settable features 370 may include 2048-bit RSA encryption 372, GDPR compliance 374, COPPA compliance 375, Scalable Micro Services settable features 376 (low or full scalability setting), a WAF 377 (firewall presence and settings), a Design Library 378 (a collection where users upload their content), multi-events 379 (multiple event controls) and Watermarking 3071 (setting up privacy or IP-related watermarks as part of the background for the video content).
  • the operation of the front end dashboard user interface for organizing and operating interactive apps, settings, triggers and other add-on features is described with reference to FIGS. 5A-5B.
  • a dashboard interface and settable features are initially selected for a “Set 1” 510 collection of interactive apps, settings and other add-on features that a producer wants to add to the content as an additional user experience.
  • a “Set 1” 510 collection of interactive apps, settings and other add-on features that a producer wants to add to the content as an additional user experience.
  • initially, all boxes for the Core Content 520, Basics 530 and Assets 540 are unselected and appear off.
  • the producer can then click on the various settings, apps and add-on features that appear with a + sign, such as, for example chat, send a question, curated notes, upload gallery, download gallery, share the moment and others that appear as clickable or selectable boxes for the apps 540.
  • the producer, or someone else, sets up the features such as experience apps, settings and add-ons for a particular video content by dragging and dropping the apps or other items that are intended to be selected into the Core Content 520 row.
  • FIG. 5B illustrates a number of selections that have already been made for the “Set 1” 510.
  • the apps Chat, Animated Sync Lyrics, Share the Moment, #Social Hub, Photo Booth and Make Some Noise have been selected and are included in the row for the Core Content 520 and Basics 530.
  • Some of the boxes under the selectable Apps 540 (and under Triggers, Assets, Dynamic Assets, Accounts and Data) appear as being pressed, indicating that they have been selected for the Set 1.
  • the producer has already selected settable features from the apps, triggers, add-ons and settings for Sets 1-3 and is now selecting settable features for Set 4 (516).
  • the boxes under the core content and basics indicate which settings, add-ons and features have been selected for that set.
  • the boxes below can indicate (as clicked or selected buttons) all apps that have been selected for all of the organized sets - Set 1, Set 2, Set 3 and Set 4.
  • FIGS. 5A-C show the front user interface for producers to set up, organize and activate the settable features such as apps, triggers, add-ons and settings for the multiple versions of content that are delivered to the viewers, with selections for Set 1 - Set N processed on the lili Dashboard.
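The Set 1..Set N collections shown in the dashboard can be sketched as a simple data model, with drag-and-drop selection reducing to toggling membership. The `FeatureSet` shape and function names are illustrative assumptions.

```typescript
// Hypothetical data model for the dashboard's feature sets: each set
// picks the apps, triggers and assets for one audience of the same show.

interface FeatureSet {
  name: string;               // e.g. "Set 1"
  apps: Set<string>;          // e.g. "chat", "sync-lyrics", "photo-booth"
  triggers: Set<string>;
  assets: Set<string>;
}

function createSet(name: string): FeatureSet {
  return { name, apps: new Set(), triggers: new Set(), assets: new Set() };
}

// Drag-and-drop selection in the UI would reduce to toggling membership:
// selecting an app adds it to the set, selecting it again removes it.
function toggleApp(set: FeatureSet, app: string): FeatureSet {
  if (set.apps.has(app)) set.apps.delete(app);
  else set.apps.add(app);
  return set;
}
```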
  • the implementation of the software that operates the lili Server settings, Dashboard, user interface, triggers and other features of the present invention is implemented as a single infrastructure, which allows the producers (i.e., each client) to operate and execute code independently of others on the lili Server.
  • the data and actual code used by each producer is independent from others, and their Dashboard and live-time processing front and back ends are executed in a separate code and data space.
  • This provides better scalability and code independence for each client and his or her settable features for the viewing of the content.
  • the software that implements the above services may use React, TypeScript and NestJS, or other similar languages and frameworks, for implementation of the functionality and organization described herein, and may also provide the autoscaling that is intended in at least one embodiment.
  • the Frontend 410, Apps 420, Dashboard 430 and Static (shared) Content 440 can run as independent processes in a separate code and data space on the lili Server, communicating the content from each process to the ECS Cluster 450 through the API service contact point 455.
  • the data, for example uploaded or downloadable add-on data, may be stored in an RDS database 457, accessible from the lili Server.
  • the lili-Server ECS cluster 450 and the security group with the API service 455 communicate with the load balancer 460, which has an HTTP listener 462 and an HTTPS listener 464 for watching and balancing the processing load for each of the processes 410-440 and others that are running on the ECS cluster 450.
  • the communication with the load balancer may be done through the Internet 470.
  • the ACM certificates provide security for each process 410-440.
  • the software that implements and operates the functions and organizational structures discussed herein on the lili Server is built as a range of functional, mostly independent, components that communicate with each other using protocols. Databases, the file system, cache, content delivery network, email service, client applications, hardware, the event trigger service and other services work separately from each other and are managed by a load balancer, which is able to scale the services as needed. Each component can have its own tech stack and, importantly, scales independently based on the set requirements.
  • the software is built on a state-of-the-art codebase and implements auto-scaling through an intelligent combination of microservices (AWS preferred) in one or more embodiments.
  • the experiences are exposed using the global content delivery network (CloudFront) 490 (shown in FIG. 4).
  • the software operates and is configured to replicate tasks based on resources usage or request count, so they are scaling automatically according to the system needs. While the system may use different services from different cloud service companies, it may also utilize AWS services in order to achieve a high level of compatibility and seamless communication between different processes.
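The replicate-on-demand behavior can be illustrated with a toy scaling rule keyed to the two signals named above (resource usage and request count). The thresholds and names below are invented for illustration; in practice this decision would be delegated to the cloud provider's policies (e.g., ECS service auto scaling):

```typescript
// Illustrative scaling rule: add a task replica when CPU usage or request
// count crosses a threshold, remove one when both fall well below it.
// All thresholds are assumptions, not actual configured values.
interface ServiceMetrics {
  cpuUtilization: number;   // fraction of capacity in use, 0..1
  requestsPerTask: number;  // recent request count per running task
}

function desiredTaskCount(current: number, m: ServiceMetrics): number {
  const maxCpu = 0.75;
  const maxRequests = 500;
  if (m.cpuUtilization > maxCpu || m.requestsPerTask > maxRequests) {
    return current + 1;  // scale out: replicate the task
  }
  if (current > 1 && m.cpuUtilization < maxCpu / 2 && m.requestsPerTask < maxRequests / 2) {
    return current - 1;  // scale in, but never below one task
  }
  return current;
}
```

Because each component (Frontend, Apps, Dashboard, Static Content) reports its own metrics, each one scales independently under a rule like this, which is the property the text contrasts with monolithic CMS architectures.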
  • FIG. 6 illustrates a process flow that resolves the latency problem for the video content in accordance with at least one embodiment of the present invention.
  • a video recording 610 is provided to the Gallery 615, and then to the Video post/mix process 620, which receives and processes settings, apps, add-ons and other settable features that are provided, set and managed on the Dashboard on the lili Server 670 by the producer of the video recording 610.
  • the settable features are applied to the video content by the Video Post/Mix process 620.
  • the triggers are inserted into the content (to allow settable and dynamic options, apps and add-ons).
  • color codes for different settable features are inserted into the content.
  • the color codes indicate the specific settable features and/or combinations of settable features for the content (based on the lili Server Dashboard organization and selections, set by the producer).
  • the processor executes software code that determines whether color coding is needed to implement the lili Dashboard features/elements and combinations of features/elements selected by the producer. If so, the processor executes instructions to insert the appropriate color codes into the video stream. In addition, it inserts triggers for the add-ons, features and settings, and for the latency problem.
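The color-code insertion step can be sketched as follows. The palette, marker structure and function names are invented for illustration; the actual stamping of color codes into video frames is not shown:

```typescript
// Illustrative pre-encoding step: decide which Dashboard selections need a
// color code and build the markers to be embedded at the chosen timestamps.
interface DashboardSelection {
  feature: string;       // e.g. "lyrics" for Animated Sync Lyrics
  needsColorCode: boolean;
  timeMs: number;        // position in the video where the feature fires
}

interface TriggerMarker { colorCode: string; feature: string; timeMs: number; }

// Hypothetical feature-to-color mapping; a real system would reserve a
// palette that the viewer-side detector scans frames for.
function colorCodeFor(feature: string): string {
  const palette: Record<string, string> = {
    lyrics: "#FF00FF",
    makeSomeNoise: "#00FFAA",
  };
  return palette[feature] ?? "#FFFFFF";
}

function buildTriggerMarkers(selections: DashboardSelection[]): TriggerMarker[] {
  return selections
    .filter(s => s.needsColorCode)
    .map(s => ({ colorCode: colorCodeFor(s.feature), feature: s.feature, timeMs: s.timeMs }));
}

const markers = buildTriggerMarkers([
  { feature: "lyrics", needsColorCode: true, timeMs: 90_000 },
  { feature: "chat", needsColorCode: false, timeMs: 0 },
]);
```

Note that the markers are built before encoding and distribution, which is what lets them survive the latency introduced downstream.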
  • the video content is then encoded by a hardware encoder 630 or a software encoder 635 (using a multi-stream process). Once encoded, the video content is sent for distribution to the Content Distribution Network (CDN) 640, and to Facebook 642, YouTube 644 and Instagram 646, or other social media platforms.
  • the latency problem (described above) 650 is introduced into communications and live playbacks due to certain delays in delivery of the video content through the Distribution MUX 680 (distribution server) to each viewer device.
  • the content that is distributed to the viewers has the triggers and color settings/indicators embedded as part of the video stream.
  • Each user device is connected to the Server Platform Frontend, or operates an XML app or Java code that checks for triggers.
  • the user/viewer device 690 receives the video stream from the Distribution MUX 680 (with encoded features and settings), communicates with a lili Server, where the content is checked for triggers and the experience features are applied to the content, including triggers that operate to establish the specific instance for the delivery of time-sensitive features (like lyrics to the song that is played in a concert video).
  • the checking of the video stream for color codes and triggers may be performed on a separate server or on the user device with downloaded Java applets (which execute software instructions for this processing).
  • the color codes and triggers may be pre-set and organized by the producer through the lili Server Dashboard, allowing the producer to have multiple sets of pre-arranged added functions and features for each variant of the delivered content.
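The viewer-side checking described above can be sketched as a scan over decoded frames, with the round trip to the lili Server stood in for by a lookup callback (all names are illustrative):

```typescript
// Illustrative viewer-side trigger check: scan decoded frames for an
// embedded trigger and resolve it against the producer's currently active
// Dashboard selections (here represented by a lookup callback).
type ActiveFeatureLookup = (trigger: string) => string[];

interface DecodedFrame {
  timeMs: number;
  embeddedTrigger?: string;  // present only on frames carrying a trigger
}

function applyTriggers(frames: DecodedFrame[], lookup: ActiveFeatureLookup): Map<number, string[]> {
  const actions = new Map<number, string[]>();
  for (const frame of frames) {
    if (frame.embeddedTrigger) {
      // In the described system this lookup is the lili Server (or a local
      // applet) returning the features active for this trigger right now.
      actions.set(frame.timeMs, lookup(frame.embeddedTrigger));
    }
  }
  return actions;
}

const frames: DecodedFrame[] = [
  { timeMs: 5000, embeddedTrigger: "lyrics-start" },
  { timeMs: 6000 },
];
const actions = applyTriggers(frames, t =>
  t === "lyrics-start" ? ["Animated Sync Lyrics"] : []
);
```

Because the lookup happens at playback time, the same embedded trigger can yield different features for different viewers, depending on which set the producer has active for them.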
  • the settable features that are selected by the producer may include an option for activation and delivery of dynamic features that may be activated in real time. This allows the producer to activate one or more special add-ons, settings or features during actual live play and streaming of the video content.
  • the producer may set/select settable features (i.e., effects, settings and actions) for the content that may be activated by the producer by selecting those dynamic actions on the lili Server Dashboard during actual live content delivery to the viewers.
  • the producer may choose to switch content to another gate (i.e., another content delivery point for a different audience) during actual playing of the content.
  • the producer may decide to add or allow some additional features, like purchasing the featured items during actual livestream, or allow some viewers (who pay or purchase some options) to have access to additional settable features and experiences.
  • each of the above steps or elements of the system will comprise computer-implemented aspects, performed by one or more of the computer components described herein.
  • any or all of the steps of collection, evaluation, processing and modeling of the data may be performed electronically.
  • all steps may be performed electronically - either by general or special purpose processors implemented in one or more computer systems such as those described herein.
  • Embodiments of the present system described herein are generally implemented as a special purpose or general-purpose computer including various computer hardware as discussed in greater detail below.
  • Embodiments within the scope of the present invention also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon.
  • Such computer-readable media can be any available media which can be accessed by a computer, or downloadable through communication networks.
  • such computer-readable media can comprise physical storage media such as RAM, ROM, flash memory, EEPROM, CD-ROM, DVD, or other optical disk storage, magnetic disk storage or other magnetic storage devices, any type of removable non-volatile memories such as secure digital (SD), flash memory, memory stick, etc., or any other medium which can be used to carry or store computer program code in the form of computer-executable instructions or data structures and which can be accessed by a computer executing specific software that implements the present invention, or a mobile device.
  • Computer-executable instructions comprise, for example, instructions and data which cause a computer or processing device such as a mobile device processor to perform one specific function or a group of functions that implement the present system and method.
  • the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, networked PCs, minicomputers, mainframe computers, and the like.
  • the invention is practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination of hardwired or wireless links) through a communications network.
  • program modules may be located in both local and remote memory storage devices.
  • An exemplary system for implementing the inventions includes a computing device in the form of a computer, laptop or electronic pad, including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit.
  • the computer will typically include one or more magnetic hard disk drives (also called “data stores” or “data storage” or other names) for reading and writing data.
  • the drives and their associated computer-readable media provide nonvolatile storage of computer-executable instructions, data structures, program modules, and other data for the computer.
  • exemplary environment described herein employs a magnetic hard disk, a removable magnetic disk, removable optical disks, and/or other types of computer readable media for storing data, including magnetic cassettes, flash memory cards, digital video disks (DVDs), Bernoulli cartridges, RAMs, ROMs, and the like.
  • Computer program code that implements most of the functionality described herein typically comprises one or more program modules that may be stored on the hard disk or other storage medium.
  • This program code usually includes an operating system, one or more application programs, other program modules, and program data.
  • a user may enter commands and information into the computer through a keyboard, a pointing device, a script containing computer program code written in a scripting language, or other input devices (not shown), such as a microphone, etc.
  • input devices are often connected to the processing unit through known electrical, optical, or wireless connections.
  • the main computer that effects many aspects of the inventions will typically operate in a networked environment using logical connections to one or more remote computers or data sources, which are described further below.
  • Remote computers may be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically include many or all of the elements described above relative to the main computer system in which the inventions are embodied.
  • the logical connections between computers include a local area network (LAN), a wide area network (WAN), and wireless LANs (WLAN) that are presented here by way of example and not limitation.
  • the main computer system implementing aspects of the invention is connected to the local network through a network interface or adapter.
  • the computer may include a modem, a wireless link, or other means for establishing communications over the wide area network, such as the Internet.
  • program modules depicted relative to the computer, or portions thereof may be stored in a remote memory storage device. It will be appreciated that the network connections described or shown are exemplary and other means of establishing communications over wide area networks or the Internet may be used.
  • a computer server may facilitate communication of data from a storage device to and from processor(s), and communications to computers.
  • the processor may optionally include or communicate with local or networked computer storage which may be used to store temporary or other information.
  • the applicable software can be installed locally on a computer, processor and/or centrally supported (processed on the server) for facilitating calculations and applications.

Abstract

The present invention is directed to an automated computerized video CMS with live trigger functionality. An electronic dashboard provides a user interface for the producers to set up multiple sets of "settable features" such as viewer features, settings, apps and effects that may be added to the actual video content or made available to the viewers during replay and livestreaming of the video content. The producers may set up multiple sets of "settable features" for different audiences of the same basic show, allowing customized and distinct types of experiences. Certain triggers and color codes are inserted into the modified video stream, and utilized during replay to check the dashboard settings and implement or activate different experiences for different viewers. The triggers and color codes also resolve the latency problems with synchronization of the additional/added features with respect to the actual timing within the video. They also resolve the latency problem with delivery of the same additional features to different viewer devices.

Description

ADVANCED INTERACTIVE LIVESTREAM SYSTEM AND METHOD WITH REAL TIME CONTENT MANAGEMENT
Introduction
REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to U.S. Provisional Patent Application No. 63/310,452, filed February 15, 2022, the entire disclosure of which is incorporated herein by reference in its entirety.
FIELD OF THE INVENTION
[0002] The present invention relates to an automated and computerized system for content management and streaming of video content, where user/viewer experiences can be customized and delivered to the user in real time based on various settings. More particularly, the present invention relates to an automated computerized system that provides content management services with additional user-specific and configurable content and settings, which enhances user participation and also allows the content creator or producer to organize, scale and deliver additional user-specific content, settings, actions (i.e., features) in a more organized manner and provide different variations of content, with additional customizable user experience add-ons, settings, apps and features.
BACKGROUND
[0003] In recent years, live streaming of various video content has become a standard way of communicating. For example, a user/viewer may view a particular online lecture, attend a live on-line concert, participate in a live on-line interactive communication with others, or view some other video content that is organized and provided by the video content creator, organizer or producer. References to “producer”, “content creator”, “content organizer”, “video content creator”, “video content organizer” and “event organizers” are used in this application interchangeably.
[0004] In order to allow distribution of different video content versions (or experiences) to different users/viewers, and allow those users/viewers to enjoy various content-related experiences that are set up by the producer for the viewers (or different groups of viewers), the content is delivered via a live streaming platform. Thus, the user/viewer can enjoy the content-related experience on his or her computerized device that receives the video content and additional experience-changing or customizing data, settings, apps, and other content-related features via the live streaming platform.
[0005] Most currently known live streaming platforms are built with a content management system (CMS) that combines the content and the presentation layer in one system. This traditional approach results in a fixed and inflexible output. All content experiences or events based on this CMS will have a similar structure and layout, with little customization, and without a structured way to provide and deliver user-specific add-ons, settings, apps and features as part of the live on-line stream. In other words, the known systems do not allow the producer to create multiple versions of the same video content by simply varying the customizable add-ons, settings, apps and other features that are available to viewers without significant re-writing of the actual content video.
[0006] Moreover, the existing systems in this area usually have a monolithic architecture that only allows such systems to scale as a whole, which can be slow, inefficient and expensive.
[0007] Thus, there is a need for a system that can rapidly customize a platform for different viewer experiences and events, and can also customize in real-time, for live streaming. Moreover, there is a need for a backend computerized system that can implement and allow the event producer to create various features and add-ons, and make changes for every event quickly and efficiently (e.g., full control of placing new items, such as logos, images, or adding new tools, functionalities and settings), where these changes are reflected in a variety of live content viewing experiences for the viewers, but using the same base content.
Summary of the Invention
[0008] One objective of the present invention is to create a highly customizable, fast, scalable, highly dynamic platform for live streams and real time web experiences, which supports and allows multiple creative and diverse uses and outcomes simultaneously (i.e., different content experiences) for different users/viewers of the same base content. In order to achieve this objective, the present invention provides an Interactive Platform for Live User Experiences, which is sophisticated and comprehensive, more customized, and faster to adapt/update than other known systems, and which includes some or all of the following features:
1) is fully (i.e., hardcore) customizable and designable at scale;
2) has rapid iteration protocol rather than long development cycle(s);
3) allows for real time dynamic changes (in the experience in the moment) for the end users (i.e., the viewers), and customization control over the content experience for different users/viewers, or groups of viewers, by the event organizers.
[0009] The present invention achieves one or more of its objectives by decoupling the administrative layer of the CMS from the presentation layer. The present invention implements a computerized live content delivery system that is a headless CMS that focuses entirely on the administrative interface for the content creation and the facilitation of content workflows. As a result, the present invention allows event organizers or producers to quickly and efficiently organize and create completely different presentations of the same content, with different features and other settable add-on features (for different groups of viewers). Data can be accessed by permitted users (for example, the producers of content) and others through an intermediary server (referred to in this application as the “lili Server”) via an API, enabling the intermediary to customize the frontend completely to the needs of the client, event organizer, producer or event itself.
[00010] The present invention provides and implements a live video platform that has a headless CMS with live trigger functionality. The present invention provides the ability for the event organizers or producers to add any sort of live trigger-able features through the lili Server’s CMS, using simple drag and drop for the available (and organized) sets of options and features. Thus, in order to add a new item, a viewer feature, an app or a setting to the featured show or video stream, the present system does not need to modify and develop the database or back-end architecture, which ensures greater efficiency in development.
[00011] Dashboard Support. Another important feature of the present invention is organization, implementation and structure of the software that implements a computerized dashboard.
[00012] The electronic dashboard is a user front-end for the producers, event organizers, and video content creators of the video content to organize and customize distinct types of experiences (i.e., the collection of settable features) for different users/viewers of the same base content. The electronic dashboard may reside on the “lili Server,” where the processor executes computer software that implements various aspects of the present invention. In particular the producer may set up and organize various visual features that are accessible to different viewers of the same show, and, thus, create different viewer experiences for different viewers. Also, there are many apps and visual content that may be added or customized for delivery to various viewers of “different versions” of the same basic video. Moreover, the viewers may be allowed to perform their own actions, like uploading images, operating avatars, or sharing authorized content with others during video streaming. This makes the viewers’ experience more unique and different, and allows the producers to control the additional features and experiences of the live content for the viewers.
[00013] In some embodiments, the producer may also change settable features or activate certain add-on features for the different versions of the video content or versions of the show that are delivered during actual live performance. These options may be organized and manipulated through access to a dashboard on a lili Server. Alternatively, some parts or panels of the dashboard may be installed and operated on the producer’s own computer device/server or some other third-party server.
[00014] All selectable features in the dashboard are dynamic and real-time based. This allows the present invention to create complex individual sets of settable features. The features are then applied and shown to the viewers at the time of replaying of the video content on the user device, after receiving the streaming video through a content distribution network.
[00015] Another important feature of the present invention is to allow the producers to change and vary the set experiences live (i.e., in real time), for everyone watching the show (with live add-on features added to the visuals). This provides a highly advanced and efficient level of dynamic livestream customization, which is not offered by other known CMS systems.
[00016] Yet another important feature of the present invention is the ability to integrate a unique set of interactive applications, with open opportunity for the event organizers to add new applications (i.e., apps) functionalities. As an example, it allows the event producers to implement the following app features that the user can experience while viewing the video content:
1) Share the moment, by real time connection to social media;
2) Custom Photobooth, to take photographs or images during the event;
3) Motion / Sound activated interaction, allowing the user(s) to add their own motion-activated and sound-activated feedback to the content, and sharing this with others.
[00017] The producer or organizer may also trigger activation of certain sets of features shortly before or even while the video content is delivered to the viewers (i.e., in a live environment), while the video is played on the viewer’s computerized device. The producer or organizer may insert a color setting or a trigger into the video, which, at the time of replaying on the user device, will check the state of settings/settable features that are activated and maintained on the dashboard by the producer. Thus, the producer may activate a particular feature during live transmission, causing the activated feature to be added to the user experience and live video content display on the user device.
[00018] Live Latency Solution.
[00019] Another important issue addressed and resolved by the present invention is the problem of “latency” in livestream events. Latency in livestream events and experiences creates a real issue for accurately timing additional content, overlays or interactions. This is due to the unknown latency in the livestream service, latency in the content display and processing on the user’s device and latency in the cell phone service that delivers the live video content to the viewer’s mobile computerized device.
[00020] During a livestream video event, such as for example an online concert, the producer may want to invoke and show some visual elements or features to the user’s display device in order to get the user’s attention or invite the user to interact with the show. For example, the producer, content owner or organizer of the live event may allow each user to “make the noise” (i.e., share the excitement with other users), take and share an image on social media or deliver some “special effects” (i.e., virtual fireworks) at a particular part of the event. Often these situations call for an activation that is very time sensitive because it refers to a specific moment in the running of the show or video, and so the add-on element or feature should appear on all user devices that communicate and receive the specific features or effects. Moreover, the time-sensitive special feature or effect, like for example, song lyrics or applauding from the actual viewers, can be added to the video content over the Internet at certain specific time(s) during the show. The timing and delivery of the time-sensitive special feature or effect may need to be synchronized with (1) the actual timing within the show that is played on the viewer’s computerized device; and (2) with responses of other viewers with respect to the delivered content, which is delivered with latency and differential to different users (i.e., latency in content delivery to different viewer devices).
[00021] During a livestream event, the producer or content creator cannot tell for sure when certain moments occur during the run. Thus, it is not possible to pre-set time-related action triggers that cause the specific functionality or add-on effects to simply appear in the video, when they are not made part of the actual base video.
[00022] Moreover, depending on the streaming service and the technology behind it, the image of the livestream can arrive at the users’ device with a latency of up to 60 seconds or more. Services with relatively stable and small latency still result in a latency of 20 seconds between all receiving devices. So, even with a near-zero latency streaming service, the real time action triggers and actions will happen with unpredictable latency when the livestream video image is sent to user devices.
[00023] For a delivery of basic information element about the show, like comments or chats among viewers, the running of the show with this level of latency might be acceptable. However, in the case of delivery of the highly time-sensitive information, like, for example, song lyrics, the delivery of the additional data or app (like showing the song lyrics during performance) must be fully synced with the performance of the artist in the show or other event features in the content, as the video content is displayed for the viewer on the viewer’s computerized display. Otherwise, the user may receive the song lyrics totally out-of-synch with the actual performance of the song (i.e., the lyrics text starts when the artist is half-way through the song). This is a very undesirable situation, which negatively impacts the user experience with the video content, and limits the producer’s ability to offer interesting additional content and experiences in a live streaming event(s).
[00024] The present invention resolves some common issues with the latency problems in the delivery of live content and additional experiences and actions during the transmission, and particularly during livestreaming events. Among other features, the present invention applies or inserts certain triggers (for the additional experience actions, apps, features or settings) into the original video stream before it is sent to the streaming service, where it is encoded and prepared for streaming (which causes the latency). Thus, prior to the transmission of the video to the streaming service (for distribution), the processor executes specific computer software on the intermediary server (i.e., the lili Server) and inserts certain specific codes or information (“lili triggers”) into the video output/content (i.e., the “modified video content”) before the occurrence of latency during transmission and replaying.
[00025] The user device contains a processor that also executes computer instructions that process and display the delivered livestream video. The software executed by the processor on the user device, or on a server which delivers and processes video content streamed to the user device, (1) searches in the video data for the lili triggers, and (2) if a trigger is found, then communicates with the intermediary server (i.e., the lili Server), which invokes and processes the active set of settable features for the show that are set on the lili dashboard, and then (3) sends instructions to the processor on the user device about the actions, effects and other features, and the timing information about the add-on items or effects that are displayed and made available to the viewer during the show (as it is delivered and replayed on the viewer’s computerized device).
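The reason an in-stream trigger sidesteps the latency problem can be shown with a minimal model, using the example latencies from the text; everything else here is invented for illustration:

```typescript
// Minimal model of why an in-stream trigger stays synchronized while a
// wall-clock push does not. Latency figures echo the examples in the text
// (up to 60 s, differing per device); the rest is illustrative.
interface Device { name: string; streamLatencyMs: number; }

// A feature pushed at the show's wall-clock moment arrives against video
// that is already `streamLatencyMs` behind, so it is early by that amount.
function wallClockSkewMs(device: Device): number {
  return device.streamLatencyMs;
}

// A trigger embedded in the stream is delayed by exactly the same latency
// as the frames it rides with, so the on-screen skew is zero on any device.
function inStreamSkewMs(_device: Device): number {
  return 0;
}

const phone: Device = { name: "phone", streamLatencyMs: 60_000 };
const laptop: Device = { name: "laptop", streamLatencyMs: 8_000 };
```

With wall-clock scheduling, the phone's lyrics would fire a full minute early relative to its video; with the in-stream trigger, both devices see the lyrics at the correct moment in the show despite their differing latencies.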
[00026] The present invention also utilizes certain specific types of triggers for different actions and effects. Among other features, discussed below, the triggering system implements different types of triggers that initiate different types of actions or effects. The present invention also utilizes specific color codes for trigger coding and detection, video object detection triggers, video sound detection triggers, webcam motion detection triggers and microphone sound detection trigger.
[00027] When these color codes for the settable features (such as triggers, settings or add-on features) are found and recognized, the system implements different types of actions (based on the received triggers). For example, the system shows an overlay, opens a panel, displays an active object as a part/addition to the video content, displays song lyrics, displays active objects (like, for example, overlays) that allow products in the video to be purchased online through a hyperlink to the merchant’s website.
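The mapping from recognized trigger types to the actions named above can be sketched as a dispatch; the trigger and action strings here are assumptions for illustration:

```typescript
// Illustrative dispatch from the trigger types named in the text to the
// actions named in the text (overlays, panels, lyrics, active objects).
type TriggerType = "colorCode" | "videoObject" | "videoSound" | "webcamMotion" | "micSound";

function actionFor(trigger: TriggerType, payload: string): string {
  switch (trigger) {
    case "colorCode":    return `show-overlay:${payload}`;
    case "videoObject":  return `activate-object:${payload}`;
    case "videoSound":   return `display-lyrics:${payload}`;
    case "webcamMotion": return `make-some-noise:${payload}`;
    case "micSound":     return `make-some-noise:${payload}`;
  }
}
```

An exhaustive switch over the trigger union makes it explicit that every recognized trigger type resolves to some action, which matches the text's framing of distinct trigger types initiating distinct effects.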
[00028] The Active Object feature, which is delivered as an add-on to the video content, enables viewers to explore and obtain additional information about objects, like goods, services or information featured in the video stream by connecting to a website with information about those objects. The viewers may also put the featured products or services on the wishlist, which may also be provided as an add-on feature for the viewers of the video content. From the wishlist, the viewer may purchase the featured and saved items, or may obtain more information about the items (like information about a song, songwriter, place of performance, etc.).
[00029] Another feature of the present invention is to have computer software or AI (executing on a processor on a server) recognize an object featured in the video, and then modify the settable features or offer an add-on ecommerce app to connect to a website that describes the object or allows an online purchase. Thus, the AI or computer software (like, for example, TensorFlow) may perform recognition processing on the featured object. Once it is recognized, the producer or event organizer may allow the recognized object to be converted to an "active object" that is available with the video content. For instance, the system will execute computer instructions on a processor that add functionality to the featured experience, like allowing a featured clothing item to become "shoppable". As part of the shopping experience, the system may add an overlay to the featured object and allow the viewer to click and buy the featured object.
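The conversion of recognized objects into "active objects" might be sketched as follows. The Detection shape loosely mirrors what an object-detection model (such as one built with TensorFlow) typically returns (class label, confidence score, bounding box), and the shopUrls lookup is a hypothetical producer-maintained mapping, not part of the specification:

```typescript
// A recognized object, as produced by an object-detection model.
interface Detection {
  label: string;
  score: number;
  bbox: [number, number, number, number]; // x, y, width, height
}

// A clickable "active object" overlay with a merchant hyperlink.
interface ActiveObject {
  label: string;
  bbox: [number, number, number, number];
  href: string;
}

// Keep only confident detections for which the producer supplied a shop URL,
// turning each into a shoppable overlay descriptor.
function toActiveObjects(
  detections: Detection[],
  shopUrls: Record<string, string>,
  minScore = 0.6,
): ActiveObject[] {
  return detections
    .filter((d) => d.score >= minScore && shopUrls[d.label] !== undefined)
    .map((d) => ({ label: d.label, bbox: d.bbox, href: shopUrls[d.label] }));
}
```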
[00030] Various other features and benefits of the present invention will become readily apparent to those of ordinary skill in the art from the following detailed description of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS

[00031] The following detailed description, given by way of example and not intended to limit the present invention solely thereto, will best be appreciated in conjunction with the accompanying drawings, wherein like reference numerals denote like elements and parts, in which:
[00032] FIG. 1 illustrates a general structure, organization and operation of the computerized system and organization and structure of various components of the CMS in accordance with at least one embodiment of the present invention.
[00033] FIG. 2 illustrates a general structure, organization and interaction of various components of the CMS, and the delivery of customized content-related experiences added to the video content that is provided to the viewers, in accordance with at least one embodiment of the present invention.
[00034] FIGS. 3A-C illustrate the structure and organization of various settings, apps, add-on features and other experience-related features in accordance with at least one embodiment of the present invention.
[00035] FIG. 4 illustrates the scalability features and code, data and process space organization of the present invention.
[00036] FIGS. 5A-5C illustrate the front end dashboard user interface for operating the settable features for the video content, such as interactive apps, settings and add-on features, in accordance with at least one embodiment of the present invention.
[00037] FIG. 6 illustrates the process flow that addresses the latency problem for the video content in accordance with at least one embodiment of the present invention.

DETAILED DESCRIPTION
[00038] A general structure, organization and operation of the computerized system, and the organization and structure of various components of the Content Management System (CMS), in accordance with at least one embodiment are explained with reference to FIG. 1.
[00039] The content producer may use his or her desktop or mobile computer to access the lili Server backend dashboard 100 via the Internet, through a LAN or another type of network, in order to perform various CMS functions and provide settable features for the viewing of the content producer's video content. In other words, the producer can set up various different settable features for the content viewing experience (for different users, audiences, times, etc.).
[00040] The backend dashboard 100 on the lili Server organizes different settings, apps, options and add-ons that may be provided to the viewer of the video content. The specific organization and modularization of these items on the dashboard 100 allows the producer to quickly organize, modify and change the user experience for the producer's content that is delivered to a Web-based desktop or mobile device 180, or as an application to a smartphone, tablet or 3D visor 185. The modified content is displayed on a device in 2D, panoramic, 3D, Virtual Reality (VR) or Augmented Reality (AR) displays 190.
[00041] The lili Server organizes and processes different settings, apps, options and add-ons into specific groups. For example, Core Concept 110 includes, among other items, the actual video content settings and add-ons, including livestream video, regular video and iFrame Shop, where the user can make or receive specific images from the delivered video content (which are approved and may be selected for delivery to the viewer by the producer).
[00042] Back Interface 120 may include and group such additional content and features as (1) a logo, which is delivered as part of the video; (2) volume settings; (3) multi-dot, which is another variation of a selectable menu with multiple items (i.e., it operates like the hamburger menu button); (4) logout; and (5) terms and conditions (T&C) for the event or content which is delivered to the viewer or in which the user participates.
[00043] Asset Management 130 may include content buttons that operate as executable files and settable features on the lili dashboard. The Interactive Apps Panel 140 management system may include triggers and settable actions or features for real-time changes to the content delivered to the user. Overlays and Triggers 150 include interactive elements that may be delivered as part of the content. Dynamic Asset Library 160 may include images, artwork or other items that may be provided as additional content during the viewing of the content. For example, an authorized image of a particular part of a concert, or a signed image or text from the performer or author, is considered part of the dynamic asset library group 160.
[00044] Gates 170 include such settable features as registration requirements and information, login requirements and tickets for the event. The Gates 170 allow the event organizer or producer to implement such additional functionality as (1) ticket control; (2) multiple levels of experience, through different "gates" to the event or content; (3) password settings; (4) ID verification settings; and (5) lists of specific domains, allowing different experiences for distinct groups of invitees and attendees of the same event, or different experiences for employees of different companies. Finally, the Monitoring Analytics feature group 175 provides analytics for the event or content, like, for example, the number of viewers who participated in discussions, chats, or uploaded some specific event-related images, etc.
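A minimal sketch of a Gates-style access check combining ticket control, password settings and domain lists might look like the following; the field names and the combination logic are illustrative assumptions, not part of the specification:

```typescript
// Hypothetical gate configuration set by the producer on the dashboard.
interface Gate {
  requiresTicket: boolean;
  password?: string;          // access code, if set
  allowedDomains?: string[];  // e.g. company domains for corporate events
}

interface Visitor {
  hasTicket: boolean;
  password?: string;
  email: string;
}

// Returns true when the visitor satisfies every requirement the gate sets.
function mayEnter(gate: Gate, v: Visitor): boolean {
  if (gate.requiresTicket && !v.hasTicket) return false;
  if (gate.password !== undefined && v.password !== gate.password) return false;
  if (gate.allowedDomains) {
    const domain = v.email.split("@")[1] ?? "";
    if (!gate.allowedDomains.includes(domain)) return false;
  }
  return true;
}
```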
[00045] The structure and organization of various settings, apps, add-on features and other experience-related features are further described with reference to FIGS. 3A-C.
[00046] The lili Server Dashboard may include various settings, apps and add-ons that are grouped together. The accessibility or administrative settable features 300 may include such functions as RSVP, subscribe or create 301, login 302, access code 303 (i.e., password), countdown 304 (timing for different events), age gate 305 (age-specific access limitations, or specific access for kids) and ticketing 309 (allowing the purchase of a ticket for the event).
[00047] In addition, in some embodiments, the lili Server Dashboard may include such additional accessibility or administrative features as an alternative gate 3001 (which allows multiple access points to the same event, with different user experiences), "add to calendar" 3002 (allowing the viewer to add the event to a calendar) and link login 3003.
[00048] The video settable features 310 may include support for the following video events: live video 312, scheduled video 314 (the producer may schedule the delivery of the video for a certain time), video on demand 316 and redundancy backup video 318 (allowing the producer to provide a backup video, or allowing multiple concurrent video events as a redundancy). In addition, in some embodiments, the following additional video formats may be supported: spatial audio 3011 (adjusting the audio based on the orientation of the subject in a 3D space, or changing the audio volume when the object or listener changes position), 4K live video 3012 (supporting 4K resolution) and 3D live video 3013 (supporting a 3D video experience).
[00049] The operational control settable features 320 may include such features as volume control 322 (allowing the viewer to change or control the volume on the viewer device), multi-dot 324 (menu selection, similar to a hamburger menu), T&C/FAQ 326 (controlling the terms and conditions for the user and allowing FAQ information delivery as part of the delivered content), and language 328 (language settable features for the video, closed captions, and translation). In addition, the operational control settable features may also include settable features such as a channel jump 3021 (allowing the viewer to change the gate or channel, and view the same show or content with a distinct set of settable features and user experiences), and closed captioning 3022 (delivering closed captioning with the video content). The operational control settable features may also include a setting for switching the video content or closed captioning to another language.
[00050] The settable features 330 include the following add-on functionality and features to be added to the video stream:
(1) Chat 331 - allowing the viewers to conduct a chat during the show or event;
(2) Send a Question 332 - allowing the viewers to send a question to the performer or someone who is featured or provides the featured content;
(3) Curated Notes 333 - allowing viewers to share some notes with other users/viewers (after the notes are curated by the producer);
(4) Download Gallery 334 - allowing the downloading of tutorials, presentations, or children's drawing templates during the event or shortly before or after, and/or allowing downloading and sharing of the uploaded user content with other users;
(5) Share the Moment 335 - allowing the downloading of a screenshot or image of a scene or some other delivered content;
(6) Photo Booth 336 - allowing some additional artwork presentation, or allowing editing or modification of the content in the video;
(7) Sync Lyrics 337 - allowing delivery of lyrics in synchronization with the song being played as part of the content;
(8) Upload Gallery 338 - allowing the user/viewer to upload artwork during the event;
(9) Make Some Noise 339 - allowing the system to control the microphone of the user device during the event or performance and capture the user's audio reaction to the displayed video. After receiving the reactions of multiple viewers, the content may be supplemented with some additional features, like virtual fireworks or audio noise (based on the recorded reactions of multiple viewers);
(10) Emoji Flow 340 - allowing or offering the emoji controls and emoji effects in response to certain actions, like allowing virtual fireworks based on the received Make Some Noise feature;
(11) Active Content 341 - tutorials, with rotating panels to show different specs;
(12) Avatar Configure 342 - allowing the viewer to set up and configure his or her own avatar for the video content; and
(13) Motion Trigger 343 - allowing activation and control of the sensor on the user/viewer device to capture the user's motion (like clapping, dancing, etc.). The captured sensor information may be aggregated for all viewers, and the video content then modified based on the reactions and motions of the viewers (for example, allowing the avatar to dance or make some movements in response).
[00051] In addition, the add-on functionality and features to be added to the video stream may also include:
(14) People Panel 3031 - providing information about performers, artists, authors, etc.;
(15) Active Maps 3032 - providing information about where the event is taking place;
(16) Image Panel 3033 - allowing viewers to share images from the video content as part of the "Share the Moment" functionality;
(17) Smart Schedule 3034 - providing an agenda for the show or a meeting;
(18) Sing Along Video Booth 3038 - activating and controlling the user/viewer camera and microphone; capturing the viewer's audio and video and sending them to the lili Server, where the producer may authorize mixing of the user audio and video with the concert or event performance, after which the mixed content is delivered to the viewer (or multiple viewers);
(19) Social Media Hub 3035 - discussion and sharing information about the show or part of the show through social media;
(20) Breakout Room 3036 - allowing viewers or participants to separate into smaller groups (during or after the performance);
(21) 2nd Live Camera 3037 - controlling a second camera that shows the event from another angle.
[00052] The eCommerce control settable features 350 are set for the whole show, and may include such features as a shopping grid 352 (allowing viewers to make purchases of the products in a grid display of available products); a shopping wishlist 354 (allowing a shopping wishlist to be set up and maintained for each of the viewers); shopping in-content 356 (allowing purchase of the featured products within the delivered video content); and a shop iFrame 358 (allowing the user to purchase using 3rd-party hyperlinks embedded within the delivered content). The shopping wishlist 354 allows the producer to set up active objects within the video content (a button or event), and connect to the merchant's website through a hyperlink or address embedded within the delivered content.
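A minimal sketch of the shopping wishlist 354 behavior described above, assuming a hypothetical item shape that stores the merchant hyperlink used to complete a later purchase:

```typescript
// Hypothetical wishlist item: a product id plus the merchant hyperlink
// embedded with the delivered content.
interface WishlistItem {
  productId: string;
  merchantUrl: string;
}

class Wishlist {
  private items = new Map<string, WishlistItem>();

  add(item: WishlistItem): void {
    this.items.set(item.productId, item);
  }

  remove(productId: string): void {
    this.items.delete(productId);
  }

  // Returns the merchant hyperlink used to purchase the saved item.
  purchaseLink(productId: string): string | undefined {
    return this.items.get(productId)?.merchantUrl;
  }

  list(): WishlistItem[] {
    return [...this.items.values()];
  }
}
```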
[00053] In addition, the add-on functionality and features to be added to the video stream may also include (a) the ability to make donations 3051 (to charity events, as bets or auctions); (b) live tipping (allowing the viewers to tip the performers or others for the provided content); and (c) a Shop iFrame 360-degree interactive overlay sphere 3053 (allowing purchases to be made in a 3D space or a 2D imitation of the 3D space).

[00054] The overlay and trigger functionality and features 360 may include: a code trigger 361 (for activating any effect or solving latency issues using code triggers that are inserted into the video content); a forced open 362 (forcing a panel to open); a forced close 363 (forcing a panel to close); an active object 364 (clickable action objects, like overlays of a featured object that allow purchase of that object through a hyperlink to the merchant's website); an active corner 366 (a special button to the Apple Music app or website, allowing purchase of the Apple Music); an interactive overlay 366 (i.e., a button that operates a trigger); pre-approved social post(s) 367 (opening up social media, like Facebook, and allowing posting or sharing of images with others); a 1-Click-Add Apple Music 368 (connecting to Apple Music with one click); and a full page takeover 360 (overlaying the entire screen, like placing an animation over the delivered content).
[00055] The Overlay and Trigger settable features may also include trivia 3061 (allowing delivery of trivia data and participation in trivia questions during the performance), and polls 3062 (allowing the viewer to participate in polling activities).
[00056] The Informational Content settable features 370 may include 2048-bit RSA encryption 372, GDPR compliance 374, COPPA compliance 375, Scalable Micro Services settable features 376 (low or full scalability setting), WAF 377 (firewall presence and settings), Design Library 378 (a collection where users upload their content), multi-events 379 (multiple event controls) and Watermarking 3071 (setting up privacy or IP-related watermarks as part of the background for the video content).
[00058] Dashboard Interface
[00059] The operation of the front end dashboard user interface for organizing and operating interactive apps, settings, triggers and other add-on features is described with reference to FIGS. 5A-5C.
[00060] A dashboard interface and settable features are initially selected for a "Set 1" 510 collection of interactive apps, settings and other add-on features that a producer wants to add to the content as an additional user experience. Initially, as illustrated in FIG. 5A, all boxes for the Core Content 520, Basics 530 and Assets 540 are unselected and appear off. The producer can then click on the various settings, apps and add-on features that appear with a + sign, such as, for example, chat, send a question, curated notes, upload gallery, download gallery, share the moment and others that appear as clickable or selectable boxes for the apps 540. In other embodiments, the producer (or someone else who sets up the features, such as experience apps, settings and add-ons, for a particular video content) may drag and drop the apps or other items to be selected into the Core Content 520 row.
[00061] FIG. 5B illustrates a number of selections that have already been made for the "Set 1" 510. As illustrated, the apps Chat, Animated Sync Lyrics, Share the Moment, #Social Hub, Photo Booth and Make Some Noise have been selected and are included in the rows for the Core Content 520 and Basics 530. Some of the boxes under the selectable Apps 540 (and under Triggers, Assets, Dynamic Assets, Accounts and Data) appear as being pressed, indicating that they have been selected for Set 1.
[00062] This indicates in a visual manner to the producer which items from the Dashboard have been selected as additional user experiences for the content that is controlled by the Set 1 settings. These settings are used when the video content is delivered as a live stream to the viewers' devices and played there with the selected settings, apps, add-ons and triggers, together with the producer's video content.
[00063] As illustrated in FIG. 5C, the producer has already selected settable features from the apps, triggers, add-ons and settings for Sets 1-3 and is now selecting settable features for Set 4 516. The boxes under the core content and basics indicate which settings, add-ons and features have been selected for that set. The boxes below, however, can indicate (as clicked or selected buttons) all apps that have been selected across all of the organized sets: Set 1, Set 2, Set 3 and Set 4. In other words, FIGS. 5A-C show the front-end user interface (for producers) for setting up, organizing and activating the settable features, such as apps, triggers, add-ons and settings, for the multiple versions of content delivered to the viewers, with selections for Set 1 through Set N processed on the lili Dashboard.
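The per-set selection model behind FIGS. 5A-5C might be sketched as follows, with a hypothetical FeatureSet record per Set 1 through Set N and a union view matching the "selected across all sets" buttons; the shapes are illustrative assumptions:

```typescript
// One dashboard set ("Set 1", "Set 2", ...) with its selected apps/add-ons.
interface FeatureSet {
  name: string;
  selected: Set<string>;
}

// Clicking a box toggles the app in or out of the set.
function toggle(set: FeatureSet, app: string): void {
  if (set.selected.has(app)) set.selected.delete(app);
  else set.selected.add(app);
}

// Union of apps selected across all sets, as shown in the lower rows of FIG. 5C.
function selectedAcrossSets(sets: FeatureSet[]): Set<string> {
  const all = new Set<string>();
  for (const s of sets) for (const app of s.selected) all.add(app);
  return all;
}
```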
[00064] Scalability. The scalability features and organization of the software and processes on the lili Server is illustrated with reference to FIG. 4.
[00065] The software that operates the lili Server settings, Dashboard, user interface, triggers and other features of the present invention is implemented as a single infrastructure, which allows the producers (i.e., each client) to operate and execute code independently of others on the lili Server. Thus, the data and actual code used by each producer are independent of the others, and each producer's Dashboard and real-time processing front and back ends are executed in a separate code and data space. This provides better scalability and code independence for each client and his or her settable features for the viewing of the content. The software that implements the above services may use React, TypeScript and Nest, or other similar languages and frameworks, for implementation of the functionality and organization described herein, and also provides the autoscaling that is intended in at least one embodiment.
[00066] As shown in FIG. 4, the Frontend 410, Apps 420, Dashboard 430 and Static (shared) Content 440 can run as independent processes in separate code and data spaces on the lili Server, communicating the content from each process to the ECS Cluster 450 through the API service contact point 455. The data, for example uploaded or downloadable add-on data, may be stored in an RDS database 457, accessible from the lili Server. In some embodiments, the lili-Server ECS cluster 450 and the security group with the API service 455 communicate with the load balancer 460, which has an HTTP listener 462 and an HTTPS listener 464 for watching and balancing the processing load for each of the processes 410-440 and others that are running on the ECS cluster 450. The communication with the load balancer may be done through the Internet 470. The ACM certificates provide security for each process 410-440.
[00067] In at least one embodiment, the software that implements and operates the functions and organizational structures discussed herein on the lili Server is built as a range of functional, mostly independent, components that communicate with each other using protocols. Databases, the file system, cache, content delivery network, email service, client applications, hardware, the event trigger service and other services work separately from each other and are managed by a load balancer which is able to scale the services as needed. Each component can have its own tech stack and, importantly, scales independently based on the set requirements. The software is built on a state-of-the-art codebase and implements auto-scaling through an intelligent combination of microservices (AWS preferred) in one or more embodiments.
[00068] The experiences (websites, livestreams) are exposed using the global content delivery network (CloudFront) 490 (shown in FIG. 4). The software operates and is configured to replicate tasks based on resource usage or request count, so they scale automatically according to the system needs. While the system may use different services from different cloud service companies, it may also utilize AWS services in order to achieve a high level of compatibility and seamless communication between different processes.
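The replication policy described above (scaling tasks by resource usage or request count) could be sketched as a simple decision function; the thresholds below are illustrative assumptions, not values from the specification:

```typescript
// Decide how many replicas a task should have, given current load signals.
// cpuUsage is a 0..1 fraction; requestsPerSec is the observed request rate.
function desiredReplicas(
  current: number,
  cpuUsage: number,
  requestsPerSec: number,
): number {
  if (cpuUsage > 0.8 || requestsPerSec > 1000) return current + 1; // scale out
  if (cpuUsage < 0.2 && requestsPerSec < 100 && current > 1) return current - 1; // scale in
  return current; // load within bounds: keep replica count
}
```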
[00069] Latency Solution. FIG. 6 illustrates a process flow that resolves the latency problem for the video content in accordance with at least one embodiment of the present invention. A video recording 610 is provided to the Gallery 615, and then to the Video post/mix process 620, which receives and processes settings, apps, add-ons and other settable features that are provided, set and managed on the Dashboard on the lili Server 670 by the producer of the video recording 610.
[00070] The settable features, including settings, are applied to the video content by the Video Post/Mix process 620. The triggers are inserted into the content (to allow settable and dynamic options, apps and add-ons). Also, color codes for different settable features are inserted into the content. The color codes indicate the specific settable features and/or combinations of settable features for the content (based on the lili Server Dashboard organization and selections, set by the producer). At the 620 process step, the processor executes software code that determines whether color coding is needed to implement the lili Dashboard features/elements and combinations of features/elements selected by the producer. If so, the processor executes instructions to insert the appropriate color codes into the video stream. In addition, it inserts triggers for the add-ons, features and settings, and for the latency problem.
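One hypothetical way the 620 process step could encode a selected feature-set combination as a color code inserted into the video stream is to pack a set identifier into a single RGB value; the bit layout below is an assumption for illustration only:

```typescript
type RGB = [number, number, number];

// Pack a 24-bit feature-set identifier into one RGB pixel value that the
// post/mix step inserts into the video stream.
function encodeFeatureSet(setId: number): RGB {
  return [(setId >> 16) & 0xff, (setId >> 8) & 0xff, setId & 0xff];
}

// Recover the feature-set identifier from a detected color code on playback.
function decodeFeatureSet([r, g, b]: RGB): number {
  return (r << 16) | (g << 8) | b;
}
```

The symmetry of the two functions guarantees that whatever combination the producer selects on the dashboard can be recovered exactly on the viewer side.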
[00071] The video content is then encoded by a hardware encoder 630 or a software encoder 635 (using a multi-stream process). Once encoded, the video content is sent for distribution to the Content Distribution Network (CDN) 640, and to Facebook 642, YouTube 644 and Instagram 646, or other social media platforms.
[00072] During distribution of live streaming content, the latency problem (described above) 650 is introduced into communications and live playbacks due to certain delays in delivery of the video content through the Distribution MUX 680 (distribution server) to each viewer device. The content that is distributed to the viewers has the triggers and color settings/indicators embedded as part of the video stream.

[00073] Each user device is connected to the Server Platform Frontend, or operates an XML app or JAVA code that checks for triggers. In another example, the user/viewer device 690 receives the video stream from the Distribution MUX 680 (with encoded features and settings) and communicates with a lili Server, where the content is checked for triggers and the experience features are applied to the content, including triggers that operate to establish the specific instance for the delivery of time-sensitive features (like lyrics to the song that is played in a concert video). In some embodiments, the checking of the video stream for color codes and triggers may be done on a separate server, or on the user device with downloaded Java applets (which execute software instructions for this processing).
[00074] The color codes and triggers may be pre-set and organized by the producer through the lili Server Dashboard, allowing the producer to have multiple sets of pre-arranged added functions and features for each variant of the delivered content. In addition, the settable features that are selected by the producer may include an option for activation and delivery of dynamic features that may be activated in real time. This allows the producer to activate one or more special add-ons, settings or features during actual live play and streaming of the video content. Thus, the producer may set/select settable features (i.e., effects, settings and actions) for the content that may be activated by the producer by selecting those dynamic actions on the lili Server Dashboard during actual live content delivery to the viewers. For instance, the producer may choose to switch content to another gate (i.e., another content delivery point for a different audience) during actual playing of the content. In another example, the producer may decide to add or allow some additional features, like purchasing the featured items during the actual livestream, or allow some viewers (who pay for or purchase some options) to have access to additional settable features and experiences.
[00075] It will be understood by those skilled in the art that each of the above steps or elements of the system will comprise computer-implemented aspects, performed by one or more of the computer components described herein. For example, any or all of the steps of collection, evaluation, processing and modeling of the relevant factors and data may be performed electronically. In at least one exemplary embodiment, all steps may be performed electronically, either by general or special purpose processors implemented in one or more computer systems such as those described herein.
[00076] It will be further understood and appreciated by one of ordinary skill in the art that the specific embodiments and examples of the present disclosure are presented for illustrative purposes only, and are not intended to limit the scope of the disclosure in any way.
[00077] Accordingly, it will be understood that various embodiments of the present system described herein are generally implemented as a special purpose or general-purpose computer including various computer hardware as discussed in greater detail below. Embodiments within the scope of the present invention also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media which can be accessed by a computer, or downloadable through communication networks. By way of example, and not limitation, such computer-readable media can comprise physical storage media such as RAM, ROM, flash memory, EEPROM, CD-ROM, DVD, or other optical disk storage, magnetic disk storage or other magnetic storage devices, any type of removable non-volatile memories such as secure digital (SD), flash memory, memory stick etc., or any other medium which can be used to carry or store computer program code in the form of computer-executable instructions or data structures and which can be accessed by a computer executing specific software that implements the present invention, or a mobile device.
[00078] When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed and considered a computer-readable medium. Combinations of the above should also be included within the scope of computer-readable media. Computer-executable instructions comprise, for example, instructions and data which cause a computer or processing device such as a mobile device processor to perform one specific function or a group of functions that implement the present system and method.
[00079] Those skilled in the art will understand the features and aspects of a suitable computing environment in which aspects of the invention may be implemented. Although not required, the inventions are described in the general context of computer-executable instructions, such as program modules or engines, as described earlier, being executed by computers in networked environments. Such program modules are often reflected and illustrated by flow charts, sequence diagrams, exemplary displays, and other techniques used by those skilled in the art to communicate how to make and use such computer program modules. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types, within the computer. Computer-executable instructions, associated data structures, and program modules represent examples of the program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represent examples of corresponding acts for implementing the functions described in such steps.
[00080] Those skilled in the art will also appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, networked PCs, minicomputers, mainframe computers, and the like. The invention is practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination of hardwired or wireless links) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
[00081] An exemplary system for implementing the inventions, which is not illustrated, includes a computing device in the form of a computer, laptop or electronic pad, including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit. The computer will typically include one or more magnetic hard disk drives (also called "data stores" or "data storage" or other names) for reading from and writing to. The drives and their associated computer-readable media provide nonvolatile storage of computer-executable instructions, data structures, program modules, and other data for the computer. The exemplary environment described herein may employ a magnetic hard disk, a removable magnetic disk, removable optical disks, and/or other types of computer-readable media for storing data, including magnetic cassettes, flash memory cards, digital video disks (DVDs), Bernoulli cartridges, RAMs, ROMs, and the like.
[00082] Computer program code that implements most of the functionality described herein typically comprises one or more program modules that may be stored on the hard disk or other storage medium. This program code, as is known to those skilled in the art, usually includes an operating system, one or more application programs, other program modules, and program data. A user may enter commands and information into the computer through a keyboard, a pointing device, a script containing computer program code written in a scripting language, or other input devices (not shown), such as a microphone. These and other input devices are often connected to the processing unit through known electrical, optical, or wireless connections.
[00083] The main computer that effects many aspects of the inventions will typically operate in a networked environment using logical connections to one or more remote computers or data sources, which are described further below. Remote computers may be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically include many or all of the elements described above relative to the main computer system in which the inventions are embodied. The logical connections between computers include a local area network (LAN), a wide area network (WAN), and wireless LANs (WLAN) that are presented here by way of example and not limitation. Such networking environments are commonplace in office-wide or enterprise-wide computer networks, intranets and the Internet.
[00084] When used in a LAN or WLAN networking environment, the main computer system implementing aspects of the invention is connected to the local network through a network interface or adapter. When used in a WAN or WLAN networking environment, the computer may include a modem, a wireless link, or other means for establishing communications over the wide area network, such as the Internet. In a networked environment, program modules depicted relative to the computer, or portions thereof, may be stored in a remote memory storage device. It will be appreciated that the network connections described or shown are exemplary and other means of establishing communications over wide area networks or the Internet may be used.
[00085] Calculations and evaluations described herein, and equivalents, are, in an embodiment, performed entirely electronically. Other components and combinations of components may also be used to support processing data or other calculations described herein, as will be evident to one of skill in the art. A computer server may facilitate communication of data from a storage device to and from the processor(s), and communications to computers. The processor may optionally include or communicate with local or networked computer storage, which may be used to store temporary or other information. The applicable software can be installed locally on a computer or processor and/or supported centrally (processed on the server) to facilitate the calculations and applications described herein.
[00086] In view of the foregoing detailed description of preferred embodiments of the present invention, it readily will be understood by those persons skilled in the art that the present invention is susceptible to broad utility and application. While various aspects have been described in the context of a preferred embodiment, additional aspects, features, and methodologies of the present invention will be readily discernible from the description herein, by those of ordinary skill in the art. Many embodiments and adaptations of the present invention other than those herein described, as well as many variations, modifications, and equivalent arrangements and methodologies, will be apparent from or reasonably suggested by the present invention and the foregoing description thereof, without departing from the substance or scope of the present invention. Furthermore, any sequence(s) and/or temporal order of steps of various processes described and claimed herein are those considered to be the best mode contemplated for carrying out the present invention.
[00087] It should also be understood that, although steps of various processes may be shown and described as being in a preferred sequence or temporal order, the steps of any such processes are not limited to being carried out in any particular sequence or order, absent a specific indication of such to achieve a particular intended result. In most cases, the steps of such processes may be carried out in a variety of different sequences and orders, while still falling within the scope of the present inventions. In addition, some steps may be carried out simultaneously.

[00088] The foregoing description of the exemplary embodiments has been presented only for the purposes of illustration and description and is not intended to be exhaustive or to limit the inventions to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching.

[00089] The embodiments were chosen and described in order to explain the principles of the inventions and their practical application so as to enable others skilled in the art to utilize the inventions and various embodiments and with various modifications as are suited to the particular use contemplated. Alternative embodiments will become apparent to those skilled in the art to which the present inventions pertain without departing from their spirit and scope.
[00090] Accordingly, the scope of the present inventions is defined by the appended claims rather than the foregoing description and the exemplary embodiments described therein.
[00091] While certain exemplary aspects and embodiments have been described herein, many alternatives, modifications, and variations will be apparent to those skilled in the art. Accordingly, exemplary aspects and embodiments set forth herein are intended to be illustrative, not limiting. Various modifications may be made without departing from the spirit and scope of the disclosure.

Claims

CLAIMS

What is claimed is:
1. An automated computerized video content management system comprising: at least one processor executing a plurality of computer instructions stored in memory, causing the processor to operate the system to perform the following steps:
(1) organizing and providing access to at least one of a setting, an add-on app, an asset, an additional feature and a trigger as a settable group of features for a video stream;
(2) receiving instructions to select one or more features in said settable group of features for a video stream as a first settable group of features;
(3) encoding at least one feature in said first settable group of features selected in the receiving step into a video stream to generate an encoded video stream;
(4) maintaining and allowing automated access to said first settable group of features through a computerized and automated dashboard user front end for managing video content in the video stream;
(5) modifying one or more features in the first settable group of features using the computerized and automated dashboard user front end;
(6) transmitting the encoded video stream to a viewer device, having a processor, a computer memory and a video display, through a content distribution network; and
(7) causing to automatically apply a modified first group of features set in the modifying step to the encoded video stream during replaying of the encoded video stream on the viewer device.
2. The system of claim 1, wherein at least one feature of the modified first group of features is specified in or inserted into the encoded video stream as at least one of a color code or a trigger.
3. The system of claim 2, wherein the at least one trigger inserted into the encoded video stream is processed during replaying of the encoded video stream to provide synchronization for at least one time-sensitive feature of the modified first group of features, said at least one time-sensitive feature executed and provided to the viewer during the replaying of the encoded video stream on the viewer device.
4. The system of claim 3, wherein the encoded video stream is a livestreaming event comprising displaying at least one time-sensitive feature on the viewer device based on the modified first group of features.
5. The system of claim 4, wherein the encoded video stream comprises a musical video content, and wherein at least one time-sensitive feature comprises displaying lyrics that are synchronized with the replaying of said musical video content.
6. The system of claim 1, wherein the encoded video content includes at least one active object feature that allows a viewer to purchase one or more products or services displayed as or included in the encoded video content.
7. The system of claim 6, further comprising a processor on a server, executing computer instructions that cause the server to perform object recognition and identification of an object that is included in the encoded video stream.
8. The system of claim 1, wherein the encoded video stream is one of a live video, a scheduled video, a video-on-demand, a redundancy back-up video, a spatial audio, a 4K resolution video or a 3D live video.
9. The system of claim 1, wherein the processor executes additional computer instructions, causing the processor to operate the system to perform the following additional steps:
(8) receiving instructions to select one or more features in the settable group of features for a video stream as a second settable group of features;
(9) encoding at least one feature in said second settable group of features selected in the receiving step into a video stream to generate a second encoded video stream;
(10) maintaining and allowing automated access to said second settable group of features through a computerized and automated dashboard user front end for managing video content in the video stream;
(11) modifying one or more features in the second settable group of features using the computerized and automated dashboard user front end;
(12) transmitting the second encoded video stream to at least one viewer device, having a processor, a computer memory and a video display, through a content distribution network; and
(13) causing to automatically apply a modified second group of features set in the modifying step to the second encoded video stream during replaying of the second encoded video stream on the at least one viewer device.
10. The system of claim 9, wherein the second encoded video stream, modified with the modified second group of features, is replayed at the same time as the encoded video stream modified with the modified first group of features, and wherein the second encoded video stream is replayed on a different viewer device from the encoded video stream modified with the modified first group of features.
11. An automated method for video content management comprising the steps of:
(1) organizing and providing access to at least one of a setting, an add-on app, an asset, an additional feature and a trigger as a settable group of features for a video stream;
(2) receiving instructions to select one or more features in said settable group of features for a video stream as a first settable group of features;
(3) encoding at least one feature in said first settable group of features selected in the receiving step into a video stream to generate an encoded video stream;
(4) maintaining and allowing automated access to said first settable group of features through a computerized and automated dashboard user front end for managing video content in the video stream;
(5) modifying one or more features in the first settable group of features using the computerized and automated dashboard user front end;
(6) transmitting the encoded video stream to a viewer device, having a processor, a computer memory and a video display, through a content distribution network; and
(7) causing to automatically apply a modified first group of features set in the modifying step to the encoded video stream during replaying of the encoded video stream on the viewer device.
12. The method of claim 11, further comprising: inserting into the encoded video stream at least one feature in the first settable group of features as at least one color code or trigger.
13. The method of claim 12, further comprising: processing the at least one color code or trigger inserted into the encoded video stream during replaying of the encoded video stream, and utilizing the at least one color code or trigger inserted into the encoded video stream to provide synchronization for at least one time-sensitive feature of the modified first group of features, said at least one time-sensitive feature executed and provided to the viewer during the replaying of the encoded video stream on the viewer device.
14. The method of claim 13, wherein the encoded video stream is a livestreaming event, and further comprising: displaying at least one time-sensitive feature on the viewer device based on the modified first group of features.
15. The method of claim 14, wherein the displaying of the at least one time-sensitive feature comprises displaying lyrics that are synchronized with the replaying of a musical video content in the encoded video stream.
16. The method of claim 11, further comprising: including as part of the encoded video content at least one active object feature that allows a viewer to purchase one or more products or services displayed as or included in the encoded video content.
17. The method of claim 16, further comprising: performing object recognition and identification of an object that is displayed as or included in the encoded video content.
18. The method of claim 11, wherein the causing to automatically apply a modified first group of features step is performed on an encoded video stream comprising a live video, a scheduled video, a video-on-demand, a redundancy back-up video, a spatial audio, a 4K resolution video or a 3D live video.
19. The method of claim 11, further comprising:
(8) receiving instructions to select one or more features in the settable group of features for a video stream as a second settable group of features;
(9) encoding at least one feature in said second settable group of features selected in the receiving step into a video stream to generate a second encoded video stream;
(10) maintaining and allowing automated access to said second settable group of features through a computerized and automated dashboard user front end for managing video content in the video stream;
(11) modifying one or more features in the second settable group of features using the computerized and automated dashboard user front end;
(12) transmitting the second encoded video stream to at least one viewer device, having a processor, a computer memory and a video display, through a content distribution network; and
(13) causing to automatically apply a modified second group of features set in the modifying step to the second encoded video stream during replaying of the second encoded video stream on the at least one viewer device.
20. The method of claim 19, further comprising: replaying the second encoded video stream, modified with the modified second group of features, at the same time as the encoded video stream modified with the modified first group of features, and wherein the second encoded video stream is replayed on a different viewer device from the encoded video stream modified with the modified first group of features.
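Although the specification contains no source code, the replay-time behavior recited in claims 11–15 can be sketched briefly. The sketch below is purely illustrative — the `Trigger`, `EncodedStream`, `FeatureGroup`, and `replay` names are invented here and are not part of the claims — but it captures the central design point: triggers encoded into the stream (claims 12–13) are resolved against the *current* feature group at replay time, so a modification made through the dashboard front end (step (5)) is automatically applied the next time the encoded stream is replayed (step (7)), e.g. for lyrics synchronized to a musical video (claim 15).

```python
from dataclasses import dataclass


@dataclass
class Trigger:
    """A synchronization marker encoded into the video stream (claims 12-13)."""
    timestamp: float   # seconds into the stream
    feature_key: str   # which feature in the settable group this trigger fires


@dataclass
class EncodedStream:
    """Stand-in for the encoded video stream: frames plus embedded triggers."""
    frames: list
    triggers: list


class FeatureGroup:
    """A settable group of features, managed through the dashboard front end."""

    def __init__(self, features: dict):
        self._features = dict(features)

    def modify(self, key: str, value):
        # Dashboard edit (step (5)): stored centrally rather than re-encoded
        # into the stream, so it takes effect on the next replay.
        self._features[key] = value

    def get(self, key: str):
        return self._features.get(key)


def replay(stream: EncodedStream, group: FeatureGroup):
    """Replay the stream, applying the current feature group at each trigger.

    Returns the (timestamp, payload) pairs a viewer device would display,
    e.g. lyric lines synchronized to the musical video content (claim 15).
    """
    shown = []
    for trig in sorted(stream.triggers, key=lambda t: t.timestamp):
        payload = group.get(trig.feature_key)
        if payload is not None:
            shown.append((trig.timestamp, payload))
    return shown
```

Because `replay` looks each feature up at display time rather than baking it into the encoded frames, editing a lyric in the feature group changes what every subsequent replay shows without re-encoding or re-transmitting the stream — the mechanism step (7) of claim 11 describes.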
PCT/US2023/013157 2022-02-15 2023-02-15 Advanced interactive livestream system and method with real time content management WO2023158703A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263310452P 2022-02-15 2022-02-15
US63/310,452 2022-02-15

Publications (1)

Publication Number Publication Date
WO2023158703A1 true WO2023158703A1 (en) 2023-08-24

Family

ID=87578992

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/013157 WO2023158703A1 (en) 2022-02-15 2023-02-15 Advanced interactive livestream system and method with real time content management

Country Status (1)

Country Link
WO (1) WO2023158703A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180232705A1 (en) * 2017-02-15 2018-08-16 Microsoft Technology Licensing, Llc Meeting timeline management tool
US20200186887A1 (en) * 2018-12-07 2020-06-11 Starship Vending-Machine Corp. Real-time broadcast editing system and method
KR20210028786A (en) * 2019-09-04 2021-03-15 주식회사 옴니씨앤에스 Individual or group treatment system in virtual reality environment and the method thereof
KR20210153840A (en) * 2020-06-11 2021-12-20 에스케이텔레콤 주식회사 Method, System And Computer Program for Providing Streaming Content Group Play
US20220030284A1 (en) * 2019-12-11 2022-01-27 At&T Intellectual Property I, L.P. Methods, systems, and devices for identifying viewed action of a live event and adjusting a group of resources to augment presentation of the action of the live event


Similar Documents

Publication Publication Date Title
US11818417B1 (en) Computing network for synchronized streaming of audiovisual content
US10033967B2 (en) System and method for interactive video conferencing
US10924796B2 (en) System and methods for interactive filters in live streaming media
US10306324B2 (en) System and method for presenting content with time based metadata
US10999650B2 (en) Methods and systems for multimedia content
US9270715B2 (en) System and method for coordinating display of shared video data
US11070599B2 (en) Broadcasting and content-sharing system
US9043386B2 (en) System and method for synchronizing collaborative form filling
US10484736B2 (en) Systems and methods for a marketplace of interactive live streaming multimedia overlays
US20090070673A1 (en) System and method for presenting multimedia content and application interface
US11272251B2 (en) Audio-visual portion generation from a live video stream
US20190104325A1 (en) Event streaming with added content and context
US20180211342A1 (en) Control of content distribution
US20140173644A1 (en) Interactive celebrity portal and methods
WO2009139903A1 (en) System and method for providing a virtual environment with shared video on demand
CN109379618A (en) Synchronization system and method based on image
TW201001188A (en) Extensions for system and method for an extensible media player
CN102937860A (en) Distribution semi-synchronous even driven multimedia playback
WO2023158703A1 (en) Advanced interactive livestream system and method with real time content management
KR20180041879A (en) Method for editing and apparatus thereof
KR102566312B1 (en) Method and apparatus for providing platform album service
Konstanteli et al. Combining Social, Audiovisual and Experiment Content for Enhanced Cultural Experiences
Bibiloni et al. An Augmented Reality and 360-degree video system to access audiovisual content through mobile devices for touristic applications
Zorrilla et al. Next Generation Multimedia on Mobile Devices
KR20230121700A (en) Method and apparatus for providing platform album service

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23756855

Country of ref document: EP

Kind code of ref document: A1