WO2020152506A1 - A system and method for interactive content viewing - Google Patents


Info

Publication number: WO2020152506A1
Authority: WO (WIPO, PCT)
Prior art keywords: content, user, devices, live event, server device
Application number: PCT/IB2019/052108
Other languages: French (fr)
Inventors: Harsha Chaturvedi, Anil Kumble
Original assignee: Spektacom Technologies Private Limited
Application filed by Spektacom Technologies Private Limited
Priority to GB1906556.4A (GB2578498B)
Priority to AU2019203202A (AU2019203202A1)
Publication of WO2020152506A1
Priority to AU2021200238A (AU2021200238B2)

Classifications

    • H04N 21/4722: End-user interface for requesting additional data associated with the content
    • G06F 16/74: Information retrieval of video data; browsing; visualisation therefor
    • G06N 3/02: Neural networks
    • G06N 3/08: Neural networks; learning methods
    • H04N 21/21805: Source of audio or video content enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N 21/2187: Live feed
    • H04N 21/235: Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N 21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices
    • H04N 21/252: Processing of multiple end-users' preferences to derive collaborative data
    • H04N 21/2743: Video hosting of uploaded data from client
    • H04N 21/4532: Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • H04N 21/4622: Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
    • H04N 21/4826: End-user interface for program selection using recommendation lists
    • H04N 21/6587: Control parameters, e.g. trick play commands, viewpoint selection
    • H04N 21/858: Linking data to content, e.g. by linking a URL to a video object, by creating a hotspot

Definitions

  • the present invention generally relates to the field of computer related technologies, and more particularly, to a method and system for interactive content viewing.
  • television is one of the means commonly used by users for viewing broadcasted content.
  • the advent of video over the Internet is starting to change the control relationship between a user and a broadcaster. Apart from the television, the users can view the broadcasted content onto their handheld devices or other portable electronic devices.
  • the users are provided with freedom of timing and with controls like pausing and storing the content. Still, the nature of that viewing experience is not significantly improved.
  • the users are still not able to interact with the broadcasted content, such as accessing information about a particular part of the broadcasted content, sharing their views about the broadcasted content among their peers, and other such interactive activities. For instance, conventionally, users cannot access information about a particular player during live telecast of a cricket match, nor can the users directly share/exchange their views about the player during live telecast of the cricket match, and other similar activities.
  • the conventional systems also lack capabilities of providing a personalized viewership experience where users have an option to view, access, purchase, procure, or gamify any information that is being broadcasted in real time.
  • the existing ecosystems in the market are suited only for manual analysis by commentators of the sport events, where comparison of a current event is performed with historical data. There is no such provision where viewers/commentators/users/audience/coaches/media/stakeholders, etc., can search on the fly for historical data relevant to a current event and use the results for varied purposes like evaluating performance of a sportsman, predicting occurrence of an event, coaching, broadcasting, etc.
  • a method performed by a server device for facilitating interactive content viewing includes receiving first content which is broadcasted to a plurality of user devices on a primary communication channel from one or more edge devices.
  • the first content is captured from a live event by the one or more edge devices communicatively coupled with one or more input devices located at the live event.
  • the method further includes receiving, from the one or more user devices, in real time, at least one user input based on the broadcasted content received by the one or more user devices. Further, the method includes generating second content related to the broadcasted first content and the at least one user input received from the one or more user devices, and transmitting the generated second content to the one or more user devices, in response to the received at least one user input, on one or more secondary communication channels.
  • a method performed by a user device for facilitating interactive content viewing includes receiving, on a primary communication channel, broadcasted first content which is captured from a live event, wherein the first content is captured by one or more edge devices communicatively coupled with one or more input devices located at the live event.
  • the method further includes displaying the received first content, and receiving one or more user inputs on the displayed first content, wherein the one or more user inputs correspond to user actions performed on the user device in response to display of the first content.
  • the method includes transmitting a request to a server device, the request indicating data specified through the received one or more user inputs, and receiving, in response to the transmitted request, second content on one or more secondary communication channels, wherein the second content is based on the transmitted one or more user inputs and the broadcasted first content.
  • a server device for facilitating interactive content viewing.
  • the server device includes a receiver, a processor and a transmitter.
  • a receiver is configured to receive first content from one or more edge devices, the first content being broadcasted to a plurality of user devices on a primary communication channel, wherein the first content is captured from a live event by the one or more edge devices communicatively coupled with one or more input devices located at the live event.
  • the receiver is further configured to receive, from the one or more user devices, in real time at least one user input based on the broadcasted content received by the one or more user devices.
  • the processor is configured to generate second content related to the broadcasted first content and at least one user input received from the one or more user devices and the transmitter is configured to transmit, in response to the received at least one user input, the generated second content to the one or more user devices on one or more secondary communication channels.
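  • As a purely illustrative sketch (not the claimed implementation), the receive/generate/transmit flow of the server device could be organised as below; all names (ServerDevice, handle_user_input, the channel identifiers) are hypothetical.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ServerDevice:
        # Channel carrying the broadcast first content, and channels used for replies.
        primary_channel: str = "primary-0"
        secondary_channels: list = field(default_factory=lambda: ["secondary-1"])
        first_content: Optional[dict] = None      # latest first content from the edge device(s)

        def receive_first_content(self, content: dict) -> None:
            """Receiver: store first content captured at the live event by the edge devices."""
            self.first_content = content

        def handle_user_input(self, user_id: str, user_input: str) -> dict:
            """Processor: generate second content related to the first content and the input."""
            second_content = {
                "related_to": self.first_content.get("event") if self.first_content else None,
                "answer": f"response to '{user_input}'",   # placeholder for the generated answer
            }
            # Transmitter: reply on a secondary channel, distinct from the primary channel.
            return {"channel": self.secondary_channels[0], "to": user_id, "payload": second_content}

    server = ServerDevice()
    server.receive_first_content({"event": "cricket match", "frame": 1021})
    print(server.handle_user_input("user-7", "show player stats"))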
  • a user device for facilitating interactive content viewing.
  • the user device includes a transceiver, a display and a processor.
  • the transceiver is configured to receive, on a primary communication channel, broadcasted first content which is captured from a live event, wherein the first content is captured by one or more edge devices communicatively coupled with one or more input devices located at the live event.
  • the display is configured to display the received first content.
  • the processor is configured to receive, through an I/O interface, one or more user inputs on the displayed first content, wherein the one or more user inputs correspond to user actions performed on the user device in response to display of the first content.
  • the transceiver is further configured to transmit a request to a server device, the request indicating data specified through the received one or more user inputs, and it is further configured to receive, in response to the transmitted request, second content on one or more secondary communication channels, wherein the second content is based on the transmitted one or more user inputs and the broadcasted first content.
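  • A corresponding client-side sketch is shown below; the server call is stubbed and all names are hypothetical, so it only illustrates the request/response shape described above.

    # Hypothetical user-device flow: display the broadcast first content, forward a user
    # input as a request, and overlay the second content returned on a secondary channel.
    def stub_server_request(user_input: str) -> dict:
        return {"channel": "secondary-1", "payload": {"answer": f"response to '{user_input}'"}}

    class UserDevice:
        def __init__(self):
            self.screen = []                                     # stand-in for the display (310)

        def display_first_content(self, content: dict) -> None:
            self.screen.append(("first", content))

        def on_user_input(self, text: str) -> None:
            response = stub_server_request(text)                 # request to the server device (106)
            self.screen.append(("second", response["payload"]))  # second content superimposed

    device = UserDevice()
    device.display_first_content({"event": "cricket match", "frame": 1021})
    device.on_user_input("how many centuries has player xxx scored?")
    print(device.screen)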
  • the system includes a server device and a user device which are described as above.
  • FIG. 1 illustrates an architecture of the system of the present invention.
  • FIG. 2 illustrates a block diagram of a server device, in accordance with an embodiment of the present invention.
  • FIG. 3 illustrates a block diagram of a user device, in accordance with an embodiment of the present invention.
  • FIG. 4 illustrates an exemplary view of a screen of a user device, in accordance with an embodiment of the present invention.
  • FIG. 5 illustrates a method performed by a server device, in accordance with an embodiment of the present invention.
  • FIG. 6 illustrates a method performed by a user device in accordance with an embodiment of the present invention.
  • a system and method for interactive content viewing is disclosed.
  • the system allows viewers to interact with the system in order to access the content in a way desired by the user, thereby facilitating interactive content viewing for the viewers.
  • in FIG. 1, an architecture of the system (100) of the present invention is disclosed.
  • the system (100) includes one or more edge devices which are located at a place where a live event is happening.
  • one edge device (104) has been described and shown herein in the description and drawings; however, a plurality of edge devices may be present at various locations inside the place depending on the coverage required for the live event.
  • the place may refer to, but is not limited to, an appropriate location depending on the event which is taking place at that location.
  • the place herein may refer to a cricket stadium if the event is a cricket match. In another exemplary embodiment, the place may refer to a studio if the event is a singing show.
  • the edge device (104) is present in a non-obtrusive way at the place where the event is taking place. In one exemplary embodiment, the edge device (104) may be located in stumps used in a cricket match. In another exemplary embodiment, the edge device (104) may be present at a boundary of the cricket stadium.
  • the edge device (104) may receive data from one or more input devices (102) 1 ... (102) n .
  • the one or more input devices (102) 1 ... (102) n may refer to one or more sensors located at the place where the event is happening. The one or more sensors are located in such a manner so as to sense data corresponding to various activities happening at the event.
  • the one or more input devices may include sensors present on a bat of each player, microphone present at the stumps, cameras installed at various locations inside the cricket stadium, and the like.
  • the one or more input devices (102) 1 ... (102) n may refer to iPlay devices which are present at the place where the event is happening.
  • the iPlay devices may receive data from other input devices, such as sensors and in real time may transmit data to the edge device (104).
  • the one or more input devices (102) 1 ... (102) n may refer to sensors known in the art, which may provide an audio-video data corresponding to sensed events happening at the location where the live event is taking place.
  • the one or more input devices (102) 1 ... (102) n may transmit the sensed data to the edge device (104) in real-time.
  • the transmission by the input devices (102) 1 ... (102) n to the edge device (104) may be a wireless transmission.
  • the transmission by the input devices (102) 1 ... (102) n to the edge device (104) may be a wired transmission.
  • the edge device (104) receives data from the one or more input devices (102) 1 ... (102) n and further wirelessly transmits first content corresponding to the received data to a broadcasting computer (108) and a server device (106).
  • the broadcasting computer (108) and the server device (106) may be remotely placed from each other, and the server device (106) may receive the first content from the broadcasting computer (108).
  • the broadcasting computer (108) may form part of the server device (106) so as to be one entity and the server device (106) may directly receive the first content from the edge device (104).
  • the one or more input devices (102) 1 ... (102) n and the edge device (104) may be low powered devices which are capable of transmitting and receiving small or weak signals across short distances to reduce battery consumption.
  • the edge device (104) may transmit strong signals across long distances, in predetermined conditions for a predetermined time duration.
  • the one or more input devices (102) 1 ... (102) n and the edge device (104) may be configured to operate in a stealth mode so as to avoid interference with signals from other devices present at the live event. Due to the use of the edge device (104) along with the plurality of input devices (102) 1 ... (102) n , the system of the present invention is efficient, power saving, and cost-effective, as compared to conventional prior art systems in which all of the devices directly transmit data to a server or any other remote computer.
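  • A minimal sketch of this aggregation pattern is given below, assuming a hypothetical EdgeDevice class: short-range, low-power sensor readings are buffered and forwarded upstream in fewer, batched transmissions.

    import time

    class EdgeDevice:
        def __init__(self, batch_size: int = 4):
            self.buffer = []
            self.batch_size = batch_size

        def on_sensor_reading(self, sensor_id: str, reading: dict) -> None:
            """Collect weak/short-range transmissions from the input devices (102)."""
            self.buffer.append({"sensor": sensor_id, "ts": time.time(), **reading})
            if len(self.buffer) >= self.batch_size:
                self.flush()

        def flush(self) -> None:
            """Send one stronger, longer-range transmission carrying the batched first content."""
            first_content = {"event": "cricket match", "samples": self.buffer}
            send_upstream(first_content)     # to the broadcasting computer (108) / server (106)
            self.buffer = []

    def send_upstream(first_content: dict) -> None:
        print(f"forwarding {len(first_content['samples'])} samples upstream")

    edge = EdgeDevice()
    for i in range(8):
        edge.on_sensor_reading(f"bat-sensor-{i % 2}", {"swing_speed": 30 + i})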
  • the server device (106) may include one or more processors, such as a processor 202, one or more memory, such as memory 204, a receiver 206, and a transmitter 208.
  • the processor 202 may be communicably coupled with the receiver 206 to receive first content from the edge device (104) or data from other devices.
  • the transmitter 208 may be configured to transmit second content generated by the processor 202.
  • the processor 202 is in communication with the memory 204, wherein the memory 204 includes program modules such as routines, programs, objects, components, data structures and the like, which perform particular tasks to be executed by the processor 202.
  • the broadcasting computer (108) may live telecast an event, wherein the telecast is based on the first content from the edge device (104).
  • the server device (106) may be communicably connected with the broadcasting computer (108) to receive the first content.
  • the server device (106) may be communicably connected with storage devices (112) to provide the received first content to the storage devices (112).
  • the storage devices (112) are used to store historical data about the event which is taking place, such as data about the game which is being played at the cricket stadium, statistical data about the event which is happening, analytical data about the event which is happening, and the like.
  • the storage devices (112) may store historical data about users who are part of the event, such as players playing the match, or a singer performing at a music concert, and the like.
  • the server device (106) is further connected to one or more user devices (110) 1 ... (110) n .
  • the storage devices (112) may be one or more databases archiving historical data or may be one or more computers of one or more external agencies storing records and historical data.
  • the user device (110) may include one or more processors, such as a processor (302), one or more memory, such as memory (304), a transceiver (306), one or more I/O interfaces, such as an I/O interface 308 and a display 310.
  • the processor (302) may be communicably coupled with the transceiver (306) to receive signals from the server device (106), from the broadcasting computer (108), and/or from other devices, wherein the signals include first content, second content, third content, and/or additional sub-content. Further, the transceiver (306) may be configured to transmit signals generated by the processor (302), wherein the signals generated by the processor (302) may include user inputs.
  • the processor (302) is in communication with the memory 304, wherein the memory (304) includes program modules such as routines, programs, objects, components, data structures and the like, which perform particular tasks to be executed by the processor (302).
  • the user device (110) may be connected to other user devices either wirelessly or by using the I/O interface (308).
  • a display 310 may be utilized to receive user inputs from a user using the user device 110, wherein the display 310 may be a touch screen display.
  • the I/O interfaces 308 may include a variety of software and hardware interfaces, for instance, interface for peripheral device(s) such as a keyboard, a mouse, a scanner, an external memory, a printer and the like.
  • the one or more user devices (110) 1 ... (110) n may refer to but not limited to mobile phones, tablets, laptops, personal digital assistant, and other handheld devices. In another embodiment, the one or more user devices (110) 1 ... (110) n may refer to smart display devices, such as smart televisions, LEDs, LCDs, smart kiosks, and the like.
  • the user device (110) may receive the broadcasted first content from the broadcasting computer (108) or from the server device (106), on a primary communication channel. In one embodiment, the primary communication channel may refer to but not limited to a radio channel.
  • the first content is displayed by the user device (110) on the display (310).
  • the user device (110) may receive one or more user inputs through the I/O interfaces (308) or through interaction of a user on the display (310) which is a touch screen display.
  • the one or more user inputs may correspond to one or more actions performed by a user on the user device (110), such as voice commands, commands through the I/O interfaces 308, and/or commands through the display (310).
  • one or more user inputs may correspond to the commands through a remote control device which may be connected with the user device (110).
  • the one or more user inputs may correspond to the commands through gesture control functionality supported by the user device (110).
  • the user device (110) may comprise an application programming interface (API) which provides means for interaction such as a voice control unit (VCU).
  • the VCU is controlled by the processor (302) which enables the user device (110) to receive inputs from a user in preferred language of the user.
  • the VCU allows the user to speak in his native language which is decoded by the processor (302) to generate user inputs which may be sent to the server device (106) for further processing.
  • the VCU further generates an output for the user in a preferred natural language of the user. For example, if a user wants to interact using a German language, then content received from the server device (106) may be output to the user in German language.
  • the output may also be displayed on the display (310) of the user device (110).
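  • A hedged sketch of such a voice path is shown below; the speech-to-text and translation steps are stand-in functions only, and no specific speech or translation library is implied.

    def speech_to_text(audio: bytes, language: str) -> str:
        # Placeholder decoder for speech in the user's native language.
        return "Wie viele Punkte hat Spieler xxx erzielt?" if language == "de" else "..."

    def translate(text: str, source: str, target: str) -> str:
        return text if source == target else f"[{target}] {text}"   # placeholder translation

    def vcu_round_trip(audio: bytes, user_language: str, query_server) -> str:
        query = speech_to_text(audio, user_language)                # decode native-language speech
        server_reply = query_server(translate(query, user_language, "en"))
        return translate(server_reply, "en", user_language)         # answer in the preferred language

    reply = vcu_round_trip(b"...", "de", lambda q: f"answer to: {q}")
    print(reply)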
  • the user device (110) may refer to a smart large screen display devices installed at a place where the live event is happening.
  • the user device (110) may be located remotely from the server device (106) and may receive communication from the server device (106) through a wireless connection or a wired connection.
  • the user device (110) may also be located in vicinity to the one or more input devices (102) 1 ... (102) n .
  • the server device (106) may receive a request from the user devices (110) 1 ... (110) n indicating data specified through the one or more user inputs. Based on the received request, the server device (106) may communicate with the data storage devices (112) to extract data therefrom. The data extracted from the data storage device (112) may be processed by the processor (202) to generate second content which is related to the first content which was broadcasted to the one or more user devices (110) 1 ... (110) n . In one embodiment, the processor (202) of the server (106) utilizes neural networks to process one or more user inputs received from the user devices (110) 1 ... (110) n and to determine historical data which is to be extracted from the data storage devices (112). In one embodiment, the neural networks are models having a specific set of algorithms which are executed to implement machine learning and artificial intelligence.
  • the processor (202) uses the neural network to parse the received user inputs and, based on the parsed user inputs, may determine which of the data storage devices (112) needs to be communicated with to extract the required historical data.
  • the processor (202) uses the neural network to break down the user inputs into smaller fragments which are further analyzed by the neural network to interpret the meaning of the user inputs.
  • the server device (106) may further use machine learning, and artificial intelligence through neural networks to analyze the extracted historical data to generate the second content, wherein the historical data refers to but not limited to statistical data and analytical data related to the first content.
  • the neural network correlates the parsed and analyzed query components with the historical analytical data and historical statistical data to generate the second content requested by a user in the user request.
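  • The query path described in the last few items is sketched below for illustration only; a trained neural network would perform the parsing and routing, and simple keyword rules stand in for it here. The store names and sample data are invented.

    HISTORICAL_STORES = {
        "statistics": {"player xxx": {"centuries": 12, "wickets": 34}},
        "analytics":  {"player xxx": {"strike_rate_trend": "rising"}},
    }

    def parse(user_input: str) -> dict:
        tokens = user_input.lower().split()                    # stand-in for NN parsing
        store = "statistics" if "how" in tokens or "many" in tokens else "analytics"
        entity = "player xxx" if "xxx" in user_input else None
        return {"store": store, "entity": entity}

    def generate_second_content(user_input: str, first_content: dict) -> dict:
        query = parse(user_input)                              # break the input into fragments
        history = HISTORICAL_STORES[query["store"]].get(query["entity"], {})
        return {"related_to": first_content["event"], "query": user_input, "history": history}

    print(generate_second_content("How many centuries has player xxx scored?",
                                  {"event": "cricket match"}))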
  • the server device (106) may receive additional sub-content from the edge device (104) based on the user inputs parsed by the processor (202) using the neural network.
  • the processor (202) may use the neural network to determine whether any additional content is requested by the users in the user inputs.
  • the neural network parses the user inputs and breaks down the parsed user inputs to understand requirements specified in the user inputs.
  • if the processor (202), using the neural network, determines that additional content is requested by the user, then the server device (106) may instruct the edge device (104) to transmit the required additional content to the server device (106).
  • the server device (106) may search the requested additional content in the received first content itself, once the server device (106) identifies the additional content which is requested in the user requests.
  • the received or generated additional sub-content is related to the first content which was broadcasted by the broadcasting computer (108) to the one or more user devices (110) 1 ... (110) n .
  • the additional sub-content is used in the generated second content.
  • the generated second content is sent by the server device (106) to the one or more user devices (110) 1 ... (110) n on one or more secondary communication channels.
  • the one or more secondary communication channels may refer to radio channels different from the primary communication channel which was used for broadcasting the first content.
  • the processor (202) of the server device (106) may use the neural network to generate one or more recommendations based on the user inputs parsed by the processor (202) using the neural network.
  • the recommendations may comprise suggestions related to the generated second content.
  • the recommendations may include suggestions regarding probable user inputs which may be sent by the user devices in future.
  • the generated recommendations are transmitted to the one or more user devices (110) 1 ... (110) n on the secondary communications channels.
  • the server device (106) may connect with a social media platform according to profile of a user of the user device (110) to extract preferences, likes, and/or dislikes of the user, which may be used by the server device (106) to generate the recommendations.
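  • For illustration, one way such recommendations could be ranked is sketched below, combining question popularity with preference signals pulled from a social profile; the scoring rule and data are hypothetical.

    from collections import Counter

    def recommend(past_questions: list, social_likes: set, top_k: int = 2) -> list:
        counts = Counter(past_questions)                       # popularity among all users
        def score(question: str) -> float:
            boost = 1.5 if any(like in question for like in social_likes) else 1.0
            return counts[question] * boost                    # preference-weighted popularity
        return sorted(set(past_questions), key=score, reverse=True)[:top_k]

    asked = ["score of player xxx?", "boundaries by player xxx?", "score of player xxx?",
             "weather at the ground?"]
    print(recommend(asked, social_likes={"player xxx"}))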
  • the user inputs received from one user device (110) may be multicast to every other user device (110) 1 ... (110) n by the server device (106).
  • Each of the user devices (110) 1 ... (110) n may receive a multicast comprising user inputs from every other user device.
  • the server device (106) may multicast second content to each of the one or more user devices (110) 1 ... (110) n .
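  • A minimal fan-out sketch of this multicast step is shown below; the device registry and delivery callbacks are hypothetical stand-ins for the secondary channels.

    def multicast(payload: dict, sender: str, connected_devices: dict) -> None:
        for device_id, deliver in connected_devices.items():
            if device_id != sender:                  # every other user device gets a copy
                deliver(payload)

    def make_receiver(device_id: str):
        def deliver(payload: dict) -> None:
            print(f"{device_id} received {payload}")
        return deliver

    devices = {f"device-{i}": make_receiver(f"device-{i}") for i in range(3)}
    multicast({"user_input": "great shot by player xxx!"}, sender="device-0",
              connected_devices=devices)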
  • the first content received by the server device (106) may include voice data of users present at the live event which is captured by the one or more input devices (102) 1 ... (102) n located nearby to such users.
  • the processor (202) uses the machine learning through the neural networks to determine emotions and context of conversations included in the voice data captured by the input devices (102) 1 ... (102) n . Based on the determined emotions and context, the processor (202) generates second content which is in accordance with the determined emotions and context and may transmit the second content to one or more user devices (110) 1 ... (110) n which are present at the live event.
  • the processor (202) using the neural network may determine whether there is a need to connect to a social media platform or to an e-commerce provider to meet demands of users identified based on the determined emotions and context.
  • the generated second content in accordance with the determined emotions and context may include relevant statistics related to the live event, an advertisement to buy a particular product relevant to the determined context and emotions, and the like.
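  • The coarse idea is illustrated below; a trained neural network would classify the raw crowd audio, and the transcripts, keyword cues and content choices here are invented for the sketch.

    EMOTION_KEYWORDS = {
        "excited": {"six", "what a shot", "amazing"},
        "frustrated": {"missed", "dropped", "out"},
    }

    def detect_emotion(transcript: str) -> str:
        text = transcript.lower()
        for emotion, cues in EMOTION_KEYWORDS.items():
            if any(cue in text for cue in cues):
                return emotion
        return "neutral"

    def contextual_second_content(transcript: str) -> dict:
        emotion = detect_emotion(transcript)
        if emotion == "excited":
            return {"emotion": emotion, "content": "replay + team merchandise offer"}
        return {"emotion": emotion, "content": "relevant match statistics"}

    print(contextual_second_content("What a shot, that's a six!"))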
  • the server device (106) receives first content about the live event and merges it with viewership information, such as emotions, conversations, history of performers at the live event, and viewers in such a way that a broadcaster may engage with viewers simultaneously at different levels, such as at the live event, television viewers, and/or social media interactions.
  • Conventional prior art systems disclose collecting information for use in commercialization; however, the system of the present invention merges emotions with data of the proceedings of the live event and the situation at hand, which is not performed by the prior art.
  • the server device (106) may create secondary communication channels for providing second content or additional sub-content to the one or more user devices (110) 1 ... (110) n , wherein the second content or additional sub-content is related to the first content which was broadcasted on the primary communication channel.
  • the input devices (102) 1 ... (102) n comprise different cameras capturing the live event from different camera angles.
  • the server device (106) may provide an option to the user of the user device (110) to view the first content from the perspective of different camera angles (A1-A9), which is implemented using the secondary communication channels presenting the second content and/or the additional sub-content.
  • the application programming interface (API) on the user device (110) may enable displaying of options on the display (310) of the user device 110, from which different camera angles may be selected by the users.
  • a camera angle may be fixed on a particular object in the live event in which the user is interested.
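  • A small sketch of mapping the angle options to secondary channels is shown below; the channel identifiers and feed registry are assumptions made for illustration.

    CAMERA_FEEDS = {f"A{i}": f"secondary-channel-{i}" for i in range(1, 10)}

    def select_angle(angle: str, follow_object: str = None) -> dict:
        channel = CAMERA_FEEDS.get(angle)
        if channel is None:
            raise ValueError(f"unknown camera angle: {angle}")
        return {"channel": channel, "angle": angle, "locked_on": follow_object}

    # A viewer picks angle A3 and locks it onto a favourite player.
    print(select_angle("A3", follow_object="player xxx"))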
  • the system of the present invention allows the users to enjoy broadcasted content of the live event as per their choice and allows them to interact with the content in real-time.
  • the process flow of the system of the present invention is described herein below with respect to an exemplary scenario of a cricket match.
  • the system of the present invention may be utilized for interactive cricket match viewing.
  • the edge device (104) may be located inside stumps used at the cricket ground during the play.
  • the edge device (104) may wirelessly receive, preferably through Bluetooth, data from a plurality of input devices (102) 1 ... (102) n which may be present at various locations inside the cricket stadium.
  • the input devices may include sensors present on bats of the player, ultra-motion cameras, Spidercam, microphones & camera installed on a hat of the umpire, camera installed on the stumps, and cameras located at various other places within the stadium.
  • the cameras may also be installed so as to focus specifically on each player of teams playing the match.
  • the first content, based on data (i.e., video from various cameras, data from sensors present on bats of the players, and other such data) captured by the edge device (104) from the input devices (102) 1 ... (102) n , is further transmitted to the broadcasting computer (108) and the server device (106).
  • the broadcasting computer (108) which is handled by a broadcaster, may broadcast the cricket match play to one or more user devices (110) 1 ... (110) n .
  • the server device (106) enables users of the user devices (110) 1 ... (110) n to ask questions and receive answers in real-time regarding the first content which is being broadcasted to the devices by the broadcasting computer (108).
  • the server device (106) may encompass the broadcasting computer (108).
  • the server device (106) may be a separate entity and remotely placed from the broadcasting computer (108).
  • the server device (106) may facilitate an application programming interface (API) on the user devices (110) 1 ... (110) n using which the users may interact with the server device (106).
  • the users may perform a text search by typing their comments or queries using the API.
  • the users may perform a voice search using the API.
  • the API may support natural language processing to allow users to interact using the voice controls.
  • the server device (106) may use artificial intelligence and neural networks to parse the received queries and further correlate the parsed queries with historical data available from the data storage devices (112) in order to generate the second content.
  • the second content may include response to the queries of the users.
  • the data storage devices (112) may comprise data stored from previous matches and statistical & analytical data which is available from other public forums. More specifically, for instance, any viewer, commentator, or user using the user devices, after or during the event will have an option to post queries or comments regarding the event, such as how many centuries have been scored by a specific player or that the current shot of the player was good or bad quality, and the like. Similarly, other viewers/users using the user devices may have an opportunity to answer such queries or exchange comments about the live event in real time.
  • the server device (106) may provide response to the users using the voice controls.
  • the server device (106) may deliver second content to a voice control unit provided by the API on the user device (110), wherein the second content may include a voice response which will be played by the I/O interfaces (308) along with a text which may be displayed on the display (310) of the user device (110).
  • the second content generated by the server device (106) may be pushed to various user devices (110) 1 ... (110) n and may be displayed simultaneously on multiple user devices (110) 1 ... (110) n .
  • the server device (106) may allow for picture-in-picture using which a predetermined number of secondary communication channels may be broadcasted to the user devices (110) 1 ... (110) n .
  • a predetermined number of secondary communication channels may be selected and used to show the second content from a perspective of commentators at the match, based on questions being asked, based on region, and the like.
  • the commentator in a live match may use a headset which may be connected to a voice control provided by the application programming interface (API) of the user device (110).
  • the commentator may speak in any language of their choice using the voice control and may ask a question, such as "What is the number of wickets taken by player xxx in yyy cricket match".
  • the API may push this question to the server device (106) which may use a neural network to parse the received question and interpret the language of the commentator.
  • the server device (106) may use machine learning algorithms of the neural network to decide on the right source from the data storage devices (112) to pull the data from, based on the question that has been asked.
  • the second content which includes an answer to the question is provided only to the commentator in the headsets using the voice control. Additionally, the second content which includes the answer to the question may also be displayed on the user device of the commentator. Furthermore, the answer to the question may be displayed on multiple user devices including the device of a TV producer.
  • the edge device (104) may directly transmit data to the user device (110) of the commentator along with the transmission to the broadcasting computer (108).
  • the commentator may engage with a robot having the voice control.
  • the commentator may ask a question to the robot which further connects with the server device (106) to execute the functionality as described herein above.
  • the robot may output the voice response and may additionally display the received response as a text.
  • the system and method of the present invention enables viewers of a live event broadcast or recorded shows to ask questions regarding content which is being presented to the viewers.
  • the system provides an answer to the queries of the viewers in real time.
  • the answer to the queries may be presented to multiple end-user devices simultaneously by creating secondary communication channels, wherein the user devices may display the answers superimposed on a live broadcasted content which is being displayed on the end-user devices and which is received on a primary communication channel. Therefore, the system facilitates interaction facility to viewers, thereby enhancing viewership experience of users in real-time.
  • the server device (106) may store the questions asked by the users in the data storage devices (112). This helps the server device (106) to generate recommendations by anticipating the next question that users may ask. Thus, the server device (106) may present such recommendations in advance to the users on their user devices (110) 1 ... (110) n , in order to enhance user experience.
  • since the server device (106) stores all questions that have been asked by other users who might have asked the same question, the server device (106) may remember and narrow down the probability of the next question that a person would ask.
  • the recommendations vary based on a profile of the user, the region, the language in which the question is being asked, the side that they are representing in a game, and the like.
  • These parameters may be dynamically created using machine learning by the neural network which ensures that the most probable question that a user may ask is offered up to the user as a recommendation in advance. This may help the user to keep a dialog without even having to talk to the API using the voice control. This evolving and learning interaction with the system helps to improve the user interaction and to improve broadcast rating because of the better fan engagement.
  • the server device (106) may allow users of the user devices (110) 1 ... (110) n to relate to previous questions that were asked.
  • since the server device (106) facilitates retention of the previous question as a reference, the server device (106) may have an ability to predict the next question that the user is going to ask. For example, if a user asks a question "What is the score of player xxx in this match?", the next question may be "What is the number of boundaries the player xxx has scored?". Thus, the server device (106) may allow users to relate to the previous question and in that way make the next question and interaction much easier.
  • the server device (106) may also provide to the user devices (110) 1 ... (110) n a list of the frequently asked questions, trending questions, and 'people who ask this question also asked' options to help encourage fan engagement.
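  • One simple way such anticipation could work is sketched below: count which question most often followed the current one across stored sessions. The session data is invented, and a learned model with profile, region and language features would replace this counting in practice.

    from collections import Counter, defaultdict

    def build_followups(sessions: list) -> dict:
        followups = defaultdict(Counter)
        for questions in sessions:
            for current, nxt in zip(questions, questions[1:]):
                followups[current][nxt] += 1          # how often nxt followed current
        return followups

    SESSIONS = [
        ["score of player xxx?", "boundaries by player xxx?"],
        ["score of player xxx?", "boundaries by player xxx?", "strike rate of player xxx?"],
        ["score of player xxx?", "who is bowling next?"],
    ]
    followups = build_followups(SESSIONS)
    prediction, count = followups["score of player xxx?"].most_common(1)[0]
    print(f"most probable next question ({count} past users): {prediction}")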
  • the server device (106) may provide operational Intelligence or analytics about the match in terms of questions being asked and fan engagement.
  • the server device (106) may connect with online retail systems, which helps the server device (106) to get information on certain buying habits of a user of the user device (110) from which the user inputs have been received by the server (106).
  • the server device (106) may correlate the information corresponding to the profile of the user with information obtained from online retail systems to determine buying habits of the user.
  • the server device (106) may intelligently determine, using the neural network, a player or a team that the user is a fan of or asking questions about, or does not like.
  • the server device (106) may push certain advertisements or promotional information to the user device (110) based on determined likes and dislikes of the user.
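  • As a hedged illustration of this targeting step, the sketch below combines a hypothetical retail purchase history with the player affinity inferred from the user's questions to pick a promotion; all data and rules are invented.

    def infer_affinity(questions: list) -> str:
        return "player xxx" if any("xxx" in q for q in questions) else "general"

    def pick_advertisement(purchases: list, questions: list) -> dict:
        affinity = infer_affinity(questions)
        prefers_apparel = any(item["category"] == "apparel" for item in purchases)
        product = f"{affinity} jersey" if prefers_apparel else f"{affinity} highlights pass"
        return {"affinity": affinity, "promotion": product}

    history = [{"category": "apparel", "item": "team cap"}]
    asked = ["score of player xxx?", "boundaries by player xxx?"]
    print(pick_advertisement(history, asked))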
  • the system of the present invention helps to improve the customer experience, and fan engagement.
  • much of the useful and different information such as bat or racquet speed, player running speeds, swing path, ball tracking, game prediction, and other such information may be collected.
  • the server device (106) and the broadcasting computer (108) may facilitate sub-games for engagement of users. Since the information about the sub-games will be on the server device (106) and will be available for users at any time, the users may watch and interact with the sub-games in which they are most interested, which can be convenient for users.
  • the server device (106) may create secondary communication channels of telecast for primary communication channel of telecast provided by the broadcasting computer (108).
  • the input devices (102) 1 ... (102) n comprise different cameras capturing the live event from different camera angles.
  • the server device (106) may provide an option to the user of the user device (110) to view the first content from the perspective of different camera angles (A1-A9), which is implemented using the secondary communication channels presenting the second content.
  • the application programming interface (API) on the user device (110) may enable displaying of options on the user device 110, from which different camera angles may be selected by the users.
  • a camera angle may be fixed on a particular player in whom the user is interested, thus allowing the users to see the match from a perspective of their favorite player.
  • a commentator in a cricket match is using the user device 110
  • the commentator may have access to video feeds of the cricket match obtained from the server device (106), which may help commentator to create his own story using the statistics and analytics provided by the server device (106).
  • the story created by the commentator using the user device (110) may be multicast to other user devices.
  • the users of other user devices may subscribe to the data or story or the views that are presented by the commentator.
  • the server device (106) may enable the users of other user devices to interact with the commentator using the user device (110). Therefore, the system of the present invention may make the entire viewership truly on demand and interactive.
  • the server device (106) may generate a third content that is relevant to the first content and may provide automated voice requests to the API on the user devices (110) 1 ... (110) n to generate questions and answers that trend and reflect the sentiments of the users.
  • the generated questions and answers may be displayed on the user devices (110) 1 ... (110) n which will help to increase the viewership and fan engagement.
  • the server device (106) may allow directed ads based on the questions asked by users of the user devices (110) 1 ... (110) n . Further, the server device (106) may create ROI for Ads and a click through option may be provided that will allow users to buy directly while viewing the match.
  • the server device (106) may facilitate a video of a player tagged with the kind of jersey that they are wearing and/or the bats they are using.
  • when a user of the user device (110) clicks on this, the user may be directed to the vendor, which may give special discounts that may drive revenue and may increase user connect through the system.
  • the server device (106) may push the right advertisements based on the right situation, conversations, and emotions of the fans in the stadium.
  • the input devices (102) 1 ... (102) n present at the stadium may hear the conversations happening between viewers, and may enable the server device (106) to create a context to a list of advertisers and display the right advertisement at the right time on the displays.
  • the server device (106) may replay a certain event, such as shot played by a player, on the displays at a certain stand based on what the viewers are saying or because their view was blocked by something. This may improve the viewers' engagement and may further help in driving better revenues.
  • for registered viewers, where the server device (106) is made aware of favorite players and situations, media may be pushed directly to their user devices, wherein the pushed media may be played using the API installed on their user devices.
  • the server device (106) may determine trends of emotions in certain areas of the stand and may correlate this with the events on the stand, which can be used to display a specific product advertisement on the displays installed at that particular stand.
  • the server device (106) may be connected to retailers that will review the information and send a push notification to see if the viewers on the field or users off the field want to buy a product.
  • if the retailers are equipped with strong goods operations and management, the retailers' products may be brought to the one or more users and/or viewers right at the game while they are still watching it.
  • the user device (110) may be used by players which may help them as they practice.
  • a session of the play of the player may be recorded by the user device (110).
  • the API on the user device (110) may convert the recorded video automatically into a highlight session.
  • the highlight session may further be uploaded on the server device (106).
  • the server device (106) may compare the recorded highlight session with sessions of their favorite professional players or with a professional player that matches with the playing style. Thus, the server device (106) may enable a player to review his entire highlights of his play session. Further, the server device (106) may enable the player to share this information with a market place of coaches.
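  • A hedged sketch of such a comparison is given below: a nearest-neighbour match of a recorded session against stored professional sessions. The metrics, values and similarity measure are illustrative assumptions only.

    import math

    PRO_SESSIONS = {
        "pro-player-A": {"bat_speed": 34.0, "strike_rate": 145.0},
        "pro-player-B": {"bat_speed": 29.0, "strike_rate": 120.0},
    }

    def closest_professional(session: dict) -> str:
        def distance(pro: dict) -> float:
            # Crude two-feature distance between playing styles.
            return math.hypot(session["bat_speed"] - pro["bat_speed"],
                              (session["strike_rate"] - pro["strike_rate"]) / 10.0)
        return min(PRO_SESSIONS, key=lambda name: distance(PRO_SESSIONS[name]))

    my_session = {"bat_speed": 33.0, "strike_rate": 150.0}
    print("closest playing style:", closest_professional(my_session))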
  • the system may facilitate an instant feedback via I/O interfaces (308) to a player while practicing the game.
  • the server device (106) may also store relevant data that can later be accessed by authorized users.
  • a viewer viewing a telecast of a game may select or touch one or more players playing in the game telecast and receive information about their heart rate and even their clothing. This information might lead the viewer to click on the same button and buy the product from within the telecast, as the system of the present invention is integrated with the retail organization that can deliver the product to their door step.
  • the sport event is a live sport event or a recorded sport event
  • the captured physical movement of the player comprises a speed of the player and/or a distance travelled by the player on-field, obtained by using the plurality of sensors (i.e., input devices) located on-field at the sporting contest, which can all lead to different story lines, channels, and different buying opportunities for the viewers.
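  • The sketch below shows one way speed and distance could be derived from timestamped on-field position samples such as those sensors might report; the sample format and values are purely illustrative.

    import math

    def movement_metrics(samples: list) -> dict:
        """samples: [(t_seconds, x_metres, y_metres), ...] ordered by time."""
        distance = 0.0
        for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
            distance += math.hypot(x1 - x0, y1 - y0)
        elapsed = samples[-1][0] - samples[0][0]
        return {"distance_m": round(distance, 2),
                "avg_speed_mps": round(distance / elapsed, 2) if elapsed else 0.0}

    track = [(0, 0.0, 0.0), (1, 3.0, 4.0), (2, 6.0, 8.0)]   # two 5 m strides in 2 s
    print(movement_metrics(track))                          # {'distance_m': 10.0, 'avg_speed_mps': 5.0}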
  • in FIG. 5 and FIG. 6, a method performed by a server device and a method performed by a user device, respectively, have been disclosed.
  • the steps recited in FIG. 5 are performed by the server device (106) and its components, as described above in the description with respect to FIG. 1 - FIG. 3.
  • the steps recited in FIG. 6 are performed by the user device (110), as described above in the description with respect to FIG. 1 - FIG. 3.
  • These computer program instructions may also be stored in a computer- readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus like a scanner/check scanner to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the function(s) noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved.

Abstract

A system for facilitating interactive content viewing is disclosed. The system includes a server device and a user device. The server device includes a receiver configured to receive first content from one or more edge devices, a processor configured to generate second content related to the broadcasted first content and a transmitter configured to transmit the generated second content. The user device includes a transceiver configured to receive the first content, a display configured to display the received first content and a processor configured to receive, through an I/O interface, one or more user inputs on the displayed first content. The user device transmits a request indicating the one or more user inputs to the server device and receives, in response to the transmitted request, second content, wherein the server device generates the second content based on at least one user input received in real time from the user device.

Description

A SYSTEM AND METHOD FOR INTERACTIVE CONTENT VIEWING
FIELD OF THE INVENTION
The present invention generally relates to the field of computer related technologies, and more particularly, to a method and system for interactive content viewing.
BACKGROUND
This section is intended to provide information relating to the field of the invention and thus any approach or functionality described below should not be assumed to be qualified as prior art merely by its inclusion in this section.
Today, users have multiple means for watching a live telecast or a recording of a show, sport event, or any other media event. For instance, television is one of the means commonly used by users for viewing broadcasted content. The advent of video over the Internet is starting to change the control relationship between a user and a broadcaster. Apart from the television, users can view the broadcasted content on their handheld devices or other portable electronic devices.
With the growing use of technology, the viewing experience of users has improved somewhat. The users are provided with freedom of timing and with controls like pausing and storing the content. Still, the nature of that viewing experience is not significantly improved. The users are still not able to interact with the broadcasted content, such as accessing information about a particular part of the broadcasted content, sharing their views about the broadcasted content among their peers, and other such interactive activities. For instance, conventionally, users cannot access information about a particular player during live telecast of a cricket match, nor can the users directly share/exchange their views about the player during live telecast of the cricket match, and other similar activities.
Furthermore, conventional systems also lack the capability of providing a personalized viewership experience in which users have the option to view, access, purchase, procure, or gamify any information that is being broadcasted in real time. The existing ecosystems in the market are suited to manual analysis by commentators of sports events, where a current event is compared with historical data. There is no provision whereby viewers, commentators, users, audience, coaches, media, stakeholders, etc., can search on the fly for historical data relevant to a current event and use the results for varied purposes such as evaluating the performance of a sportsperson, predicting the occurrence of an event, coaching, broadcasting, etc.
Today, there is only one channel of broadcast, and the monetary benefits of the broadcaster are limited to that single broadcast channel of the event.
Another issue is that current broadcasts of live events (sports, etc.) do not cater to, and have not leveraged, the fact that broadcasters have direct access to end users and therefore need not depend on a TRP rating.
There is also no solution for broadcasters to address the fact that viewers are interested in on-demand information and do not watch an entire three-hour game or event; they are interested only in highlight packages. These highlight packages are further limited to the packages that are produced at the studio, and do not include different highlights that could provide different streams of revenue based on user demographics.
In view of the existing limitations, there is an imperative need to provide a system and method for interactive viewing of broadcasted content.
OBJECTS OF THE INVENTION
This section is intended to introduce certain objects of the disclosed method and system in a simplified form, and is not intended to identify the key advantages or features of the present disclosure.
It is an object of the present invention to provide a system and method which facilitates an interactive content viewing.
It is also an object of the present invention to provide a system and method which allows viewers to interact with other viewers during a live telecast of an event.
It is also an object of the present invention to provide a system and method which improves the viewership experience of the viewers, thereby driving increased revenues for the broadcasters and other stakeholders in the overall ecosystem.
It is also an object of the present invention to provide a system and method for providing real time data for training, monitoring, gaming, and retail purposes.
It is also an object of the present invention to provide a system and method which enables customized viewership for each and every viewer.
SUMMARY
In accordance with one aspect of the present invention, there is provided a method performed by a server device for facilitating interactive content viewing. The method includes receiving, from one or more edge devices, first content which is broadcasted to a plurality of user devices on a primary communication channel. The first content is captured from a live event by the one or more edge devices, which are communicatively coupled with one or more input devices located at the live event.
The method further includes receiving, from the one or more user devices, in real time, at least one user input based on the broadcasted content received by the one or more user devices. Further, the method includes generating second content related to the broadcasted first content and the at least one user input received from the one or more user devices, and transmitting the generated second content to the one or more user devices, in response to the received at least one user input, on one or more secondary communication channels.
In accordance with another aspect of the present invention, there is provided a method performed by a user device for facilitating interactive content viewing. The method includes receiving, on a primary communication channel, broadcasted first content which is captured from a live event, wherein the first content is captured by one or more edge devices communicatively coupled with one or more input devices located at the live event. The method further includes displaying the received first content, and receiving one or more user inputs on the displayed first content, wherein the one or more user inputs correspond to user actions performed on the user device in response to display of the first content. Further, the method includes transmitting a request to a server device, the request indicating data specified through the received one or more user inputs, and receiving, in response to the transmitted request, second content on one or more secondary communication channels, wherein the second content is based on the transmitted one or more user inputs and the broadcasted first content.
In accordance with one more aspect of the present invention, there is disclosed a server device for facilitating interactive content viewing. The server device includes a receiver, a processor and a transmitter. The receiver is configured to receive first content from one or more edge devices, the first content being broadcasted to a plurality of user devices on a primary communication channel, wherein the first content is captured from a live event by the one or more edge devices communicatively coupled with one or more input devices located at the live event. The receiver is further configured to receive, from the one or more user devices, in real time, at least one user input based on the broadcasted content received by the one or more user devices. The processor is configured to generate second content related to the broadcasted first content and the at least one user input received from the one or more user devices, and the transmitter is configured to transmit, in response to the received at least one user input, the generated second content to the one or more user devices on one or more secondary communication channels.
In accordance with another aspect of the present invention, there is provided a user device for facilitating interactive content viewing. The user device includes a transceiver, a display and a processor. The transceiver is configured to receive, on a primary communication channel, broadcasted first content which is captured from a live event, wherein the first content is captured by one or more edge devices communicatively coupled with one or more input devices located at the live event. The display is configured to display the received first content. The processor is configured to receive, through an I/O interface, one or more user inputs on the displayed first content, wherein the one or more user inputs correspond to user actions performed on the user device in response to display of the first content. The transceiver is further configured to transmit a request to a server device, the request indicating data specified through the received one or more user inputs, and it is further configured to receive, in response to the transmitted request, second content on one or more secondary communication channels, wherein the second content is based on the transmitted one or more user inputs and the broadcasted first content.
In accordance with another aspect of the present invention, there is provided a system for facilitating interactive content viewing. The system includes a server device and a user device which are described as above.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 illustrates an architecture of the system of the present invention.
FIG. 2 illustrates a block diagram of a server device, in accordance with an embodiment of the present invention.
FIG. 3 illustrates a block diagram of a user device, in accordance with an embodiment of the present invention.
FIG. 4 illustrates an exemplary view of a screen of a user device, in accordance with an embodiment of the present invention.
FIG. 5 illustrates a method performed by a server device, in accordance with an embodiment of the present invention.
FIG. 6 illustrates a method performed by a user device in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION
In the following description, for the purposes of explanation, numerous examples have been set forth in order to provide a brief description of the invention. It will be apparent, however, that the invention may be practiced without these specific details, features and examples, and the scope of the present invention is not limited to the examples provided herein below.
Exemplary embodiments now will be described with reference to the accompanying drawings.
The invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this invention will be thorough and complete, and will fully convey its scope to those skilled in the art. The terminology used in the detailed description of the particular exemplary embodiments illustrated in the accompanying drawings is not intended to be limiting. In the drawings, like numbers refer to like elements.
The specification may refer to "an", "one" or "some" embodiment(s) in several locations. This does not necessarily imply that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms "includes", "comprises", "including" and/or "comprising" when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. Furthermore, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations and arrangements of one or more of the associated listed items.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention pertains. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
A system and method for interactive content viewing is disclosed. The system allows viewers to interact with the system in order to access the content in a way desired by the user, thereby facilitating interactive content viewing for the viewers. Referring to FIG. 1, an architecture of the system (100) of the present invention is disclosed. The system (100) includes one or more edge devices which are located at a place where a live event is happening. For ease of reference, one edge device (104) has been described and shown herein in the description and drawings; however, a plurality of edge devices may be present at various locations inside the place, depending on the coverage required for the live event. Herein, the place may refer to, but is not limited to, an appropriate location depending on the event which is taking place. In one exemplary embodiment, the place may refer to a cricket stadium if the event is a cricket match. In another exemplary embodiment, the place may refer to a studio if the event is a singing show. Typically, the edge device (104) is present in a non-obtrusive way at the place where the event is taking place. In one exemplary embodiment, the edge device (104) may be located in stumps used in a cricket match. In another exemplary embodiment, the edge device (104) may be present at a boundary of the cricket stadium.
The edge device (104) may receive data from one or more input devices (102)1 ... (102)n. In one embodiment, the one or more input devices (102)1 ... (102)n may refer to one or more sensors located at the place where the event is happening. The one or more sensors are located in such a manner so as to sense data corresponding to various activities happening at the event. For example, in a game of cricket which is happening at a cricket stadium, the one or more input devices may include sensors present on a bat of each player, a microphone present at the stumps, cameras installed at various locations inside the cricket stadium, and the like. In one more embodiment, the one or more input devices (102)1 ... (102)n may refer to iPlay devices which are present at the place where the event is happening. The iPlay devices may receive data from other input devices, such as sensors, and may in real time transmit data to the edge device (104). In one more embodiment, the one or more input devices (102)1 ... (102)n may refer to sensors known in the art, which may provide audio-video data corresponding to sensed events happening at the location where the live event is taking place.
The one or more input devices (102)1 ... (102)n may transmit the sensed data to the edge device (104) in real time. In one embodiment, the transmission by the input devices (102)1 ... (102)n to the edge device (104) may be a wireless transmission. In another embodiment, the transmission by the input devices (102)1 ... (102)n to the edge device (104) may be a wired transmission. The edge device (104) receives data from the one or more input devices (102)1 ... (102)n and further wirelessly transmits first content, corresponding to the received data, to a broadcasting computer (108) and a server device (106). In one embodiment, the broadcasting computer (108) and the server device (106) may be remotely placed from each other, and the server device (106) may receive the first content from the broadcasting computer (108). In another embodiment, the broadcasting computer (108) may form part of the server device (106) so as to be one entity, and the server device (106) may directly receive the first content from the edge device (104).
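By way of a non-limiting illustration only, the data flow described above may be sketched as follows in Python; the names used (SensorReading, EdgeDevice, flush_first_content, and so on) are assumptions introduced for clarity and do not form part of the claimed system.

# Illustrative sketch (assumed names) of the sensed-data flow:
# input devices (102)1..(102)n -> edge device (104) -> broadcasting
# computer (108) and server device (106).
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class SensorReading:
    source_id: str          # e.g. "bat-sensor-7" or "stump-mic"
    timestamp_ms: int
    payload: bytes          # raw audio/video/telemetry sample


@dataclass
class EdgeDevice:
    # Downstream consumers: broadcasting computer and server device.
    consumers: List[Callable[[dict], None]] = field(default_factory=list)
    _buffer: List[SensorReading] = field(default_factory=list)

    def on_reading(self, reading: SensorReading) -> None:
        # Readings arrive over short-range, low-power links in real time.
        self._buffer.append(reading)

    def flush_first_content(self) -> None:
        # Package buffered readings as "first content" and forward it.
        first_content = {
            "readings": [r.__dict__ for r in self._buffer],
            "count": len(self._buffer),
        }
        for consumer in self.consumers:
            consumer(first_content)
        self._buffer.clear()


if __name__ == "__main__":
    edge = EdgeDevice(consumers=[
        lambda c: print("broadcasting computer received", c["count"], "readings"),
        lambda c: print("server device received", c["count"], "readings"),
    ])
    edge.on_reading(SensorReading("bat-sensor-7", 1000, b"\x01\x02"))
    edge.on_reading(SensorReading("stump-mic", 1001, b"\x03"))
    edge.flush_first_content()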
In one embodiment, the one or more input devices (102)1 ... (102)n and the edge device (104) may be low-powered devices which are capable of transmitting and receiving small or weak signals across short distances to reduce battery consumption. In one embodiment, the edge device (104) may transmit strong signals across long distances, in predetermined conditions, for a predetermined time duration. Furthermore, the one or more input devices (102)1 ... (102)n and the edge device (104) may be configured to operate in a stealth mode so as to avoid interference with signals from other devices present at the live event. Due to the use of the edge device (104) along with the plurality of input devices (102)1 ... (102)n, the system of the present invention is efficient, power-saving, and cost-effective, as compared to conventional prior art systems in which all of the devices directly transmit data to a server or any other remote computer.
Referring to FIG. 2, a block diagram of the server device (106) is disclosed. As shown, the server device (106) may include one or more processors, such as a processor (202), one or more memories, such as a memory (204), a receiver (206), and a transmitter (208). The processor (202) may be communicably coupled with the receiver (206) to receive first content from the edge device (104) or data from other devices. The transmitter (208) may be configured to transmit second content generated by the processor (202). The processor (202) is in communication with the memory (204), wherein the memory (204) includes program modules such as routines, programs, objects, components, data structures and the like, which perform particular tasks to be executed by the processor (202).
The broadcasting computer (108) may live telecast an event, wherein the telecast is based on the first content from the edge device (104). In one embodiment, the server device (106) may be communicably connected with the broadcasting computer (108) to receive the first content. Further, the server device (106) may be communicably connected with storage devices (112) to provide the received first content to the storage devices (112). In an embodiment, the storage devices (112) are used to store historical data about the event which is taking place, such as data about the game which is being played at the cricket stadium, statistical data about the event, analytical data about the event, and the like. In one embodiment, the storage devices (112) may store historical data about users who are part of the event, such as players playing the match, or a singer performing at a music concert, and the like. The server device (106) is further connected to one or more user devices (110)1 ... (110)n. In one embodiment, the storage devices (112) may be one or more databases archiving historical data or may be one or more computers of one or more external agencies storing records and historical data.
Referring to FIG. 3, a block diagram of a user device is disclosed. For the purpose of illustration only, the following description is provided with respect to a single user device (110); however, it should not be construed as limited to only one user device, as all of the user devices (110)1 ... (110)n shown in FIG. 1 comprise the same architecture and configuration. As shown, the user device (110) may include one or more processors, such as a processor (302), one or more memories, such as a memory (304), a transceiver (306), one or more I/O interfaces, such as an I/O interface (308), and a display (310).
The processor (302) may be communicably coupled with the transceiver (306) to receive signals from the server device (106), from the broadcasting computer (108), and/or from other devices, wherein the signals include first content, second content, third content, and/or additional sub-content. Further, the transceiver (306) may be configured to transmit signals generated by the processor (302), wherein the signals generated by the processor (302) may include user inputs. The processor (302) is in communication with the memory (304), wherein the memory (304) includes program modules such as routines, programs, objects, components, data structures and the like, which perform particular tasks to be executed by the processor (302). The user device (110) may be connected to other user devices either wirelessly or by using the I/O interface (308). The display (310) may be utilized to receive user inputs from a user of the user device (110), wherein the display (310) may be a touch screen display. The I/O interfaces (308) may include a variety of software and hardware interfaces, for instance, interfaces for peripheral device(s) such as a keyboard, a mouse, a scanner, an external memory, a printer, and the like.
In one embodiment, the one or more user devices (110)1 ... (110)n may refer to, but are not limited to, mobile phones, tablets, laptops, personal digital assistants, and other handheld devices. In another embodiment, the one or more user devices (110)1 ... (110)n may refer to smart display devices, such as smart televisions, LEDs, LCDs, smart kiosks, and the like. The user device (110) may receive the broadcasted first content from the broadcasting computer (108) or from the server device (106), on a primary communication channel. In one embodiment, the primary communication channel may refer to, but is not limited to, a radio channel. The first content is displayed by the user device (110) on the display (310). On the displayed first content, the user device (110) may receive one or more user inputs through the I/O interfaces (308) or through interaction of a user on the display (310), which is a touch screen display. In one embodiment, the one or more user inputs may correspond to one or more actions performed by a user on the user device (110), such as voice commands, commands through the I/O interfaces (308), and/or commands through the display (310). In another embodiment, the one or more user inputs may correspond to commands through a remote control device which may be connected with the user device (110). In one more embodiment, the one or more user inputs may correspond to commands through gesture control functionality supported by the user device (110).
In one embodiment, the user device (110) may comprise an application programming interface (API) which provides means for interaction, such as a voice control unit (VCU). The VCU is controlled by the processor (302), which enables the user device (110) to receive inputs from a user in the preferred language of the user. The VCU allows the user to speak in his native language, which is decoded by the processor (302) to generate user inputs which may be sent to the server device (106) for further processing. The VCU further generates an output for the user in a preferred natural language of the user. For example, if a user wants to interact using the German language, then content received from the server device (106) may be output to the user in German. In one embodiment, along with the generated output in the preferred language of the user, the output may also be displayed on the display (310) of the user device (110). In one embodiment, the user device (110) may refer to a smart large-screen display device installed at the place where the live event is happening. The user device (110) may be located remotely from the server device (106) and may receive communication from the server device (106) through a wireless connection or a wired connection. In one embodiment, the user device (110) may also be located in the vicinity of the one or more input devices (102)1 ... (102)n.
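The voice control unit interaction described above may be sketched, purely as a non-limiting assumption, as follows; the helper names (decode, to_user_language, handle_utterance) are placeholders rather than any specific speech-to-text or translation API.

# Minimal sketch (assumed helper names) of a voice control unit (VCU):
# native-language speech is decoded into a user input, sent to the server,
# and the response is rendered back in the user's preferred language.
from dataclasses import dataclass
from typing import Callable


@dataclass
class VoiceControlUnit:
    preferred_language: str  # e.g. "de" for German

    def decode(self, audio: bytes) -> str:
        # Placeholder: a real VCU would call a speech-to-text engine here.
        return audio.decode("utf-8", errors="ignore")

    def to_user_language(self, text: str) -> str:
        # Placeholder: a real VCU would call a translation service here.
        return f"[{self.preferred_language}] {text}"

    def handle_utterance(self, audio: bytes, send_to_server: Callable[[str], str]) -> str:
        user_input = self.decode(audio)
        second_content = send_to_server(user_input)
        return self.to_user_language(second_content)


if __name__ == "__main__":
    vcu = VoiceControlUnit(preferred_language="de")
    fake_server = lambda q: f"answer to '{q}'"
    print(vcu.handle_utterance(b"wickets of player xxx?", fake_server))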
The server device (106) may receive a request from the user devices (110)1 ... (110)n indicating data specified through the one or more user inputs. Based on the received request, the server device (106) may communicate with the data storage devices (112) to extract data therefrom. The data extracted from the data storage devices (112) may be processed by the processor (202) to generate second content which is related to the first content which was broadcasted to the one or more user devices (110)1 ... (110)n. In one embodiment, the processor (202) of the server device (106) utilizes neural networks to process the one or more user inputs received from the user devices (110)1 ... (110)n and to determine the historical data which is to be extracted from the data storage devices (112). In one embodiment, the neural networks are models having a specific set of algorithms which are executed to implement machine learning and artificial intelligence.
In one embodiment, the processor (202) uses the neural network to parse the received user inputs and, based on the parsed user inputs, may determine which of the data storage devices (112) needs to be communicated with to extract the required historical data. The processor (202) uses the neural network to break down the user inputs into smaller fragments which are further analyzed by the neural network to interpret the meaning of the user inputs. The server device (106) may further use machine learning and artificial intelligence through neural networks to analyze the extracted historical data to generate the second content, wherein the historical data refers to, but is not limited to, statistical data and analytical data related to the first content. The neural network correlates the parsed and analyzed query components with the historical analytical data and historical statistical data to generate the second content requested by a user in the user request.
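A minimal, non-limiting sketch of the query pipeline described above is given below, with a trivial keyword-based classifier standing in for the neural network; the store names and matching logic are illustrative assumptions only.

# Sketch of the query pipeline described above, with a trivial keyword
# classifier standing in for the neural network (assumed, illustrative).
from typing import Dict, List

# Assumed stand-ins for the data storage devices (112).
STATISTICAL_STORE: Dict[str, int] = {"centuries player xxx": 12}
ANALYTICAL_STORE: Dict[str, str] = {"shot quality": "above career average"}


def parse_user_input(user_input: str) -> List[str]:
    # Break the input down into smaller fragments (tokens) for analysis.
    return user_input.lower().replace("?", "").split()


def select_store(fragments: List[str]) -> Dict[str, object]:
    # Decide which storage device holds the relevant historical data.
    if any(word in fragments for word in ("how", "many", "score", "centuries")):
        return STATISTICAL_STORE
    return ANALYTICAL_STORE


def generate_second_content(user_input: str) -> str:
    fragments = parse_user_input(user_input)
    store = select_store(fragments)
    # Correlate the parsed fragments with historical data to build a reply.
    for key, value in store.items():
        if all(token in fragments for token in key.split()[-2:]):
            return f"{key}: {value}"
    return "No matching historical data found."


if __name__ == "__main__":
    print(generate_second_content("How many centuries has player xxx scored?"))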
In one embodiment, the server device (106) may receive additional sub-content from the edge device (104) based on the user inputs parsed by the processor (202) using the neural network. The processor (202) may use the neural network to determine whether any additional content is requested by the users in the user inputs. The neural network parses the user inputs and breaks down the parsed user inputs to understand the requirements specified therein. In an event where the processor (202), using the neural network, determines that additional content is requested by the user, the server device (106) may instruct the edge device (104) to transmit the required additional content to the server device (106). In one embodiment, the server device (106) may search for the requested additional content in the received first content itself, once it identifies the additional content which is requested in the user requests. The received or generated additional sub-content is related to the first content which was broadcasted by the broadcasting computer (108) to the one or more user devices (110)1 ... (110)n. The additional sub-content is used in the generated second content. The generated second content is sent by the server device (106) to the one or more user devices (110)1 ... (110)n on one or more secondary communication channels. In an embodiment, the one or more secondary communication channels may refer to radio channels different from the primary communication channel which was used for broadcasting the first content.
In one embodiment, the processor (202) of the server device (106) may use the neural network to generate one or more recommendations based on the user inputs parsed by the processor (202) using the neural network. In one embodiment, the recommendations may comprise suggestions related to the generated second content. In another embodiment, the recommendations may include suggestions regarding probable user inputs which may be sent by the user devices in the future. The generated recommendations are transmitted to the one or more user devices (110)1 ... (110)n on the secondary communication channels. In one embodiment, the server device (106) may connect with a social media platform, according to a profile of a user of the user device (110), to extract preferences, likes, and/or dislikes of the user, which may be used by the server device (106) to generate the recommendations.
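A non-limiting sketch of recommendation generation is given below; it substitutes a simple overlap score for the neural network described above, and the candidate questions and profile fields are illustrative assumptions.

# Illustrative sketch of recommendation generation: candidate follow-up
# questions are ranked by overlap with the current user input and with
# preferences drawn from the user's profile (names are assumptions).
from typing import List

CANDIDATE_QUESTIONS = [
    "What is the strike rate of player xxx?",
    "How many boundaries has player xxx scored?",
    "What is the weather forecast at the stadium?",
]


def recommend(user_input: str, profile_likes: List[str], top_k: int = 2) -> List[str]:
    tokens = set(user_input.lower().replace("?", "").split())
    likes = set(word.lower() for word in profile_likes)

    def score(question: str) -> int:
        q_tokens = set(question.lower().replace("?", "").split())
        return len(q_tokens & tokens) + len(q_tokens & likes)

    return sorted(CANDIDATE_QUESTIONS, key=score, reverse=True)[:top_k]


if __name__ == "__main__":
    print(recommend("What is the score of player xxx?", ["boundaries", "xxx"]))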
In one embodiment, the user inputs received from one user device (110) may be multicast by the server device (106) to every other user device (110)1 ... (110)n. Each of the user devices (110)1 ... (110)n may receive a multicast comprising user inputs from every other user device. Along with the user inputs, the server device (106) may multicast second content to each of the one or more user devices (110)1 ... (110)n.
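The multicast behaviour described above may be sketched, as a non-limiting assumption, with a simple publish/subscribe hub; the class and method names below are illustrative and not part of the claimed system.

# Small publish/subscribe sketch (assumed names) of multicasting a user
# input, together with second content, to every other connected user device.
from typing import Callable, Dict


class MulticastHub:
    def __init__(self) -> None:
        self._devices: Dict[str, Callable[[dict], None]] = {}

    def register(self, device_id: str, deliver: Callable[[dict], None]) -> None:
        self._devices[device_id] = deliver

    def multicast(self, sender_id: str, user_input: str, second_content: str) -> None:
        message = {"from": sender_id, "input": user_input, "content": second_content}
        for device_id, deliver in self._devices.items():
            if device_id != sender_id:   # every *other* user device
                deliver(message)


if __name__ == "__main__":
    hub = MulticastHub()
    for name in ("device-1", "device-2", "device-3"):
        hub.register(name, lambda m, n=name: print(n, "received", m))
    hub.multicast("device-1", "Great shot!", "Shot speed: 132 km/h")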
In one embodiment, the first content received by the server device (106) may include voice data of users present at the live event, which is captured by the one or more input devices (102)1 ... (102)n located near such users. The processor (202) uses machine learning through the neural networks to determine emotions and the context of conversations included in the voice data captured by the input devices (102)1 ... (102)n. Based on the determined emotions and context, the processor (202) generates second content which is in accordance with the determined emotions and context and may transmit the second content to one or more user devices (110)1 ... (110)n which are present at the live event. The processor (202), using the neural network, may determine whether there is a need to connect to a social media platform or to an e-commerce provider to meet demands of users identified based on the determined emotions and context. For example, the generated second content in accordance with the determined emotions and context may include relevant statistics related to the live event, an advertisement for a particular product relevant to the determined context and emotions, and the like.
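A non-limiting sketch of the emotion- and context-driven second content is given below; a trivial keyword detector stands in for the machine-learning analysis described above, and the keyword sets and content strings are illustrative assumptions.

# Sketch of emotion/context handling: a trivial keyword detector stands in
# for the neural-network analysis described above (names are illustrative).
POSITIVE = {"great", "amazing", "brilliant"}
NEGATIVE = {"terrible", "boring", "slow"}


def detect_emotion(transcript: str) -> str:
    words = set(transcript.lower().split())
    if words & POSITIVE:
        return "excited"
    if words & NEGATIVE:
        return "frustrated"
    return "neutral"


def second_content_for(transcript: str) -> str:
    emotion = detect_emotion(transcript)
    if emotion == "excited":
        return "Replay of the last shot + offer on team jerseys"
    if emotion == "frustrated":
        return "Relevant statistics and highlights from earlier overs"
    return "General live-event statistics"


if __name__ == "__main__":
    print(second_content_for("That was a brilliant cover drive"))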
The server device (106) receives first content about the live event and merges it with viewership information, such as emotions, conversations, history of performers at the live event, and viewers, in such a way that a broadcaster may engage with viewers simultaneously at different levels, such as at the live event, with television viewers, and/or through social media interactions. Conventional prior art systems disclose collecting information for use in commercialization; however, the system of the present invention merges emotions with data of the proceedings of the live event and the situation at hand, which is not performed by the prior art.
Referring to FIG. 4, an exemplary embodiment of a screen of the user device (110) is shown. As described above, the server device (106) may create secondary communication channels for providing second content or additional sub-content to the one or more user devices (110)1 ... (110)n, wherein the second content or additional sub-content is related to the first content which was broadcasted on the primary communication channel. In an event where the input devices (102)1 ... (102)n comprise different cameras capturing the live event from different camera angles, the server device (106) may provide an option to the user of the user device (110) to view the first content from the perspective of different camera angles (A1-A9), which is implemented using the secondary communication channels presenting the second content and/or the additional sub-content. The application programming interface (API) on the user device (110) may enable displaying of options on the display (310) of the user device (110), from which different camera angles may be selected by the users. Using the neural network, the processor (202) of the server device (106) may fix a camera angle on a particular object in the live event in which the user is interested. Thus, the system of the present invention allows the users to enjoy broadcasted content of the live event as per their choice and allows them to interact with the content in real time.
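A non-limiting sketch of mapping camera angles A1-A9 to secondary communication channels is given below; the channel identifiers are illustrative assumptions.

# Illustrative sketch of offering camera angles A1-A9 on secondary
# communication channels; the identifiers below are assumptions for clarity.
CAMERA_ANGLES = {f"A{i}": f"secondary-channel-{i}" for i in range(1, 10)}


def select_angle(user_choice: str, default_channel: str = "primary-channel") -> str:
    # Returns the secondary channel carrying the requested camera angle,
    # or keeps the user on the primary broadcast if the choice is unknown.
    return CAMERA_ANGLES.get(user_choice, default_channel)


if __name__ == "__main__":
    print(select_angle("A3"))   # -> secondary-channel-3
    print(select_angle("A42"))  # -> primary-channel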
The process flow of the system of the present invention is described herein below with respect to an exemplary scenario of a cricket match. The system of the present invention may be utilized for interactive cricket match viewing. The edge device (104) may be located inside the stumps used at the cricket ground during the play. The edge device (104) may wirelessly receive, preferably through Bluetooth, data from a plurality of input devices (102)1 ... (102)n which may be present at various locations inside the cricket stadium. The input devices may include sensors present on the bats of the players, ultra-motion cameras, a Spidercam, microphones and a camera installed on the hat of the umpire, a camera installed on the stumps, and cameras located at various other places within the stadium. In one embodiment, the cameras may also be installed so as to focus specifically on each player of the teams playing the match. The first content, based on data (i.e., video from the various cameras, data from the sensors present on the bats of the players, and other such data) captured by the edge device (104) from the input devices (102)1 ... (102)n, is further transmitted to the broadcasting computer (108) and the server device (106). The broadcasting computer (108), which is handled by a broadcaster, may broadcast the cricket match play to the one or more user devices (110)1 ... (110)n.
In an exemplary scenario, the server device (106) enables users of the user devices (110)1 ... (110)n to ask questions and receive answers in real time regarding the first content which is being broadcasted to the devices by the broadcasting computer (108). In one embodiment, the server device (106) may encompass the broadcasting computer (108). In another embodiment, the server device (106) may be a separate entity and remotely placed from the broadcasting computer (108). The server device (106) may facilitate an application programming interface (API) on the user devices (110)1 ... (110)n using which the users may interact with the server device (106). In one embodiment, the users may perform a text search by typing their comments or queries using the API. In another embodiment, the users may perform a voice search using the API. The API may support natural language processing to allow users to interact using the voice controls. The server device (106) may use artificial intelligence and neural networks to parse the received queries and further correlate the parsed queries with historical data available from the data storage devices (112) in order to generate the second content. In one embodiment, the second content may include responses to the queries of the users.
The data storage devices (112) may comprise data stored from previous matches, as well as statistical and analytical data which is available from other public forums. More specifically, for instance, any viewer, commentator, or user using the user devices, after or during the event, will have an option to post queries or comments regarding the event, such as how many centuries have been scored by a specific player, or whether the current shot of the player was of good or bad quality, and the like. Similarly, other viewers/users using the user devices may have an opportunity to answer such queries or exchange comments about the live event in real time.
The server device (106) may provide responses to the users using voice controls. The server device (106) may deliver second content to a voice control unit provided by the API on the user device (110), wherein the second content may include a voice response which will be played by the I/O interfaces (308), along with text which may be displayed on the display (310) of the user device (110). The second content generated by the server device (106) may be pushed to various user devices (110)1 ... (110)n and may be displayed simultaneously on multiple user devices (110)1 ... (110)n. In an embodiment, the server device (106) may allow for picture-in-picture, using which a predetermined number of secondary communication channels may be broadcasted to the user devices (110)1 ... (110)n along with the primary communication channel on which broadcast is performed by the broadcasting computer (108) to the user devices (110)1 ... (110)n. For instance, a predetermined number of secondary communication channels may be selected and used to show the second content from the perspective of commentators at the match, based on questions being asked, based on region, and the like.
In one instance, if the user device (110) is used by a commentator, the following set of events may take place. The commentator in a live match may use a headset which may be connected to a voice control provided by the application programming interface (API) of the user device (110). The commentator may speak in any language of their choice using the voice control and may ask a question, such as "What is the number of wickets taken by player xxx in yyy cricket match". The API may push this question to the server device (106), which may use the neural network to parse the received question and interpret the language of the commentator. Further, the server device (106) may use machine learning algorithms of the neural network to decide on the right source from the data storage devices (112) to pull the data from, based on the question that has been asked. The second content, which includes an answer to the question, is provided only to the commentator in the headset using the voice control. Additionally, the second content which includes the answer to the question may also be displayed on the user device of the commentator. Furthermore, the answer to the question may be displayed on multiple user devices, including the device of a TV producer. In one exemplary embodiment, the edge device (104) may directly transmit data to the user device (110) of the commentator along with the transmission to the broadcasting computer (108).
In another instance, instead of using the headset having the voice control, the commentator may engage with a robot which has the voice control. The commentator may ask a question to the robot, which further connects with the server device (106) to execute the functionality as described herein above. The robot may output the voice response and may additionally display the received response as text.
Therefore, the system and method of the present invention enable viewers of a live event broadcast or recorded shows to ask questions regarding the content which is being presented to them. The system provides answers to the queries of the viewers in real time. The answers to the queries may be presented to multiple end-user devices simultaneously by creating secondary communication channels, wherein the user devices may display the answers superimposed on the live broadcasted content which is being displayed on the end-user devices and which is received on a primary communication channel. Therefore, the system provides an interaction facility to viewers, thereby enhancing the viewership experience of users in real time.
In an embodiment, the server device (106) may store the questions asked by the users in the data storage devices (112). This helps the server device (106) to generate recommendations by anticipating the next question that users may ask. Thus, the server device (106) may present such recommendations in advance to the users on their user devices (110)1 ... (110)n, in order to enhance the user experience. Because the server device (106) stores all questions that have been asked by other users, including users who might have asked the same question, it may remember these questions and narrow down the probability of the next question that a person would ask. In one embodiment, the recommendations vary based on a profile of the user, the region, the language in which the question is being asked, the side that they are representing in a game, and the like. These parameters may be dynamically created using machine learning by the neural network, which ensures that the most probable question that a user may ask is offered up to the user as a recommendation in advance. This may help the user to keep a dialog going without even having to talk to the API using the voice control. This evolving and learning interaction with the system helps to improve the user interaction and to improve the broadcast rating because of the better fan engagement.
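A non-limiting sketch of anticipating the next question from stored question history is given below; the stored triples and the frequency-based ranking are illustrative assumptions standing in for the machine learning described above.

# Sketch (assumed structures) of anticipating the next question: historical
# question pairs are counted, and the most frequent follow-up to the current
# question, filtered by the user's profile language, is recommended.
from collections import Counter
from typing import List, Tuple

# (previous question, next question, language) triples stored by the server.
HISTORY: List[Tuple[str, str, str]] = [
    ("score of player xxx", "boundaries by player xxx", "en"),
    ("score of player xxx", "boundaries by player xxx", "en"),
    ("score of player xxx", "strike rate of player xxx", "en"),
]


def anticipate_next(current_question: str, language: str) -> str:
    follow_ups = Counter(
        nxt for prev, nxt, lang in HISTORY
        if prev == current_question and lang == language
    )
    if not follow_ups:
        return "No recommendation available yet."
    return follow_ups.most_common(1)[0][0]


if __name__ == "__main__":
    print(anticipate_next("score of player xxx", "en"))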
In one embodiment, the server device (106) may allow users of the user devices (110)1 ... (110)n to relate to previous questions that were asked. By retaining the previous question as a reference, the server device (106) may have the ability to predict the next question that the user is going to ask. For example, if a user asks the question "What is the score of player xxx in this match?", the next question may be "What is the number of boundaries the player xxx has scored?". Thus, the server device (106) may allow users to relate to the previous question and thereby make the next question and interaction much easier. The server device (106) may also provide to the user devices (110)1 ... (110)n a list of frequently asked questions, trending questions, and 'people who ask this question also asked' options to help encourage fan engagement.
In one embodiment, the server device (106) may provide operational intelligence or analytics about the match in terms of the questions being asked and fan engagement. The server device (106) may connect with online retail systems, which helps the server device (106) to get information on certain buying habits of a user of the user device (110) from which the user inputs have been received by the server device (106). Depending on the profile of the user of the user device (110), the server device (106) may correlate the information corresponding to the profile of the user with information obtained from online retail systems to determine buying habits of the user. The server device (106) may intelligently determine, using the neural network, a player or a team that the user is a fan of, is asking questions about, or does not like. Based on the determined likes and dislikes of the user, the server device (106) may push certain advertisements or promotional information to the user device (110). Thus, the system of the present invention helps to improve the customer experience and fan engagement. Further, due to the placement of the input devices (102)1 ... (102)n at various locations inside a cricket stadium and inside sports equipment, much useful and varied information, such as bat or racquet speed, player running speeds, swing path, ball tracking, game prediction, and other such information, may be collected. With this information being collected, the server device (106) and the broadcasting computer (108), alone or in combination, may facilitate sub-games for engagement of users. Since the information about the sub-games will be on the server device (106) and will be available to users at any time, the users may watch and interact with the sub-games in which they are most interested, which is convenient for the users.
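A non-limiting sketch of selecting a directed advertisement from question topics and buying habits is given below; the inference rule and the data used are illustrative assumptions only.

# Illustrative sketch of selecting a directed advertisement by correlating
# the topics a user asks about with known buying habits (assumed data).
from typing import Dict, List


def infer_favourite(questions: List[str]) -> str:
    # Count which player/team name appears most often in the user's questions.
    counts: Dict[str, int] = {}
    for question in questions:
        for raw in question.lower().split():
            token = raw.strip("?.,!")
            if token.startswith("player") or token.startswith("team"):
                counts[token] = counts.get(token, 0) + 1
    return max(counts, key=counts.get) if counts else "unknown"


def pick_advertisement(questions: List[str], buying_habits: List[str]) -> str:
    favourite = infer_favourite(questions)
    if "jerseys" in buying_habits:
        return f"Offer: official jersey of {favourite}"
    return f"Offer: highlights package featuring {favourite}"


if __name__ == "__main__":
    print(pick_advertisement(
        ["What is the score of player_xxx?", "Boundaries by player_xxx?"],
        ["jerseys"],
    ))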
Referring to FIG. 4, the server device (106) may create secondary communication channels of telecast in addition to the primary communication channel of telecast provided by the broadcasting computer (108). In an event where the input devices (102)1 ... (102)n comprise different cameras capturing the live event from different camera angles, the server device (106) may provide an option to the user of the user device (110) to view the first content from the perspective of different camera angles (A1-A9), which is implemented using the secondary communication channels presenting the second content. The application programming interface (API) on the user device (110) may enable displaying of options on the user device (110), from which different camera angles may be selected by the users. Using the neural network, the processor (202) of the server device (106) may fix a camera angle on a particular player in whom the user is interested, thus allowing the users to see the match from the perspective of their favorite player.
In an event where a commentator in a cricket match is using the user device (110), the commentator may have access to video feeds of the cricket match obtained from the server device (106), which may help the commentator to create his own story using the statistics and analytics provided by the server device (106). The story created by the commentator using the user device (110) may be multicast to other user devices. The users of other user devices may subscribe to the data, story, or views that are presented by the commentator. Further, the server device (106) may enable the users of other user devices to interact with the commentator using the user device (110). Therefore, the system of the present invention may make the entire viewership truly on demand and interactive. In one embodiment, the server device (106) may generate third content that is relevant to the first content and may provide automated voice requests to the API on the user devices (110)1 ... (110)n to generate questions and answers that trend and reflect the sentiments of the users. The generated questions and answers may be displayed on the user devices (110)1 ... (110)n, which will help to increase the viewership and fan engagement.
In an embodiment, the server device (106) may allow directed advertisements based on the questions asked by users of the user devices (110)1 ... (110)n. Further, the server device (106) may create ROI for the advertisements, and a click-through option may be provided that allows users to buy directly while viewing the match.
In an embodiment, the server device (106) may facilitate a video of a player tagged with the kind of jersey that they are wearing and/or the bats they are using. In an event where a user of the user device (110) clicks on this, the user may be directed to the vendor, which may give special discounts that may drive revenue and may increase user engagement through the system.
In the stadium where the cricket match is happening, different displays (i.e., user devices (110)1 ... (110)n) at different stands may be connected to the server device (106). The server device (106) may push the right advertisements based on the right situation, conversations, and emotions of the fans in the stadium. The input devices (102)1 ... (102)n present at the stadium may hear the conversations happening between viewers, and may enable the server device (106) to create a context for a list of advertisers and display the right advertisement at the right time on the displays. The server device (106) may replay a certain event, such as a shot played by a player, on the displays at a certain stand based on what the viewers are saying or because their view was blocked by something. This may improve the viewers' engagement and may further help in driving better revenues. For registered viewers, where the server device (106) is made aware of favorite players and situations, media may be pushed directly to their user devices, wherein the pushed media may be played using the API installed on their user devices. The server device (106) may determine trends of emotions in certain areas of a stand and may correlate this with the events at the stand, which can be used to display a specific product advertisement on the displays installed at that particular stand. In an embodiment, if any of the input devices (102)1 ... (102)n determines that one or more users want to paint their faces with their country colors or buy a jersey similar to that of their favorite team, the server device (106) may be connected to retailers that will review the information and send a push notification to see if the viewers on the field or users off the field want to buy a product. In an event where the retailers are equipped with strong goods operations and management, the retailers' product may be brought to the one or more users and/or viewers right at the game, while they are still watching it.
In one embodiment, the user device (110) may be used by players, which may help them as they practice. When a player practices, a session of the player's play may be recorded by the user device (110). The API on the user device (110) may convert the recorded video automatically into a highlight session. The highlight session may further be uploaded to the server device (106). The server device (106) may compare the recorded highlight session with sessions of the player's favorite professional players or with a professional player whose playing style matches. Thus, the server device (106) may enable a player to review the entire highlights of his play session. Further, the server device (106) may enable the player to share this information with a marketplace of coaches. Based on previous trends of the player's performance, the coach's recommendation, and buying history, suggestions may be made to the user on what to buy. The system may facilitate instant feedback via the I/O interfaces (308) to a player while practicing the game. The server device (106) may also store relevant data that can later be accessed by authorized users.
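A non-limiting sketch of matching a practice highlight session to the professional player with the closest playing style is given below; the per-session metrics and profiles are illustrative assumptions only.

# Minimal sketch (assumed metrics) of matching a practice highlight session
# to the professional player whose style it most resembles, using a simple
# distance over per-session metrics such as bat speed and swing angle.
from math import dist
from typing import Dict, Tuple

PRO_PROFILES: Dict[str, Tuple[float, float]] = {
    "pro-player-A": (118.0, 35.0),   # (avg bat speed km/h, avg swing angle deg)
    "pro-player-B": (102.0, 48.0),
}


def closest_professional(session_metrics: Tuple[float, float]) -> str:
    return min(PRO_PROFILES, key=lambda name: dist(PRO_PROFILES[name], session_metrics))


if __name__ == "__main__":
    print(closest_professional((110.0, 38.0)))  # -> pro-player-A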
In one more exemplary embodiment, a viewer viewing a telecast of a game may select or touch one or more players shown in the game telecast and receive information about their heart rate and even their clothing. This information might lead the viewer to click the same button and buy the product from within the telecast, as the system of the present invention is integrated with a retail organization that can deliver the product to their doorstep. Additionally, in an embodiment, the sport event may be a live sport event or a recorded sport event, and the captured physical movement of the player may comprise a speed of the player and/or a distance travelled by the player on-field, captured by using the plurality of sensors (i.e., input devices) located on-field in the sporting contest, all of which can lead to different story lines, channels, and different buying opportunities for the viewers.
Referring to FIG. 5 and FIG. 6, a method performed by a server device and a method performed by a user device, respectively, have been disclosed. The steps recited in FIG. 5 are performed by the server device (106) and its components, as described above with respect to FIG. 1 - FIG. 3. The steps recited in FIG. 6 are performed by the user device (110), as described above with respect to FIG. 1 - FIG. 3.
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer- readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus like a scanner/check scanner to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and schematic diagrams illustrate the architecture, functionality, and operations of some embodiments of methods, systems, and computer program products for facilitating interactive content viewing. In this regard, each block may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in other implementations, the function(s) noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved.
While the present invention has been described with reference to certain preferred embodiments and examples thereof, other embodiments, equivalents and modifications are possible and are also encompassed by the scope of the present disclosure.

Claims

We Claim
1. A method performed by a server device for facilitating interactive content viewing, the method comprising:
receiving, from one or more edge devices, first content which is broadcasted to a plurality of user devices on a primary communication channel, wherein the first content is captured from a live event by the one or more edge devices communicatively coupled with one or more input devices located at the live event;
receiving, from the one or more user devices, in real time at least one user input based on the broadcasted content received by the one or more user devices;
generating second content related to the broadcasted first content and the at least one user input received from the one or more user devices; and
transmitting, in response to the received at least one user input, the generated second content to the one or more user devices on one or more secondary communication channels.
2. The method as claimed in claim 1, wherein the step of generating comprises:
parsing, by a neural network, the received at least one user input;
communicating with one or more storage devices to retrieve historical data corresponding to the at least one user input, wherein the one or more storage devices which are to be communicated with are determined by the neural network;
analysing, by the neural network, the retrieved historical data to generate the second content,
wherein the historical data comprises stored statistical data and analytical data related to the live event which is being captured.
3. The method as claimed in claim 2, wherein the step of generating comprises:
receiving, based on the at least one user input, additional sub-content from the one or more edge devices, wherein the additional sub-content is related to the first content which is broadcasted to the one or more user devices;
including the received additional sub-content in the second content.
4. The method as claimed in claim 2, further comprising: generating, by the neural network, one or more recommendations based on the received at least one user input; and
transmitting the generated recommendations on the one or more secondary communication channels to the one or more user devices,
wherein the recommendations comprise second content suggestions and/or user input suggestions.
5. The method as claimed in claim 1, further comprising:
multicasting the received at least one user input to all connected user devices.
6. The method as claimed in claim 1, wherein the at least one user input corresponds to user actions performed on the one or more user devices, the user actions comprising at least one of: voice commands, commands through keypad, commands through a touch screen display, commands through a remote control device, and commands through gesture control functionality.
7. The method as claimed in claim 1, wherein the live event is a sports event.
8. The method as claimed in claim 1, wherein the one or more input devices comprise at least one of: sensors located at various places at the live event, sensors placed inside equipment used in the live event, microphones located at various places at the live event, and cameras installed at the live event.
9. A method performed by a user device for facilitating interactive content viewing, the method comprising:
receiving, on a primary communication channel, broadcasted first content which is captured from a live event, wherein the first content is captured by one or more edge devices communicatively coupled with one or more input devices located at the live event;
displaying the received first content;
receiving one or more user inputs on the displayed first content, wherein the one or more user inputs correspond to user actions performed on the user device in response to display of the first content;
transmitting a request to a server device, the request indicating data specified through the received one or more user inputs; and
receiving, in response to the transmitted request, second content on one or more secondary communication channels, wherein the second content is based on the transmitted one or more user inputs and the broadcasted first content.
10. The method as claimed in claim 9, wherein the user actions comprise at least one of: voice commands, commands through keypad, commands through a touch screen display, commands through a remote control device, and commands through gesture control functionality.
11. The method as claimed in claim 9 further comprising superimposing display of the second content on a display of the first content.
12. The method as claimed in claim 9, further comprising receiving one or more recommendations related to the one or more user inputs and/or the second content.
13. The method as claimed in claim 9 comprising the step of receiving third content from one or more other user devices via the server device.
14. The method as claimed in claim 9, wherein the one or more input devices comprise at least one of: sensors located at various places at the live event, sensors placed inside equipment used in the live event, microphones located at various places at the live event, and cameras installed at the live event.
15. A server device for facilitating interactive content viewing, the server device comprising: a receiver configured to receive first content from one or more edge devices, the first content being broadcasted to a plurality of user devices on a primary communication channel,
wherein the first content is captured from a live event by the one or more edge devices communicatively coupled with one or more input devices located at the live event,
wherein the receiver is further configured to receive, from the one or more user devices, at least one user input in real time based on the broadcasted first content received by the one or more user devices; a processor configured to generate second content related to the broadcasted first content and the at least one user input received from the one or more user devices; and
a transmitter configured to transmit, in response to the received at least one user input, the generated second content to the one or more user devices on one or more secondary communication channels.
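Claims 15 and 29 describe the server device in terms of a receiver, a processor and a transmitter. The sketch below collapses those three roles into one class to show the data flow; how second content is generated here is a placeholder, since the claim does not fix it, and the transmitter object is an illustrative abstraction over the secondary channels.

    # Minimal sketch of the server-device roles of claim 15.
    class InteractiveContentServer:
        def __init__(self, transmitter):
            self._latest_first_content = None
            self._transmitter = transmitter  # abstraction over the secondary channels

        def on_first_content(self, first_content):
            # Receiver role: first content arriving from the edge devices; the
            # broadcast to user devices on the primary channel is not shown here.
            self._latest_first_content = first_content

        def on_user_input(self, device_id, user_input):
            # Processor role: relate the broadcasted first content to the input.
            second_content = {
                "related_to": self._latest_first_content,
                "requested": user_input,
            }
            # Transmitter role: answer that device on a secondary channel.
            self._transmitter.send(device_id, second_content)

    class _EchoTransmitter:
        """Illustrative stand-in for the secondary-channel transmitter."""
        def send(self, device_id, second_content):
            print(f"secondary channel -> {device_id}: {second_content}")

    server = InteractiveContentServer(transmitter=_EchoTransmitter())
    server.on_first_content({"event": "live match feed", "frame": 1024})
    server.on_user_input("device-42", {"action": "tap", "target": "player"})
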
16. The server device as claimed in claim 15, comprising:
a neural network configured to parse the received at least one user input,
wherein the server device communicates with one or more storage devices to retrieve historical data corresponding to the at least one user input, wherein the one or more storage devices to be communicated with are determined by the neural network, and
wherein the neural network analyses the retrieved historical data to generate the second content, and
wherein the historical data comprises stored statistical data and analytical data related to the live event being captured.
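Claim 16 sequences the server-side generation: parse the user input, let the neural network pick which storage devices to query, retrieve the historical (statistical and analytical) data, and analyse it into second content. In the sketch below the neural network is mocked by plain functions, and the store objects exposing a query callable are assumptions of the example.

    # Minimal sketch of the retrieval-and-analysis flow of claim 16.
    def choose_stores(parsed_input, available_stores):
        # Stand-in for the neural network deciding which storage devices to contact:
        # here, pick stores whose coverage matches the sport named in the input.
        return [s for s in available_stores if parsed_input.get("sport") in s["covers"]]

    def generate_second_content(parsed_input, available_stores):
        historical = []
        for store in choose_stores(parsed_input, available_stores):
            # Each store is assumed to expose a query(parsed_input) -> records callable.
            historical.extend(store["query"](parsed_input))
        # Stand-in analysis: bundle the retrieved statistical/analytical records.
        return {"input": parsed_input, "records": historical, "count": len(historical)}

    # Example with an in-memory store:
    stores = [{"covers": {"cricket"}, "query": lambda q: [{"stat": "strike rate", "value": 142.0}]}]
    print(generate_second_content({"sport": "cricket", "player": "X"}, stores))
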
17. The server device as claimed in claim 16, wherein,
based on the at least one user input, the receiver receives additional sub-content from the one or more edge devices, wherein the additional sub-content is related to the first content which is broadcasted to the one or more user devices, and
wherein the processor includes the received additional sub-content in the generated second content.
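Claim 17 folds additional sub-content fetched from the edge devices (for example an extra camera angle or a sensor reading tied to the broadcasted first content) into the already generated second content. A minimal, purely illustrative merge helper:

    # Minimal sketch of the sub-content merge of claim 17.
    def include_sub_content(second_content: dict, additional_sub_content: list) -> dict:
        # Return a new second-content payload with the sub-content appended,
        # leaving the original dictionaries untouched.
        merged = dict(second_content)
        merged["sub_content"] = list(second_content.get("sub_content", [])) + list(additional_sub_content)
        return merged

    # Example: include_sub_content({"clip": "replay-17"}, [{"sensor": "bat", "impact_g": 31.2}])
    # -> {"clip": "replay-17", "sub_content": [{"sensor": "bat", "impact_g": 31.2}]}
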
18. The server device as claimed in claim 15, comprising:
a neural network configured to parse the received at least one user input,
wherein the neural network generates one or more recommendations based on the at least one user input, and
wherein the generated recommendations are transmitted on the one or more secondary communication channels to the one or more user devices, and
wherein the recommendations comprise second content suggestions and/or user input suggestions.
19. The server device as claimed in claim 15, wherein the transmitter is configured to multicast the received at least one user input to all the connected user devices.
20. The server device as claimed in claim 15, wherein the at least one user input corresponds to user actions performed on the one or more user devices, the user actions comprising at least one of: voice commands, commands through a keypad, commands through a touch screen display, commands through a remote control device, and commands through gesture control functionality.
21. The server device as claimed in claim 15, wherein the live event is a sports event.
22. The server device as claimed in claim 15, wherein the one or more input devices comprise at least one of: sensors located at various places at the live event, sensors placed inside equipment used in the live event, microphones located at various places at the live event, and cameras installed at the live event.
23. A user device for facilitating interactive content viewing, the user device comprising: a transceiver configured to receive, on a primary communication channel, broadcasted first content which is captured from a live event, wherein the first content is captured by one or more edge devices communicatively coupled with one or more input devices located at the live event;
a display configured to display the received first content;
a processor configured to receive, through an I/O interface, one or more user inputs on the displayed first content, wherein the one or more user inputs correspond to user actions performed on the user device in response to display of the first content,
wherein the transceiver is further configured to transmit a request to a server device, the request indicating data specified through the received one or more user inputs, and
wherein the transceiver is further configured to receive, in response to the transmitted request, second content on one or more secondary communication channels, and
wherein the second content is based on the transmitted one or more user inputs and the broadcasted first content.
24. The user device as claimed in claim 23, wherein the user actions comprise at least one of: voice commands, commands through a keypad, commands through a touch screen display, commands through a remote control device, and commands through gesture control functionality.
25. The user device as claimed in claim 23, wherein the processor is configured to superimpose display of the second content on a display of the first content.
26. The user device as claimed in claim 23, wherein the transceiver is configured to receive one or more recommendations related to the one or more user inputs and/or the second content.
27. The user device as claimed in claim 23, wherein the transceiver is configured to receive third content from one or more other user devices via the server device.
28. The user device as claimed in claim 23, wherein the one or more input devices comprise at least one of: sensors located at various places at the live event, sensors placed inside equipment used in the live event, microphones located at various places at the live event, and cameras installed at the live event.
29. A system for facilitating interactive content viewing, the system comprising:
a server device, the server device comprising:
a receiver configured to receive first content from one or more edge devices, wherein the first content is broadcasted to a plurality of user devices on a primary communication channel,
wherein the first content is captured from a live event by the one or more edge devices communicatively coupled with one or more input devices located at the live event,
wherein the receiver is further configured to receive, from the one or more user devices, at least one user input in real time based on the broadcasted first content received by the one or more user devices;
a processor configured to generate second content related to the broadcasted first content and at least one user input received from the one or more user devices; and a transmitter configured to transmit, in response to the received at least one user input, the generated second content to the one or more user devices on one or more secondary communication channels; and
a user device, the user device comprising:
a transceiver configured to receive, on a primary communication channel, broadcasted first content which is captured from a live event, wherein the first content is captured by one or more edge devices communicatively coupled with one or more input devices located at the live event;
a display configured to display the received first content;
a processor configured to receive, through an I/O interface, one or more user inputs on the displayed first content, wherein the one or more user inputs correspond to user actions performed on the user device in response to display of the first content,
wherein the transceiver is further configured to transmit a request to the server device, the request indicating data specified through the received one or more user inputs, and
wherein the transceiver is further configured to receive, in response to the transmitted request, second content on one or more secondary communication channels, and
wherein the second content is based on the transmitted one or more user inputs and the broadcasted first content.
PCT/IB2019/052108 2019-01-21 2019-03-15 A system and method for interactive content viewing WO2020152506A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB1906556.4A GB2578498B (en) 2019-01-21 2019-03-15 A system and method for interactive content viewing
AU2019203202A AU2019203202A1 (en) 2019-01-21 2019-03-15 A system and method for interactive content viewing
AU2021200238A AU2021200238B2 (en) 2019-01-21 2021-01-15 A system and method for interactive content viewing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201941002529 2019-01-21
IN201941002529 2019-01-21

Publications (1)

Publication Number Publication Date
WO2020152506A1 (en) 2020-07-30

Family

ID=71736814

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2019/052108 WO2020152506A1 (en) 2019-01-21 2019-03-15 A system and method for interactive content viewing

Country Status (3)

Country Link
AU (2) AU2019203202A1 (en)
GB (1) GB2578498B (en)
WO (1) WO2020152506A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8286218B2 (en) * 2006-06-08 2012-10-09 Ajp Enterprises, Llc Systems and methods of customized television programming over the internet

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7448063B2 (en) * 1991-11-25 2008-11-04 Actv, Inc. Digital interactive system for providing full interactivity with live programming events
US8032508B2 (en) * 2008-11-18 2011-10-04 Yahoo! Inc. System and method for URL based query for retrieving data related to a context

Also Published As

Publication number Publication date
GB201906556D0 (en) 2019-06-26
AU2021200238B2 (en) 2022-10-27
AU2019203202A1 (en) 2020-08-06
GB2578498B (en) 2022-05-25
AU2021200238A1 (en) 2021-03-18
GB2578498A (en) 2020-05-13

Similar Documents

Publication Publication Date Title
US11711584B2 (en) Methods and systems for generating a notification
US11716514B2 (en) Methods and systems for recommending content in context of a conversation
US11860915B2 (en) Systems and methods for automatic program recommendations based on user interactions
AU2018214121B2 (en) Real-time digital assistant knowledge updates
US11736540B2 (en) Systems and methods for establishing a voice link between users accessing media
US20150248918A1 (en) Systems and methods for displaying a user selected object as marked based on its context in a program
US9510047B2 (en) Systems and methods for automatically performing media actions based on status of external components
US20110106536A1 (en) Systems and methods for simulating dialog between a user and media equipment device
US20110107215A1 (en) Systems and methods for presenting media asset clips on a media equipment device
CN109964275A (en) For providing the system and method for slow motion video stream simultaneously with normal speed video flowing when detecting event
US10063911B1 (en) Methods and systems for re-integrating a PIP catch-up window with live video
US11375287B2 (en) Systems and methods for gamification of real-time instructional commentating
US20150082344A1 (en) Interior permanent magnet motor
US11451874B2 (en) Systems and methods for providing a progress bar for updating viewing status of previously viewed content
AU2021200238B2 (en) A system and method for interactive content viewing
US20230396858A1 (en) Technologies for communicating an enhanced event experience

Legal Events

Code Title Description
ENP Entry into the national phase (Ref document number: 201906556; Country of ref document: GB; Kind code of ref document: A; Free format text: PCT FILING DATE = 20190315)
ENP Entry into the national phase (Ref document number: 2019203202; Country of ref document: AU; Date of ref document: 20190315; Kind code of ref document: A)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19911138; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19911138; Country of ref document: EP; Kind code of ref document: A1)